MassTransit - Message requeued for long running tasks in RabbitMQ

I recently upgraded the (development) RabbitMQ cluster of one of my clients to RabbitMQ 3.9. The upgrade went smoothly and none of the development teams mentioned any issues. So I was happily preparing for the production upgrade.

A few weeks later I was contacted by one of the team leads who was investigating a specific issue in one of his applications; he was using a message published to RabbitMQ to trigger a long running task (a batch job). This message was picked up by a Windows Service that uses a MassTransit consumer to execute the long running task. The strange thing was that the task sometimes failed. The normal behavior in MassTransit is that the message would end up in the error queue (maybe after a few retries). However, this didn’t happen and the message was put back on the queue.

What was going on?

I started by having a look at the error logs and noticed messages like these:

"Message ACK failed: 258"
"The channel was closed: AMQP close-reason, initiated by Peer, code=406, text='PRECONDITION_FAILED - delivery acknowledgement on channel 1 timed out. Timeout value used: 1800000 ms. This timeout value can be configured, see consumers doc guide to learn more', classId=0, methodId=0"

This already explains the behaviour we are seeing. As the message wasn’t acknowledged within the predefined interval (1800000 ms, i.e. 30 minutes), it was put back on the queue.

The strange thing is that this was recent behavior; the application had been in use for a long time and they had never had this issue before.

In the RabbitMQ documentation I noticed the following:

In modern RabbitMQ versions, a timeout is enforced on consumer delivery acknowledgement. This helps detect buggy (stuck) consumers that never acknowledge deliveries. Such consumers can affect node's on disk data compaction and potentially drive nodes out of disk space.

If a consumer does not ack its delivery for more than the timeout value (30 minutes by default), its channel will be closed with a PRECONDITION_FAILED channel exception. The error will be logged by the node that the consumer was connected to. All outstanding deliveries on that channel, from all consumers, will be requeued.

Could it be that my upgrade to RabbitMQ 3.9 was causing this issue? A search on GitHub brought me to the following PR: https://github.com/rabbitmq/rabbitmq-server/pull/2990. A delivery acknowledgement timeout was introduced in the RabbitMQ 3.8 series.

Ok, we got one step further in our investigation. Let’s see how we can fix this…

The workaround

I used the following workaround to temporarily fix the issue: I added a rabbitmq.conf file with a higher timeout value.
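The setting in question is consumer_timeout, expressed in milliseconds. As a sketch, a rabbitmq.conf raising the timeout from the default 30 minutes to 2 hours could look like this (the exact value is an assumption; you would tune it to your longest-running task and restart the node for it to take effect):

```ini
# rabbitmq.conf
# Raise the delivery acknowledgement timeout from the default
# 30 minutes (1800000 ms) to 2 hours (7200000 ms).
consumer_timeout = 7200000
```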

If you don't know where to add this config file, the RabbitMQ logs show the location(s) where it checks for config files:

2022-06-29 14:42:46.591000+02:00 [info] <0.219.0>  config file(s) : c:/Users/bawu/AppData/Roaming/RabbitMQ/advanced.config

2022-06-29 14:42:46.591000+02:00 [info] <0.219.0>                 : c:/Users/bawu/AppData/Roaming/RabbitMQ/rabbitmq.conf

Why is this a workaround?

So why don’t I accept this as the final solution? I don’t think it is a good idea to use a very high timeout value: a genuinely stuck consumer would then hold on to unacknowledged deliveries for hours, which negatively impacts the system. I always prefer the ‘FAIL FAST’ principle.

In MassTransit it isn’t recommended to use a standard consumer for long running tasks. Instead, we should use a ‘job consumer’, which is made specifically for this purpose.
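As a preview, a job consumer implements MassTransit's IJobConsumer&lt;T&gt; interface instead of IConsumer&lt;T&gt;; a minimal sketch could look like the following (the message and consumer names are hypothetical, and the Task.Delay merely stands in for the real batch job):

```csharp
using System;
using System.Threading.Tasks;
using MassTransit;

// Hypothetical message describing the batch job to run.
public record RunBatchJob
{
    public string JobName { get; init; }
}

// A job consumer is designed for long running work: MassTransit
// tracks the job's progress separately from the broker delivery,
// so the message is acknowledged well before the work completes.
public class RunBatchJobConsumer : IJobConsumer<RunBatchJob>
{
    public async Task Run(JobContext<RunBatchJob> context)
    {
        // Execute the long running batch job here; honor the
        // cancellation token so the job can be stopped cleanly.
        await Task.Delay(TimeSpan.FromMinutes(45), context.CancellationToken);
    }
}
```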

But I’ll leave that for another blog post…
