I recently upgraded the (development) RabbitMQ cluster of one of my clients to RabbitMQ 3.9. The upgrade went smoothly and none of the development teams mentioned any issues. So I was happily preparing for the production upgrade.
A few weeks later I was contacted by one of the team leads, who was investigating a specific issue in one of his applications: he was using a message published to RabbitMQ to trigger a long-running task (a batch job). This message was picked up by a Windows service that uses a MassTransit consumer to execute the long-running task. The strange thing was that the task sometimes failed. The normal behavior in MassTransit is that such a message ends up in the error queue (possibly after a few retries). However, that didn't happen; the message was put back on the queue instead.
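For context, the consumer looked roughly like an ordinary MassTransit consumer with a retry policy. Everything below (message type, queue name, retry intervals) is illustrative rather than the client's actual code, and assumes the MassTransit v8 API:

using System;
using System.Threading.Tasks;
using MassTransit;
using Microsoft.Extensions.DependencyInjection;

// Illustrative message contract that triggers the batch job.
public record StartBatchJob(Guid JobId);

// An ordinary MassTransit consumer: the long-running work happens inside Consume,
// and RabbitMQ only receives the acknowledgement once Consume completes.
public class StartBatchJobConsumer : IConsumer<StartBatchJob>
{
    public async Task Consume(ConsumeContext<StartBatchJob> context)
    {
        // placeholder for the batch job, which can easily run longer than 30 minutes
        await Task.Delay(TimeSpan.FromMinutes(45));
    }
}

public static class BusConfiguration
{
    public static void AddBatchJobBus(IServiceCollection services)
    {
        services.AddMassTransit(x =>
        {
            x.AddConsumer<StartBatchJobConsumer>();
            x.UsingRabbitMq((context, cfg) =>
            {
                cfg.ReceiveEndpoint("start-batch-job", e =>
                {
                    // a few retries before MassTransit moves the message to the _error queue
                    e.UseMessageRetry(r => r.Interval(3, TimeSpan.FromSeconds(10)));
                    e.ConfigureConsumer<StartBatchJobConsumer>(context);
                });
            });
        });
    }
}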
What was going on?
I started by having a look at the error logs and noticed a message like this:
"Message ACK failed: 258",
"The channel was closed: AMQP close-reason, initiated by Peer, code=406, text='PRECONDITION_FAILED - delivery acknowledgement on channel 1 timed out. Timeout value used: 1800000 ms. This timeout value can be configured, see consumers doc guide to learn more', classId=0, methodId=0 1"
This already explains the behavior we were seeing: because the message wasn't acknowledged within the configured timeout (1800000 ms, i.e. 30 minutes), the channel was closed and the message was put back on the queue.
The strange thing is that this was new behavior; the application had been in use for a long time and they had never had this issue before.
In the RabbitMQ documentation I noticed the following:
In modern RabbitMQ versions, a timeout is enforced on consumer delivery acknowledgement. This helps detect buggy (stuck) consumers that never acknowledge deliveries. Such consumers can affect node's on disk data compaction and potentially drive nodes out of disk space.
If a consumer does not ack its delivery for more than the timeout value (30 minutes by default), its channel will be closed with a PRECONDITION_FAILED channel exception. The error will be logged by the node that the consumer was connected to. All outstanding deliveries on that channel, from all consumers, will be requeued.
Could it be that my upgrade to RabbitMQ 3.9 was causing this issue? A search on GitHub brought me to the following PR: https://github.com/rabbitmq/rabbitmq-server/pull/2990. The delivery acknowledgement timeout was introduced in the RabbitMQ 3.8 series.
OK, we are one step further in our investigation. Let's see how we can fix this…
The workaround
I used the following workaround to temporarily fix the issue: I added a rabbitmq.conf file with a higher timeout value.
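The relevant setting is consumer_timeout, expressed in milliseconds. A minimal sketch (3600000 ms, i.e. one hour, is just an example value; pick one that comfortably covers your longest-running task):

consumer_timeout = 3600000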
If you don't know where to put this config file, you can find the locations where RabbitMQ looks for configuration files in its logs:
2022-06-29 14:42:46.591000+02:00 [info] <0.219.0> config file(s) : c:/Users/bawu/AppData/Roaming/RabbitMQ/advanced.config
2022-06-29 14:42:46.591000+02:00 [info] <0.219.0> : c:/Users/bawu/AppData/Roaming/RabbitMQ/rabbitmq.conf
Why is this a workaround?
So why don't I accept this as the final solution? I don't think it is a good idea to use a very high timeout value: if a consumer is genuinely stuck, RabbitMQ will now wait much longer before it closes the channel and requeues the deliveries, so problems surface late. I always prefer the 'fail fast' principle.
In MassTransit it isn't recommended to use a standard consumer for long-running tasks. Instead, we should use a 'job consumer', which is made specifically for this purpose.
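To give a first impression, here is a minimal sketch of a job consumer, assuming the MassTransit v8 API (IJobConsumer<T>); the message type and duration are made up for illustration:

using System;
using System.Threading.Tasks;
using MassTransit;

// Illustrative job contract (not the client's actual message type).
public record RunNightlyBatch(Guid JobId);

// A job consumer implements IJobConsumer<T> instead of IConsumer<T>;
// MassTransit's job service tracks the job's state, which makes it a better
// fit for work that runs longer than the broker's acknowledgement timeout.
public class RunNightlyBatchJobConsumer : IJobConsumer<RunNightlyBatch>
{
    public async Task Run(JobContext<RunNightlyBatch> context)
    {
        // placeholder for the long-running batch job
        await Task.Delay(TimeSpan.FromMinutes(45));
    }
}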
But I'll leave the details of job consumers for another blog post…