RabbitMQ has been the message broker of my choice for a long time. It has served me well over the years and I still like to use it today. Recently I was able to add an extra reason to that list when I noticed a new feature introduced in RabbitMQ 3.9: Streams.
RabbitMQ Streams
From the documentation:
Streams are a new persistent and replicated data structure in RabbitMQ 3.9 which models an append-only log with non-destructive consumer semantics.
With streams you get Kafka-like functionality in RabbitMQ without all the complexity that comes with maintaining and managing a Kafka cluster. Streams were created with the following use cases in mind:
- Large number of subscribers: in traditional queuing we use a dedicated queue for each consumer. This becomes inefficient when we have a large number of consumers.
- Time-travelling: streams allow consumers to attach at any point in the log and read from there.
- Performance: streams have been designed with performance as a major goal.
- Large logs: streams are designed to store larger amounts of data in an efficient manner with minimal in-memory overhead.
If you want a good introduction, have a look at the following video:
Using RabbitMQ Streams with .NET
Although I hope that there will soon be a MassTransit Rider implementation for RabbitMQ Streams, right now the way to go is through the RabbitMQ.Stream.Client NuGet package.
Let’s build a small sample application to try it out…
Enable the RabbitMQ Streams plugin
Before we can start we need to enable RabbitMQ Streams. This is available as a separate plugin and is not activated out of the box:
rabbitmq-plugins enable rabbitmq_stream
Tip: If you want to test it locally, you can use my RabbitMQ image; I've created a Dockerfile where this plugin is already enabled: https://github.com/wullemsb/docker-rabbitmq
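If you prefer to build the image yourself from that repository, something along these lines should work (the image tag and the exact port mappings are my assumptions; check the repository's README):

```shell
# Clone the repository and build the image with the stream plugin enabled.
git clone https://github.com/wullemsb/docker-rabbitmq
cd docker-rabbitmq
docker build -t rabbitmq-streams .

# Run it, exposing the AMQP (5672), management UI (15672)
# and stream protocol (5552) ports.
docker run -d -p 5672:5672 -p 15672:15672 -p 5552:5552 rabbitmq-streams
```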
Create the stream
First we need to construct a configuration object. RabbitMQ Streams uses a separate port (5552 by default).
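A minimal sketch of what this looks like with the early 1.x versions of RabbitMQ.Stream.Client (later releases renamed some of these types, so check the package docs for your version; the `guest` credentials and localhost endpoint are just defaults for a local test broker):

```csharp
using System.Collections.Generic;
using System.Net;
using RabbitMQ.Stream.Client;

// Connection settings for the stream protocol endpoint.
// Note: this is the dedicated stream port 5552, not the AMQP port 5672.
var config = new StreamSystemConfig
{
    UserName = "guest",
    Password = "guest",
    VirtualHost = "/",
    Endpoints = new List<EndPoint> { new IPEndPoint(IPAddress.Loopback, 5552) }
};

// Connect to the RabbitMQ cluster.
var system = await StreamSystem.Create(config);
```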
Now that we are connected to our RabbitMQ cluster, we can create a new stream through CreateStream(). This is an idempotent operation, so you can safely call it multiple times.
It is important to specify a retention policy when creating the stream, to prevent it from growing indefinitely. In our example we limit the stream to 200,000 bytes.
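Continuing from the StreamSystem created above, this is roughly what that looks like (the stream name "my-stream" is my own choice):

```csharp
// Create the stream if it does not exist yet (idempotent).
// MaxLengthBytes is the retention policy: it caps the total size
// of the log so the stream cannot grow without bound.
await system.CreateStream(new StreamSpec("my-stream")
{
    MaxLengthBytes = 200_000
});
```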
Add a producer
Now it is time to create our producer.
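A sketch with the same client library, reusing the StreamSystem from before (the Reference value is my own; the broker uses it to identify the producer for deduplication):

```csharp
// Create a producer attached to the stream.
var producer = await system.CreateProducer(new ProducerConfig
{
    Reference = "my-producer", // producer identity, used for deduplication
    Stream = "my-stream"
});
```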
And we can start publishing events through the Send() method.
The Send() method expects a publishingId, which should be incremented for each send, and our message payload.
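For example, publishing a batch of test messages could look like this (the message contents are made up for the sake of the sample):

```csharp
using System.Text;

for (ulong publishingId = 0; publishingId < 100; publishingId++)
{
    // The publishingId must increase with every send; together with the
    // producer Reference it lets the broker deduplicate resent messages.
    var message = new Message(Encoding.UTF8.GetBytes($"message {publishingId}"));
    await producer.Send(publishingId, message);
}
```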
Add a consumer
Almost there. Time to consume these published messages…
Notice the OffsetSpec. This allows us to specify from where the stream should be consumed. In this example we have set it to OffsetTypeFirst, meaning the beginning of the stream.
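Here is a consumer sketch in the same style (the handler signature matches the early 1.x API, which passed the consumer, a message context and the message; later versions of the package changed this delegate, so treat it as illustrative):

```csharp
using System.Buffers;
using System.Text;

var consumer = await system.CreateConsumer(new ConsumerConfig
{
    Reference = "my-consumer",
    Stream = "my-stream",
    // Non-destructive read: start from the very beginning of the log.
    OffsetSpec = new OffsetTypeFirst(),
    MessageHandler = async (sourceConsumer, context, message) =>
    {
        Console.WriteLine(
            $"Received: {Encoding.UTF8.GetString(message.Data.Contents.ToArray())}");
        await Task.CompletedTask;
    }
});
```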
If we now run the application, we get output like this:
Here is the full example:
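Putting the pieces above together, a complete console application could look roughly like this (again a sketch against the early 1.x RabbitMQ.Stream.Client API; stream, producer and consumer names are my own):

```csharp
using System;
using System.Buffers;
using System.Collections.Generic;
using System.Net;
using System.Text;
using RabbitMQ.Stream.Client;

// Connect to the stream endpoint (5552 by default).
var system = await StreamSystem.Create(new StreamSystemConfig
{
    UserName = "guest",
    Password = "guest",
    VirtualHost = "/",
    Endpoints = new List<EndPoint> { new IPEndPoint(IPAddress.Loopback, 5552) }
});

// Create the stream with a retention policy (idempotent).
await system.CreateStream(new StreamSpec("my-stream")
{
    MaxLengthBytes = 200_000
});

// Producer: publish messages with an increasing publishingId.
var producer = await system.CreateProducer(new ProducerConfig
{
    Reference = "my-producer",
    Stream = "my-stream"
});

for (ulong publishingId = 0; publishingId < 100; publishingId++)
{
    await producer.Send(publishingId,
        new Message(Encoding.UTF8.GetBytes($"message {publishingId}")));
}

// Consumer: read the stream from the beginning.
var consumer = await system.CreateConsumer(new ConsumerConfig
{
    Reference = "my-consumer",
    Stream = "my-stream",
    OffsetSpec = new OffsetTypeFirst(),
    MessageHandler = async (sourceConsumer, context, message) =>
    {
        Console.WriteLine(
            $"Received: {Encoding.UTF8.GetString(message.Data.Contents.ToArray())}");
        await Task.CompletedTask;
    }
});

Console.ReadLine(); // keep the app alive while messages arrive

await producer.Close();
await consumer.Close();
await system.Close();
```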