I’m building a data pipeline using Dataflow to migrate data from a database to an external API (a separate post about Dataflow is coming up!). The goal is to parallelize the work as much as possible while keeping memory usage under control.
To achieve this, I use a streaming approach to limit the amount of data that is in memory at any given moment.
Here is the (simplified) code I use to fetch the data from the database:
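Something along these lines (a simplified sketch; the connection string, the Order entity, the SQL query, and the batch/buffer sizes are placeholders):

```csharp
using System.Data.SqlClient;
using System.Threading;
using System.Threading.Tasks.Dataflow;
using Dapper;

// Placeholder entity for the rows being migrated.
public class Order
{
    public int Id { get; set; }
    public string Description { get; set; }
}

public static class Exporter
{
    public static void FetchOrders(string connectionString)
    {
        // Group rows into batches of 100; cap the buffer at 500 items so we
        // can detect when the block is "full".
        var batchBlock = new BatchBlock<Order>(
            100,
            new GroupingDataflowBlockOptions { BoundedCapacity = 500 });

        using (var connection = new SqlConnection(connectionString))
        {
            // Fetch the data with Dapper's Query extension method.
            var orders = connection.Query<Order>("SELECT * FROM Orders");

            foreach (var order in orders)
            {
                // Post returns false while the bounded buffer is full;
                // sleep until the block can accept new messages again.
                while (!batchBlock.Post(order))
                {
                    Thread.Sleep(100);
                }
            }

            batchBlock.Complete();
        }
    }
}
```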
I’m using ADO.NET out-of-the-box combined with Dapper. I fetch the data from the database and send it to a Dataflow BatchBlock (a block that groups the data into batches of a certain size). Once the buffer of the BatchBlock is full, I use Thread.Sleep() to wait until the buffer drains and the BatchBlock can accept new messages.
Nothing special, and this should allow me to keep memory consumption under control.
However, when I executed this code, I saw the memory usage growing out of control:
So where is my mistake?
The answer can be found in the Dapper documentation, which has a section about buffered vs. unbuffered readers:
Dapper's default behavior is to execute your SQL and buffer the entire reader on return. This is ideal in most cases as it minimizes shared locks in the db and cuts down on db network time.
However when executing huge queries you may need to minimize memory footprint and only load objects as needed. To do so, pass buffered: false into the Query method.
By default, Dapper reads the full dataset before it lets you iterate over the results. That makes perfect sense in most use cases, but not in the way I want to use it here.
Fixing this is easy: as suggested in the docs, I pass an extra buffered: false parameter into the Query method:
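In the sketch above, that means changing the Query call like this (same placeholder query as before):

```csharp
// buffered: false makes Dapper stream rows from the data reader as they are
// iterated, instead of materializing the full result set up front.
var orders = connection.Query<Order>(
    "SELECT * FROM Orders",
    buffered: false);
```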