
Dapper–Buffered vs unbuffered readers

I’m building a data pipeline using Dataflow to migrate data from a database to an external API (a separate post about Dataflow is coming up!). The goal is to parallelize the work as much as possible while keeping memory usage under control.

To achieve this, I use a streaming approach to limit the amount of data that is in memory at any given moment.

Here is the (simplified) code I use to fetch the data from the database:
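In outline it looks like the snippet below. This is a simplified sketch: the Order type, the SQL Server connection via Microsoft.Data.SqlClient, and the batch and capacity values are placeholders, not the real schema.

```csharp
using System.Threading;
using System.Threading.Tasks.Dataflow;
using Dapper;
using Microsoft.Data.SqlClient;

// Placeholder type for the rows being migrated; the real schema differs.
public record Order(int Id, string Customer, decimal Amount);

public static class OrderLoader
{
    public static void Load(string connectionString, BatchBlock<Order> batchBlock)
    {
        using var connection = new SqlConnection(connectionString);

        // Dapper query; by default this buffers the entire result set in memory.
        var orders = connection.Query<Order>("SELECT Id, Customer, Amount FROM Orders");

        foreach (var order in orders)
        {
            // A bounded BatchBlock declines new messages while its buffer is full,
            // so keep retrying (with a short sleep) until it accepts the item.
            while (!batchBlock.Post(order))
            {
                Thread.Sleep(100);
            }
        }

        batchBlock.Complete();
    }
}
```

The BatchBlock itself is created with a bounded capacity, for example new BatchBlock<Order>(100, new GroupingDataflowBlockOptions { BoundedCapacity = 500 }); that bound is what makes Post() return false while the buffer is full.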

I’m using ADO.NET out-of-the-box combined with Dapper. I fetch the data from the database and send it to a Dataflow BatchBlock (a block that groups the data in batches of a certain size). Once the buffer of the BatchBlock is full, I use a Thread.Sleep() to wait until the buffer is no longer full and the BatchBlock can accept new messages.

Nothing special, and this should allow me to keep the memory consumption under control.

However, when I executed this code, I saw the memory usage grow out of control.

So where is my mistake?

The answer can be found in the Dapper documentation. There is a section about buffered vs unbuffered readers:

Dapper's default behavior is to execute your SQL and buffer the entire reader on return. This is ideal in most cases as it minimizes shared locks in the db and cuts down on db network time.

However when executing huge queries you may need to minimize memory footprint and only load objects as needed. To do so, pass buffered: false into the Query method.

By default, Dapper reads the full result set before it lets you iterate over the results. That makes perfect sense in most use cases, but not in the way I want to use it here.

Fixing this is easy. As suggested in the docs, I pass an extra buffered: false parameter into the Query method:
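Applied to the sketch above, that boils down to a one-line change (the query itself is still a placeholder):

```csharp
// Unbuffered: Dapper now yields rows as they are read from the data reader
// instead of materializing the whole result set up front.
// Keep the connection open until the results have been fully iterated.
var orders = connection.Query<Order>(
    "SELECT Id, Customer, Amount FROM Orders",
    buffered: false);
```

With that in place, the BatchBlock’s bounded capacity can actually throttle the producer, and memory consumption should stay under control.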
