
How to detect heap allocations

A few weeks ago I talked about static anonymous functions and how they can help limit the number of heap allocations when using lambdas. A colleague contacted me after that post asking how to detect those allocations.

Great question! Let me share a few ways to do this.

I'll first give you a general answer and then dive into two specific tools.

To discover excessive allocations when using lambdas, you can use any memory profiler and look for allocations of *__DisplayClass* types or the various variants of Action* and Func*.

With this information, you already know what to look for.
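To make concrete what you would be hunting for, here is a minimal sketch (the type and variable names are my own illustration) contrasting a capturing lambda, which produces the *__DisplayClass* and Func allocations a profiler will show you, with a static lambda that cannot capture and therefore avoids them:

```csharp
using System;
using System.Linq;

class Program
{
    static void Main()
    {
        int threshold = 5; // a captured local variable

        // Capturing lambda: because 'threshold' is captured, the compiler
        // generates a heap-allocated "<>c__DisplayClass" instance to hold it,
        // plus a new Func<int, bool> delegate wrapping that instance.
        var capturing = Enumerable.Range(0, 10).Where(x => x > threshold);

        // Static lambda (C# 9+): capturing locals is forbidden, so the
        // compiler can cache a single delegate instance and reuse it.
        var nonCapturing = Enumerable.Range(0, 10).Where(static x => x > 5);

        Console.WriteLine(capturing.Count());    // 4
        Console.WriteLine(nonCapturing.Count()); // 4
    }
}
```

Running a memory profiler against code like the first query is exactly the scenario where the *__DisplayClass* and Func* entries show up in the allocation report.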

Visual Studio Performance Profiler

A first option to help you is the Performance Profiler in Visual Studio.

  • Change the Build type to Release.
  • Now go to Debug –> Performance Profiler…
  • There are multiple profiling targets available, but we want to use the .NET Object Allocation tracking option so select this check box.

  • Click the Start button to run the tool.

  • After closing the profiled application or clicking Stop collection, we can view all allocations on the Allocation tab.

More information: Analyze memory usage for .NET objects - Visual Studio (Windows) | Microsoft Learn

Roslyn Clr Heap Allocation Analyzer

Another option is the Roslyn-based C# heap allocation diagnostic analyzer, which can detect explicit allocations as well as many implicit ones: boxing, display classes (a.k.a. closures), implicit delegate creation, and more.

It can be installed directly in your project as a NuGet package:

dotnet add package ClrHeapAllocationAnalyzer

As with any analyzer, it gives you inline hints and warnings directly in the editor.
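To give an idea of what triggers those diagnostics, here is a small sketch (names are my own; I haven't listed specific rule IDs) containing the kinds of allocations the source mentions the analyzer detects:

```csharp
using System;
using System.Collections.Generic;

class AnalyzerDemo
{
    static void Main()
    {
        int value = 42;

        // Boxing: converting a value type to object allocates on the heap;
        // the analyzer flags this as an implicit allocation.
        object boxed = value;

        // Closure: 'value' is captured, so the compiler emits a heap-allocated
        // display class plus a delegate; both show up as analyzer diagnostics.
        Func<int, int> addValue = x => x + value;

        // Explicit allocation: a plain 'new' is reported as well.
        var results = new List<int> { addValue(1) };

        Console.WriteLine(boxed);      // 42
        Console.WriteLine(results[0]); // 43
    }
}
```

Each commented line corresponds to one of the allocation categories (explicit, boxing, closure/display class, delegate creation) that the analyzer reports.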


Remark: If you are using JetBrains Rider, you can achieve the same thing using the Heap Allocation Viewer plugin.
