
Migrating from xUnit v2 to v3 - What’s new?

The xUnit team decided to do a major overhaul of the xUnit libraries and created completely new v3 packages. So don't expect backwards compatibility: v3 is a significant architectural shift that brings improved performance, better isolation, and modernized APIs to .NET testing. While the migration requires some work, the benefits should make it worthwhile for most projects.

In this post I'll explain some of the reasons why I think you should consider migrating to the v3 version.

From libraries to executables

The most significant change in v3 is that test projects are now stand-alone executables rather than libraries that require external runners. This architectural shift solves several problems that plagued v2:

  • Dependency Resolution: Dependencies are now resolved at build time instead of at runtime
  • Process Isolation: Tests run in separate processes, providing better isolation than the Application Domain approach used in v2
  • Simplified Execution: You can directly run your test assembly without requiring separate runner tools

When you build a v3 test project, you get:

  • For .NET Framework: A .exe file that directly runs your tests
  • For .NET: A .dll file containing your tests plus a .exe stub launcher
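
A minimal project file for such a v3 test project could look something like the sketch below (version numbers are illustrative); the explicit Exe output type together with the xunit.v3 metapackage is what produces the self-running test executable:

  <Project Sdk="Microsoft.NET.Sdk">
    <PropertyGroup>
      <!-- The test project itself is now the runner -->
      <OutputType>Exe</OutputType>
      <TargetFramework>net8.0</TargetFramework>
    </PropertyGroup>
    <ItemGroup>
      <!-- The v3 metapackage; no separate runner tool is required to execute the tests -->
      <PackageReference Include="xunit.v3" Version="1.0.0" />
    </ItemGroup>
  </Project>

After a build you can launch the generated executable directly, or simply use dotnet run on the test project.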

This also aligns with the Microsoft.Testing.Platform vision and decouples you from Microsoft.NET.Test.Sdk.

If you still want to use the VSTest runner, you can do so by referencing the Microsoft.NET.Test.Sdk package and including the xunit.runner.visualstudio package.
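
In the project file that translates into something like this next to the xunit.v3 reference (versions again illustrative):

  <ItemGroup>
    <PackageReference Include="xunit.v3" Version="1.0.0" />
    <!-- Only needed to keep running tests through VSTest (dotnet test, Visual Studio Test Explorer) -->
    <PackageReference Include="Microsoft.NET.Test.Sdk" Version="17.12.0" />
    <PackageReference Include="xunit.runner.visualstudio" Version="3.0.0" />
  </ItemGroup>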

Console, Trace and Debug output support

Starting from xUnit v3, output from Console, Trace and Debug statements can be captured. I think this is an improvement that will greatly help when debugging a failing test.

This feature is disabled by default for backwards compatibility, but it can easily be enabled with a couple of assembly-level attributes.
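
Assuming the CaptureConsole and CaptureTrace assembly attributes described in the v3 release notes, enabling the capture looks roughly like this:

  using Xunit;

  // Capture output written via Console during a test into that test's output
  [assembly: CaptureConsole]

  // Capture output written via Trace and Debug statements as well
  [assembly: CaptureTrace]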

Share a single fixture among all the test classes

Something I missed compared to other testing frameworks was support for creating a single test context that can be shared by all the tests in a test assembly. xUnit v3 solves this with the introduction of an assembly fixture: xUnit creates a single instance of this fixture and shares it among all your test classes.

The documentation illustrates this with a fixture that is registered once at the assembly level and then injected into the constructor of any test class that needs it.
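
A minimal sketch of that pattern (the DatabaseFixture and CustomerTests names are just placeholders):

  using System;
  using Xunit;

  // Register the fixture once for the entire test assembly
  [assembly: AssemblyFixture(typeof(DatabaseFixture))]

  public class DatabaseFixture : IDisposable
  {
      public DatabaseFixture()
      {
          // One-time, assembly-wide setup (e.g. spin up a test database)
      }

      public void Dispose()
      {
          // One-time teardown after all tests in the assembly have run
      }
  }

  public class CustomerTests
  {
      private readonly DatabaseFixture _fixture;

      // xUnit injects the single shared instance through the constructor
      public CustomerTests(DatabaseFixture fixture) => _fixture = fixture;

      [Fact]
      public void Fixture_is_available() => Assert.NotNull(_fixture);
  }

Unlike a class or collection fixture, this single instance is shared by every test class in the assembly.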

More information

What's New in v3? [2025 August 14] | xUnit.net

Microsoft.Testing.Platform overview - .NET | Microsoft Learn

Microsoft Testing Platform [2025 June 7] | xUnit.net

Sharing Context between Tests | xUnit.net
