Why you should clean up your test directories

Today I lost a lot of time investigating a stupid (aren’t they all?) issue with some failing tests on the build server. The strange thing was that when I ran the same tests locally, they always succeeded. What was going wrong?

Just for completeness, here is the test task configuration I was using:
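Roughly, it came down to a VSTest task along these lines (a minimal YAML sketch; the exact patterns and values are illustrative, not my literal configuration):

# Illustrative VSTest test task; the assembly patterns below are the
# common defaults and are assumptions, not copied from my pipeline.
- task: VSTest@2
  inputs:
    testSelector: 'testAssemblies'
    testAssemblyVer2: |
      **\*Tests.dll
      !**\obj\**
    searchFolder: '$(System.DefaultWorkingDirectory)'
    configuration: 'Release'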

Nothing special I would think. It was only when diving deeper into the build output that I discovered what was going wrong.

Here is the output that explains the problem:

2025-05-20T13:19:22.3855459Z vstest.console.exe
2025-05-20T13:19:22.3855564Z "D:\b\3\_work\210\s\IAM.Core.Tests\bin\Release\net6.0\IAM.Core.Tests.dll"
2025-05-20T13:19:22.3855657Z "D:\b\3\_work\210\s\IAM.Core.Tests\bin\Release\net8.0\IAM.Core.Tests.dll"
2025-05-20T13:19:22.3855761Z "D:\b\3\_work\210\s\Mestbank.Core.Tests\bin\Release\net6.0\Mestbank.Core.Tests.dll"
2025-05-20T13:19:22.3855857Z "D:\b\3\_work\210\s\Mestbank.Core.Tests\bin\Release\net8.0\Mestbank.Core.Tests.dll"
2025-05-20T13:19:22.3855962Z "D:\b\3\_work\210\s\Mestbank.Loket.Tests\bin\Release\net6.0\Mestbank.Loket.Tests.dll"
2025-05-20T13:19:22.3856057Z "D:\b\3\_work\210\s\Mestbank.Loket.Tests\bin\Release\net8.0\Mestbank.Loket.Tests.dll"

During the test discovery process, the test task discovered some older test assemblies still compiled for .NET 6.0. These older tests were run as well, but failed because the backend systems they interact with have changed in the meantime.

Why is this happening? The main reason is that I had set the clean setting to false:
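In YAML terms that corresponds to a checkout step like this (a sketch of the equivalent setting; in the classic editor it is the Clean option under Get sources):

# Sketch: with clean set to false, the agent keeps old build output
# (bin/obj folders) in its working directory between runs.
steps:
- checkout: self
  clean: false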


As a consequence, old build results were not deleted and remained available on the build server. Because the test task uses a wildcard pattern to search for test files, it discovered not only the new tests but also the older ones still sitting in the agent’s working directory.

The solution was easy. We changed the setting to true and chose the All build directories clean option:
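For a YAML pipeline, the counterpart of that classic option is the workspace setting on the job (again a sketch):

# Sketch: wipe the entire agent work folder before every run, the YAML
# equivalent of choosing "All build directories" in the classic editor.
jobs:
- job: Build
  workspace:
    clean: all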

More information

Azure DevOps – Clean the work directory of your build agent
