Posts

Showing posts from September, 2025

SonarQube–The ‘MakeUniqueDir’ task failed unexpectedly

In our efforts to improve the (code) quality of our applications, we started an initiative to get all our teams to integrate their projects in SonarQube. We have had SonarQube running inside our organization for a long time, but adoption remained fragmented. The initiative turned out quite successful, but as a consequence we encountered some issues with SonarQube. Teams started to complain that their build pipelines became flaky and sometimes resulted in errors. The reported error was related to SonarQube and the message looked like this: Error MSB4018: The ‘MakeUniqueDir’ task failed unexpectedly. System.UnauthorizedAccessException: Access to the path ‘db1_work80_sonarqubeout’ is denied. We found out that the problem was related to our build server setup, where we have multiple agents running on the same server. As multiple agents try to execute the ‘Prepare Analysis’ task, it sometimes fails with the error message above. Further research brought us to the NodeReuse parameter of...
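A hedged sketch of where that parameter leads (the exact fix in the post is cut off above, so treat the wiring as an assumption): MSBuild keeps worker node processes alive between builds so they can be reused, and when multiple agents share one server those long-lived nodes can keep handles on paths such as the SonarQube output directory. Node reuse can be switched off per invocation or for the whole agent account:

    rem Option 1: pass the flag on each build invocation
    msbuild MySolution.sln /nodeReuse:false

    rem Option 2: set the environment variable for the agent account
    set MSBUILDDISABLENODEREUSE=1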

Microsoft.Extensions.AI – Part IX – Semantic Kernel integration

Semantic Kernel was the first AI library specifically created to build AI agent and chat experiences in .NET. Later the .NET team started working on Microsoft.Extensions.AI as a common abstraction layer for integrating AI capabilities in your .NET applications. As a consequence, these 2 libraries have some overlap and similar abstractions exist in both libraries. This post is part of a blog series. Other posts so far: Part I – An introduction to Microsoft.Extensions.AI Part II – ASP.NET Core integration Part III – Tool calling Part IV – Telemetry integration Part V – Chat history Part VI – Structured output Part VII – MCP integration Part VIII – Evaluations Part VIII – Evaluations (continued) Part IX (this post) – Semantic Kernel integration What now? The good news is that Microsoft is actively working on aligning both libraries and (re)building Semantic Kernel on top of the same Microsoft.Extensions.AI abstractions. This mea...

Discovering Visual Studio 2026 – Code coverage

Yes! The new Visual Studio 2026 edition is available in preview (now called Insiders). I'll take some time this week to walk through some of the features I like and maybe some of the rough edges I discover along the way. Today I want to take a look at Code Coverage in Visual Studio. “Wait… what?!” I hear you think, “Code coverage is not a new feature in Visual Studio.” And yes, you are right. But until this version Code Coverage was only available in the Enterprise Edition of Visual Studio. With Visual Studio 2026, it is finally part of the Community and Professional Editions as well. (I always thought it was strange to call yourself professional but not focus on code coverage.) How to use Code Coverage in Visual Studio So, if you never had the opportunity to use the Code Coverage feature in Visual Studio, let me walk you through the steps. Go to the Test menu and select the Analyze Code Coverage for All Tests option from the menu. Another option is to right-click...

Discovering Visual Studio 2026 – Copilot Actions

Yes! The new Visual Studio 2026 edition is available in preview (now called Insiders). I'll take some time this week to walk through some of the features I like and maybe some of the rough edges I discover along the way. Until recently you had 2 kinds of interactions with GitHub Copilot: either automatic, with features like AI autocompletion in your editor, next edit suggestions or the intelligent copy-paste feature I talked about yesterday; or manual, by using prompts through one of the available chat modes. With the introduction of Copilot Actions, a third interaction mode is introduced. Copilot Actions Copilot Actions give you direct access to Copilot from the context menu inside the editor without the need to type any prompt. Right now, the list of available actions is limited to the following 5: Explain, Optimize selection, Generate comments, Generate tests and Add to Chat. Remark: The Optimize option is only available when you have...

Discovering Visual Studio 2026–Adaptive paste

Yes! The new Visual Studio 2026 edition is available in preview (now called Insiders). I'll take some time this week to walk through some of the features I like and maybe some of the rough edges I discover along the way. Let’s be honest: every developer copies and pastes other code. Typically after pasting there is some cleanup left to do; correcting styles, adapting to your naming conventions, fixing small errors. The process often comes with extra steps. What if the pasted code were automatically adapted, incorporating one or more of the following actions: aligning syntax and styling with the document, inferring parameter adjustments, fixing minor errors, supporting language translation (human and code-based), completing patterns or filling in blanks. Wouldn’t that be great? Enter adaptive paste. Adaptive paste The adaptive paste UI appears when you do a regular paste (CTRL-V). Press the TAB key afterwards to get an Adaptive Paste suggestion. You can also trigger Ada...

Auto update the .NET core versions on your server

.NET Full Framework updates on your server(s) become available as Windows Updates and can be pushed through centralized tools like Microsoft Intune, System Center Configuration Manager (SCCM), or Windows Server Update Services (WSUS), allowing IT ops teams to control update scheduling and minimize unexpected downtime. However, such an option didn’t exist for a long time for .NET Core. This changed some time ago when .NET Core updates became available via Microsoft Update as an opt-in(!) feature. How to enable automatic updates for .NET Core Enabling automatic .NET updates on your Windows Server requires modifying the Windows Registry. You have several options depending on your needs. Enable all .NET updates (recommended for most scenarios): [HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\.NET] "AllowAUOnServerOS"=dword:00000001 Version-specific updates: .NET 9.0: [HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\.NET\9.0] .NET 8.0: [HKEY_LOCAL_MACHINE\SOFTWARE\Microsof...
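Assembling the quoted registry values into an importable .reg file gives something like the sketch below (the key and value are taken from the excerpt above; the file wrapper is standard .reg syntax):

    Windows Registry Editor Version 5.00

    ; Opt in to automatic Microsoft Update servicing for all installed
    ; .NET (Core) versions on this server
    [HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\.NET]
    "AllowAUOnServerOS"=dword:00000001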

Discovering Visual Studio 2026–Installation

Yes! The new Visual Studio 2026 edition is available in preview (now called Insiders). I'll take some time this week to walk through some of the features I like and maybe some of the rough edges I discover along the way. Start by downloading the version you prefer here: https://visualstudio.microsoft.com/insiders/ After downloading and executing the installer, the Visual Studio Installer is loaded and you are welcomed by a new screen. This is already a great feature as it allows you to import the configuration settings and extensions from a previous version. We haven’t even installed Visual Studio yet, but I’m already happy! The remaining part of the installation process is unchanged; you can select the Workloads and Individual components you like and start the installation. I didn’t have the feeling that the installation process was much slower or faster, but don’t take that as an official measurement. After the installation completed, I opened up an existing sol...

How to get rid of the smartcard popup when interacting with LDAP over SSL

In one of our applications we are connecting with LDAP through System.DirectoryServices.AccountManagement. This code worked fine for years until we had to make the switch from LDAP to LDAPS and incorporate SSL in our connections. Let me start by showing you the original code (or at least a part of it). We thought that making the switch to SSL would be easy, so we added the ContextOptions.SecureSocketLayer flag to our context options. However, after doing that, we got a smartcard popup every time this code was called. I couldn’t find a good solution to fix it while keeping the PrincipalContext class. After some help from GitHub Copilot and some research, I discovered that I could get it working by switching to the underlying LdapConnection and explicitly setting the client certificate to null. More information: c# - PrincipalContext with smartcard inserted - Stack Overflow c# - How to validate server SSL certificate for LDAP+SSL connection - Stack Overflow
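The code screenshots from the original post are not reproduced here, but a minimal sketch of the described workaround looks roughly like this (assuming System.DirectoryServices.Protocols; the server name and auth type are illustrative):

    using System.DirectoryServices.Protocols;

    var connection = new LdapConnection(
        new LdapDirectoryIdentifier("ldap.example.local", 636));
    connection.SessionOptions.SecureSocketLayer = true;
    // Returning null from this callback tells Windows not to offer a
    // client certificate, which suppresses the smartcard popup:
    connection.SessionOptions.QueryClientCertificate =
        (conn, trustedCAs) => null;
    connection.AuthType = AuthType.Negotiate;
    connection.Bind();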

Migrating from XUnit v2 to v3–Troubleshooting

The XUnit team decided to do a major overhaul of the XUnit libraries and created completely new v3 packages. So don't expect backwards compatibility, but a significant architectural shift that brings improved performance, better isolation, and modernized APIs to .NET testing. While the migration requires some work, the benefits should make it worthwhile for most projects. In this post I’ll share some of the migration problems I encountered and how to fix them. Xunit.Abstractions is not recognized This is an easy one. Xunit.Abstractions has become an internal namespace and should no longer be referenced. Just remove any using Xunit.Abstractions statement from your code. No v3 version of Serilog.Sinks.XUnit After switching to the v3 version of the XUnit packages, I noticed that the old XUnit v2 version was still used somewhere, causing the following compiler error: The type 'FactAttribute' exists in both 'xunit.core, Version=2.4.2.0, Culture=neutral, Publ...
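In practice the Xunit.Abstractions fix is a one-liner, because the types you used from it (such as ITestOutputHelper) moved into the Xunit namespace in v3. A small sketch:

    // v2:
    // using Xunit.Abstractions;   // <- remove this line
    using Xunit; // v3: ITestOutputHelper now lives here

    public class CalculatorTests(ITestOutputHelper output)
    {
        [Fact]
        public void Adds()
        {
            output.WriteLine("running Adds"); // test output works as before
            Assert.Equal(4, 2 + 2);
        }
    }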

Migrating from XUnit v2 to v3 – Getting started

The XUnit team decided to do a major overhaul of the XUnit libraries and created completely new v3 packages. So don't expect backwards compatibility, but a significant architectural shift that brings improved performance, better isolation, and modernized APIs to .NET testing. While the migration requires some work, the benefits should make it worthwhile for most projects. Yesterday I talked about some of the features that I like in the new version. Today I want to walk you through the basic steps needed to migrate an existing v2 project to v3. Understanding the architectural changes Before diving into the migration steps, it's crucial to understand the fundamental changes in xUnit v3 that impact how you'll structure and run your tests. From libraries to executables The most significant change in v3 is that test projects are now stand-alone executables rather than libraries that require external runners. This architectural shift solves several problems that plagued ...
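On the project-file side, the move from library to executable typically boils down to a couple of edits. The sketch below shows the usual shape (package versions are illustrative; check NuGet for current ones):

    <Project Sdk="Microsoft.NET.Sdk">
      <PropertyGroup>
        <TargetFramework>net8.0</TargetFramework>
        <!-- v3 test projects are stand-alone executables: -->
        <OutputType>Exe</OutputType>
      </PropertyGroup>
      <ItemGroup>
        <!-- xunit.v3 replaces the v2 'xunit' package -->
        <PackageReference Include="xunit.v3" Version="1.0.0" />
        <PackageReference Include="xunit.runner.visualstudio" Version="3.0.0" />
      </ItemGroup>
    </Project>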

Migrating from XUnit v2 to v3 - What’s new?

The XUnit team decided to do a major overhaul of the XUnit libraries and created completely new v3 packages. So don't expect backwards compatibility, but a significant architectural shift that brings improved performance, better isolation, and modernized APIs to .NET testing. While the migration requires some work, the benefits should make it worthwhile for most projects. In this post I'll explain some of the reasons why I think you should consider migrating to the v3 version. From libraries to executables The most significant change in v3 is that test projects are now stand-alone executables rather than libraries that require external runners. This architectural shift solves several problems that plagued v2: Dependency resolution: the compiler now handles dependency resolution at build time instead of runtime. Process isolation: tests run in separate processes, providing better isolation than the Application Domain approach used in v2. Simplified execution:...

Writing your own batched sink in Serilog

Serilog is one of the most popular structured logging libraries for .NET, offering excellent performance and flexibility. While Serilog comes with many built-in sinks for common destinations like files, databases, and cloud services, we created a custom sink to guarantee compatibility with an existing legacy logging solution. However, as we noticed some performance issues, we decided to rewrite the implementation to use a batched sink. In this post, we'll explore how to build your own batched sink in Serilog, which can significantly improve performance when dealing with high-volume logging scenarios. At least that is what we are aiming for… Understanding Serilog's batched sink architecture Serilog has built-in batching support and handles most of the complexity of batching log events for you. Internally it will handle things like: collecting log events in an internal queue, periodically flushing batches based on time intervals or batch size limits, handling backpre...
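To make that concrete, here is a minimal sketch of such a sink, assuming Serilog 4.x where IBatchedLogEventSink and BatchingOptions ship in the core package (names may differ in older versions, where the separate Serilog.Sinks.PeriodicBatching package played this role). Console.WriteLine stands in for the legacy endpoint:

    using Serilog;
    using Serilog.Core;
    using Serilog.Events;

    // Registration: Serilog wraps the sink with its own queueing and
    // flushing machinery based on the options below.
    var log = new LoggerConfiguration()
        .WriteTo.Sink(new LegacyBatchedSink(), new BatchingOptions
        {
            BatchSizeLimit = 100,
            BufferingTimeLimit = TimeSpan.FromSeconds(2)
        })
        .CreateLogger();

    class LegacyBatchedSink : IBatchedLogEventSink
    {
        public Task EmitBatchAsync(IReadOnlyCollection<LogEvent> batch)
        {
            // One call to the (stand-in) legacy endpoint per batch
            // instead of one call per event:
            foreach (var logEvent in batch)
                Console.WriteLine(logEvent.RenderMessage());
            return Task.CompletedTask;
        }

        public Task OnEmptyBatchAsync() => Task.CompletedTask;
    }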

Ollama– Running LLMs locally

Ollama remains my go-to tool to run LLMs locally. With the latest release, the Ollama team introduced a user interface. This means you no longer need to use the command line or tools like OpenWebUI to interact with the available language models. After installing the latest release, you are welcomed by a new chat window similar to ChatGPT. Interacting with the model can be done directly through the UI. A history of earlier conversations is stored and available. You can easily switch between models by clicking on the model dropdown. If a model is not yet available locally, you can download it immediately by clicking on the Download icon. If you need a bigger context window, you can now change this directly from the settings. Some other features worth mentioning are file support (simply drag and drop a file to the chat window) and multimodal support. All this makes the new Ollama app a good starting point to try and interact with the available LLMs local...

Part VIII – Evaluations (continued)

I promised to continue my blog post from yesterday about Microsoft.Extensions.AI.Evaluation. Today we have a look at caching the responses and reporting. This post is part of a blog series. Other posts so far: Part I – An introduction to Microsoft.Extensions.AI Part II – ASP.NET Core integration Part III – Tool calling Part IV – Telemetry integration Part V – Chat history Part VI – Structured output Part VII – MCP integration Part VIII – Evaluations Part VIII – Evaluations (continued) The example I showed yesterday was a simple example of how you can integrate LLM validation into your tests and check the relevance of the LLM response. However, this is only one of the many metrics you typically want to check. A more realistic test scenario will evaluate a large range of metrics, and as tests can be run quite frequently, caching the responses of our LLM models will save us both money and time (as tests can run faster). Let’s update our prev...
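As a preview of that direction, here is a heavily hedged sketch of a reporting setup with response caching enabled. The names come from the Microsoft.Extensions.AI.Evaluation.Reporting package and should be verified against the version you install; chatConfiguration wraps your IChatClient as in the previous post:

    // Sketch only; API names may differ per package version.
    ReportingConfiguration reporting = DiskBasedReportingConfiguration.Create(
        storageRootPath: @"C:\eval-results",      // responses and results land here
        evaluators: [new RelevanceTruthAndCompletenessEvaluator()],
        chatConfiguration: chatConfiguration,
        enableResponseCaching: true);             // reuse cached LLM responses across runs

    await using ScenarioRun run =
        await reporting.CreateScenarioRunAsync("ProductSearch.Relevance");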

Microsoft.Extensions.AI–Part VIII–Evaluations

Back from holiday with charged batteries, we continue our journey exploring the Microsoft.Extensions.AI library. Today we have a look at evaluating AI models. This post is part of a blog series. Other posts so far: Part I – An introduction to Microsoft.Extensions.AI Part II – ASP.NET Core integration Part III – Tool calling Part IV – Telemetry integration Part V – Chat history Part VI – Structured output Part VII – MCP integration Part VIII – Evaluations What is Microsoft.Extensions.AI.Evaluation? Microsoft.Extensions.AI.Evaluation is a set of libraries with one common goal: simplifying the process of evaluating the quality and accuracy of responses generated by AI models. Measuring the quality of your AI apps is challenging; you need to evaluate metrics like: Relevance: how effective is the response for a given prompt? Truthfulness: is the response factually correct? Coherence: is the response logically structured and consiste...
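To give a feel for the shape of the API, here is a hedged sketch of a single-metric evaluation using the CoherenceEvaluator from the Microsoft.Extensions.AI.Evaluation.Quality package (treat the exact names as assumptions to verify against the package version you use; CreateChatClient is a hypothetical helper standing in for however you build your IChatClient):

    using Microsoft.Extensions.AI;
    using Microsoft.Extensions.AI.Evaluation;
    using Microsoft.Extensions.AI.Evaluation.Quality;

    // Hypothetical helper; see part I of the series for building a client.
    IChatClient chatClient = CreateChatClient();

    var messages = new List<ChatMessage>
    {
        new(ChatRole.User, "Explain dependency injection in one paragraph.")
    };
    ChatResponse response = await chatClient.GetResponseAsync(messages);

    // Quality evaluators use an LLM themselves to score the response:
    IEvaluator evaluator = new CoherenceEvaluator();
    EvaluationResult result = await evaluator.EvaluateAsync(
        messages, response, new ChatConfiguration(chatClient));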

The prompt as documentation: Should AI-generated code include its origin story?

In a recent code review, I stumbled upon something that made me pause: a developer had included the original AI prompt as a comment block above a set of classes. At first glance, it seemed like unnecessary clutter. But as I read through both the prompt and the resulting code, I realized I might be witnessing the birth of a new documentation practice that could fundamentally change how we understand and maintain AI-assisted codebases. The case for prompts as living documentation Everyone who has taken the time to dig a little deeper into using AI as part of their day-to-day coding activities knows that a carefully designed and written prompt can make all the difference. So, wouldn't it be unfortunate if all the effort we put into it were lost after the AI agent has done its job? Some other reasons I could think of that make storing these prompts valuable: Reproducibility: if we need to modify the AI-generated code, we could adjust the prompt and regenerate rather than hand-edi...
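For readers who have not seen the practice: the comment block in question looked conceptually like the (invented, simplified) example below, with the prompt preserved verbatim above the code it produced:

    // --- AI PROMPT (kept as documentation; format is illustrative) ---
    // Generate an immutable C# record 'Money' with a decimal Amount and
    // an ISO 4217 Currency string. Addition must throw when the
    // currencies differ.
    // --- END PROMPT ---
    public record Money(decimal Amount, string Currency)
    {
        public static Money operator +(Money a, Money b) =>
            a.Currency == b.Currency
                ? a with { Amount = a.Amount + b.Amount }
                : throw new InvalidOperationException("Currency mismatch");
    }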

Custom chat modes in Github Copilot

Out of the box, you get 3 modes in VS Code as explained in the documentation:

Ask mode – Ask mode is optimized for answering questions about your codebase, coding, and general technology concepts. Use ask mode to understand how a piece of code works, brainstorm software design ideas, or explore new technologies.

Edit mode – Edit mode is optimized for making code edits across multiple files in your project. VS Code directly applies the code changes in the editor, where you can review them in-place. Use edit mode for coding tasks when you have a good understanding of the changes that you want to make, and which files you want to edit.

Agent mode – Agent mode is optimized for making autonomous edits across multiple files in your project. Use agent mode for coding tasks when you have a less well-defined task that might also require running terminal commands and tools. ...
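Custom chat modes extend this list with your own. A hedged sketch of what such a definition looks like: a Markdown file with YAML front matter, stored as something like .github/chatmodes/security-review.chatmode.md in your workspace (the file name, tools list and instructions here are illustrative):

    ---
    description: Review the selected code for security issues without editing it.
    tools: ['codebase', 'search']
    ---
    You are a security reviewer. Inspect the code for injection,
    authentication and secrets-handling problems, and report your
    findings as a prioritized list. Do not propose code edits.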

Why rational humans make irrational decisions

I always made the assumption that, in some way, we as humans make rational decisions in a business context. Maybe we do something foolish at home, but at work we weigh options, calculate outcomes, and make logical decisions based on available information. Right? Then I encountered Daniel Kahneman's Thinking, Fast and Slow, and this comfortable assumption crumbled. The book reveals an uncomfortable truth: we're far less rational than we'd like to believe. One of the most pervasive examples of our flawed reasoning is the sunk cost fallacy – our tendency to continue investing in failing ventures simply because we've already invested so much. The blizzard we drive into Kahneman paints a vivid picture: you've bought expensive concert tickets, but a dangerous blizzard hits on the night of the show. The rational choice is clear – stay home and stay safe. The money is already spent; driving into dangerous conditions won't bring it back. Yet many of us would ...

Run an Azure Pipelines build agent in WSL2

At my current employer we are still using a local build server to host our Azure Pipelines agents and run our builds. Having multiple agents running on the same machine works most of the time, as most frameworks and libraries we depend on allow multiple side-by-side installations. Unfortunately there is one framework that doesn't like this: Node.js. So far, we have worked around this by using NVM (Node Version Manager) to switch between Node.js versions. Of course this only works as long as no 2 builds that use different Node.js versions are running at the same time. We previously attempted to fix this problem by running Docker on our build server and hosting separate build agents in containers. But it introduced too much overhead on our build server and we never succeeded in getting it stable. As we had to move our build environment to a new server, we thought it would be a good time to finally fix this problem; this time by running multiple Linux distributions using WSL2 instead...
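The rest of the post is cut off above, but the general shape of the approach follows the standard Azure DevOps Linux agent setup, run inside each WSL2 distribution. A hedged sketch (the organization URL, agent version and distro are illustrative; follow the Azure DevOps agent docs for the download and PAT details):

    # From Windows: create a WSL2 distribution per agent
    wsl --install -d Ubuntu-24.04

    # Inside the distribution: download, configure and run the Linux agent
    mkdir myagent && cd myagent
    tar zxvf ~/vsts-agent-linux-x64-4.255.0.tar.gz
    ./config.sh --url https://dev.azure.com/yourorg --auth pat
    ./run.sh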