
Posts

Showing posts from 2025

How to get rid of the smartcard popup when interacting with LDAP over SSL

In one of our applications we are connecting with LDAP through System.DirectoryServices.AccountManagement. This code worked fine for years until we had to make the switch from LDAP to LDAPS and incorporate SSL in our connections. Let me start by showing you the original code (or at least a part of it): We thought that making the switch to SSL would be easy. We therefore added ContextOptions.SecureSocketLayer to the context options. However, after doing that, we get a SmartCard popup every time this code is called: I couldn’t find a good solution to fix it while keeping the PrincipalContext class. After some help from GitHub Copilot and some research I discovered that I could get it working when I switched to the underlying LdapConnection and explicitly set the ClientCertificate to null: More information: c# - PrincipalContext with smartcard inserted - Stack Overflow | c# - How to validate server SSL certificate for LDAP+SSL connection - Stack Overflow
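The post's actual code snippets are not part of this excerpt. As a rough sketch of the approach it describes (and of the callback trick discussed in the linked Stack Overflow answers), switching to System.DirectoryServices.Protocols and returning null from the client-certificate callback could look like this; the server name and port are placeholders, not values from the post:

```csharp
using System.DirectoryServices.Protocols;

// Placeholder server name; 636 is the default LDAPS port.
var identifier = new LdapDirectoryIdentifier("ldap.example.local", 636);

using var connection = new LdapConnection(identifier)
{
    AuthType = AuthType.Negotiate
};

connection.SessionOptions.SecureSocketLayer = true;
connection.SessionOptions.ProtocolVersion = 3;

// Returning null tells the SSL layer not to present a client certificate,
// which is what otherwise triggers the smartcard selection popup.
connection.SessionOptions.QueryClientCertificate = (conn, trustedCAs) => null;

// Bind with the current credentials (or pass a NetworkCredential explicitly).
connection.Bind();
```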

Migrating from XUnit v2 to v3 – Troubleshooting

The XUnit team decided to do a major overhaul of the XUnit libraries and created completely new V3 packages. So don't expect backwards compatibility but a significant architectural shift that brings improved performance, better isolation, and modernized APIs to .NET testing. While the migration requires some work, the benefits should make it worthwhile for most projects. In this post I’ll share some of the migration problems I encountered and how to fix them. Xunit.Abstractions is not recognized This is an easy one. The Xunit.Abstractions namespace has become internal and should no longer be referenced. Just remove any using Xunit.Abstractions statement from your code. No v3 version of Serilog.Sinks.XUnit After switching to the v3 version of the Xunit packages, I noticed that the old XUnit v2 version was still used somewhere, causing the following compiler error: The type 'FactAttribute' exists in both 'xunit.core, Version=2.4.2.0, Culture=neutral, Publ...
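The excerpt is cut off, but the Xunit.Abstractions change is easy to illustrate: in v3 the types that used to live in Xunit.Abstractions (such as ITestOutputHelper) moved into the Xunit namespace, so a test class only needs the usual using Xunit. A minimal before/after sketch (the test class itself is a placeholder):

```csharp
// xUnit v2 needed: using Xunit.Abstractions;
// xUnit v3: drop that using; ITestOutputHelper now lives in the Xunit namespace.
using Xunit;

public class CalculatorTests
{
    private readonly ITestOutputHelper _output;

    public CalculatorTests(ITestOutputHelper output) => _output = output;

    [Fact]
    public void Adding_two_numbers_returns_their_sum()
    {
        _output.WriteLine("Running the addition test");
        Assert.Equal(4, 2 + 2);
    }
}
```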

Migrating from XUnit v2 to v3 – Getting started

The XUnit team decided to do a major overhaul of the XUnit libraries and created completely new V3 packages. So don't expect backwards compatibility but a significant architectural shift that brings improved performance, better isolation, and modernized APIs to .NET testing. While the migration requires some work, the benefits should make it worthwhile for most projects. Yesterday I talked about some of the features that I like in the new version. Today I want to walk you through the basic steps needed to migrate an existing V2 project to V3. Understanding the architectural changes Before diving into the migration steps, it's crucial to understand the fundamental changes in xUnit v3 that impact how you'll structure and run your tests. From Libraries to Executables The most significant change in v3 is that test projects are now stand-alone executables rather than libraries that require external runners. This architectural shift solves several problems that plagued ...
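The excerpt stops before the concrete steps, but the core project-file change is easy to show: a v3 test project is an executable and references the new xunit.v3 package instead of xunit. A rough sketch of the csproj (target framework and version numbers are placeholders, not taken from the post):

```xml
<Project Sdk="Microsoft.NET.Sdk">

  <PropertyGroup>
    <TargetFramework>net8.0</TargetFramework>
    <!-- xUnit v3 test projects are stand-alone executables -->
    <OutputType>Exe</OutputType>
  </PropertyGroup>

  <ItemGroup>
    <!-- the v2 'xunit' package is replaced by 'xunit.v3'; versions below are placeholders -->
    <PackageReference Include="xunit.v3" Version="1.*" />
    <PackageReference Include="xunit.runner.visualstudio" Version="3.*" />
    <PackageReference Include="Microsoft.NET.Test.Sdk" Version="17.*" />
  </ItemGroup>

</Project>
```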

Migrating from XUnit v2 to v3 - What’s new?

The XUnit team decided to do a major overhaul of the XUnit libraries and created completely new V3 packages. So don't expect backwards compatibility but a significant architectural shift that brings improved performance, better isolation, and modernized APIs to .NET testing. While the migration requires some work, the benefits should make it worthwhile for most projects. In this post I'll explain some of the reasons why I think you should consider migrating to the v3 version. From libraries to executables The most significant change in v3 is that test projects are now stand-alone executables rather than libraries that require external runners. This architectural shift solves several problems that plagued v2:
Dependency Resolution: The compiler now handles dependency resolution at build time instead of runtime
Process Isolation: Tests run in separate processes, providing better isolation than the Application Domain approach used in v2
Simplified Execution: ...

Writing your own batched sink in Serilog

Serilog is one of the most popular structured logging libraries for .NET, offering excellent performance and flexibility. While Serilog comes with many built-in sinks for common destinations like files, databases, and cloud services, we created a custom sink to guarantee compatibility with an existing legacy logging solution. However, as we noticed some performance issues, we decided to rewrite the implementation to use a batched sink. In this post, we'll explore how to build your own batched sink in Serilog, which can significantly improve performance when dealing with high-volume logging scenarios. At least that is what we are aiming for… Understanding Serilog's batched sink architecture Serilog has built-in batching support and handles most of the complexity of batching log events for you. Internally it will handle things like:
Collecting log events in an internal queue
Periodically flushing batches based on time intervals or batch size limits
Handling backpre...
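The excerpt is truncated before the implementation, so here is a rough, self-contained sketch of what a batched sink can look like with Serilog's built-in batching (Serilog 4.x). The LegacyLogStore class and the option values are placeholders, and the BatchingOptions property names may differ slightly between versions:

```csharp
using System;
using System.Collections.Generic;
using System.IO;
using System.Threading.Tasks;
using Serilog;
using Serilog.Configuration;
using Serilog.Core;
using Serilog.Events;

// Placeholder for the existing legacy logging solution mentioned in the post.
static class LegacyLogStore
{
    public static Task WriteAsync(string message) =>
        File.AppendAllTextAsync("legacy.log", message + Environment.NewLine);
}

// The batched sink: Serilog queues events internally and hands them over in batches.
class LegacyBatchedSink : IBatchedLogEventSink
{
    public async Task EmitBatchAsync(IReadOnlyCollection<LogEvent> batch)
    {
        foreach (var logEvent in batch)
        {
            await LegacyLogStore.WriteAsync(logEvent.RenderMessage());
        }
    }

    // Invoked when a flush interval passes without any queued events.
    public Task OnEmptyBatchAsync() => Task.CompletedTask;
}

class Program
{
    static void Main()
    {
        Log.Logger = new LoggerConfiguration()
            .WriteTo.Sink(new LegacyBatchedSink(), new BatchingOptions
            {
                // Placeholder limits; tune them for your own workload.
                BatchSizeLimit = 500,
                BufferingTimeLimit = TimeSpan.FromSeconds(2)
            })
            .CreateLogger();

        Log.Information("Hello from the batched sink");
        Log.CloseAndFlush();
    }
}
```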

Ollama – Running LLMs locally

Ollama remains my go-to tool to run LLMs locally. With the latest release the Ollama team introduced a user interface. This means you no longer need to use the command line or tools like OpenWebUI to interact with the available language models. After installing the latest release, you are welcomed by a new chat window similar to ChatGPT: Interacting with the model can be done directly through the UI: A history of earlier conversations is stored and available: You can easily switch between models by clicking on the model dropdown. If a model is not yet available locally, you can download it immediately by clicking on the Download icon: If you need a bigger context window, you can now change this directly from the settings: Some other features worth mentioning are the file support (simply drag and drop a file to the chat window) and multimodal support: All this makes the new Ollama app a good starting point to try and interact with the available LLMs local...

Part VIII – Evaluations (continued)

I promised to continue my blog post from yesterday about Microsoft.Extensions.AI.Evaluation. Today we have a look at caching the responses and reporting. This post is part of a blog series. Other posts so far:
Part I – An introduction to Microsoft.Extensions.AI
Part II – ASP.NET Core integration
Part III – Tool calling
Part IV – Telemetry integration
Part V – Chat history
Part VI – Structured output
Part VII – MCP integration
Part VIII – Evaluations
Part VIII – Evaluations (continued)
The example I showed yesterday was a simple example of how you can integrate LLM validation into your tests and check the relevance of the LLM response. However, this is only one of the many metrics you typically want to check. A more realistic test scenario will evaluate a large range of metrics, and as tests can be run quite frequently, caching the responses of our LLM models will save us both money and time (as tests can run faster). Let’s update our prev...

Microsoft.Extensions.AI – Part VIII – Evaluations

Back from holiday with charged batteries, we continue our journey exploring the Microsoft.Extensions.AI library. Today we have a look at evaluating AI models. This post is part of a blog series. Other posts so far:
Part I – An introduction to Microsoft.Extensions.AI
Part II – ASP.NET Core integration
Part III – Tool calling
Part IV – Telemetry integration
Part V – Chat history
Part VI – Structured output
Part VII – MCP integration
Part VIII – Evaluations
What is Microsoft.Extensions.AI.Evaluation? Microsoft.Extensions.AI.Evaluation is a set of libraries with one common goal: simplifying the process of evaluating the quality and accuracy of responses generated by AI models. Measuring the quality of your AI apps is challenging; you need to evaluate metrics like:
Relevance: How effective is the response for a given prompt?
Truthfulness: Is the response factually correct?
Coherence: Is the response logically structured and consiste...

The prompt as documentation: Should AI-generated code include its origin story?

In a recent code review, I stumbled upon something that made me pause: a developer had included the original AI prompt as a comment block above a set of classes. At first glance, it seemed like unnecessary clutter. But as I read through both the prompt and the resulting code, I realized I might be witnessing the birth of a new documentation practice that could fundamentally change how we understand and maintain AI-assisted codebases. The case for prompts as living documentation Everyone who took the time to dig a little deeper into using AI as part of their day-to-day coding activities knows that a carefully designed and written prompt can make all the difference. So, wouldn't it be unfortunate if all the effort we put into it were lost after the AI agent has done its job? Some other reasons I could think of that make storing these prompts valuable:
Reproducibility: If we need to modify the AI-generated code, we could adjust the prompt and regenerate rather than hand-edi...
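Purely as an illustration of the practice described above (everything below is hypothetical, not taken from the code review in question), such a prompt-as-documentation block could look like this:

```csharp
using System;

// --- Original AI prompt (kept as documentation; contents are a hypothetical example) ---
// "Generate an immutable Money value type with a decimal Amount and an ISO 4217
//  currency code. Addition should throw when the currencies differ. Follow our
//  conventions: sealed records, guard clauses, no public setters."
// Tool: GitHub Copilot (agent mode)
// ---------------------------------------------------------------------------------------
public sealed record Money(decimal Amount, string Currency)
{
    public Money Add(Money other) =>
        other.Currency == Currency
            ? this with { Amount = Amount + other.Amount }
            : throw new InvalidOperationException("Cannot add amounts in different currencies.");
}
```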

Custom chat modes in GitHub Copilot

Out of the box, you get 3 modes in VS Code as explained in the documentation:
Ask mode: Ask mode is optimized for answering questions about your codebase, coding, and general technology concepts. Use ask mode to understand how a piece of code works, brainstorm software design ideas, or explore new technologies.
Edit mode: Edit mode is optimized for making code edits across multiple files in your project. VS Code directly applies the code changes in the editor, where you can review them in-place. Use edit mode for coding tasks when you have a good understanding of the changes that you want to make, and which files you want to edit.
Agent mode: Agent mode is optimized for making autonomous edits across multiple files in your project. Use agent mode for coding tasks when you have a less well-defined task that might also require running terminal commands and tools. ...
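The excerpt ends before the custom modes themselves. For illustration only (the content below is my own placeholder, not the post's example): a custom chat mode is a .chatmode.md file, typically placed under .github/chatmodes/ in your workspace, with front matter describing the mode and a body containing the instructions:

```markdown
---
description: 'Planning mode: analyze the request and propose an implementation plan without editing files.'
tools: ['codebase', 'search', 'fetch']
---
You are in planning mode. Produce an implementation plan (overview, affected files,
step-by-step changes, risks and open questions) but do not modify any code.
```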

Why rational humans make irrational decisions

I always made the assumption that in some way we as humans make rational decisions in a business context. Maybe we do something foolish at home, but at work we weigh options, calculate outcomes, and make logical decisions based on available information. Yes, right? Then I encountered Daniel Kahneman's Thinking, Fast and Slow, and this comfortable assumption crumbled. The book reveals an uncomfortable truth: we're far less rational than we'd like to believe. One of the most pervasive examples of our flawed reasoning is the sunk cost fallacy – our tendency to continue investing in failing ventures simply because we've already invested so much. The blizzard we drive into Kahneman paints a vivid picture: you've bought expensive concert tickets, but a dangerous blizzard hits on the night of the show. The rational choice is clear – stay home and stay safe. The money is already spent; driving into dangerous conditions won't bring it back. Yet many of us would ...

Run an Azure Pipelines build agent in WSL2

At my current employer we are still using a local build server to host our Azure Pipelines agents and run our builds. Having multiple agents running on the same machine works most of the time, as most frameworks and libraries we depend on allow multiple side-by-side installations. Unfortunately there is one framework that doesn't like this: Node.js. So far, we have worked around this by using NVM (Node Version Manager) to switch between Node.js versions. Of course this only works as long as no two builds that use a different Node.js version are running at the same time. We made a previous attempt to fix this problem by running Docker on our build server and hosting separate build agents in containers. But it introduced too much overhead on our build server and we never succeeded in getting it stable. As we had to move our build environment to a new server, we thought it would be a good time to finally fix this problem; this time by running multiple Linux distributions using WSL2 instead...
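The excerpt is cut off before the setup itself; as a rough sketch (the distribution name, organization URL, pool and agent names are placeholders, not the post's values), hosting an agent inside a WSL2 distribution boils down to installing the distro on the Windows build server and configuring the Azure Pipelines Linux agent inside it:

```bash
# On the Windows build server: add a WSL2 distribution
wsl --install -d Ubuntu

# Inside the WSL2 distribution: unpack and configure the Azure Pipelines Linux agent
# (download the agent package from your Azure DevOps organization's Agent pools page)
mkdir ~/azagent && cd ~/azagent
tar zxvf ~/vsts-agent-linux-x64.tar.gz
./config.sh --url https://dev.azure.com/yourorganization --pool "WSL2" --agent "wsl-agent-01"

# Run interactively, or register it as a service so it survives reboots
./run.sh
# ./svc.sh install && ./svc.sh start
```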

From AutoMapper Profiles to Mapster: Centralizing your mapping logic

Due to the licensing changes for AutoMapper, we decided to make the switch to Mapster. Although most changes were rather obvious and easy to achieve, there was one question we couldn’t answer immediately: "How do I organize my mapping configurations like I did with AutoMapper Profiles?" While Mapster doesn't have a direct equivalent to AutoMapper's Profile class, it offers enough flexibility to create your own alternative. The AutoMapper Profile pattern In AutoMapper, we organized our mappings like this: We can add the profiles by automatically scanning for profiles: Mapster's Approach: Configuration classes Mapster allows you to specify the mapping configuration through the TypeAdapterConfig class. Here is the same mapping as shown above: But using this approach doesn’t allow us to scan for configuration classes automatically. An approach that does work is creating a class that implements the Mapster IRegister interface: Now ...
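The post's own snippets are not part of this excerpt, but the pattern is straightforward to sketch. Assuming placeholder Product/ProductDto types, a Profile-like configuration class implements Mapster's IRegister interface and is picked up by scanning the assembly at startup:

```csharp
using System.Reflection;
using Mapster;

// Placeholder source and destination types for the example.
public class Product
{
    public int Id { get; set; }
    public string Name { get; set; } = "";
}

public class ProductDto
{
    public int Id { get; set; }
    public string DisplayName { get; set; } = "";
}

// Mapster's counterpart to an AutoMapper Profile: a class implementing IRegister.
public class ProductMappingConfig : IRegister
{
    public void Register(TypeAdapterConfig config)
    {
        config.NewConfig<Product, ProductDto>()
              .Map(dest => dest.DisplayName, src => src.Name);
    }
}

public static class MappingSetup
{
    // Call this once at startup; Scan finds and applies every IRegister implementation,
    // similar to AutoMapper's profile scanning.
    public static void Configure() =>
        TypeAdapterConfig.GlobalSettings.Scan(Assembly.GetExecutingAssembly());
}
```

After Configure() has run, a call like product.Adapt&lt;ProductDto&gt;() uses the registered mapping.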

The two-ear Product Manager: Why balanced listening is your superpower

As a software architect I've worked with multiple product managers over the years. Most of them position themselves as the "voice of the customer" or the "bridge between business and technology." Recently, while listening to the Software Captains podcast interview with Peter Janssens (in Dutch) about product management, Peter shared a quote that perfectly crystallized what effective product management really should look like in practice: A product manager has two ears and should use them equally—one ear to listen to the business and customers, one ear to listen to the product and development team. This simple metaphor aligns with my experience: my collaborations with product managers who listened this way resulted in the best possible products. I unfortunately worked with some product managers that only used one ear, resulting in a mediocre product, budget overruns and a lack of innovation. The two-ear product manager really makes all the difference. So if you are a produ...

Fixing ValidationProblemDetails serialization Issues when using the JSON Source Generator in ASP.NET Core

As I gladly accept any kind of performance improvement I can get in my applications, I like to use the System.Text.Json source generator to generate the serialization logic for my Data Transfer Objects. However, after upgrading a project to .NET 8, I started to get errors. The problem When using ASP.NET Core's [ApiController] attribute with automatic model validation, the framework automatically returns ValidationProblemDetails objects for validation errors. However, if you've configured your application to use System.Text.Json source generators for performance benefits, you might encounter serialization exceptions like: System.NotSupportedException: JsonTypeInfo metadata for type 'Microsoft.AspNetCore.Mvc.ValidationProblemDetails' was not provided by TypeInfoResolver of type '[]'. If using source generation, ensure that all root types passed to the serializer have been annotated with 'JsonSerializableAttribute', along with any types that might...
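The excerpt ends before the fix, but the usual remedy (sketched below with a placeholder context name, not the post's exact code) is to add ValidationProblemDetails and ProblemDetails to your JsonSerializerContext and insert that context into the resolver chain instead of replacing the defaults:

```csharp
using Microsoft.AspNetCore.Mvc;
using System.Text.Json.Serialization;

var builder = WebApplication.CreateBuilder(args);

builder.Services
    .AddControllers()
    .AddJsonOptions(options =>
        // Insert the generated context without removing the metadata
        // that ASP.NET Core itself contributes to the chain.
        options.JsonSerializerOptions.TypeInfoResolverChain.Insert(0, AppJsonSerializerContext.Default));

var app = builder.Build();
app.MapControllers();
app.Run();

// Source-generated metadata, now including the types returned for
// automatic model-validation failures (add your own DTOs here as well).
[JsonSerializable(typeof(ProblemDetails))]
[JsonSerializable(typeof(ValidationProblemDetails))]
internal partial class AppJsonSerializerContext : JsonSerializerContext
{
}
```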

How to hide ‘Server’ and ‘X-Powered-By’ headers in ASP.NET Core

As we see security as a top priority, for every new application that we put in production, we have it penetration tested first. One remark we got from the last pen test was about the information our servers inadvertently revealed through HTTP response headers. Although I think it is not the biggest possible security issue, exposing details about the technology stack through headers like Server and X-Powered-By gives potential attackers some reconnaissance information for free. In this post, we'll explore why you should hide these headers and demonstrate several methods to remove or customize them in ASP.NET Core applications. Why hide server headers? Server identification headers might seem harmless, but they can pose security risks:
Information Disclosure: Headers like Server: Kestrel or X-Powered-By: ASP.NET immediately tell attackers what technology stack you're using, making it easier for them to target known vulnerabili...
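The excerpt is truncated before the how-to, but the most common first step is easy to sketch: turn off Kestrel's Server header in Program.cs. (The X-Powered-By header is typically added by IIS when hosting behind it, so it is removed in web.config or at the reverse proxy rather than in application code.)

```csharp
var builder = WebApplication.CreateBuilder(args);

// Stop Kestrel from emitting the "Server: Kestrel" response header.
builder.WebHost.ConfigureKestrel(options => options.AddServerHeader = false);

var app = builder.Build();

app.MapGet("/", () => "Hello, headers!");

app.Run();
```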

Let GitHub Copilot create custom instructions based on your codebase

If you are not using custom instructions with GitHub Copilot yet, then this post may help you finally get started. Writing your own set of custom instructions can be a challenge, and although multiple examples are available, it can still be hard to come up with the right set of instructions. But what if we can let GitHub Copilot create the instructions for us? Let’s find out how… Why custom instructions? Custom instructions in GitHub Copilot can significantly improve your coding experience and productivity in several key ways:
Code quality and consistency: Custom instructions help ensure Copilot generates code that follows your specific style guidelines, naming conventions, and architectural patterns. Instead of getting generic suggestions, you'll receive code that matches your project's existing standards and practices.
Context awareness: By providing instructions about your tech stack, frameworks, and project structure, Copilot can make more relevant s...
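The excerpt stops before the actual trick, but the general idea can be illustrated with a prompt of your own: open Copilot Chat (agent mode works well for this) and ask it to analyze the repository and write the result to the .github/copilot-instructions.md file that Copilot picks up automatically. The wording below is my own placeholder, not the post's prompt:

```text
Analyze this repository (project structure, frameworks, naming conventions, test setup
and build scripts) and generate a .github/copilot-instructions.md file containing concise
custom instructions that describe our tech stack, coding conventions and architectural
patterns, so that future Copilot suggestions follow the existing standards of this codebase.
```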