
Posts

Migrating from XUnit v2 to v3 – Troubleshooting

The XUnit team decided to do a major overhaul of the XUnit libraries and created completely new V3 packages. So don't expect backwards compatibility, but rather a significant architectural shift that brings improved performance, better isolation, and modernized APIs to .NET testing. While the migration requires some work, the benefits should make it worthwhile for most projects. In this post I'll share some of the migration problems I encountered and how to fix them.

XUnit.Abstractions is not recognized

This is an easy one. Xunit.Abstractions has become an internal namespace and should no longer be referenced. Just remove any using Xunit.Abstractions statement from your code.

No v3 version of Serilog.Sinks.XUnit

After switching to the v3 version of the Xunit packages, I noticed that the old XUnit v2 version was still used somewhere, causing the following compiler error: The type 'FactAttribute' exists in both 'xunit.core, Version=2.4.2.0, Culture=neutral, Publ...
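For the first fix, the most common reason test code referenced Xunit.Abstractions in v2 was the ITestOutputHelper interface. A minimal sketch of what the change looks like (the class and test names are made up for illustration); in v3 the same type simply lives in the Xunit namespace:

```csharp
// v2 needed: using Xunit.Abstractions;
// In v3 the abstractions package is gone and ITestOutputHelper lives in Xunit.
using Xunit;

public class CalculatorTests
{
    private readonly ITestOutputHelper _output;

    // As in v2, xUnit injects the output helper through the constructor
    public CalculatorTests(ITestOutputHelper output) => _output = output;

    [Fact]
    public void Add_ReturnsSum()
    {
        _output.WriteLine("Running Add_ReturnsSum");
        Assert.Equal(4, 2 + 2);
    }
}
```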
Recent posts

Migrating from XUnit v2 to v3 – Getting started

The XUnit team decided to do a major overhaul of the XUnit libraries and created completely new V3 packages. So don't expect backwards compatibility, but rather a significant architectural shift that brings improved performance, better isolation, and modernized APIs to .NET testing. While the migration requires some work, the benefits should make it worthwhile for most projects. Yesterday I talked about some of the features that I like in the new version. Today I want to walk you through the basic steps needed to migrate an existing V2 project to V3.

Understanding the architectural changes

Before diving into the migration steps, it's crucial to understand the fundamental changes in xUnit v3 that impact how you'll structure and run your tests.

From Libraries to Executables

The most significant change in v3 is that test projects are now stand-alone executables rather than libraries that require external runners. This architectural shift solves several problems that plagued ...
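To give an idea of where the migration ends up, here is a rough sketch of a migrated project file (the package versions are placeholders, check NuGet for the current ones):

```xml
<Project Sdk="Microsoft.NET.Sdk">

  <PropertyGroup>
    <!-- v3 test projects are stand-alone executables, not class libraries -->
    <OutputType>Exe</OutputType>
    <TargetFramework>net8.0</TargetFramework>
  </PropertyGroup>

  <ItemGroup>
    <!-- the single xunit.v3 package replaces the v2 xunit package -->
    <PackageReference Include="xunit.v3" Version="1.0.0" />
    <!-- still required for Visual Studio and dotnet test integration -->
    <PackageReference Include="xunit.runner.visualstudio" Version="3.0.0" />
  </ItemGroup>

</Project>
```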

Migrating from XUnit v2 to v3 – What’s new?

The XUnit team decided to do a major overhaul of the XUnit libraries and created completely new V3 packages. So don't expect backwards compatibility, but rather a significant architectural shift that brings improved performance, better isolation, and modernized APIs to .NET testing. While the migration requires some work, the benefits should make it worthwhile for most projects. In this post I'll explain some of the reasons why I think you should consider migrating to the v3 version.

From libraries to executables

The most significant change in v3 is that test projects are now stand-alone executables rather than libraries that require external runners. This architectural shift solves several problems that plagued v2:

Dependency Resolution: The compiler now handles dependency resolution at build time instead of runtime
Process Isolation: Tests run in separate processes, providing better isolation than the Application Domain approach used in v2
Simplified Execution: ...
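Because a v3 test project is just an executable, you can run it without any external test runner; a quick sketch, assuming a test project called MyTests:

```
cd MyTests
dotnet run    # builds the project and executes its tests

# or run the compiled binary directly:
./bin/Debug/net8.0/MyTests
```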

Writing your own batched sink in Serilog

Serilog is one of the most popular structured logging libraries for .NET, offering excellent performance and flexibility. While Serilog comes with many built-in sinks for common destinations like files, databases, and cloud services, we created a custom sink to guarantee compatibility with an existing legacy logging solution. However, as we noticed some performance issues, we decided to rewrite the implementation to use a batched sink. In this post, we'll explore how to build your own batched sink in Serilog, which can significantly improve performance when dealing with high-volume logging scenarios. At least that is what we are aiming for…

Understanding Serilog's batched sink architecture

Serilog has built-in batching support and handles most of the complexity of batching log events for you. Internally it will handle things like:

Collecting log events in an internal queue
Periodically flushing batches based on time intervals or batch size limits
Handling backpre...
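To make that concrete, here is a minimal sketch of a batched sink and how it gets wired up, assuming the built-in batching support that ships with Serilog 4 (the sink name, the console target, and the batching limits are made up for illustration):

```csharp
using System;
using System.Collections.Generic;
using System.Threading.Tasks;
using Serilog;
using Serilog.Configuration;
using Serilog.Core;
using Serilog.Events;

// Serilog takes care of the queue, the flush timer and backpressure;
// we only supply the batched sink and the batching limits.
Log.Logger = new LoggerConfiguration()
    .WriteTo.Sink(new LegacyBatchedSink(), new BatchingOptions
    {
        BatchSizeLimit = 500,                         // max events per batch
        BufferingTimeLimit = TimeSpan.FromSeconds(2)  // max wait before a flush
    })
    .CreateLogger();

Log.Information("Hello from a batched sink");
Log.CloseAndFlush(); // flushes any remaining queued events

// Hypothetical sink that forwards whole batches to a legacy logging endpoint
class LegacyBatchedSink : IBatchedLogEventSink
{
    public Task EmitBatchAsync(IReadOnlyCollection<LogEvent> batch)
    {
        // We receive a complete batch at once instead of one event at a time
        foreach (LogEvent logEvent in batch)
        {
            Console.WriteLine(logEvent.RenderMessage());
        }
        return Task.CompletedTask;
    }

    // Called when a flush interval elapses without any queued events
    public Task OnEmptyBatchAsync() => Task.CompletedTask;
}
```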

Ollama – Running LLMs locally

Ollama remains my go-to tool to run LLMs locally. With the latest release the Ollama team introduced a user interface. This means you no longer need to use the command line or tools like OpenWebUI to interact with the available language models. After installing the latest release, you are welcomed by a new chat window similar to ChatGPT. Interacting with the model can be done directly through the UI, and a history of earlier conversations is stored and available. You can easily switch between models by clicking on the model dropdown. If a model is not yet available locally, you can download it immediately by clicking on the Download icon. If you need a bigger context window, you can now change this directly from the settings. Some other features worth mentioning are file support (simply drag and drop a file to the chat window) and multimodal support. All this makes the new Ollama app a good starting point to try and interact with the available LLMs local...

Part VIII – Evaluations (continued)

I promised to continue my blog post from yesterday about Microsoft.Extensions.AI.Evaluation. Today we have a look at caching the responses and reporting. This post is part of a blog series. Other posts so far:

Part I – An introduction to Microsoft.Extensions.AI
Part II – ASP.NET Core integration
Part III – Tool calling
Part IV – Telemetry integration
Part V – Chat history
Part VI – Structured output
Part VII – MCP integration
Part VIII – Evaluations

Part VIII – Evaluations (continued)

The example I showed yesterday was a simple example of how you can integrate LLM validation into your tests and check the relevance of the LLM response. However, this is only one of the many metrics you typically want to check. A more realistic test scenario will evaluate a large range of metrics, and as tests can be run quite frequently, caching the responses of our LLM models will save us both money and time (as tests can run faster). Let's update our prev...
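As a taste of what's coming, here is a rough sketch of how response caching and disk-based reporting fit together (the storage path, scenario name, and evaluator choice are illustrative, and the preview API may still shift between versions):

```csharp
using System.Collections.Generic;
using System.Threading.Tasks;
using Microsoft.Extensions.AI;
using Microsoft.Extensions.AI.Evaluation;
using Microsoft.Extensions.AI.Evaluation.Quality;
using Microsoft.Extensions.AI.Evaluation.Reporting;
using Microsoft.Extensions.AI.Evaluation.Reporting.Storage;

static async Task EvaluateWithCachingAsync(IChatClient chatClient)
{
    // Results and cached LLM responses end up under the storage root
    ReportingConfiguration reportingConfiguration = DiskBasedReportingConfiguration.Create(
        storageRootPath: @"C:\TestReports",
        evaluators: [new RelevanceTruthAndCompletenessEvaluator()],
        chatConfiguration: new ChatConfiguration(chatClient),
        enableResponseCaching: true);

    await using ScenarioRun scenarioRun =
        await reportingConfiguration.CreateScenarioRunAsync("Sunrise.Question");

    // Use the scenario's chat client so reruns are served from the response cache
    var messages = new List<ChatMessage> { new(ChatRole.User, "What time does the sun rise?") };
    ChatResponse response =
        await scenarioRun.ChatConfiguration!.ChatClient.GetResponseAsync(messages);

    EvaluationResult result = await scenarioRun.EvaluateAsync(messages, response);
}
```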

Microsoft.Extensions.AI – Part VIII – Evaluations

Back from holiday with charged batteries, we continue our journey exploring the Microsoft.Extensions.AI library. Today we have a look at evaluating AI models. This post is part of a blog series. Other posts so far:

Part I – An introduction to Microsoft.Extensions.AI
Part II – ASP.NET Core integration
Part III – Tool calling
Part IV – Telemetry integration
Part V – Chat history
Part VI – Structured output
Part VII – MCP integration
Part VIII – Evaluations

What is Microsoft.Extensions.AI.Evaluation?

Microsoft.Extensions.AI.Evaluation is a set of libraries with one common goal: simplifying the process of evaluating the quality and accuracy of responses generated by AI models. Measuring the quality of your AI apps is challenging; you need to evaluate metrics like:

Relevance: How effective is the response for a given prompt?
Truthfulness: Is the response factually correct?
Coherence: Is the response logically structured and consiste...
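To give a first feel for the shape of the API, here is a minimal sketch that scores a single response with one of the built-in quality evaluators (the prompt and the evaluator choice are just for illustration):

```csharp
using System;
using System.Collections.Generic;
using System.Threading.Tasks;
using Microsoft.Extensions.AI;
using Microsoft.Extensions.AI.Evaluation;
using Microsoft.Extensions.AI.Evaluation.Quality;

static async Task CheckCoherenceAsync(IChatClient chatClient)
{
    // The evaluator itself calls an LLM to score the response
    var chatConfiguration = new ChatConfiguration(chatClient);

    var messages = new List<ChatMessage>
    {
        new(ChatRole.User, "Explain dependency injection in one paragraph.")
    };
    ChatResponse response = await chatClient.GetResponseAsync(messages);

    IEvaluator evaluator = new CoherenceEvaluator();
    EvaluationResult result =
        await evaluator.EvaluateAsync(messages, response, chatConfiguration);

    // Each evaluator reports one or more named metrics on the result
    NumericMetric coherence = result.Get<NumericMetric>(CoherenceEvaluator.CoherenceMetricName);
    Console.WriteLine($"Coherence score: {coherence.Value}");
}
```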