
Posts

Showing posts from August, 2025

Type Aliases in C#: Bringing F#-style readability to your C# code

As I like to program not only in C# but also in F#, there are some paradigms and features in F# that influence my C# coding style. One of those features is F# type abbreviations, which make complex type signatures more understandable and expressive. Since C# 12, you have a similar feature available in C#: type aliases using the using directive. Although this option has existed for some time, I don’t see many C# developers use it. I hope this blog post can help change that and increase adoption… What are Type Aliases? Type aliases allow you to create shorthand names for existing types, making your code more readable and self-documenting. Instead of repeatedly writing complex generic types or lengthy class names, you can define a meaningful alias that captures the intent of your data structure. The F# connection In F#, type abbreviations have been a feature for years: These abbreviations don't create new types—they're simply aliases that make the ...
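As a hedged sketch of the C# 12 feature described above (the domain names OrderId, Order, and CustomerTotals are hypothetical, not from the post), file-level using aliases can name tuples and generics:

```csharp
// C# 12 "alias any type": 'using' aliases may target tuples and
// closed generic types, not just namespaces and classes.
using System;
using System.Collections.Generic;
using OrderId = System.Guid;
using Order = (System.Guid Id, string Customer, decimal Total);
using CustomerTotals = System.Collections.Generic.Dictionary<string, decimal>;

class Program
{
    // The aliases keep the signature readable instead of spelling out
    // the full tuple and dictionary types everywhere.
    static CustomerTotals TotalPerCustomer(IEnumerable<Order> orders)
    {
        var totals = new CustomerTotals();
        foreach (var o in orders)
            totals[o.Customer] = totals.GetValueOrDefault(o.Customer) + o.Total;
        return totals;
    }

    static void Main()
    {
        OrderId id = Guid.NewGuid();
        var totals = TotalPerCustomer(new Order[]
        {
            (id, "Contoso", 10m),
            (id, "Contoso", 5m),
        });
        Console.WriteLine(totals["Contoso"]); // 15
    }
}
```

The aliases are purely compile-time names; like F# type abbreviations, they do not create new types.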

Refactor an Azure DevOps pipeline into multiple stages

While helping a team get their failing build back up and running, I noticed that they were using one big build pipeline that consisted of only one stage. This not only made the pipeline more difficult to understand but also made the build time very long and forced you to rerun the full build if one specific step failed. Let me walk you through several scenarios showing how we logically split this pipeline into multiple stages, but before I do that, here is the original YAML pipeline: In the example above we have a single stage containing all the steps. Time to refactor… Approach 1 - Build → Test → Publish Stages Description This is the most straightforward approach: Build Stage: NuGet tool installation Package restore Build all solutions Angular npm install and build Test Stage: Run unit tests (.NET) Run Angular tests Publish Stage: Publish all applications Publish build artifacts Advantages Clear separation of concerns: Each sta...
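A minimal sketch of Approach 1 (the stage/job names and the exact tasks are illustrative assumptions, not the team's actual pipeline):

```yaml
# Approach 1: Build -> Test -> Publish, each as its own stage.
stages:
- stage: Build
  jobs:
  - job: BuildJob
    steps:
    - task: NuGetToolInstaller@1
    - task: NuGetCommand@2
      inputs:
        command: restore
        restoreSolution: '**/*.sln'
    - script: dotnet build --configuration Release --no-restore
      displayName: Build solutions
- stage: Test
  dependsOn: Build       # runs only after Build succeeds
  jobs:
  - job: TestJob
    steps:
    - script: dotnet test --configuration Release
      displayName: Run unit tests
- stage: Publish
  dependsOn: Test        # a failing Test stage stops here, no full rerun needed
  jobs:
  - job: PublishJob
    steps:
    - task: PublishBuildArtifacts@1
      inputs:
        pathToPublish: '$(Build.ArtifactStagingDirectory)'
        artifactName: drop
```

Because each stage declares its dependency explicitly, a failure in Test lets you rerun from that stage instead of repeating the whole build.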

Supercharge your EF Core debugging with Query Tags

Debugging database queries in Entity Framework Core can sometimes feel like searching for a needle in a haystack. When your application generates dozens or hundreds of SQL queries, identifying which LINQ query produced which SQL statement becomes a real challenge. Fortunately, I discovered an elegant solution that EF Core provides: Query Tags. Query Tags Query Tags allow you to add custom comments to the SQL queries generated by your LINQ expressions. These comments appear directly in the generated SQL, making it incredibly easy to trace back from a SQL query to the specific code that created it. To use this feature you need to apply the TagWith method to any IQueryable and pass a descriptive comment: This generates SQL that looks like this: Instead of trying to reverse-engineer which code generated a particular SQL query, you can immediately see the purpose and origin of each query in your database logs or profiler. Advanced techniques Chaining multiple ta...
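A hedged sketch of TagWith in use (the Blog model, context variable, and tag text are hypothetical examples, not from the post):

```csharp
using Microsoft.EntityFrameworkCore;

// Hypothetical entity for illustration only.
public class Blog
{
    public int Id { get; set; }
    public bool IsActive { get; set; }
}

// Inside a query method, with 'context' an instance of your DbContext:
var activeBlogs = await context.Set<Blog>()
    .TagWith("GetActiveBlogs: dashboard widget") // emitted as a SQL comment
    .Where(b => b.IsActive)
    .ToListAsync();
```

The generated SQL then starts with the tag as a comment (roughly `-- GetActiveBlogs: dashboard widget` above the `SELECT`), so the query's origin is visible in any log or profiler.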

View your installed MCP servers in VSCode

Managing Model Context Protocol (MCP) servers in VS Code has become significantly easier with the dedicated management interface in the Extensions view. While you can configure MCP servers in multiple places throughout VS Code, the Extensions view provides a centralized, visual approach to monitoring and controlling all your available MCP servers. The challenge of multiple configuration points VS Code offers several ways to configure MCP servers, which provides flexibility but can also create complexity: Workspace settings via .vscode/mcp.json files for project-specific configurations User settings for global MCP server configurations across all workspaces Automatic discovery of servers defined in other tools like Claude Desktop Direct installation from the curated MCP server list Command-line configuration using the --add-mcp option And more… While this flexibility is powerful for developers working across different projects and environments, it ca...
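For reference, a minimal workspace-level configuration sketch (the server name and npm package are illustrative assumptions, not from the post) lives in `.vscode/mcp.json`:

```json
{
  "servers": {
    "filesystem": {
      "command": "npx",
      "args": [
        "-y",
        "@modelcontextprotocol/server-filesystem",
        "${workspaceFolder}"
      ]
    }
  }
}
```

Servers declared this way show up alongside user-level and auto-discovered servers in the Extensions view's MCP section.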

Azure Pipelines and NuGet Package Source Mapping–Not best friends (yet)

I think everyone has encountered the frustrating scenario where your .NET solution builds perfectly on your local machine but mysteriously fails in Azure Pipelines. Most of the time there is a mistake you made that is easy to fix, but sometimes it is the build tooling itself that causes the problem. And that is unfortunately exactly the case when using NuGet Package Source Mapping in Azure Pipelines. What is Package Source Mapping? Package Source Mapping in NuGet, introduced in version 6.0, is a security-enhancing feature that allows developers to explicitly define which package sources should be used for specific packages in a project. Traditionally, NuGet would scan all configured sources—public or private—to find and restore packages, which could pose risks by inadvertently pulling packages from untrusted locations. With Package Source Mapping, developers can centralize and control package restoration by specifying patterns that map packages to designated sources in the nuget.c...
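A hedged sketch of such a mapping in nuget.config (the feed name, organization, and package prefix are hypothetical examples):

```xml
<?xml version="1.0" encoding="utf-8"?>
<configuration>
  <packageSources>
    <clear />
    <add key="nuget.org" value="https://api.nuget.org/v3/index.json" />
    <add key="internal" value="https://pkgs.dev.azure.com/contoso/_packaging/internal/nuget/v3/index.json" />
  </packageSources>
  <packageSourceMapping>
    <!-- Company packages may only come from the internal feed... -->
    <packageSource key="internal">
      <package pattern="Contoso.*" />
    </packageSource>
    <!-- ...everything else is restored from nuget.org. -->
    <packageSource key="nuget.org">
      <package pattern="*" />
    </packageSource>
  </packageSourceMapping>
</configuration>
```

The most specific pattern wins, so `Contoso.*` packages never fall through to the public feed.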

Azure DevOps Pipelines: Fixing the old NuGet version problem

While working with Azure DevOps pipelines I encountered a frustrating NuGet package restore failure. The culprit behind these issues is Azure DevOps using an outdated version of NuGet by default, which leads to version conflicts and compatibility problems with modern .NET projects. The problem: Old NuGet versions cause conflicts When running NuGet restore tasks in Azure DevOps, you might encounter errors like these: ##[error]The nuget command failed with exit code(1) and error(NU1107: Version conflict detected for Castle.Core. Install/reference Castle.Core 3.1.0 directly to project SOFACore.PerformanceBenchmarks to resolve this issue. SOFACore.PerformanceBenchmarks -> Castle.Core.AsyncInterceptor 0.1.0 -> Castle.Core (>= 3.1.0) SOFACore.PerformanceBenchmarks -> SOFACore.NHibernate -> NHibernate 2.1.2.4000 -> Castle.DynamicProxy 2.1.0 -> Castle.Core (= 1.1.0). NU1107: Version conflict detected for xunit.v3.extensibility.core. Install/reference xunit.v3.ex...
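One common remedy (sketched here with an illustrative version spec, not taken from the post) is to pin a modern NuGet version with the NuGetToolInstaller task before any restore runs:

```yaml
steps:
# Install a recent NuGet version instead of the agent's outdated default.
- task: NuGetToolInstaller@1
  inputs:
    versionSpec: '6.x'

# Subsequent restore tasks in this job pick up the installed version.
- task: NuGetCommand@2
  inputs:
    command: restore
    restoreSolution: '**/*.sln'
```

Placing the installer as the first step ensures every later NuGet invocation in the job uses the pinned version.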

Understanding SERVICE_SID_TYPE_UNRESTRICTED in Azure DevOps Agent Configuration

When configuring self-hosted Azure DevOps agents on Windows, one often-overlooked setting can significantly improve security and resource access control: SERVICE_SID_TYPE_UNRESTRICTED. I first learned about this when configuring a new build agent on our build server. While going through the configuration steps, I got an extra question asking whether I wanted to enable SERVICE_SID_TYPE_UNRESTRICTED for the agent service. As I had no clue what this option meant, I decided to dive in and write a blog post about it. What is SERVICE_SID_TYPE_UNRESTRICTED ? Windows services can be assigned a Service SID (Security Identifier) to help manage access to resources. By default, Azure DevOps agents run with SERVICE_SID_TYPE_NONE , which means no service-specific SID is added to the process token. Setting the SID type to UNRESTRICTED adds a unique SID like NT SERVICE\vstsagent.{tenant}.{pool}.{agent} to the agent's process token. This allows you to: Grant access to local resourc...
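Once the service runs with its own SID, you can grant that SID access to local resources directly. A sketch (the service name `vstsagent.contoso.Default.AGENT01` and the folder are hypothetical; your SID follows the NT SERVICE\vstsagent.{tenant}.{pool}.{agent} pattern shown above):

```shell
:: Grant the agent's service SID read access to a local folder,
:: without giving the same rights to every service on the machine.
icacls "C:\BuildTools" /grant "NT SERVICE\vstsagent.contoso.Default.AGENT01:(OI)(CI)R"
```

This scopes the permission to the one agent service rather than to a shared service account.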

Azure DevOps Server–Agent download location has changed

It seems that I’m a little behind on what has changed in Azure DevOps Server recently. While helping my team migrate our existing build servers to some new infrastructure, we encountered a problem when trying to download the agent binaries. When trying to download the agent binaries (available through Collection Settings –> Agent pools –> Default pool –> Agents –> New agent ), it failed with a 404. The URL used was: https://vstsagentpackage.azureedge.net/agent/3.238.0/vsts-agent-win-x64-3.238.0.zip Now if you, unlike me, have followed all Azure DevOps related announcements, you didn’t miss this one: The current content delivery network (CDN) provider Edgio, used by Azure DevOps is retiring. We’re urgently transitioning to a solution served by Akamai and Azure Front Door CDNs to maintain the responsiveness of our services. Whoops! I certainly missed that one… The new URL is this: https://download.agent.dev.azur...

Keeping your Azure DevOps Agents clean: A guide to maintenance jobs

If you've ever managed self-hosted agents in Azure DevOps, you know how quickly disk space can vanish. Between build artifacts, source code, and temporary files, agents can become cluttered fast. That’s where maintenance jobs come in—a built-in feature designed to keep your agents tidy and your pipelines running smoothly. What are Maintenance Jobs? Maintenance jobs are automated tasks that run on your agents to clean up unused working directories and repositories. These jobs help: Free up disk space by removing stale pipeline data Improve agent performance and reliability Reduce manual cleanup efforts You can configure how often these jobs run and how many days of unused data to retain. How do they work? Maintenance jobs operate within agent pools . Each agent pool can be configured to run maintenance jobs on a schedule. These jobs target: Working directories (e.g., C:\agent\work\{id} ) Repository caches ...

Microsoft.Extensions.AI–Part VII–MCP integration

Our journey continues, as we keep finding new features to explore in the Microsoft.Extensions.AI library. Today we have a look at the support for MCP. This post is part of a blog series. Other posts so far: Part I – An introduction to Microsoft.Extensions.AI Part II – ASP.NET Core integration Part III – Tool calling Part IV – Telemetry integration Part V – Chat history Part VI – Structured output Part VII – MCP integration (this post) What is MCP? I know, I know, you must have been living under a rock if you have never heard about MCP before. But just in case: MCP (Model Context Protocol) is an open protocol developed by Anthropic that provides a standardized way to connect AI models to different data sources and tools. This allows us to use tool calling without having to build our own plugins (as I demonstrated in Part III of this blog series). Using MCP with Microsoft.Extensions.AI The first thing you need is an MCP server. Today there...
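A hedged sketch of wiring an MCP server into tool calling, assuming the ModelContextProtocol C# SDK; the server command, package name, and the pre-configured `chatClient` are illustrative assumptions:

```csharp
using Microsoft.Extensions.AI;
using ModelContextProtocol.Client;

// Start an MCP server as a child process over stdio
// (the 'everything' sample server is just an example).
var mcpClient = await McpClientFactory.CreateAsync(
    new StdioClientTransport(new StdioClientTransportOptions
    {
        Name = "everything",
        Command = "npx",
        Arguments = ["-y", "@modelcontextprotocol/server-everything"],
    }));

// MCP tools are exposed as AIFunction instances, so they plug
// straight into Microsoft.Extensions.AI tool calling.
IList<McpClientTool> tools = await mcpClient.ListToolsAsync();

// 'chatClient' is an IChatClient configured elsewhere (with UseFunctionInvocation).
var response = await chatClient.GetResponseAsync(
    "Which tools do you have available?",
    new ChatOptions { Tools = [.. tools] });
```

Because the tools come from the server rather than hand-written plugins, swapping MCP servers requires no changes to the tool-calling code.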

Microsoft.Extensions.AI–Part VI–Structured Output

Still not at the end of our journey, as we keep finding new features to explore in the Microsoft.Extensions.AI library. Today we have a look at the support for Structured Output. This post is part of a blog series. Other posts so far: Part I – An introduction to Microsoft.Extensions.AI Part II – ASP.NET Core integration Part III –Tool calling Part IV – Telemetry integration Part V – Chat history Part VI – Structured output (this post) What is structured output? By default, the LLM replies in free form text. This is great during chat conversations but not so great if you want to use the LLM response in a programmatic context. By using structured output, you can specify a JSON schema that describes the exact output the LLM should return. Using structured output with Microsoft.Extensions.AI To use structured output with Microsoft.Extensions.AI you have specific methods available in the ChatClientStructuredOutputExtensions class. By passing a generi...
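A hedged sketch of the generic overload in action (the PersonInfo record, its properties, and the prompt are hypothetical; `chatClient` is an IChatClient configured elsewhere):

```csharp
using System;
using Microsoft.Extensions.AI;

// Hypothetical shape we want the LLM to return.
record PersonInfo(string Name, int Age, string Occupation);

// GetResponseAsync<T> (from ChatClientStructuredOutputExtensions) derives a
// JSON schema from PersonInfo and instructs the model to conform to it.
var response = await chatClient.GetResponseAsync<PersonInfo>(
    "Extract the person details from: 'Ada Lovelace, 36, mathematician.'");

if (response.TryGetResult(out var person))
{
    Console.WriteLine($"{person.Name} ({person.Age}): {person.Occupation}");
}
```

The reply is deserialized into the record for you, so no free-form text parsing is needed on the caller's side.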

Microsoft.Extensions.AI –Part V–Chat history

We continue our journey through the Microsoft.Extensions.AI library. Another basic feature that you will certainly need when building your own AI agents is a way to keep track of your chat history. This is useful as it allows the LLM to build up a context based on the interactions that already took place. This post is part of a blog series. Other posts so far: Part I – An introduction to Microsoft.Extensions.AI Part II – ASP.NET Core integration Part III –Tool calling Part IV – Telemetry integration Part V – Chat history (this post) Chat history The basics of maintaining a history are simple. You need to build up a list of previously exchanged chat messages: Remark: Notice the different roles we can link to the message so the LLM knows who provided what information. Once we have that list, we pass it along when calling the LLM instead of only our specific input: The AI service can now use this information during our interactions: Stateless vs state...
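The pattern can be sketched as follows (the message texts are illustrative, and `chatClient` is an IChatClient configured elsewhere):

```csharp
using System.Collections.Generic;
using Microsoft.Extensions.AI;

// Build up the conversation; the role tells the model who said what.
List<ChatMessage> history =
[
    new(ChatRole.System, "You are a concise assistant."),
    new(ChatRole.User, "What is Microsoft.Extensions.AI?"),
];

// Pass the whole history instead of just the latest prompt...
ChatResponse response = await chatClient.GetResponseAsync(history);

// ...then append the assistant's reply and the next question,
// so the following call carries the full context.
history.AddRange(response.Messages);
history.Add(new(ChatRole.User, "Can you give a short example?"));
response = await chatClient.GetResponseAsync(history);
```

Because IChatClient is stateless by default, the caller owns this list and resends it on every call.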

Microsoft.Extensions.AI–Part IV–Telemetry integration

Back from holiday with my batteries charged to 100%. Time to continue our journey in the Microsoft.Extensions.AI library. Today we have a look at (Open)Telemetry integration. This post is part of a blog series. Other posts so far: Part I – An introduction to Microsoft.Extensions.AI Part II – ASP.NET Core integration Part III –Tool calling Part IV – Telemetry integration (this post) Sooner or later you’ll arrive at a moment where you want to better understand what is going on in the interaction between your chat client and the LLM. That is the moment you want to integrate telemetry in your application. In the Microsoft.Extensions.AI library, this can be done through the OpenTelemetryChatClient . You can plug this client in by calling the UseOpenTelemetry method on the ChatClientBuilder : If we now run our application and take a look at the OpenTelemetry data in our Aspire dashboard, we get a lot of useful information on what is going on behind the scenes: ...
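A hedged sketch of the wiring (the source name is an illustrative assumption, and `innerClient` stands for whatever IChatClient you already have, e.g. an OpenAI or Ollama client):

```csharp
using Microsoft.Extensions.AI;

// Decorate an existing IChatClient with the OpenTelemetry pipeline step.
IChatClient chatClient = new ChatClientBuilder(innerClient)
    .UseOpenTelemetry(
        sourceName: "MyApp.Chat",
        configure: o => o.EnableSensitiveData = true) // opt-in: also record prompts/responses
    .Build();
```

The decorator emits activities following the OpenTelemetry semantic conventions for generative AI, which is what shows up in the Aspire dashboard.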