
Posts

Showing posts from 2025

Type Aliases in C#: Bringing F#-style readability to your C# code

As I like to program not only in C# but also F#, there are some paradigms and features in F# that influence my C# coding style. One of those features is F# type abbreviations, which make complex type signatures more understandable and expressive. Since C# 12, a similar feature has been available in C#: type aliases using the using directive. But although this option has existed for some time, I don’t see many C# developers using it. I hope that this blog post can help change that and increase adoption… What are Type Aliases? Type aliases allow you to create shorthand names for existing types, making your code more readable and self-documenting. Instead of repeatedly writing complex generic types or lengthy class names, you can define a meaningful alias that captures the intent of your data structure. The F# connection In F#, type abbreviations have been a feature for years: These abbreviations don't create new types—they're simply aliases that make the ...
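As a minimal sketch of what such aliases can look like (the domain names OrderId, PriceLookup and Point below are illustrative, not from the post):

```csharp
// C# 12 'using' aliases: shorthand names for existing types.
using OrderId = System.Guid;
using PriceLookup = System.Collections.Generic.Dictionary<string, decimal>;
using Point = (int X, int Y); // aliasing a tuple type requires C# 12

public class OrderService
{
    public Point Origin => (0, 0);

    // The alias can be used anywhere the underlying type can,
    // including for static member access.
    public OrderId NewOrderId() => OrderId.NewGuid();

    public decimal? GetPrice(PriceLookup prices, string sku) =>
        prices.TryGetValue(sku, out var price) ? price : null;
}
```

The aliases are purely compile-time names: the IL still refers to Guid, Dictionary and the tuple type underneath.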

Refactor an Azure DevOps pipeline into multiple stages

While helping a team get their failing build back up and running, I noticed that they were using one big build pipeline that consisted of only one stage. This not only made the pipeline more difficult to understand, but also made the build time very long and forced them to rerun the full build if one specific step failed. Let me walk you through several scenarios of how we logically split this pipeline into multiple stages, but before I do that, here is the original YAML pipeline: In the example above we have a single stage containing all the steps. Time to refactor… Approach 1 - Build → Test → Publish Stages Description This is the most straightforward approach: Build Stage: NuGet tool installation, package restore, build all solutions, Angular npm install and build. Test Stage: run unit tests (.NET), run Angular tests. Publish Stage: publish all applications, publish build artifacts. Advantages Clear separation of concerns: Each sta...
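The Build → Test → Publish split described above can be sketched in Azure Pipelines YAML roughly as follows (job names and concrete steps are illustrative, not the team's actual pipeline):

```yaml
# Sketch of a three-stage pipeline; each stage depends on the previous one,
# so a failing test stage no longer forces a full rebuild from scratch.
stages:
- stage: Build
  jobs:
  - job: BuildJob
    steps:
    - task: NuGetToolInstaller@1
    - task: NuGetCommand@2
      inputs:
        command: 'restore'
    - script: dotnet build --configuration Release

- stage: Test
  dependsOn: Build
  jobs:
  - job: TestJob
    steps:
    - script: dotnet test --configuration Release

- stage: Publish
  dependsOn: Test
  jobs:
  - job: PublishJob
    steps:
    - task: PublishBuildArtifacts@1
```

Because stages are separate units, Azure DevOps lets you rerun only the failed stage instead of the whole pipeline.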

Supercharge your EF Core debugging with Query Tags

Debugging database queries in Entity Framework Core can sometimes feel like searching for a needle in a haystack. When your application generates dozens or hundreds of SQL queries, identifying which LINQ query produced which SQL statement becomes a real challenge. Fortunately, I discovered an elegant solution that EF Core provides: Query Tags . Query Tags Query Tags allow you to add custom comments to the SQL queries generated by your LINQ expressions. These comments appear directly in the generated SQL, making it incredibly easy to trace back from a SQL query to the specific code that created it. To use this feature you apply the TagWith method on any IQueryable and pass a descriptive comment: This generates SQL that looks like this: Instead of trying to reverse-engineer which code generated a particular SQL query, you can immediately see the purpose and origin of each query in your database logs or profiler. Advanced techniques Chaining multiple ta...
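A minimal sketch of TagWith in action (the context and Blogs entity are illustrative):

```csharp
// Tagging a LINQ query so a comment shows up in the generated SQL.
using Microsoft.EntityFrameworkCore;

var blogs = await context.Blogs
    .TagWith("Get active blogs for the dashboard overview")
    .Where(b => b.IsActive)
    .ToListAsync();

// The generated SQL starts with the tag as a comment, along the lines of:
// -- Get active blogs for the dashboard overview
// SELECT ... FROM [Blogs] AS [b] WHERE [b].[IsActive] = ...
```

The tag survives all the way into database logs and profilers, which is exactly what makes tracing back to the source code trivial.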

View your installed MCP servers in VSCode

Managing Model Context Protocol (MCP) servers in VS Code has become significantly easier with the dedicated management interface in the Extensions view. While you can configure MCP servers in multiple places throughout VS Code, the Extension tab provides a centralized, visual approach to monitoring and controlling all your available MCP servers. The challenge of multiple configuration points VS Code offers several ways to configure MCP servers, which provides flexibility but can also create complexity: Workspace settings via .vscode/mcp.json files for project-specific configurations User settings for global MCP server configurations across all workspaces Automatic discovery of servers defined in other tools like Claude Desktop Direct installation from the curated MCP server list Command-line configuration using the --add-mcp option And more… While this flexibility is powerful for developers working across different projects and environments, it ca...
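For reference, a workspace-level configuration sketch (the server name and command below are illustrative; the exact schema may evolve with VS Code releases):

```json
{
  "servers": {
    "fetch": {
      "command": "uvx",
      "args": ["mcp-server-fetch"]
    }
  }
}
```

Saved as `.vscode/mcp.json`, this makes the server available only in that workspace; the Extensions view then shows it alongside servers configured in user settings or discovered from other tools.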

Azure Pipelines and NuGet Package Source Mapping–Not best friends (yet)

I think everyone has encountered the frustrating scenario where your .NET solution builds perfectly on your local machine but mysteriously fails in Azure Pipelines. Most of the time there is a mistake you made that is easy to fix, but sometimes it is the build tooling itself that causes the problem. And that is unfortunately exactly the case when using NuGet Package Source Mapping in Azure Pipelines. What is Package Source Mapping? Package Source Mapping in NuGet, introduced in version 6.0, is a security-enhancing feature that allows developers to explicitly define which package sources should be used for specific packages in a project. Traditionally, NuGet would scan all configured sources—public or private—to find and restore packages, which could pose risks by inadvertently pulling packages from untrusted locations. With Package Source Mapping, developers can centralize and control package restoration by specifying patterns that map packages to designated sources in the nuget.c...
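A sketch of what such a mapping looks like in nuget.config (the internal feed name and the MyCompany.* pattern are illustrative; `{organization}` and `{feed}` are placeholders for your own Azure Artifacts values):

```xml
<configuration>
  <packageSources>
    <clear />
    <add key="nuget.org" value="https://api.nuget.org/v3/index.json" />
    <add key="internal-feed"
         value="https://pkgs.dev.azure.com/{organization}/_packaging/{feed}/nuget/v3/index.json" />
  </packageSources>
  <packageSourceMapping>
    <!-- Packages matching the most specific pattern win. -->
    <packageSource key="internal-feed">
      <package pattern="MyCompany.*" />
    </packageSource>
    <packageSource key="nuget.org">
      <package pattern="*" />
    </packageSource>
  </packageSourceMapping>
</configuration>
```

With this in place, restore will only ever fetch MyCompany.* packages from the internal feed, and everything else from nuget.org.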

Azure DevOps Pipelines: Fixing the old NuGet version problem

While working with Azure DevOps pipelines I encountered a frustrating NuGet package restore failure. The culprit behind these issues is Azure DevOps using an outdated version of NuGet by default, which leads to version conflicts and compatibility problems with modern .NET projects. The problem: Old NuGet versions cause conflicts When running NuGet restore tasks in Azure DevOps, you might encounter errors like these: ##[error]The nuget command failed with exit code(1) and error(NU1107: Version conflict detected for Castle.Core. Install/reference Castle.Core 3.1.0 directly to project SOFACore.PerformanceBenchmarks to resolve this issue. SOFACore.PerformanceBenchmarks -> Castle.Core.AsyncInterceptor 0.1.0 -> Castle.Core (>= 3.1.0) SOFACore.PerformanceBenchmarks -> SOFACore.NHibernate -> NHibernate 2.1.2.4000 -> Castle.DynamicProxy 2.1.0 -> Castle.Core (= 1.1.0). NU1107: Version conflict detected for xunit.v3.extensibility.core. Install/reference xunit.v3.ex...
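One common way to avoid the outdated default is to pin a recent NuGet version explicitly before the restore step (a sketch; the version spec and solution glob are illustrative):

```yaml
# Install a modern NuGet version instead of relying on the agent's old default.
steps:
- task: NuGetToolInstaller@1
  inputs:
    versionSpec: '6.x'
- task: NuGetCommand@2
  inputs:
    command: 'restore'
    restoreSolution: '**/*.sln'
```

The NuGetToolInstaller task downloads the requested version and puts it on the PATH, so every subsequent NuGet task in the job uses it.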

Understanding SERVICE_SID_TYPE_UNRESTRICTED in Azure DevOps Agent Configuration

When configuring self-hosted Azure DevOps agents on Windows, one often-overlooked setting can significantly improve security and resource access control: SERVICE_SID_TYPE_UNRESTRICTED. I first learned about this when configuring a new build agent on our build server. While going through the configuration steps, I was asked an extra question about whether I wanted to enable SERVICE_SID_TYPE_UNRESTRICTED for the agent service. As I had no clue what this option meant, I decided to dive in and write a blog post about it. What is SERVICE_SID_TYPE_UNRESTRICTED ? Windows services can be assigned a Service SID (Security Identifier) to help manage access to resources. By default, Azure DevOps agents run with SERVICE_SID_TYPE_NONE , which means no service-specific SID is added to the process token. Setting the SID type to UNRESTRICTED adds a unique SID like NT SERVICE\vstsagent.{tenant}.{pool}.{agent} to the agent's process token. This allows you to: Grant access to local resourc...

Azure DevOps Server–Agent download location has changed

It seems that I’m a little behind on what has changed in Azure DevOps Server recently. While helping my team migrate our existing build servers to some new infrastructure, we encountered a problem when trying to download the agent binaries. When trying to download the agent binaries (available through Collection Settings –> Agent pools –> Default pool –> Agents –> New agent ), it failed with a 404. The URL used was: https://vstsagentpackage.azureedge.net/agent/3.238.0/vsts-agent-win-x64-3.238.0.zip Now if you are unlike me, you have probably followed all Azure DevOps related announcements, so you didn’t miss this one mentioning: The current content delivery network (CDN) provider Edgio, used by Azure DevOps is retiring. We’re urgently transitioning to a solution served by Akamai and Azure Front Door CDNs to maintain the responsiveness of our services. Whoops! I certainly missed that one… The new URL is this: https://download.agent.dev.azur...

Keeping your Azure DevOps Agents clean: A guide to maintenance jobs

If you've ever managed self-hosted agents in Azure DevOps, you know how quickly disk space can vanish. Between build artifacts, source code, and temporary files, agents can become cluttered fast. That’s where maintenance jobs come in—a built-in feature designed to keep your agents tidy and your pipelines running smoothly. What are Maintenance Jobs? Maintenance jobs are automated tasks that run on your agents to clean up unused working directories and repositories. These jobs help: Free up disk space by removing stale pipeline data Improve agent performance and reliability Reduce manual cleanup efforts You can configure how often these jobs run and how many days of unused data to retain. How do they work? Maintenance jobs operate within agent pools . Each agent pool can be configured to run maintenance jobs on a schedule. These jobs target: Working directories (e.g., C:\agent\work\{id} ) Repository caches ...

Microsoft.Extensions.AI–Part VII–MCP integration

Our journey continues, as we keep finding new features to explore in the Microsoft.Extensions.AI library. Today we have a look at the support for MCP. This post is part of a blog series. Other posts so far: Part I – An introduction to Microsoft.Extensions.AI Part II – ASP.NET Core integration Part III – Tool calling Part IV – Telemetry integration Part V – Chat history Part VI – Structured output Part VII – MCP integration (this post) What is MCP? I know, I know, you must have been living under a rock if you have never heard of MCP before. But just in case: MCP (Model Context Protocol) is an open protocol developed by Anthropic that provides a standardized way to connect AI models to different data sources and tools. This allows us to use tool calling without having to build our own plugins (as I demonstrated in Part III of this blog series). Using MCP with Microsoft.Extensions.AI The first thing you need is an MCP server. Today there...

Microsoft.Extensions.AI–Part VI–Structured Output

Still not at the end of our journey, as we keep finding new features to explore in the Microsoft.Extensions.AI library. Today we have a look at the support for Structured Output. This post is part of a blog series. Other posts so far: Part I – An introduction to Microsoft.Extensions.AI Part II – ASP.NET Core integration Part III –Tool calling Part IV – Telemetry integration Part V – Chat history Part VI – Structured output (this post) What is structured output? By default, the LLM replies in free form text. This is great during chat conversations but not so great if you want to use the LLM response in a programmatic context. By using structured output, you can specify a JSON schema that describes the exact output the LLM should return. Using structured output with Microsoft.Extensions.AI To use structured output with Microsoft.Extensions.AI you have specific methods available in the ChatClientStructuredOutputExtensions class. By passing a generi...
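As a sketch of what this looks like (assuming an existing IChatClient named chatClient; the MovieReview type is illustrative):

```csharp
// Structured output via the generic GetResponseAsync<T> extension:
// the library derives a JSON schema from the type and instructs the
// model to reply in that shape.
using Microsoft.Extensions.AI;

var response = await chatClient.GetResponseAsync<MovieReview>(
    "Review the movie 'Inception' and score it from 1 to 10.");

// The typed result is deserialized from the JSON the LLM returned.
MovieReview review = response.Result;
Console.WriteLine($"{review.Title}: {review.Score}/10");

record MovieReview(string Title, int Score, string Summary);
```

Instead of parsing free-form text, you get a strongly typed object you can use directly in your application logic.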

Microsoft.Extensions.AI –Part V–Chat history

We continue our journey through the Microsoft.Extensions.AI library. Another basic feature that you will certainly need when building your own AI agents is a way to keep track of your chat history. This is useful as it allows the LLM to build up a context based on the interactions that already took place. This post is part of a blog series. Other posts so far: Part I – An introduction to Microsoft.Extensions.AI Part II – ASP.NET Core integration Part III –Tool calling Part IV – Telemetry integration Part V – Chat history (this post) Chat history The basics of maintaining a history are simple. You need to build up a list of previously exchanged chat messages: Remark: Notice the different roles we can link to the message so the LLM knows who provided what information. Once we have that list, we pass it along when calling the LLM instead of only our specific input: The AI service can now use this information during our interactions: Stateless vs state...
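A minimal sketch of that pattern (assuming an existing IChatClient named chatClient and a recent Microsoft.Extensions.AI version):

```csharp
// Keep the conversation as a list of ChatMessage objects; the role tells
// the LLM who said what.
using Microsoft.Extensions.AI;

List<ChatMessage> history =
[
    new(ChatRole.System, "You are a helpful assistant."),
    new(ChatRole.User, "What is the capital of Belgium?"),
];

// Pass the whole history instead of only the latest input.
ChatResponse response = await chatClient.GetResponseAsync(history);

// Append the assistant's reply so the next call has the full context.
history.AddRange(response.Messages);
history.Add(new(ChatRole.User, "And what is its population?"));
```

Because the full list is sent on every call, the model can resolve "its" in the follow-up question to Brussels from the earlier exchange.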

Microsoft.Extensions.AI–Part IV–Telemetry integration

Back from holiday with my batteries charged 100%. Time to continue our journey in the Microsoft.Extensions.AI library. Today we have a look at (Open)Telemetry integration. This post is part of a blog series. Other posts so far: Part I – An introduction to Microsoft.Extensions.AI Part II – ASP.NET Core integration Part III –Tool calling Part IV – Telemetry integration (this post) Sooner or later you’ll arrive at a moment where you want to better understand what is going on in the interaction between your chat client and the LLM. That is the moment you want to integrate telemetry in your application. In the Microsoft.Extensions.AI library, this can be done through the OpenTelemetryChatClient . You can plug this client in by calling the UseOpenTelemetry method on the ChatClientBuilder : If we now run our application and take a look at the OpenTelemetry data in our Aspire dashboard, we get a lot of useful information on what is going on behind the scenes: ...
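As a sketch (assuming an existing IChatClient named innerClient; the source name is illustrative):

```csharp
// Wrap the chat client with the OpenTelemetry decorator so every
// LLM interaction emits activities under the given source name.
using Microsoft.Extensions.AI;

IChatClient chatClient = new ChatClientBuilder(innerClient)
    .UseOpenTelemetry(
        sourceName: "MyApp.AI",
        // Opt in to recording prompts/responses; only do this where
        // the telemetry backend is trusted with that data.
        configure: o => o.EnableSensitiveData = true)
    .Build();
```

Register the same source name with your OpenTelemetry tracer provider and the chat activities show up next to your other traces.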

Microsoft.Extensions.AI–Part III–Tool calling

I'm on a journey discovering what is possible with the Microsoft.Extensions.AI library and you are free to join. Yesterday I looked at how to integrate the library in an ASP.NET Core application. Today I want to dive into a specific feature: tool calling. This post is part of a blog series. Other posts so far: Part I – An introduction to Microsoft.Extensions.AI Part II – ASP.NET Core integration Part III –Tool calling (this post) What is tool calling? With tool calling you are providing your LLM with a set of tools (typically .NET methods) that it can call. This allows your LLM to interact with the outside world in a controlled way. In Semantic Kernel these tools were called ‘plugins’, but the concept is the same. To be 100% correct: it is not the LLM itself that calls these tools; the model can only request to invoke a tool with specific arguments (for example a weather tool with the location as a parameter). It is up to the client to invoke the tool and pa...
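A minimal sketch of this flow (assuming an existing IChatClient named innerClient; the weather function is illustrative):

```csharp
// Tool calling: expose a .NET method to the model and let the
// function-invocation decorator execute it when the model asks.
using System.ComponentModel;
using Microsoft.Extensions.AI;

[Description("Gets the current weather for a given location")]
static string GetWeather(string location) => $"It is sunny in {location}.";

IChatClient client = new ChatClientBuilder(innerClient)
    .UseFunctionInvocation() // the client, not the LLM, invokes the tool
    .Build();

var options = new ChatOptions
{
    Tools = [AIFunctionFactory.Create(GetWeather)]
};

var response = await client.GetResponseAsync(
    "What's the weather in Brussels?", options);
```

The Description attribute and the parameter names are what the model sees when deciding whether and how to call the tool.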

Microsoft.Extensions.AI–Part II - ASP.NET Core Integration

Last week I finally started my journey with Microsoft.Extensions.AI after having used only Semantic Kernel for all my agentic AI workflows. I started with a short introduction on what Microsoft.Extensions.AI is and we created our first 'Hello AI' demo combining Microsoft.Extensions.AI and AI Foundry Local. This post is part of a blog series. Other posts so far: Part I – An introduction to Microsoft.Extensions.AI Part II – ASP.NET Core integration (this post) Most of the time you will not have your AI workloads running in a console application but integrated in an ASP.NET Core app, so that is exactly what we are trying to achieve today. Integrating Microsoft.Extensions.AI in ASP.NET Core We’ll start simple: we want to show a Razor page where we can enter some text and let the LLM respond. The important part is that the results are streamed to the frontend. Start by creating a new ASP.NET Core application. Use the Razor pages template in Visual Studio: We up...

GitHub Copilot–We still need the human in the loop

I picked up a bug today where we got a NullReferenceException . I thought this was a good scenario where I could ask GitHub Copilot to find and fix the issue for me. Here is the original code containing the issue: I asked Copilot to investigate and fix the issue using the /fix slash command; /fix This code returns a NullReferenceException in some situations. Can you investigate an issue and suggest a solution? GitHub Copilot was successful in identifying the root cause of the problem. I was passing a ConnectionName using a different casing as the key found in the dictionary (e.g. Northwind vs northwind ). That’s good. However then I noticed the solution it suggested: Although that is a workable solution that certainly fixes the issue, it is not the simplest nor the most performant one. I undid the changes done by Copilot and updated the Dictionary construction instead: The human in the loop is still required... More information Tips & Tricks for Git...
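The construction-time fix is likely along these lines (a sketch; the dictionary name and contents are illustrative, not the post's actual code):

```csharp
// Make key lookups case-insensitive once, at construction time, instead
// of normalizing casing at every call site.
var connectionStrings = new Dictionary<string, string>(StringComparer.OrdinalIgnoreCase)
{
    ["Northwind"] = "<connection string here>"
};

// Both casings now resolve to the same entry:
bool found1 = connectionStrings.ContainsKey("Northwind"); // true
bool found2 = connectionStrings.ContainsKey("northwind"); // true
```

Passing the comparer to the constructor keeps lookups O(1) with no per-call string transformation, which is what makes it both simpler and faster than normalizing keys everywhere.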

Start your own coding adventure with GitHub Copilot

Imagine learning programming concepts not through dry textbooks or boring exercises, but by embarking on epic quests in mystical realms. Does that sound appealing to you? Then join Copilot Adventures , Microsoft's innovative approach to coding education that transforms programming practice into an engaging, story-driven experience. What is Copilot Adventures? Copilot Adventures is an open-source educational project that combines the power of GitHub Copilot with immersive storytelling to teach programming concepts. Instead of solving abstract problems, you work through coding challenges embedded in rich fantasy narratives—from mechanical clockwork towns to enchanted forests where mystical creatures perform sacred dances. The project leverages GitHub Copilot, Microsoft's AI-powered coding assistant, to help learners write code while exploring these fictional worlds. It's essentially a "choose your own adventure" for programmers, where each story presen...

An introduction to Microsoft.Extensions.AI–Part I

Last year, when the AI hype really exploded, the 'go to' library from Microsoft to build AI solutions in .NET was Semantic Kernel. So, although it was still in preview at the time, I started using Semantic Kernel and never looked back. Later, Microsoft introduced Microsoft.Extensions.AI, but I never had the time to take a good look at it. Now I finally found some time to explore it further. My goal is to write a few posts in which I recreate an application that I originally created in Semantic Kernel to see how far we can get. But that will be mainly for the upcoming posts. In this post we focus on the basics to get started. What is Microsoft.Extensions.AI? Microsoft.Extensions.AI libraries provide a unified approach for representing generative AI components and enable seamless integration and interoperability with various AI services. Think of it as the dependency injection and logging abstractions you already know and love but specifically designed for AI services. ...

The one question that transforms every coaching session

Coaching can be as challenging as it is rewarding. While reading the 'How to be a more effective coach?' post by JD Meier, I came across one 'bonus' question he shared that really created a breakthrough in how I tackle these coaching conversations. What would make this conversation wildly valuable for you today? This one question makes all the difference as it shifts the focus from you to them , immediately. Why this question works so well It transfers ownership immediately The moment you ask this question, something profound happens. The conversation stops being about your agenda and becomes entirely about theirs. You're not trying to fix, advise, or direct. Instead, you're creating a container for their most important work to emerge. This transfer of ownership is crucial because: People are more invested in solutions they help create People often know what they need better than we do It honors their autonomy and expertise in their own lives ...

Getting started with AI development in .NET

Getting started in the world of AI development can be a challenge. Every day new libraries, models and possibilities appear. So what is your first step, and where can you find good examples of how to tackle different problems? This is where Microsoft wants to help through the AI Dev Gallery. The AI Dev Gallery is an open-source app designed to help Windows developers integrate AI capabilities within their own apps and projects. The app contains the following: Over 25 interactive samples powered by local AI models Easily explore, download, and run models from Hugging Face and GitHub The ability to view the C# source code and simply export a standalone Visual Studio project for each sample You can download the AI Dev Gallery directly or through the Windows App Store: A walkthrough Let me walk you through some of the features in the AI Dev Gallery application. After opening the app you arrive on the Home page where you have a carousel of different use cases: ...

GitHub Copilot walkthrough

Although GitHub Copilot has now been available for some time in VSCode, Visual Studio and almost every other popular IDE, for a lot of people it still feels new and unfamiliar. If you are one of these people I have some good news for you: the Visual Studio team has you covered, because a GitHub Copilot walkthrough was added to Visual Studio. This walkthrough is an interactive guide that helps you understand and use GitHub Copilot’s features step-by-step. To activate the walkthrough, click on the GitHub Copilot icon in the top right corner and choose GitHub Copilot Walkthrough from the context menu: This will give you a general introduction on what GitHub Copilot has to offer: To be honest I am a little bit disappointed in what this walkthrough shows. My hope was that it would walk you through a set of scenarios, showing how GitHub Copilot can help in each of these cases. Maybe something for a next Visual Studio release? More information Agent mode for every developer...