
Posts

Microsoft.Extensions.AI–Part VII–MCP integration

Our journey continues, as we keep finding new features to explore in the Microsoft.Extensions.AI library. Today we have a look at the support for MCP. This post is part of a blog series. Other posts so far: Part I – An introduction to Microsoft.Extensions.AI Part II – ASP.NET Core integration Part III – Tool calling Part IV – Telemetry integration Part V – Chat history Part VI – Structured output Part VII – MCP integration (this post) What is MCP? I know, I know, you must have been living under a rock if you have never heard of MCP before. But just in case: MCP (Model Context Protocol) is an open protocol developed by Anthropic that provides a standardized way to connect AI models to different data sources and tools. This allows us to use tool calling without having to build our own plugins (as I demonstrated in Part III of this blog series). Using MCP with Microsoft.Extensions.AI The first thing you need is an MCP server. Today there...
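A minimal sketch of where this is heading, assuming the preview ModelContextProtocol NuGet package and a recent Microsoft.Extensions.AI (the server command, model name, and local Ollama client below are placeholders, and exact APIs may differ between preview versions):

```csharp
using Microsoft.Extensions.AI;
using ModelContextProtocol.Client;

// Connect to an MCP server over stdio (placeholder: the sample "everything" server).
var transport = new StdioClientTransport(new StdioClientTransportOptions
{
    Name = "Everything",
    Command = "npx",
    Arguments = ["-y", "@modelcontextprotocol/server-everything"],
});

await using var mcpClient = await McpClientFactory.CreateAsync(transport);

// MCP tools derive from AIFunction, so they slot straight into ChatOptions.Tools.
var tools = await mcpClient.ListToolsAsync();

// Placeholder chat client; any IChatClient implementation works here.
IChatClient chatClient = new OllamaChatClient(new Uri("http://localhost:11434"), "llama3.1")
    .AsBuilder()
    .UseFunctionInvocation() // let the pipeline execute requested tool calls
    .Build();

var response = await chatClient.GetResponseAsync(
    "What tools do you have available?",
    new ChatOptions { Tools = [.. tools] });

Console.WriteLine(response.Text);
```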
Recent posts

Microsoft.Extensions.AI–Part VI–Structured Output

Still not at the end of our journey, as we keep finding new features to explore in the Microsoft.Extensions.AI library. Today we have a look at the support for Structured Output. This post is part of a blog series. Other posts so far: Part I – An introduction to Microsoft.Extensions.AI Part II – ASP.NET Core integration Part III – Tool calling Part IV – Telemetry integration Part V – Chat history Part VI – Structured output (this post) What is structured output? By default, the LLM replies in free-form text. This is great during chat conversations but not so great if you want to use the LLM response in a programmatic context. By using structured output, you can specify a JSON schema that describes the exact output the LLM should return. Using structured output with Microsoft.Extensions.AI To use structured output with Microsoft.Extensions.AI, you have specific methods available in the ChatClientStructuredOutputExtensions class. By passing a generi...
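A rough sketch of what that looks like, assuming a recent Microsoft.Extensions.AI (earlier previews named the generic overload CompleteAsync<T>); the record shape, prompt, and local Ollama client are illustrative:

```csharp
using Microsoft.Extensions.AI;

IChatClient chatClient =
    new OllamaChatClient(new Uri("http://localhost:11434"), "llama3.1"); // placeholder client

// The generic overload from ChatClientStructuredOutputExtensions instructs the
// LLM to answer as JSON matching the schema of MovieReview and deserializes it.
var response = await chatClient.GetResponseAsync<MovieReview>(
    "Review the movie 'Inception' and rate it from 1 to 10.");

MovieReview review = response.Result;
Console.WriteLine($"{review.Title}: {review.Rating}/10");

// Illustrative shape; the library derives the JSON schema from it.
record MovieReview(string Title, int Rating, string Summary);
```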

Microsoft.Extensions.AI –Part V–Chat history

We continue our journey through the Microsoft.Extensions.AI library. Another basic feature that you will certainly need when building your own AI agents is a way to keep track of your chat history. This is useful as it allows the LLM to build up a context based on the interactions that already took place. This post is part of a blog series. Other posts so far: Part I – An introduction to Microsoft.Extensions.AI Part II – ASP.NET Core integration Part III – Tool calling Part IV – Telemetry integration Part V – Chat history (this post) Chat history The basics of maintaining a history are simple. You need to build up a list of previously exchanged chat messages: Remark: Notice the different roles we can link to the messages so the LLM knows who provided what information. Once we have that list, we pass it along when calling the LLM instead of only our specific input: The AI service can now use this information during our interactions: Stateless vs state...
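A minimal sketch of that flow, assuming a recent Microsoft.Extensions.AI (for the AddMessages helper) and a placeholder local Ollama client:

```csharp
using Microsoft.Extensions.AI;

IChatClient chatClient =
    new OllamaChatClient(new Uri("http://localhost:11434"), "llama3.1"); // placeholder client

// Build up the history, with a role attached to every message.
List<ChatMessage> history =
[
    new(ChatRole.System, "You are a helpful assistant."),
    new(ChatRole.User, "My name is Bart."),
];

var response = await chatClient.GetResponseAsync(history);

// Append the assistant reply so the next call sees the full context.
history.AddMessages(response);
history.Add(new(ChatRole.User, "What is my name?"));

var followUp = await chatClient.GetResponseAsync(history);
Console.WriteLine(followUp.Text); // the model can now recall the name
```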

Microsoft.Extensions.AI–Part IV–Telemetry integration

Back from holiday with my batteries charged to 100%. Time to continue our journey in the Microsoft.Extensions.AI library. Today we have a look at (Open)Telemetry integration. This post is part of a blog series. Other posts so far: Part I – An introduction to Microsoft.Extensions.AI Part II – ASP.NET Core integration Part III – Tool calling Part IV – Telemetry integration (this post) Sooner or later you’ll arrive at a moment where you want to better understand what is going on in the interaction between your chat client and the LLM. That is the moment you want to integrate telemetry in your application. In the Microsoft.Extensions.AI library, this can be done through the OpenTelemetryChatClient. You can plug this client in by calling the UseOpenTelemetry method on the ChatClientBuilder: If we now run our application and take a look at the OpenTelemetry data in our Aspire dashboard, we get a lot of useful information on what is going on behind the scenes: ...
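A minimal sketch of that wiring, with a placeholder local Ollama client and an illustrative source name; the tracer provider also has to subscribe to that source for the data to show up:

```csharp
using Microsoft.Extensions.AI;

// Wrap any IChatClient with the OpenTelemetry decorator.
IChatClient chatClient =
    new OllamaChatClient(new Uri("http://localhost:11434"), "llama3.1") // placeholder client
    .AsBuilder()
    .UseOpenTelemetry(
        sourceName: "MyApp.Chat", // illustrative ActivitySource name
        configure: c => c.EnableSensitiveData = true) // include prompts/responses in traces
    .Build();

// Elsewhere, listen to the same source in your OpenTelemetry setup, e.g.:
// builder.Services.AddOpenTelemetry().WithTracing(t => t.AddSource("MyApp.Chat"));
```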

Microsoft.Extensions.AI–Part III–Tool calling

I'm on a journey discovering what is possible with the Microsoft.Extensions.AI library and you are free to join. Yesterday I looked at how to integrate the library in an ASP.NET Core application. Today I want to dive into a specific feature: tool calling. This post is part of a blog series. Other posts so far: Part I – An introduction to Microsoft.Extensions.AI Part II – ASP.NET Core integration Part III – Tool calling (this post) What is tool calling? With tool calling you provide your LLM with a set of tools (typically .NET methods) that it can call. This allows your LLM to interact with the outside world in a controlled way. In Semantic Kernel these tools were called ‘plugins’, but the concept is the same. To be 100% correct, it is not the LLM itself that calls these tools; the model can only request that a tool be invoked with specific arguments (for example a weather tool with the location as a parameter). It is up to the client to invoke the tool and pa...
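A minimal sketch of the pattern, with a stubbed weather tool (the method, descriptions, and model name are illustrative) and a placeholder local Ollama client:

```csharp
using System.ComponentModel;
using Microsoft.Extensions.AI;

IChatClient chatClient =
    new OllamaChatClient(new Uri("http://localhost:11434"), "llama3.1") // placeholder client
    .AsBuilder()
    .UseFunctionInvocation() // executes requested tool calls and feeds results back
    .Build();

var response = await chatClient.GetResponseAsync(
    "What is the weather in Brussels?",
    new ChatOptions { Tools = [AIFunctionFactory.Create(GetWeather)] });

Console.WriteLine(response.Text);

// A plain .NET method exposed to the model as a tool. The descriptions help
// the LLM decide when to call it and how to fill in the arguments.
[Description("Gets the current weather for a location.")]
static string GetWeather([Description("The city name")] string location)
    => $"It is sunny in {location}."; // stub implementation
```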

Microsoft.Extensions.AI–Part II - ASP.NET Core Integration

Last week I finally started my journey with Microsoft.Extensions.AI after having used only Semantic Kernel for all my agentic AI workflows. I started with a short introduction on what Microsoft.Extensions.AI is and we created our first 'Hello AI' demo combining Microsoft.Extensions.AI and AI Foundry Local. This post is part of a blog series. Other posts so far: Part I – An introduction to Microsoft.Extensions.AI Part II – ASP.NET Core integration (this post) Most of the time you will not have your AI workloads running in a console application but integrated in an ASP.NET Core app, so that is exactly what we are trying to achieve today. Integrating Microsoft.Extensions.AI in ASP.NET Core We’ll start simple: we want to show a Razor page where we can enter some text and let the LLM respond. An important requirement is that the results are streamed to the frontend. Start by creating a new ASP.NET Core application. Use the Razor Pages template in Visual Studio: We up...
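A rough sketch of the server side, using a minimal-API endpoint rather than the Razor page for brevity (the route, model name, and local Ollama client are placeholders; earlier previews named the streaming method CompleteStreamingAsync):

```csharp
using Microsoft.Extensions.AI;

var builder = WebApplication.CreateBuilder(args);

// Register an IChatClient in DI (placeholder local Ollama client).
builder.Services.AddChatClient(
    new OllamaChatClient(new Uri("http://localhost:11434"), "llama3.1"));

var app = builder.Build();

// Stream the LLM response to the frontend chunk by chunk.
app.MapPost("/chat", async (HttpContext context, string prompt, IChatClient chatClient) =>
{
    context.Response.ContentType = "text/plain";
    await foreach (var update in chatClient.GetStreamingResponseAsync(prompt))
    {
        await context.Response.WriteAsync(update.Text);
        await context.Response.Body.FlushAsync(); // push each chunk immediately
    }
});

app.Run();
```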

GitHub Copilot–We still need the human in the loop

I picked up a bug today where we got a NullReferenceException. I thought this was a good scenario where I could ask GitHub Copilot to find and fix the issue for me. Here is the original code containing the issue: I asked Copilot to investigate and fix the issue using the /fix slash command: /fix This code returns a NullReferenceException in some situations. Can you investigate an issue and suggest a solution? GitHub Copilot was successful in identifying the root cause of the problem. I was passing a ConnectionName using a different casing than the key found in the dictionary (e.g. Northwind vs northwind). That’s good. However, then I noticed the solution it suggested: Although that is a workable solution that fixes the issue, it is certainly not the simplest or most performant one. I undid the changes made by Copilot and updated the Dictionary construction instead: The human in the loop is still required... More information Tips & Tricks for Git...
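The excerpt doesn't show the final code, but presumably the fix is along these lines: make the dictionary case-insensitive at construction time instead of normalizing keys on every lookup (the names and connection string below are illustrative):

```csharp
// Case-insensitive key comparison is baked into the dictionary itself,
// so "Northwind" and "northwind" resolve to the same entry.
var connectionStrings = new Dictionary<string, string>(StringComparer.OrdinalIgnoreCase)
{
    ["Northwind"] = "Server=.;Database=Northwind;Trusted_Connection=True;", // illustrative
};

Console.WriteLine(connectionStrings["northwind"]); // no more missing-key path
```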