
Posts

Showing posts from 2025

Microsoft.Extensions.AI–Part III–Tool calling

I'm on a journey discovering what is possible with the Microsoft.Extensions.AI library, and you are free to join. Yesterday I looked at how to integrate the library in an ASP.NET Core application. Today I want to dive into a specific feature: tool calling. This post is part of a blog series. Other posts so far: Part I – An introduction to Microsoft.Extensions.AI Part II – ASP.NET Core integration Part III – Tool calling (this post) What is tool calling? With tool calling you provide your LLM with a set of tools (typically .NET methods) that it can call. This allows your LLM to interact with the outside world in a controlled way. In Semantic Kernel these tools were called ‘plugins’, but the concept is the same. To be 100% correct, it is not the LLM itself that calls these tools; the model can only request that a tool be invoked with specific arguments (for example, a weather tool with the location as a parameter). It is up to the client to invoke the tool and pa...
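As a taste of what the excerpt describes, here is a minimal sketch of tool calling with Microsoft.Extensions.AI. The `GetWeather` method and its return values are hypothetical, and `innerClient` stands in for whatever `IChatClient` you already have for your model:

```csharp
using System.ComponentModel;
using Microsoft.Extensions.AI;

// Hypothetical tool: a plain .NET method the model can ask the client to invoke.
[Description("Gets the current weather for a location.")]
static string GetWeather(string location) =>
    location == "Brussels" ? "Rainy, 12°C" : "Sunny, 21°C";

// Wrap your existing IChatClient so requested tools are invoked automatically.
IChatClient client = new ChatClientBuilder(innerClient)
    .UseFunctionInvocation()
    .Build();

var options = new ChatOptions
{
    Tools = [AIFunctionFactory.Create(GetWeather)]
};

var response = await client.GetResponseAsync(
    "What is the weather in Brussels?", options);
```

Note that `UseFunctionInvocation()` is what closes the loop: the model only *requests* the call; this middleware executes it and feeds the result back.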

Microsoft.Extensions.AI–Part II - ASP.NET Core Integration

Last week I finally started my journey with Microsoft.Extensions.AI after having used only Semantic Kernel for all my agentic AI workflows. I started with a short introduction on what Microsoft.Extensions.AI is, and we created our first 'Hello AI' demo combining Microsoft.Extensions.AI and AI Foundry Local. This post is part of a blog series. Other posts so far: Part I – An introduction to Microsoft.Extensions.AI Part II – ASP.NET Core integration (this post) Most of the time you will not have your AI workloads running in a console application but integrated in an ASP.NET Core app, so that is exactly what we are trying to achieve today. Integrating Microsoft.Extensions.AI in ASP.NET Core We'll start simple: we want to show a Razor page where we can enter some text and let the LLM respond. It is important that the results are streamed to the frontend. Start by creating a new ASP.NET Core application. Use the Razor Pages template in Visual Studio: We up...
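A rough sketch of the wiring the excerpt hints at: register an `IChatClient` in DI and stream the model's answer to the caller. The `/chat` endpoint and `innerClient` placeholder are illustrative, not the post's actual code:

```csharp
using System.Text;
using Microsoft.Extensions.AI;

var builder = WebApplication.CreateBuilder(args);

// 'innerClient' stands in for your concrete IChatClient
// (e.g. the AI Foundry Local client from Part I).
builder.Services.AddChatClient(innerClient);

var app = builder.Build();

// Stream the model's answer chunk by chunk instead of waiting for the full reply.
app.MapGet("/chat", (IChatClient chatClient, string prompt) =>
    Results.Stream(async stream =>
    {
        await foreach (var update in chatClient.GetStreamingResponseAsync(prompt))
        {
            await stream.WriteAsync(Encoding.UTF8.GetBytes(update.Text ?? ""));
            await stream.FlushAsync();
        }
    }, "text/plain"));

app.Run();
```

The same `IChatClient` resolved here can then be injected into a Razor page model instead of a minimal API endpoint.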

GitHub Copilot–We still need the human in the loop

I picked up a bug today where we got a NullReferenceException. I thought this was a good scenario to ask GitHub Copilot to find and fix the issue for me. Here is the original code containing the issue: I asked Copilot to investigate and fix the issue using the /fix slash command: /fix This code returns a NullReferenceException in some situations. Can you investigate an issue and suggest a solution? GitHub Copilot was successful in identifying the root cause of the problem. I was passing a ConnectionName using a different casing than the key found in the dictionary (e.g. Northwind vs northwind). That's good. However, then I noticed the solution it suggested: Although that is a workable solution that certainly fixes the issue, it is not the simplest or most performant one. I undid the changes done by Copilot and updated the Dictionary construction instead: The human in the loop is still required... More information Tips & Tricks for Git...
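The excerpt doesn't show the final fix, but "updated the Dictionary construction" for a key-casing bug most likely means passing a case-insensitive comparer. A sketch of that idea, with made-up keys and values:

```csharp
// With StringComparer.OrdinalIgnoreCase, "Northwind" and "northwind"
// resolve to the same entry, so lookups no longer fail on casing.
var connections = new Dictionary<string, string>(StringComparer.OrdinalIgnoreCase)
{
    ["Northwind"] = "Server=...;Database=Northwind;"
};

Console.WriteLine(connections.ContainsKey("northwind")); // True
```

This fixes the problem at the source instead of normalizing the casing at every call site, which is presumably why it beats the suggested workaround.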

Start your own coding adventure with GitHub Copilot

Imagine learning programming concepts not through dry textbooks or boring exercises, but by embarking on epic quests in mystical realms. Does that sound appealing to you? Then join Copilot Adventures, Microsoft's innovative approach to coding education that transforms programming practice into an engaging, story-driven experience. What is Copilot Adventures? Copilot Adventures is an open-source educational project that combines the power of GitHub Copilot with immersive storytelling to teach programming concepts. Instead of solving abstract problems, you work through coding challenges embedded in rich fantasy narratives—from mechanical clockwork towns to enchanted forests where mystical creatures perform sacred dances. The project leverages GitHub Copilot, Microsoft's AI-powered coding assistant, to help learners write code while exploring these fictional worlds. It's essentially a "choose your own adventure" for programmers, where each story presen...

An introduction to Microsoft.Extensions.AI–Part I

Last year, when the AI hype really exploded, Microsoft's 'go to' library for building AI solutions in .NET was Semantic Kernel. So although it was still in preview at the time, I started using Semantic Kernel and never looked back. Later Microsoft introduced Microsoft.Extensions.AI, but I never had the time to take a good look at it. Now I finally found some time to explore it further. My goal is to write a few posts in which I recreate an application that I originally created in Semantic Kernel to see how far we can get. But that will mainly be for the upcoming posts. In this post we focus on the basics to get started. What is Microsoft.Extensions.AI? The Microsoft.Extensions.AI libraries provide a unified approach for representing generative AI components and enable seamless integration and interoperability with various AI services. Think of them as the dependency injection and logging abstractions you already know and love, but specifically designed for AI services. ...
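To make the abstraction idea concrete, here is a minimal 'Hello AI' sketch against the `IChatClient` abstraction. How you obtain the concrete client depends on your provider, so `GetChatClientForYourModel()` is a placeholder:

```csharp
using Microsoft.Extensions.AI;

// IChatClient is the provider-neutral abstraction; any backing service
// (cloud or local) can sit behind it. The factory call is a placeholder.
IChatClient client = GetChatClientForYourModel();

ChatResponse response = await client.GetResponseAsync("Say hello to AI!");
Console.WriteLine(response.Text);
```

Swapping providers later means changing only how `client` is constructed, not any of the calling code, which is exactly the point of the abstraction.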

The one question that transforms every coaching session

Coaching is as challenging as it is rewarding. In his post 'How to be a more effective coach?', JD Meier shared one 'bonus' question that really created a breakthrough in how I tackle these coaching conversations. What would make this conversation wildly valuable for you today? This one question makes all the difference as it shifts the focus from you to them, immediately. Why this question works so well It transfers ownership immediately The moment you ask this question, something profound happens. The conversation stops being about your agenda and becomes entirely about theirs. You're not trying to fix, advise, or direct. Instead, you're creating a container for their most important work to emerge. This transfer of ownership is crucial because: People are more invested in solutions they help create People often know what they need better than we do It honors their autonomy and expertise in their own lives ...

Getting started with AI development in .NET

Getting started in the world of AI development can be a challenge. Every day new libraries, models and possibilities appear. So what is your first step, and where can you find good examples of how to tackle different problems? This is where Microsoft wants to help through the AI Dev Gallery. The AI Dev Gallery is an open-source app designed to help Windows developers integrate AI capabilities within their own apps and projects. The app contains the following: Over 25 interactive samples powered by local AI models Easily explore, download, and run models from Hugging Face and GitHub The ability to view the C# source code and simply export a standalone Visual Studio project for each sample You can download the AI Dev Gallery directly or through the Windows App Store: A walkthrough Let me walk you through some of the features in the AI Dev Gallery application. After opening the app you arrive on the Home page, where you have a carousel of different use cases: ...

GitHub Copilot walkthrough

Although GitHub Copilot has been available for some time in VS Code, Visual Studio and almost every other popular IDE, for a lot of people it still feels new and unfamiliar. If you are one of these people, I have some good news for you: the Visual Studio team has got you covered, because a GitHub Copilot walkthrough was added to Visual Studio. This walkthrough is an interactive guide that helps you understand and use GitHub Copilot's features step by step. To activate the walkthrough, click on the GitHub Copilot icon in the top right corner and choose GitHub Copilot Walkthrough from the context menu: This will give you a general introduction on what GitHub Copilot has to offer: To be honest, I am a little bit disappointed in what this walkthrough shows. My hope was that it would walk you through a set of scenarios, showing how GitHub Copilot can help in each of these cases. Maybe something for a next Visual Studio release? More information Agent mode for every developer...

Giving Copilot Agent in Visual Studio a try

After using GitHub Copilot Agent mode for some time in VS Code, I finally found some time to give it a try in Visual Studio. Agent mode was introduced as part of the 17.14 release, so if you are still using an older Visual Studio version, please update first. Remark: Other interesting AI features in the same release are Next Edit Suggestions and AI comment generation. Let's dive in! Open Visual Studio and go to the GitHub Copilot Chat window: Click on the dropdown icon next to Ask and choose Agent: Remark: Notice that the Edits feature has disappeared, so you only have Ask or Agent. I don't know if the feature is still there, but at least I couldn't find it… Now we can ask the agent to perform specific tasks or let it figure out a solution based on a PRD (Product Requirements Document) or similar you've provided. So far so good. The not-so-secret superpower of agents is that they are not limited to your IDE but can interact with the outsid...

Supercharging On-Device AI: Foundry Local + Semantic Kernel

So far my ‘go to’ approach for using language models locally was through Ollama and Semantic Kernel. With the announcement of Foundry Local at Build, I decided to try to combine AI Foundry Local with Semantic Kernel. Time to dive in… What Is Foundry Local? Foundry Local is Microsoft’s local execution runtime for large language models. Unlike cloud-hosted models, Foundry Local runs entirely on your device, giving you privacy, customization, and cost-efficiency. Thanks to its simple CLI and REST API, it integrates smoothly into existing workflows and can support a variety of models and use cases. The easiest way to get started with Foundry Local is through winget: winget install Microsoft.FoundryLocal Once Foundry Local is installed, you can request the list of available models: foundry model list Download the model you want to use: foundry model download phi-3.5-mini Now you can run foundry using the downloaded model: foundry model run phi-3.5-mini If you forgot, ...
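Since Foundry Local exposes an OpenAI-compatible REST endpoint, one plausible way to connect Semantic Kernel to it is via the OpenAI connector with a custom endpoint. This is a sketch under assumptions: the port below is made up (check the endpoint Foundry Local actually reports), and it assumes a Semantic Kernel version whose `AddOpenAIChatCompletion` overload accepts a custom `endpoint`:

```csharp
using Microsoft.SemanticKernel;

// Point the OpenAI connector at the local OpenAI-compatible endpoint.
// Port and API key are placeholders; Foundry Local prints the real endpoint.
var kernel = Kernel.CreateBuilder()
    .AddOpenAIChatCompletion(
        modelId: "phi-3.5-mini",
        endpoint: new Uri("http://localhost:5273/v1"),
        apiKey: "not-needed-locally")
    .Build();

var result = await kernel.InvokePromptAsync("Why is the sky blue?");
Console.WriteLine(result);
```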

Nobody wants software

Last week I was watching a panel session recorded at the GOTO conference last year. During this session Daniel North, the originator of BDD, made the analogy between surgery and software development: No one wants surgery and if you really need it, you want the least amount of surgery to get away with. What people want is to be well and that their problems are solved. This also applies to software. This comparison of software development to surgery reveals a truth about what we're really trying to accomplish as developers. Just as no patient walks into a hospital hoping for the most complex, invasive procedure possible, no user opens your application excited about the thousands of lines of code running beneath the surface. What patients really want When someone needs surgery, they don't want surgery, they want to be well. They want their problem solved with the least disruption, the smallest incision, the quickest recovery time possible. The surgeon's skill isn...

Finding inspiration for good custom instructions for GitHub Copilot

One of the best ways to improve the results you get back from GitHub Copilot is by carefully defining your custom instructions. This helps the LLM better understand your application, preferred technologies, coding guidelines, etc. This information is shared with the LLM for every request, so you don’t have to provide all these details every time in your prompts. But creating such a set of custom instructions can be a challenge. If you are looking for inspiration, here are some possible sources: Awesome Copilot Instructions Link: Code-and-Sorts/awesome-copilot-instructions: ✨ Curated list of awesome GitHub copilot-instructions.md files Description: Contains a list of copilot instructions for different programming languages Cursor Rules Link: Free AI .cursorrules & .mdc Config Generator | Open Source Developer Tools Description: Originally created for the Cursor IDE but also applicable when defining custom instructions for GitHub Copilot. No examples for .NET or CSh...
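For readers who have never written one, a `copilot-instructions.md` is just a markdown file in `.github/`. A made-up example of the kind of content that typically goes in it:

```markdown
# Copilot instructions (illustrative example)

- This is an ASP.NET Core application targeting .NET 8 and C# 12.
- Prefer minimal APIs over MVC controllers for new endpoints.
- Use xUnit for tests and follow the Arrange-Act-Assert pattern.
- Public types and methods need XML documentation comments.
- Never hardcode connection strings; read them from configuration.
```

The bullet points themselves are invented; the sources linked above show real-world examples per language and framework.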

Monitor your A/B test in .NET

I’m currently working on a new feature in one of our microservices. As this new feature could have a large performance impact, we did some performance benchmarks up front through BenchmarkDotNet. The results looked promising, but we were not 100% confident that these results are representative of real-life usage. Therefore we decided to implement A/B testing. Yesterday I showed how to implement A/B testing in .NET using .NET Feature Management. Today I want to continue on the same topic and show you how we added telemetry. Measuring and analyzing results The most critical aspect of A/B testing is measuring results. You need to track relevant metrics for each variation to determine which performs better. The simplest way to do this is to fall back on the built-in logging in .NET Core: Although this is a good starting point, we can simplify and improve this by taking advantage of the built-in telemetry through OpenTelemetry and/or Application Insights. Using OpenTelemetry...
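One way to track per-variation metrics with the built-in `System.Diagnostics.Metrics` API, which OpenTelemetry or Application Insights can then export. The meter and counter names are illustrative, not the post's actual code:

```csharp
using System.Diagnostics.Metrics;

// A counter tagged with the variant name, so the exporter can
// break request counts down per A/B variation.
static class AbTestMetrics
{
    private static readonly Meter Meter = new("MyService.AbTest");

    private static readonly Counter<long> VariantRequests =
        Meter.CreateCounter<long>("abtest.requests");

    public static void RecordRequest(string variant) =>
        VariantRequests.Add(1, new KeyValuePair<string, object?>("variant", variant));
}

// Usage at the point where a variant is served:
// AbTestMetrics.RecordRequest("B");
```

Tagging one counter with the variant, rather than creating a counter per variant, keeps the metric queryable as a single series split by dimension.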

A/B testing in .NET using Microsoft Feature Management

I’m currently working on a new feature in one of our microservices. As this new feature could have a large performance impact, we did some performance benchmarks up front through BenchmarkDotNet. The results looked promising, but we were not 100% confident that these results are representative of real-life usage. Therefore we decided to implement A/B testing. In this post, we'll explore what A/B testing is, why it matters, and how to implement it effectively using .NET Feature Management. What is A/B testing? A/B testing, also known as split testing, is a method of comparing two versions of a web page, application feature, or user experience to determine which one performs better. In its simplest form, you show version A to half your users and version B to the other half, then measure which version achieves your desired outcome more effectively. The beauty of A/B testing lies in its scientific approach. Rather than making changes based on opinions or assumptions, you...
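A sketch of how the version A / version B split can look with Microsoft.FeatureManagement's variant support. Feature name, variant name, and the two helper methods are all hypothetical, and this assumes a library version with `IVariantFeatureManager`:

```csharp
using Microsoft.FeatureManagement;

public class OrderService(IVariantFeatureManager featureManager)
{
    public async Task<Order[]> GetOrdersAsync(CancellationToken ct)
    {
        // Ask Feature Management which variant this request is assigned to.
        Variant variant = await featureManager.GetVariantAsync("OrdersAlgorithm", ct);

        return variant?.Name == "Optimized"
            ? await GetOrdersOptimizedAsync(ct)   // version B: the new feature
            : await GetOrdersClassicAsync(ct);    // version A: the existing path
    }

    // GetOrdersOptimizedAsync / GetOrdersClassicAsync omitted for brevity.
}
```

The allocation between variants (e.g. 50/50) is then driven from configuration rather than code, which is what makes the split easy to adjust or roll back.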

Avoiding unexpected costs from GitHub Copilot premium requests

On June 18, 2025, an important change was activated in GitHub Copilot: from that day on, you need to pay extra for any premium requests that exceed your monthly allowance. This means that we should think more about how we use Copilot and be aware of the (potential) costs. What are premium requests? Premium requests are requests that use more advanced processing power and apply to any request that is not using the default models (GPT-4o and GPT-4.1 at the moment of writing this post). The number of premium requests you get out-of-the-box differs depending on the plan you are using: Important! Also be aware that a multiplier is applied to the cost depending on the model used. For some models (e.g. GPT-4.5) this can add up quickly, as the multiplier can be as high as 50! Disabling premium requests If you want to avoid unexpected costs, you can disable premium requests once your monthly allowance is used up. ...