I'm on a journey discovering what is possible with the Microsoft.Extensions.AI library and you are free to join. Yesterday I looked at how to integrate the library in an ASP.NET Core application. Today I want to dive into a specific feature: tool calling.
This post is part of a blog series. Other posts so far:
- Part I – An introduction to Microsoft.Extensions.AI
- Part II – ASP.NET Core integration
- Part III – Tool calling (this post)
What is tool calling?
With tool calling you provide your LLM with a set of tools (typically .NET methods) that it can call. This allows your LLM to interact with the outside world in a controlled way. In Semantic Kernel these tools were called ‘plugins’, but the concept is the same.
To be 100% correct, it is not the LLM itself that calls these tools. The model can only request that a tool be invoked with specific arguments (for example, a weather tool with the location as a parameter); it is up to the client to invoke the tool and pass the result back to the LLM.
Remark: You may wonder, what about MCP? You can see MCP as a standardized way to do tool calling. But I have a separate post planned where I specifically dive into MCP integration with Microsoft.Extensions.AI.
Integrating tool calling in Microsoft.Extensions.AI
Microsoft.Extensions.AI provides 3 building blocks to add tool calling:
- AIFunction: The .NET method (aka tool) that can be described to an AI model and invoked.
- AIFunctionFactory: A factory class that helps you create AIFunction instances based on .NET methods.
- FunctionInvokingChatClient: A wrapper for IChatClient that adds automatic function-invocation capabilities.
Let’s put these building blocks into action. We take our application from the last post and continue from there:
- Let us start by creating our .NET method that will be exposed as a tool to our LLM:
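A minimal sketch of such a method (the WeatherTools class, the GetWeather name, and the hard-coded result are illustrative; a real tool would call a weather API):

```csharp
using System.ComponentModel;

public class WeatherTools
{
    [Description("Gets the current weather for the given location.")]
    public static string GetWeather(
        [Description("The city to get the weather for.")] string location)
    {
        // Illustrative stub: a real implementation would call a weather API.
        return $"The weather in {location} is sunny, 21°C.";
    }
}
```

The [Description] attributes are picked up later when the method is turned into an AIFunction, giving the model a human-readable explanation of the tool and its parameters.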
- Now we take our original IChatClient and wrap it in a FunctionInvokingChatClient:
  - Remark: Notice that I’m using Ollama in this example as I couldn’t get tool calling working when using AI Foundry Local.
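A sketch of that wrapping step, assuming the Microsoft.Extensions.AI.Ollama package is installed (the endpoint and model name are illustrative, adjust them to your local setup):

```csharp
using Microsoft.Extensions.AI;

// Inner client that talks to a local Ollama instance.
IChatClient ollamaClient = new OllamaChatClient(
    new Uri("http://localhost:11434"), "llama3.1");

// Wrap the inner client so that tool calls requested by the model
// are invoked automatically and the results are fed back to the model.
IChatClient chatClient = new ChatClientBuilder(ollamaClient)
    .UseFunctionInvocation()
    .Build();
```

In the ASP.NET Core application from the previous post you can achieve the same thing at registration time with builder.Services.AddChatClient(...).UseFunctionInvocation().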
- The last step required is to construct a ChatOptions object and pass our tools:
  - Remark: Notice that we provided extra information about the tool when creating the AIFunction.
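A sketch of the ChatOptions construction, reusing the illustrative GetWeather method from above; the name and description arguments are the extra information the remark refers to:

```csharp
using Microsoft.Extensions.AI;

var chatOptions = new ChatOptions
{
    Tools =
    [
        // The name and description give the model extra context
        // to decide when (and how) to call the tool.
        AIFunctionFactory.Create(
            WeatherTools.GetWeather,
            name: "get_weather",
            description: "Gets the current weather for the given location.")
    ]
};
```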
- Don’t forget to pass our ChatOptions object when calling the GetStreamingResponseAsync method:
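For example (the prompt is illustrative):

```csharp
using Microsoft.Extensions.AI;

List<ChatMessage> messages =
[
    new(ChatRole.User, "What is the weather in Brussels?")
];

// Because the client is wrapped in a FunctionInvokingChatClient, the
// GetWeather tool is invoked automatically whenever the model requests it.
await foreach (var update in chatClient.GetStreamingResponseAsync(messages, chatOptions))
{
    Console.Write(update.Text);
}
```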
Remark: Be aware that not every model supports tool calling. You can try different models to find one that works or use an OpenAI model.