Microsoft.Extensions.AI–Part VII–MCP integration

Our journey continues as we keep finding new features to explore in the Microsoft.Extensions.AI library. Today we take a look at its support for MCP.

This post is part of a blog series. Other posts so far:

What is MCP?

I know, I know, you should have been hiding under a rock if you have never heard about MCP before. But just in case; MCP (Model Context Protocol) is an open protocol developed by Anthropic that provides a standardized way to connect AI models to different data sources and tools.

This allows us to use tool calling without having to build our own plugins (as I demonstrated in Part III of this blog series).

Using MCP with Microsoft.Extensions.AI

The first thing you need is an MCP server. Today there are typically two ways an MCP server can exchange information:

  • Standard Input Output (STDIO)
  • Streamable HTTP

Remark: The Streamable HTTP option replaces the HTTP+SSE transport from protocol version 2024-11-05. You can optionally still use Server-Sent Events (SSE) to stream multiple server messages.

I will be using Streamable HTTP in this example with the Playwright MCP server.

  • Start the server by executing the following command:

npx @playwright/mcp@latest --port 8931

  • Now open your project and add the following NuGet package:

dotnet add package ModelContextProtocol

  • The next step is to instantiate an MCP client and retrieve the tools exposed by the MCP server.
  • Because these tools are instances of McpClientTool, which inherits from AIFunction, we can pass them directly as ‘plugins’ to our ChatOptions.
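The two steps above can be sketched roughly as follows. This is a minimal sketch based on the ModelContextProtocol NuGet package; the endpoint URL matches the port we passed to the Playwright MCP server above, the transport type and exact API names may differ between preview versions of the SDK, and the construction of the IChatClient (chatClient) is omitted:

```csharp
using Microsoft.Extensions.AI;
using ModelContextProtocol.Client;

// Connect to the Playwright MCP server we started earlier on port 8931.
var transport = new SseClientTransport(new SseClientTransportOptions
{
    Endpoint = new Uri("http://localhost:8931")
});

await using var mcpClient = await McpClientFactory.CreateAsync(transport);

// Each returned tool is an McpClientTool, which inherits from AIFunction.
IList<McpClientTool> tools = await mcpClient.ListToolsAsync();

// Pass the MCP tools to ChatOptions exactly like hand-written AIFunctions.
var chatOptions = new ChatOptions
{
    Tools = [.. tools]
};

// chatClient: an IChatClient built with .UseFunctionInvocation(),
// so tool calls are executed automatically (construction omitted).
var response = await chatClient.GetResponseAsync(
    "Open the browser and summarize the page at example.com.",
    chatOptions);
```

Because UseFunctionInvocation() is part of the chat pipeline, the model's tool-call requests are routed to the MCP server for us; no hand-written plugin code is needed.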

That’s all that we need to do!

More information

Tool calling using MCP with Semantic Kernel

Overview | MCP C# SDK
