Last week I finally started my journey with Microsoft.Extensions.AI after having used only Semantic Kernel for all my agentic AI workflows. I started with a short introduction on what Microsoft.Extensions.AI is, and we created our first 'Hello AI' demo combining Microsoft.Extensions.AI and AI Foundry Local.
This post is part of a blog series. Other posts so far:
- Part I – An introduction to Microsoft.Extensions.AI
- Part II – ASP.NET Core integration (this post)
Most of the time your AI workloads will not run in a console application but will be integrated in an ASP.NET Core app, so that is exactly what we are trying to achieve today.
Integrating Microsoft.Extensions.AI in ASP.NET Core
We’ll start simple: we want to show a Razor page where we can enter some text and let the LLM respond. Importantly, the results should be streamed to the frontend.
- Start by creating a new ASP.NET Core application using the Razor Pages template in Visual Studio.
- We update our Program.cs file to include the same code to build up our IChatClient:
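Here is a minimal sketch of what that looks like, assuming Foundry Local is exposing its OpenAI-compatible endpoint on localhost; the port and the model alias are placeholders, so use the values your local instance reports:

```csharp
using System.ClientModel;
using Microsoft.Extensions.AI;
using OpenAI;

var builder = WebApplication.CreateBuilder(args);
builder.Services.AddRazorPages();

// Foundry Local exposes an OpenAI-compatible endpoint on localhost.
// The port and model alias below are placeholders; replace them with
// the values your local instance reports.
IChatClient chatClient = new OpenAIClient(
        new ApiKeyCredential("unused-for-local"),
        new OpenAIClientOptions { Endpoint = new Uri("http://localhost:5273/v1") })
    .GetChatClient("phi-3.5-mini")
    .AsIChatClient(); // called AsChatClient() in earlier Microsoft.Extensions.AI.OpenAI previews
```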
- But now we add one extra line to register our IChatClient in the ASP.NET Core DI container:
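Continuing the Program.cs sketch from above:

```csharp
// Register the IChatClient in the DI container so that pages and
// endpoints can get it injected.
builder.Services.AddChatClient(chatClient);
```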
- Next step is to write a ‘chat’ API endpoint that takes some input and returns the output from the LLM:
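Something like the following sketch; the route name and the request shape are my own choices here:

```csharp
var app = builder.Build();
app.MapRazorPages();

// Streams the LLM output to the client chunk by chunk instead of
// buffering the full response.
app.MapPost("/api/chat", async (ChatRequest request, IChatClient chatClient, HttpContext context) =>
{
    context.Response.ContentType = "text/plain; charset=utf-8";

    await foreach (var update in chatClient.GetStreamingResponseAsync(request.Prompt))
    {
        await context.Response.WriteAsync(update.Text);
        await context.Response.Body.FlushAsync();
    }
});

app.Run();

// Simple request contract; placed at the bottom of Program.cs because
// type declarations must follow top-level statements.
public record ChatRequest(string Prompt);
```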
- Remark: Notice that I’m using GetStreamingResponseAsync to stream the results to the frontend so that the user doesn’t have to wait until the full response is generated.
- Let us now also update the frontend to show an input field and a box where we print out the response returned by the LLM:
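For example, in Pages/Index.cshtml (the element ids are just what the script below expects):

```html
<div>
    <input type="text" id="prompt" placeholder="Ask me something..." />
    <button type="button" id="send">Send</button>
</div>

<!-- The streamed LLM output is appended here as it arrives. -->
<pre id="response"></pre>
```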
- And a little bit of JavaScript to call our chat API and process the response:
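A minimal sketch that reads the streamed response with the Fetch API; it assumes the /api/chat route and the element ids from the snippets above:

```javascript
document.getElementById('send').addEventListener('click', async () => {
    const prompt = document.getElementById('prompt').value;
    const responseBox = document.getElementById('response');
    responseBox.textContent = '';

    const response = await fetch('/api/chat', {
        method: 'POST',
        headers: { 'Content-Type': 'application/json' },
        body: JSON.stringify({ prompt })
    });

    // Read the body chunk by chunk instead of waiting for the full response.
    const reader = response.body.getReader();
    const decoder = new TextDecoder();
    while (true) {
        const { done, value } = await reader.read();
        if (done) break;
        responseBox.textContent += decoder.decode(value, { stream: true });
    }
});
```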
That’s it!