Because Semantic Kernel can work with any OpenAI-compatible endpoint, and Ollama exposes its language models through an OpenAI-compatible API, combining the two was always possible. However, not all features of Ollama were accessible through Semantic Kernel.
With the recent release of a dedicated Ollama connector for Semantic Kernel, we can start using some of the more advanced Semantic Kernel features directly targeting Ollama-deployed models.
The new connector is built on OllamaSharp (I talked about it in this post), so you can access the underlying library directly if needed.
Giving the new connector a try…
- Create a new Console application and add the Microsoft.SemanticKernel.Connectors.Ollama NuGet package:
dotnet add package Microsoft.SemanticKernel.Connectors.Ollama --version 1.21.1-alpha
- Now, instead of creating a Semantic Kernel instance, we can directly create an OllamaChatCompletionService instance:
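A minimal sketch of what this could look like; the model name and endpoint are assumptions, so adjust them to your local setup (the connector is still in alpha, which is why the experimental-API warning is suppressed):

```csharp
#pragma warning disable SKEXP0070 // the Ollama connector is marked experimental

using Microsoft.SemanticKernel.ChatCompletion;
using Microsoft.SemanticKernel.Connectors.Ollama;

// "llama3.1" is just an example; use any model you have pulled locally.
// http://localhost:11434 is the default Ollama endpoint.
var chatService = new OllamaChatCompletionService(
    modelId: "llama3.1",
    endpoint: new Uri("http://localhost:11434"));
```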
- The rest of the code stays the same as with the default Semantic Kernel ChatCompletionService:
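For example, the familiar chat-history pattern works unchanged (a sketch; the prompts are illustrative):

```csharp
// Build up a conversation exactly as with any other IChatCompletionService.
var history = new ChatHistory();
history.AddSystemMessage("You are a helpful assistant.");
history.AddUserMessage("What is Semantic Kernel?");

// Ask the locally running Ollama model for a completion.
var reply = await chatService.GetChatMessageContentAsync(history);
Console.WriteLine(reply.Content);
```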
- What is different now is that we can access the underlying OllamaSharp objects if we want to:
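One way to do this is to create the OllamaSharp client yourself and hand it to the connector, so you keep a reference to it for direct API calls. This is a sketch that assumes a constructor overload accepting an OllamaApiClient; check the connector's current surface before relying on it:

```csharp
using OllamaSharp;

// Create the OllamaSharp client ourselves instead of letting the connector do it.
var ollamaClient = new OllamaApiClient(
    new Uri("http://localhost:11434"), "llama3.1");

// Use OllamaSharp directly, e.g. to list the models available locally...
var models = await ollamaClient.ListLocalModelsAsync();
foreach (var model in models)
    Console.WriteLine(model.Name);

// ...and reuse the same client for Semantic Kernel chat completion.
var chatService = new OllamaChatCompletionService(ollamaClient);
```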
Nice!
More information
Interact with Ollama through C# (bartwullems.blogspot.com)
awaescher/OllamaSharp: The easiest way to use the Ollama API in .NET (github.com)
Introducing new Ollama Connector for Local Models | Semantic Kernel (microsoft.com)