
Tool support in OllamaSharp

Yesterday I talked about OllamaSharp as an alternative to Semantic Kernel for talking to your Ollama endpoint from C#. The reason I wanted to use Ollama directly instead of going through Semantic Kernel was that I wanted to give the recently announced Ollama tool support a try. And that is exactly what we are going to explore in this post.



Tool support

Tool support allows a model to answer a given prompt using tools it knows about, making it possible to interact with the outside world and do things like calling an API. It makes your model a lot smarter, as it can use information that was not part of its original training data and do more than just return a response.

Remark: A similar feature exists in Semantic Kernel through the concept of Plugins, but as far as I’m aware, Plugins do not use the Ollama tool support (yet). While writing this post I noticed that a new Ollama connector was released for Semantic Kernel which uses OllamaSharp behind the scenes. I’ll check whether this new connector can call the Ollama tool support and update this post with my findings. Promised!

To show the power of tool calling, we first ask for some weather information using the example we created yesterday:
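Yesterday's example boils down to a chat loop like the following (a minimal sketch; the endpoint URL and model name are my assumptions, and the exact API shape may differ slightly between OllamaSharp versions):

```csharp
using OllamaSharp;

// Connect to a locally running Ollama instance (default port assumed)
var ollama = new OllamaApiClient(new Uri("http://localhost:11434"))
{
    SelectedModel = "mistral" // any model you have pulled locally
};

var chat = new Chat(ollama);

// Ask for weather information and stream the answer to the console
await foreach (var token in chat.SendAsync("What is the weather in Brussels today?"))
    Console.Write(token);
```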


The model politely explains that it cannot access weather information.

OK. Let’s create a tool that returns the weather for a location. It is important that we give a good description to the tool and to the parameters it expects:
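In OllamaSharp a tool is described with a JSON-schema-like structure. It could look something like this (a sketch: the tool name `get_current_weather` and the exact model type names are my assumptions; check the OllamaSharp chat models for the exact shape in your version):

```csharp
using OllamaSharp.Models.Chat;

var weatherTool = new Tool
{
    Type = "function",
    Function = new Function
    {
        Name = "get_current_weather",
        Description = "Get the current weather for a given location",
        Parameters = new Parameters
        {
            Type = "object",
            Properties = new Dictionary<string, Properties>
            {
                ["location"] = new()
                {
                    Type = "string",
                    Description = "The city to get the weather for, e.g. Brussels"
                },
                ["unit"] = new()
                {
                    Type = "string",
                    Description = "The temperature unit to use",
                    Enum = new[] { "celsius", "fahrenheit" }
                }
            },
            Required = new[] { "location" }
        }
    }
};
```

The descriptions matter: the model relies on them to decide when to call the tool and which arguments to fill in.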


Now we can update our chat example from yesterday and pass on the tool instance:

Remark: Notice that we also changed the model used. Right now only a limited list of models supports tool calling.
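Passing the tool along could look like this (a sketch: I assume a `SendAsync` overload that accepts a collection of tools, and `llama3.1` as an example of a tool-capable model):

```csharp
var ollama = new OllamaApiClient(new Uri("http://localhost:11434"))
{
    SelectedModel = "llama3.1" // switched to a model that supports tool calling
};

var chat = new Chat(ollama);

// Pass the tool definition along with the prompt
await foreach (var token in chat.SendAsync(
    "What is the weather in Brussels today?",
    tools: new[] { weatherTool }))
{
    Console.Write(token);
}
```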

If we now ask for the weather, we’ll get the following response back:

What wasn’t immediately clear to me is that the Ollama tool support will not invoke the tool for you. You need to call it yourself, based on the ToolCall information you get back from the LLM.

Here is an updated version where I call the GetWeather method I have available in my code:
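That invocation could be handled along these lines (a sketch: `GetWeather` stands in for the method in my own code, and the `ToolCalls`, `Arguments` and `SendAsAsync` names are assumptions based on the OllamaSharp chat models; verify them against your version):

```csharp
// After sending the prompt with the tool attached, inspect the last
// message for tool calls requested by the model
var lastMessage = chat.Messages.LastOrDefault();

foreach (var toolCall in lastMessage?.ToolCalls ?? Enumerable.Empty<Message.ToolCall>())
{
    if (toolCall.Function?.Name == "get_current_weather")
    {
        // The model tells us which arguments it wants us to use
        var location = toolCall.Function.Arguments?["location"]?.ToString();

        // Invoke our own implementation (hypothetical helper)
        var weather = GetWeather(location);

        // Feed the result back to the model as a 'tool' message so it
        // can formulate the final answer
        await foreach (var token in chat.SendAsAsync(ChatRole.Tool, weather))
            Console.Write(token);
    }
}
```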

Now we get the following results back:

Nice! 

Remark: Be aware that this code is far from production-ready, but at least you get the idea.

More information

Tool support · Ollama Blog

Introducing new Ollama Connector for Local Models | Semantic Kernel (microsoft.com)

wullemsb/OllamaSharpDemo: Tools demo for OllamaSharp (github.com)
