Run LLMs locally using Podman AI Lab

So far I’ve always used Ollama to run LLMs locally on my development machine. Recently, however, I discovered the Podman AI Lab extension as an alternative way to work with large language models on your local machine.

In this post I’ll share my experience trying out the Podman AI Lab extension.

Remark: I assume you already have Podman Desktop up and running on your machine. (If not, check one of my previous posts: Kubernetes–Setup a local cluster through Podman Desktop.)

Installation

You need to have at least the following versions to get started:

If the prerequisites are met, installing the extension should be really easy. According to the documentation, it should be sufficient to click the installation link to get going. Unfortunately, this didn’t work on my machine and I got the following error message instead:

It turned out that the update had failed for an unknown reason and that I was still running an older Podman Desktop version. After updating, I could go to the extension catalog and install the Podman AI Lab extension:

After installation, an extra icon appears on the left:

Interact with an LLM locally

Now that the extension itself is up and running, we can download a model. To do so, go to the model catalog and click the download icon next to one of the listed models:

Downloading a model can take some time, so this is the ideal moment to grab a coffee…

Once the model is downloaded, you can start a model service. A model service is an inference server running in a container that exposes the model through a REST API.

Go to the Services tab and click New Model Service at the top right:

Choose the downloaded model from the list and click Create Service:

Once the service is started, you can generate a code snippet to interact with the model:
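What the generated snippet looks like depends on the language and framework you pick. As an illustration, here is a minimal sketch in Python, assuming the service listens on localhost port 35001 (a placeholder; the actual port is shown in the service details) and exposes an OpenAI-compatible chat completions endpoint. Double-check both against the snippet that Podman AI Lab generates for you.

import requests

# Placeholder port: check the service details in Podman AI Lab for the real value.
ENDPOINT = "http://localhost:35001/v1/chat/completions"

payload = {
    "messages": [
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Explain in one sentence what Podman is."},
    ],
}

# Send the chat request to the local inference server and print the model's reply.
response = requests.post(ENDPOINT, json=payload, timeout=120)
response.raise_for_status()
print(response.json()["choices"][0]["message"]["content"])

Because the API is OpenAI-compatible, existing code written against the OpenAI chat completions API can usually be pointed at the local service by changing only the base URL.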


Model playground

If you want to interact with the model directly without writing your own code, you can use the built-in playground.

To do so, go to Playgrounds and click New Playground at the top right:

Choose the downloaded model and click Create Playground:

Once the playground is ready, you get a chat interface to interact with the model:

Nice!

More information

Running large language models locally using Ollama (bartwullems.blogspot.com)

Podman AI Lab | Podman Desktop (podman-desktop.io)

