
Explore and test local models using Ollama and OpenWebUI

If you are following my blog, you probably noticed that I'm experimenting a lot with large language models (LLMs) locally. I typically expose these LLMs locally through Ollama and use either Semantic Kernel or the API directly to test and interact with these models.

Recently I discovered OpenWebUI, an open source web interface designed primarily for interacting with AI language models. It offers a clean, intuitive interface that makes it easy to have conversations with AI models while providing advanced features for developers and power users.

Some of the key features of OpenWebUI are:

  • OpenAI API Integration: Effortlessly integrate OpenAI-compatible APIs for versatile conversations alongside Ollama models.

  • Granular Permissions and User Groups: Create detailed user roles and permissions for a secure and customized user environment.

  • Full Markdown and LaTeX Support: Comprehensive Markdown and LaTeX capabilities for enriched interaction.

  • Model Builder: Easily create Ollama models directly from Open WebUI.

  • Local and Remote RAG Integration: Cutting-edge Retrieval Augmented Generation (RAG) technology within chats.

  • Web Search for RAG: Perform web searches and inject results directly into your chat experience.

  • Web Browsing Capabilities: Seamlessly integrate websites into your chat experience.

  • Image Generation Integration: Incorporate image generation capabilities using various APIs.

  • Concurrent Model Utilization: Engage with multiple models simultaneously for optimal responses.

  • Role-Based Access Control (RBAC): Ensure secure access with restricted permissions.

  • Pipelines Plugin Framework: Integrate custom logic and Python libraries into Open WebUI.

We’ll explore some of these features in later posts, but today we focus on how to get started.

Getting started

The easiest way to get started with OpenWebUI is through Docker. Run the following command to start the container:

docker run -d -p 3000:8080 --add-host=host.docker.internal:host-gateway -v open-webui:/app/backend/data --name open-webui --restart always ghcr.io/open-webui/open-webui:cuda

Remark: There are multiple parameters that can be provided, and the :cuda image requires an NVIDIA GPU with the NVIDIA Container Toolkit installed. Check out the readme file for more details.
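If you don't have an NVIDIA GPU, the CPU-only image is likely the better fit. A sketch of the same command using the project's :main tag (verify the current tag names in the readme if they have changed):

```shell
# CPU-only variant: use the :main image instead of :cuda
docker run -d -p 3000:8080 \
  --add-host=host.docker.internal:host-gateway \
  -v open-webui:/app/backend/data \
  --name open-webui \
  --restart always \
  ghcr.io/open-webui/open-webui:main
```

The --add-host flag makes the host machine reachable as host.docker.internal from inside the container, which is how OpenWebUI finds an Ollama instance running on the host.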

After the container has started, you can browse to http://localhost:3000. The first time you run the container, you get a welcome page:
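If the welcome page does not appear, a quick sanity check can confirm that the container is up and the port mapping works (this assumes the container name and port from the command above):

```shell
# Is the container running?
docker ps --filter "name=open-webui" --format "{{.Names}}: {{.Status}}"

# Does the web server answer on the mapped port?
curl -fsS -o /dev/null http://localhost:3000 && echo "Open WebUI is reachable"
```

Note that the first start can take a little while before the web server responds.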

After clicking on Get started, you need to create an admin account:

Now we arrive on the main OpenWebUI screen. If you've used ChatGPT or Microsoft Copilot before, the experience feels familiar.

You can select a different model by clicking on the dropdown in the left corner:
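The dropdown only lists models that Ollama already has available locally. Assuming Ollama is running on the host, you can pull a model from the command line and it should then show up in the list (llama3.2 below is just an example model name):

```shell
# Pull a model into the local Ollama library
ollama pull llama3.2

# List the models available locally
ollama list
```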

Now we can send a message to the model and wait for some results:

The output is shown in the chat interface:
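Under the hood, the chat request goes to the Ollama instance on the host. You can reproduce roughly the same call with curl against Ollama's REST API (assuming Ollama's default port 11434 and a previously pulled model named llama3.2):

```shell
curl -s http://localhost:11434/api/generate -d '{
  "model": "llama3.2",
  "prompt": "Why is the sky blue?",
  "stream": false
}'
```

With "stream": false the full response comes back as a single JSON object instead of a stream of tokens.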

Configure OpenWebUI

There are a lot of things we can configure when using OpenWebUI. To access the admin settings, click on the user icon in the top right corner and choose Admin Panel:

On the Admin panel, you can further click on the Settings tab:

Now we get an overview of all available settings. For example, in the Connections section we can see and manage the list of available connections:

By clicking on the configure icon next to a specific connection, we can configure it further:


Conclusion

OpenWebUI makes it really easy to test AI models while giving users and organizations the control they need. Whether you're an individual developer, researcher, or enterprise user, I would recommend giving it a try.

More information

open-webui/open-webui: User-friendly AI Interface (Supports Ollama, OpenAI API, ...)

Home | Open WebUI
