Maybe you’ve heard of Bolt.new, the AI solution from StackBlitz that lets you prompt, edit, and deploy full-stack web and mobile applications in no time. It is an in-browser AI web development agent that leverages StackBlitz’s WebContainers for full-stack application development. The application presents a simple, chat-based environment in which you prompt an agent to make code changes that are reflected in real time in the WebContainers dev environment.
I find it a great way to get a head start when building small(er) web applications. But what if, due to company policies or other reasons, you are not allowed to use Bolt.new online?
In that case I have some good news for you: the team from StackBlitz also created Bolt.diy, the open-source version of Bolt.new, which allows you to choose the LLM you use for each prompt.
Installing Bolt.diy
The easiest way to install Bolt.diy is through Docker.
- Start by cloning the repository locally:
git clone https://github.com/stackblitz-labs/bolt.diy.git
- Now we can build the Docker image locally:
docker build . --target bolt-ai-development
- Once the image is built, we can start the container with:
docker compose --profile development up
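To verify that everything came up correctly, the usual Docker Compose commands can be used. A quick sanity check (the app-dev service name comes from the repository’s compose file) might look like this:
# Confirm the development container is running and follow its logs
docker compose --profile development ps
docker compose --profile development logs -f app-dev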
Connecting Bolt.diy with Ollama
Before we can give Bolt.diy a try, we first need to connect it to a provider. In this example, we’ll use a local LLM exposed through Ollama. Let’s see how we can configure this. Since Bolt.diy runs in a Docker container and Ollama runs directly on our system, we need to configure the Ollama endpoint.
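Before touching Bolt.diy, it can’t hurt to confirm that Ollama itself is reachable on the host. Ollama exposes a small HTTP API, so listing the locally available models is a quick test:
# List the models your local Ollama instance serves (should return JSON, not a connection error)
curl http://127.0.0.1:11434/api/tags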
- Start by adding a .env.local file. Inside this file, configure the URL to your local Ollama instance. Because Bolt.diy runs inside Docker while Ollama runs on the host, the endpoint points to host.docker.internal rather than localhost:
# You only need this environment variable set if you want to use Ollama models
# DON'T USE http://localhost:11434 due to IPv6 issues
# USE EXAMPLE http://127.0.0.1:11434
OLLAMA_API_BASE_URL=http://host.docker.internal:11434
- This .env.local file is also referenced inside the docker-compose.yaml:
services:
  app-dev:
    image: bolt-ai:development
    build:
      target: bolt-ai-development
    env_file: '.env.local'
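To double-check that the variable actually reaches the container, you can query it from inside the running service; this assumes printenv is available in the image:
# With the development container running, confirm the variable is set inside it
docker compose --profile development exec app-dev printenv OLLAMA_API_BASE_URL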
- Remark: Don’t forget to restart the container, otherwise your changes are not picked up. One way to do this is shown below.
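A plain restart may not re-read the env_file, so recreating the container is the safer option:
# Recreate the container so the new .env.local values are picked up
docker compose --profile development down
docker compose --profile development up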
- Now we can browse to Bolt.diy locally (in my case http://localhost:5173/):
- Be patient: it can take a while for the site to load the first time
- Once the site is up and running, hover over the Bolt.diy icon and click on the Settings icon in the bottom left corner. The control panel appears:
- Click on Local Providers. Check that the toggle button is activated next to the Ollama provider:
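One thing to keep in mind for the next step: the model dropdown only shows models that Ollama has already downloaded. If it stays empty, pull one first (the model name below is just an example; any model from the Ollama library works):
# Pull an example model into the local Ollama instance
ollama pull qwen2.5-coder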
Giving Bolt.diy a try
Now we are finally ready to give Bolt.diy a try.
- Select Ollama from the dropdown list and also select a model of your choice:
- Now you can describe what you want to generate and let the agent do its work:
That’s it! I’ll dive deeper into the features of Bolt.new and Bolt.diy in a later post…