In a previous post I introduced you to Continue in combination with Ollama as a way to run a fully local AI code assistant.
Remark: this post is part of a larger series. Here are the other related posts:
- Part 1 - Introduction
- Part 2 - Configuration
- Part 3 - Editing and Actions
- Part 4 (this post) - Learning from your codebase
Today I want to continue by looking at how Continue can learn from your codebase and provide suggestions based on that knowledge.
But before I can show you this feature we first need to download an embedding model. Embedding models are models that are trained specifically to generate vector embeddings: long arrays of numbers that represent semantic meaning for a given sequence of text. These arrays can be stored in a database, and used to search for data that is similar in meaning.
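To make this a bit more concrete, here is a minimal Python sketch that asks a locally running Ollama instance for embeddings and compares two texts. It assumes Ollama's default endpoint at http://localhost:11434 and its /api/embeddings route; the example sentences are just illustrations:

```python
import requests

def embed(text: str) -> list[float]:
    # Ask the local Ollama server to embed the given text.
    response = requests.post(
        "http://localhost:11434/api/embeddings",
        json={"model": "nomic-embed-text", "prompt": text},
    )
    response.raise_for_status()
    return response.json()["embedding"]

def cosine_similarity(a: list[float], b: list[float]) -> float:
    # Texts with similar meaning produce vectors pointing in similar
    # directions, i.e. a cosine similarity close to 1.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = sum(x * x for x in a) ** 0.5
    norm_b = sum(y * y for y in b) ** 0.5
    return dot / (norm_a * norm_b)

print(cosine_similarity(
    embed("How do I reset my password?"),
    embed("Steps to change a forgotten password"),
))
```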
We’ll use the nomic-embed-text embedding model, so let’s download it:
```
ollama pull nomic-embed-text
```
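Once the pull completes, you can confirm the model is available locally:

```
ollama list
```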
Now we need to update the Continue configuration by editing the config.json file.
- Click on the gear icon in the bottom right corner of the Continue window:
- The config.json file opens. Scroll to the bottom to find the embeddingsProvider section and update it so that it uses Ollama with the nomic-embed-text model, as shown in the snippet after this list.
- Don’t forget to save the file to apply the changes.
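After the edit, the embeddingsProvider section should look roughly like this (a minimal sketch; leave the rest of your config.json as-is):

```json
{
  "embeddingsProvider": {
    "provider": "ollama",
    "model": "nomic-embed-text"
  }
}
```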
Continue will now index your codebase (this can take some time, depending on its size). Once indexing is done, you can use the @codebase context provider to pull this information into your prompts.
For example, you can ask it to generate a new class that follows the conventions of the existing classes in your codebase, with a prompt along these lines:
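(The class name below is just an illustration; substitute one that fits your own project.)

```
@codebase Create an OrderRepository class that follows the same structure
and conventions as the existing repository classes in this codebase.
```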
Here is the result:
Nice!