While preparing a demo for my team, I encountered the following error after upgrading to Semantic Kernel 1.20.0 (alpha):

Microsoft.SemanticKernel.HttpOperationException: Service request failed. Status: 404 (Not Found) ---> System.ClientModel.ClientResultException: Service request failed. Status: 404

When I took a look at the request URI used by v1.20.0 and compared it to the one used after switching back to the original version I was on (v1.17.2), I noticed the difference: somehow the 'v1' part of the URI had disappeared...

A look at the Semantic Kernel GitHub repo brought me to the following issue: .Net: Bug: HTTP 404 - POST /chat/completions · Issue #8525 · microsoft/semantic-kernel. It seems to be related to the OpenAI SDK version in use. A fix is to stay a little longer on the v1.17.2 version until a new release containing the following fix is available: .Net: OpenAI + AzureOpenAI Connector SDK u
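Until that release ships, pinning the package back to the last known-good version is straightforward. A minimal sketch of the relevant .csproj entry (the surrounding project file contents are omitted):

```xml
<!-- Pin Semantic Kernel to 1.17.2 until the URI fix is released -->
<ItemGroup>
  <PackageReference Include="Microsoft.SemanticKernel" Version="1.17.2" />
</ItemGroup>
```

Alternatively, `dotnet add package Microsoft.SemanticKernel --version 1.17.2` applies the same downgrade from the command line.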
When I'm coding, I'm assisted today by GitHub Copilot. And although GitHub puts a lot of effort into keeping your data private, not every organisation allows its use. If you are working for such an organisation, does this mean that you cannot use an AI code assistant? Luckily, the answer is no. In this post I'll show you how to combine Continue, an open-source AI code assistant, with Ollama to run a fully local code assistant. Keep reading…

What is Continue?

Continue is an open-source AI code assistant that can easily be integrated into popular IDEs like VS Code and JetBrains, providing custom autocomplete and chat experiences. It offers the features you expect from most code assistants: autocomplete, code explanation, chat, refactoring, asking questions about your code base, and more, all based on the AI model of your choice.

Installing and configuring Continue in VS Code

As mentioned in the intro, you can integrate Continue into both VS Code and JetBrains IDEs like Rider or
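Once the extension is installed, Continue is configured through a JSON file (typically ~/.continue/config.json). A minimal sketch pointing both chat and autocomplete at local Ollama models; the model names used here (llama3, starcoder2:3b) are assumptions, so substitute whichever models you have pulled:

```json
{
  "models": [
    {
      "title": "Llama 3 (local)",
      "provider": "ollama",
      "model": "llama3"
    }
  ],
  "tabAutocompleteModel": {
    "title": "StarCoder2 (local)",
    "provider": "ollama",
    "model": "starcoder2:3b"
  }
}
```

Make sure the models are available locally before Continue tries to use them, e.g. with `ollama pull llama3`.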