
Posts

Showing posts from September, 2024

Azure DevOps–Sharing service connections

In Azure DevOps, managing external service connections for pipelines and deployments can become complex, especially when you have multiple projects. Sharing service connections across projects not only streamlines pipeline configuration but also enforces consistent security policies. In this post, I'll show you how to share service connections across multiple projects. But let me first explain what service connections actually are. What is a service connection in Azure DevOps? Azure DevOps uses service connections to allow pipelines to communicate with external services like Azure, AWS, Docker registries, GitHub, and more. These connections store credentials, allowing seamless integration with external systems without hardcoding secrets in your pipelines. You can configure service connections at the project level by going to Project Settings > Service connections. If you click on New Service Connection, you get a list of possible connection types: How t
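As a starting point for inspecting what you have, here is a minimal C# sketch that lists a project's service connections through the Service Endpoints REST API. The organisation name, project name, and environment variable are placeholders; authentication uses a personal access token via basic auth.

```csharp
using System.Net.Http;
using System.Net.Http.Headers;
using System.Text;

// Placeholder organisation/project; a PAT is read from an environment variable.
var pat = Environment.GetEnvironmentVariable("AZDO_PAT");
using var client = new HttpClient();
client.DefaultRequestHeaders.Authorization = new AuthenticationHeaderValue(
    "Basic", Convert.ToBase64String(Encoding.ASCII.GetBytes($":{pat}")));

var url = "https://dev.azure.com/myorg/MyProject/_apis/serviceendpoint/endpoints?api-version=7.1-preview.4";
var json = await client.GetStringAsync(url);
Console.WriteLine(json); // each endpoint entry includes its name, type and id
```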

Tool support in OllamaSharp

Yesterday I talked about OllamaSharp as an alternative (to Semantic Kernel) to talk to your Ollama endpoint using C#. The reason I wanted to use Ollama directly instead of Semantic Kernel was that I wanted to give the recently announced Ollama Tool support a try. And that is exactly what we are going to explore in this post. Keep reading... Tool support Tool support allows a model to answer a given prompt using tools it knows about, making it possible to interact with the outside world and do things such as calling an API. It makes your model a lot smarter, as it can start using information that was not part of the originally trained model and do more than just return a response. Remark: A similar feature exists in Semantic Kernel through the concept of Plugins, but as far as I'm aware the Plugins are not using the Ollama tool support (yet).  When writing this post I noticed that a new Ollama connector was released for Semantic Kernel which uses OllamaSha
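To give an idea of the shape of tool calling, here is a hedged sketch of declaring a tool and passing it along in a chat with OllamaSharp. The exact type names (`Tool`, `Function`, `Parameters`, `Property`) are based on my reading of the library at the time and may have changed; the weather function itself is purely illustrative.

```csharp
using OllamaSharp;
using OllamaSharp.Models.Chat;

var ollama = new OllamaApiClient(new Uri("http://localhost:11434"))
{
    SelectedModel = "llama3.1" // any tool-capable model
};

// Describe the tool (a JSON-schema-like function definition the model can invoke).
var weatherTool = new Tool
{
    Function = new Function
    {
        Name = "get_current_weather",
        Description = "Get the current weather for a city",
        Parameters = new Parameters
        {
            Type = "object",
            Properties = new Dictionary<string, Property>
            {
                ["city"] = new() { Type = "string", Description = "The city name" }
            },
            Required = new[] { "city" }
        }
    }
};

var chat = new Chat(ollama);
await foreach (var token in chat.SendAsync("What's the weather in Brussels?", new[] { weatherTool }))
    Console.Write(token);
```

When the model decides to call the tool, you inspect the returned tool calls, execute the corresponding C# code yourself, and feed the result back into the chat.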

Interact with Ollama through C#

If you are a C# developer and want to interact with Ollama (which allows you to interact with Large Language Models locally), the easiest solution is to use Semantic Kernel. This is possible because Ollama exposes an OpenAI-compatible API. However, I wanted to try some Ollama-specific features that were not yet exposed through Semantic Kernel. Does this mean that I can no longer use C#? Remark: While writing this post I noticed that an Ollama connector was released for Semantic Kernel that also uses OllamaSharp behind the scenes. The good news is you still can. Thanks to OllamaSharp you get .NET bindings for the Ollama API . Getting started Let's write a simple demo application to try OllamaSharp: Create a new Console application: dotnet new console -o OllamaSharpDemo Add the OllamaSharp NuGet package: dotnet add package ollamasharp Now we can start writing our code. First create a new OllamaApiClient instance and specify the model we'll use: Next
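The steps above boil down to a few lines. This sketch follows the OllamaSharp README as I remember it; the endpoint and model name are assumptions for a default local Ollama install.

```csharp
using OllamaSharp;

// Point the client at a local Ollama instance and pick a pulled model.
var ollama = new OllamaApiClient(new Uri("http://localhost:11434"))
{
    SelectedModel = "llama3.1"
};

// Stream the generated answer token by token.
await foreach (var stream in ollama.GenerateAsync("Why is the sky blue?"))
    Console.Write(stream?.Response);
```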

Running a fully local AI Code Assistant with Continue–Part 4– Learning from your codebase

In a previous post I introduced you to Continue in combination with Ollama, as a way to run a fully local AI Code Assistant. Remark: This post is part of a bigger series. Here are the other related posts: Part 1 – Introduction Part 2 -  Configuration Part 3 – Editing and Actions Part 4 (this post) -  Learning from your codebase Today I want to continue by having a look at how Continue can learn from your codebase and provide suggestions based on that. But before I can show you this feature we first need to download an embedding model. Embedding models are models that are trained specifically to generate vector embeddings : long arrays of numbers that represent semantic meaning for a given sequence of text. These arrays can be stored in a database, and used to search for data that is similar in meaning. We'll use the nomic-embed-text embeddings, so let's download that one: ollama pull nomic-embed-text Now we need to update the Continue configuration by c
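For reference, this is roughly what the embeddings section of Continue's config.json looked like at the time of writing; the schema may have changed in newer Continue releases.

```json
{
  "embeddingsProvider": {
    "provider": "ollama",
    "model": "nomic-embed-text"
  }
}
```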

Running a fully local AI Code Assistant with Continue–Part 3–Editing and Actions

In a previous post I introduced you to Continue in combination with Ollama, as a way to run a fully local AI Code Assistant. In a first post I showed you how to download and install the necessary models and how to integrate it inside VSCode. A second post was focused on the configuration part (and a little bit of troubleshooting). Today I want to continue by having a look at two other features: editing and actions. Editing The editing feature allows you to select a piece of text and describe how it should be changed. The nice thing about this feature is that you can use it without leaving the code file or block you are working on. Just select a piece of code and hit CTRL-I. We now get an Edit popup where we can enter our prompt (or select an existing prompt): Hit submit to get an answer back: Now you get a list of suggested code changes that you can accept or reject: Remark: Depending on the model you are using, the quality of the result can differ and also the waiting tim

Running a fully local AI Code Assistant with Continue–Part 2–Configuring the VSCode extension

In a previous post I introduced you to Continue in combination with Ollama, as a way to run a fully local AI Code Assistant. In a first post I showed you how to download and install the necessary models and how to integrate it inside VSCode. We had a look at the chat integration and autocomplete. Today I want to continue by having a look at how we can configure the VSCode extension. I originally had planned to write about another feature of Continue, but when I opened VSCode today, I got the following error message: Whoops! It seems that the configured language model was not available locally on my machine. And indeed, when I took a look at the list of installed models, the 'starcoder2:3b' model wasn't there: Instead I had the 'starcoder2:latest' model installed. So let's use this moment to show how you can configure the Continue VSCode extension. To do so, click on the 'Gear' icon in the bottom right corner of the Continue chat screen: This will open a config.json file where we ca
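To sketch the fix described above: pointing the autocomplete model at the tag that is actually installed. This is roughly the config.json shape Continue used at the time; field names may differ in newer versions.

```json
{
  "models": [
    {
      "title": "Llama 3.1",
      "provider": "ollama",
      "model": "llama3.1"
    }
  ],
  "tabAutocompleteModel": {
    "title": "StarCoder2",
    "provider": "ollama",
    "model": "starcoder2:latest"
  }
}
```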

Conditional compilation symbols in C#

A few months ago, I created a scalable data migration tool for a customer using Dataflow . I promise I'll write a blog post about it when time permits. The tool was originally created with some assumptions about your primary key strategy; in this case that an identity column was used. However, this week I was asked if the tool could be used for another application. The only problem was that this application used a sequential GUID for its primary key values. I could make the full application configurable to handle multiple key strategies, but as this was a one-shot migration, I decided to use a different strategy and use conditional compilation symbols to handle this scenario. Let me first explain what conditional compilation symbols are… What Are Conditional Compilation Symbols? Conditional compilation symbols in C# are essentially preprocessor directives that allow the compiler to include or omit portions of code based on certain conditions. These symbols are particularly
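A minimal sketch of the technique applied to the key-strategy scenario above. The symbol name and class are illustrative, not the actual migration tool's code; the symbol would be defined in the csproj via `<DefineConstants>$(DefineConstants);SEQUENTIAL_GUID</DefineConstants>` or on the command line with `dotnet build /p:DefineConstants=SEQUENTIAL_GUID`.

```csharp
public static class KeyGenerator
{
#if SEQUENTIAL_GUID
    // Compiled only when SEQUENTIAL_GUID is defined:
    // generate GUID keys for the second application.
    public static object Next() => Guid.NewGuid();
#else
    // Default build: mimic an identity column with an incrementing integer.
    private static int _lastId;
    public static object Next() => ++_lastId;
#endif
}
```

The compiler simply drops the inactive branch, so the one-shot migration build contains only the key strategy it needs.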

Semantic Kernel - 404 error when using the v1.20.0 version

While preparing a demo for my team, I encountered the following error after upgrading to Semantic Kernel 1.20.0 (alpha). Microsoft.SemanticKernel.HttpOperationException: Service request failed. Status: 404 (Not Found) ---> System.ClientModel.ClientResultException: Service request failed. Status: 404 When I took a look at the request URI, I noticed that the following URI was used:   I switched back to the original version I was using (v1.17.2) and could see that a different URI was used:   Do you notice the difference? Somehow the 'v1' part in the URI disappeared... A look at the Semantic Kernel GitHub repo brought me to the following issue: .Net: Bug: HTTP 404 - POST /chat/completions · Issue #8525 · microsoft/semantic-kernel (github.com) It seems that it is related to the OpenAI version in use. The fix is to stay a little bit longer on the v1.17.2 version until a new release with the following fix is available: .Net: OpenAI + AzureOpenAI Connector SDK u

Running a fully local AI Code Assistant with Continue–Part 1–Introduction

When I'm coding, I'm assisted today by GitHub Copilot. And although GitHub puts a lot of effort into keeping your data private, not every organisation allows its use. If you are working for such an organisation, does this mean that you cannot use an AI code assistant? Luckily, the answer is no. In this post I'll show you how to combine Continue , an open-source AI code assistant, with Ollama to run a fully local Code Assistant. Keep reading… Note: This post is part of a bigger series. Check out the other posts here: Part 1 – Introduction Part 2 -  Configuration Part 3 – Editing and Actions Part 4  -  Learning from your codebase Part 5 – Read your documentation Part 6 – Troubleshooting What is Continue? Continue is an open-source AI code assistant that can be easily integrated into popular IDEs like VS Code and JetBrains , providing custom autocomplete and chat experiences. It offers features you expect from most code assistants like autocomplete, code explanation, chat, re

ADFS–Export and import Relying Party data

At one of my clients we have multiple ADFS instances, one for testing purposes and one for production usage. The information on both servers is almost the same; only the endpoints for each relying party are different. Previously we copied information from one server to another manually, retyping all the information. Of course this is a cumbersome and error-prone process, so I decided to simplify and automate it with the help of some PowerShell. The good news was that the hard work was already done for me, as Brad Held had already created a script for exactly that purpose: PowerShell Gallery | Copy-RelyingPartyTrust 1.1 Here is how to use this script: Copy-RelyingPartyTrust.ps1 -sourceRPID testing:saml:com -path C:\Folder -filename SamlTest.json -import false As I found the script a little bit confusing, I took the liberty to adapt the code and split it into two separate scripts. Here is the export script: You can use this script like this: export.ps1 -rp
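As a rough idea of what such an export script can look like (this is a hedged sketch, not the author's actual script; the parameter names and selected properties are illustrative), the built-in ADFS cmdlets do most of the work:

```powershell
# Illustrative export sketch: dump one relying party trust to a JSON file.
param(
    [Parameter(Mandatory)] [string]$Identifier,
    [string]$Path = ".\relyingparty.json"
)

Get-AdfsRelyingPartyTrust -Identifier $Identifier |
    Select-Object Name, Identifier, Endpoints, IssuanceTransformRules |
    ConvertTo-Json -Depth 5 |
    Out-File $Path
```

On the target server, the import side would read the JSON back and feed it to `Add-AdfsRelyingPartyTrust`, adjusting the endpoints for that environment.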

Becoming a professional software developer: Identity over skills

In my experience as a software architect working with developers, I’ve seen a common struggle: the inability to consistently apply good practices, like unit testing, refactoring, or writing clean code, especially when deadlines loom. At the start of a project, everyone’s committed to following best practices—writing tests, maintaining code quality, and ensuring scalability. But as the pressure of deadlines kicks in, those good intentions often get thrown overboard in favor of quick fixes and shortcuts. What I’ve realized is that the real issue isn’t a lack of skill or knowledge; it’s a mindset problem. Developers may know what the best practices are, but they don’t always see themselves as the type of developer who religiously follows them, no matter the circumstances. This ties into an insight I gained from reading Atomic Habits by James Clear: True change happens not when we aim to achieve specific goals, but when we shift our identity. For developers, this means moving

Github Actions–Deprecation warnings

Today I had to tweak some older GitHub Actions workflows. When I took a look at the workflow output I noticed the following warnings: Here is a simplified version of the workflow I was using: The fix was easy: I had to update the action steps to use the latest versions (v4):
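These deprecation warnings typically flag action steps still pinned to v3 (or older), which run on a retired Node.js runtime. A before/after sketch of the relevant steps (the exact actions in the original workflow may differ):

```yaml
steps:
  - uses: actions/checkout@v4        # was: actions/checkout@v3
  - uses: actions/setup-dotnet@v4    # was: actions/setup-dotnet@v3
  - uses: actions/upload-artifact@v4 # was: actions/upload-artifact@v3
```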

Semantic Kernel - Multi agent systems

Yesterday I talked about the new agent abstraction in Semantic Kernel and how it can simplify the steps required to build your own AI agent.  But what could be better than having one agent? Multiple agents of course! And that is exactly what was recently introduced as a preview in Semantic Kernel. As explained in this blog post , there are multiple ways that multiple agents can work together. The simplest way is as a group chat where multiple agents can talk back and forth with each other. To prevent these agents from getting stuck in a loop, this is combined with a custom termination strategy that specifies when the conversation is over. Here is a small example. I start with the default Semantic Kernel configuration to create a kernel instance: Now I define the instructions for the different agents and create them: Remark: Notice that I can use different kernels with different models if I want to. To make sure that the conversation is ended I need to specify a TerminationStr
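A hedged sketch of the group-chat pattern described above, based on the preview Agents API around that time; these types were experimental and may well have changed. `ApprovalTerminationStrategy` is assumed to be a custom `TerminationStrategy` subclass that ends the chat once the reviewer replies with "approved".

```csharp
using Microsoft.SemanticKernel;
using Microsoft.SemanticKernel.Agents;
using Microsoft.SemanticKernel.Agents.Chat;

// Two agents with different instructions; they could even use different kernels/models.
var writer = new ChatCompletionAgent
{
    Name = "Writer",
    Instructions = "Write a short paragraph on the given topic.",
    Kernel = kernel
};
var reviewer = new ChatCompletionAgent
{
    Name = "Reviewer",
    Instructions = "Review the paragraph. Reply 'approved' when it is good enough.",
    Kernel = kernel
};

// The group chat lets the agents talk back and forth; the termination
// strategy (custom subclass, hypothetical name) decides when to stop.
var chat = new AgentGroupChat(writer, reviewer)
{
    ExecutionSettings = new()
    {
        TerminationStrategy = new ApprovalTerminationStrategy { MaximumIterations = 6 }
    }
};
```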