
Posts

Showing posts from December, 2024

Change the line ending of a file in VS Code

Line endings, also known as newline characters, signify the end of a line of text. They might seem insignificant, but they play a crucial role in ensuring our files are accurately interpreted and processed. One of the biggest places where this impacts us is cross-platform compatibility: different operating systems interpret line endings differently. A file created on Windows might not display correctly on Unix systems without the proper line ending conversion. This is because Windows uses Carriage Return + Line Feed (CRLF or \r\n), whereas Unix/Linux/macOS use Line Feed (LF or \n). I got into trouble with line endings when trying to run a Docker image. Instead of running the image as expected, I got a “file not found” error for a specific file. After some investigation, I found out that the root cause was indeed the line ending used. After changing it and rebuilding the Docker image, I was finally able to run it successfully. In VS Code, ...
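To make the difference concrete, here is a small C# snippet of my own (not from the original post) that shows the two conventions side by side:

// The same two-line text, once with the Windows convention and once with the Unix one.
string windowsText = "line one\r\nline two"; // CRLF: Windows convention
string unixText = "line one\nline two";      // LF: Unix/Linux/macOS convention

Console.WriteLine(windowsText.Length); // 18 (CRLF takes two characters)
Console.WriteLine(unixText.Length);    // 17

// Environment.NewLine reflects the convention of the OS the code runs on.
Console.WriteLine(Environment.NewLine == "\r\n" ? "CRLF" : "LF");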

Monitor GitHub Copilot usage

Of course you have provided all your developers with a GitHub Copilot license. But do you have a clue how they are using it? In this post I show you how to use the GitHub Copilot Metrics API to monitor the usage. Let's dive in... Connect to the Metrics API To start calling the Metrics API endpoint, we need to generate a Personal Access Token (PAT). Click on your profile icon in the upper-right corner of any GitHub page and choose Settings from the dropdown. In the left sidebar, click on Developer settings at the bottom. Click on Fine-grained tokens under Personal access tokens. Click on Generate new token. Under Token name, enter a name for the token. Set the Resource owner to the organization you want to monitor the metrics for. Under Permissions, select at least the following permission sets: "GitHub Copilot Business" organiz...
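To give an idea of what calling the API looks like from C#, here is a minimal sketch; the /orgs/{org}/copilot/metrics path and the placeholder org and token values are my assumptions based on the GitHub REST documentation, so verify them before use:

// Minimal sketch: call the Copilot metrics endpoint with a fine-grained PAT.
// YOUR_ORG and YOUR_PAT are hypothetical placeholders.
using System.Net.Http.Headers;

var client = new HttpClient { BaseAddress = new Uri("https://api.github.com") };
client.DefaultRequestHeaders.UserAgent.ParseAdd("copilot-metrics-demo");
client.DefaultRequestHeaders.Accept.Add(
    new MediaTypeWithQualityHeaderValue("application/vnd.github+json"));
client.DefaultRequestHeaders.Add("X-GitHub-Api-Version", "2022-11-28");
client.DefaultRequestHeaders.Authorization =
    new AuthenticationHeaderValue("Bearer", "YOUR_PAT");

// Org-level metrics endpoint; see the GitHub REST API docs for the response shape.
var response = await client.GetAsync("/orgs/YOUR_ORG/copilot/metrics");
response.EnsureSuccessStatusCode();
Console.WriteLine(await response.Content.ReadAsStringAsync());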

Generate text embeddings with Semantic Kernel and Ollama

Retrieval-augmented generation, also known as RAG, is an NLP technique that can help improve the quality of large language model (LLM) output. It allows your AI agent to retrieve data from external sources to generate grounded responses. This helps prevent your agent from hallucinating and returning incorrect information. There are multiple ways to retrieve this external data. In this post I want to show you how to generate vector embeddings that can be stored in and retrieved from a vector database. This means we first need to decide which database to use. The list of options keeps growing (even the new SQL Server version will support vector embeddings out of the box). As we want to demonstrate this feature using Semantic Kernel, we need to take a look at one of the available connectors. Qdrant Vector Database I decided to use Qdrant for this blog post. Qdrant is an AI-native vector database and a semantic search engine. You can use it to extract meaningful information from unstructure...
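As a rough sketch of where this is heading, generating embeddings with the (prerelease) Semantic Kernel Ollama connector could look like the snippet below; the package, extension method, and embedding model name are my assumptions, so verify them against the connector documentation:

// Sketch only; the connector is experimental, hence the suppressed warnings.
// dotnet add package Microsoft.SemanticKernel.Connectors.Ollama --prerelease
#pragma warning disable SKEXP0070, SKEXP0001
using Microsoft.SemanticKernel;
using Microsoft.SemanticKernel.Embeddings;

var kernel = Kernel.CreateBuilder()
    .AddOllamaTextEmbeddingGeneration(
        modelId: "nomic-embed-text",                 // assumed embedding model pulled in Ollama
        endpoint: new Uri("http://localhost:11434")) // default local Ollama endpoint
    .Build();

var embeddingService = kernel.GetRequiredService<ITextEmbeddingGenerationService>();
var embeddings = await embeddingService.GenerateEmbeddingsAsync(
    ["Semantic Kernel makes it easy to generate embeddings."]);

// Each input string becomes one vector; its length depends on the model.
Console.WriteLine($"Vector length: {embeddings[0].Length}");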

Structured output with Ollama

I talked about Structured Output before in the context of Semantic Kernel. Structured Outputs is a feature that ensures the model will always generate responses that follow a supplied JSON schema, so you can process the responses in an automated fashion without worrying about getting invalid JSON back. Recently, support for Structured Output was announced by Ollama. In this post I want to show you how you can use this in combination with OllamaSharp, the C# client for Ollama. Using Structured Output in OllamaSharp Remark: Make sure you have the latest Ollama version running on your local machine before you continue. Add the OllamaSharp client to your project: dotnet add package OllamaSharp Now let’s first define our response model. Afterwards we need to initiate a new OllamaSharp client instance and create a new request object; notice that we specify a JSON schema object based on the Recipe model we created earlier (a consolidated sketch of these steps follows below). If we now invoke the appl...
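Since the original code screenshots are not included in this excerpt, here is a consolidated sketch of the steps above; the ChatRequest.Format property taking a JSON schema object reflects my reading of the OllamaSharp samples, so treat the exact shape as an assumption:

using System.Text;
using System.Text.Json;
using OllamaSharp;
using OllamaSharp.Models.Chat;

// Initiate a new OllamaSharp client instance pointing at the local Ollama server.
var ollama = new OllamaApiClient(new Uri("http://localhost:11434"))
{
    SelectedModel = "llama3.1"
};

// Create the request; Format carries a JSON schema matching the Recipe model below.
var request = new ChatRequest
{
    Messages = [new Message(ChatRole.User, "Give me a recipe for pancakes.")],
    Format = new
    {
        type = "object",
        properties = new
        {
            name = new { type = "string" },
            ingredients = new { type = "array", items = new { type = "string" } },
            cookingTimeInMinutes = new { type = "integer" }
        },
        required = new[] { "name", "ingredients", "cookingTimeInMinutes" }
    }
};

// Stream the answer and collect it so we can deserialize the structured JSON.
var json = new StringBuilder();
await foreach (var chunk in ollama.ChatAsync(request))
    json.Append(chunk?.Message.Content);

var recipe = JsonSerializer.Deserialize<Recipe>(json.ToString(),
    new JsonSerializerOptions { PropertyNameCaseInsensitive = true });
Console.WriteLine(recipe?.Name);

// The response model the schema above is based on.
record Recipe(string Name, string[] Ingredients, int CookingTimeInMinutes);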

GitHub Copilot Edits

I continue my journey on getting the most out of GitHub Copilot. Today I want to take a look at Copilot Edits as another way to use AI in your day-to-day coding experience. Until recently, you either had to use completions or the chat experience. With Copilot Edits, a third option is added to the list. Why Copilot Edits? Where completions or the chat experience are a great fit for single-file changes, they can be cumbersome for bigger changes that span multiple files. When using Copilot Edits, you can specify a set of files that should be edited and then ask Copilot to make some changes. Remark: At the moment of writing, Copilot Edits seems to be available only inside VS Code and not (yet) in Visual Studio. Let’s give it a try… Click on the GitHub Copilot icon at the top and choose Open Copilot Edits from the dropdown (or just hit Ctrl+Shift+I). This will open up the Copilot Edits view where we can start a new editing session. First we need t...

Sequential GUIDs with .NET 9

Sequential GUIDs (Globally Unique Identifiers) are a specific type of GUID optimized for scenarios where insertion order matters, especially in databases. Traditional GUIDs are randomly generated and, due to their lack of natural order, can cause performance issues in systems where they are used as primary keys. Sequential GUIDs address this problem by maintaining some level of order, making them more efficient for certain use cases. Before .NET 9 I typically had 2 approaches to generate a sequential GUID: I let the database generate it for me through the NEWSEQUENTIALID function, or I generated the id myself using a library like RT.Comb. Starting from .NET 9 I have a new option to add to my list, as the Guid class got a new method, Guid.CreateVersion7(), that returns sequential GUIDs according to the version 7 UUID spec (RFC 9562). This means that they can be ordered and used as a database table primary key, for example, because the values won't be s...
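A quick illustration (my own snippet) of the new method:

// .NET 9: Guid.CreateVersion7() creates a time-ordered (UUIDv7) GUID.
var first = Guid.CreateVersion7();
var second = Guid.CreateVersion7();

Console.WriteLine(first);
Console.WriteLine(second);

// The first 48 bits contain a timestamp, so GUIDs created later sort after
// earlier ones; only within the same timestamp is the order random.
Console.WriteLine(first.CompareTo(second) <= 0); // True when timestamps differ

// There is also an overload that takes an explicit timestamp.
var backdated = Guid.CreateVersion7(DateTimeOffset.UtcNow.AddDays(-1));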

From dependence to independence, to interdependence

Last week I was listening to an episode of the Coaching for Leaders podcast on which Stephen M. R. Covey was the guest. Stephen is a bestselling author and former CEO of the Covey Leadership Center. He is also the son of Dr. Stephen R. Covey, author of the well-known book The 7 Habits of Highly Effective People. When asked about a key lesson from his father, Stephen talks about the evolution from dependence to independence to interdependence. Having never read the book, I thought it would be a good idea to a) put the book on my reading list and b) dive a little deeper into this topic right away. What did I learn? The basic idea is that personal and professional growth can be seen as a journey through three stages: dependence, independence, and interdependence. Each stage represents a significant transformation in how we relate to ourselves, others, and the world. Dependence: The starting point Dependence is where we all begin. As children, we rely on our parents to meet our basic needs and ...

CS8999 - Line does not start with the same whitespace

Raw string literals are a powerful feature introduced in C# 11 that simplify the way we handle strings, especially those containing special characters or spanning multiple lines. What are raw string literals? A raw string literal starts and ends with a minimum of three double-quote characters ("""). This allows you to include characters like backslashes (\), single quotes ('), and double quotes (") without needing to escape them. Here's a simple example: The above example is a single-line string literal. But the feature really shines when using raw string literals that span multiple lines. This is particularly useful for embedding large blocks of text, such as XML or JSON, directly into your code. Remark: It is very important to check the documentation, as some very specific rules apply when using multiline string literals. I stumbled over this myself when I added a JSON schema description inside a piece of code: The compiler c...
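To make the whitespace rule concrete, here is a small example of my own showing both a valid literal and one that would trigger CS8999:

// Valid: every content line starts with (at least) the whitespace that precedes
// the closing """ on the last line.
string json = """
    {
        "name": "raw string literals"
    }
    """;

// Invalid: the content is indented less than the closing quotes, so the compiler
// reports CS8999: "Line does not start with the same whitespace as the closing
// line of the raw string literal". Kept commented out so the file compiles:
// string broken = """
// {
//     "name": "raw string literals"
// }
//     """;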

Semantic Kernel - Structured output

One of the challenges when integrating a large language model into your backend processes is that the response you get back is non-deterministic. This is not a big problem if you only want to output the response as text, but it can be a challenge to process the response in an automated fashion. Prompting for JSON Of course you can use prompt engineering to ask the LLM to return the response as JSON, and even provide an example to steer the LLM, but it can still happen that the JSON you get back is not formatted correctly. Here is a possible prompt: A trick that can also help, as mentioned in the Anthropic documentation, is to prefill the response with a part of the JSON message. JSON mode Although the techniques above will certainly help, they are not foolproof. A first improvement on this approach was the introduction of JSON mode in the OpenAI API. When JSON mode is turned on, the model's output is ensured to be valid JSON, except for some edge cases that are descri...
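Although the excerpt stops here, a sketch of the Semantic Kernel approach could look like the snippet below; setting ResponseFormat to a .NET type is based on my understanding of the OpenAI connector, so double-check it against your Semantic Kernel version:

using Microsoft.SemanticKernel;
using Microsoft.SemanticKernel.Connectors.OpenAI;

var kernel = Kernel.CreateBuilder()
    .AddOpenAIChatCompletion("gpt-4o", "YOUR_API_KEY") // placeholder model and key
    .Build();

// Ask the connector to constrain the output to JSON matching the Recipe type.
var settings = new OpenAIPromptExecutionSettings
{
    ResponseFormat = typeof(Recipe)
};

var result = await kernel.InvokePromptAsync(
    "Give me a recipe for pancakes.",
    new KernelArguments(settings));

Console.WriteLine(result);

// Hypothetical response model that doubles as the JSON schema source.
record Recipe(string Name, string[] Ingredients);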

Who owns AI in your organization?

AI is the new shiny toy in many organizations, promising innovation, efficiency, and a competitive edge. But with great potential comes great complexity, and in many cases, inter-departmental turf wars over who should "own" AI. IT wants control for its infrastructure expertise, Data Science claims it as their domain due to their deep knowledge of models and analytics, and business units see it as a tool to drive their specific goals. So, who really owns AI? I think that's the wrong question to ask… AI's transformative potential means it touches almost every part of an organization. Each department has valid reasons for its claim, but this fragmented approach can lead to inefficiencies, duplicated efforts, and missed opportunities. AI is not a standalone tool that fits neatly into one department. In my opinion it's a cross-functional enabler that thrives on collaboration. Framing AI as something to be "owned" misses its broader organizational value. Instead...

Efficient searching in .NET with SearchValues

While browsing through the list of changes in .NET 9, I noticed a remark about the SearchValues functionality. I had no idea what it does, so time to further investigate this feature... The SearchValues<T> type was introduced in .NET 8 to help optimize searches. It allows us to specify the values we want to search for; based on the provided values, the runtime will come up with different strategies to optimize the search performance. After the SearchValues<T> is created, we can use it on any ReadOnlySpan<T>. Here is a small example: In .NET 8, SearchValues was limited to simple data types like char. Starting from .NET 9, you can also use string values (both variants are sketched below). Remark: Use SearchValues<T> when you expect to use the same search values a lot. More information SearchValues Class (System.Buffers) | Microsoft Learn SearchValues<T> Class (System.Buffers) | Microsoft Learn What's new in .NET 9 | Microsoft Learn
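Here is a small snippet of my own illustrating both the original char-based usage and the new string support in .NET 9:

using System.Buffers;

// Create the SearchValues once and reuse it; the runtime picks an optimized strategy.
SearchValues<char> vowels = SearchValues.Create("aeiou");
ReadOnlySpan<char> text = "Efficient searching in .NET";
Console.WriteLine(text.IndexOfAny(vowels)); // 3 -> the 'i' in "Efficient"

// New in .NET 9: SearchValues over strings, with a configurable comparison.
SearchValues<string> keywords = SearchValues.Create(
    ["error", "warn"], StringComparison.OrdinalIgnoreCase);
ReadOnlySpan<char> log = "2024-12-01 WARN disk almost full";
Console.WriteLine(log.IndexOfAny(keywords)); // 11 -> start of "WARN"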

Free ebook - Practical debugging for .NET Developers

Even with the introduction of AI in the software development process, debugging remains an important part of our job. Being able to investigate difficult problems efficiently and find solutions fast is an important skill to master. The good news is that it is a skill you can learn, and the even better news is that you can get help through the ‘Practical Debugging for .NET Developers’ book by Michael Shpilt that is now available for free! I'd like to share a fragment of the introduction: The best software engineers I know are excellent at debugging. They find solutions to difficult problems where no one else can. They accomplish this by using the right tools, knowing what to look for, and having a deep understanding of both their own domain and the .NET ecosystem. Moreover, they conduct a systematic investigation using proven debugging methodologies. Debugging is not an art—it’s something that can be taught, and this book is going to do exactly that. This book is all abou...

Docker - Environment variable is not picked up

I lost a lot of time today with a stupid issue I had with Docker. I'm playing around with OpenUI (more about it in another post) and wanted to use it in combination with Ollama to limit the costs. In the documentation I found I could do this by setting the OLLAMA_HOST environment variable on startup. I knew I had to use the --env or -e parameter to pass the value. So this was my original attempt: docker run --rm --name openui -p 7878:7878 ghcr.io/wandb/openui --env OLLAMA_HOST=http://host.docker.internal:11434 Unfortunately, when I took a look at the logs, I noticed that the default value was still used. I lost a lot of time until I finally discovered that the environment variable should be set BEFORE the image name. Here is the updated and working command: docker run --rm --name openui -p 7878:7878 --env OLLAMA_HOST=http://host.docker.internal:11434 ghcr.io/wandb/openui More information Set environment variables | Docker Docs GitHub - wandb/openui: OpenUI let's yo...

Custom instructions when using GitHub Copilot

Last week, when talking about a new release of the JetBrains AI Assistant, I noticed a specific feature I really liked: the prompt library. This allows you to tweak the prompts that are used in specific contexts. This made me wonder, does a similar feature exist for GitHub Copilot? Let’s find out… Custom instructions (preview) For GitHub Copilot, a similar feature is in preview: Custom Instructions. With custom instructions you can provide extra context that will be added to your conversations so that Copilot can generate higher quality responses. To use this feature, we first need to enable it because it is still in preview. I’ll show you how to do this using Visual Studio (check the link at the bottom of this post to see the instructions for VS Code). Open Visual Studio (make sure you have the latest version installed). Go to Tools -> Options. Search for custom instructions. Select the checkbox for (Preview) Enable custom instructions to be loaded from .g...
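For context: the instructions themselves live in a .github/copilot-instructions.md file in the root of your repository. A hypothetical example of its content:

We use C# 12 and .NET 8 in this repository.
Always use xUnit for unit tests.
Follow the standard .NET naming conventions.
Prefer async/await over blocking calls.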

The downside of hiring for cultural fit

For years, hiring for cultural fit has been a cornerstone of my recruitment strategy. The idea is simple: rather than focusing only on skills, I focused on finding people who aligned with our company's values, norms, and practices to create a harmonious and productive work environment. This also means that I put a lot of focus on building a strong culture inside my team. Cultural fit However, when listening to Malcolm Gladwell and Adam Grant on the WorkLife podcast, I had to rethink this approach, as it highlighted the potential downsides of overemphasizing cultural fit. In the podcast, Adam says that startups whose founders put culture fit first are dramatically less likely to fail. So culture fit clearly wins in terms of startups surviving and then going public. But studies have shown that after their IPO, these ‘culture fit’ firms grow at slower rates. He thinks that what is happening is that early on, when you have a really clear mission, it's very ...

JetBrains AI Assistant - Ollama support

I talked about Ollama before as a way to run a Large Language Model (LLM) locally. This opens the door to trying out multiple models at a low(er) cost (although also with lower performance) and could be interesting if you are not allowed to share any data with an AI provider; for example, you are a developer but your employer doesn’t allow you to use AI tools for that reason. If this is a use case that is relevant for you, then I have some good news: with the latest version of the JetBrains AI Assistant (available in JetBrains Rider but also in other IDEs) you can now use Ollama as a local model provider. Let me show you how to use this: Open JetBrains Rider (or any other IDE that integrates the JetBrains AI Assistant). Hit Ctrl-Alt-S or go to the Settings through the Settings icon at the top right. Go to the AI Assistant section under Tools. Check the Enable Ollama checkbox. A warning message appears about Data Sharing with Third-Party AI Service Providers. Click OK to con...

Ollama - Unable to locate runners

I'm a big fan of Ollama as a way to try out and run a large language model locally. Today I got into trouble when I tried to connect to Ollama. When I tried to run Ollama through ollama serve, I got the following error message: time=2024-12-02T21:15:55.398+01:00 level=ERROR source=common.go:279 msg="empty runner dir" Error: unable to initialize llm runners unable to locate runners in any search path I was able to fix the issue by going to the AppData\Local\Ollama folder. There, inside updates, I found a new(er) version that I installed manually by executing OllamaSetup.exe. After the setup completed, Ollama was running again as expected. More information Ollama

Visual Studio 2022 17.12 - Show inline return values while debugging

With the 17.12 version of Visual Studio 2022 comes a feature that I had been waiting on for a long time (and by long I really mean long). Of course you are wondering what feature I'm talking about. Let me first set the scene by showing you the class I want to debug: I created a small Calculator example. Notice that I'm using 2 different syntaxes (a regular method body and an expression-bodied method). This is not an accidental inconsistency on my side, as you’ll see later. Now, what if I wanted to debug the return values of these functions? Before the latest Visual Studio update, I typically used a temporary variable to inspect the return values, or took a look at the Autos window or the Watch window. With this release, you finally see the return values inline in the editor window. Here is an example where I added the breakpoint at the end of the function: Unfortunately this doesn’t work (yet?) when using the expression-bodied method syntax; this is because ...
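The class itself is shown as an image in the original post; a minimal reconstruction (my own sketch, not the author's exact code) could look like this:

// Hypothetical reconstruction of the Calculator class from the post.
public class Calculator
{
    // Regular method body: the new inline return value display works here.
    public int Add(int a, int b)
    {
        return a + b;
    }

    // Expression-bodied method: same logic, different syntax; inline return
    // values do not (yet) show up for this form.
    public int Multiply(int a, int b) => a * b;
}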