
Posts

Showing posts from 2025

GitHub Copilot–Code reviews in VSCode

After writing a post last week about the GitHub Copilot code review feature in Visual Studio, I got a question asking whether the same functionality is available in VSCode. Hence this post, to show you how to use this feature in VSCode as well. Before you can use it, make sure that Editor preview features are enabled in your GitHub settings: Review changes Reviewing your changes works quite similarly to Visual Studio: Go to the Source Control tab in VSCode. Hover over Source Control in the sidebar, and then click on the Copilot code review – Uncommitted Changes button: Copilot will now review your changes (this can take some time, so be patient). When it has comments, they are shown inline in the file: When no remarks are found during the review, you get a pop-up message. You can also see the comments in the Problems tab: Remark: Although this is a nice new feature, I had some mixed results when applying it. Sometimes it gave very good feedback, but o...

Using legacy text encodings in .NET Core

Upgrading an (old) .NET application to .NET Core turned into a learning experience when we got the following error message after the upgrade was done: System.NotSupportedException : No data is available for encoding 1252. For information on defining a custom encoding, see the documentation for the Encoding.RegisterProvider method. TL;DR: The application was using the older Windows-1252 text encoding, which .NET Core doesn't support out of the box, causing the error above. Introduction to Text Encodings Text encoding is a method used to convert text data into a format that can be easily processed by computers. Computers inherently understand numbers, not characters, so text encoding maps characters to numerical values. This process ensures that text data can be stored, transmitted, and interpreted correctly across different systems and platforms. There are various text encodings, each designed to support different sets of characters. Some c...
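The fix can be sketched in a few lines. Assuming the System.Text.Encoding.CodePages NuGet package has been added to the project, registering its provider at startup makes the legacy code pages available again:

```csharp
using System.Text;

// .NET Core only ships the Unicode encodings by default. The legacy
// Windows code pages live in the System.Text.Encoding.CodePages package
// (assumption: the package is referenced by the project).
Encoding.RegisterProvider(CodePagesEncodingProvider.Instance);

// After registration this no longer throws NotSupportedException:
Encoding win1252 = Encoding.GetEncoding(1252);
```

Call RegisterProvider once, as early as possible (for example at the top of Program.cs), before any code path requests the encoding.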

GitHub Copilot–Code reviews

The integration of Copilot inside Visual Studio and VS Code keeps expanding. With the 17.13 release of Visual Studio, GitHub Copilot can now review your changes before you commit them. It will examine your changes and provide suggestions inline. Enabling code reviews As this is still a preview feature, you first need to have some feature flags turned on: • Tools > Options > Preview Features > Pull Request Comments • Tools > Options > GitHub > Copilot > Source Control Integration > Enable Git preview features. Using code reviews Now when you make some changes, you can ask GitHub Copilot to review them before they are committed. To do so, go to the Git Changes window and click on the small Review changes with Copilot icon: If the Copilot reviewer has comments, you get a message: You can click on the link to go to the specific comment: Nice!

GitHub Copilot–Code referencing

One of the risks of using tools like GitHub Copilot is that we start to over-rely on them and blindly accept any suggestion or code change they want to make. This is not so different from what we had before with the 'copy-paste' programmer who just copied the first snippet they could find online. But with the integration of AI inside our IDEs, it has become a lot easier to just accept these changes without questioning the quality of the suggestion or its source, especially as these tools can generate bigger and bigger code blocks. This is where Code Referencing can help. It introduces a filter that detects when code suggestions match public code on GitHub, providing you with valuable context to make more informed decisions about the code you are about to incorporate into your projects. Let’s give it a try! Enable Code Referencing Before we can use this feature, we have to check if it is enabled. In the upper-right corner of any page on GitHub, click your prof...

Appreciate, don’t evaluate

I like to read a lot. Next to all the fiction novels, I typically go back and forth between technical and leadership books. The latest leadership book I’m going through is ‘Leadership is language’ by L. David Marquet, author of ‘Turn the ship around!’. There are a lot of takeaways from this book, but today I want to focus on one aspect that resonated with me: Appreciate, don’t evaluate. As people, myself included, we kind of struggle with giving other people appreciation for their work. We either forget to do it, or when there is acknowledgement, it is typically brief. An example from the book that you may recognize yourself: “Great job! Now here’s what I’d change . . .” Not only is the acknowledgement brief, it is immediately followed by criticism. David explains that there are several problems with this kind of ‘celebration’: It does not rest long enough on the accomplishment. It does not invite the person to tell their story,...

GitHub Copilot–Reusable prompt files

Yesterday I talked about custom instructions as a general way to influence the behavior of GitHub Copilot in multiple contexts (coding, testing, reviewing, ...) in VS Code. With prompt files you can go one step further and provide specific instructions for a task. Let’s give this feature a try. Activate the prompt files feature First we need to activate this feature: Go to File –> Preferences –> Settings. Search for Chat and look for the Prompt files setting. Click on the Edit in settings.json link. The settings.json file is opened with a new chat.promptFiles setting already added. Change the value to true. Now VSCode will look for prompt files in the .github/prompts folder. Creating a prompt file Now we are ready to add one or more prompt files to the .github/prompts folder. For every task you should create a separate Markdown prompt file. Remark: It is possible to reference other files inside your Markdown prompt file as well...
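For reference, after the steps above the relevant part of settings.json boils down to a single entry (a sketch; the setting name is the one VS Code adds for you):

```json
{
  // Tell VS Code to look for reusable prompt files in .github/prompts
  "chat.promptFiles": true
}
```

Note that settings.json accepts comments (it is parsed as JSONC), so the inline comment above is valid there.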

GitHub Copilot - Custom Instructions

I talked about custom instructions before as a way to tweak the Copilot prompt when interacting with GitHub Copilot. At that time it was rather limited and you didn’t have much control over how it was applied. The way it was handled was through a custom Markdown copilot-instructions.md file. This feature has further evolved, and now you can specify instructions for multiple purposes: Code-generation instructions - provide context specific to generating code. You can specify those instructions directly in the settings, or in a Markdown file in your workspace. Test-generation instructions - provide context specific to generating tests. You can specify those instructions directly in the settings, or in a Markdown file in your workspace. Code review instructions - provide context specific to reviewing the current editor selection. You can specify those instructions directly in the settings, or in a Markdown file in your workspace. Commit m...
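As a sketch of the settings-based variant, the per-purpose instructions can be declared in settings.json roughly like this (the exact setting ids may change while the feature evolves, and the file path and instruction texts below are hypothetical examples):

```json
{
  "github.copilot.chat.codeGeneration.instructions": [
    { "text": "Always add guard clauses instead of nested if statements." },
    { "file": "docs/coding-guidelines.md" }
  ],
  "github.copilot.chat.testGeneration.instructions": [
    { "text": "Use xUnit and follow the Arrange-Act-Assert pattern." }
  ]
}
```

Each entry is either an inline `text` instruction or a `file` reference to a Markdown file in your workspace, matching the two options described above.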

GitHub Copilot Agent mode (Preview)

Agents are the next big thing in GenAI, and with the introduction of Agent mode last week, agentic AI is now coming to GitHub Copilot as well. In agent mode, Copilot can reason and iterate on its own code. It can automatically infer and execute subtasks needed to complete the main task you requested. At the moment of writing this post, the new agent mode is still in preview, so you need to download the VS Code Insiders version available here: Download Visual Studio Code Insiders Once the installation has completed, you can activate the agent mode through the Copilot settings: Go to File –> Preferences –> Settings: Search for ‘chat agent’ and make sure that the Agent mode checkbox is checked: OK, we are good to go. Open the Copilot Chat window and switch to the Edits view: At the bottom of the Edits view, switch from the Edit mode to the Agent mode: Similar to Copilot edits mode, we select the files that should be included in the...
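If you prefer editing settings.json directly instead of using the checkbox, the toggle comes down to a single setting (a sketch; the setting id may differ between Insiders builds, so verify it via the Settings UI described above):

```json
{
  // Enable the preview Agent mode in Copilot Chat
  "chat.agent.enabled": true
}
```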

Chat with your GitHub repository

One way to ask questions about your GitHub repository is through GitHub Copilot Chat inside the GitHub website. Just click on the Copilot icon next to the Search field and a conversation window pops up where you can start asking questions about your repository. Although this works quite nicely, it limits you to what GitHub Copilot has to offer. The good news is that there are some tools that allow you to use your GitHub data with other LLMs. How does it work? Each of these tools works in a similar way: they read through all available data (source code, documentation, PR info, …) in your GitHub repo to build up one big context file. For example, if I feed one of my repositories into Uithub, I get this file back: This contains the full file tree, all the documentation, source code and so on… This file can then be given to the LLM of your choice (preferably one that supports a large context window) to ask questions about it. Right now I’m aware of the following tools that do this:...

ASP.NET Core - Automatically add middleware through IStartupFilter

While browsing through the ASP.NET Core documentation, I accidentally stumbled upon the IStartupFilter interface. I had no idea what it did, but it triggered my interest. Let's find out together in this post what this interface does and when it can be useful. What is IStartupFilter? IStartupFilter is an interface that allows you to intercept and modify how the application's request pipeline is built. It's particularly useful when you need to: Ensure certain middleware runs before any other middleware Add middleware consistently across multiple applications Create reusable middleware configurations Modify the order of middleware registration Here's the interface definition: How IStartupFilter Works When ASP.NET Core builds the middleware pipeline, it executes all registered IStartupFilter instances in the order they were registered. Each filter can add middleware before and after calling the next delegate, creating a wrapper around the exis...
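The interface definition referenced above is a one-liner, and a minimal filter shows how the wrapping works. RequestTimingStartupFilter is a hypothetical example, not taken from the docs:

```csharp
using System;
using Microsoft.AspNetCore.Builder;
using Microsoft.AspNetCore.Hosting;

// The interface, as defined in Microsoft.AspNetCore.Hosting.Abstractions:
// public interface IStartupFilter
// {
//     Action<IApplicationBuilder> Configure(Action<IApplicationBuilder> next);
// }

// A sketch of a filter that guarantees a piece of middleware runs first.
public class RequestTimingStartupFilter : IStartupFilter
{
    public Action<IApplicationBuilder> Configure(Action<IApplicationBuilder> next)
    {
        return app =>
        {
            // Added before anything registered in Program.cs/Startup.cs,
            // so it wraps the whole pipeline.
            app.Use(async (context, nextMiddleware) =>
            {
                var start = DateTime.UtcNow;
                await nextMiddleware();
                Console.WriteLine($"{context.Request.Path}: {DateTime.UtcNow - start}");
            });

            next(app); // let the rest of the pipeline be built after ours
        };
    }
}

// Registered like any other service, for example:
// builder.Services.AddTransient<IStartupFilter, RequestTimingStartupFilter>();
```

Because the filter wraps the `next` delegate rather than replacing it, every other middleware registration stays intact; the filter only decides what runs around them.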

GitHub Copilot–New models added

The list of available models in GitHub Copilot keeps growing. Whereas last year you could already use GPT-4o, o1, o1-mini and Claude 3.5 Sonnet, now you can also try OpenAI o3 and Google Gemini 2.0 Flash. About Gemini 2.0 Flash The Gemini 2.0 Flash model is a highly efficient large language model (LLM) designed for high-volume, high-frequency tasks. It excels in multimodal reasoning, handling inputs like text, images, and audio, and providing text outputs. With a context window of 1 million tokens, it can process vast amounts of information quickly and accurately. Gemini 2.0 Flash is optimized for speed and practicality, making it ideal for everyday tasks, coding, and complex problem-solving. GitHub Copilot uses Gemini 2.0 Flash hosted on Google Cloud Platform (GCP). When using Gemini 2.0 Flash, prompts and metadata are sent to GCP, which makes the following data commitment: Gemini doesn't use your prompts, or its responses, as data to train its models. About OpenAI o3 ...