
Posts

Showing posts from 2024

Multi-model support in GitHub Copilot

Today I want to point out a new feature that was recently introduced in GitHub Copilot Chat: support for multiple large language models. It is now possible to choose between different models, so you can pick the one that aligns best with the needs of your specific project. The models supported today are GPT-4o, o1, o1-mini and Claude 3.5 Sonnet.

Remark: To be exact, I have to point out that GitHub Copilot was already leveraging multiple LLMs for different use cases.

Let me show you how this feature works…

VS Code
- Open GitHub Copilot Chat inside VS Code
- Click on the ‘Pick model’ dropdown and select a different model

That’s it!

Visual Studio
- Open GitHub Copilot Chat inside Visual Studio
- Click on the ‘Pick model’ dropdown and select a different model

That’s it!

Remark: I noticed that the Claude 3.5 Sonnet model is not yet available inside Visual Studio.

More information
https://github.blog/changelog/2024-09-19-

.NET Conf 2024 is almost here!

Make sure you are well rested, as over the next three days you'll need all your energy to learn about the latest and greatest in .NET at .NET Conf 2024, a free virtual event from November 12 until November 14. This year you’ll discover the newest features and enhancements in .NET 9, including cloud-native development, AI integration, and performance improvements.

Join me and many others to learn from the .NET team members and the broader .NET community. Don't forget to collect your free digital swag, enter your details to win a prize pack from the sponsors, or participate in the challenge to also win a prize.

More information
.NET Conf 2024
What's new in .NET 9 | Microsoft Learn

Web Deploy - WDeployConfigWriter issue

At one of my customers, we are still (happily) using Microsoft Web Deploy to deploy our web applications to IIS. This works great and is all nicely automated and integrated in our Azure DevOps pipelines. That is, until the moment it no longer works, which is what happened last month when trying to deploy one of our projects. A look at the Web Deploy logs showed us the following error message:

Web deployment task failed. ((2/11/2024 15:00:08) An error occurred when the request was processed on the remote computer.)
(2/11/2024 15:00:08) An error occurred when the request was processed on the remote computer. Unable to perform the operation. Please contact your server administrator to check authorization and delegation settings.

Time to contact the server administrator... Unfortunately, that’s my team in this case. So let’s investigate. We logged in on the web server and checked the event logs. There we got some extra details:

A tracing deployment agent exception occurred that was propagat

Embracing change requests: a mindset shift

In software development, change requests (CRs) often get a bad reputation. As developers and architects, it can feel frustrating to have to redesign and change existing features (especially if the change request has a big impact on the existing system). However, we should see them as a positive sign of a product's success, evolution, and continued relevance. Let’s explore why…

Reason 1 - Our system is used!

The fact that we get a CR means that at least someone tried our system, and even better, they see value in using it further because they want to improve it. When users ask for changes, it's because they're actively engaging with the software. They’re uncovering real-world use cases and scenarios that may not have been anticipated during the initial design. This feedback loop confirms that the software is doing something valuable, and that users believe it can do even more.

Reason 2 – We have learned something!

Software, by its nature, is built to be flexible and adaptabl

Applying the ‘Wisdom of the crowd’ effect in software development

I’m currently reading Noise: A Flaw in Human Judgment, the latest book by Daniel Kahneman, who also wrote Thinking, Fast and Slow. I’m only halfway through the book, but in one chapter the authors talk about an experiment done by two researchers, Edward Vul and Harold Pashler, where they gave a person a specific question not once but twice. The hypothesis was that the average of the two answers would, on average, be closer to the truth than each answer independently. And indeed, they were right.

One knows more than one

It turns out that this is related to the wisdom-of-crowds effect: if you take the average of a number of independent answers from different people, it typically leads to a more correct answer. I had never heard about this effect before, but it turns out that I have been applying this principle for a long time, based on something I discovered in the Righting Software book by Juval Löwy: the broadband estimation technique. This technique allows you to estimate the implementation effort for a c
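The averaging idea behind the wisdom-of-crowds effect is easy to sketch. Below is a small, hypothetical Python example (the true value and the individual guesses are made up for illustration) showing how the mean of several independent estimates tends to land closer to the true value than the individual guesses do on average:

```python
# Hypothetical example: independent estimates of the same quantity
# (e.g. "how many hours will this feature take?").
true_value = 100

# Independent answers from different people (made-up numbers)
estimates = [80, 130, 95, 120, 90]

# The "crowd" answer is simply the average
average = sum(estimates) / len(estimates)

# Compare the error of the averaged answer with the average error
# of the individual answers
crowd_error = abs(average - true_value)
individual_errors = [abs(e - true_value) for e in estimates]
mean_individual_error = sum(individual_errors) / len(individual_errors)

print(crowd_error)            # 3.0  -> the crowd answer is off by 3
print(mean_individual_error)  # 17.0 -> individuals are off by 17 on average
```

Of course a toy example proves nothing by itself; the interesting point from the book is that this even works with repeated answers from the same person.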

.NET 8 upgrade - error NETSDK1045: The current .NET SDK does not support targeting .NET 8.0.

A colleague asked me to create a small fix in an existing library. I implemented the fix and decided to take the occasion to upgrade to .NET 8 as well. How hard can it be…

Turns out that this was harder than I thought. After upgrading the target framework moniker to .NET 8, the build started to fail with the following (cryptic) error message:

C:\Program Files\dotnet\sdk\6.0.407\Sdks\Microsoft.NET.Sdk\targets\Microsoft.NET.TargetFrameworkInference.targets(144,5): error NETSDK1045: The current .NET SDK does not support targeting .NET 8.0. Either target .NET 6.0 or lower, or use a version of the .NET SDK that supports .NET 8.0.

.NET 8 was certainly installed on this machine, so that could not be the issue. Then I took a second look at the error message and noticed something: the compiler was using the .NET 6 SDK, although the application itself was configured to use .NET 8. Of course! Now I remembered: in this project I was using a global.json file. Through this file you c
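For reference, a global.json that pins an older SDK is exactly what produces this symptom. A minimal sketch of one that allows the .NET 8 SDK instead (the version number is illustrative, not from the original post) could look like this:

```json
{
  "sdk": {
    "version": "8.0.100",
    "rollForward": "latestFeature"
  }
}
```

The rollForward setting lets the resolver pick a newer feature band of the same major version, so the build no longer falls back to the pinned .NET 6 SDK.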

SQL Server - Use Table-Valued Parameters to construct an IN statement

A colleague created a stored procedure that returns some data from a specific table. Nothing special, you would think, and you are right. The only reason we were using a stored procedure here is that we had a very specific requirement: every attempt to read data from this specific table should be logged. Here is a simplified version of the stored procedure he created:

What I want to talk about in this post is the usage of a (comma-separated) string parameter that is used to construct the filter criteria for the query.

Remark: This version of the stored procedure is already better than the original version, which was using dynamic SQL to construct an IN clause:

Using Table-Valued Parameters

We can further improve the procedure above by using a table-valued parameter. To use table-valued parameters instead of comma-separated strings in your stored procedure, you can follow these steps:

Step 1: Create a Table-Valued Parameter Type

First, you need to create a table-valued
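The steps above can be sketched in T-SQL. Note that the table and column names below (Products, IdList) are hypothetical stand-ins, since the original table is not shown here:

```sql
-- Step 1: create a table-valued parameter type
CREATE TYPE dbo.IdList AS TABLE
(
    Id INT NOT NULL PRIMARY KEY
);
GO

-- Step 2: use the type as a READONLY parameter and JOIN against it
-- instead of building an IN clause from a comma-separated string
CREATE PROCEDURE dbo.GetProducts
    @Ids dbo.IdList READONLY
AS
BEGIN
    SET NOCOUNT ON;

    SELECT p.Id, p.Name
    FROM dbo.Products AS p
    INNER JOIN @Ids AS i ON i.Id = p.Id;
END
GO

-- Step 3: calling the procedure from T-SQL
DECLARE @Ids dbo.IdList;
INSERT INTO @Ids (Id) VALUES (1), (2), (3);
EXEC dbo.GetProducts @Ids = @Ids;
```

Besides being safer than dynamic SQL, this also gives the optimizer a real table to join against instead of a string it has to split.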

EF Core–Read and write models

Today I was working with a team that was implementing a CQRS architecture. CQRS (Command Query Responsibility Segregation) is a design pattern that separates the responsibilities of reading and writing data into distinct models. The idea is to use one model to handle commands (which modify data) and another model to handle queries (which retrieve data). This separation allows for better scalability, performance optimization, and flexibility, as the read and write operations can be independently optimized or scaled based on the specific needs of the system.

After creating a read model for a specific table in the database, EF Core started to complain and returned the following error message:

System.InvalidOperationException: Cannot use table 'Categories' for entity type 'CategoryReadModel' since it is being used for entity type 'Category' and potentially other entity types, but there is no linking relationship. Add a foreign key to 'CategoryReadModel' on the
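One possible way to work around this error (a sketch under assumptions, not necessarily the fix the team ended up with) is to map the read model to the same table via ToView, so EF Core treats it as a read-only, migration-excluded mapping and no longer demands a linking relationship between the two entity types. The model names below mirror the ones in the error message:

```csharp
using Microsoft.EntityFrameworkCore;

public class AppDbContext : DbContext
{
    public DbSet<Category> Categories => Set<Category>();
    public DbSet<CategoryReadModel> CategoryReadModels => Set<CategoryReadModel>();

    protected override void OnModelCreating(ModelBuilder modelBuilder)
    {
        // The write model owns the table
        modelBuilder.Entity<Category>().ToTable("Categories");

        // The read model maps to the same table name, but through ToView:
        // it is keyless, excluded from migrations, and only used for queries
        modelBuilder.Entity<CategoryReadModel>()
            .HasNoKey()
            .ToView("Categories");
    }
}
```

This fits the CQRS split nicely: the read model becomes a pure query-side projection, while Category remains the only entity that EF Core tracks and migrates.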

Architecting Your Team Setup: Aligning Teams with Software Design (and Vice Versa)

As software architects, we tend to focus heavily on the design of the systems we build: how the various components interact, the data flow, and the technology choices. But architecture doesn’t exist in a vacuum. One often overlooked element is how the structure of our teams can (and should) align with the architecture itself. The relationship between team setup and software design is symbiotic: your team’s structure influences the system’s architecture, and the architecture shapes how teams need to work together. Getting this alignment right can be the key to efficiency, scalability, and long-term success.

Why Team Setup Matters in Software Architecture

There’s an adage known as Conway’s Law, which states:

Any organization that designs a system (defined broadly) will produce a design whose structure is a copy of the organization's communication structure.

In simple terms, the way your teams are structured will be reflected in your system’s architecture. If your teams d

Implementing an OAuth client credentials flow with ADFS–Part 4–Understanding and fixing returned error codes

It looks like most of the world has made the switch to Microsoft Entra (Azure Active Directory). However, one of my clients is still using ADFS. Unfortunately, there isn't much information left on how to get an OAuth flow up and running in ADFS; most of the links I found point to documentation that no longer exists. Hence this short blog series to show you end-to-end how to get an OAuth Client Credentials flow configured in ADFS.

Part 1 - ADFS configuration
Part 2 – Application configuration
Part 3 – Debugging the flow
Part 4 (this post) – Understanding and fixing returned error codes

In the last post we updated our configuration so we could see any errors returned and are able to debug the authentication flow. In the first two posts I showed you everything that was needed to get up and running. The reality was that it took some trial and error to get everything working. In this post I share all the errors I got along the way and how I fixed them.

IDX10204: Unabl

Implementing an OAuth client credentials flow with ADFS–Part 3–Debugging the flow

It looks like most of the world has made the switch to Microsoft Entra (Azure Active Directory). However, one of my clients is still using ADFS. Unfortunately, there isn't much information left on how to get an OAuth flow up and running in ADFS; most of the links I found point to documentation that no longer exists. Hence this short blog series to show you end-to-end how to get an OAuth Client Credentials flow configured in ADFS.

Part 1 - ADFS configuration
Part 2 – Application configuration
Part 3 (this post) – Debugging the flow

In the first two posts I showed you the happy path. So if you did everything exactly as I showed, you should end up with a working Client Credentials flow in ADFS. Unfortunately, there are a lot of small details that matter, and if you make one mistake you’ll end up with a wide range of possible errors. In today’s post, I focus on the preparation work to help us debug the process and better understand what is going on.

Updating your OAuth Confi

Implementing an OAuth client credentials flow with ADFS–Part 2–Application configuration

It looks like most of the world has made the switch to Microsoft Entra (Azure Active Directory). However, one of my clients is still using ADFS. Unfortunately, there isn't much information left on how to get an OAuth flow up and running in ADFS; most of the links I found point to documentation that no longer exists. Hence this short blog series to show you end-to-end how to get an OAuth Client Credentials flow configured in ADFS.

Part 1 - ADFS configuration
Part 2 (this post) – Application configuration

After doing all the configuration work in ADFS, I’ll focus today on the necessary work that needs to be done on the application side.

Configuring the API

We’ll start by configuring the API part. First, create a new ASP.NET Core API project:

dotnet new webapi --use-controllers -o ExampleApi

Add the ‘Microsoft.AspNetCore.Authentication.JwtBearer’ package to your project:

dotnet add package Microsoft.AspNetCore.Authentication.JwtBearer

Add the auth
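To give an idea of where those packages end up, a minimal JWT bearer setup for the API's Program.cs might look like the sketch below. The ADFS host name and audience value are placeholders I made up for illustration; use the values from your own ADFS configuration:

```csharp
using Microsoft.AspNetCore.Authentication.JwtBearer;

var builder = WebApplication.CreateBuilder(args);

builder.Services.AddControllers();

builder.Services
    .AddAuthentication(JwtBearerDefaults.AuthenticationScheme)
    .AddJwtBearer(options =>
    {
        // ADFS exposes its OpenID Connect metadata under the /adfs path
        options.Authority = "https://adfs.example.com/adfs";

        // Must match the identifier configured for the API in ADFS
        options.TokenValidationParameters.ValidAudience = "api://exampleapi";
    });

builder.Services.AddAuthorization();

var app = builder.Build();

app.UseAuthentication();
app.UseAuthorization();
app.MapControllers();

app.Run();
```

With Authority set, the middleware downloads the ADFS signing keys automatically, so incoming access tokens can be validated without any manual key handling.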

Implementing an OAuth client credentials flow with ADFS–Part 1 - ADFS configuration

It looks like most of the world has made the switch to Microsoft Entra (Azure Active Directory). However, one of my clients is still using ADFS. Unfortunately, there isn't much information left on how to get an OAuth flow up and running in ADFS; most of the links I found point to documentation that no longer exists. Hence this short blog series to show you end-to-end how to get an OAuth Client Credentials flow configured in ADFS.

In today's post, I focus on the ADFS configuration. To avoid unnecessary complexity, I’ll show the steps using one of the simplest OAuth flows: the Client Credentials flow.

OAuth Client Credentials flow

The OAuth Client Credentials flow is an authentication method used primarily for machine-to-machine (M2M) communication. In this flow, an application (the "client") requests an access token directly from an OAuth 2.0 authorization server using its own credentials, without involving a user. This access token allows the client to acces
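On the wire, the Client Credentials flow boils down to a single POST to the token endpoint. As a rough sketch of what the request to ADFS looks like (the host and all values in angle brackets are placeholders, and the exact parameters depend on your ADFS version and configuration):

```
POST https://adfs.example.com/adfs/oauth2/token
Content-Type: application/x-www-form-urlencoded

grant_type=client_credentials
&client_id=<your-client-id>
&client_secret=<your-client-secret>
&resource=<your-api-identifier>
```

A successful response is a JSON document containing the access_token, which the client then sends to the API in the Authorization: Bearer header.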

JWT decoder in Visual Studio

So far I have always used JWT.io to decode my JWT tokens. But today I discovered that I don't have to leave my Visual Studio IDE anymore. Before I show you this feature, let’s briefly summarize what JWT tokens are.

JWT what?

JWT (JSON Web Token) is a compact, URL-safe token format used for securely transmitting information between parties. It is commonly used in authentication and authorization scenarios, especially in web and mobile applications. A JWT consists of three parts:

- Header: Contains metadata about the token, such as the signing algorithm used (e.g., HS256 or RS256).
- Payload: Contains the claims, which are statements about the user or other data (e.g., user ID, roles). This data is not encrypted, so it should not include sensitive information.
- Signature: A cryptographic signature that ensures the token has not been tampered with. It is created by encoding the header and payload, then signing them using a secret key or public/private key pair.
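The three-part structure also explains why any tool can decode a JWT without knowing the secret: the header and payload are just base64url-encoded JSON. A small Python sketch (with a hand-crafted token and a fake signature, since we only decode, not verify):

```python
import base64
import json

def decode_jwt_part(part: str) -> dict:
    """Base64url-decode one segment of a JWT (header or payload)."""
    padded = part + "=" * (-len(part) % 4)  # restore the stripped '=' padding
    return json.loads(base64.urlsafe_b64decode(padded))

# Build an example token: header.payload.signature
header = base64.urlsafe_b64encode(
    json.dumps({"alg": "HS256", "typ": "JWT"}).encode()).rstrip(b"=").decode()
payload = base64.urlsafe_b64encode(
    json.dumps({"sub": "1234567890", "name": "John Doe"}).encode()).rstrip(b"=").decode()
token = f"{header}.{payload}.fakesignature"

h, p, _sig = token.split(".")
print(decode_jwt_part(h))  # {'alg': 'HS256', 'typ': 'JWT'}
print(decode_jwt_part(p))  # {'sub': '1234567890', 'name': 'John Doe'}
```

Decoding is trivial; the security comes entirely from verifying the signature, which is what your authentication middleware does for you.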

Running a fully local AI Code Assistant with Continue–Part 6–Troubleshooting

In a previous post I introduced you to Continue, in combination with Ollama, as a way to run a fully local AI Code Assistant.

Remark: This post is part of a bigger series. Here are the other related posts:
Part 1 – Introduction
Part 2 – Configuration
Part 3 – Editing and Actions
Part 4 – Learning from your codebase
Part 5 – Read your documentation
Part 6 (this post) – Troubleshooting

Although Continue really looks promising, I stumbled on some hurdles along the way. Here are some tips in case you encounter issues:

Tip 1 - Check the Continue logs

My first tip is to always check the logs. Continue provides good logging inside the IDE, so go to the output tab and switch to the Continue source to get the generated output:

Tip 2 – Check the Continue LLM logs

Next to the output of Continue itself, you can find all the LLM-specific logs in the Continue LLM Prompt/Conversation output. So don’t forget to check that output as well:

Tip 3 – Be patient

Running a fully local AI Code Assistant with Continue–Part 5–Read your documentation

In a previous post I introduced you to Continue, in combination with Ollama, as a way to run a fully local AI Code Assistant.

Remark: This post is part of a bigger series. Here are the other related posts:
Part 1 – Introduction
Part 2 – Configuration
Part 3 – Editing and Actions
Part 4 – Learning from your codebase
Part 5 (this post) – Read your documentation

Today I want to continue by having a look at how Continue can scrape your documentation website and make the content accessible inside your IDE.

The @docs context provider

To use this feature you need to use the @docs context provider: Once you type @docs, you already get a long list of available documentation: This is because Continue offers a selection of pre-indexed documentation sites out of the box. (You can find the full list here.) If you now ask a question, the indexed documentation is used to answer it: You can see the context used by expanding the context items section:

Index you
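As a sketch of how a custom documentation site can be registered, Continue's config.json accepts a docs section along these lines (the title and URL below are placeholders, and the exact schema may differ between Continue versions, so check the Continue documentation for your release):

```json
{
  "docs": [
    {
      "title": "My project docs",
      "startUrl": "https://docs.example.com/"
    }
  ]
}
```

After adding an entry like this, the site is crawled and indexed locally, and then shows up in the @docs list alongside the pre-indexed documentation.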