

Showing posts from February, 2023

Debugging Visual Studio issues

After upgrading to the latest Visual Studio version, I started to run into problems. Every time I opened Visual Studio, it closed after a few seconds without any message or warning. What is going on? Safe Mode The first thing I tried is running Visual Studio in safe mode using the following command: devenv /safemode This will start Visual Studio in safe mode, loading only the default environment and services. This is especially useful if you suspect that the problem is caused by a rogue extension or plugin. This switch prevents all third-party VSPackages from loading when Visual Studio starts. But alas, this time it didn’t help, so let’s try a different approach. The Activity Log So how else can we find out what is going wrong? A second option we have is to log all Visual Studio activities in the Activity Log. To enable this, run Visual Studio using the following command: devenv /log After Visual Studio has closed, you can find the logged activities at the following loca

Strawberry Shake GraphQL Client–Update client after changing schema

I blogged about the Strawberry Shake GraphQL Client before and today it is my go-to tool in .NET when consuming GraphQL APIs. But I always forget the steps to take when the consumed GraphQL schema has changed. So this is a reminder for myself. Add the CLI tools It can be that the CLI tools are not yet installed. In that case we should run dotnet tool restore first. This will read the tool manifest and install any missing tools: >dotnet tool restore Tool '' (version '12.0.1') was restored. Available commands: dotnet-graphql Restore was successful. Now we can invoke the graphql update command. This will read the .graphqlrc.json configuration file and update the local schema.graphql file. >dotnet graphql update Download schema started. Download schema completed in 1328 ms If we now build our application, the Strawberry Shake client code generation will be triggered and new / updated C# code will be generated.

IIS – Avoid giving file access when using Windows Authentication

In our continuous effort to improve security measures and evolve to a 'least privilege' setup, we started to remove user access to disks on our IIS web servers. For most applications this turned out not to be an issue, but applications using Windows Authentication started to fail. Let’s investigate what is happening and how we can fix this. Anonymous Authentication Let me first explain what is going on for applications that are not using Windows Authentication. These applications are configured to use Anonymous Access and the anonymous user identity is set to a specific user account or the Application Pool identity. This means that only these users need to have access to the folders on disk. Windows Authentication Once we move to Windows Authentication the situation gets more complex. Now the user that is authenticated needs access on disk to be able to access the site. So probably the easiest solution would be to grant the user access to disk (or just give the Every

Applying Postel’s law in ASP.NET Core–Part II

Yesterday I talked about Postel’s law, or the Robustness principle, and how important it is in building evolvable systems. We had a look at how the built-in System.Text.Json serializer handles some scenarios. Today I want to focus on ASP.NET Core model binding. What is model binding? Typically in ASP.NET Core you don’t access the raw HttpContext yourself to extract the JSON from the request body. Instead you let model binders do the hard work. From the documentation: Controllers and Razor Pages work with data that comes from HTTP requests. For example, route data may provide a record key, and posted form fields may provide values for the properties of the model. Writing code to retrieve each of these values and convert them from strings to .NET types would be tedious and error-prone. Model binding automates this process. The model binding system: Retrieves data from various sources such as route data, form fields, and query strings. Provides the data to controll
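To make the idea concrete, here is a minimal sketch of a controller that relies on model binding (the model, route, and controller names are hypothetical examples, not from the original post):

```csharp
// Model binding reads and deserializes the JSON request body for us,
// so we never touch HttpContext.Request.Body directly.
using Microsoft.AspNetCore.Mvc;

public record CreateProductRequest(string Name, decimal Price);

[ApiController]
[Route("api/products")]
public class ProductsController : ControllerBase
{
    [HttpPost]
    public IActionResult Create([FromBody] CreateProductRequest request)
    {
        // By the time we get here, model binding has already read the body
        // and run it through System.Text.Json to build the model instance.
        return Ok(request.Name);
    }
}
```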

Applying Postel’s law in ASP.NET Core–Part I

An important aspect of building robust systems is applying ‘Postel’s law’. Postel's Law was formulated by Jon Postel, an early pioneer of the Internet. The law was really a guideline for creators of software protocols. The idea was that different implementations of the protocol should interoperate. The law is today paraphrased as follows: Be conservative in what you do, be liberal in what you accept from others. It is also called the Robustness principle and it is crucial in building evolvable systems. In this post I want to see how this law applies when building ASP.NET Core Web APIs. A typical use case in ASP.NET Core is the following: A client serializes the model to JSON and sends it over HTTP to our ASP.NET Core server. On the other side, the server gets the message, extracts the body of the request and deserializes it back to a model. It’s this second step specifically I want to focus on: how will ASP.NET Core (or more specifically the System.Text.Json s
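As a small self-contained illustration of the "be liberal in what you accept" side (my own sketch, not code from the post): by default System.Text.Json ignores unknown JSON properties, and with PropertyNameCaseInsensitive enabled the casing of the client's JSON doesn't need to match our C# property names.

```csharp
using System;
using System.Text.Json;

public record Person(string FirstName, string LastName);

public static class PostelDemo
{
    public static void Main()
    {
        var options = new JsonSerializerOptions { PropertyNameCaseInsensitive = true };

        // camelCase names and an extra "nickname" property the server
        // doesn't know about: both are accepted without complaint.
        var json = "{\"firstName\":\"Jane\",\"lastName\":\"Doe\",\"nickname\":\"JD\"}";
        var person = JsonSerializer.Deserialize<Person>(json, options);

        Console.WriteLine($"{person!.FirstName} {person.LastName}"); // Jane Doe
    }
}
```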

C# 11–The scoped keyword

While browsing through the source code of the .NET framework (what else would you do with some free time?) I noticed the usage of the scoped keyword: This triggered my interest as I didn't know that this keyword existed. Time to find out what it does... To explain what this keyword does, I have to talk first about ref struct. C# 8 introduced the ref struct, which was used to implement the Span type. A ref struct guarantees that an instance of the type is allocated on the stack and can’t escape to the managed heap. To guarantee this, strict scoping rules are enforced by the compiler. More about ref struct types here: ref struct types - C# reference | Microsoft Learn When you pass references to methods on a ref struct, the compiler ensures that the variables you refer to don't go out of scope before the struct itself. Otherwise, the ref struct might refer to out-of-scope variables: The code above results in a compiler error: The fix is to introduce the scop
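The original code screenshots aren't reproduced here, so here is a minimal sketch of my own (requires C# 11) showing the keyword in a compiling example: scoped on a span parameter promises the compiler the reference won't escape the method, so callers can safely pass stack-allocated memory.

```csharp
using System;

public static class ScopedDemo
{
    // 'scoped' declares that 'values' cannot escape this method,
    // which is why a stackalloc'd span may be passed in.
    public static int Sum(scoped ReadOnlySpan<int> values)
    {
        int total = 0;
        foreach (var value in values)
            total += value;
        return total;
    }

    public static void Main()
    {
        Span<int> numbers = stackalloc int[] { 1, 2, 3, 4 };
        Console.WriteLine(Sum(numbers)); // 10
    }
}
```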

.NET 7 - The type initializer for 'NUnit.Engine.Services.RuntimeFrameworkService' threw an exception

After installing .NET 7 on my development machine, the NUnit tests for some of my projects started to fail. A look at the test logs showed the following error message: ========== Starting test discovery ========== NUnit Adapter Test discovery starting Exception System.TypeInitializationException, Exception thrown discovering tests in C:\Users\bawu\source\repos\NationalRegisterNumber\NationalRegisterNumberTests\bin\Debug\net48\NationalRegisterNumber.UnitTests.dll The type initializer for 'NUnit.Engine.Services.RuntimeFrameworkService' threw an exception.    at NUnit.Engine.Services.RuntimeFrameworkService.ApplyImageData(TestPackage package)    at NUnit.Engine.Services.RuntimeFrameworkService.SelectRuntimeFramework(TestPackage package)    at NUnit.Engine.Runners.MasterTestRunner.GetEngineRunner()    at NUnit.Engine.Runners.MasterTestRunner.Explore(TestFilter filter)    at NUnit.VisualStudio.TestAdapter.NUnitEngine.NUnitEngineAdapter.Explore(TestFilter

Your first application version will s*ck

Pardon my French, but I cannot state it in any other way; the first version of your application will s*ck. Allow me to explain... At the beginning of a software project a lot of things are unclear. You don't know the business, requirements are vague (and in fact there is no such thing as a requirement), the architectural qualities need to be defined, and so on… And let us not even start talking about the ‘unknown unknowns’. If you use all this vague information to create a project plan, a signed-off requirements list, detailed use cases, your solution architecture and I probably forgot some other things, it should not be a surprise that the first release of your application will have room for improvement. In a traditional project-based waterfall approach, you are in big trouble because there is typically only one application version, delivered at the end. So users end up with a sub-optimal solution. Let the change requests come to arrive at something that is at least usabl


In our continuous effort to improve the security of the solutions we build, we activated HSTS at one of our clients. However, I noticed that there was a lack of understanding of what HSTS exactly is and how it helps to improve security. Hence this blog post. What is HSTS? Let’s ask Wikipedia: HTTP Strict Transport Security (HSTS) is a policy mechanism that helps to protect websites against man-in-the-middle attacks such as protocol downgrade attacks and cookie hijacking. It allows web servers to declare that web browsers (or other complying user agents) should automatically interact with it using only HTTPS connections. In short, HSTS tells a browser to only access a site through HTTPS. Enabling HSTS in IIS Let me first show you how to enable it in IIS before I get into more detail. Open IIS (InetMgr) Click on the website where you want to activate HSTS. Remark: HSTS is always activated at the site level. On the Manage Website se
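Under the hood, HSTS is nothing more than a response header. Once enabled, the site sends something like the following over HTTPS (the max-age value here is a common one-year example, not a required value):

```http
Strict-Transport-Security: max-age=31536000; includeSubDomains
```

On seeing this header, a complying browser will rewrite all future http:// requests to that host to https:// for the given number of seconds, without ever hitting the network over plain HTTP.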

Github–Secret scanning

One of the things you certainly want to avoid as a developer is accidentally checking in secrets (passwords, API keys, ...) into your code repository. Such a credential leak can have severe consequences. If you are using GitHub as your source repository and you have a public repository, I have some good news for you. Since December last year, GitHub has made secret scanning available for free for all public repositories. It is not enabled out-of-the-box but it is easy to configure and get up and running for your repo. Let me show you... Enable secret scanning for your public GitHub repository Browse to your public repository in GitHub and click on the Security tab. Click on Secret Scanning on the Security page. Click on the link to the repository settings to bring you to the correct setting on the Settings page. At the bottom of the page, click on the Enable button in the Secret scanning section. That’s it!

Generate a self-signed certificate for .NET Core

A team member contacted me because he could no longer invoke a local service he was developing. I had a look at the error message the API returned: One or more errors occurred. An error occurred while sending the request. The underlying connection was closed: Could not establish trust relationship for the SSL/TLS secure channel. The remote certificate is invalid according to the validation procedure. This was a local service using a self-signed certificate and it turned out that the certificate had expired. Time to create a new self-signed certificate… Create a self-signed certificate using dotnet dev-certs Generating a new self-signed certificate is easy in .NET Core thanks to the built-in support in the dotnet command line tool. Open a command prompt and execute the following command: dotnet dev-certs https -ep c:\users\BaWu\localhost.pfx -p crypticpassword Remark: If the command returns the following response "A valid HTTPS certificate is already p

Azure DevOps– Resize images in the wiki

Today I want to share a small but useful feature I found in the Azure DevOps wiki. Azure DevOps allows you to add an image inside a wiki page using the following markdown syntax: ![Text](URL) The text in the brackets describes the image being linked and the URL points to the image location. By default the image you include using this syntax is included at full size. But did you know it is also possible to resize an included image? To do that, include a space and an equal sign after the URL and then specify both a width and a height: ![Text](URL =WIDTHxHEIGHT) Or only a width: ![Text](URL =WIDTHx) Notice the ‘x’, which is not a typo. More information: Markdown syntax for files, widgets, and wikis - Azure DevOps | Microsoft Learn

Azure DevOps - Filter your work items by swimlane

One of the features of Azure DevOps is the Kanban board. On this board you can visualize the work and how it moves through your team. By default the Kanban board is divided into multiple columns, each representing a state your work item has to go through. This means that the work itself flows from left to right through your board. It is also possible to divide the work horizontally by using swimlanes. You can use swimlanes for any reason to separate work in multiple dimensions. Typical use cases are separating BLOCKED vs non-BLOCKED work, separating high-PRIO from low-PRIO work and so on. Last week I was contacted by a colleague who was wondering if it was possible to query the work items based on the swimlane row they are in. She couldn’t find a ‘swimlane’ value to filter on. Still, it IS possible to filter on swimlane. The correct field name to use is ‘Board lane’, which turned out to be not so obvious:
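For reference, a filter like this can also be expressed in a work item query. A WIQL sketch (the lane name 'Expedite' is a made-up example; the reference name for the 'Board lane' field is, to my knowledge, System.BoardLane):

```sql
SELECT [System.Id], [System.Title]
FROM WorkItems
WHERE [System.BoardLane] = 'Expedite'
```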

Unit testing- Arrange Act Assert

When writing my unit tests I typically use the AAA (Arrange-Act-Assert) pattern. This pattern splits a test into three sections: The Arrange section of a unit test method initializes objects and sets the value of the data that is passed to the method under test. The Act section invokes the method under test with the arranged parameters. The Assert section verifies that the action of the method under test behaves as expected. Here is an example from one of my projects using xUnit: And if it’s a test that expects an exception, I use the Assert.Throws method: What I don't like in the example above is that it violates the AAA pattern. The Assert is already happening in the Act part. While reviewing some code from one of my team members, I noticed he used the following approach to split out the Act and Assert parts when using Assert.Throws: He is using a local function to create this separation. Nice!
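The original code screenshots aren't reproduced here, so here is my own self-contained sketch of the local-function trick (a hand-rolled AssertThrows helper stands in for xUnit's Assert.Throws so the sample compiles without the test framework; the Calculator class is a made-up example):

```csharp
using System;

public class Calculator
{
    public int Divide(int dividend, int divisor) => dividend / divisor;
}

public static class CalculatorTests
{
    // Minimal stand-in for xUnit's Assert.Throws<T>.
    static void AssertThrows<TException>(Action action) where TException : Exception
    {
        try { action(); }
        catch (TException) { return; }
        throw new Exception($"Expected {typeof(TException).Name} was not thrown");
    }

    public static void Main()
    {
        // Arrange
        var sut = new Calculator();

        // Act: the local function captures the call without executing it,
        // keeping the Act and Assert sections cleanly separated.
        void Act() => sut.Divide(10, 0);

        // Assert
        AssertThrows<DivideByZeroException>(Act);
        Console.WriteLine("Test passed");
    }
}
```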

Azure DevOps–NuGet package caching

By default when you build your project in Azure DevOps Pipelines, any package dependencies will be restored as part of the build process. Of course, if you have a large list of packages, this can take a long time. Let’s see how we can reduce build time by using NuGet package caching. Lock dependencies using packages.lock.json Before we can use the cache task, we need to lock our project’s dependencies. This is done through a packages.lock.json file that will be used to generate a unique key for our cache. We don’t have to create this packages.lock.json file ourselves; we can tell msbuild to generate it when building our project by setting the RestorePackagesWithLockFile property to true: Don’t forget to check your packages.lock.json file into your source code. Update our Azure DevOps pipeline Now we need to update our Azure DevOps pipeline to use the Cache task. This task will use the content of the packages.lock.json to produce a dynamic cache key. This will ensure that
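The pipeline screenshots aren't reproduced here; after setting `<RestorePackagesWithLockFile>true</RestorePackagesWithLockFile>` in the project file, a Cache step along these lines (a sketch following the Microsoft docs pattern; variable and path choices may differ per project) keys the cache off packages.lock.json:

```yaml
variables:
  NUGET_PACKAGES: $(Pipeline.Workspace)/.nuget/packages

steps:
- task: Cache@2
  displayName: Cache NuGet packages
  inputs:
    # Cache key changes whenever the locked dependencies change.
    key: 'nuget | "$(Agent.OS)" | **/packages.lock.json'
    restoreKeys: |
      nuget | "$(Agent.OS)"
    path: $(NUGET_PACKAGES)

- script: dotnet restore --locked-mode
  displayName: Restore (locked mode)
```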

Azure Application Insights - Tracking Operations in a Console App

If you are using the Application Insights SDK inside your ASP.NET (Core) application, every incoming HTTP request will start an operation context and allow you to track this request including any dependencies called along the way. But if you are using a console application (as a batch job, for example), there isn't the concept of an incoming request, so the SDK doesn't track a lot out-of-the-box. Configure your console app to use Application Insights Let me show you how you can still track operations in a console app. Start by adding the Microsoft.ApplicationInsights.WorkerService NuGet package to your console app: dotnet add package Microsoft.ApplicationInsights.WorkerService Now you can add the bootstrapping logic to configure App Insights for your console app: The code above will build up the required services and allows you to resolve a TelemetryClient instance that we’ll use in the next part. Remark: Notice the FlushAsync method at the en
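The original bootstrapping code isn't reproduced here, so here is my own sketch of what it could look like (requires the Microsoft.ApplicationInsights.WorkerService package; the connection string and operation name are placeholders):

```csharp
using System.Threading;
using System.Threading.Tasks;
using Microsoft.ApplicationInsights;
using Microsoft.ApplicationInsights.DataContracts;
using Microsoft.Extensions.DependencyInjection;

class Program
{
    static async Task Main()
    {
        // Register the Application Insights services for a non-web app.
        var services = new ServiceCollection();
        services.AddApplicationInsightsTelemetryWorkerService(options =>
            options.ConnectionString = "InstrumentationKey=<your-key>");

        using var provider = services.BuildServiceProvider();
        var telemetryClient = provider.GetRequiredService<TelemetryClient>();

        // StartOperation creates an operation context, so any telemetry
        // recorded inside the using block is correlated to this operation.
        using (telemetryClient.StartOperation<RequestTelemetry>("batch-job"))
        {
            // ... do the actual work here ...
        }

        // A console app can exit before telemetry is sent; flush explicitly.
        await telemetryClient.FlushAsync(CancellationToken.None);
    }
}
```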

Property based testing in C#–Part 5

This is the fifth post in my Property based testing in C# series. If you have missed my previous posts, you can check them here:

- Part 1 – Introduction
- Part 2 – An example
- Part 3 – Finding edge cases
- Part 4 – Writing your own generators
- Part 5 – Locking input (this post)

Throughout this series I used FsCheck to generate and run property-based tests. For every property I want to test, FsCheck generates random input values. These values can be different between each test run, meaning that each test run can end with different results. So if you had a bad input value that failed your test the previous time, it is possible that in the next test run you will not see this bad input again. For simple input values this doesn’t have to be a problem, as you can have a look at the test logs to see the input that failed your test and then you can use this input to write an example-based test. But when you start using more complex inputs, it can be a lot more difficult to replicate

Extensibility in your applications

Extensibility is one of the possible architectural qualities that you can strive for in your software architecture. Wikipedia gives us the following definition of extensibility: Extensibility is a software engineering and systems design principle that provides for future growth. An extensible system is one whose internal structure and dataflow are minimally or not affected by new or modified functionality, for example recompiling or changing the original source code might be unnecessary when changing a system’s behavior, either by the creator or other programmers. I would have asked ChatGPT for a definition, but it was overloaded at the moment of writing this post. A typical (but certainly not the only) way to achieve this is through a plug-in architecture. If you want to see how to do this in .NET, you can have a look here: Create a .NET Core application with plugins - .NET | Microsoft Learn But it is not the technical implementation details I want to talk about.

Azure DevOps Pipelines - Prevent CD pipeline from being triggered directly

Today was my lucky day. I finally found a solution for a problem I had had for a long time. It was thanks to a colleague (thanks, Ben!) that I finally was able to fix it. Let me explain the issue... On one of my projects I had two Azure DevOps YAML pipelines configured: A Continuous Integration (CI) pipeline that is triggered for every check-in into master. A Continuous Deployment (CD) pipeline that is triggered once the CI pipeline completes and deploys the applications to multiple environments. To make this possible I’m using a pipeline trigger in the CD pipeline that is linked to the CI pipeline: Just for completeness, here is the trigger configuration for the CI pipeline as well: The problem I encountered was that every time some code was checked into the master branch, both the CI and CD pipelines were triggered! Of course this is not what I wanted, as only the CI pipeline should run, and only when the CI pipeline successfully completes, the CD pipeline should start. The
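The original YAML screenshots aren't reproduced here, but the key to this kind of setup is an explicit `trigger: none` in the CD pipeline, so that only the pipeline-completion trigger can start it. A sketch of what the CD pipeline trigger section could look like (pipeline and branch names are examples):

```yaml
# CD pipeline: never start on a plain branch push...
trigger: none

# ...only when the CI pipeline completes on master.
resources:
  pipelines:
  - pipeline: ci            # local alias for this resource
    source: my-ci-pipeline  # name of the CI pipeline in Azure DevOps
    trigger:
      branches:
        include:
        - master
```

Without `trigger: none`, Azure DevOps falls back to the implicit CI trigger for the CD pipeline as well, which is exactly the double-trigger behaviour described above.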

Application Insights–OpenTelemetry integration for ASP.NET Core

OpenTelemetry is becoming THE standard for telemetry instrumentation. And thanks to the Azure Monitor OpenTelemetry Exporter we can keep using Application Insights as our tool of choice. In this post I'll walk you through the steps to link your OpenTelemetry-enabled ASP.NET Core application to Azure Monitor Application Insights. Important to mention is that this feature is still in preview! At the moment of writing, distributed tracing and metrics are supported, but some of the other features that we like from the Application Insights SDK are not available (yet). What is not supported? I copied the list from the documentation:

- Live Metrics
- Logging API (like console logs and logging libraries)
- Profiler
- Snapshot Debugger
- Azure Active Directory authentication
- Autopopulation of Cloud Role Name and Cloud Role Instance in Azure environments
- Autopopulation of User ID and Authenticated User ID when you use the Application Insights JavaScript SDK
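For orientation, the wiring could look roughly like this at the time of writing (a sketch, not the post's original code; these are preview APIs from the Azure.Monitor.OpenTelemetry.Exporter and OpenTelemetry.Extensions.Hosting packages and may change, and the connection string is a placeholder):

```csharp
using Azure.Monitor.OpenTelemetry.Exporter;
using Microsoft.AspNetCore.Builder;
using Microsoft.Extensions.DependencyInjection;
using OpenTelemetry.Trace;

var builder = WebApplication.CreateBuilder(args);

// Send OpenTelemetry traces to Application Insights via Azure Monitor.
builder.Services.AddOpenTelemetryTracing(tracing => tracing
    .AddAspNetCoreInstrumentation()   // track incoming requests
    .AddAzureMonitorTraceExporter(o =>
        o.ConnectionString = "InstrumentationKey=<your-key>"));

var app = builder.Build();
app.Run();
```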