Wednesday, June 30, 2021

Avoiding index fragmentation with sequential GUIDs

Using a non-sequential GUID in an index is not a good idea, as it leads to index fragmentation and decreased performance. We could switch to an identity field, but that is not ideal in a highly distributed (micro)services architecture.

RT.Comb to the rescue!

RT.Comb implements the “COMB” technique, as described by Jimmy Nilsson, which replaces the portion of a GUID that is sorted first with a date/time value. This guarantees (within the precision of the system clock) that values will be sequential, even when the code runs on different machines.

RT.Comb is available as a NuGet package and provides different strategies for generating the timestamp optimized for different database platforms:

  • RT.Comb.Provider.Legacy: The original technique. Only recommended if you need to support existing COMB values created using this technique.
  • RT.Comb.Provider.Sql: This is the recommended technique for COMBs stored in Microsoft SQL Server.
  • RT.Comb.Provider.Postgre: This is the recommended technique for COMBs stored in PostgreSQL.

This technique works great for most scenarios, unless you are in a high-write scenario where records are inserted faster than the precision offered by DateTime.UtcNow (which is 1/300th of a second). In that case you will still not have collisions, but it could be that the ids are not sorted correctly.

But don’t worry, RT.Comb itself provides a solution. RT.Comb has a timestamp provider called UtcNoRepeatTimestampProvider which ensures that the current timestamp is at least X ms greater than the previous one. Here is an example of how to use the UtcNoRepeatTimestampProvider:
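A minimal sketch, based on the RT.Comb README; the SQL Server strategy is assumed here:

```csharp
using System;
using RT.Comb;

// Assumption: SQL Server storage, so SqlCombProvider + SqlDateTimeStrategy are used.
// UtcNoRepeatTimestampProvider guarantees each generated timestamp is at least the
// configured increment later than the previous one, even under heavy write load.
var noDupes = new UtcNoRepeatTimestampProvider();

var provider = new SqlCombProvider(
    new SqlDateTimeStrategy(),
    customTimestampProvider: noDupes.GetTimestamp);

// Every COMB created this way sorts after the previous one.
Guid comb = provider.Create();
```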

Tuesday, June 29, 2021

MassTransit–Use record types for your message contracts

So far I have always used an interface for my message contracts:

In combination with anonymous types, you don’t even need to implement this interface yourself; you can create an anonymous type on the fly that will be published:
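For illustration, a hypothetical OrderSubmitted contract and its anonymous-type publish could look like this (the contract name and properties are made up):

```csharp
public interface OrderSubmitted
{
    Guid OrderId { get; }
    DateTime Timestamp { get; }
}

// MassTransit builds a backing type for the interface behind the scenes:
await publishEndpoint.Publish<OrderSubmitted>(new
{
    OrderId = Guid.NewGuid(),
    Timestamp = DateTime.UtcNow
});
```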

Let’s see if we can do the same thing by using a record type instead. Here is our rewritten message contract:

We could publish the message in the same way as before:

BUT you can also take advantage of the "target-typing" feature in C# 9, which allows us to create a specific instance of the record type instead of an anonymous type:

Did you notice the difference? We call the constructor of the record type through the 'new()' syntax.
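Putting it together with the same hypothetical contract, the record version could look like this:

```csharp
public record OrderSubmitted(Guid OrderId, DateTime Timestamp);

// Publish with an anonymous type, exactly as before:
await publishEndpoint.Publish<OrderSubmitted>(new
{
    OrderId = Guid.NewGuid(),
    Timestamp = DateTime.UtcNow
});

// Or use C# 9 target-typing to create an actual instance of the record:
await publishEndpoint.Publish<OrderSubmitted>(new(Guid.NewGuid(), DateTime.UtcNow));
```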

Monday, June 28, 2021

Getting started with React.js

If you want to get started with React.js, there is a lot of information out there.

I can recommend having a look at React book, your beginner guide to React. It’s a completely free ebook on React.js with all the basic knowledge you need to start building React applications.

Table of contents

Check it out on Github pages:

Friday, June 25, 2021

Azure Functions with built-in OpenAPI support

With the latest Visual Studio update, a new Azure Functions template was added: "Http Trigger with OpenAPI". This template will create a new function with the necessary implementation for OpenAPI support. Let’s try this:

  • Open Visual Studio. Choose ‘Create a new Project’.  Search for the ‘Azure Functions’ template and click Next.
  • Specify a name and location for the project and click Create.
  • Now you can choose a specific Azure Functions template. Select the ‘Http Trigger with OpenAPI’ template and click Create.
  • The new function is bootstrapped with the necessary implementation for OpenAPI support. Extra attributes are added on top of your Function method.
  • When you run the function app, you can browse to ‘/api/swagger/ui’ to view the Swagger UI page.
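The bootstrapped function looks roughly like this; this sketch is based on the template that ships with Microsoft.Azure.WebJobs.Extensions.OpenApi, so names may differ slightly per version:

```csharp
public static class Function1
{
    [FunctionName("Function1")]
    [OpenApiOperation(operationId: "Run", tags: new[] { "name" })]
    [OpenApiParameter(name: "name", In = ParameterLocation.Query, Required = true,
        Type = typeof(string), Description = "The **Name** parameter")]
    [OpenApiResponseWithBody(statusCode: HttpStatusCode.OK, contentType: "text/plain",
        bodyType: typeof(string), Description = "The OK response")]
    public static async Task<IActionResult> Run(
        [HttpTrigger(AuthorizationLevel.Anonymous, "get", "post", Route = null)] HttpRequest req,
        ILogger log)
    {
        // The OpenApi* attributes above feed the generated swagger.json document.
        string name = req.Query["name"];
        return new OkObjectResult($"Hello, {name}");
    }
}
```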

Thursday, June 24, 2021

WebDeploy - EnableMsDeployAppOffline

At one of my customers we have been using WebDeploy for years to deploy our web applications to IIS. Although not that well documented, it works great and has good Visual Studio integration.

When reviewing the release pipeline of one of my colleagues I noticed that they introduced tasks in the release pipeline to stop the application pool before deploying the package and start the application pool again once the deployment completed.

This is probably done because ASP.NET Core applications hosted in IIS run in-place and lock the files that they are running from. So if we don’t stop the application pool, web deploy will fail as it cannot replace these files.

Although this solution works, WebDeploy has a built-in alternative; the EnableMsDeployAppOffline flag.

When this flag is set to true, WebDeploy will create an app_offline.htm file to unload the running application, publish the files, and then remove that file again.

Sidenote: About app_offline.htm

The app_offline.htm file is a long-existing feature in IIS that can be used to shut down the application host and start it back up, but without loading any of the modules and serving only the app_offline.htm file. It is an effective way to keep your site from running and show a ‘busy’ or ‘not available’ message instead. As long as this file is found in the root folder of your site, no code will be run. Once the file gets deleted, the site starts back up.

When using the EnableMsDeployAppOffline flag, the app_offline.htm file is created for you by WebDeploy.

To use it through msbuild you need to add the following command line argument:
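For example (the project and publish profile names are hypothetical):

```shell
msbuild MyWebApp.csproj /p:DeployOnBuild=true /p:PublishProfile=Production /p:EnableMsDeployAppOffline=true
```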

Or when using the WinRM – Web App Deployment tasks, you can set the Take App Offline flag:

Wednesday, June 23, 2021

Azure Pipelines - A template expression is not allowed in this context

In my attempt to further optimize our deployment process, I tried to move most of the pipeline logic to a YAML template. This is what the template looked like (I simplified it a little bit):
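A simplified reconstruction of such a template (the parameter and pipeline names are hypothetical); the template expression inside the pipeline resource is what triggers the error:

```yaml
parameters:
- name: ciPipelineName
  type: string

resources:
  pipelines:
  - pipeline: ci
    # "A template expression is not allowed in this context":
    source: ${{ parameters.ciPipelineName }}
    trigger: true

stages:
- stage: Deploy
  jobs:
  - job: deploy
    steps:
    - script: echo Deploying...
```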

However when I tried to create a pipeline using this template I got the following error message:

/cd-template.yml (Line: 14, Col: 14): A template expression is not allowed in this context

The problem is that I’m trying to use a template expression inside a pipeline resource.

First I was thinking I did something wrong but then I found the following announcement in the release notes:

Previously, compile-time expressions (${{ }}) were not allowed in the resources section of an Azure Pipelines YAML file. With this release, we have lifted this restriction for containers. This allows you to use runtime parameter contents inside your resources, for example to pick a container at queue time. We plan to extend this support to other resources over time.

As stated above, compile-time expressions are not allowed in the resources section. With the change above this becomes possible, but only when using the container resource.

So no luck for me! Hopefully this will change in future versions…

Tuesday, June 22, 2021

NuGet Package Explorer–Compiler Flags

After creating a NuGet package as part of my build pipeline, I opened the NuGet package in NuGet Package Explorer to double-check that everything was OK.

Unfortunately I got the following warning in the Compiler Flags section:

Present, not reproducible

When I hovered over the warning icon, I got the following extra information:

Ensure you’re using at least the 5.0.300 SDK or MSBuild 16.10

The reason I got this error is that on the build server a newer .NET 5 SDK was installed and used to build this package.

I could easily verify this by running the following command:

dotnet --list-sdks

The output should show something like this:

5.0.204 [C:\Program Files\dotnet\sdk]

As you can see I didn’t have the 5.0.300 SDK installed. Fixing it can be done by either updating Visual Studio to the latest version or installing the latest .NET SDK.

Monday, June 21, 2021

Visual Studio 2019–Manage Docker Compose launch settings

With the latest Visual Studio update, the Docker Compose tooling got improved and it is now possible to create a launch profile to run any combination of services defined in your compose files. Before, you only had one launch profile and you couldn’t choose which services to start.

Let’s find out how to use this new feature:

  • Right-click on your docker-compose project and select Manage Docker Compose Launch Settings
  • The tooling will scan through all Compose files to find all services. Once the scanning is completed you can choose which services to launch
  • Create a new profile by clicking New… and specify a name
  • Now we can configure which services to launch for this profile. Also choose the Launch Service name and action. Once you are done, click OK
  • A new launch profile is created and available to use

Friday, June 18, 2021

Visual Studio 2019 - Create a Docker compose file for an existing solution

Visual Studio makes it really easy to create a Docker compose file for an existing solution.

Here are the steps to get there:

  • Open your solution in Visual Studio
  • Right click on one of your projects and choose Add –> Container Orchestrator Support
  • Choose Docker compose in the Add Container Orchestrator dialog and click OK
  • Choose Linux as the Target OS and click OK
  • A new Docker Compose project is generated
  • Inside this project you find a docker-compose.yml file with a reference to the project you’ve selected
  • To add the other projects, follow the same procedure; right click on another project and choose Add –> Container Orchestrator Support again
  • The docker-compose.yml file will be updated with the new project information

Thursday, June 17, 2021

Azure Pipelines - Use pipeline variables

I’m currently migrating an existing CI/CD pipeline build in Azure DevOps from the ‘classic’ build approach to YAML templates.

In our old setup we had 2 steps:

  • A CI pipeline that builds our application, runs all tests and packages the application in a Docker container. This container is then published to Azure Container Registry.
  • A Release pipeline that is triggered once the CI pipeline completes. The CI pipeline is available as an artifact.

We use the CI pipeline ‘branchname’ together with the ‘buildid’ to find the correct image inside ACR and deploy it: $(Release.Artifacts._CMS - CI.SourceBranchName).$(Release.Artifacts._CMS - CI.BuildId)

To achieve the same thing through Azure Pipelines and YAML templates we need to first define the CI build as a Pipeline Resource which I explained in this post:

Once the pipeline resource is set, we can also use the ‘branchname’ and ‘buildid’. This can be done through Pipeline Resource variables.

The variable names are not the same as when using the Release Pipeline. Here are the available variables:
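Based on the Azure Pipelines documentation, a pipeline resource exposes (among others) the following predefined variables; the 'cms-ci' alias is just an example:

```
resources.pipeline.cms-ci.pipelineName
resources.pipeline.cms-ci.runName
resources.pipeline.cms-ci.runID
resources.pipeline.cms-ci.sourceBranch
resources.pipeline.cms-ci.sourceCommit
resources.pipeline.cms-ci.requestedFor
```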


So to find the correct ACR image inside our YAML pipeline, we need to construct the image name like this: $(resources.pipeline.cms-ci.sourceBranch).$(resources.pipeline.cms-ci.runID)

Wednesday, June 16, 2021

Azure DevOps - Branch policy path filters

Last week I got a tip from a colleague (thx Sam!). During a ‘tech sync’ we were discussing how to avoid committing secrets to your source repository. Of course there exist tools that scan for credentials inside your repository, but these tools have to be configured and are not perfect.

Another way to do this is by introducing a Reviewer policy together with a path filter in Azure DevOps. By setting a path filter, the branch policy is only applied when files matching the filter are changed.

Typical places where application secrets are added are config files, application settings, … Let’s define some paths to check:

  • /Config/*
  • *.json
  • *.config

To combine multiple paths you can use ; as a separator:

  • /Config/*;*.json;*.config

To apply this configuration for a repository, go to the cross-repository settings (<organization name>/_settings/repositories).

Go to the ‘Automatically include code reviewers’ section and click on the ‘+’ sign.

Select the reviewers you want to add and enter the path filter in the ‘For pull requests affecting these folders’ field:

More information:

Tuesday, June 15, 2021

Visual Studio 2019–Editorconfig UI

As I read more code than I write, code consistency and readability are really important to me. That is why I like the .editorconfig file and one of the reasons why I blogged about it before:

Although I really like EditorConfig files, configuring them is not that easy. If you agree then I have great news for you! Starting from Visual Studio 16.10 an EditorConfig designer was added that allows you to easily view and configure your code analysis preferences.

  • Check that you have at least version 16.10 of Visual Studio
  • Open the solution containing the editorconfig file
  • Click on the .editorconfig file. The designer is loaded:

  • The great thing is that if you are looking for a specific setting, you no longer have to open up the documentation; you can easily search using the provided UI.

Remark: If you still want to edit the file directly in the text view, you still can by pressing F7.


Monday, June 14, 2021

FluentNhibernate–Use Microsoft.Data.SqlClient

Microsoft released version 3 of the Microsoft.Data.SqlClient. This .NET Data Provider for SQL Server provides general connectivity to the database and supports all the latest SQL Server features for applications targeting .NET Framework, .NET Core, and .NET Standard. It can be used as a replacement for the built-in System.Data.SqlClient that will still be available for a long time but doesn’t support newer SQL Server features.

I thought the 3.0 release was a good reason to make the switch in my .NET Core applications. As I’m using NHibernate (together with FluentNHibernate) I had to do some configuration work to get this working.

Remark: If you are using EF Core, there is nothing you need to do if you are using version 3.0 or higher. From that version on the Microsoft SQL Server EF Core provider uses Microsoft.Data.SqlClient by default.

The steps to get it working with (Fluent)NHibernate are short and easy:

  • Step 1 -  Add a reference to Microsoft.Data.SqlClient
  • Step 2 -  Update your FluentNHibernate configuration:
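A sketch of the updated configuration, assuming NHibernate 5.3 or later (which ships the MicrosoftDataSqlClientDriver); the mapping class name is hypothetical:

```csharp
using FluentNHibernate.Cfg;
using FluentNHibernate.Cfg.Db;
using NHibernate.Driver;

var sessionFactory = Fluently.Configure()
    .Database(MsSqlConfiguration.MsSql2012
        .ConnectionString(connectionString)
        // Switch from the default System.Data.SqlClient driver:
        .Driver<MicrosoftDataSqlClientDriver>())
    .Mappings(m => m.FluentMappings.AddFromAssemblyOf<MyEntityMap>())
    .BuildSessionFactory();
```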

That's all!

Friday, June 11, 2021

Domain Driven Design–The first 15 years

If you are into DDD and you want to have a heads up what happened in the DDD community since the release of the “blue book” by Eric Evans, Domain Driven Design – The first 15 years is a must read.

Fifteen years after the publication of "Domain-Driven Design: Tackling Complexity in the Heart of Software" by Eric Evans, DDD is gaining more adoption than ever. To celebrate the anniversary, we've asked prominent authors in the software design world to contribute old and new essays. 

The book was released in 2015, so maybe it is time for another update, but until that happens you can read this edition.

Thursday, June 10, 2021

Visual Studio–Create your CI/CD pipeline using Github Actions

When looking through the preview features activated inside Visual Studio (Options > Environment > Preview Features) I noticed the ‘GitHub Actions support in Publish’ feature:

Let’s try it to see what it does…

  • Open the Start Window in Visual Studio.
  • Click on the Clone a Project button. Enter the repository location and local path and click on Clone.
  • Go to the Solution Explorer. Right click on the Solution and choose Publish from the context menu
  • Let’s publish our Application to Azure. So choose Azure from the list of available Targets and click on Next.
  • Now we need to choose a specific target. Let’s choose Azure App Service and click on Next.
  • On the next screen, we need to choose the App Service Instance we want to use. After doing that, click on Next.
  • As a last step, we need to select the deployment type. It is here that the preview feature appears, as we can choose CI/CD using GitHub Actions. Click on Finish.
  • A yaml file is created containing all the necessary steps to deploy our application through GitHub Actions. Nice!

Remark: This feature only works if your project is linked to a GitHub repository.

Wednesday, June 9, 2021

Azure Pipelines - Pass location of baked manifest files

Inside our CD pipeline, I first want to bake a manifest file through Kustomize(more about Kustomize in a later blog post) and then use the baked manifest to deploy to our AKS cluster.

Both actions are possible through the Kubernetes Manifest task, but I didn’t immediately find how I could pass the location of the baked manifest bundle from the bake task to the deployment task.

Here is the original yaml file I created:

The trick is to give the first task a specific name and point to the ‘manifestsBundle’ property through the task name.

Let’s update our yaml file to show this. In the example below I named the bake task ‘bake’ so I can use ‘$(bake.manifestsBundle)’ in the second task:
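A sketch of the updated pipeline; the kustomization path and service connection name are assumptions:

```yaml
steps:
- task: KubernetesManifest@0
  name: bake                       # the name makes $(bake.manifestsBundle) available
  displayName: Bake manifests with Kustomize
  inputs:
    action: bake
    renderType: kustomize
    kustomizationPath: ./deploy/kustomize

- task: KubernetesManifest@0
  displayName: Deploy baked manifests
  inputs:
    action: deploy
    kubernetesServiceConnection: my-aks-connection   # hypothetical
    manifests: $(bake.manifestsBundle)
```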

Tuesday, June 8, 2021

Azure Pipelines - Error deploying manifest to Kubernetes cluster

I was trying to deploy a manifest through the Kubernetes manifest task, but the task failed with the following error message:

              error: error validating "/home/vsts/work/_temp/Ingress_tags-api-ingress_1622817055216": error validating data: [ValidationError(Ingress.spec.rules[0].http.paths[0].backend): unknown field "serviceName" in io.k8s.api.networking.v1.IngressBackend, ValidationError(Ingress.spec.rules[0].http.paths[0].backend): unknown field "servicePort" in io.k8s.api.networking.v1.IngressBackend]; if you choose to ignore these errors, turn validation off with --validate=false

Before I was using the Kubectl task which didn’t complain at all!?

The Kubernetes manifest task validates the manifest before it deploys it (which is in fact a good thing). So let’s have a look at what is wrong with the manifest I’m trying to deploy…

Here is the original manifest file:

The problem is that I’m still using the beta syntax for the ingress backend while my apiVersion points to networking.k8s.io/v1. In v1 the syntax changed: the service name and port are now specified through a nested service object. Let’s fix this:
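A fixed version of the manifest could look like this; the ingress name is taken from the error message, while the service name and port are assumptions:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: tags-api-ingress
spec:
  rules:
  - http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:          # v1: nested service object instead of
            name: tags-api  # serviceName / servicePort
            port:
              number: 80
```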

Monday, June 7, 2021

Azure Pipelines–Artifacts

In Azure Pipelines you have the concept of Artifacts. Artifacts can be things like compiled code coming from a CI build, a Docker container, another source repository and so on…

These artifacts can be used inside your Release pipeline to deploy these artifacts on one or more environments. When switching to YAML pipelines I couldn’t find the concept of artifacts inside the schema definition.

Turns out that inside the YAML template, an artifact is defined as a ‘resource’. I’ll show you two examples to explain how to use them.

Use the output of another pipeline as a Pipeline Resource

Most important settings are:

  • The source name: This should match the name of the pipeline that creates the artifact
  • Trigger: This defines if this pipeline should be triggered when the pipeline you are pointing to completes.
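A minimal pipeline resource definition could look like this; the alias and pipeline name are hypothetical:

```yaml
resources:
  pipelines:
  - pipeline: cms-ci    # alias used to reference this resource
    source: CMS - CI    # name of the pipeline that creates the artifact
    trigger: true       # run this pipeline when the source pipeline completes
```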

Use another repository as a Repository Resource

Most important settings are:

  • Repository: This should match the name of the repository you want to use
  • Type: Right now there is support for ‘git’, ‘github’ and ‘bitbucket’. The ‘git’ value should be used when connecting to Azure DevOps.
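A minimal repository resource definition could look like this; the alias, project and repository names are hypothetical:

```yaml
resources:
  repositories:
  - repository: templates               # alias used to reference this resource
    type: git                           # 'git' = Azure DevOps (Azure Repos)
    name: MyProject/shared-templates    # project/repository
    ref: refs/heads/main
```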

More information:

Friday, June 4, 2021

GraphQL–Optional arguments

Every field on a GraphQL object type can have zero or more arguments, for example the length field below:
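A schema sketch along the lines of the example from the GraphQL documentation (the type name is illustrative):

```graphql
type Starship {
  length(unit: LengthUnit = METER): Float
}
```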

An argument can be either required or optional. For optional arguments, you can specify a default value, like METER in the example above.

To configure this using HotChocolate, you can use the following syntax:
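A code-first sketch of what this could look like; the Human type, enum and conversion factor are assumptions for illustration:

```csharp
public enum LengthUnit { Meter, Foot }

public class Human
{
    public double LengthInMeters { get; set; }
}

public class HumanType : ObjectType<Human>
{
    protected override void Configure(IObjectTypeDescriptor<Human> descriptor)
    {
        descriptor.Field("length")
            .Argument("unit", a => a
                .Type<EnumType<LengthUnit>>()
                .DefaultValue(LengthUnit.Meter))   // optional: has a default value
            .Resolve(ctx =>
            {
                var unit = ctx.ArgumentValue<LengthUnit>("unit");
                var human = ctx.Parent<Human>();
                return unit == LengthUnit.Meter
                    ? human.LengthInMeters
                    : human.LengthInMeters * 3.28084;   // meters -> feet
            });
    }
}
```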

Thursday, June 3, 2021

ASP.NET Core - Handle cancelled requests

When an HTTP request is made to your ASP.NET Core application, it is always possible that the request is aborted, for instance when the user closes their browser without waiting for the response. In this case, you may want to stop all the work to avoid consuming resources.

Here are two possible approaches to handle this scenario. A first option is to check the HttpContext.RequestAborted property:

A second option is to let the model binder do its work and add a parameter of type CancellationToken to the action:
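A sketch of both options in a controller; the service and route names are hypothetical, and you would pick one of the two in practice:

```csharp
// Option 1: read the token from HttpContext.RequestAborted yourself.
[HttpGet("report")]
public async Task<IActionResult> GetReport()
{
    CancellationToken token = HttpContext.RequestAborted;
    var data = await _reportService.BuildReportAsync(token); // hypothetical service
    return Ok(data);
}

// Option 2: let model binding inject a CancellationToken.
// ASP.NET Core binds it to HttpContext.RequestAborted for you.
[HttpGet("report2")]
public async Task<IActionResult> GetReport(CancellationToken cancellationToken)
{
    var data = await _reportService.BuildReportAsync(cancellationToken);
    return Ok(data);
}
```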

Wednesday, June 2, 2021

Azure Kubernetes Service - Time sync issue between nodes

We encountered a strange issue this week inside our AKS cluster. We discovered that the time was not synced between the different pods and nodes.

We noticed this because we couldn’t use our OAuth security tokens as the IssuedAt timing was off.

To validate this issue we ssh’d into the nodes and ran the following command:

$: sudo timedatectl status

This resulted in the following output:

Local time: Wed 2021-6-2 13:48:44 UTC
Universal time: Wed 2021-6-2 13:48:44 UTC
RTC time: Wed 2021-6-2 13:48:44
Time zone: Etc/UTC (UTC, +0000)
Network time on: yes
NTP synchronized: no
RTC in local TZ: no

The NTP service was disabled and no NTP server was configured. To fix it we opened timesyncd.conf:

$: sudo cat /etc/systemd/timesyncd.conf

and changed the NTP value:
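We set the NTP entry under the [Time] section; the server below is an example value, not necessarily the one we used:

```
[Time]
NTP=ntp.ubuntu.com
```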


After that we restarted the timesync service:

$: sudo timedatectl set-ntp true
$: sudo systemctl restart systemd-timesyncd.service

Of course this is only good as a temporary workaround. I would expect that this is enabled by default.

Tuesday, June 1, 2021

Azure Pipelines–Pipeline Resource Trigger

Yesterday I started blogging about my journey moving from the ‘classic’ build approach to YAML templates. I shared how you can use a build completion trigger to link your YAML build to a previously completed build. Although this approach works, it is no longer recommended.

A better way is to use ‘pipeline resource triggers’. This is done by defining a pipelines resource inside your YAML template. pipelines is a dedicated resource only for Azure Pipelines. Let’s have a look at the syntax:
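A sketch of the syntax; the alias and pipeline name are hypothetical:

```yaml
resources:
  pipelines:
  - pipeline: cms-ci    # unique value used to reference the resource later on
    source: CMS - CI    # name of the pipeline that produces an artifact (case sensitive)
    trigger:
      branches:
        include:
        - main
```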

In your resource definition, pipeline is a unique value that you can use to reference the pipeline resource later on. source is the name of the pipeline that produces an artifact.

Remark: As I mentioned in the example above, the source name is case sensitive.

More information: