Monday, May 31, 2021

Azure Pipelines–Build completion triggers

I’m currently migrating an existing CI/CD pipeline in Azure DevOps from the ‘classic’ build approach to YAML templates. It has been quite a journey, so expect a lot of posts in the upcoming days where I share everything I learned along the way.

In our original setup we had an Azure DevOps classic pipeline that was used to create a Docker image and push it to ACR (“the CI part”). After that a release pipeline was triggered that took the image from ACR and deployed it to multiple AKS clusters (“the CD part”).

The goal was to keep this way of working and only make the switch from the ‘classic’ build approach to YAML templates. So the first thing I had to find out was how I could trigger the CD build when the CI build completed.

Let me first show you the approach that is most similar to the classic approach using ‘Build completion triggers’. Although this approach still works, it is no longer recommended. The recommended approach is to specify pipeline triggers directly within the YAML file but that is something for tomorrow.

Build completion triggers

Let’s walk through the steps required to use the Build completion trigger:

  • Open the YAML pipeline that you want to be triggered after build completion.
  • Click on ‘Edit’ to open the YAML pipeline in edit mode.
  • Click on the 3 dots and choose Triggers from the context menu.
  • The Triggers screen is loaded. Here you can click on ‘+ Add’ and select the build that should trigger this pipeline on completion.
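For comparison, this is roughly what the recommended YAML-based pipeline trigger (the topic of tomorrow’s post) looks like; the alias, pipeline name and branch below are placeholders:

    resources:
      pipelines:
      - pipeline: ci-pipeline        # alias used inside this pipeline
        source: My-CI-Pipeline       # name of the pipeline that triggers this one
        trigger:
          branches:
            include:
            - master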

Thursday, May 27, 2021

NSwag error - System.InvalidOperationException: No service for type 'Microsoft.Extensions.DependencyInjection.IServiceProviderFactory`1[Autofac.ContainerBuilder]' has been registered.

In one of our projects we are using NSwag to generate the TypeScript DTOs and services used in our Angular frontend. In this project we are using Autofac as our IoC container and have created a few extension methods that hook into the HostBuilder bootstrapping.

Unfortunately our custom logic got NSwag into trouble and caused our build to fail with the following error message:

System.InvalidOperationException: No service for type 'Microsoft.Extensions.DependencyInjection.IServiceProviderFactory`1[Autofac.ContainerBuilder]' has been registered.

NSwag adds an extra build target in your csproj file and uses that to run the NSwag codegenerator tool:
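In our case that target looked something like the sketch below; the $(NSwagExe_Net50) property comes with the NSwag.MSBuild package (the exact property name depends on your target framework):

    <Target Name="NSwag" AfterTargets="PostBuildEvent" Condition="'$(Configuration)' == 'Debug'">
      <Exec WorkingDirectory="$(ProjectDir)"
            EnvironmentVariables="ASPNETCORE_ENVIRONMENT=Development"
            Command="$(NSwagExe_Net50) run nswag.json /variables:Configuration=$(Configuration)" />
    </Target>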

While investigating the root cause of this issue, we introduced a small workaround: a separate Program.cs and Startup.cs used exclusively by the NSwag code generator.

We added a minimal Program.cs file:
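A minimal sketch of what that file can look like; it deliberately skips our Autofac bootstrapping so the default service provider is used (the class and method names match the nswag.json configuration shown below):

    // NSwagProgram.cs
    using Microsoft.AspNetCore.Hosting;
    using Microsoft.Extensions.Hosting;

    namespace Example.App.Web.NSwag
    {
        public static class NSwagProgram
        {
            // Host builder used only by the NSwag code generator
            public static IHostBuilder FakeNSwagHostBuilder(string[] args) =>
                Host.CreateDefaultBuilder(args)
                    .ConfigureWebHostDefaults(webBuilder =>
                        webBuilder.UseStartup<NSwagStartup>());
        }
    }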

And a minimal Startup.cs file:
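Again a sketch, registering just enough for NSwag to generate the OpenAPI document (AddOpenApiDocument comes from the NSwag.AspNetCore package):

    // NSwagStartup.cs
    using Microsoft.AspNetCore.Builder;
    using Microsoft.Extensions.DependencyInjection;

    namespace Example.App.Web.NSwag
    {
        public class NSwagStartup
        {
            public void ConfigureServices(IServiceCollection services)
            {
                services.AddControllers();
                services.AddOpenApiDocument();
            }

            public void Configure(IApplicationBuilder app)
            {
                app.UseRouting();
                app.UseEndpoints(endpoints => endpoints.MapControllers());
            }
        }
    }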

The magic to make this work is to change the nswag.json configuration file in the root of our project and point to the alternative files we’ve created above:

"createWebHostBuilderMethod": "Example.App.Web:Example.App.Web.NSwag.NSwagProgram.FakeNSwagHostBuilder",

"startupType": "Example.App.Web:Example.App.Web.NSwag.NSwagStartup",

Although this is not a final solution, it will do the trick until I find the exact root cause...

Wednesday, May 26, 2021

Microsoft Build 2021–Book of news

Microsoft continues its new tradition to bundle all important announcements in a ‘Book of News’. So if you don’t have the time to watch Satya Nadella's keynote (or the Scott Hanselman and friends version of it) or to work through the long list of available sessions, the Book of News is there to guarantee you are up to speed with the latest and greatest in the Microsoft ecosystem.

Tuesday, May 25, 2021

Azure Pipelines–Automatic Package Versioning

There are a few options available when configuring your NuGet package versioning scheme in your build pipeline.

Let’s explore the differences:

Off

The package version is not changed during the build. Instead the version provided in your csproj file is used.

Use the date and time

When selecting the ‘Use the date and time’ option, it is up to you to provide a Major, Minor and Patch version number. The datetime will be used as the prerelease label:

$(Major).$(Minor).$(Patch).$(date:yyyyMMdd)

Use an environment variable

A third option is to use an environment variable. The environment variable should contain the version number that you want to use and should be in a valid format.

Remark: You should enter the name of the environment variable without $, $env, or %.

Use the build number

The last option is to use the build number to version the package.

Remark: This will change the build number format to a version compatible definition: '$(BuildDefinitionName)_$(Year:yyyy).$(Month).$(DayOfMonth)$(Rev:.r)'.
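For reference, here is a sketch of how these options map onto the NuGetCommand pack task in YAML; the values below are illustrative:

    # versioningScheme accepts: off, byPrereleaseNumber, byEnvVar, byBuildNumber
    - task: NuGetCommand@2
      inputs:
        command: pack
        packagesToPack: '**/*.csproj'
        versioningScheme: byPrereleaseNumber
        majorVersion: '1'
        minorVersion: '0'
        patchVersion: '0'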

Monday, May 24, 2021

.NET Coding Pack

I blogged before about the coding packs for Visual Studio Code. A coding pack is an all-in-one installer that sets you up for Python, Java,…

As of this week the list of available coding packs got extended with a coding pack for C# and .NET. You can download the pack from http://dot.net/learntocode.

Walk through the different steps of the installation wizard to start your C# development journey:

After completing the installation, Visual Studio Code is launched and some extra extensions are added to VS Code:

Once everything is installed, a .NET Interactive Notebook is available that allows you to learn about and try out all C# features in an interactive way (similar to Jupyter notebooks):

Friday, May 21, 2021

NETSDK1005–Asset file is missing target

When trying to build a project using Azure Pipelines, it failed with the following error message:

NETSDK1005–Asset file is missing target

This error message is not very descriptive and it was not immediately obvious where the mistake was.

In this case the problem was related to the fact that I was using an older NuGet.exe to restore the NuGet packages in combination with a .NET 5 project.

From the documentation:

NuGet writes a file named project.assets.json in the obj folder, and the .NET SDK uses it to get information about packages to pass into the compiler. In .NET 5, NuGet added a new field named TargetFrameworkAlias, so earlier versions of MSBuild or NuGet generate an assets file without the new field.

To fix the issue I had to change the NuGet Tool Installer task to use NuGet version 5.8 or higher:
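In YAML this is a one-liner in the NuGet Tool Installer task (5.8 being the first version that writes the new field):

    - task: NuGetToolInstaller@1
      inputs:
        versionSpec: '5.8.x'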

More information: https://docs.microsoft.com/en-us/dotnet/core/tools/sdk-errors/netsdk1005

Thursday, May 20, 2021

SonarQube - SQL Server Integrated Security

While moving our build agents from one server to another, I also had to move our SonarQube instance.

To install the SonarQube instance I followed the instructions mentioned here: https://docs.sonarqube.org/latest/setup/install-server/

As I was using SQL Server with Integrated Security, I paid special attention when reading this section:

To use integrated security:

  1. Download the Microsoft SQL JDBC Driver 9.2.0 package and copy mssql-jdbc_auth-9.2.0.x64.dll to any folder in your path.
  2. If you're running SonarQube as a Windows service, make sure the Windows account under which the service is running has permission to connect your SQL server. The account should have db_owner database role membership.

    If you're running the SonarQube server from a command prompt, the user under which the command prompt is running should have db_owner database role membership.

  3. Ensure that sonar.jdbc.username or sonar.jdbc.password properties are commented out or SonarQube will use SQL authentication.

    sonar.jdbc.url=jdbc:sqlserver://localhost;databaseName=sonar;integratedSecurity=true

These instructions were not completely obvious to me. So here is some extra info in case you don’t get it working with the instructions above:
Tip 1:

If you are looking for the mentioned DLL, you can find it in the following folder inside the zip file: \enu\auth\x64\mssql-jdbc_auth-9.2.0.x64.dll
Tip 2:

It wasn’t clear to me what was meant by ‘any folder in your path’. I copied the DLL to both the bin and lib folder of my OpenJDK installation.
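For example, something like this from a command prompt puts the DLL in a folder that is already on the PATH (the Zulu path reflects my installation):

    copy mssql-jdbc_auth-9.2.0.x64.dll "C:\Program Files\Zulu\zulu-11\bin"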
Tip 3:

After doing the steps above, it still didn’t work. Turns out that SonarQube was expecting the 9.2.0 version. If you follow the download link, you arrive at the latest version of the JDBC driver, which is version 9.2.1.

To get the correct DLL, I had to go to GitHub and download the correct version there: https://github.com/microsoft/mssql-jdbc/releases/tag/v9.2.0

Wednesday, May 19, 2021

Azure Pipelines–SonarQube analysis

After moving our build agents from one server to another, one of the builds no longer worked. When looking at the logs, I noticed that the build failed almost immediately with the following error message:

No agent found in pool Azure Pipelines which satisfies the specified demands: java

And indeed, when I took a look at the specific pipeline demands I could see that java was required. Turns out that the SonarQube analysis requires Java to be installed on the build server to be able to execute. As the build server was a new clean install, nothing was installed yet.

Install Java OpenJDK

Time to fix that…

The license terms of the Oracle JDK have changed and updates are no longer free. Therefore I chose to install the Azul OpenJDK (download it here).

I used the MSI and walked through the installation wizard. After completing the setup I manually created the JAVA_HOME environment variable and set it to the bin folder of the Zulu installation (e.g. C:\Program Files\Zulu\zulu-11\bin\).

Normally this last step is not required but for an unknown reason, the environment variable wasn’t created by the installer.
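If you need to do the same, running something like this from an elevated command prompt should work (the path reflects my Zulu installation):

    setx JAVA_HOME "C:\Program Files\Zulu\zulu-11\bin" /M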

Rescan for capabilities

Now it is time to update the agent capabilities. To trigger a rescan, the agent should be restarted. The agents were running as Windows services, so I went to the Services screen and restarted the specific services. After doing that, the environment variable was detected as part of the capabilities:

Unfortunately this didn’t solve the issue and when I tried to rerun the build it still failed with the same error message.

In the SonarQube documentation I found the following:

If you add a Windows Build Agent and install a non-oracle Java version on it, the agent will fail to detect a needed capability for the SonarQube Azure DevOps plugin. If you are sure that the java executable is available in the PATH environment variable, you can add the missing capability manually by going to your build agent > capabilities > user capabilities > add capability. Here, you can add the key, value pair java, and null which should allow the SonarQube plugin to be scheduled on that build agent. This Bug has been reported to the Microsoft Team with azure-pipelines-agent#2046 but is currently not followed up upon.

Add user capability

Let’s follow the suggestion mentioned above and add the capability ourselves:

  • Go to Project Settings –> Agent Pools. Select the pool that contains the agent.
  • Go to the agents tab and select the agent from the pool.
  • Go to the capabilities tab and click on the ‘+’ sign to add a user defined capability.
  • Enter ‘java’ in the name field and leave the value field empty. Click Add.

Repeat this process for every agent that you want to be able to run this build.

Tuesday, May 18, 2021

NuGet - Add Global Package source to your build server

A few years ago I blogged about how to add a global package source on your build server. Over the years NuGet has evolved and the approach described in that blog post no longer applies.

For the latest NuGet version, the config files are located here:

  • %appdata%\NuGet\NuGet.config

You can either directly manipulate the values inside this NuGet.config or you can add an extra package source through the nuget sources command:
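For example (the feed name and URL below are placeholders):

    nuget sources add -Name "MyCompanyFeed" -Source "https://nuget.example.com/v3/index.json"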

More information: https://docs.microsoft.com/en-us/nuget/reference/cli-reference/cli-ref-sources

Remark 1: This config file is scoped to the current user. So it is important to execute this using the user account used for your build agent.

Remark 2: If possible I would recommend avoiding this approach and adding a nuget.config to your solution or project instead, with the reference to the package source in there. The advantage of doing that is that it will work on a clean system without any configuration changes.

Monday, May 17, 2021

Azure DevOps–Disable CI trigger

The default YAML pipeline Azure DevOps creates for you looks like this:
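    # azure-pipelines.yml – the default starter pipeline (roughly)
    trigger:
    - master

    pool:
      vmImage: ubuntu-latest

    steps:
    - script: echo Hello, world!
      displayName: 'Run a one-line script'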

This pipeline is triggered every time a change is pushed to the master branch as you can see in the ‘trigger’ section.

But I wanted to trigger this pipeline only manually. To achieve this you need to update the pipeline and set the ‘trigger’ value to ‘none’:
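    trigger: none # disable the CI trigger; the pipeline can still be started manually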

More information: Azure Pipeline Triggers

Friday, May 14, 2021

GraphQL- Schema Stitching Handbook

Although a lot of attention goes to GraphQL Federation, GraphQL Schema Stitching remains a powerful alternative.

For everyone new to GraphQL Schema Stitching I would recommend the Schema Stitching Handbook.

It shows a lot of examples of what is possible through Schema Stitching.

Wednesday, May 12, 2021

Kubernetes- How adding healthchecks made our applications less resilient

When using a container orchestrator like Kubernetes it is recommended to add health check probes to your application. These health probes can be used to check the app’s status and help the container orchestrator decide when to restart a container, start sending traffic, …

So we decided to use the Microsoft.AspNetCore.Diagnostics.HealthChecks package to add a healthcheck endpoint to our ASP.NET Core applications:
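A minimal registration looks something like this:

    // Startup.cs (sketch)
    public void ConfigureServices(IServiceCollection services)
    {
        services.AddHealthChecks();
    }

    public void Configure(IApplicationBuilder app)
    {
        app.UseRouting();
        app.UseEndpoints(endpoints =>
        {
            endpoints.MapHealthChecks("/health");
        });
    }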

Inside the health check, we test our app dependencies to confirm availability and normal functioning. So far, so good…

Unfortunately we went a little bit too far with this approach which got us into trouble. What did we do wrong?

To understand what went wrong, I need to explain a little bit about our architecture. We use an API gateway as the single entry point for all of our services:

This API gateway is just another ASP.NET Core application that uses GraphQL schema stitching to bring all our APIs together in one logical schema. And of course we also added a healthcheck endpoint here.

Inside this health check, we check the availability of all the other services. And that is where we got ourselves into trouble. Because now, even when one less important service becomes unavailable, the whole API gateway is marked as unhealthy and becomes unavailable. So instead of making our system more resilient, introducing health checks made our system less resilient. Whoops!

Dumb and smart health checks

The solution is to first of all distinguish between dumb and smart health checks:

  • Smart probes check that an application is working correctly, that it can service requests, and that it can connect to its dependencies (a database, message queue, or other API, for example).
  • Dumb probes only indicate that the application has not crashed. They don't check that the application can connect to its dependencies.

Also think about which dependencies are really necessary for a service to be available. Is a service no longer usable when your logging infrastructure is temporarily unavailable? In our case this would have avoided our mistake of including all our services in the API gateway health check.

When you are using Kubernetes as your container orchestrator, a third tip could be added to the list. Kubernetes doesn’t have one healthcheck probe but three, which are used for different purposes (a manifest sketch follows the list):

  • Liveness probe. This is for detecting whether the application process has crashed/deadlocked. If a liveness probe fails, Kubernetes will stop the pod, and create a new one.
  • Readiness probe. This is for detecting whether the application is ready to handle requests. If a readiness probe fails, Kubernetes will leave the pod running, but won't send any requests to the pod.
  • Startup probe. This is used when the container starts up, to indicate that it's ready. Once the startup probe succeeds, Kubernetes switches to using the liveness probe to determine if the application is alive.
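In a deployment manifest the three probes look roughly like this; the paths, port and thresholds are illustrative:

    livenessProbe:            # 'dumb': is the process still alive?
      httpGet:
        path: /health/live
        port: 80
    readinessProbe:           # can the pod receive traffic?
      httpGet:
        path: /health/ready
        port: 80
    startupProbe:             # 'smart': has the app finished starting?
      httpGet:
        path: /health/ready
        port: 80
      failureThreshold: 30
      periodSeconds: 10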

So instead of adding just one healthcheck endpoint to your ASP.NET Core application, I would recommend configuring at least the startup and liveness probes, where I would set up the liveness probe as a ‘dumb’ probe and the startup probe as a ‘smart’ probe.
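In ASP.NET Core this can be done with health check tags. A sketch, where the check names and paths are illustrative:

    // using Microsoft.AspNetCore.Diagnostics.HealthChecks;
    // using Microsoft.Extensions.Diagnostics.HealthChecks;

    // ConfigureServices: tag the 'smart' checks so we can filter on them later
    services.AddHealthChecks()
        .AddCheck("self", () => HealthCheckResult.Healthy())
        .AddCheck("sample-dependency",
            () => HealthCheckResult.Healthy(), // replace with a real dependency check
            tags: new[] { "ready" });

    // Configure: a 'dumb' liveness endpoint and a 'smart' startup/readiness endpoint
    app.UseEndpoints(endpoints =>
    {
        endpoints.MapHealthChecks("/health/live", new HealthCheckOptions
        {
            Predicate = _ => false // run no checks; only proves the process responds
        });
        endpoints.MapHealthChecks("/health/ready", new HealthCheckOptions
        {
            Predicate = check => check.Tags.Contains("ready")
        });
    });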

Tuesday, May 11, 2021

IIS–Failed Request Tracing

One of the ways to monitor your IIS traffic is through ‘Failed Request Tracing’. This feature allows you to trace the full request pipeline and capture all the details.

Failed-request tracing is designed to buffer the trace events for a request and only flush them to disk if the request "fails," where you provide the definition of "failure". If you want to know why you're getting 404.2 error messages or requests start hanging, failed-request tracing is the way to go.

Installing Failed Request Tracing

Failed Request Tracing is not available out of the box in IIS but can be installed as a part of the ARR (Application Request Routing) feature. ARR can be installed directly from here or through the Web Platform Installer when available in IIS.

After the installation is completed, the Failed Request Tracing feature becomes available in IIS. But before you can use it, you need to enable it.

If Failed Request Tracing is still not available after installing ARR, you can try to enable the Tracing feature by going to Start -> Turn Windows features on or off -> Internet Information Services -> Health and Diagnostics.

Configuring Failed Request Tracing

Let’s first walk through the steps to enable Failed Request Tracing:

  • Open IIS Manager
  • In the Connections pane, expand the machine name, expand Sites, and then click Default Web Site.

  • In the Actions pane, under Configure, click Failed Request Tracing.

  • In the Edit Web Site Failed Request Tracing Settings dialog box, configure the following:

    • Select the Enable check box.

    • You can change the maximum number of trace files if you want to keep more than 50 requests.

  • Click OK.

Now we need to configure a failure definition. A failure definition allows us to specify the conditions under which a request should be traced.

  • In the Connections pane, expand the machine name, expand Sites, and then click Default Web Site (or the site that you want to have requests traced).

  • Double-click Failed Request Tracing Rules.

  • In the Actions pane, click Add.

  • In the Add Failed Request Tracing Rule wizard, on the Specify Content to Trace page, select All content (*). Click Next.

  • On the Define Trace Conditions page, select the Status code(s) check box and enter the status codes you want to trace.

  • Click Next.

  • On the Select Trace Providers page, under Providers, keep all check boxes selected. Don’t change the other values.

  • Click Finish.
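Behind the scenes this writes the failure definition to your site's web.config. A rough sketch of the result (the status code is an example):

    <system.webServer>
      <tracing>
        <traceFailedRequests>
          <add path="*">
            <traceAreas>
              <add provider="WWW Server"
                   areas="Authentication,Security,Filter,StaticFile,CGI,Compression,Cache,RequestNotifications,Module"
                   verbosity="Verbose" />
            </traceAreas>
            <failureDefinitions statusCodes="404.2" />
          </add>
        </traceFailedRequests>
      </tracing>
    </system.webServer>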

View the Failed Request messages

  • To view the failed request messages browse to %systemdrive%\inetpub\logs\FailedReqLogFiles\W3SVC1 (or the location that you configured).

  • Double click on a log file in the folder. For every request a new log file is created. Thanks to the available XSLT, the log file is presented in a digestible format:

Monday, May 10, 2021

Serilog–Add headers to request log

By default, logging in ASP.NET Core generates a lot of log messages for every request. Thanks to Serilog's RequestLoggingMiddleware that comes with the Serilog.AspNetCore NuGet package, you can reduce this to a single log message:
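    // Startup.cs – one summary log event per request
    app.UseSerilogRequestLogging();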

But what if you want to extend the log message with some extra data?

This can be done by setting values on the IDiagnosticContext instance. This interface is registered as a singleton in the DI container.

Here is an example of how you can add some header info to the request log:
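A sketch using the EnrichDiagnosticContext hook of the middleware; the header name is illustrative:

    app.UseSerilogRequestLogging(options =>
    {
        // Called for every request; values end up as properties on the request log event
        options.EnrichDiagnosticContext = (diagnosticContext, httpContext) =>
        {
            diagnosticContext.Set("UserAgent",
                httpContext.Request.Headers["User-Agent"].ToString());
        };
    });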

Friday, May 7, 2021

Tools and resources for Agile forecasting

As a developer or tech lead, sooner or later you will be asked to estimate. Of course you could just throw out a ballpark figure, but it is better to use some historical data about your team's performance.

In that case I would recommend having a look at the online calculators and forecasting spreadsheets created by Focused Objective. They provide a lot of tools but also articles that can help you answer different questions about the current and future performance of your team.

Thursday, May 6, 2021

Docker diff

When investigating a problem with a Dockerfile, I wanted to check which files were changed when running the Docker image.

I first had to look up the container ID through docker ps:

    C:\Users\bawu>docker ps
    CONTAINER ID   IMAGE                         
    2ca085df3487   masstransit/rabbitmq:latest
     

Now I could run docker diff <CONTAINER> using the container ID to see the files that were changed (A = added, C = changed, D = deleted):

    C:\Users\bawu>docker diff 2ca085df3487
    C /var
    C /var/log
    C /var/log/rabbitmq
    A /var/log/rabbitmq/log
    A /var/log/rabbitmq/log/crash.log
    C /etc
    C /etc/rabbitmq
    A /etc/rabbitmq/rabbitmq.conf

Wednesday, May 5, 2021

Build your ASP.NET Core application outside the Docker container

Most of the examples you find for running an ASP.NET Core application inside a Docker container use the multi-stage build approach.

In this approach you create a Dockerfile where building the application happens inside the container; the output of this build is then used in a second stage to create the final Docker image:
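A sketch of such a multi-stage Dockerfile for a .NET 5 application; the project and assembly names are placeholders:

    # Stage 1: build and publish inside the SDK image
    FROM mcr.microsoft.com/dotnet/sdk:5.0 AS build
    WORKDIR /src
    COPY . .
    RUN dotnet publish -c Release -o /app/publish

    # Stage 2: copy the published output into a small runtime image
    FROM mcr.microsoft.com/dotnet/aspnet:5.0
    WORKDIR /app
    COPY --from=build /app/publish .
    ENTRYPOINT ["dotnet", "Example.App.Web.dll"]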

This is fine and results in small, optimized Docker images. So why this blog post?

The problem becomes clear when you take a look at our build pipeline:

What you can see is that we build the application twice; once to run the unit tests, code analysis, vulnerability scanning etc… and once to produce the Docker image. Although we use different stages in Azure Pipelines, it is still a waste of resources.

An alternative approach is to build your ASP.NET Core application outside the Docker container. The Dockerfile is then only used to copy the build artifacts from the publish folder into the Docker image:
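In that case the Dockerfile shrinks to something like this (again, the paths and assembly name are placeholders):

    # The build already happened on the build agent; just pick up the publish output
    FROM mcr.microsoft.com/dotnet/aspnet:5.0
    WORKDIR /app
    COPY ./publish .
    ENTRYPOINT ["dotnet", "Example.App.Web.dll"]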

More information: https://docs.microsoft.com/en-us/dotnet/core/docker/build-container?tabs=windows#create-the-dockerfile

Tuesday, May 4, 2021

C# 9 Switch Expressions with Type patterns

C# 9 allows you to combine the power of pattern matching with switch expressions.

I had a use case where I had to check the type of an object and, depending on the type, execute different logic.

Before C# 7, type checks were not possible in a switch, so although I wanted to write something like the following switch statement, it would not compile:
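The sketch below reconstructs the idea with hypothetical Circle and Rectangle shape classes; a case label must be a constant, and typeof(...) is not one:

    // Does NOT compile: typeof(...) is not a constant expression
    switch (shape.GetType())
    {
        case typeof(Circle):
            // calculate circle area...
            break;
        case typeof(Rectangle):
            // calculate rectangle area...
            break;
    }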

In C# 7, type pattern support was added, so I could write the following:
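Something along these lines, reusing the hypothetical shape classes:

    // Hypothetical types used in these examples
    public class Circle { public double Radius { get; set; } }
    public class Rectangle { public double Width { get; set; } public double Height { get; set; } }

    // C# 7: type patterns in a switch statement
    public static double GetArea(object shape)
    {
        switch (shape)
        {
            case Circle c:
                return Math.PI * c.Radius * c.Radius;
            case Rectangle r:
                return r.Width * r.Height;
            default:
                throw new ArgumentException("Unknown shape", nameof(shape));
        }
    }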

Switch expressions, introduced in C# 8 and further improved in C# 9, make this even more concise. Now I could handle this use case like this:
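Again a sketch with the same hypothetical types:

    // Switch expression with type patterns
    public static double GetArea(object shape) => shape switch
    {
        Circle c => Math.PI * c.Radius * c.Radius,
        Rectangle r => r.Width * r.Height,
        _ => throw new ArgumentException("Unknown shape", nameof(shape))
    };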

Neat!

Monday, May 3, 2021

Azure DevOps Pipelines - The OutputPath property is not set for project

When trying to build a csproj file using Azure DevOps pipelines, the build failed with the following error message:

The OutputPath property is not set for project

The important thing to notice here is that this only happens when pointing the build task to a csproj file instead of a sln file.

Turns out that there is a small difference between how the platform variable is configured at the solution level vs the project level.

These are the variable settings we use inside the build task:

  • $(BuildConfiguration) = “Release”
  • $(BuildPlatform) = “Any CPU”

When I took a look at the csproj file, it was expecting “AnyCPU” as the platform setting, not “Any CPU” (notice the space).

I fixed it by setting the platform to “AnyCPU” directly instead of using the build variable for this specific task.
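In YAML the workaround looks roughly like this; the task flavor and project path are assumptions based on our setup:

    - task: VSBuild@1
      inputs:
        solution: 'src/Example.App.Web.csproj'
        configuration: '$(BuildConfiguration)'
        platform: 'AnyCPU'   # no space, matching the csproj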