Friday, October 22, 2021

Service decomposition and service design

Finding the boundaries of your system and decomposing it into multiple services sounds easy, but it certainly isn’t.

If you are interested in this topic, check out the blog series by Vadim Samokhin:

Remark: After writing this post, I noticed that Vadim created a blog post linking to the series above that also includes some other related posts.

Thursday, October 21, 2021

Azure AKS–Save some money using spot node pools

One of the ways you can save some money using Azure is by using spot node pools for your Azure Kubernetes Service cluster.

What’s a spot node pool?

Using a spot node pool allows you to take advantage of unused Azure capacity at significant cost savings. At any point in time when Azure needs the capacity back, the Azure infrastructure will evict spot nodes. Spot nodes are therefore great for workloads that can handle interruptions like batch processing jobs, dev/test environments, large compute workloads, and more.

Remark: A spot node pool can't be the cluster's default node pool. A spot node pool can only be used for a secondary pool.

Pricing for a spot node pool

Pricing for spot instances is variable, based on region and SKU. For more information, see pricing for Linux and Windows. You also have the option to set a max price. If that price is exceeded, the spot node is evicted from your cluster.

Schedule a deployment to use the spot node pool

A spot node pool has the label kubernetes.azure.com/scalesetpriority:spot and the taint kubernetes.azure.com/scalesetpriority=spot:NoSchedule. We use this information to add a toleration in our deployment.yaml:
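The original snippet isn’t shown here, so this is a minimal sketch of such a toleration (the container name and image are placeholders):

spec:
  containers:
  - name: batchprocessor
    image: batchprocessor:latest
  tolerations:
  - key: "kubernetes.azure.com/scalesetpriority"
    operator: "Equal"
    value: "spot"
    effect: "NoSchedule"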

In case you have multiple spot node pools, you can use a nodeselector to select a specific pool:
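Something along these lines, assuming a spot node pool named 'spotpool01' (the pool name is a placeholder):

spec:
  nodeSelector:
    kubernetes.azure.com/agentpool: spotpool01
  tolerations:
  - key: "kubernetes.azure.com/scalesetpriority"
    operator: "Equal"
    value: "spot"
    effect: "NoSchedule"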

More information

Winget–A package manager for Windows

I’ve been using Chocolatey for a long time as an easy way to get my Windows machine configured with all the software I need. With the release of version 1.1 of the Windows Package Manager (WinGet) I thought it was a good time to give it a try.

Installation

Chances are high that WinGet is already available on your machine. Open a terminal and type winget. If it is available you should see something like this:

If not, the Windows Package Manager is distributed with the App Installer from the Microsoft Store. You can also download and install the Windows Package Manager from GitHub, or just directly install the latest available released version.

Searching for a package

The list of available packages is quite large (more than 2,600 packages in the Windows Package Manager app repository). Just run winget search <SomePackage> to see if the package you are looking for is available there.

For example, let’s search for my favorite git client GitKraken:

PS C:\Users\bawu> winget search gitkraken
Name      Id                Version Source
------------------------------------------
GitKraken Axosoft.GitKraken 8.1.0  winget

For packages inside the Microsoft Store you don’t get a readable Id but a hash value instead:

PS C:\Users\bawu> winget search git
Name                                  Id                                         Version                    Source
-------------------------------------------------------------------------------------------------------------------
Learn Pro GIT                         9NHM1C45G44B                               Unknown                    msstore
My Git                                9NLVK2SL2SSP                               Unknown                    msstore
GitCup                                9NBLGGH4XFHP                               Unknown                    msstore
GitVine                               9P3BLC2GW78W                               Unknown                    msstore
GitFiend                              9NMNKLTSZNKC                               Unknown                    msstore
GitIt                                 9NBLGGH40HV7                               Unknown                    msstore
GitHub Zen                            9NBLGGH4RTK3                               Unknown                    msstore
GitLooker                             9PK6TGX9T87P                               Unknown                    msstore
Bhagavad Gita                         9WZDNCRFJCV5                               Unknown                    msstore
Git                                   Git.Git                                    2.33.1                     winget
GitNote                               zhaopengme.gitnote                         3.1.0         Tag: git     winget
Agent Git                             Xidicone.AgentGit                          1.85          Tag: Git     winget
TortoiseSVN                           TortoiseSVN.TortoiseSVN                    1.14.29085    Tag: git     winget
TortoiseGit                           TortoiseGit.TortoiseGit                    2.12.0.0      Tag: git     winget

Installing a package

After you have found the package you want, installing it is as easy as invoking the following command:

winget install --id <SomePackage>

Of course the real fun starts when you create a script that contains all the packages you need for your day-to-day work. Here is the script I’m using:
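My full script isn’t reproduced here; a trimmed-down sketch looks like this (swap in the package ids you need; the -e flag requires an exact id match):

# install-tools.ps1
winget install --id Git.Git -e
winget install --id Axosoft.GitKraken -e
winget install --id Microsoft.VisualStudioCode -e
winget install --id Microsoft.PowerToys -e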

Tuesday, October 19, 2021

ASP.NET Core–Running Swagger behind a reverse proxy

Today I helped out a colleague who was struggling with the following use case:

We have an ASP.NET Core Web API with OpenAPI (Swagger) integration enabled. This ASP.NET Core Web API runs behind a reverse proxy (Yarp, in case you want to know) and isn’t directly accessible.

To explain the problem he had, let’s start from the following situation:

When browsing to the Swagger endpoint on https://localhost/proxy/swagger, we got the following:

Do you notice the application URL in the address bar vs the URL in the Servers dropdown?

If we try to invoke a specific endpoint through the Swagger UI, Swagger tries to make the call directly to http://internalname:5000, which results in a 404 error as the service is not available directly on this address.

It seems that Swagger doesn’t respect the Forwarded headers as provided through Yarp.

We fixed it by explicitly reading out the X-Forwarded-Host value and using it to fill up the servers dropdown:
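The original fix isn’t shown, but the idea is something like this (the '/proxy' path prefix is an assumption based on the URL above):

// requires Microsoft.OpenApi.Models and System.Collections.Generic
app.UseSwagger(options =>
{
    options.PreSerializeFilters.Add((swaggerDoc, httpRequest) =>
    {
        // Use the X-Forwarded-Host value (set by Yarp) to build the server url
        if (httpRequest.Headers.TryGetValue("X-Forwarded-Host", out var forwardedHost))
        {
            swaggerDoc.Servers = new List<OpenApiServer>
            {
                new OpenApiServer { Url = $"https://{forwardedHost}/proxy" }
            };
        }
    });
});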

Monday, October 18, 2021

Internal.Cryptography.CryptoThrowHelper+WindowsCryptographicException–The system cannot find the file specified

After deploying an application to production, it failed with the following error message:

The system cannot find the file specified

Internal.Cryptography.CryptoThrowHelper+WindowsCryptographicException:
   at Internal.Cryptography.Pal.CertificatePal.FilterPFXStore (System.Security.Cryptography.X509Certificates, Version=5.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a)
   at Internal.Cryptography.Pal.CertificatePal.FromBlobOrFile (System.Security.Cryptography.X509Certificates, Version=5.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a)
   at System.Security.Cryptography.X509Certificates.X509Certificate..ctor (System.Security.Cryptography.X509Certificates, Version=5.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a)
   at System.Security.Cryptography.X509Certificates.X509Certificate2..ctor (System.Security.Cryptography.X509Certificates, Version=5.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a)

Let’s take a look at the code that caused this issue:
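The original snippet isn’t shown; roughly, it was something like this (the path and password are placeholders coming from configuration):

// using System.Security.Cryptography.X509Certificates;
var certificate = new X509Certificate2(@"C:\certificates\example.pfx", password);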

Not much that can go wrong in this code. It tries to load a PFX file from disk and open it using the specified password.

I checked if the file was there and that was indeed the case. So we have to search for the root cause somewhere else.

I took a look at the Application Pool settings and noticed that the Load User Profile setting was set to False.

Aha! After changing this back to True the error went away. This makes sense: importing a PFX requires access to the user-level key store, which isn’t available when the user profile isn’t loaded.

Friday, October 15, 2021

DaprCon is coming on October 19th-20th 2021

What is Dapr?

Dapr helps developers build event-driven, resilient distributed applications. Whether on-premises, in the cloud, or on an edge device, Dapr helps you tackle the challenges that come with building microservices and keeps your code platform agnostic.

The Dapr ecosystem keeps growing, and now they’ll have their first virtual conference, ‘DaprCon’, next week. DaprCon will include a variety of content including a keynote, technical sessions, panel discussions and real-world experiences of adopters building with Dapr.

DaprCon is a fully virtual event that will be streamed on YouTube and attendance is free! To watch the live events just follow these two links:

More information: https://blog.dapr.io/posts/2021/10/05/join-us-for-daprcon-october-19th-20th-2021/

Thursday, October 14, 2021

MassTransit–Message versioning

MassTransit dedicated a whole documentation page to message versioning but it still wasn’t completely clear to me how it worked.

Let’s use this blog post to see what’s going on…

Publishing messages

Let’s first focus on the sending side.

Publishing a first version of our message contract

We’ll start with a first version of our message contract:
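The contract itself isn’t shown anymore; based on the message urns further down, it would look something like this (the OrderId property is an assumption):

namespace Messages
{
    public interface SubmitOrder
    {
        Guid OrderId { get; }
    }
}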

Let’s send this to RabbitMQ using:
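A minimal sketch of the publish call:

await publishEndpoint.Publish<SubmitOrder>(new
{
    OrderId = Guid.NewGuid()
});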

Time to open the RabbitMQ Management portal and take a look how the message payload is constructed:

Creating a second version of our message contract

Let’s introduce a v2 version of our message contract:
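Again a sketch; the extra OrderDate property is an assumption:

namespace Messages
{
    public interface SubmitOrderV2
    {
        Guid OrderId { get; }
        DateTime OrderDate { get; }
    }
}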

If we send it to RabbitMQ in the same way:

There isn’t such a big difference when comparing the payloads:

The ‘messageType’ value is different and, of course, so is the message itself. But that’s it.

Send a backwards compatible version

Let’s now construct a message that implements both contracts:
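Based on the 'Sender:Program+SubmitOrderCommand' urn below, this was a class nested inside the Program class, roughly like this:

class SubmitOrderCommand : SubmitOrder, SubmitOrderV2
{
    public Guid OrderId { get; set; }
    public DateTime OrderDate { get; set; }
}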

And send that one:

If we now check the payload, we see that 1 message is put on the queue with the following payload:

Take a look at the ‘messageType’. You can see that it contains both message contracts AND the concrete message type we created:

"messageType": [

"urn:message:Sender:Program+SubmitOrderCommand",
"urn:message:Messages:SubmitOrder",
"urn:message:Messages:SubmitOrderV2"
],

Consuming messages

Now that we have a good understanding of what is going on at the sending side, let’s move on to the consuming side.

Consuming v1 of our message contract

Let’s create a consumer that consumes v1 of our message contract:
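A sketch of such a consumer (its output matches what we’ll see below):

class SubmitOrderConsumer : IConsumer<SubmitOrder>
{
    public Task Consume(ConsumeContext<SubmitOrder> context)
    {
        Console.WriteLine("Old Order consumed");
        return Task.CompletedTask;
    }
}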

And subscribe this consumer:

After publishing a ‘SubmitOrder’ message, our consumer is called as expected.

> Old Order consumed

Consuming v2 of our message contract

Let’s create a consumer that consumes v2 of our message contract:
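And a sketch of the v2 consumer:

class SubmitOrderV2Consumer : IConsumer<SubmitOrderV2>
{
    public Task Consume(ConsumeContext<SubmitOrderV2> context)
    {
        Console.WriteLine("New Order consumed");
        return Task.CompletedTask;
    }
}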

And subscribe this consumer:

After publishing a ‘SubmitOrderV2’ message, our consumer is called as expected.

> New Order consumed

So far nothing special.

Consuming the backwards compatible version

The question is what happens when we send our ‘SubmitOrderCommand’ that implements both message contracts.

If we have only one consumer subscribed, the behavior is completely the same as before and either the old or the new consumer is called.

But if we have both consumers subscribed:

Each one will get a copy of the message and be called:

> Old Order consumed
> New Order consumed

Ok, that is good to know. But what happens if one of the consumers now fails?

Although the first consumer is called successfully, the message will still end up on the error queue:

If we then move the message back to the original queue, both consumers will be called again.

Wednesday, October 13, 2021

.NET 6.0 - Migrate to ASP.NET Core 6

.NET 6 introduces a new hosting model for ASP.NET Core applications. This model is streamlined and reduces the amount of boilerplate code required to get a basic ASP.NET Core application up and running.

There are a lot of blog posts out there explaining this new hosting model, but I like to share the guide written by David Fowler, software architect on the ASP.NET team.

He walks you through the building blocks, explains the differences in the hosting model, shares a list of frequently asked questions and provides you a cheatsheet.

 

Tuesday, October 12, 2021

ASP.NET Core - InvalidOperationException: 'VaryByQueryKeys' requires the response cache middleware.

Yes, I’m still rewriting an existing ASP.NET Web API application to ASP.NET Core.

I ported an existing action method where I was using response caching. I previously ported other action methods that used caching, but this one was a little bit different because I had to take the query string values into account for the caching.

This is easily arranged by specifying the name of the query string parameter using the VaryByQueryKeys property on the ResponseCache attribute.
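For example (the parameter name and repository are placeholders):

[HttpGet]
[ResponseCache(Duration = 60, VaryByQueryKeys = new[] { "id" })]
public IActionResult Get(int id)
{
    return Ok(_repository.GetById(id));
}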

Small tip: If you want to take all query string parameters into account, you can use “*” as a single value.

When I tried to call this method I got a 500 error. Inside the logs I noticed the following error message:

InvalidOperationException: 'VaryByQueryKeys' requires the response cache middleware.

Although caching seemed to work for other scenarios, when I used the VaryByQueryKeys property I had to add the response caching middleware.

Here is my Startup.ConfigureServices():
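(Reconstructed and trimmed to the caching-related part.)

public void ConfigureServices(IServiceCollection services)
{
    services.AddControllers();
    services.AddResponseCaching();   // this line fixes the InvalidOperationException
}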

And my Startup.Configure():
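(Again reconstructed; the middleware must be added before the endpoints are mapped.)

public void Configure(IApplicationBuilder app)
{
    app.UseRouting();

    app.UseResponseCaching();

    app.UseEndpoints(endpoints => endpoints.MapControllers());
}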

Monday, October 11, 2021

ASP.NET Core–Route arguments are case sensitive

I’m currently rewriting an existing ASP.NET Web API application to ASP.NET Core. Along the way I encountered some issues; here is one specific lesson I learned…

After porting an ASP.NET Web API controller to .NET Core, a specific action method looked like this:

Looked OK to me. But when I tried to invoke this action method, I got a 400 error back:

{
  "type": "https://tools.ietf.org/html/rfc7231#section-6.5.1",
  "title": "One or more validation errors occurred.",
  "status": 400,
  "traceId": "00-3c02d7d0541cbb49a1790b11a71d871a-885c9350d2c4864c-00",
  "errors": {
    "ID": [
      "The value '{id}' is not valid."
    ]
  }
}
It wasn’t immediately obvious to me what I did wrong. Maybe you spot my mistake?

Let’s have a look at the route attribute: [HttpGet(“{id}”)]

and at the method argument: GetAuthTypeById(int ID)

As you can see, the ‘id’ argument in the route is lowercase while the method argument is uppercase. It turns out that this is the cause of the error above.

To make it work, I have to make sure that both the route attribute and the method argument use the exact same casing:
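A sketch of the corrected action method (the return type and repository are placeholders):

[HttpGet("{id}")]
public ActionResult<AuthType> GetAuthTypeById(int id)
{
    return _authTypeRepository.GetById(id);
}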

Friday, October 8, 2021

ASP.NET Core - Swashbuckle.AspNetCore.SwaggerGen.SwaggerGeneratorException: Ambiguous HTTP method for action

I’m currently rewriting an existing ASP.NET Web API application to ASP.NET Core.

I ported a controller from ASP.NET Web API to ASP.NET Core. Here is the end result:

Everything looked OK at first sight, but when I tried to run the application I got the following error message:

Swashbuckle.AspNetCore.SwaggerGen.SwaggerGeneratorException: Ambiguous HTTP method for action - Controllers.AuthTypeController.GetAll (VLM.IAM.Services). Actions require an explicit HttpMethod binding for Swagger/OpenAPI 3.0

   at Swashbuckle.AspNetCore.SwaggerGen.SwaggerGenerator.GenerateOperations(IEnumerable`1 apiDescriptions, SchemaRepository schemaRepository)

   at Swashbuckle.AspNetCore.SwaggerGen.SwaggerGenerator.GeneratePaths(IEnumerable`1 apiDescriptions, SchemaRepository schemaRepository)

   at Swashbuckle.AspNetCore.SwaggerGen.SwaggerGenerator.GetSwagger(String documentName, String host, String basePath)

   at Swashbuckle.AspNetCore.Swagger.SwaggerMiddleware.Invoke(HttpContext httpContext, ISwaggerProvider swaggerProvider)

   at Microsoft.AspNetCore.Server.IIS.Core.IISHttpContextOfT`1.ProcessRequestAsync()

The Swagger/OpenAPI middleware requires that you explicitly annotate all your action methods. So I had to update the GetAll action method and annotate it with a [HttpGet] attribute:
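A sketch of the annotated method (the repository name is a placeholder):

[HttpGet]
public ActionResult<IEnumerable<AuthType>> GetAll()
{
    return Ok(_authTypeRepository.GetAll());
}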

Thursday, October 7, 2021

ASP.NET Core - ActionResult doesn’t work with IList

I’m currently rewriting an existing ASP.NET Web API application to ASP.NET Core. Yesterday I blogged about the use of ActionResult<T> to combine the type safety of typed controllers with the list of out-of-the-box action results.

While introducing the ActionResult<T> class everywhere, I stumbled over one use case where I got a compiler error. Here is the specific code:
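A reconstruction of the failing code (type and repository names are placeholders):

[HttpGet]
public ActionResult<IList<AuthType>> GetAll()
{
    IList<AuthType> authTypes = _authTypeRepository.GetAll();
    return authTypes;   // CS0029
}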

When you try to compile this code in Visual Studio it fails with the following error message:

CS0029: Cannot implicitly convert type ‘System.Collections.Generic.IList<T>’ to ‘Microsoft.AspNetCore.Mvc.ActionResult<System.Collections.Generic.IList<T>>’.

The reason is because C# doesn't support implicit cast operators on interfaces. Consequently, we need to convert the interface to a concrete type if we want to use it as our type argument for ActionResult<T>.

An easy fix is to wrap the IList in a concrete type as I did in the example below:
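(Same placeholder names as above.)

[HttpGet]
public ActionResult<IList<AuthType>> GetAll()
{
    IList<AuthType> authTypes = _authTypeRepository.GetAll();
    return new List<AuthType>(authTypes);   // concrete type, so the implicit conversion works
}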

Wednesday, October 6, 2021

ASP.NET Core - How to handle the ‘NotFound’ use case when using a typed controller

I’m currently rewriting an existing ASP.NET Web API application to ASP.NET Core. While doing that, I (re)discovered some ASP.NET Core features I forgot they existed.

One of the features that ASP.NET Core offers is ‘typed controllers’. I don’t think that is the official name, but you’ll know what I mean when you take a look at the example below:
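A minimal sketch (the TodoItem type and the EF Core context are assumptions):

[HttpGet("{id}")]
public async Task<TodoItem> GetTodoItem(long id)
{
    return await _context.TodoItems.FindAsync(id);
}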

In the example above I created an action method GetTodoItem that returns a typed object of type ‘TodoItem’ (or to be even more correct, a Task<TodoItem>). This makes your life as a developer easy as you don’t have to think about the ins and outs of ASP.NET. It almost feels like this action method is exactly the same as any other method you’ll find on a class.

But what if I wanted to return a 404 message when no TodoItem could be found for the specified id? Should I throw a specific exception? In ASP.NET Web API this was possible through the use of the HttpResponseException.

In ASP.NET Core there is a better alternative through the usage of ActionResult<T>. By wrapping the returned object in an ActionResult I can keep most of my code (thanks to the magic of implicit cast operators) and start using ActionResult types for specific use cases.

Let’s rewrite our example:
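(Same assumptions as above.)

[HttpGet("{id}")]
public async Task<ActionResult<TodoItem>> GetTodoItem(long id)
{
    var todoItem = await _context.TodoItems.FindAsync(id);

    if (todoItem == null)
    {
        return NotFound();   // returns a 404
    }

    return todoItem;   // implicitly converted to ActionResult<TodoItem>
}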

Tuesday, October 5, 2021

GraphQL - Use @skip and @include on fragments

GraphQL has multiple built-in directives like @deprecated, @include, @skip and some others like @stream and @defer that are not part of the official spec (yet).

With the @include directive you can conditionally include fields based on a specified argument:
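For example (assuming a Star Wars-style schema):

query Hero($withFriends: Boolean!) {
  hero {
    name
    friends @include(if: $withFriends) {
      name
    }
  }
}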

The @skip directive does exactly the opposite and excludes fields based on a specified argument:
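(Same assumed schema.)

query Hero($skipFriends: Boolean!) {
  hero {
    name
    friends @skip(if: $skipFriends) {
      name
    }
  }
}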

But applying these directives for every field that should be included or excluded feels like a lot of repetition:
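Something like this:

query Hero($withDetails: Boolean!) {
  hero {
    name
    height @include(if: $withDetails)
    mass @include(if: $withDetails)
    homePlanet @include(if: $withDetails)
  }
}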

Here is a trick for you: the @skip and @include directives can be used on fields, fragment spreads, and inline fragments. This can help us make our GraphQL queries more readable.

Here is the example rewritten using an inline fragment:
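Now the directive is applied only once:

query Hero($withDetails: Boolean!) {
  hero {
    name
    ... @include(if: $withDetails) {
      height
      mass
      homePlanet
    }
  }
}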

And here is the same example with the fragment spread into the query:
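Using a named fragment instead:

query Hero($withDetails: Boolean!) {
  hero {
    name
    ...HeroDetails @include(if: $withDetails)
  }
}

fragment HeroDetails on Hero {
  height
  mass
  homePlanet
}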

Monday, October 4, 2021

Azure DevOps–Emojis

Both in pull request comments and wiki pages in Azure DevOps, you can use emojis while documenting, adding comments or reviewing requests.

If you want to find out which emojis are supported, you can find the full list here.

 

Friday, October 1, 2021

Application Insights Telemetry enricher for Kubernetes

Quick tip if you are hosting your ASP.NET Core application in Kubernetes: have a look at the Microsoft Application Insights for Kubernetes NuGet package.

Through this package you can enrich your Application Insights telemetry data with Kubernetes specific information:

After adding the NuGet package you can register the enricher by adding the following code:
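A minimal sketch for an ASP.NET Core app:

public void ConfigureServices(IServiceCollection services)
{
    services.AddApplicationInsightsTelemetry();
    services.AddApplicationInsightsKubernetesEnricher();
}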

More information: https://github.com/microsoft/ApplicationInsights-Kubernetes

Thursday, September 30, 2021

C# 10 - Implicit (global) usings

Yesterday I thought that my journey in discovering the possibilities of the ‘using’ keyword had ended, but further reading introduced me to a new .NET 6 feature: implicit (global) using directives.

You can enable this feature by adding a <ImplicitUsings>enable</ImplicitUsings> setting inside a <PropertyGroup> in your .csproj file.
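For example:

<PropertyGroup>
  <TargetFramework>net6.0</TargetFramework>
  <ImplicitUsings>enable</ImplicitUsings>
</PropertyGroup>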

Remark: The project templates in the .NET 6 SDK include this setting by default.

Enabling this feature will automatically generate a global usings file for you. You can see the generated file by looking inside the obj folder that gets created when you build a project. In there you’ll find a subfolder named for your build configuration (e.g. Debug, Release, ...) containing a net6.0 folder. Inside there you’ll find a file called something like ExampleApp.GlobalUsings.g.cs.

The content of this file will look something like this:
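For a console application it contains these global using directives:

// <auto-generated/>
global using global::System;
global using global::System.Collections.Generic;
global using global::System.IO;
global using global::System.Linq;
global using global::System.Net.Http;
global using global::System.Threading;
global using global::System.Threading.Tasks;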

The set of included namespaces changes according to the project type.

You can control what is generated inside this file by adding an MSBuild ItemGroup. To add namespaces you should use <Using Include />:
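A minimal example (the namespace here is arbitrary):

<ItemGroup>
  <Using Include="System.Net.Http.Json" />
</ItemGroup>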

And to remove namespaces you can use <Using Remove />:
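Again with an arbitrary namespace:

<ItemGroup>
  <Using Remove="System.Net.Http" />
</ItemGroup>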

Wednesday, September 29, 2021

C# 10–Global using

I continue my journey in discovering the possibilities of the ‘using’ keyword and arrive at an upcoming feature in C# 10: global using directives.

C# 10.0 allows you to define using directives globally, so that you don’t have to write them in every file. Let’s take a look at a simple example.

I created a GlobalUsings.cs file in my project and added the following 2 lines:
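The exact namespaces aren’t shown in the original post anymore; something like:

global using System;
global using System.Collections.Generic;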

By doing this the 2 namespaces above are available in every C# file in my project.

Remark: In this example I’ve put my global usings in a separate file. This isn’t necessary; you can put the global usings in any code file, although I would recommend isolating them to make them easier to discover and maintain.

Now I can simplify my Program.cs file:

You can combine this feature with the using static as well:
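For example:

global using static System.Console;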

This allows me to even further simplify my Program.cs file:

More information: https://docs.microsoft.com/en-us/dotnet/csharp/language-reference/keywords/using-directive#global-modifier

Tuesday, September 28, 2021

C#–Using static

While writing my blog post yesterday about class aliases, I was reading through the documentation about the using keyword and noticed another feature I almost forgot: ‘using static’.

Through ‘using static’ you can import static members of types directly into the current scope. This can save you some extra typing work if you need a specific static class over and over again.

An example:
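A small sketch:

using System;

class Program
{
    static void Main()
    {
        Console.WriteLine("What's your name?");
        var name = Console.ReadLine();
        Console.WriteLine($"Hello, {name}!");
    }
}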

In the code above I have to repeat the ‘Console’ static class for each call. Now let’s rewrite this example with the help of ‘using static’:
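The same sketch, rewritten:

using static System.Console;

class Program
{
    static void Main()
    {
        WriteLine("What's your name?");
        var name = ReadLine();
        WriteLine($"Hello, {name}!");
    }
}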

More information: https://docs.microsoft.com/en-us/dotnet/csharp/language-reference/keywords/using-directive#static-modifier

Monday, September 27, 2021

Confuse your colleagues by using class aliases in C#

C# has supported the concept of namespace aliases for a long time. This allows you to create an alias for long namespaces so you don’t have to specify the whole name every time.

An example:
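Something like this (the namespace and type names are placeholders):

using Reporting = MyCompany.Accounting.Services.Reporting;

// ...

var generator = new Reporting.InvoiceGenerator();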

Did you know that this does not only work for namespaces but also for classes? It is also possible to alias a specific class name.

An example:
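A sketch matching the scenario described below (the namespace is a placeholder):

using Order = MyCompany.Catalog.Product;

// ...

var order = new Order();   // this actually creates a Product instance!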

This feature could be fun if you start using aliases that match class names used somewhere else in your codebase, like in the example above where I assigned the Product class an Order alias. Can’t be any more confusing! So please don’t do this at work and choose a good alias name instead…

Friday, September 24, 2021

Learning the ins and outs of web development through web.dev

If you are new to web development or want to further expand your web development skills, go check out all the learning material provided at web.dev.

At the moment of writing this post, you can find courses about CSS, performance, web frameworks, and much more…

Monday, September 20, 2021

Azure DevOps–Export wiki to PDF

If you ever want to export your Azure DevOps wiki, you can use the AzureDevOps.WikiPDFExport tool.

  • Start by cloning your Azure DevOps wiki to a local folder.
  • Then run the azuredevops-export-wiki tool from inside that folder to generate the PDF.

Troubleshooting

While executing the tool I encountered some issues so here are some troubleshooting tips.

Tip 1 – Ignore the Qt: Could not initialize OLE (error 80010106) warning

If you see a Qt: Could not initialize OLE (error 80010106) warning while executing the tool, don’t worry. You can just ignore this message.

Tip 2 – Be aware about the .order file

The first time I executed the tool, only a few of my wiki pages were included. It turned out that when the tool detects a .order file, it will only include the pages mentioned inside that .order file.

So make sure that all wiki pages are mentioned inside the .order file.

Tip 3 – Specify the ‘--attachments-path’ option when your wiki pages include images

When trying to export a wiki page containing images it failed with the following error message:

C:\example\Docs>azuredevops-export-wiki

    WARN: Removing Table of contents [[_TOC_]] from pdf

ERR: Something bad happend.

System.ArgumentNullException: Value cannot be null. (Parameter 'path1')

   at System.IO.Path.Combine(String path1, String path2)

   at azuredevops_export_wiki.WikiPDFExporter.CorrectLinksAndImages(MarkdownObject document, FileInfo file, MarkdownFile mf) in D:\Git\WikiPDFExport\AzureDevOps.WikiPDFExport\WikiPDFExporter.cs:line 674

   at azuredevops_export_wiki.WikiPDFExporter.ConvertMarkdownToHTML(List`1 files) in D:\Git\WikiPDFExport\AzureDevOps.WikiPDFExport\WikiPDFExporter.cs:line 497

   at azuredevops_export_wiki.WikiPDFExporter.Export() in D:\Git\WikiPDFExport\AzureDevOps.WikiPDFExport\WikiPDFExporter.cs:line 149

To fix this error I had to include the ‘--attachments-path’ option and explicitly specify the location of the images:

C:\example\Docs>azuredevops-export-wiki --attachments-path C:\example\Docs\.attachments

Tip 4 – Use the ‘-h’ option for a good TOC

The tool doesn’t support the TOC (Table of Contents) tag; it will be removed from the PDF. As a workaround you can use the ‘-h’ option to create a heading for every wiki page in the PDF file. This doesn’t replace the feature, but it already helps to make your PDF more accessible.

Friday, September 17, 2021

ASP.NET Core: Share a cookie between subdomains

By default when you create a cookie in ASP.NET Core it is only applicable to that specific subdomain.

For example, a cookie created in subdomain.mydomain.com can not be shared with a second subdomain secondsubdomain.mydomain.com.

To change this behavior, you need to add the following code in Startup.ConfigureServices:
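A minimal sketch, assuming cookie authentication is configured through ASP.NET Core Identity (for plain cookie authentication you can set the same property in the AddCookie options):

public void ConfigureServices(IServiceCollection services)
{
    services.ConfigureApplicationCookie(options =>
    {
        options.Cookie.Domain = ".mydomain.com";   // note the leading dot
    });
}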

By specifying a common domain in the Cookie.Domain property, the cookie will be shared between subdomain.mydomain.com and secondsubdomain.mydomain.com.

Thursday, September 16, 2021

Azure App Service Health check

Azure App Service has a built-in health check feature. Through this feature Azure can automatically check the health endpoint of your web app and can take an unhealthy instance out of the load balancer until it's healthy again, or even restart or replace it.

To enable Health check, browse to the Azure portal and select your App Service app.

Under Monitoring, select Health check.

Select Enable and provide a valid URL path on your application, such as /hc or /api/health. Click Save.


Remark: Notice the Load balancing slider. Here you can specify the time an app can be unhealthy before being removed from the load balancer.

Azure only looks at the HTTP response the page gives. If the response is in the 2XX range the instance is considered healthy, otherwise it is shown as degraded or unhealthy. The health check also doesn’t follow redirects. If your web app settings allow HTTP, the check by Azure is done over HTTP. If it is set to HTTPS only, then Azure uses HTTPS calls to check the endpoint. In case your health checks return an HTTP 307 response, make sure you set your web app to only use HTTPS or remove the redirect from your code.

You can further control the Health check behavior through the following app settings:

  • WEBSITE_HEALTHCHECK_MAXPINGFAILURES : The required number of failed requests for an instance to be deemed unhealthy and removed from the load balancer. For example, when set to 2, your instances will be removed after 2 failed pings. (Default value is 10)
  • WEBSITE_HEALTHCHECK_MAXUNHEALTHYWORKERPERCENT : By default, no more than half of the instances will be excluded from the load balancer at one time to avoid overwhelming the remaining healthy instances. For example, if an App Service Plan is scaled to four instances and three are unhealthy, two will be excluded. The other two instances (one healthy and one unhealthy) will continue to receive requests. In the worst-case scenario where all instances are unhealthy, none will be excluded.

More information: Monitor App Services instances using Health Check

Wednesday, September 15, 2021

Model binding in ASP.NET Core

While converting an existing ASP.NET MVC application to ASP.NET Core I noticed it was using a custom model binder. That made me wonder how I could achieve the same thing in ASP.NET Core.

In ASP.NET MVC a model binder had to implement the IModelBinder interface:
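A sketch of such a binder (the DateTime parsing is just an example):

public class CustomDateTimeModelBinder : IModelBinder
{
    public object BindModel(ControllerContext controllerContext, ModelBindingContext bindingContext)
    {
        var value = bindingContext.ValueProvider.GetValue(bindingContext.ModelName);
        return value == null ? null : DateTime.Parse(value.AttemptedValue);
    }
}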

To use the model binder you had to make the ASP.NET MVC framework aware of the custom model binder’s existence. In the original codebase we were using Unity and we created a small extension method that allowed us to register the model binder:

That extension method allowed us to write the following code:

Let’s find out how to achieve the same result in ASP.NET Core…

In ASP.NET Core you should also implement an IModelBinder interface, although the contract is different (and async):
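A sketch of the same binder in ASP.NET Core:

public class CustomDateTimeModelBinder : IModelBinder
{
    public Task BindModelAsync(ModelBindingContext bindingContext)
    {
        var valueProviderResult = bindingContext.ValueProvider.GetValue(bindingContext.ModelName);
        if (valueProviderResult != ValueProviderResult.None)
        {
            bindingContext.Result = ModelBindingResult.Success(DateTime.Parse(valueProviderResult.FirstValue));
        }
        return Task.CompletedTask;
    }
}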

To use this custom model binder you can either create a custom ModelBinderProvider or use the ModelBinder attribute:
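For example, through the attribute:

[HttpGet]
public IActionResult Get([ModelBinder(BinderType = typeof(CustomDateTimeModelBinder))] DateTime from)
{
    return Ok(from);
}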

More information: Custom Model Binding in ASP.NET Core

Tuesday, September 14, 2021

Visual Studio Code–Draw.io extension

While pair programming with a colleague I noticed he was using draw.io directly inside Visual Studio Code. This was made possible by the following extension: https://marketplace.visualstudio.com/items?itemName=hediet.vscode-drawio

It offers some great features like:

  • Using draw.io offline
  • Work together on a diagram using the liveshare feature of VS Code
  • Link diagram elements to code fragments

I’m a fan!

Monday, September 13, 2021

The ‘async void’ trap

A colleague contacted me because he had trouble using a library I created. This library is used to abstract away the underlying mailing system. He told me that when the mail couldn’t be sent successfully, the error message got lost and the library didn’t return anything useful.

I started by running my integration test suite against this library (these are the moments you are extremely happy that you’ve spent so much time on writing tests) but all tests returned green.

I asked the colleague to send me the code he was using. Do you notice what’s wrong?
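A reconstruction of the problem (names are made up):

public async void SendMail(MailMessage message)   // 'async void' instead of 'async Task'
{
    // an exception thrown here is raised on the synchronization context
    // and can never be observed by the caller
    await _mailService.SendAsync(message);
}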

The title of this post already gives you the answer. He was using ‘async void’ instead of ‘async Task’. ‘async void’ should only be used by event handlers.

I recommended using AsyncFixer to avoid this kind of issues in the future.

Friday, September 10, 2021

Azure DevOps - User is not available

I got a question from a colleague who was asking how long it takes before a user that was added to Azure AD is available in Azure DevOps. The answer is simple: it’s instant.

So why was this person asking this question then? The reason is that there is a difference in behavior depending on whether the user has already been added to the Azure DevOps organisation or not.

If the user is not part of the organisation yet, when you search for the user by first name and last name, nothing will be found.

But if you search for the user by his email address, you will find the user.

It is only after the user is added for the first time to an Azure DevOps organisation that you will be able to find a user by his name. Until then you should use the email address to search for the user in Azure AD.

Thursday, September 9, 2021

MassTransit–Message Initializers

Let’s have a look at a small example that shows how to publish a message in MassTransit:
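A sketch, with the message contract included (the properties are assumptions):

public interface OrderSubmitted
{
    Guid OrderId { get; }
    DateTime Timestamp { get; }
}

await publishEndpoint.Publish<OrderSubmitted>(new
{
    OrderId = Guid.NewGuid(),
    Timestamp = DateTime.UtcNow
});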

What is maybe not immediately obvious in the code above is that when you call publishEndpoint.Publish<OrderSubmitted>, a lot of magic is going on.

The generic argument would make you think that the Publish method expects a message of type OrderSubmitted. But we are passing an anonymous object??? And to make it even stranger, we only have the OrderSubmitted interface; we never created a class that implements it???

What is happening? How does this even work? The answer to all this magic is Message Initializers.

When calling the Publish method above, you are using a specific overload:

Task Publish<T>(object values, CancellationToken cancellationToken = default) where T : class;

Message initializers make it easy to produce interface messages using anonymous objects. The message initializer will try to match the values of an anonymous object with the properties of the specified interface. While doing that it can use tricks like type conversions, conversion of list elements, matching nested objects, specifying header values and even using invariant variables (guaranteeing that the same value is used everywhere inside a message).

Check out the documentation to learn more about what is possible through message initializers: https://masstransit-project.com/usage/producers.html#message-initializers

 

Wednesday, September 8, 2021

C# switch expressions

A lot of the switch expression examples you can find are ones where a function maps directly to an expression. For example:
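A small sketch:

public enum CustomerType { Standard, Silver, Gold }

public static string GetDiscount(CustomerType type) => type switch
{
    CustomerType.Gold => "20%",
    CustomerType.Silver => "10%",
    _ => "0%"
};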

This could make you think that you cannot use a switch expression inside a method body. It certainly is possible. The only important thing is that you assign the result of your switch expression to a variable. Let’s rewrite the example above:
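(Using the same enum as above.)

public static void PrintDiscount(CustomerType type)
{
    var discount = type switch
    {
        CustomerType.Gold => "20%",
        CustomerType.Silver => "10%",
        _ => "0%"
    };

    Console.WriteLine($"Your discount: {discount}");
}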

Tuesday, September 7, 2021

Copy files through MSBuild

So far I’ve always used xcopy in a pre or post build event to copy files between projects:
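Something like this (the paths are placeholders):

xcopy "$(ProjectDir)Templates" "$(TargetDir)Templates" /E /I /Y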

But did you know that you don’t need this and that the same can be done with standard MSBuild features? (By the way, the code above doesn’t even work on Linux.)

To achieve the same thing you can use the following MSBuild configuration in your csproj file:
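A sketch with placeholder paths; see the two remarks below:

<ItemGroup>
  <Content Include="..\Shared\Templates\**\*.json" LinkBase="Templates" CopyToOutputDirectory="PreserveNewest" />
</ItemGroup>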

2 important things to notice:

  1. You can use file patterns in the 'Include' to specify a set of files
  2. You can use ‘LinkBase’ to specify a target folder

Monday, September 6, 2021

Avoid concurrent test runs by using XUnit Collection Fixtures

Last week I blogged about XUnit Collection Fixtures. Something I wasn’t fully aware of, but that does make sense if you think about it, is that by using the [Collection] attribute all tests that share the same context run on the same thread. So you don’t need to worry about concurrent test runs.

By default XUnit tests that are part of the same test class will not run in parallel. For example, in the code below Test1 and Test2 will never run at the same time:
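For example:

public class ExampleTests
{
    [Fact]
    public void Test1()
    {
        // ...
    }

    [Fact]
    public void Test2()
    {
        // ...
    }
}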

If you want to achieve the same behavior while having tests in multiple classes, you can put tests in the same collection:
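For example:

[Collection("Integration tests")]
public class FirstTestClass
{
    [Fact]
    public void Test1() { }
}

[Collection("Integration tests")]
public class SecondTestClass
{
    [Fact]
    public void Test2() { }
}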

Friday, September 3, 2021

Troubleshoot Kubernetes deployments

In case your deployments on Kubernetes fail, the following diagram can help:

(It is created by the people from learnk8s who provide Kubernetes training)

A PDF version of this diagram can be found here: https://learnk8s.io/a/a-visual-guide-on-troubleshooting-kubernetes-deployments/troubleshooting-kubernetes.v2.pdf

Thursday, September 2, 2021

XUnit Collection fixtures

While reviewing some XUnit unit tests, I noticed the usage of the [Collection] attribute.

I didn’t know the attribute. So I took a look at the XUnit documentation and discovered the existence of Collection fixtures. It allows you to create a single test context and share it among tests in several test classes, and have it cleaned up after all the tests in the test classes have finished.

In the code I was reviewing it was used to spin up a test server and shut it down after all tests have been completed.

I don’t find it very intuitive how it should be used but it is well explained in the documentation. In case you are too lazy to click on the link, here are the steps:

  • Create the fixture class, and put the startup code in the fixture class constructor. If the fixture class needs to perform cleanup, implement IDisposable on the fixture class, and put the cleanup code in the Dispose() method.
  • Create the collection definition class, decorating it with the [CollectionDefinition] attribute, giving it a unique name that will identify the test collection. Add ICollectionFixture<> to the collection definition class.
  • Add the [Collection] attribute to all the test classes that will be part of the collection, using the unique name you provided to the test collection definition class's [CollectionDefinition] attribute. If the test classes need access to the fixture instance, add it as a constructor argument, and it will be provided automatically.

Wednesday, September 1, 2021

Azure Pipelines - Ubuntu 16.04 LTS environment is deprecated

When trying to run a release pipeline in Azure DevOps, the following warning started to appear for all our builds:

Ubuntu 16.04 LTS environment is deprecated and will be removed on September 20, 2021. Migrate to ubuntu-latest instead.

To get rid of this warning, we had to explicitly specify the vmImage property inside our YAML build template:
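For example:

pool:
  vmImage: 'ubuntu-latest'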

More information: https://docs.microsoft.com/en-us/azure/devops/pipelines/yaml-schema?view=azure-devops&tabs=schema%2Cparameter-schema#pool

Tuesday, August 31, 2021

Visual Studio Solution Filters

As I do a lot of software audits for customers, I encounter a large variation of codebases: some really small (a few projects), some really big (hundreds of projects in one solution).

The question is: how can you avoid Visual Studio becoming awfully slow when you have such a large solution? Enter Visual Studio Solution Filters…

What are Visual Studio Solution Filters?

Visual Studio Solution Filters allow you to selectively load a subset of the projects in a solution. Typically, when working in a large solution you don’t need all projects. With a solution filter (.slnf) you can save a subset of projects that you want to load.

How to create a Visual Studio Solution Filter? 

Let me share how to create a solution filter:

  • Open Visual Studio.
  • Click on the Open a project or solution option.

  • In the file dialog, select the solution you want to load. Don’t forget to check the Do not load projects checkbox in the bottom of the dialog before hitting Open.

  • The solution is opened without loading any of the projects.

  • Right click on the projects you want to load and choose Reload project or Reload project with dependencies.

  • Once you are done loading the projects you need, you can save the solution filter by right-clicking on the solution file and choosing the Save as Solution filter option from the context menu.

Monday, August 30, 2021

Let’s Learn .NET : F#

Let’s Learn .NET is a monthly series of beginner courses to teach the basics of the .NET ecosystem.

The July edition covers the fundamentals of F# in 2 hours. If you want to get started with Functional Programming in F#, this is a good introduction to the language.

Friday, July 30, 2021

AD FS Help

ADFS is great as long as it works. But when you get into trouble, all the help you can find is welcome…

While investigating an ADFS issue, I found the AD FS Help website:

This website combines a list of both online and offline tools that can help you configure, customize and troubleshoot your ADFS instance.

Thursday, July 29, 2021

GraphQL HotChocolate 11 - Updated Application Insights monitoring

A few months ago, I blogged about integrating Application Insights in HotChocolate to monitor the executed GraphQL queries and mutations. In HotChocolate 11, the diagnostics system has been rewritten and the code I shared in that post no longer works. Here is finally the updated post I promised on how to achieve this in HotChocolate 11.

Creating our own DiagnosticEventListener

  • Starting from HotChocolate 11, your diagnostic class should inherit from DiagnosticEventListener. We still inject the Application Insights TelemetryClient in the constructor:

Remark: There seems to be a problem with dependency injection in HotChocolate 11. This problem was fixed in HotChocolate 12. I show you a workaround at the end of this article.

  • In this class you need to override at least the ExecuteRequest method:
  • To track the full lifetime of a request, we need to implement a class that implements the IActivityScope interface. In the constructor of this class, we put all our initialization logic:
  • I'm using 2 small helper methods GetHttpContextFrom and GetOperationIdFrom:
  • In the Dispose() method, we clean up all resources and send the telemetry data to Application Insights for this request:
  • Here we are using 1 extra helper method HandleErrors:

Here is the full code:

Configuring the DiagnosticEventListener

Before we can use this listener, we need to register it in our Startup.cs file:
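A sketch of the registration, using the factory overload mentioned in the remark below (the listener class name is the one we created above):

services
    .AddGraphQLServer()
    .AddDiagnosticEventListener(sp =>
        new ApplicationInsightsDiagnosticEventListener(
            sp.GetRequiredService<TelemetryClient>()));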

Remark: Notice that we are using an overload of the AddDiagnosticEventListener method to resolve and pass the TelemetryClient instance to the listener. If you don’t use this overload, nothing gets injected and you end up with an error. As mentioned, this code only works starting from HotChocolate 12. For HotChocolate 11, check out the workaround below.

Workaround for HotChocolate 11

In HotChocolate 11, when you try to use the ServiceProvider instance inside the listener, you don’t get access to the application-level services. This means that you cannot resolve the TelemetryClient. As a hack we can build an intermediate ServiceProvider instance and use that instead:

Wednesday, July 28, 2021

dotnet monitor–Run as a sidecar in a Kubernetes cluster–Part 3

Small update about the post from yesterday: if your Kubernetes cluster is not exposed publicly, you can also choose to just disable the security check by adding the ‘--no-auth’ argument.

Here is the updated yaml file:
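A trimmed sketch of the sidecar container definition (the image tag and port are assumptions):

- name: monitor
  image: mcr.microsoft.com/dotnet/monitor:5.0
  args: ["--urls", "http://+:52323", "--no-auth"]
  ports:
  - containerPort: 52323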

Tuesday, July 27, 2021

dotnet monitor–Run as a sidecar in a Kubernetes cluster–Part 2

Last week I blogged about how you can run dotnet monitor as a sidecar in your Kubernetes cluster. Although the yaml file I shared worked on my local cluster (inside Minikube), it didn’t work when I tried to deploy it to AKS. Nothing happened when I tried to connect to the specified URLs.

To fix this I had to take multiple steps:

  • First I had to explicitly set the ‘--urls’ argument inside the manifest:
  • Now I was able to connect to the URL but it still failed. When I took a look at the logs I noticed the following message:

{"Timestamp":"2021-07-27T18:48:29.6522095Z","EventId":7,"LogLevel":"Information","Category":"Microsoft.Diagnostics.Tools.Monitor.ApiKeyAuthenticationHandler","Message":"MonitorApiKey was not authenticated. Failure message: API key authentication not configured.","State":{"Message":"MonitorApiKey was not authenticated. Failure message: API key authentication not configured.","AuthenticationScheme":"MonitorApiKey","FailureMessage":"API key authentication not configured.","{OriginalFormat}":"{AuthenticationScheme} was not authenticated. Failure message: {FailureMessage}"},"Scopes":[{"Message":"ConnectionId:0HMAH5RL3D6BM","ConnectionId":"0HMAH5RL3D6BM"},{"Message":"RequestPath:/processes RequestId:0HMAH5RL3D6BM:00000001, SpanId:|fe3ec0c2-46980a5b9b2602e2., TraceId:fe3ec0c2-46980a5b9b2602e2, ParentId:","RequestId":"0HMAH5RL3D6BM:00000001","RequestPath":"/processes","SpanId":"|fe3ec0c2-46980a5b9b2602e2.","TraceId":"fe3ec0c2-46980a5b9b2602e2","ParentId":""}]}

  • We need to create an API key secret and mount it as a volume to our sidecar. Here is the code to generate a secret:
kubectl create secret generic apikey \
  --from-literal=ApiAuthentication__ApiKeyHash=$hash \
  --from-literal=ApiAuthentication__ApiKeyHashType=SHA256 \
  --dry-run=client -o yaml \
  | kubectl apply -f -
  • Now we need to mount the secret as a volume. Here is the updated manifest:

If you want to learn more, I can recommend the following video as a good introduction:

Monday, July 26, 2021

NDepend–DDD Rule

A few weeks ago I was contacted by Patrick Smacchia, the creator of NDepend, asking if I would check out the latest edition of their code quality tool. As I had an upcoming software audit assignment planned, I thought it would be a great occasion to see what NDepend brings to the table and how it can help me improve my understanding of an unfamiliar codebase.

NDepend offers a lot of rules that are evaluated against your code. These rules can help you identify all kinds of issues in your code.

If you want to learn more about this feature check out the following video:

 

All these rules are created using CQLinq (the code query language of NDepend) and can be customized to your needs (and the specifics of your project).

One rule that got my interest was the ‘DDD - ubiquitous language check’. This rule allows you to check if the correct domain language terms are used. It is disabled by default (because it should be updated to reflect your domain language).

Let’s see how to update this rule and enable it:

  • Open up the Queries and Rules explorer in Visual NDepend:
  • Browse to the Naming Conventions section:
  • On the right you’ll find the DDD rule in the list of rules

  • Check the checkbox to enable the rule

  • Click on the rule to open the edit window. Here you can update the CQLinq query to make it correspond with the ubiquitous language of your domain

  • After changing the rule, click on the ‘Save’ icon to start using the updated rules

Friday, July 23, 2021

AKS–Limit ranges

Last week, we got into problems when booting up our AKS cluster (we shut the development cluster down every night to save costs). Instead of green lights, our Neo4J database refused to run. In the logs, we noticed the following error message:

ERROR Invalid memory configuration - exceeds physical memory.

Let me share what caused this error.

Maybe you’ve read my article about resource limits in Kubernetes. There I talked about the fact that you can set resource limits at the container level.

What I didn’t mention in the article is that you can also configure default limits at the namespace level through limit ranges.

From the documentation:

A LimitRange provides constraints that can:

  • Enforce minimum and maximum compute resources usage per Pod or Container in a namespace.
  • Enforce minimum and maximum storage request per PersistentVolumeClaim in a namespace.
  • Enforce a ratio between request and limit for a resource in a namespace.
  • Set default request/limit for compute resources in a namespace and automatically inject them to Containers at runtime.

So if you don’t configure resource limits and/or requests at the container level, you can still set it at the namespace level.

This is exactly what we did; here are the limit ranges that are currently in place:
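They looked roughly like this (the 512Mi default limit is the one mentioned below; the default request value is an assumption):

apiVersion: v1
kind: LimitRange
metadata:
  name: default-mem-limits
spec:
  limits:
  - default:
      memory: 512Mi
    defaultRequest:
      memory: 256Mi
    type: Container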

And it is these (default) limits that got our Neo4J instance into trouble. Although enough memory was available in the cluster, the container was by default limited to 512MB, which is insufficient to run our Neo4J cluster. The solution was to change our Helm chart to assign more memory to the Neo4J pods.

When configuring resource limits, settings at the pod/container level always supersede settings at the namespace level.

Thursday, July 22, 2021

Azure Kubernetes Service- Failed to acquire a token

When invoking ‘kubectl’, it failed with the following error message:

PS /home/bart> kubectl apply -f ./example.yaml

E0720 07:58:14.668222     182 azure.go:154] Failed to acquire a token: unexpected error when refreshing token: refreshing token: adal: Refresh request failed. Status Code = '400'. Response body: {"error":"invalid_grant","error_description":"AADSTS700082: The refresh token has expired due to inactivity. The token was issued on 2021-03-31T13:22:18.9100852Z and was inactive for 90.00:00:00.\r\nTrace ID: 68f8e37d-4d18-4e7d-a3e6-b11291831a02\r\nCorrelation ID: 65ee9420-d6f9-4a7c-8214-a82756c7ecc8\r\nTimestamp: 2021-07-20 07:58:14Z","error_codes":[700082],"timestamp":"2021-07-20 07:58:14Z","trace_id":"68f8e37d-4d18-4e7d-a3e6-b11291831a02","correlation_id":"65ee9420-d6f9-4a7c-8214-a82756c7ecc8","error_uri":"https://login.microsoftonline.com/error?code=700082"}

The error is self-explanatory. My refresh token has expired and as a consequence it was not possible to get a new access token.

But how can we fix this?

We need to re-invoke the az aks get-credentials command. You’ll have to authenticate again, after which the credentials will be downloaded and made available to the Kubernetes CLI.

az aks get-credentials --resource-group myResourceGroup --name myAKSCluster