Monday, December 31, 2018

ASP.NET Error - The specified task executable "csc.exe" could not be run.

When trying to build an ASP.NET web application I got from a colleague, the build failed with the following error message:

The specified task executable "csc.exe" could not be run. Could not load file or assembly 'System.Security.Principal.Windows, Version=, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a' or one of its dependencies.

I was able to fix the problem by updating the Microsoft.CodeDom.Providers.DotNetCompilerPlatform NuGet package to the latest version:

  <package id="Microsoft.CodeDom.Providers.DotNetCompilerPlatform" version="2.0.1" targetFramework="net471" />
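From the NuGet Package Manager Console, the update is a one-liner (newer versions may exist by the time you read this):

```
Update-Package Microsoft.CodeDom.Providers.DotNetCompilerPlatform
```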

Wednesday, December 26, 2018

Adding GraphQL middleware when using ASP.NET Core

Enabling a GraphQL endpoint in your ASP.NET Core application is quite easy thanks to the GraphQL.Server.Transports.AspNetCore NuGet package.

One of the nice features of GraphQL.NET is that you can extend your resolvers using custom middleware. In the GraphQL.NET documentation they refer to the FieldMiddleware property to register this extra middleware:

Unfortunately, when using the GraphQL.Server package you only get a GraphQLOptions object, and the only thing you can do there is set a SetFieldMiddleware flag:

To register your own middleware you have to jump through some extra hoops:

  • Create a custom GraphQLExecutor and override the GetOptions method:
  • Register this Executor instance in your IoC container (here I'm using StructureMap):
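As a rough sketch of what such an executor could look like — the type and method signatures below are assumptions based on the steps above, and may differ between GraphQL.Server versions:

```csharp
// Hypothetical sketch, not a verified GraphQL.Server API.
// Constructor forwarding to the base class is omitted for brevity.
public class CustomGraphQLExecuter<TSchema> : DefaultGraphQLExecuter<TSchema>
    where TSchema : ISchema
{
    protected override ExecutionOptions GetOptions(
        string operationName, string query, Inputs variables,
        object userContext, CancellationToken cancellationToken)
    {
        var options = base.GetOptions(operationName, query, variables, userContext, cancellationToken);

        // Register the extra field middleware on the execution options:
        options.FieldMiddleware.Use(next => context =>
        {
            // your custom middleware logic goes here
            return next(context);
        });

        return options;
    }
}
```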

Monday, December 24, 2018

ASP.NET Web API–The inner handler has not been assigned

I created the following message handler to add the correlation id to an outgoing HTTP request:

and used the following code to link it to the HttpClient:

This looked OK to me, until I tried to execute the code. It failed horribly with the following error message:

  "message": "An error has occurred.",

  "exceptionMessage": "The inner handler has not been assigned.",

  "exceptionType": "System.InvalidOperationException",

  "stackTrace": "   at System.Net.Http.DelegatingHandler.SetOperationStarted()
   at System.Net.Http.DelegatingHandler.SendAsync(HttpRequestMessage request, CancellationToken cancellationToken)
   at CorrelationId.AddCorrelationIdToRequestHandler.SendAsync(HttpRequestMessage request, CancellationToken cancellationToken)

What was not obvious to me is that you have to specify the next handler in the request pipeline. As I only had one handler added, I had to specify the HttpClientHandler as the next (and final) handler:
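In code this boils down to something like the following — a minimal sketch where the handler name comes from the stack trace above and the header name is an assumption:

```csharp
public class AddCorrelationIdToRequestHandler : DelegatingHandler
{
    protected override Task<HttpResponseMessage> SendAsync(
        HttpRequestMessage request, CancellationToken cancellationToken)
    {
        // Header name is an assumption for illustration purposes
        request.Headers.Add("X-Correlation-Id", Guid.NewGuid().ToString());

        // This call requires InnerHandler to be set, or it throws
        // "The inner handler has not been assigned."
        return base.SendAsync(request, cancellationToken);
    }
}

// Specify the HttpClientHandler as the next (and final) handler:
var handler = new AddCorrelationIdToRequestHandler
{
    InnerHandler = new HttpClientHandler()
};
var client = new HttpClient(handler);
```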

Friday, December 21, 2018

Angular–Output enum as string in your component html

TypeScript supports 2 types of enums:

  • Numeric enums
  • String enums

From the documentation:

Numeric enums

An enum can be defined using the enum keyword.

enum Direction {
    Up = 1,
    Down,
    Left,
    Right,
}

Above, we have a numeric enum where Up is initialized with 1. All of the following members are auto-incremented from that point on. In other words, Direction.Up has the value 1, Down has 2, Left has 3, and Right has 4.

Numeric enums can be mixed in computed and constant members (see below).

String enums

String enums are a similar concept, but have some subtle runtime differences as documented below. In a string enum, each member has to be constant-initialized with a string literal, or with another string enum member.

enum Direction {
    Up = "UP",
    Down = "DOWN",
    Left = "LEFT",
    Right = "RIGHT",
}
While string enums don’t have auto-incrementing behavior, they have the benefit that they “serialize” well. In other words, if you were debugging and had to read the runtime value of a numeric enum, the value is often opaque: it doesn’t convey any useful meaning on its own (though reverse mapping can often help). String enums allow you to give a meaningful and readable value when your code runs, independent of the name of the enum member itself.

In my situation I was using a numeric enum, but I wanted to display the related member name on the screen (in my case a Status value):

To display this value in a View I had to use some extra magic:

  • First I had to import the enum in the corresponding component file:
  • Next step was to create a public property on the component class that exposes this type:
  • Now I can access this property on my view and get the enum string value:
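Put together, the steps above look roughly like this — Status and its members are made-up names, and the Angular @Component decorator is omitted so the sketch stays self-contained:

```typescript
// A numeric enum whose member names we want to show on screen:
enum Status { Open = 1, Closed = 2 }

class StatusComponent {
  // Step 2: expose the enum type as a public property on the component
  public Status = Status;
  public currentStatus = Status.Open;
}

// Step 3: in the template you can now write {{ Status[currentStatus] }},
// which relies on the numeric enum's reverse mapping:
const component = new StatusComponent();
console.log(Status[component.currentStatus]); // prints "Open"
```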

Thursday, December 20, 2018

Azure DevOps Server - [System.String[]] doesn't contain a method named 'Trim'

When trying to release an application using Azure DevOps Pipelines, one of the tasks failed with the following error message:

2018-12-18T14:02:03.6775235Z Deployment status for machine DEVELOPMENT : Failed

2018-12-18T14:02:03.6941240Z Deployment failed on machine DEVELOPMENT with following message : System.Exception: Method invocation failed because [System.String[]] doesn't contain a method named 'Trim'.

2018-12-18T14:02:03.7087715Z ##[error]] doesn't contain a method named 'Trim'."}};]

2018-12-18T14:02:03.9362960Z ##[error]Deployment on one or more machines failed. System.Exception: Method invocation failed because [System.String[]] doesn't contain a method named 'Trim'.

This specific task tries to deploy the application by remotely executing PowerShell on the target machine. The problem was that an outdated PowerShell version was still installed on this machine.

I checked the PowerShell version using the following command:

PS C:\Users\ordina1> $PSVersionTable.PSVersion

Major  Minor  Build  Revision

-----  -----  -----  --------

2      0      -1     -1

Whoops! Still PowerShell version 2. Time for an update…

Here are the instructions to install the latest version:

Wednesday, December 19, 2018

"NHibernate.NonUniqueObjectException: a different object with the same identifier value was already associated with the session"

I’m a big fan of NHibernate (in case I didn’t mention this before). The only problem is that its error messages can be kind of cryptic, especially if you have limited knowledge of NHibernate.

A colleague came to me with the following problem: he tried to update a detached object that he had constructed himself from data returned by an API call. NHibernate refused to save it (using the Update method) and returned the following error message:

"NHibernate.NonUniqueObjectException: a different object with the same identifier value was already associated with the session"

The problem was that a similar object (with the same identifier) was already loaded and tracked by the session. When he tried to associate the detached object with the session, NHibernate noticed the tracked object with the same identifier and raised the error above.

To fix this, you have to switch from Update() to Merge(). This method copies the state of the given object onto the persistent object with the same identifier. If there is no persistent instance currently associated with the session, it will be loaded. The method returns the persistent instance. If the given instance is unsaved or does not exist in the database, NHibernate will save it and return it as a newly persistent instance.
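A minimal sketch of the switch (entity and variable names are made up):

```csharp
using (var session = sessionFactory.OpenSession())
using (var tx = session.BeginTransaction())
{
    // session.Update(detachedProduct);
    // -> throws NonUniqueObjectException when another instance
    //    with the same identifier is already tracked by the session

    // Merge copies the state of the detached object onto the tracked instance
    // (loading it first if necessary) and returns the persistent instance:
    var persistent = session.Merge(detachedProduct);
    tx.Commit();
}
```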

Tuesday, December 18, 2018

Free ebook - Pattern Recognition and Machine Learning

I’m investing a lot of time to at least get a little bit of understanding about machine learning. Luckily there is a lot of (free) information out there.

Christopher Bishop, Technical Fellow and Laboratory Director at Microsoft Research Cambridge, shared his book Pattern Recognition and Machine Learning for free.

This leading textbook provides a comprehensive introduction to the fields of pattern recognition and machine learning. It is aimed at advanced undergraduates or first-year PhD students, as well as researchers and practitioners. No previous knowledge of pattern recognition or machine learning concepts is assumed. (Exactly what I need :))


Much (machine) learning fun!

Monday, December 17, 2018

Feature ‘default literal’ is not available in C# 7.0.

On one of my projects we are using ‘default literals’, one of the features introduced in C# 7.1. To be able to use this, you have to change your project properties to point to the latest minor language version. Unfortunately, on the build server it didn’t work as expected; we got the following error in our build logs:

error CS8107: Feature ‘default literal’ is not available in C# 7.0. Please use language version 7.1 or greater.

The strange thing was that on the same build server another build using the same feature did succeed. So it couldn’t be that C# 7.1 was not installed on the build server.

Then I got some inspiration: maybe it was related to the build configuration. And indeed, when I switched from Debug to Release the problem appeared in Visual Studio as well:


When enabling C# 7.1 for this project, a <LangVersion> element is introduced in the csproj file:


The problem was that this line was set inside a conditional PropertyGroup:

<PropertyGroup Condition=" '$(Configuration)|$(Platform)' == 'Debug|AnyCPU' ">



By moving the <LangVersion> element to another PropertyGroup (without a condition), the problem was solved.
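The fix looks roughly like this as csproj fragments (the version number is illustrative):

```xml
<!-- Before: the language version only applies to the Debug|AnyCPU configuration -->
<PropertyGroup Condition=" '$(Configuration)|$(Platform)' == 'Debug|AnyCPU' ">
  <LangVersion>7.1</LangVersion>
</PropertyGroup>

<!-- After: an unconditional PropertyGroup applies to every configuration -->
<PropertyGroup>
  <LangVersion>7.1</LangVersion>
</PropertyGroup>
```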

Friday, December 14, 2018

Entity Framework–PostgreSQL–Enable spatial type support–Part 2

In a previous post I explained how to enable spatial type support for your Entity Framework (Core) model. If you added the HasPostgresExtension("postgis") check, you get the following error message when you execute the Entity Framework migration while the extension is not installed:

could not open extension control file "/usr/share/postgresql/9.5/extension/postgis.control": No such file or directory

To install the extension you can follow the instructions on the PostGIS website. The installation differs based on the host platform you are using for your PostgreSQL instance.

After installation has completed you can activate the extension for your database using the following statement:

-- Enable PostGIS (includes raster)
CREATE EXTENSION postgis;
Thursday, December 13, 2018

ASP.NET Web API–Extract header data

I was looking for a way to get the header data from a Web API request. After trying multiple things I ended up creating my own model binder:
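A hedged sketch of such a model binder, using the ASP.NET Web API IModelBinder interface (the header name and the usage on the action parameter are assumptions):

```csharp
public class HeaderValueModelBinder : IModelBinder
{
    public bool BindModel(HttpActionContext actionContext, ModelBindingContext bindingContext)
    {
        IEnumerable<string> values;
        // Header name is an assumption for illustration purposes
        if (actionContext.Request.Headers.TryGetValues("X-Correlation-Id", out values))
        {
            bindingContext.Model = values.First();
            return true;
        }
        return false;
    }
}

// Usage on an action parameter:
// public IHttpActionResult Get(
//     [ModelBinder(typeof(HeaderValueModelBinder))] string correlationId) { ... }
```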

Wednesday, December 12, 2018

Entity Framework–PostgreSQL–Enable spatial type support

To enable Spatial Type support for PostgreSQL you have to do an extra step when configuring your DBContext:

Calling the UseNetTopologySuite() method activates a plugin for the Npgsql EF Core provider which enables mapping NetTopologySuite's types to PostGIS columns and even translates many useful spatial operations to SQL. This is the recommended way to interact with spatial types in Npgsql.

To check if the PostGIS extension is installed in your database, you can add the following to your DbContext:
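The two configuration steps could look like this — a sketch following the Npgsql documentation pattern, with a made-up connection string:

```csharp
public class SpatialContext : DbContext
{
    protected override void OnConfiguring(DbContextOptionsBuilder optionsBuilder)
        => optionsBuilder.UseNpgsql(
            "Host=localhost;Database=spatial_demo;Username=postgres", // made-up connection string
            o => o.UseNetTopologySuite()); // enable NetTopologySuite <-> PostGIS mapping

    protected override void OnModelCreating(ModelBuilder modelBuilder)
        => modelBuilder.HasPostgresExtension("postgis"); // fails fast if the extension is missing
}
```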

SQL Server–Document your database using Extended Properties

On one of my projects we had to re-use an existing table. Unfortunately the table wasn’t documented very well (read: not at all). To make it even worse, some of the columns were used for different things than you would expect based on the column names. The table even contained some implicit logic about the way the data was inserted into it.

To avoid this horror again, we decided to apply the boy scout rule and leave the campground in a better state than we found it. We started by adding documentation to changed or newly added columns using the SQL Server Extended Properties feature.

You can add Extended Properties either through SQL Server Management Studio or through the sp_addextendedproperty stored procedure:

exec sp_addextendedproperty  
     @name = N'Price' 
    ,@value = N'Testing entry for Extended Property' 
    ,@level0type = N'Schema', @level0name = 'northwind' 
    ,@level1type = N'Table',  @level1name = 'Product' 
    ,@level2type = N'Column', @level2name = 'Price'

A few parameters are required to execute sp_addextendedproperty:

  • @name is the name of the Extended Property and cannot be null; in our case it is ‘Price’.
  • @value is the value or description of the property; it cannot exceed 7,500 bytes.
  • @level0type is ‘Schema’ in our case and @level0name is set to ‘northwind’.
  • @level1type is ‘Table’ in our case and @level1name is ‘Product’.
  • @level2type is ‘Column’ in our case and @level2name is ‘Price’.

More information:

Tuesday, December 11, 2018

Enabling row compression in SQL Server

Enabling row and page compression can give you a big performance gain in SQL Server. IO remains expensive, especially when your SQL Server is still using spinning disks. By enabling row (and page) compression you can significantly decrease the amount of storage needed on disk.

How to enable row compression?

We’ll start by estimating the space savings for row compression by executing the following stored procedure:

EXEC sp_estimate_data_compression_savings 'vervoer', 'MAD', NULL, NULL, 'ROW' ;  
Here are the results we get back:

By dividing the size_with_requested_compression_setting(KB) column by the size_with_current_compression_setting(KB) column, we saw that we could save over 50%. Sounds good enough for me, let’s enable this:
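Enabling row compression is a single ALTER TABLE statement (schema and table name taken from the estimate call above):

```sql
ALTER TABLE vervoer.MAD REBUILD WITH (DATA_COMPRESSION = ROW);
```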

Monday, December 10, 2018

NUnit–TestCase vs TestCaseSource

NUnit supports parameterized tests through the TestCase attribute. This allows you to specify multiple sets of arguments and will create multiple tests behind the scenes:

However, the kind of information that you can pass in through an attribute is rather limited. If you want to pass complex objects you need a different solution. In that case (no pun intended) you can use the TestCaseSource attribute:

TestCaseSourceAttribute is used on a parameterized test method to identify the source from which the required arguments will be provided. The attribute additionally identifies the method as a test method. The data is kept separate from the test itself and may be used by multiple test methods.

An example:
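Here is a sketch of both attributes side by side (the Calculator class is a made-up example):

```csharp
public class Calculator { public int Add(int a, int b) => a + b; }

[TestFixture]
public class CalculatorTests
{
    // TestCase: simple constant arguments inline in the attribute
    [TestCase(2, 3, ExpectedResult = 5)]
    [TestCase(-1, 1, ExpectedResult = 0)]
    public int Add_WithTestCase(int a, int b) => new Calculator().Add(a, b);

    // TestCaseSource: complex arguments come from a separate member,
    // kept apart from the test itself and reusable across tests
    private static IEnumerable<TestCaseData> AddCases()
    {
        yield return new TestCaseData(new Calculator(), 2, 3).Returns(5);
        yield return new TestCaseData(new Calculator(), -1, 1).Returns(0);
    }

    [TestCaseSource(nameof(AddCases))]
    public int Add_WithTestCaseSource(Calculator sut, int a, int b) => sut.Add(a, b);
}
```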

Friday, December 7, 2018

Azure DevOps–Where can I see the available hosted pipeline minutes?

Last week I got a question from one of our internal teams because they got the following error message when trying to execute a build:

Your account has no free minutes remaining. Add a hosted pipeline to run more builds or releases.

Until recently we were using private build agents, but we decided to switch completely to hosted builds.

Checking the remaining build minutes

Here are the steps to check the remaining build minutes:

  • Open the Azure DevOps site of your organisation
  • Click on Organization settings in the left corner


  • Click On Retention and parallel jobs


  • Switch to the Parallel jobs tab


  • Here you can see the available job pipelines and the number of minutes remaining (if you are using the free tier)



  • Clicking on the Get button will walk you through a wizard to purchase extra parallel jobs.

Thursday, December 6, 2018

FluentValidation–Conditional Validation Rule

FluentValidation is really powerful, but this power also makes it sometimes complex to find the correct way to solve a specific problem. In this case I wanted to conditionally execute a certain validation when data in another field was filled in.

FluentValidation makes this possible through the usage of the When/Unless methods. Here is a short example:

We only validate the DeliveryDate when FastDelivery is chosen.
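The example could look like this (Order and its properties are made-up names):

```csharp
public class OrderValidator : AbstractValidator<Order>
{
    public OrderValidator()
    {
        RuleFor(o => o.DeliveryDate)
            .NotEmpty()
            .When(o => o.FastDelivery); // the rule only runs when FastDelivery is chosen
    }
}
```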

Wednesday, December 5, 2018

Learning F# through FSharpKoans

In my journey to become a better F# (and C#) developer, I found the following project on GitHub:

From the documentation:

Inspired by EdgeCase's fantastic Ruby koans, the goal of the F# koans is to teach you F# through testing.

When you first run the koans, you'll be presented with a runtime error and a stack trace indicating where the error occurred. Your goal is to make the error go away. As you fix each error, you should learn something about the F# language and functional programming in general.

Your journey towards F# enlightenment starts in the AboutAsserts.fs file. These koans will be very simple, so don't overthink them! As you progress through more koans, more and more F# syntax will be introduced which will allow you to solve more complicated problems and use more advanced techniques.

To get started, clone the project and open it in Visual Studio (Code). Browse to the FSharpKoans project and run ‘dotnet watch run’.

Now it’s up to you :). Have fun!


Tuesday, December 4, 2018

ASP.NET Core SignalR–Add a Redis backplane

The moment you start using SignalR, sooner rather than later you should add a backplane to allow scaling out your backend services. At the moment of writing I’m aware of only 2 possible backplanes:

  • The Azure SignalR service: in this case you are moving your full SignalR backend logic to Azure. Azure will manage the scale out for you.
  • Redis backplane: in this case you are still running the SignalR backend yourself and only the data is replicated through Redis.

In their official documentation Microsoft refers to the Microsoft.AspNetCore.SignalR.StackExchangeRedis NuGet package, but I’m using the (official?) Microsoft.AspNetCore.SignalR.Redis package.

Here are the steps you need to take to use it:

  • Add the Redis connectionstring to your configuration.
  • That’s all!
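With the Microsoft.AspNetCore.SignalR.Redis package the registration boils down to something like this (the configuration key name is an assumption):

```csharp
public void ConfigureServices(IServiceCollection services)
{
    services.AddSignalR()
            .AddRedis(Configuration.GetConnectionString("Redis")); // key name is an assumption
}
```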

Remark: Don’t forget to enable sticky sessions when you are doing a SignalR scale-out.

Monday, December 3, 2018

ASP.NET Web API–Mediatype exception on POST

When trying to send some JSON data through an HTTP POST, I got the following exception message back:

The request entity's media type 'text/plain' is not supported for this resource. No MediaTypeFormatter is available to read an object of type 'TransportDocument' from content with media type 'text/plain'

This error showed up because I forgot to specify the content type in the HTTP headers. Here is one way to fix this:
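A sketch of the fix on the client side (URL and payload are made up): pass the content type when building the request content.

```csharp
var client = new HttpClient();
var json = "{\"reference\":\"TD-001\"}"; // made-up payload

// This StringContent overload sets the Content-Type header to application/json
var content = new StringContent(json, Encoding.UTF8, "application/json");
var response = await client.PostAsync("http://localhost/api/transportdocuments", content);
```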

Friday, November 30, 2018

SignalR - CORS error

After updating SignalR my program started to fail with the following error message:

Access to XMLHttpRequest at 'http://localhost:22135/chat/negotiate?jwt=<removed the jwt key>' from origin 'http://localhost:53150' has been blocked by CORS policy: Response to preflight request doesn't pass access control check: The value of the 'Access-Control-Allow-Origin' header in the response must not be the wildcard '*' when the request's credentials mode is 'include'. The credentials mode of requests initiated by the XMLHttpRequest is controlled by the withCredentials attribute.

To fix it I had to update the CORS policy by adding the AllowCredentials() method:
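The updated policy could look like this (the policy name and origin are assumptions based on the error message above):

```csharp
services.AddCors(options =>
    options.AddPolicy("CorsPolicy", builder => builder
        .WithOrigins("http://localhost:53150") // an explicit origin, not the wildcard '*'
        .AllowAnyMethod()
        .AllowAnyHeader()
        .AllowCredentials())); // required because the request's credentials mode is 'include'
```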

Thursday, November 29, 2018

Breaking changes when updating SignalR TypeScript client

Tuesday I blogged about some breaking changes when updating the SignalR backend in ASP.NET Core. Later on I noticed that I had to make some changes on the frontend as well:



Wednesday, November 28, 2018

ElasticSearch Integration Testing with ElasticSearch Inside

Integration testing can be quite cumbersome, especially when you have a lot of moving parts involved. To test my ElasticSearch code I used to spin up a Docker instance and discard it after the tests ran.

Recently I changed my approach after receiving the following tip from a colleague (thanks Jasper!):

ElasticSearch Inside is a fully embedded version of Elasticsearch for integration tests. When the instance is created, both the JVM and Elasticsearch itself are extracted to a temporary location and started. Once disposed, everything is removed again. And despite what you may think, this happens really fast (a few seconds on my machine).

How to

  • Install the Nuget package:

Install-Package elasticsearch-inside

  • After that you have to create a new instance of the Elasticsearch class and wait for the Ready() method.
  • Now you can call the Elasticsearch instance using NEST:

using (var elasticsearch = new Elasticsearch())
{
    await elasticsearch.Ready();

    var client = new ElasticClient(new ConnectionSettings(elasticsearch.Url));
    var result = client.Ping();
}

Tuesday, November 27, 2018

Breaking changes when updating SignalR for .NET Core

I had to do some changes to one of our applications and noticed that it was still using the beta release of SignalR. So I thought it would be a good idea to quickly do an update to the release version. There were some breaking changes I was able to fix quite easily:

Monday, November 26, 2018

GraphQL–DotNet–Nullable types: Nullable<> cannot be coerced to a non nullable GraphQL type

I’m spending a lot (read: “waaay too much”) of time learning GraphQL. One of the things I had to figure out was how to use nullable types inside my GraphQL schema.

By default when you use a nullable type inside your GraphQL schema, you get the following error message:

ArgumentOutOfRangeException: Explicitly nullable type: Nullable<DateTime> cannot be coerced to a non nullable GraphQL type.

To fix this, you have to explicitly specify the field as nullable in your GraphQL type definition:
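A sketch of such a type definition — the type and property names are made up; the nullable parameter on Field is what marks the field as nullable in GraphQL.NET:

```csharp
public class OrderType : ObjectGraphType<Order>
{
    public OrderType()
    {
        Field(o => o.Number);
        Field(o => o.DeliveryDate, nullable: true); // maps the Nullable<DateTime> property
    }
}
```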

Friday, November 23, 2018

GraphQL–DotNet - The type: Guid cannot be coerced effectively to a GraphQL type

I’m spending a lot (read: “waaay too much”) of time learning GraphQL. One of the things I had to figure out was how to use Guids inside my GraphQL schema.

On one of my projects we are using Guids. Here is the mapping I tried to use:

When loading my GraphQL endpoint this resulted in the following error message:

The type: Guid cannot be coerced effectively to a GraphQL type

To fix it I had to explicitly specify the FieldType as IdType:
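The mapping then looks something like this (type and property names are made up):

```csharp
public class OrderType : ObjectGraphType<Order>
{
    public OrderType()
    {
        // Field(o => o.Id);
        // -> "The type: Guid cannot be coerced effectively to a GraphQL type"

        Field(o => o.Id, type: typeof(IdGraphType)); // explicitly map the Guid to an ID type
    }
}
```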

Thursday, November 22, 2018

GraphQL–DotNet–Expose exceptions

I’m spending a lot (read: “waaay too much”) of time learning GraphQL. One of the things I had to figure out was how to expose exceptions. Out of the box you get a generic GraphQL exception, but if you want to drill down into what is going on, you have to change some configuration:

I’m using the GraphQL.Server middleware to serve my ASP.NET Core GraphQL endpoint. When configuring the middleware you have to add a specific setting to activate exception details:
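The setting in question is the ExposeExceptions flag on the middleware options — a sketch (don’t enable this in production):

```csharp
services.AddGraphQL(options =>
{
    options.ExposeExceptions = true; // include full exception details in the error response
});
```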

GraphQL-DotNet–Use an async resolver

I’m spending a lot (read: “waaay too much”) of time learning GraphQL. One of the things I had to figure out was how to call an async method inside a field resolver.

You have 2 options:

  • Option 1 -  Use the Field<> method and return a Task:
  • Option 2 – Use the FieldAsync<> method and await the result:
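Both options sketched out (repository and field names are made up):

```csharp
public class OrderQuery : ObjectGraphType
{
    public OrderQuery(IOrderRepository orders)
    {
        // Option 1: Field<> with a resolver that returns a Task
        Field<ListGraphType<OrderType>>(
            "ordersOption1",
            resolve: context => orders.GetAllAsync());

        // Option 2: FieldAsync<> and await the result
        FieldAsync<ListGraphType<OrderType>>(
            "ordersOption2",
            resolve: async context => await orders.GetAllAsync());
    }
}
```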

Tuesday, November 20, 2018

Angular-oauth2-oidc: Error validating tokens. Wrong nonce.

After integrating the Angular-oauth2-oidc library in our application, we got the following error message when invoking the Implicit Flow:

Error validating tokens. Wrong nonce.

This is the code we were using:

The problem is that loadDiscoveryDocumentAndTryLogin returns a promise and we didn’t await the result. As a consequence the nonce from the first request (loadDiscoveryDocumentAndTryLogin) is overwritten by the second request (initImplicitFlow), causing the error above.

To fix it we have to chain the requests together:
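Chained together it looks roughly like this (a sketch; oauthService is the injected angular-oauth2-oidc OAuthService instance):

```
this.oauthService.loadDiscoveryDocumentAndTryLogin().then(() => {
  if (!this.oauthService.hasValidAccessToken()) {
    // Only start the flow after the previous promise resolved,
    // so the nonce is no longer overwritten.
    this.oauthService.initImplicitFlow();
  }
});
```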

Friday, November 16, 2018

F#–Write your own Excel in 100 lines of code

I’m a big F# lover. I really fell in love with the density and expressiveness of the language. Today I noticed the following blog post where Tomas Petricek wrote an Excel variant in about 100 lines of F# code. Truly impressive!

Go check out the blog post here, and have a look at the completed code here.


Thursday, November 15, 2018

ELK stack–Getting started

Yesterday Elastic announced the new releases of their product suite.

Here is the general announcement, and here are the announcements for the specific products:

Best way to try out the new features is by using the available Docker images at

To help you getting started I created a docker-compose file that combines ElasticSearch, Kibana and APM:
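It looked roughly like this — a sketch where the image versions and ports are assumptions matching the 6.5 release:

```
version: '3'
services:
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:6.5.0
    environment:
      - discovery.type=single-node
    ports:
      - "9200:9200"
  kibana:
    image: docker.elastic.co/kibana/kibana:6.5.0
    ports:
      - "5601:5601"
    depends_on:
      - elasticsearch
  apm-server:
    image: docker.elastic.co/apm/apm-server:6.5.0
    ports:
      - "8200:8200"
    depends_on:
      - elasticsearch
```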

Wednesday, November 14, 2018

Learning PWA’s - Service Workies

Progressive Web Apps are evolving quite fast and Google is leading the initiative. At Chrome Dev Summit 2018 they reconfirmed the message that the future of PWAs is looking great. With deeper OS integrations, improved speed and upcoming new capabilities, the gap between native apps and PWAs may be closed faster than we think.

So time to start learning about PWAs, and there is no better way to learn something than through gamification. Dave Geddes, who created Flexbox Zombies and CSS Grid Critters, created a new learning game: Service Workies helps you understand Service Workers from soup to nuts. The first chapter of the adventure is rolling out in beta now. Google partnered with Dave to make sure the full adventure can be free to all.


Tuesday, November 13, 2018

TFS Build SonarQube Error - SonarAnalyzer.dll could not be found

Got a call for help from a colleague who couldn’t get an Acceptance release out of the door. Problem was that the automated build responsible for packaging and deploying the Acceptance version failed. He asked me to take a look…

Inside the build logs we noticed the following error message:

2018-11-13T06:35:22.4133574Z (CoreCompile target) ->

2018-11-13T06:35:22.4133574Z   CSC : error CS0006: Metadata file 'D:\b\4\agent\_work\_temp\.sonarqube\resources\1\Google.Protobuf.dll' could not be found [D:\b\4\agent\_work\40\s\MTIL.Domain\MTIL.Domain.csproj]

2018-11-13T06:35:22.4133574Z   CSC : error CS0006: Metadata file 'D:\b\4\agent\_work\_temp\.sonarqube\resources\1\SonarAnalyzer.CSharp.dll' could not be found [D:\b\4\agent\_work\40\s\MTIL.Domain\MTIL.Domain.csproj]

2018-11-13T06:35:22.4133574Z   CSC : error CS0006: Metadata file 'D:\b\4\agent\_work\_temp\.sonarqube\resources\1\SonarAnalyzer.dll' could not be found [D:\b\4\agent\_work\40\s\MTIL.Domain\MTIL.Domain.csproj]


This would make you think that there is something wrong with our SonarQube tasks. That would be an obvious explanation, right? The strange thing is that we weren’t using SonarQube in our Acceptance release build, so why did the build agent mention something about SonarQube?

Chapter 1 - The story of the canceled build

After further investigation, I noticed that the problem started happening when another build was canceled. This build was using the same repository (but a different branch) to build our development release, and in this build we were using the SonarQube tasks. So maybe something wasn’t cleaned up correctly?

I killed the VBCSCompiler process (it was blocking some files in the _temp folder) and removed the _temp folder completely. This seemed to work at first, but after doing one build the problem reappeared.

Chapter 2 - The story of the other running build

So the story continues. I started to monitor the build server for other activities and noticed that the problem reappeared when a development build was running that was using the SonarQube tasks. When I tried to do an Acceptance build after the Development build completed, it succeeded without errors.

Our conclusion was that when another build using the SonarQube tasks was running on the same repo, it created a conflict that made the other build fail. Sounds like a bug to me.

As a workaround, we temporarily disabled the Development build when we wanted to create an Acceptance build.

Monday, November 12, 2018

ASP.NET Core–Serve a default.html

Sometimes you lose waaay too much time on something that looks obvious once you find it. Today I was searching for a solution to serve a default.html file when the root URL of an ASP.NET Core site is called, e.g. http://mysamplesite/root/

I was first fiddling around with the StaticFiles middleware but couldn’t find a way. Turns out there is another middleware, DefaultFiles, that handles this use case.

Just call the UseDefaultFiles method from Startup.Configure:

public void Configure(IApplicationBuilder app)
{
    app.UseDefaultFiles();
    app.UseStaticFiles();
}

With UseDefaultFiles, requests to a folder search for:

  • default.htm
  • default.html
  • index.htm
  • index.html

The first file found from the list is served as though the request were the fully qualified URI. The browser URL continues to reflect the URI requested.

Remark: UseDefaultFiles must be called before UseStaticFiles to serve the default file.

More information:

Friday, November 9, 2018

Azure DevOps–Aad guest invitation failed

When trying to invite a user from an external domain to Azure Devops, it failed with the following error message:

"Aad guest invitation failed"


What is going on?

Our Azure DevOps organization is backed by Azure AD (and synced with our internal AD). To invite the user, we’ll have to add him first to our Azure AD before we can add him as a guest to our Azure DevOps account.

Thursday, November 8, 2018

MassTransit - Increase the throughput of your consumers

While running some stress tests on our environment, I noticed that our queues started to fill up. When I took a look at our MassTransit consumers, they were processing 10 messages simultaneously but no more, although the CPU on the server was not stressed at all.

What is going on?

The reason is the number of messages the RabbitMQ transport will “prefetch” from the queue. The default is 10, so we can only process 10 messages simultaneously. To increase this, we can do 2 things:

  1. Configure the prefetchcount for your endpoint using the PrefetchCount property on the IRabbitMqBusFactoryConfigurator class.
  2. Add ?prefetch=X to the RabbitMQ queue URI.
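Option 1 in code could look like this (host, queue and consumer names are made up):

```csharp
var bus = Bus.Factory.CreateUsingRabbitMq(cfg =>
{
    var host = cfg.Host(new Uri("rabbitmq://localhost/"), h =>
    {
        h.Username("guest");
        h.Password("guest");
    });

    cfg.ReceiveEndpoint(host, "order-queue", e =>
    {
        e.PrefetchCount = 32; // default is 10; allow more messages in flight
        e.Consumer<OrderConsumer>();
    });
});

// Option 2: append the setting to the queue URI instead, e.g.
// rabbitmq://localhost/order-queue?prefetch=32
```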

Remark: As a general recommendation, PrefetchCount should be relatively high, so that RabbitMQ doesn't choke delivering messages due to network delays.

Wednesday, November 7, 2018

MassTransit–Fault events vs Error Queues

Something that was kind of confusing for me was the relationship between MassTransit Fault<T> events and Error Queues.

When a MassTransit consumer throws an exception, the exception is caught by middleware and the message is moved to an _error queue (prefixed by the receive endpoint queue name). The exception details are stored as headers with the message.


I was thinking that the messages were wrapped in a Fault<T> event before they were stored in the queue. But it turns out that the two are unrelated.

What really happens is that in addition to moving the message to an error queue, MassTransit also generates a Fault<T> event. This event is either sent to a FaultAddress or ResponseAddress if present or the fault is published.

Tuesday, November 6, 2018

Entity Framework Core–Table per hierarchy

In my Entity Framework Core application I wanted to use the Table Per Hierarchy pattern. So I configured the EF mapping to use a discriminator column:

This didn’t have the desired effect. When I ran my application I got the following error message:

The entity type 'ForeignSupplier' is part of a hierarchy, but does not have a discriminator value configured.

Whoops! I forgot to specify how EF Core should recognize the different child types. Let’s fix this in our configuration:
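The fixed configuration looks something like this — ForeignSupplier comes from the error message; the base type, column name and discriminator values are assumptions:

```csharp
protected override void OnModelCreating(ModelBuilder modelBuilder)
{
    modelBuilder.Entity<Supplier>()
        .HasDiscriminator<string>("SupplierType")   // the discriminator column
        .HasValue<Supplier>("Domestic")             // a value per concrete type
        .HasValue<ForeignSupplier>("Foreign");      // this one was missing -> caused the error
}
```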

Monday, November 5, 2018

Entity Framework Core - The EF Core tools version '2.1.1-rtm-30846' is older than that of the runtime '2.1.4-rtm-31024'.

When trying to create my initial migration using EF Core, I got the following error message:

Add-Migration InitialCreate

The EF Core tools version '2.1.1-rtm-30846' is older than that of the runtime '2.1.4-rtm-31024'. Update the tools for the latest features and bug fixes.

System.IO.FileLoadException: Could not load file or assembly 'Microsoft.EntityFrameworkCore.Relational, Version=, Culture=neutral, PublicKeyToken=adb9793829ddae60'. The located assembly's manifest definition does not match the assembly reference. (Exception from HRESULT: 0x80131040)

File name: 'Microsoft.EntityFrameworkCore.Relational, Version=, Culture=neutral, PublicKeyToken=adb9793829ddae60'

   at Microsoft.EntityFrameworkCore.Design.Internal.DbContextOperations.<>c.<FindContextTypes>b__12_4(TypeInfo t)

   at System.Linq.Enumerable.WhereSelectListIterator`2.MoveNext()

   at System.Linq.Enumerable.WhereEnumerableIterator`1.MoveNext()

   at System.Linq.Enumerable.ConcatIterator`1.MoveNext()

   at System.Linq.Enumerable.DistinctIterator`1.MoveNext()

   at System.Linq.Enumerable.WhereEnumerableIterator`1.MoveNext()

   at Microsoft.EntityFrameworkCore.Design.Internal.DbContextOperations.FindContextTypes()

   at Microsoft.EntityFrameworkCore.Design.Internal.DbContextOperations.FindContextType(String name)

   at Microsoft.EntityFrameworkCore.Design.Internal.DbContextOperations.CreateContext(String contextType)

   at Microsoft.EntityFrameworkCore.Design.Internal.MigrationsOperations.AddMigration(String name, String outputDir, String contextType)

   at Microsoft.EntityFrameworkCore.Design.OperationExecutor.AddMigrationImpl(String name, String outputDir, String contextType)

   at Microsoft.EntityFrameworkCore.Design.OperationExecutor.AddMigration.<>c__DisplayClass0_1.<.ctor>b__0()

   at Microsoft.EntityFrameworkCore.Design.OperationExecutor.OperationBase.<>c__DisplayClass3_0`1.<Execute>b__0()

   at Microsoft.EntityFrameworkCore.Design.OperationExecutor.OperationBase.Execute(Action action)

Could not load file or assembly 'Microsoft.EntityFrameworkCore.Relational, Version=, Culture=neutral, PublicKeyToken=adb9793829ddae60'. The located assembly's manifest definition does not match the assembly reference. (Exception from HRESULT: 0x80131040)

At least this error message pointed me in the right direction. I opened up the Package Manager console and updated the Microsoft.EntityFrameworkCore.Tools NuGet package:

Install-Package Microsoft.EntityFrameworkCore.Tools -Version 2.1.4

Friday, November 2, 2018

Serilog–How to enable Seq integration through config?

I’m a big fan of structured logging in general and Serilog in particular. Especially in combination with Seq it is unbeatable.

Today I lost some time searching how to specify the Seq log sink using configuration. Here are the steps I had to take:

  • Add the Serilog.Settings.Configuration NuGet package to your project (this one is for ASP.NET Core, others exist for web.config, …)
  • Create a LoggerConfiguration:
  • After that you can specify your configuration inside your appsettings.json:
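A minimal sketch of both steps, assuming the Serilog.Settings.Configuration and Serilog.Sinks.Seq packages are installed and Seq listens on its default port:

```csharp
// Build the configuration and let Serilog read its settings from it.
var configuration = new ConfigurationBuilder()
    .SetBasePath(Directory.GetCurrentDirectory())
    .AddJsonFile("appsettings.json")
    .Build();

Log.Logger = new LoggerConfiguration()
    .ReadFrom.Configuration(configuration)
    .CreateLogger();

// appsettings.json then only needs a matching "Serilog" section, e.g.:
// {
//   "Serilog": {
//     "WriteTo": [
//       { "Name": "Seq", "Args": { "serverUrl": "http://localhost:5341" } }
//     ]
//   }
// }
```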

Wednesday, October 31, 2018

Git–Remove local commits

Quick reminder for myself: I wanted to revert my local commits and reset my branch to the state of the origin. I didn’t push my changes yet, so no need to use revert.

As I always forget the correct statement, here it is:

git reset --hard origin/<branch_name>

Tuesday, October 30, 2018

ElasticSearch–Reindex API

At first, when I had to recreate or change an index in ElasticSearch, I used a rather naïve approach: I deleted and recreated my index and started processing all data again from the original source.

This approach worked but put a lot of stress on the network and ElasticSearch nodes.

A better solution is to use the reindex API. It enables you to reindex your documents without requiring any plugin nor external tool.

Thanks to the existence of the _source field you already have the whole document available to you in Elasticsearch itself. This means you can start from your existing index and use the reindex API to create a new index next to it:

POST _reindex
{
  "source": {
    "index": "twitter"
  },
  "dest": {
    "index": "new_twitter"
  }
}
Remark: Reindex does not attempt to set up the destination index. You should set up the destination index prior to running a _reindex action, including setting up mappings, shard counts, replicas, etc.

More information:

Monday, October 29, 2018

Advanced async/await using AsyncUtilities

If you want to do some advanced stuff with async/await in C# I can recommend the AsyncUtilities library from Bar Arnon. It contains a set of useful utilities and extensions for async programming.

From the documentation:

Utilities:
  1. ValueTask
  2. AsyncLock
  3. Striped Lock
  4. TaskEnumerableAwaiter
  5. CancelableTaskCompletionSource

Extension Methods:

  1. ContinueWithSynchronously
  2. TryCompleteFromCompletedTask
  3. ToCancellationTokenSource

I especially like the support for TaskEnumerableAwaiter and the nice way he implemented it, using the fact that when implementing async/await the compiler doesn’t expect Task or Task&lt;T&gt; specifically; it looks for a GetAwaiter method that returns an awaiter which in turn implements INotifyCompletion and has:

  • IsCompleted: Enables optimizations when the operation completes synchronously.
  • OnCompleted: Accepts the callback to invoke when the asynchronous operation completes.
  • GetResult: Returns the result of the operation (if there is one) and rethrows the exception if one occurred.

Thanks to the fact that GetAwaiter can also be an extension method, it becomes possible to turn existing types into awaitables with the right custom awaiter.

Bar did exactly that for awaiting a collection of tasks:
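The idea can be sketched with a simplified, hypothetical GetAwaiter extension method; the real TaskEnumerableAwaiter in AsyncUtilities is more elaborate:

```csharp
// Simplified sketch: make any IEnumerable<Task<T>> directly awaitable
// by delegating to the awaiter of Task.WhenAll.
public static class TaskEnumerableExtensions
{
    public static TaskAwaiter<T[]> GetAwaiter<T>(this IEnumerable<Task<T>> tasks) =>
        Task.WhenAll(tasks).GetAwaiter();
}

// Usage: await the collection directly instead of wrapping it in Task.WhenAll.
// var results = await new[] { GetUserAsync(1), GetUserAsync(2) };
```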

More information:

Friday, October 26, 2018

Stay up-to-date on everything that happens in Azure

Microsoft Azure is evolving at an enormous speed, making it hard to keep up. The best way to stay up-to-date is to subscribe to the Azure newsletter, which gives you a weekly update on everything that’s going on in Azure land.


Thursday, October 25, 2018

Owin–Stage Markers

While reviewing some code I noticed the following code snippet;

I had no clue what stage markers were, so time to dig in…

What are stage markers?

Stage markers play a role when you are using OWIN in IIS. IIS has a specific execution pipeline containing a predefined set of pipeline events. If you want to run a specific set of OWIN middleware during a particular stage in the IIS pipeline, you can use the UseStageMarker method as seen in the sample above.

Stage marker rules

There are rules about the stages of the pipeline in which you can execute middleware and the order in which components must run. The following stage events are supported:

By default, OWIN middleware runs at the last event (PreHandlerExecute). To run a set of middleware components during an earlier stage, insert a stage marker right after the last component in the set during registration.
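A small sketch of what such a registration can look like (the middleware itself is a placeholder):

```csharp
// Run the middleware registered before the marker during the IIS
// Authenticate stage instead of the default PreHandlerExecute stage.
public void Configuration(IAppBuilder app)
{
    app.Use((context, next) =>
    {
        // placeholder middleware, e.g. setting up the request identity
        return next();
    });
    app.UseStageMarker(PipelineStage.Authenticate);
}
```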

More information:

Wednesday, October 24, 2018

ElasticSearch - Log request and response data

For debugging purposes it can be useful to see the executed REST call. To make this possible using NEST, you first have to disable direct streaming, after which you can start capturing request and response data.

Here is a short snippet on how to enable this in your application:
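As a sketch, assuming NEST and a locally running cluster, disabling direct streaming buffers the request and response bytes so they become available in the call details:

```csharp
var settings = new ConnectionSettings(new Uri("http://localhost:9200"))
    .DisableDirectStreaming()
    .OnRequestCompleted(callDetails =>
    {
        // Log the outgoing request and the incoming response.
        if (callDetails.RequestBodyInBytes != null)
            Console.WriteLine(Encoding.UTF8.GetString(callDetails.RequestBodyInBytes));
        if (callDetails.ResponseBodyInBytes != null)
            Console.WriteLine(Encoding.UTF8.GetString(callDetails.ResponseBodyInBytes));
    });

var client = new ElasticClient(settings);
```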

Important: Don’t forget to disable this when you go to production, it has a big impact on performance.

Tuesday, October 23, 2018

ElasticSearch - Enable HttpCompression

Traffic between your application and ElasticSearch is uncompressed by default. If you want to gain some performance (at the cost of some CPU cycles), it is probably a good idea to enable HTTP compression.

Here is how to enable this using NEST:
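A minimal sketch, assuming NEST and a locally running cluster (the nodes must accept compressed traffic as well):

```csharp
// Enable gzip compression for requests to and responses from the cluster.
var settings = new ConnectionSettings(new Uri("http://localhost:9200"))
    .EnableHttpCompression();

var client = new ElasticClient(settings);
```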

Monday, October 22, 2018

Application Insights–Change the application name in the Application Map

When you use the default configuration in Application Insights, the default role name (on-premise) used is the name of your Application Insights resource:


This is not that meaningful, especially because right now our frontend and backend telemetry are shown together. Let’s fix this by introducing a TelemetryInitializer. In this TelemetryInitializer we update the RoleName:

  • Create an initializer in both the frontend and backend project:
  • Don’t forget to change the role name accordingly.
  • Load this initializer in your global.asax:

  • Run the application again
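A sketch of what the initializer and its registration can look like (the role names are placeholders):

```csharp
// Telemetry initializer that assigns a meaningful cloud role name.
public class RoleNameTelemetryInitializer : ITelemetryInitializer
{
    public void Initialize(ITelemetry telemetry)
    {
        // Use "Backend" in the backend project.
        telemetry.Context.Cloud.RoleName = "Frontend";
    }
}

// In global.asax (Application_Start):
// TelemetryConfiguration.Active.TelemetryInitializers
//     .Add(new RoleNameTelemetryInitializer());
```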


Remark: You’ll have to wait some time before the updated names show up on the Application map

Friday, October 19, 2018

Team Foundation Server 2018 Update 3–Securing search

With the release of Team Foundation Server 2018 Update 3, security was enhanced by enabling basic authentication between TFS and the Search service. Previously, no security was enabled out of the box.

This means that when you try to upgrade to Update 3, you are required to provide a new user/password combination:


More information:

Thursday, October 18, 2018

ASP.NET - HTTP Error 500.24 - Internal Server Error

After configuring a new application in IIS and deploying our code to it, the server returned the following error:

HTTP Error 500.24 - Internal Server Error

An ASP.NET setting has been detected that does not apply in Integrated managed pipeline mode.

Most likely causes: • system.web/identity@impersonate is set to true.

The error message itself pointed us to the ‘impersonate’ setting that was causing this error. However, for this website we wanted to use impersonation, so disabling it was not an option.

Instead, we ignored this error by adding the following configuration to our web.config:

  <system.webServer>
    <validation validateIntegratedModeConfiguration="false"/>
  </system.webServer>

Wednesday, October 17, 2018

DevOps Quick Reference posters

Just a quick tip for today. Willy-Peter Schaub shared a nice set of quick reference posters about Azure and DevOps on Github:


One I especially liked is the DevOps approach @ Microsoft:


Tuesday, October 16, 2018

ASP.NET Core Configuration - System.ArgumentException: Item has already been added.

In ASP.NET Core you can specify an environment by setting the 'ASPNETCORE_ENVIRONMENT' environment variable on your server. Yesterday, however, I wanted to set up a second environment on the same machine. To achieve this I added a second website to IIS and set the environment variable through a parameter in my web.config:
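For reference, the web.config parameter looks roughly like this (the environment name and assembly name are placeholders):

```xml
<system.webServer>
  <aspNetCore processPath="dotnet" arguments=".\Web.dll" stdoutLogEnabled="false">
    <environmentVariables>
      <environmentVariable name="ASPNETCORE_ENVIRONMENT" value="Staging" />
    </environmentVariables>
  </aspNetCore>
</system.webServer>
```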

Unfortunately after doing that my application no longer worked. In the Event Viewer I could see the following error message:

Application: dotnet.exe

CoreCLR Version: 4.6.26628.5

Description: The process was terminated due to an unhandled exception.

Exception Info: System.ArgumentException: Item has already been added. Key in dictionary: 'ASPNETCORE_ENVIRONMENT'  Key being added: 'ASPNETCORE_ENVIRONMENT'

   at System.Collections.Hashtable.Insert(Object key, Object nvalue, Boolean add)

   at System.Environment.ToHashtable(IEnumerable`1 pairs)

   at System.Environment.GetEnvironmentVariables()

   at Microsoft.Extensions.Configuration.EnvironmentVariables.EnvironmentVariablesConfigurationProvider.Load()

   at Microsoft.Extensions.Configuration.ConfigurationRoot..ctor(IList`1 providers)

   at Microsoft.Extensions.Configuration.ConfigurationBuilder.Build()

   at Microsoft.AspNetCore.Hosting.WebHostBuilder..ctor()

   at Web.Program.Main(String[] args) in c:\Program.cs:line 10

It seems that ASP.NET Core doesn’t like it when you try to set the same variable twice. Strange, because I thought it worked on a previous version of .NET Core (I’m running on 2.1 at the moment).

To fix it, I had to remove the environment variable from the system (followed by a restart) and set the environment for all applications through the web.config.

Anyone with a better solution?

Monday, October 15, 2018

ASP.NET Core–Environment Tag Helper

One of the nice features in ASP.NET Core (MVC) is the introduction of Tag Helpers, a more HTML friendly alternative for HtmlHelpers.

A built-in Tag Helper is the Environment Tag Helper, which conditionally renders its enclosed content based on the current hosting environment.

In my case I wanted to render some content in the development environment and some other content in all other environments (test, uat, production). To support this scenario, the Environment Tag Helper gives you include and exclude attributes to control rendering of the enclosed content based on the included or excluded hosting environment names.

An example:

In the example above I'm changing the base href for Angular depending on the environment. In development I'm running my Angular site under the web application root, whereas in other environments I'm running it under a virtual directory.
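A sketch of the markup (the virtual directory name /myapp/ is a placeholder):

```html
<environment include="Development">
    <base href="/" />
</environment>
<environment exclude="Development">
    <base href="/myapp/" />
</environment>
```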

Tuesday, October 9, 2018

Which OAuth flow should I use?

OAuth 2.0 supports several different grants. By grants we mean ways of retrieving an Access Token. Unfortunately it can be quite a challenge to find out which grant should be used in which situation.

The guys from Auth0 created the following diagram to help you arrive at the correct grant type:

Flowchart for OAuth 2.0 Grants

Monday, October 8, 2018

ASP.NET Web API–Deadlock when mixing async and non-async code
On one of my projects, a developer asked for my help when he noticed that the application just hung after invoking a specific action on a Web API controller.

In this at-first-innocent-looking piece of code he was using a combination of async and non-async code. Here is a simplified example:
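A reconstruction of the problematic pattern (the names match the step-by-step description in this post; the details are illustrative):

```csharp
// Synchronous action blocking on an async method: a classic ASP.NET deadlock.
public class HomeController : ApiController
{
    public string Index()
    {
        // Blocks the request thread until the task completes.
        return DoSomethingAsync().Result;
    }

    private async Task<string> DoSomethingAsync()
    {
        // The continuation after the delay wants to resume on the request
        // thread, which is blocked in Index() above.
        await Task.Delay(1000);
        return "done";
    }
}
```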

If you run the code above, you end up with a deadlock. This is what happens:

  1. Thread "A" would be given to the request to run on and "Index" would be called
  2. Thread "A" would call "DoSomethingAsync" and get a Task reference
  3. Thread "A" would then request the ".Result" property of that task and would block until the task completed
  4. The "Task.Delay" call would complete and the runtime would try to continue the "DoSomethingAsync" work
  5. The ASP.NET synchronization context would require that work continue on Thread "A" and so the work would be placed on a queue for Thread "A".

Thread "A" is waiting until the DoSomethingAsync task is completed, but DoSomethingAsync cannot complete because it needs Thread "A". So we end up in a deadlock situation.

The root cause is that ASP.NET applications have a special synchronization context that returns work to the same thread after an async call completes.

To avoid this kind of issue, it’s a good idea to set .ConfigureAwait(false). By doing this you no longer use the ASP.NET synchronization context, meaning that the work can continue on any thread that is available when the async work has completed.

Remark: Be aware that this requires your code to be thread-safe.

As most of the time you don’t care on which thread the work continues, it can be a good idea to always add ".ConfigureAwait(false)" to your await calls. If you don’t want to forget this, you can use the following analyzer in your projects: it will give you a warning when a possible ConfigureAwait(false) call is missing. The analyzer is quite smart and detects when a ConfigureAwait is applicable.

Remark: When you are using ASP.NET Core, you don’t have to worry about this, as the synchronization context that was used in previous versions of ASP.NET is gone.
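Applied to the DoSomethingAsync method discussed in this post, a sketch of the fix for classic ASP.NET looks like this (names are illustrative):

```csharp
// Library code should not capture the ASP.NET synchronization context:
private async Task<string> DoSomethingAsync()
{
    // The continuation may now run on any thread pool thread,
    // so the blocked request thread is no longer needed.
    await Task.Delay(1000).ConfigureAwait(false);
    return "done";
}
```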

WSFederation OWIN - Could not load type 'System.IdentityModel.Tokens.TokenValidationParameters' from assembly 'System.IdentityModel.Tokens.Jwt, Version=, Culture=neutral, PublicKeyToken=31bf3856ad364e35'.

At the end of last year I blogged about the following exception I got when using the WSFederation OWIN middleware together with ADFS.

Could not load type 'System.IdentityModel.Tokens.TokenValidationParameters' from assembly 'System.IdentityModel.Tokens.Jwt, Version=, Culture=neutral, PublicKeyToken=31bf3856ad364e35'.

The problem was related to an incompatibility between the OWIN version (5) and the Microsoft.IdentityModel.Tokens NuGet package. In the meanwhile newer versions of this package have been released, making the issue disappear starting from version 5.2.

Friday, October 5, 2018

Introducing Microsoft Learn

Last month Microsoft launched their new learning website called Microsoft Learn. The goal of Microsoft Learn is to become the one-stop shop for self-paced, guided learning on all of Microsoft’s platform products and services.

Today the site already offers more than 80 hours of learning content for Azure, Dynamics 365, PowerApps, Microsoft Flow, and Power BI. Among that content, you’ll find experiences that will help get you ready for new certification exams for developers, administrators, and solution architects.

A quick tour of the Microsoft Learn experience

Wednesday, October 3, 2018

ElasticSearch–More like this

One of the nice features in ElasticSearch is the ‘More Like This’ query. The ‘More Like This’ query finds documents that are "like" a given set of documents. In order to do so, MLT selects a set of representative terms from these input documents, forms a query using these terms, executes the query and returns the results.

An example:

The really important parameter is the Like(), where you can specify free-form text and/or one or more documents.
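A sketch using NEST (the Post type, the field name and the document id are assumptions on my part):

```csharp
var searchResponse = client.Search<Post>(s => s
    .Query(q => q
        .MoreLikeThis(mlt => mlt
            .Fields(f => f.Field(p => p.Content))
            .Like(l => l
                .Document(d => d.Id(1))    // like an existing document...
                .Text("free form text"))   // ...and/or free-form text
            .MinTermFrequency(1))));
```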

More information:

Tuesday, October 2, 2018

Akka vs Orleans–Comparing Actor systems

If you are looking for a balanced evaluation and comparison between Akka and Orleans, have a look at the following comparison article.


The core message is well described in the introduction:

The most interesting aspect is the difference in primary focus between the two projects:

• The primary focus of Orleans is to simplify distributed computing and allow non-experts to write efficient, scalable and reliable distributed services.

• Akka is a toolkit for building distributed systems, offering the full power but also exposing the inherent complexity of this domain.

Both projects intend to be complete solutions, meaning that Orleans’ second priority is to allow experienced users to control the platform in more detail and adapt it to a wide range of use-cases, while Akka also raises the level of abstraction and offers simplified but very useful abstraction.

Another difference is that of design methodology:

• The guiding question for Orleans is “what is the default behavior that is most natural and easy for non-experts?” The second question is then how the expert can make their own decision.

• Akka’s guiding question is “what is the minimal abstraction that we can provide without compromises?” This means that “good default” for us is not driven by what users might expect, but what we think users will find most useful for reasoning about their program once they have understood the abstraction—familiarity is not a goal in itself.

I have used both systems in the past and I can stand behind the message as brought forward in this article.

Monday, October 1, 2018

IIS–Decryption key specified has invalid hex characters

After setting a machine key inside my web.config, I got the following IIS error:

Decryption key specified has invalid hex characters

Here is the related web.config line:

<machineKey decryptionKey="decryption key is here" validation="SHA1" validationKey="validationkey,IsolateApps" />

The root cause of this error is that the configuration I specified above is in fact invalid. Using an explicit decryptionKey together with the IsolateApps modifier doesn’t work. The IsolateApps modifier causes ASP.NET to generate a unique key for each application on your server, which is only applicable when you let ASP.NET auto-generate keys at runtime.
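A sketch of the two valid alternatives: explicit keys without any modifier, or fully auto-generated keys. The key values below are obvious placeholders, not real keys:

```xml
<!-- Option 1: explicit hex keys (needed for web farm scenarios) -->
<machineKey validation="SHA1"
            decryptionKey="0123456789ABCDEF0123456789ABCDEF0123456789ABCDEF"
            validationKey="0123456789ABCDEF0123456789ABCDEF0123456789ABCDEF0123456789ABCDEF" />

<!-- Option 2: auto-generated keys, isolated per application -->
<machineKey decryptionKey="AutoGenerate,IsolateApps"
            validationKey="AutoGenerate,IsolateApps" />
```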

More information:

Friday, September 28, 2018

NHibernate.MappingException: No persister for: Sample.Product

After creating a new mapping file using the code mapping feature in NHibernate, it didn’t work when I tried to persist an instance of the object. Instead I got the following error message:

NHibernate.MappingException: No persister for: Sample.Product

Here is the mapping file I’m using:

Do you notice what’s wrong? ….

I forgot to make my mapping class public:

public class ProductMapping: ClassMapping<Product>
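A minimal sketch of what the corrected mapping can look like (the properties are assumptions on my part):

```csharp
// The mapping class must be public, otherwise NHibernate's mapping scan
// does not pick it up and you get the 'No persister' exception.
public class ProductMapping : ClassMapping<Product>
{
    public ProductMapping()
    {
        Id(x => x.Id, m => m.Generator(Generators.Identity));
        Property(x => x.Name);
    }
}
```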

Thursday, September 27, 2018

Visual Studio 2017–SQL Server Data Tools–Custom assemblies could not be loaded

On one of my SQL Server Reporting Services reports, I had to use some custom logic. Instead of embedding the code directly into the report, I decided to create a separate assembly, which makes it easier to test and debug this code.

After importing the assembly in my Visual Studio 2017 Reporting Services project, I couldn’t load the Report Designer preview window anymore. Instead I got the following error message:

An error occurred during local report processing.

The definition of the report ‘/samplereport’ is invalid.

Error while loading code module: ‘Sample.Barcode, Version=, Culture=neutral, PublicKeyToken=null’. Details: Could not load file or assembly ‘Sample.Barcode, Version=, Culture=neutral, PublicKeyToken=null’ or one of its dependencies. The system cannot find the file specified.

To fix this I had to copy the DLL to the following location:

C:\Program Files (x86)\Microsoft Visual Studio\2017\Enterprise\Common7\IDE\CommonExtensions\Microsoft\SSRS

Remark: Note that this DLL is loaded only once in Visual Studio. If you change something in this DLL, you have to close and reopen Visual Studio.

Wednesday, September 26, 2018

Windows Identity Foundation - Using a machine key to encrypt/decrypt your session token

By default WIF uses a built-in SessionSecurityTokenHandler to serialize the session token to and from the cookie. Behind the scenes this token handler uses the Data Protection API (DPAPI) to protect the cookie material. DPAPI uses a key that is specific to the computer on which it is running in its protection algorithms. For this reason, the default session token handler is not usable in web farm scenarios because, in such scenarios, tokens written on one computer may need to be read on another computer.

As a solution you can replace the default SessionSecurityTokenHandler with a machine-key-based alternative:
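A sketch of the web.config change, swapping the DPAPI-based handler for the MachineKeySessionSecurityTokenHandler (assembly references shortened):

```xml
<system.identityModel>
  <identityConfiguration>
    <securityTokenHandlers>
      <remove type="System.IdentityModel.Tokens.SessionSecurityTokenHandler, System.IdentityModel" />
      <add type="System.IdentityModel.Services.Tokens.MachineKeySessionSecurityTokenHandler, System.IdentityModel.Services" />
    </securityTokenHandlers>
  </identityConfiguration>
</system.identityModel>
```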

After doing that, there is one extra step required. The default IIS configuration autogenerates a machine key per application pool.

To generate a specific key and copy it to all server instances in your web farm, clear the checkboxes next to the ‘Automatically generate at runtime’ option and choose Generate Keys from the Actions menu on the right.

Now you can copy and paste the generated keys to the other servers (or let them replicate automatically if you configured the IIS Web Farm feature).

Tuesday, September 25, 2018

F# 4.5–The ‘match!’ keyword

F# 4.5 introduces the match! keyword, which allows you to inline a call to another computation expression and pattern match on its result. Let’s have an example.

Here is the code I had to write before F# 4.5:

Notice that I have to bind the result of callServiceAsync before I can pattern match on it. Now let’s see how we can simplify this using the ‘match!’ keyword:

When calling a computation expression with match!, it will realize the result of the call like let!. This is often used when calling a computation expression where the result is an optional.
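A sketch of before and after, assuming callServiceAsync returns an Async&lt;string option&gt;:

```fsharp
// Before F# 4.5: bind the result with let!, then match on it.
let handleRequest () = async {
    let! result = callServiceAsync ()
    match result with
    | Some data -> printfn "Received: %s" data
    | None -> printfn "No data"
}

// With F# 4.5: match! binds and pattern matches in a single step.
let handleRequest' () = async {
    match! callServiceAsync () with
    | Some data -> printfn "Received: %s" data
    | None -> printfn "No data"
}
```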