Friday, December 22, 2017

WSFederation OWIN - Could not load type 'System.IdentityModel.Tokens.TokenValidationParameters' from assembly 'System.IdentityModel.Tokens.Jwt, Version=, Culture=neutral, PublicKeyToken=31bf3856ad364e35'.

At one of my clients we are (still) using ASP.NET MVC 5 and Web API 2. To secure these web applications we use the WSFederation OWIN middleware together with ADFS. This combination works nicely and helps us keep our applications secure.

Today one of the teams contacted me and complained that the middleware no longer worked. The error message they got looked like this:

Could not load type 'System.IdentityModel.Tokens.TokenValidationParameters' from assembly
'System.IdentityModel.Tokens.Jwt, Version=, Culture=neutral, PublicKeyToken=31bf3856ad364e35'.

The root cause of the problem could be found in the version number: they accidentally upgraded the System.IdentityModel.Tokens.Jwt assembly from version 4 to 5. It turns out that version 5 is no longer compatible with the OWIN middleware.

After reverting to version 4, everything returned to normal…

Thursday, December 21, 2017

TFS 2017 Build–MSTest v2 tests are not recognized

After upgrading our unit tests to MSTest v2 we noticed that our tests were no longer discovered by the VSTest task on our build agent.

As a solution, we decided to invoke the test execution ourselves.

Therefore I added 2 tasks to our Build definition:

  • One command line task to execute dotnet test


  • One task to collect and publish the test results


In the command line task I configured the following settings:

  • To execute the dotnet command we specify ‘dotnet’ as the Tool
  • We also specify the following arguments:
    • test: we want to execute the test command
    • --no-restore: the package restore already happened in a previous build step and shouldn’t be re-executed here
    • --no-build: assembly compilation already happened in a previous build step and shouldn’t be re-executed here
    • --logger:trx: output the test results in the trx format
  • A last important setting that we change is ‘Continue on error’, which we set to true. If we don’t do this, a failing test stops any further execution of the other build steps and we never get a chance to publish the test results.
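Putting those settings together, the command line task configuration boils down to:

```
Tool:      dotnet
Arguments: test --no-restore --no-build --logger:trx
```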


In the publish test result task I configured the following settings:

  • Test Result Format: VSTest. Our tests are executed using MSTest but published using the VSTest format.
  • Test Results Files: **/*.trx. Search for all trx files found and publish them.
  • Merge Test Results: True. Merge all test results if multiple files are found.


After configuring these steps, we were able to successfully run our tests and publish the results.

Remark: We are still using TFS 2017 Update 1; in newer versions of the VSTest task this problem no longer exists.

Wednesday, December 20, 2017

Designing, building, and operating microservices on Azure

Perfect reading material for during the Christmas holidays! Microsoft released some new material together with a reference implementation on how to build a microservices application on top of the Azure platform.

Microservices have become a popular architectural style for building cloud applications that are resilient, highly scalable, and able to evolve quickly. To be more than just a buzzword, however, microservices require a different approach to designing and building applications.

In this set of articles, we explore how to build and run a microservices architecture on Azure. Topics include:

  • Using Domain Driven Design (DDD) to design a microservices architecture.
  • Choosing the right Azure technologies for compute, storage, messaging, and other elements of the design.
  • Understanding microservices design patterns.
  • Designing for resiliency, scalability, and performance.
  • Building a CI/CD pipeline.

Tuesday, December 19, 2017

VS Code–Import Cost extension

The more I use VS Code, the more I love it. It is fast and offers an ever growing list of great extensions. One of the extensions I added recently is the Import Cost Extension.

From the documentation:

This extension will display inline in the editor the size of the imported package. The extension utilizes webpack with babili-webpack-plugin in order to detect the imported size.


On one of the Angular projects I’m working on, I saw our (minified) vendor.bundle.js file grow to 8MB(!) in size. People were importing any library they found useful without being aware of the extra cost it introduces. With the help of the Import Cost extension, you see the cost early and maybe think twice before you import another library.

I’m a fan!


Monday, December 18, 2017


Angular 4–compareWith directive

Angular 4 introduces a new directive, compareWith, that helps you compare select options. This directive is especially useful when your select options are dynamically populated.

Let’s explain why…

Here is a sample component that contains a dropdown with some options:

We set the default option by passing the object reference through the FormControl constructor. The problem is that when we repopulate our options list (for example through an HTTP call), the object reference is gone and the model binding of our selected value is lost.

To solve this problem we can use the compareWith directive, which no longer compares object references but uses a comparison function instead:
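The original code samples were lost in publishing; a minimal sketch of what such a comparison function can look like (the Country interface and the template in the comments are illustrative, not from the original post):

```typescript
// Compare select options by a stable key (id) instead of by reference.
interface Country {
  id: number;
  name: string;
}

function compareById(c1: Country | null, c2: Country | null): boolean {
  return c1 && c2 ? c1.id === c2.id : c1 === c2;
}

// In the template (sketch):
//   <select [formControl]="countryControl" [compareWith]="compareById">
//     <option *ngFor="let c of countries" [ngValue]="c">{{ c.name }}</option>
//   </select>

// Even after the options list is repopulated with fresh object instances,
// the selected value still matches:
const selected = { id: 2, name: "Belgium" };
const refreshed = { id: 2, name: "Belgium" }; // different object reference
const stillSelected = compareById(selected, refreshed); // true
```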

Friday, December 15, 2017

Error CS1525: Invalid expression term 'throw'

On our build server we noticed that one of our builds failed with the following error message:

Error CS1525: Invalid expression term 'throw'

When building the project locally in Visual Studio, we had no errors…

We found a solution here that worked for us as well:

Update Microsoft.Net.Compilers to 2.0.1 or greater.

We did an Update-All of the NuGet packages at the solution level, after which the error disappeared…

Thursday, December 14, 2017

Create a .npmrc file on Windows

Kind of stupid, but Windows doesn’t like it when you try to create a file with only an extension (like .gitignore, .npmrc, …). Windows will give you an error message instead:


The trick to get it working is to include a trailing dot as well, like ‘.npmrc.’; Windows will drop the final dot and save the file as ‘.npmrc’.


Don’t ask me why it works…

Wednesday, December 13, 2017

Lettable operators in RxJs

After upgrading to Angular 5 (and having some trouble with RxJs but that is for another post), I noticed the introduction of "lettable operators", which can be accessed in rxjs/operators.

What is a lettable operator?

A lettable operator is basically any function that returns a function with the signature: <T, R>(source: Observable<T>) => Observable<R>. Euhm, what?!

Simply put, operators (like filter, map, …) are no longer tied to an Observable directly but can be used with the existing let operator (explaining the name). This means you can no longer use dot-chaining, but have to use another way to compose your operators.

Therefore a pipe method is now built into Observable, available at Observable.prototype.pipe:
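The real rxjs pipe composes functions of type (source: Observable&lt;T&gt;) =&gt; Observable&lt;R&gt;. As a self-contained sketch of the composition mechanics (using plain arrays in place of Observables so it runs without rxjs; with rxjs you would import map and filter from rxjs/operators and call source$.pipe(...)):

```typescript
// A "lettable" operator is just a function over the source.
type UnaryFunction<T, R> = (source: T) => R;

// pipe composes operators left-to-right, exactly like Observable.prototype.pipe.
function pipe<T>(...operations: Array<UnaryFunction<any, any>>): (source: T) => any {
  return (source: T) => operations.reduce((acc, op) => op(acc), source as any);
}

// Lettable-style map/filter: they *return* a function over the source
// instead of being methods patched onto a prototype.
const map = <T, R>(project: (value: T) => R): UnaryFunction<T[], R[]> =>
  source => source.map(project);

const filter = <T>(predicate: (value: T) => boolean): UnaryFunction<T[], T[]> =>
  source => source.filter(predicate);

const result = pipe<number[]>(
  filter((n: number) => n % 2 === 0),
  map((n: number) => n * 10)
)([1, 2, 3, 4]);
// result: [20, 40]
```

Because the operators are plain functions pulled in from modules, bundlers can tree-shake the ones you never import.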

Why lettable operators?

Lettable operators were introduced to solve the following problems with dot-chaining (from the documentation):

  1. Any library that imports a patch operator will augment the Observable.prototype for all consumers of that library, creating blind dependencies. If the library removes their usage, they unknowingly break everyone else. With lettables, you have to import the operators you need into each file you use them in.
  2. Operators patched directly onto the prototype are not "tree-shakeable" by tools like rollup or webpack. Lettable operators will be as they are just functions pulled in from modules directly.
  3. Unused operators that are being imported in apps cannot be detected reliably by any sort of build tooling or lint rule. That means that you might import scan, but stop using it, and it's still being added to your output bundle. With lettable operators, if you're not using it, a lint rule can pick it up for you.
  4. Functional composition is awesome. Building your own custom operators becomes much, much easier, and now they work and look just like all other operators from rxjs. You don't need to extend Observable or override lift anymore.

In short, lettable operators improve tree shaking and make it easier to create custom operators.

Tuesday, December 12, 2017

PostgreSQL: Identify your slowest queries

PostgreSQL provides a large list of modules that extend its core functionality. One of these modules is the pg_stat_statements module, which provides a means for tracking execution statistics of all SQL statements executed by a server.

Before you can use this module, it must be loaded by adding pg_stat_statements to shared_preload_libraries in postgresql.conf, because it requires additional shared memory. This means that a server restart is needed to add the module.

After loading the module you can execute the below query to get the top 5 duration queries executed during your performance/benchmarking run:
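The query itself was lost in publishing; a query of this shape returns the top 5 (column names as in the pg_stat_statements view of PostgreSQL 9.x/10; total_time was later renamed total_exec_time in PostgreSQL 13):

```sql
SELECT query,
       calls,
       total_time,
       total_time / calls AS avg_time_ms
FROM pg_stat_statements
ORDER BY total_time DESC
LIMIT 5;
```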

It is recommended to reset the pg_stat_statements statistics before your run, using the query below, to ensure that you only capture the statements from your performance/benchmarking run:
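Resetting the collected statistics is a single function call:

```sql
SELECT pg_stat_statements_reset();
```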

Monday, December 11, 2017

System.Data.SqlClient.SqlException : The ntext data type cannot be selected as DISTINCT because it is not comparable

After upgrading to NHibernate 5, one of our integration tests started to fail with the following error message:

System.Data.SqlClient.SqlException : The ntext data type cannot be selected as DISTINCT because it is not comparable.

The error itself happened inside a query generated by NHibernate:

Message: NHibernate.Exceptions.GenericADOException : [SQL: select distinct supplier1_.SupplierID as supplierid1_138_, supplier1_.Address as address2_138_, supplier1_.City as city3_138_, supplier1_.CompanyName as companyname4_138_, supplier1_.ContactName as contactname5_138_, supplier1_.ContactTitle as contacttitle6_138_, supplier1_.Country as country7_138_, supplier1_.Fax as fax8_138_, supplier1_.HomePage as homepage9_138_, supplier1_.Phone as phone10_138_, supplier1_.PostalCode as postalcode11_138_, supplier1_.Region as region12_138_ from Products product0_ inner join Categories category2_ on product0_.CategoryID=category2_.CategoryID, Suppliers supplier1_ where supplier1_.SupplierID=product0_.SupplierID and category2_.CategoryName=@p0] 

The error message pointed us to a ntext data type. A look into the database revealed that indeed we were using the ntext data type.


It is recommended to avoid this data type (it is marked as obsolete) and use nvarchar(MAX) instead.

But why didn’t we get this error before? It turned out that the query generated by NHibernate 5 is different from the one used by NHibernate 4. In NHibernate 5 an extra DISTINCT clause is added to the query, resulting in the error message above.

Friday, December 8, 2017

Angular 5 - EmptyError: no elements in sequence

After upgrading to Angular 5, the first run of my application wasn’t a big success. I ended up with the following error message when I tried to navigate using the Angular Router:

EmptyError: no elements in sequence

The problem turned out not to be related to Angular directly but to RxJS 5.5.3. Reverting to 5.5.2 resolved the problem.

npm install rxjs@5.5.2


Thursday, December 7, 2017

Angular Update Guide

With the fast cadence that new versions of Angular are released, it is not always easy to know what steps are necessary to go from one version to another.

To help you, you can use the Angular Update Guide:

This is a small application that asks you the following questions:

  • From what version to what version do you want to migrate?
  • How complex is your app?
  • Do you use ngUpgrade?
  • What Package Manager do you use?


After answering all these questions, you can click the Show me how to update! button and you get a step by step guide:


Wednesday, December 6, 2017

NHibernate 5–Async/await

With all the fuss about .NET Core I almost forgot that NHibernate 5 was released a month ago.

One of the things I was waiting for was the introduction of async/await to optimize IO bound methods. And after waiting for a loooooong time it’s finally there:
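A small sketch of what this enables (the Supplier entity and session variable are illustrative; ToListAsync comes from the NHibernate.Linq namespace):

```csharp
using NHibernate.Linq;

// Inside an async method: the IO-bound query no longer blocks the calling thread.
var suppliers = await session.Query<Supplier>()
    .Where(s => s.Country == "Belgium")
    .ToListAsync();
```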

Tuesday, December 5, 2017

TypeScript: Variable 'require' must be of type 'any', but here has type 'NodeRequire'.

After loading a solution for a code review, I was welcomed by the following error message:

Variable 'require' must be of type 'any', but here has type 'NodeRequire'.

Google brought me to the following GitHub issue, where the proposed solution was as simple as beautiful:

There must be another definition of require somewhere.

And indeed, after triggering a search on my solution I found another ‘declare var require’ somewhere in a type definition file. After removing it, the error disappeared (and everything still worked).

Monday, December 4, 2017

The request was aborted: could not create SSL/TLS secure channel–Part 3

A colleague asked for help with the following problem:

He has an ASP.NET MVC website that talks to an ASP.NET Web API backend. On development everything works as expected, but on the acceptance environment he suddenly started to get TLS errors when the HttpClient invokes a call to the backend:

The request was aborted: Could not create SSL/TLS secure channel.

Let’s take you through the journey that brought us to our final solution.

Part 3 – The registry hack

Our acceptance and production environments are still Windows Server 2008 R2. On this OS TLS 1.0 is still the default. We can change this by altering the registry:

  • Open the registry on your server by running ‘regedit‘
  • Browse to the following location:
    • HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols
  • Add the TLS 1.1 and TLS 1.2 keys under Protocols
    • Right click on Protocols and click New –> Key
  • Create 2 keys Client and Server under both TLS keys
  • Create DWORD values under the Server and Client keys
    • Right click on Server and Client and click New—> DWORD
    • Set the following values:
      • DisabledByDefault [Value = 0]
      • Enabled [Value = 1]
  • That’s it!
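The same steps can be captured in a .reg file so you don’t have to click through regedit on every server (a sketch of the keys described above; review before importing):

```
Windows Registry Editor Version 5.00

[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols\TLS 1.1\Client]
"DisabledByDefault"=dword:00000000
"Enabled"=dword:00000001

[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols\TLS 1.1\Server]
"DisabledByDefault"=dword:00000000
"Enabled"=dword:00000001

[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols\TLS 1.2\Client]
"DisabledByDefault"=dword:00000000
"Enabled"=dword:00000001

[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols\TLS 1.2\Server]
"DisabledByDefault"=dword:00000000
"Enabled"=dword:00000001
```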

Friday, December 1, 2017

The request was aborted: could not create SSL/TLS secure channel–Part 2

A colleague asked for help with the following problem:

He has an ASP.NET MVC website that talks to an ASP.NET Web API backend. On development everything works as expected, but on the acceptance environment he suddenly started to get TLS errors when the HttpClient invokes a call to the backend:

The request was aborted: Could not create SSL/TLS secure channel.

Let’s take you through the journey that brought us to our final solution.

Part 2 – The unexpected .NET Framework update(again).

As mentioned in Part 1, the problem started to appear after a rollout of .NET Framework 4.6. What made it even stranger is that we didn’t see the issue on our production environment. So why didn’t it work on our acceptance environment while it did work on production?

It turned out that on our production environment another .NET Framework update was executed and that the behavior of the HttpClient changed (again):

From the documentation:

Default operating system support for TLS protocols

The TLS stack, which is used by System.Net.Security.SslStream and up-stack components such as HTTP, FTP, and SMTP, allows developers to use the default TLS protocols supported by the operating system. Developers need no longer hard-code a TLS version.

This explained why it worked on production: there the HttpClient no longer used TLS 1.2 by default but fell back to the OS default.

But wait, we are still using an older protocol; can’t we change this? This is a question we’ll answer in our third and final post…

Thursday, November 30, 2017

The request was aborted: could not create SSL/TLS secure channel–Part 1

A colleague asked for help with the following problem:

He has an ASP.NET MVC website that talks to an ASP.NET Web API backend. On development everything works as expected, but on the acceptance environment he suddenly started to get TLS errors when the HttpClient invokes a call to the backend:

The request was aborted: Could not create SSL/TLS secure channel.


Let’s take you through the journey that brought us to our final solution.

Part 1 – The unexpected .NET Framework update.

What we found especially strange was that it worked before, and that the errors only started to appear recently. This brought us on the path of what was recently changed. One of the things that happened was an upgrade to .NET Framework 4.6. Could it be?

In the .NET documentation we found that in .NET 4.6 the HttpClient defaults to TLS 1.2. Maybe that caused the error?

We updated our code to force the system to use TLS 1.0:
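The snippet itself was lost in publishing; forcing TLS 1.0 typically comes down to one line (note that ServicePointManager pins the protocol process-wide):

```csharp
using System.Net;

// Force the older TLS 1.0 protocol for all outgoing connections.
ServicePointManager.SecurityProtocol = SecurityProtocolType.Tls;
```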

This worked, but now we were using an older (less secure) TLS version. Let’s continue the investigation in another post…

Wednesday, November 29, 2017

Angular–OnPush Change detection

I always had a hard time explaining how and why the OnPush change detection worked in Angular. But recently I found the following post at Angular-University:


In my opinion, the best explanation about this topic ever!

Tuesday, November 28, 2017

Angular: Unsubscribe observables

When you subscribe to an observable in Angular, a subscription is created. To avoid memory leaks in your Angular components it is important that you unsubscribe from any subscription in the OnDestroy lifecycle hook.

Although this sounds like a good general rule, there are a lot of exceptions where Angular does the right thing and handles the unsubscribe for you:

  • AsyncPipe: if you are using observable streams via the AsyncPipe then you do not need to worry about unsubscribing. The async pipe will take care of subscribing and unsubscribing for you.
  • Router observables: The Router manages the observables it provides and localizes the subscriptions. The subscriptions are cleaned up when the component is destroyed, protecting against memory leaks, so we don't need to unsubscribe from the route params Observable.
  • Http observables: Http observables produce finite values and don’t require an unsubscribe.

As a general conclusion, in most cases you don’t need to explicitly call the unsubscribe method. The default behavior of Observable operators is to dispose of the subscription as soon as .complete() or .error() messages are published.

Monday, November 27, 2017

Progressive Web App–Redirect from HTTP to HTTPS

I’m currently working on a Progressive Web App (PWA) using ASP.NET Core. After creating our initial setup I used the Google Lighthouse chrome extension to check my application.

The results looked OK, I only had one failed audit: “Does not redirect HTTP traffic to HTTPS”.


Let’s fix this by adding the AspNetCore Rewrite middleware:
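The registration itself was lost in publishing; a sketch of the middleware setup in Configure (using the Microsoft.AspNetCore.Rewrite package):

```csharp
using Microsoft.AspNetCore.Builder;
using Microsoft.AspNetCore.Rewrite;

public void Configure(IApplicationBuilder app)
{
    // Redirect every HTTP request to its HTTPS equivalent.
    app.UseRewriter(new RewriteOptions().AddRedirectToHttps());
}
```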

If you need to specify a port, you can add some extra parameters:
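For example, a permanent (301) redirect to a specific HTTPS port (5001 here is an assumption, not from the original post):

```csharp
// 301 = permanent redirect; 5001 = the HTTPS port to redirect to.
app.UseRewriter(new RewriteOptions().AddRedirectToHttps(301, 5001));
```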

Friday, November 24, 2017

Entity Framework 6.2–Model Cache

Entity Framework 6.2 introduces the concept of a Model Cache. This gives you the ability to use a prebuilt edmx.

Why is this useful?

By default Entity Framework (not Core) will generate an EDMX behind the scenes at startup. If you have a rather large EF model, this can take a lot of time.

How to configure it?

To configure it, you have to use the new SetModelStore method to apply the DefaultDbModelStore. This class compares the timestamp of your context’s assembly against the edmx. If they do not match, the model cache is deleted and rebuilt.
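A configuration sketch (the DbConfiguration-derived class name is made up; DefaultDbModelStore takes the directory where the cached edmx should live):

```csharp
using System.Data.Entity;
using System.Data.Entity.Infrastructure;
using System.IO;

public class CustomDbConfiguration : DbConfiguration
{
    public CustomDbConfiguration()
    {
        // Cache the generated EDMX on disk; EF rebuilds it when the
        // context assembly is newer than the cached file.
        SetModelStore(new DefaultDbModelStore(Directory.GetCurrentDirectory()));
    }
}
```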


Thursday, November 23, 2017

C# 7–Deconstruction

With the introduction of ValueTuple in C# 7, the C# team also introduced support for deconstruction that allows you to split out a ValueTuple in its discrete arguments:
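For example:

```csharp
(string name, int age) person = ("Alice", 42);

// Deconstruction splits the tuple into separate variables.
var (name, age) = person;
// name == "Alice", age == 42
```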

The nice thing is that this feature is not limited to tuples; any type can be deconstructed, as long as it has a Deconstruct method with the appropriate out parameters:
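A sketch with a hand-written Deconstruct method (the Point class is illustrative):

```csharp
public class Point
{
    public int X { get; }
    public int Y { get; }

    public Point(int x, int y) { X = x; Y = y; }

    // The compiler looks for a Deconstruct method with out parameters.
    public void Deconstruct(out int x, out int y)
    {
        x = X;
        y = Y;
    }
}

var (x, y) = new Point(3, 4);   // x == 3, y == 4
```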

Remark: The Deconstruct method can also be an extension method.

Wednesday, November 22, 2017

ElasticSearch for YAML lovers

By default the output returned by ElasticSearch is JSON. However if you like the more dense format that YAML offers, it is possible to ask ElasticSearch to output your data as YAML. Just add ‘format=yaml’ as a query string parameter to your query:

GET nusearch/package/_search?format=yaml
{
  "suggest": {
    "package-suggestions": {
      "prefix": "asp",
      "completion": {
        "field": "suggest"
      }
    }
  },
  "_source": {
    "includes": [ … ]
  }
}

Your output will become:


Tuesday, November 21, 2017

TFS 2018 and SQL Server 2017–Multidimensional server mode

Last week I did a test migration for a customer from TFS 2015 to TFS 2018. They already configured a SQL Server 2017 Database Services, Analysis Services and Reporting Services for me, so I thought I was good to go.

However halfway through the migration process I noticed the following warning appear:

[2017-11-15 14:18:43Z][Warning] An error was encountered while attempting to upgrade either the warehouse databases or the Analysis Services database. Reporting features will not be usable until the warehouse and Analysis Services database are successfully configured. Use the Team Foundation Server Administration console to update the Reporting configuration. Error details: TF400646: Team Foundation Server requires Analysis Services instance installed in the 'Multidimensional' server mode. The Analysis Services instance you supplied (<INSTANCE NAME>) is in 'Tabular' server mode. You can either install another instance of Analysis Services and supply that instance name, or you can uninstall this instance and install it in the required server mode.

Turns out that in SQL Server 2017 Analysis Services you can choose between 3 possible modes:

  • TABULAR: Relational modeling constructs (model, tables, columns), articulated in tabular metadata object definitions in Tabular Model Scripting Language (TMSL) and Tabular Object Model (TOM) code. This is the default value.
  • MULTIDIMENSIONAL: OLAP modeling constructs (cubes, dimensions, measures).
  • POWERPIVOT: Originally an add-in, but now fully integrated into Excel. Visual modeling only, over an internal Tabular infrastructure. You can import a Power Pivot model into SSDT to create a new Tabular model that runs on an Analysis Services instance.


Monday, November 20, 2017

PostgreSQL–Case insensitive search

By default when you use the LIKE operator in PostgreSQL, your query parameter is matched in a case-sensitive manner. This means that the query

SELECT * FROM "Products" WHERE "Name" LIKE 'Beverag%'

will produce different results than

SELECT * FROM "Products" WHERE "Name" LIKE 'beverag%'

A possible solution for this is the use of regular expressions:

SELECT * FROM "Products" WHERE "Name" ~* 'beverag';

This query returns all matches where the name contains the word ‘beverag’, and because it is a case-insensitive search, it also matches things like ‘BEVERAGE’.

Friday, November 17, 2017

ADFS–Where to find the issuer thumbprint for WIF (Windows Identity Foundation)?

To validate a new installation of ADFS, we created a small sample app that used Windows Identity Foundation to authenticate to the ADFS server.

We got most information from our system administrator, but it turned out that the Issuer Thumbprint was missing.

As the system administrator wasn’t in the office, we had to find a different solution to get the thumbprint.

Here is what we did:


  • To read out the certificate information (and the thumbprint) you have to:
    • Create a new text file
    • Copy the certificate value into the file
    • Save the file with a .cer extension
  • Now you can open the file, and read out the thumbprint value:
    • Double click on the file
    • Go to the Details tab
    • Scroll to the thumbprint property


    Thursday, November 16, 2017

    TFS 2018– Remove ElasticSearch

    Here is an update regarding my post

    In TFS 2018, the command to remove your ElasticSearch instance changed a little and the steps became:

    • Open PowerShell as an administrator
    • Go to the folder where Configure-TFSSearch.ps1 is installed. In TFS 2018, this is typically C:\Program Files\Microsoft Team Foundation Server 2018\Search\zip
    • Run the Configure-TFSSearch script with the remove option: ".\Configure-TFSSearch.ps1 -Operation remove"

    Wednesday, November 15, 2017

    ElasticSearch–Understand the query magic using ‘explain’

    Sometimes an ElasticSearch query is invalid or doesn’t return the results you expect. To find out what is going on, you can add the explain parameter to the query string:
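For example (the nusearch index comes from an earlier post; the query body is illustrative):

```
GET nusearch/package/_search?explain=true
{
  "query": { "match": { "id": "NHibernate" } }
}
```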


    In your results you get an extra explanation section



    Tuesday, November 14, 2017

    Using GuidCOMB in SQL Server and PostgreSQL

    On a project I’m working on, we expect a really large amount of data. Therefore we decided to switch our ID strategy from integers to GUIDs. The problem is that when you use random GUIDs as part of your database index, the index becomes really fragmented, resulting in longer write times.

    To solve this, you can use the GuidCOMB technique where a part of the GUID is replaced by a sorted date/time value. This guarantees that values will be sequential and avoids index page splits.

    NHibernate and Marten support the GuidCOMB technique out-of-the-box, but if you want to use it with other tools you can try RT.Comb, a small .NET Core library that generates “COMB” GUID values in C#.

    Here is a sample how to use it in combination with Entity Framework:

    • Let’s first create an Entity Framework Value Generator that uses the RT.Comb library:
    • To apply this generator when an object is added to a DbContext, you can specify it in the Fluent mapping configuration:
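The code for both steps was lost in publishing; a sketch using the EF Core ValueGenerator API (the generator class, entity, and property names are made up; use RT.Comb’s Provider.PostgreSql instead of Provider.Sql when targeting PostgreSQL):

```csharp
using System;
using Microsoft.EntityFrameworkCore;
using Microsoft.EntityFrameworkCore.ChangeTracking;
using Microsoft.EntityFrameworkCore.ValueGeneration;

public class CombGuidValueGenerator : ValueGenerator<Guid>
{
    // Values are real (non-temporary) and can be sent to the database.
    public override bool GeneratesTemporaryValues => false;

    // RT.Comb produces a GUID whose high bytes sort by creation time.
    public override Guid Next(EntityEntry entry)
        => RT.Comb.Provider.Sql.Create();
}

// In your DbContext, apply the generator through the fluent mapping:
protected override void OnModelCreating(ModelBuilder modelBuilder)
{
    modelBuilder.Entity<Order>()
        .Property(o => o.Id)
        .HasValueGenerator<CombGuidValueGenerator>();
}
```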

    Friday, November 10, 2017

    Kestrel error: The connection was closed because the response was not read by the client at the specified minimum data rate.

    While running some performance tests on our ASP.NET Core application, after increasing the load to a certain level, we saw the following error messages appear on the server:

    The connection was closed because the response was not read by the client at the specified minimum data rate.

    This error is related to the Minimum request body data rate specified by Kestrel.

    From the documentation:

    Kestrel checks every second if data is coming in at the specified rate in bytes/second. If the rate drops below the minimum, the connection is timed out. The grace period is the amount of time that Kestrel gives the client to increase its send rate up to the minimum; the rate is not checked during that time. The grace period helps avoid dropping connections that are initially sending data at a slow rate due to TCP slow-start.

    The default minimum rate is 240 bytes/second, with a 5 second grace period.

    A minimum rate also applies to the response. The code to set the request limit and the response limit is the same except for having RequestBody or Response in the property and interface names.

    The problem was that we were simulating our load from one machine which was not capable of sending enough data at the expected request rate. After scaling out our load tests to multiple test agents on different machines, the problem disappeared…

    Thursday, November 9, 2017

    Azure Storage Explorer–Support for Cosmos DB

    Great news! From now on the Azure Storage Explorer can be used to manage your Cosmos DB databases.

    Key features

    • Open Cosmos DB account in the Azure portal
    • Add resources to the Quick Access list
    • Search and refresh Cosmos DB resources
    • Connect directly to Cosmos DB through a connection string
    • Create and delete Databases
    • Create and delete Collections
    • Create, edit, delete, and filter Documents
    • Create, edit, and delete Stored Procedures, Triggers, and User-Defined Functions


    Install Azure Storage Explorer: [Windows], [Mac], [Linux]

    Wednesday, November 8, 2017

    .NET Core Unit Tests–Enable logging

    I noticed that .NET Core unit tests capture the output sent through tracing (via Trace.Write()) and through the console (via Console.Write()).

    It took me some time before I had the correct code to get the Microsoft.Extensions.Logging data written to my test logs.

    So here is a small code snippet in case you don’t want to search for it yourself:
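The snippet itself was lost in publishing; with the 1.x/2.0-era Microsoft.Extensions.Logging.Console package, wiring a console logger inside a test looked roughly like this (MyService is a placeholder):

```csharp
using Microsoft.Extensions.Logging;

// Console output is captured by the test runner, so logging to the
// console makes the Microsoft.Extensions.Logging data show up in the test log.
var loggerFactory = new LoggerFactory().AddConsole(LogLevel.Debug);
ILogger logger = loggerFactory.CreateLogger<MyService>();

logger.LogInformation("This message shows up in the test output");
```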

    Remark: Don’t forget to include the Microsoft.Extensions.Logging.Console nuget package.

    Tuesday, November 7, 2017

    .NET Core Unit Tests–Using configuration files

    Here are the steps to use Microsoft.Extensions.Configuration in your .NET Core unit tests:
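The steps themselves were lost in publishing; the usual pattern is to copy an appsettings.json to the test output folder (set “Copy to Output Directory” on the file) and build an IConfiguration from it (file and key names below are placeholders):

```csharp
using System.IO;
using Microsoft.Extensions.Configuration;

var configuration = new ConfigurationBuilder()
    .SetBasePath(Directory.GetCurrentDirectory()) // the test output folder
    .AddJsonFile("appsettings.json", optional: false)
    .Build();

var connectionString = configuration["ConnectionStrings:Default"];
```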

    Monday, November 6, 2017

    Git–Commit changes to a new branch

    Did it ever happen to you that you were changing some code in one branch until you realized that you actually wanted to commit on another (new) branch?

    I was expecting that this was not easy to do, but in fact it’s rather easy.

    Don’t stage your changes, instead just create a new branch using

    git checkout -b another-branch

    This will create and checkout “another-branch”.

    Now you can stage your files using

    git add .

    and commit them using

    git commit -m <message>

    Remark: This works in Visual Studio as well

    Friday, November 3, 2017

    TypeScript Index Signatures

    I love TypeScript and how it helps me write better JavaScript applications. However sometimes I struggle with the dynamic world that JavaScript has to offer and the fight for type safety that TypeScript adds to the mix.

    A situation I had was where I had some objects each sharing the same set of properties. However, in some situations extra metadata was added depending on the customer (it’s a multitenant solution). So I created an interface for all the shared properties, but what should I do with the (possible) extra metadata? Adding so many different extra properties on the interface and making them optional didn’t sound ideal.

    TypeScript allows you to add extra properties to specific objects with the help of index signatures. Adding an index signature to the interface declaration allows you to specify any number of properties for different objects that you are creating.

    An example:
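The example itself was lost in publishing; a sketch of the pattern (the interface and property names are made up):

```typescript
// Shared, strongly typed properties plus an index signature that allows
// any number of extra, tenant-specific properties.
interface CustomerRecord {
  id: number;
  name: string;
  [metadata: string]: any;
}

const record: CustomerRecord = {
  id: 1,
  name: "Contoso",
  region: "EMEA",       // extra metadata, only present for this tenant
  loyaltyLevel: "gold", // extra metadata, only present for this tenant
};
```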

    Thursday, November 2, 2017

    .NET Core SignalR Client error: System.IO.FileLoadException: Could not load file or assembly 'System.Runtime.InteropServices.RuntimeInformation

    To test a .NET Core SignalR application, I created a sample application (using the full .NET Framework) where I included the Microsoft.AspNetCore.SignalR.Client NuGet package and added the following code:
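The snippet was lost in publishing; with the alpha-era client API it looked roughly like this, inside an async method (the URL and hub path are placeholders):

```csharp
using Microsoft.AspNetCore.SignalR.Client;

var connection = new HubConnectionBuilder()
    .WithUrl("http://localhost:5000/chat")
    .WithConsoleLogger()   // removing this line avoided the exception below
    .Build();

await connection.StartAsync();
```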

    However when I tried running this application, it failed with the following error message:

    System.IO.FileLoadException: Could not load file or assembly 'System.Runtime.InteropServices.RuntimeInformation, Version=, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a' or one of its dependencies. The located assembly's manifest definition does not match the assembly reference. (Exception from HRESULT: 0x80131040)

    I checked all my assembly references but they all seemed OK.

    As a workaround, I was able to avoid the issue by removing the .WithConsoleLogger() line. Anyone who has an idea what can be wrong?

    Remark: I think it has something to do with the SignalR client, which targets .NET Standard 2.0, while my sample application targets .NET Framework 4.7. But no clue what exactly is causing it…

    Wednesday, November 1, 2017

    Web.config transformations in .NET Core

    In a previous post I mentioned that we started to put environment variables inside our web.config files to change the ASPNETCORE_ENVIRONMENT setting inside our ASP.NET Core apps. As we were already using Web Deploy to deploy our ASP.NET Core applications, we decided to use the web.config transformations functionality to set the environment variable in our web.config to a correct value before deploying:

    • We created extra web.{environment}.config files


    • And added the Xdt transformation configuration:
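A minimal transform file could look like this (the file targets the aspNetCore element mentioned in the post; the environment name "Staging" is just an example):

```xml
<?xml version="1.0"?>
<configuration xmlns:xdt="http://schemas.microsoft.com/XML-Document-Transform">
  <system.webServer>
    <aspNetCore>
      <environmentVariables>
        <!-- Rewrites the value of the matching environment variable -->
        <environmentVariable name="ASPNETCORE_ENVIRONMENT" value="Staging"
                             xdt:Locator="Match(name)" xdt:Transform="SetAttributes" />
      </environmentVariables>
    </aspNetCore>
  </system.webServer>
</configuration>
```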

    However when we tried to deploy, we noticed that the transformation was not executed and that the original web.config file was used.

    What did we do wrong?

    The answer turned out to be “nothing”. Unfortunately ASP.NET Core projects don’t support the transformation functionality. Luckily, a colleague (thanks Sami!) brought the following library to my attention:

    dotnet-transform-xdt is a dotnet CLI tool for applying XML Document Transformation (typically, to ASP.NET configuration files at publish time, but not limited to this scenario).

    That’s exactly what we need!

    How to use dotnet-transform-xdt?

    • Right click on your ASP.NET Core project and choose Edit csproj
    • Add the following line to the list of Package references:

    <DotNetCliToolReference Include="Microsoft.DotNet.Xdt.Tools" Version="2.0.0" />

    • Add the following target before the closing </Project> tag:

    <Target Name="ApplyXdtConfigTransform" BeforeTargets="_TransformWebConfig">
      <Exec Command="dotnet transform-xdt --xml &quot;$(_SourceWebConfig)&quot; --transform &quot;$(_XdtTransform)&quot; --output &quot;$(_TargetWebConfig)&quot;" Condition="Exists('$(_XdtTransform)')" />
    </Target>

    • If you now run dotnet publish and examine the web.config in the publish output folder, a transformed web.config should be there…

    Tuesday, October 31, 2017

    Visual Studio 2017 (Enterprise) - Where are the Web performance and load testing tools?

    Yesterday I wanted to do some performance testing before we put a new application into production. However when I opened Visual Studio (2017) I couldn’t find the Web performance and load testing tools.

    Are they no longer available in VS 2017? Luckily, they still are. But they are not installed out-of-the-box.

    Let’s open the Visual Studio installer and fix this:

    • Search for Visual Studio Installer and execute it


    • Click on More –> Modify


    • Go to the Individual components tab, scroll to the Debugging and testing section and select Web performance and load testing tools.


    • Click Modify to start the installation

    Monday, October 30, 2017

    C# 7: Lambdas vs Local functions

    C# 7 introduces the concept of local functions. Local functions can be nested in other functions, similar to anonymous delegates or lambda expressions. Doesn’t this make local functions redundant? Not at all: anonymous functions and lambda expressions have certain restrictions that local functions do not have.

    Here is a list of things a local function can do that lambdas can’t:

    • Local functions can be called without converting them to a delegate

    • Local functions can be recursive

    • Local functions can be iterators

    • Local functions can be generic

    • Local functions have strictly more precise definite assignment rules

    • In certain cases, local functions do not need to allocate memory on the heap

    More information can be found at:

    Friday, October 27, 2017

    Angular: Cancel a pending HTTP request

    In case you are wondering how to cancel a pending HTTP request, here is the (simple) answer: call unsubscribe on the Subscription object returned by the subscribe method.

    An example:
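The pattern, sketched with a hand-rolled stand-in for the real RxJS Subscription so the snippet is self-contained (in a real Angular app the Subscription comes from this.http.get(...).subscribe(...)):

```typescript
// Stand-in for the RxJS Subscription that HttpClient's subscribe() returns.
class FakeSubscription {
  closed = false;
  constructor(private onCancel: () => void) {}

  unsubscribe(): void {
    if (!this.closed) {
      this.closed = true;
      this.onCancel(); // cancel the still-pending work
    }
  }
}

// Simulates a pending HTTP GET: the callback fires later, unless cancelled.
function fakeGet(onResult: (body: string) => void): FakeSubscription {
  const timer = setTimeout(() => onResult("response body"), 1000);
  return new FakeSubscription(() => clearTimeout(timer));
}

const subscription = fakeGet(body => console.log(body));
// Cancel the request while it is still pending:
subscription.unsubscribe();
```

The callback never fires because the pending work was cancelled before it completed, which is exactly what unsubscribing on a pending HTTP request does.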


    Thursday, October 26, 2017

    Angular aot build error–No template specified for component

    A colleague sent me the following screenshot with an error he got after switching to an AOT build in Angular:


    Here is the related TypeScript file for this component:

    The problem was caused by the “template: require('./perceel-list.component.html')” statement in the component file. The AOT build doesn’t like it when you try to resolve HTML templates dynamically.

    Removing the require and just using the templateUrl instead solved the problem:
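For illustration, the change boils down to the following (the component and selector names are assumptions based on the file name above; this fragment assumes a regular Angular CLI project):

```typescript
import { Component } from '@angular/core';

// Before (breaks the AOT build - template resolved dynamically at runtime):
//   template: require('./perceel-list.component.html')

// After (AOT-friendly - the CLI resolves templateUrl at build time):
@Component({
  selector: 'app-perceel-list',
  templateUrl: './perceel-list.component.html'
})
export class PerceelListComponent { }
```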

    Wednesday, October 25, 2017

    Angular CLI– ng build --prod

    Did you know that you can pass a “--prod” parameter when compiling your Angular code using “ng build”?

    The “--prod” option also has a development counterpart, “--dev”. They set the following parameters:

    Flag              --dev    --prod
    --aot             false    true
    --environment     dev      prod
    --output-hashing  media    all
    --sourcemaps      true     false
    --extract-css     false    true
    --named-chunks    true     false

    More information:

    Tuesday, October 24, 2017

    Swagger–Expose enums as strings in your Web API

    By default Swagger exposes enums in your API definitions as numbers, which makes it hard to understand what a specific parameter value means.


    You can configure Swagger to expose enums using their string names instead. To do so, add the following line to your Swagger configuration:
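With Swashbuckle for ASP.NET Web API this is a one-liner in the configuration callback; a sketch (the API title is a placeholder, and the method name may differ in other Swashbuckle versions):

```csharp
using System.Web.Http;
using Swashbuckle.Application;

public static class SwaggerConfig
{
    public static void Register()
    {
        GlobalConfiguration.Configuration
            .EnableSwagger(c =>
            {
                c.SingleApiVersion("v1", "My API");
                // Expose enum values by name instead of as numbers:
                c.DescribeAllEnumsAsStrings();
            })
            .EnableSwaggerUi();
    }
}
```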



    Monday, October 23, 2017

    Send extra arguments to npm script in package.json

    In our package.json we defined some script commands to automate certain actions.

    However sometimes we want to pass extra parameters to the script when executing it. This is possible by adding an extra pair of dashes before the extra parameter when executing the command:

    npm run ngbuild -- --environment=local

    Friday, October 20, 2017

    .NET Conf 2017

    In case you missed .NET Conf 2017, all the videos are available online on Channel 9.


    Thursday, October 19, 2017

    SQL Server Full Text Search–Wildcards

    The SQL Server Full Text Search option is really powerful. However, you need to be aware that by default a search is always done on a full word. For example, if you had indexed ‘the quick brown fox jumps over the lazy dog’ and you search for ‘brow’, you don’t get a result back.

    To solve this you can use wildcards, but be aware that you have to put the full search term between double quotes.

    This query will not work:

    SELECT BookID, BookTitle
    FROM Books
    WHERE CONTAINS(BookTitle, 'brow*')

    But this query will:

    SELECT BookID, BookTitle
    FROM Books
    WHERE CONTAINS(BookTitle, '"brow*"')

    Wednesday, October 18, 2017

    Angular i18n issue - Cannot read property 'toLowerCase' of null

    After following the steps in the Angular documentation to set up internationalization (i18n) support, I tried to execute my brand new i18n npm command:

    PS C:\Projects\test\AngularLocalization\angularlocal> npm run i18n

    > angularlocal@0.0.0 i18n C:\Projects\test\AngularLocalization\angularlocal

    > ng-xi18n

    TypeError: Cannot read property 'toLowerCase' of null

        at Extractor.serialize (C:\Projects\test\AngularLocalization\angularlocal\node_modules\@an


        at C:\Projects\test\AngularLocalization\angularlocal\node_modules\@angular\compiler-cli\sr


        at process._tickCallback (internal/process/next_tick.js:109:7)

        at Module.runMain (module.js:606:11)

    at run (bootstrap_node.js:389:7)

        at startup (bootstrap_node.js:149:9)

        at bootstrap_node.js:502:3

    Extraction failed

    npm ERR! Windows_NT 10.0.15063

    npm ERR! argv "C:\\Program Files\\nodejs\\node.exe" "C:\\Program Files\\nodejs\\node_modules\\

    npm\\bin\\npm-cli.js" "run" "i18n"

    npm ERR! node v6.11.3

    npm ERR! npm  v3.10.10

    npm ERR! code ELIFECYCLE

    npm ERR! angularlocal@0.0.0 i18n: `ng-xi18n`

    npm ERR! Exit status 1

    npm ERR!

    npm ERR! Failed at the angularlocal@0.0.0 i18n script 'ng-xi18n'.

    npm ERR! Make sure you have the latest version of node.js and npm installed.

    npm ERR! If you do, this is most likely a problem with the angularlocal package,

    npm ERR! not with npm itself.

    npm ERR! Tell the author that this fails on your system:

    npm ERR!     ng-xi18n

    npm ERR! You can get information on how to open an issue for this project with:

    npm ERR!     npm bugs angularlocal

    npm ERR! Or if that isn't available, you can get their info via:

    npm ERR!     npm owner ls angularlocal

    npm ERR! There is likely additional logging output above.

    npm ERR! Please include the following file with any support request:

    npm ERR!     C:\Projects\test\AngularLocalization\angularlocal\npm-debug.log

    Whoops! This was not the output I was hoping for…

    Strange! It worked perfectly before. A search through the issues on GitHub brought me to the following issue:

    The issue seems to have appeared in Angular 4.0.3. Luckily a workaround exists: I altered the command in my package.json to include the preferred format:


    When I invoked the i18n command again, this time it worked without a problem.

    Tuesday, October 17, 2017

    Impress your colleagues with your knowledge about…Expression Evaluator Format Specifiers

    Sometimes when working with C# you discover some hidden gems. Some of them are very useful; for others it is a little bit harder to find a good way to benefit from their functionality. One of those hidden gems that I discovered some days ago is Expression Evaluator Format Specifiers.

    What is it?

    Expression Evaluator Format Specifiers come into the picture when you are debugging in Visual Studio. The part of the debugger that processes the language being debugged is known as the expression evaluator (EE). A different expression evaluator is used for each language, though a default is selected if the language cannot be determined.

    A format specifier, in the debugger, is a special syntax to tell the EE how to interpret the expression being examined. You can read about all of the format specifiers in the documentation.

    One really useful format specifier is ‘ac’ (always calculate). This format specifier forces evaluation of the expression on every step, which is useful during debugging when you want to track a specific value.

    How to use it?

    • Start a debugging session in your application.


    • Go to the Watch window (Debug –> Windows –> Watch –> Watch 1)


    • Write the expression that you want to check, followed by a comma and the format specifier: {expression},{format specifier}


    • If you use the ac format specifier you don’t have to refresh your expression; it will be evaluated on every step:



    Monday, October 16, 2017

    Seeing the power of types

    Most applications I’ve seen don’t take advantage of the power of the type system and fall back to primitive types like string, int, and so on.

    But what if you start using the type system to design a more understandable and less buggy application?

    You don’t believe it is possible? Have a look at the Designing with Types blog series, it will change the way you write your code forever…

    The complete list of posts:

    1. Designing with types: Introduction

    Making design more transparent and improving correctness

    2. Designing with types: Single case union types

    Adding meaning to primitive types

    3. Designing with types: Making illegal states unrepresentable

    Encoding business logic in types

    4. Designing with types: Discovering new concepts

    Gaining deeper insight into the domain

    5. Designing with types: Making state explicit

    Using state machines to ensure correctness

    6. Designing with types: Constrained strings

    Adding more semantic information to a primitive type

    7. Designing with types: Non-string types

    Working with integers and dates safely

    8. Designing with types: Conclusion

    A before and after comparison

    Friday, October 13, 2017

    Angular: Analyze your webpack bundles

    To optimize your application it can be useful to investigate all the things that are loaded and used inside your webpack bundles. A great tool to visualize this information is webpack-bundle-analyzer:

    From the documentation:

    Webpack Bundle Analyzer is a webpack plugin and CLI utility that represents bundle content as a convenient interactive zoomable treemap


    This module will help you:

    1. Realize what's really inside your bundle
    2. Find out what modules make up most of its size
    3. Find modules that got there by mistake
    4. Optimize it!

    How to use it inside your Angular app?

    • Install the bundle through npm:
      • npm install webpack-bundle-analyzer
    • Update your package.json with an extra command:
      • "analyze": "ng build --prod --stats-json && webpack-bundle-analyzer dist/stats.json"
    • Invoke the command through npm
      • npm run analyze
    • A browser window is loaded at

    Thursday, October 12, 2017

    IIS Server configs

    If you are hosting your ASP.NET applications inside IIS, I have a great tip for you:

    This GitHub project contains a list of boilerplate web.config files applying some best practices (like security hardening) and taking maximal advantage of the powerful functionality that IIS has to offer.

    It shows and explains how to:

    • Apply security through obscurity by not exposing specific information through the headers
    • Apply GZIP compression on static content
    • Disable tracing
    • Secure your cookies
    • Cache static content
    • Support cache busting

    Wednesday, October 11, 2017

    ASP.NET Core–Environment variables

    ASP.NET Core references a particular environment variable, ASPNETCORE_ENVIRONMENT, to describe the environment the application is currently running in. This variable can be set to any value you like, but three values are used by convention: Development, Staging, and Production.

    Based on this information the ASP.NET Core configuration system can load specific configuration settings (through .AddJsonFile($"appsettings.{env.EnvironmentName}.json", optional: true))
    or execute a specific Startup class or Startup method through the Startup conventions (e.g. a Startup{EnvironmentName} class or a Configure{EnvironmentName}() method inside the Startup class).

    At one customer we are hosting our ASP.NET Core applications inside IIS. The same IIS environment is used both for development and testing, so we want to host the same application twice with a different environment setting. By default the environment setting is loaded from a system-level environment variable, which of course can be set to only one value.

    How can we solve this?

    To support this scenario, the ASP.NET Core Module inside your web.config allows you to specify environment variables for the process specified in the processPath attribute, by adding them in one or more environmentVariable child elements of an environmentVariables collection element under the aspNetCore element. Environment variables set in this section take precedence over system environment variables for the process.

    An example:
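A sketch of such a web.config (the assembly name and environment value are examples):

```xml
<configuration>
  <system.webServer>
    <handlers>
      <add name="aspNetCore" path="*" verb="*" modules="AspNetCoreModule" resourceType="Unspecified" />
    </handlers>
    <aspNetCore processPath="dotnet" arguments=".\MyApp.dll" stdoutLogEnabled="false">
      <environmentVariables>
        <!-- Takes precedence over the system-level environment variable -->
        <environmentVariable name="ASPNETCORE_ENVIRONMENT" value="Staging" />
      </environmentVariables>
    </aspNetCore>
  </system.webServer>
</configuration>
```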

    Tuesday, October 10, 2017

    ASP.NET Core 2.0–Authentication Middleware changes

    ASP.NET Core 2.0 introduces a new model for authentication which requires some changes when upgrading your existing ASP.NET Core 1.x applications to 2.0.

    In ASP.NET Core 1.x, every auth scheme had its own middleware, and startup looked something like this:

    In ASP.NET Core 2.0, there is now only a single Authentication middleware, and each authentication scheme is registered during ConfigureServices() instead of during Configure():

    More information:

    Monday, October 9, 2017

    Angular 4.3: HTTP Interceptors are back

    With the introduction of a new HttpClient in Angular 4.3, an old feature of Angular.js was re-introduced: HttpInterceptors. Interceptors sit between the application and the backend and allow you to transform a request coming from the application before it is actually submitted to the backend. And of course, when a response arrives from the backend, an interceptor can transform it before delivering it to your application logic.

    This allows us to simplify the interaction with the backend in our Angular app and hide most of the shared logic inside an interceptor.

    Let’s create a simple example that injects an OAuth token in our requests:
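The essence of the idea, sketched with minimal stand-ins for Angular’s HttpRequest/HttpHandler types so it runs outside the framework (in a real Angular app you would implement the HttpInterceptor interface and use req.clone()):

```typescript
// Minimal stand-ins, for illustration only.
interface Request { url: string; headers: Map<string, string>; }
interface Handler { handle(req: Request): string; }

class AuthInterceptor {
  constructor(private getToken: () => string) {}

  // Requests are immutable in Angular, so the interceptor clones the
  // request, adds the Authorization header, and forwards the clone.
  intercept(req: Request, next: Handler): string {
    const cloned: Request = {
      url: req.url,
      headers: new Map(req.headers).set("Authorization", `Bearer ${this.getToken()}`),
    };
    return next.handle(cloned);
  }
}
```

In a real Angular app the interceptor is registered by providing it through the HTTP_INTERCEPTORS token with multi: true in your module’s providers array.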

    To be able to use the interceptor, you’ll have to register it:

    Friday, September 29, 2017

    Enabling Application Insights on an existing project

    Yesterday I lost some time searching for how to enable Application Insights on an existing project in Visual Studio.

    I thought it was available in the context menu when you right-click on your Visual Studio project, but no option could be found there:


    Turns out you need to go one level deeper:

    • Right click on your project
    • Click on Add and select Application Insights Telemetry…


    Now you can go through the configuration wizard by clicking on Start Free:


    Thursday, September 28, 2017

    Visual Studio 2017 Offline install - “Unable to download installation files”

    After creating an offline installer for Visual Studio 2017 using vs_enterprise.exe --layout c:\vs2017offline, we were ready to install Visual Studio on our build servers (which don’t have Internet access).

    However when we tried to run the installer, it failed after running for a few minutes with the following error message:

    Unable to download installation files

    Unable to download install files

    This error message was not that useful: the problem was not caused by missing installation files but by the fact that we forgot to install the required certificates first.

    To install the certificates, you have to:

    1. Browse to the "certificates" folder inside the layout folder you created (e.g. c:\vs2017offline\certificates)

    2. Right-click each certificate and choose Install PFX.

    3. Specify Local machine as the target certificate store
    4. Leave the password field empty

    More information:

    Wednesday, September 27, 2017

    Team Foundation Server–Upgrade your build agents

    If you upgrade your TFS installation to a newer version, a new version of the build agent is available as well.

    To upgrade your agents, you have 2 options:

    • If a new major version of the agent is released, you’ll have to manually delete the old agent and install a new agent.
    • If a new minor version of the agent is released, the existing agent is upgraded automatically when it runs a task that requires a newer version of the agent.
      • If you want to trigger the update manually, you can go to the Agent Pool hub, right click on a Queue and click on Update All Agents.


    More information at

    Tuesday, September 26, 2017

    NPGSQL–Relation does not exist

    PostgreSQL has great .NET support thanks to the open source Npgsql library.

    From the home page:

    Npgsql is an open source ADO.NET Data Provider for PostgreSQL, it allows programs written in C#, Visual Basic, F# to access the PostgreSQL database server. It is implemented in 100% C# code, is free and is open source.

    In addition, providers have been written for Entity Framework Core and for Entity Framework 6.x.

    However I immediately had some problems the moment I tried to execute a query. The strange thing was that my code was almost the same as what could be found on the Getting Started page:

    The error I got was the following:

    Query failed: ERROR: relation "Northwind.Products" does not exist

    I tried to execute the same query directly in the pgAdmin tool and indeed I got the same error.

    What was I doing wrong? The problem is that by default PostgreSQL implicitly converts unquoted identifiers in a query to lowercase.

    So the following query;

    SELECT Id, ProductName FROM Northwind.Products

    was transformed to

    SELECT id, productname FROM northwind.products

    As object names are case-sensitive in PostgreSQL (or so it seems), my table was not found.

    There are 2 possible solutions:

    • Use quotes around your identifiers: SELECT "Id", "ProductName" FROM "Northwind"."Products"
    • Change the casing of your database objects(tables, columns, …) to lowercase

    I chose the second option, because otherwise I would have to escape the quotes inside the query strings in my C# code.

    Monday, September 25, 2017

    TypeScript error–Property ‘assign’ does not exist on type ‘ObjectConstructor’

    A colleague asked me for help when he got into trouble with his TypeScript code. Here is a simplified version:

    Although this looks like valid code, the TypeScript compiler complained:


    After some head scratching, we discovered that there was a “rogue” tsconfig.json at a higher level that set “es5” as the target. Object.assign was added as part of ES6, explaining why TypeScript complained.


    After changing the target to “es6”, the error disappeared.
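A minimal reproduction (hypothetical settings objects): with "target": "es5" in tsconfig.json the compiler rejects the Object.assign call, while with "target": "es6" it compiles fine:

```typescript
const defaults = { retries: 3, timeout: 30 };
const overrides = { timeout: 60 };

// Under target "es5" this line produces "Property 'assign' does not
// exist on type 'ObjectConstructor'"; under target "es6" it compiles.
const settings = Object.assign({}, defaults, overrides);
```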

    Friday, September 22, 2017

    VSWhere.exe–The Visual Studio Locator

    As someone who has built a lot of CI and CD pipelines, one of the struggles I always had when new Visual Studio versions were released was how to make my build server use the correct version of MSBuild when multiple Visual Studio versions were installed.

    It got a lot better over the years, but even recently I was sitting together with a customer to investigate how we could make the build server understand that the Visual Studio 2017 Build tools should be used.

    One of the (badly documented) tricks you could use was scanning the registry for specific registry keys. Luckily, Microsoft recently released a new tool that makes finding your Visual Studio instances a lot easier: vswhere.exe

    From the documentation:

    vswhere is designed to be a redistributable, single-file executable that can be used in build or deployment scripts to find where Visual Studio - or other products in the Visual Studio family - is located. For example, if you know the relative path to MSBuild, you can find the root of the Visual Studio install and combine the paths to find what you need.

    You can emit different formats for information based on what your scripts can consume, including plain text, JSON, and XML. Pull requests may be accepted for other common formats as well.

    vswhere is included with the installer as of Visual Studio 2017 version 15.2 and later, and can be found at the following location: %ProgramFiles(x86)%\Microsoft Visual Studio\Installer\vswhere.exe. The binary may be copied from that location as needed, installed using Chocolatey, or the latest version may be downloaded from the releases page. More information about how to get vswhere is on the wiki.

    This tool is also used internally by the Visual Studio Build task in TFS to discover recent Visual Studio versions (2017 and newer).

    A quick sample:

    • Open a command prompt
    • Browse to %ProgramFiles(x86)%\Microsoft Visual Studio\Installer\ or the location where you downloaded vswhere.exe.
    • Let’s try vswhere -latest


    Thursday, September 21, 2017

    .NET Standard: Using the InternalsVisibleToAttribute

    In .NET Standard projects, the AssemblyInfo attributes are generated for you at build time, so you no longer need a separate AssemblyInfo.cs file in your project Properties.

    But what if you want to use the InternalsVisibleToAttribute? This was one of the attributes I used a lot to expose the internals of my Assembly to my test projects.

    Turns out that it doesn’t really matter where you put this attribute. It is applied at the assembly level, so you can include it in any source code file you like. Using the AssemblyInfo file was just a convenience.

    So what I did was create an empty .cs file and add the following code:
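The file only needs the attribute itself; for example (the test assembly name "MyProject.Tests" is just a placeholder):

```csharp
using System.Runtime.CompilerServices;

// Exposes this assembly's internal members to the test assembly.
[assembly: InternalsVisibleTo("MyProject.Tests")]
```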

    Wednesday, September 20, 2017

    .NET Standard: Duplicate 'System.Reflection.AssemblyCompanyAttribute' attribute

    In a ‘classic’ .NET project, you have an AssemblyInfo.cs file.


    This file contains all kinds of information about your assembly.

    After upgrading a classic .NET project to .NET Standard, I started to get errors about some of the properties inside the AssemblyInfo.cs file:


    A .NET Standard project already has the AssemblyInfo information built in. So after upgrading you end up with two AssemblyInfo specifications, leading to the errors above.

    The solution is to remove the original AssemblyInfo.cs file in the Properties folder.

    Remark: If you want to change the assembly information, you now have to use the Package tab inside your Project Properties.