

Showing posts from 2018

ASP.NET Error - The specified task executable "csc.exe" could not be run.

When trying to build an ASP.NET Web application I received from a colleague, it failed with the following error message:

```
The specified task executable "csc.exe" could not be run. Could not load file or assembly 'System.Security.Principal.Windows, Version=4.0.1.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a' or one of its dependencies.
```

I was able to fix the problem by updating the Microsoft.CodeDom.Providers.DotNetCompilerPlatform NuGet package to the latest version:

```xml
<package id="Microsoft.CodeDom.Providers.DotNetCompilerPlatform" version="2.0.1" targetFramework="net471" />
```

Adding GraphQL middleware when using ASP.NET Core

Enabling a GraphQL endpoint in your ASP.NET Core application is quite easy thanks to the GraphQL.Server.Transports.AspNetCore NuGet package. One of the nice features of GraphQL is that you can extend your resolvers using custom middleware. The GraphQL.NET documentation refers to the FieldsMiddleware property to register this extra middleware. Unfortunately, when using the GraphQL.Server package you only have a GraphQLOptions object, and the only thing you can do there is set a SetFieldMiddleware flag. To register your own middleware you have to jump through some extra hoops: create a custom GraphQL executer and override the GetOptions method, then register this executer instance in your IoC container (here I'm using StructureMap). A sketch follows below.
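Here is a minimal sketch of that approach. The type and method names (DefaultGraphQLExecuter<TSchema>, GetOptions) follow the 2018-era GraphQL.Server packages; the exact constructor arguments vary per package version, so verify the signatures against the version you use.

```csharp
public class CustomGraphQLExecuter<TSchema> : DefaultGraphQLExecuter<TSchema>
    where TSchema : ISchema
{
    public CustomGraphQLExecuter(TSchema schema, IOptions<GraphQLOptions> options)
        : base(schema, options)
    {
    }

    protected override ExecutionOptions GetOptions(string operationName, string query, Inputs variables)
    {
        var options = base.GetOptions(operationName, query, variables);

        // Plug in the extra resolver middleware.
        options.FieldMiddleware.Use(next => context =>
        {
            // e.g. measure, log or enrich the field resolution here
            return next(context);
        });

        return options;
    }
}

// StructureMap registration (sketch):
// For(typeof(IGraphQLExecuter<>)).Use(typeof(CustomGraphQLExecuter<>));
```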

ASP.NET Web API–The inner handler has not been assigned

I created a message handler to add the correlation id to an outgoing HTTP request and wrote some code to link it to the HttpClient. It looked OK to me until I tried to execute it. It failed horribly with the following error message:

```
"message": "An error has occurred.",
"exceptionMessage": "The inner handler has not been assigned.",
"exceptionType": "System.InvalidOperationException",
"stackTrace": "   at System.Net.Http.DelegatingHandler.SetOperationStarted()
   at System.Net.Http.DelegatingHandler.SendAsync(HttpRequestMessage request, CancellationToken cancellationToken)
   at CorrelationId.AddCorrelationIdToRequestHandler.SendAsync(HttpRequestMessage request, CancellationToken cancellationToken)
```

What was not obvious to me is that you have to specify the next handler in the request pipeline. As I only had one handler added, I had to specify the HttpClientHandler as the inner handler, as shown below.
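A sketch of the wiring, using the AddCorrelationIdToRequestHandler type from the stack trace above:

```csharp
// A DelegatingHandler must have an InnerHandler to pass the request on to;
// HttpClientHandler is the usual terminal handler.
var client = new HttpClient(new AddCorrelationIdToRequestHandler
{
    InnerHandler = new HttpClientHandler()
});

// Alternatively, HttpClientFactory (System.Net.Http.Formatting) wires up
// the handler chain for you:
var client2 = HttpClientFactory.Create(new AddCorrelationIdToRequestHandler());
```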

Angular–Output enum as string in your component html

TypeScript supports 2 types of enums: numeric enums and string enums. From the documentation:

Numeric enums

An enum can be defined using the enum keyword.

```typescript
enum Direction {
  Up = 1,
  Down,
  Left,
  Right,
}
```

Above, we have a numeric enum where Up is initialized with 1. All of the following members are auto-incremented from that point on. In other words, Direction.Up has the value 1, Down has 2, Left has 3, and Right has 4. Numeric enums can be mixed in computed and constant members (see below).

String enums

String enums are a similar concept, but have some subtle runtime differences as documented below. In a string enum, each member has to be constant-initialized with a string literal, or with another string enum member.

```typescript
enum Direction {
  Up = "UP",
  Down = "DOWN",
  Left = "LEFT",
  Right = "RIGHT",
}
```

While string enums don't have auto-incrementing behavior, string enums have the benefit that they "serialize" well.
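One common way to get a numeric enum's name into the component HTML (a hypothetical CompassComponent, not necessarily the post's exact solution) is to expose the enum on the component and index into it:

```typescript
import { Component } from '@angular/core';

enum Direction { Up = 1, Down, Left, Right }

@Component({
  selector: 'app-compass',
  // For a numeric enum, indexing with the value returns the member name.
  template: `{{ Direction[current] }}`, // renders "Up"
})
export class CompassComponent {
  Direction = Direction; // make the enum visible to the template
  current: Direction = Direction.Up;
}
```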

Azure DevOps Server - [System.String[]] doesn't contain a method named 'Trim'

When trying to release an application using Azure DevOps Pipelines, one of the tasks failed with the following error message:

```
2018-12-18T14:02:03.6775235Z Deployment status for machine DEVELOPMENT : Failed
2018-12-18T14:02:03.6941240Z Deployment failed on machine DEVELOPMENT with following message : System.Exception: Method invocation failed because [System.String[]] doesn't contain a method named 'Trim'.
2018-12-18T14:02:03.7087715Z ##[error]] doesn't contain a method named 'Trim'."}};]
2018-12-18T14:02:03.9362960Z ##[error]Deployment on one or more machines failed. System.Exception: Method invocation failed because [System.String[]] doesn't contain a method named 'Trim'.
```

This specific task deploys the application by remotely executing PowerShell on the target machine. The problem was that an outdated PowerShell version was still running on this machine. I checked the PowerShell version using the following command: $PSVersionTable.PSVersion

"NHibernate.NonUniqueObjectException: a different object with the same identifier value was already associated with the session"

I’m a big fan of NHibernate (if I didn’t mention this before). The only problem is that its error messages can be kind of cryptic, especially if you have limited knowledge of NHibernate. A colleague came to me with the following problem: he tried to update a detached object that he got back from an API call and constructed himself, but NHibernate refused to save it (using the Update method) and returned the following error message: "NHibernate.NonUniqueObjectException: a different object with the same identifier value was already associated with the session". The problem was that a similar object (with the same identifier) was already loaded and tracked by the session. When he tried to associate the detached object with the session, NHibernate recognized the tracked object with the same identifier and raised the error above. To fix this, you have to switch from Update() to Merge(). This method copies the state of the given object onto the persistent object with the same identifier.
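A minimal sketch of the Merge() fix (the Product entity and session setup are hypothetical):

```csharp
using (var session = sessionFactory.OpenSession())
using (var tx = session.BeginTransaction())
{
    // detachedProduct was constructed outside this session; Update() would
    // throw if the session already tracks a Product with the same id.
    // Merge() copies the detached state onto the tracked instance instead.
    var persistentProduct = session.Merge(detachedProduct);
    tx.Commit();
}
```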

Free ebook - Pattern Recognition and Machine Learning

I’m investing a lot of time to get at least a little bit of understanding of machine learning. Luckily there is a lot of (free) information out there. Christopher Bishop, Technical Fellow and Laboratory Director at Microsoft Research Cambridge, shared his book Pattern Recognition and Machine Learning. This leading textbook provides a comprehensive introduction to the fields of pattern recognition and machine learning. It is aimed at advanced undergraduates or first-year PhD students, as well as researchers and practitioners. No previous knowledge of pattern recognition or machine learning concepts is assumed. (Exactly what I need!) Much (machine) learning fun!

Feature ‘default literal’ is not available in C# 7.0.

On one of my projects we are using ‘default literals’, one of the features introduced in C# 7.1. To be able to use this, you have to change your project properties to point to the latest minor version (more information here: https://bartwullems.blogspot.com/2017/09/how-to-start-using-c-71.html). Unfortunately it didn’t work as expected on the build server; we got the following error in our build logs: error CS8107: Feature ‘default literal’ is not available in C# 7.0. Please use language version 7.1 or greater. The strange thing was that on the same build server, another build using the same feature did succeed. So it couldn’t be that C# 7.1 was not installed on the build server. Then I got some inspiration: maybe it was related to the build configuration. And indeed, when I switched from Debug to Release the problem appeared in Visual Studio as well. When enabling C# 7.1 for a project, a <LangVersion> element is introduced in the csproj file:
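The catch is that Visual Studio writes the element into the configuration-specific PropertyGroup, so it is easy to end up with C# 7.1 enabled for Debug only. A sketch of what the csproj should contain to cover both configurations:

```xml
<PropertyGroup Condition=" '$(Configuration)|$(Platform)' == 'Debug|AnyCPU' ">
  <LangVersion>7.1</LangVersion>
</PropertyGroup>
<!-- Don't forget the Release configuration, otherwise a Release build
     (like the one on the build server) still compiles as C# 7.0. -->
<PropertyGroup Condition=" '$(Configuration)|$(Platform)' == 'Release|AnyCPU' ">
  <LangVersion>7.1</LangVersion>
</PropertyGroup>
```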

Entity Framework–PostgreSQL–Enable spatial type support–Part 2

In a previous post I explained how to enable spatial type support for your Entity Framework (Core) model. If you added the HasPostgresExtension("postgis") check, you get the following error message when you try to execute the Entity Framework migration while the extension is not installed:

```
could not open extension control file "/usr/share/postgresql/9.5/extension/postgis.control": No such file or directory
```

To install the extension you can follow the instructions on the PostGIS website: https://postgis.net/install/. The installation differs based on the host platform you are using for your PostgreSQL instance. After installation has completed, you can activate the extension for your database using the following statement:

```sql
-- Enable PostGIS (includes raster)
CREATE EXTENSION postgis;
```

Entity Framework–PostgreSQL–Enable spatial type support

To enable spatial type support for PostgreSQL you have to do an extra step when configuring your DbContext. Calling the UseNetTopologySuite() method activates a plugin for the Npgsql EF Core provider which enables mapping NetTopologySuite's types to PostGIS columns, and even translates many useful spatial operations to SQL. This is the recommended way to interact with spatial types in Npgsql. To check whether the PostGIS extension is installed in your database, you can add a HasPostgresExtension call to your DbContext, as the sketch below shows.
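A minimal sketch (the connection string and context name are placeholders; this requires the Npgsql.EntityFrameworkCore.PostgreSQL.NetTopologySuite package):

```csharp
public class SpatialContext : DbContext
{
    protected override void OnConfiguring(DbContextOptionsBuilder optionsBuilder)
        => optionsBuilder.UseNpgsql(
            "Host=localhost;Database=spatial;Username=postgres;Password=secret",
            npgsql => npgsql.UseNetTopologySuite());

    protected override void OnModelCreating(ModelBuilder modelBuilder)
        // Declares the dependency on the PostGIS extension.
        => modelBuilder.HasPostgresExtension("postgis");
}
```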

SQL Server–Document your database using Extended Properties

On one of my projects we had to re-use an existing table. Unfortunately the table wasn’t documented very well (read: not at all). To make it even worse, some of the columns were used for different things than you would expect based on the column names. The table even contained some implicit logic about the way the data was inserted. To avoid this horror again, we decided to apply the boy scout rule and leave the campsite in a better state. We started by adding documentation to changed or newly added columns using the SQL Server Extended Properties feature. You can add Extended Properties either through SQL Server Management Studio or through the sp_addextendedproperty stored procedure:

```sql
exec sp_addextendedproperty
     @name = N'Price'
    ,@value = N'Testing entry for Extended Property'
    ,@level0type = N'Schema', @level0name = 'northwind'
    ,@level1type = N'Table',  @level1name = 'Product'
    ,@level2type = N'Column', @level2name = 'Price'
```

Enabling row compression in SQL Server

Enabling row and page compression can give you a big performance gain in SQL Server. IO remains expensive, especially when your SQL Server is still using spinning disks. By enabling row (and page) compression you can significantly decrease the amount of storage needed on disk. How to enable row compression? We’ll start by estimating the space savings for row compression by executing the following stored procedure:

```sql
EXEC sp_estimate_data_compression_savings 'vervoer', 'MAD', NULL, NULL, 'ROW';
GO
```

Dividing the size_with_requested_compression_setting(KB) column by the size_with_current_compression_setting(KB) column showed we could save over 50%. Sounds good enough for me, let’s enable it:

```sql
ALTER TABLE vervoer.MAD REBUILD PARTITION = ALL
WITH (DATA_COMPRESSION = ROW);
GO
```

NUnit–TestCase vs TestCaseSource

NUnit supports parameterized tests through the TestCase attribute. This allows you to specify multiple sets of arguments; NUnit will create multiple tests behind the scenes. However, the kind of information that you can pass through an attribute is rather limited. If you want to pass complex objects you need a different solution. In that case (no pun intended) you can use the TestCaseSource attribute: TestCaseSourceAttribute is used on a parameterized test method to identify the source from which the required arguments will be provided. The attribute additionally identifies the method as a test method. The data is kept separate from the test itself and may be used by multiple test methods. An example follows below.
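A sketch of both attributes (Calculator, Order and OrderValidator are hypothetical types):

```csharp
// TestCase inlines simple, compile-time constant arguments.
[TestCase(2, 3, 5)]
[TestCase(-1, 1, 0)]
public void Add_ReturnsSum(int a, int b, int expected)
    => Assert.AreEqual(expected, Calculator.Add(a, b));

// TestCaseSource pulls arguments from a separate member,
// so complex objects can be used as well.
private static IEnumerable<TestCaseData> OrderCases()
{
    yield return new TestCaseData(new Order { Amount = 100m }, true);
    yield return new TestCaseData(new Order { Amount = -1m }, false);
}

[TestCaseSource(nameof(OrderCases))]
public void Validate_Order(Order order, bool expectedValid)
    => Assert.AreEqual(expectedValid, OrderValidator.IsValid(order));
```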

Azure DevOps–Where can I see the available hosted pipeline minutes?

Last week I got a question from one of our internal teams because they got the following error message when trying to execute a build: "Your account has no free minutes remaining. Add a hosted pipeline to run more builds or releases." Until recently we were using private build agents, but we decided to switch completely to hosted builds.

Checking the remaining build minutes

Here are the steps to check the remaining build minutes:
- Open the Azure DevOps site of your organisation
- Click on Organization settings in the left corner
- Click on Retention and parallel jobs
- Switch to the Parallel jobs tab

Here you can see the available job pipelines and the number of minutes remaining (if you are using the free tier). To purchase parallel jobs, you can click on the available link. You will be redirected to the Visual Studio Marketplace: https://marketplace.visualstudio.com/items?itemName=ms.build-release-hosted-pipelines. Clicking on the Get button lets you add extra hosted pipelines.

FluentValidation–Conditional Validation Rule

FluentValidation is really powerful, but this power also makes it sometimes complex to find the correct way to solve a specific problem. In this case I wanted to conditionally execute a certain validation only when data in another field was filled in. FluentValidation makes this possible through the When/Unless methods. Here is a short example: we only validate the DeliveryDate when FastDelivery is chosen.
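A sketch with a hypothetical Order model:

```csharp
public class OrderValidator : AbstractValidator<Order>
{
    public OrderValidator()
    {
        // Only validate DeliveryDate when FastDelivery is chosen.
        RuleFor(order => order.DeliveryDate)
            .NotEmpty()
            .When(order => order.FastDelivery);

        // Unless() is the inverse: validate except when the condition holds.
        RuleFor(order => order.Comments)
            .NotEmpty()
            .Unless(order => order.FastDelivery);
    }
}
```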

Learning F# through FSharpKoans

In my journey to become a better F# (and C#) developer, I found the following project on GitHub: https://github.com/ChrisMarinos/FSharpKoans. From the documentation: Inspired by EdgeCase's fantastic Ruby koans, the goal of the F# koans is to teach you F# through testing. When you first run the koans, you'll be presented with a runtime error and a stack trace indicating where the error occurred. Your goal is to make the error go away. As you fix each error, you should learn something about the F# language and functional programming in general. Your journey towards F# enlightenment starts in the AboutAsserts.fs file. These koans will be very simple, so don't overthink them! As you progress through more koans, more and more F# syntax will be introduced which will allow you to solve more complicated problems and use more advanced techniques. To get started, clone the project, open it in Visual Studio (Code), browse to the FSharpKoans project and run ‘dotnet watch run’.

ASP.NET Core SignalR–Add a Redis backplane

The moment you start using SignalR, sooner rather than later you should add a backplane to allow scaling out your backend services. At the moment of writing I’m aware of only 2 possible backplanes: the Azure SignalR Service, where you move your full SignalR backend logic to Azure and Azure manages the scale-out for you; and a Redis backplane, where you still run the SignalR backend yourself and only the data is replicated through Redis. In the official documentation Microsoft refers to the Microsoft.AspNetCore.SignalR.StackExchangeRedis NuGet package, but I’m using the (official?) Microsoft.AspNetCore.SignalR.Redis package. Here are the steps you need to take to use it: add the Microsoft.AspNetCore.SignalR.Redis package to your ASP.NET Core project, add the AddRedis call to your SignalR middleware configuration in your startup class (see below), and add the Redis connection string to your configuration. That’s all! Remark: don’t forget that this only works when all your application instances point to the same Redis instance.
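A sketch of the Startup registration (the connection string key is an assumption):

```csharp
public void ConfigureServices(IServiceCollection services)
{
    services.AddSignalR()
            .AddRedis(Configuration.GetConnectionString("Redis"));
}
```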

ASP.NET Web API–Mediatype exception on POST

When trying to send some JSON data through an HTTP POST, I got the following exception message back:

```
The request entity's media type 'text/plain' is not supported for this resource. No MediaTypeFormatter is available to read an object of type 'TransportDocument' from content with media type 'text/plain'
```

This error showed up because I forgot to specify the content type in the HTTP headers. Here is one way to fix this:
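A sketch using HttpClient (the route is hypothetical):

```csharp
// StringContent sets the Content-Type header ('application/json' here),
// so Web API can pick the right MediaTypeFormatter.
var json = JsonConvert.SerializeObject(transportDocument);
var content = new StringContent(json, Encoding.UTF8, "application/json");
var response = await client.PostAsync("api/transportdocuments", content);

// Or let System.Net.Http.Formatting do both steps at once:
var response2 = await client.PostAsJsonAsync("api/transportdocuments", transportDocument);
```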

SignalR - CORS error

After updating SignalR my program started to fail with the following error message:

```
Access to XMLHttpRequest at 'http://localhost:22135/chat/negotiate?jwt=<removed the jwt key>' from origin 'http://localhost:53150' has been blocked by CORS policy: Response to preflight request doesn't pass access control check: The value of the 'Access-Control-Allow-Origin' header in the response must not be the wildcard '*' when the request's credentials mode is 'include'. The credentials mode of requests initiated by the XMLHttpRequest is controlled by the withCredentials attribute.
```

To fix it I had to update the CORS policy by adding the AllowCredentials() method:
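A sketch of the policy (the policy name is a placeholder); note that an explicit origin is required, because the wildcard '*' is not allowed when credentials are included:

```csharp
services.AddCors(options =>
{
    options.AddPolicy("CorsPolicy", builder => builder
        .WithOrigins("http://localhost:53150")
        .AllowAnyHeader()
        .AllowAnyMethod()
        .AllowCredentials());
});
```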

ElasticSearch Integration Testing with ElasticSearch Inside

Integration testing can be quite cumbersome, especially when you have a lot of moving parts involved. To test my ElasticSearch code I used to spin up a Docker instance and discard it after the tests ran. Recently I changed my approach after receiving the following tip from a colleague (thanks Jasper!): https://github.com/poulfoged/elasticsearch-inside. ElasticSearch Inside is a fully embedded version of Elasticsearch for integration tests. When the instance is created, both the JVM and Elasticsearch itself are extracted to a temporary location and started. Once disposed, everything is removed again. And despite what you may think, this happens really fast (a few seconds on my machine). How to use it? Install the NuGet package:

```
Install-Package elasticsearch-inside
```

After that you create a new instance of the Elasticsearch class and wait for the Ready() method:

```csharp
using (var elasticsearch = new Elasticsearch())
{
    await elasticsearch.Ready();
}
```

Now you can run your tests against this instance.

Breaking changes when updating SignalR for .NET Core

I had to do some changes to one of our applications and noticed that it was still using a beta release of SignalR. So I thought it would be a good idea to quickly update to the release version. There were some breaking changes, but I was able to fix them quite easily.
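The post's before/after snippets were images; as an illustration, this is one of the typical beta-to-release changes (the rename of InvokeAsync on the hub clients):

```csharp
// Before (beta): invoking a client method from the hub
await Clients.All.InvokeAsync("ReceiveMessage", message);

// After (release): renamed to SendAsync
await Clients.All.SendAsync("ReceiveMessage", message);
```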

GraphQL–DotNet–Nullable types: Nullable<> cannot be coerced to a non nullable GraphQL type

I’m spending a lot (read: “waaay too much”) of time learning GraphQL. One of the things I had to figure out was how to use nullable types inside my GraphQL schema. By default, when you use a nullable type inside your GraphQL schema, you get the following error message:

```
ArgumentOutOfRangeException: Explicitly nullable type: Nullable<DateTime> cannot be coerced to a non nullable GraphQL type.
```

To fix this, you have to explicitly specify the field as nullable in your GraphQL type definition:
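A sketch with a hypothetical Order type:

```csharp
public class OrderType : ObjectGraphType<Order>
{
    public OrderType()
    {
        // Order.DeliveryDate is a DateTime?; opt in to nullability explicitly.
        Field(x => x.DeliveryDate, nullable: true);
    }
}
```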

GraphQL–DotNet - The type: Guid cannot be coerced effectively to a GraphQL type

I’m spending a lot (read: “waaay too much”) of time learning GraphQL. One of the things I had to figure out was how to use Guids inside my GraphQL schema, as we are using Guids on one of my projects. When loading my GraphQL endpoint, the mapping I first tried resulted in the following error message:

```
The type: Guid cannot be coerced effectively to a GraphQL type
```

To fix it I had to explicitly specify the field type as IdGraphType:
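A sketch with a hypothetical Product type:

```csharp
public class ProductType : ObjectGraphType<Product>
{
    public ProductType()
    {
        // Product.Id is a Guid; map it explicitly to the GraphQL ID type.
        Field(x => x.Id, type: typeof(IdGraphType));
    }
}
```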

GraphQL–DotNet–Expose exceptions

I’m spending a lot (read: “waaay too much”) of time learning GraphQL. One of the things I had to figure out was how to expose exceptions. Out of the box you get a generic GraphQL exception, but if you want to drill down into what is going on, you have to change some configuration. I’m using https://github.com/graphql-dotnet/server to serve my ASP.NET Core GraphQL endpoint. When configuring the middleware you have to add a specific setting to activate exception details:
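A sketch of the registration in Startup.ConfigureServices (env is the IHostingEnvironment injected into Startup; the environment check is an assumption, but you probably don't want to leak exception details in production):

```csharp
services.AddGraphQL(options =>
{
    options.ExposeExceptions = env.IsDevelopment();
});
```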

GraphQL-DotNet–Use an async resolver

I’m spending a lot (read: “waaay too much”) of time learning GraphQL. One of the things I had to figure out was how to call an async method inside a field resolver. You have 2 options: use the Field<> method and return a Task, or use the FieldAsync<> method and await the result.
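A sketch of both options (ProductType and productService are hypothetical):

```csharp
// Option 1: Field<> with a resolver that returns a Task; GraphQL.NET awaits it.
Field<ListGraphType<ProductType>>(
    "products",
    resolve: context => productService.GetProductsAsync());

// Option 2: FieldAsync<> with an async lambda that awaits the result itself.
FieldAsync<ListGraphType<ProductType>>(
    "products",
    resolve: async context => await productService.GetProductsAsync());
```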

Angular-oauth2-oidc: Error validating tokens. Wrong nonce.

After integrating the angular-oauth2-oidc library in our application, we got the following error message when invoking the implicit flow: "Error validating tokens. Wrong nonce." The problem was that loadDiscoveryDocument returns a promise and we didn’t await the result. As a consequence, the nonce from the first request (loadDiscoveryDocumentAndTryLogin) was overwritten by the second request (initImplicitFlow), causing the error above. To fix it we have to chain the requests together:
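A sketch of the chained calls (the hasValidAccessToken guard is an assumption):

```typescript
this.oauthService.loadDiscoveryDocumentAndTryLogin().then(() => {
  // Only start a new implicit flow once the discovery document is loaded.
  if (!this.oauthService.hasValidAccessToken()) {
    this.oauthService.initImplicitFlow();
  }
});
```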

GraphQL .NET(Core) - Blog series

Quick tip if you want to get started with GraphQL: have a look at this great blog series:
- GraphQL with ASP.NET Core (Part I: Hello World)
- GraphQL with ASP.NET Core (Part II: Middleware)
- GraphQL with ASP.NET Core (Part III: Dependency Injection)
- GraphQL with ASP.NET Core (Part IV: GraphiQL - An in-browser IDE)
- GraphQL with ASP.NET Core (Part V: Fields, Arguments, Variables)
- GraphQL with ASP.NET Core (Part VI: Persist Data - Postgres with EF Core)
- GraphQL with ASP.NET Core (Part VII: Mutation)
- GraphQL with ASP.NET Core (Part VIII: Entity Relations - One to Many)
- GraphQL with ASP.NET Core (Part IX: Entity Relations - Many to Many)
- GraphQL with ASP.NET Core (Part X: Data Loader - Series Finale)

F#–Write your own Excel in 100 lines of code

I’m a big F# lover. I really fell in love with the density and expressiveness of the language. Today I noticed a blog post where Tomas Petricek writes an Excel variant in about 100 lines of F# code. Truly impressive! Go check out the blog post here: http://tomasp.net/blog/2018/write-your-own-excel/ and have a look at the completed code here: https://github.com/tpetricek/elmish-spreadsheet/tree/completed

ELK stack–Getting started

Yesterday Elastic announced new releases of their product suite. Here is the general announcement: https://www.elastic.co/blog/elastic-stack-6-5-0-released, and here are the announcements for the specific products:
- ElasticSearch: https://www.elastic.co/blog/elasticsearch-6-5-0-released
- Kibana: https://www.elastic.co/blog/kibana-6-5-0-released
- LogStash: https://www.elastic.co/blog/logstash-6-5-0-released
- Beats: https://www.elastic.co/blog/beats-6-5-0-released
- ES-Hadoop: https://www.elastic.co/blog/es-hadoop-6-5-0-released
- APM: https://www.elastic.co/blog/elastic-apm-6-5-0-released

The best way to try out the new features is by using the available Docker images at https://www.docker.elastic.co/. To help you get started, I created a docker-compose file that combines ElasticSearch, Kibana and APM: https://github.com/wullemsb/elasticgettingstarted/blob/master/docker-compose.yml

Learning PWA’s - Service Workies

Progressive Web Apps are evolving quite fast, and Google is leading the initiative. At Chrome Dev Summit 2018 they reconfirmed the message that the future of PWAs is looking great. With deeper OS integrations, improved speed and upcoming new capabilities, the gap between native and PWA will possibly close faster than we think. So it's time to start learning about PWAs, and there is no better way to learn something than through gamification. Dave Geddes, who created Flexbox Zombies and CSS Grid Critters, created a new learning game: Service Workies helps you understand Service Workers from soup to nuts. The first chapter of the adventure is rolling out in beta now. Google partnered with Dave to make sure the full adventure can be free to all.

TFS Build SonarQube Error - SonarAnalyzer.dll could not be found

Got a call for help from a colleague who couldn’t get an acceptance release out of the door. The automated build responsible for packaging and deploying the acceptance version failed, and he asked me to take a look. Inside the build logs we noticed the following error messages:

```
2018-11-13T06:35:22.4133574Z (CoreCompile target) ->
2018-11-13T06:35:22.4133574Z   CSC : error CS0006: Metadata file 'D:\b\4\agent\_work\_temp\.sonarqube\resources\1\Google.Protobuf.dll' could not be found [D:\b\4\agent\_work\40\s\MTIL.Domain\MTIL.Domain.csproj]
2018-11-13T06:35:22.4133574Z   CSC : error CS0006: Metadata file 'D:\b\4\agent\_work\_temp\.sonarqube\resources\1\SonarAnalyzer.CSharp.dll' could not be found [D:\b\4\agent\_work\40\s\MTIL.Domain\MTIL.Domain.csproj]
2018-11-13T06:35:22.4133574Z   CSC : error CS0006: Metadata file 'D:\b\4\agent\_work\_temp\.sonarqube\resources\1\SonarAnalyzer.dll' could not be found [D:\b\4\agent\_work\40\s\MTIL.Domain\MTIL.Domain.csproj]
```

ASP.NET Core–Serve a default.html

Sometimes you lose waaay too much time on something that looks obvious once you find it. Today I was searching for a way to serve a default.html file when the root URL is called in ASP.NET Core, e.g. http://mysamplesite/root/. I was first fiddling around with the StaticFiles middleware but couldn’t find a way. Turns out there is another middleware, DefaultFiles, that handles this use case. Just call the UseDefaultFiles method from Startup.Configure:

```csharp
public void Configure(IApplicationBuilder app)
{
    app.UseDefaultFiles();
    app.UseStaticFiles();
}
```

With UseDefaultFiles, requests to a folder search for: default.htm, default.html, index.htm, index.html. The first file found from the list is served as though the request were the fully qualified URI. The browser URL continues to reflect the URI requested. Remark: UseDefaultFiles must be called before UseStaticFiles to serve the default file. More information: https://docs.microsoft.

Azure DevOps–Aad guest invitation failed

When trying to invite a user from an external domain to Azure DevOps, it failed with the following error message: "Aad guest invitation failed". What is going on? Our Azure DevOps is backed by Azure AD (and synced with our internal AD). To invite the user, we first have to add him to our Azure AD before we can add him as a guest to our Azure DevOps account.

MassTransit - Increase the throughput of your consumers

While running some stress tests on our environment, I noticed that our queues started to fill up. When I took a look at our MassTransit consumers, they were processing 10 messages simultaneously but no more, although the CPU on the server was not stressed at all. What is going on? The cause is the number of messages the RabbitMQ transport will “prefetch” from the queue. The default is 10, so we can only process 10 messages simultaneously. To increase this, we can do 2 things: configure the prefetch count for your endpoint using the PrefetchCount property on the IRabbitMqBusFactoryConfigurator class (see the sketch below), or add ?prefetch=X to the RabbitMQ URL. Remark: as a general recommendation, PrefetchCount should be relatively high, so that RabbitMQ doesn't choke delivering messages due to network delays.
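A sketch of the first option (the queue name and consumer are hypothetical):

```csharp
var bus = Bus.Factory.CreateUsingRabbitMq(cfg =>
{
    var host = cfg.Host(new Uri("rabbitmq://localhost"), h => { });

    cfg.ReceiveEndpoint(host, "order-queue", e =>
    {
        e.PrefetchCount = 32; // allow 32 messages in flight instead of 10
        e.Consumer<OrderConsumer>();
    });
});
```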

MassTransit–Fault events vs Error Queues

Something that was kind of confusing for me was the relationship between MassTransit Fault<T> events and error queues. When a MassTransit consumer throws an exception, the exception is caught by middleware and the message is moved to an _error queue (prefixed by the receive endpoint queue name). The exception details are stored as headers with the message. I was thinking that the messages were wrapped in a Fault<T> event before they were stored in the queue, but it turns out that they are unrelated. What really happens is that, in addition to moving the message to an error queue, MassTransit also generates a Fault<T> event. This event is sent to the FaultAddress or ResponseAddress if present; otherwise the fault is published.
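If you want to act on those events, you can consume them like any other message. A sketch (SubmitOrder is a hypothetical message type):

```csharp
public class SubmitOrderFaultConsumer : IConsumer<Fault<SubmitOrder>>
{
    public Task Consume(ConsumeContext<Fault<SubmitOrder>> context)
    {
        // Fault<T> carries the original message plus the exception details.
        foreach (var exception in context.Message.Exceptions)
            Console.WriteLine(exception.Message);

        return Task.CompletedTask;
    }
}
```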

Entity Framework Core–Table per hierarchy

In my Entity Framework Core application I wanted to use the Table per Hierarchy pattern, so I configured the EF mapping to use a discriminator column. This didn’t have the desired effect: as I ran my application I got the following error message:

```
The entity type 'ForeignSupplier' is part of a hierarchy, but does not have a discriminator value configured.
```

Whoops! I forgot to specify how EF Core should recognize the different child types. Let’s fix this in our configuration:
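A sketch of the fixed mapping (the Supplier hierarchy and column name are assumptions based on the error message):

```csharp
modelBuilder.Entity<Supplier>()
    .HasDiscriminator<string>("SupplierType")
    // HasValue tells EF Core which discriminator value maps to which child type.
    .HasValue<DomesticSupplier>("Domestic")
    .HasValue<ForeignSupplier>("Foreign");
```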

Entity Framework Core - The EF Core tools version '2.1.1-rtm-30846' is older than that of the runtime '2.1.4-rtm-31024'.

When trying to create my initial migration using EF Core, I got the following error message:

```
Add-Migration InitialCreate
The EF Core tools version '2.1.1-rtm-30846' is older than that of the runtime '2.1.4-rtm-31024'. Update the tools for the latest features and bug fixes.
System.IO.FileLoadException: Could not load file or assembly 'Microsoft.EntityFrameworkCore.Relational, Version=2.1.3.0, Culture=neutral, PublicKeyToken=adb9793829ddae60'. The located assembly's manifest definition does not match the assembly reference. (Exception from HRESULT: 0x80131040)
File name: 'Microsoft.EntityFrameworkCore.Relational, Version=2.1.3.0, Culture=neutral, PublicKeyToken=adb9793829ddae60'
   at Microsoft.EntityFrameworkCore.Design.Internal.DbContextOperations.<>c.<FindContextTypes>b__12_4(TypeInfo t)
   at System.Linq.Enumerable.WhereSelectListIterator`2.MoveNext()
   at System.Linq.Enumerable.WhereEnumerableIterator`1.MoveNext()
```
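The error message itself points at the fix: update the tools package so it matches the runtime. In the Package Manager Console:

```
Update-Package Microsoft.EntityFrameworkCore.Tools
```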

Serilog–How to enable Seq integration through config?

I’m a big fan of structured logging in general and Serilog in particular; especially in combination with Seq it is unbeatable. Today I lost some time searching how to specify the Seq log sink using configuration. Here are the steps I had to take: add the Serilog.Settings.Configuration NuGet package to your project (this one is for ASP.NET Core; others exist for web.config, …), create a LoggerConfiguration that reads from your configuration, and then specify the sink inside your appsettings.json:
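A sketch of both pieces (the Seq sink itself comes from the Serilog.Sinks.Seq package; the server URL is a placeholder):

```csharp
Log.Logger = new LoggerConfiguration()
    .ReadFrom.Configuration(configuration) // picks up the "Serilog" section
    .CreateLogger();
```

```json
{
  "Serilog": {
    "WriteTo": [
      { "Name": "Seq", "Args": { "serverUrl": "http://localhost:5341" } }
    ]
  }
}
```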

Git–Remove local commits

Quick reminder for myself: I wanted to revert my local commits and reset my branch to the state on the origin. I hadn’t pushed my changes yet, so no need to use revert. As I always forget the correct statement, here it is:

```
git reset --hard origin/<branch_name>
```

ElasticSearch–Reindex API

At first, when I had to recreate or change an index in ElasticSearch, I used a rather naïve approach: I deleted and recreated the index and started processing all data again from the original source. This worked but put a lot of stress on the network and the ElasticSearch nodes. A better solution is to use the reindex API. It enables you to reindex your documents without requiring any plugin or external tool. Thanks to the existence of the _source field, you already have the whole document available in Elasticsearch itself. This means you can start from your existing index and use the reindex API to create a new index next to it:

```
POST _reindex
{
  "source": { "index": "twitter" },
  "dest": { "index": "new_twitter" }
}
```

Remark: Reindex does not attempt to set up the destination index. You should set up the destination index prior to running a _reindex action, including setting up mappings, shard counts, replicas, etc.

Advanced async/await using AsyncUtilities

If you want to do some advanced stuff with async/await in C#, I can recommend the AsyncUtilities library from Bar Arnon. It contains a set of useful utilities and extensions for async programming. From the documentation:

Utilities:
- ValueTask
- AsyncLock
- Striped Lock
- TaskEnumerableAwaiter
- CancelableTaskCompletionSource

Extension Methods:
- ContinueWithSynchronously
- TryCompleteFromCompletedTask
- ToCancellationTokenSource

I especially like the support for TaskEnumerableAwaiter and the nice way he implemented it, using the fact that when implementing async/await the compiler doesn't expect Task or Task<T> specifically; it looks for a GetAwaiter method that returns an awaiter that in turn implements INotifyCompletion and has:
- IsCompleted: enables optimizations when the operation completes synchronously.
- OnCompleted: accepts the callback to invoke when the asynchronous operation completes.
- GetResult: returns the result of the operation.
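This is not the library's implementation, just a minimal illustration of that compiler pattern: a single extension method makes any collection of tasks awaitable.

```csharp
using System.Collections.Generic;
using System.Runtime.CompilerServices;
using System.Threading.Tasks;

public static class TaskEnumerableExtensions
{
    // The compiler only needs a GetAwaiter() it can resolve;
    // an extension method works fine.
    public static TaskAwaiter<T[]> GetAwaiter<T>(this IEnumerable<Task<T>> tasks)
        => Task.WhenAll(tasks).GetAwaiter();
}

// Usage: int[] results = await new[] { GetOneAsync(), GetTwoAsync() };
```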

Stay up-to-date on everything what happens in Azure

Microsoft Azure is evolving at an enormous speed, making it hard to keep up. The best way to stay up-to-date is to subscribe to the Azure newsletter, which gives you a weekly update on everything that's going on in Azure land.

Owin–Stage Markers

While reviewing some code I noticed a UseStageMarker call. I had no clue what stage markers were, so time to dig in... What are stage markers? Stage markers play a role when you are using OWIN on IIS. IIS has a specific execution pipeline containing a predefined set of pipeline events. If you want to run a specific set of OWIN middleware during a particular stage in the IIS pipeline, you can use the UseStageMarker method, as in the sketch below. Stage marker rules: there are rules about which stage of the pipeline middleware can execute in and the order components must run. The following stage events (the PipelineStage enum) are supported: Authenticate, PostAuthenticate, Authorize, PostAuthorize, ResolveCache, PostResolveCache, MapHandler, PostMapHandler, AcquireState, PostAcquireState and PreHandlerExecute. By default, OWIN middleware runs at the last event (PreHandlerExecute). To run a set of middleware components during an earlier stage, insert a stage marker right after the last component in the set during registration. More information: https://docs.microsoft.com/en-us/aspnet/aspnet/overview/owin-and-katana/owin-middleware-in-the-iis-integrated-pipeline
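A sketch of registering middleware for an earlier stage (UseStageMarker comes from Microsoft.Owin.Extensions):

```csharp
app.Use((context, next) =>
{
    // runs during the IIS Authenticate stage instead of PreHandlerExecute
    return next();
});
app.UseStageMarker(PipelineStage.Authenticate);
```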

ElasticSearch - Log request and response data

For debugging purposes it can be useful to see the executed REST call. To make this possible using NEST, you first have to disable direct streaming, after which you can start capturing request and response data, as the snippet below shows. Important: don’t forget to disable this when you go to production; it has a big impact on performance.
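A sketch using NEST's ConnectionSettings (the node URI is a placeholder):

```csharp
var settings = new ConnectionSettings(new Uri("http://localhost:9200"))
    .DisableDirectStreaming() // keep the request/response bytes available
    .OnRequestCompleted(callDetails =>
    {
        if (callDetails.RequestBodyInBytes != null)
            Console.WriteLine("Request: " + Encoding.UTF8.GetString(callDetails.RequestBodyInBytes));
        if (callDetails.ResponseBodyInBytes != null)
            Console.WriteLine("Response: " + Encoding.UTF8.GetString(callDetails.ResponseBodyInBytes));
    });

var client = new ElasticClient(settings);
```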

ElasticSearch - Enable HttpCompression

Traffic between your application and ElasticSearch is uncompressed by default. If you want to gain some performance (at the cost of some CPU cycles), it is probably a good idea to enable HTTP compression. Here is how to enable this using NEST:
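```csharp
var settings = new ConnectionSettings(new Uri("http://localhost:9200"))
    .EnableHttpCompression(); // gzip the traffic to and from the cluster

var client = new ElasticClient(settings);
```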

Application Insights–Change the application name in the Application Map

When you use the default configuration in Application Insights, the default role name (on-premise) is the name of your Application Insights resource. This is not that meaningful, especially because right now our frontend and backend telemetry is shown together. Let’s fix this by introducing a TelemetryInitializer that updates the RoleName. Create such an initializer in both the frontend and backend project (don’t forget to change the role name accordingly), load the initializer in your global.asax, and run the application again. Remark: you’ll have to wait some time before the updated names show up on the Application Map.
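A sketch of such an initializer (the role names are placeholders):

```csharp
public class RoleNameInitializer : ITelemetryInitializer
{
    public void Initialize(ITelemetry telemetry)
    {
        // Use "Backend" in the backend project.
        telemetry.Context.Cloud.RoleName = "Frontend";
    }
}

// In global.asax (Application_Start):
TelemetryConfiguration.Active.TelemetryInitializers.Add(new RoleNameInitializer());
```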

Team Foundation Server 2018 Update 3–Securing search

With the release of Team Foundation Server 2018 Update 3, security was enhanced by enabling basic authentication between TFS and the Search service; before, no security was enabled out of the box. This means that when you upgrade to Update 3, you are required to provide a new user/password combination. More information: https://blogs.msdn.microsoft.com/devops/2018/09/13/search-msrc-fix-for-tfs-2017-update-3/

ASP.NET - HTTP Error 500.24 - Internal Server Error

After configuring a new application in IIS and deploying our code to it, the server returned the following error:

```
HTTP Error 500.24 - Internal Server Error
An ASP.NET setting has been detected that does not apply in Integrated managed pipeline mode.
Most likely causes:
- system.web/identity@impersonate is set to true.
```

The error message itself pointed us to the 'impersonate' setting. However, for this website we want to use impersonation, so disabling it was not an option. Instead, we ignored the error by adding the following configuration to our web.config:

```xml
<configuration>
  <system.webServer>
    <validation validateIntegratedModeConfiguration="false" />
  </system.webServer>
</configuration>
```

DevOps Quick Reference posters

Just a quick tip for today: Willy-Peter Schaub shared a nice set of quick reference posters about Azure and DevOps on GitHub. One I especially liked is the DevOps approach @ Microsoft.

ASP.NET Core Configuration - System.ArgumentException: Item has already been added.

In ASP.NET Core you can specify an environment by setting the ASPNETCORE_ENVIRONMENT environment variable on your server. Yesterday, however, I wanted to set up a second environment on the same machine. To achieve this I added a second website to IIS and set the environment variable through a parameter in my web.config. Unfortunately, after doing that my application no longer worked. In the Event Viewer I could see the following error message:

```
Application: dotnet.exe
CoreCLR Version: 4.6.26628.5
Description: The process was terminated due to an unhandled exception.
Exception Info: System.ArgumentException: Item has already been added. Key in dictionary: 'ASPNETCORE_ENVIRONMENT'  Key being added: 'ASPNETCORE_ENVIRONMENT'
   at System.Collections.Hashtable.Insert(Object key, Object nvalue, Boolean add)
   at System.Environment.ToHashtable(IEnumerable`1 pairs)
   at System.Environment.GetEnvironmentVariables()
```
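For reference, this is roughly what the web.config approach looks like (a sketch; the values are placeholders). Note that if the same variable is also set at machine level, it ends up being defined twice, which matches the duplicate-key error above.

```xml
<aspNetCore processPath="dotnet" arguments=".\MyApp.dll" stdoutLogEnabled="false">
  <environmentVariables>
    <environmentVariable name="ASPNETCORE_ENVIRONMENT" value="Test" />
  </environmentVariables>
</aspNetCore>
```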

ASP.NET Core–Environment Tag Helper

One of the nice features in ASP.NET Core (MVC) is the introduction of Tag Helpers, a more HTML-friendly alternative to HtmlHelpers. A built-in Tag Helper is the Environment Tag Helper: it conditionally renders its enclosed content based on the current hosting environment. In my case I wanted to render some content in the development environment and other content in all other environments (test, UAT, production). To support this scenario, the Environment Tag Helper gives you include and exclude attributes to control rendering based on the included or excluded hosting environment names. In the example below I'm changing the base href for Angular depending on the environment: in development I'm running my Angular site under the web application root, whereas in other environments I'm running it under a virtual directory.
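A sketch of that markup (the virtual directory name is a placeholder):

```html
<environment include="Development">
  <base href="/" />
</environment>
<environment exclude="Development">
  <base href="/myapp/" />
</environment>
```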

Which OAuth flow should I use?

OAuth 2.0 supports several different grants; by grants we mean ways of retrieving an access token. Unfortunately it can be quite a challenge to find out which grant should be used in which situation. The guys from Auth0 created the following diagram to help you arrive at the correct grant type:

Async/Await–.ConfigureAwait(false)

On one of my projects, a developer asked for my help when he noticed that the application just hung after invoking a specific action on a Web API controller. In this at first innocent-looking piece of code, he was using a combination of async and non-async code. If you run the simplified example below, you end up with a deadlock. This is what happens:
- Thread "A" would be given to the request to run on and "Index" would be called
- Thread "A" would call "DoSomethingAsync" and get a Task reference
- Thread "A" would then request the ".Result" property of that task and would block until the task completed
- The "Task.Delay" call would complete and the runtime would try to continue the "DoSomethingAsync" work
- The ASP.NET synchronization context would require that work continue on Thread "A" and so the work would be placed on a queue for Thread "A"
- Thread "A", however, is still blocked on ".Result", so the queued continuation never runs: a deadlock
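A reconstruction of the simplified example (the post's own snippet was an image), with the ConfigureAwait(false) fix from the title in place:

```csharp
public class HomeController : ApiController
{
    public IHttpActionResult Index()
    {
        // Blocks the request thread until the task completes.
        var result = DoSomethingAsync().Result;
        return Ok(result);
    }

    private async Task<string> DoSomethingAsync()
    {
        // Without ConfigureAwait(false), the continuation must resume on the
        // captured ASP.NET context, whose thread is blocked above: deadlock.
        // With it, the continuation runs on a thread pool thread instead.
        await Task.Delay(1000).ConfigureAwait(false);
        return "done";
    }
}
```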

WSFederation OWIN - Could not load type 'System.IdentityModel.Tokens.TokenValidationParameters' from assembly 'System.IdentityModel.Tokens.Jwt, Version=5.0.0.127, Culture=neutral, PublicKeyToken=31bf3856ad364e35'.

At the end of last year I blogged about the following exception I got when using the WSFederation OWIN middleware together with ADFS:

```
Could not load type 'System.IdentityModel.Tokens.TokenValidationParameters' from assembly 'System.IdentityModel.Tokens.Jwt, Version=5.0.0.127, Culture=neutral, PublicKeyToken=31bf3856ad364e35'.
```

The problem was an incompatibility between the OWIN version (5) and the Microsoft.IdentityModel.Tokens NuGet packages. In the meantime, newer versions of this package have been released, making the issue disappear starting from version 5.2.

Introducing Microsoft Learn

Last month Microsoft launched their new learning website, Microsoft Learn. The goal of Microsoft Learn is to become the one stop for self-paced, guided learning on all of Microsoft's platform products and services. Today the site already offers more than 80 hours of learning content for Azure, Dynamics 365, PowerApps, Microsoft Flow, and Power BI. Among that content, you'll find experiences that will help get you ready for new certification exams for developers, administrators, and solution architects.

ElasticSearch–More like this

One of the nice features in ElasticSearch is the 'More Like This' query. It finds documents that are "like" a given set of documents. In order to do so, MLT selects a set of representative terms from these input documents, forms a query using these terms, executes the query and returns the results. The really important parameter is Like(), where you can specify free-form text and/or one or more documents. An example follows below. More information: https://www.elastic.co/guide/en/elasticsearch/reference/current/query-dsl-mlt-query.html
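A sketch of the raw query DSL (field names are placeholders), close to the example in the Elasticsearch documentation:

```
GET /_search
{
  "query": {
    "more_like_this": {
      "fields": ["title", "description"],
      "like": "Once upon a time",
      "min_term_freq": 1,
      "max_query_terms": 12
    }
  }
}
```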

Akka vs Orleans–Comparing Actor systems

If you are looking for a balanced evaluation and comparison of Akka and Orleans, have a look at https://github.com/akka/akka-meta/blob/master/ComparisonWithOrleans.md. The core message is well described in the introduction: The most interesting aspect is the difference in primary focus between the two projects. The primary focus of Orleans is to simplify distributed computing and allow non-experts to write efficient, scalable and reliable distributed services. Akka is a toolkit for building distributed systems, offering the full power but also exposing the inherent complexity of this domain. Both projects intend to be complete solutions, meaning that Orleans' second priority is to allow experienced users to control the platform in more detail and adapt it to a wide range of use-cases, while Akka also raises the level of abstraction and offers simplified but very useful abstractions.

IIS–Decryption key specified has invalid hex characters

After setting a machine key inside my web.config, I got the following IIS error: "Decryption key specified has invalid hex characters". Here is the related web.config line:

```xml
<machineKey decryptionKey="decription key is here" validation="SHA1" validationKey="validationkey,IsolateApps" />
```

The root cause of this error is that the configuration above is invalid: using an explicit decryptionKey together with the IsolateApps modifier doesn't work. The IsolateApps modifier causes ASP.NET to generate a unique key for each application on your server, which is only applicable when you let ASP.NET auto-generate keys at runtime. More information: https://stackoverflow.com/questions/15002960/isolateapps-causes-decryption-key-specified-has-invalid-hex-characters

NHibernate.MappingException: No persister for: Sample.Product

After creating a new mapping file using the code mapping feature in NHibernate, it didn't work when I tried to persist an instance of the object. Instead I got the following error message: "NHibernate.MappingException: No persister for: Sample.Product". Looking at the mapping file I was using, do you notice what's wrong? ... I forgot to make my mapping class public:

```csharp
public class ProductMapping : ClassMapping<Product>
```

Visual Studio 2017–SQL Server Data Tools- Custom assemblies could not be loaded

On one of my SQL Server Reporting Services reports, I had to use some custom logic. Instead of embedding the code directly in the report, I decided to create a separate assembly, which makes it easier to test and debug this code. After importing the assembly in my Visual Studio 2017 Reporting Services project, I couldn't load the Report Designer preview window anymore. Instead I got the following error message:

```
An error occurred during local report processing. The definition of the report '/samplereport' is invalid. Error while loading code module: 'Sample.Barcode, Version=1.0.0.0, Culture=neutral, PublicKeyToken=null'. Details: Could not load file or assembly 'Sample.Barcode, Version=1.0.0.0, Culture=neutral, PublicKeyToken=null' or one of its dependencies. The system cannot find the file specified.
```

To fix this I had to copy the DLL to the following location: C:\Program Files (x86)\Microsoft Visual Studio\2017\Enterprise\Common7\IDE\CommonExtensions\Microsoft\SSRS

Windows Identity Foundation - Using a machine key to encrypt/decrypt your session token

By default WIF uses a built-in SessionSecurityTokenHandler to serialize the session token to and from the cookie. Behind the scenes this token handler uses the Data Protection API (DPAPI) to protect the cookie material. DPAPI uses a key that is specific to the computer on which it is running. For this reason, the default session token handler is not usable in web farm scenarios, because tokens written on one computer may need to be read on another. As a solution you can replace the default SessionSecurityTokenHandler with a machine key based alternative (see the sketch below). After doing that, there is one extra step required: the default IIS configuration auto-generates a machine key per application pool. To generate a specific key and copy it to all server instances in your web farm, clear the checkboxes next to the 'Automatically generate at runtime' option and choose Generate Keys from the Actions menu on the right. Now you can copy these keys to the other servers in your farm.
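A sketch of the web.config swap (verify the exact type names and assembly versions against your WIF setup):

```xml
<system.identityModel>
  <identityConfiguration>
    <securityTokenHandlers>
      <!-- Remove the DPAPI based handler... -->
      <remove type="System.IdentityModel.Tokens.SessionSecurityTokenHandler, System.IdentityModel" />
      <!-- ...and add the machine key based one. -->
      <add type="System.IdentityModel.Services.Tokens.MachineKeySessionSecurityTokenHandler, System.IdentityModel.Services" />
    </securityTokenHandlers>
  </identityConfiguration>
</system.identityModel>
```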

F# 4.5–The ‘match!’ keyword

F# 4.5 introduces the match! keyword, which allows you to inline a call to another computation expression and pattern match on its result. Let's look at an example. Before F# 4.5, I had to bind the result of callServiceAsync with let! before I could pattern match on it. With the match! keyword this can be simplified, as the sketch below shows. When calling a computation expression with match!, it realizes the result of the call like let! does. This is often used when calling a computation expression where the result is an optional value.
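A sketch (callServiceAsync is assumed to return an Async<string option>):

```fsharp
// Before F# 4.5: bind first, then match
let handleResult () = async {
    let! result = callServiceAsync ()
    match result with
    | Some value -> printfn "Received %s" value
    | None -> printfn "No result"
}

// With F# 4.5: match! combines the bind and the match
let handleResult' () = async {
    match! callServiceAsync () with
    | Some value -> printfn "Received %s" value
    | None -> printfn "No result"
}
```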