Friday, July 29, 2016

Bringing some sanity to MicroServices–Beyond the hype

Microservices, “SOA done right”, are the cool kids of the moment. Everyone wants to know them and sit next to them in the classroom. Every time I see a customer just trying to copy the Netflix architecture or the Facebook architecture, it makes me cringe. Every architecture should be problem space first and technology/architecture second. Too many times people just start with a cool new architecture and then try to fit it onto their codebase, project and IT landscape. The new golden hammer makes everything look like a nail…


So reading the blog posts of Christian Posta about Microservices was a relief for me. Finally some sanity in this crazy world. If you think that microservices fit your problem domain and you want to get some real insights, I recommend reading his blog series:

I hope that more posts will follow!

Although I’m not a Java developer, I even ordered his book, Microservices for Java developers Smile

Thursday, July 28, 2016

Upgrading TFS–Don’t change the language of your TFS deployment during upgrade

I helped a customer upgrade their existing TFS 2010 environment to TFS 2015. Whereas the old version of TFS was using the English bits, for the 2015 installation they provided me with the Italian installation media.

When I started the configuration process to trigger the upgrade, the installation wizard gave the following warning:


OK, it’s just a warning, I thought; what harm could be done? So I checked the box next to “I have read the above warning and would like to continue anyway”.

Turns out this was not one of my brightest ideas Smile. A few hours into the upgrade process, a failure popped up:

[Info   @19:00:17.421] Executing step: 'Add Process Templates' ConfigurationProcess.AddProcessTemplates (716 of 765)

[Info   @19:00:17.439] Loading process template: Deploy\ProcessTemplateManagerFiles\1033\MsfAgile\Metadata.xml

[Error  @19:00:17.476] Value cannot be null.

Parameter name: input

[Info   @19:00:17.509] System.ArgumentNullException: Value cannot be null.

Parameter name: input

   at System.Xml.XmlReaderSettings.CreateReader(Stream input, Uri baseUri, String baseUriString, XmlParserContext inputContext)

   at System.Xml.XmlReader.Create(Stream input, XmlReaderSettings settings, String baseUri)

   at Microsoft.TeamFoundation.Admin.Deploy.Application.ProcessStepPerformer.ProcessTemplateMetadata..ctor(Stream metadataResourceStream, AddProcessTemplateStepData template)

   at Microsoft.TeamFoundation.Admin.Deploy.Application.ProcessStepPerformer.ProcessTemplateMetadata.GetMetadata(IVssRequestContext requestContext, ServicingContext servicingContext, AddProcessTemplateStepData template)

   at Microsoft.TeamFoundation.Admin.Deploy.Application.ProcessStepPerformer.AddProcessTemplates(IVssRequestContext targetRequestContext, ServicingContext servicingContext, AddProcessTemplatesStepData stepData)

   at Microsoft.TeamFoundation.Framework.Server.TeamFoundationStepPerformerBase.PerformHostStep(String servicingOperation, ServicingOperationTarget target, IServicingStep servicingStep, String stepData, ServicingContext servicingContext)

   at Microsoft.TeamFoundation.Framework.Server.TeamFoundationStepPerformerBase.PerformStep(String servicingOperation, ServicingOperationTarget target, String stepType, String stepData, ServicingContext servicingContext)

   at Microsoft.TeamFoundation.Framework.Server.ServicingStepDriver.PerformServicingStep(ServicingStep step, ServicingContext servicingContext, ServicingStepGroup group, ServicingOperation servicingOperation, Int32 stepNumber, Int32 totalSteps)

[Info   @19:00:17.509] [2016-07-14 17:00:17Z] Passaggio del servizio 'Add Process Templates' non riuscito. (Operazione del servizio: 'ToDev14M95Deployment'; Gruppo di passaggi: 'UpgradeProcessTemplates')

[Info   @19:00:17.509] [StepDuration] 0,0883791

[Info   @19:00:17.510] [GroupDuration] 0,0900522

[Info   @19:00:17.510] [OperationDuration] 0,252088

[Info   @19:00:17.510] Clearing dictionary, removing all items.

[Error  @19:00:17.511]

Messaggio eccezione: TF400711: errore durante l'esecuzione del passaggio del servizio 'Add Process Templates' per il componente UpgradeProcessTemplates durante ToDev14M95Deployment: Value cannot be null.

Parameter name: input (tipo TeamFoundationServicingException)

Traccia dello stack eccezione:    at Microsoft.TeamFoundation.Framework.Server.ServicingContext.FinishStep(Exception exception)

   at Microsoft.TeamFoundation.Framework.Server.ServicingStepDriver.PerformServicingStep(ServicingStep step, ServicingContext servicingContext, ServicingStepGroup group, ServicingOperation servicingOperation, Int32 stepNumber, Int32 totalSteps)

   at Microsoft.TeamFoundation.Framework.Server.ServicingStepDriver.PerformOperations(Int32 stepsToPerform)

   at Microsoft.TeamFoundation.Framework.Server.ServicingStepDriver.Execute(Int32 numberOfStepsToPerform)

   at Microsoft.TeamFoundation.Admin.UpgradeConfigDbDriver.Execute()

   at Microsoft.TeamFoundation.Admin.ConfigureUpgradeConfigDB.Run(ActivityContext context)

I had no clear indication that this issue was caused by the language change, but the fact that I had to click through a warning checkbox made me suspicious. (The Italian log lines above roughly translate to: “Servicing step 'Add Process Templates' failed” and “TF400711: error while executing servicing step 'Add Process Templates'”.)

So I asked for the English installation media, restored the backup, and restarted the upgrade process. This time the process completed without any errors.

Lesson learned: don’t ignore warnings…

Wednesday, July 27, 2016

Using a SQL Server database as an ASP.NET session store

Whenever possible I prefer a stateless design, but sometimes using some session state can make your life so much easier. And we like easy, right? This week I had to upgrade some old ASP.NET MVC applications so they could move to Azure. The only thing I had to do was reconfigure the session state (which was in memory) to use a persistent store (SQL Server in this case).

As it had been a long time, I didn’t remember the steps involved. So here is a quick summary:

  • Open a Visual Studio Developer Command Prompt
  • Execute the following command:
    • C:\Program Files (x86)\Microsoft Visual Studio 11.0>aspnet_regsql -ssadd -sstype c -d <databasename> -S <servername> -U <userid> -P <password>
    • In our case we wanted to use a separate database (and not TempDB, which is the default), which is why we had to specify some extra parameters.
  • Update your web.config
    • <sessionState mode="SQLServer" timeout="20" allowCustomSqlDatabase="true" sqlConnectionString="Data Source=<servername>;Initial Catalog=<databasename>;User ID=<UserID>;Password=<Password>;" cookieless="false" />
  • That’s it!

For more information, have a look at the ASP.NET session state documentation on MSDN.

Some extra steps are required to make it work for multiple servers:

  • Use the same application path on all web servers.
  • Use the same machine key on all web servers. The machine key is used for encryption/decryption of session cookies. If the machine keys differ, one server can’t decrypt a session cookie saved by another server, so sessions can’t be read.
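For the second point, a sketch of the web.config entry might look like this (the key values are placeholders, not real keys; generate your own):

```xml
<system.web>
  <!-- Use the same validationKey/decryptionKey on every server in the farm -->
  <machineKey validationKey="[128-character hex string]"
              decryptionKey="[64-character hex string]"
              validation="SHA1"
              decryption="AES" />
</system.web>
```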

Remark: You can also use SQL Server In-Memory OLTP instead of a ‘normal’ database.

Tuesday, July 26, 2016

WPF–Using the XmlnsDefinitionAttribute causes issues when loading a WPF form in an Addin

Yesterday a colleague asked me for help. She was building an addin for ESRI ArcGIS and wanted to load a WPF form to show some related data. The problem was that when loading the WPF form, some of the DLLs were searched for in the addin folder, but for others .NET was looking inside the bin folder of the main application. An easy solution would be to just copy all addin assemblies to the bin folder of ArcGIS. Of course this was not what we wanted, as it defeats the purpose of the whole addin concept.

So the big question is: why are some DLLs loaded from the correct location and others not?

Let’s have a look at the WPF form first:

In this form the services DLL was loaded correctly whereas the Caliburn.Micro DLL (still my favorite MVVM framework Smile) was not. The difference is that the services DLL namespace is declared using the clr-namespace syntax, whereas the Caliburn.Micro namespace is declared using a URL.

Where does this URL come from?

WPF defines a CLR attribute that is consumed by XAML processors in order to map multiple CLR namespaces to a single XAML namespace. This attribute, XmlnsDefinitionAttribute, is placed at the assembly level in the source code that produces the assembly. The WPF assembly source code uses this attribute to map the various common namespaces, such as System.Windows and System.Windows.Controls, to the http://schemas.microsoft.com/winfx/2006/xaml/presentation namespace. This saves you from declaring a lot of namespaces yourself, as they can be grouped behind one XAML namespace alias.
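As an illustration (not the actual Caliburn.Micro source), such a mapping is declared at the assembly level roughly like this:

```csharp
using System.Windows.Markup;

// Maps a CLR namespace onto a XAML namespace URI; multiple CLR
// namespaces can be mapped to the same URI.
[assembly: XmlnsDefinition(
    "http://schemas.microsoft.com/winfx/2006/xaml/presentation",
    "System.Windows.Controls")]
```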

Unfortunately it was exactly this functionality that caused the Addin loader to look for assemblies at the wrong location.

How did we fix it?

By relying on the CLR behavior that the runtime first checks whether an assembly is already loaded before searching for it on the file system, we were able to solve the issue. Inside the bootstrap logic of the addin we added a line to explicitly load the Caliburn.Micro assembly into the AppDomain:
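The original snippet isn’t shown here, but it boiled down to something along these lines:

```csharp
using System.Reflection;

// Force Caliburn.Micro into the AppDomain up front, so that when the
// XAML parser resolves the XmlnsDefinition-based namespace it finds the
// assembly already loaded instead of probing the host's bin folder.
Assembly.Load("Caliburn.Micro");
```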

Monday, July 25, 2016

FAAS: Serverless architectures with Function as a Service

After PAAS (Platform as a Service), IAAS (Infrastructure as a Service) and SAAS (Software as a Service), it is now time for FAAS: Function as a Service. FAAS is one of the incarnations of serverless architectures (BAAS, Backend as a Service, is another one).

Let’s have a look at what Microsoft has to offer in the FAAS space: Azure Functions.

Azure Functions is a serverless, event-driven experience that extends the existing Azure App Service platform. These nano-services scale based on demand and you pay only for the resources you consume.

Getting started

  • Go to the Azure Functions product page. Click on the big green Get started button.


  • Log in with an account linked to an Azure subscription.
  • If you logged in successfully, you’ll be redirected to a Get started page (an Angular 2 app Smile) where you can configure the following information:
    • Your subscription: select one of the subscriptions associated with this account
    • Name: choose a name for your Azure Function (it should be unique)
    • Region: select a target region
  • Click on the Create + get started button.


  • After the Azure function is created, you will be redirected to the Azure portal where you are welcomed by a Quickstart screen.


  • Let’s walk through the Quickstart.
    • First we choose one of the sample scenarios; let’s pick the Timer scenario.


    • Second, choose a programming language. At the moment C# and JavaScript are supported.


    • Click on the Create this function button to complete the Quickstart.
  • And there we have it, our first Azure function:
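The generated function looked roughly like the C# script (.csx) sketch below; the exact template content may have changed since:

```csharp
using System;

// Runs on the schedule defined in the accompanying function.json
// (a CRON expression for the timer trigger).
public static void Run(TimerInfo myTimer, TraceWriter log)
{
    log.Info($"C# Timer trigger function executed at: {DateTime.Now}");
}
```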


Wednesday, July 20, 2016

Using work item templates in TFS 2015 and VSTS

Last week a colleague told me he had to customize the TFS work item templates to prepopulate some fields with default values. I had good news for him: TFS (and VSTS) has a nice productivity feature called work item templates. With work item templates you can quickly create work items with pre-populated values for your team’s commonly used fields. And that’s exactly what he needed.

Here are the steps required to create a work item template:

  • Open the TFS web portal
  • Go to the Work hub and click on the Queries tab


  • Click on New and choose the Work Item type you want to create a template for


  • On the New Work Item form fill in the default values you want the template to contain. Next,
    • If you are using VSTS choose the Copy template URL option from the context menu:


    • If you are using TFS on premise, click on the Copy template URL button:


A URL is generated and copied to your clipboard. Here is an example of the generated URL:

  • You can now share this URL or, for example, put it on your TFS dashboard

Remark: If you don’t want to use the web portal but use Visual Studio or Team Explorer most of the time, you can use the Work Item Templates feature that is part of the TFS Power Tools extension.

Tuesday, July 19, 2016

Microsoft REST API guidelines

If you ever built a REST(-like) service, you know that there is no single ‘right’ way to build one. Okay, you have the HTTP specs and the REST dissertation by Roy Fielding, but that still leaves a lot of questions unanswered:

  • How do you handle versioning?
  • When do you use which HTTP status code?
  • What metadata do you add to your headers?
  • How do you return error data?

In an effort to structure all REST services built by Microsoft in the same way, they made their REST API guidelines publicly available:

The Microsoft REST API Guidelines, as a design principle, encourages application developers to have resources accessible to them via a RESTful HTTP interface. To provide the smoothest possible experience for developers on platforms following the Microsoft REST API Guidelines, REST APIs SHOULD follow consistent design guidelines to make using them easy and intuitive.

This document establishes the guidelines Microsoft REST APIs SHOULD follow so RESTful interfaces are developed consistently.

If you have to decide on your own standards to use inside your organization or you want to consume a Microsoft REST service, this guide is certainly worth a look.



Monday, July 18, 2016

Cloud Design Patterns infographic

I shared some great resources regarding Cloud related design patterns before:

Here is another one to add to the list:

  • Cloud Design Patterns Infographic: An interactive infographic that depicts common problems in designing cloud-hosted applications and shares design patterns to solve them.


Friday, July 15, 2016

TFS Build Test Task–How to exclude a set of tests

The Test Task in TFS Build allows you to specify which assemblies should be searched for (unit) tests.



By using the Test Filter criteria, you can further filter the set of Tests you want to be executed:


Based on the tooltip you get the impression that you can only specify which tests to include based on things like TestName, TestCategory, … But under the hood a lot more is possible. You can not only include tests (using ‘=’), but also exclude tests (using ‘!=’), execute a contains check (using ‘~’) and group subexpressions (using ‘(<subexpression>)’).

Here is a short sample where I exclude all PerformanceTests, include tests with Priority 1 and where the name contains the word ‘UnitTest’:
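The filter itself was a single expression along these lines (the category name PerformanceTests is specific to my project; adapt it to yours):

```
TestCategory!=PerformanceTests&Priority=1&Name~UnitTest
```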


Thursday, July 14, 2016

NUnit 3: Trace.WriteLine not captured in Test Explorer

After upgrading to NUnit 3, I noticed that no tracing information was captured and written to the test output. (Note: I didn’t verify, but I’m quite sure it worked when using NUnit 2.x.)

After searching through the documentation and Issues list on GitHub, I found the following quote:

“Currently, the "channels" we capture are Console.Out, Console.Error and TestContext.Out. ”

So indeed no Trace.Write(Line) in the list. Let’s check if this statement is correct…

Here is my test code:
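The test code was a minimal fixture along these lines:

```csharp
using System.Diagnostics;
using NUnit.Framework;

[TestFixture]
public class OutputCaptureTests
{
    [Test]
    public void Using_TraceWriteLine()
    {
        // Not in the list of captured channels
        Trace.WriteLine("Hello from Trace.WriteLine");
    }

    [Test]
    public void Using_TestContextWriteLine()
    {
        // Captured and shown in the test output
        TestContext.WriteLine("Hello from TestContext.WriteLine");
    }
}
```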


And let’s now have a look at the test results:

  • Trace.WriteLine – no output available Sad smile:


  • TestContext.WriteLine – with output Winking smile:



Wednesday, July 13, 2016

Testing your Akka.NET actors using TestKit

Due to the nature of actor-based systems, they can be complex to test. Luckily Akka.NET, my actor framework of choice, is accompanied by Akka.TestKit, a library that makes it easy to unit test Akka.NET actors.

How to get started?

Akka.TestKit has support for the most popular test frameworks for .NET: MSTest, xUnit and NUnit.

As I’m a long-time NUnit user, I’ll show you how to set it up with NUnit, but the steps for the other frameworks are similar.

Step 1- Download the correct NuGet Package

For every test framework a separate NuGet package is available. As I will use NUnit, I download the Akka.TestKit.NUnit3 NuGet package. (If you are still using an older NUnit version, a separate package is available: Akka.TestKit.NUnit.)

Step 2 – Initialize your test class

Every test class should inherit from TestKit. This class provides all the necessary wiring, setup and cleanup of the actor system and a range of helper methods.

Step 3 – Create an actor

Inside every test an ActorSystem instance is created, making each test completely isolated. This ActorSystem is available through the Sys property of the TestKit class. You can create an actor using the Sys property or use the ActorOfAsTestActorRef method:

Step 4 – Send messages to your actor

Once your actor system is created, it’s time to start sending some messages:

Step 5 - Assert for results

Finally you can check the results. If you don’t expect any messages to be returned, use ExpectNoMsg(); otherwise use the ExpectMsg() method:
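Putting the steps together, a complete test might look like this sketch; EchoActor is a made-up actor that replies with whatever it receives:

```csharp
using Akka.Actor;
using Akka.TestKit.NUnit3;
using NUnit.Framework;

// Hypothetical actor that echoes every string message back to the sender
public class EchoActor : ReceiveActor
{
    public EchoActor()
    {
        Receive<string>(message => Sender.Tell(message));
    }
}

[TestFixture]
public class EchoActorTests : TestKit
{
    [Test]
    public void EchoActor_replies_with_the_same_message()
    {
        // Step 3: create the actor inside the isolated test ActorSystem
        var echo = ActorOfAsTestActorRef<EchoActor>("echo");

        // Step 4: send a message, with the built-in TestActor as sender
        echo.Tell("hello", TestActor);

        // Step 5: assert that the reply arrived at the TestActor
        ExpectMsg("hello");
    }
}
```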

Tuesday, July 12, 2016

A better sample database

Microsoft has always provided us with sample databases (remember Northwind, AdventureWorks and, more recently, Wide World Importers). These databases are small, easy to understand and great for testing purposes. The only problem is that they aren’t real-world databases with real-world problems. Also, as they are quite small, some performance issues will never show up when using them.

If you want the ‘real stuff’, I can recommend the Stack Overflow database. You can download a torrent of the SQL Server database version of the Stack Overflow data dump. Brent Ozar took the original data dump and converted it to a SQL Server database:

Be aware that the download is about 12GB and around 95GB after extraction.

Have fun Smile

Monday, July 11, 2016

Can you keep a secret?

A lot of applications store sensitive, security-related data inside their configuration. Things like API keys, database connection information, even passwords are directly accessible inside the app.config or web.config of a .NET application. Last week a colleague mentioned that they had uploaded a project to GitHub, accidentally exposing the root AWS password of their Amazon account. Whoops!

With ASP.NET Core, Microsoft tries to solve this kind of problem with the introduction of the Secret Manager command-line tool. This tool allows you to store these sensitive values in a secure way without exposing them through source control.

If you want to enable it, add the following entry to the “tools” section of project.json:

"tools": {
  "Microsoft.Extensions.SecretManager.Tools": {
    "version": "1.0.0-preview1-final",
    "imports": "portable-net45+win8+dnxcore50"
  }
}
You also need a unique identifier that links your project to the secret manager. Therefore, add a userSecretsId to your project’s project.json file:
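For example (the identifier value is arbitrary; any unique string will do):

```
{
  "userSecretsId": "aspnet-MyApp-a1b2c3d4"
}
```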

Now we can use the Secret Manager tool from a command window to set a secret:
dotnet user-secrets set MySecretKey MySecretValue

You can then reference the secret values stored by the Secret Manager by adding a reference to the Microsoft.Extensions.Configuration.UserSecrets package. Now we can call the AddUserSecrets() method in our Startup.cs file:


You probably only want to do this during development, so wrap it inside an if block:


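Combined, the Startup constructor might look like this sketch (assuming the typical ASP.NET Core 1.0 configuration pattern):

```csharp
using Microsoft.AspNetCore.Hosting;
using Microsoft.Extensions.Configuration;

public class Startup
{
    public Startup(IHostingEnvironment env)
    {
        var builder = new ConfigurationBuilder()
            .SetBasePath(env.ContentRootPath)
            .AddJsonFile("appsettings.json");

        // Only load user secrets during development
        if (env.IsDevelopment())
        {
            builder.AddUserSecrets();
        }

        Configuration = builder.Build();
    }

    public IConfigurationRoot Configuration { get; }
}
```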
This will overwrite any configuration options loaded from a configuration file with the contents of the secret store. 

Remark: The secret store actually isn’t that secret; it’s just a set of JSON files hidden in your user profile folder. It only prevents you from checking these values into source control.

Friday, July 8, 2016

xcopy error - Invalid path 0 files copied

I was configuring a post-build step to copy some files to the output directory of another project. This is the XCOPY command I came up with:

XCOPY /E /Y "$(TargetDir)" "$(SolutionDir)\SecondApp\$(OutDir)"

Looks OK, right? But when I executed it, it failed with the following error message:

C:\projects\test>xcopy /E /Y "C:\projects\test\FirstApp\bin\Debug\" "C:\projects\test\SecondApp\bin\Debug\"

Invalid path

0 File(s) copied

These paths don’t look invalid. Strange!

Stack Overflow brought a solution: adding a trailing dot to the source path should do the trick:

XCOPY /E /Y "$(TargetDir)." "$(SolutionDir)\SecondApp\$(OutDir)"

And indeed:

C:\projects\test>xcopy /E /Y "C:\projects\test\FirstApp\bin\Debug" "C:\projects\test\SecondApp\bin\Debug\"

3 File(s) copied

Thursday, July 7, 2016

Loading a configuration file in a separate appdomain

I lost some time today investigating how to load an app.config in a separate AppDomain. My first attempt looked like this:

However, the call to ConfigurationManager.AppSettings always returned null. At first I thought I had done something wrong when configuring the AppDomainSetup.

I then tried changing the AppDomainSetup configuration to explicitly load the config settings:

But this did not solve the issue. In the end I discovered that it was the ConfigurationManager.AppSettings that was causing the issue. When I replaced it with the code below, it started to work!
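Reconstructed from memory, the gist of the fix was along these lines; the paths and key names here are hypothetical:

```csharp
using System;
using System.Configuration;

public static class ChildDomainConfig
{
    public static string ReadSetting()
    {
        var setup = new AppDomainSetup
        {
            ApplicationBase = AppDomain.CurrentDomain.BaseDirectory,
            ConfigurationFile = @"C:\apps\child\child.exe.config"
        };
        // Create the child domain (code executed inside it omitted here)
        var domain = AppDomain.CreateDomain("ChildDomain", null, setup);

        // Inside the child domain, this kept returning null:
        //   ConfigurationManager.AppSettings["someKey"]

        // What did work: open the mapped config file explicitly
        var map = new ExeConfigurationFileMap
        {
            ExeConfigFilename = @"C:\apps\child\child.exe.config"
        };
        var config = ConfigurationManager.OpenMappedExeConfiguration(
            map, ConfigurationUserLevel.None);
        return config.AppSettings.Settings["someKey"].Value;
    }
}
```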

Wednesday, July 6, 2016

TFS Build: Failed to activate Xamarin license.

While helping a customer set up a release pipeline for their Xamarin mobile applications, I noticed that their CI (Continuous Integration) build had been failing for some time.

When opening the build results I saw the following error message:

“Failed to activate Xamarin license. {"code":-3,"message":"Could not look up activation code."}”


The build was failing on the Activate Xamarin license step:


The acquisition of Xamarin by Microsoft made this license check obsolete, so just remove the step and you are good to go!

Tuesday, July 5, 2016

Angular 2–Angular CLI

With the availability of Angular 2, I started experimenting with Angular CLI, a command-line interface for Angular 2 applications based on the ember-cli project. The project is still young and I encountered a lot of issues along the way.

Here are some tips and lessons I learned:

  • Update Node to the latest LTS version (you need at least version 4).
  • Run ng in a command prompt with admin permissions.
  • Be patient. The initial installation as well as ng new take a looooong time.
  • On Windows you need to run the build and serve commands with admin permissions, otherwise the performance is awful.

Some errors I got:

  • “ng is not recognized as an internal or external command”
    • Check that %AppData%\npm is added to the PATH variable.
  • “SyntaxError: Use of const in strict mode.”
    • You are still using an older Node version that doesn’t support some of the new ES2015 features.
  • “Error: Cannot find module 'exists-sync'”
    • For an unknown reason NPM still couldn’t find some packages. I installed them manually using the following command:
      • npm install --save exists-sync
  • “The Broccoli Plugin: [BroccoliTypeScriptCompiler] failed with:Error: EMFILE: too many open files”
    • This happened when I tried to run ng test. As a workaround you can run ng build first and then ng test --watch false. This will run ng test without watching for file changes.

Monday, July 4, 2016

Validating message templates using SerilogAnalyzer

A few years ago, I switched my logging framework of choice to Serilog, a structured logging framework.

To use structured logging, you use a concept called ‘message templates’. They are a simple DSL extending .NET format strings. It looks a little bit like string interpolation, but with more power.

A quick example copied from the Serilog website:

The stored message will be a combination of the message template with the placeholders and the properties captured in JSON format:

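Reconstructed from the Serilog documentation, the example goes roughly like this:

```csharp
using Serilog;

var position = new { Latitude = 25, Longitude = 134 };
var elapsedMs = 34;

// {@Position} captures the object's structure; {Elapsed} captures a scalar
Log.Information("Processed {@Position} in {Elapsed:000} ms", position, elapsedMs);

// Rendered message:  Processed { Latitude: 25, Longitude: 134 } in 034 ms
// Stored properties: { "Position": { "Latitude": 25, "Longitude": 134 }, "Elapsed": 34 }
```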
The only problem is that, similar to string.Format and in contrast to string interpolation, the compiler doesn’t give any warnings when the provided message template is incorrect or any of the parameters is missing.

Time to introduce SerilogAnalyzer, a Roslyn-based analyzer for code using the Serilog logging library. It checks for common mistakes and usage problems.

A must have if you are using Serilog today! Download the Visual Studio Extension here:


Friday, July 1, 2016

The more you know, the more you know what you don’t know

Over the last weeks I interviewed a lot of people who wanted to join our company; some of them had just finished their studies, some had a few years of work experience and some had many years of experience in the IT industry. Comparing their CVs, I noticed that there are big differences in how people rate their skills. Especially young professionals seem to be very confident that they are experts in things like HTML, JavaScript and CSS, just to name a few, until the moment you really start to ask some tough questions about each of these topics.

It made me think about the four stages of competence:

  1. Unconscious Incompetence. You don’t know what you don’t know.
  2. Conscious Incompetence. Now you know what you don’t know.
  3. Conscious Competence. You can think your way through an exercise and perform it with some conscious effort.
  4. Unconscious Competence. You can perform the task without thinking about it. It’s automatic. It’s burned into your body and it just knows what to do.

This is not only applicable when you are learning a new skill, but also when you want to teach this skill to someone else.

A good example is driving a car: once you know how to do it, it quickly becomes an unconscious competence. The problem is that when you have to teach it to someone else, you first have to move it back to your conscious memory before you can start explaining it.

Conclusion: humans are strange creatures Smile