
Posts

Showing posts from July, 2016

Bringing some sanity to MicroServices–Beyond the hype

Microservices, “SOA done right”, are the cool kids of the moment. Everyone wants to know them and sit next to them in the classroom. Every time I see a customer just trying to copy the Netflix architecture or the Facebook architecture, it makes me cringe. Every architecture should be problem space first and technology/architecture second. Too many times people just start with a cool new architecture and then try to fit it onto their codebase, project and IT landscape. The new golden hammer makes everything look like a nail… So reading the blog posts of Christian Posta about microservices was a relief for me. Finally some sanity in this crazy world. If you think that microservices fit your problem domain and you want to get some real insights, I recommend reading his blog series:

- The real success story of microservices architectures
- Why Microservices should be event-driven: autonomy vs authority
- Carving the Java EE monolith into microservices: prefer verticals not layers

Upgrading TFS–Don’t change the language of your TFS deployment during upgrade

I helped a customer upgrade their existing TFS 2010 environment to TFS 2015. Where the old version of TFS was using the English bits, for the 2015 installation they provided me with the Italian installation media. When I started the configuration process to trigger the upgrade, the installation wizard gave the following warning. OK, it’s just a warning, I thought; what harm could be done? So I clicked on the checkbox next to “I have read the above warning and would like to continue anyway”. Turns out this was not one of my brightest ideas. A few hours into the upgrade process, a failure popped up:

[Info   @19:00:17.421] Executing step: 'Add Process Templates' ConfigurationProcess.AddProcessTemplates (716 of 765)
[Info   @19:00:17.439] Loading process template: Deploy\ProcessTemplateManagerFiles\1033\MsfAgile\Metadata.xml
[Error  @19:00:17.476] Value cannot be null. Parameter name: input
[Info   @19:00:17.509] System.ArgumentNullException: Value

Using a SQL Server database as an ASP.NET session store

Whenever possible I prefer a stateless design, but sometimes using some session state can make your life so much easier. And we like easy, right? This week I had to upgrade some old ASP.NET MVC applications to be able to move them to Azure. The only thing I had to do was to reconfigure the session state (which was in memory) to a persistent store (SQL Server in this case). As it was a long time ago, I didn’t remember the steps involved. So here is a quick summary:

- Open a Visual Studio Developer Command Prompt.
- Execute the following command:

  C:\Program Files (x86)\Microsoft Visual Studio 11.0>aspnet_regsql -sstype c -ssadd -d <databasename> -S <servername> -U <userid> -P <password>

  In our case we wanted to use a separate database (and not TempDB, which is the default), so we had to specify some extra parameters.
- Update your web.config: <sessionstate mode="SQLServer" timeout="20" allowcustomsqlda
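The web.config snippet above was cut off; a complete sessionState element might look roughly like this (a sketch — the server name, database name and credentials are placeholders you need to adapt):

```xml
<system.web>
  <!-- Point session state at the custom database created by aspnet_regsql -->
  <sessionState mode="SQLServer"
                allowCustomSqlDatabase="true"
                sqlConnectionString="Data Source=myserver;Initial Catalog=mysessiondb;User ID=myuser;Password=mypassword"
                timeout="20" />
</system.web>
```

Note that allowCustomSqlDatabase="true" is what allows you to name your own database in the connection string instead of the default.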

WPF–Using the XmlnsDefinitionAttribute causes issues when loading a WPF form in an Addin

Yesterday a colleague asked me for help. She was building an add-in for ESRI ArcGIS and she wanted to load a WPF form and show some related data. The problem was that when loading the WPF form, some of the DLLs were searched for in the add-in folder, but for some of the DLLs, .NET was looking inside the bin folder of the main application. An easy solution would be to just copy over all add-in assemblies to the bin folder of ArcGIS. Of course this was not what we wanted, as it removes the advantages of the whole add-in concept. So the big question is: why are some DLLs loaded from the correct location and others not? Let’s have a look at the WPF form first: In this form the services DLL was loaded correctly whereas the Caliburn.Micro DLL (still my favorite MVVM framework) was not. The difference is that the services DLL namespace is constructed using the clr-namespace syntax whereas the Caliburn.Micro namespace is constructed using a URL. Where does this URL come from? WPF defines a CLR
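For reference, this is how a library maps such a URL to CLR namespaces: an assembly-level XmlnsDefinitionAttribute. The sketch below uses hypothetical namespace names, not Caliburn.Micro's actual ones:

```csharp
using System.Windows.Markup;

// Map one XML namespace URI to one or more CLR namespaces.
// XAML files can then pull in every mapped namespace via the single URL,
// e.g. xmlns:fw="http://www.example.com/myframework"
[assembly: XmlnsDefinition("http://www.example.com/myframework", "MyFramework.Controls")]
[assembly: XmlnsDefinition("http://www.example.com/myframework", "MyFramework.Behaviors")]
```

Resolving such a URL forces WPF to locate the declaring assembly, which is where the load-location difference described above comes into play.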

FAAS: Serverless architectures with Function as a Service

After PAAS (Platform as a Service), IAAS (Infrastructure as a Service) and SAAS (Software as a Service), it is now time for FAAS: Function as a Service. FAAS is one of the incarnations of serverless architectures (BAAS, Backend as a Service, is another one). Let’s have a look at what Microsoft has to offer in the FAAS space: Azure Functions. Azure Functions is a serverless, event-driven experience that extends the existing Azure App Service platform. These nano-services can scale based on demand and you pay only for the resources you consume.

Getting started:

- Go to the Azure Functions product page.
- Click on the big green Get started button.
- Log in with an account linked to an Azure subscription.
- If you logged in successfully, you’ll be redirected to a Get started page (an Angular 2 app) where you can configure the following information:
  - Your subscription: select one of the associated subscriptions for this account
  - Name: select a name for
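To give an idea of what such a nano-service looks like, here is a sketch of an HTTP-triggered C# script function in the style of the preview's default template (run.csx; the greeting logic is illustrative, and the hosting runtime supplies TraceWriter and the common namespace imports):

```csharp
// run.csx – a minimal HTTP-triggered Azure Function (C# script)
using System.Net;

public static async Task<HttpResponseMessage> Run(HttpRequestMessage req, TraceWriter log)
{
    log.Info("C# HTTP trigger function processed a request.");

    // Read the 'name' value from the query string, if present
    string name = req.GetQueryNameValuePairs()
        .FirstOrDefault(q => string.Compare(q.Key, "name", true) == 0)
        .Value;

    return req.CreateResponse(HttpStatusCode.OK, $"Hello {name}");
}
```

You only pay while this function is actually executing; there is no server to provision or keep running.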

Using work item templates in TFS 2015 and VSTS

Last week I heard from a colleague that he had to customize the TFS work item templates to prepopulate some fields with default values. I had some good news for him. TFS (and VSTS) has a nice productivity feature: work item templates. With work item templates you can quickly create work items with pre-populated values for your team's commonly used fields. And that’s exactly what he needed. Here are the steps required to create a work item template:

- Open the TFS web portal.
- Go to the Work hub and click on the Queries tab.
- Click on New and choose the work item type you want to create a template for.
- On the New Work Item form, fill in the default values you want the template to contain.
- Next, if you are using VSTS, choose the Copy template URL option from the context menu. If you are using TFS on-premises, click on the Copy template URL button.

A URL is generated and copied to your clipboard. Here is an example of the

Microsoft REST API guidelines

If you ever built a REST(like) service, you know that there is no one ‘right’ way to build such a service. OK, you have the HTTP specs and the REST dissertation by Roy Fielding, but this still leaves a lot of questions unanswered:

- How do you handle versioning?
- When do you use which HTTP status code?
- What metadata do you add to your headers?
- How do you return error data?
- …

In an effort to structure all REST services built by Microsoft in the same way, they made their REST API guidelines publicly available:

The Microsoft REST API Guidelines, as a design principle, encourages application developers to have resources accessible to them via a RESTful HTTP interface. To provide the smoothest possible experience for developers on platforms following the Microsoft REST API Guidelines, REST APIs SHOULD follow consistent design guidelines to make using them easy and intuitive. This document establishes the guidelines Microsoft REST APIs SHOULD follow so REST
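As a taste of the kind of questions the guidelines answer, here is a sketch of a versioned URL and a JSON error payload in the style the guidelines describe (the host, resource and values are made up for illustration):

```
GET https://api.contoso.com/v1.0/products/123

HTTP/1.1 404 Not Found
Content-Type: application/json

{
  "error": {
    "code": "NotFound",
    "message": "Product '123' was not found."
  }
}
```

The version lives in the URL path, and errors come back as a top-level "error" object with a machine-readable code and a human-readable message.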

Cloud Design Patterns infographic

I shared some great resources regarding cloud-related design patterns before:

- Free e-book: Cloud Design Patterns: Prescriptive Architecture Guidance for Cloud Applications
- Video: Cloud Scalability Patterns

Here is another one to add to the list: the Cloud Design Patterns Infographic, an interactive infographic that depicts common problems in designing cloud-hosted applications and shares design patterns to solve them.

TFS Build Test Task–How to exclude a set of tests

The Test task in TFS Build allows you to specify which assemblies should be searched for (unit) tests. By using the Test filter criteria, you can further filter the set of tests you want to be executed. Based on the tooltip you get the impression you can only specify which tests to include, based on things like TestName, TestCategory, … But under the hood a lot more is possible. You can not only include tests (using ‘=’), but also exclude tests (using ‘!=’), execute a contains check (using ‘~’) and group subexpressions (using ‘(<subexpression>)’). Here is a short sample where I exclude all PerformanceTests, include tests with Priority 1 and where the name contains the word ‘UnitTest’:
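The sample in the original post was a screenshot; a filter expression matching that description could look like this (a sketch using the vstest test case filter syntax, in which ‘&’ combines conditions):

```
TestCategory!=PerformanceTests&Priority=1&Name~UnitTest
```

Paste an expression like this into the Test filter criteria field of the Test task.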

NUnit 3: Trace.WriteLine not captured in Test Explorer

After upgrading to NUnit 3, I noticed that no tracing information was captured and written to the test output. (Note: I didn’t verify, but I’m quite sure it worked when using NUnit 2.x.) After searching through the documentation and the issues list on GitHub, I found the following quote: “Currently, the "channels" we capture are Console.Out, Console.Error and TestContext.Out.” So indeed, no Trace.Write(Line) in the list. Let’s check if this statement is correct… Here is my test code: And let’s now have a look at the test results:

- Trace.WriteLine – no output available
- TestContext.WriteLine – with output
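The test code itself was lost with the screenshots; a sketch of the kind of comparison test used here could look like this (requires the NUnit 3 NuGet package; test names are my own):

```csharp
using System.Diagnostics;
using NUnit.Framework;

[TestFixture]
public class OutputCaptureTests
{
    [Test]
    public void Trace_output_is_not_captured()
    {
        // NUnit 3 does not capture the Trace channel,
        // so this line never shows up in the test output.
        Trace.WriteLine("You won't see this in Test Explorer.");
        Assert.Pass();
    }

    [Test]
    public void TestContext_output_is_captured()
    {
        // TestContext.Out is one of the captured channels,
        // so this line does appear in the test output.
        TestContext.WriteLine("This shows up in Test Explorer.");
        Assert.Pass();
    }
}
```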

Testing your Akka.NET actors using TestKit

Due to the nature of actor-based systems, they can be complex to test. Luckily Akka.NET, the actor framework of my choice, is accompanied by Akka.TestKit, a library that makes it easy to unit test Akka.NET actors. How to get started? Akka.TestKit has support for the most popular test frameworks for .NET: MSTest, xUnit and NUnit. As I’m a long-time NUnit user, I’ll show you how to set it up using NUnit, but the steps for other frameworks are similar.

Step 1 – Download the correct NuGet package. For every test framework a separate NuGet package is available. As I will use NUnit, I download the Akka.TestKit.NUnit3 NuGet package. (If you are still using an older NUnit version, a separate package is available: Akka.TestKit.NUnit.)

Step 2 – Initialize your test class. Every test class should inherit from TestKit. This class provides all the necessary wiring, setup and cleanup of the actor system and a range of helper methods.

Step 3 – Create an actor. Inside every test an Actor
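The steps above can be sketched in one small test (requires the Akka and Akka.TestKit.NUnit3 NuGet packages; EchoActor is a hypothetical actor I made up for the example):

```csharp
using Akka.Actor;
using Akka.TestKit.NUnit3;
using NUnit.Framework;

// A trivial actor that echoes every message back to the sender.
public class EchoActor : ReceiveActor
{
    public EchoActor()
    {
        ReceiveAny(message => Sender.Tell(message));
    }
}

// Inheriting from TestKit gives each test its own actor system
// plus helper methods such as ExpectMsg.
[TestFixture]
public class EchoActorTests : TestKit
{
    [Test]
    public void EchoActor_replies_with_the_same_message()
    {
        var echo = Sys.ActorOf(Props.Create<EchoActor>());

        echo.Tell("hello");

        // The test itself acts as the implicit sender,
        // so we can assert on the reply directly.
        ExpectMsg("hello");
    }
}
```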

A better sample database

Microsoft always provided us with some sample databases (remember Northwind, AdventureWorks and, more recently, Wide World Importers). These databases are small, easy to understand and great for testing purposes. The only problem is that they aren’t real-world databases with real-world problems. Also, as they are quite small, some performance issues will never show up when using these databases. If you want the ‘real stuff’, I can recommend the Stack Overflow database. You can download a torrent of the SQL Server database version of the Stack Overflow data dump. Brent Ozar took the original data dump and converted it to a SQL Server database. Torrent file: http://u.brentozar.com/StackOverflow201603.7z.torrent Be aware that the download is about 12GB and around 95GB after extraction. Have fun!

Can you keep a secret?

A lot of applications store sensitive security-related data inside their configuration. Things like API keys, database connection information, even passwords are directly accessible inside the app.config or web.config of a .NET application. Last week a colleague mentioned that they uploaded a project to GitHub, accidentally exposing the root AWS password for their Amazon account. Whoops! With ASP.NET Core, Microsoft tries to solve this kind of problem with the introduction of the SecretManager command-line tool. This tool allows you to store these sensitive values in a secure way without exposing them through source control. If you want to enable it, add the following entry to the “tools” section of project.json:

"Microsoft.Extensions.SecretManager.Tools": {
  "version": "1.0.0-preview1-final",
  "imports": "portable-net45+win8+dnxcore50"
}

You also need a unique identifier that links your project to the s
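Once the tool is wired up and your project has its identifier, storing and reading secrets happens from the command line. A rough sketch of the usage (the key and value here are examples, not real settings):

```shell
# Store a secret outside of source control (kept in your user profile)
dotnet user-secrets set MyApiKey "s3cr3t-value"

# List the secrets stored for the current project
dotnet user-secrets list
```

The values never touch project.json or web.config, so they can't end up in a public GitHub repository by accident.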

xcopy error - Invalid path 0 files copied

I was configuring a post-build step to copy some files to the output directory of another project. This was the XCOPY command I came up with:

XCOPY /E /Y "$(TargetDir)" "$(SolutionDir)\SecondApp\$(OutDir)"

Looks OK, right? But when I executed it, it failed with the following error message:

C:\projects\test>xcopy /E /Y "C:\projects\test\FirstApp\bin\Debug\" "C:\projects\test\SecondApp\bin\Debug\"
Invalid path
0 File(s) copied

These paths don’t look invalid. Strange! StackOverflow brought a solution: http://stackoverflow.com/questions/25840861/invalid-path-0-files-copied-error-while-using-xcopy-command. Adding a trailing dot to the source path should do the trick:

XCOPY /E /Y "$(TargetDir)." "$(SolutionDir)\SecondApp\$(OutDir)"

And indeed:

C:\projects\test>xcopy /E /Y "C:\projects\test\FirstApp\bin\Debug" "C:\projects\test\SecondApp\bin\Debug\"
C:\projects\test\AppDomainIsolationTest\SecondApp\bin\Deb

Loading a configuration file in a separate appdomain

I lost some time today investigating how to load an app.config for a separate AppDomain. My first attempt looked like this: However, the call to ConfigurationManager.AppSettings always returned null. I first thought that I did something wrong when configuring the AppDomainSetup, so I tried to replace the AppDomainSetup configuration to explicitly load the config settings: But this did not solve the issue. In the end I discovered that it was ConfigurationManager.AppSettings that was causing the issue. When I replaced it with the code below, it started to work!
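The original code was in screenshots that did not survive; here is a reconstruction of the approach under my own assumptions (the config path, key name and class names are hypothetical). The key point is reading the domain's configuration file explicitly instead of going through ConfigurationManager.AppSettings:

```csharp
using System;
using System.Configuration;

// Runs inside the child AppDomain so it sees that domain's configuration.
public class SettingsReader : MarshalByRefObject
{
    public string Read(string key)
    {
        // ConfigurationManager.AppSettings kept returning null here;
        // opening the mapped configuration file explicitly did work.
        var fileMap = new ExeConfigurationFileMap
        {
            ExeConfigFilename = AppDomain.CurrentDomain.SetupInformation.ConfigurationFile
        };
        var config = ConfigurationManager.OpenMappedExeConfiguration(
            fileMap, ConfigurationUserLevel.None);
        return config.AppSettings.Settings[key]?.Value;
    }
}

public static class Program
{
    public static void Main()
    {
        var setup = new AppDomainSetup
        {
            ApplicationBase = AppDomain.CurrentDomain.BaseDirectory,
            ConfigurationFile = "Child.exe.config"   // hypothetical config file
        };
        var domain = AppDomain.CreateDomain("ChildDomain", null, setup);

        // Create the reader inside the child domain and call it via the proxy.
        var reader = (SettingsReader)domain.CreateInstanceAndUnwrap(
            typeof(SettingsReader).Assembly.FullName,
            typeof(SettingsReader).FullName);

        Console.WriteLine(reader.Read("SomeKey"));   // hypothetical key
        AppDomain.Unload(domain);
    }
}
```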

TFS Build: Failed to activate Xamarin license.

While helping a customer set up a release pipeline for their Xamarin mobile applications, I noticed that their CI (Continuous Integration) build had been failing for some time. When opening the build results I saw the following error message:

“Failed to activate Xamarin license. {"code":-3,"message":"Could not look up activation code."}”

The build was failing on the Activate Xamarin license step. The acquisition of Xamarin by Microsoft made this license check obsolete, so just remove the step and you are good to go!

Angular 2–Angular CLI

With the availability of Angular 2, I started experimenting with Angular CLI, a CLI for Angular 2 applications based on the ember-cli project. The project is still young and I encountered a lot of issues along the way. Here are some tips and lessons I learned:

- Update Node to the latest LTS version (you need at least version 4 or later).
- Run ng in a command prompt with admin permissions.
- Be patient. The initial installation as well as ng new take a looooong time.
- On Windows you need to run the build and serve commands with admin permissions, otherwise the performance is awful.

Some errors I got:

- “ng is not recognized as an internal or external command”: check that %AppData%\npm is added to the PATH variable.
- “SyntaxError: Use of const in strict mode.”: you are still using an older Node version that doesn’t support some of the new ES2015 features.
- “Error: Cannot find module 'exists-sync'”: For an unknown reason
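For completeness, the basic workflow looks like this ("my-app" is a placeholder project name; at the time of writing the npm package was called angular-cli, before its later rename to @angular/cli):

```shell
# Install the CLI globally
npm install -g angular-cli

# Scaffold a new project – be patient, this takes a while
ng new my-app

# Build and serve it with live reload
cd my-app
ng serve
```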

Validating message templates using SerilogAnalyzer

A few years ago, I switched my logging framework of choice to Serilog, a structured logging framework. To use structured logging, you use a concept called ‘message templates’. They are a simple DSL extending .NET format strings. It looks a little bit like string interpolation, but with more power. A quick example copied from the Serilog website: The stored message will be a combination of the message template with the placeholders and the properties captured in JSON format. The only problem is that, similar to string.Format and in contrast to string interpolation, the compiler doesn’t give any warnings when the provided message template is incorrect or any of the parameters is missing. Time to introduce SerilogAnalyzer, a Roslyn-based analyzer for code using the Serilog logging library. It checks for common mistakes and usage problems. A must-have if you are using Serilog today! Download the Visual Studio extension here: https://github.com/Suchiman/SerilogAnalyzer/relea
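The example from the Serilog website was lost with the screenshots; a minimal sketch in the same spirit (requires the Serilog and Serilog.Sinks.Console NuGet packages; the Position/Elapsed names mirror the site's example):

```csharp
using Serilog;

class Program
{
    static void Main()
    {
        Log.Logger = new LoggerConfiguration()
            .WriteTo.Console()
            .CreateLogger();

        var position = new { Latitude = 25, Longitude = 134 };
        var elapsedMs = 34;

        // {@Position} captures the object structurally (as JSON),
        // {Elapsed} is captured as a plain scalar property.
        Log.Information("Processed {@Position} in {Elapsed} ms", position, elapsedMs);
    }
}
```

Forget the second argument or misspell a placeholder and the compiler stays silent, which is exactly the gap SerilogAnalyzer fills.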

The more you know, the more you know what you don’t know

The last weeks I interviewed a lot of people who wanted to join our company; some of them just finished their studies, some had a few years of work experience and some had years of experience working in the IT industry. If I compare CVs between these people, I notice that there are big differences in how people rate their skills. Especially young professionals seem to be very confident that they are experts in things like HTML, JavaScript and CSS, just to name a few, until the moment you really start to ask some tough questions about each of these topics. It made me think about the four stages of competence:

- Unconscious incompetence. You don’t know what you don’t know.
- Conscious incompetence. Now you know what you don’t know.
- Conscious competence. You can think your way through an exercise and perform it with some conscious effort.
- Unconscious competence. You can perform the task without thinking about it. It’s automatic. It’s burned into your body and it just k