

Showing posts from January, 2017

Azure Functions: Limitations

For a project I’m working on we are investigating whether Azure Functions can replace the existing PaaS components we use. But what are the limitations? A customer explained to me that they had tried to move to FaaS before, using AWS Lambda, but were unhappy with the constraints and limits that AWS Lambda imposes. Let’s compare the two:

                            AWS Lambda      Azure Functions
Request payload size        6 MB            No limit
Max duration                300 seconds     No limit
Deployment package size     50 MB           No limit
Size of code/dependencies   250 MB          No limit
Concurrent executions       100             Only limited by the # of instances

Should I say more? Some useful links:
- Best practices for Azure Functions
- AWS Lambda limits

TFS – Work item not visible on Product Backlog

After splitting some Product Backlog Items in TFS into smaller ones, we noticed that they were no longer shown on one of the boards. What is going on? The problem was that we had created the new PBIs as children of the existing PBI. As a consequence, TFS no longer shows the parent PBI, because by design the board only shows leaf nodes. In this sample it means that the Requirement 1.0 PBI is not shown; only Requirement 1.0.1 & 1.0.2 are visible on the board:

Team Foundation Server 2017 – Enable new work item form

The new work item form that was released a year ago as part of one of the VSTS updates has made it into the on-premises Team Foundation Server 2017 product. However, after upgrading to TFS 2017, the new work item form is not visible yet. Only for new collections is the new form experience available by default; for existing collections you explicitly have to enable it. Here are the steps to enable it:
1. Go to the Collection Administration page.
2. Click on the "Enable the new work item form" link.
3. Choose an opt-in model for the new work item form:
   - If you want to activate it immediately, choose "New form only".
   - If you want to allow users to try the new experience, choose "Enable opt-in for all users". Users will get a "Try the new form" link on their old work item form, allowing them to test the new functionality.

Mini-SPAs (Single Page Apps)

After building multiple Single Page Applications, both small and large, I’ve landed on an approach where I typically split one single SPA into multiple smaller SPAs. Similar to using microservices on the backend, this approach offers a lot of benefits. First of all, the initial footprint of your application is a lot smaller. Second, working with multiple people on a SPA becomes easier: there is less chance that something is broken after pulling the latest sources from your source control system. You can also avoid some of the issues around session expiration, JavaScript bugs that make your app freeze, memory leaks, etc. that you would otherwise need to handle yourself. However, each time I start a new project and try to explain this approach, I get strange looks; people don’t understand why you wouldn’t build your SPA as one big monolith. Last week I stumbled upon a blog post by Maurice De Beijer where he explains a similar approach. Now I feel confirmed in my opinion…

ASP.NET Core: Set the environment on your Powershell prompt

While trying to run our ASP.NET Core application from the command line using ‘dotnet run’, we got the following warning: Development environment should not be enabled in deployed applications, as it can result in sensitive information from exceptions being displayed to end users. For local debugging, development environment can be enabled by setting the ASPNETCORE_ENVIRONMENT environment variable to Development, and restarting the application. Makes sense; however, in this case we would like to run the application with “Development” as the configured environment. How can we do that?

From the command prompt:
set ASPNETCORE_ENVIRONMENT=Development
dotnet run

From PowerShell:
$Env:ASPNETCORE_ENVIRONMENT = "Development"
dotnet run

Remark: note that the environment change only applies to the current window, so it is important to run both commands from the same prompt. (setx, in contrast, persists the variable for future sessions but does not affect the current one.)
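For completeness, the same idea in bash syntax (a sketch; only the variable name comes from the post, the rest is the standard shell equivalent):

```shell
# Set ASPNETCORE_ENVIRONMENT for the current shell session only,
# the bash equivalent of the PowerShell assignment above.
export ASPNETCORE_ENVIRONMENT=Development
echo "$ASPNETCORE_ENVIRONMENT"
```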

Walking down memory lane – The history of JavaScript

Just a really short post with a link I wanted to share about the history of JavaScript:

Visual Studio Test goes Open Source

Microsoft, in its continuous effort to move into the open, has open sourced the Visual Studio Test Platform. This is not MSTest, the testing framework, but the tooling and engine in Visual Studio that power Test Explorer and vstest.console. The Visual Studio Test architecture has four major components:
- Test Runner: the command-line entry point to the test platform (vstest.console).
- Test Execution Host: an architecture- and framework-specific process that actually loads the test container and executes the tests.
- Data Collector Host: the process that hosts the various test execution data listeners.
- IDE/Editor process: used by the developer to edit/build the application and trigger test runs.
Not all parts are open sourced yet, but the rest will follow…

Failure to start website in IIS Express: Process with an ID #### is not running

Last week during a training, one of the participants got the following error when she tried to debug a web application in IIS Express. IIS Express failed to start and Visual Studio showed the following error message: Process with an ID #### is not running. We couldn’t find the root cause of the issue, but we had a workaround to fix it:
1. Close Visual Studio.
2. Delete the hidden .vs folder at the solution level.
3. Open your solution again.
This time IIS Express should be back up and running.

ASP.NET Core: User.Identity.Name remains empty after authenticating using OIDC

After solving the problem I had yesterday, my OIDC middleware in ASP.NET Core was finally working. I was able to log in and I could find all my claims inside my ClaimsIdentity. However, this was not the end of all my problems, as I noticed that the User.Identity.Name value was empty. Strange! Because when I took a look at my claims, a name claim was certainly there… What is going on? The thing is that Microsoft provided a NameClaimType and also a RoleClaimType property on ClaimsIdentity. These properties define which claim should be used to represent the name (and role) on your User.Identity. As default values they decided on claim types that were part of WIF (for the name claim: http://schemas.xmlsoap.org/ws/2005/05/identity/claims/name). These claim types are not part of the OIDC claim types, and this explains why no mapping is happening… To fix this you can update your OIDC middleware by adding…
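The original code snippet did not survive extraction. Here is a minimal, self-contained console sketch of the mechanism (the claim values are made up; in the real middleware you would set NameClaimType/RoleClaimType on the TokenValidationParameters of your OIDC options instead of constructing the identity yourself):

```csharp
using System;
using System.Security.Claims;

class NameClaimDemo
{
    static void Main()
    {
        // The OIDC middleware hands us claims with short OIDC claim types.
        var oidcClaims = new[] { new Claim("name", "Alice") };

        // Default ClaimsIdentity: Name looks for the WIF claim type
        // (http://schemas.xmlsoap.org/ws/2005/05/identity/claims/name),
        // so the OIDC "name" claim is ignored and Name stays null.
        var byDefault = new ClaimsIdentity(oidcClaims, "oidc");
        Console.WriteLine(byDefault.Name ?? "<null>"); // prints: <null>

        // Telling the identity which claim type carries the name fixes it;
        // this is what setting NameClaimType (and RoleClaimType) on the
        // middleware's TokenValidationParameters does for you.
        var mapped = new ClaimsIdentity(oidcClaims, "oidc", "name", "role");
        Console.WriteLine(mapped.Name); // prints: Alice
    }
}
```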

ASP.NET Core: OpenIdConnectMiddleware returns 401 after oidc was challenged

The application we are working on uses OpenID Connect (OIDC) to externalize authentication. However, after configuring the OIDC middleware (Microsoft.AspNetCore.Authentication.OpenIdConnect) in ASP.NET Core, it failed to work. Instead of being redirected to the Identity Provider (through a 302), I got a 401 and the page just remained blank. Inside my output logs I could see the following:

'FrontEnd.Web.exe' (CLR v4.0.30319: FrontEnd.Web.exe): Loaded 'C:\Windows\Microsoft.Net\assembly\GAC_MSIL\System.Runtime.Serialization\v4.0_4.0.0.0__b77a5c561934e089\System.Runtime.Serialization.dll'. Skipped loading symbols. Module is optimized and the debugger option 'Just My Code' is enabled.
Microsoft.AspNetCore.Authentication.OpenIdConnect.OpenIdConnectMiddleware: Information: AuthenticationScheme: oidc was challenged.
Microsoft.AspNetCore.Hosting.Internal.WebHost: Information: Request finished in 444.9692ms 401

On GitHub I found that there are some problems…

ASP.NET Core MVC – Register a global authorization filter

ASP.NET Core MVC still supports the concept of action filters. On the project I’m currently working on, I wanted to add a global authorization filter. So, similar to ASP.NET MVC, I tried to add the AuthorizeAttribute as a global filter. However, this didn’t work: in ASP.NET Core MVC the attribute class and the filter implementation are separated from each other. Instead I had to use the following code:
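The code the post refers to is missing; here is a sketch of the usual registration, assuming ASP.NET Core MVC 1.x wiring in Startup.ConfigureServices (the policy shown simply requires an authenticated user):

```csharp
using Microsoft.AspNetCore.Authorization;
using Microsoft.AspNetCore.Mvc.Authorization;
using Microsoft.Extensions.DependencyInjection;

public partial class Startup
{
    public void ConfigureServices(IServiceCollection services)
    {
        // In ASP.NET Core MVC, [Authorize] is only metadata; the actual
        // filter type is AuthorizeFilter, which wraps an authorization policy.
        var policy = new AuthorizationPolicyBuilder()
            .RequireAuthenticatedUser()
            .Build();

        // Register the filter globally so it applies to every action.
        services.AddMvc(options => options.Filters.Add(new AuthorizeFilter(policy)));
    }
}
```

Individual actions can still opt out of the global filter with [AllowAnonymous].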

Enums alternative in C#

While reading a blog post I noticed a code snippet that takes advantage of expression-bodied function members in C# 6. And although it is just a static class with some read-only properties, I found it aesthetically pleasing, and it looks like a useful (and more flexible) alternative to enums. Compared to the syntax you had to use before, it feels more like a normal class than an enum construct… An alternative approach would be using getter-only auto-properties, but that doesn’t feel quite the same either…
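The snippet itself was lost; here is a small runnable sketch of the idea (the class name and values are invented for illustration):

```csharp
using System;

// An enum-like construct built from expression-bodied members (C# 6).
// Unlike a real enum, the values can be strings (or any other type),
// and you can later add behavior to the class.
public static class OrderStatus
{
    public static string Pending   => "Pending";
    public static string Shipped   => "Shipped";
    public static string Delivered => "Delivered";
}

class Demo
{
    static void Main()
    {
        Console.WriteLine(OrderStatus.Shipped); // prints: Shipped
    }
}
```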

CQRS: Building a pipeline using MediatR behaviors

I recently started a new project where we are using MediatR to simplify our CQRS infrastructure. This gives us our basic building blocks to create our queries, commands and the related handlers. One of the things I wanted to add were some decorators to provide extra functionality (like authorization, logging, transaction management, …) before and after a request. It turns out I’m lucky: with the release of MediatR 3.0 I can fall back on built-in functionality called behaviors. Similar to action filters in ASP.NET MVC, it gives me the opportunity to execute some arbitrary logic before and after a request (command/query). Usage is simple: you implement the IPipelineBehavior<TRequest, TResponse> interface. Don’t forget to register the pipeline behavior through your IoC container (StructureMap in the example below)! Note: as I’m registering this as an open generic, this behavior will apply to all my messages.
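The post’s snippet is missing; here is a sketch of such a behavior, assuming the MediatR 3.0 signature (later MediatR versions add a CancellationToken parameter):

```csharp
using System;
using System.Threading.Tasks;
using MediatR;

// A logging behavior that wraps every request: it runs before the handler,
// calls next() to continue the pipeline, and runs again afterwards.
public class LoggingBehavior<TRequest, TResponse> : IPipelineBehavior<TRequest, TResponse>
{
    public async Task<TResponse> Handle(
        TRequest request,
        RequestHandlerDelegate<TResponse> next)
    {
        Console.WriteLine($"Handling {typeof(TRequest).Name}");
        var response = await next(); // invoke the next behavior or the handler itself
        Console.WriteLine($"Handled {typeof(TRequest).Name}");
        return response;
    }
}
```

And the StructureMap registration as an open generic (so it applies to every message) is, roughly: cfg.For(typeof(IPipelineBehavior<,>)).Add(typeof(LoggingBehavior<,>));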

Formatting strings in F#

As F# is built on top of the CLR and has access to the same Base Class Library, you can use String.Format as you know it from C#. A (better) alternative is the printf function in F#. Be aware that the printf function (and its relatives) do not use the composite formatting technique we see in String.Format, but a C-style technique instead. This means that you don’t use positional placeholders like this:

String.Format("A string: {0}. An int: {1}. A float: {2}. A bool: {3}", "hello", 42, 3.14, true)

Instead you use C-style format specifiers representing the data types (e.g. %s for string):

printfn "A string: %s. An int: %i. A float: %f. A bool: %b" "hello" 42 3.14 true

What makes this approach really nice is that it adds an extra layer of type safety on top of your format string. If you expect a string parameter but provide an integer, you’ll get a nice compiler error. But maybe you are wondering: what about string interpolation…

F# printfn error

I tried to write out a message to the command line using ‘printfn’, passing it a string variable. Unfortunately, this code doesn’t compile. Here is the compiler error I got:

The type 'string' is not compatible with the type 'Printf.TextWriterFormat<'a>'

How hard can it be to write a simple string message? The problem becomes obvious when we look at the signature of the printfn function:

printfn : TextWriterFormat<'T> -> 'T

The function doesn’t expect a string but a TextWriterFormat<'T>, and the compiler has no clue how to convert our string variable to this type. To fix this, specify a format specification (%s in this case) followed by your variable, e.g. printfn "%s" message (where message is your string variable).

Free ebook: Defending the New Perimeter – Modern Security

Right on time! Just before the weekend starts, a new (free) ebook has been released, giving you a whole weekend to digest it. “Defending the New Perimeter: Modern Security from Microsoft” is a guide to the Microsoft Cybersecurity Stack for IT decision makers, written by Pete Zerger and Wes Kroesbergen. Brad Anderson (Corporate Vice President at Microsoft) wrote the introduction and summarizes the book like this: “In "Defending the New Perimeter", Wes and Pete will explain how the components of Microsoft's defense stack work together seamlessly – backed by the rich intelligence of the Microsoft Intelligent Security Graph – to deliver the best possible protection for your infrastructure, devices, applications, and data. I think you will find it to be a great resource to acquaint yourself with Microsoft's approach to modern security for the hybrid enterprise.” A must-read for CIOs, CISOs and IT professionals who care about security. Download the ebook here (registration required).

Running MsTest unit tests for your .NET Core code

To be able to run unit tests for your .NET Core code, Microsoft created a new version of the MSTest framework with support for .NET Core. This version is still in preview but provides testing support both from the command line (through dotnet test) and from Visual Studio. To get up and running I followed the steps described here. However, when I tried to run the tests, the Test Explorer in Visual Studio remained empty and no tests were discovered. What went wrong? I made a stupid mistake: I had only added the MSTest.TestFramework NuGet package, which contains the test attributes and assertions. To be able to run the tests you also need a test adapter and test runner, which are part of the dotnet-test-mstest NuGet package. After adding this package, my tests were finally discovered. Problem solved! From the project.json: "dependencies": { "dotnet-te…
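The project.json fragment above is cut off; here is a hypothetical sketch of the relevant section (version numbers are placeholders, not taken from the original):

```json
{
  "dependencies": {
    "MSTest.TestFramework": "1.0.*",
    "dotnet-test-mstest": "1.1.*"
  },
  "testRunner": "mstest"
}
```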

Tips when using the pipeline operator in F#

Thanks to the pipeline operator you can craft some really beautiful and readable code. Similar to command-line pipelining, you take the output of a previous function and use it as the input of the next step in your pipeline. There are two things you need to be aware of if you want to use your functions inside a pipeline construct:

1. Argument order is important. The value you want to apply the pipelined function to should be passed as the last parameter. Let’s have a look at the definition of the pipeline ‘|>’ operator:

let (|>) x f = f x

Basically, the value on the left of the pipe-forward operator is passed as the last parameter to the function on the right. A lot of standard F# functions are structured so that the parameter most likely to be passed down a chain like this is defined as the last parameter.

2. Be aware of the difference between a tuple and multiple parameters. A mistake…

Learn to write composable functional JavaScript

During the Christmas holiday I followed a great online course: Professor Frisby Introduces Composable Functional JavaScript. Course description: “This course teaches the ubiquitous abstractions for modeling pure functional programs. Functional languages have adopted these algebraic constructs across the board as a way to compose applications in a principled way. We can do the same in JavaScript. While the subject matter will move beyond the functional programming basics, no previous knowledge of functional programming is required. You'll start composing functionality before you know it.” If you have 2 hours of spare time, it is a great introduction to the functional programming world. And best of all, it’s free! Tip: keep your JSFiddle open and type along during the training…

Impress your colleagues with your knowledge about... the Visual Studio Hosting process.

Sometimes when working with C# you discover some hidden gems. Some of them are very useful; for others it is a little bit harder to find a good way to benefit from their functionality. One of those hidden gems that you have all used but probably never wondered about is the vshost file. When you create and compile an application in Visual Studio you get two executables for the price of one: your main output exe and a second vshost.exe file. The second executable is used by default when you are debugging your application. It was introduced in Visual Studio 2005 and its purpose is to provide improved debugging performance (among other things, like enabling partial-trust debugging and design-time expression evaluation). Without this hosting process, every time you debug your application an AppDomain has to be created, the debugger needs to be initialized, and so on… This takes time and has a negative impact on the overall performance of your debugging experience. The hosting process speeds up this process…