Thursday, February 28, 2019

Angular - Auto-unsubscribe

One of the nice but underused features of Angular is its support for decorators (through TypeScript).

From the TypeScript documentation:

A Decorator is a special kind of declaration that can be attached to a class declaration, method, accessor, property, or parameter. Decorators use the form @expression, where expression must evaluate to a function that will be called at runtime with information about the decorated declaration. 

A nice example of decorator usage is the ngx-auto-unsubscribe library. This class decorator automatically unsubscribes from observable subscriptions when the component is destroyed.

An example:
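A minimal sketch (the component itself is made up; only the decorator usage matters):

import { Component, OnDestroy, OnInit } from '@angular/core';
import { interval, Subscription } from 'rxjs';
import { AutoUnsubscribe } from 'ngx-auto-unsubscribe';

@AutoUnsubscribe()
@Component({
  selector: 'app-ticker',
  template: '{{ ticks }}'
})
export class TickerComponent implements OnInit, OnDestroy {
  ticks = 0;
  tickerSub: Subscription;

  ngOnInit() {
    // without the decorator, this subscription would leak when the component is destroyed
    this.tickerSub = interval(1000).subscribe(() => this.ticks++);
  }

  // the decorator patches ngOnDestroy, so it must be declared (an empty body is fine)
  ngOnDestroy() {}
}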

Wednesday, February 27, 2019

error : NETSDK1061: The project was restored using Microsoft.NETCore.App version 1.0.0, but with current settings, version 2.2.0 would be used instead.

After adding a .NET Core Test project to my solution, the build on the server started to fail. These are the error messages we got:

2019-02-26T13:02:17.1451963Z C:\Build\_work\25\s\Data.Tests\Data.Tests.csproj : warning NU1604: Project dependency Microsoft.NETCore.App does not contain an inclusive lower bound. Include a lower bound in the dependency version to ensure consistent restore results.

2019-02-26T13:02:17.1451963Z C:\Build\_work\25\s\Data.Tests\Data.Tests.csproj : error : NETSDK1061: The project was restored using Microsoft.NETCore.App version 1.0.0, but with current settings, version 2.2.0 would be used instead. To resolve this issue, make sure the same settings are used for restore and for subsequent operations such as build or publish. Typically this issue can occur if the RuntimeIdentifier property is set during build or publish but not during restore. For more information, see https://aka.ms/dotnet-runtime-patch-selection.

The strange thing was that building this project locally worked without any issues.

To fix it, I opened up the Data.Tests.csproj file and explicitly added the RuntimeFrameworkVersion:

<RuntimeFrameworkVersion>2.2.0</RuntimeFrameworkVersion>
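For reference, the property sits in the same <PropertyGroup> as the target framework (a sketch, assuming the test project targets netcoreapp2.2):

<PropertyGroup>
  <TargetFramework>netcoreapp2.2</TargetFramework>
  <RuntimeFrameworkVersion>2.2.0</RuntimeFrameworkVersion>
</PropertyGroup>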

Could this be a bug in the .NET Core Test project template?

Tuesday, February 26, 2019

Azure DevOps Extension for Azure CLI

Microsoft is introducing a new extension for the Azure CLI: the Azure DevOps Extension. This extension adds Pipelines, Boards, Repos, Artifacts and DevOps commands to the Azure CLI 2.0. It replaces the VSTS CLI, which has been deprecated and will no longer receive new features.

Getting started

• Make sure you have the Azure CLI installed. You can check the installed version with:

az --version

• Add the Azure DevOps Extension:

az extension add --name azure-devops

• Run the az login command. This will open up the default browser and load a sign-in page. You can also log in using a PAT if you want.

az login

• Set the default organization and project name. This avoids having to specify them for every command:

az devops configure --defaults organization=https://dev.azure.com/ordina project=SampleProject

• Now you can start exploring the new commands, for example to list your builds:

az pipelines build list -o table

      Monday, February 25, 2019

      Marten 3.4–Full text search support

      I’m a big fan of Marten, the document database on top of PostgreSQL.

      With the Marten 3.4 release last week, we finally got full text search support (together with some other bug fixes and some performance improvements).

PostgreSQL has built-in full text search; with this release this functionality finally becomes available in Marten. Using it is as easy as writing the following LINQ statement:
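A sketch, where BlogPost is an assumed document type and store is your Marten DocumentStore:

using (var session = store.QuerySession())
{
    // Search() is translated to PostgreSQL's full text search functions
    var posts = session.Query<BlogPost>()
        .Where(x => x.Search("pizza"))
        .ToList();
}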

      More information here: http://jasperfx.github.io/marten/documentation/documents/querying/linq/#sec24

Remark: To use this feature, you need PostgreSQL version 10.0 or above, as this is the first version that supports text search functions on jsonb columns, which is also the data type that Marten uses to store its data.

      Friday, February 22, 2019

      Azure DevOps - SonarQube - error CS0006: Metadata file 'Google.Protobuf.dll' could not be found

      Automated deployments are great, until the moment your builds start to fail due to side-effects.

      Last week when trying to build and deploy an application using Azure DevOps, the build failed with the following error messages in the log:

      2019-02-20T20:47:57.0671661Z   CSC : error CS0006: Metadata file 'D:\b\3\_work\_temp\.sonarqube\resources\0\Google.Protobuf.dll' could not be found

      2019-02-20T20:47:57.0671661Z   CSC : error CS0006: Metadata file 'D:\b\3\_work\_temp\.sonarqube\resources\0\SonarAnalyzer.CSharp.dll' could not be found

      2019-02-20T20:47:57.0671661Z   CSC : error CS0006: Metadata file 'D:\b\3\_work\_temp\.sonarqube\resources\0\SonarAnalyzer.dll' could not be found

      I knew that I had seen this error before: https://bartwullems.blogspot.com/2018/11/tfs-build-sonarqube-error.html, but the things that worked then didn’t help.

After rebooting the server, cleaning the build folder and praying to all the known and unknown gods, I finally found a solution (read: workaround) that worked:

      I deleted all the .sonarqube folders found in the ‘_work’ directory of the build agent.

      Anyone who has a ‘real’ solution?

      Thursday, February 21, 2019

      Grit - The power of passion and perseverance

As a consultant I have the privilege to work with a lot of different people, teams and customers. What I always find fascinating is how some people and teams grow and become successful while other teams keep running in circles, failing to improve, despite the fact that all these people are really talented and smart.

So what makes some teams succeed where others fail? My answer is simple: it is about how they handle failure and learn from their mistakes.

      Which brings me to the following TED talk:

      Wednesday, February 20, 2019

      Azure DevOps - Allow Scripts to Access OAuth Token

A specific build task I'm using inside Azure DevOps required the build to access the OAuth token to interact with TFS/Azure DevOps. This option is disabled by default and it took me some time to figure out where I could enable it.

      Here are the (easy once you know it) steps:

      • Go to Pipelines –> Build.

      • Select the build/release pipeline that you want to configure from the list and click on Edit in the right corner

      • On the Edit Build Definition screen, click on the ‘Run on Agent’ section (in my case Agent Job 1)

• In the Properties, you find the 'Allow scripts to access the OAuth token' option in the Additional options section
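Once enabled, the token is exposed to your scripts through the System.AccessToken variable (environment variable SYSTEM_ACCESSTOKEN). A quick PowerShell sketch, where the REST call is just an illustration:

$url = "$($env:SYSTEM_TEAMFOUNDATIONCOLLECTIONURI)_apis/projects?api-version=5.0"
$response = Invoke-RestMethod -Uri $url -Headers @{ Authorization = "Bearer $env:SYSTEM_ACCESSTOKEN" }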

      Tuesday, February 19, 2019

      Git–Lightweight vs Annotated tags

      While investigating some ways to label our releases inside Azure DevOps I discovered that there are in fact 2 types of tags:  lightweight and annotated.

      From the documentation:

      A lightweight tag is very much like a branch that doesn’t change — it’s just a pointer to a specific commit.

      Annotated tags, however, are stored as full objects in the Git database. They’re checksummed; contain the tagger name, email, and date; have a tagging message; and can be signed and verified with GNU Privacy Guard (GPG). It’s generally recommended that you create annotated tags so you can have all this information; but if you want a temporary tag or for some reason don’t want to keep the other information, lightweight tags are available too
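On the command line the difference is a single flag (a quick illustration):

git tag demo                        # lightweight: just a pointer to the current commit
git tag -a demo2 -m "Demo release"  # annotated: stores tagger, date and message as a full object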

Azure DevOps Services and TFS support both annotated and lightweight tags. Go to the Repos section (1) and select Tags from the menu (2). Now you can start searching for both tag types (3):

      When you type a tag name, tags are filtered.

Annotated tags are displayed with a tag name, message, commit, tagger, and creation date (e.g. demo2 in the screenshot above). Lightweight tags are displayed with a tag name and commit (e.g. demo in the screenshot above).

      You can create annotated tags using the web portal, and starting with Visual Studio 2017 Update 6, you can create both lightweight and annotated tags from within Visual Studio:

      Open up Visual Studio and go to Team Explorer. Select Tags from the available options. Click on New Tag to create a new one.

      To create an annotated tag, provide both a name and a message when creating the tag. To create a lightweight tag, omit the message and supply only a name.

      Which tag type should I use?

I prefer to use annotated tags because of the extra metadata they allow you to add. Knowing who created the tag, combined with the option to add some extra comments, makes them far more usable than lightweight tags.

      I’m aware that an annotated tag can be signed as an extra level of security but I never had a reason to start doing this.

      Monday, February 18, 2019

      GraphQL Schema Language Cheat sheet

Although the GraphQL Schema Language is not that hard and the GraphiQL tooling integration helps a lot, I still sometimes have to fall back to the documentation to know how to do something.

To help you even further, you can start using the GraphQL Schema Language Cheat Sheet (more information here: https://wehavefaces.net/graphql-shorthand-notation-cheatsheet-17cd715861b6). This gives you a one-pager in PDF or PNG with the full syntax.

      Friday, February 15, 2019

      GraphQL DotNet–Default values for arguments

In GraphQL type definitions, each field can take zero or more arguments. Every argument passed to a field needs to have a name and a type. In addition, it's also possible to specify default values for arguments.

For example, let's create a query in our schema that returns all products for a category; if no category is specified, we want to return 'Beverages'.
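In the schema language this looks as follows (Product is an assumed type):

type Query {
  products(category: String = "Beverages"): [Product]
}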

But how can you achieve this in GraphQL DotNet using the GraphType first approach?
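One way to do this (a sketch; ProductType and IProductRepository are assumptions) is to set the DefaultValue on the QueryArgument:

public class ProductQuery : ObjectGraphType
{
    public ProductQuery(IProductRepository repository)
    {
        Field<ListGraphType<ProductType>>(
            "products",
            arguments: new QueryArguments(
                new QueryArgument<StringGraphType>
                {
                    Name = "category",
                    DefaultValue = "Beverages" // used when the caller omits the argument
                }),
            resolve: context =>
                repository.GetByCategory(context.GetArgument<string>("category")));
    }
}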

      Thursday, February 14, 2019

      Invoking a GraphQL endpoint from Postman using the application/graphql header

      If your GraphQL endpoint accepts the application/graphql header, invoking a GraphQL endpoint from Postman becomes really easy:

      • Add the header Content-type: application/graphql:

      • Now you can just paste your GraphQL query in the body of the request:

      • And if you execute the request, you get the GraphQL data back:
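For example, with an assumed products field in the schema, the raw body can be as simple as:

{
  products(category: "Beverages") {
    name
    price
  }
}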


      Wednesday, February 13, 2019

      ASP.NET Core - IOptionsMonitor vs IOptionsSnapshot

While preparing some training material about ASP.NET Core, I was wondering about the difference between IOptionsMonitor and IOptionsSnapshot. In contrast to IOptions, both interfaces allow you to track configuration changes during the lifetime of your application.

However, internally both interfaces are constructed completely differently. IOptionsMonitor is registered in the DI container as a singleton, has a CurrentValue property and is capable of detecting changes through an OnChange event subscription. IOptionsSnapshot, on the other hand, is registered as scoped, has a Value property and also detects changes by reading the latest options once per request, but it doesn't have the OnChange event.

So what's the point of having 2 interfaces with completely different signatures when both achieve the same goal: picking up configuration changes at runtime?

The difference between the two (and this also explains the different implementation approach) is that IOptionsSnapshot guarantees that you get the same configuration values during a single request (which explains the scoped lifetime). IOptionsMonitor picks up every change immediately, meaning that if a configuration value is read multiple times during a request, it can return different values.

As a general rule, you should prefer IOptionsSnapshot over IOptionsMonitor.
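To make the difference concrete, a sketch (EmailSettings is an assumed options class):

using Microsoft.Extensions.Options;

public class MailSender
{
    private readonly EmailSettings _settings;

    // IOptionsSnapshot is scoped: it is recomputed once per request,
    // so every read during the same request returns the same values.
    public MailSender(IOptionsSnapshot<EmailSettings> options)
    {
        _settings = options.Value;
    }
}

public class QueueListener
{
    // IOptionsMonitor is a singleton: CurrentValue always reflects the
    // latest configuration, and OnChange notifies you immediately.
    public QueueListener(IOptionsMonitor<EmailSettings> options)
    {
        options.OnChange(settings => { /* react to the new values */ });
        var current = options.CurrentValue;
    }
}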

      Tuesday, February 12, 2019

      Azure DevOps Demo Generator

      I regularly give demos about Azure DevOps. Unfortunately it can take a lot of time to create some good sample data that really allows me to show the power of Azure DevOps.

      Last week I got a great tip from a colleague to help me with this: https://azuredevopsdemogenerator.azurewebsites.net/

      Azure DevOps Demo Generator helps you create projects on your Azure DevOps Organization with pre-populated sample content that includes source code, work items, iterations, service endpoints, build and release definitions based on a template you choose.

      Get started using the Demo Generator V2 now, or follow the simple walkthrough.

      Monday, February 11, 2019

      ElasticSearch–Speed up your bulk indexing

Newly indexed documents in Elasticsearch are not searchable until a refresh occurs. By default, every shard is refreshed once every second, as defined by a dynamic index-level setting named refresh_interval. This forces Elasticsearch to create a new segment every second.

During bulk indexing it is recommended to increase this value, as this allows larger segments to flush and decreases future merge pressure. (Replace $INDEX$ with your index name.)

PUT /$INDEX$/_settings
{
    "index" : {
        "refresh_interval" : "60s"
    }
}

Setting index.number_of_replicas to 0 can also help. This is a tradeoff, as the loss of any shard will cause data loss, but at the same time indexing will be faster since documents are indexed only once.

PUT /$INDEX$/_settings
{
    "index" : {
        "number_of_replicas" : "0"
    }
}

      Once the initial loading is finished, you can set index.refresh_interval and index.number_of_replicas back to their original values:

PUT /$INDEX$/_settings
{
    "index" : {
        "refresh_interval" : "1s",
        "number_of_replicas" : "1"
    }
}

      Friday, February 8, 2019

Serilog–Decrease the application impact while logging

      If you ever had to maintain an application, you know that good logs are your best friend. That is one of the reasons why I’m a big fan of Serilog, a structured logging framework for .NET.

And if you ask me how much logging we need, I would answer that you cannot have too much log data. Of course, all this log data introduces its own challenges, like how to search quickly through all these logs, but also the impact it has on the performance of your application.

      If you start writing thousands of messages a second to the Console, you’ll see your application slowing down.

      To mitigate this problem, most Serilog sinks write messages by default in asynchronous batches to reduce application latency and improve network performance.

Unfortunately there are a few sinks that don't do this by default, for example the Console sink. For these cases, you can use Serilog.Sinks.Async. It provides an async wrapper, WriteTo.Async(), that moves logging onto a worker thread, so that application code can continue executing while file writes etc. proceed in the background.
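Wrapping the Console sink looks like this (a minimal sketch):

Log.Logger = new LoggerConfiguration()
    .WriteTo.Async(a => a.Console()) // console writes now happen on a background worker thread
    .CreateLogger();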

      Thursday, February 7, 2019

      ASP.NET Core IIS InProcess hosting

Before ASP.NET Core 2.2, when hosting an ASP.NET Core application in IIS, ASP.NET Core ran as a separate process using Kestrel as the web server. The ASP.NET Core Module was configured in IIS to behave as a reverse proxy and forward the requests to the Kestrel server. This out-of-process hosting model introduced some extra overhead.

In ASP.NET Core 2.2 this was solved by introducing the InProcess hosting model. If you create a new ASP.NET Core 2.2 application, this is enabled by default. But for existing applications that you want to upgrade to ASP.NET Core 2.2, there are some extra steps to take.

      Let’s walk through these steps:

      Step 1: Update the target framework version to 2.2

      Update the target framework version of your ASP.NET Core app to 2.2.

      <TargetFramework>netcoreapp2.2</TargetFramework>

Step 2: Update Microsoft.AspNetCore.App

      Update the Microsoft.AspNetCore.App metapackage to at least 2.2.

<PackageReference Include="Microsoft.AspNetCore.App" Version="2.2.1" />

      Step 3: Update your Web.Config file

Inside your web.config file you have to update the modules attribute to AspNetCoreModuleV2 and add a hostingModel attribute with the value set to "InProcess":

<?xml version="1.0" encoding="utf-8"?>
      <configuration>
        <system.webServer>
          <handlers>
           <add name="aspNetCore" path="*" verb="*" modules="AspNetCoreModuleV2" resourceType="Unspecified" />
          </handlers>
          <aspNetCore processPath="dotnet" arguments=".\MyApp.dll" hostingModel="InProcess" />
         </system.webServer>
      </configuration>

      Remark: If you don't want to create a web.config, you can also enable this by setting the <AspNetCoreHostingModel> MSBuild property to InProcess in the .csproj.
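In the project file this looks like:

<PropertyGroup>
  <AspNetCoreHostingModel>InProcess</AspNetCoreHostingModel>
</PropertyGroup>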

      Wednesday, February 6, 2019

      ASP.NET Core Scope Validation

While preparing a training about .NET Core, I stumbled over a feature I was unaware existed in ASP.NET Core: scope validation.

      One of the nice things inside ASP.NET Core is that it has built-in support for Dependency Injection.

      Service lifetime

      When registering your dependencies you have 3 service lifetimes to choose from:

      • Transient: Transient lifetime services are created each time they're requested. This lifetime works best for lightweight, stateless services.
      • Scoped: Scoped lifetime services are created once per request.
      • Singleton: Singleton lifetime services are created the first time they're requested (or when ConfigureServices is run and an instance is specified with the service registration). Every subsequent request uses the same instance.

      Simple so far. The problem is that you can shoot yourself in the foot when you start combining multiple lifetimes in the same object tree. 

For example, when you use a scoped service in middleware and inject it via constructor injection, the service will behave as a singleton, because it is injected only once, when the middleware is constructed. (Note: inject the scoped service into the Invoke or InvokeAsync method as a solution.)

      Another example where it can go wrong is when trying to resolve a scoped service from a singleton. This can lead to an incorrect state in your application.

To prevent these kinds of problems from happening, Microsoft introduced scope validation.

      Scope validation

If you are using the default WebHost builder, scope validation is already enabled as one of the steps executed by CreateDefaultBuilder. Behind the scenes it sets ServiceProviderOptions.ValidateScopes to true if the app's environment is Development.

      When ValidateScopes is set to true, the default service provider performs checks to verify that:

      • Scoped services aren't directly or indirectly resolved from the root service provider.
      • Scoped services aren't directly or indirectly injected into singletons.

      The root service provider is created when BuildServiceProvider is called. The root service provider's lifetime corresponds to the app/server's lifetime when the provider starts with the app and is disposed when the app shuts down.

Scoped services are disposed by the container that created them. If a scoped service is created in the root container, the service's lifetime is effectively promoted to singleton because it's only disposed by the root container when the app/server is shut down. Validating service scopes catches these situations.

Remark: If you want to validate scopes for all environments, you can configure the ServiceProviderOptions with UseDefaultServiceProvider on the host builder:
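A sketch for ASP.NET Core 2.x:

public static IWebHostBuilder CreateWebHostBuilder(string[] args) =>
    WebHost.CreateDefaultBuilder(args)
        .UseDefaultServiceProvider((context, options) =>
        {
            // validate scopes in every environment, not only in Development
            options.ValidateScopes = true;
        })
        .UseStartup<Startup>();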

      Tuesday, February 5, 2019

      Impress your colleagues with your knowledge about … .NET Debugging Information

Sometimes when working with C# you discover some hidden gems. Some of them are very useful, others a little bit harder to put to good use. One of those hidden gems that I want to share today is the magic behind debugging in .NET: PDB files.

      Open a project in Visual Studio, right click on it and go to Properties. Go to the Build tab and click on the Advanced… button on the bottom of the screen.

In the Debugging information dropdown you can choose between Full, Pdb-only, or Portable. But what does this mean and what is the difference? Let's find out…

      Debugging information

By configuring the Debugging information setting, you choose what info is generated by the compiler to help you debug your application. It has the following options:

      • none

        Specifies that no debugging information will be generated.

      • full

        Enables attaching a debugger to the running program.

      • pdbonly

        Allows source code debugging when the program is started in the debugger but will only display assembler when the running program is attached to the debugger.

      • portable

        Produces a .PDB file, a non-platform-specific, portable symbol file that provides other tools, especially debuggers, information about what is in the main executable file and how it was produced. This is the most recent cross-platform format for .NET Core.

      • embedded

        Embeds portable symbol information into the assembly. No external .PDB file is produced.

      Which one should I choose?

If you use full, be aware that there is some impact on the speed and size of JIT-optimized code and a small impact on code quality. For development purposes you want the full debugging experience, so full is the logical choice. pdbonly or none is recommended for generating release code.
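Remark: for SDK-style projects you can also set the equivalent DebugType MSBuild property directly in the project file (a small sketch):

<PropertyGroup>
  <DebugType>portable</DebugType>
</PropertyGroup>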

      Monday, February 4, 2019

      TypeScript - Switch to absolute paths

By default when you are using TypeScript imports, you are using relative paths. This leads to long combinations of '../../../../', especially when you have a deeply nested project tree structure. Luckily most IDEs provide some tooling to help you with these paths.

import { Component, OnInit, Input, Output, EventEmitter } from '@angular/core';
import { SearchResponse } from '../../shared/models/searchResponse';
import { SampleService } from '../../../shared/services/sample/sample.service';
import { SearchRequest } from '../../shared/models/searchRequest';
import { ErrorService } from '../../../shared/services/errorService';
import { LoaderService } from '../../../shared/services/loaderService';
import { TableData } from '../../shared/models/tableData';

      By configuring the paths inside your tsconfig file you can switch to:

import { Component, OnInit, Input, Output, EventEmitter } from '@angular/core';
import { SearchResponse } from 'models/searchResponse';
import { SampleService } from 'services/sample/sample.service';
import { SearchRequest } from 'models/searchRequest';
import { ErrorService } from 'services/errorService';
import { LoaderService } from 'services/loaderService';
import { TableData } from 'models/tableData';

And here is how to achieve this by using the paths section inside your tsconfig file:
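A sketch (the exact mappings depend on your folder structure; note that paths requires a baseUrl):

{
  "compilerOptions": {
    "baseUrl": "src",
    "paths": {
      "models/*": ["app/shared/models/*"],
      "services/*": ["app/shared/services/*"]
    }
  }
}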

      Friday, February 1, 2019

      ElasticSearch– Development vs Production Mode

A lot of people are unaware that ElasticSearch can run in 2 modes: development and production mode. So what is the difference between the two?

      Bootstrap Checks

To explain the difference I first have to talk a little bit about bootstrap checks. During the startup of an ElasticSearch node, it validates certain ElasticSearch, JVM and system settings and checks whether they are safe for the operation of ElasticSearch.

      Development vs. production mode

In previous versions of Elasticsearch, a bootstrap check failure was logged as a warning. Unfortunately, users didn't notice these messages. To solve this, Elasticsearch introduced the development and production modes. If Elasticsearch is in development mode, any bootstrap checks that fail appear as warnings in the Elasticsearch log. If Elasticsearch is in production mode, any bootstrap checks that fail will cause Elasticsearch to refuse to start.

      By default, Elasticsearch assumes that you are working in development mode.

      How to switch?

      Ok, now that you understand what the difference is between the 2 modes, it’s time for the next question. How can you switch between the two?

This is completely based on network settings like network.host. If your node does not bind transport to an external interface (the default), it is in development mode. When the node does bind transport to an external interface, it is in production mode. Simple as that!
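For example, this single line in elasticsearch.yml is enough to put a node in production mode (any non-loopback address will do):

network.host: 192.168.1.10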