
Posts

Showing posts from February, 2019

Angular - Auto-unsubscribe

One of the nice but underused features of Angular is its support for decorators (through TypeScript). From the TypeScript documentation: A Decorator is a special kind of declaration that can be attached to a class declaration, method, accessor, property, or parameter. Decorators use the form @expression, where expression must evaluate to a function that will be called at runtime with information about the decorated declaration. A nice example of the usage of decorators is the ngx-auto-unsubscribe library. This class decorator automatically unsubscribes from observable subscriptions when the component is destroyed. An example:
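The embedded example didn't survive the excerpt; a minimal sketch of how the decorator is typically applied (the component and SampleService are made up):

import { Component, OnDestroy, OnInit } from '@angular/core';
import { AutoUnsubscribe } from 'ngx-auto-unsubscribe';
import { Subscription } from 'rxjs';

import { SampleService } from './sample.service'; // hypothetical service

@AutoUnsubscribe()
@Component({
  selector: 'app-sample',
  template: '...'
})
export class SampleComponent implements OnInit, OnDestroy {
  private dataSubscription: Subscription;

  constructor(private sampleService: SampleService) {}

  ngOnInit(): void {
    this.dataSubscription = this.sampleService.getData().subscribe();
  }

  // ngOnDestroy must be present (even empty) so the decorator can hook into it
  ngOnDestroy(): void {}
}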

error : NETSDK1061: The project was restored using Microsoft.NETCore.App version 1.0.0, but with current settings, version 2.2.0 would be used instead.

After adding a .NET Core Test project to my solution, the build on the server started to fail. These are the error messages we got:

2019-02-26T13:02:17.1451963Z C:\Build\_work\25\s\Data.Tests\Data.Tests.csproj : warning NU1604: Project dependency Microsoft.NETCore.App does not contain an inclusive lower bound. Include a lower bound in the dependency version to ensure consistent restore results.

2019-02-26T13:02:17.1451963Z C:\Build\_work\25\s\Data.Tests\Data.Tests.csproj : error : NETSDK1061: The project was restored using Microsoft.NETCore.App version 1.0.0, but with current settings, version 2.2.0 would be used instead. To resolve this issue, make sure the same settings are used for restore and for subsequent operations such as build or publish. Typically this issue can occur if the RuntimeIdentifier property is set during build or publish but not during restore. For more information, see https://aka.ms/dotnet-runtime-patch-selection.

The strange thing was that building
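The excerpt breaks off before the resolution, so the following is general background rather than the author's fix: NU1604/NETSDK1061 usually point at an explicit, unversioned reference to Microsoft.NETCore.App in the test project, which can be pinned to the framework version you actually target, for example:

<!-- hypothetical Data.Tests.csproj fragment: pin the framework version used at restore and build -->
<PropertyGroup>
  <TargetFramework>netcoreapp2.2</TargetFramework>
  <RuntimeFrameworkVersion>2.2.0</RuntimeFrameworkVersion>
</PropertyGroup>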

Azure DevOps Extension for Azure CLI

Microsoft is introducing a new extension for the Azure CLI, the Azure DevOps Extension. This extension adds Pipelines, Boards, Repos, Artifacts and DevOps commands to the Azure CLI 2.0. It replaces the VSTS CLI, which has been deprecated and will no longer receive new features.

Getting started

Install the Azure CLI. You must have at least v2.0.49, which you can verify with the az --version command.

az --version

Add the Azure DevOps Extension:

az extension add --name azure-devops

Run the az login command. This will open up the default browser and load a sign-in page. You can also login using a PAT if you want.

az login

Set the default organization and project name. This saves us from having to pass them with every command:

az devops configure --defaults organization=https://dev.azure.com/ordina project=SampleProject

Now we can try one of the available commands (https://docs.microsoft.com/en-us/cli/azure/ext/azure-devop
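For example (a hypothetical command, not taken from the post, since the excerpt is cut off before the author's example), listing the Git repositories in the default project:

az repos list --output table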

Marten 3.4–Full text search support

I’m a big fan of Marten, the document database on top of PostgreSQL. With the Marten 3.4 release last week, we finally got full text search support (together with some other bug fixes and some performance improvements). PostgreSQL has built-in full text search; with this release that functionality finally becomes available in Marten. Using it is as easy as writing the following LINQ statement: More information here: http://jasperfx.github.io/marten/documentation/documents/querying/linq/#sec24 Remark: To use this feature, you will need PostgreSQL version 10.0 or above, as this is the first version that supports full text search functions on jsonb columns - this is also the data type that Marten uses to store its data.
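The embedded LINQ statement didn't survive the excerpt; a minimal sketch along the lines of the Marten documentation (BlogPost is a made-up document type):

using Marten;

// with an open IQuerySession called session:
var posts = session.Query<BlogPost>()
    .Where(x => x.PlainTextSearch("sql"))   // translated to PostgreSQL full text search
    .ToList();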

Azure DevOps - SonarQube - error CS0006: Metadata file 'Google.Protobuf.dll' could not be found

Automated deployments are great, until the moment your builds start to fail due to side-effects. Last week when trying to build and deploy an application using Azure DevOps, the build failed with the following error messages in the log:

2019-02-20T20:47:57.0671661Z   CSC : error CS0006: Metadata file 'D:\b\3\_work\_temp\.sonarqube\resources\0\Google.Protobuf.dll' could not be found
2019-02-20T20:47:57.0671661Z   CSC : error CS0006: Metadata file 'D:\b\3\_work\_temp\.sonarqube\resources\0\SonarAnalyzer.CSharp.dll' could not be found
2019-02-20T20:47:57.0671661Z   CSC : error CS0006: Metadata file 'D:\b\3\_work\_temp\.sonarqube\resources\0\SonarAnalyzer.dll' could not be found

I knew that I had seen this error before: https://bartwullems.blogspot.com/2018/11/tfs-build-sonarqube-error.html, but the things that worked then didn’t help. After rebooting the server, cleaning the build folder, praying to all the known and unknown gods, I finally fo

Grit - The power of passion and perseverance

As a consultant I have the privilege to work with a lot of different people, teams and customers. What I always find fascinating is how some people and teams grow and become successful, while other teams keep running in circles and fail to improve, despite the fact that all these people are really talented and smart. So why do some teams succeed where others fail? My answer is simple: it is about how they handle failure and learn from their mistakes. Which brings me to the following TED talk:

Azure DevOps - Allow Scripts to Access OAuth Token

For a specific build task I’m using inside Azure DevOps, I needed to allow the build to access the OAuth token to interact with TFS/Azure DevOps. This option is disabled by default and it took me some time to figure out where I could enable it. Here are the (easy once you know them) steps:

Go to Pipelines –> Build.

Select the build/release pipeline that you want to configure from the list and click on Edit in the right corner.

On the Edit Build Definition screen, click on the ‘Run on Agent’ section (in my case Agent Job 1).

In the Properties, you find the ‘Allow scripts to access the OAuth token’ option in the Additional options section.
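Once the option is enabled, scripts running in the build can read the token from the SYSTEM_ACCESSTOKEN environment variable. A hypothetical inline PowerShell step (not from the post) calling the Azure DevOps REST API with it:

$headers = @{ Authorization = "Bearer $env:SYSTEM_ACCESSTOKEN" }
$url = "$env:SYSTEM_TEAMFOUNDATIONCOLLECTIONURI$env:SYSTEM_TEAMPROJECT/_apis/build/builds?api-version=5.0"
Invoke-RestMethod -Uri $url -Headers $headers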

Git–Lightweight vs Annotated tags

While investigating some ways to label our releases inside Azure DevOps I discovered that there are in fact 2 types of tags: lightweight and annotated. From the documentation: A lightweight tag is very much like a branch that doesn’t change — it’s just a pointer to a specific commit. Annotated tags, however, are stored as full objects in the Git database. They’re checksummed; contain the tagger name, email, and date; have a tagging message; and can be signed and verified with GNU Privacy Guard (GPG). It’s generally recommended that you create annotated tags so you can have all this information; but if you want a temporary tag or for some reason don’t want to keep the other information, lightweight tags are available too. Azure DevOps Services and TFS support both annotated and lightweight tags. Go to the Repos section (1) and select Tags from the menu (2). Now you can start searching for both tag types (3): When you type a tag name, tags are filtered. Annotated t
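For reference, this is how both tag types are created from the command line (the tag names are just examples):

git tag v1.4-rc1                      # lightweight: only a pointer to the current commit
git tag -a v1.4 -m "Release 1.4"      # annotated: full object with tagger, date and message
git push origin v1.4                  # tags are not pushed by default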

GraphQL Schema Language Cheat sheet

Although the GraphQL Schema Language is not that hard and the GraphiQL tooling integration helps a lot, I still sometimes have to fall back to the documentation to know how to do something. To help you even further, you can start using the GraphQL Schema Language Cheat Sheet (more information here: https://wehavefaces.net/graphql-shorthand-notation-cheatsheet-17cd715861b6). This gives you a one-pager in PDF or PNG with the full syntax.
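To give an idea of the shorthand notation the cheat sheet covers, a small made-up schema fragment:

type Product {
  id: ID!
  name: String!
  price: Float
}

type Query {
  products(category: String): [Product!]!
}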

GraphQL DotNet–Default values for arguments

In GraphQL type definitions, each field can take zero or more arguments. Every argument passed to a field needs to have a name and a type. Next to this, it’s also possible to specify default values for arguments. For example, let’s create a query in our schema that returns all products for a category; if no category is specified we want to return ‘Beverages’: But how can you achieve this in GraphQL DotNet using the GraphType-first approach?
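The excerpt stops before the answer; a sketch of the GraphType-first approach, assuming a ProductType graph type and an IProductRepository (both made up for the example):

using GraphQL.Types;

public class StoreQuery : ObjectGraphType
{
    public StoreQuery(IProductRepository productRepository)
    {
        Field<ListGraphType<ProductType>>(
            "products",
            arguments: new QueryArguments(
                new QueryArgument<StringGraphType>
                {
                    Name = "category",
                    // used when the caller omits the argument
                    DefaultValue = "Beverages"
                }),
            resolve: context =>
            {
                var category = context.GetArgument<string>("category");
                return productRepository.GetByCategory(category);
            });
    }
}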

Invoking a GraphQL endpoint from Postman using the application/graphql header

If your GraphQL endpoint accepts the application/graphql content type, invoking it from Postman becomes really easy: Add the header Content-Type: application/graphql: Now you can just paste your GraphQL query in the body of the request: And if you execute the request, you get the GraphQL data back:
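The same call outside Postman, as a hypothetical curl request (endpoint and fields are made up):

curl -X POST https://example.com/graphql \
  -H "Content-Type: application/graphql" \
  -d '{ products { name price } }'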

ASP.NET Core - IOptionsMonitor vs IOptionsSnapshot

While preparing some training material about ASP.NET Core, I was wondering about the difference between IOptionsMonitor and IOptionsSnapshot. In contrast to IOptions, both interfaces allow you to track configuration changes during the lifetime of your application. However, internally both interfaces are constructed completely differently: IOptionsMonitor is registered in the DI container as a singleton, has a CurrentValue property and is capable of detecting changes through an OnChange event subscription. On the other hand, IOptionsSnapshot is registered as scoped, has a Value property and also detects changes by re-reading the options for each request, but it doesn't have the OnChange event. So what’s the point of having 2 interfaces with completely different signatures when both achieve the same goal, picking up configuration changes at runtime? The difference between the two (and this also explains the different implementation approach) is that IOptionsS
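A minimal sketch of how both are consumed, assuming a MySettings class bound to a configuration section of the same name (all names are made up):

using System;
using Microsoft.Extensions.Options;

public class MySettings
{
    public string Title { get; set; }
}

// in Startup.ConfigureServices:
//   services.Configure<MySettings>(Configuration.GetSection("MySettings"));

// suitable for singleton services: always exposes the latest value
public class MonitorConsumer
{
    public MonitorConsumer(IOptionsMonitor<MySettings> monitor)
    {
        var current = monitor.CurrentValue;
        monitor.OnChange(settings => Console.WriteLine($"Title changed to {settings.Title}"));
    }
}

// scoped: the value is read once per request and stays stable during that request
public class SnapshotConsumer
{
    public SnapshotConsumer(IOptionsSnapshot<MySettings> snapshot)
    {
        var perRequest = snapshot.Value;
    }
}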

Azure DevOps Demo Generator

I regularly give demos about Azure DevOps. Unfortunately it can take a lot of time to create some good sample data that really allows me to show the power of Azure DevOps. Last week I got a great tip from a colleague to help me with this: https://azuredevopsdemogenerator.azurewebsites.net/ Azure DevOps Demo Generator helps you create projects on your Azure DevOps Organization with pre-populated sample content that includes source code, work items, iterations, service endpoints, build and release definitions based on a template you choose. Get started using the Demo Generator V2 now , or follow the simple walkthrough .

ElasticSearch–Speed up your bulk indexing

New indexed documents in ElasticSearch are not searchable until a refresh occurs. By default, every shard is refreshed once every second, as defined by a dynamic index level setting named refresh_interval. This forces Elasticsearch to create a new segment every second. During bulk indexing it is recommended to increase this value. This allows larger segments to flush and decreases future merge pressure. (Replace $INDEX$ with your index name.)

PUT /$INDEX$/_settings
{
    "index" : {
        "refresh_interval" : "60s"
    }
}

What also can help is setting index.number_of_replicas to 0. This is a tradeoff, as the loss of any shard will cause data loss, but at the same time indexing will be faster since documents will be indexed only once.

PUT /$INDEX$/_settings
{
    "index" : {
        "number_of_replicas" : "0"
    }
}

Once the initial loading is finished, you can set index.refresh_interval
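The excerpt is cut off here; presumably it goes on to restore both settings once the bulk load is done, along these lines (the values shown are the defaults, adjust to your own setup):

PUT /$INDEX$/_settings
{
    "index" : {
        "refresh_interval" : "1s",
        "number_of_replicas" : "1"
    }
}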

Serilog–Decrease the application impact while logging

If you ever had to maintain an application, you know that good logs are your best friend. That is one of the reasons why I’m a big fan of Serilog, a structured logging framework for .NET. And if you ask me how much logging we need, I would answer that you cannot have too much log data. Of course all this log data introduces its own challenges, like how you can search quickly through all these logs, but also the impact it has on the performance of your application. If you start writing thousands of messages a second to the Console, you’ll see your application slowing down. To mitigate this problem, most Serilog sinks write messages by default in asynchronous batches to reduce application latency and improve network performance. Unfortunately there are a few sinks that don’t do this by default, for example the Console sink. For these cases, you can use Serilog.Sinks.Async. It provides an async wrapper WriteTo.Async() that moves logging onto a worker thread, so that application
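A minimal sketch of wrapping the Console sink with Serilog.Sinks.Async:

using Serilog;

// wraps the (synchronous) Console sink in a background worker thread
Log.Logger = new LoggerConfiguration()
    .WriteTo.Async(a => a.Console())
    .CreateLogger();

Log.Information("Hello from the async console sink");

Log.CloseAndFlush(); // make sure queued events are written before shutdown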

ASP.NET Core IIS InProcess hosting

Before ASP.NET Core 2.2 was released, when hosting an ASP.NET Core application in IIS, what happened behind the scenes was that ASP.NET Core ran as a separate process using Kestrel as the web server. The ASP.NET Core Module was configured in IIS to behave as a reverse proxy and forward the requests to the Kestrel server. This out-of-process hosting model introduced some extra overhead. In ASP.NET Core 2.2 this was solved by introducing the InProcess hosting model. If you create a new ASP.NET Core 2.2 application, this is enabled by default. But for existing applications that you want to upgrade to ASP.NET Core 2.2 you have some extra steps to do. Let’s walk through these steps:

Step 1: Update the target framework version to 2.2

Update the target framework version of your ASP.NET Core app to 2.2.

<TargetFramework>netcoreapp2.2</TargetFramework>

Step 2: Update Microsoft.AspNetCore.App

Update the Microsoft.AspNetCore.App metapackage to at least 2.2.
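The excerpt stops here; the in-process model itself is switched on through the AspNetCoreHostingModel property, so a csproj covering the steps above would look roughly like this (a sketch, not taken from the post):

<PropertyGroup>
  <TargetFramework>netcoreapp2.2</TargetFramework>
  <AspNetCoreHostingModel>InProcess</AspNetCoreHostingModel>
</PropertyGroup>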

ASP.NET Core Scope Validation

While preparing a training about .NET Core I stumbled upon a feature I was unaware existed in ASP.NET Core: scope validation. One of the nice things inside ASP.NET Core is that it has built-in support for Dependency Injection.

Service lifetime

When registering your dependencies you have 3 service lifetimes to choose from:

Transient: Transient lifetime services are created each time they're requested. This lifetime works best for lightweight, stateless services.

Scoped: Scoped lifetime services are created once per request.

Singleton: Singleton lifetime services are created the first time they're requested (or when ConfigureServices is run and an instance is specified with the service registration). Every subsequent request uses the same instance.

Simple so far. The problem is that you can shoot yourself in the foot when you start combining multiple lifetimes in the same object tree. For example when using a scoped service in a middleware, and you
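To make the foot-gun concrete, a small sketch (all type names made up) of the kind of object tree that scope validation catches; with scope validation enabled (the default in the Development environment), resolving the singleton throws instead of silently capturing the scoped instance:

using Microsoft.Extensions.DependencyInjection;

public class ScopedDependency { }

// a singleton capturing a scoped dependency: an invalid object graph
public class SingletonService
{
    public SingletonService(ScopedDependency scoped) { }
}

public class Program
{
    public static void Main()
    {
        var services = new ServiceCollection();
        services.AddScoped<ScopedDependency>();
        services.AddSingleton<SingletonService>();

        // scope validation is what ASP.NET Core turns on by default in Development
        var provider = services.BuildServiceProvider(validateScopes: true);

        // throws InvalidOperationException: cannot consume scoped service from singleton
        provider.GetRequiredService<SingletonService>();
    }
}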

Impress your colleagues with your knowledge about … .NET Debugging Information

Sometimes when working with C# you discover some hidden gems. Some of them are very useful, others are a little bit harder to put to good use. One of those hidden gems that I want to share today is the magic behind debugging in .NET: PDB files. Open a project in Visual Studio, right click on it and go to Properties. Go to the Build tab and click on the Advanced… button at the bottom of the screen. Under Debugging information you can choose between Full, Pdb-only, or Portable. But what does this mean and what is the difference? Let’s find out…

Debugging information

By configuring the Debugging information setting, you can choose what info is generated by the compiler to help you debug your application. It has the following options:

none: Specifies that no debugging information will be generated.

full: Enables attaching a debugger to the running program.

pdbonly: Allows source code debugging whe
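For reference (not part of the truncated excerpt), the same choice can be made directly in the project file through the DebugType MSBuild property:

<!-- sketch: equivalent setting in the csproj -->
<PropertyGroup>
  <DebugType>portable</DebugType> <!-- other values: full, pdbonly, embedded, none -->
</PropertyGroup>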

TypeScript - Switch to absolute paths

By default when you are using TypeScript imports, you are using relative paths. This leads to long combinations of '../../../../', especially when you have a deeply nested project tree structure. Luckily most IDEs provide some tooling to help you with these paths.

import { Component, OnInit, Input, Output, EventEmitter } from '@angular/core';
import { SearchResponse } from '../../shared/models/searchResponse';
import { SampleService } from '../../../shared/services/sample/sample.service';
import { SearchRequest } from '../../shared/models/searchRequest';
import { ErrorService } from '../../../shared/services/errorService';
import { LoaderService } from '../../../shared/services/loaderService';
import { TableData } from '../../shared/models/tableData';

By configuring the paths inside your tsconfig file you can switch to:

import { Component, OnInit, Input, Output, EventEmitter } from '@a
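The excerpt stops mid-example; a minimal tsconfig.json sketch of the mechanism, with a made-up '@shared' alias:

{
  "compilerOptions": {
    "baseUrl": "src",
    "paths": {
      "@shared/*": ["app/shared/*"]
    }
  }
}

With that in place, an import like '../../../shared/services/errorService' can be written as '@shared/services/errorService'.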

ElasticSearch– Development vs Production Mode

A lot of people are unaware that ElasticSearch can run in 2 modes: development and production mode. So what is the difference between the 2 modes?

Bootstrap checks

To explain the difference I first have to talk a little bit about bootstrap checks. During the startup of an ElasticSearch node, it validates certain ElasticSearch, JVM and system settings and checks whether they are safe for the operation of ElasticSearch.

Development vs. production mode

In previous versions of Elasticsearch, a bootstrap check failure was logged as a warning. Unfortunately users didn’t notice these messages. To solve this, ElasticSearch introduced the development and production mode. If Elasticsearch is in development mode, any bootstrap checks that fail appear as warnings in the Elasticsearch log. If Elasticsearch is in production mode, any bootstrap checks that fail will cause Elasticsearch to refuse to start. By default, Elasticsearch assumes that you are working in development mode. How to
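The excerpt is cut off; as background (my summary, not the post's), the mode is derived from the network configuration: as soon as the node binds to a non-loopback address it is treated as production and the bootstrap checks become fatal. Illustrated in elasticsearch.yml:

# default (loopback only) -> development mode, failed checks are only logged
# network.host: 127.0.0.1

# binding to an external interface -> production mode, failed checks abort startup
network.host: 192.168.1.10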