Tuesday, December 24, 2019

SQL Server–Activate Memory Optimized tables

When trying to create a Memory Optimized table in SQL Server, I got the following error message:

Cannot create memory optimized tables. To create memory optimized tables, the database must have a MEMORY_OPTIMIZED_FILEGROUP that is online and has at least one container.

To fix this I first had to add a new memory optimized filegroup to the database:

ALTER DATABASE OntSessionTmp ADD FILEGROUP ontsessiontmp_mod CONTAINS MEMORY_OPTIMIZED_DATA

After creating the filegroup I had to link a container:

ALTER DATABASE ontsessiontmp ADD FILE (name='ontsessiontmp_mod1', filename='F:\Program Files\Microsoft SQL Server\MSSQL13.ONTINST2\MSSQL\DATA\ontsessiontmp_mod1') TO FILEGROUP ontsessiontmp_mod

Now I could switch a table to a Memory Optimized version. Yes!
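For reference, a minimal memory-optimized table could then look like this (table and column names are hypothetical):

```sql
CREATE TABLE dbo.SessionCache
(
    -- memory-optimized tables require at least one index
    SessionId INT NOT NULL PRIMARY KEY NONCLUSTERED,
    Payload NVARCHAR(MAX) NULL,
    ExpiresAt DATETIME2 NOT NULL
)
WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA);
```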

Monday, December 23, 2019

Install .NET Core 3.1 SDK on Ubuntu 18.04 inside WSL

I’m a big fan of WSL (Windows Subsystem for Linux) to test my .NET Core applications on Linux.

Recently I tried to install the .NET Core 3.1 SDK on my Ubuntu distribution inside WSL:

bawu@ORD02476:~$ sudo apt-get install dotnet-sdk-3.1

Reading package lists... Done

Building dependency tree

Reading state information... Done

E: Unable to locate package dotnet-sdk-3.1

E: Couldn't find any package by glob 'dotnet-sdk-3.1'

E: Couldn't find any package by regex 'dotnet-sdk-3.1'

This didn’t work: apt couldn’t find the .NET Core 3.1 SDK in its package lists.

I first tried to refresh the packages list:

bawu@ORD02476:~$ sudo apt update

0% [Connecting to archive.ubuntu.com] [Connecting to security.ubuntu.com]

Hit:1 http://archive.ubuntu.com/ubuntu bionic InRelease

Get:2 http://archive.ubuntu.com/ubuntu bionic-updates InRelease [88.7 kB]

Get:3 http://archive.ubuntu.com/ubuntu bionic-backports InRelease [74.6 kB]

Get:4 http://security.ubuntu.com/ubuntu bionic-security InRelease [88.7 kB]

Get:5 http://archive.ubuntu.com/ubuntu bionic/universe amd64 Packages [8570 kB]

Get:6 http://security.ubuntu.com/ubuntu bionic-security/main amd64 Packages [593 kB]

Get:7 http://security.ubuntu.com/ubuntu bionic-security/main Translation-en [194 kB]

Get:8 http://security.ubuntu.com/ubuntu bionic-security/restricted amd64 Packages [15.1 kB]

Get:9 http://security.ubuntu.com/ubuntu bionic-security/restricted Translation-en [4684 B]

Get:10 http://security.ubuntu.com/ubuntu bionic-security/universe amd64 Packages [627 kB]

Get:11 http://security.ubuntu.com/ubuntu bionic-security/universe Translation-en [210 kB]

Get:12 http://security.ubuntu.com/ubuntu bionic-security/multiverse amd64 Packages [6120 B]

Get:13 http://security.ubuntu.com/ubuntu bionic-security/multiverse Translation-en [2600 B]

Get:14 http://archive.ubuntu.com/ubuntu bionic/universe Translation-en [4941 kB]

Get:15 http://archive.ubuntu.com/ubuntu bionic/multiverse amd64 Packages [151 kB]

Get:16 http://archive.ubuntu.com/ubuntu bionic/multiverse Translation-en [108 kB]

Get:17 http://archive.ubuntu.com/ubuntu bionic-updates/main amd64 Packages [817 kB]

Get:18 http://archive.ubuntu.com/ubuntu bionic-updates/main Translation-en [288 kB]

Get:19 http://archive.ubuntu.com/ubuntu bionic-updates/restricted amd64 Packages [24.1 kB]

Get:20 http://archive.ubuntu.com/ubuntu bionic-updates/restricted Translation-en [6620 B]

Get:21 http://archive.ubuntu.com/ubuntu bionic-updates/universe amd64 Packages [1033 kB]

Get:22 http://archive.ubuntu.com/ubuntu bionic-updates/universe Translation-en [319 kB]

Get:23 http://archive.ubuntu.com/ubuntu bionic-updates/multiverse amd64 Packages [9284 B]

Get:24 http://archive.ubuntu.com/ubuntu bionic-updates/multiverse Translation-en [4508 B]

Get:25 http://archive.ubuntu.com/ubuntu bionic-backports/main amd64 Packages [2512 B]

Get:26 http://archive.ubuntu.com/ubuntu bionic-backports/main Translation-en [1644 B]

Get:27 http://archive.ubuntu.com/ubuntu bionic-backports/universe amd64 Packages [4028 B]

Get:28 http://archive.ubuntu.com/ubuntu bionic-backports/universe Translation-en [1856 B]

Fetched 18.2 MB in 27s (661 kB/s)

Reading package lists... Done

Building dependency tree

Reading state information... Done

131 packages can be upgraded. Run 'apt list --upgradable' to see them.

But this didn’t make any difference:

bawu@ORD02476:~$ apt search dotnet-sdk

Sorting... Done

Full Text Search... Done

The trick is to first let the package manager know that it should include packages from ‘packages.microsoft.com’:

bawu@ORD02476:~$ wget -q https://packages.microsoft.com/config/ubuntu/18.04/packages-microsoft-prod.deb -O packages-microsoft-prod.deb

bawu@ORD02476:~$ sudo dpkg -i packages-microsoft-prod.deb

Selecting previously unselected package packages-microsoft-prod.

(Reading database ... 28645 files and directories currently installed.)

Preparing to unpack packages-microsoft-prod.deb ...

Unpacking packages-microsoft-prod (1.0-ubuntu18.04.2) ...

Setting up packages-microsoft-prod (1.0-ubuntu18.04.2) ...

If we now update the package manager, it reads the packages from Microsoft as well:

bawu@ORD02476:~$ sudo apt-get update

Hit:1 http://archive.ubuntu.com/ubuntu bionic InRelease

Hit:2 http://archive.ubuntu.com/ubuntu bionic-updates InRelease

Get:3 https://packages.microsoft.com/ubuntu/18.04/prod bionic InRelease [4003 B]

Hit:4 http://archive.ubuntu.com/ubuntu bionic-backports InRelease

Hit:5 http://security.ubuntu.com/ubuntu bionic-security InRelease

Get:6 https://packages.microsoft.com/ubuntu/18.04/prod bionic/main amd64 Packages [85.1 kB]

Fetched 89.1 kB in 14s (6320 B/s)

Reading package lists... Done

And finally we find the .NET Core SDK when searching for packages:

bawu@ORD02476:~$ apt search dotnet-sdk

Sorting... Done

Full Text Search... Done

dotnet-sdk-2.1/bionic 2.1.802-1 amd64

  Microsoft .NET Core SDK 2.1.802

dotnet-sdk-2.1.105/bionic 2.1.105-1 amd64

  Microsoft .NET Core SDK - 2.1.105

dotnet-sdk-2.1.200/bionic 2.1.200-1 amd64

  Microsoft .NET Core SDK - 2.1.200

dotnet-sdk-2.1.201/bionic 2.1.201-1 amd64

  Microsoft .NET Core SDK - 2.1.201

dotnet-sdk-2.1.202/bionic 2.1.202-1 amd64

  Microsoft .NET Core SDK - 2.1.202

dotnet-sdk-2.1.300-preview2-008533/bionic 2.1.300-preview2-008533-1 amd64

  Microsoft .NET Core SDK 2.1.300 - Preview

dotnet-sdk-2.1.300-rc1-008673/bionic 2.1.300-rc1-008673-1 amd64

  Microsoft .NET Core SDK 2.1.300 - rc1

dotnet-sdk-2.2/bionic 2.2.402-1 amd64

  Microsoft .NET Core SDK 2.2.402

dotnet-sdk-3.0/bionic 3.0.101-1 amd64

  Microsoft .NET Core SDK 3.0.101

dotnet-sdk-3.1/bionic 3.1.100-1 amd64

  Microsoft .NET Core SDK 3.1.100
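With the Microsoft feed registered, the original install command now succeeds (package name as shown in the search output above):

```shell
sudo apt-get install -y dotnet-sdk-3.1
dotnet --version
```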

Friday, December 20, 2019

Delete a Windows Service

I’m so spoiled by TopShelf that I no longer remember how to remove a Windows Service the old-fashioned way.

If you have PowerShell 6 or higher, you can do the following:

Remove-Service -Name "Your Service Name"

An alternative is to directly use the Service Control Manager:

sc.exe delete "Your Service Name"

Don’t forget the ‘.exe’ when invoking the Service Control Manager inside a PowerShell command window (in Windows PowerShell, sc on its own is an alias for Set-Content).

More information: https://docs.microsoft.com/en-us/dotnet/framework/windows-services/how-to-install-and-uninstall-services

Thursday, December 19, 2019

RedisTimeoutException - High ‘in’ value

While investigating a performance issue, we noticed a lot of RedisTimeoutExceptions inside our logs:

2019-12-15 16:26:57.425 +01:00 [ERR] Connection id "0HLS1E3OEP520", Request id "0HLS1E3OEP520:0000001D": An unhandled exception was thrown by the application.

StackExchange.Redis.RedisTimeoutException: Timeout performing PEXPIRE CookiesStorageAuthSessionStore-4a07a1bb-04e0-442d-9223-e1612967bf2b, inst: 2, queue: 8, qu: 0, qs: 8, qc: 0, wr: 0, wq: 0, in: 14411, ar: 0, clientName: SERVER1, serverEndpoint: Unspecified/redis:6379, keyHashSlot: 4822 (Please take a look at this article for some common client-side issues that can cause timeouts: http://stackexchange.github.io/StackExchange.Redis/Timeouts)

You would think that this indicates there is a performance issue in Redis but this turned out not to be the case.

Let’s have a second look at the error message and especially the strange parameters: inst: 2, queue: 8, qu: 0, qs: 8, qc: 0, wr: 0, wq: 0, in: 14411, ar: 0

In this situation you have to notice the high ‘in’ value. This value tells us how much data is sitting in the client’s socket kernel buffer.  This indicates that the data has arrived at the local machine but has not been read by the application yet.

So the problem should not be sought inside Redis but in the application (or server) that is consuming Redis.

Wednesday, December 18, 2019

OpenLayers - Use SVG icons for markers

On one of my projects we are using OpenLayers to visualize geographical data. I got stuck when I tried to show markers on a map by using SVG icons.
My first attempt looked like this:
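The code was essentially an Icon style pointing at an SVG file; a sketch of that first attempt (file name is hypothetical):

```javascript
import Feature from 'ol/Feature';
import Point from 'ol/geom/Point';
import { Style, Icon } from 'ol/style';

const marker = new Feature({ geometry: new Point([0, 0]) });
marker.setStyle(new Style({
  // SVG file without explicit width/height -> renders as a black triangle
  image: new Icon({ src: 'img/marker.svg' })
}));
```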


But on the map I only got a weird black triangle.😕

I tried a lot of alternatives but nothing made any difference. In the end I found out that I could fix it by specifying a size (width and height) inside my SVG file:
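A fixed SVG declares explicit dimensions on the root element; something like this minimal, made-up marker:

```xml
<svg xmlns="http://www.w3.org/2000/svg" width="32" height="32" viewBox="0 0 32 32">
  <circle cx="16" cy="16" r="14" fill="#d33" />
</svg>
```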


Remark: Changing the scale and size properties on the style seems to have no effect when using SVGs. So make sure to set an appropriate size in the SVG file.

Tuesday, December 17, 2019

Disable transformations in ASP.NET Core

By default when you add a web.config file to an ASP.NET Core application, the config file is transformed with the correct processPath and arguments.

This can be quite annoying especially if you want to apply some web config transformation magic.

Inside the documentation, I found that you can prevent the Web SDK from transforming the web.config file by setting the <IsTransformWebConfigDisabled> property in the project file:
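The documented snippet boils down to a single property (place it in an existing or new PropertyGroup):

```xml
<PropertyGroup>
  <IsTransformWebConfigDisabled>true</IsTransformWebConfigDisabled>
</PropertyGroup>
```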

This disabled the web.config transformation behavior but didn’t stop Visual Studio from updating the web.config file.

A colleague sent me another (undocumented?) flag <ANCMPreConfiguredForIIS> :
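Setting this flag looks the same; as it appears to be undocumented, treat it as an assumption verified only against our setup:

```xml
<PropertyGroup>
  <ANCMPreConfiguredForIIS>true</ANCMPreConfiguredForIIS>
</PropertyGroup>
```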

Remark: This setting only works when you are running in IIS InProcess.

Monday, December 16, 2019

Change password in another domain

As a consultant I’m active at multiple clients. For each of these clients I get a domain account to be able to log in to their systems. But as they all enforce a password policy, I have to update a lot of passwords every month (thank god password managers exist).

Here is a quick tip if you want to change the password of an Active Directory account when your PC isn’t part of the same domain:

You can still change your password if your PC is able to talk to the domain controller:

  • Hit CTRL-ALT-DELETE
  • Select Change a Password
  • You’ll see the Change a Password screen where the focus is on the old password field. What is not immediately obvious is that you can also change the username (and domain!) in the first field. By entering a different username there, you can change the password directly for any account you have access to.

Friday, December 13, 2019

Transport Tycoon exercises for DDD

While searching something on Github I stumbled over the following repo: https://github.com/Softwarepark/exercises/blob/master/transport-tycoon.md

This is a set of Domain-Driven Design (DDD) exercises. They take place in the universe of the Transport Tycoon. It is a game "in which the player acts as an entrepreneur in control of a transport company, and can compete against rival companies to make as much profit as possible by transporting passengers and various goods by road, rail, sea and air."

If you want to learn about DDD or practice your DDD skills, this is a great way to start…

Thursday, December 12, 2019

ASP.NET Core - Load session state asynchronously

While browsing through the ASP.NET Core documentation I noticed the following section: Load session state asynchronously.

The default session provider in ASP.NET Core loads session records from the underlying IDistributedCache backing store asynchronously only if the ISession.LoadAsync method is explicitly called before the TryGetValue, Set, or Remove methods. If LoadAsync isn't called first, the underlying session record is loaded synchronously, which can incur a performance penalty at scale.

To have apps enforce this pattern, wrap the DistributedSessionStore and DistributedSession implementations with versions that throw an exception if the LoadAsync method isn't called before TryGetValue, Set, or Remove. Register the wrapped versions in the services container.

To avoid this performance penalty, I created 2 extension methods that do a LoadAsync before reading or writing the session state:
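The extension methods themselves weren’t included above; a minimal sketch of what they might look like (method names are my own):

```csharp
public static class SessionExtensions
{
    public static async Task<string> GetStringAsync(this ISession session, string key)
    {
        await session.LoadAsync();   // load the session record asynchronously first
        return session.GetString(key);
    }

    public static async Task SetStringAsync(this ISession session, string key, string value)
    {
        await session.LoadAsync();
        session.SetString(key, value);
    }
}
```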

Wednesday, December 11, 2019

Autofac–Configure Container in .NET Core WorkerService

.NET Core 3.0 introduced a new WorkerService template that can be used as a starting point for long running service apps. As the worker service template doesn’t use a Startup.cs file like a traditional ASP.NET Core application, it wasn’t immediately obvious to me where I had to configure my IoC container.

So a quick tip for my future self, you can do this using the ConfigureContainer() method on the HostBuilder:
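A sketch, assuming the Autofac.Extensions.DependencyInjection package (MyService/IMyService are placeholders):

```csharp
public static IHostBuilder CreateHostBuilder(string[] args) =>
    Host.CreateDefaultBuilder(args)
        // swap the built-in container for Autofac
        .UseServiceProviderFactory(new AutofacServiceProviderFactory())
        .ConfigureContainer<ContainerBuilder>(builder =>
        {
            builder.RegisterType<MyService>().As<IMyService>();
        })
        .ConfigureServices(services =>
        {
            services.AddHostedService<Worker>();
        });
```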

Remark: The example above is using Autofac but the approach is similar for other containers.

Tuesday, December 10, 2019

NuGet Restore error - Response status code does not indicate success: 401 (Unauthorized)

When trying to build a project in Visual Studio, it failed while downloading the nuget packages from our internal Azure Artifacts nuget store.

In the logs I could find the following error message:

C:\Program Files\dotnet\sdk\3.0.100\NuGet.targets(123,5): error : Failed to retrieve information about 'Example.WebApi.Client' from remote source 'http://tfs:8080/tfs/DefaultCollection/_packaging/892779dc-d854-4c9f-8b26-833d52585ae6/nuget/v3/flat2/example.webapi.client/index.json'. [C:\Projects\MapSU\MapSU.Server.sln]

C:\Program Files\dotnet\sdk\3.0.100\NuGet.targets(123,5): error :   Response status code does not indicate success: 401 (Unauthorized). [C:\Projects\MapSU\MapSU.Server.sln]

Directly accessing the Azure Artifacts URL worked without a problem, but when I tried to do this through Visual Studio or through the command line it failed with the error above.

I was able to solve the problem by removing the ‘vscredentials’ in the Windows Credentials manager that referred to the tfs server:

  • Open the search bar in windows and search for Credentials Manager
  • Click on Manage Windows Credentials.
  • In the Credential Manager go to ‘Windows Credentials’
  • Scroll to the list of Generic Credentials. Click on the vscredentials one and click on Remove.

Now try to access Azure Artifacts again in Visual Studio. This time I got a login popup, after entering the correct credentials everything worked again.

Monday, December 9, 2019

HTTP Error 500.35 - ANCM Multiple In-Process Applications in same Process

After switching one of our ASP.NET Core applications from out-of-process hosting to InProcess hosting, we got the following error message:

This is caused by the fact that we are running multiple applications in IIS in the same application pool. As long as we were running out-of-process this was not an issue. But when we switched to InProcess hosting it turned out that you cannot run multiple In-Process applications in the same application pool.

The solution is simple, give each application its own app pool.

Friday, December 6, 2019

.NET Core 3 - Minibook

The people from InfoQ released a free (mini)book about .NET Core. In this book, five authors talk about the current state of .NET Core 3.0 from multiple perspectives. Each author brings their experience and ideas on how different .NET Core 3.0 features are relevant to the .NET ecosystem, both present and future.

It covers the following topics:

  • Navigating the .NET Ecosystem - In 2002, .NET was released. Over the next 12+ years, the .NET developer community patiently grew at a seemingly steady pace. Then, things started evolving rapidly. Microsoft anticipated the changing ecosystem and embraced the open-source development mindset, even acquiring GitHub.
  • Interview with Scott Hunter on .NET Core 3.0 - Chris Woodruff talks to Director of Program Management for the .NET platform Scott Hunter about what developers can expect from .NET Core 3.
  • Single Page Applications and ASP.NET Core 3.0 - Web development has changed in the past few years, with the maturity of Angular, React, Vue, and others. We’ve moved from building web pages to building apps. We’ve also been shifting from rendering markup on the server to more commonly rendering it directly in the browser. But as developers continue to transition to client-side development, many are asking if they should still be using ASP.NET.
  • Using the .Net Core Template Engine to Create Custom Templates and Projects - The tooling story changed dramatically with .NET Core, because of its serious emphasis on the command line. This is a great fit for .NET Core's cross-platform, tooling-agnostic image.
  • Angular & ASP.NET Core 3.0 - Deep Dive - While there are many advantages to using Angular for building SPAs, some parts including trivial, static content such as Contact As, Licensing, etc. don’t need the extra complexity. In this article Evgueni Tsygankov shows how to build reusable Angular components that can be hosted in ASP.NET Core pages, allowing you to choose the right tool for each page.

Remark: Registration is required before you can download the ebook.

SQL Server–Find biggest tables

Did you know it is really easy to find out which tables in SQL Server use the most disk space?

  • Open SQL Server Management Studio
  • Right click on your database
  • Go to Reports –> Standard Reports

  • Choose the Disk Usage by Table report:

Thursday, December 5, 2019

ASP.NET Core - Using IClaimsTransformation with Windows Authentication

In ASP.NET Core you can implement the IClaimsTransformation interface. This allows you to extend/change the incoming claimsprincipal:

Unfortunately my ClaimsTransformer was never invoked when I used Windows Authentication in IIS.

The trick was to explicitly specify the IISServerDefaults.AuthenticationScheme:
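Putting the two together — the transformer plus the explicit scheme — might look like this (the added claim is just an example):

```csharp
public class ClaimsTransformer : IClaimsTransformation
{
    public Task<ClaimsPrincipal> TransformAsync(ClaimsPrincipal principal)
    {
        var identity = (ClaimsIdentity)principal.Identity;
        if (!principal.HasClaim(c => c.Type == "ExampleClaim"))
        {
            identity.AddClaim(new Claim("ExampleClaim", "true"));
        }
        return Task.FromResult(principal);
    }
}

// In Startup.ConfigureServices:
services.AddAuthentication(IISServerDefaults.AuthenticationScheme);
services.AddSingleton<IClaimsTransformation, ClaimsTransformer>();
```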

Wednesday, December 4, 2019

Running ElasticSearch in Docker–Memlock check

While updating my docker-compose file to the latest Docker images, I noticed I had to set some extra values before I could run my ELK cluster:

I had to add an extra bootstrap check:

- "bootstrap.memory_lock=true"

and set the memlock ulimits:

ulimits:
    memlock:
        soft: -1
        hard: -1

The memory lock setting will disable swapping out parts of the JVM heap to disk. Memory swapping is bad for performance and node stability, so it should be avoided at all costs.

More information: https://www.elastic.co/guide/en/elasticsearch/reference/current/setup-configuration-memory.html

The memlock soft and hard values configure the amount of memory that ElasticSearch is allowed to lock. Setting them to -1 means unlimited.

Tuesday, December 3, 2019

Publish a .NET Core app using Azure Pipeline

To publish a .NET Core application using Web Deploy, you can use the .NET Core Task inside Azure pipelines.

Through the .NET Core Task you can invoke the dotnet CLI and invoke the available commands. To use Web Deploy you can use the ‘Publish’ command.

It took me some time to find the correct arguments, so here they are in case you need it:

dotnet publish --configuration $(BuildConfiguration) /p:PublishProfile=$(BuildConfiguration) /p:Password=$(WebDeployPassword) /p:Username=$(WebDeployUser) /p:AllowUntrustedCertificate=True

Remark: Don’t forget to first create a PublishProfile (pubxml) file in Visual Studio and commit it to your repo. Git ignored my pubxml by default and it took me some time to figure out why nothing happened…

Monday, December 2, 2019

EditorConfig - Let private fields start with an underscore

I find code consistency really important. This removes a lot of mental burden from the developer.

One of the conventions that is quite common in C# is to use camelCase for local variables and _camelCase for private or internal fields.

An example:

Unfortunately this convention is not automatically enforced by Visual Studio. To fix this you can add an editorconfig file with the following rules:

dotnet_naming_rule.instance_fields_should_be_camel_case.severity = suggestion
dotnet_naming_rule.instance_fields_should_be_camel_case.symbols = instance_fields
dotnet_naming_rule.instance_fields_should_be_camel_case.style = instance_field_style
 
dotnet_naming_symbols.instance_fields.applicable_kinds = field
 
dotnet_naming_style.instance_field_style.capitalization = camel_case
dotnet_naming_style.instance_field_style.required_prefix = _

Friday, November 29, 2019

ASP.NET ReportViewer - ReportViewerWebControl.axd was not found

After migrating an old ASP.NET application to a new IIS server, the application started to throw errors. The app was using a ReportViewer control to show some SQL Server Reporting Services reports. But when I tried to load a specific report, the report viewer kept showing a spinning wheel.

When I took a look at the console, I noticed 404 errors for ReportViewerWebControl.axd.

The problem was that the original application was hosted on IIS 6. To configure the ReportViewer in IIS 6, an entry for the HTTP handler was added inside the system.web <httpHandler> section. Starting from IIS 7.0 this section is no longer read. Instead we have to add the ReportViewerWebControl.axd to the system.webServer <handlers> section:
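For reference, the handler registration then looks roughly like this (adjust the Version to the ReportViewer assembly you actually deploy):

```xml
<system.webServer>
  <handlers>
    <add name="ReportViewerWebControlHandler"
         preCondition="integratedMode"
         verb="*"
         path="Reserved.ReportViewerWebControl.axd"
         type="Microsoft.Reporting.WebForms.HttpHandler, Microsoft.ReportViewer.WebForms, Version=11.0.0.0, Culture=neutral, PublicKeyToken=89845dcd8080cc91" />
  </handlers>
</system.webServer>
```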

Thursday, November 28, 2019

SignalR–Automatic reconnect

With the release of .NET Core 3.0, Microsoft updated SignalR as well. One of the new features is the re-introduction of Automatic Reconnect. Automatic reconnect was part of the original SignalR for ASP.NET, but wasn’t available (until recently) in ASP.NET Core.
The JavaScript client for SignalR can be configured to automatically reconnect using the withAutomaticReconnect method on HubConnectionBuilder. It won't automatically reconnect by default.

Without any parameters, withAutomaticReconnect() configures the client to wait 0, 2, 10, and 30 seconds respectively before trying each reconnect attempt, stopping after four failed attempts.
You can configure the number of reconnect attempts before disconnecting and change the reconnect timing, by passing an array of numbers representing the delay in milliseconds to wait before starting each reconnect attempt:
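For example, with the @microsoft/signalr JavaScript client (hub URL is illustrative):

```javascript
import * as signalR from '@microsoft/signalr';

const connection = new signalR.HubConnectionBuilder()
    .withUrl('/chathub')
    // retry after 0s, 2s, 10s and 30s, then stop trying
    .withAutomaticReconnect([0, 2000, 10000, 30000])
    .build();
```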

Watch the video here:

Wednesday, November 27, 2019

Kubernetes Learning Path–50 days from zero to hero

Microsoft released a learning path for Kubernetes:

Kubernetes is taking the app development world by storm. By 2022, more than 75% of global organizations will be running containerized applications in production.* Kubernetes is shaping the future of app development and management—and Microsoft wants to help you get started with it today.

This guide is meant for anyone interested in learning more about Kubernetes. In just 50 days, you’ll understand the basics of Kubernetes and get hands-on experience with its various components, capabilities, and solutions, including Azure Kubernetes Service. Go from zero to hero with Kubernetes to set your company up for future app development success.

Go check it out here: https://azure.microsoft.com/en-us/resources/kubernetes-learning-path/

Tuesday, November 26, 2019

GraphQL - A love story

In case you’ve missed my presentation at VISUG XL, you can find the slides here:

Demo’s are available at Github: https://github.com/wullemsb/visugxlgraphqldemos

Monday, November 25, 2019

ProxyKit–Bad gateway error

We are using ProxyKit as an API proxy for our Geo services. But although it worked perfectly on our development environment and when talking to our GeoServer, it failed with the following exception when we tried to access an ASP.NET Core API through the same proxy:

502 Bad Gateway

This turned out to be a bug in ProxyKit version 2.2.1. ProxyKit sets an empty content when executing a GET request. ASP.NET Core doesn’t like this, as it expects GET requests not to have a body.

As a workaround, I explicitly set the content to null when executing a GET request:
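The workaround (against ProxyKit 2.2.1) looked roughly like this; UpstreamRequest is the outgoing HttpRequestMessage, and the upstream address is a placeholder:

```csharp
app.RunProxy(context =>
{
    var forwardContext = context.ForwardTo("http://localhost:5001");

    // Work around the 2.2.1 bug: don't send a body on GET requests
    if (HttpMethods.IsGet(context.Request.Method))
    {
        forwardContext.UpstreamRequest.Content = null;
    }

    return forwardContext.Send();
});
```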

Remark: A new version 2.2.2 was released last week to fix the issue: https://github.com/damianh/ProxyKit/releases/tag/v2.2.2.

Friday, November 22, 2019

ASP.NET SessionState error: Provider must implement the class 'System.Web.SessionState.SessionStateStoreProviderBase'

After switching to the async SQL Session State Provider one of our projects failed to run. Instead we got a yellow-screen-of-death with the following message:

Provider must implement the class 'System.Web.SessionState.SessionStateStoreProviderBase'

I had to take one extra manual step to get the async provider up and running. I had to replace the default session state module by an async alternative:
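That boils down to a web.config change along these lines (module and assembly names taken from the Microsoft.AspNet.SessionState packages; double-check the version against the NuGet package you installed):

```xml
<system.webServer>
  <modules>
    <remove name="Session" />
    <add name="Session"
         preCondition="integratedMode"
         type="Microsoft.AspNet.SessionState.SessionStateModuleAsync, Microsoft.AspNet.SessionState.SessionStateModule, Version=1.1.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35" />
  </modules>
</system.webServer>
```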

Thursday, November 21, 2019

Postman–Working with data files

Instead of doing a manual cleanup of my local RabbitMQ cluster I decided to use the Management HTTP API and script the whole process.

I decided to use Postman and the collection runner data files support.

Let’s walk through the steps:

  • I started by creating a new Collection inside Postman:

  • Inside this collection I started adding DELETE requests for different objects (exchanges, queues, …) inside RabbitMQ

  • I used variables (recognizable with the double mustaches {{variableName}}) inside the URLs.
    • E.g. http://user:pw@localhost:15672/api/exchanges/%2F/{{exchange}}
  • These variables should be replaced by values from a JSON data file.
  • Create a JSON file containing a list of values. Unfortunately it is not possible to add the data file to your collection, so save it anywhere you want
    • Here is what the data file looked like for the above request:
  • Let’s now try to use the collection runner. Click on the Runner button at the top:

  • Select the collection you want to run on the left.

  • Also select a data file.
  • Click on the big blue run button to execute the requests in the collection. Parameters will be replaced using the values you’ve provided in the data file.
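As an illustration, a data file for the {{exchange}} variable used in the requests above could look like this (exchange names are made up):

```json
[
  { "exchange": "orders" },
  { "exchange": "invoices" },
  { "exchange": "notifications" }
]
```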

Wednesday, November 20, 2019

RabbitMQ - Management HTTP API

One nice extra you get when installing the RabbitMQ management plugin is a corresponding HTTP API that lets you do everything that is possible from the management portal.

Remark: API documentation here: https://rawcdn.githack.com/rabbitmq/rabbitmq-management/v3.8.1/priv/www/api/index.html

There is only one thing I had to figure out…

You can query a specific vhost inside RabbitMQ by adding a vhost section inside the API. An example: /api/queues/vhost/name

The problem was I had to target the default vhost which was named “/” by default.

Trying to use that “/” inside the API didn’t work and resulted in an object not found error: /api/queues///name

The correct way was to add some URL encoding magic: /api/queues/%2F/name
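The percent-encoding can also be produced programmatically instead of hardcoding it; a quick Python sketch (the endpoint path is illustrative):

```python
from urllib.parse import quote

# quote() with safe="" also encodes "/", which is normally left alone
vhost = quote("/", safe="")           # "%2F"
path = f"/api/queues/{vhost}/name"

print(path)                           # /api/queues/%2F/name
```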


Tuesday, November 19, 2019

IdentityModel 4 - Request custom grant

While updating my training material about OIDC I noticed that the documentation of IdentityServer didn’t reflect the latest changes in the IdentityModel library.

The example found in the documentation was still using the old TokenClient together with a RequestCustomGrantAsync() method:

This method no longer exists in version 4 of IdentityModel. Instead you need to use the RequestTokenAsync() extension method on the HttpClient:
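A sketch with IdentityModel 4 (endpoint address, client credentials, and the custom grant/parameter names are placeholders):

```csharp
var client = new HttpClient();

var response = await client.RequestTokenAsync(new TokenRequest
{
    Address = "https://demo.identityserver.io/connect/token",
    GrantType = "custom",

    ClientId = "client",
    ClientSecret = "secret",

    Parameters =
    {
        { "custom_parameter", "custom value" }
    }
});
```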

Monday, November 18, 2019

ADFS error - The audience restriction was not valid because the specified audience identifier is not present in the acceptable identifiers list of this Federation Service.

After adding a claims provider trust in ADFS, we got the following error message when trying to use the configured 3rd-party IP-STS.

The audience restriction was not valid because the specified audience identifier is not present in the acceptable identifiers list of this Federation Service.

User Action

See the exception details for the audience identifier that failed validation. If the audience identifier identifies this Federation Service, add the audience identifier to the acceptable identifiers list by using Windows PowerShell for AD FS.  Note that the audience identifier is used to verify whether the token was sent to this Federation Service. If you think that the audience identifier does not identify your Federation Service, adding it to the acceptable identifiers list may open a security vulnerability in your system.

Exception details:

Microsoft.IdentityServer.AuthenticationFailedException: ID1038: The AudienceRestrictionCondition was not valid because the specified Audience is not present in AudienceUris.

Audience: 'http://adfs4.example.be/adfs/services/trust' ---> Microsoft.IdentityModel.Tokens.AudienceUriValidationFailedException: ID1038: The AudienceRestrictionCondition was not valid because the specified Audience is not present in AudienceUris.

Audience: 'http://adfs4.example.be/adfs/services/trust'

To solve this problem we had to add the audience URI of our ADFS server to the list of acceptable identifiers (as explained in the error message):

 set-ADFSProperties -AcceptableIdentifier 'http://adfs4.example.be/adfs/services/trust'

Friday, November 15, 2019

Using a scoped service inside a HostedService

When trying to resolve a scoped dependency inside a HostedService, the runtime returned the following error message:

System.InvalidOperationException: Cannot consume scoped service ‘IScoped’ from singleton ‘Microsoft.Extensions.Hosting.IHostedService’.

The problem is that the IHostedService is a singleton and is created outside a dependency injection scope. Trying to inject any scoped service (e.g. an EF Core DbContext) will result in the error message above.

To solve this problem you have to create a dependency injection scope using the IServiceScopeFactory. Within this scope you can use the scoped services:
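A minimal sketch (MyDbContext stands in for any scoped service):

```csharp
public class Worker : BackgroundService
{
    private readonly IServiceScopeFactory _scopeFactory;

    public Worker(IServiceScopeFactory scopeFactory)
    {
        _scopeFactory = scopeFactory;
    }

    protected override async Task ExecuteAsync(CancellationToken stoppingToken)
    {
        while (!stoppingToken.IsCancellationRequested)
        {
            // create an explicit scope; scoped services are only valid inside it
            using (var scope = _scopeFactory.CreateScope())
            {
                var dbContext = scope.ServiceProvider.GetRequiredService<MyDbContext>();
                // use the scoped service here
            }

            await Task.Delay(TimeSpan.FromSeconds(5), stoppingToken);
        }
    }
}
```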

Thursday, November 14, 2019

IdentityServer 4 - CORS

If an endpoint is called via Ajax calls from JavaScript-based clients, CORS configuration is required.

This can be done by setting the AllowedCorsOrigins collection on the client configuration. IdentityServer will consult these values to allow cross-origin calls from the origins.
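On the client definition this is a one-liner (client id and origin are examples):

```csharp
var client = new Client
{
    ClientId = "js",
    // origins, not URLs: no trailing slash, no path
    AllowedCorsOrigins = { "https://foo:123" }
};
```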

Remark: Be sure to use an origin (not a URL) when configuring CORS. For example: https://foo:123/ is a URL, whereas https://foo:123 is an origin.

Wednesday, November 13, 2019

MassTransit–Batch Message Consumption

A lesser known feature inside MassTransit is the support of batch messages. This can be a really nice feature if you want to combine a batch of high-volume smaller messages into a single atomic consumer.

How does this work?

MassTransit will combine multiple messages into a single consume by specifying a window, such as a message count (batch size), time period, or a combination of both.

There are 2 configurable limits:

  • Size: A limit specifying the maximum number of messages which can fit into a single batch, which will trigger once that many messages are ready to be consumed. The batch size must be less than or equal to any prefetch counts or concurrent message delivery limits in order to reach the size limit.

  • Time: A limit specifying how long to wait for additional messages from the time when the first message is ready, after which the messages ready within that time are delivered as a single batch. The time limit should be well within the lock time of a message, including enough time to process the batch.

Batch configuration

To use the batching functionality, configure an extra receive endpoint and use the Batch method to configure the endpoint:

Batch consumption

The message batch is delivered as an array to the consumer, so that the existing behavior is maintained for middleware, factories, etc. An additional context is available on the payload, which can be used to discover details related to the batch. Instead of receiving a single message you get a Batch<T> of messages:
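A batch consumer then looks like a regular consumer, but typed on Batch<T> (OrderMessage is a placeholder message type):

```csharp
public class OrderBatchConsumer : IConsumer<Batch<OrderMessage>>
{
    public Task Consume(ConsumeContext<Batch<OrderMessage>> context)
    {
        // the batch exposes a Length and an indexer over the individual contexts
        for (int i = 0; i < context.Message.Length; i++)
        {
            ConsumeContext<OrderMessage> message = context.Message[i];
            // process message.Message as part of the batch
        }

        return Task.CompletedTask;
    }
}
```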

Remark: This feature is experimental.  Be sure to configure the transport with sufficient concurrent message capacity (prefetch, etc.) so that a batch can actually complete without always reaching the time limit.

Tuesday, November 12, 2019

Azure DevOps - Kanban board limits

Got a question from a customer about the Kanban board in Azure DevOps Server:

I noticed that on the Kanban board on the first and last column only a limited number of work items are shown. Is this something that can be configured?

Quick answer: no.

To limit the number of items on the board, the first and last columns of your Kanban board only show 20 work items. To see more work items, use the Show more items link at the bottom.

Monday, November 11, 2019

ASP.NET Core 3.0 - Enable Authentication

Quick tip for anyone using ASP.NET Core 3.0 (especially when you did an upgrade from ASP.NET Core 2.x): if you want to enable authentication, don’t forget to add the correct middleware. You need both UseAuthentication and UseAuthorization:

In earlier versions of ASP.NET Core, authorization support was provided via the [Authorize] attribute. Authorization middleware wasn't available. In ASP.NET Core 3.0, authorization middleware is required(!). Therefore the ASP.NET Core Authorization Middleware (UseAuthorization) should be placed immediately after UseAuthentication.

Add the UseAuthentication and UseAuthorization methods AFTER UseRouting() but BEFORE UseEndpoints():
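A minimal Configure method illustrating this ordering:

```csharp
public void Configure(IApplicationBuilder app, IWebHostEnvironment env)
{
    app.UseRouting();

    app.UseAuthentication(); // establishes who the user is
    app.UseAuthorization();  // decides what the user may do

    app.UseEndpoints(endpoints =>
    {
        endpoints.MapControllers();
    });
}
```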

A DefaultPolicy is initially configured to require authentication, so no additional configuration is required.

Tuesday, November 5, 2019

Azure DevOps NuGet error

When trying to restore a NuGet package inside an Azure DevOps Build it failed with the following error message:

2019-11-13T09:17:59.8686867Z ##[section]Starting: NuGet restore

2019-11-13T09:17:59.8813825Z ==============================================================================

2019-11-13T09:17:59.8813825Z Task         : NuGet Installer

2019-11-13T09:17:59.8813825Z Description  : Installs or restores missing NuGet packages

2019-11-13T09:17:59.8813825Z Version      : 0.2.31

2019-11-13T09:17:59.8813825Z Author       : Microsoft Corporation

2019-11-13T09:17:59.8813825Z Help         : [More Information](https://go.microsoft.com/fwlink/?LinkID=613747)

2019-11-13T09:17:59.8813825Z ==============================================================================

2019-11-13T09:18:01.6050815Z [command]C:\Windows\system32\chcp.com 65001

2019-11-13T09:18:01.6050815Z Active code page: 65001

2019-11-13T09:18:01.6070347Z Detected NuGet version 3.3.0.212 / 3.3.0

2019-11-13T09:18:01.6080113Z SYSTEMVSSCONNECTION exists true

2019-11-13T09:18:01.6080113Z [command]D:\b\4\agent\_work\_tasks\NuGetInstaller_333b11bd-d341-40d9-afcf-b32d5ce6f23b\0.2.31\node_modules\nuget-task-common\NuGet\3.3.0\NuGet.exe restore -NonInteractive D:\b\4\agent\_work\59\s\MestbankPortaal.sln -ConfigFile D:\b\4\agent\_work\59\s\nuget.config

2019-11-13T09:18:02.1119369Z MSBuild auto-detection: using msbuild version '14.0' from 'C:\Program Files (x86)\MSBuild\14.0\bin'.

2019-11-13T09:18:02.4713257Z Feeds used:

2019-11-13T09:18:02.4713257Z   C:\Users\tfsservice\AppData\Local\NuGet\Cache

2019-11-13T09:18:02.4713257Z   C:\Users\tfsservice\.nuget\packages\

2019-11-13T09:18:02.4713257Z   http://tfs:8080/tfs/DefaultCollection/_packaging/Feed/nuget/v3/index.json

2019-11-13T09:18:02.4713257Z   C:\Program Files (x86)\Microsoft SDKs\NuGetPackages\

2019-11-13T09:18:02.4713257Z

2019-11-13T09:18:02.4986705Z Restoring NuGet package IAM.Security.2.6.1.

2019-11-13T09:18:45.2347882Z WARNING: Unable to find version '2.6.1' of package 'IAM.Security'.

2019-11-13T09:18:45.2572523Z Unable to find version '2.6.1' of package 'IAM.Security'.

2019-11-13T09:18:45.2816698Z ##[error]Error: D:\b\4\agent\_work\_tasks\NuGetInstaller_333b11bd-d341-40d9-afcf-b32d5ce6f23b\0.2.31\node_modules\nuget-task-common\NuGet\3.3.0\NuGet.exe failed with return code: 1

2019-11-13T09:18:45.2816698Z ##[error]Packages failed to install

2019-11-13T09:18:45.2855766Z ##[section]Finishing: NuGet restore

This was a package that was recently added to our private Azure Artifacts feed, but I was able to restore it without any problem on my local machine.

Eventually I found out that I could solve the problem by increasing the NuGet version used by the build process to version 4.

Friday, October 25, 2019

ElasticSearch–Decrease the monitoring data

Quick tip to prevent the monitoring index from getting out of control:

To decrease the amount of monitoring data that ElasticSearch collects, you can change the _cluster settings.
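For example, lowering the collection frequency and retention via the cluster settings API (the values are illustrative; check the linked documentation for the settings available in your version):

```
PUT _cluster/settings
{
  "persistent": {
    "xpack.monitoring.collection.interval": "60s",
    "xpack.monitoring.history.duration": "3d"
  }
}
```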

More information: https://www.elastic.co/guide/en/elasticsearch/reference/current/monitoring-settings.html

Wednesday, October 23, 2019

Azure DevOps–How to change the default iteration?

After upgrading to Azure DevOps 2019, I got a question from a customer asking how to change the default iteration used when creating new work items. By default the current iteration is used. If you are creating user stories, this is probably not what you want as these user stories should first be analyzed, groomed, … before they can be planned inside a specific iteration/sprint.

Fortunately this is something that can be changed easily at the team level:

  • Click on the Show Team Profile  icon on the Kanban board:

  • Click on Team settings:

  • Go to Iterations and Areas:

  • Click on Iterations:

  • Now you can change the default iteration to whatever you want:

Tuesday, October 22, 2019

Fork–A fast and friendly GIT client

Tooling is important and great tools can make a difference, especially when you are using something as rich and complex as GIT.

Last week I discovered a new GIT client; Fork.

I was especially impressed by the interactive rebase functionality. It could be a lifesaver if you are afraid to use rebase.

Remark: My current favorite is GitKraken, in case you want to know. But I’m giving Fork a try and so should you…

Friday, October 18, 2019

Orleans–Fire and forget

The basic building block in Orleans is a ‘Grain’. A grain is an atomic unit of isolation, distribution, and persistence. A grain perfectly matches the OO principles as it encapsulates state of an entity and encodes its behavior in the code logic.

The ‘normal’ way to communicate between Grains is through message passing. This is greatly simplified thanks to the async/await programming model in C# combined with some code generation voodoo. It almost ‘feels’ like normal method invocations.

A consequence of the async programming model is that you should always ‘await’ the results. However, there are some situations where you don’t care about the result and want to call another grain in a ‘fire-and-forget’ mode.

To achieve this you can use the InvokeOneWay() extension method on a GrainReference:
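A sketch of what this could look like (the grain interfaces and method names below are made up for illustration):

```csharp
public class NotifierGrain : Grain, INotifierGrain
{
    public Task NotifyAll(string message)
    {
        var audit = GrainFactory.GetGrain<IAuditGrain>(0);

        // Fire-and-forget: the call is dispatched to the other grain
        // without awaiting its completion or observing its result.
        audit.InvokeOneWay(g => g.Log(message));

        return Task.CompletedTask;
    }
}
```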

Thursday, October 17, 2019

Testing–Faking a delay in SQL Server

To test the timeout settings of my application, I needed a query that exceeds a certain time.

This was not that hard to achieve in SQL Server thanks to the WAITFOR statement. This statement blocks the execution of a batch, stored procedure, or transaction until a specified time or time interval is reached, or a specified statement modifies or returns at least one row.


----Delay for 10 seconds
WAITFOR DELAY '000:00:10'
SELECT 'Hello finally!'
GO

More information: https://docs.microsoft.com/en-us/sql/t-sql/language-elements/waitfor-transact-sql?view=sql-server-ver15

Wednesday, October 16, 2019

MassTransit - A convention for the message type {typename} was not found

While preparing a presentation I created a simple demo app where I wanted to publish a message to a queue.

My first (naïve) attempt looked like this:

This failed with the following error message:

A convention for the message type {typename} was not found

What was my mistake?

In MassTransit you have to distinguish between sending a message and publishing a message. Whereas publishing a message doesn’t require any extra configuration, sending a message does. Why? Because when sending a message you target a queue directly; MassTransit has to know which queue the message should be sent to.

We have 2 options to make this work:

1) You specify the target queue using a convention:

2) You use the ISendEndpointProvider to specify a queue:
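Both options could look roughly like this (the message type, queue name, and endpoint address are placeholders, assuming a RabbitMQ transport):

```csharp
// Option 1: map the message type to a queue once, at startup
EndpointConvention.Map<SubmitOrder>(new Uri("rabbitmq://localhost/submit-order"));
await bus.Send(new SubmitOrder { OrderId = 42 });

// Option 2: resolve the send endpoint explicitly via ISendEndpointProvider
ISendEndpoint endpoint = await sendEndpointProvider.GetSendEndpoint(
    new Uri("rabbitmq://localhost/submit-order"));
await endpoint.Send(new SubmitOrder { OrderId = 42 });
```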

Tuesday, October 15, 2019

Reconfigure the ElasticSearch Windows Service

If you are running ElasticSearch on Windows, you are probably using a Windows Service to run it in the background and make it start automatically at boot time.

This Windows Service feature is available out-of-the-box if you use the .zip package.

Some ElasticSearch settings are managed as command-line arguments (e.g. the min and max memory usage). To manipulate these settings when using the Windows Service, you have to go through the ElasticSearch Windows Service Manager.

  • To open the manager, go to the ElasticSearch installation folder (e.g. c:\elasticsearch-7.4.0\)
  • Browse to the bin folder (where you should find an elasticsearch-service.bat file)
  • Run the following command:
    • elasticsearch-service.bat manager

This will open a GUI where you can manage multiple ElasticSearch settings:

Remark: Most changes will require a restart of the service.

Monday, October 14, 2019

ASP.NET Core gRPC–Unary RPC vs Streams

When creating your gRPC implementation you have to be aware of the difference between Unary RPC and Streams.

Unary RPC

This is the simplest type of RPC, where the client sends a single request and gets back a single response. It is similar to what we know from WCF.

Streams

With streaming we have to make a difference between server streaming, client streaming and bidirectional streaming.

A server-streaming RPC is most similar to the Unary RPC, the only difference is that the server sends back a stream of responses instead of a single response after getting the client’s request message. After sending back all its responses, the server’s status details (status code and optional status message) and optional trailing metadata are sent back to complete on the server side. The client completes once it has all the server’s responses.

A client-streaming RPC turns things around, the client sends a stream of requests to the server instead of a single request. The server sends back a single response, typically but not necessarily after it has received all the client’s requests, along with its status details and optional trailing metadata.

In a bidirectional streaming RPC, the call is initiated by the client calling the method and the server receiving the client metadata, method name, and deadline. The server can choose to send back its initial metadata or wait for the client to start sending requests. What happens next depends on the application, as the client and server can read and write in any order - the streams operate completely independently.
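As a sketch, a server-streaming handler in ASP.NET Core gRPC could look like this (the StockTicker service and the PriceRequest/PriceReply messages are made-up types that would be generated from a .proto file):

```csharp
public class StockTickerService : StockTicker.StockTickerBase
{
    public override async Task GetPrices(
        PriceRequest request,
        IServerStreamWriter<PriceReply> responseStream,
        ServerCallContext context)
    {
        // Responses are written one by one; the client can start
        // processing before the stream completes.
        while (!context.CancellationToken.IsCancellationRequested)
        {
            await responseStream.WriteAsync(new PriceReply { Symbol = request.Symbol });
            await Task.Delay(TimeSpan.FromSeconds(1), context.CancellationToken);
        }
    }
}
```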

It is important that you are aware of the difference as it can greatly impact the performance characteristics of your application. Working with streams can help you reduce the memory footprint of your application as you don’t have to buffer the whole response in memory before returning it to the client. Another advantage is that the client can start processing before all data is returned from the server(or the other way around).

More information: https://grpc.io/docs/guides/concepts/

Friday, October 11, 2019

Why you should not fear rebase…

Last week our junior developers shared their experiences and lessons learned during a ‘dev case’. One thing they all mentioned is that doing a ‘rebase’ in GIT is, in their opinion, a bad idea. Although rebasing is a powerful tool that you have to apply carefully, there is no reason to avoid it at all cost.

Some things to keep in mind before you rebase:

  1. Never rebase commits that have been pushed to a remote origin and shared with others.
  2. Use rebase to catch up with the commits on another branch as you work with a local feature branch.
  3. You can't update a published branch with a push after you've rebased the local branch. You'll need to force push the branch to rewrite the history of the remote branch to match the local history. Never force push branches in use by others.

As a general rule, I would only use rebase on local changes that haven't been shared with others. Once you’ve shared your changes, switch to merge instead.

  • To use rebase, go to Team Explorer –> Branches.
  • Select Rebase instead of Merge.
  • Specify the target branch and click Rebase.

Thursday, October 10, 2019

ASP.NET Core 3.0 - ConfigureContainer magic

Last week I blogged about the changes I had to make to let Autofac work with ASP.NET Core 3.0. Inside my Startup.cs file I had to use the .ConfigureContainer() method:
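A sketch of what that looks like in Startup.cs (MyModule is a placeholder for your own Autofac module):

```csharp
public class Startup
{
    public void ConfigureServices(IServiceCollection services)
    {
        services.AddControllers();
    }

    // Discovered by name via reflection; the ContainerBuilder instance is
    // created by the registered IServiceProviderFactory<ContainerBuilder>.
    public void ConfigureContainer(ContainerBuilder builder)
    {
        builder.RegisterModule(new MyModule());
    }
}
```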

But where is this method coming from? Let’s dig into the ASP.NET Core source code to find out…

The source of all magic is the StartupLoader class: https://github.com/aspnet/Hosting/blob/rel/1.1.0/src/Microsoft.AspNetCore.Hosting/Internal/StartupLoader.cs.

This class uses reflection to find the following 3 methods in the Startup.cs file:

  • a Configure() method
  • a ConfigureServices() method
  • a ConfigureContainer() method

If you want environment-specific setup you can put the environment name after the Configure part, like ConfigureDevelopment, ConfigureDevelopmentServices, and ConfigureDevelopmentContainer. If a method isn’t present with a name matching the environment it’ll fall back to the default.

If a ConfigureContainer() method is found, the IServiceProviderFactory<TContainerBuilder> CreateBuilder method is invoked and the created builder is passed as a parameter to the ConfigureContainer() method.

Wednesday, October 9, 2019

Visual Studio 2019–Code Cleanup

Did you notice the small broom icon at the bottom of your code editor window in Visual Studio?

This is the Code Cleanup button. It allows you to apply code styles from an EditorConfig file or from the Code Style options page. (The .editorconfig takes precedence).

To configure the exact Code Cleanup actions, you can click the expander arrow next to the code cleanup broom icon and then choose Configure Code Cleanup.

Configure Code Cleanup in Visual Studio 2019

After you've configured code cleanup, you can either click on the broom icon or press Ctrl+K, Ctrl+E to run code cleanup.

Tuesday, October 8, 2019

Visual Studio–Generate an EditorConfig file

I talked about the .editorconfig file a long time ago as a way to standardize code style conventions in your team. These conventions allow Visual Studio to offer automatic style and format fixes to clean up your document.

But did you know that in Visual Studio 2019, you can generate an .editorconfig file dynamically based on the style and formatting found in your existing codebase?

  • Go to Tools –> Options –> IntelliCode.
  • Change the EditorConfig inference setting to Enabled.
  • Right click on your solution in the Solution Explorer and choose Add –> New EditorConfig (IntelliCode)

Add IntelliCode-generated EditorConfig file in Visual Studio

After you add the file in this way, IntelliCode automatically populates it with code style conventions it infers from your codebase.

Friday, October 4, 2019

Azure DevOps - Publish code as wiki–Advanced features

One of the features I really like in Azure DevOps is to publish your code as a wiki. This allows you to choose a Git repository, branch and folder that contain Markdown files. The markdown files are then published as pages inside the wiki.

Unfortunately when we take a look at the table of contents (TOC), we see all the markdown files listed in alphabetical order. Every subfolder is also shown as a wiki page even when it doesn’t contain any markdown files.

This is probably not what we want. Let’s improve the TOC…

Change the page order

To change the order of the files in the TOC, you can add a .order file to the repository.

Each .order file defines the sequence of pages contained within a folder. The root .order file specifies the sequence of pages defined at the root level. And for each folder, a .order file defines the sequence of sub-pages added to a parent page.

Inside the .order file you can specify each file name without the .md extension. An example:

README
page-2
page-3

By default, the first file that appears at the root within alphabetical order is set as the wiki home page.

Promote folder to page

Another annoying thing is that every subfolder is shown as a TOC item in the wiki although no page exists. It would be logical to have a wiki page for every folder. Therefore we need to create a markdown file with the same name as the folder as a sibling to the folder (meaning both the folder and the md file of the same name should lie next to each other). See the screenshot below:

Promote a folder to a page

Wednesday, October 2, 2019

.NET Core 3.0 - HTTP/2 support

You may think this is a bad title for this blog post: HTTP/2 support was already available before .NET Core 3.0. So why a blog post with the release of .NET Core 3.0?

The reason is that although it was possible to host an ASP.NET Core 2.0 application behind an HTTP/2 endpoint, the HttpClient class didn’t have support for it!

There are 2 ways that can be used to enable HTTP/2:

Enable HTTP/2 at the instance level

To enable HTTP/2 support at the instance level, you can set the DefaultRequestVersion when creating the HttpClient instance:

For example, the following code creates an HttpClient instance using HTTP/2 as its default protocol:

Of course even better is to use the HttpClientFactory to create and configure the HttpClient:
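Both variants could look roughly like this (the client name "http2" is a placeholder):

```csharp
// Instance level: every request made by this client defaults to HTTP/2
var client = new HttpClient
{
    DefaultRequestVersion = new Version(2, 0)
};

// Or, preferably, via the HttpClientFactory in ConfigureServices
services.AddHttpClient("http2", c =>
{
    c.DefaultRequestVersion = new Version(2, 0);
});
```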

Enable HTTP/2 at the request level

It is also possible to create a single request using the HTTP/2 protocol:
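For example, setting the version on a single HttpRequestMessage (the URL is a placeholder):

```csharp
// Request level: only this request asks for HTTP/2
var request = new HttpRequestMessage(HttpMethod.Get, "https://localhost:5001/")
{
    Version = new Version(2, 0)
};
var response = await client.SendAsync(request);
```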

Remark: Remember that HTTP/2 needs to be supported by both the server and the client. If either party doesn't support HTTP/2, both will use HTTP/1.1.

Tuesday, October 1, 2019

ASP.NET Core 3.0 - Autofac error

After upgrading my ASP.NET Core application to 3.0 I got an error in Startup.cs when I switched to the new HostBuilder:

System.NotSupportedException: 'ConfigureServices returning an System.IServiceProvider isn't supported.'

Let’s take a look at my ConfigureServices() method:

Inside my ConfigureServices() I’m using the Autofac ContainerBuilder to build up my container and return an AutofacServiceProvider. Therefore I had updated the ConfigureServices() method signature to return an IServiceProvider. This worked perfectly in ASP.NET Core 2.x but is no longer allowed when using the new HostBuilder in ASP.NET Core 3.0.

Time to take a look at the great Autofac documentation for a solution: https://autofac.readthedocs.io/en/latest/integration/aspnetcore.html#asp-net-core-3-0-and-generic-hosting. Ok, so to fix this we have to change our ConfigureServices() method to no longer return an IServiceProvider:

Then we have to update the program.cs to register our AutofacServiceProvider there:

As a last step we have to add a ConfigureContainer() method to the Startup.cs where we can configure the containerbuilder instance:
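The three steps above could be sketched as follows (OrderService/IOrderService are placeholder registrations):

```csharp
// Program.cs - register the AutofacServiceProviderFactory on the host
public static IHostBuilder CreateHostBuilder(string[] args) =>
    Host.CreateDefaultBuilder(args)
        .UseServiceProviderFactory(new AutofacServiceProviderFactory())
        .ConfigureWebHostDefaults(webBuilder => webBuilder.UseStartup<Startup>());

// Startup.cs - ConfigureServices returns void again
public void ConfigureServices(IServiceCollection services)
{
    services.AddControllers();
}

// Startup.cs - configure the ContainerBuilder here; no Build() call needed
public void ConfigureContainer(ContainerBuilder builder)
{
    builder.RegisterType<OrderService>().As<IOrderService>();
}
```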

Remark: We don’t have to build the container ourselves, this is done by the framework for us.

Monday, September 30, 2019

ASP.NET Core 3.0–Swashbuckle error

Like probably most people in the .NET ecosystem, I’m using Swashbuckle to generate my OpenAPI documentation. (Anyone using NSwag instead?)

After upgrading to ASP.NET Core 3.0 (and switching to the 5.0-rc4 prerelease version of Swashbuckle), the following code no longer compiled:

I had to replace the Info class, which could no longer be found, with the OpenApiInfo class:
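A sketch of the updated registration (title and version values are placeholders):

```csharp
using Microsoft.OpenApi.Models;

services.AddSwaggerGen(c =>
{
    // Was: c.SwaggerDoc("v1", new Info { Title = "My API", Version = "v1" });
    c.SwaggerDoc("v1", new OpenApiInfo
    {
        Title = "My API",
        Version = "v1"
    });
});
```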

This OpenApiInfo class is now part of the Microsoft.OpenApi.Models namespace.

Friday, September 27, 2019

C# 8.0 Nullable reference types for framework and library authors

C# 8.0 introduces the concept of nullable reference types. If you are a framework or library author, introducing this functionality in your code is not without its challenges.

A good introduction can be found here:

Thursday, September 26, 2019

Setup the Kubernetes dashboard on Docker for Windows

A useful tool when you are new to Kubernetes is the Kubernetes Dashboard.

Unfortunately the Kubernetes Dashboard is not included out-of-the-box with Docker for Windows; however, it can easily be set up for your local cluster.

To setup the dashboard use the following command:

kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v1.10.1/src/deploy/recommended/kubernetes-dashboard.yaml

The output should look like this:

secret/kubernetes-dashboard-certs created
serviceaccount/kubernetes-dashboard created
role.rbac.authorization.k8s.io/kubernetes-dashboard-minimal created
rolebinding.rbac.authorization.k8s.io/kubernetes-dashboard-minimal created
deployment.apps/kubernetes-dashboard created
service/kubernetes-dashboard created

To view the Dashboard in your web browser run:

kubectl proxy

And navigate to your Kubernetes Dashboard at:

http://localhost:8001/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/.

The first time you open the dashboard you have to enter an authentication token.

Creating an authorization token

To create an authorization token, we first need a Service Account:

kubectl create serviceaccount admin-user

Now we need a ClusterRoleBinding to assign the cluster-admin ClusterRole to the created service account:

kubectl create clusterrolebinding cluster-admin-binding --clusterrole=cluster-admin --serviceaccount=default:admin-user

Find the associated secret:

kubectl get serviceaccounts admin-user -o yaml

As a last step we need to request the secret data:

kubectl describe secret admin-user-token-5s5rv

Now you can copy and paste the generated token in the Token field.

Wednesday, September 25, 2019

Switch between Kubernetes contexts

Lost some time yesterday figuring out how to switch between different Kubernetes environments. So a quick post, just as a reminder for myself:

You can view contexts using the kubectl config command:

kubectl config get-contexts

CURRENT  NAME                            CLUSTER                    NAMESPACE           
*         docker-desktop                   docker-desktop             docker-desktop                                   
          docker-for-desktop               docker-desktop             docker-desktop

You can set the context by specifying the context name:

kubectl config use-context docker-for-desktop

Tuesday, September 24, 2019

ElasticSearch–Performance testing

When trying to load test our ElasticSearch cluster, we noticed big variations in results that we couldn’t explain based on the changes we made.

Turned out that our tests were not executed in comparable situations as we didn’t clear the ElasticSearch cache.

So before running our tests, we cleared the cache using following command:

POST /<myindexname>/_cache/clear?request=true

If you want to view what’s inside the Elastic node cache, you can use the following command:

GET /_cat/nodes?v&h=id,name,queryCacheMemory,queryCacheEvictions,requestCacheMemory,requestCacheHitCount,requestCacheMissCount,flushTotal,flushTotalTime

Monday, September 23, 2019

GraphQL Rules

As with every technology you give to your team, everyone has different opinions and conventions. A style guide becomes an indispensable part of your development organisation. Otherwise the ‘tabs vs spaces’ discussion can go on forever.

This also applies to GraphQL. So to help you get started take a look at https://graphql-rules.com/.

Rules and recommendations mentioned here were the results of 3 years' experience of using GraphQL both on the frontend and backend sides. We also include the recommendations and experience of Caleb Meredith (PostGraphQL author, Facebook ex-employee) and Shopify engineers.

This guide is intended to be open source and could change in the future, - the rules may be improved on, changed, or even become outdated. What is written here is a culmination of time and pain suffered from the use of horrible GraphQL-schemas.

Friday, September 20, 2019

Cannot create or delete the Performance Category 'C:\Windows\TEMP\tmp3DA0.tmp' because access is denied.

After migrating some .NET applications from an old server to a brand new Windows Server 2019 instance, we stumbled over a range of errors.

Yesterday we got one step closer to a solution but we are not there yet. The application still doesn’t work but now we get the following error message:

Server Error in '/AppServices' Application.


Cannot create or delete the Performance Category 'C:\Windows\TEMP\tmp3DA0.tmp' because access is denied.

Description: An unhandled exception occurred during the execution of the current web request. Please review the stack trace for more information about the error and where it originated in the code.

Exception Details: System.UnauthorizedAccessException: Cannot create or delete the Performance Category 'C:\Windows\TEMP\tmp3DA0.tmp' because access is denied.

ASP.NET is not authorized to access the requested resource. Consider granting access rights to the resource to the ASP.NET request identity. ASP.NET has a base process identity (typically {MACHINE}\ASPNET on IIS 5 or Network Service on IIS 6 and IIS 7, and the configured application pool identity on IIS 7.5) that is used if the application is not impersonating. If the application is impersonating via <identity impersonate="true"/>, the identity will be the anonymous user (typically IUSR_MACHINENAME) or the authenticated request user.

To grant ASP.NET access to a file, right-click the file in File Explorer, choose "Properties" and select the Security tab. Click "Add" to add the appropriate user or group. Highlight the ASP.NET account, and check the boxes for the desired access.

Source Error:

An unhandled exception was generated during the execution of the current web request. Information regarding the origin and location of the exception can be identified using the exception stack trace below.


Stack Trace:

 

[UnauthorizedAccessException: Cannot create or delete the Performance Category 'C:\Windows\TEMP\tmp3DA0.tmp' because access is denied.]

   System.Diagnostics.PerformanceCounterLib.RegisterFiles(String arg0, Boolean unregister) +482

   System.Diagnostics.PerformanceCounterLib.RegisterCategory(String categoryName, PerformanceCounterCategoryType categoryType, String categoryHelp, CounterCreationDataCollection creationData) +105

   System.Diagnostics.PerformanceCounterCategory.Create(String categoryName, String categoryHelp, PerformanceCounterCategoryType categoryType, CounterCreationDataCollection counterData) +275

   Akka.Monitoring.PerformanceCounters.ActorPerformanceCountersMonitor.Init(IEnumerable`1 akkaMetrics) in D:\olympus\akka-monitoring\src\Akka.Monitoring.PerformanceCounters\ActorPerformanceCountersMonitor.cs:116

 

[TargetInvocationException: Exception has been thrown by the target of an invocation.]

   System.RuntimeMethodHandle.InvokeMethod(Object target, Object[] arguments, Signature sig, Boolean constructor) +0

   System.Reflection.RuntimeMethodInfo.UnsafeInvokeInternal(Object obj, Object[] parameters, Object[] arguments) +128

   System.Reflection.RuntimeMethodInfo.Invoke(Object obj, BindingFlags invokeAttr, Binder binder, Object[] parameters, CultureInfo culture) +142

   Owin.Loader.<>c__DisplayClass12.<MakeDelegate>b__b(IAppBuilder builder) +93

   Owin.Loader.<>c__DisplayClass1.<LoadImplementation>b__0(IAppBuilder builder) +212

   Microsoft.Owin.Host.SystemWeb.OwinAppContext.Initialize(Action`1 startup) +873

   Microsoft.Owin.Host.SystemWeb.OwinBuilder.Build(Action`1 startup) +51

   Microsoft.Owin.Host.SystemWeb.OwinHttpModule.InitializeBlueprint() +101

   System.Threading.LazyInitializer.EnsureInitializedCore(T& target, Boolean& initialized, Object& syncLock, Func`1 valueFactory) +135

   Microsoft.Owin.Host.SystemWeb.OwinHttpModule.Init(HttpApplication context) +160

   System.Web.HttpApplication.RegisterEventSubscriptionsWithIIS(IntPtr appContext, HttpContext context, MethodInfo[] handlers) +581

   System.Web.HttpApplication.InitSpecial(HttpApplicationState state, MethodInfo[] handlers, IntPtr appContext, HttpContext context) +168

   System.Web.HttpApplicationFactory.GetSpecialApplicationInstance(IntPtr appContext, HttpContext context) +277

   System.Web.Hosting.PipelineRuntime.InitializeApplication(IntPtr appContext) +369

 

[HttpException (0x80004005): Exception has been thrown by the target of an invocation.]

   System.Web.HttpRuntime.FirstRequestInit(HttpContext context) +532

   System.Web.HttpRuntime.EnsureFirstRequestInit(HttpContext context) +111

   System.Web.HttpRuntime.ProcessRequestNotificationPrivate(IIS7WorkerRequest wr, HttpContext context) +714

I couldn’t find a ‘good’ solution for the error message above. In the end I did 2 things to get rid of it:

  1. I temporarily gave the application pool account Administrator rights on the server.
  2. I gave the application pool account ‘Write’ permissions on the following registry path: Computer\HKEY_LOCAL_MACHINE\SYSTEM\ControlSet001\Services\

After creating the performance counters on startup, I revoked these permissions.

Anyone with a better solution?