Friday, July 30, 2021

AD FS Help

ADFS is great as long as it works. But when you get into trouble, all the help you can find is welcome…

While investigating an ADFS issue, I found the AD FS Help website:

This website combines a list of both online and offline tools that can help you configure, customize and troubleshoot your AD FS instance.

Thursday, July 29, 2021

GraphQL HotChocolate 11 - Updated Application Insights monitoring

A few months ago, I blogged about integrating Application Insights in HotChocolate to monitor the executed GraphQL queries and mutations. In HotChocolate 11, the diagnostics system has been rewritten and the code I shared in that post no longer works. Here is, finally, the updated post I promised on how to achieve this in HotChocolate 11.

Creating our own DiagnosticEventListener

  • Starting from HotChocolate 11, your diagnostic class should inherit from DiagnosticEventListener. We still inject the Application Insights TelemetryClient in the constructor:

Remark: There seems to be a problem with dependency injection in HotChocolate 11; this problem was fixed in HotChocolate 12. I'll show you a workaround at the end of this article.

  • In this class you need to override at least the ExecuteRequest method:
  • To track the full lifetime of a request, we need to create a class that implements the IActivityScope interface. In the constructor of this class, we put all our initialization logic:
  • I'm using 2 small helper methods GetHttpContextFrom and GetOperationIdFrom:
  • In the Dispose() method, we clean up all resources and send the telemetry data to Application Insights for this request:
  • Here we are using 1 extra helper method HandleErrors:

Here is the full code:
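The original snippets are not reproduced here, so as a condensed sketch (the HotChocolate 11 API surface is summarized from memory; the class and helper names such as RequestActivityScope are my own, not from the original post):

```csharp
using System;
using HotChocolate.Execution;
using HotChocolate.Execution.Instrumentation;
using Microsoft.ApplicationInsights;
using Microsoft.ApplicationInsights.DataContracts;

// Sketch: an Application Insights listener for HotChocolate 11.
public class ApplicationInsightsDiagnosticEventListener : DiagnosticEventListener
{
    private readonly TelemetryClient _telemetryClient;

    public ApplicationInsightsDiagnosticEventListener(TelemetryClient telemetryClient)
    {
        _telemetryClient = telemetryClient;
    }

    // Called once per GraphQL request; the returned scope is disposed
    // by HotChocolate when the request completes.
    public override IActivityScope ExecuteRequest(IRequestContext context)
    {
        return new RequestActivityScope(context, _telemetryClient);
    }

    private sealed class RequestActivityScope : IActivityScope
    {
        private readonly IRequestContext _context;
        private readonly TelemetryClient _telemetryClient;
        private readonly IOperationHolder<RequestTelemetry> _operation;

        public RequestActivityScope(IRequestContext context, TelemetryClient telemetryClient)
        {
            _context = context;
            _telemetryClient = telemetryClient;
            // All initialization logic lives in the constructor.
            _operation = telemetryClient.StartOperation<RequestTelemetry>("GraphQL request");
        }

        public void Dispose()
        {
            // Clean up and send the telemetry for this request.
            _operation.Telemetry.Name =
                _context.Request.OperationName ?? "GraphQL request";
            if (_context.Result is IQueryResult result && result.Errors?.Count > 0)
            {
                _operation.Telemetry.Success = false;
            }
            _telemetryClient.StopOperation(_operation);
        }
    }
}
```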

Configuring the DiagnosticEventListener

Before we can use this listener, we need to register it in our Startup.cs file:
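The registration itself is not shown here; in HotChocolate 12 style it looks roughly like this (the listener class name is a placeholder for your own):

```csharp
// In Startup.ConfigureServices - HotChocolate 12+ only (see the remark below).
services
    .AddGraphQLServer()
    .AddQueryType<Query>()
    .AddDiagnosticEventListener(sp =>
        new ApplicationInsightsDiagnosticEventListener(
            sp.GetRequiredService<TelemetryClient>()));
```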

Remark: Notice that we are using an overload of the AddDiagnosticEventListener method to resolve and pass the TelemetryClient instance to the listener. If you don’t use this overload, nothing gets injected and you end up with an error. As mentioned, this code only works starting from HotChocolate 12. For HotChocolate 11, check out the workaround below.

Workaround for HotChocolate 11

In HotChocolate 11, when you try to use the ServiceProvider instance inside the listener, you don’t get access to the application-level services. This means that you cannot resolve the TelemetryClient. As a hack, we can build an intermediate ServiceProvider instance and use that instead:
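A sketch of this hack (the listener class name is a placeholder for your own):

```csharp
// HotChocolate 11 workaround: the provider passed to the factory only exposes
// schema-level services, so build an intermediate provider from the
// IServiceCollection and resolve the TelemetryClient from that instead.
// Note: BuildServiceProvider creates a second container with its own
// singletons - acceptable as a hack here, not a general-purpose pattern.
services
    .AddGraphQLServer()
    .AddQueryType<Query>()
    .AddDiagnosticEventListener(_ =>
        new ApplicationInsightsDiagnosticEventListener(
            services.BuildServiceProvider()
                    .GetRequiredService<TelemetryClient>()));
```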

Wednesday, July 28, 2021

dotnet monitor–Run as a sidecar in a Kubernetes cluster–Part 3

A small update to yesterday's post: if your Kubernetes cluster is not exposed publicly, you can also choose to just disable the security check by adding the ‘--no-auth’ argument.

Here is the updated yaml file:
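The full manifest isn't repeated here, but the relevant sidecar fragment would look something like this (image tag is an assumption):

```yaml
# dotnet-monitor sidecar fragment; the image tag is an assumption.
- name: dotnet-monitor
  image: mcr.microsoft.com/dotnet/monitor:5.0.0-preview.5
  # --no-auth disables the API key check; only do this when the
  # endpoint is not exposed publicly.
  args: ["--urls", "http://*:52323", "--no-auth"]
  ports:
  - containerPort: 52323
```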

Tuesday, July 27, 2021

dotnet monitor–Run as a sidecar in a Kubernetes cluster–Part 2

Last week I blogged about how you can run dotnet monitor as a sidecar in your Kubernetes cluster. Although the yaml file I shared worked on my local cluster (inside Minikube), it didn’t work when I tried to deploy it to AKS. Nothing happened when I tried to connect to the specified URLs.

To fix this I had to take multiple steps:

  • First I had to explicitly set the ‘--urls’ argument inside the manifest:
  • Now I was able to connect to the url but it still failed. When I took a look at the logs I noticed the following message:

{"Timestamp":"2021-07-27T18:48:29.6522095Z","EventId":7,"LogLevel":"Information","Category":"Microsoft.Diagnostics.Tools.Monitor.ApiKeyAuthenticationHandler","Message":"MonitorApiKey was not authenticated. Failure message: API key authentication not configured.","State":{"Message":"MonitorApiKey was not authenticated. Failure message: API key authentication not configured.","AuthenticationScheme":"MonitorApiKey","FailureMessage":"API key authentication not configured.","{OriginalFormat}":"{AuthenticationScheme} was not authenticated. Failure message: {FailureMessage}"},"Scopes":[{"Message":"ConnectionId:0HMAH5RL3D6BM","ConnectionId":"0HMAH5RL3D6BM"},{"Message":"RequestPath:/processes RequestId:0HMAH5RL3D6BM:00000001, SpanId:|fe3ec0c2-46980a5b9b2602e2., TraceId:fe3ec0c2-46980a5b9b2602e2, ParentId:","RequestId":"0HMAH5RL3D6BM:00000001","RequestPath":"/processes","SpanId":"|fe3ec0c2-46980a5b9b2602e2.","TraceId":"fe3ec0c2-46980a5b9b2602e2","ParentId":""}]}

  • We need to create an API key secret and mount it as a volume to our sidecar. Here is the code to generate a secret:
kubectl create secret generic apikey \
  --from-literal=ApiAuthentication__ApiKeyHash=$hash \
  --from-literal=ApiAuthentication__ApiKeyHashType=SHA256 \
  --dry-run=client -o yaml \
  | kubectl apply -f -
  • Now we need to mount the secret as a volume. Here is the updated manifest:
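The full manifest isn't reproduced here; the relevant fragments, with the secret mounted where dotnet-monitor reads its file-based configuration (the mount path and image tag are assumptions based on the dotnet-monitor documentation), look like this:

```yaml
# Fragments only: mount the 'apikey' secret into the dotnet-monitor sidecar.
containers:
- name: dotnet-monitor
  image: mcr.microsoft.com/dotnet/monitor:5.0.0-preview.5
  args: ["--urls", "http://*:52323"]
  volumeMounts:
  - name: apikey
    # dotnet-monitor picks up key-per-file configuration from this path,
    # so ApiAuthentication__ApiKeyHash etc. become configuration values.
    mountPath: /etc/dotnet-monitor
volumes:
- name: apikey
  secret:
    secretName: apikey
```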

If you want to learn more, I can recommend the following video as a good introduction:

Monday, July 26, 2021

NDepend–DDD Rule

A few weeks ago I was contacted by Patrick Smacchia, the creator of NDepend, asking if I would check out the latest edition of their code quality tool. As I had an upcoming software audit assignment planned, I thought it would be a great occasion to see what NDepend brings to the table and how it can help me improve my understanding of an unfamiliar codebase.

NDepend offers a lot of rules that are evaluated against your code. These rules can help you identify all kinds of issues in your code.

If you want to learn more about this feature check out the following video:

 

All these rules are created using CQLinq (the code query language of NDepend) and can be customized to your needs (and the specifics of your project).

One rule that got my interest was the ‘DDD - ubiquitous language check’. This rule allows you to check if the correct domain language terms are used. It is disabled by default (because it should be updated to reflect your domain language).

Let’s see how to update this rule and enable it:

  • Open up the Queries and Rules explorer in Visual NDepend:
  • Browse to the Naming Conventions section:
  • On the right you’ll find the DDD rule in the list of rules

  • Check the checkbox to enable the rule

  • Click on the rule to open the edit window. Here you can update the CQLinq query to make it correspond with the ubiquitous language of your domain

  • After changing the rule, click on the ‘Save’ icon to start using the updated rules

Friday, July 23, 2021

AKS–Limit ranges

Last week, we got into problems when booting up our AKS cluster (we shut the development cluster down every night to save costs). Instead of green lights, our Neo4j database refused to run. In the logs, we noticed the following error message:

ERROR Invalid memory configuration - exceeds physical memory.

Let me share what caused this error.

Maybe you’ve read my article about resource limits in Kubernetes. There I talked about the fact that you can set resource limits at the container level.

What I didn’t mention in the article is that you can also configure default limits at the namespace level through limit ranges.

From the documentation:

A LimitRange provides constraints that can:

  • Enforce minimum and maximum compute resources usage per Pod or Container in a namespace.
  • Enforce minimum and maximum storage request per PersistentVolumeClaim in a namespace.
  • Enforce a ratio between request and limit for a resource in a namespace.
  • Set default request/limit for compute resources in a namespace and automatically inject them to Containers at runtime.

So if you don’t configure resource limits and/or requests at the container level, you can still set them at the namespace level.

This is exactly what we did; here are the limit ranges that are currently in place:
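The actual manifest is not reproduced here, but a LimitRange with a 512Mi default memory limit, the value that capped our Neo4j pods, looks roughly like this (the name and CPU values are illustrative):

```yaml
# Sketch of a namespace-level LimitRange; cpu values and name are illustrative.
apiVersion: v1
kind: LimitRange
metadata:
  name: default-limits
spec:
  limits:
  - type: Container
    default:            # limits injected when a container specifies none
      memory: 512Mi
      cpu: 500m
    defaultRequest:     # requests injected when a container specifies none
      memory: 256Mi
      cpu: 250m
```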

And it is these (default) limits that got our Neo4j instance into trouble. Although enough memory was available in the cluster, the container was by default limited to only 512MB, which is insufficient to run our Neo4j cluster. The solution was to change our Helm chart to assign more memory to the Neo4j pods.

When configuring resource limits, settings at the pod/container level always supersede settings at the namespace level.

Thursday, July 22, 2021

Azure Kubernetes Service–Failed to acquire a token

When invoking ‘kubectl’, it failed with the following error message:

PS /home/bart> kubectl apply -f ./example.yaml

E0720 07:58:14.668222     182 azure.go:154] Failed to acquire a token: unexpected error when refreshing token: refreshing token: adal: Refresh request failed. Status Code = '400'. Response body: {"error":"invalid_grant","error_description":"AADSTS700082: The refresh token has expired due to inactivity. The token was issued on 2021-03-31T13:22:18.9100852Z and was inactive for 90.00:00:00.\r\nTrace ID: 68f8e37d-4d18-4e7d-a3e6-b11291831a02\r\nCorrelation ID: 65ee9420-d6f9-4a7c-8214-a82756c7ecc8\r\nTimestamp: 2021-07-20 07:58:14Z","error_codes":[700082],"timestamp":"2021-07-20 07:58:14Z","trace_id":"68f8e37d-4d18-4e7d-a3e6-b11291831a02","correlation_id":"65ee9420-d6f9-4a7c-8214-a82756c7ecc8","error_uri":"https://login.microsoftonline.com/error?code=700082"}

The error is self-explanatory: my refresh token had expired and, as a consequence, it was not possible to get a new access token.

But how can we fix this?

We need to re-invoke the az aks get-credentials command. You’ll have to authenticate again after which the credentials will be downloaded and available in the Kubernetes CLI.

az aks get-credentials --resource-group myResourceGroup --name myAKSCluster

Wednesday, July 21, 2021

Passing a cancellation token to an IAsyncEnumerable

I talked about the concept of asynchronous streams before. The introduction of the IAsyncEnumerable interface allows us to combine the richness of LINQ with the readability of async/await to asynchronously access databases, microservices and remote APIs.

As enumerating an async stream typically results in remote asynchronous operations, it can take a while to complete. So how can we cancel these async operations?

To support this use case, IAsyncEnumerable<T> has an extension method WithCancellation(). The CancellationToken will be passed on to the IAsyncEnumerable<T>.GetAsyncEnumerator method.
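As a small sketch (the stream and helper names are mine):

```csharp
using System;
using System.Collections.Generic;
using System.Runtime.CompilerServices;
using System.Threading;
using System.Threading.Tasks;

// Cancelling the enumeration of an async stream:
using var cts = new CancellationTokenSource(TimeSpan.FromSeconds(5));

await foreach (var order in GetOrdersAsync().WithCancellation(cts.Token))
{
    Console.WriteLine(order);
}

// The token flows into GetAsyncEnumerator; an async iterator receives it
// through the [EnumeratorCancellation] attribute:
static async IAsyncEnumerable<string> GetOrdersAsync(
    [EnumeratorCancellation] CancellationToken cancellationToken = default)
{
    for (var i = 0; i < 100; i++)
    {
        await Task.Delay(100, cancellationToken); // honors cancellation
        yield return $"Order {i}";
    }
}
```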

More information about what’s happening behind the scenes can be found here.

Remark: There are some further improvements on the way in ASP.NET Core 6 regarding consuming IAsyncEnumerable streams as mentioned in this post: https://www.tpeczek.com/2021/07/aspnet-core-6-and-iasyncenumerable.html.

Tuesday, July 20, 2021

Kubernetes–What is the difference between resource requests and limits?

In Kubernetes it is a best practice to configure resource limits for your containers. VS Code will even warn you if it can't detect resource limits in your manifest files:

Setting resource limits prevents a container from consuming too many resources and impacting other workloads. When a container exceeds its memory limit, Kubernetes terminates it; CPU usage above the limit is throttled instead. This helps keep the cluster healthy and stable.

The most common resources to specify are CPU and memory, but others exist.

Here is a short example on how to configure this at the container level:
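The original snippet is not reproduced here; a minimal equivalent matching the 250m/512Mi values used in this post (the container name and image are placeholders) would be:

```yaml
# Container-level resource limits; name and image are placeholders.
containers:
- name: example-app
  image: example/app:1.0
  resources:
    limits:
      cpu: 250m        # 250 milliCPU = 1/4th of a vCPU/core
      memory: 512Mi    # 512 MiB
```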

In the example above, CPU usage is limited to 250m or 250 milliCPU (1/4th of a vCPU/core) and memory usage is limited to 512Mi or 512MiB.

Next to resource limits, it is also possible to configure resource requests. Setting a resource request indicates the amount of that resource that you expect the container will use. Kubernetes will use this information when determining which node to schedule the pod on.

A node will be ineligible to host a new container if the sum of the workload requests, including the new container’s request, exceeds the available capacity. This remains the case even if the real-time memory usage is actually very low. This is why it is best to keep the request values as low as possible and to set the limits as high as possible (without bringing other workloads into trouble). Using a low resource request value gives your pods the best chance of getting scheduled onto a node.

Limits are also configured in the resource section of your manifest file:

More information: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/

Monday, July 19, 2021

Windows Terminal–Add Git Bash

I’ve started to love Windows Terminal. So it would be nice if I could add Git Bash to the list of possible terminal experiences.

Before you can start integrating Git Bash, make sure that it is available on your system. You can install it as part of the Git for Windows experience.

Open up Windows Terminal and select the down arrow in the tab bar at the top. Select Settings from the dropdown menu.

This opens the Settings tab for Windows Terminal.

I first tried to add a new profile for Git Bash using the Add new option on the left, but I couldn’t get it working.

What I did instead was to choose the Open JSON file option.

Now I could directly alter the configuration file. In the profiles list I added the following configuration:
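The exact entry I used is not shown here, but a typical Git Bash profile entry looks like this (the guid is any freshly generated GUID, the paths assume a default Git for Windows install, and Windows Terminal's settings.json tolerates comments):

```json
// Added to the "profiles" -> "list" array in settings.json.
{
    "guid": "{00000000-0000-0000-0000-000000000001}",
    "name": "Git Bash",
    "commandline": "\"C:\\Program Files\\Git\\bin\\bash.exe\" -i -l",
    "icon": "C:\\Program Files\\Git\\mingw64\\share\\git\\git-for-windows.ico",
    "startingDirectory": "%USERPROFILE%"
}
```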

Friday, July 16, 2021

.NET Conf–Focus on F#

Great news for all F# developers: a new .NET Conf is coming up at the end of July with only one topic on the agenda: F#!

.NET Conf: Focus on F# is a free, one-day livestream event that features speakers from the community and Microsoft teams working on and using the F# language. Learn how F# lets you write succinct, performant, and robust code backed by a powerful type system and the .NET runtime you can trust to build mission-critical software.

Hear from language designers and experts using F# in a variety of ways from building minimal web APIs to performing data manipulation, interactive programming, machine learning, and data science.

Tune in here on July 29, ask questions live, and learn about what F# can do.

 

Thursday, July 15, 2021

Front-End Performance Checklist 2021

For everyone who cares about the performance of their web applications (don’t we all?), this is really, really a must-read.

It took me 2 days to go through all the material, links and related articles but boy was it worth it!

I hope that the teams I work with are ready because I will have a long list of questions and remarks coming up!

Let’s make 2021… fast! An annual front-end performance checklist (available as PDF, Apple Pages, MS Word), with everything you need to know to create fast experiences on the web today, from metrics to tooling and front-end techniques.

Wednesday, July 14, 2021

NDepend–Find out where a library is used–Part 2

A few weeks ago I was contacted by Patrick Smacchia, the creator of NDepend, asking if I would check out the latest edition of their code quality tool. As I had an upcoming software audit assignment planned, I thought it would be a great occasion to see what NDepend brings to the table and how it can help me improve my understanding of an unfamiliar codebase.

Yesterday I shared how I was able to generate the CQLinq query I needed through the class browser. While writing that post, I noticed that NDepend also suggested another feature to get there.

When creating a new query, the editor suggests having a look at the code search feature:

So let’s do that today.

I opened Search (View –> Search View).

Let’s see if we can find all Repository objects. I changed the ‘Element’ value to ‘Type’ and entered ‘Repository’ in the search field. NDepend picks this up and generates a CQLinq query for me.

Tuesday, July 13, 2021

NDepend–Find out where a library is used directly and indirectly

A few weeks ago I was contacted by Patrick Smacchia, the creator of NDepend, asking if I would check out the latest edition of their code quality tool. As I had an upcoming software audit assignment planned, I thought it would be a great occasion to see what NDepend brings to the table and how it can help me improve my understanding of an unfamiliar codebase.

In preparation for the software audit, the software architect told me that one of the problems with their current codebase is that the data access logic is spread out everywhere throughout the code. They had already started an effort to isolate the data access logic, but he had no clue how far this had progressed.

Let’s see if we can find the answer using NDepend.

I had read in the documentation that NDepend offers the ability to query the code model using CQLinq. As a developer I like to code, so let’s try that first…

I opened VisualNDepend, loaded the NDepend project I wanted to investigate and hit CTRL+R (or go through View –> View Editor Panel).

Now I could start writing queries using the CQLinq syntax. If your query compiles successfully, the results are immediately shown:

Although the syntax is easy to read and understand, I couldn’t figure out the correct query to write to return all classes that were using the RavenDB client (the data store that was used). This was how far I got just by using the provided IntelliSense:

Luckily NDepend is there to help me. There are multiple ways to let NDepend help you get the correct query.

Let’s see one way…

I switched to the Class Browser (View –> Class Browser).

There I scrolled to the Raven.Client.Lightweight assembly. I right-clicked on it and chose ‘Select Types’ and ‘…That are using me Directly’.

And there it was, the query I was trying to write myself.

Now I can see all the places where the data access logic was used directly. Nice!

Monday, July 12, 2021

ASP.NET Core–Generate URL using Url.Action

I needed to generate a URL that I could return inside an ASP.NET Controller.

Thanks to the built-in UrlHelper that is something that even I can do. At least that was what I thought…

This is the code I tried:
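The snippet itself is not included here; reconstructed from the description, the failing call looked something like this (ProductController and GetById are my placeholder names):

```csharp
// Reconstructed example - passing the full controller type name fails:
var url = Url.Action(
    nameof(ProductController.GetById),   // action name
    nameof(ProductController),           // "ProductController" - wrong!
    new { id = product.Id });
// url is null: routing knows the controller as "Product",
// not "ProductController".
```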

Surprisingly, this didn’t work. It turns out that the controller argument needs the controller name without the ‘Controller’ suffix. So in my example above, ‘ProductController’ should be ‘Product’ instead.

I changed it to this:
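Reconstructed in the same spirit, the fix strips the suffix from the type name so renames are still caught by the compiler:

```csharp
// Derive "Product" from the type name at runtime; still refactor-friendly.
var url = Url.Action(
    nameof(ProductController.GetById),
    nameof(ProductController).Replace("Controller", string.Empty),
    new { id = product.Id });
```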

Still refactor-friendly, but unfortunately a lot less readable.

Friday, July 9, 2021

NDepend–Don’t extract it in your Program Files!

A few weeks ago I was contacted by Patrick Smacchia, the creator of NDepend, asking if I would check out the latest edition of their code quality tool. As I had an upcoming software audit assignment planned, I thought it would be a great occasion to see what NDepend brings to the table and how it can help me improve my understanding of an unfamiliar codebase.

After replying to Patrick, I got a link to the Professional edition, but you can download a trial version here. NDepend doesn’t have an installer but is provided as a zip file.

I extracted it to my %Program Files% folder and had a look at what was in there:

VisualNDepend.exe looked promising, so I double-clicked on that executable. Unfortunately nothing happened!?

Hmmm, maybe I should not be so eager and first read the README.txt file available in the zip?

OK, the first 2 lines in the README immediately point out my mistake:

This file contains information about NDepend files when unzipped.
Don't unzip these files in '%ProgramFiles%\NDepend'.
This might provoke problems because of Windows protection.

And indeed after extracting it to another folder, I could run the executable. Time to start our journey! (But that is for another post)

Remark:

In case you really want to install NDepend in your %Program Files%, you should right click on the zip file before extracting and click on Properties.

In the Security section at the bottom, check the ‘Unblock’ checkbox and choose ‘Apply’. If you now extract it to your %Program Files% it should work.

Thursday, July 8, 2021

dotnet monitor–Run as a sidecar in a Kubernetes cluster

Yesterday I blogged about ‘dotnet monitor’ and how it can help you to collect diagnostic artifacts at runtime in a uniform way.

Let’s have a look today at how to use ‘dotnet monitor’ inside a Kubernetes cluster.

When running in a cluster, it is recommended to run the dotnet-monitor container as a sidecar alongside your application container in the same pod.

Here is an example manifest on how to set this up:
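The original manifest is not reproduced here; a sketch of the setup (names and image tags are illustrative) could look like this. The key part is the shared emptyDir volume mounted at /tmp in both containers, where the .NET diagnostic IPC sockets are created:

```yaml
# Sketch of a dotnet-monitor sidecar deployment; names/images are illustrative.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: dotnet-monitor-example
spec:
  replicas: 1
  selector:
    matchLabels:
      app: dotnet-monitor-example
  template:
    metadata:
      labels:
        app: dotnet-monitor-example
    spec:
      containers:
      - name: app
        image: example/my-dotnet-app:1.0      # your application image
        volumeMounts:
        - name: diagnostics
          mountPath: /tmp                     # shared with the sidecar
      - name: dotnet-monitor
        image: mcr.microsoft.com/dotnet/monitor:5.0.0-preview.5
        ports:
        - containerPort: 52323
        volumeMounts:
        - name: diagnostics
          mountPath: /tmp
      volumes:
      - name: diagnostics
        emptyDir: {}
```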

Most important to notice in the manifest is that you need to share a volume between the application container and the sidecar.

Let’s deploy this manifest:

$ kubectl apply -f ./dotnetmonitor.yaml

Once your pods are up and running, we need to use port forwarding to be able to access the diagnostics endpoint from our local machine.

To do this, we first need to find the name of the pod:

$ kubectl get pod -l app=dotnet-monitor-example
NAME                                 READY   STATUS    RESTARTS   AGE
dotnet-monitor-example-78997f8fdf-nrhp7   2/2     Running   0          5m

Now that we know the pod name, we can forward traffic using the kubectl port-forward command:

$ kubectl port-forward pods/dotnet-monitor-example-78997f8fdf-nrhp7 52323:52323

Now we can invoke the different endpoints the same way as before:

$ curl -s http://localhost:52323/processes | jq

Remark: Although the example above worked on my local cluster, I got into trouble when I tried to use the same steps on AKS. I'll share another post to explain where I got into trouble and how I fixed it.

Wednesday, July 7, 2021

dotnet monitor - Getting started

With the announcement that ‘dotnet monitor’ graduated to a supported tool in the .NET ecosystem, I put it on my list of tools to have a look at. Now that the summer vacation has started, I finally found some time to try it out.

What is ‘dotnet monitor’?

Before I dive in on how to start using dotnet monitor, let’s explain what it does:

dotnet monitor aims to simplify collecting diagnostic artifacts (e.g. logs, traces, process dumps) by exposing a consistent REST API regardless of where your application runs. This makes your application easily debuggable no matter if it is running locally, in a Docker container, or in Kubernetes.

So dotnet monitor gives you a uniform and easily accessible way to check what is going on inside your .NET Core application.

Getting started

We’ll have a look at how we can use dotnet monitor in a sidecar container in another post; for now, let’s focus on using it as a local tool.

Dotnet monitor is distributed as a .NET Core global tool and can be installed through NuGet:

dotnet tool install -g dotnet-monitor --version 5.0.0-preview.5.*

Now we can invoke dotnet monitor:

dotnet monitor collect

Remark: Calling this command didn’t work the first time. I got a SocketException when I tried to run it. After a reboot, it worked as expected.

Now you can browse to ‘https://localhost:52323/processes’ and see the list of active .NET Core processes:

[{"pid":13212,"uid":"fc8be99e-ac72-4e1b-8519-2b50f81093a2","name":"dotnet"},{"pid":8692,"uid":"25913bc1-e7a4-4680-9538-3254fd7a057b","name":"iisexpress"}]

Once you know the correct process id, you can further explore these processes in more detail using one of the following endpoints:
  • /processes
  • /dump/{pid?}
  • /gcdump/{pid?}
  • /trace/{pid?}
  • /logs/{pid?}
  • /metrics

More information: https://github.com/dotnet/dotnet-monitor/tree/main/documentation

 

Tuesday, July 6, 2021

Using the correct HTTP status codes when building REST APIs

One of the important elements in building a good REST API is the correct usage of the HTTP status codes.

A help in identifying the correct status code to return is the following ‘HTTP decision diagram’.

This diagram describes the resolution of HTTP response status codes, given various headers. It follows the indications in RFC7230 RFC7231 RFC7232 RFC7233 RFC7234 RFC7235, and fills in the void where necessary.

You can find this diagram here: https://github.com/for-GET/http-decision-diagram

Monday, July 5, 2021

ASP.NET Core–Read data from the HttpClient in a memory efficient way

I created a service that allows users to download all address data for Belgium in CSV format. By default, fetching this data through the HttpClient would result in having the full blob in memory.

But thanks to the power of IAsyncEnumerable<> and setting the HttpCompletionOption.ResponseHeadersRead flag when calling GetAsync(), I could get this done in a very efficient way:
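The original snippet is not included here; a sketch of the approach (URL and method names are illustrative):

```csharp
using System.Collections.Generic;
using System.IO;
using System.Net.Http;
using System.Runtime.CompilerServices;
using System.Threading;
using System.Threading.Tasks;

// Sketch: stream a large CSV response line by line instead of buffering it all.
public static async IAsyncEnumerable<string> GetAddressLinesAsync(
    HttpClient client,
    [EnumeratorCancellation] CancellationToken cancellationToken = default)
{
    // ResponseHeadersRead: return as soon as the headers arrive;
    // the body is streamed afterwards.
    using var response = await client.GetAsync(
        "https://example.org/addresses.csv",     // illustrative URL
        HttpCompletionOption.ResponseHeadersRead,
        cancellationToken);
    response.EnsureSuccessStatusCode();

    using var stream = await response.Content.ReadAsStreamAsync();
    using var reader = new StreamReader(stream);

    while (!reader.EndOfStream)
    {
        yield return await reader.ReadLineAsync();
    }
}
```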

By setting the HttpCompletionOption.ResponseHeadersRead flag, the client only waits until the headers are received and then continues execution. So I could start processing the results before all data was downloaded.

Friday, July 2, 2021

Understanding the GraphQL specification

If you are really interested in GraphQL, reading through the spec is certainly a good idea.

Although really well written, it can still be not that fun to go through. As an alternative, you could have a look at the blog series written by Loren Sands-Ramshaw, in which he walks you through the most important parts of the specification in a more digestible way:

 

ASP.NET Core: CreatedAtAction - No route matches the supplied values

After creating a new item, I wanted to return the location where the newly created item could be fetched. So I used the CreatedAtActionResult to specify the action.

Here is the code I was using:

And here is the related GetById method:

I wouldn’t be writing this blog post if everything worked as expected. When calling the CreateAsync method, I got the following error message:

No route matches the supplied values.

I couldn’t figure out what I was doing wrong until I took a look at the different overloads of CreatedAtAction:

If you look at the code above, you'll notice that I’m using the second overload. This overload expects 2 parameters:

  • actionName: the name of the action to use for generating the URL
  • value: the content value to format in the entity body

I was passing a ‘new { id = product.Id }’ value, expecting that it would be matched to the ‘id’ parameter that the route was expecting. This turned out not to be the case, as the second overload uses the passed object as the body of the response.

To get the behavior I want, I need to use the first overload and pass both a route-values object and a value object:
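Reconstructed from the description (types and member names are my placeholders), the corrected call looks like this:

```csharp
// Sketch of the fix - the three-parameter overload:
// action name, route values, and the response body.
[HttpPost]
public async Task<IActionResult> CreateAsync(Product product)
{
    await _repository.SaveAsync(product);    // illustrative persistence call

    return CreatedAtAction(
        nameof(GetById),             // action that serves the new resource
        new { id = product.Id },     // route values used to build the URL
        product);                    // content value for the entity body
}
```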