Friday, July 24, 2020

Kubernetes–Troubleshooting ImagePullBackOff errors

After deploying a new pod, I couldn’t access it. Time to take a look at what was going on:

C:\Users\BaWu\Desktop>kubectl get pods --all-namespaces

NAMESPACE              NAME                                                              READY   STATUS             RESTARTS   AGE

kube-system            addon-http-application-routing-default-http-backend-7fc6fc27bj2   1/1     Running            0          93m

kube-system            addon-http-application-routing-external-dns-6c6465cf6f-hqn2w      1/1     Running            0          93m

kube-system            addon-http-application-routing-nginx-ingress-controller-668m4rb   1/1     Running            0          93m

kube-system            azure-cni-networkmonitor-cn57j                                    1/1     Running            0          10d

kube-system            azure-ip-masq-agent-4sjmw                                         1/1     Running            0          10d

kube-system            coredns-544d979687-5c7rt                                          1/1     Running            0          10d

kube-system            coredns-544d979687-rbbh9                                          1/1     Running            0          10d

kube-system            coredns-autoscaler-78959b4578-jdr24                               1/1     Running            0          10d

kube-system            dashboard-metrics-scraper-5f44bbb8b5-dfw47                        1/1     Running            0          10d

kube-system            kube-proxy-8d5sr                                                  1/1     Running            0          10d

kube-system            kubernetes-dashboard-785654f667-2gcbn                             1/1     Running            0          10d

kube-system            metrics-server-85c57978c6-bzsc2                                   1/1     Running            0          10d

kube-system            omsagent-rs-5f579fcfd-9pqpf                                       0/1     ImagePullBackOff   0          2d10h

kube-system            omsagent-rs-6b6cdf78fc-26mpb                                      1/1     Running            1531       10d

kube-system            omsagent-wfgtp                                                    0/1     ImagePullBackOff   0          2d10h

kube-system            tunnelfront-f7bd7ccb-t7g95                                        2/2     Running            1          6d20h

kubernetes-dashboard   dashboard-metrics-scraper-c79c65bb7-w9thj                         0/1     ImagePullBackOff   0          27m

kubernetes-dashboard   kubernetes-dashboard-56484d4c5-c4cwv                              0/1     ImagePullBackOff   0          27m

The deployment turned out to be in the ImagePullBackOff state. There can be various reasons why this is the case. Let’s figure out what could cause this by calling describe. This gave us a lot of extra information:

C:\Users\BaWu\Desktop>kubectl describe pod kubernetes-dashboard-56484d4c5-c4cwv --namespace=kubernetes-dashboard

Name:         kubernetes-dashboard-56484d4c5-c4cwv

Namespace:    kubernetes-dashboard

Priority:     0

Node:         aks-agentpool-27676582-vmss000000/10.9.1.5

Start Time:   Fri, 24 Jul 2020 12:53:52 +0200

Labels:       k8s-app=kubernetes-dashboard

              pod-template-hash=56484d4c5

Annotations:  <none>

Status:       Pending

IP:           10.9.1.21

IPs:

  IP:           10.9.1.21

Controlled By:  ReplicaSet/kubernetes-dashboard-56484d4c5

Containers:

  kubernetes-dashboard:

    Container ID:

    Image:         kubernetesui/dashboard:v2.0.0

    Image ID:

    Port:          8443/TCP

    Host Port:     0/TCP

    Args:

      --auto-generate-certificates

      --namespace=kubernetes-dashboard

    State:          Waiting

      Reason:       ImagePullBackOff

    Ready:          False

    Restart Count:  0

    Liveness:       http-get https://:8443/ delay=30s timeout=30s period=10s #success=1 #failure=3

    Environment:    <none>

    Mounts:

      /certs from kubernetes-dashboard-certs (rw)

      /tmp from tmp-volume (rw)

      /var/run/secrets/kubernetes.io/serviceaccount from kubernetes-dashboard-token-bxq7s (ro)

Conditions:

  Type              Status

  Initialized       True

  Ready             False

  ContainersReady   False

  PodScheduled      True

Volumes:

  kubernetes-dashboard-certs:

    Type:        Secret (a volume populated by a Secret)

    SecretName:  kubernetes-dashboard-certs

    Optional:    false

  tmp-volume:

    Type:       EmptyDir (a temporary directory that shares a pod's lifetime)

    Medium:

    SizeLimit:  <unset>

  kubernetes-dashboard-token-bxq7s:

    Type:        Secret (a volume populated by a Secret)

    SecretName:  kubernetes-dashboard-token-bxq7s

    Optional:    false

QoS Class:       BestEffort

Node-Selectors:  kubernetes.io/os=linux

Tolerations:     node-role.kubernetes.io/master:NoSchedule

                 node.kubernetes.io/not-ready:NoExecute for 300s

                 node.kubernetes.io/unreachable:NoExecute for 300s

Events:

  Type     Reason     Age                  From                                        Message

  ----     ------     ----                 ----                                        -------

  Normal   Scheduled  30m                  default-scheduler                           Successfully assigned kubernetes-dashboard/kubernetes-dashboard-56484d4c5-c4cwv to aks-agentpool-27676582-vmss000000

  Normal   Pulling    28m (x4 over 30m)    kubelet, aks-agentpool-27676582-vmss000000  Pulling image "kubernetesui/dashboard:v2.0.0"

  Warning  Failed     28m (x4 over 30m)    kubelet, aks-agentpool-27676582-vmss000000  Failed to pull image "kubernetesui/dashboard:v2.0.0": rpc error: code = Unknown desc = Error response from daemon: Get https://registry-1.docker.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)

  Warning  Failed     28m (x4 over 30m)    kubelet, aks-agentpool-27676582-vmss000000  Error: ErrImagePull

  Normal   BackOff    27m (x6 over 30m)    kubelet, aks-agentpool-27676582-vmss000000  Back-off pulling image "kubernetesui/dashboard:v2.0.0"

  Warning  Failed     26s (x117 over 30m)  kubelet, aks-agentpool-27676582-vmss000000  Error: ImagePullBackOff

This indicates that the Kubernetes cluster cannot talk to https://registry-1.docker.io/v2/. This makes sense as I only configured a trust with ACR.

Thursday, July 23, 2020

Kubernetes–the server could not find the requested resource

When trying to deploy the Kubernetes dashboard on an AKS cluster it failed with the following error message:

Error from server (NotFound): the server could not find the requested resource

Here was the full command I tried to execute:

C:\Users\BaWu>kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.0.0/aio/deploy/recommended.yaml

Error from server (NotFound): the server could not find the requested resource

The problem was related to the fact that kubectl only supports a skew of one minor version (older or newer) relative to kube-apiserver.

I checked the installed version using:

C:\Users\BaWu>kubectl version

Client Version: version.Info{Major:"1", Minor:"9", GitVersion:"v1.9.0", GitCommit:"925c127ec6b946659ad0fd596fa959be43f0cc05", GitTreeState:"clean", BuildDate:"2017-12-15T21:07:38Z", GoVersion:"go1.9.2", Compiler:"gc", Platform:"windows/amd64"}

Server Version: version.Info{Major:"1", Minor:"16", GitVersion:"v1.16.10", GitCommit:"89d8075525967c7a619641fabcb267358d28bf08", GitTreeState:"clean", BuildDate:"2020-06-23T02:52:37Z", GoVersion:"go1.13.9", Compiler:"gc", Platform:"linux/amd64"}

I had version 1.9 installed while the API server was on version 1.16. To solve it I had to download and install a newer version of kubectl. Instructions to do this can be found here: https://kubernetes.io/docs/tasks/tools/install-kubectl/

Wednesday, July 22, 2020

Using gRPC for your internal microservices

gRPC is a really great fit as a communication protocol between your (internal) microservices. It runs on top of HTTP/2 and gives you great performance.

One caveat is that the HTTP/2 prerequisite requires by default that all communication happens securely. So you have to set up TLS and create certificates for all your services.

But what should you do when you are using Kubernetes and use TLS termination at the ingress controller level?

A new feature announced in .NET Core 3.0 comes to the rescue. You can turn on unencrypted connections for HTTP/2 by setting the DOTNET_SYSTEM_NET_HTTP_SOCKETSHTTPHANDLER_HTTP2UNENCRYPTEDSUPPORT environment variable to 1 or by enabling it in the app context:
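A minimal sketch of the app-context option (the switch name is the documented one; where and when you call it in your startup code is up to you, as long as it runs before any HTTP/2 connections are created):

```csharp
using System;

// Allow HTTP/2 over plain-text connections.
// This is the AppContext equivalent of the environment variable above
// and must run before the first HTTP/2 client or channel is created.
AppContext.SetSwitch(
    "System.Net.Http.SocketsHttpHandler.Http2UnencryptedSupport",
    true);
```

After this switch is set, a gRPC channel can be created against a plain http:// address.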

Tuesday, July 21, 2020

Razor Class Libraries–Static Content 404 issue –Part 2

I’ll continue my journey using Razor Class Libraries in ASP.NET Core.

Here are my previous posts:

After a first colleague returned to his desk with a solution for the problem I discussed yesterday, a second colleague arrived and explained he had a similar problem.

This time I could pinpoint the issue to the following PackageReference that was (still) used in a referenced project:

<PackageReference Include="Microsoft.AspNetCore.Mvc" Version="2.2.0" />

Static files worked differently in .NET Core 2.2 Razor Class Libraries. The inclusion of the Microsoft.AspNetCore.Mvc v2.2.0 library broke the behaviour in ASP.NET Core 3.x applications. This reference is no longer needed as it is now a part of the Microsoft.AspNetCore.App framework reference.

Monday, July 20, 2020

Razor Class Libraries–Static Content 404 issue –Part 1

I’ll continue my journey using Razor Class Libraries in ASP.NET Core.

Here are my previous posts:

Today I want to share an issue a colleague got when he tried to use a Razor Class Library I created.

When he tried to load a static asset from the RCL, the asset was not found and a 404 error was returned to the browser.

It took me a while to pinpoint the issue but in the end it turned out that the following setting in his ASP.NET Core project caused the problem:

<ANCMPreConfiguredForIIS>true</ANCMPreConfiguredForIIS>

After commenting out this line in the csproj file, the static assets were loaded correctly.

I have no clue why this solved the problem as I don’t see any relation between these features…

Friday, July 17, 2020

Razor Class libraries–Clean up your content path

I’ll continue my journey using Razor Class Libraries in ASP.NET Core.

Here are my previous posts:

Today I want to write a small addition to my post from yesterday. As I mentioned yesterday to reference some content inside your razor views you need to use a path similar to _content/{LIBRARY NAME}.

This path doesn’t look so nice. Luckily you can change it to a different path name by editing the RCL project properties and adding a StaticWebAssetBasePath.
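A sketch of what that project-file change could look like (the path value matches the hypothetical example below):

```xml
<PropertyGroup>
  <!-- Replaces the default _content/{LIBRARY NAME} base path -->
  <StaticWebAssetBasePath>myownpath</StaticWebAssetBasePath>
</PropertyGroup>
```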

Now you can access your files using /myownpath/example.css.

Thursday, July 16, 2020

Razor Class Libraries–Static Content

I’ll continue my journey using Razor Class Libraries in ASP.NET Core.

Here are my previous posts:

Today I want to cover how you can use static content inside your Razor Class library.

To include static content (images, fonts, stylesheets, …) in your RCL you need to create a wwwroot folder in the class library and include any required files there:

When packing an RCL, all content in the wwwroot folder is automatically included in the package.

As this content becomes part of the DLL you cannot just reference it from the root path (“~”). Instead the path is constructed using ‘_content/{LIBRARY NAME}/’.

For example, to reference an example.css file that you stored inside an RCL named ‘Example.RCL’, the correct way to include this css file in your application becomes:
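A sketch of that reference (the file and library names come from the example above):

```html
<link rel="stylesheet" href="~/_content/Example.RCL/example.css" />
```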

Wednesday, July 15, 2020

Kubernetes- The Virtual Kubelet

If you are looking at running Kubernetes in a cloud, sooner or later you’ll hear about ‘Virtual Kubelet’. But what is it?

Let’s first go to https://virtual-kubelet.io/ and look at the definition there:

Virtual Kubelet is an open-source Kubernetes kubelet implementation that masquerades as a kubelet.

Mmm. That didn’t help a lot. But wait there is a second sentence, maybe that will explain everything:

This allows Kubernetes nodes to be backed by Virtual Kubelet providers such as serverless cloud container platforms.

Nope. Doesn’t ring a bell. Let’s try to explain this in our own words:

A Kubernetes cluster is divided into two components:

  • Control plane nodes provide the core Kubernetes services and orchestration of application workloads.
  • Nodes run your application workloads.

Kubernetes expects that a node is a virtual (or, for the vintage fans, a physical) machine. On every node a kind of agent is running that manages the containers that were created by Kubernetes and runs them on the node it manages. This agent is called a ‘kubelet’.

Okay, we are one step closer. We know what a kubelet is. But what then is a ‘virtual kubelet’?

Let’s introduce the cloud into the picture. On most cloud platforms you can not only run virtual machines; typically there are also other (higher-level) managed services, for example serverless with Azure Functions or Azure Container Instances. Virtual Kubelet is an open-source implementation of the Kubernetes kubelet with the purpose of connecting Kubernetes to other APIs. It registers itself as a node and allows us to deploy on top of other cloud-native services, not limited to virtual machines.

Aha, finally we get the picture…

Tuesday, July 14, 2020

.NET Conf “Focus on Microservices”

Every year I shout out on my blog that a new edition of .NET Conf is coming (this year November 10-12 for the .NET 5 launch).

What I was not aware of is that the organizers of .NET Conf started a series of smaller events focused on specific things you can do with .NET. There have been two editions so far; one focusing on Blazor and the other on Xamarin.

The next one is about Microservices (who could have guessed that?) on July 30, 2020.

.NET Conf: Focus on Microservices is a free, livestream event that features speakers from the community and .NET teams that are working on designing and building microservice-based applications, tools and frameworks. Learn from the experts their best practices, practical advice, as well as patterns, tools, tips and tricks for successfully designing, building, deploying and running cloud native applications at scale with .NET.

Check out the agenda and the amazing list of speakers.

Monday, July 13, 2020

Razor Class Libraries–Views not found

Last week I talked about Razor Class Libraries as a nice and easy way to package and share UI components for your ASP.NET Core MVC/Razor pages application.

Inside my Razor Class Library I had some shared ViewComponents that I placed in a Shared folder. Unfortunately when trying to use these ViewComponents inside my ASP.NET Core Web application, the ViewComponents were not found?!

It cost me some headaches before I discovered what I was doing wrong. It is really important that your shared View(Components) reside under a Views folder as this is where Razor will search for its views by convention. You can override the convention if you want, but it is probably easier to just move everything to a Views folder like I did:

Friday, July 10, 2020

MassTransit–Youtube videos

Great tip if you want to learn everything about the ins and outs of MassTransit. Chris Patterson took the time to create a great (free) video set, available here: https://www.youtube.com/playlist?list=PLx8uyNNs1ri2MBx6BjPum5j9_MMdIfM9C

Thursday, July 9, 2020

ASP.NET Core–Razor Class Libraries

A little known feature in ASP.NET Core (MVC/Razor Pages) is the support for Razor Class Libraries (RCL). Through an RCL you can package and reuse UI components like View Components or Razor Components, but also share full feature areas containing Razor views, controllers, models and so on…

Let’s get started

  • Open Visual Studio and choose Create a new project.
  • Select Razor Class Library and click Next.

  • Give the library a name and click on Create.
    • Remark 1: To avoid a file name collision with the generated view library, ensure the library name doesn't end in .Views.
    • Remark 2: Select Support pages and views if you need to support views. Otherwise only Razor components are supported.

Now you can start adding your View Components and Razor components to your RCL. When building your RCL, two DLLs are created:

  • One DLL containing all your logic
  • One DLL containing your Razor views (ends with .Views)

You can now either reference this RCL directly or include it in your project through NuGet.

What’s nice is that you can still override a view by providing the same Razor markup file (.cshtml) in your web app. The file in your web app will always take precedence.

Wednesday, July 8, 2020

MassTransit–Debugging your configuration

When configuring MassTransit there are a lot of moving parts. You can have multiple endpoints, middlewares, … that all impact your application.

To understand how your bus instance is wired together you can use the GetProbeResult method:

Before you can use the code above, you’ll need this small extension method:

More info: https://masstransit-project.com//troubleshooting/show-config.html
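The original code screenshots didn’t survive, but a minimal sketch of probing a bus could look like this (GetProbeResult and ToJsonString are the helpers the MassTransit docs describe; the bus variable is assumed to be an already-configured IBusControl):

```csharp
using System;
using MassTransit;

// Sketch: dump how the bus instance is wired together.
// GetProbeResult walks all configured endpoints and middleware;
// ToJsonString renders the result in a readable form.
var probeResult = bus.GetProbeResult();
Console.WriteLine(probeResult.ToJsonString());
```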

Tuesday, July 7, 2020

Entity Framework Core - Soft delete

The easiest way to implement soft delete in EF Core is through query filters.

This filter can be specified when creating our EF Core model:
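A sketch of such a filter, assuming a hypothetical Role entity with an IsDeleted flag:

```csharp
using Microsoft.EntityFrameworkCore;

public class AppDbContext : DbContext
{
    public DbSet<Role> Roles { get; set; }

    protected override void OnModelCreating(ModelBuilder modelBuilder)
    {
        // Global query filter: every query for Role now automatically
        // gets a "WHERE IsDeleted = 0" clause appended
        modelBuilder.Entity<Role>().HasQueryFilter(r => !r.IsDeleted);
    }
}
```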

Now every time we query for the ‘Role’ object, an extra ‘WHERE’ clause is included that filters on IsDeleted=false.

But hold your horses, we are not there yet; we also have to override the SaveChanges() and SaveChangesAsync() methods on the DbContext, otherwise the ‘Role’ entity will still be removed from the database when we call Remove().
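A sketch of such an override, again assuming a hypothetical Role entity with an IsDeleted flag (SaveChangesAsync would follow the same pattern):

```csharp
using Microsoft.EntityFrameworkCore;

public partial class AppDbContext : DbContext
{
    public override int SaveChanges()
    {
        // Intercept pending deletes and turn them into soft deletes
        foreach (var entry in ChangeTracker.Entries<Role>())
        {
            if (entry.State == EntityState.Deleted)
            {
                entry.State = EntityState.Modified;
                entry.Entity.IsDeleted = true;
            }
        }
        return base.SaveChanges();
    }
}
```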

Remark: As a possible improvement you could generate a base class or interface for all ‘soft deletable’ entities.

Monday, July 6, 2020

Quality is a team issue

I forgot where I found the following quote but I copied and printed it out as a reminder for myself:

Quality is a team issue. The most diligent developer placed on a team that just doesn’t care will find it difficult to maintain the enthusiasm needed to fix niggling problems. The problem is further exacerbated if the team actively discourages the developer from spending time on these fixes.

It remains one of the biggest lessons I learned during my software career and it manifested itself in 2 ways:

  • Super-motivated developers eager to learn new things ending in a burn-out after six months on a project. I saw really talented and motivated people (and especially these people) getting completely fed up with an organization not willing to move. I even saw people leave the IT industry because of this.
  • A negative trend in general software quality when one or more team members didn’t put the quality bar at the same level as the rest of the team. However much we tried to convince the developer to raise the bar, it unfortunately always ended in a lowering of the quality standard of the whole team instead. (See the Broken Window story)

Luckily I’ve also seen it the other way around; when everyone aims for the same quality level (no matter if you are a UX’er, developer, analyst, tester, architect, …) then miracles happen. No problem becomes too complex to tackle and speed of delivery increases without sacrificing quality.

But it all starts with one common value; that quality is a team issue…

Friday, July 3, 2020

Entity Framework - Mapping issue

I had the following entities in my domain:

And this is how it was mapped through Entity Framework:

Unfortunately Entity Framework got confused when I tried to create a new Role and RoleAssignment. It generated the wrong query for the RoleAssignment and tried to set the “Id” field to the Role id.

To fix it I had to explicitly tell Entity Framework to use the Role_Id column to map the foreign key relationship:
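The mapping screenshots are gone; a sketch of that explicit mapping with the EF6 fluent API (the entity shapes are assumptions on my part, only the Role_Id column name comes from the post):

```csharp
using System.Data.Entity;

public class MyDbContext : DbContext
{
    protected override void OnModelCreating(DbModelBuilder modelBuilder)
    {
        // Map the foreign key to the existing Role_Id column
        // instead of letting EF pick a conventional column name
        modelBuilder.Entity<RoleAssignment>()
            .HasRequired(ra => ra.Role)
            .WithMany()
            .Map(m => m.MapKey("Role_Id"));
    }
}
```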

Strange thing is that this code was working in a previous release of the application.

Thursday, July 2, 2020

Razor compilation

In .NET Core (3.x) Razor compilation happens by default at build time. This means that every time you make a change to a Razor file, you have to rebuild your application before the change becomes visible in your frontend. This is different from the .NET Framework, where Razor compilation happened at runtime.

If you have to work a lot inside Razor, build time compilation can be annoying and slow down your development process. Luckily it is possible to enable runtime compilation for your ASP.NET Core Web application.

One option you have is to enable runtime compilation at project creation:

  • Create a new ASP.NET Core Web application in Visual Studio
  • Check the Enable Razor runtime compilation check box when creating your project.

If you have an existing project, you have to take a different approach:
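For an existing project, a common approach (a sketch assuming the Microsoft.AspNetCore.Mvc.Razor.RuntimeCompilation package; the version number is illustrative) is to add the package and opt in at startup:

```xml
<ItemGroup>
  <PackageReference Include="Microsoft.AspNetCore.Mvc.Razor.RuntimeCompilation" Version="3.1.0" />
</ItemGroup>
```

```csharp
public void ConfigureServices(IServiceCollection services)
{
    // Opt in to runtime compilation of Razor views
    services.AddControllersWithViews().AddRazorRuntimeCompilation();
}
```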

Wednesday, July 1, 2020

.NET Core–Backgroundservice lifetime

.NET Core 3 introduces a new worker template that uses the concept of a BackgroundService. The BackgroundService is a base class for implementing a long-running IHostedService.

One important thing to be aware of is that the BackgroundService has a different lifetime than the host. This means that when a BackgroundService exits, the host application will not necessarily exit as well.

One way to guarantee that the host application stops when the BackgroundService exits, is by injecting the IHostApplicationLifetime in your service and call StopApplication():
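A minimal sketch of that pattern (the service name and the work inside ExecuteAsync are illustrative):

```csharp
using System.Threading;
using System.Threading.Tasks;
using Microsoft.Extensions.Hosting;

public class Worker : BackgroundService
{
    private readonly IHostApplicationLifetime _lifetime;

    public Worker(IHostApplicationLifetime lifetime) => _lifetime = lifetime;

    protected override async Task ExecuteAsync(CancellationToken stoppingToken)
    {
        try
        {
            // ... the actual long-running work goes here ...
            await Task.Delay(1000, stoppingToken);
        }
        finally
        {
            // Without this call, the host keeps running after the service exits
            _lifetime.StopApplication();
        }
    }
}
```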

Tuesday, June 30, 2020

Evaluate your business strategies using Microsoft Assessments

Microsoft previewed ‘Microsoft Assessments’, a free, online platform that helps customers evaluate their business strategies.

From the documentation:

Microsoft Assessments is a free, online platform that helps customers in a self-service online manner evaluate their business strategies and workloads, and through curated guidance from Microsoft, they are able to improve their posture in Azure

At the moment of writing there are 4 available assessments:

  • Cloud Journey Tracker: Identify your cloud adoption path based on your needs with this tracker and navigate to relevant content in the Cloud Adoption Framework for Azure.
  • Governance Benchmark: Identify gaps in your organization’s current state of governance. Get a personalized benchmark report and curated guidance on how to get started.
  • Microsoft Azure Well-Architected Review: Examine your workload through the lenses of reliability, cost management, operational excellence, security and performance efficiency.
  • Strategic Migration Assessment and Readiness Tool: Preparing for a scale migration is critical to ensure your project is executed smoothly and that you realize the intended benefits.

Every assessment will walk you through a list of questions. As a result you get a report and a list of recommended next steps:

Monday, June 29, 2020

Node issue–Call retries were exceeded

When trying to build an Angular 9 application on the build server, it failed on the ‘Generating ES5 bundles for differential loading’ step with the following error message:

An unhandled exception occurred: Call retries were exceeded

On the developer machines we couldn’t reproduce the issue (as always).  Inside the angular-errors.log we found the following extra details:

[error] Error: Call retries were exceeded
    at ChildProcessWorker.initialize (\node_modules\@angular-devkit\build-angular\node_modules\jest-worker\build\workers\ChildProcessWorker.js:193:21)
    at ChildProcessWorker.onExit (\node_modules\@angular-devkit\build-angular\node_modules\jest-worker\build\workers\ChildProcessWorker.js:263:12)
    at ChildProcess.emit (events.js:210:5)
    at Process.ChildProcess._handle.onexit (internal/child_process.js:272:12)

We were able to solve the issue by upgrading the Node version on the build server. Hopefully this helps you as well…

Friday, June 26, 2020

ASP.NET Core–Using Value Tuples in Razor

I was wondering if it was possible to use ValueTuples as your Razor model. This turned out to work perfectly!

Here is my ViewComponent method (notice that this also works in Razor Pages and MVC controllers):

And here is my Razor view:
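The screenshots are gone, but a sketch of the idea (the component name and tuple values are my own):

```csharp
using Microsoft.AspNetCore.Mvc;

public class GreetingViewComponent : ViewComponent
{
    public IViewComponentResult Invoke()
    {
        // Pass a value tuple straight to the view as the model
        return View(("Hello from a tuple", 42));
    }
}
```

```cshtml
@model (string Message, int Count)
<p>@Model.Message (count: @Model.Count)</p>
```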

Thursday, June 25, 2020

NHibernate - CreateMultiCriteria is obsolete

It was code cleanup day. So time to get rid of some nagging warnings I didn’t have time to tackle before. One of the warnings I wanted to get rid of was an NHibernate warning about the fact that the CreateMultiCriteria method is now obsolete.

This is how my code looked originally:

And this is how I got rid of the CreateMultiCriteria message:

Notice that I replaced the CreateMultiCriteria call with a CreateQueryBatch call. The API is a little bit different. Most important to notice is that there is a GetResult method where you can specify which call result you want to get back.
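The original before/after snippets didn’t survive; a sketch of what the query batch usage could look like (the entity types and key names are invented, and the session is assumed to be an open NHibernate ISession):

```csharp
// Build a batch of queries that is sent to the database in one roundtrip
var batch = session.CreateQueryBatch()
    .Add<Customer>("customers", session.QueryOver<Customer>())
    .Add<Order>("orders", session.QueryOver<Order>());

// GetResult triggers execution (once) and returns the named result set
var customers = batch.GetResult<Customer>("customers");
var orders = batch.GetResult<Order>("orders");
```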

Wednesday, June 24, 2020

ASP.NET Core–Endpoint authorization

Until recently I always used an (empty) [Authorize] attribute on top of my controllers to activate authorization on a specific endpoint (or I used a global AuthorizeFilter).

This will authorize users using the DefaultPolicy which just requires an authenticated user.

With the introduction of endpoint routing there is a new alternative. A disadvantage of the AuthorizeFilter or Authorize attribute is that these are MVC-only features.

A (better) solution is to use the RequireAuthorization() extension method on IEndpointConventionBuilder:

This has the same effect as applying an [Authorize] attribute on every controller.
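A sketch of the endpoint-routing variant inside Startup.Configure:

```csharp
public void Configure(IApplicationBuilder app)
{
    app.UseRouting();
    app.UseAuthentication();
    app.UseAuthorization();

    app.UseEndpoints(endpoints =>
    {
        // Applies the default authorization policy to every controller
        // endpoint, without touching the controllers themselves
        endpoints.MapControllers().RequireAuthorization();
    });
}
```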

Tuesday, June 23, 2020

WSFederation–Implementing logout on ADFS

In one of my ASP.NET Core applications we are (still) using WS-Federation as the authentication protocol. While implementing the signout functionality I noticed that I was correctly signed out at the ADFS level, but that ADFS didn’t return me back to my application afterwards.

This is handled by the wreply parameter and this parameter was correctly sent to ADFS.

Here is my logout code:
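The screenshot is gone; a sketch of a typical WS-Federation signout action (the scheme names are the framework defaults, the controller name and redirect path are illustrative):

```csharp
using Microsoft.AspNetCore.Authentication;
using Microsoft.AspNetCore.Authentication.Cookies;
using Microsoft.AspNetCore.Authentication.WsFederation;
using Microsoft.AspNetCore.Mvc;

public class AccountController : Controller
{
    public IActionResult Logout()
    {
        // Sign out of the local cookie and at ADFS;
        // RedirectUri is what ends up in the wreply parameter
        return SignOut(
            new AuthenticationProperties { RedirectUri = "/" },
            CookieAuthenticationDefaults.AuthenticationScheme,
            WsFederationDefaults.AuthenticationScheme);
    }
}
```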

After some trial and error I could pinpoint the issue to the following situation: when the reply URL was a subpath of the configured WS-Federation endpoint, it worked and I got correctly redirected.

For example:

I guess it makes sense as it is kind of a security measure.

Monday, June 22, 2020

ElasticSearch–Upgrade error - System.IO.IOException: Source and destination path must have identical roots.

When trying to update an ElasticSearch cluster through the Windows Installer (MSI), it always seemed to fail.

In the error logs I found the following message:

System.IO.IOException: Source and destination path must have identical roots. Move will not work across volumes.

   at System.IO.Directory.InternalMove(String sourceDirName, String destDirName, Boolean checkHost)

   at System.IO.Abstractions.DirectoryWrapper.Move(String sourceDirName, String destDirName)

   at Elastic.InstallerHosts.Elasticsearch.Tasks.Install.CreateDirectoriesTask.CreateConfigDirectory(FileSystemAccessRule rule)

   at Elastic.InstallerHosts.Elasticsearch.Tasks.Install.CreateDirectoriesTask.ExecuteTask()

   at Elastic.InstallerHosts.SessionExtensions.Handle(Session session, Func`1 action)

There is a problem with the installer when you are using different volumes for your ElasticSearch application and your ElasticSearch data (which I think is a good practice). In that case the installer always fails as it tries to copy some files from one disk to another.

As a workaround (I tried multiple versions of the Windows installer, but all had the same issue) I installed the ElasticSearch application on the data disk.

Friday, June 19, 2020

.NET Core–Disable a specific compiler warning

In my .NET Core app I wanted Visual Studio to stop complaining about missing XML comments (CS1591 warning).

To achieve this I had to add a <NoWarn> configuration to the project file:
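A sketch of the csproj change (CS1591 maps to the bare number 1591 in NoWarn):

```xml
<PropertyGroup>
  <!-- CS1591: missing XML comment for publicly visible type or member -->
  <NoWarn>$(NoWarn);1591</NoWarn>
</PropertyGroup>
```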

Thursday, June 18, 2020

The power of tuples and deconstructors

Today I updated a part of my code using a combination of tuples and tuple deconstruction.

This was the original code:

And here it is after applying my changes:

I like it!
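The before/after screenshots didn’t survive, so here is a generic illustration of the pattern (all names and values are my own):

```csharp
using System;

public static class Validation
{
    // After: return a named tuple instead of using out parameters
    public static (bool IsValid, string Error) Validate(string input)
    {
        if (string.IsNullOrEmpty(input))
            return (false, "Input is required");
        return (true, null);
    }
}

public static class Demo
{
    public static void Main()
    {
        // Deconstruction: both values land directly in two locals
        var (isValid, error) = Validation.Validate("");
        if (!isValid)
            Console.WriteLine(error); // prints "Input is required"
    }
}
```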

Wednesday, June 17, 2020

Upgrading ElasticSearch–Discovery configuration is required in production mode

While preparing for an ElasticSearch upgrade I checked the Upgrade assistant.

One (critical) issue was mentioned: Discovery configuration is required in production mode

Let’s have a look at what the documentation has to say:

Production deployments of Elasticsearch now require at least one of the following settings to be specified in the elasticsearch.yml configuration file:

  • discovery.seed_hosts
  • discovery.seed_providers
  • cluster.initial_master_nodes
  • discovery.zen.ping.unicast.hosts
  • discovery.zen.hosts_provider

The first three settings in this list are only available in versions 7.0 and above. If you are preparing to upgrade from an earlier version, you must set discovery.zen.ping.unicast.hosts or discovery.zen.hosts_provider.

In our case we don’t want to form a multi-node cluster. So we ignore the documentation above and instead change the discovery.type to single-node.
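The corresponding elasticsearch.yml change:

```yaml
# elasticsearch.yml: skip the multi-node discovery/bootstrap checks
discovery.type: single-node
```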

For more information about when you might use this setting, see Single-node discovery.

Tuesday, June 16, 2020

Azure Monitor / Application Insights : Workbooks

An easy way to get started with Azure Monitor / Application Insights is through ‘Workbooks’.

From the documentation:

Workbooks provide a flexible canvas for data analysis and the creation of rich visual reports within the Azure portal. They allow you to tap into multiple data sources from across Azure, and combine them into unified interactive experiences.

You can build your own workbooks, but there is a large list of available templates out-of-the-box that can help you gain insight into your Azure services.

  • To get started open the Azure Portal
  • Go to Azure Monitor/ Application Insights
  • Select Workbooks from the menu on the left

  • You can create a new report or choose one of the existing templates.
  • Let’s have a look at the ‘Usage through the day’ report for example.
  • You can click on Edit to start customizing the report.
  • Every report can be a combination of text, parameters, graphs and metrics.

Monday, June 15, 2020

Seq - ERR_SSL_PROTOCOL_ERROR

Structured logging is the future and tools like ElasticSearch and Seq can help you manage and search through this structured log data.

While testing Seq, a colleague told me that he couldn’t access Seq. Instead his browser returned the following error:

ERR_SSL_PROTOCOL_ERROR

The problem was that he tried to access the Seq server using HTTPS although this was not activated. By default Seq runs as a Windows service and listens only on HTTP.

To enable HTTPS some extra work needs to be done:

  • First make sure you have a valid SSL certificate installed in either the Local Machine or Personal certificate store of your Seq server.
  • Open the certificate manager on the server, browse to the certificate and read out the thumbprint value.
  • Now open a command prompt on the server and execute the following commands:
    • seq bind-ssl --thumbprint="THUMBPRINT HERE" --port=9001
    • seq config -k api.listenUris -v https://YOURSERVER:9001
    • seq restart

Remark: The ‘--port’ parameter is only necessary when you are not listening on the standard HTTPS port (443).

More information: https://docs.datalust.co/docs/ssl

Friday, June 12, 2020

Try an API call directly in Chrome Dev Tools

Quick tip if you want to test an API call; you can make an HTTP request directly from the Chrome Developer Tools:

  • Open your Developer Tools (F12)
  • Go to Console
  • Enter the following:
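For example, a simple GET request (the URL points to a public test API and is purely illustrative):

```javascript
// Paste into the Console tab; logs the JSON response (or any network error)
fetch("https://jsonplaceholder.typicode.com/todos/1")
  .then(response => response.json())
  .then(data => console.log(data))
  .catch(err => console.error(err));
```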

Thursday, June 11, 2020

TypeLoadException: Type 'generatedProxy_5' from assembly 'ProxyBuilder, Version=0.0.0.0, Culture=neutral, PublicKeyToken=null' is attempting to implement an inaccessible interface.

A colleague shared with me the following strange error message he got when he tried to use a .NET Standard library I created:

An unhandled exception occurred while processing the request.

TypeLoadException: Type 'generatedProxy_5' from assembly 'ProxyBuilder, Version=0.0.0.0, Culture=neutral, PublicKeyToken=null' is attempting to implement an inaccessible interface.

   at System.Reflection.Emit.TypeBuilder.CreateTypeNoLock()
   at System.Reflection.Emit.TypeBuilder.CreateTypeInfo()
   at System.Reflection.DispatchProxyGenerator+ProxyBuilder.CreateType()
   at System.Reflection.DispatchProxyGenerator.GenerateProxyType(Type baseType, Type interfaceType)
   at System.Reflection.DispatchProxyGenerator.GetProxyType(Type baseType, Type interfaceType)
   at System.Reflection.DispatchProxyGenerator.CreateProxyInstance(Type baseType, Type interfaceType)
   at System.Reflection.DispatchProxy.Create<T, TProxy>()
   at System.ServiceModel.Channels.ServiceChannelProxy.CreateProxy<TChannel>(MessageDirection direction, ServiceChannel serviceChannel)
   at System.ServiceModel.Channels.ServiceChannelFactory.CreateProxy<TChannel>(MessageDirection direction, ServiceChannel serviceChannel)
   at System.ServiceModel.Channels.ServiceChannelFactory.CreateChannel<TChannel>(EndpointAddress address, Uri via)
   at System.ServiceModel.ChannelFactory<TChannel>.CreateChannel(EndpointAddress address, Uri via)
   at System.ServiceModel.ChannelFactory<TChannel>.CreateChannel()
   at System.ServiceModel.ClientBase<TChannel>.CreateChannel()
   at System.ServiceModel.ClientBase<TChannel>.CreateChannelInternal()
   at System.ServiceModel.ClientBase<TChannel>.get_Channel()

          o IRD3Service.IRD3ServiceClient.MbCoreGetIdentificerendeEenheidAsync(int idIe) in Reference.cs

          Inside this library I’m doing a WCF call to get some data from a backend service. WCF internally generates a proxy for the WCF client through the ProxyBuilder and it is this ProxyBuilder that started to complain…

          The problem seems to be that I generated all my proxy types as internal (which should be a good thing), but the ProxyBuilder does not agree with me.

          Some research (thanks Google!) brought me to the following possible solutions:

          • Change all proxy types from internal to public
          • Add an [InternalsVisibleTo("ProxyBuilder")] attribute to the library

          I tried the second approach and it worked! Up to the next problem…
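The second option boils down to a single assembly-level attribute; a sketch (the target name ProxyBuilder matches the assembly named in the exception above):

```csharp
// Anywhere in the library, e.g. AssemblyInfo.cs:
// exposes the internal proxy types to the dynamically generated ProxyBuilder assembly.
using System.Runtime.CompilerServices;

[assembly: InternalsVisibleTo("ProxyBuilder")]
```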

          Wednesday, June 10, 2020

          InternalsVisibleTo in your csproj file

          I blogged before about how to use the [InternalsVisibleTo] in your .NET Standard/.NET Core project. Today I discovered an alternative approach where you specify the attribute information in your csproj file:
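A sketch of what that looks like (MyLibrary.Tests is a placeholder for the assembly that should get access to the internals):

```xml
<ItemGroup>
  <AssemblyAttribute Include="System.Runtime.CompilerServices.InternalsVisibleTo">
    <_Parameter1>MyLibrary.Tests</_Parameter1>
  </AssemblyAttribute>
</ItemGroup>
```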

          During compilation of your project an AssemblyInfo.cs file is generated (take a look at your obj folder):

          Tuesday, June 9, 2020

          ElasticSearch.NET exception after upgrade

          After upgrading ElasticSearch.NET to the latest version, my application failed with the following error message:

          Could not load type 'Elasticsearch.Net.IInternalSerializerWithFormatter' from assembly 'Elasticsearch.Net, Version=7.0.0.0, Culture=neutral, PublicKeyToken=96c599bbe3e70f5d'.

          A look at my packages.config (yes it is still an old(er) ASP.NET application), showed the following:

          <package id="CommonServiceLocator" version="2.0.1" targetFramework="net461" requireReinstallation="true" />

          <package id="Elasticsearch.Net" version="7.7.1" targetFramework="net472" />

          <package id="Iesi.Collections" version="4.0.1.4000" targetFramework="net461" />

          <package id="LazyCache" version="0.7.1.44" targetFramework="net461" />

          <package id="Microsoft.CSharp" version="4.6.0" targetFramework="net472" />

          <package id="NEST" version="7.1.0" targetFramework="net472" />

          The problem was that although I had updated the Elasticsearch.Net NuGet package, I forgot to do the same thing for the NEST high-level client.

          To fix it I had to update the NEST nuget package as well.

          Monday, June 8, 2020

          GraphQL vs OData

          In case you didn’t notice yet, I’m a big fan of GraphQL. One of the questions I get a lot (especially from .NET developers) is how it differs from OData.

          At first sight they have a lot of similarities and partially try to achieve the same goal, but there are some reasons why I prefer GraphQL over OData.

          Let’s first have a look at the “official” descriptions:

          From odata.org:

          OData (Open Data Protocol) is an ISO/IEC approved, OASIS standard that defines a set of best practices for building and consuming RESTful APIs. OData helps you focus on your business logic while building RESTful APIs without having to worry about the various approaches to define request and response headers, status codes, HTTP methods, URL conventions, media types, payload formats, query options, etc. OData also provides guidance for tracking changes, defining functions/actions for reusable procedures, and sending asynchronous/batch requests.

          OData RESTful APIs are easy to consume. The OData metadata, a machine-readable description of the data model of the APIs, enables the creation of powerful generic client proxies and tools.

          From graphql.org:

          GraphQL is a query language for APIs and a runtime for fulfilling those queries with your existing data. GraphQL provides a complete and understandable description of the data in your API, gives clients the power to ask for exactly what they need and nothing more, makes it easier to evolve APIs over time, and enables powerful developer tools.

          Sounds familiar?

          I can understand that people who have used OData will think it does the same thing, but what makes it different?

          Decoupling

          OData brought the power of SQL to your URIs at the cost of high coupling. The OData ecosystem was meant to replace your existing REST APIs, and your implementation had a direct technical coupling. GraphQL is more like a backend-for-frontend where you can bring multiple REST APIs together in one uniform interface.

          Scaling

          Although it is technically feasible to create one OData schema for your whole organization, it would be hard to build and maintain. Compare this with GraphQL Federation, which makes it easy to create a single data graph for your whole organization.

          Adoption

          Although OData is an open standard and there are some other big names next to Microsoft who jumped on the bandwagon, I mostly encounter OData usage at companies that use SAP and/or .NET. GraphQL has a much broader adoption across multiple ecosystems and platforms.

          I’ve used OData in the past and I really liked it in the context of WCF Data Services and Silverlight (RIP), but the flexibility, rich ecosystem and amazing tools and solutions (e.g. Apollo) of GraphQL should be enough to convince you…

          Remark: I can recommend the following read to go in more detail about the differences: https://jeffhandley.com/2018-09-13/graphql-is-not-odata

          Thursday, June 4, 2020

          Azure Service Bus Explorer in Azure Portal

          Until recently I used the Service Bus Explorer to debug and manage Azure Service Bus. But last week I noticed the following new menu item in Azure Service Bus:

          To use the Azure Service Bus Explorer, you need to navigate to the Service Bus namespace on which you want to perform send, peek, and receive operations. Then select either ‘Queues’ or ‘Topics’ from the navigation menu. After doing that you should see the ‘Service Bus Explorer’ option.

          The following operations are supported:

          • Queues
            • 'Send' to a queue
            • 'Receive' from a queue.
            • 'Peek' from a queue.
            • 'Receive' from DeadLetterQueue.
            • 'Peek' from the DeadLetterQueue.
          • Topics
            • 'Send' to a topic.
          • Subscriptions
            • 'Peek' from a subscription on a topic.
            • 'Receive' from a subscription.
            • 'Peek' from the DeadLetter subscription.
            • 'Receive' from the DeadLetter subscription.

          To learn more about the Service Bus Explorer tool, please read the documentation.

          Application Insights - Stop tracking 404 errors

          By default Application Insights will log every 404 error in your web app as an error. I think this is a good default, but what if you don’t want to see these 404 errors?

          There are 2 options to solve this:

          Telemetry Processor

          A telemetry processor gives you direct control over what is included or excluded from the telemetry stream.

          We can register our new TelemetryProcessor by using the AddApplicationInsightsTelemetryProcessor extension method on IServiceCollection, as shown below:
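A sketch of such a processor and its registration (the class name and the choice to drop 404s entirely are my own; ITelemetryProcessor and AddApplicationInsightsTelemetryProcessor come from the Application Insights SDK):

```csharp
using Microsoft.ApplicationInsights.Channel;
using Microsoft.ApplicationInsights.DataContracts;
using Microsoft.ApplicationInsights.Extensibility;

// Drops 404 request telemetry from the telemetry stream entirely.
public class Drop404RequestsProcessor : ITelemetryProcessor
{
    private readonly ITelemetryProcessor _next;

    public Drop404RequestsProcessor(ITelemetryProcessor next) => _next = next;

    public void Process(ITelemetry item)
    {
        if (item is RequestTelemetry request && request.ResponseCode == "404")
            return; // swallow: the item never reaches Application Insights

        _next.Process(item); // everything else continues down the chain
    }
}

// Startup.cs - ConfigureServices
services.AddApplicationInsightsTelemetryProcessor<Drop404RequestsProcessor>();
```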

          Telemetry Initializer

          Telemetry initializers allow you to enrich telemetry with additional information and/or to override telemetry properties set by the standard telemetry modules. By default, any request with a response code >= 400 is flagged as failed. But if we want to treat a 404 as a success, we can provide a telemetry initializer that sets the Success property:

          We can register the TelemetryInitializer in our Startup.cs:
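A sketch of both the initializer and its registration (the class name is my own; ITelemetryInitializer and the Success property come from the Application Insights SDK):

```csharp
using Microsoft.ApplicationInsights.Channel;
using Microsoft.ApplicationInsights.DataContracts;
using Microsoft.ApplicationInsights.Extensibility;

// Marks 404 responses as successful so they are still logged, but not as failures.
public class Treat404AsSuccessInitializer : ITelemetryInitializer
{
    public void Initialize(ITelemetry telemetry)
    {
        if (telemetry is RequestTelemetry request && request.ResponseCode == "404")
            request.Success = true;
    }
}

// Startup.cs - ConfigureServices
services.AddSingleton<ITelemetryInitializer, Treat404AsSuccessInitializer>();
```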

          The advantage of the Telemetry Initializer is that we still log the 404 event, but no longer as an error.

          More information: https://docs.microsoft.com/en-us/azure/azure-monitor/app/api-filtering-sampling

          Tuesday, June 2, 2020

          Sharing authentication ticket between .NET Core and ASP.NET (Owin)

          By default authentication tickets cannot be shared between .NET Core and OWIN. The good news is that it is possible but we have to take some extra steps:

          .NET Core App

          On .NET Core side we have to change the cookie authentication middleware:

          • The cookie name should match the name used by the OWIN Cookie Authentication Middleware (.AspNet.SharedCookie for example).
          • An instance of a DataProtectionProvider should be initialized to the common data protection key storage location.
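A sketch of the .NET Core side, following the pattern from the Microsoft documentation on sharing cookies (the key ring path is a placeholder; both apps must point at the same location):

```csharp
// Startup.cs - ConfigureServices
using Microsoft.AspNetCore.Authentication.Cookies;
using Microsoft.AspNetCore.DataProtection;
using System.IO;

services.AddAuthentication(CookieAuthenticationDefaults.AuthenticationScheme)
    .AddCookie(options =>
    {
        // Must match the cookie name used on the OWIN side
        options.Cookie.Name = ".AspNet.SharedCookie";
        // Point data protection at the shared key ring (placeholder path)
        options.DataProtectionProvider = DataProtectionProvider.Create(
            new DirectoryInfo(@"\\server\share\keyring"));
    });
```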

          ASP.NET (OWIN) App

          On ASP.NET (OWIN) side we have to install the Microsoft.Owin.Security.Interop package first.

          Then we can change the cookie authentication middleware:

          • The cookie name should match the name used by the ASP.NET Core Cookie Authentication Middleware (.AspNet.SharedCookie in the example).
          • An instance of a DataProtectionProvider should be initialized to the common data protection key storage location.
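A sketch of the OWIN side, again following the documented interop pattern (the key ring path is a placeholder; the purpose strings mirror what the ASP.NET Core cookie middleware uses):

```csharp
// Startup.Auth.cs - the interop shim wraps an ASP.NET Core data protector
// so OWIN can read and write the same ticket format.
using Microsoft.AspNetCore.DataProtection;
using Microsoft.Owin.Security.Cookies;
using Microsoft.Owin.Security.Interop;
using System.IO;

var protectionProvider = DataProtectionProvider.Create(
    new DirectoryInfo(@"\\server\share\keyring"));

var dataProtector = protectionProvider.CreateProtector(
    "Microsoft.AspNetCore.Authentication.Cookies.CookieAuthenticationMiddleware",
    "Cookies",
    "v2");

app.UseCookieAuthentication(new CookieAuthenticationOptions
{
    AuthenticationType = "Cookies",
    CookieName = ".AspNet.SharedCookie",
    TicketDataFormat = new AspNetTicketDataFormat(new DataProtectorShim(dataProtector)),
});
```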

            Monday, June 1, 2020

            ASP.NET Core–Set environment through the commandline

            ASP.NET Core has built-in support for multiple environments. This makes it easy to load different configuration and apply different middleware depending on the environment.

            The typical way to control the environment is through the ASPNETCORE_ENVIRONMENT environment variable.

            It is also possible to set the environment variable by passing it to the dotnet run command as an argument.

            To set this up, we have to modify the Program.cs:
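A sketch of what that can look like (note that Host.CreateDefaultBuilder(args) already forwards command-line arguments to the host configuration; the explicit AddCommandLine call is shown for clarity):

```csharp
// Program.cs
public static IHostBuilder CreateHostBuilder(string[] args) =>
    Host.CreateDefaultBuilder(args)
        .ConfigureHostConfiguration(config =>
        {
            // Lets --environment (and other host settings) be supplied on the command line
            config.AddCommandLine(args);
        })
        .ConfigureWebHostDefaults(webBuilder => webBuilder.UseStartup<Startup>());
```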

            The AddCommandLine method allows us to read configuration values from the command line.

            Now we can start the app with dotnet run --environment Development.

            Thursday, May 28, 2020

            Using YARP to create a reverse proxy server

            So far I’ve always used ProxyKit to create a reverse proxy in ASP.NET Core. But with the announcement of Yarp, it is time to try this alternative…

            • I created a new ASP.NET Core “empty” project:

            dotnet new web -n ProxyTest -f netcoreapp3.1
            The template "ASP.NET Core Empty" was created successfully.

            Processing post-creation actions...
            Running 'dotnet restore' on ProxyTest\ProxyTest.csproj...
              Restore completed in 278,54 ms for C:\Projects\test\yarptest\ProxyTest\ProxyTest.csproj.

            Restore succeeded.

            • Next step is to reference the Microsoft.ReverseProxy preview nuget package:
            <ItemGroup> 
              <PackageReference Include="Microsoft.ReverseProxy" Version="1.0.0-preview.1.*" /> 
            </ItemGroup>
            • Now it is time to update our Startup.cs. This is what I had when using Proxykit:
            • And here is the updated Startup.cs after switching to Yarp:
              • In Yarp everything is handled through configuration right now, so the real magic is there:
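A sketch of the Yarp wiring, modelled on the preview documentation (the "ReverseProxy" section name and the route/cluster schema changed between previews, so treat this as an approximation):

```csharp
// Startup.cs
public void ConfigureServices(IServiceCollection services)
{
    // Routes and clusters live in configuration, e.g. a "ReverseProxy"
    // section in appsettings.json mapping route patterns to destination addresses.
    services.AddReverseProxy()
            .LoadFromConfig(Configuration.GetSection("ReverseProxy"));
}

public void Configure(IApplicationBuilder app)
{
    app.UseRouting();
    app.UseEndpoints(endpoints => endpoints.MapReverseProxy());
}
```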

              I'm curious on how this will evolve in the future...

              Wednesday, May 27, 2020

              GraphQL Inspector

              While in traditional REST API’s versioning is a hot topic, GraphQL takes a strong opinion on avoiding versioning by providing the tools for the continuous evolution of a GraphQL schema. As GraphQL only returns the data that is explicitly requested, it becomes easier to introduce new functionality by adding new types and fields without introducing breaking changes. As you know what fields are used by which clients you can have a lot more knowledge in your hands to prevent breaking your clients.

              For small schemas it can be feasible to inspect your schema for changes manually, but for larger or federated schemas good tooling becomes a necessity.

              A tool that can help you to achieve this is GraphQL Inspector.

              It offers the following (free) features:

              • Compares schemas
              • Detect breaking or dangerous changes
              • Schema change notifications
              • Use serverless functions to validate changes
              • Validates Operations and Fragments against a schema
              • Finds similar / duplicated types
              • Schema coverage based on Operations and Fragments
              • Serves a GraphQL server with faked data and GraphiQL
              • Docker Image

              Getting started

              To get started you have multiple options. You can use it as a GitHub application, a GitHub Action, but also as a command-line tool.

              Let’s see how to use the command-line tool:

              npm install --global @graphql-inspector/cli graphql

              Now we can compare two schemas:

              graphql-inspector diff old.graphql new.graphql

              Detected the following changes (2) between schemas:

                Description was removed from field Post.createdAt
                Field Post.createdAt changed type from String to String!
              success No breaking changes detected

              It is a must have for every GraphQL developer!

              Tuesday, May 26, 2020

              Hands-on-labs: App modernization

              A colleague shared the following hands-on-lab with me: https://github.com/microsoft/MCW-App-modernization

              It’s a great starting point to learn about the cloud and take your first steps towards it. It combines a whiteboard design session and a hands-on-lab.

              This is what you will design and build:

              Friday, May 22, 2020

              .NET Core–Generate documentation

              Although I try to make my APIs as descriptive as possible, sometimes good documentation can still make a difference.

              One way to enable documentation generation is through Visual Studio:

              • Right click on your project and select Properties.
              • On the Properties window go to the Build tab.
              • Check the XML documentation file checkbox
              • Don’t forget to save these changes.

              As a result the following is added to your csproj file:
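For reference, it looks roughly like this (the project name and path are illustrative, not the real ones):

```xml
<PropertyGroup Condition="'$(Configuration)|$(Platform)'=='Debug|AnyCPU'">
  <DocumentationFile>C:\Projects\MyApi\MyApi\MyApi.xml</DocumentationFile>
</PropertyGroup>
```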

              There are a few things I don’t like about this:

              • First a condition is applied to the PropertyGroup which doesn’t seem necessary
              • Second an absolute path is used to define where to generate the documentation XML

              So I would recommend no longer using this approach. What you can do instead is edit the csproj file directly and add the following line to a PropertyGroup:
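That single line is the GenerateDocumentationFile property:

```xml
<PropertyGroup>
  <GenerateDocumentationFile>true</GenerateDocumentationFile>
</PropertyGroup>
```

This generates the XML file next to your build output, without conditions or absolute paths.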

              Wednesday, May 20, 2020

              Git sparse checkout

              With the growing usage of mono-repositories the standard git checkout or git status no longer work well and become frustratingly slow. A solution would be to use Git LFS (Large File Storage), but not all repositories have this extension installed.

              An alternative solution can be provided through the (new) git sparse-checkout command.

              To restrict your working directory to a set of directories, run the following commands:

              1. git sparse-checkout init
              2. git sparse-checkout set <dir1> <dir2> ...

              If you get stuck, run git sparse-checkout disable to return to a full working directory.

              Remark: this feature is part of git 2.25. So if the command is not recognized check your git version and update first.

              More information: https://github.blog/2020-01-17-bring-your-monorepo-down-to-size-with-sparse-checkout/

              Tuesday, May 19, 2020

              Azure Pipelines- Error executing dotnet restore task

              When trying to execute dotnet restore during a build it failed with the following error message:

              2020-05-12T18:14:36.8332220Z C:\Program Files\dotnet\sdk\3.1.201\NuGet.targets(536,5): error :   The '@' character, hexadecimal value 0x40, cannot be included in a name. Line 6, position 35. [D:\b\4\agent\_work\200\s\IAM.Core\IAM.Core.csproj]

              2020-05-12T18:14:36.8820520Z      2>Done Building Project "D:\b\4\agent\_work\200\s\IAM.Core\IAM.Core.csproj" (_GenerateRestoreGraphProjectEntry target(s)) -- FAILED.

              2020-05-12T18:14:36.9152564Z      1>Project "D:\b\4\agent\_work\200\s\IAM.Core.Tests\VLM.IAM.Core.Tests.csproj" (1) is building "D:\b\4\agent\_work\200\s\IAM.Core.Tests\IAM.Core.Tests.csproj" (1:5) on node 1 (_GenerateRestoreGraphProjectEntry target(s)).

              2020-05-12T18:14:36.9162330Z      1>C:\Program Files\dotnet\sdk\3.1.201\NuGet.targets(536,5): error : NuGet.Config is not valid XML. Path: 'D:\b\4\agent\_work\200\Nuget\tempNuGet_60617.config'. [D:\b\4\agent\_work\200\s\IAM.Core.Tests\IAM.Core.Tests.csproj]

              2020-05-12T18:14:36.9162330Z C:\Program Files\dotnet\sdk\3.1.201\NuGet.targets(536,5): error :   The '@' character, hexadecimal value 0x40, cannot be included in a name. Line 6, position 35. [D:\b\4\agent\_work\200\s\IAM.Core.Tests\IAM.Core.Tests.csproj]

              2020-05-12T18:14:36.9162330Z      1>Done Building Project "D:\b\4\agent\_work\200\s\IAM.Core.Tests\IAM.Core.Tests.csproj" (_GenerateRestoreGraphProjectEntry target(s)) -- FAILED.

              2020-05-12T18:14:36.9230692Z      1>Done Building Project "D:\b\4\agent\_work\200\s\IAM.Core.Tests\IAM.Core.Tests.csproj" (Restore target(s)) -- FAILED.

              2020-05-12T18:14:36.9230692Z

              Let’s have a look at our nuget.config file to see what is going wrong:
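The file looked something like this (the feed name and URL are illustrative, not the real ones):

```xml
<?xml version="1.0" encoding="utf-8"?>
<configuration>
  <packageSources>
    <!-- The '@' in the key is what makes NuGet reject the file -->
    <add key="MyTeam@Feed" value="http://tfs:8080/DefaultCollection/_packaging/Feed/nuget/v3/index.json" />
  </packageSources>
</configuration>
```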

              It turns out that NuGet doesn’t like that you use an ‘@’ sign in the name of the feed.

              Renaming solved the problem…

              Monday, May 18, 2020

              ASP.NET Core–The magic appearance of IMemoryCache

              I created a small security library in .NET Core that simplifies the rather complex security setup we have at one of my clients. Inside this library I’m using the IMemoryCache to cache some non-volatile data.

              When a colleague tried to use this library he told me that he had to add the following line
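That line is the explicit memory cache registration:

```csharp
// Startup.cs - ConfigureServices
services.AddMemoryCache();
```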

              This doesn’t seem unexpected, but the strange thing was that in my example project I had never added this line. Time to investigate…

              A walk through the ASP.NET Core source code (always a fun experience to discover and learn something new about the framework) taught me the following: when you call AddMvc() or AddResponseCaching(), the framework registers an IMemoryCache for you behind the scenes.

              If you are using a lower level method like AddControllers() this is not the case.

              Learned something? Check!

              Friday, May 15, 2020

              Lens–The Kubernetes IDE

              If you are working with Kubernetes I can recommend Lens,  an open-source and free IDE to take control of your Kubernetes clusters.

              Thursday, May 14, 2020

              Azure Pipelines–DotNet restore error

              After configuring a new build pipeline, the build failed with the following error when trying to execute the dotnet restore build task:

              NuGet.targets(124,5): error :  Unable to load the service index for source http://tfs:8080/DefaultCollection/_packaging/Feed/nuget/v3/index.json

              NuGet.targets(124,5): error :  No credentials are available in the security package

              The strange thing was that the same task worked without a problem on other builds. Only for newly created builds it failed with the error message above.

              A workaround that seemed to work was to switch the dotnet build task to the ‘custom’ command. By using the custom command I can add an extra ‘--force’ argument to the ‘dotnet restore’ command. By adding this extra argument I got rid of the error message above.

              Wednesday, May 13, 2020

              Azure Charts–Help! Azure is evolving too fast…

              As most cloud platforms, Azure is evolving quite fast. This makes it hard to keep up-to-date and know where you need to focus your energy. Azure Charts can help. It is a web-based application which allows you to see what Azure consists of and how it evolves. I would specifically recommend taking a look at the Learning section to see what new learning content got published.

              More information: https://techcommunity.microsoft.com/t5/educator-developer-blog/azure-charts-visualize-your-cloud-learning-journey/ba-p/1353228

              Tuesday, May 12, 2020

              Azure Pipelines error - NuGet.CommandLine.CommandLineException: Error parsing solution file

              After installing the latest Visual Studio version on our build servers, some of our builds started to fail with the following error message:

              This error only happened on the build servers running MSBuild version 16.5.0.12403:

              One or more errors occurred. ---> NuGet.CommandLine.CommandLineException: Error parsing solution file at D:\b\4\agent\_work\153\s\VLM.MELO.sln: Exception has been thrown by the target of an invocation. at NuGet.CommandLine.MsBuildUtility.GetAllProjectFileNamesWithMsBuild(String solutionFile, String msbuildPath) at NuGet.CommandLine.RestoreCommand.ProcessSolutionFile(String solutionFileFullPath, PackageRestoreInputs restoreInputs)

              This turns out to be a bug in the NuGet client: older versions have trouble with this new version of MSBuild.

              To resolve this issue in Azure Pipelines, add a NuGet Tool Installer task to your pipeline before any tasks that use NuGet, and set the version field to include the latest version.

              Monday, May 11, 2020

              Serilog - IDiagnosticContext

              The ‘classic’ way I used to attach extra properties to a log message in Serilog was through the LogContext.

              From the documentation:

              Properties can be added and removed from the context using LogContext.PushProperty():
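A sketch (this assumes the logger is configured with .Enrich.FromLogContext(); the OrderId property is illustrative):

```csharp
// Requires: new LoggerConfiguration().Enrich.FromLogContext()...
using (LogContext.PushProperty("OrderId", order.Id))
{
    // Every event written inside this scope carries the OrderId property
    Log.Information("Processing order");
} // the property is removed again when the scope is disposed
```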

              A disadvantage of using the LogContext is that the additional information is only available inside the scope of the specific log context (or deeper nested contexts). This typically leads to a larger number of log entries, which doesn’t always help to find out what is going on.

              Today I follow a different approach where I only log a single message at the end of an operation. The idea is that the log message is enriched during the lifetime of the operation, so that we end up with a single, information-rich log entry.

              This is easy to achieve in Serilog thanks to the IDiagnosticContext interface. The diagnostic context provides an execution context (similar to LogContext) with the advantage that it can be enriched throughout its lifetime. The request logging middleware then uses this to enrich the final “log completion event”.
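A sketch of how this looks in a controller (assumes app.UseSerilogRequestLogging() is in the pipeline; the controller and property names are illustrative):

```csharp
public class OrdersController : ControllerBase
{
    private readonly IDiagnosticContext _diagnosticContext;

    public OrdersController(IDiagnosticContext diagnosticContext)
        => _diagnosticContext = diagnosticContext;

    [HttpGet("{id}")]
    public IActionResult Get(int id)
    {
        // Attached to the single request completion event
        // emitted by the Serilog request logging middleware
        _diagnosticContext.Set("OrderId", id);
        return Ok();
    }
}
```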

              More info: https://nblumhardt.com/2019/10/serilog-mvc-logging/

              Friday, May 8, 2020

              Virtual Azure Community Day–March 2020

              In case you missed the last (virtual) Azure Community Day in March, all content is available online:

              You have 4 tracks with each 8 hours of content! A must see for every Azure addict…

              Thursday, May 7, 2020

              XUnit–Could not load file or assembly 'Microsoft.VisualStudio.CodeCoverage.Shim’

              When executing my XUnit tests on the build server, it failed with the following message:

              System.IO.FileNotFoundException : Could not load file or assembly 'Microsoft.VisualStudio.CodeCoverage.Shim, Version=15.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a'. The system cannot find the file specified.

              Inside my csproj file following packages were referenced:

              <PackageReference Include="xunit" Version="2.4.0" />

              <PackageReference Include="xunit.runner.visualstudio" Version="2.4.0" />

              <PackageReference Include="coverlet.collector" Version="1.0.1" />

              The ‘xunit.runner.visualstudio’ package implicitly depends on Microsoft.NET.Test.Sdk (at minimum version 15.0), which could explain why it tried to load the assembly mentioned above.

              To get rid of this error, I had to explicitly include a reference to the ‘Microsoft.NET.Test.Sdk’:

              <PackageReference Include="Microsoft.NET.Test.Sdk" Version="16.4.0" />