

Showing posts from 2021

ADFS - Windows authentication

One of my clients is using Microsoft ADFS as their Security Token Service. Through ADFS people can log in either by using their e-ID (Belgian passport) or through their internal domain account (using Windows authentication). After upgrading our ADFS server, we noticed that people were asked for their credentials and that their Windows credentials were no longer passed automatically. This is of course quite annoying. We took a look at the ADFS settings and noticed that 'Mozilla/5.0' was missing from the list of supported user agents:

PS C:\Users\bawu> (Get-AdfsProperties).WiaSupportedUserAgents
MSAuthHost/1.0/In-Domain
MSIE 6.0
MSIE 7.0
MSIE 8.0
MSIE 9.0
MSIE 10.0
Trident/7.0
MSIPC
Windows Rights Management Client
MS_WorkFoldersClient
=~Windows\s*NT.*Edge

To fix it we updated the list of supported agents: Set-ADFSProperties -WIASupportedUserAgents @("MSAuthHost/1.0/In-Domain","MSIE 6.0", "MSIE 7.0", "M

Elastic APM–Use .NET OpenTelemetry

Elastic has their own Application Performance Monitoring solution as part of their Elastic Observability product. An important part of the solution are 'agents'. Agents are responsible for instrumenting your application, collecting all the metrics and sending them to the APM server. Specifically for .NET, the APM agent is released as a series of NuGet packages. With the release of OpenTelemetry for .NET I was wondering if we could replace this Elastic APM specific solution with standard OpenTelemetry. In a first incarnation the APM server didn't support the OpenTelemetry standard and you had to use a separate collector that converts the OpenTelemetry data to the Elastic APM format. Since version 7.13 of Elastic APM this is no longer necessary. The OpenTelemetry Collector exporter for Elastic was deprecated and replaced by native support for the OpenTelemetry protocol (OTLP) in Elastic Observability. Now the only thing you need to do is to add some specific attributes
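As a hedged sketch of where this ends up, assuming the OpenTelemetry, OpenTelemetry.Exporter.OpenTelemetryProtocol and instrumentation NuGet packages, and with the service name and OTLP endpoint as placeholders:

```csharp
using OpenTelemetry;
using OpenTelemetry.Resources;
using OpenTelemetry.Trace;

// Export traces straight to the APM server over OTLP; no Elastic-specific agent needed.
using var tracerProvider = Sdk.CreateTracerProviderBuilder()
    .SetResourceBuilder(ResourceBuilder.CreateDefault()
        .AddService("example-service")) // placeholder; shows up as the service name in APM
    .AddAspNetCoreInstrumentation()
    .AddHttpClientInstrumentation()
    .AddOtlpExporter(options =>
    {
        // Placeholder endpoint; point this to your APM server's OTLP endpoint.
        options.Endpoint = new Uri("http://localhost:8200");
    })
    .Build();
```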

Visual Studio 2022–New breakpoint types

Of course it is always better to not write any bugs, but I know I make mistakes. So sooner or later I need to debug my code to find out what's going wrong. In that case breakpoints are an important aid to halt your application at the right location. Visual Studio 2022 introduces two new breakpoint types: temporary and dependent breakpoints. The Temporary breakpoint is used to set a breakpoint that will only break once. When debugging, the Visual Studio debugger pauses the running application only once for this breakpoint and then removes it immediately after it has been hit. To set a temporary breakpoint, hover over the breakpoint symbol, choose the Settings icon, and then select Remove breakpoint once hit in the Breakpoint Settings window. You can also set a temporary breakpoint by selecting Insert Temporary Breakpoint from the right-click context menu. The Dependent breakpoint is used to set a breakpoint that will only break when another break

Error after upgrading to Microsoft.Data.SqlClient 4

After upgrading to Microsoft.Data.SqlClient 4, I immediately started to get connection failures. Let's have a look at the exact error message: A connection was successfully established with the server, but then an error occurred during the login process. (provider: SSL Provider, error: 0 - The certificate chain was issued by an authority that is not trusted.) The reason for this error is that with the release of version 4.0, a breaking change was introduced to improve security: the default value of the `Encrypt` connection option changed from `false` to `true`. With the increased emphasis on secure-by-default, the growing use of cloud databases, and the need to ensure connections are secure, Microsoft decided it was time for this backwards-compatibility-breaking change. You can of course get back to the previous situation by explicitly setting the 'Encrypt' option to 'false': "server=exampleserver;database=ExampleDB;integrated security=True;Encrypt=False"
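A minimal sketch of both directions you can take (server and database names are the placeholders from the post; the TrustServerCertificate variant is my addition, not taken from the post):

```csharp
using Microsoft.Data.SqlClient;

// Option 1: restore the pre-4.0 behavior by turning encryption off again.
var connectionString =
    "server=exampleserver;database=ExampleDB;integrated security=True;Encrypt=False";

// Option 2 (assumption, not from the post): keep encryption on but accept the
// server certificate without chain validation. Only acceptable on trusted networks.
var alternative =
    "server=exampleserver;database=ExampleDB;integrated security=True;Encrypt=True;TrustServerCertificate=True";

using var connection = new SqlConnection(connectionString);
connection.Open();
```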

Kubernetes Job Containers - (Forbidden): jobs.batch "example-migration" is forbidden

For our database migrations we are using Kubernetes Jobs and init containers as discussed here. However, when we tried to deploy the job container, it failed with the following error:

Error from server (Forbidden): jobs.batch "example-migration" is forbidden: User "system:serviceaccount:example-ns:default" cannot get resource "jobs" in API group "batch" in the namespace "example-ns": Azure does not have opinion for this user.

To read and list jobs, the deployment is using the default service account in the "example-ns" namespace. This default service account does not have the necessary API rights in the Kubernetes cluster. To fix it we created a new service account, role and role binding: After doing that, we had to update our deployment to use this service account:

Become a master at Git and Open Source

Interested in contributing to an Open Source project, but is your experience with Git rather limited? Then this learning course is the one for you! Sign up for this free course here. You'll get regular mails covering the following topics:

- Learning Git by exploring open-source repositories
- Exploring a Git repository using Visual Studio
- Contributing to an open source project
- Adding your existing code to Git and GitHub
- Next steps

Swagger UI - Add required header

One of our REST API’s always requires an ‘X-API-Key’ header. To simplify testing, I wanted to have the option to specify the header in the Swagger UI. Let’s see how we can get this done through SwashBuckle . First I need to create a custom IOperationFilter that will add the header: Now we need to update our Swagger configuration to use this header: If we now run our application and browse to the Swagger UI, we should see the extra header parameter:
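The code itself is not part of this excerpt, so here is a hedged reconstruction of such a filter for Swashbuckle.AspNetCore (the class name is mine):

```csharp
using System.Collections.Generic;
using Microsoft.OpenApi.Models;
using Swashbuckle.AspNetCore.SwaggerGen;

// Adds a required X-API-Key header parameter to every operation in the OpenAPI document.
public class ApiKeyHeaderOperationFilter : IOperationFilter
{
    public void Apply(OpenApiOperation operation, OperationFilterContext context)
    {
        operation.Parameters ??= new List<OpenApiParameter>();
        operation.Parameters.Add(new OpenApiParameter
        {
            Name = "X-API-Key",
            In = ParameterLocation.Header,
            Required = true,
            Schema = new OpenApiSchema { Type = "string" }
        });
    }
}

// Registration, typically in ConfigureServices:
// services.AddSwaggerGen(options => options.OperationFilter<ApiKeyHeaderOperationFilter>());
```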

ASP.NET Core 6–Http Logging

With the release of ASP.NET Core 6, a new HTTP Logging middleware became available. This middleware emits requests as log messages through the Microsoft.Extensions.Logging framework. To enable the middleware, add the following code to your Program.cs: This will add an instance of the Microsoft.AspNetCore.HttpLogging.HttpLoggingMiddleware class. If you now run your application, you will notice no difference and nothing is logged. This is because the HTTP Logging messages are handled as informational messages and these are not enabled by default. To adjust the log level, you can add the following line to the Logging > LogLevel configuration in the appsettings.json file: You can further configure the logging through the AddHttpLogging method:
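A hedged sketch that combines those steps (the selected logging fields are illustrative):

```csharp
using Microsoft.AspNetCore.HttpLogging;

var builder = WebApplication.CreateBuilder(args);

// Optional: configure what gets logged through the AddHttpLogging method.
builder.Services.AddHttpLogging(options =>
{
    options.LoggingFields = HttpLoggingFields.RequestMethod |
                            HttpLoggingFields.RequestPath |
                            HttpLoggingFields.ResponseStatusCode;
});

var app = builder.Build();

app.UseHttpLogging(); // adds the HttpLoggingMiddleware to the pipeline

app.MapGet("/", () => "Hello world!");
app.Run();

// And in appsettings.json, under Logging > LogLevel:
// "Microsoft.AspNetCore.HttpLogging.HttpLoggingMiddleware": "Information"
```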

Visual Studio 2022–Inline hints

Bring enough .NET developers together and sooner or later someone will start the discussion whether we should use the 'var' keyword or not. You have people in the 'pro' camp who like that they have to type less and don't worry too much about the specific type. You have people in the 'contra' camp who prefer explicit typing. In Visual Studio 2022, you can get the best of both worlds by enabling 'inline hints'. Inline hints can display parameter name hints for literals, function calls and more. To enable this feature, go to Tools > Options > Text Editor > C# or Basic > Advanced. Check the 'Display inline parameter name hints' checkbox and the 'Display inline type hints' checkbox. Now we can see that both our arguments and the var types are annotated:

ASP.NET Core - Who stole my cookie?

I stumbled over a strange issue I had in ASP.NET Core. A cookie created in ASP.NET MVC didn't show up in my requests in ASP.NET Core. This cookie was used to share some user preferences between 2 subsites in the same domain. The original cookie was created without setting a SameSite value and was also not marked as secure or httponly. I first updated the cookie generation logic to set both the Secure and HttpOnly values to true. This is not strictly required, but as I'm only reading the cookie in the backend this is already a step in the right direction. Let's now focus on the SameSite value. SameSite is an IETF draft standard designed to provide some protection against cross-site request forgery (CSRF) attacks. Cookies without a SameSite header are treated as SameSite=Lax by default. Knowing that we need SameSite=None to allow this cookie for cross-site usage, that should certainly be our next step. Remark: Cookies that assert SameSite=None must also be marked as Secu
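A hedged sketch of the resulting cookie settings (the cookie name, the domain and the preferencesValue variable are placeholders):

```csharp
// Create the preferences cookie so it survives cross-site requests between the subsites.
Response.Cookies.Append("user-preferences", preferencesValue, new CookieOptions
{
    Secure = true,                 // required for SameSite=None
    HttpOnly = true,               // we only read the cookie in the backend
    SameSite = SameSiteMode.None,  // allow cross-site usage
    Domain = ".example.com"        // hypothetical shared parent domain of both subsites
});
```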

Simple made easy

One of the inspirations for the name of my blog comes from this presentation by Rich Hickey. In case you've never watched this video, time to take it off your bucket list! Watch it on InfoQ or on YouTube.

Visual Studio 2022 - Test execution with Hot reload

Building your code before you can run your tests can be a big part of the time needed to run your tests. The build time inside Visual Studio can vary depending on the kind of changes made to the code. For larger solutions, builds can be the most expensive part of the test run. Visual Studio 2022 includes an experimental(!) feature that allows you to use hot reload to speed up test execution by skipping builds for supported scenarios. To start using this feature, you first need to enable it by choosing Test > Options > "(Experimental) Enable Hot Reloaded Test Runs for C# and VB test projects targeting .NET 6 and higher". Now when you execute a test in Visual Studio, Test Explorer will automatically use test execution with hot reload when possible. If a hot reload is not possible, it falls back to the regular behavior of building and running tests. The nice thing is that this is all happening behind the scenes and you, as a developer, should not do an

ASP.NET Core - Could not load type 'Microsoft.AspNetCore.Mvc.MvcJsonOptions'

After upgrading some packages in an ASP.NET Core application, I got the following error message when I tried to run the application:

System.TypeLoadException: Could not load type 'Microsoft.AspNetCore.Mvc.MvcJsonOptions' from assembly 'Microsoft.AspNetCore.Mvc.Formatters.Json, Version=3.1.20.0, Culture=neutral, PublicKeyToken=adb9793829ddae60'.
   at System.Signature.GetSignature(Void* pCorSig, Int32 cCorSig, RuntimeFieldHandleInternal fieldHandle, IRuntimeMethodInfo methodHandle, RuntimeType declaringType)
   at System.Reflection.RuntimeConstructorInfo.get_Signature()
   at System.Reflection.RuntimeConstructorInfo.GetParametersNoCopy()
   at System.Reflection.RuntimeConstructorInfo.GetParameters()
   at Microsoft.Extensions.Internal.ActivatorUtilities.CreateInstance(IServiceProvider provider, Type instanceType, Object[] parameters)
   at Microsoft.AspNetCore.Builder.UseMiddlewareExtensions.<>c__DisplayClass4_0.<UseMiddleware>b__

Blazor–Using Basic authentication

For an internal application I'm building I needed to use Basic authentication. Remark: In case you forgot, Basic authentication transmits credentials like a user ID/password combination as a base64-encoded string in the Authorization header. This is of course not the most secure way, as any man-in-the-middle can capture and read this header data. If you look around on the Internet, a typical example on how to do this in .NET looks like this: we create an HttpClientHandler, set the Credentials and pass it to our HttpClient as a parameter. Of course it is even better to not create an HttpClient instance yourself but instead use the HttpClientFactory to create a Named or Typed HttpClient. But if you try to use the code above in a Blazor application, you'll end up with the following runtime error: System.PlatformNotSupportedException: Property Credentials is not supported. As the HttpClient implementation for Blazor stays as close as possible to the fetch API of the browser, you'll ne
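A hedged sketch of the Blazor-friendly alternative: instead of HttpClientHandler.Credentials, put the base64-encoded credentials on the Authorization header yourself (the URL and the userId/password variables are placeholders):

```csharp
using System.Net.Http.Headers;
using System.Text;

var client = new HttpClient(); // better: a named/typed client via IHttpClientFactory

// Basic authentication: base64("user:password") in the Authorization header.
var token = Convert.ToBase64String(Encoding.UTF8.GetBytes($"{userId}:{password}"));
client.DefaultRequestHeaders.Authorization = new AuthenticationHeaderValue("Basic", token);

var response = await client.GetAsync("https://internal.example/api/data");
```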

.NET 6 - The ArgumentNullException helper class

.NET 6 extends the ArgumentNullException class with a new helper method that uses the new [CallerArgumentExpression] attribute and the [DoesNotReturn] attribute. This gives you an easy-to-use helper that throws an ArgumentNullException for null values. Thanks to the [CallerArgumentExpression] attribute this helper gives you better error messages, as it can capture the expressions passed to a method. This is the implementation of this helper: Before C# 10, you probably would have used the nameof keyword and implemented this helper like this:
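The implementation image is not part of this excerpt; this is a simplified sketch of what the .NET 6 helper looks like, plus a usage example:

```csharp
using System;
using System.Diagnostics.CodeAnalysis;
using System.Runtime.CompilerServices;

public static class ThrowHelperSketch
{
    // Simplified from ArgumentNullException.ThrowIfNull in the .NET 6 sources.
    public static void ThrowIfNull(
        [NotNull] object? argument,
        [CallerArgumentExpression("argument")] string? paramName = null)
    {
        if (argument is null)
            Throw(paramName);
    }

    [DoesNotReturn]
    private static void Throw(string? paramName) =>
        throw new ArgumentNullException(paramName);
}

// Usage: the compiler captures the argument expression for you.
// ArgumentNullException.ThrowIfNull(order); // the error message mentions "order"
```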

.NET Conf 2021 - Sessions, slides and demos are online

In case you missed .NET Conf 2021, no worries: all sessions are recorded and available on the .NET YouTube channel or the new Microsoft Docs events hub. With over 80 sessions, you will know what to do during the Christmas holidays. Slide decks and demos can be found on the .NET Conf 2021 GitHub page. Have fun!

.NET Core–The case of the disappearing authorization header

While building an internal (Blazor) application, I stumbled over some CORS issues. As this was an internal application, I decided to be lazy and just disable CORS on my request: In the example above I'm using SetBrowserRequestMode() to disable the CORS preflight check. After doing that the CORS issue was gone; unfortunately my application still didn't work, because now I got a 401 response back?! I was quite confident that the provided username/password combination was correct. So what was going on? I monitored my request using the browser developer tools and I noticed that the authorization header was missing. The MDN documentation brought me the answer when I had a look at the request mode documentation, specifically for 'no-cors': no-cors — Prevents the method from being anything other than HEAD, GET or POST, and the headers from being anything other than simple headers. If any ServiceWorkers intercept these requests, they may not a
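For reference, a hedged sketch of the 'lazy' call in question (the request URL and the client variable are placeholders); as the MDN quote above explains, no-cors strips non-simple headers, which is exactly why the Authorization header disappeared:

```csharp
using Microsoft.AspNetCore.Components.WebAssembly.Http;

var request = new HttpRequestMessage(HttpMethod.Get, "https://internal.example/api/data");

// Disables the CORS preflight check... but also downgrades the request to 'no-cors',
// which strips non-simple headers such as Authorization.
request.SetBrowserRequestMode(BrowserRequestMode.NoCors);

var response = await client.SendAsync(request);
```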

Keep your project dependencies up to date with dotnet outdated

From the documentation: When using Visual Studio, it is easy to find out whether newer versions of the NuGet packages used by your project are available, by using the NuGet Package Manager. However, the .NET Core command-line tools do not provide a built-in way for you to report on outdated NuGet packages. dotnet-outdated is a .NET Core global tool that allows you to quickly report on any outdated NuGet packages in your .NET Core and .NET Standard projects. This is a great way to keep your applications up-to-date and it can easily be integrated as part of your DevOps processes. Install dotnet-outdated as a global tool:

dotnet tool install --global dotnet-outdated-tool

Now you can invoke it from your project or solution folder:

dotnet outdated

This is how the output looks for one of my projects: The colors make it very clear. Here is the related legend: You can automatically upgrade packages by passing the '-u' parameter:

dotnet outdated -u

GraphQL HotChocolate 12 - Updated Application Insights monitoring

It seems that with every release of HotChocolate, I can write a follow-up post. With the release of HotChocolate 11, I wrote a blog post on how to integrate it with Application Insights. With HotChocolate 12, there was again a small update in the available interfaces and APIs. Let's check the changes we have to make… First of all, our diagnostic class should no longer inherit from DiagnosticEventListener but from ExecutionDiagnosticEventListener. The signature of the ExecuteRequest method has changed as well: instead of returning an IActivityScope it should return an IDisposable. This also means that our RequestScope no longer needs to implement the IActivityScope interface but only needs to implement IDisposable. Here is the full example:
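The full example is not included in this excerpt; a hedged sketch of the new shape (the Application Insights plumbing is omitted and the class names are mine):

```csharp
using HotChocolate.Execution;
using HotChocolate.Execution.Instrumentation;

// HotChocolate 12: inherit from ExecutionDiagnosticEventListener...
public class ApplicationInsightsDiagnosticEventListener : ExecutionDiagnosticEventListener
{
    // ...and return an IDisposable instead of an IActivityScope.
    public override IDisposable ExecuteRequest(IRequestContext context)
    {
        // Start an Application Insights operation here.
        return new RequestScope();
    }

    // The scope only needs to implement IDisposable now.
    private sealed class RequestScope : IDisposable
    {
        public void Dispose()
        {
            // Stop the operation and flush the telemetry here.
        }
    }
}
```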

C# 10–Change an existing project to file scoped namespaces

C# 10 introduces file scoped namespaces. This allows you to remove the '{}' when your source file only has one namespace (like most files typically have). So instead of writing: you can now write: To apply this to an existing project written in C# 9 or lower, you can do it in one go. Therefore, set the language version of your project to C# 10. Now we need to update our .editorconfig file and add the following line: After doing that Visual Studio will help us out and we can use "Fix all occurrences in Solution" to apply it in one go:
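The before/after snippets are not part of this excerpt; a minimal sketch (the type names are hypothetical, and the .editorconfig line is the standard option for this style rule):

```csharp
// Before, block-scoped (C# 9 and earlier):
//
// namespace Example.Orders
// {
//     public class OrderService { }
// }

// After, file-scoped (C# 10): one '{}' pair and one indentation level less.
namespace Example.Orders;

public class OrderService { }

// The .editorconfig line that drives the "Fix all occurrences" refactoring
// (the severity suffix is a matter of taste):
// csharp_style_namespace_declarations = file_scoped:warning
```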

Running Azure on your laptop–Part 3–Prerequisites

In the previous post in this series I talked about why Azure Arc is also interesting for developers. Today we finally move on to the more practical part and try to get it up and running on our local machine. Let's first focus on what you need to have up and running locally:

- Make sure your kubeconfig file is configured properly and you are working against your k8s cluster context.
- Install or update Azure CLI to version 2.25.0 or above.
- Install and set up kubectl.
- Install Helm 3. If you are on a Windows environment, a recommended and easy way is to use the Helm 3 Chocolatey package.

As we want to run Azure Arc on our local machine, we also need to have a local Kubernetes cluster up and running. You can use Minikube, MicroK8S, KIND (Kubernetes in Docker), or any other flavor you like that can be installed locally. I tested both Minikube and KIND. Now we can move on to the Azure side. Let's see what we need the

Running Azure on your laptop–Part 2 - Azure Arc for Developers

In the previous post in this series I talked about Azure Arc and its multiple flavors. One Azure-managed control plane for all your resources, no matter whether they are on premises, on Azure or hosted at another cloud provider, sounds great if you are an IT ops guy (or girl), but why should you care as a developer as well? It is important to understand that the Azure Arc story has two dimensions.

1. Arc enabled infrastructure

The first dimension is the Arc enabled infrastructure. This is the part that I already talked about and that allows you to connect and control hybrid resources like they are native Azure resources. This allows you to use additional Azure services like Azure Policy, Azure Monitor, and so on to govern, secure and monitor these services.

2. Arc enabled services

The second dimension is Arc enabled services. Once you have an Arc enabled infrastructure, you can start to deploy and run Azure services outside Azure while still operating them from Azure. This

Running Azure on your laptop– Part 1–What is Azure Arc?

Before I dive into the details on how to get Azure Arc up and running on your laptop, it would be a good idea to start with a short introduction. Therefore we first have to look at how Azure works. The heart of the Azure ecosystem is the Azure control plane. This control plane manages all the resources you can find in Azure. It helps you to inventory, organize and govern all resources, and multiple tools exist that can help you to interact with it (think ARM templates, Bicep, Terraform, Pulumi, …). You probably know this control plane better as the Azure Resource Manager. It controls and manages all the Azure resources, which can be as big as a Kubernetes cluster and as small as a static IP address. These resources run inside an Azure region, one of the datacenters that Microsoft has all around the world. So where does Azure Arc fit into this picture? If we bring Azure Arc into the picture, we can bring resources that are not running on Azure to the Azure control plane

Running Azure on your laptop–Introduction

As mentioned yesterday, I promised to write a series of follow-up posts about my 'Running Azure on your laptop' session. I'll use this post as a placeholder to point to the different parts. Microsoft is more and more embracing a hybrid cloud approach. As part of this evolution, an increasing amount of 'Azure only' services become available outside Azure. This idea is not new; people who have worked long enough in the Microsoft ecosystem may remember Azure Pack, which was a way to install Azure software on your own hardware. It gave you the Azure portal and some of its services. I never tried it myself and I don't know any customer who used it in the wild. A couple of years later, Microsoft announced Azure Pack's successor, Azure Stack. This was a hardware appliance that you could install in your own datacenter. Over time, the name evolved to Azure Stack Portfolio as multiple flavors of Azure Stack became available. Azure Stack is still available today and keeps evolving. At Ign

VisugXL - Running Azure on your laptop using Azure Arc

Last weekend I gave a presentation at VisugXL about Azure Arc. I'll write a few follow-up posts explaining the steps I took to get it all up and running (and where I got into trouble). If you can't wait until then, here is already the presentation:

.NET 6–Breaking changes

Although Microsoft takes a lot of effort to maximize backwards compatibility, migrating to .NET 6 can result in breaking changes that might affect you. So before you start to upgrade, have a look at the list of breaking changes maintained here: https://docs.microsoft.com/en-us/dotnet/core/compatibility/6.0

Azure DevOps–Run a tool installed as npm package

As part of our build pipeline we wanted to use GraphQL Inspector to check if our updated GraphQL schema contained any breaking changes. GraphQL Inspector is available as a command-line tool and can be installed through NPM. So the first step was to use the NPM task to install the tool: But now the question is: how can we invoke the installed tool? This is possible thanks to NPX. NPX stands for Node Package Execute and it comes with NPM. It is an npm package runner that can execute any package that you want from the npm registry. I added a command line task to invoke npx: Remark: When using NPX you don't even need to install the package first. This means that the NPM task I created first is not necessary.

ASP.NET Core - Build a query string

What is wrong with the following code? Nothing, you would say? What if I passed 'Bert & Ernie' as the searchterm parameter? The problem is that I'm using string interpolation to build up the query. This could be OK if you have full control over the passed parameters, but in this case it is input coming from a user. The example above would lead to an incorrect query string. Writing the correct logic to handle ampersands, question marks and so on would be a challenge. Luckily ASP.NET Core offers a QueryHelpers class with an AddQueryString function:

public static string AddQueryString(string uri, string name, string value);
public static string AddQueryString(string uri, IDictionary<string, string> queryString);

Let's update our code example to use this: That's better!
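A minimal usage sketch (the path is a placeholder):

```csharp
using Microsoft.AspNetCore.WebUtilities;

// The searchterm value is encoded for us, ampersand included.
var url = QueryHelpers.AddQueryString("/products/search", "searchterm", "Bert & Ernie");
// Result: /products/search?searchterm=Bert%20%26%20Ernie
```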

Azure DevOps Pipelines–A local file header is corrupt

A colleague contacted me with the following question: he tried to run a specific Azure Pipelines build, but before even one build task could be executed, the build failed. I asked him to send me the logs and he shared the following screenshot: As you can see, the build fails in the 'Job initialization' phase while downloading a specific task 'VersionAssemblies'. The strange thing was that when I searched for this build task, I couldn't find it in the list of installed extensions on the Azure DevOps server. I took a look at the Azure DevOps marketplace and even there this specific build task was non-existent. Strange! At least this explained the error message I got, as the build pipeline probably couldn't find the task either. In the end I fixed it by introducing an alternative build task that achieved the same goal (updating the AssemblyInfo with a build number).

Azure DevOps–SonarQube error

As part of our build pipelines, we run a code analysis through SonarQube. After moving SonarQube to a different server, our Azure DevOps pipelines started to fail. When I opened the build logs, I noticed the following error message:

ERROR: JAVA_HOME exists but does not point to a valid Java home folder. No "bin\java.exe" file can be found there.

I logged in to our SonarQube server and checked the value of the JAVA_HOME environment variable:

JAVA_HOME = c:\program files\Zulu\zulu-11\bin\

Based on the error message above, it seems that the SonarScanner expects that we don't include the 'bin' folder. So I updated the environment variable to:

JAVA_HOME = c:\program files\Zulu\zulu-11

After rescheduling the build, the SonarQube analysis task completed successfully.

GraphQL Crash Course

If you want to get started with GraphQL, you can have a look at the following video: Remark: This is part of a bigger course that is available on Udemy .

GraphQL–Strawberry Shake GraphQL client

Until recently I always used GraphQL.Client as the GraphQL client of my choice. This client is straightforward and easy to use. For a new project I decided to give Strawberry Shake a try. Strawberry Shake was created by ChilliCream, the creators of the HotChocolate GraphQL backend for .NET. Strawberry Shake uses a different approach than GraphQL.Client, as it heavily relies on code generation and looks similar to the Apollo GraphQL client from a design point of view. I mostly followed the "Get started" documentation to get the Strawberry Shake client up and running, but I didn't get everything working immediately, so I'll add some extra detail on the points where I got into trouble. Add the CLI tools: We start by adding the Strawberry Shake CLI tools. Open the folder that contains the project where you want to add the Strawberry Shake GraphQL client. Now we need to first create a dotnet tool-manifest:

dotnet new tool-manifest
Getting ready...

Azure Pipelines - Unable to determine the location of vstest.console.exe

A colleague forwarded me a question about a failing build pipeline. When I took a look at the build results, I noticed that the Visual Studio Test task was failing. Inside the logs I found more details explaining what was going on:

##[warning]No results found to publish.
##[debug]Processed: ##vso[task.logissue type=warning]No results found to publish.
##[error]System.Management.Automation.CmdletInvocationException: Unable to determine the location of vstest.console.exe ---> System.IO.FileNotFoundException: Unable to determine the location of vstest.console.exe
##[debug]Processed: ##vso[task.logissue type=error;]System.Management.Automation.CmdletInvocationException: Unable to determine the location of vstest.console.exe ---> System.IO.FileNotFoundException: Unable to determine the location of vstest.console.exe
   at Microsoft.TeamFoundation.DistributedTask.Task.Internal.InvokeVSTestCmdlet.GetVsTestLocation()
   at Microsoft.TeamFoundation.Distributed

.NET Tools - Cannot find a manifest file

A .NET tool is a special NuGet package that contains a console application. You can install a .NET tool as a global tool (using the --global argument) or as a local tool (using the --local argument). However, when I tried to install a specific tool locally, it failed with the following error message: "Cannot find a manifest file."

dotnet tool install StrawberryShake.Tools --local
Cannot find a manifest file.
For a list of locations searched, specify the "-d" option before the tool name.
If you intended to install a global tool, add `--global` to the command.
If you would like to create a manifest, use `dotnet new tool-manifest`, usually in the repo root directory.

To install a tool for local access only, it has to be added to a tool manifest file. As I didn't create such a file, I got the error message mentioned above. To fix this, we first need to create a tool manifest file by running the dotnet new tool-manifest command:

dotnet new tool-ma

vscode.dev : Bringing VS Code to the browser

Although I'm still using Visual Studio (or Rider, depending on the mood) for my day-to-day C# development (and F# occasionally), I use Visual Studio Code for all other languages and web development. With vscode.dev, your favorite code editor becomes available everywhere, without the need to leave the browser and install anything. Thanks to the File System Access API support in modern browsers, vscode.dev can access the local file system. This enables scenarios like local file viewing and editing. Integration with GitHub and Azure DevOps is also available, allowing you to sync your changes with repositories on both platforms. However, don't expect vscode.dev to already be on par with the desktop version in terms of functionality. For example, there's no integrated debugging or terminal in vscode.dev. More information: https://code.visualstudio.com/blogs/2021/10/20/vscode-dev

MassTransit - Stop handling erroneous messages

By default, when a MassTransit consumer fails to handle a message (and throws an exception), the message is moved to an _error queue (prefixed by the receive endpoint queue name). This is OK for transient exceptions, but probably not what you want when you have a bug in your system or there is another reason why none of the messages can be handled successfully. In that case another feature of MassTransit comes in handy: the kill switch. A kill switch is used to prevent failing consumers from moving all the messages from the input queue to the error queue. By monitoring message consumption and tracking message successes and failures, a kill switch stops the receive endpoint when a trip threshold has been reached. You can configure a kill switch for a specific endpoint or for all receive endpoints on the bus. Here is a short example on how to configure the kill switch for all receive endpoints: In the above example, the kill switch will activate after 10 messages
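The example itself is cut off here; a hedged sketch along the lines of the MassTransit documentation (the transport and the thresholds are illustrative):

```csharp
using MassTransit;

services.AddMassTransit(x =>
{
    x.UsingRabbitMq((context, cfg) =>
    {
        // Configured on the bus, the kill switch applies to all receive endpoints.
        cfg.UseKillSwitch(options => options
            .SetActivationThreshold(10)  // start tracking after 10 messages
            .SetTripThreshold(0.15)      // trip when 15% of tracked messages fail
            .SetRestartTimeout(m: 1));   // restart the endpoint after one minute

        cfg.ConfigureEndpoints(context);
    });
});
```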

Service decomposition and service design

Finding the boundaries of your system and decomposing it into multiple services sounds easy, but it certainly isn't. If you are interested in this topic, check out the blog series by Vadim Samokhin:

- Why you should split the monolith
- Wrong way of defining service boundaries
- What characteristics my services should possess
- How to define service boundaries

Remark: After writing this post, I noticed that Vadim created a blog post linking to the series above that also includes some other related posts.

Azure AKS–Save some money using spot node pools

One of the ways you can save some money when using Azure is by using spot node pools for your Azure Kubernetes Service cluster. What's a spot node pool? Using a spot node pool allows you to take advantage of unused Azure capacity at significant cost savings. At any point in time when Azure needs the capacity back, the Azure infrastructure will evict spot nodes. Therefore, spot nodes are great for workloads that can handle interruptions like batch processing jobs, dev/test environments, large compute workloads, and more. Remark: A spot node pool can't be the cluster's default node pool; it can only be used as a secondary pool. Pricing for a spot node pool: Pricing for spot instances is variable, based on region and SKU. For more information, see the pricing for Linux and Windows. You do have the option to set a max price; in case the price is exceeded, the spot node is evicted from your cluster. Schedule a deployment to use the spot node pool: A spot node

Winget–A package manager for Windows

I've been using Chocolatey for a long time as an easy way to get my Windows machine configured with all the software I need. With the release of version 1.1 of the Windows Package Manager (WinGet) I thought it was a good time to give it a try. Installation: Chances are high that WinGet is already available on your machine. Open a terminal and type winget. If it is available, you should see something like this: If not, the Windows Package Manager is distributed with the App Installer from the Microsoft Store. You can also download and install the Windows Package Manager from GitHub, or just directly install the latest available released version. Searching a package: The list of available packages is quite large (more than 2,600 packages in the Windows Package Manager app repository). Just run winget search <SomePackage> to see if the package you are looking for is available there. For example, let's search for my favorite git client GitKraken:

PS C:\Users\bawu

ASP.NET Core–Running Swagger behind a reverse proxy

Today I helped out a colleague who was struggling with the following use case: we have an ASP.NET Core Web API with OpenAPI (Swagger) integration enabled. This ASP.NET Core Web API was running behind a reverse proxy (YARP, in case you want to know) and isn't directly accessible. To explain the problem he had, let's start from the following situation:

- The ASP.NET Core application was running on http://internalname:5000
- The YARP reverse proxy was running on https://localhost/proxy

When browsing to the Swagger endpoint on https://localhost/proxy/swagger, we got the following: Do you notice the application URL in the address bar vs the URL in the Servers dropdown? If we try to invoke a specific endpoint through the Swagger UI, Swagger tries to do the call directly to http://internalname:5000, which results in a 404 error as the service is not directly available on this address. It seems that Swagger doesn't respect the Forwarded headers as provided through YARP.
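The conclusion of the post is cut off here; as a hedged sketch of one common fix (not necessarily the one the author chose), you can make the application honor the proxy's X-Forwarded-* headers so generated URLs use the public host:

```csharp
using Microsoft.AspNetCore.HttpOverrides;

// Trust the scheme and host forwarded by the reverse proxy instead of
// the internal http://internalname:5000 address.
app.UseForwardedHeaders(new ForwardedHeadersOptions
{
    ForwardedHeaders = ForwardedHeaders.XForwardedProto |
                       ForwardedHeaders.XForwardedHost
});
```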

Internal.Cryptography.CryptoThrowHelper+WindowsCryptographicException–The system cannot find the file specified

After deploying an application to production, it failed with the following error message:

The system cannot find the file specified
Internal.Cryptography.CryptoThrowHelper+WindowsCryptographicException:
   at Internal.Cryptography.Pal.CertificatePal.FilterPFXStore (System.Security.Cryptography.X509Certificates, Version=5.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a)
   at Internal.Cryptography.Pal.CertificatePal.FromBlobOrFile (System.Security.Cryptography.X509Certificates, Version=5.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a)
   at System.Security.Cryptography.X509Certificates.X509Certificate..ctor (System.Security.Cryptography.X509Certificates, Version=5.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a)
   at System.Security.Cryptography.X509Certificates.X509Certificate2..ctor (System.Security.Cryptography.X509Certificates, Version=5.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a)

Let's take a look at the code that caused this issue: Not muc

DaprCon is coming on October 19th-20th 2021

What is Dapr? Dapr helps developers build event-driven, resilient distributed applications. Whether on-premises, in the cloud, or on an edge device, Dapr helps you tackle the challenges that come with building microservices and keeps your code platform agnostic. The Dapr ecosystem keeps growing, and now they'll have their first virtual conference 'DaprCon' next week. DaprCon will include a variety of content including a keynote, technical sessions, panel discussions and real-world experiences of adopters building with Dapr. DaprCon is a fully virtual event that will be streamed on YouTube and attendance is free! To watch the live events just follow these two links: DaprCon Day 1, DaprCon Day 2. More information: https://blog.dapr.io/posts/2021/10/05/join-us-for-daprcon-october-19th-20th-2021/

MassTransit–Message versioning

MassTransit dedicates a whole documentation page to message versioning, but it still wasn't completely clear to me how it works. Let's use this blog post to see what's going on…

Publishing messages
Let's first focus on the sending side.

Publishing a first version of our message contract
We'll start with a first version of our message contract: Let's send this to RabbitMQ using: Time to open the RabbitMQ Management portal and take a look at how the message payload is constructed:

Creating a second version of our message contract
Let's introduce a v2 version of our message contract: If we send it to RabbitMQ in the same way, there isn't such a big difference when comparing the payloads: the 'messageType' value is different and of course the message itself. But that's it.

Send a backwards compatible version
Let's now construct a message that implements both contracts: And send that one: If we now check the payload, we see that 1 message is put on th
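A hedged sketch of that backwards compatible pattern (contract and property names are hypothetical, and bus is a placeholder for your IBus instance):

```csharp
// V1 of the contract.
public interface OrderSubmitted
{
    Guid OrderId { get; }
}

// V2 extends V1, so every V2 message is still a valid V1 message.
public interface OrderSubmittedV2 : OrderSubmitted
{
    string CustomerId { get; }
}

// One class implements both contracts...
class OrderSubmittedEvent : OrderSubmittedV2
{
    public Guid OrderId { get; init; }
    public string CustomerId { get; init; } = string.Empty;
}

// ...so a single publish produces one message that both v1 and v2 consumers can handle.
await bus.Publish(new OrderSubmittedEvent
{
    OrderId = Guid.NewGuid(),
    CustomerId = "BE-123"
});
```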

.NET 6.0 - Migrate to ASP.NET Core 6

.NET 6 introduces a new hosting model for ASP.NET Core applications. This model is streamlined and reduces the amount of boilerplate code required to get a basic ASP.NET Core application up and running. There are a lot of blog posts out there explaining this new hosting model, but I'd like to share the guide written by David Fowler, software architect on the ASP.NET team. He walks you through the building blocks, explains the differences in the hosting model, shares a list of frequently asked questions and provides a cheat sheet.
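In a nutshell, the new model collapses Program.cs and Startup.cs into a single file; a minimal sketch:

```csharp
var builder = WebApplication.CreateBuilder(args);

// What used to live in Startup.ConfigureServices goes here:
// builder.Services.AddControllers();

var app = builder.Build();

// What used to live in Startup.Configure goes here:
app.MapGet("/", () => "Hello, .NET 6!");

app.Run();
```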

ASP.NET Core - InvalidOperationException: 'VaryByQueryKeys' requires the response cache middleware.

Yes, I'm still rewriting an existing ASP.NET Web API application to ASP.NET Core. I ported an existing action method where I was using response caching. I had previously ported other action methods that used caching, but this one was a little bit different because I had to take the query string values into account for the caching. This is easily arranged by specifying the name of the query string parameter using the VaryByQueryKeys property on the ResponseCache attribute. Small tip: if you want to take all query string parameters into account, you can use "*" as a single value. When I tried to call this method I got a 500 error. Inside the logs I noticed the following error message:

InvalidOperationException: 'VaryByQueryKeys' requires the response cache middleware.

Although caching seemed to work for other scenarios, when I used the VaryByQueryKeys property I had to add the response caching middleware. Here is my Startup.ConfigureServices(): And my Startup.Conf
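A hedged sketch of the three pieces involved (the duration, key name and action body are mine):

```csharp
// The action: vary the cached response per 'searchterm' query string value.
[HttpGet]
[ResponseCache(Duration = 60, VaryByQueryKeys = new[] { "searchterm" })] // "*" = all keys
public IActionResult Search(string searchterm)
{
    return Ok(new[] { searchterm }); // placeholder body
}

// Startup.ConfigureServices():
// services.AddResponseCaching();

// Startup.Configure(): the middleware that 'VaryByQueryKeys' requires.
// app.UseResponseCaching();
```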

ASP.NET Core–Route arguments are case sensitive

I'm currently rewriting an existing ASP.NET Web API application to ASP.NET Core. Along the way I encountered some issues; here is one specific lesson I learned… After porting an ASP.NET Web API controller to .NET Core, a specific action method looked like this: Looked OK to me. But when I tried to invoke this action method, I got a 400 error back:

{
  "type": "https://tools.ietf.org/html/rfc7231#section-6.5.1",
  "title": "One or more validation errors occurred.",
  "status": 400,
  "traceId": "00-3c02d7d0541cbb49a1790b11a71d871a-885c9350d2c4864c-00",
  "errors": {
    "ID": [
      "The value '{id}' is not valid."
    ]
  }
}

It wasn't immediately obvious to me what I did wrong. Maybe you spot my mistake? Let's have a look at the route attribute: [HttpGet("{ id }")] and at the metho