

Showing posts from March, 2022

SQL Server–Map connection to a specific application

As a follow-up to my post yesterday, where I talked about a production issue we had with one of our services, I want to share a tip that can make debugging more enjoyable. In that post I explained how you can use the built-in reports to investigate possible problems. Here is the report I shared yesterday: You see that it mentions a Session ID, but it doesn’t give you any clue about the exact application. To map the Session ID to a specific application, you should update your SQL Server connection string with an Application Name property: Server=myServerAddress;Database=myDataBase;Trusted_Connection=True;Application Name=myApplicationName; If you now look at the same report, the Login Name column is updated with the application name: You can also use this information, for example in a stored procedure, through the following command: SELECT APP_NAME();
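A minimal sketch of setting the Application Name from C# using SqlConnectionStringBuilder; the server, database, and application names are placeholders matching the post's example:

```csharp
using Microsoft.Data.SqlClient; // or System.Data.SqlClient in older projects

var builder = new SqlConnectionStringBuilder
{
    DataSource = "myServerAddress",
    InitialCatalog = "myDataBase",
    IntegratedSecurity = true,
    ApplicationName = "myApplicationName" // shows up in the SQL Server reports and APP_NAME()
};

using var connection = new SqlConnection(builder.ConnectionString);
```

The builder approach avoids string concatenation mistakes and correctly escapes any special characters in the values.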

SQL Server Timeouts–Find possible (dead)locks

Today it was all hands on deck, as one of our production services started to scream error messages. After checking the log telemetry and running some tests, we could pinpoint the issue to a database timeout. Read operations were executed successfully but all write operations timed out. This made us think that a deadlock was causing the issue. But how do you know for sure? Read on to see how you can check this… Open SQL Server Management Studio. Connect to the suspicious database. Right-click on the database, select Reports and choose the Resource Locking Statistics by Objects report. I didn’t create a screenshot at the moment of the issue (too busy fixing it), but the report showed multiple sessions to the same table where one session was granted a lock while the others were waiting. To free the lock, you could kill the session that is causing it using its session id: KILL <sessionid>
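Instead of (or in addition to) the report, you can query the dynamic management views directly. A sketch that lists requests currently blocked by another session, run against the affected server:

```sql
-- Requests that are waiting on a lock held by another session
SELECT r.session_id,
       r.blocking_session_id,  -- the session holding the lock (candidate for KILL)
       r.wait_type,
       r.wait_time
FROM sys.dm_exec_requests r
WHERE r.blocking_session_id <> 0;
```

The blocking_session_id column gives you the session id to pass to the KILL command mentioned above.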

XUnit - Change culture during your test execution

Most of the applications I’m building should support multiple languages and cultures. This means that I also need to test my application code with this in mind. In XUnit, there is no out-of-the-box way to change the culture of your unit test. However, XUnit is easy to extend and you can even find a UseCultureAttribute example that provides exactly the functionality I need. Here is an example where I used this attribute to test if my validation messages are translated correctly:
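A sketch of how such a test could look, assuming the UseCultureAttribute from the xUnit samples has been added to the test project; the validator class and the Dutch message are hypothetical:

```csharp
using Xunit;

public class ValidationMessageTests
{
    [Fact]
    [UseCulture("nl-BE")] // attribute from the xUnit samples, not built into xUnit itself
    public void ValidationMessage_IsTranslated_ToDutch()
    {
        var validator = new CustomerValidator();            // hypothetical validator
        var result = validator.Validate(new Customer());    // hypothetical model

        // The resource lookup should now resolve the nl-BE translation
        Assert.Contains("is verplicht", result.Errors[0].ErrorMessage);
    }
}
```

The attribute saves and restores the original culture around the test, so tests remain isolated from each other.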

ASP.NET Core–Set default request Culture

ASP.NET Core comes with built-in localization features through the Localization middleware. This middleware will update the current Culture and current UI Culture based on information provided by one or more 'RequestCultureProviders'. The default providers are QueryStringRequestCultureProvider, CookieRequestCultureProvider and AcceptLanguageHeaderRequestCultureProvider. It is also possible to set a default culture when no culture information could be resolved through one of the configured providers. This is useful if your website or application targets a standard audience in one region or country. To set a default culture, we need to configure the DefaultRequestCulture property of the RequestLocalizationOptions: Now when no culture information could be resolved through one of the providers, 'nl-BE' will be used as our default culture.
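Configuring the default culture could look like this (shown in the minimal-hosting style; 'nl-BE' is the culture used in the post):

```csharp
using Microsoft.AspNetCore.Localization;

var builder = WebApplication.CreateBuilder(args);

builder.Services.Configure<RequestLocalizationOptions>(options =>
{
    // Used only when none of the RequestCultureProviders can resolve a culture
    options.DefaultRequestCulture = new RequestCulture("nl-BE");
});

var app = builder.Build();

// The middleware applies the resolved (or default) culture to each request
app.UseRequestLocalization();

app.Run();
```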

ImageSharp.Web–Create your own Image Provider

Yesterday I introduced ImageSharp.Web as a solution to integrate image processing capabilities into your ASP.NET Core application. One of the building blocks I talked about was 'Image Providers'. Out-of-the-box there are 3 image providers available: PhysicalFileSystemProvider: loads files from the ‘wwwroot’ folder; AzureBlobStorageImageProvider: loads files from Azure Blob Storage; AWSS3StorageImageProvider: loads files from AWS S3 storage. If you want to load files from a different source, you should create your own implementation of the IImageProvider interface. The images in our application were stored inside the database. Here are the steps we took to create our own implementation. Implement the IImageProvider interface First we need to create an implementation of the IImageProvider interface. In this implementation we check if the incoming request can be handled by our Image Provider through the IsMatch property and if so we fetch the data from the database…
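A rough skeleton of such a provider. The member names follow ImageSharp.Web's IImageProvider contract but may differ between library versions, and the repository and resolver types are hypothetical stand-ins for the database access code:

```csharp
public class DatabaseImageProvider : IImageProvider
{
    private readonly IImageRepository repository; // hypothetical data-access abstraction

    public DatabaseImageProvider(IImageRepository repository)
        => this.repository = repository;

    // Only handle requests under a dedicated path segment (placeholder: /images)
    public Func<HttpContext, bool> Match { get; set; }
        = context => context.Request.Path.StartsWithSegments("/images");

    public ProcessingBehavior ProcessingBehavior => ProcessingBehavior.All;

    public bool IsValidRequest(HttpContext context) => true;

    public async Task<IImageResolver> GetAsync(HttpContext context)
    {
        var image = await repository.GetByNameAsync(context.Request.Path);
        // Returning null tells the middleware no image was found for this request
        return image is null ? null : new DatabaseImageResolver(image); // hypothetical resolver
    }
}
```

The provider is registered with the ImageSharp.Web services at startup so the middleware can consult it for each incoming image request.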

Embed image processing in your ASP.NET Core application with ImageSharp.Web

For an application we are building, we had to add image processing features to an existing API. Our first idea was to build our own solution, but then we discovered ImageSharp.Web. ImageSharp.Web builds on top of the great ImageSharp library and adds middleware to manipulate images. Exactly what we needed! The documentation is quite limited, so let me help you out and walk you through the steps (and share a few gotchas along the way) to get it up and running in your ASP.NET Core application. Installation First you need to install the library through NuGet: dotnet add package SixLabors.ImageSharp.Web Now you need to register the middleware: And add it to the ASP.NET Core pipeline: Invoke the middleware To invoke the middleware, you should browse to the application and add the commands you want to execute. For example, to resize an image, you should call: http://localhost/imageprocessingexample/exampleimage.png?width=400 Sounds easy? Unfortunately there are a few gotchas…
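The registration and pipeline steps mentioned above boil down to two calls (shown in the minimal hosting model):

```csharp
using SixLabors.ImageSharp.Web.DependencyInjection;

var builder = WebApplication.CreateBuilder(args);

// Register the ImageSharp.Web services with the default configuration
builder.Services.AddImageSharp();

var app = builder.Build();

// Add the image processing middleware; place it before the static files middleware
// so image requests are intercepted and processed first
app.UseImageSharp();
app.UseStaticFiles();

app.Run();
```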

Impress your colleagues with your knowledge about… startup hooks

Sometimes when working with C# you discover some hidden gems. Some of them are very useful; for others it is a little bit harder to find a good way to benefit from their functionality. One of those hidden gems that I discovered some days ago is Startup Hooks. A Startup Hook is a class named StartupHook that has a public static void Initialize() method. Important: don’t put this class in a namespace. To execute this Startup Hook before any other code in your application, you need to set the DOTNET_STARTUP_HOOKS environment variable. You can specify a list of managed assemblies that each contain a Startup Hook class like the one above. You can specify an assembly either through its simple name (e.g. example) or through the full path (e.g. d:\example.dll). Here is the output when I run this application:
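A minimal Startup Hook looks like this; note that the class sits outside any namespace and lives in its own assembly, which is then referenced by the DOTNET_STARTUP_HOOKS environment variable:

```csharp
using System;

// Must be named StartupHook and must NOT be placed inside a namespace
internal class StartupHook
{
    public static void Initialize()
    {
        // Runs before the application's Main method
        Console.WriteLine("Hello from the startup hook!");
    }
}
```

After building this into, say, d:\example.dll, set `DOTNET_STARTUP_HOOKS=d:\example.dll` before launching the target application.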

GraphQL and OData–Let’s discuss…

If you are following my blog, you know that I’m a big fan of GraphQL. Of course you could wonder why I’m not using OData instead. If you want to know my opinion on this topic, have a look at my previous post. But hey, it’s not about what I think; let’s hear what some people in the .NET community think about it. Check out this .NET Data Community Standup: In the video Hassan Habib talks about a protocol-agnostic OData spinoff: OData Neo.

Azure Application Insights– Query your log messages

By default, when you have configured the Application Insights telemetry inside your application, an Application Insights logging provider is registered. This registered provider is configured to automatically capture log events with a severity of LogLevel.Warning or greater (learn how to change this in one of my previous blog posts). When you go to the Azure Portal and have a look at the Application Insights resource, you’ll see that by default multiple types of data are captured: Requests, Traces, Dependencies and Exceptions. More information about the different types of data can be found in the Application Insights Data Model here. This is important to know when you want to search for specific log information. The easiest way to do this is through Log Analytics (go to Monitoring –> Logs). But when you open Log Analytics, the available example packs are mainly focused on Requests: If you want to search for a specific log message, you should use the Traces table…
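A sketch of a Log Analytics (KQL) query over the traces table; the search term and time window are placeholders:

```kusto
traces
| where timestamp > ago(24h)
| where message contains "timeout"   // placeholder search term
| order by timestamp desc
| take 50
```

Paste a query like this into the Log Analytics editor instead of starting from the request-focused example packs.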

SecurityTokenException: No token validator was found for the given token.

I have an ASP.NET Core application that uses WS-Federation as its authentication protocol. The application authenticates through our internal ADFS server where a corresponding Relying Party is configured. When attempting to authenticate, the ASP.NET Core application returns the following error message: SecurityTokenException: No token validator was found for the given token. Here is the full error page: I had a look at the application configuration, but everything looked fine there: The issue turned out to be related to the Relying Party configuration in ADFS. I had enabled token encryption there, but this is not supported by the WS-Federation middleware in ASP.NET Core. Here is how to fix it: Go to your ADFS server. Open ADFS Management. Go to Relying Party Trusts and click on the Relying Party you want to configure. Go to the Encryption tab and click Remove to delete the existing certificate.

Performance matters–TryGetNonEnumeratedCount in C# 10

While writing some code to filter a collection using System.Linq, I noticed a new method that I hadn't seen before: TryGetNonEnumeratedCount. This is a new method introduced in .NET 6. When you use it, it will first check if it can provide the count without iterating over the collection. Why can this be useful? Let’s find out together… Counting operations on IEnumerable<T> The typical way to count the number of elements of an IEnumerable<T> is through the Count() method. This Count() method already does some optimizations. It will first check if the IEnumerable<T> can be cast to an ICollection<T>. If that is the case, the count can be returned immediately and we don’t need to iterate over all the elements. If not, Count() will have to loop through all the elements, which can be an expensive operation and a performance issue, especially when the list of elements is long. If you want to avoid this performance penalty, you can use the TryGetNonEnumeratedCount method…
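A small illustration of the difference (requires .NET 6 or later):

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

List<int> list = new() { 1, 2, 3 };
IEnumerable<int> filtered = list.Where(i => i > 1);

// A List<T> implements ICollection<T>, so the count is available without enumerating
Console.WriteLine(list.TryGetNonEnumeratedCount(out int listCount)); // True
Console.WriteLine(listCount);                                        // 3

// The Where() result is a lazy iterator: no count is available without enumerating
Console.WriteLine(filtered.TryGetNonEnumeratedCount(out _));         // False
```

When the method returns false, you can decide whether the full enumeration (and its cost) is really needed.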

Which w3wp.exe process is which Application Pool?

Last week one instance in our RabbitMQ cluster crashed. This instance was part of a web farm that also hosted some internal web services in IIS. After investigating the logs from the moment of the crash, we discovered that the server ran out of memory, which brought the RabbitMQ instance down (together with some other services). In the Resource Monitor we noticed that some of the application pools (you find them as instances of W3WP.exe) were consuming a lot of memory. The Resource Monitor only shows the PID (Process ID). So the question is: how can we find the related Application Pool? Let’s walk through the steps: Open the Task Manager or Resource Monitor. Write down the PID of the W3WP.exe process you want to find. Open a command prompt and go to the following directory: c:\windows\system32\inetsrv. There, execute the following command: appcmd list wp. This will list all application pools with their PID. Now you can look for the corresponding Application Pool…

Check if a port on a remote server is open

After configuring our Elastic APM server, we tried to send some OpenTelemetry data to it. Unfortunately no data seemed to arrive on the target server, so time to put on our debugging head and find out what was going wrong… The first thing we did was check if there was indeed an application listening on the Linux server where we installed the APM server. This can be done using a PowerShell cmdlet called Test-NetConnection. From our Windows jump server, we opened a PowerShell terminal. There we typed the following command: tnc <server> -Port <portnumber> If the connection succeeds, the TcpTestSucceeded value will return true.
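Written out in full, the check could look like this; the server name is a placeholder and 8200 is the default Elastic APM server port:

```powershell
# tnc is an alias for Test-NetConnection
Test-NetConnection -ComputerName apm.example.local -Port 8200 |
    Select-Object ComputerName, RemotePort, TcpTestSucceeded
```

A TcpTestSucceeded value of False points to a firewall rule, a wrong port, or the APM server simply not listening.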

Application Insights– Configure Log Level

I had configured my ASP.NET Core application to use Application Insights for logging. However, when I took a look at the logged messages inside Application Insights, nothing was logged. What did I do wrong? The only code I added to enable logging was this: But I couldn’t see what could be wrong in this one line… I searched through the documentation to find a solution. There I noticed the following sentence: By default, ASP.NET Core applications have an Application Insights logging provider registered when they're configured through the code or codeless approach. The registered provider is configured to automatically capture log events with a severity of LogLevel.Warning or greater. Aha! By default the ApplicationInsightsLoggerProvider will only log messages with LogLevel Warning or higher. Let’s change this to also log informational messages. There are 2 ways to do this: through configuration or through code. Change LogLevel through configuration
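Changing the level through configuration is done in appsettings.json; the Application Insights provider reads its own LogLevel section alongside the general one:

```json
{
  "Logging": {
    "LogLevel": {
      "Default": "Information"
    },
    "ApplicationInsights": {
      "LogLevel": {
        "Default": "Information"
      }
    }
  }
}
```

Without the provider-specific ApplicationInsights section, the provider keeps its own Warning default regardless of the general LogLevel setting.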

ASP.NET Core–Use a Worker Service for scheduled batch processing

In .NET Core you can use Worker Services to create long-running services. There are numerous reasons for creating long-running services, such as: processing CPU-intensive data, queuing work items in the background, or performing a time-based operation on a schedule. Through the Worker Service you can create cross-platform background services that can run as a Windows Service on Windows or a systemd daemon on Linux. You can create a worker service through: dotnet new worker or through the Visual Studio IDE: Use a Worker Service for scheduled batch processing At one of my clients, we typically didn’t use Worker Services for batch processing. Instead we used console applications together with the Windows Task Scheduler. The advantage of this approach is that we didn’t need to write any scheduling logic and could use all the features and monitoring available through the built-in Task Scheduler. However, a disadvantage of this approach is that we couldn’t take advantage…
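A sketch of a scheduled batch job as a Worker Service using PeriodicTimer (.NET 6+); the one-hour interval and the batch method are placeholders:

```csharp
using Microsoft.Extensions.Hosting;

public class BatchWorker : BackgroundService
{
    protected override async Task ExecuteAsync(CancellationToken stoppingToken)
    {
        // Placeholder schedule: run the batch once every hour
        using var timer = new PeriodicTimer(TimeSpan.FromHours(1));

        while (await timer.WaitForNextTickAsync(stoppingToken))
        {
            await RunBatchAsync(stoppingToken); // hypothetical batch logic
        }
    }

    private Task RunBatchAsync(CancellationToken token) => Task.CompletedTask;
}
```

The worker is registered with `builder.Services.AddHostedService<BatchWorker>();` and graceful shutdown comes for free through the cancellation token.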

Application Insights: TelemetryChannel found a telemetry item without an InstrumentationKey.

When reviewing some log messages inside Application Insights, I noticed a lot of messages like this: AI: TelemetryChannel found a telemetry item without an InstrumentationKey. This is a required field and must be set in either your config file or at application startup. I had an ApplicationInsights section inside my appsettings.json where I provided an instrumentation key, so that could not be the issue (and the trace messages arrived in the correct Application Insights resource). This turned out to be a bug in the Application Insights version I was using (2.15.0). Upgrading to the latest version (2.20.0 at the moment of writing) solved the problem. Up to the next issue!

ASP.NET Core - Disable web.config transformations

When deploying your ASP.NET Core application through Web Deploy, the web.config transformations available in your project will be executed. This is a handy feature that allows you to override your default configuration inside your web.config. I had a situation where I didn’t want this transformation to happen. This IS possible, but it needs to be configured through an option that is really hard to find: Hope it can help someone else…
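For reference, the MSBuild property documented for this purpose is IsWebConfigTransformDisabled; a sketch of setting it in the project file:

```xml
<PropertyGroup>
  <IsWebConfigTransformDisabled>true</IsWebConfigTransformDisabled>
</PropertyGroup>
```

It can also be passed on the command line, e.g. `dotnet publish /p:IsWebConfigTransformDisabled=true`.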

AsyncApi–Share your message contracts in a language agnostic manner–Part 2

Yesterday I introduced the AsyncAPI specification as a way to share message contracts in a language-neutral way. Let’s continue today by looking at how we can generate a C# message contract based on your AsyncAPI specification file. Here is the example specification file again that I shared yesterday: To transform this specification file to a data contract, we can use the AsyncAPI generator. The generator uses templates to specify what must be generated. There is a list of official generator templates; unfortunately C# is not part of this list. Thanks to the community, a C#-compatible template exists. Let’s try that one! Install the generator First install the generator through NPM: npm install -g @asyncapi/generator You can optionally pre-install the quicktype template: npm install -g @lagoni/asyncapi-quicktype-template Generate C# message contracts Now we can generate the C# message contracts…
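With both packages installed, generating the contracts is a single command; the specification file name and output folder below are placeholders:

```shell
# asyncapi.yaml is your specification file; ./output receives the generated C# files
ag asyncapi.yaml @lagoni/asyncapi-quicktype-template -o ./output
```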

AsyncApi–Share your message contracts in a language agnostic manner

As most of the systems I’m building are .NET based, I typically use NuGet packages (published on an internal NuGet repository like Azure Artifacts or MyGet) to share my message contracts between different parts of the system. The main disadvantage of this approach is that it creates platform coupling, and it is not a good solution if you are in a polyglot environment using different platforms and programming languages. What if we could write our message contracts in a language-neutral way? That is exactly what AsyncAPI has to offer. From the AsyncAPI website: AsyncAPI is an open source initiative that seeks to improve the current state of Event-Driven Architectures (EDA). Our long-term goal is to make working with EDAs as easy as it is to work with REST APIs. That goes from documentation to code generation, from discovery to event management. Most of the processes you apply to your REST APIs nowadays would be applicable to your event-driven/asynchronous APIs too. To make…
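To give an idea of what such a contract looks like, here is a minimal, hypothetical AsyncAPI 2.x document describing a single message (all names and fields are illustrative):

```yaml
asyncapi: '2.3.0'
info:
  title: Order Service
  version: '1.0.0'
channels:
  order/created:
    subscribe:
      message:
        name: OrderCreated
        payload:
          type: object
          properties:
            orderId:
              type: string
            amount:
              type: number
```

Because the payload is plain JSON Schema, any platform can generate a matching type from it instead of depending on a shared NuGet package.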

How to not learn GraphQL

If you are a regular reader of my blog, you know that I’m a big fan of GraphQL. Unfortunately there is still a lot of misunderstanding about what this technology exactly is and what problems it helps to tackle. So if you meet someone who is new to GraphQL or has some misconceptions about it, send them a link to the following blog post. In this post Charly Poly covers the following mistakes: Mistake #1: GraphQL is a front-end technology Mistake #2: Design GraphQL APIs like REST APIs Mistake #3: Learning GraphQL only through Apollo Mistake #4: Throwing errors from resolvers Mistake #5: "Federation is the only viable way to compose schemas"

GitHub actions–Run vs Use

I recently added a GitHub Actions workflow to my Extensions.Caching.PostgreSQL open-source project. Here is the workflow I’m using: Something that I found confusing when using GitHub Actions was that some steps use the uses syntax, while others use the run syntax. Let’s investigate the difference between the two. Uses uses is used for actions that run as part of a step in your workflow. Some actions require inputs that must be set with the with keyword: Actions can be either JavaScript files or Docker containers. More information: Run run is used for running command-line programs in a command-line shell. You specify the full command including its arguments: More information:
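Both keywords side by side in a workflow fragment (the action versions and build command are illustrative):

```yaml
steps:
  # 'uses' runs a packaged action; 'with' supplies its inputs
  - uses: actions/checkout@v3
  - uses: actions/setup-dotnet@v3
    with:
      dotnet-version: '6.0.x'
  # 'run' executes a command in the runner's shell
  - run: dotnet test --configuration Release
```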

ASP.NET Core - Branch the ASP.NET Core request pipeline–Map vs MapWhen

Yesterday I blogged about the usage of the MapWhen() method to branch the request pipeline in ASP.NET Core. What I didn’t mention is that in a first attempt I used the Map() method instead of the MapWhen() method. Map() will branch the request based on the specified request path, whereas MapWhen() allows you to specify a predicate, giving you more options on what criteria should be used for branching. Let’s have a second look at the MapWhen() implementation I was using: You would think I could replace this with the following Map() alternative: This turns out not to be the case. Apart from the predicate logic, another difference between Map() and MapWhen() is that Map() will add MapMiddleware to the pipeline while MapWhen() will add MapWhenMiddleware to the pipeline. An important difference between these two middlewares is that Map() will update Request.Path and Request.PathBase to account for branching based on the path (trimming the matched path segment off Request.Path…
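The path-trimming behaviour of Map() can be seen with a small sketch; '/branch' is a placeholder path:

```csharp
var app = WebApplication.CreateBuilder(args).Build();

app.Map("/branch", branch =>
{
    branch.Run(async context =>
    {
        // Inside a Map() branch the matched segment has moved to PathBase,
        // so for a request to /branch/foo: PathBase = /branch, Path = /foo
        await context.Response.WriteAsync(
            $"PathBase={context.Request.PathBase}, Path={context.Request.Path}");
    });
});

app.Run();
```

MapWhen() performs no such rewriting, which matters if middleware inside the branch relies on the original Request.Path.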

ASP.NET Core - Branch the ASP.NET Core request pipeline

I had a use case where I needed to create a branch in the ASP.NET Core request pipeline. For most requests the default pipeline should run, but for all requests going to a specific URI different middleware should be executed. This was the original request pipeline: And here was my first attempt to create a branch in the request pipeline: I used the UseWhen method, but this didn’t work as expected. Although the middleware specified in the branch was indeed invoked, all the other middleware defined later in the pipeline was called as well. This turns out to be expected: when using UseWhen, the branch is rejoined to the main pipeline if it doesn’t short-circuit or contain a terminal middleware. I switched to the MapWhen method and this did the trick: More information:
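The difference in a nutshell, with a placeholder predicate and path:

```csharp
var app = WebApplication.CreateBuilder(args).Build();

// UseWhen: the branch rejoins the main pipeline unless it short-circuits
app.UseWhen(
    context => context.Request.Path.StartsWithSegments("/special"),
    branch => branch.Use(async (context, next) =>
    {
        context.Response.Headers["X-Branch"] = "special"; // hypothetical marker
        await next(); // continues into the middleware registered after UseWhen
    }));

// MapWhen: the branch is terminal and never rejoins the main pipeline
app.MapWhen(
    context => context.Request.Path.StartsWithSegments("/special"),
    branch => branch.Run(context => context.Response.WriteAsync("Handled by branch")));

app.Run();
```

So UseWhen suits "extra middleware for some requests", while MapWhen suits "a completely separate pipeline for some requests".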