
Showing posts from July, 2022

Run a GraphQL backend as an Azure Function

I typically run my GraphQL backends as an Azure App Service or an Azure Container App, but today I had to create a really small API. I was wondering: could I use an Azure Function to host this API? Let's find out in this post.

Creating an Azure Function App

Let's start by creating our Azure Function App. I'll use Visual Studio in this example, but you could just as well use Visual Studio Code. Open Visual Studio and choose Create a new project. In Create a new project, enter functions in the search box, choose the Azure Functions template, and then select Next. In Configure your new project, specify a project name and click Next. In Additional information, set the following values:

Functions worker: .NET 6 (there is no reason to run this as an isolated worker)
Function: HTTP Trigger
Authorization level: Anonymous

Click on Create to create our Function app and the HTTP Trigger function. We'll update our
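The wizard scaffolds an HTTP-triggered function; here is a minimal in-process .NET 6 sketch of what that looks like (the function name, route, and placeholder body are my assumptions, not the code from this post):

using Microsoft.AspNetCore.Http;
using Microsoft.AspNetCore.Mvc;
using Microsoft.Azure.WebJobs;
using Microsoft.Azure.WebJobs.Extensions.Http;

public static class GraphQLFunction
{
    // Anonymous HTTP trigger: the endpoint a GraphQL server could be mounted on.
    [FunctionName("graphql")]
    public static IActionResult Run(
        [HttpTrigger(AuthorizationLevel.Anonymous, "get", "post", Route = "graphql")] HttpRequest req)
    {
        // Placeholder body; a GraphQL execution engine would handle the request here.
        return new OkObjectResult("GraphQL endpoint placeholder");
    }
}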

dotnet format issue - FileNotFoundException: The file '' does not appear to be a valid project or solution file.

I’ve blogged previously about ‘dotnet format’ and how it can help you to keep your code formatting and style consistent. In one of those posts I shared the following example as a way to apply a specific code formatting rule (file-scoped namespaces in this example):

dotnet tool update --global dotnet-format
dotnet format Example.sln --severity info --diagnostics=IDE0161

However, when I recently tried to execute the example above, it no longer worked?! Instead I got the following error message:

Unhandled exception: System.IO.FileNotFoundException: The file 'IDE0161' does not appear to be a valid project or solution file.
   at Microsoft.CodeAnalysis.Tools.Workspaces.MSBuildWorkspaceFinder.FindFile(String workspacePath)
   at Microsoft.CodeAnalysis.Tools.Workspaces.MSBuildWorkspaceFinder.FindWorkspace(String searchDirectory, String workspacePath)
   at Microsoft.CodeAnalysis.Tools.FormatCommandCommon.ParseWorkspaceOptions(ParseResult parseResu
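The excerpt cuts off before the post's actual fix. For reference, the dotnet format that ships with the .NET 6 SDK moved the style and analyzer options under subcommands, so a roughly equivalent invocation there looks like this (a sketch, not necessarily the fix the post arrives at):

dotnet format style Example.sln --severity info --diagnostics IDE0161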

InvalidOperationException: No authenticationScheme was specified, and there was no DefaultChallengeScheme found.

Sometimes you have one of those days where you stumble from one issue into another. This was (unfortunately) one of those days. The story started when I tried to run an ASP.NET Core project I had to review. (Always my first check: can I follow the instructions in the readme.md file and get the source code up and running easily on my local machine?) But this day, no luck, as ASP.NET Core decided to give me the following error message:

InvalidOperationException: No authenticationScheme was specified, and there was no DefaultChallengeScheme found.

Typically this indicates that no authentication is configured, but a look at the Program.cs file didn't bring me closer to the solution. Everything looked okay. Then I noticed the following in Visual Studio: I was running the application through Kestrel. Could that be the reason? I switched to IIS Express and ran the application again. Now I was able to start the application successfully.

Enabling Windows Authentication for Kest
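The excerpt ends mid-heading (presumably 'Enabling Windows Authentication for Kestrel'), and Kestrel indeed does not get Windows Authentication for free the way IIS Express does. A minimal sketch of the usual Negotiate setup from the Microsoft.AspNetCore.Authentication.Negotiate package (not necessarily the exact code from the post):

using Microsoft.AspNetCore.Authentication.Negotiate;

var builder = WebApplication.CreateBuilder(args);

// Kestrel has no built-in Windows Authentication, so the Negotiate
// handler must be registered explicitly.
builder.Services
    .AddAuthentication(NegotiateDefaults.AuthenticationScheme)
    .AddNegotiate();

builder.Services.AddAuthorization(options =>
{
    // Require authenticated users by default.
    options.FallbackPolicy = options.DefaultPolicy;
});

var app = builder.Build();
app.UseAuthentication();
app.UseAuthorization();
app.Run();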

Azure DevOps - Use variable in workspace mapping

Today I was having a look at a build pipeline of one of my clients. On this specific project they are still using Team Foundation Version Control (TFVC), which is still supported in Azure DevOps. If you don't remember TFVC (or never used it before), one of the differences between TFVC and Git is the way it handles branches. In Git a branch is a pointer to a commit, whereas in TFVC a branch is visualized as a separate folder in your source control tree. In this client's source tree we have a Main branch and a branch per release in the Release folder. You can map this folder structure to your local file system through a workspace mapping. This allows you to have multiple branches available and active at the same time on your local machine. When using TFVC in your build pipeline, you also need to configure a workspace mapping to specify which folders should be downloaded and mapped on the file system of the build server. Notice that I'm hardcoding the Release number (2.5) in t
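The excerpt cuts off right at the hardcoded release number, but the title gives away the idea: replace the hardcoded part of the server path with a pipeline variable. A hypothetical mapping (the project name and variable name are invented for illustration) would go from

$/ExampleProject/Release/2.5  ->  $(build.sourcesDirectory)

to

$/ExampleProject/Release/$(ReleaseNumber)  ->  $(build.sourcesDirectory)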

ElasticSearch - Invalid NEST response built from a successful (200) low level call on POST

After upgrading our ElasticSearch cluster, I restarted our batch processing service. This service gets a data offload every night from an external party and we'll use this data to build up our search index.

The BulkAllObservable helper

As a lot of documents have to be indexed, I use the BulkAllObservable helper to optimize the process. This helper is part of the ElasticSearch.NET client and takes care of all the complex retry, backoff and chunking logic. It gives you full control over how the underlying Bulk API should be called. This code worked perfectly before. Unfortunately, after upgrading to Elastic 8.3, that was no longer the case.

Enable compatibility mode

I saw the following error message in the logs:

BulkAll halted after receiving failures that can not be retried from _bulk
  at Nest.BulkAllObservable`1.HandleDroppedDocuments(List`1 droppedDocuments, BulkResponse response)
  at Nest.BulkAllObservable`1.<BulkAsync>d__20.MoveNext()
--- End of stack trace
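The 'Enable compatibility mode' heading gives away where this is going: the 7.17 NEST client can talk to an Elasticsearch 8 cluster when compatibility mode is enabled on the connection settings. A minimal sketch (the URI and index name are placeholders):

using System;
using Nest;

var settings = new ConnectionSettings(new Uri("http://localhost:9200"))
    // Send compatibility headers so the 7.x client works against an 8.x cluster.
    .EnableApiVersioningHeader()
    .DefaultIndex("documents");

var client = new ElasticClient(settings);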

.NET Core Dependency Injection–Check if a service is already registered

It is way too hot today to write a long post, so I'll focus on a small but useful new feature in .NET 6. As I mentioned before, in most of my projects I'm using Autofac as the Inversion of Control container of my choice. One of the features that Autofac has to offer is the IsRegistered method, which allows you to check if a specific type is already registered in the IoC container. Starting from .NET 6, you can do a similar thing with the Microsoft DI container using the IServiceProviderIsService interface: you first need to resolve the interface from the container, and then you can call the IsService() method to see if the service can be resolved through the container or not. That's it! Time to cool down (literally)…
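The post's code sample didn't survive the excerpt; here is a minimal sketch of the idea (the service types are invented for illustration):

using System;
using Microsoft.Extensions.DependencyInjection;

var services = new ServiceCollection();
services.AddTransient<IMyService, MyService>();

var provider = services.BuildServiceProvider();

// First resolve the IServiceProviderIsService interface from the container...
var query = provider.GetRequiredService<IServiceProviderIsService>();

// ...then ask whether a type can be resolved, without actually resolving it.
Console.WriteLine(query.IsService(typeof(IMyService)));      // True
Console.WriteLine(query.IsService(typeof(IOtherService)));   // False

public interface IMyService { }
public class MyService : IMyService { }
public interface IOtherService { }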

.NET Core Dependency Injection–Using multiple implementations of the same interface

On most of my projects I'm using Autofac as the Inversion of Control container of my choice. However, this introduces an extra dependency and it comes with a (albeit short) learning curve. So on a newer project, I decided to give the built-in IoC container in .NET Core another try. The standard DI stuff worked as expected; however, there was one situation where I had to investigate a little further. Let me explain...

Registering multiple implementations for the same interface

Here is my use case: in my application I'm using Serilog. One of the features that Serilog has to offer is the concept of an enricher. This allows you to enrich your log messages with extra information in an easy way. As some of my enrichers required dependencies, I wanted to register those enrichers in my IoC container. When using Autofac I would have registered them through its container builder; doing the same thing with the Microsoft DI container is not that different (a sketch follows below). The AddTransient() method will not override
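The excerpt's code samples are missing; here is a minimal sketch of the Microsoft DI side (the enricher interface and classes are stand-ins for Serilog's ILogEventEnricher implementations):

using System;
using System.Collections.Generic;
using Microsoft.Extensions.DependencyInjection;

var services = new ServiceCollection();

// Registering the same interface twice keeps BOTH registrations.
services.AddTransient<ILogEnricher, MachineNameEnricher>();
services.AddTransient<ILogEnricher, UserContextEnricher>();

var provider = services.BuildServiceProvider();

// GetServices<T> returns every registered implementation, in registration order.
IEnumerable<ILogEnricher> enrichers = provider.GetServices<ILogEnricher>();

// GetService<T> returns the LAST registration when several exist.
ILogEnricher? last = provider.GetService<ILogEnricher>();

// Stand-ins for real Serilog enrichers with their own dependencies.
public interface ILogEnricher { }
public class MachineNameEnricher : ILogEnricher { }
public class UserContextEnricher : ILogEnricher { }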

Add approvals to your Azure Pipelines yaml

I'm working on converting an Azure DevOps Release pipeline to a YAML pipeline. In my original Release pipeline I had multiple environments with approvals between every environment. In the 'classic' Release pipeline, this can easily be done through the 'Pre-deployment conditions'. Let's find out how we can achieve the same thing through a YAML pipeline.

The manual validation task

Although the second solution I'll show you is the better way in my opinion, I first want to mention the manual validation task. This task allows you to pause the pipeline run and wait for manual approval. The waitForValidation job pauses the run and triggers a prompt within the Pipeline UI to review and validate the task. The email addresses listed in notifyUsers receive a notification to approve or deny the pipeline run. More information: https://docs.microsoft.com/en-us/azure/devops/pipelines/release/deploy-using-approvals?view=azure-devops#set-up-manual-validatio
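The post's own YAML didn't survive the excerpt; here is a minimal sketch of such a job, following the ManualValidation task from the Microsoft docs (the email address, instructions, and timeouts are placeholders):

jobs:
- job: waitForValidation
  displayName: Wait for manual approval
  pool: server              # agentless job; ManualValidation only runs on server jobs
  timeoutInMinutes: 4320    # job times out after 3 days
  steps:
  - task: ManualValidation@0
    timeoutInMinutes: 1440  # task times out after 1 day
    inputs:
      notifyUsers: |
        alice@example.com
      instructions: 'Please validate the deployment and resume'
      onTimeout: 'reject'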

Set the hosting environment for a .NET Core console application

ASP.NET Core has the built-in concept of an environment. This allows you to change the behavior of your application based on the detected environment. A typical example where we use this is to load different configuration values. In your application you can access this value through the EnvironmentName property on the IHostEnvironment interface.

The Launch Profiles UI

If you want to change this environment value in Visual Studio while debugging, you can do this through the Launch Profiles UI. To launch this window, you have 2 options:

Right-click on your project, click on Properties, scroll down to the Debug section and click on the Open debug launch profiles UI link.
Or: click on the dropdown arrow next to the run icon and select <Profilename> Debug Properties from the dropdown menu.

This will open the Launch Profiles UI. Here you can update the ASPNETCORE_ENVIRONMENT environment variable to update the environment. Behind the scene
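The truncated heading hints at what happens behind the scenes: the UI simply edits the selected profile in Properties/launchSettings.json. A hypothetical profile for a console application (note that the generic host reads DOTNET_ENVIRONMENT rather than ASPNETCORE_ENVIRONMENT; the profile name and values are illustrative):

{
  "profiles": {
    "MyConsoleApp": {
      "commandName": "Project",
      "environmentVariables": {
        "DOTNET_ENVIRONMENT": "Development"
      }
    }
  }
}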

MassTransit - Message requeued for long running tasks in RabbitMQ

I recently upgraded the (development) RabbitMQ cluster of one of my clients to RabbitMQ 3.9. The upgrade went smoothly and none of the development teams mentioned any issues. So I was happily preparing for the production upgrade. A few weeks later I was contacted by one of the team leads who was investigating a specific issue he had in one of his applications; he was using a message published to RabbitMQ to trigger a long-running task (a batch job). This message was picked up by a Windows Service that uses a MassTransit consumer to execute this long-running task. The strange thing was that the task sometimes failed. The normal behavior in MassTransit is that this message would end up in the error queue (maybe after a few retries). However, this didn't happen and the message was put back on the queue. What was going on? I started by having a look at the error logs and noticed a message like this:

"Message ACK failed: 258", "The channel was closed: AMQP close
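The excerpt stops mid-log entry, but the RabbitMQ 3.9 context is suggestive: since 3.8.15, RabbitMQ enforces a delivery acknowledgement timeout (30 minutes by default in recent versions) and closes the channel when a consumer takes longer than that to ack, which matches the symptoms above. If that is indeed the culprit, the limit can be raised in rabbitmq.conf (value in milliseconds; the number below is just an example):

# allow consumers up to 2 hours to acknowledge a delivery
consumer_timeout = 7200000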

Azure Pipelines - Failed to download task 'Download'

I’m working on converting an Azure DevOps Release pipeline to a YAML pipeline. As a first step in my pipeline I want to download the build artifacts so I can use them in my release pipeline. However, using this task resulted in an error message. Let's find out what was causing this…

You can download an artifact in your pipeline through the ‘Download Pipeline Artifacts’ task. You can use this task as a step in your pipeline, or you can use the shorthand ‘download’ alias. Executing this pipeline resulted in the following error message:

Failed to download task 'Download'. Error No task definition found matching ID 30f35852-3f7e-4c0c-9a88-e127b4f97211 and version 1.0.0. You must register the task definition before uploading the package.

This error message is quite misleading and would make you think that there is something wrong with downloading the ‘Download’ task (which turned out not to be the issue). Let’s take a closer look at our YAML pipeline. We have
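The pipeline YAML itself didn't survive the excerpt; the shorthand alias mentioned above looks roughly like this (the artifact name is a placeholder):

steps:
- download: current   # download artifacts published by the current pipeline run
  artifact: drop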

The test of silence

You have worked really hard on a new feature; a UI component, a library, an API, ... Then the moment is finally there: you will put this feature in the hands of your users for the first time. You enter the meeting room where Alice, your key user, is waiting full of excitement. You put her in front of your laptop, open the browser and go to the page showing this new feature. And then... silence. Alice is just staring at the screen and does nothing.

Now there are 2 things you can do: you can do nothing and give Alice the time to figure it out herself, or you can start to explain the feature, give Alice some clues on how to use it, or even take over the laptop and start demonstrating it yourself. We are typically tempted to do the second thing and start to guide our user when we see him or her struggle. But the moment you do that, you lose an important opportunity to get insightful information on how a user will perceive and experience your work.

Learn from your users' mistakes

S

Serializing DateOnly and TimeOnly types with System.Text.Json

.NET 6 introduced 2 new types to work with dates and times in .NET. One is the DateOnly type, which represents the date portion of a DateTime. The other is the TimeOnly type, which represents the time portion of a DateTime. Unfortunately, when you try to use these types inside your data contracts, you get into trouble when you try to serialize them through the System.Text.Json.JsonSerializer. To solve this problem, we can create 2 custom JsonConverter types (a sketch follows below). To use these converters, we need to create and pass an instance of JsonSerializerOptions where we register these 2 converters. Hope that helps!
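The post's converters didn't survive the excerpt; here is a minimal sketch of the same idea (the format strings and sample output are my assumptions, not necessarily the post's exact code):

using System;
using System.Globalization;
using System.Text.Json;
using System.Text.Json.Serialization;

public sealed class DateOnlyJsonConverter : JsonConverter<DateOnly>
{
    private const string Format = "yyyy-MM-dd";

    public override DateOnly Read(ref Utf8JsonReader reader, Type typeToConvert, JsonSerializerOptions options)
        => DateOnly.ParseExact(reader.GetString()!, Format, CultureInfo.InvariantCulture);

    public override void Write(Utf8JsonWriter writer, DateOnly value, JsonSerializerOptions options)
        => writer.WriteStringValue(value.ToString(Format, CultureInfo.InvariantCulture));
}

public sealed class TimeOnlyJsonConverter : JsonConverter<TimeOnly>
{
    private const string Format = "HH:mm:ss";

    public override TimeOnly Read(ref Utf8JsonReader reader, Type typeToConvert, JsonSerializerOptions options)
        => TimeOnly.ParseExact(reader.GetString()!, Format, CultureInfo.InvariantCulture);

    public override void Write(Utf8JsonWriter writer, TimeOnly value, JsonSerializerOptions options)
        => writer.WriteStringValue(value.ToString(Format, CultureInfo.InvariantCulture));
}

Registering and using them:

var options = new JsonSerializerOptions();
options.Converters.Add(new DateOnlyJsonConverter());
options.Converters.Add(new TimeOnlyJsonConverter());

var json = JsonSerializer.Serialize(
    new { Date = new DateOnly(2022, 7, 1), Time = new TimeOnly(9, 30) }, options);
// {"Date":"2022-07-01","Time":"09:30:00"}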

Error CS0534: '' does not implement inherited abstract member 'JsonSerializerContext.GeneratedSerializerOptions.get'

After introducing the System.Text.Json source generator in one of my projects, I happily moved on to another task, until I got a 'build failed' message from the CI build. A look at the build logs showed the following error messages:

##[error]Example.Services\Infrastructure\JsonContext.cs(13,28): Error CS0534: 'JsonContext' does not implement inherited abstract member 'JsonSerializerContext.GeneratedSerializerOptions.get'
D:\b\3\_work\90\s\Example.Services\Infrastructure\JsonContext.cs(13,28): error CS0534: 'JsonContext' does not implement inherited abstract member 'JsonSerializerContext.GeneratedSerializerOptions.get' [D:\b\3\_work\90\s\Example.Services\Example.Services.csproj]
##[error]Example.Services\Infrastructure\JsonContext.cs(13,28): Error CS0534: 'JsonContext' does not implement inherited abstract member 'JsonSerializerContext.GetTypeInfo(Type)'
D:\b\3\_work\90\s\Example.Services\Infrastructure\JsonContext.cs(13,28): er
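The excerpt ends before the diagnosis, but the shape of the error is telling: JsonContext is a partial class whose other half is produced by the source generator, so if the generator doesn't run (for example because of an older compiler or SDK on the build server), the abstract members of JsonSerializerContext stay unimplemented and CS0534 is reported. A minimal example of the kind of declaration involved (the serialized type is a placeholder):

using System.Text.Json.Serialization;

// The source generator emits the implementation half of this partial class
// at compile time; without it, CS0534 errors like the ones above appear.
[JsonSerializable(typeof(OrderDto))]
public partial class JsonContext : JsonSerializerContext
{
}

public class OrderDto
{
    public int Id { get; set; }
}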

warning SYSLIB1037: The type '' defines init-only properties, deserialization of which is currently not supported in source generation mode.

2 days ago I introduced the System.Text.Json source generator and showed you how to use it in ASP.NET Core. What I didn't tell you is that it didn't go so smoothly. If you had a good look at the generated code I shared, you may have noticed that the generated setter on the property will not work and instead throws an InvalidOperationException. This also appears as a warning during the build process:

CSC : warning SYSLIB1037: The type 'OrderDto' defines init-only properties, deserialization of which is currently not supported in source generation mode.

More information about this warning: https://docs.microsoft.com/en-us/dotnet/fundamentals/syslib-diagnostics/syslib1037

The reason is that my OrderDto is a record type using init-only properties. There is not much we can do to fix this other than switching to mutable properties. I searched further on the web about this issue and found the following related GitHub issues: https://githu
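A before/after sketch of the change (the record's members are invented for illustration; the two declarations are alternatives, not meant to compile together):

// Before: init-only properties trigger SYSLIB1037 in source generation mode (.NET 6).
public record OrderDto
{
    public int Id { get; init; }
    public string? Customer { get; init; }
}

// After: mutable properties, which the source generator can deserialize.
public record OrderDto
{
    public int Id { get; set; }
    public string? Customer { get; set; }
}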

InvalidOperationException: A 'JsonSerializerOptions' instance associated with a 'JsonSerializerContext' instance cannot be mutated once the context has been instantiated.

Yesterday I introduced the System.Text.Json source generator and showed you how to use it in ASP.NET Core. What I didn't tell you is that it didn't go so smoothly. In yesterday's post I showed the AddJsonOptions() method on the IServiceCollection to specify our JsonContext class on the JsonSerializerOptions. However, this was not my first attempt. The original code I wrote resulted in a runtime exception:

An error occurred while starting the application.
InvalidOperationException: A 'JsonSerializerOptions' instance associated with a 'JsonSerializerContext' instance cannot be mutated once the context has been instantiated.
System.Text.Json.ThrowHelper.ThrowInvalidOperationException_SerializerOptionsImmutable(JsonSerializerContext context)

Did you notice the difference between the 2 code samples? In the first example, I added the custom Json converter BEFORE the source generator context. In the second example, I added the Json conver
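Both code samples are missing from the excerpt; here is a hedged reconstruction of the contrast being described (the converter and context types are borrowed from the neighboring posts, not this one):

// Working order: mutate the options first, associate the context last.
builder.Services.AddControllers().AddJsonOptions(options =>
{
    options.JsonSerializerOptions.Converters.Add(new DateOnlyJsonConverter());
    options.JsonSerializerOptions.AddContext<JsonContext>();
});

// Failing order: AddContext() freezes the options, so adding a converter
// afterwards throws the InvalidOperationException shown above.
builder.Services.AddControllers().AddJsonOptions(options =>
{
    options.JsonSerializerOptions.AddContext<JsonContext>();
    options.JsonSerializerOptions.Converters.Add(new DateOnlyJsonConverter());
});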

Optimize your API performance with the System.Text.Json source generator

.NET 6 ships with a System.Text.Json source generator as a way to improve your API performance. By default, the System.Text.Json serializer uses a lot of reflection behind the scenes. Of course this has a negative impact on startup performance and memory usage, and it is a problem for assembly trimming. With the introduction of the System.Text.Json source generator you get a compile-time alternative that can give your API performance a boost. It introduces the following benefits:

Increased serialization throughput
Reduced start-up time
Reduced private memory usage
Removed runtime use of System.Reflection and System.Reflection.Emit
Trim-compatible serialization, which reduces application size

Let me walk you through the steps to configure this for your application.

Configure the System.Text.Json source generator

Source generators are a little bit magical, so we have to take some steps to get this working. Remark: The source generator is part of the 6.0 release
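As a taste of where the configuration ends up, here is a minimal sketch (the DTO is a placeholder; the post's own steps continue in the full article):

using System.Text.Json;
using System.Text.Json.Serialization;

// Serialize using the compile-time generated metadata (no runtime reflection):
var json = JsonSerializer.Serialize(new OrderDto { Id = 1 }, JsonContext.Default.OrderDto);

// The attribute tells the source generator to emit serialization metadata
// for OrderDto at compile time.
[JsonSerializable(typeof(OrderDto))]
public partial class JsonContext : JsonSerializerContext
{
}

public class OrderDto
{
    public int Id { get; set; }
}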

'SHA512Managed' is obsolete: 'Derived cryptographic types are obsolete. Use the Create method on the base type instead.'

After upgrading a project to .NET 6, the compiler started to complain with the following warning message:

'SHA512Managed' is obsolete: 'Derived cryptographic types are obsolete. Use the Create method on the base type instead.'

The code that caused this warning, together with the fix, is shown below. The fix was easy: I don't know why they made this obsolete, but the code change got rid of the warning message. If you want to learn more, have a look here.
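The post's screenshots didn't survive the excerpt; here is a hedged before/after sketch of the pattern the warning points at (the hashed data is a placeholder):

using System;
using System.Security.Cryptography;
using System.Text;

byte[] data = Encoding.UTF8.GetBytes("example");

// Before (triggers the warning): instantiating the derived type directly.
// using var sha = new SHA512Managed();

// After: use the factory method on the base type instead.
using var sha = SHA512.Create();
byte[] hash = sha.ComputeHash(data);
Console.WriteLine(Convert.ToHexString(hash));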

RabbitMQ–How to remove unacked messages–Part II

Yesterday I showed how to remove unacked messages from a RabbitMQ queue through the Management Portal. Today, let us leave the graphical user interface behind and solve the same problem from the command line.

Through the command line

RabbitMQ has multiple command line tools available in the sbin folder. The one we need is rabbitmqctl.bat. Show queues with unacked messages like this:

rabbitmqctl list_queues name messages_unacknowledged

The output should be something like this:

VLM.eShopExample.Worker-Development 1
VLM.eShopExample.Worker-Production 0

We see that one queue has an unacked message. Let's find the channel and associated connection that is causing the unacked message:

rabbitmqctl list_channels connection messages_unacknowledged

This returns the following output:

<rabbit@SERVER.1650192371.27249.9> 1

Ok, we found the channel tha
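The excerpt stops right after the channel is found. For reference (not necessarily the post's exact next steps), the offending connection can be closed from the same tool, after which the unacked message can be purged from the queue:

rabbitmqctl close_connection "<rabbit@SERVER.1650192371.27249.9>" "cleaning up unacked message"
rabbitmqctl purge_queue VLM.eShopExample.Worker-Development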

RabbitMQ–How to remove unacked messages–Part I

I ran into a situation where a message remained unacknowledged in a RabbitMQ queue. I wanted the related consumer to stop and the message to be removed from the queue. Let me walk you through the steps to get this done.

Through the Management Portal

In this post I'll show you how to do this through the Management Portal; in a later post I'll show you how to do this from the command line.

Open the RabbitMQ Management Portal.
Go to the Queues tab. Notice that in one of my queues I have an 'Unacked' message.
First we need to find the connection that is related to the application consuming the message. Therefore go to the Connections tab.
Click on the correct connection.
Expand the Close this connection section and hit the Force Close button to close the connection and related channels. Click OK when asked for confirmation.
Now go back to the Queues tab.
Click on the queue with the 'Unacked' message.
Expand the Purge section and hit the Purge Messages button.