
Posts

Showing posts from 2024

EF Core - The conversion of a datetime2 data type to a datetime data type resulted in an out-of-range value

Although EF Core is a developer-friendly Object-Relational Mapper (ORM), working with it isn't without its challenges. One error that we encountered during a pair programming session was: The conversion of a datetime2 data type to a datetime data type resulted in an out-of-range value. In this blog post, we will delve into the causes of this error and explore ways to resolve it. "Constructing a database in the 18th century" - Generated by AI Understanding the error This error typically occurs when there is an attempt to convert a datetime2 value in SQL Server to a datetime value, and the value falls outside the valid range for the datetime data type. datetime: This data type in SQL Server has a range from January 1, 1753, to December 31, 9999, with an accuracy of 3.33 milliseconds. datetime2: This newer data type, introduced in SQL Server 2008, has a much broader range from January 1, 0001, to December 31, 9999, with an accuracy of 100 nanoseconds.
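A common trigger for this error is an unset DateTime property, which defaults to DateTime.MinValue (year 0001) and overflows the datetime range. One fix is to map the column explicitly to datetime2. A minimal sketch, assuming a hypothetical Appointment entity (the post's actual model isn't shown in this excerpt):

```csharp
using System;
using Microsoft.EntityFrameworkCore;

public class Appointment
{
    public int Id { get; set; }
    public DateTime ScheduledAt { get; set; }
}

public class AppDbContext : DbContext
{
    public DbSet<Appointment> Appointments => Set<Appointment>();

    protected override void OnModelCreating(ModelBuilder modelBuilder)
    {
        // Map the property to datetime2 explicitly so values outside the
        // datetime range (e.g. before 1753) are accepted by SQL Server.
        modelBuilder.Entity<Appointment>()
            .Property(a => a.ScheduledAt)
            .HasColumnType("datetime2");
    }
}
```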

Automating MassTransit Consumer Registration

When working with MassTransit, registering consumers can become cumbersome if you have many of them. Luckily, MassTransit provides a way to register all your consumers automatically using AddConsumers . This post will guide you through the process of setting up and using AddConsumers to simplify your consumer registration. What is MassTransit? MassTransit is an open-source distributed application framework for .NET, which simplifies the creation and management of message-based systems. It supports various messaging platforms like RabbitMQ, Azure Service Bus, and Amazon SQS. Setting Up MassTransit Before diving into consumer registration, let’s quickly set up a MassTransit project. Step 1: Install MassTransit Packages First, install the necessary MassTransit packages via NuGet. You can use the following command on your favorite command line: dotnet add package MassTransit.RabbitMQ Step 2: Configure MassTransit In your Program.cs or wherever you configure your service
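The setup described above can be sketched as follows; this is a minimal example assuming a RabbitMQ transport, scanning the entry assembly for consumers:

```csharp
using MassTransit;
using Microsoft.Extensions.DependencyInjection;
using Microsoft.Extensions.Hosting;

var builder = Host.CreateApplicationBuilder(args);

builder.Services.AddMassTransit(x =>
{
    // Scans the given assembly and registers every IConsumer implementation,
    // so you don't have to call AddConsumer<T> for each one.
    x.AddConsumers(typeof(Program).Assembly);

    x.UsingRabbitMq((context, cfg) =>
    {
        // Creates a receive endpoint per consumer using the default conventions.
        cfg.ConfigureEndpoints(context);
    });
});

var app = builder.Build();
app.Run();
```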

Don’t talk about non-functional requirements, talk about quality attributes

When discussing software development, terms shape our perception and priorities. One such discussion revolves around the terminology used for requirements that go beyond direct functionalities. Traditionally, when talking about architectural needs, I used to call them non-functional requirements (NFRs). However, I switched recently to a  more fitting term—quality attributes—as it may better emphasize their importance and value.    Generated by AI Let me explain why this more than just a semantic change but a strategic enhancement… What are non-functional requirements? Before I explain my reasoning, let me shortly define what non-functional requirements are: Non-functional requirements (NFRs), also known as quality attributes, refer to criteria that judge the operation of a system rather than specific behaviors or functions. These requirements define the overall qualities and characteristics that the system must possess, ensuring it meets user needs and performs efficiently an

Debug your .NET 8 code more efficiently

.NET 8 introduces a lot of debugging improvements. If you take a look for example at the HttpContext , you see that you get a much better debug summary than in .NET 7: .NET 7: .NET 8: But that is not the feature I want to bring to your attention. After recently updating my Visual Studio version, I noticed the following announcement among the list of new Visual Studio features: That is great news! This means that you can debug your .NET 8 applications without a big performance impact on the rest of your code. The only thing we need to do is to disable the Just My Code option in Visual Studio: If we now try to debug a referenced release binary, only the relevant parts are decompiled without impacting the other code: More information Debugging Enhancements in .NET 8 - .NET Blog (microsoft.com)

EF Core - Error CS1503 Argument 2: cannot convert from 'string' to 'System.FormattableString'

I was pair programming with a team member when she got the following compiler error: Error CS1503 Argument 2: cannot convert from 'string' to 'System.FormattableString' The error appeared after trying to compile the following code: The reason becomes apparent when we look at the signature of the SqlQuery method: As you can see, the method expects a FormattableString , not a string . Why is this? By using a FormattableString EF Core can protect us from SQL injection attacks. When we use this query with parameters (through string interpolation), the supplied parameter values are wrapped in a DbParameter . To get rid of the compiler error, we can do 2 things: 1. We explicitly create a FormattableString from a regular string: 2. We use string interpolation and add a ‘$’ sign in front of the query string: More information SQL Queries - EF Core | Microsoft Learn RelationalDatabaseFacadeExtensions.SqlQuery<TResult> Method (Microsoft.EntityF
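Both fixes can be sketched as follows; the Product result type and query text are illustrative, not taken from the post:

```csharp
using System;
using System.Linq;
using System.Runtime.CompilerServices;
using Microsoft.EntityFrameworkCore;

// Option 1: explicitly create a FormattableString from a regular string
// (only safe when the string contains no user input).
FormattableString query = FormattableStringFactory.Create("SELECT * FROM Products");
var products = context.Database.SqlQuery<Product>(query).ToList();

// Option 2: use string interpolation with the '$' prefix; EF Core wraps the
// interpolated values in DbParameter instances, preventing SQL injection.
var minPrice = 10m;
var filtered = context.Database
    .SqlQuery<Product>($"SELECT * FROM Products WHERE UnitPrice > {minPrice}")
    .ToList();
```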

Git–Dubious ownership

Today I had a strange Git error message I had never seen before. When I tried to execute any action on a local Git repo, it failed with the following error: git status fatal: detected dubious ownership in repository at 'C:/projects/examplerepo' 'C:/projects/examplerepo' is owned by: 'S-1-5-32-544' but the current user is: 'S-1-5-21-1645522239-329068152-682003330-18686' To add an exception for this directory, call: git config --global --add safe.directory C:/projects/examplerepo Image generated by AI What does this error mean and how can we fix it? Let’s find out! This error means that the current user is not the owner of the Git repository folder. Before Git executes an operation, it checks this, and if the check fails it returns the error above. This check exists for security reasons: Git tries to prevent another user from placing files in our Git repo folder. You can check the owner of a directory by executing

EF Core - Query splitting

In EF Core, fetching multiple related entities in one request can result in a cartesian product, as the same data is loaded multiple times, which negatively impacts performance. An example is when I try to load a list of Customers with their ordered products: The resulting query causes a cartesian product, as the customer data is duplicated for every ordered product in the result set. EF Core will generate a warning in this case as it detects that the query will load multiple collections. I talked before about the AsSplitQuery method to solve this. It allows Entity Framework to split each related entity into a separate query per table: However, you can also enable query splitting globally in EF Core and use it as the default. To do so, update your DbContext configuration: More information EF Core–AsSplitQuery() (bartwullems.blogspot.com) Single vs. Split Queries - EF Core | Microsoft Learn
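Both options can be sketched as two fragments; the Customers/Orders navigation names are assumptions based on the example described above:

```csharp
// Per-query split: each Include is sent as a separate SQL query.
var customers = await context.Customers
    .Include(c => c.Orders)
    .ThenInclude(o => o.Products)
    .AsSplitQuery()
    .ToListAsync();

// Global default: configure split-query behavior on the DbContext.
protected override void OnConfiguring(DbContextOptionsBuilder optionsBuilder)
{
    optionsBuilder.UseSqlServer(
        connectionString,
        o => o.UseQuerySplittingBehavior(QuerySplittingBehavior.SplitQuery));
}
```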

Loading aggregates with EF Core

In Domain-Driven Design (DDD), an aggregate is a cluster of domain objects that are treated as a single unit for the purpose of data changes. The aggregate has a root and a boundary: Aggregate Root : This is a single, specific entity that acts as the primary point of interaction. It guarantees the consistency of changes being made within the aggregate by controlling access to its components. The aggregate root enforces all business rules and invariants within the aggregate boundary. Boundary : The boundary defines what is inside the aggregate and what is not. It includes the aggregate root and other entities or value objects that are controlled by the root. Changes to entities or value objects within the boundary must go through the aggregate root to ensure consistency. An example of an aggregate is an Order (which is the Aggregate root) together with OrderItems (entities inside the Aggregate). The primary function of an aggregate is to ensure data consist
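The Order/OrderItems example above can be sketched in code; everything beyond the Order and OrderItem names is an illustrative assumption:

```csharp
using System;
using System.Collections.Generic;

public class Order // Aggregate root
{
    private readonly List<OrderItem> _items = new();

    public Guid Id { get; private set; } = Guid.NewGuid();
    public IReadOnlyCollection<OrderItem> Items => _items.AsReadOnly();

    // All changes go through the root so it can enforce business invariants.
    public void AddItem(string product, int quantity)
    {
        if (quantity <= 0)
            throw new ArgumentOutOfRangeException(nameof(quantity));
        _items.Add(new OrderItem(product, quantity));
    }
}

public class OrderItem // Entity inside the aggregate boundary
{
    internal OrderItem(string product, int quantity)
    {
        Product = product;
        Quantity = quantity;
    }

    public string Product { get; }
    public int Quantity { get; }
}
```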

Entity Framework Core– Avoid losing precision

When mapping a decimal type to a database through an ORM like EF Core, it is important to consider the precision. You don't want to lose data or end up with incorrect values because the maximum number of digits differs between the application and database. If you don’t explicitly configure the store type, EF Core will give you a warning to avoid losing precision. Imagine that we have the following Product class with a corresponding configuration: If we try to use this Product class in our application, we get the following warning: warn: SqlServerEventId.DecimalTypeDefaultWarning[30000] (Microsoft.EntityFrameworkCore.Model.Validation)       No store type was specified for the decimal property 'UnitPrice' on entity type 'Product'. This will cause values to be silently truncated if they do not fit in the default precision and scale. Explicitly specify the SQL server column type that can accommodate all the values in 'OnModelCreating' using 'Has
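One way to resolve the warning, assuming the post's Product class with a UnitPrice property, is to specify the precision explicitly:

```csharp
protected override void OnModelCreating(ModelBuilder modelBuilder)
{
    // Explicitly pick a precision/scale that accommodates all expected values;
    // this is equivalent to HasColumnType("decimal(18,2)") on SQL Server.
    modelBuilder.Entity<Product>()
        .Property(p => p.UnitPrice)
        .HasPrecision(18, 2);
}
```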

CS0012: The type 'System.Object' is defined in an assembly that is not referenced.

After referencing a .NET Standard 2.0 project in a .NET 4.8 ASP.NET MVC project, the project failed at runtime with the following error message: CS0012: The type 'System.Object' is defined in an assembly that is not referenced. You must add a reference to assembly 'netstandard, Version=2.0.0.0, Culture=neutral, PublicKeyToken=cc7b13ffcd2ddd51'. Whoops! Let me explain how I fixed the problem. The solution – Part I I first tried to add the NETStandard.Library NuGet package to the .NET 4.8 project, but that didn’t make the error disappear (although I come back to this in Part II below). So I removed the NuGet package again and instead did the following: I manually edited the csproj file and added the following reference: I also updated the reference and set Copy Local=true After doing that the error disappeared and the application ran successfully on my local machine. Victory… … or not? After committing the updated project, a colleague conta
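The manual csproj edit described above might look like this in an old-style .NET Framework project file; this is a sketch, and the exact shape may differ per machine:

```xml
<!-- Inside the existing <ItemGroup> with the other <Reference> items -->
<Reference Include="netstandard">
  <Private>true</Private> <!-- Copy Local = true -->
</Reference>
```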

Visual Studio–View .NET Counters while debugging

The .NET runtime exposes multiple metrics through the concept of Event Counters. They were originally introduced in .NET Core as a cross-platform alternative to Performance Counters, which only worked on a Windows OS. With the recent introduction of OpenTelemetry support in .NET and the System.Diagnostics.Metrics API, there is a clear path forward. But this doesn’t mean that Event Counters are no longer useful. The tooling and ecosystem around them are evolving to support both Event Counters and System.Diagnostics.Metrics. For example, you can see this in action when using the global dotnet-counters tool. Where I previously always used the dotnet-counters tool to monitor the .NET counters, I recently discovered during a debugging session in Visual Studio that you can directly access this information in the Diagnostics Tools: During a debugging session, go to Diagnostics Tools : Select the .NET Counters option from the Select Tool dropdown if the Counters are not yet enable
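For reference, the command-line alternative mentioned above looks roughly like this; the process name is a placeholder:

```shell
# Install the global tool, then attach to a running process by name
# (--process-id <pid> works as well) and watch the runtime counters.
dotnet tool install --global dotnet-counters
dotnet-counters monitor --name MyApp --counters System.Runtime
```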

.NET Aspire Developers Day is coming up!

If you are part of the .NET community, you certainly have heard about .NET Aspire. It is a new framework/tool/set of patterns from Microsoft that allows you to build observable, cloud-ready distributed applications. Its main goal is to make it easier to build cloud-native applications on top of .NET. Now if you didn’t have the time yet to take a look at .NET Aspire and want to learn what all the fuss is about, I have some good news for you. On July 23, Microsoft will host the .NET Aspire Developers Day. This livestream event will be a full day of sessions with one common goal: To show you how easy it is to harness the power of .NET Aspire, why it’s essential for modern development, and how you can leverage a vibrant community for support and innovation. If you’d like to join, subscribe here for the event. Hope to see you all (virtually) there! In preparation for the event, you can watch the recording of the Let’s Learn .NET Aspire beginner series: More information .NET A

Visual Studio - FastUpToDate warning

While working on updating a (very old) existing .NET application, I noticed the following message in the build output: FastUpToDate: This project has enabled build acceleration, but not all referenced projects produce a reference assembly. Ensure projects producing the following outputs have the 'ProduceReferenceAssembly' MSBuild property set to 'true': 'C:\projects\Example.Data\bin\Debug\netstandard2.0\Example.Data.dll'. See https://aka.ms/vs-build-acceleration for more information. (Example.Business) Build acceleration; that was a topic I had talked about before . It is a feature of Visual Studio that reduces the time required to build projects (as you already could have guessed). Because the mentioned projects were targeting .NET Standard 2.0, some extra work is required to make build acceleration work. Before .NET 5 (including .NET Framework and .NET Standard), you should set ProduceReferenceAssembly to true in order to speed up incremental builds. S
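The property change described above is a one-line addition to the referenced project's csproj:

```xml
<!-- In the referenced .NET Standard 2.0 project's csproj -->
<PropertyGroup>
  <ProduceReferenceAssembly>true</ProduceReferenceAssembly>
</PropertyGroup>
```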

SQL Server–Does a ‘LIKE’ query benefit from having an index?

Last week I was interviewing a possible new colleague for our team. During the conversation we were talking about database optimization techniques and of course indexing was one of the items on the list. While discussing this topic, the candidate made the following statement: An index doesn’t help when using a LIKE statement in your query. This was not in line with my idea. But he was so convinced that I decided to double-check. Hence this blog post… Does a ‘LIKE’ query benefit from having an index? Short answer: YES! The somewhat longer answer: Yes, a LIKE statement can benefit from an index in SQL Server, but its effectiveness depends on how the LIKE pattern is constructed. Let’s explain this with a small example. We created a Products table and an index on the ProductName column. Let’s now try multiple LIKE statement variations: Suffix Wildcard (Efficient Index Usage) This query will benefit from the index because the wildcard is at the end: Prefix Wil
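The two variations can be sketched as follows; the actual search terms are illustrative, assuming the Products table and ProductName index described above:

```sql
-- Suffix wildcard: the optimizer can seek on the ProductName index,
-- because 'Chai%' translates into a range of index keys.
SELECT ProductName FROM Products WHERE ProductName LIKE 'Chai%';

-- Leading wildcard: the predicate can't be turned into a key range,
-- so SQL Server falls back to scanning the index or table.
SELECT ProductName FROM Products WHERE ProductName LIKE '%tea';
```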

Understanding Pure Domain Modelling: Bridging the Gap Between Existing Systems and the Real Domain

Domain modelling plays a crucial role in the way that I design systems to reflect the business's needs and processes. However, in my experience there is often a disconnect between the idealistic view of domain modelling and the practical reality faced by domain experts. One key issue is that domain experts tend to start from their existing systems rather than describing the 'real' domain. In this post I want to talk about pure domain modelling as a way to overcome the bias that domain experts have when explaining their needs. The domain expert bias When domain experts contribute to domain modelling, they frequently start from the perspective of the existing systems they are familiar with. These systems, whether they are legacy products, databases, or other technological solutions, shape their understanding and descriptions of the domain. While this approach has its advantages, it also introduces several challenges: Bias Towards Existing Systems: Domain experts may de

Publish a console app as a single executable

I created a small console application that automatically adds the application pool users on your local IIS server to the correct groups on the web server so that performance counter data is correctly sent to Application Insights. You can find some extra content, the original announcement and the source code here: Azure Application Insights– Collect Performance counters data (bartwullems.blogspot.com) Azure Application Insights–Collect Performance Counters data - Part II (bartwullems.blogspot.com) wullemsb/AppInsightsPoolTool: Allows to automatically add AppPool users to the correct groups for Application Insights (github.com) So what is the reason for this post? When I published the tool, it resulted in a combination of an exe and multiple DLLs: This means that you need to copy all these files to be able to run the tool. It would be nice if there was only a single executable. Let’s see how to get this done… Switch to single file publish Open up the csproj f
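Single-file publishing can also be triggered from the command line without editing the csproj; this is a sketch, and the runtime identifier is an assumption you should adjust to your target:

```shell
# Publish a single, self-contained executable for 64-bit Windows.
dotnet publish -c Release -r win-x64 --self-contained true -p:PublishSingleFile=true
```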

Azure Static Web Apps–SWA CLI behind the scenes

As a follow-up on the presentation I did at CloudBrew about Azure Static Web Apps I want to write a series of blog posts. Part I - Using the VS Code Extension Part II - Using the Astro Static Site Generator Part III  – Deploying to multiple environments Part IV – Password protect your environments Part V – Traffic splitting Part VI – Authentication using pre-configured providers Part VII – Application configuration using staticwebapp.config.json Part VIII – API Configuration Part IX – Injecting snippets Part X – Custom authentication Part XI – Authorization Part XII -  Assign roles through an Azure function Part XIII -  API integration Part XIV – Bring your own API Part XV – Pass authentication info to your linked API Part XVI – Distributed Functions Part XVII – Data API Builder Part XVIII -  Deploy using Bicep Part XIX – Introducing the SWA CLI Part XX(this post) – SWA CLI behind the scenes Yesterday

Azure Static Web Apps–Introducing the SWA CLI

As a follow-up on the presentation I did at CloudBrew about Azure Static Web Apps I want to write a series of blog posts. Part I - Using the VS Code Extension Part II - Using the Astro Static Site Generator Part III  – Deploying to multiple environments Part IV – Password protect your environments Part V – Traffic splitting Part VI – Authentication using pre-configured providers Part VII – Application configuration using staticwebapp.config.json Part VIII – API Configuration Part IX – Injecting snippets Part X – Custom authentication Part XI – Authorization Part XII -  Assign roles through an Azure function Part XIII -  API integration Part XIV – Bring your own API Part XV – Pass authentication info to your linked API Part XVI – Distributed Functions Part XVII – Data API Builder Part XVIII -  Deploy using Bicep Part XIX(this post) – Introducing the SWA CLI Today I want to introduce you a real piece of magi

Semantic Kernel–OpenTelemetry integration in C#

I already showed in a previous post how you could integrate Semantic Kernel with the .NET Core LoggerFactory to see what is going on while interacting with your OpenAI backend. Here is the link in case you missed it: Debugging Semantic Kernel in C# (bartwullems.blogspot.com) . An even better solution is to use the OpenTelemetry integration. To do this, we create a LoggerFactory instance that uses OpenTelemetry as a logging provider: Next, we register this LoggerFactory as a service on the Semantic Kernel builder: If we now take a look at our Aspire Dashboard , we can see the logged messages appear: It is also possible to collect any related metrics and traces. To do so, add the following code to your Program.cs : If we now take a look at the Aspire Dashboard, we can see both the metrics and the end-to-end trace:
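The two steps described above can be sketched as follows; the OTLP exporter defaults to localhost (where an Aspire Dashboard could be listening), and the deployment name, endpoint and key are placeholders:

```csharp
using Microsoft.Extensions.DependencyInjection;
using Microsoft.Extensions.Logging;
using Microsoft.SemanticKernel;
using OpenTelemetry.Logs;

// Step 1: a LoggerFactory that exports log records via OpenTelemetry (OTLP).
var loggerFactory = LoggerFactory.Create(logging =>
{
    logging.AddOpenTelemetry(options =>
    {
        options.AddOtlpExporter(); // defaults to http://localhost:4317
        options.IncludeFormattedMessage = true;
    });
    logging.SetMinimumLevel(LogLevel.Trace);
});

// Step 2: register the LoggerFactory on the kernel builder so Semantic
// Kernel picks it up for its internal logging.
var builder = Kernel.CreateBuilder();
builder.Services.AddSingleton(loggerFactory);
builder.AddAzureOpenAIChatCompletion("deployment", "https://your-endpoint", "api-key");
var kernel = builder.Build();
```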

Semantic Kernel–Change timeout value in C#

If you are new to Semantic Kernel , I would point you to one of my earlier posts. In this post I want to show how you can change the timeout values when using Semantic Kernel. The power of Semantic Kernel is that it gives you the ability to interact with multiple (large language) models in a uniform way. You interact using C#, Java or Python with the Semantic Kernel SDK and behind the scenes it will do the necessary API calls to OpenAI, Azure OpenAI, Hugging Face or a local OpenAI-compatible tool like Ollama . Of course, as we are interacting with an API behind the scenes, it can happen that the API doesn’t return any results in time and that we get a timeout exception. The operation was cancelled because it exceeded the configured timeout. Let me share how I fixed it… Use a custom HttpClient One option you have is to explicitly pass an HttpClient instance when creating the Semantic Kernel instance: Retry when a timeout happens If the timeout typically happens be
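The custom HttpClient option can be sketched as follows; the endpoint, deployment and key are placeholders:

```csharp
using System;
using System.Net.Http;
using Microsoft.SemanticKernel;

// Raise the timeout above the HttpClient default of 100 seconds.
var httpClient = new HttpClient
{
    Timeout = TimeSpan.FromMinutes(5)
};

// Pass the client in when registering the chat completion service.
var kernel = Kernel.CreateBuilder()
    .AddAzureOpenAIChatCompletion(
        deploymentName: "deployment",
        endpoint: "https://your-endpoint.openai.azure.com",
        apiKey: "api-key",
        httpClient: httpClient)
    .Build();
```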

Podman Desktop–The WSL import of guest OS failed: exit status 0xffffffff

To avoid carrying around multiple laptops for different customers, I typically ask if they can provide me a VDI (for example through Azure Virtual Desktop). One of my clients is not on a cloud platform (yet), so the VM they provided me was running on a local (read: in a traditional datacenter) Hyper-V cluster. As we were investigating a move to a container-based development model, I installed Podman Desktop on the provided machine. Podman Desktop requires a Podman machine to be created to be able to run workloads. However, when I tried to create a new machine, it failed with the following error message: Error: the WSL import of guest OS failed: exit status 0xffffffff I did a second attempt through the command line but this failed as well: podman machine init Extracting compressed file: podman-machine-default-amd64: done Importing operating system into WSL (this may take a few minutes on a new WSL install)... WSL2 is not supported with your current machine configuration. Pl

GraphQL–Application Insights integration for HotChocolate 13

If you are a regular reader of my blog, you've certainly seen my earlier post on how to integrate Application Insights telemetry in your HotChocolate based GraphQL backend. IMPORTANT: Although the code below will still work, I would recommend switching to the integrated OpenTelemetry functionality and sending the information to Application Insights that way. Today I had to upgrade an application to HotChocolate 13 and (again) I noticed that the application no longer compiled. So for people who are using the ApplicationInsightsDiagnosticsListener I shared, here is an updated version that works for HotChocolate 13: As I wanted to track the full operation I have overridden the ExecuteRequest method; however, if you only want to log exceptions, I would recommend overriding the RequestError and ResolverError methods instead: More information HotChocolate OpenTelemetry (bartwullems.blogspot.com) GraphQL HotChocolate 11 - Updated Application Insights monitoring (bartwullem
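For completeness, wiring the listener into HotChocolate looks roughly like this; ApplicationInsightsDiagnosticsListener is the class shared in the post, the rest is a sketch:

```csharp
builder.Services
    .AddGraphQLServer()
    // Registers the custom diagnostic event listener so request execution
    // events flow into Application Insights.
    .AddDiagnosticEventListener<ApplicationInsightsDiagnosticsListener>();
```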

Azure Application Insights–Collect Performance Counters data - Part II

About 2 years ago I blogged about the possibility to collect performance counter data as part of your Application Insights telemetry. If you missed the original post, first have a quick read here and then come back. Back? As mentioned in the original post, to be able to collect this performance data, the Application Pool user needs to be added to both the Performance Monitor Users and Performance Log Users group. Although I updated our internal wiki to emphasize this, I still noticed that most developers forget to do this, with no collected data as a result. (Who reads documentation anyway?) So let’s do what any developer does in this case: let’s automate the problem away… I created a small tool that can be executed on the IIS web server. It goes through all the application pools, extracts the corresponding user and adds it to the 2 groups mentioned above. I think the code is quite self-explanatory. Here is the part where I read out the application pools: And here I ad
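The core loop of such a tool could be sketched as follows, assuming Microsoft.Web.Administration is referenced; AddToGroup is a hypothetical helper, not the post's actual code:

```csharp
using Microsoft.Web.Administration;

using var serverManager = new ServerManager();
foreach (var pool in serverManager.ApplicationPools)
{
    // For pools running under a specific account, read the configured user;
    // built-in identities (e.g. ApplicationPoolIdentity) need different handling.
    var userName = pool.ProcessModel.UserName;
    if (!string.IsNullOrEmpty(userName))
    {
        AddToGroup(userName, "Performance Monitor Users"); // hypothetical helper
        AddToGroup(userName, "Performance Log Users");     // hypothetical helper
    }
}
```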

MassTransit–Change log level

A short post today; a colleague wanted to investigate a production issue when sending and receiving messages using MassTransit. He couldn’t immediately find how to change the log level to capture more details about the MassTransit internals. I had a look together with him, so for further reference you can find the answer here. Using Serilog On this specific project he was using Serilog, so let me first show you how to change the log level for MassTransit when using Serilog: Using Microsoft.Extensions.Logging And here are the code changes required when using the default Microsoft.Extensions.Logging: Remark: I would certainly recommend having a look at the observability features of MassTransit as well and looking to implement OpenTelemetry in your application. More information Logging · MassTransit Observability · MassTransit
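Both variants can be sketched as follows; sink and base levels are illustrative:

```csharp
using Serilog;
using Serilog.Events;

// Serilog: raise only the MassTransit source to Debug via a per-source override.
Log.Logger = new LoggerConfiguration()
    .MinimumLevel.Information()
    .MinimumLevel.Override("MassTransit", LogEventLevel.Debug)
    .WriteTo.Console()
    .CreateLogger();

// Microsoft.Extensions.Logging: the equivalent category filter in code
// (or set "MassTransit": "Debug" under Logging:LogLevel in appsettings.json).
builder.Logging.AddFilter("MassTransit", LogLevel.Debug);
```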

ImageSharp–Image.Load() Compiler error after updating

On one of my pet projects I’m using ImageSharp to manipulate images. It is a great library that comes with a lot of features in it. And performance and memory usage are good, which is quite important when manipulating images. Remark: Be aware that a commercial license is needed in some conditions. So check out the pricing page before adding this library to your application. I decided to add some new stuff, but before I did that I updated the library to the latest version. However, after doing that my original code no longer compiled. Let’s see how we can fix this… Here is the original code: This specific overload no longer exists in the latest version. So I started to update my code to a supported overload: Remark: Notice that I also switched to an async overload but know that a synchronous version exists as well. However, by doing that I no longer had the format information available. The good news is that this information could now be accessed through the image.Metadata
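The updated loading code might look like this in recent ImageSharp versions; the file name is a placeholder:

```csharp
using System;
using SixLabors.ImageSharp;

// Load without the removed out-parameter overload; the detected format is
// now available through the image metadata instead.
using var image = await Image.LoadAsync("photo.jpg");
var format = image.Metadata.DecodedImageFormat;
Console.WriteLine(format?.Name);
```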

Code & Comedy 2024–This session will give you 2.6 hours of your time back

Yesterday I had the pleasure to present at Code & Comedy 2024 . Again it was a great combination of inspiring sessions, great food and a lot of nice people to meet. All of this followed by a Comedy Act by Jan Jaap van der Wal. I never thought that AI could be that much fun! Note to myself: Next time make sure that I’m not one of the 2 Flemish guys in the room. I did a presentation titled " This session will give you 2,6h a day of your time back!"; which is a hard promise to make, so I hope that the participants could confirm if I succeeded or not. In the session I shared some surprising insights from behavioral science and explained how we can apply this knowledge in building AI assistants. In case you missed my session or you want to take a look at the code in more detail, here are the relevant links: Presentation: wullemsb/presentations: Repo with all my (public) presentations (github.com) Source code: wullemsb/SemanticKernel: Demo code for my AI session (githu

Github Copilot extension disabled after upgrading to Visual Studio 17.10.2

After upgrading my Visual Studio to 17.10.2, I got a warning that my GitHub Copilot and GitHub Copilot Chat extensions are incompatible with my Visual Studio version and will be disabled. The following extensions are disabled because the installed version is not compatible with Visual Studio Pro 17.10.34505.107. - GitHub Copilot - GitHub Copilot Chat I started to get a small panic, because once you are used to having a Copilot assistant you never want to go back. So after my Visual Studio was up and running, I opened the extension window and indeed both extensions were disabled: I first tried to reinstall the extension but I got the same incompatibility message. Does this mean that I can no longer use Copilot? Luckily the answer is no. Starting from Visual Studio 17.10.2, GitHub Copilot and Copilot Chat are no longer separate extensions but became part of Visual Studio. So although the extensions are no longer there, Copilot (Chat) was still available, as I found out. A bette

List Process IDs (PIDs) for your IIS Application Pools

One of our production servers started to send out alerts because the memory consumption was too high. A first look at the server showed us that one of the w3wp processes was consuming most of the available resources on the server. Sidenote: Web applications running within Microsoft’s Internet Information Services (IIS) utilize what is known as IIS worker processes. These worker processes run as w3wp.exe, and there are typically multiple per server. So we knew that one of our web applications was causing trouble. The question of course now is which one? As the resource manager only showed us the process id, we couldn’t immediately give the answer. But with some extra command line magic we got the answer. To get the information we need we used the appcmd utility. This tool allows, among other things, listing the IIS worker processes. We executed appcmd list wps and got the list of worker processes with their PIDs. c:\Windows\System32\inetsrv>appcmd list wps WP

The Red Hat cloud native architecture solution patterns

Being a software architect, I'm always looking for good resources that can help me design better solutions. One of these resources I can recommend are the Solution Patterns from Red Hat . Although mainly focused on Red Hat technologies, most of the described patterns are applicable in a broader context. For every solution pattern, you get more information about the problems it tackles and use cases it solves, a reference architecture and a technical implementation. For example, let’s have a look at API versioning . As you can see you get some use cases where API versioning plays a role: And a high level solution: If you want to further drill down in the details, you can have an in-depth look at the solution’s architecture. With all this information you can make an informed decision if this solution pattern could help you in solving your specific needs. So bookmark this link and go explore the other solution patterns . More information Solution Patterns from Red H