

Showing posts from November, 2020

Visual Studio - No test matches the given testcase filter

When trying to review some code from a colleague, I first tried to run the available unit tests in Visual Studio. The tests were successfully discovered, but when I tried to execute them, nothing happened. In the test output window I noticed the following message: "No test matches the given testcase filter". This error made me think that somewhere a testcase filter was set. But where? I started searching through the available settings and config files, but I couldn’t find anything special. The error message misled me: the problem was not related to any kind of testcase filter at all. It turned out to be caused by the fact that the NUnit test adapter wasn’t installed. I opened the NuGet Package Manager and installed NUnit3TestAdapter.

C# 8–Default!

After enabling ‘Nullable reference types’ in a C# project, I got a lot of the following warning: CS8618 - Non-nullable property '&lt;propertyname&gt;' must contain a non-null value when exiting constructor. Consider declaring the property as nullable. I got this message a lot on my ViewModels, which expose non-nullable properties that are only populated after construction. I could follow the suggestion and mark the property as nullable, as the compiler suggests. But I know that the ViewModel is populated from the database and that the provided value can never be ‘null’. In that case the trick to get rid of the warning is to use a property assignment with a default literal:
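Something along these lines, a minimal sketch (the ViewModel and property names are made up for illustration):

```csharp
// Assigning the default literal combined with the null-forgiving operator (!)
// tells the compiler the property will be populated later (here: from the
// database), which silences warning CS8618.
public class ProductViewModel
{
    public string Name { get; set; } = default!;
    public string Description { get; set; } = default!;
}
```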

SQL Server Management Studio - Template explorer

I have been using SQL Server Management Studio for a long time, but it wasn’t until recently that I discovered it contains a feature called Template Explorer. What is it? Through Template Explorer you get access to a large list of scripts useful for a lot of scenarios. Next time you want to google how to execute a specific task, take a look at the Template Explorer first. There is a good chance that a script is already available. Where can I find it? To locate the templates in SQL Server Management Studio, go to View, click Template Explorer, and then select SQL Server Templates. More information: https://docs.microsoft.com/en-us/sql/ssms/template/template-explorer

Azure DevOps Server- Upgrade ElasticSearch

After upgrading your Azure DevOps Server instance, you should also upgrade your ElasticSearch instance. If the instance is running on the same server as Azure DevOps, it can be updated automatically using the Configure Search feature in the Azure DevOps Administration console. In case you are running ElasticSearch on a different machine (which I would recommend), you need to copy the Search service package to the target server. You can find the link in the wizard that is used to Configure Search. On the remote server, extract the installer files and execute the following command: .\Configure-TFSSearch.ps1 -Operation update The installer should be able to find the existing ElasticSearch instance and update it for you. More information: https://docs.microsoft.com/en-us/azure/devops/project/search/administration

Snapper - A snapshot does not exist

The first time I ran Snapper it failed with the following error message:
    Snapper.Exceptions.SnapshotDoesNotExistException : A snapshot does not exist.
    Apply the [UpdateSnapshots] attribute on the test method or class and then run the test again to create a snapshot.
Stack Trace:
    SnapshotAsserter.AssertSnapshot(SnapResult snapResult)
    Snapper.MatchSnapshot(Object snapshot, SnapshotId snapshotId)
    Snapper.MatchSnapshot(Object snapshot, String childSnapshotName)
    Snapper.MatchSnapshot(Object snapshot)
    SnapperExtensions.ShouldMatchSnapshot(Object snapshot)
    GraphQLTests.Queries_Succeed(String queryName) line 57
    --- End of stack trace from previous location where exception was thrown ---
This error makes sense, as Snapper expects that a snapshot already exists when you run your tests. To fix this, you need to update your tests and add the [UpdateSnapshots] attribute above your test method or class. When Snapper finds this attribute, it creates (or updates) the stored snapshot instead of failing the test.
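A minimal sketch of what that looks like, assuming xUnit and the Snapper attributes mentioned in the error (the test class and data are made up):

```csharp
using Snapper;
using Snapper.Attributes;
using Xunit;

public class GraphQLTests
{
    // With [UpdateSnapshots] applied, Snapper creates or refreshes the stored
    // snapshot instead of throwing SnapshotDoesNotExistException. Remove the
    // attribute again once the snapshot exists, otherwise the test can never fail.
    [UpdateSnapshots]
    [Fact]
    public void Query_Succeeds()
    {
        var result = new { Name = "Product 1", Price = 10 }; // placeholder result
        result.ShouldMatchSnapshot();
    }
}
```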

Using XUnit Theory with Snapper

To test my GraphQL schema I first used SnapShooter. From the documentation: Snapshooter is a flexible .Net testing tool, which can be used to validate all your test results with one single assert. It creates simply a snapshot of your test result object and stores it on the file system. If the test is executed again, the test result will be compared with the stored snapshot. If the snapshot matches to the test result, the test will pass. It is really a good fit to test your GraphQL API. Unfortunately I got into trouble when I tried to combine it with XUnit Theory. I had created one test that loaded multiple GraphQL files and validated them. This didn’t work as expected because the same snapshot is used for every test run. As a consequence it fails when the same test runs again with a different parameter value. There is probably a way to fix it, but I was lazy, so I took a look online and found Snapper, which offers similar features to SnapShooter but supports the XUnit Theory scenario.
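A rough sketch of how such a Theory could look with Snapper, assuming its child-snapshot support so every parameter value gets its own snapshot (the query names and the ExecuteQuery helper are hypothetical):

```csharp
using System.IO;
using Snapper;
using Xunit;

public class GraphQLTests
{
    [Theory]
    [InlineData("getProducts")]
    [InlineData("getOrders")]
    public void Queries_Succeed(string queryName)
    {
        var query = File.ReadAllText(Path.Combine("Queries", $"{queryName}.graphql"));
        var result = ExecuteQuery(query); // hypothetical helper that calls the GraphQL endpoint

        // Each query name gets its own child snapshot, so the runs no longer
        // overwrite each other. Run once with [UpdateSnapshots] to create them.
        result.ShouldMatchChildSnapshot(queryName);
    }

    private static object ExecuteQuery(string query) => new { query }; // placeholder
}
```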

Enable .NET 5 in your Azure App Service

After deploying a .NET 5 application in an Azure App Service, I got the following error message when I tried to run the application: HTTP Error 500.31 - ANCM Failed to Find Native Dependencies. Common solutions to this issue: The specified version of Microsoft.NetCore.App or Microsoft.AspNetCore.App was not found. This is because .NET 5 is not enabled by default (yet) in Azure App Service. Let’s see how to fix this:
- Open the Azure portal
- Go to the App Service you want to configure
- Click on Configuration in the Settings section
- Go to the Stack Settings and change the .NET Framework version to .NET 5 (Early Access)
- Click Save
- Restart the App Service

.NET 5–Source Generators–Lessons Learned–Part 3

One of the new features in .NET 5 that triggered my curiosity were source generators. I did some investigation on how to use them and want to share the lessons I learned along the way. Yesterday I finally got my first source generator up and running. Now it is time to move on and do something more interesting. I found an example on GitHub that created a strongly typed config class based on your appsettings.json. I tried to duplicate the code, but when I built my application the source generator didn’t run. In the build output I noticed the following warning: Warning CS8034: Unable to load Analyzer assembly c:\lib\netstandard2.0\Syst

.NET 5–Source generators–Lessons learned–Part 2

One of the new features in .NET 5 that triggered my curiosity were source generators. I did some investigation on how to use them and want to share the lessons I learned along the way. Yesterday I wrote my first generator. Unfortunately nothing happened and no new code was generated. I tried to attach the debugger, but when I did a rebuild still nothing happened. It seems that the code itself was never called. It took me some time to figure out why it didn’t work; it turns out that you have to put your source generator in a separate assembly (which makes sense). In my first try I just added the source generator logic to the same project as the other code, but this doesn’t work. How to package a source generator? To get source generators up and running, you need to create a separate project and add the source generator logic there. So create a new .NET Standard project and add a reference to the following package: Generators can be packaged using the same method as an Analyzer

.NET 5–Source generators–Lessons learned - Part 1

One of the new features in .NET 5 that triggered my curiosity were source generators. I did some investigation on how to use them and want to share the lessons I learned along the way. But let’s first start with an explanation of what “a source generator” actually is: A Source Generator is a new kind of component that C# developers can write that lets you do two major things:
- Retrieve a Compilation object that represents all user code that is being compiled. This object can be inspected and you can write code that works with the syntax and semantic models for the code being compiled, just like with analyzers today.
- Generate C# source files that can be added to a Compilation object during the course of compilation. In other words, you can provide additional source code as input to a compilation while the code is being compiled.
So in short, source generators are a new compiler feature that allows you to inspect existing code and generate new code (remark: you can only add new code; you cannot change existing code).
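As an illustration, here is a minimal "hello world" generator, loosely based on the .NET 5 samples (the generated namespace and class names are made up):

```csharp
using System.Text;
using Microsoft.CodeAnalysis;
using Microsoft.CodeAnalysis.Text;

[Generator]
public class HelloWorldGenerator : ISourceGenerator
{
    public void Initialize(GeneratorInitializationContext context)
    {
        // No initialization needed for this simple example.
    }

    public void Execute(GeneratorExecutionContext context)
    {
        // Add a new source file to the compilation; existing code is never modified.
        var source = @"
namespace Generated
{
    public static class HelloWorld
    {
        public static string SayHello() => ""Hello from a source generator!"";
    }
}";
        context.AddSource("HelloWorld.g.cs", SourceText.From(source, Encoding.UTF8));
    }
}
```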

GraphQL Hot Chocolate–Enable global authentication

There are multiple ways to enable authentication in Hot Chocolate. Here is a simple approach: Step 1 – Enable ASP.NET Core authentication. The first step is to enable authentication at the ASP.NET Core level. Let’s use JWT tokens for authentication. Step 2 – Enable authentication at the root GraphQL query. The second (and already the last) step is to enable authentication on the root query type. By providing no role or policy names we’re simply saying the user must be authenticated.
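A rough sketch of both steps, assuming Hot Chocolate 11 with the HotChocolate.AspNetCore and HotChocolate.AspNetCore.Authorization packages installed (the authority/audience values and the Query type are placeholders):

```csharp
using HotChocolate.AspNetCore.Authorization;
using Microsoft.AspNetCore.Authentication.JwtBearer;
using Microsoft.Extensions.DependencyInjection;

public class Startup
{
    public void ConfigureServices(IServiceCollection services)
    {
        // Step 1: enable authentication at the ASP.NET Core level (JWT bearer tokens).
        services
            .AddAuthentication(JwtBearerDefaults.AuthenticationScheme)
            .AddJwtBearer(options =>
            {
                options.Authority = "https://your-identity-provider"; // placeholder
                options.Audience = "graphql-api";                     // placeholder
            });

        services.AddAuthorization();

        // Step 2: register the schema and enable GraphQL authorization.
        services
            .AddGraphQLServer()
            .AddAuthorization()
            .AddQueryType<Query>();

        // Don't forget app.UseAuthentication() before mapping the GraphQL endpoint.
    }
}

// No roles or policies: the user just has to be authenticated.
[Authorize]
public class Query
{
    public string Hello() => "world"; // placeholder query field
}
```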

Visual Studio 2019 16.8– Git Amend

With the release of Visual Studio 2019 16.8, Git became the default source control experience. It no longer lives inside Team Explorer but got its own menu item. Read the announcement for more details. One of the things I noticed when using the new Git experience was the first-class support for ‘git amend’. But what does ‘git amend’ do? Let me explain… The git commit --amend command is a convenient way to modify the most recent commit. It lets you combine staged changes with the previous commit instead of creating an entirely new commit. Amending does not just alter the most recent commit, it replaces it entirely, meaning the amended commit will be a new entity with its own ref. It is a way to rewrite the git history. Warning: avoid amending a commit that other developers have based their work on. To be safe, only amend local commits.

Predefined type 'System.Runtime.CompilerServices.IsExternalInit' is not defined or imported

With the release of .NET 5 it was finally time to try record types. My first attempt, copied from the documentation, turned out not to be the success I was hoping for. Visual Studio returned the following error message: Error CS0518: Predefined type 'System.Runtime.CompilerServices.IsExternalInit' is not defined or imported (NHibernate.Tests, C:\NHibernate.Tests\Domain\Product.cs, line 17). The error is not very descriptive, but the reason I got it is that I forgot to update the project to .NET 5. The fix was easy; after updating the target framework the error disappeared.
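For reference, a sketch of the kind of record that triggers this error (the type and property names are illustrative, not the exact code from the post):

```csharp
// Positional records generate init-only setters, and init accessors rely on the
// IsExternalInit type mentioned in the error. That type only ships with .NET 5,
// which is why an older target framework produces CS0518.
public record Product(int Id, string Name, decimal Price);

// Records also support non-destructive mutation:
// var discounted = product with { Price = 9.99m };
```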

CS8632 - The annotation for nullable reference types should only be used in code within a ‘#nullable’ annotations context.

I copy/pasted the following code from an example I found on the web to test source generators (but that is not what this post is about today). Building this code resulted in the CS8632 warning from the title. Do you have any idea why? Take a look at the ‘Name’ property and notice the usage of ‘?’. This is part of the introduction of nullable reference types and declares the string as nullable. Nullable reference types are available beginning with C# 8.0, in code that has opted in to a nullable aware context. This is what the warning message is referring to. We need to enable nullable reference type support in our project. This can be done by adding <Nullable>enable</Nullable> to our csproj file.
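A sketch of the kind of class that triggers the warning (the class and property names are hypothetical):

```csharp
public class Person
{
    // '?' marks the string as a nullable reference type. Without
    // <Nullable>enable</Nullable> in the csproj (or a #nullable enable directive)
    // the annotation has no meaning, and the compiler emits CS8632.
    public string? Name { get; set; }
}
```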

The target framework 'netcoreapp3.0' is out of support and will not receive security updates in the future.

After upgrading to the latest Visual Studio version (16.8.1), which added support for .NET 5, I got a new warning when opening an existing .NET Core 3.0 solution: Warning NETSDK1138: The target framework 'netcoreapp3.0' is out of support and will not receive security updates in the future. Please refer to https://aka.ms/dotnet-core-support for more information about the support policy. (ExampleApp, C:\Program Files\dotnet\sdk\5.0.100\Sdks\Microsoft.NET.Sdk\targets\Microsoft.NET.EolTargetFrameworks.targets, line 28) You don’t have to panic when you see this warning. The only thing you need to be aware of is that netcoreapp3.0 no longer receives updates, so you should plan an upgrade to a supported target framework.

ORM- Attaching detached object trees

One of the nice features of an ORM is that it allows you to change a complete object tree and have the ORM figure out what needs to be added, updated and deleted. The reason this works is that a typical ORM tracks, by default, all objects that are attached to the ORM abstraction (DbContext in EF Core and ISession in NHibernate). Because it tracks these objects (and their relations) it knows what should happen when you make a change. Unfortunately, when you are sending objects to a client through an API (typically using DTOs and tools like AutoMapper) and you get a changed object tree back, the objects in this tree are no longer attached to the ORM, so it doesn’t know what should be inserted, updated or deleted. Luckily most ORMs are smart and use different tricks to figure out what has changed. However, one place where these ORMs typically fail is when you are using AutoMapper and mapping a child collection in your DTOs to your domain model. An example below where we map a child collection from a DTO back onto a tracked entity:
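A minimal sketch of the problem, assuming EF Core and AutoMapper (the Order/OrderLine model, the OrderContext and the OrderDto are hypothetical):

```csharp
using System.Threading.Tasks;
using AutoMapper;
using Microsoft.EntityFrameworkCore;

public class OrderService
{
    private readonly OrderContext dbContext; // hypothetical DbContext
    private readonly IMapper mapper;

    public OrderService(OrderContext dbContext, IMapper mapper)
        => (this.dbContext, this.mapper) = (dbContext, mapper);

    public async Task Update(OrderDto dto) // hypothetical DTO with a Lines collection
    {
        var order = await dbContext.Orders
            .Include(o => o.Lines)
            .SingleAsync(o => o.Id == dto.Id);

        // AutoMapper replaces the tracked OrderLine instances with brand new,
        // untracked objects built from dto.Lines. EF Core loses track of which
        // lines were added, changed or removed, so SaveChanges may insert
        // duplicates or choke on the orphaned originals.
        mapper.Map(dto, order);

        await dbContext.SaveChangesAsync();
    }
}
```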

Microsoft Orleans- Warning NoOpHostEnvironmentStatistics

When starting up my Orleans cluster I noticed the following warning message: warn: Orleans.Runtime.NoOpHostEnvironmentStatistics[100708] No implementation of IHostEnvironmentStatistics was found. Load shedding will not work yet. To get rid of this warning you need to include an extra line in the SiloBuilder config that registers an IHostEnvironmentStatistics implementation (see the sketch below). This will enable support for load shedding, which kicks in when the system gets overloaded and prevents new clients from connecting.
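A rough sketch, assuming the silo runs on Windows and the Microsoft.Orleans.OrleansTelemetryConsumers.Counters package is installed; the exact extension method depends on which telemetry consumers package you use (on Linux, UseLinuxEnvironmentStatistics() from the Linux package plays the same role):

```csharp
using Microsoft.Extensions.Hosting;
using Orleans.Hosting;

public static class Program
{
    public static IHost BuildHost() =>
        new HostBuilder()
            .UseOrleans(siloBuilder =>
            {
                siloBuilder
                    .UseLocalhostClustering()
                    // Registers an IHostEnvironmentStatistics implementation so the
                    // NoOpHostEnvironmentStatistics warning goes away and load
                    // shedding has the data it needs.
                    .UsePerfCounterEnvironmentStatistics();
            })
            .Build();
}
```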

Kafka - Avro - Value serialization error

Avro is the de facto standard serialization format for Kafka; I mentioned how to use it in C# yesterday. Today I got into trouble when trying to send a specific message. I changed the example a little bit to explicitly point out the issue: in the code, I set the value for ‘Browser’ to ‘null’. When trying to send this message, it failed with the following error: Local: Value serialization error. Let’s have a look at the related Avro schema. The problem is that the schema specifies that the Browser field should have a value of type string, and ‘null’ is not a valid value for a string. This explains why it fails. To solve this there are two options: 1) change the code to send an empty string instead of null, or 2) update the schema to allow null values for the browser field. More about Avro: https://en.wikipedia.org/wiki/Apache_Avro
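A small sketch of both options (PageViewEvent is the Avro-generated class from the previous post; the Url field and the browser variable are hypothetical):

```csharp
// Option 1: never send null for a field the schema declares as plain "string".
var message = new PageViewEvent
{
    Url = "/home",                      // hypothetical field
    Browser = browser ?? string.Empty   // fall back to an empty string instead of null
};

// Option 2: alternatively, relax the .avsc schema by turning the field into a
// union that allows null:
//   { "name": "Browser", "type": ["null", "string"], "default": null }
```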

Kafka- Using Avro as serialization format in C#

To help you use Avro as the serialization format for your Kafka messages, a .NET Core global tool, avrogen, is available. First install the tool using dotnet tool install. The next step is to specify your message schema: create an .avsc file and add your message specification. Now it’s time to generate the necessary code from that schema. The generated type can then be used by your Producer and Consumer logic. More information: https://www.confluent.io/blog/avro-kafka-data/
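Roughly, the commands look like this (a sketch; the exact tool package name may differ depending on the version you use, and the schema file name is an example):

```
# Install the avrogen code generator as a .NET global tool
# (Apache.Avro.Tools ships it; Confluent also published Confluent.Apache.Avro.AvroGen in the past)
dotnet tool install --global Apache.Avro.Tools

# Generate C# classes from the schema into the current directory
avrogen -s PageViewEvent.avsc .
```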

DDD–Strongly typed Ids-Using C#9 Record Types

I blogged before about the usage of Strongly typed Ids in your domain models. With the introduction of C#9 record types, an alternative approach becomes an option. Record types are reference types with built-in immutability and value semantics. They automatically provide implementations for Equals, GetHashCode, etc., and offer a very concise syntax known as positional records. This allows us to rewrite our ProductId type using records: public record ProductId(int Value); That’s all that is needed. Thank you C# 9!

Sending a message through Kafka - Value serializer not specified and there is no default serializer defined for type

My first attempt to send a typed message through Kafka resulted in the following error message: Value cannot be null. (Parameter 'Value serializer not specified and there is no default serializer defined for type PageViewEvent') As the error message mentions, you need to explicitly specify which serializer should be used for your message object. Therefore you need to use the SchemaRegistryClient and specify a serializer (I’m using Avro in the sample below):
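A sketch of a producer with an explicitly registered Avro serializer, assuming the Confluent.SchemaRegistry.Serdes.Avro package (the topic name, URLs and the PageViewEvent type are placeholders taken from the surrounding posts):

```csharp
using System.Threading.Tasks;
using Confluent.Kafka;
using Confluent.SchemaRegistry;
using Confluent.SchemaRegistry.Serdes;

public static class PageViewProducer
{
    public static async Task SendAsync(PageViewEvent pageView)
    {
        var schemaRegistryConfig = new SchemaRegistryConfig { Url = "http://localhost:8081" };
        var producerConfig = new ProducerConfig { BootstrapServers = "localhost:9092" };

        using var schemaRegistry = new CachedSchemaRegistryClient(schemaRegistryConfig);
        using var producer = new ProducerBuilder<string, PageViewEvent>(producerConfig)
            // Explicitly register an Avro serializer backed by the schema registry,
            // which is exactly what the error message is complaining about.
            .SetValueSerializer(new AvroSerializer<PageViewEvent>(schemaRegistry))
            .Build();

        await producer.ProduceAsync("page-views",
            new Message<string, PageViewEvent> { Key = "pageview", Value = pageView });
    }
}
```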

Configure a Kafka topic in C#

By default when you use the Confluent Kafka .NET client, a topic is created automatically for you when you publish your first message. However this will create a topic using the default settings. Typically you want to have more control when creating a topic. This is possible through the usage of the AdminClient:
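A minimal sketch using the AdminClient from the Confluent Kafka .NET client (the topic name, partition count and replication factor are placeholders):

```csharp
using System;
using System.Threading.Tasks;
using Confluent.Kafka;
using Confluent.Kafka.Admin;

public static class TopicSetup
{
    public static async Task CreateTopicAsync()
    {
        var config = new AdminClientConfig { BootstrapServers = "localhost:9092" };

        using var adminClient = new AdminClientBuilder(config).Build();
        try
        {
            // Create the topic with explicit settings instead of relying on broker defaults.
            await adminClient.CreateTopicsAsync(new[]
            {
                new TopicSpecification
                {
                    Name = "page-views",
                    NumPartitions = 3,
                    ReplicationFactor = 1
                }
            });
        }
        catch (CreateTopicsException ex)
        {
            Console.WriteLine($"Could not create topic: {ex.Results[0].Error.Reason}");
        }
    }
}
```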