Friday, September 11, 2020

.NET Core Tools

Starting with .NET Core 2.1, Microsoft introduced the dotnet tools platform as a simple way to consume existing (small) .NET Core applications through NuGet. In a way it is quite similar to installing tools through NPM.

You can see the tools you already installed by executing the following command:

dotnet tool list -g

This should output the list of global tools:

C:\Users\BaWu>dotnet tool list -g
Package Id         Version          Commands
-------------------------------------------------
dotnet-ef          3.0.0            dotnet-ef
dotnet-format      3.3.111304       dotnet-format
dotnet-try         1.0.19553.4      dotnet-try

The question is: how can we easily find the tools that are out there?

Go to nuget.org, click on Packages and hit the Filter button to see a list of filter options.

One of the options you have is to filter on the ".NET tool" package type. That should do the trick…
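Once you’ve found an interesting tool, installing it takes a single command. For example, to install dotnet-format (one of the tools from the list above) as a global tool:

dotnet tool install -g dotnet-format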

More information here: https://docs.microsoft.com/en-us/dotnet/core/tools/global-tools

Thursday, September 10, 2020

GraphQL - Faker

A great way to mock your GraphQL API (or extend an existing API) is through GraphQL Faker.

It can act as a proxy between your existing API and your app making it really easy to try out new use cases and develop your frontend and backend in parallel.

Installation

To use it, first install it through NPM:

npm install -g graphql-faker

Usage

Now you can either create a completely new GraphQL API:

graphql-faker --open

Or extend an existing API:

graphql-faker ./ext-swapi.graphql --extend http://swapi.apis.guru

Now you can open up the editor at http://localhost:9002/editor/ and start using faker directives in your GraphQL schema. You can use the @fake directive to specify how to fake data, @listLength to specify the number of returned array items and the @examples directive to provide concrete examples:
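For example, an extension of the SWAPI proxy above could look like this (a sketch; the extra Person fields are made up for the example):

extend type Person {
  nickname: String @fake(type: firstName)
  favoriteColors: [String] @fake(type: colorName) @listLength(min: 2, max: 5)
  homeTown: String @examples(values: ["Mos Eisley", "Theed", "Coruscant"])
}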

Save the model. Now you can browse to http://localhost:9002/graphql and start exploring your fake GraphQL schema.

Wednesday, September 9, 2020

Domain Driven Design–3 rules to help you get started

I recently started a new project where we are using the DDD principles to build our software. As most of my team members are new to DDD, I noticed some obvious mistakes. Here are some rules that can help you avoid the same mistakes…

Rule #1: Your domain should be modelled first

When you start a new project, spend enough time on the domain before introducing a frontend and a database. Try to identify your bounded contexts. What are your entities? What would make a good aggregate? Which domain events are needed?

This allows you to really focus on the domain itself and explore it in depth. It is also a perfect time to introduce TDD (Test-Driven Development) in your team.

Rule #2: Your domain should reflect your business logic not your database

If you start with Rule #1 you are already on the right path. In too many cases I see that the domain is immediately set up in such a way that it works with the ORM tool the team has chosen. If the ORM needs getters and setters, they are added. If the ORM needs all properties to be public, they are made public.

Another thing I see is that people (especially if they are using a relational database) immediately start applying normalization techniques to make the domain match the database better (and decrease the impedance mismatch). Unfortunately, this leads to domain models that no longer reflect the domain and are coupled to your data storage mechanism.

Rule #3: Your domain should always be valid

Your domain should always represent a valid state. This means encapsulating validation inside your domain and ensuring that nobody can change the internal state from the outside. So no setters, only methods that encapsulate the behavior of your system.

This helps you avoid a lot of extra guards and if-checks spread around your code. And in the end, this is what good object-oriented design is all about…
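To make Rule #3 concrete, here is a minimal C# sketch of an always-valid entity (the Order type and its rules are made up for the example):

using System;
using System.Collections.Generic;

public enum OrderStatus { Draft, Submitted }

public class OrderLine
{
    public OrderLine(string productId, int quantity)
    {
        ProductId = productId;
        Quantity = quantity;
    }

    public string ProductId { get; }
    public int Quantity { get; }
}

public class Order
{
    private readonly List<OrderLine> _lines = new List<OrderLine>();

    // No public setters: the state can only change through behavior methods.
    public OrderStatus Status { get; private set; } = OrderStatus.Draft;
    public IReadOnlyCollection<OrderLine> Lines => _lines.AsReadOnly();

    public void AddLine(string productId, int quantity)
    {
        // The guards live inside the domain instead of being spread around the code.
        if (Status != OrderStatus.Draft)
            throw new InvalidOperationException("Lines can only be added to a draft order.");
        if (quantity <= 0)
            throw new ArgumentOutOfRangeException(nameof(quantity), "Quantity must be positive.");

        _lines.Add(new OrderLine(productId, quantity));
    }

    public void Submit()
    {
        if (_lines.Count == 0)
            throw new InvalidOperationException("An order needs at least one line.");

        Status = OrderStatus.Submitted;
    }
}

Because every state change goes through a method that enforces the rules, an Order can never exist in an invalid state.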

Tuesday, September 8, 2020

MassTransit–Create a scoped filter that shares the scope with a consumer

Here is what I wanted to achieve:

I want a MassTransit filter that reads some user information from a message header and stores it in a scoped class. The idea is that the same scope is applicable inside my consumer, so that I’m guaranteed that the correct audit information is written to the database.

This use case doesn’t sound too hard, and a first look at the documentation showed that you can have scoped filters: https://masstransit-project.com/advanced/middleware/scoped.html#available-context-types

Unfortunately these filters don’t share the same scope with the consumer and turned out not to be a good solution for my use case. As I mentioned yesterday, I discovered the way to go when having a look at the MassTransit pipeline:

What I need is an implementation of a ConsumerConsumeContext filter (quite a mouthful).

Let’s take a look at the steps involved:

  • First we create a ConsumerConsumeContext filter (shown in the sketch below).
    • Notice that to get it working I had to use the Service Locator pattern and resolve the IoC instance through the context provided in the Send method.
  • Next we need to add the necessary registrations. I’m registering the IUserFactory that stores the user data as a scoped instance.
    • I’m using Autofac here, but the code is quite similar for other IoC containers.
  • As a last step we need to apply the filter to our consumers. I’m doing this through a definition file, but there are other ways to configure this as well:
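To give you an idea, here is a rough sketch of such a filter (IUserFactory, SetUser, MyConsumer and the UserName header are illustrative names; it assumes the MassTransit Autofac integration exposes the ILifetimeScope as a payload on the context):

using System.Threading.Tasks;
using Autofac;
using GreenPipes;
using MassTransit;
using MassTransit.Definition;

public interface IUserFactory
{
    void SetUser(string userName);
}

public class AuditFilter<TConsumer> : IFilter<ConsumerConsumeContext<TConsumer>>
    where TConsumer : class
{
    public async Task Send(ConsumerConsumeContext<TConsumer> context,
        IPipe<ConsumerConsumeContext<TConsumer>> next)
    {
        // Service Locator style: resolve the scoped IUserFactory from the same
        // lifetime scope that the consumer instance is created from.
        if (context.TryGetPayload(out ILifetimeScope scope) &&
            context.Headers.TryGetHeader("UserName", out var userName))
        {
            scope.Resolve<IUserFactory>().SetUser(userName?.ToString());
        }

        await next.Send(context);
    }

    public void Probe(ProbeContext context) => context.CreateFilterScope("auditFilter");
}

public class MyMessage { }

public class MyConsumer : IConsumer<MyMessage>
{
    public Task Consume(ConsumeContext<MyMessage> context) => Task.CompletedTask;
}

// Applying the filter to the consumer through a definition file:
public class MyConsumerDefinition : ConsumerDefinition<MyConsumer>
{
    protected override void ConfigureConsumer(IReceiveEndpointConfigurator endpointConfigurator,
        IConsumerConfigurator<MyConsumer> consumerConfigurator)
    {
        consumerConfigurator.UseFilter(new AuditFilter<MyConsumer>());
    }
}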

Monday, September 7, 2020

MassTransit–Receive pipeline

The last few days I have been struggling with middleware and filters in MassTransit. I had a hard time understanding what was really going on.

Until I stumbled upon the following image in the documentation:

I wish I had seen this image sooner. It would have saved me a lot of time!

As a side note, it is a good example of how the right amount of documentation can really make a difference. I got into the habit of spending more time on documenting what I’m building, and this not only helps me structure my thoughts but also helps a lot in the design process and even makes flaws in my reasoning visible early.

So my advice for today: document more. (I hope my team is reading this)

Friday, September 4, 2020

.NET Core Nuget - Showing a readme file after restore

In .NET Core and .NET Standard projects, content or tools distributed as part of a NuGet package are no longer installed. However, there is one (hidden?) feature that still works: showing a readme file. Let’s see how to get this done:

  • Add a readme.txt file to the project that you package through NuGet.
  • Open the .csproj file and add the following entry:
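The entry looks something like this (it packs readme.txt into the root of the package):

<ItemGroup>
  <None Include="readme.txt" Pack="true" PackagePath="\" />
</ItemGroup>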

This will embed a readme.txt file in the root folder of the package. When the package is restored, the file gets displayed by Visual Studio:

Thursday, September 3, 2020

GraphQL–HotChocolate–How to handle an incomplete schema

HotChocolate puts a lot of effort into validating that you produced a valid schema before your application starts up. This is useful when a feature is ready, but during development it can be annoying, as you may want to run your application before having resolvers ready for every field in your schema.

By default when you try to run with an incomplete schema, you’ll get an error message similar to the following:

An error occurred while starting the application.

SchemaException: The schema builder was unable to identify the query type of the schema. Either specify which type is the query type or set the schema builder to non-strict validation mode.

HotChocolate.SchemaBuilder.Create() in SchemaBuilder.Create.cs, line 0


As mentioned in the error message, you can change this behavior by setting the schema builder to non-strict validation mode. But how exactly can you do this?

I’ll give you the answer: during schema creation you can modify the options:
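A minimal sketch (assuming the SchemaBuilder API; QueryType stands in for your own root type):

var schema = SchemaBuilder.New()
    .AddQueryType<QueryType>()
    // Disable strict validation so the incomplete schema can still be created.
    .ModifyOptions(o => o.StrictValidation = false)
    .Create();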

Wednesday, September 2, 2020

Azure DevOps - Improve build time

During our last retrospective, one of the complaints was that the build time was too long. We were using the hosted build agents on Azure DevOps, and restoring the NPM packages on every build took way too long.

Fortunately, Azure DevOps now offers a new build task called the Cache task. This task will restore the cached data based on the provided inputs. Let’s see how we can use this task to cache our NPM packages:

  • Add the cache task to your build steps

  • Now we first need to set a cache key. The cache key is used as the identifier for the cache and can be a combination of string values, file paths and file patterns. In our case we want to use the package-lock.json file as the key. Every time the package-lock.json changes, the cache will be invalidated and the npm packages will be restored again.
    • **/package-lock.json, !**/node_modules/**/package-lock.json, !**/.*/**/package-lock.json
  • The second thing we need to set is the directory to populate the cache from and restore back to. As we want to cache the npm packages, we cache the ‘node_modules’ folder (the full task in YAML is shown below):
    • $(System.DefaultWorkingDirectory)/Bridges.CMS.SPA/node_modules
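Putting both inputs together, the task looks roughly like this in YAML (Cache@2 with the key and path from the steps above):

- task: Cache@2
  displayName: Cache npm packages
  inputs:
    key: '**/package-lock.json, !**/node_modules/**/package-lock.json, !**/.*/**/package-lock.json'
    path: '$(System.DefaultWorkingDirectory)/Bridges.CMS.SPA/node_modules'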

Now if we build again, the cache is populated the first time, and from the second build on we save a lot of time as the node_modules folder is restored from the cache. Yippie!


Tuesday, September 1, 2020

ASP.NET Core Swagger error - Conflicting method/path combination

After adding a new method to a controller, my OpenAPI (Swagger) endpoint started to complain with the following error message:

Conflicting method/path combination "POST api/Orders" for actions - eShopExample.Web.Controllers.OrdersController.CreateOrder (eShopExample),eShopExample.Web.Controllers.OrdersController.ProcessOrders (eShopExample). Actions require a unique method/path combination for Swagger/OpenAPI 3.0. Use ConflictingActionsResolver as a workaround

I find the error message itself not very insightful, but taking a look at my OrdersController made it all clear:

The problem was that I had 2 methods that were both using attribute-based routing (through the [HttpPost] attribute) but were resolved to the same URI: “api/Orders”. To fix it, I had to use an overload of the [HttpPost] attribute and specify an alternative URI:
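A sketch of the fix (the action names come from the error message; the “process” route template and the OrderDto parameter are illustrative):

using Microsoft.AspNetCore.Mvc;

public class OrderDto { }

[Route("api/[controller]")]
[ApiController]
public class OrdersController : ControllerBase
{
    // Resolves to POST api/Orders
    [HttpPost]
    public IActionResult CreateOrder(OrderDto order)
    {
        return Ok();
    }

    // The route template gives this action its own path: POST api/Orders/process
    [HttpPost("process")]
    public IActionResult ProcessOrders()
    {
        return Ok();
    }
}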