
Posts

Showing posts from April, 2020

Domain Driven Design–Get rid of your anemic domain model

Although most developers have heard about Domain Driven Design, most applications I encounter today are the traditional ‘layered cake’ with an anemic domain model. I have to admit that even a lot of the applications I’ve helped build over the last years still use this approach. Why? Two reasons: Lack of DDD knowledge in the teams. Most developers ‘know’ the tactical DDD patterns but are not aware of the strategic DDD patterns. What makes it even worse is that even tactical patterns like the repository pattern are often wrongly applied. Getting a team up to speed with DDD takes time, time that we unfortunately don’t always have in ‘fixed price, fixed time, fixed scope’ projects. Another reason to go for the ‘product, not project’ approach. Lack of access to domain experts. Good domain modelling can only work if the necessary expertise is around to help you find the correct ubiquitous language, identify bounded contexts and create a shared understanding of the business. Too m

Microsoft Build 2020 - Digital Event

This year Microsoft Build will be fully digital. A unique opportunity to get a non-stop, 48-hour interactive experience. It will be more than just an online streaming event. You can help shape the agenda by submitting a specific topic during registration and even get 1:1 expert guidance from Microsoft engineers! Register here: https://mybuild.microsoft.com/

Azure AppService–How to identify a bottleneck?

My first answer to the question in the title would be to use Application Insights. But in case you didn’t configure Application Insights for your App Service, here is an alternative way to gain some insights. To immediately spoil the answer, we will use Project Kudu to look behind the curtains of our Azure App Service. How to find Project Kudu? Go to the Azure Portal, open the App Service blade for your App Service, click on Advanced Tools and click on Go. Or if you want the quick way, browse directly to the scm endpoint: https://<appservicename>.scm.azurewebsites.net/ Process Explorer Click on Process Explorer at the top. Remark: this is only available when you are running the Windows version of an App Service. Now you can see all the processes and even start profiling

C# 8–Null-coalescing

Although C# 8 has been out for a while, I’m still discovering new features. When I want to assign a value to a variable only when it is null, I typically used either the conditional ternary operator or a coalesce expression. With C# 8, you can write this even shorter with the null-coalescing assignment operator:
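The original code snippets didn’t survive in this excerpt; here is a minimal sketch of the three variants, using a hypothetical name variable:

    string name = null;

    // Pre-C# 8: the conditional ternary operator
    name = name == null ? "default" : name;

    // Pre-C# 8: a coalesce expression
    name = name ?? "default";

    // C# 8: the null-coalescing assignment operator only assigns when name is null
    name ??= "default";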

XUnit - Make your tests more readable

One of the neat tricks that you can do in XUnit is to make your tests more readable. Here is one of my unit tests: And here is how this looks by default in the Test Explorer: Adding a configuration file Let’s now add a configuration file to improve the default behavior. Add a new JSON file to the root of your test project. Name the file xunit.runner.json. Add a schema reference to get autocomplete behavior while editing the file. Remark: in Visual Studio this is not really necessary, as it will recognize the file name and apply the correct schema automatically. Tell the build system to copy the file into the build output directory. Edit your .csproj file and add the following: <ItemGroup> <Content Include="xunit.runner.json" CopyToOutputDirectory="PreserveNewest" /> </ItemGroup> Update the configuration Let’s now tweak the behavior by changing some settings in the
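The configuration file itself was lost from this excerpt; a minimal sketch of what xunit.runner.json could look like; the methodDisplay value shown here is one plausible readability tweak, not necessarily the one from the original post:

    {
      "$schema": "https://xunit.net/schema/current/xunit.runner.schema.json",
      "methodDisplay": "method"
    }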

How to keep secrets in Azure Functions?

While doing a code review of an Azure Function I noticed the following line: var personalAccessToken = Environment.GetEnvironmentVariable("KeyVault_PersonalAccessToken", EnvironmentVariableTarget.Process); Could it be that the developer directly stored a secret in an environment variable, instead of using secure storage like Azure Key Vault? Inside the configuration on the Azure Portal I found the following: @Microsoft.KeyVault(SecretUri=https://sample-vault.vault.azure.net/secrets/personal-access-token/<removedthekey>) Luckily my assumption was wrong and it turns out this is a feature I wasn’t aware of in Azure (Functions). To use this feature you first have to create a Managed Service Identity for your Azure Functions app as described here: https://docs.microsoft.com/en-us/azure/app-service/overview-managed-identity?tabs=dotnet#creating-an-app-with-an-identity Once you have a Managed Service Identity you can add an Access policy in your Azure Ke

.NET Core - Using ClaimsIdentity in your unit tests

To be able to test my application I had to create a ‘dummy’ ClaimsPrincipal and ClaimsIdentity: However, when I tried to run my test, it failed because the ClaimsIdentity returned ‘false’ for the IsAuthenticated check. To create an authenticated ‘dummy’ identity I had to pass an extra ‘authenticationType’ parameter to the constructor: The exact value that you specify doesn't matter.
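The snippets were stripped from this excerpt; a minimal sketch of the fix, with hypothetical claim values:

    using System.Security.Claims;

    var claims = new[] { new Claim(ClaimTypes.Name, "testuser") };

    // Without an authenticationType, IsAuthenticated stays false
    var anonymous = new ClaimsIdentity(claims);

    // Any non-empty authenticationType marks the identity as authenticated
    var identity = new ClaimsIdentity(claims, "TestAuthType");
    var principal = new ClaimsPrincipal(identity);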

.NET Core–Using IOptions<> in your unit test

An easy and typesafe way to use configuration values in your .NET Core applications is through IOptions<T>. This allows you to create a settings section in your appsettings.json: And then map this to a specific class: The only thing you need to do is to specify the section in your Startup.cs: Now you can inject these settings in your code through IOptions<T>: But what if you want to unit test this class? How can you create an IOptions<T> instance? A solution exists through the Options.Create() method:
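The code was stripped from this excerpt; a minimal sketch, assuming a hypothetical MailSettings class:

    using Microsoft.Extensions.Options;
    using Xunit;

    public class MailSettings
    {
        public string SmtpServer { get; set; }
    }

    public class MailServiceTests
    {
        [Fact]
        public void Can_create_options_for_testing()
        {
            // Wrap a plain settings instance in an IOptions<T>
            var options = Options.Create(new MailSettings { SmtpServer = "localhost" });

            Assert.Equal("localhost", options.Value.SmtpServer);
        }
    }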

Building Secure and Reliable Systems

Google continues sharing their experience with Site Reliability Engineering (SRE) in a new ebook: Building Secure and Reliable Systems. Can a system be considered truly reliable if it isn't fundamentally secure? Or can it be considered secure if it's unreliable? Security is crucial to the design and operation of scalable systems in production, as it plays an important part in product quality, performance, and availability. In this book, experts from Google share best practices to help your organization design scalable and reliable systems that are fundamentally secure.

Use your own machine with Visual Studio Online

With Visual Studio Online, you get a fully managed development environment in the cloud on demand. Recently Microsoft announced that you can also register your own machine and access it remotely through Visual Studio Code or the Visual Studio Online web editor. Why would you want to do that? Microsoft mentions the following reasons: This is a great option for developers that want to cloud-connect an already configured work or home machine for anywhere access, or take advantage of the Visual Studio Online developer experience for specialized hardware we don’t currently support. Let’s try it! First make sure that you have an existing development plan: Go to https://online.visual.studio.com and click on Get Started. Login using your company or Microsoft account and check the top bar. You have an existing plan? Great, you can skip to the next step. You don’t have an existing plan? Let’s continue… Click on the Create environment

Track timings using Serilog

So far I had always used the Stopwatch class to track timings in my applications and just added the result to my log messages. Until I discovered the SerilogTimings NuGet package. Usage is simple: after you have configured Serilog, you can use Operation.Time() to time an operation: At the completion of the using block(!), a message will be written to the log like: [INF] Submitting payment for order-12345 completed in 456.7 ms You can also use it directly on top of an ILogger instance: More info about this library can be found here: https://github.com/nblumhardt/serilog-timings
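The code snippets didn’t survive this excerpt; a minimal sketch based on the SerilogTimings README, with a hypothetical order id:

    using Serilog;
    using SerilogTimings;

    Log.Logger = new LoggerConfiguration()
        .WriteTo.Console()
        .CreateLogger();

    using (Operation.Time("Submitting payment for {OrderId}", "order-12345"))
    {
        // the work being timed; the completion message is logged when the block is disposed
    }

For the ILogger variant, the package ships extension methods (in SerilogTimings.Extensions) such as TimeOperation(), which work the same way on a logger instance.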

XUnit–.NET Standard

After creating a new project in Visual Studio to use for my unit tests, I added the following NuGet packages: <PackageReference Include="xunit" Version="2.4.0" /> <PackageReference Include="xunit.runner.visualstudio" Version="2.4.0" /> However, adding the second package resulted in the following warning message: Package 'xunit.runner.visualstudio 2.4.0' was restored using '.NETFramework,Version=v4.6.1, .NETFramework,Version=v4.6.2, .NETFramework,Version=v4.7, .NETFramework,Version=v4.7.1, .NETFramework,Version=v4.7.2, .NETFramework,Version=v4.8' instead of the project target framework '.NETStandard,Version=v2.0'. This package may not be fully compatible with your project. So what did I do wrong? I wasn’t really thinking when creating my new project and my muscle memory made me click on a .NET Standard Class library. This is of course a great choice if you just want to build a library, but not
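A test project has to target a concrete, runnable framework rather than .NET Standard; a minimal sketch of the fix in the .csproj, assuming .NET Core 3.1 as the target of that era:

    <PropertyGroup>
      <TargetFramework>netcoreapp3.1</TargetFramework>
    </PropertyGroup>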

ASP.NET Core–Get raw URL

To handle a specific use case I had to capture the current URL in ASP.NET Core. I found a lot of outdated answers on the Internet, so here is the correct way in case I forget (which I certainly will): In the Microsoft.AspNetCore.Http.Extensions assembly a static UriHelper class exists that allows you to get the URL from an HttpRequest object: var url = UriHelper.GetEncodedUrl(httpContext.Request); This GetEncodedUrl() method can also be used as an extension method directly on the HttpRequest (don’t forget to import the namespace): var url = httpContext.Request.GetEncodedUrl(); Maybe it helps you too!
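A minimal sketch putting it together inside a hypothetical controller action:

    using Microsoft.AspNetCore.Http.Extensions;
    using Microsoft.AspNetCore.Mvc;

    public class DemoController : Controller
    {
        public IActionResult Index()
        {
            // Returns the fully encoded request URL: scheme, host, path and query string
            var url = HttpContext.Request.GetEncodedUrl();
            return Content(url);
        }
    }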

SQL Server Query Store–How to use it in older SQL versions

One of the (not-so-hidden) gems in SQL Server is the Query Store. Unfortunately you need SQL Server 2016 or higher to be able to use this feature. The good news is that there is an alternative for older SQL Server versions: OpenQueryStore. OpenQueryStore (OQS) is a collection of scripts that add Query Store-like functionality to pre-SQL Server 2016 instances! OQS is being built from the ground up to allow all versions and editions of SQL Server from 2005 up to and including 2014 to have Query Store-like functionality. The data collection, retention and cleanup will be easily configurable to allow for complete control of the OQS data storage. Installation Download the latest release of Open Query Store from the releases page and unzip the file. Open a PowerShell console and navigate to the location of the unzipped files. Copy the command below, changing the values of <Instance>, <dbName> and <path> and OQ

Dotnet format

I find code consistency important. Naming conventions, code formatting, … should all be aligned to make the code readable and consistent. In .NET you can enforce this consistency through the .editorconfig file. (If you don’t have one in your projects, please stop reading and go add one first.) This is not the first time I’m mentioning the .editorconfig file: https://bartwullems.blogspot.com/2017/04/visual-studio-2017editorconfig.html https://bartwullems.blogspot.com/2019/10/visual-studiogenerate-editorconfig-file.html https://bartwullems.blogspot.com/2019/12/editorconfig-let-private-fields-start.html https://bartwullems.blogspot.com/2019/10/visual-studio-2019code-cleanup.html Today I want to take it one step further and enforce the coding style through our build pipeline. We’ll do this through the dotnet CLI tool dotnet-format. Let’s first install it: dotnet tool update -g dotnet-format Now you can browse to your solution or project folder and invoke the tool u
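A minimal sketch of how this could look in a build pipeline; the --check flag existed in the dotnet-format versions of that era (newer releases renamed it to --verify-no-changes), so treat the exact flag as version-dependent:

    # install the tool, as in the post
    dotnet tool update -g dotnet-format

    # fail the build when files don't match the configured code style
    dotnet format --check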

Pluralsight–Stay home and skill up for free

With the current lockdown in most countries around the world, a lot of people have to work from home. To keep growing your skills, Pluralsight is offering free access during the entire month of April. Anyone who does not have a current subscription to Pluralsight can take advantage of this offer. No credit card is required to sign up and there will be no obligation beyond April. Go here to subscribe and enjoy your free access in April.

Expose Kibana to the outside world

By default Kibana is configured to only be accessible from ‘localhost’. If you want to expose it outside your server, you’ll have to update the configuration. Go to the config folder and open the kibana.yml file. Find the following section: # Specifies the address to which the Kibana server will bind. IP addresses and host names are both valid values. # The default is 'localhost', which usually means remote machines will not be able to connect. # To allow connections from remote users, set this parameter to a non-loopback address. # server.host: "localhost" Uncomment the ‘server.host’ setting and specify a non-loopback address. An example: server.host: "0.0.0.0" Don’t forget to also open up the necessary ports in your firewall (by default port 5601).

Installing Kibana as a windows service

For Kibana (part of the ELK stack) no out-of-the-box script exists to install and run it as a Windows service. (This in contrast to ElasticSearch, which has a batch file that allows you to install it as a Windows service.) As a workaround you can use NSSM, the Non-Sucking Service Manager. With NSSM you can take any executable and run it as a Windows service. Here are the steps to use it: Download NSSM. Extract the zip and put the nssm.exe executable in a location of your choice. Run nssm install <servicename>, e.g. nssm install kibana This will open up a configuration window where you can specify the executable you want to run and configure some other service-related settings. Click on Install service.

ElasticSearch– was created with version [5.3.0] but the minimum compatible version is [6.0.0-beta1]. It should be re-indexed in Elasticsearch 6.x before upgrading to 7.6.1.

As mentioned in some of the previous posts, I am migrating an ‘old’ 5.3 instance of ElasticSearch to 7.6.1. In my first (too optimistic) attempt I migrated directly to ElasticSearch 7.6.1. When starting the ElasticSearch cluster this resulted in the following error message:

[myindex/gTTtwHT3ShiAz-eR94eKmw]] was created with version [5.3.0] but the minimum compatible version is [6.0.0-beta1]. It should be re-indexed in Elasticsearch 6.x before upgrading to 7.6.1.
    at org.elasticsearch.cluster.metadata.MetaDataIndexUpgradeService.checkSupportedVersion(MetaDataIndexUpgradeService.java:113)
    at org.elasticsearch.cluster.metadata.MetaDataIndexUpgradeService.upgradeIndexMetaData(MetaDataIndexUpgradeService.java:87)
    at org.elasticsearch.gateway.GatewayMetaState.upgradeMetaData(GatewayMetaState.java:240)
    at org.elasticsearch.gateway.GatewayMetaState.upgradeMetaDataForNode(GatewayMetaState.java:223)
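The excerpt cuts off before the fix, but the error itself points the way: the index has to be re-created on a 6.x cluster first. A minimal sketch of that step using ElasticSearch's standard _reindex API, with hypothetical index names:

    POST _reindex
    {
      "source": { "index": "myindex" },
      "dest": { "index": "myindex-reindexed" }
    }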