Posts

Showing posts from September, 2023

Work hard, play hard?

Throughout your career, you may have heard the phrase “work hard, play hard.” While I like the idea behind this phrase, finding a healthy balance between career-related and personal goals, I don't like the phrase itself. It gives the impression that work and play are two opposing forces that should be balanced. I don’t think that is correct. Dr. Stuart Brown states it quite nicely: “The opposite of play is not work; the opposite of play is depression.” For me this means that any kind of work should have an element of play; this fosters creativity, makes your job more compelling and helps you to be more productive. So my invitation to you before you start your weekend is to allow some goofiness in your work. Enjoy!

AI and developer productivity

With the introduction of AI-powered code completion tools like GitHub Copilot, there is a lot of focus on developer productivity and how these kinds of tools could/should/will improve it. Of course we are doing our own experiments, and although it is hard to prove scientifically, our developers claim to feel more productive. However, when I ask for more details, they mostly talk about how these tools help them write code faster. Although that statement is certainly correct, I think we still have to tap into the real productivity win that these kinds of tools can offer. What do I mean? Let me explain! A few years ago Microsoft conducted a study investigating what developers spend most of their time on. If you look at the results, developers spend most of their time reading and understanding code, not writing new code. Remark: this is in line with my own experience. So if these tools really want to improve developer productivity, those are the scenarios to focus on. GitHub Copilot…

VS Code–Share your settings using profiles

After working a long time on the same laptop, it was time again to switch to a new machine. This can be a cumbersome experience as you need to copy over all your data, install all your favorite apps and reconfigure everything. Luckily I had already created a WinGet script a long time ago that simplifies the installation process, and most of my information is stored on my OneDrive. However, I still needed to reconfigure my VS Code instance: I have tweaked a lot of settings, have a long list of extensions installed and have adjusted the UI layout. The good news is that I don’t have to do all this work myself, as VS Code supports the concept of profiles. VS Code Profiles let you create sets of customizations and quickly switch between them or share them with others. After installing VS Code, a Default Profile is created. As you modify settings, install extensions, or change the UI layout by moving views, these customizations are tracked in the Default Profile. To create a new profile, you can use the File…
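As a small aside, recent VS Code versions also let you launch with a specific profile from the command line; a minimal sketch (the profile name "Work" is illustrative):

```
code --profile "Work" .
```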

Angular Cache

Last week I got a call from our operations team indicating that one of our build server disks was filling up. While investigating the root cause, I noticed the .angular folder in each of our Angular frontend applications. I had no idea what this folder was for, so let us find out together in this blog post… TL;DR: The .angular folder is used by the Angular CLI to cache previous builds, reducing build operations and improving build time. The .angular folder appeared in version 13 and is used as a disk cache by the Angular CLI to save a number of cacheable operations on disk by default. When you re-run the same build, the build system restores the state of the previous build and re-uses previously performed operations, which decreases the time needed to build and test your applications and libraries. If we look at what is inside, we see 2 things: an angular-webpack folder containing the binary files and a babel-webpack folder containing all the text files…
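As a sketch, assuming a recent Angular CLI, the cache behaviour can be tuned through the cli.cache section of angular.json, for example to restrict caching to local development so build servers don't accumulate a cache:

```json
{
  "cli": {
    "cache": {
      "enabled": true,
      "environment": "local",
      "path": ".angular/cache"
    }
  }
}
```

Recent CLI versions also offer the ng cache commands (info, clean, enable, disable) to inspect or clear the cache manually.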

.NET 8–Using reflection in an AOT-enabled application

With the upcoming .NET 8 release, Microsoft is putting a lot of effort into further improving AOT (Ahead-of-Time) compilation. Using Native AOT instead of the JIT (Just-in-Time) compiler offers the following benefits: Minimized disk footprint: when publishing using Native AOT, a single executable is produced containing just the code from external dependencies that is needed to support the program; the reduced executable size can lead to smaller container images in containerized deployment scenarios and reduced deployment time. Reduced startup time: Native AOT applications can show reduced start-up times, which means the app is ready to service requests quicker, and container orchestrators can manage the transition from one version of the app to another more smoothly. Reduced memory demand: Native AOT apps can have reduced memory demands, depending on the work done by the app. Reduced memory…
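As a minimal sketch of how publishing with Native AOT is enabled (the runtime identifier in the publish command is illustrative), you add a single property to the project file:

```xml
<PropertyGroup>
  <!-- Compile the app ahead of time to native code at publish -->
  <PublishAot>true</PublishAot>
</PropertyGroup>
```

Then publish for a specific runtime, for example with dotnet publish -r linux-x64 -c Release.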

Auto delete files in Azure Blob storage

Today I was designing a solution where I had to store files for a certain amount of time in Azure Blob Storage. I first thought I had to design and build this feature myself, but I discovered that it is available out of the box through ‘Lifecycle Management’. Therefore: Open your Storage account in the Azure portal. Go to Lifecycle Management in the Data Management section. Click on Add a rule. The Add a rule wizard is loaded. Specify a rule name and select the applicable blob types. Click on Next. Specify the applicable conditions and the corresponding action. Click on Add to create the rule. That’s it! Be aware that it can take up to 48 hours before a new or updated rule goes into effect. Remark: It is also possible to directly use a lifecycle management policy file. This is a collection of rules in a JSON document. More information: Optimize costs by automatically managing the data lifecycle - Azure Storage | Microsoft Learn
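A sketch of such a policy file (the rule name and the 30-day retention period are illustrative): this rule deletes block blobs 30 days after they were last modified.

```json
{
  "rules": [
    {
      "enabled": true,
      "name": "delete-after-30-days",
      "type": "Lifecycle",
      "definition": {
        "filters": {
          "blobTypes": [ "blockBlob" ]
        },
        "actions": {
          "baseBlob": {
            "delete": { "daysAfterModificationGreaterThan": 30 }
          }
        }
      }
    }
  ]
}
```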

Improve the build speed on your build server using npm ci

While looking at some of our Azure Pipelines, I noticed that some of our builds were really slow. So I opened up one of the slow builds and noticed that most time was spent installing npm packages. As we were using a local build agent, a first improvement you can make is to change the Clean setting to False so you don’t have to start with a clean working directory every time. Another improvement is to switch from npm install to npm ci. npm ci was introduced a few years ago and promised massive improvements to both the performance and reliability of builds for continuous integration / continuous deployment processes. It bypasses a package’s package.json to install modules from a package’s lockfile. This not only makes npm ci fast (sometimes twice as fast as npm install), it also helps to ensure reproducible builds: you get exactly what you expect on every install.
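A minimal sketch of what the switch looks like in an Azure Pipelines YAML definition (the display name is illustrative):

```yaml
steps:
  # npm ci installs strictly from package-lock.json (and removes any
  # existing node_modules first), giving fast, reproducible CI installs
  - script: npm ci
    displayName: 'Install npm packages'
```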

The fundamental theorem of Agile Software Development

J.B. Rainsberger talks about the Fundamental Theorem of Agile Software Development. In his talk he refers to Fred Brooks’ Mythical Man-Month (a must read!), accidental vs essential complexity, the cost of a feature, how to do estimations taking both types of complexity into account, reducing accidental complexity using test-driven development, the formula cost = f(g(e), h(a)) = g(e) + h(a) (the cost of a feature as a function of its essential complexity e and its accidental complexity a), and why refactoring is necessary to make Agile development work, all of this in 7 minutes and 26 seconds (also the name of his talk). A must watch!

Dapr workshop

Dapr (Distributed Application Runtime) is a free and open-source runtime system designed to support cloud-native and serverless computing. It provides APIs that simplify microservice connectivity, enabling developers to write resilient and secure microservices. Dapr abstracts away the complexity of common challenges developers encounter regularly when building distributed applications, such as service discovery, message broker integration, encryption, observability, and secret management. It runs as a sidecar wherever your application runs, whether hosted on Kubernetes or VMs, deployed in the cloud, on-premises or at the edge. Although the documentation is great, it can still be a challenge to start building your first Dapr-enabled application. Therefore I can recommend the Dapr workshop; it contains several hands-on assignments that will introduce you to Dapr. You will start with a simple microservices application that contains a number of services. In each assignment, you will ch…

.NET 8–Keyed/Named Services

A feature that a lot of IoC container libraries support but that was missing from the default DI container provided by Microsoft is support for Keyed or Named Services. This feature allows you to register the same type multiple times under different names, so you can resolve a specific instance depending on the circumstances. Although there is some controversy about whether supporting this feature is a good idea, it certainly can be handy. To support this feature, a new interface IKeyedServiceProvider was introduced in .NET 8, providing 2 new methods on our ServiceProvider instance: object? GetKeyedService(Type serviceType, object? serviceKey); object GetRequiredKeyedService(Type serviceType, object? serviceKey); To use it, we need to register our service using one of the new extension methods. Resolving the service can be done either through the FromKeyedServices attribute, or by injecting the IKeyedServiceProvider interface and calling the GetRequiredKeyedService…
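A minimal sketch of how this fits together in .NET 8 (the INotificationService interface and its implementations are illustrative):

```csharp
using Microsoft.Extensions.DependencyInjection;

var services = new ServiceCollection();

// Register the same service type twice, each under a different key
services.AddKeyedSingleton<INotificationService, EmailNotificationService>("email");
services.AddKeyedSingleton<INotificationService, SmsNotificationService>("sms");

var provider = services.BuildServiceProvider();

// Resolve a specific instance by key
var sms = provider.GetRequiredKeyedService<INotificationService>("sms");
sms.Notify("Hello from the keyed container");

public interface INotificationService
{
    void Notify(string message);
}

public class EmailNotificationService : INotificationService
{
    public void Notify(string message) => Console.WriteLine($"Email: {message}");
}

public class SmsNotificationService : INotificationService
{
    public void Notify(string message) => Console.WriteLine($"SMS: {message}");
}
```

In a DI-activated class you can alternatively annotate a constructor parameter with [FromKeyedServices("sms")] to get the same instance injected.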

Entity Framework Core–DateOnly and TimeOnly

Last week I had the pleasure of working with a team that started using Entity Framework Core for the first time. They had a lot of experience using NHibernate, so the concept of an ORM was not new. But it was interesting to see which things are obvious when switching to EF Core and which are not. After a few hiccups the team was finally on a roll and started adding more and more features. A (last?) question I got was about the usage of the DateOnly and TimeOnly types. These types were introduced in .NET 6 and are a welcome addition next to the DateTime type. The question was, of course, whether we could use these types in combination with EF Core. These types have been supported by several database providers (e.g. SQLite, MySQL, and PostgreSQL) since their introduction. Unfortunately for SQL Server, we had to wait for a recent release of the Microsoft.Data.SqlClient package. So starting from EF8, DateOnly and TimeOnly are supported for SQL Server as well. Remark: If yo…
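A minimal sketch of an entity using both types (the Appointment entity is illustrative); with EF Core 8 on SQL Server, DateOnly maps to the date column type and TimeOnly to time:

```csharp
public class Appointment
{
    public int Id { get; set; }
    public DateOnly Day { get; set; }       // stored as 'date' on SQL Server
    public TimeOnly StartTime { get; set; } // stored as 'time' on SQL Server
}
```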

Entity Framework Core–Data is null

Last week I had the pleasure of working with a team that started using Entity Framework Core for the first time. They had a lot of experience using NHibernate, so the concept of an ORM was not new. But it was interesting to see which things are obvious when switching to EF Core and which are not. I thought the team was finally on track after a few bumps in the road, when they came back to me with another problem. When trying to fetch a set of entities, the query failed with the following error message: System.Data.SqlTypes.SqlNullValueException: Data is Null. This method or property cannot be called on Null values. This was the object they were trying to fetch, and here is the LINQ query they were using. I have to admit that it took me a while before I discovered why this failed. The reason becomes obvious if you take a look inside the csproj file: this project had Nullable Reference Types enabled. If this is enabled and you have a required property that shouldn’t b…
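A minimal sketch of the situation (the Product entity is illustrative): with Nullable Reference Types enabled, EF Core treats a non-nullable property as required, and materializing a row where that column is NULL throws the SqlNullValueException above.

```csharp
public class Product
{
    public int Id { get; set; }

    // Non-nullable: EF Core assumes the column never contains NULL.
    // A NULL value in the database causes 'Data is Null' at read time.
    public string Name { get; set; } = null!;

    // A nullable property matches a column that allows NULL
    public string? Description { get; set; }
}
```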

Entity Framework Core–Use separate mapping files

Last week I had the pleasure of working with a team that started using Entity Framework Core for the first time. They had a lot of experience using NHibernate, so the concept of an ORM was not new. But it was interesting to see which things are obvious when switching to EF Core and which are not. The start was a little bit of trial and error, but the next time they contacted me it wasn’t to solve another problem. Progress! Instead they had a question… In NHibernate the mapping between your classes and database tables is typically done through HBM files (if you like XML) or through code. In EF Core you can override the OnModelCreating method on the DbContext. As the team was used to having separate files for the mapping code, they were wondering if this was possible with EF Core as well. Luckily, the answer is yes. To do that you need to implement the IEntityTypeConfiguration interface and change the OnModelCreating implementation…
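A minimal sketch of this approach (the Product entity and its mapping are illustrative):

```csharp
using Microsoft.EntityFrameworkCore;
using Microsoft.EntityFrameworkCore.Metadata.Builders;

// The mapping for one entity lives in its own configuration class
public class ProductConfiguration : IEntityTypeConfiguration<Product>
{
    public void Configure(EntityTypeBuilder<Product> builder)
    {
        builder.ToTable("Products");
        builder.HasKey(p => p.Id);
        builder.Property(p => p.Name).IsRequired().HasMaxLength(200);
    }
}

public class NorthwindDbContext : DbContext
{
    public DbSet<Product> Products => Set<Product>();

    protected override void OnModelCreating(ModelBuilder modelBuilder)
    {
        // Apply a single configuration class...
        modelBuilder.ApplyConfiguration(new ProductConfiguration());

        // ...or scan the assembly for all IEntityTypeConfiguration implementations
        modelBuilder.ApplyConfigurationsFromAssembly(typeof(ProductConfiguration).Assembly);
    }
}

public class Product
{
    public int Id { get; set; }
    public string Name { get; set; } = null!;
}
```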

Entity Framework Core - System.ArgumentException

Last week I had the pleasure of working with a team that started using Entity Framework Core for the first time. They had a lot of experience using NHibernate, so the concept of an ORM was not new. But it was interesting to see which things are obvious when switching to EF Core and which are not. Yesterday I shared the first problem they encountered. Shortly after I explained a possible solution, they contacted me again with a new error message: System.ArgumentException: 'AddDbContext' was called with configuration, but the context type 'NorthwindDbContext' only declares a parameterless constructor. This means that the configuration passed to 'AddDbContext' will never be used. If configuration is passed to 'AddDbContext', then 'NorthwindDbContext' should declare a constructor that accepts a DbContextOptions<NorthwindDbContext> and must pass it to the base constructor for DbContext. So what was the problem this time? They clearly didn’t we…
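The fix the error message suggests looks like this (a sketch, reusing the NorthwindDbContext name from the error):

```csharp
using Microsoft.EntityFrameworkCore;

public class NorthwindDbContext : DbContext
{
    // Accept the options configured via AddDbContext
    // and pass them to the base DbContext constructor
    public NorthwindDbContext(DbContextOptions<NorthwindDbContext> options)
        : base(options)
    {
    }
}
```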

Entity Framework Core - No database provider has been configured

Last week I had the pleasure of working with a team that started using Entity Framework Core for the first time. They had a lot of experience using NHibernate, so the concept of an ORM was not new. But it was interesting to see which things are obvious when switching to EF Core and which are not. The first time they contacted me, they had the following code in place: a DbContext (I switched the code to a simpler example) and a minimal configuration in the Program.cs file. When executing this code, it resulted in the following error message: No database provider has been configured for this DbContext. A provider can be configured by overriding the DbContext.OnConfiguring method or by using AddDbContext on the application service provider. If AddDbContext is used, then also ensure that your DbContext type accepts a DbContextOptions<TContext> object in its constructor and passes it to the base constructor for DbContext. I think this error message is quite helpful, but the…
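A minimal sketch of registering the context with a provider in Program.cs (the connection string name is illustrative; UseSqlServer comes from the Microsoft.EntityFrameworkCore.SqlServer package):

```csharp
builder.Services.AddDbContext<NorthwindDbContext>(options =>
    options.UseSqlServer(
        builder.Configuration.GetConnectionString("Northwind")));
```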

NuGet - Package Source Mappings

With NuGet 6.0, a new feature called Package Source Mapping was introduced. The goal of this feature is to help improve security by safeguarding your software supply chain. What is Package Source Mapping? To explain what Package Source Mapping is, I should start by explaining the concept of a Package Source. In NuGet, a Package Source is the source where NuGet searches for packages. This can be a public source (like nuget.org) and/or a private source (like Azure Artifacts or a local disk). You can add a package source by executing the following command: dotnet nuget add source <source> -n MySource It is possible to define multiple sources. By default, when multiple sources are defined, NuGet will scan all configured package sources to find and restore a specific package. This introduces a risk, as you are not sure where a specific package is coming from. This risk can be mitigated by using Package Source Mapping (PSM). PSM allows you to centrally declare which source each package…
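A sketch of what such a mapping looks like in nuget.config (the source names, feed URL and package patterns are illustrative): packages matching Contoso.* may only come from the private feed, everything else from nuget.org.

```xml
<configuration>
  <packageSources>
    <add key="nuget.org" value="https://api.nuget.org/v3/index.json" />
    <add key="MySource" value="https://pkgs.dev.azure.com/contoso/_packaging/MySource/nuget/v3/index.json" />
  </packageSources>
  <packageSourceMapping>
    <!-- Internal packages may only be restored from the private feed -->
    <packageSource key="MySource">
      <package pattern="Contoso.*" />
    </packageSource>
    <!-- Everything else comes from nuget.org -->
    <packageSource key="nuget.org">
      <package pattern="*" />
    </packageSource>
  </packageSourceMapping>
</configuration>
```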

Null conditional await

While reviewing a codebase I noticed the following C# snippet. What we are trying to do here is simplify the null check on the transaction object by using the null-conditional operator (?.). However, this doesn’t work as expected when using async/await. If we executed the code as-is, we would get a NullReferenceException when the transaction object is null, exactly the thing we are trying to avoid. This is something you can already notice if you look at the compiler warning we get. The reason is that our await call, even if there's no result value, expects to get a Task object back. Instead the result is null, and the compiler-generated async/await code internally can't deal with the null value. Hence the NullReferenceException when a Task value is expected. If you really want to use the null-conditional operator, you can rewrite the code as shown below. More information: Member access and null-conditional operators and expressions…
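A sketch of the problem and two alternative fixes (assuming the transaction exposes a Task-returning CommitAsync, as System.Data.Common.DbTransaction does; the two fixes are alternatives, not meant to run together):

```csharp
using System.Data.Common;

async Task CommitIfPresentAsync(DbTransaction? transaction)
{
    // Broken: transaction?.CommitAsync() evaluates to a null Task when
    // transaction is null, and awaiting a null Task throws NullReferenceException:
    // await transaction?.CommitAsync();

    // Fix 1: an explicit null check
    if (transaction is not null)
    {
        await transaction.CommitAsync();
    }

    // Fix 2: keep the null-conditional operator, but fall back to a completed task
    await (transaction?.CommitAsync() ?? Task.CompletedTask);
}
```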

The 2 rules of software architecture

While cleaning up my desktop I found 2 screenshots I had taken during a presentation I watched online. I don't know what the original source was (if you know which presentation this is from, please let me know), but I found them too good not to share. They state 2 rules that are applicable for every software architect: Rule #1 – Every decision has its price. No decision is free. Rule #2 – A decision can only be evaluated with respect to its context.

Build your UI as a finite state machine

As an architect I’m regularly involved in code reviews. One of the lessons I learned from reviewing so many codebases is that most codebases start quite well-defined and clean. It is only after an accumulating set of changes that the code evolves into a mess and turns into spaghetti. One of the parts of every system that is impacted the most by changes is the user interface. What starts as a simple set of UI components quickly evolves into an ever-growing set of changing conditions that impact the UI state. An example: an original requirement stating that a ‘Save’ button is disabled until all required fields are entered in a form evolves into a combination of all required fields are filled in AND the user has a certain role AND there is no application error AND we are not loading some data AND… What typically also starts to happen is that the same conditions come back in multiple places. Your UI becomes harder and harder to test and you get more complex bugs that are…
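A minimal sketch of the idea in C# (the state names and view model are illustrative): instead of combining independent boolean flags all over the UI, model the form as one finite set of states and derive the UI from the current state.

```csharp
public enum FormState
{
    Loading,
    Editing,
    Submitting,
    Failed
}

public class FormViewModel
{
    public FormState State { get; private set; } = FormState.Loading;
    public bool AllRequiredFieldsFilled { get; set; }

    // The 'Save' rule lives in one place and depends on a single state,
    // not on an ever-growing combination of boolean conditions
    public bool CanSave => State == FormState.Editing && AllRequiredFieldsFilled;

    // Explicit transitions make the possible UI states easy to test
    public void DataLoaded() => State = FormState.Editing;
    public void Submit() => State = FormState.Submitting;
    public void SubmitFailed() => State = FormState.Failed;
}
```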

Azure Pipelines - Batching your CI builds

I’m a big fan of Continuous Integration and trunk-based development. As a result, during a normal day a lot of code is pushed to the trunk (master, main,… whatever you call it) branch. Pushing changes to this branch triggers a CI build, and because a lot of changes are pushed during the day, a lot of builds are triggered. If your build time is small this is OK, but when your build time starts to increase, it can become interesting to look at ways to improve this process. A possible solution is to start batching your CI builds in Azure DevOps. With batching enabled, no new CI build is triggered while a previous build is still in progress. To explain this in a little more detail: when multiple changes are pushed in short intervals, only 2 CI builds will be triggered, one when the first change is pushed and one after the first CI build has completed (of course this all depends on the build time of your CI build and the exact interval between your pushes, but you get the idea).
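Enabling this is a single property on the CI trigger in your pipeline YAML (the branch name is illustrative):

```yaml
trigger:
  batch: true  # don't start a new CI build while one is already running
  branches:
    include:
      - main
```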

.NET 6 - Async scopes

Services in .NET Core can be registered using the built-in dependency injection functionality with different lifetimes: Transient, Scoped and Singleton. Scoped services are created for a specific period of time linked to a scope. For example, when using ASP.NET Core, a scoped service lifetime is coupled to a client request. An important feature of scoped services is that they are disposed when the scope is closed. It is possible to create a scope yourself using the CreateScope method on the IServiceProvider. If you run the example above, you’ll see that the Foo instance is correctly disposed after leaving the scope. So far, so good. But what about when your service implements the IAsyncDisposable interface instead? To support this scenario in a non-breaking way, a new CreateAsyncScope extension method on IServiceProvider was introduced in .NET 6. Here is an updated example using this feature. Remark: ASP.NET Core was updated as well, and implementations o…
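A minimal sketch of the updated example (the Foo service implementing IAsyncDisposable is illustrative):

```csharp
using Microsoft.Extensions.DependencyInjection;

var services = new ServiceCollection();
services.AddScoped<Foo>();
var provider = services.BuildServiceProvider();

// CreateAsyncScope returns an AsyncServiceScope that implements IAsyncDisposable,
// so DisposeAsync is awaited when the scope closes
await using (var scope = provider.CreateAsyncScope())
{
    var foo = scope.ServiceProvider.GetRequiredService<Foo>();
    // use foo...
}

public class Foo : IAsyncDisposable
{
    public ValueTask DisposeAsync()
    {
        Console.WriteLine("Foo disposed asynchronously");
        return ValueTask.CompletedTask;
    }
}
```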