

The paved road

As an architect, I want to give my teams as much freedom as possible and trust them to take responsibility. Of course, this should be balanced with the business and architectural goals. So how can you achieve this? Let me explain how I like to tackle this...

The paved road

The way I like to lead my teams in the right direction is through a ‘paved road’. This means I provide them with a default stack that is easy to use, with a lot of benefits, tools, documentation, support, … that help them during their day-to-day job. They can still go off-road if they want to, but I have found it is a good way to create as much alignment as possible without the teams losing their autonomy. It is also a good way to avoid ‘ivory tower architecture’, and it leaves space for experimentation and improvements.

Some things I typically provide as part of the paved road:

- A default application architecture: a template solution, starter kit or code generator that helps you to set up a default
Recent posts

Get number of milliseconds since Unix epoch

I'm having fun creating a small open-source project (more about that later). In a part of this project I need to integrate with an existing API. Here is (part of) the JSON schema that describes the data contract: As you can see, I need to specify a timestamp value which should be provided as a number. The description adds some extra details: A number representing the milliseconds elapsed since the UNIX epoch. Mmmh. The question is, first of all, what is the UNIX epoch, and second, how can I generate this number in C#? Let’s find out!

The UNIX epoch

The Unix epoch is the time 00:00:00 UTC on 1 January 1970. Why this date? No clue, it seems to be just an arbitrary date. It is used to calculate the Unix time. If you want to learn more, check out Wikipedia.

Get number of milliseconds since Unix epoch in C#

Now that we know what the UNIX epoch is, what is the best way to calculate the number of milliseconds since the Unix epoch in C#? You can start to calculate this
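The excerpt is cut off before the code, but for reference, modern .NET ships this conversion out of the box via DateTimeOffset.ToUnixTimeMilliseconds(). A minimal sketch:

```csharp
using System;

class EpochDemo
{
    static void Main()
    {
        // Built-in since .NET Framework 4.6 / .NET Core 1.0:
        // milliseconds elapsed since 1970-01-01T00:00:00Z.
        long ms = DateTimeOffset.UtcNow.ToUnixTimeMilliseconds();

        // Manual equivalent: subtract the epoch and take the TimeSpan's milliseconds.
        // DateTime.UnixEpoch is available from .NET Core 2.1 onwards.
        long manual = (long)(DateTime.UtcNow - DateTime.UnixEpoch).TotalMilliseconds;

        Console.WriteLine($"{ms} ~ {manual}");
    }
}
```

Both values agree to within a millisecond or so; the one-liner is the idiomatic choice.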

RabbitMQ Streams–Reliable Consumers

Last week I introduced RabbitMQ Streams and how you can produce and consume streams through the RabbitMQ.Stream.Client in .NET. Yesterday I showed how you can improve and simplify producing messages by using a Reliable Producer. Today I want to introduce its counterpart on the consumer side: the Reliable Consumer.

Introducing Reliable Consumers

The Reliable Consumer builds on top of the Consumer and adds the following features:

- Auto-reconnect in case of disconnection
- Auto restart consuming from the last offset
- Handle the metadata update

Auto-reconnect

The Reliable Consumer will try to restore the TCP connection when the consumer is disconnected for some reason.

Auto restart consuming from the last offset

The Reliable Consumer will restart consuming from the last stored offset, so you don’t have to store and query the last offset yourself.

Handle the metadata update

If the stream topology changes (e.g. the stream is deleted or a follower is added/removed), the client receiv
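To make the features above concrete, here is a sketch of wiring up a Reliable Consumer with the RabbitMQ.Stream.Client of the time. The stream and reference names are hypothetical, and the exact type and delegate signatures vary between client releases, so treat this as an outline rather than copy-paste code:

```csharp
using System;
using System.Text;
using System.Threading.Tasks;
using RabbitMQ.Stream.Client;
using RabbitMQ.Stream.Client.Reliable;

var system = await StreamSystem.Create(new StreamSystemConfig());

// The Reference name identifies this consumer on the server, so the
// last committed offset can be looked up again after a restart.
var consumer = await ReliableConsumer.CreateAsync(new ReliableConsumerConfig
{
    StreamSystem = system,
    Stream = "invoices",            // hypothetical stream name
    Reference = "invoice-consumer", // stable name used for offset tracking
    MessageHandler = async (rawConsumer, context, message) =>
    {
        // Reconnects and offset restarts are handled for you; this
        // delegate only has to process the message payload.
        Console.WriteLine(Encoding.UTF8.GetString(message.Data.Contents.ToArray()));
        await Task.CompletedTask;
    }
});
```

This needs a running RabbitMQ node with the stream plugin enabled, so there is no meaningful way to run it standalone.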

RabbitMQ Streams–Reliable producers

Last week I introduced RabbitMQ Streams and how you can produce and consume streams through the RabbitMQ.Stream.Client in .NET. The default Producer is really low-level and leaves a lot of things to be implemented by us. For example, we have to increment the PublishingId ourselves with every Send() operation. Let’s find out how we can improve this through Reliable Producers.

Introducing Reliable Producers

The Reliable Producer builds on top of the Producer and adds the following features:

- Provide the publishingID automatically
- Auto-reconnect in case of disconnection
- Trace sent and received messages
- Invalidate messages
- Handle the metadata update

Provide the publishingID automatically

A Reliable Producer retrieves the last publishingID for the given producer name. This means it becomes important to choose a good reference value.

Auto-reconnect

The Reliable Producer will try to restore the TCP connection when the Producer is disconnected
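As an illustration of the list above, creating a Reliable Producer looked roughly like this with the client version of the time. The stream and reference names are hypothetical and signatures differ between releases, so this is a sketch, not the definitive API:

```csharp
using System.Text;
using System.Threading.Tasks;
using RabbitMQ.Stream.Client;
using RabbitMQ.Stream.Client.Reliable;

var system = await StreamSystem.Create(new StreamSystemConfig());

// The Reference is the producer name used to retrieve the last
// publishingID, so it must stay stable across restarts.
var producer = await ReliableProducer.CreateAsync(new ReliableProducerConfig
{
    StreamSystem = system,
    Stream = "invoices",            // hypothetical stream name
    Reference = "invoice-producer", // stable producer name
    ConfirmationHandler = confirmation =>
    {
        // Called when the broker confirms (or times out) a message batch.
        return Task.CompletedTask;
    }
});

// Note: no publishingId parameter - the Reliable Producer assigns it.
await producer.Send(new Message(Encoding.UTF8.GetBytes("hello")));
```

Compare this with the low-level Producer, where every Send() call takes an explicitly incremented PublishingId.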

Azure meets Chaos Monkey–Chaos Studio

Maybe you have heard about the Chaos Monkey, and later the Simian Army, that Netflix introduced to check the resiliency of their AWS systems. These tools are part of a concept called Chaos Engineering. The principle behind Chaos Engineering is a very simple one: since your software is likely to encounter hostile conditions in the wild, why not introduce those conditions while (and when) you can control them, and deal with the fallout then, instead of at 3 a.m. on a Sunday?

Azure Chaos Studio

Time to introduce Azure Chaos Studio, a managed service that uses chaos engineering to help you measure, understand, and improve the resilience of your cloud applications and services. With Chaos Studio, you can orchestrate safe, controlled fault injection on your Azure resources. Chaos experiments are the core of Chaos Studio. A chaos experiment describes the faults to run and the resources to run against. You can organize faults to run in parallel or in sequence, depending on your needs. Let’s give

Improve how you architect web apps in React

I recently discovered this website. On it you can find a lot of patterns, tips and tricks for improving how you architect web applications (in React). It aims to be a catalog of patterns (for increasing awareness) rather than a checklist (what you must do). Keep in mind, design patterns are descriptive, not prescriptive. They can guide you when facing a problem other developers have encountered many times before, but they are not a blunt tool for jamming into every scenario. The creators of the website have bundled a lot of these patterns in a free e-book.

RabbitMQ Streams - No such host is known

Yesterday I talked about RabbitMQ Streams, a new persistent and replicated data structure in RabbitMQ 3.9 which models an append-only log with non-destructive consumer semantics. I demonstrated how you can build a small example application in C# to test this stream. I first tried this code against a local cluster I had running in a Docker container (check my repo if you want a Dockerfile where this plugin is already enabled). At first this failed with the following error message:

System.Net.Sockets.SocketException: No such host is known

In my configuration you can see that I’m pointing to the local loopback address, which should be localhost. Let’s open the debugger and see what is going on… When I looked at the connection settings I noticed the following: the advertised host is not ‘localhost’ but a random string. This is the random name assigned to the node in my cluster. To get rid of this pr
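The excerpt stops before the fix, but the usual remedy (per the rabbitmq_stream plugin documentation) is to make the node advertise a host and port the client can actually resolve, in rabbitmq.conf:

```ini
# rabbitmq_stream plugin settings: advertise a resolvable host/port
# instead of the node's internal (random) container name.
stream.advertised_host = localhost
stream.advertised_port = 5552
```

Alternatively, the .NET client lets you plug in an address resolver on the StreamSystemConfig to map the advertised address back to a reachable one; check the client documentation for the exact property in your version.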