
Posts

Showing posts from June, 2022

Change the default Outlook meeting length

An unfortunate consequence of COVID-19 is that I had a lot more meetings online. Because those meetings are planned remotely, it became easy to plan them back-to-back, as we no longer had to take into account that people have to physically move from one meeting room to another. This made it really difficult to regain time to think, to recharge the batteries and stay focused. In the end it became a real energy drain and had a negative impact on my productivity. To help myself (and my team) create some time between our meetings, I changed the default meeting length in Outlook. Open Outlook. Go to File –> Options. In the Options window, go to Calendar. In the Calendar options section, put a check in the Shorten appointments and meetings checkbox. If you want, you can change the amount of time by which a meeting is shortened.

Generate C# code from a JSON schema - Part II

Yesterday I shared how to create your C# data contracts from an available JSON schema. In that post I demonstrated Quicktype as a way to generate those contracts. There was only one disadvantage when using Quicktype: it only supports Newtonsoft.Json. Today I want to have a look at an alternative that does have support for System.Text.Json. Introducing NJsonSchema After looking around on GitHub I found NJsonSchema for .NET : NJsonSchema is a .NET library to read, generate and validate JSON Schema draft v4+ schemas. The library can read a schema from a file or string and validate JSON data against it. A schema can also be generated from an existing .NET class. With the code generation APIs you can generate C# and TypeScript classes or interfaces from a schema. It offers multiple features: Read existing JSON Schemas and validate JSON data Generate JSON Schema from an existing .NET type Generate JSON Schema from sample JSON data Generate C# and TypeScript cod
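A minimal sketch of how code generation with NJsonSchema might look, assuming the NJsonSchema and NJsonSchema.CodeGeneration.CSharp packages and a hypothetical schema.json file (the file name and namespace are illustrative, not taken from the post):

```csharp
using System.IO;
using System.Threading.Tasks;
using NJsonSchema;
using NJsonSchema.CodeGeneration.CSharp;

class Program
{
    static async Task Main()
    {
        // Load the JSON schema from disk (hypothetical file name).
        var schema = await JsonSchema.FromFileAsync("schema.json");

        // Generate C# data contracts that use System.Text.Json instead of Newtonsoft.Json.
        var settings = new CSharpGeneratorSettings
        {
            Namespace = "MyProject.Contracts",          // hypothetical namespace
            JsonLibrary = CSharpJsonLibrary.SystemTextJson
        };
        var generator = new CSharpGenerator(schema, settings);

        // Write the generated classes to a file that can be added to the project.
        File.WriteAllText("Contracts.generated.cs", generator.GenerateFile());
    }
}
```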

Generate C# code from a JSON schema - Part I

I’m having fun creating a small open-source project (more about that later). In a part of this project I need to integrate with an existing API. I first started by browsing through the API documentation and used that to build my data contract classes in C#, until I discovered that a JSON schema document was provided. In this post I want to show you how you can use this JSON schema document to generate the C# classes instead of writing them from scratch. JSON schema Before I show you how to generate the C# code, I want to take a small detour and give you some more details about JSON schema itself. Let’s have a look at how it is explained on the json-schema.org website itself: JSON Schema is a vocabulary that allows you to annotate and validate JSON documents. If you have ever used SOAP services in a previous life, JSON Schema can be compared to the WSDL documentation that was available there. It gives you a concise and easy way to describe your data format, provides you all the meta
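As a purely hypothetical illustration (not taken from the post), this is roughly the kind of C# data contract a generator such as Quicktype emits for a small schema that declares an object with a required string property "name" and an integer property "age"; Quicktype targets Newtonsoft.Json, hence the JsonProperty attributes:

```csharp
using Newtonsoft.Json;

// Hypothetical generated contract for a schema describing { "name": string, "age": integer }.
public partial class Person
{
    [JsonProperty("name", Required = Required.Always)]
    public string Name { get; set; }

    [JsonProperty("age")]
    public long Age { get; set; }
}
```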

Using secrets in your unit tests

I'm having fun creating a small open-source project (more about that later). In a part of this project I need to integrate with an existing API. Of course I want to have the necessary integration tests that help me verify if the integration with this API works correctly. There is only one problem: I need to pass an API key to call this API. How can I safely use and store this API key in my tests without checking the key in as part of my source code? ASP.NET Core Secret Manager In your ASP.NET Core application itself you can use the Secret Manager tool. This tool allows you to store your secrets in a separate location from the project tree. This guarantees that the app secrets aren’t checked into source control. The Secret Manager tool hides implementation details, such as where and how the values are stored. The values are stored in a JSON file in the local machine's user profile folder. For example on Windows: %APPDATA%\Microsoft\UserSecrets\<user_secrets_id>\secret
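A minimal sketch of how this can be wired into a test project, assuming xUnit, the Microsoft.Extensions.Configuration.UserSecrets package and a hypothetical "MyApi:ApiKey" secret (the key name and class names are illustrative, not from the post):

```csharp
// One-time setup from the command line:
//   dotnet user-secrets init --project MyProject.Tests            (adds a UserSecretsId to the csproj)
//   dotnet user-secrets set "MyApi:ApiKey" "<your-key>" --project MyProject.Tests
using Microsoft.Extensions.Configuration;
using Xunit;

public class ApiIntegrationTests
{
    private readonly IConfiguration _configuration;

    public ApiIntegrationTests()
    {
        // AddUserSecrets reads the secrets.json stored under the user profile folder,
        // so the key never ends up in source control.
        _configuration = new ConfigurationBuilder()
            .AddUserSecrets<ApiIntegrationTests>()
            .Build();
    }

    [Fact]
    public void Can_read_api_key_from_user_secrets()
    {
        var apiKey = _configuration["MyApi:ApiKey"];
        Assert.False(string.IsNullOrEmpty(apiKey));
    }
}
```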

The paved road

As an architect, I want to give my teams as much freedom as possible and trust them to take responsibility. Of course this should be balanced with the business and architectural goals. So how can you achieve this? Let me explain how I like to tackle this... The paved road The way I like to lead my teams in the right direction is through a ‘paved road’. This means I’ll provide them a default stack that is easy to use with a lot of benefits, tools, documentation, support, … that helps them during their day-to-day job. They can still go off-road if they want to, but I found it is a good way to create as much alignment as possible without the teams losing their autonomy. It is also a good way to avoid the ‘ivory tower architecture’ and it leaves space for experimentation and improvements. Some things I typically provide as part of the paved road: A default application architecture: a template solution, starter kit or code generator that helps you to set up a default

Get number of milliseconds since Unix epoch

I'm having fun creating a small open-source project (more about that later). In a part of this project I need to integrate with an existing API. Here is (part of) the JSON schema that describes the data contract: As you can see I need to specify a timestamp value which should be provided as a number. The description adds some extra details: A number representing the milliseconds elapsed since the UNIX epoch. Mmmh. The question is, first of all, what is the UNIX epoch and, second, how can I generate this number in C#? Let’s find out! The UNIX epoch The Unix epoch is the time 00:00:00 UTC on 1 January 1970. Why this date? No clue, it seems to be just an arbitrary date. It is used to calculate the Unix time. If you want to learn more, check out Wikipedia. Get number of milliseconds since Unix epoch in C# Now that we know what the UNIX epoch is, what is the best way to calculate the number of milliseconds since the Unix epoch in C#? You can start to calculate this
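The excerpt is truncated here. For reference, a hedged sketch of two common ways to compute this value in C#; the post may well arrive at a different approach:

```csharp
using System;

class UnixEpochExample
{
    static void Main()
    {
        // Built-in helper, available since .NET Framework 4.6 / .NET Core 1.0.
        long millis = DateTimeOffset.UtcNow.ToUnixTimeMilliseconds();

        // Manual calculation against the epoch (1 January 1970, 00:00:00 UTC).
        var epoch = new DateTime(1970, 1, 1, 0, 0, 0, DateTimeKind.Utc);
        long manual = (long)(DateTime.UtcNow - epoch).TotalMilliseconds;

        Console.WriteLine($"{millis} / {manual}");
    }
}
```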

RabbitMQ Streams–Reliable Consumers

Last week I introduced RabbitMQ streams and how you could produce and consume streams through the RabbitMQ.Stream.Client in .NET. Yesterday I showed how you can improve and simplify producing messages by using a Reliable Producer. Today I want to introduce its counterpart on the consumer side: the Reliable Consumer. Introducing Reliable Consumers The Reliable Consumer builds on top of the Consumer and adds the following features: Auto-Reconnect in case of disconnection Auto restart consuming from the last offset Handle the metadata update Auto-Reconnect The Reliable Consumer will try to restore the TCP connection when the consumer is disconnected for some reason. Auto restart consuming from the last offset The Reliable Consumer will restart consuming from the last offset stored, so you don’t have to store and query the last offset yourself. Handle the metadata update If the stream topology changes (e.g. a stream is deleted or a follower is added/removed), the client receiv

RabbitMQ Streams–Reliable producers

Last week I introduced RabbitMQ streams and how you could produce and consume streams through the RabbitMQ.Stream.Client in .NET. The default Producer is really low-level and leaves a lot of things to be implemented by us. For example, we have to increment the PublishingId ourselves with every Send() operation. Let’s find out how we can improve this through Reliable Producers. Introducing Reliable Producers The Reliable Producer builds on top of the Producer and adds the following features: Provide publishingID automatically Auto-Reconnect in case of disconnection Trace sent and received messages Invalidate messages Handle the metadata update Provide publishingID automatically When using a Reliable Producer, it retrieves the last publishingID for the given producer name. This means that it becomes important to choose a good reference value. Auto-Reconnect The Reliable Producer will try to restore the TCP connection when the Producer is disconnected

Azure meets Chaos Monkey–Chaos Studio

Maybe you have heard about the Chaos Monkey and later the Simian Army that Netflix introduced to check the resiliency of their AWS systems. These tools are part of a concept called Chaos Engineering. The principle behind Chaos Engineering is a very simple one: since your software is likely to encounter hostile conditions in the wild, why not introduce those conditions while (and when) you can control them, and deal with the fallout then, instead of at 3am on a Sunday? Azure Chaos Studio Time to introduce Azure Chaos Studio , a managed service that uses chaos engineering to help you measure, understand, and improve your cloud application and service resilience. With Chaos Studio, you can orchestrate safe, controlled fault injection on your Azure resources. Chaos experiments are the core of Chaos Studio. A chaos experiment describes the faults to run and the resources to run against. You can organize faults to run in parallel or in sequence, depending on your needs. Let’s give

Patterns.dev - Improve how you architect web apps in React

I recently discovered the Patterns.dev website. On this site you can find a lot of patterns, tips and tricks for improving how you architect web applications (in React). Patterns.dev aims to be a catalog of patterns (for increasing awareness) rather than a checklist (what you must do). Keep in mind, design patterns are descriptive, not prescriptive. They can guide you when facing a problem other developers have encountered many times before, but are not a blunt tool for jamming into every scenario. The creators of the website have bundled a lot of these patterns in a free e-book.

RabbitMQ Streams - No such host is known

Yesterday I talked about RabbitMQ Streams, a new persistent and replicated data structure in RabbitMQ 3.9 which models an append-only log with non-destructive consumer semantics. I demonstrated how you could build a small example application in C# to test this stream. I first tried this code against a local cluster I had running in a Docker container (check my repo if you want a Dockerfile where this plugin is already enabled: https://github.com/wullemsb/docker-rabbitmq ). At first this failed with the following error message: System.Net.Sockets.SocketException: No such host is known In my configuration you can see that I’m pointing to the local loopback address, which should be localhost: Let’s open the debugger and see what is going on… When I looked at the connection settings I noticed the following: You can see that the advertised host is not ‘localhost’ but a random string. This is the random name assigned to the node in my cluster. To get rid of this pr
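The excerpt is cut off before the fix. One commonly documented way to address this, an assumption on my part rather than necessarily what the post does, is to pin the advertised host and port of the stream plugin in rabbitmq.conf so that clients are redirected to a resolvable name:

```
# rabbitmq.conf (stream plugin settings; values are illustrative)
stream.advertised_host = localhost
stream.advertised_port = 5552
```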

RabbitMQ Streams

RabbitMQ has been the message broker of my choice for a long time. It has served me well over the years and I still like to use it today. Recently, I was able to add an extra reason to the list of why I like RabbitMQ when I noticed that a new feature was added in RabbitMQ 3.9: Streams. RabbitMQ Streams From the documentation: Streams are a new persistent and replicated data structure in RabbitMQ 3.9 which models an append-only log with non-destructive consumer semantics. With streams you get Kafka-like functionality in RabbitMQ without all the complexity that comes with maintaining and managing your Kafka cluster. It has been created with the following use cases in mind: Large number of subscribers; in traditional queuing we use a dedicated queue for each consumer. This becomes ineffective when we have a large number of consumers. Time-travelling; Streams allow consumers to attach at any point in the log and read from there. Performance: Streams have been
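The post itself works with the dedicated RabbitMQ.Stream.Client; as a complementary sketch (not taken from the post), a stream can also be declared over AMQP with the regular RabbitMQ.Client package by setting the x-queue-type argument, with a hypothetical stream name:

```csharp
using System.Collections.Generic;
using RabbitMQ.Client;

class DeclareStreamExample
{
    static void Main()
    {
        var factory = new ConnectionFactory { HostName = "localhost" };
        using var connection = factory.CreateConnection();
        using var channel = connection.CreateModel();

        // A stream is declared like a queue, with the queue type set to 'stream'.
        channel.QueueDeclare(
            queue: "my-first-stream",              // hypothetical stream name
            durable: true,
            exclusive: false,
            autoDelete: false,
            arguments: new Dictionary<string, object>
            {
                { "x-queue-type", "stream" },
                { "x-max-length-bytes", 2_000_000_000 } // optional size cap (~2 GB)
            });
    }
}
```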

“docker build” requires exactly 1 argument

A short post for today. It had been a while since the last time I created a Docker image through the command line. This resulted in me making a stupid mistake and ending up with a docker build error. I wanted to build a Docker image and executed the following command from the folder containing my Dockerfile: docker build -t rabbitmq-streaming This failed with the following error message: "docker build" requires exactly 1 argument. See 'docker build --help'. Did you notice my mistake? I forgot to add exactly 1 extra character: a dot. This dot is the build context; it tells Docker to use the current directory (and the Dockerfile it contains). So the correct command becomes: docker build -t rabbitmq-streaming . Happy coding!

ASP.NET Core MVC–The TypeFilterAttribute

If you have ever created your own Action Filter in ASP.NET Core, you are probably aware of the existence of the ServiceFilterAttribute. But did you know that there is also a TypeFilterAttribute? In this post, I'll explain both types and show you the differences between the two. Action Filters that are implemented as attributes and added directly to controller classes or action methods cannot have constructor dependencies provided by dependency injection (DI). This is because attributes must have their constructor parameters supplied where they're applied. Therefore ASP.NET Core provides two out-of-the-box attributes that can make your action filters DI-enabled: ServiceFilterAttribute TypeFilterAttribute The ServiceFilterAttribute The ServiceFilter attribute allows us to specify the type of our action filter and have it automatically resolved from the built-in DI container. This means that we can implement our action filter to accept dependenci
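A small sketch, using a hypothetical AuditFilter, of how the two attributes are applied. The key difference: ServiceFilterAttribute resolves the filter instance from the DI container (so the filter must be registered), while TypeFilterAttribute instantiates the type itself and only pulls its constructor dependencies from DI:

```csharp
using Microsoft.AspNetCore.Mvc;
using Microsoft.AspNetCore.Mvc.Filters;
using Microsoft.Extensions.Logging;

// A hypothetical action filter with a constructor dependency.
public class AuditFilter : IActionFilter
{
    private readonly ILogger<AuditFilter> _logger;

    public AuditFilter(ILogger<AuditFilter> logger) => _logger = logger;

    public void OnActionExecuting(ActionExecutingContext context) =>
        _logger.LogInformation("Executing {Action}", context.ActionDescriptor.DisplayName);

    public void OnActionExecuted(ActionExecutedContext context) { }
}

// ServiceFilter resolves AuditFilter from the container,
// so it must be registered, e.g. services.AddScoped<AuditFilter>().
[ServiceFilter(typeof(AuditFilter))]
public class OrdersController : Controller { }

// TypeFilter creates the AuditFilter instance itself; its constructor
// dependencies come from DI, but the filter does not need to be registered.
[TypeFilter(typeof(AuditFilter))]
public class InvoicesController : Controller { }
```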

From project to product–Microsoft DevOps Dojo

Microsoft transitioned from a large enterprise software business to a cloud company. In that sense, they are like most companies out there who struggle to move from an existing business model to a new cloud-first service model. This also means that there are a lot of useful lessons that are relevant for all of us. Through the Microsoft Dojo for customers program, they started to show the way they work, the way they learn, and the way they experiment. Here are some of the topics they already covered: Dojo – People & Teams Dojo – Experiential Learning Dojo – Customers & Trust Dojo – Culture & Mindset Dojo – Product Centric Model – Part 1 Dojo – Product Centric Model – Part 2 Dojo – Product Centric Model – Part 3 Dojo – OKRs (Objectives and Key Results) Interested to learn more? Check out Microsoft Learn and start your DevOps Dojo journey.

Your project does not reference ".NETFramework,Version=v4.6.2" framework

After playing around with the dotnet upgrade assistant (see this previous post ), I did an undo of the changes that were made by the tool through source control. However, after doing that my code refused to compile. Here is the error message I got: Your project does not reference ".NETFramework,Version=v4.6.2" framework. Add a reference to ".NETFramework,Version=v4.6.2" in the "TargetFrameworks" property of your project file and then re-run NuGet restore. When I took a look at my project settings, I didn't see anything wrong… I tried a ‘Clean Solution’, that didn't help… I deleted the bin folder and did a ‘Rebuild’ of the solution, that didn't help either… In the end what did the trick was to remove the obj folder for every project. Once the folders were removed, the error disappeared when I recompiled the application. Strange!

Using the .NET Upgrade Assistant to upgrade a Windows Forms App–Part II

Yesterday I demonstrated how we could use the .NET Upgrade Assistant to help us port a 10-year-old WinForms application to .NET Core. We tried the 'analyze' mode to do a dry run of the upgrade process. Today I continue with a follow-up post where we have a look at the warnings and diagnostic messages I got and see how we can get rid of them. Warning - HighDpiMode We’ll start easy with the following warning: HighDpiMode needs to set in Main() instead of app.config or app.manifest - Application.SetHighDpiMode(HighDpiMode.<setting>). It is recommended to use SystemAware as the HighDpiMode option for better results. As I’m not using the HighDpiMode in my application, I can just ignore this warning (see this related GitHub issue: https://github.com/dotnet/upgrade-assistant/issues/980 ). If you need to set the HighDpiMode, have a look at the changed bootstrapping logic here: https://docs.microsoft.com/en-us/dotnet/core/compatibility/windows-forms/6.0/application-boot
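For completeness, a hedged sketch of what the .NET 6 WinForms bootstrapping looks like if you do need to set the mode in code (MainForm is a hypothetical form class, not from the post):

```csharp
using System;
using System.Windows.Forms;

static class Program
{
    [STAThread]
    static void Main()
    {
        // In .NET Core / .NET 6 WinForms, high DPI is configured in code rather than app.config.
        Application.SetHighDpiMode(HighDpiMode.SystemAware);
        Application.EnableVisualStyles();
        Application.SetCompatibleTextRenderingDefault(false);
        Application.Run(new MainForm());
    }
}
```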

Using the .NET Upgrade Assistant to upgrade a Windows Forms App–Part I

About 10 years ago I was part of a team that created a rather big Windows Forms application. It took us over 2 years to build the application, and at that time it was the largest application I had ever built (to give you an idea, the requirements document was over 1000 pages). Today, 10 years later, this application is still in use and recently I was asked to help introduce a new module in this application. As Windows Forms got ported to .NET Core, I thought it would be a good idea to see if I could easily port this application to .NET Core. Microsoft created the .NET Upgrade Assistant exactly for use cases like this. Install the .NET Upgrade Assistant The .NET Upgrade Assistant is available as a global tool and can be installed with the following command: dotnet tool install -g upgrade-assistant Analyze your app before upgrading As the real migration process can take up a lot of time, the .NET Upgrade Assistant tool includes an analyze mode that performs a simplified
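As an illustration (the solution path is hypothetical, not copied from the post), the dry run is started by pointing the tool's analyze mode at the project or solution:

```
upgrade-assistant analyze .\MyWinFormsApp.sln
```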

Azure Application Insights– How to keep the amount of data under control

If you have read all my previous posts about Application Insights, my hope is that you will start to use a lot more of the great features it has to offer. However, this comes with one big 'disadvantage': you'll start to collect a lot more data, which of course leads to increased telemetry traffic, data costs, and storage costs. Let's have a look at 3 ways to keep this under control: Limit the amount of data by changing the retention duration Use sampling to reduce traffic Set a daily data volume cap Changing the retention duration Data ingested into either classic or workspace-based Application Insights is retained for 90 days without any charge. Data ingested into your Log Analytics workspace can be retained at no charge for up to the first 31 days (or 90 days if Azure Sentinel is enabled on the workspace). If you are using a classic Application Insights resource, you can change the retention duration after opening the Application Insights r
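As a hedged illustration of the sampling option, a minimal .NET 6 Program.cs sketch assuming the Microsoft.ApplicationInsights.AspNetCore package; the 10% rate is just an example value:

```csharp
using Microsoft.ApplicationInsights.AspNetCore.Extensions;
using Microsoft.ApplicationInsights.Extensibility;

var builder = WebApplication.CreateBuilder(args);

// Fixed-rate sampling: keep roughly 10% of the telemetry.
builder.Services.Configure<TelemetryConfiguration>(telemetryConfiguration =>
{
    var processorChainBuilder = telemetryConfiguration.DefaultTelemetrySink.TelemetryProcessorChainBuilder;
    processorChainBuilder.UseSampling(10.0);
    processorChainBuilder.Build();
});

// Disable adaptive sampling so the fixed rate above controls the volume.
builder.Services.AddApplicationInsightsTelemetry(new ApplicationInsightsServiceOptions
{
    EnableAdaptiveSampling = false,
});

var app = builder.Build();
app.Run();
```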

Azure Application Insights–MassTransit integration

One of the features that Application Insights has to offer is ‘Dependency Tracking’. It allows you to monitor components that are called by your application. This can be a service called using HTTP, or a database, or a file system. Application Insights measures the duration of dependency calls, whether they are failing or not, along with additional information like the name of the dependency and so on. You can investigate specific dependency calls, and correlate them to requests and exceptions. Automatically tracked dependencies Out-of-the-box the following dependencies are tracked automatically: Http/Https (local or remote http/https calls), WCF calls (only tracked automatically if HTTP-based bindings are used), SQL (calls made with SqlClient), Azure Storage (calls made with the Azure Storage client), EventHub Client SDK (Eve
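MassTransit itself is not in that automatically tracked list. As an assumption on my part (based on how the MassTransit Application Insights integration is commonly wired up, not necessarily what this post does), the broker calls can be surfaced by letting the dependency tracking module listen to the "MassTransit" DiagnosticSource activities:

```csharp
using Microsoft.ApplicationInsights.DependencyCollector;
using Microsoft.Extensions.DependencyInjection;

public static class ApplicationInsightsSetup
{
    public static void AddMassTransitTelemetry(IServiceCollection services)
    {
        services.AddApplicationInsightsTelemetry();

        // Ask the dependency collector to also record the activities MassTransit emits.
        services.ConfigureTelemetryModule<DependencyTrackingTelemetryModule>((module, options) =>
        {
            module.IncludeDiagnosticSourceActivities.Add("MassTransit");
        });
    }
}
```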

Azure Application Insights–Set cloud role name in Angular application

I've talked about how to set up Application Insights in an Angular application and also shared how to set the cloud role name in ASP.NET Core to improve the telemetry reporting. Let's build on top of these 2 posts and show you today how to update the cloud role name in an Angular application. We’ll start by extending our environment.ts file with an extra configuration setting to store the application name: Once that is done, we need to go to the service ( app-insights.service.ts ) where we create our Application Insights instance. There we need to add a custom telemetry initializer by calling the addTelemetryInitializer method: Now when we run our Angular application, the cloud role name should be reported correctly to Application Insights.