
Posts

Showing posts from October, 2022

Visug XL 2022 - Microservices: The last mile

Last Friday I did a presentation at Visug XL. If you missed my presentation or are interested in the slides, I've made them available for download here. No idea what my talk was about? Here is the abstract: There it is! After months of struggling, your well-decomposed microservices architecture finally sees the light. Nicely separated services with their own datastore, well-defined service boundaries, clear API contracts, ... A software architect's dream is coming true. But now you need to start working on the frontend and gone is all your clean separation! In this session we'll walk the last mile and evaluate multiple ways to bring information from multiple (micro)services together so that it can be consumed by the frontend. ViewModel composition, gateway aggregation, Backend for Frontends, GraphQL Federation and other options are compared and the pros and cons of each technique will be discussed.

Help! My IFormFile collection remains empty

Thanks to the built-in model binding feature in ASP.NET Core, uploading files is easy. You only need to specify an IFormFile as an action method parameter and the framework does all the hard work for you. All very handy and easy, until it doesn't work... Today I had an issue when I tried to upload multiple files at once. This is certainly supported and should work with any of the following collection types that represent several files: IFormFileCollection, IEnumerable<IFormFile> or List<IFormFile>. Here is a code example: Nothing wrong with the code above, I would think. But unfortunately it didn't work… To make it even stranger, although the List<IFormFile> remained empty, the uploaded files were available when I directly accessed the HttpContext and took a look at the Request.Form.Files property. The problem turned out to be related to the way I uploaded the files. I had created a small helper library to construct the MultipartFormDataContent…
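The code example referenced above isn't shown here. As an illustrative sketch (the controller, route and storage logic are my assumptions, not the post's original code), an action that binds multiple uploaded files could look like this:

```csharp
using System.Collections.Generic;
using System.IO;
using System.Threading.Tasks;
using Microsoft.AspNetCore.Http;
using Microsoft.AspNetCore.Mvc;

[ApiController]
[Route("api/files")]
public class FileUploadController : ControllerBase
{
    // With a well-formed multipart/form-data request, the model binder fills
    // this collection; IFormFileCollection or IEnumerable<IFormFile> work too.
    [HttpPost("upload")]
    public async Task<IActionResult> Upload(List<IFormFile> files)
    {
        if (files is null || files.Count == 0)
            return BadRequest("No files were bound.");

        foreach (var file in files)
        {
            // Copy each uploaded file to a temporary location, just as an example.
            var path = Path.Combine(Path.GetTempPath(), Path.GetRandomFileName());
            await using var stream = System.IO.File.Create(path);
            await file.CopyToAsync(stream);
        }

        return Ok(new { count = files.Count });
    }
}
```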

Dapper–Buffered vs unbuffered readers

I'm building a data pipeline using Dataflow to migrate data from a database to an external API (a separate post about Dataflow is coming up!). The goal is to parallelize the work as much as possible while keeping the memory usage under control. To achieve this, I use a streaming approach to limit the amount of data that is in memory at any moment in time. Here is the (simplified) code I use to fetch the data from the database: I'm using ADO.NET out-of-the-box combined with Dapper. I fetch the data from the database and send it to a Dataflow BatchBlock (a block that groups the data in batches of a certain size). Once the buffer of the BatchBlock is full, I use a Thread.Sleep() to wait until the buffer is no longer full and the BatchBlock can accept new messages. Nothing special, and this should allow me to keep the memory consumption under control. However, when I executed this code, I saw the memory usage growing out of control. So where is my mistake? The answer sh…
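The (simplified) data-fetching code isn't included above. As a minimal sketch of the buffered vs unbuffered difference in Dapper (the connection type, SQL and Order record are assumptions on my part, and the sketch assumes the Dapper and Microsoft.Data.SqlClient packages):

```csharp
using System.Collections.Generic;
using Dapper;
using Microsoft.Data.SqlClient;

public record Order(int Id, string Customer);

public static class OrderReader
{
    public static IEnumerable<Order> ReadOrders(string connectionString)
    {
        using var connection = new SqlConnection(connectionString);

        // Default (buffered: true): Dapper materializes the WHOLE result set
        // into a list before returning it, which is what makes memory explode.
        // var all = connection.Query<Order>("SELECT Id, Customer FROM Orders");

        // Unbuffered: rows are streamed from the open data reader one at a time,
        // so only the row currently being processed lives in memory.
        foreach (var order in connection.Query<Order>(
            "SELECT Id, Customer FROM Orders", buffered: false))
        {
            yield return order;
        }
    }
}
```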

Azure Application Insights–Structured logging

I think that everyone agrees when I say that logging and monitoring are important for almost every application. My preferred way is to use a 'structured logging' approach over the non-structured logging alternative. Structured vs non-structured logging With non-structured logging, the log message itself is handled as a plain string. This makes it hard to query or filter your log messages for any sort of useful information. With structured logging, log messages are written in a structured format that can be easily parsed. This could be XML, JSON, or other formats. But since virtually everything these days is JSON, you are most likely to see JSON as the standard format for structured logging. A popular library that introduced the structured logging approach in .NET is Serilog. Starting from .NET 5, it became available out-of-the-box as part of the Microsoft.Extensions.Logging package. Here is an example where we are using structured logging with the default ILogger:
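The ILogger example itself isn't included above. A minimal sketch of structured logging with message-template placeholders could look like this (the service and parameter names are illustrative, not the post's original code):

```csharp
using Microsoft.Extensions.Logging;

public class OrderService
{
    private readonly ILogger<OrderService> _logger;

    public OrderService(ILogger<OrderService> logger) => _logger = logger;

    public void PlaceOrder(int orderId, string customer)
    {
        // The named placeholders become structured properties (OrderId, Customer)
        // on the log event instead of being flattened into a plain string.
        _logger.LogInformation("Order {OrderId} placed by {Customer}", orderId, customer);
    }
}
```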

Visual Studio Code–Timeline view

One of the nice features of Visual Studio Code is the Timeline view. It's a lot more than a view of your Git commits: it brings you a unified view for visualizing any kind of time series events for a file (Git commits of course, but also file saves, test runs, ...). The Timeline view is enabled by default and can be found at the bottom of the File Explorer: When you expand it, you see all the changes for the selected file: And if you click on a specific line in the Timeline view you get a diff of the changes: Nice! This allows you to see what has changed without relying on your Git commits as the only source.

The Art of Agile Development

The Art of Agile Development is one of the must-reads if you take a leading role in any software team. In 2021 James Shore released an updated version of his book. To celebrate the book's one-year anniversary, James was kind enough to make a dozen of the practices from the book available for free: Free Introductory Material: What is Agile?, How to Be Agile, Choose Your Agility. Free Practices: Teamwork (Whole Team, Team Room), Planning (Stories, Adaptive Planning), Ownership (Task Planning, Capacity), Accountability (Stakeholder Trust), Collaboration (Collective Code Ownership), Development (Zero Friction, Test-Driven Development).

Azure Pipelines–Templates–Part III

I'm currently migrating an existing CI/CD pipeline build in Azure DevOps from the ‘classic’ build approach to YAML files. I already talked about the concept of templates as a replacement for Task groups in the 'classic' pipeline and the multiple ways they can be used. Before I finally shut up about templates (no promises made) I want to talk about one extra related feature I didn't mention yet. One extra advantage that you get when you extend from a template (next to re-use) is that you can enforce a specific pipeline usage. This can be done by introducing a Required template check. Required template check With the required template check, you can enforce pipelines to use a specific YAML template. When this check is in place, a pipeline will fail if it doesn't extend from the referenced template. In your Azure DevOps project, go to the environment where you want to add the extra check. Click on the ‘+’ sign to add a new approval. Select Required template…

Azure Pipelines–Templates Part II

I'm currently migrating an existing CI/CD pipeline build in Azure DevOps from the ‘classic’ build approach to YAML files. Yesterday I talked about the concept of templates as a replacement for Task groups in the 'classic' pipeline. I showed how to create a template and use it inside your YAML pipeline. What I didn't mention is that next to 'including' a template, it is also possible to 'inherit' from a template. Let's find out how to do this. Extending a template To 'inherit' from a template, we let our pipeline extend from an extendible template. To make a template extendible, it must be defined at the level of the stages: To create a pipeline that extends a template, we need to provide the relative path to the template file and also pass all (required) parameters:
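The YAML itself isn't shown above. A minimal sketch of an extendible template and a pipeline that extends it could look like this (file names, parameters and the build step are illustrative assumptions, not the post's original pipeline):

```yaml
# template.yml - an extendible template, defined at the level of the stages (illustrative)
parameters:
  - name: buildConfiguration
    type: string
    default: 'Release'

stages:
  - stage: Build
    jobs:
      - job: BuildJob
        steps:
          - script: dotnet build --configuration ${{ parameters.buildConfiguration }}
```

```yaml
# azure-pipelines.yml - a pipeline that extends the template above (illustrative)
trigger:
  - main

extends:
  template: template.yml
  parameters:
    buildConfiguration: 'Release'
```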

Azure Pipelines–Templates

I'm currently migrating an existing CI/CD pipeline build in Azure DevOps from the ‘classic’ build approach to YAML files. In the 'classic' pipeline you had the concept of a Task group. Task groups A task group allows you to encapsulate a sequence of tasks, already defined in a build or a release pipeline, into a single reusable task that can be added to a build or release pipeline, just like any other task. You can choose to extract the parameters from the encapsulated tasks as configuration variables, and abstract the rest of the task information. The easiest way to create a task group is to select a sequence of tasks, open the shortcut menu, and choose Create Task Group: Templates When using YAML pipelines, you can achieve the same goal through templates. Templates let you define reusable content, logic, and parameters. To create a template, just create a YAML file and copy in the tasks that you want to include. Although the template can access the same vari…
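The template YAML isn't shown above. As an illustrative sketch (the file name, task and parameter are assumptions), a steps template and a pipeline that includes it could look like this:

```yaml
# build-steps.yml - a reusable steps template, comparable to a classic Task group (illustrative)
parameters:
  - name: buildConfiguration
    type: string
    default: 'Release'

steps:
  - task: DotNetCoreCLI@2
    displayName: 'Build solution'
    inputs:
      command: 'build'
      arguments: '--configuration ${{ parameters.buildConfiguration }}'
```

```yaml
# azure-pipelines.yml - including the template inside the pipeline (illustrative)
steps:
  - template: build-steps.yml
    parameters:
      buildConfiguration: 'Release'
```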

30 days of Data Science

I'm always trying to expand my horizons and learn new tools, technologies and techniques. And of course, at the top of my list you'll find everything about Data Science, AI and Machine Learning. As I see how tools like GitHub Copilot can change (and improve) the job of a software engineer, I want to dive deeper into this domain. If you recognize yourself in this, this is a good moment to join me in my learning journey. Microsoft announced their 30 Days of Data Science program, starting today! To participate, you need to register as part of the program. In 4 weeks you go from your first Python program to an end-to-end Machine Learning project.

GraphQL on Azure Blog series

I have spent some time myself looking at ways to use GraphQL on Azure. Here are some of the more recent posts I wrote about this topic:
Run a GraphQL backend as an Azure Function
Using GraphQL in Azure API Management - Part 1
Using GraphQL in Azure API Management – Part 2
Using GraphQL in Azure API Management – Part 3
But of course, I'm not the only one who writes about this topic. Aaron Powell has put a lot of effort into sharing his experience on how to use GraphQL on Azure. A must-read!
Part 1 - Getting Started
Part 2 - App Service with dotnet
Part 3 - Serverless with JavaScript
Part 4 - CosmosDB and GraphQL
Part 5 - Can We Make GraphQL Type Safe in Code?
Part 6 - GraphQL Subscriptions with SignalR
Part 7 - Server-Side Authentication
Part 8 - Logging
Part 9 - REST to GraphQL
Part 10 - Synthetic GraphQL Custom Responses
Part 11 - Avoiding DoS in queries

Azure Pipelines–Variable groups

I'm currently migrating an existing CI/CD pipeline build in Azure DevOps from the ‘classic’ build approach to YAML templates. One of the building blocks you can use in Azure Pipelines is variable groups. Variable groups allow you to create and store values and secrets that can be used in multiple pipelines. Create a variable group To create a variable group, go to your Azure DevOps project. Open the Pipelines section from the left menu and click on Library. Here you can click on + Variable Group to create a new variable group. Enter a name and a description for the group. Now you can start adding any variable you want by clicking on + Add. Using a variable group inside your YAML pipeline Using a variable group inside a YAML pipeline is quite easy. You just need to add a reference to the group inside the variables section: It is still possible to include other variables as well by using the name/value syntax: Scoping a variable group to a stage
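The YAML snippets aren't included above. A minimal sketch of referencing a variable group next to regular name/value variables could look like this (the group and variable names are assumptions, not the post's original pipeline):

```yaml
# Illustrative: once a group is referenced, variables must use the list syntax.
variables:
  - group: my-variable-group      # the group created in the Library (name is an assumption)
  - name: buildConfiguration      # a regular variable alongside the group
    value: 'Release'

steps:
  - script: echo "Configuration is $(buildConfiguration)"
```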

Azure Pipelines–Multi stage run

I'm currently migrating an existing CI/CD pipeline build in Azure DevOps from the ‘classic’ build approach to YAML templates. For the project I'm migrating, the code moves through multiple stages, where 2 stages should run in parallel. This is how it should look: My first attempt looked like this: And when I executed this pipeline, Azure DevOps translated it to the following diagram: This was not exactly what I wanted. As you can see, the ‘Productie’ stage will only be executed when the ‘Acceptatie_Extern’ stage has completed successfully. It doesn't matter whether the ‘Acceptatie’ stage completed or not. The good news is that this is easy to fix. We have to update the ‘Productie’ stage to depend on the successful completion of both the ‘Acceptatie’ and ‘Acceptatie_Extern’ stages. Let's update our YAML file: And if we now execute our pipeline again, we see that both stages are linked to ‘Productie’:
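The YAML isn't shown above. A minimal sketch of the corrected setup, reusing the stage names mentioned in the post (jobs and steps are placeholders, not the original pipeline), could look like this:

```yaml
# Illustrative: 'Productie' now depends on BOTH parallel stages instead of
# only the stage that happens to be defined right before it.
stages:
  - stage: Acceptatie
    jobs:
      - job: Deploy
        steps:
          - script: echo "deploy to Acceptatie"

  - stage: Acceptatie_Extern
    dependsOn: []            # remove the implicit dependency so it runs in parallel with 'Acceptatie'
    jobs:
      - job: Deploy
        steps:
          - script: echo "deploy to Acceptatie Extern"

  - stage: Productie
    dependsOn:
      - Acceptatie
      - Acceptatie_Extern
    jobs:
      - job: Deploy
        steps:
          - script: echo "deploy to Productie"
```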

Azure Pipelines - Stage must contain at least one job with no dependencies

I'm currently migrating an existing CI/CD pipeline build in Azure DevOps from the ‘classic’ build approach to YAML templates. As part of the migration process I'm configuring multiple stages where each stage represents a deployment to a specific environment (Development/Test/Acceptance/…). It's a typical setup where deployment to the next environment can only start after deployment to the previous environment was successful. Here is how (a part of) the pipeline looked: However, when I checked if my pipeline was valid, I got the following error message: Stage AcceptatieExtern must contain at least one job with no dependencies. Did you notice my mistake? I accidentally added the ‘dependsOn’ and ‘condition’ check at the ‘jobs’ level instead of the ‘stage’ level. To fix it, I moved them to the correct location:
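The corrected YAML isn't included above. A minimal sketch of the fixed layout, with dependsOn and condition at the stage level, could look like this (the deployment jobs and environment names are assumptions; only the AcceptatieExtern stage name comes from the error message):

```yaml
# Illustrative: 'dependsOn' and 'condition' belong at the stage level,
# not inside the 'jobs' section.
stages:
  - stage: Acceptatie
    jobs:
      - deployment: DeployAcceptatie
        environment: 'Acceptatie'
        strategy:
          runOnce:
            deploy:
              steps:
                - script: echo "deploy to Acceptatie"

  - stage: AcceptatieExtern
    dependsOn: Acceptatie      # moved up from the jobs level
    condition: succeeded()     # moved up from the jobs level
    jobs:
      - deployment: DeployAcceptatieExtern
        environment: 'AcceptatieExtern'
        strategy:
          runOnce:
            deploy:
              steps:
                - script: echo "deploy to Acceptatie Extern"
```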

ASP.NET Core API Versioning

A lot has been written about API versioning and opinions differ on what the 'best' approach is. I'm not planning to add an extra opinion to the mix; instead I want to focus on one of the ways you can do API versioning in ASP.NET Core. Versioning by content type When using versioning by content type, we use custom media types instead of generic ones such as application/json. To make this work we can rely on content negotiation inside our API. Here are two examples: Accept: application/vnd.example.v1+json and Accept: application/vnd.example+json;version=1.0. Inside our ASP.NET Core controllers, we have to introduce the [Consumes] attribute. This attribute allows an action to influence its selection based on an incoming request's content type by applying a type constraint. What about Minimal APIs? At the moment of writing, ASP.NET Core Minimal APIs don't support content negotiation (yet).
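The controller code isn't shown above. A minimal sketch of selecting an action by media type with [Consumes] could look like this (the controller, route and record types are illustrative assumptions; only the vnd.example media type comes from the post):

```csharp
using Microsoft.AspNetCore.Mvc;

[ApiController]
[Route("api/examples")]
public class ExamplesController : ControllerBase
{
    // Selected when the request's Content-Type is the v1 media type.
    [HttpPost]
    [Consumes("application/vnd.example.v1+json")]
    public IActionResult CreateV1(ExampleV1 input) => Ok(new { version = 1 });

    // Selected when the request's Content-Type is the v2 media type.
    [HttpPost]
    [Consumes("application/vnd.example.v2+json")]
    public IActionResult CreateV2(ExampleV2 input) => Ok(new { version = 2 });
}

public record ExampleV1(string Name);
public record ExampleV2(string Name, string Description);
```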

Join Hacktoberfest

Hacktoberfest is an annual worldwide event held during the month of October. The event encourages open source developers to contribute to repositories through pull requests (PRs). Register between September 21 and October 31 to participate. If you want to start contributing to an open source project, this is a perfect moment to get involved! No idea where to get started? Have a look at the Microsoft Learn Hacktoberfest page.

ASP.NET Core–Reintroduce the startup.cs file

With the release of .NET 6, when you create a new ASP.NET Core application, you no longer have a Startup.cs file. Instead, everything is written inside the Program.cs file: If you don't like this and you still want to use a Startup.cs file, that is still possible. First let's recreate our good old Startup.cs file: In a first attempt, I tried the following: Unfortunately, this didn't work and resulted in the following error message: Exception thrown: 'System.NotSupportedException' in Microsoft.AspNetCore.dll An unhandled exception of type 'System.NotSupportedException' occurred in Microsoft.AspNetCore.dll UseStartup() is not supported by WebApplicationBuilder.WebHost. Use the WebApplication returned by WebApplicationBuilder.Build() instead. I switched to a more manual approach where I create the Startup class myself and invoke its methods directly: Of course, now it doesn't matter anymore how the Startup.cs file is structured or ho…
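The code from the post isn't included above. A minimal sketch of the manual approach (assuming a .NET 6 web project with implicit usings enabled; the Startup members are illustrative, not the post's original file) could look like this:

```csharp
// Program.cs: create the Startup class manually and call its methods,
// since UseStartup<T>() is not supported by WebApplicationBuilder.
var builder = WebApplication.CreateBuilder(args);

var startup = new Startup(builder.Configuration);
startup.ConfigureServices(builder.Services);

var app = builder.Build();
startup.Configure(app, app.Environment);

app.MapControllers();
app.Run();

public class Startup
{
    public Startup(IConfiguration configuration) => Configuration = configuration;

    public IConfiguration Configuration { get; }

    // Register services, just like in the pre-.NET 6 template.
    public void ConfigureServices(IServiceCollection services)
    {
        services.AddControllers();
    }

    // Configure the middleware pipeline; WebApplication implements IApplicationBuilder.
    public void Configure(IApplicationBuilder app, IWebHostEnvironment env)
    {
        if (env.IsDevelopment())
        {
            app.UseDeveloperExceptionPage();
        }
    }
}
```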

Get the most out of your browser console

As a web developer you probably spend a lot of time inside the browser developer tools console. The console is a lot more than just the place where you can see the browser's error messages. Over time I discovered a lot of lesser-known possibilities of the console; I even blogged about some of them over the years. If you want to get the most out of your browser console, I have good news for you. Christian Heilmann started a ‘Dear Console,…’ website where he brings together a list of his favorite helper scripts. Certainly worth a look…

Use LinqPad for your Micro Benchmarks

I'm a big fan of LinqPad. Anytime I need to test a small code snippet, it is my go-to tool. Today I was looking at the latest beta release where I noticed the following announcement: LINQPad now lets you benchmark code with a single keypress! Simply select the code you want to benchmark, and press Ctrl+Shift+B. LINQPad uses the industrial-grade BenchmarkDotNet library as the underlying engine, while providing a customizable real-time graphical display of results. For a complete tutorial, press Ctrl+F1 and type benchmark. Wow! That looks great! Let's give this feature a try: Download and extract the LinqPad 7 beta release. Run LinqPad. At the top, change the Language to ‘C# Program’. Now paste the following code: Select the 2 methods at the bottom and hit Ctrl+Shift+B. LinqPad will ask you to download BenchmarkDotNet. Click Yes to continue. After downloading the NuGet package and its dependencies, click ‘I Accept’ to…
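The snippet to paste isn't included above. As an illustrative stand-in (not the post's original code), here are two methods you could select and benchmark the same way in a LINQPad ‘C# Program’ query:

```csharp
// Illustrative benchmark candidates: two ways to build the same string.
// Select the two methods below and press Ctrl+Shift+B to benchmark them.
void Main()
{
}

string ConcatWithString()
{
    var result = string.Empty;
    for (var i = 0; i < 1000; i++)
        result += i.ToString();
    return result;
}

string ConcatWithStringBuilder()
{
    var builder = new System.Text.StringBuilder();
    for (var i = 0; i < 1000; i++)
        builder.Append(i);
    return builder.ToString();
}
```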

Azure Pipelines– Build a web deploy package on Linux

Today I was busy setting up a new CI pipeline on Azure DevOps. As part of this pipeline I wanted to create a web deploy package that can later be picked up by a release pipeline and deployed to multiple environments. Here is the pipeline I created: Unfortunately, when I tried to run this pipeline, it failed with the following error message: /usr/share/dotnet/sdk/6.0.401/Sdks/Microsoft.NET.Sdk.Publish/targets/PublishTargets/Microsoft.NET.Sdk.Publish.MSDeployPackage.targets(97,5): error MSB6004: The specified task executable location "%ProgramW6432%/IIS/Microsoft Web Deploy V3/msdeploy.exe" is invalid. [/home/vsts/work/1/s/Example/Example.csproj] ##[error]Error: The process '/usr/bin/dotnet' failed with exit code 1 The reason this fails is that MSDeploy is a Windows-only thing, and inside my pipeline I had specified a Linux build server. That explains why it cannot find "%ProgramW6432%/IIS/Microsoft Web Deploy V3/msdeploy.exe". To fix…
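The excerpt stops right before the fix. Since msdeploy.exe only exists on Windows, one straightforward option (my assumption, not necessarily the post's actual resolution) is to run the packaging on a Windows-hosted agent; a sketch of that could look like this:

```yaml
# Illustrative sketch, not the post's actual pipeline: MSDeploy packaging
# needs msdeploy.exe, which is only available on Windows agents.
pool:
  vmImage: 'windows-latest'   # instead of a Linux image such as ubuntu-latest

steps:
  - task: DotNetCoreCLI@2
    displayName: 'Create web deploy package'
    inputs:
      command: 'publish'
      publishWebProjects: true
      zipAfterPublish: false
      arguments: '--configuration Release /p:WebPublishMethod=Package /p:PackageAsSingleFile=true /p:DesktopBuildPackageLocation="$(Build.ArtifactStagingDirectory)\WebApp.zip"'
```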

Scaling Azure Functions

You may be wondering why a blog post about scaling Azure Functions. Isn't the whole point of Azure Functions that they scale automatically based on the number of incoming events? And indeed, you are right: this is exactly what Azure Functions does for you out-of-the-box. What I want to talk about is how you can put a limit on the scale-out. In our case we have a CosmosDB instance and we want to avoid 429 responses (and also save some Request Units along the way) by limiting the throughput. By default, Consumption plan functions scale out to as many as 200 instances, and Premium plan functions will scale out to as many as 100 instances. To specify a lower maximum for a specific app, we need to modify the functionAppScaleLimit value: az resource update --resource-type Microsoft.Web/sites -g <RESOURCE_GROUP> -n <FUNCTION_APP-NAME>/config/web --set properties.functionAppScaleLimit=<SCALE_LIMIT> Another way to limit the scale-out behavior is by controlling…