
Posts

Showing posts from September, 2022

CosmosDB - Cleanup items automatically

I'm currently attending NDC Oslo. During one of the sessions, this one to be exact, the speaker shared a nice CosmosDB feature: Time to Live. Some of the typical questions you ask yourself when building applications are: Should I use hard or soft deletes? Are there any regulations on how long I can keep this data (e.g. GDPR)? How can I keep the amount of data under control? In the talk, the example they mention is weather data that is collected through various services. This data is captured and used to feed an artificial intelligence algorithm that predicts the most fuel-efficient way to operate a vessel in the North and Baltic Seas. After a prediction is done, the weather data is no longer necessary and can safely be deleted. Thanks to the Time to Live (TTL) feature in CosmosDB, implementing this requirement is really easy. You have two ways to control the TTL value: at the container level, or at the item level. Control TTL at the container level Open
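As a sketch of the item-level variant: once TTL is enabled on the container, an individual item can carry a `ttl` property (in seconds) that overrides the container default. Every field except `ttl` below is invented for the example:

```json
{
  "id": "weather-reading-001",
  "windSpeed": 14.2,
  "ttl": 3600
}
```

At the container level, setting `DefaultTimeToLive` to -1 enables TTL without a default expiry, while a positive value expires items after that many seconds.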

ASP.NET Core–Generate your swagger.json at build time

ASP.NET Core allows you to create an Open API specification file through Swashbuckle. By default Swashbuckle will generate the OpenAPI documentation at runtime, but did you know that it is also possible to do this at build time? Doing this at build time helps us reduce application startup time and also gives us the possibility to serve the OpenAPI specification file from a CDN. Let us walk through the steps to achieve this… First we need to create a tool manifest file so we can call the Swashbuckle CLI tooling: dotnet new tool-manifest Now we can install the CLI tooling: dotnet tool install Swashbuckle.AspNetCore.Cli Let's invoke the tooling to create the swagger.json file: dotnet swagger tofile --output [output] [startupassembly] [swaggerdoc] where ... [output] is the relative path where the Swagger JSON will be output to, [startupassembly] is the relative path to your application's startup assembly, and [swaggerdoc] is the name of th
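To make this truly part of the build, one hedged option is an MSBuild target that invokes the CLI after every build. The target name, paths, and the document name "v1" are assumptions, not from the post; adjust them for your project:

```xml
<!-- MyApi.csproj — sketch: regenerate swagger.json after each build -->
<Target Name="GenerateSwaggerJson" AfterTargets="Build">
  <Exec Command="dotnet swagger tofile --output $(OutputPath)swagger.json $(OutputPath)$(AssemblyName).dll v1"
        WorkingDirectory="$(ProjectDir)" />
</Target>
```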

Azure Pipelines–Parallel stages

I’m currently migrating an existing CI/CD pipeline built in Azure DevOps from the ‘classic’ build approach to YAML templates. One of the things I had to do was find a way to run multiple stages in parallel. In the original pipeline, code was deployed to multiple environments at the same time and I needed something similar when using YAML templates. By default, when you define multiple stages in a pipeline, they run sequentially in the order in which you define them in the YAML file. The exception to this is when you add dependencies. With dependencies, stages run in the order of the dependsOn requirements. We can use this to our advantage to run 2 stages in parallel. Here is the YAML file we used: Notice the dependsOn we use in the ‘Acceptatie_Extern’ stage. This will trigger both the ‘Acceptatie’ and ‘Acceptatie_Extern’ stages after the ‘Development’ stage. And here is what the final pipeline looked like:
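A minimal sketch of the stage layout described above (the job contents are placeholders):

```yaml
stages:
- stage: Development
  jobs:
  - job: Deploy
    steps:
    - script: echo "deploy to development"

- stage: Acceptatie
  dependsOn: Development
  jobs:
  - job: Deploy
    steps:
    - script: echo "deploy to acceptatie"

- stage: Acceptatie_Extern
  dependsOn: Development   # same dependency, so this runs in parallel with Acceptatie
  jobs:
  - job: Deploy
    steps:
    - script: echo "deploy to acceptatie extern"
```

Because both acceptance stages depend only on Development, Azure Pipelines is free to schedule them concurrently once Development succeeds.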

ASP.NET Core–Serializing enums

By default, when you return an enum value as part of your data contract in ASP.NET Core, it will be serialized as a number. Here is an example of our data contract: And here is how it will be returned by our API: But what if we want to return the enum value as a string instead of a number? We can achieve this by updating our JsonOptions and registering the JsonStringEnumConverter: If we now call our API, the result looks like this:
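A minimal sketch of that registration, assuming the minimal hosting model and System.Text.Json (the data contract itself is not shown in this excerpt):

```csharp
using System.Text.Json.Serialization;

var builder = WebApplication.CreateBuilder(args);

// Serialize enum values as their names instead of their numeric values
builder.Services.AddControllers()
    .AddJsonOptions(options =>
        options.JsonSerializerOptions.Converters.Add(new JsonStringEnumConverter()));
```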

ASP.NET Core - Automatically apply the [ApiController] attribute

Last week I talked about the [ApiController] attribute and how it can simplify your controller code when implementing Web APIs. Today I have a small tip for you: did you know it isn't necessary to annotate every controller with the [ApiController] attribute? It is also possible to apply the attribute to an assembly. When the [ApiController] attribute is applied to an assembly, all controllers in the assembly have the [ApiController] attribute applied. Therefore, apply the assembly-level attribute in the Program.cs file:
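As a sketch, the assembly-level attribute is a single line, typically placed near the top of Program.cs:

```csharp
// Program.cs — applies [ApiController] to every controller in this assembly
using Microsoft.AspNetCore.Mvc;

[assembly: ApiController]
```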

API Design in ASP.NET Core Part V

This week I had the honor to give a training to some of the newly started young professionals in our organisation. The topic of the training was API design in ASP.NET Core. During this training we discussed multiple topics and a lot of interesting questions were raised. I'll try to tackle some of them with a blog post. The question I try to tackle today is... Why should I make my action methods asynchronous? During the training I asked why the second example is better than the first: Example 1 – Non async Example 2 – Async The answer I got the most was better performance, the idea that the second (async) example executes faster than the first. Although I wouldn’t say that this answer is completely incorrect, it is not the performance at the individual request level that improves. Instead it will be the general performance of your web application as a whole that improves, as the async version gives you better scalability. Why
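To illustrate the scalability argument, here is a hedged sketch of an async action method; the repository type and method names are invented for the example:

```csharp
[HttpGet("{id}")]
public async Task<ActionResult<Order>> GetOrder(int id)
{
    // While the database call is in flight, the request thread is returned
    // to the thread pool and can serve other requests. The individual request
    // is not faster, but the application as a whole scales better.
    var order = await _orderRepository.GetOrderAsync(id);
    return order is null ? NotFound() : Ok(order);
}
```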

API Design in ASP.NET Core Part IV

This week I had the honor to give a training to some of the newly started young professionals in our organisation. The topic of the training was API design in ASP.NET Core. During this training we discussed multiple topics and a lot of interesting questions were raised. I'll try to tackle some of them with a blog post. The question I try to tackle today is... What is idempotent in REST? An important aspect when building REST APIs is the concept of ‘idempotency’. ‘Idem…what?’ I hear you thinking. Let’s ask MDN for an explanation: An HTTP method is idempotent if an identical request can be made once or several times in a row with the same effect while leaving the server in the same state. In other words, an idempotent method should not have any side effects — unless those side effects are also idempotent. Implemented correctly, the GET , HEAD , PUT , and DELETE methods are idempotent , but not the POST method. All safe methods are also idempotent. That explanation b

API Design in ASP.NET Core Part III

This week I had the honor to give a training to some of the newly started young professionals in our organisation. The topic of the training was API design in ASP.NET Core. During this training we discussed multiple topics and a lot of interesting questions were raised. I'll try to tackle some of them with a blog post. The question I try to tackle today is... What does the [ApiController] attribute do? If you created your first Web API in ASP.NET Core (through dotnet new webapi -o MyFirstAPI ), you ended up with a WeatherForecastController containing the following code: On top of this controller class you see, above the Route attribute, the ApiController attribute. Adding this attribute on top of your controller has a bigger impact than you would think and enables the following behaviours: Attribute routing requirement Automatic HTTP 400 responses Binding source parameter inference Multipart/form-data request inference Problem details for error stat
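One of those behaviours, the automatic HTTP 400 response, can be sketched like this (the action and model names are illustrative):

```csharp
// Without [ApiController] you would have to validate the model by hand:
[HttpPost]
public IActionResult Create(WeatherForecast forecast)
{
    if (!ModelState.IsValid)
        return BadRequest(ModelState); // [ApiController] generates this response for you

    // ... save the forecast ...
    return CreatedAtAction(nameof(Create), forecast);
}
```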

API Design in ASP.NET Core Part II

This week I had the honor to give a training to some of the newly started young professionals in our organisation. The topic of the training was API design in ASP.NET Core. During this training we discussed multiple topics and a lot of interesting questions were raised. I'll try to tackle some of them with a blog post. The question I try to tackle today is... When do I know that my Web API is truly RESTful? Yesterday I introduced the concept of REST and mentioned that it is an architectural style. As a result you’ll find a lot of different flavors of Web APIs in the wild, each using a different approach and all calling themselves REST APIs. To help you answer the question above, we can use the Richardson Maturity Model . Leonard Richardson analyzed a hundred different web service designs and divided these designs into four categories . These categories are based on how REST compliant the web services are . He used three main factors to decide the maturity of a

API Design in ASP.NET Core Part I

Today I had the honor to give a training to some of the newly started young professionals in our organisation. The topic of the training was API design in ASP.NET Core. During this training we discussed multiple topics and a lot of interesting questions were raised. I'll try to tackle some of them with a blog post. Let's start with a first question, and immediately a controversial one... What is REST? Let's first have a look at what Wikipedia has to say about REST: Representational state transfer ( REST ) is a software architectural style that describes a uniform interface between physically separate components, often across the Internet in a Client-Server architecture. It was originally introduced in a dissertation by Roy Fielding titled 'Architectural Styles and the Design of Network-based Software Architectures’. Here is the link if you’d like to read it: https://www.ics.uci.edu/~fielding/pubs/dissertation/top.htm . What is most important to understand is

Azure DevOps - Error MSB3326: Cannot import the following key file.

When trying to build a .NET application on our build server, it failed with the following error message: ##[error]C:\Program Files (x86)\Microsoft Visual Studio\2019\BuildTools\MSBuild\Current\Bin\Microsoft.Common.CurrentVersion.targets(3326,5): Error MSB3326: Cannot import the following key file: . The key file may be password protected. To correct this, try to import the certificate again or import the certificate manually into the current user’s personal certificate store. [D:\b\3\_work\32\s\Source\Example.csproj]

Azure Kubernetes Service–Volume node affinity conflict

When trying to deploy a pod on our AKS cluster, it hung in the pending state. I looked at the logs and noticed the following warning: FailedScheduling – 1 node(s) had volume node affinity conflict The pod I tried to deploy had a persistent volume claim and I was certain that the persistent volume was successfully deployed and available. What was going wrong? It turned out that my AKS cluster was deployed in 3 availability zones but I had only 2 nodes running: $ kubectl describe nodes | grep -e "Name:" -e "failure-domain.beta.kubernetes.io/zone" Name:               aks-agentpool-38609413-vmss000003                     failure-domain.beta.kubernetes.io/zone= westeurope-1 Name:               aks-agentpool-38609413-vmss000004                     failure-domain.beta.kubernetes.io/zone= westeurope-2 Here we can see that our nodes are living in zones westeurope-1 and westeuro

Marten Memory Issues

I was talking to a colleague this week and he was sharing a story about some memory issues they had with Marten , the DocumentDB and EventStore on top of PostgreSQL. As I was partially involved during the investigation of the memory issue, I was eager to learn what the root cause was and how they fixed it. Let me share their findings… Marten extensively uses runtime code generation through Roslyn. Although this is really powerful it comes with a cost in terms of memory usage and cold start issues. Luckily this is something you can fix by generating the necessary types upfront and include the generated code in your application assemblies. The documentation describes the required steps quite well: Use the Marten command line extensions for your application Register all document types, compiled query types , and event store projections upfront in your DocumentStore configuration In your deployment process, you'll need to generate the Marten code with dotnet r

GraphQL Conf 2022

On September 29, 2022, a new edition of GraphQL Conf will be hosted online. Learn about GraphQL best practices from industry experts and become part of the GraphQL community. You can register for the conference here: https://graphqlconf.org/

Units of measure in C#

Yesterday I talked about the 'units of measure' feature in F# . It allows you to associate a unit of measure with a type and lets the compiler help you avoid providing the wrong input. Unfortunately, a similar feature does not exist in C#, but we can let the type system help us by creating 'Value Objects'. What are Value Objects? Value objects are probably best known in the context of Domain Driven Design, where they are one of the tactical patterns. They allow you to encapsulate (domain) logic in a type and should always be in a valid state. Eric Evans uses the following example to describe them: When a child is drawing, he cares about the color of the marker he chooses, and he may care about the sharpness of the tip. But if there are two markers of the same color and shape, he probably won’t care which one he uses. If a marker is lost and replaced by another of the same color from a new pack, he can resume his work unconcerned about the switch. Value Ob
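A hedged sketch of such a value object in C#; the Liters type is an invented example, not from the post:

```csharp
public readonly record struct Liters
{
    public double Value { get; }

    public Liters(double value)
    {
        // A value object is always in a valid state
        if (value < 0) throw new ArgumentOutOfRangeException(nameof(value));
        Value = value;
    }
}

// A method signature like void Fill(Liters amount) can no longer be called
// with an arbitrary double, so the compiler catches unit mix-ups for us.
```

Using a record struct also gives us value-based equality for free, which matches the 'two markers of the same color are interchangeable' semantics of value objects.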

Units of Measure in F#

One of the nice features of F# is its support for units of measure. This allows you to associate a unit of measure (e.g. liters, centimeters, ...) with a type (typically floating point, integral and decimal types). Now the compiler becomes your friend and helps you avoid the unit mismatch errors that can otherwise occur. With the compile-time checking you know exactly the expected input or output type. Built-in units For some of the common unit types Microsoft has a unit library available in the FSharp.Data.UnitSystems.SI namespace. It includes SI units in both their symbol form (like m for meter) in the UnitSymbols subnamespace, and in their full name (like meter for meter) in the UnitNames subnamespace. Here you can find the list of available units: https://fsharp.github.io/fsharp-core-docs/reference/fsharp-data-unitsystems-si-unitsymbols.html To convert a unitless value to a value that has units, the easiest way is to multiply by a 1 or 1.0 value that is annotated with the appropriate
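A small sketch of both ideas, custom measures and converting a unitless value (the measure names are illustrative):

```fsharp
[<Measure>] type cm
[<Measure>] type ml = cm^3

let length = 12.5<cm>            // a value carrying a unit
let raw = 7.0                    // a plain, unitless float
let converted = raw * 1.0<cm>    // attach a unit by multiplying by 1.0<cm>

// let oops = length + raw       // does not compile: unit mismatch
```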

Compare framework versions with BenchmarkDotNet

I’m working on upgrading an existing .NET 5.0 application to .NET 6.0. Although the general claim is that .NET 6.0 is (a lot) faster than .NET 5.0, I’d like to verify that this is correct for my use cases. I have some performance-critical code in this application and I have to be sure that performance remains at least on par (and preferably gets better) after the upgrade. Let's write some micro benchmarks using BenchmarkDotNet . Compare framework versions I don’t want to make this a full BenchmarkDotNet tutorial. If you are new to the library, check out the Getting Started section in the documentation. Let us focus on how we can run a benchmark against multiple framework versions. The first important thing is that our benchmark application (a .NET Core console application) should target all the frameworks we want to test. Therefore update the TargetFrameworks property: Now we need to update our Program.cs file to call the BenchmarkSwitcher: Based on the command
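The TargetFrameworks change mentioned above looks roughly like this (the project name and framework monikers are examples):

```xml
<!-- MyBenchmarks.csproj -->
<PropertyGroup>
  <OutputType>Exe</OutputType>
  <TargetFrameworks>net5.0;net6.0</TargetFrameworks>
</PropertyGroup>
```

In Program.cs, the standard pattern is BenchmarkSwitcher.FromAssembly(typeof(Program).Assembly).Run(args); which then dispatches the benchmarks based on the command-line arguments.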

BenchmarkDotNet - The minimum observed iteration time is … ms which is very small

I created a small benchmark that compared the performance of my application logic with logging enabled vs. disabled. Here is a (simplified) version of my code: Execution is triggered in the Program.cs file: However, when I executed the benchmark using BenchmarkDotNet, I got the following warnings: // * Warnings * MinIterationTime   NHibernateWithAndWithoutLoggingEnabled.WithLogging: InvocationCount=1, UnrollFactor=1    -> The minimum observed iteration time is 20.9305 ms which is very small. It's recommended to increase it to at least 100.0000 ms using more operations.   NHibernateWithAndWithoutLoggingEnabled.WithoutLogging: InvocationCount=1, UnrollFactor=1 -> The minimum observed iteration time is 19.2794 ms which is very small. It's recommended to increase it to at least 100.0000 ms using more operations. You get these warnings when the operation you want to benchmark is very short. As this can lead to unstable results, a warning is produced. To get ri
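One way to address the warning, sketched here with an invented method name, is to execute the short operation several times per iteration and report this through OperationsPerInvoke so BenchmarkDotNet can still compute a correct per-operation time:

```csharp
private const int N = 10;

[Benchmark(OperationsPerInvoke = N)]
public void WithLogging()
{
    // Running the short operation N times pushes the observed iteration time
    // above the recommended 100 ms threshold.
    for (int i = 0; i < N; i++)
        RunQueryWithLoggingEnabled(); // hypothetical method under test
}
```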

Serverless September is back!

I have just returned from vacation and I'm still catching up with all the information in my news feed. One post got my attention, #Serverless September is back! In the upcoming days and weeks you’ll learn everything you need to know to build serverless applications on Azure. They will explore the full spectrum of serverless - from Functions-as-a-Service to Containerization and Microservices. Bookmark this link ( https://aka.ms/serverless-september ) and start your serverless learning experience.