Friday, January 29, 2021

Azure DevOps Server–Move collections–Detach/Attach

One of the basic building blocks of Azure DevOps Server is a “Project collection”.

The documentation describes it like this:

The primary organizational unit for all data in Azure DevOps Server. Collections determine the resources available to the projects added to them. 

From a technical standpoint, a project collection is a separate database. It is possible to move a project collection between database instances, but be aware that just backing up the collection on one database server and restoring it on another is not sufficient. Instead, you should do this through a detach/attach operation:

Detach a collection

Before you move a collection, you need to detach it. You can do this from the Azure DevOps Administration console:

When detaching, all jobs and services are stopped, and then the collection database is stopped. In addition, the detach process copies over the collection-specific data from the configuration database and saves it as part of the project collection database. This configuration data is what allows the collection database to be attached to a different deployment of Azure DevOps Server. If that data is not present, you cannot attach the collection to any deployment of Azure DevOps Server except the one from which it originated.

Important: During the detach operation the collection is not accessible!

Backup/Restore

Now that the configuration data is successfully copied into the collection database, you can take a backup of the collection database using a tool of your choice and restore it on the database server used by your target Azure DevOps server.

Important: Don’t forget to Start the collection again if you still want to use it on the old server instance.

Attach a collection

As a last step you need to attach the collection again. This is also done through the Azure DevOps Administration console:

More information: https://docs.microsoft.com/en-us/azure/devops/server/admin/move-project-collection

Thursday, January 28, 2021

C# 9–Use Record types in .NET Standard 2.0

I created a small Record type in C#:
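It was something small along these lines (the name and properties are hypothetical):

public record Address(string Street, string City, string PostalCode);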

However the compiler didn’t like it:

Why? Because I was adding this code to a .NET Standard 2.0 library that doesn’t support C# 9 features.

As a reminder, a small overview:

Target framework      Default C# language version
.NET 5                C# 9.0
.NET Core 3.x         C# 8.0
.NET Core 2.x         C# 7.3
.NET Standard 2.1     C# 8.0
.NET Standard 2.0     C# 7.3

I didn’t want to upgrade to .NET 5, so what are my options? It turns out you can work around this with a small compiler hack…

Let’s try it:

  • First, override the language version in the csproj file.
  • Then, add a small helper type to get rid of the compiler error.
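Both steps are sketched below, assuming the workaround still applies to your SDK version. In the csproj:

<PropertyGroup>
  <LangVersion>9.0</LangVersion>
</PropertyGroup>

And somewhere in the project, a stand-in for the type the compiler needs for init-only setters (it only ships with .NET 5):

namespace System.Runtime.CompilerServices
{
    // Records use init-only setters; the compiler looks for this type.
    // It doesn't exist in .NET Standard 2.0, so we declare it ourselves.
    internal static class IsExternalInit { }
}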

Wednesday, January 27, 2021

Combining Autofac and Microsoft DI in unit tests

I had to write a test where I needed to inject a configuration dependency through IOptions<>. So far I had been using Autofac in my tests, but I didn’t want to go through all the trouble of getting IOptions<> working manually inside Autofac (especially when Microsoft DI offers the convenient AddOptions<> and Configure<> methods).

Luckily it is not that hard to bring the 2 DI containers together. Here is the code I used inside my tests:
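A minimal sketch of that glue code (MySettings and MyService are hypothetical test types; Populate comes from the Autofac.Extensions.DependencyInjection package):

using Autofac;
using Autofac.Extensions.DependencyInjection;
using Microsoft.Extensions.DependencyInjection;

// Configure the options with Microsoft DI...
var services = new ServiceCollection();
services.AddOptions<MySettings>();
services.Configure<MySettings>(settings => settings.ConnectionString = "test-value");

// ...and pull those registrations into the Autofac container
var builder = new ContainerBuilder();
builder.Populate(services);
builder.RegisterType<MyService>().AsSelf(); // the class under test, depending on IOptions<MySettings>

var container = builder.Build();
var sut = container.Resolve<MyService>();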

Tuesday, January 26, 2021

C# 9–Module Initializer

Yesterday I blogged about an issue I had with Telerik. I needed to initialize some code before I could call the PDF conversion functionality:

The question is where should I put this logic?

Perfect case to try out a new C# 9 feature: Module initializers.

Module initializers allow you to do eager, one-time initialization when a module is loaded, with minimal overhead and without the user needing to explicitly call anything.

Creating a module initializer is easy. Just add the ModuleInitializerAttribute on top of a method. Some requirements are imposed on the method targeted with this attribute:

  1. The method must be static.
  2. The method must be parameterless.
  3. The method must return void.
  4. The method must not be generic or be contained in a generic type.
  5. The method must be accessible from the containing module.
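A minimal sketch of such an initializer (the body only hints at where my Telerik PDF setup code went; the actual setup is shown in the January 25 post below):

using System.Runtime.CompilerServices;

internal static class PdfProcessingSetup
{
    [ModuleInitializer]
    internal static void Initialize()
    {
        // Runs once when the module is loaded, before any PDF conversion is triggered.
        // The Telerik-specific setup (setting FixedExtensibilityManager.JpegImageConverter)
        // goes here; see the January 25 post for the details.
    }
}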

Monday, January 25, 2021

Telerik Document Processing–InvalidOperationException

When trying to convert a Word document containing a high-res image to PDF through the Telerik Document Processing libraries, I got the error message below:

System.InvalidOperationException

  HResult=0x80131509

  Message=FixedExtensibilityManager.JpegImageConverter cannot be null. The .NET Standard does not define APIs for converting images or scaling their quality. In order to export images different than Jpeg and Jpeg2000 or ImageQuality different than High you will need to reference the Telerik.Documents.ImageUtils assembly/NuGet in your project and to set its basic implementation to the FixedExtensibilityManager.JpegImageConverter property or to create a custom one inheriting the JpegImageConverterBase class. For more information go to: https://docs.telerik.com/devtools/document-processing/libraries/radpdfprocessing/cross-platform

  Source=Telerik.Documents.Fixed

  StackTrace:

   at Telerik.Windows.Documents.Fixed.Model.Resources.EncodedImageData.TryCreateFromUnknownImageDataWithScaledQuality(Byte[] data, ImageQuality imageQuality, EncodedImageData& encodedImageData)

   at Telerik.Windows.Documents.Fixed.Model.Resources.ImageSource.DoOnUnknownData(Byte[] unknownData, ImageQuality imageQuality, Action`1 doOnEncodedData)

   at Telerik.Windows.Documents.Fixed.Model.Resources.ImageSource.InitializeImageInfoFromUnknownData(Byte[] unknownData, ImageQuality imageQuality)

   at Telerik.Windows.Documents.Fixed.Model.Resources.ImageSource.EnsureImageInfo()

   at Telerik.Windows.Documents.Fixed.Model.Resources.ImageSource.get_Width()

   at Telerik.Windows.Documents.Fixed.FormatProviders.Pdf.Model.Elements.Objects.ImageXObject.CopyPropertiesFrom(IPdfExportContext context, ImageSource imageSource)

   at Telerik.Windows.Documents.Fixed.FormatProviders.Pdf.Export.PdfExportContext.GetResource(ImageSource resource)

   at Telerik.Windows.Documents.Fixed.FormatProviders.Pdf.Export.PdfContentExportContext.GetResource(ImageSource imageSource)

   at Telerik.Windows.Documents.Fixed.FormatProviders.Pdf.Export.ContentElementWriters.ImageWriter.WriteOverride(PdfWriter writer, IPdfContentExportContext context, Image element)

   at Telerik.Windows.Documents.Fixed.FormatProviders.Pdf.Export.ContentElementWriters.MarkableContentElementWriter`1.Write(PdfWriter writer, IPdfContentExportContext context, Object element)

   at Telerik.Windows.Documents.Fixed.FormatProviders.Pdf.Export.ContentElementWriters.ContentElementWriterBase.WriteElement(PdfWriter writer, IPdfContentExportContext context, Object element)

   at Telerik.Windows.Documents.Fixed.FormatProviders.Pdf.Export.ContentElementWriters.ClippingWriter.WriteOverride(PdfWriter writer, IPdfContentExportContext context, Clipping clipping)

   at Telerik.Windows.Documents.Fixed.FormatProviders.Pdf.Export.ContentElementWriters.ContentElementWriter`1.Write(PdfWriter writer, IPdfContentExportContext context, Object element)

   at Telerik.Windows.Documents.Fixed.FormatProviders.Pdf.Export.ContentElementWriters.ContentElementWriterBase.WriteElement(PdfWriter writer, IPdfContentExportContext context, Object element)

   at Telerik.Windows.Documents.Fixed.FormatProviders.Pdf.Export.ContentElementWriters.ContentRootWriter.WriteOverride(PdfWriter writer, IPdfContentExportContext context, IContentRootElement element)

   at Telerik.Windows.Documents.Fixed.FormatProviders.Pdf.Export.ContentElementWriters.ContentElementWriter`1.Write(PdfWriter writer, IPdfContentExportContext context, Object element)

   at Telerik.Windows.Documents.Fixed.FormatProviders.Pdf.Export.ContentElementWriters.ContentElementWriterBase.WriteElement(PdfWriter writer, IPdfContentExportContext context, Object element)

   at Telerik.Windows.Documents.Fixed.FormatProviders.Pdf.Model.Elements.DocumentStructure.ContentStream.BuildContentData(IPdfExportContext context, IResourceHolder resourceHolder, IContentRootElement contentRootElement)

   at Telerik.Windows.Documents.Fixed.FormatProviders.Pdf.Model.Elements.DocumentStructure.Page.CopyPropertiesFrom(IPdfExportContext context, RadFixedPage fixedPage)

   at Telerik.Windows.Documents.Fixed.FormatProviders.Pdf.Model.Elements.DocumentStructure.DocumentCatalog.CopyRadFixedPageProperties(RadFixedPage source, Page destination, IRadFixedDocumentExportContext context)

   at Telerik.Windows.Documents.Fixed.FormatProviders.Pdf.Model.Elements.DocumentStructure.DocumentCatalog.CopyPagePropertiesFrom(IRadFixedDocumentExportContext context)

   at Telerik.Windows.Documents.Fixed.FormatProviders.Pdf.Model.Elements.DocumentStructure.DocumentCatalog.CopyPropertiesFrom(IRadFixedDocumentExportContext context)

   at Telerik.Windows.Documents.Fixed.FormatProviders.Pdf.Export.PdfExporter.Export(IRadFixedDocumentExportContext context, Stream output)

   at Telerik.Windows.Documents.Fixed.FormatProviders.Pdf.PdfFormatProvider.ExportOverride(RadFixedDocument document, Stream output)

   at Telerik.Windows.Documents.Common.FormatProviders.FormatProviderBase`1.Export(T document, Stream output)

   at Telerik.Windows.Documents.Flow.FormatProviders.Pdf.PdfFormatProvider.ExportOverride(RadFlowDocument document, Stream output)

   at Telerik.Windows.Documents.Common.FormatProviders.FormatProviderBase`1.Export(T document, Stream output)

   at DocumentGeneration.API.Features.Conversion.DocumentConverter.Save(RadFlowDocument document, String format)

The error message brought me to the following page explaining why I got this error:

In the .NET Framework version of PdfProcessing, scenarios like reading fonts and converting images or scaling their quality are something that comes out of the box. However, the .NET Standard specification doesn't specify APIs to provide such functionalities built in the library. To get around the limitations in .Net Standard you need to set a FontsProvider and JpegImageConverter yourself.

A default implementation of the JpegImageConverter class exists in the Telerik.Documents.ImageUtils assembly. This implementation uses the ImageSharp and TiffLibrary.ImageSharpAdapter libraries to convert images to Jpeg format.

Here is some example code on how to set it:
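A sketch of that setup, based on the type names mentioned in the error message and the documentation above (the exact namespaces come from the Telerik.Documents.ImageUtils package and may differ between versions):

// Requires the Telerik.Documents.ImageUtils NuGet package
JpegImageConverterBase jpegImageConverter = new JpegImageConverter();
FixedExtensibilityManager.JpegImageConverter = jpegImageConverter;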

Friday, January 22, 2021

HotChocolate– GraphQL Schema Stitching Introduction

If you are new to GraphQL Schema Stitching, I can recommend the On.NET show with Michael Staib, one of the creators of Hot Chocolate.

With schema stitching, developers can create a unified GraphQL schema from multiple underlying GraphQL APIs, so data that previously required multiple queries can be fetched in a single request against one schema. In this episode, Jeremy chats with the author of Hot Chocolate, Michael Staib, about how .NET developers can implement GraphQL schema stitching with Hot Chocolate.

Thursday, January 21, 2021

ASP.NET Core–Performance tip

Question: What is wrong with the following ASP.NET Core controller:
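A minimal sketch of such an action (Product and the _context DbContext are hypothetical):

[HttpGet]
public IEnumerable<Product> Get()
{
    // The query only executes when the serializer iterates the result
    return _context.Products.Where(p => p.IsAvailable);
}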

No idea?

Take a look at the return type...

Returning IEnumerable from an action results in synchronous collection iteration by the serializer. The result is the blocking of calls and a potential for thread pool starvation.

One possible way to avoid this is by explicitly invoking ToListAsync before returning the result:
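For example (same hypothetical types as above; ToListAsync comes from Entity Framework Core):

[HttpGet]
public async Task<ActionResult<List<Product>>> Get()
{
    // Materialize the results asynchronously before handing them to the serializer
    return await _context.Products
        .Where(p => p.IsAvailable)
        .ToListAsync();
}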

Starting from ASP.NET Core 3.0 you can also use IAsyncEnumerable<T>. This no longer results in synchronous iteration and is as efficient as returning IEnumerable<T>.

Let’s rewrite the example above to use IAsyncEnumerable<T>:
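A sketch with the same hypothetical types (AsAsyncEnumerable also comes from Entity Framework Core):

[HttpGet]
public IAsyncEnumerable<Product> Get()
{
    return _context.Products
        .Where(p => p.IsAvailable)
        .AsAsyncEnumerable();
}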

REMARK: If we take a look at the original example again, it will not actually be an issue in ASP.NET Core 3.0 or higher. Why? Because we allow the framework to choose how to handle the IQueryable<T> we get back from the Where() clause. When using ASP.NET Core 3.0 or higher, the results will be buffered and correctly provided to the serializer. However, this is implicit behavior that doesn’t work in all situations (it depends on the underlying types used).

More information and other performance tips can be found here: https://docs.microsoft.com/en-us/aspnet/core/performance/performance-best-practices

Wednesday, January 20, 2021

ASP.NET Core–Open API–Possible parameter values

I’m currently building an ASP.NET Core API used to convert between document formats. You can upload a document, specify the from and to format and get a converted document back.

The list of supported conversions is dynamically determined based on a list of format providers loaded through a plugin architecture.

To help developers in getting to know the API, I created Open API documentation using Swagger. Unfortunately the generated Open API documentation wasn’t that useful out-of-the-box as you get no hint on the possible “from” and “to” values.

My first idea was to change the string value to an enum, but this would require a recompile every time a new document format is introduced (and make my plugin architecture obsolete). My second idea was to just update the documentation, but in fact that was even worse than my first idea.

Time to find a better solution: I created a custom ParameterFilter that dynamically generates a list of available values based on the loaded plugins.

Here is the ParameterFilter:
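A sketch of such a filter using Swashbuckle’s IParameterFilter; IFormatProviderRegistry is a hypothetical stand-in for the plugin registry that knows which formats are loaded, and the "from"/"to" parameter names are assumptions:

using System.Linq;
using Microsoft.OpenApi.Any;
using Microsoft.OpenApi.Models;
using Swashbuckle.AspNetCore.SwaggerGen;

public class SupportedFormatsParameterFilter : IParameterFilter
{
    private readonly IFormatProviderRegistry _registry;

    public SupportedFormatsParameterFilter(IFormatProviderRegistry registry)
    {
        _registry = registry;
    }

    public void Apply(OpenApiParameter parameter, ParameterFilterContext context)
    {
        if (parameter.Name != "from" && parameter.Name != "to")
            return;

        // Advertise the dynamically discovered formats as the allowed values
        parameter.Schema.Enum = _registry.SupportedFormats
            .Select(format => (IOpenApiAny)new OpenApiString(format))
            .ToList();
    }
}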

And here is the updated Swagger registration:
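And a sketch of the registration (Swashbuckle should be able to resolve the filter’s constructor dependencies from the service collection):

services.AddSwaggerGen(options =>
{
    options.SwaggerDoc("v1", new OpenApiInfo { Title = "Document Conversion API", Version = "v1" });
    options.ParameterFilter<SupportedFormatsParameterFilter>();
});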

My Open API documentation now looks like this:

Tuesday, January 19, 2021

HotChocolate Schema stitching - Remove root types

When using (Hot Chocolate) schema stitching, by default all root query types are merged and available to use in your stitched schema:

This is great as default behavior but not if you want to build up the stitched schema in a different way. To start from a clean slate you can call IgnoreRootTypes():
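A rough sketch with the Hot Chocolate 10 stitching builder (the schema names, URLs and HttpClient registrations are placeholders):

// Register an HttpClient per downstream schema ("customers" and "orders" are placeholders)
services.AddHttpClient("customers", client => client.BaseAddress = new Uri("http://localhost:5001/graphql"));
services.AddHttpClient("orders", client => client.BaseAddress = new Uri("http://localhost:5002/graphql"));

services.AddStitchedSchema(builder => builder
    .AddSchemaFromHttp("customers")
    .AddSchemaFromHttp("orders")
    // Don't merge the remote root types; build up the stitched schema yourself instead
    .IgnoreRootTypes());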

Monday, January 18, 2021

ASP.NET Core - Set ContentType

Today I was looking at a way to set the correct MIME type for a response based on the file extension.

What is a MIME type?

A media type (also known as a Multipurpose Internet Mail Extensions or MIME type) is a standard that indicates the nature and format of a document, file, or assortment of bytes. It is defined and standardized in IETF's RFC 6838.

How do you set the MIME type in ASP.NET Core?

In ASP.NET Core you can set the MIME type of the file you return through the contentType parameter of the File method on the ControllerBase class:
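A minimal sketch (the file store and the hardcoded PDF content type are placeholders):

[HttpGet("{id}")]
public IActionResult Download(int id)
{
    byte[] content = _fileStore.GetFile(id); // hypothetical file store
    return File(content, "application/pdf", "document.pdf");
}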

How to get the MIME type from a file extension?

My first idea was to write a switch statement with the list of file extensions and the corresponding MIME types. But this turned out not to be necessary, as the ASP.NET Core team already provides this functionality out-of-the-box through the FileExtensionContentTypeProvider:
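A sketch of how the provider can be used (FileExtensionContentTypeProvider lives in Microsoft.AspNetCore.StaticFiles; the octet-stream fallback is my own choice):

var provider = new FileExtensionContentTypeProvider();
if (!provider.TryGetContentType(fileName, out var contentType))
{
    // Fall back to a generic binary type when the extension is unknown
    contentType = "application/octet-stream";
}
return File(fileContent, contentType, fileName);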

Friday, January 15, 2021

WSFederation is back!

Great news for everyone who still has some WCF services running. Until recently, only BasicHttpBinding and NetTcpBinding were supported. This limited the usefulness, as most WCF services I encountered were using WSFederationHttpBinding (or the more recent WS2007FederationHttpBinding). These bindings were typically used in scenarios where users authenticate with a security token service (like Active Directory Federation Services).

Last month, the System.ServiceModel.Federation package was released, adding client(!) support for WSFederation.

Remark: You cannot use this binding yet through dotnet-svcutil or Visual Studio’s WCF Web Service Reference Provider Tool, but this will be added in the future.

Thursday, January 14, 2021

NUnit - Combining multiple asserts in one test

The default rule when writing tests is ‘one test, one assert’. There are multiple reasons why this is a good idea:

  1. It becomes very clear what your test is doing and what the success or failure condition is
  2. If there are multiple asserts in your test, the test will fail on the first assert that returns false. The other assertions will never be validated.

However, this sometimes leads to duplication where you are writing the same test multiple times to assert different aspects.

In NUnit you can avoid the second problem by using ‘Assert.Multiple()’.

NUnit will store any failures encountered in the Multiple block and report all of them together. If multiple asserts failed, then all will be reported.
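A minimal sketch (the Customer type is hypothetical):

[Test]
public void CreateCustomer_SetsAllProperties()
{
    var customer = new Customer("Jane", "Doe");

    Assert.Multiple(() =>
    {
        // All three assertions run, even if an earlier one fails
        Assert.That(customer.FirstName, Is.EqualTo("Jane"));
        Assert.That(customer.LastName, Is.EqualTo("Doe"));
        Assert.That(customer.IsActive, Is.True);
    });
}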

Wednesday, January 13, 2021

Integrating the RabbitMQ client in a .NET Core application

I couldn’t find a good example of the best way to integrate the RabbitMQ C# client in a .NET Core application. So time to write a post about it.

I would recommend using a hosted service, either directly through IHostedService or through the BackgroundService base class.

Let’s see how to do it using the BackgroundService:

  • We create a new class that inherits from BackgroundService:
  • We’ll use the StartAsync method to create the ConnectionFactory, Connection and a Channel to listen on:

Remark: Notice the DispatchConsumersAsync = true in the ConnectionFactory configuration. This is important to be able to use an async consumer. If you don’t add this configuration no messages will be picked up by the AsyncEventingBasicConsumer.

  • We also implement the StopAsync method to clean up when our background service is shut down:
  • Now the only thing left is to create a consumer to start receiving messages. We’ll use the ExecuteAsync method for that:
  • Of course we should not forget to register our hosted service (shown at the end of the sketch below).
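Putting these steps together, a rough sketch (the host name, queue name and message handling are placeholders; this assumes the RabbitMQ.Client 6.x API):

using System.Text;
using System.Threading;
using System.Threading.Tasks;
using Microsoft.Extensions.Hosting;
using RabbitMQ.Client;
using RabbitMQ.Client.Events;

public class RabbitMqListenerService : BackgroundService
{
    private IConnection _connection;
    private IModel _channel;

    public override Task StartAsync(CancellationToken cancellationToken)
    {
        var factory = new ConnectionFactory
        {
            HostName = "localhost",        // placeholder
            DispatchConsumersAsync = true  // required for the AsyncEventingBasicConsumer
        };
        _connection = factory.CreateConnection();
        _channel = _connection.CreateModel();
        _channel.QueueDeclare("demo-queue", durable: true, exclusive: false, autoDelete: false);

        return base.StartAsync(cancellationToken);
    }

    protected override Task ExecuteAsync(CancellationToken stoppingToken)
    {
        var consumer = new AsyncEventingBasicConsumer(_channel);
        consumer.Received += async (sender, args) =>
        {
            var message = Encoding.UTF8.GetString(args.Body.ToArray());
            // Handle the message here...
            _channel.BasicAck(args.DeliveryTag, multiple: false);
            await Task.Yield();
        };
        _channel.BasicConsume("demo-queue", autoAck: false, consumer);

        return Task.CompletedTask;
    }

    public override Task StopAsync(CancellationToken cancellationToken)
    {
        _channel?.Close();
        _connection?.Close();
        return base.StopAsync(cancellationToken);
    }
}

Registering the hosted service is then a one-liner in ConfigureServices: services.AddHostedService<RabbitMqListenerService>();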

Tuesday, January 12, 2021

Find the multiplier(s) in your team

Remark: This blog post is dedicated to all the team multipliers I had the pleasure to work with over the years.

A naïve way to measure the productivity of a developer is to look at the individual work output (let’s just ignore how we measure work output in this post; everyone agrees it is the number of lines written). The reality is that most software is developed by a team, where the work output of each member is a function of the work output of all their teammates. Measuring individual productivity can be hard, if not impossible.

Taking this approach would certainly leave out the most important people in your team: the team multipliers. These people may not accomplish a lot on their own, but their effect on the team’s productivity is exponential. They are the ones who can make the difference between success and failure. They typically don’t produce a lot of features themselves, but they make the jobs of their teammates a lot easier, are always there to help, and are willing to do the dirty jobs.

Find the multipliers in your team. Recognize their value and say “thank you”. They are the real heroes in your team…

Monday, January 11, 2021

GraphQL Hot Chocolate–Application Insights monitoring

If you use the default Application Insights configuration to monitor your GraphQL endpoint, there is not that much information you’ll get. For example, when a GraphQL request fails, it still results in a 200 response. Application Insights isn’t aware that you are using GraphQL and doesn’t flag this request as an exception.

Let’s see how we can improve our Application Insights logging by integrating it with Hot Chocolate.

Disclaimer: the code below has been written for Hot Chocolate 10. In Hot Chocolate 11, the diagnostics system is rewritten. I’ll write an update post when I have the chance to update one of my applications to Hot Chocolate 11.

  • The first thing we need to do is create a class that implements the IDiagnosticObserver marker interface and inject the Application Insights TelemetryClient:
  • For every event that we want to track, we need to add a method and specify the corresponding Hot Chocolate event through the DiagnosticName attribute:
  • We create a dictionary to hold all ongoing requests. This is necessary to update the correct TelemetryClient instance:
  • Now we can see if our GraphQL request has errors and in that case mark the request as an exception:
  • As a last step we need to register our Observer on the ServiceCollection through the AddDiagnosticObserver extension method:

Here is the complete code:
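A rough sketch of such an observer, following the steps above. The Hot Chocolate 10 event names ("HotChocolate.Execution.Query" with .Start/.Stop), the namespace of the DiagnosticName attribute and the IQueryContext members used here are assumptions and may need adjusting to the exact version you use:

using System;
using System.Collections.Concurrent;
using HotChocolate.Execution;
using HotChocolate.Execution.Instrumentation;
using Microsoft.ApplicationInsights;
using Microsoft.ApplicationInsights.DataContracts;
using Microsoft.ApplicationInsights.Extensibility;
using Microsoft.Extensions.DiagnosticAdapter; // assumption: provides DiagnosticNameAttribute

public class ApplicationInsightsDiagnosticObserver : IDiagnosticObserver
{
    private readonly TelemetryClient _telemetryClient;

    // Keep track of the telemetry operation per ongoing request (step 3 above)
    private readonly ConcurrentDictionary<IQueryContext, IOperationHolder<RequestTelemetry>> _operations
        = new ConcurrentDictionary<IQueryContext, IOperationHolder<RequestTelemetry>>();

    public ApplicationInsightsDiagnosticObserver(TelemetryClient telemetryClient)
    {
        _telemetryClient = telemetryClient;
    }

    [DiagnosticName("HotChocolate.Execution.Query")]
    public void OnQuery(IQueryContext context)
    {
        // Needs to be present so that the Start/Stop events below are raised.
    }

    [DiagnosticName("HotChocolate.Execution.Query.Start")]
    public void OnQueryStart(IQueryContext context)
    {
        var operation = _telemetryClient.StartOperation<RequestTelemetry>("GraphQL request");
        _operations.TryAdd(context, operation);
    }

    [DiagnosticName("HotChocolate.Execution.Query.Stop")]
    public void OnQueryStop(IQueryContext context)
    {
        if (!_operations.TryRemove(context, out var operation))
        {
            return;
        }

        // Assumption: a failed GraphQL request surfaces its errors on the query result
        if (context.Result is IReadOnlyQueryResult result && result.Errors != null && result.Errors.Count > 0)
        {
            operation.Telemetry.Success = false;
            foreach (var error in result.Errors)
            {
                _telemetryClient.TrackException(error.Exception ?? new Exception(error.Message));
            }
        }

        _telemetryClient.StopOperation(operation);
    }
}

Registration then looks like: services.AddDiagnosticObserver<ApplicationInsightsDiagnosticObserver>();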

Thursday, January 7, 2021

Visual Studio Code Coding Packs

Microsoft introduced the concept of Coding Packs for Visual Studio Code.

What is a Coding Pack?

Coding Packs are specialized installs of Visual Studio Code that are pre-configured for specific coding environments. They are designed to help students and educators ramp up on VS Code more quickly by automating editor and environment configuration. Coding Packs are a single download that performs the following when installed:

  • Installs VS Code
  • Installs a specific runtime (Python, Java, etc.)
  • Installs essential extensions
  • Ensures installs happen under the user scope so as not to require administrator permissions.

These are especially useful to make sure the necessary tools are installed on your students’ PCs when giving a training.

Right now a coding pack exists for Python and for Java, but I hope others will be added in the future:

Configure Visual Studio 2019 to use SourceLink

Source Link allows developers to debug the source code from the libraries they are using. Tools like Visual Studio can step into that source code, which helps you find out what is going on behind the scenes. Let’s see how we can enable Source Link debugging in Visual Studio:

  • Go to Tools > Options > Debugging > Symbols and check if the  ‘NuGet.org Symbol Server’ option is checked. It is recommended to specify a local directory that can be used as a symbol cache to avoid the need to download symbols every time:

  • Disable 'Just My Code' in  Tools > Options > Debugging > General to make the debugger load symbols that are not part of your current solution. Also check that the  'Enable Source Link support' is set (which should be the case by default):

  • When you now try to debug code, you can step into the code through F11. Visual Studio will ask you to download the corresponding source code:

Troubleshooting

In case you get a ‘File Dialog’ asking you to specify the source location, it can be the case that you don’t have access to the source repository.

In that case click on ‘Cancel’ in the dialog. You’ll end up on the 'Source Link Authentication failed' window. Here you can try to authenticate (again) by selecting the 'Authenticate with Source Link' option from the list of available actions:

Wednesday, January 6, 2021

Using System.Diagnostic.DiagnosticSource to intercept database requests– Improvements

Yesterday I blogged about the usage of DiagnosticSource to capture database-specific events. The code we ended up with looked like this:

What is rather unfortunate is that you are working with generic events and that you need to use magic strings and reflection to capture the object data:

A solution exists through the usage of the Microsoft.Extensions.DiagnosticAdapter package. This NuGet package uses code generation to avoid the cost of reflection.

dotnet add package Microsoft.Extensions.DiagnosticAdapter

  • Update the implementation to use the SubscribeWithAdapter extension method.
  • Now we no longer need to implement the IObserver<KeyValuePair<string, object>> interface. Instead, for each event that we want to handle, we declare a separate method and mark it with the DiagnosticName attribute. The parameters of these methods are the parameters of the event being processed. Both changes are sketched below.
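The sketch (the SqlClient listener name comes from yesterday’s post; the payload parameter names and types are assumptions based on how the adapter maps event properties to method parameters):

using System;
using System.Data.Common;
using System.Diagnostics;
using Microsoft.Extensions.DiagnosticAdapter;

public class SqlClientObserver : IObserver<DiagnosticListener>
{
    public void OnNext(DiagnosticListener listener)
    {
        if (listener.Name == "SqlClientDiagnosticListener")
        {
            // Routes events to the [DiagnosticName] methods below using generated code
            // instead of reflection
            listener.SubscribeWithAdapter(this);
        }
    }

    public void OnCompleted() { }
    public void OnError(Exception error) { }

    // One method per event we care about; the parameter names must match the
    // property names on the event payload (e.g. 'Command')
    [DiagnosticName("System.Data.SqlClient.WriteCommandBefore")]
    public void OnCommandBefore(DbCommand command)
    {
        Console.WriteLine($"Executing: {command.CommandText}");
    }

    [DiagnosticName("System.Data.SqlClient.WriteCommandAfter")]
    public void OnCommandAfter(DbCommand command)
    {
        Console.WriteLine($"Executed: {command.CommandText}");
    }
}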

Tuesday, January 5, 2021

Using System.Diagnostic.DiagnosticSource to intercept database requests

.NET has built-in integration with DiagnosticSource. Through this class, libraries can send events and applications can subscribe to these events. Each such event contains additional information that can help you diagnose what’s going on inside your application. The great thing is that DiagnosticSource is already used by libraries like AspNetCore, EntityFrameworkCore, HttpClient, and SqlClient, which makes it possible for developers to intercept incoming/outgoing HTTP requests, database queries, and so on.

Let’s create a small example that allows us to intercept database requests coming from SqlClient:

dotnet add package System.Diagnostics.DiagnosticSource

  • Create an Observer class that implements the IObserver<DiagnosticListener> interface. We’ll complete the implementation later.
  • Subscribe an instance of the Observer class through the DiagnosticListener.AllListeners object:
  • Complete the Observer class implementation by only observing events coming from the SqlClientDiagnosticListener:
  • As a last step we need to implement a second interface, IObserver<KeyValuePair<string, object>>. This requires us to implement a method IObserver<KeyValuePair<string, object>>.OnNext that takes a KeyValuePair<string, object> as parameter, where the key is the name of the event and the value is an object that gives us some extra context. The complete observer is sketched below:
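(The "SqlClientDiagnosticListener" name is the one used by System.Data.SqlClient.)

using System;
using System.Collections.Generic;
using System.Diagnostics;

public class SqlClientListener : IObserver<DiagnosticListener>, IObserver<KeyValuePair<string, object>>
{
    // Called once for every DiagnosticListener in the application;
    // we only subscribe to the one used by SqlClient
    public void OnNext(DiagnosticListener listener)
    {
        if (listener.Name == "SqlClientDiagnosticListener")
        {
            listener.Subscribe(this);
        }
    }

    // Called for every event raised by the SqlClient listener
    public void OnNext(KeyValuePair<string, object> value)
    {
        Console.WriteLine(value.Key);   // the name of the event
        Console.WriteLine(value.Value); // an anonymous object with the event data
    }

    public void OnCompleted() { }
    public void OnError(Exception error) { }
}

Subscribing it happens through DiagnosticListener.AllListeners, for example: DiagnosticListener.AllListeners.Subscribe(new SqlClientListener());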
If we now run our application and do a database query (through Entity Framework, NHibernate or another library that is using the SqlClient) you’ll see output like this:
System.Data.SqlClient.WriteConnectionOpenBefore
{ OperationId = 3da1b5d4-9ce1-4f28-b1ff-6a5bfc9d64b8, Operation = OpenAsync, Connection = System.Data.SqlClient.SqlConnection, Timestamp = 26978341062 }
System.Data.SqlClient.WriteConnectionOpenAfter
{ OperationId = 3da1b5d4-9ce1-4f28-b1ff-6a5bfc9d64b8, Operation = OpenAsync, ConnectionId = 84bd0095-9831-456b-8ebc-cb9dc2017368, Connection = System.Data.SqlClient.SqlConnection, Statistics = System.Data.SqlClient.SqlStatistics+StatisticsDictionary, Timestamp = 26978631500 }
System.Data.SqlClient.WriteCommandBefore
{ OperationId = 5c6d300c-bc49-4f80-9211-693fa1e2497c, Operation = ExecuteReaderAsync, ConnectionId = 84bd0095-9831-456b-8ebc-cb9dc2017368, Command = System.Data.SqlClient.SqlCommand }
System.Data.SqlClient.WriteCommandAfter
{ OperationId = 5c6d300c-bc49-4f80-9211-693fa1e2497c, Operation = ExecuteReaderAsync, ConnectionId = 84bd0095-9831-456b-8ebc-cb9dc2017368, Command = System.Data.SqlClient.SqlCommand, Statistics = System.Data.SqlClient.SqlStatistics+StatisticsDictionary, Timestamp = 26978709490 }
System.Data.SqlClient.WriteConnectionCloseBefore
{ OperationId = 3f6bfd8f-e5f6-48b7-82c7-41aeab881142, Operation = Close, ConnectionId = 84bd0095-9831-456b-8ebc-cb9dc2017368, Connection = System.Data.SqlClient.SqlConnection, Statistics = System.Data.SqlClient.SqlStatistics+StatisticsDictionary, Timestamp = 26978760625 }
System.Data.SqlClient.WriteConnectionCloseAfter
{ OperationId = 3f6bfd8f-e5f6-48b7-82c7-41aeab881142, Operation = Close, ConnectionId = 84bd0095-9831-456b-8ebc-cb9dc2017368, Connection = System.Data.SqlClient.SqlConnection, Statistics = System.Data.SqlClient.SqlStatistics+StatisticsDictionary, Timestamp = 26978772888 }

As you can see, we capture different events, where every event includes a set of parameters.


Monday, January 4, 2021

Improve the debugging experience with deterministic builds

When creating a NuGet package from a .NET Core project, I noticed the following warning in the NuGet Package Explorer:

Source Link (which I’ll talk about in another post) is Valid but the build is marked as ‘Non deterministic’.

What does this mean?

By default builds are non-deterministic, meaning there is no guarantee that building the same code twice (on the same or different machines) will produce exactly the same binary output. Deterministic builds are important as they enable verification that the resulting binary was built from the specified source and provide traceability.

How to fix it?

To enable deterministic builds, the ContinuousIntegrationBuild property should be set to true.

Important: This property should not be enabled during local dev as the debugger won't be able to find the local source files.

Therefore, you should use your CI system's variable to set it conditionally.

For Azure Pipelines, the TF_BUILD variable can be used:

<PropertyGroup Condition="'$(TF_BUILD)' == 'true'">
  <ContinuousIntegrationBuild>true</ContinuousIntegrationBuild>
</PropertyGroup>

For GitHub Actions, the variable is GITHUB_ACTIONS, so the result would be:

<PropertyGroup Condition="'$(GITHUB_ACTIONS)' == 'true'">
  <ContinuousIntegrationBuild>true</ContinuousIntegrationBuild>
</PropertyGroup>

Friday, January 1, 2021

Customize your build with Directory.Build.props and Directory.Build.targets

With the release of MSBuild version 15, you no longer need to add custom properties to every project in your solution. Instead you can create a single file called Directory.Build.props in the root folder of your source. When MSBuild runs, it will search for this file and load it when found.

I use this on my projects to set common properties:
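For example, something along these lines (the property values are just an illustration of the kind of things I set):

<Project>
  <PropertyGroup>
    <LangVersion>latest</LangVersion>
    <TreatWarningsAsErrors>true</TreatWarningsAsErrors>
    <Company>My Company</Company>
  </PropertyGroup>
</Project>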

Directory.Build.props is imported very early in the build pipeline and can be overridden elsewhere in the project file or in imported files. If you want to override properties set in a specific project, use Directory.Build.targets instead. It is used and discovered in the same way, but it is loaded much later in the build pipeline. So, it can override properties and targets defined in specific projects.

More information: https://docs.microsoft.com/nl-nl/visualstudio/msbuild/customize-your-build?view=vs-2019#directorybuildprops-and-directorybuildtargets