
Posts

Showing posts from March, 2019

ASP.NET 4.5–System.Web.Http is missing

After upgrading a project to ASP.NET 4.5, a warning appeared next to the System.Web.Http reference. Visual Studio could no longer find the related DLL. Starting from Visual Studio 2017, this DLL is no longer available in the GAC but should be added through NuGet. You can find the correct assembly in the Microsoft.AspNet.WebApi.Core NuGet package. Remark: don’t be confused by the ‘Core’ in the name. This package is not meant for ASP.NET Core.
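If you prefer the Package Manager Console, the package id to install is Microsoft.AspNet.WebApi.Core:

```shell
# Package Manager Console: despite the "Core" in the id,
# this package targets classic ASP.NET Web API, not ASP.NET Core
Install-Package Microsoft.AspNet.WebApi.Core
```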

ASP.NET Core 2.2 - SignalR Breaking changes

After a seemingly flawless upgrade to ASP.NET Core 2.2, we got into trouble when using SignalR hubs. In our code we had been rather lazy and had allowed CORS for everything. In ASP.NET Core 2.2 it is no longer allowed to combine AllowAnyOrigin and AllowCredentials. When you try to use the two together, you’ll see the following warning in the output window: warn: Microsoft.AspNetCore.Cors.Infrastructure.CorsService The CORS protocol does not allow specifying a wildcard (any) origin and credentials at the same time. Configure the policy by listing individual origins if credentials needs to be supported. The problem is that this warning breaks the preflight OPTIONS check of SignalR, so the client can no longer connect to the hubs. To fix this, you have to remove the AllowAnyOrigin call and instead specify the origins explicitly (like it should have been in the first place):
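A minimal sketch of the corrected CORS registration; the origin URL and policy name are placeholders for your real client origins:

```csharp
using Microsoft.Extensions.DependencyInjection;

public void ConfigureServices(IServiceCollection services)
{
    services.AddCors(options =>
    {
        options.AddPolicy("SignalRPolicy", builder => builder
            .WithOrigins("https://app.example.com") // placeholder: list your real client origins
            .AllowAnyHeader()
            .AllowAnyMethod()
            .AllowCredentials()); // allowed again now that origins are explicit
    });
}
```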

.NET Core–Unit Tests Configuration

Inside my (.NET Core) unit tests I wanted to load my configuration, so I created a small helper method that loads it from a JSON file: To make this code compile, you have to add two NuGet packages to your test project: To enable the SetBasePath method: https://www.nuget.org/packages/Microsoft.Extensions.Configuration.FileExtensions/ To enable the AddJsonFile method: https://www.nuget.org/packages/Microsoft.Extensions.Configuration.Json/ Now you can invoke this method inside your test setup (I’m using NUnit, so I use the OneTimeSetUp method) and pass in the test directory (available through TestContext.CurrentContext.TestDirectory for NUnit). Remark: don’t forget to set the appsettings.json file’s ‘Copy to Output Directory’ property to ‘Copy always’.
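A sketch of such a helper; the class and method names are mine, but the two builder calls map one-to-one onto the NuGet packages listed above:

```csharp
using Microsoft.Extensions.Configuration;

public static class ConfigurationHelper
{
    // basePath: pass TestContext.CurrentContext.TestDirectory from NUnit
    public static IConfigurationRoot GetConfiguration(string basePath)
    {
        return new ConfigurationBuilder()
            .SetBasePath(basePath)                            // Microsoft.Extensions.Configuration.FileExtensions
            .AddJsonFile("appsettings.json", optional: false)  // Microsoft.Extensions.Configuration.Json
            .Build();
    }
}
```

Called from a `[OneTimeSetUp]` method this gives you the same `IConfiguration` experience as in the application itself.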

Entity Framework Core–Configure Warnings

Some posts ago I complained about my experience when using EF Core in combination with PostgreSQL. One of the things I didn’t like was that when EF Core couldn’t translate a LINQ query, it silently fell back to client-side evaluation and executed the query on the in-memory collection. This turns out not to be specific to the EF Core provider for PostgreSQL, but is in fact general EF Core behavior. I just hadn’t noticed that the same thing happened with SQL Server, as the EF Core provider for SQL Server is a lot smarter and can translate more LINQ queries correctly. I said the following in the previous post: The only way to discover this is through the logs, where you see a warning when the provider couldn’t translate your LINQ statement. If you don’t check the logs, you aren’t even aware of the issue. Yesterday I discovered that you can change this behavior. To do so, I had to call ConfigureWarnings() and set the following option: Now every time a query falls back to client-side evaluation, an exception is thrown instead of a warning.
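A sketch of the relevant configuration, assuming an EF Core 2.x DbContext; QueryClientEvaluationWarning lives in Microsoft.EntityFrameworkCore.Diagnostics, and the connection string is a placeholder:

```csharp
using Microsoft.EntityFrameworkCore;
using Microsoft.EntityFrameworkCore.Diagnostics;

public class AppDbContext : DbContext
{
    protected override void OnConfiguring(DbContextOptionsBuilder optionsBuilder)
    {
        optionsBuilder
            .UseNpgsql("<connection string>") // placeholder
            .ConfigureWarnings(warnings =>
                // turn the silent client-evaluation warning into a hard failure
                warnings.Throw(RelationalEventId.QueryClientEvaluationWarning));
    }
}
```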

PostgreSQL performance monitoring using pg_stat_statements

To monitor the performance of one of our applications I asked to activate the pg_stat_statements module on PostgreSQL. From the documentation: The pg_stat_statements module provides a means for tracking execution statistics of all SQL statements executed by a server. The module must be loaded by adding pg_stat_statements to shared_preload_libraries in postgresql.conf, because it requires additional shared memory. This means that a server restart is needed to add or remove the module. So I asked the admins to enable the module and restart the server. Once I got confirmation that the restart was done, I tried to query the view: SELECT * FROM pg_stat_statements ORDER BY total_time DESC; Unfortunately this resulted in an error message: Query failed: ERROR: relation "pg_stat_statements" does not exist Let’s check if the module is indeed available: SELECT * FROM pg_available_extensions That seems OK. It turns out that loading the module is not enough: the extension still has to be created in the database.
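For completeness, the usual remedy is to create the extension in the target database after the module is loaded; this is standard pg_stat_statements setup:

```sql
-- Loading the module only reserves shared memory; the extension
-- must still be created in each database where you want the view:
CREATE EXTENSION IF NOT EXISTS pg_stat_statements;

-- After that, the query from the post works:
SELECT * FROM pg_stat_statements ORDER BY total_time DESC;
```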

Docker–Getting a SQL Server(Express) instance up and running

To run some local tests, I needed a database. I didn’t want to spend my time installing a local SQL Server (Express) instance, so Docker to the rescue! Using SQL Server Express is easy thanks to the mssql-server-linux image. To create an instance:

docker run -e "ACCEPT_EULA=Y" -e "SA_PASSWORD=Passw0rd!123" -e "MSSQL_PID=Express" -p 1433:1433 -d --name=sqlexpress microsoft/mssql-server-linux:latest

And if you want to keep the data when the container is shut down, you can create an instance using a mounted volume:

docker run -e "ACCEPT_EULA=Y" -e "SA_PASSWORD=Passw0rd!123" -e "MSSQL_PID=Express" -p 1433:1433 -d --name=sqlexpress -v //d/data/sql:/var/opt/mssql/data microsoft/mssql-server-linux:latest

Can’t imagine my life anymore without Docker…

Developers, you don’t need to ask permission…

Reading “The programmer as decision maker” by Mark Seemann hit a nerve with me. You didn’t read it yet? OK, go do that first… … Done reading? OK, let me add my 2 cents. I completely agree with the point that Mark makes. Every day in a programmer’s life is a combination of micro- and macro-decisions. Unfortunately, programmers are not always aware that they are making these decisions, which gives them the feeling that they are not in control. This is really sad, especially if you know that the 3 things that motivate us the most are ‘Autonomy’, ‘Mastery’ and ‘Purpose’. We as developers have the luxury that we can quite easily achieve all 3, but we are not aware that these things are in our control… I sometimes hear statements like: “Everything is decided for me”, “I’m not involved in the decision making”, “Our PM will never allow us to do X”, where in fact these people are already making decisions (good and bad) all the time. So I would say to all developers out there: “Emb

NHibernate–PostgreSQL - Dialect does not support DbType.Guid Parameter name: typecode

When trying to create a new ISession instance, NHibernate threw the following error message: Dialect does not support DbType.Guid Parameter name: typecode at NHibernate.Dialect.TypeNames.Get(DbType typecode) at NHibernate.Mapping.Table.SqlTemporaryTableCreateString(Dialect dialect, IMapping mapping) at NHibernate.Mapping.PersistentClass.PrepareTemporaryTables(IMapping mapping, Dialect dialect) at NHibernate.Impl.SessionFactoryImpl..ctor(Configuration cfg, IMapping mapping, Settings settings, EventListeners listeners) at NHibernate.Cfg.Configuration.BuildSessionFactory() at FluentNHibernate.Cfg.FluentConfiguration.BuildSessionFactory() As you can probably guess from the title, we are using PostgreSQL, and it turns out that the default Dialect doesn’t support Guids. Luckily this was fixed a long time ago and the only thing I had to do was explicitly specify a newer Dialect version:
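A minimal Fluent NHibernate sketch of such a configuration; PostgreSQL82 selects a dialect that maps DbType.Guid to the native uuid type (the connection string is a placeholder):

```csharp
using FluentNHibernate.Cfg;
using FluentNHibernate.Cfg.Db;

// PostgreSQL82 (instead of the default/Standard configuration)
// uses a dialect that supports DbType.Guid
var sessionFactory = Fluently.Configure()
    .Database(PostgreSQLConfiguration.PostgreSQL82
        .ConnectionString("<connection string>")) // placeholder
    .BuildSessionFactory();
```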

NHibernate–PostgreSQL–Naming strategy

In a previous post I mentioned we were using PostgreSQL together with Entity Framework Core. One of the things I stumbled over when trying to use NHibernate instead was that Entity Framework Core uses Pascal casing to generate the names of tables, columns and queries, whereas NHibernate uses lowercase names (which is in fact more suitable for PostgreSQL, but that is another discussion). One of the reasons I like NHibernate so much is that it is fully customizable, so we can easily change the naming strategy to take the EF Core conventions into account. Start by creating a custom naming strategy: As a second step, register this naming strategy on your NHibernate configuration object (in this example through Fluent NHibernate):
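A sketch of what such a naming strategy could look like; the class name and the backtick-quoting approach are illustrative, while the interface members are NHibernate’s INamingStrategy:

```csharp
using NHibernate.Cfg;

// Keeps the Pascal-cased EF Core names by quoting identifiers,
// so PostgreSQL doesn't fold them to lowercase.
public class PascalCaseNamingStrategy : INamingStrategy
{
    public string ClassToTableName(string className) => Quote(className);
    public string PropertyToColumnName(string propertyName) => Quote(propertyName);
    public string TableName(string tableName) => Quote(tableName);
    public string ColumnName(string columnName) => Quote(columnName);
    public string PropertyToTableName(string className, string propertyName) => Quote(propertyName);
    public string LogicalColumnName(string columnName, string propertyName) =>
        string.IsNullOrEmpty(columnName) ? propertyName : columnName;

    // Backticks tell NHibernate to apply the dialect's own quoting (double quotes on PostgreSQL)
    private static string Quote(string name) => "`" + name + "`";
}
```

With Fluent NHibernate it can then be registered through `.ExposeConfiguration(cfg => cfg.SetNamingStrategy(new PascalCaseNamingStrategy()))`.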

My impressions when using Entity Framework Core with PostgreSQL

On one of my projects we are using Entity Framework Core. Although my experience combining Entity Framework Core and SQL Server was quite OK, I can’t say the same when using the Entity Framework Core provider for PostgreSQL (https://www.npgsql.org/efcore/index.html). Although you get the impression that everything works, the moment you have a look at the logs you see 2 obvious issues: For a lot of LINQ statements the provider cannot translate them correctly into a working SQL statement. What happens is that everything is loaded in memory and the expression is executed on the in-memory collection. The only way to discover this is through the logs, where you see a warning when the provider couldn’t translate your LINQ statement. If you don’t check the logs, you aren’t even aware of the issue. The generated SQL statements are far from optimal. For example, every time you add an Include, an extra correlated subquery is generated, which turns out to be quite slow on PostgreSQL

Azure DevOps - Create a prerelease nuget package using versionsuffix

Creating a prerelease package is not that hard. The dotnet core task allows you to invoke the pack command and gives you the option to use Automatic package versioning. This gives you 3 options to expand your package version: ‘Use the date and time’: this will generate a SemVer-compliant version formatted as X.Y.Z-ci-datetime, where you choose X, Y and Z. ‘Use an environment variable’: this allows you to select an environment variable. ‘Use the build number’: this will use the build number to version your package. Note: under Options, set the build number format to ‘$(BuildDefinitionName)_$(Year:yyyy).$(Month).$(DayOfMonth)$(Rev:.r)’. If none of these options works for you, know that there is another alternative: by adding a VersionSuffix build property you can specify any prerelease value you want.
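As a sketch, the suffix can for example be passed on the command line at pack time (the suffix value and package name are illustrative):

```shell
# Produces e.g. MyPackage.1.2.3-preview1.nupkg,
# where 1.2.3 comes from the VersionPrefix in the csproj
dotnet pack --configuration Release /p:VersionSuffix=preview1
```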

Azure Data Studio–PostgreSQL support

With the latest release of Azure Data Studio, a new (preview) extension was introduced that supports PostgreSQL. Enabling the extension: Open Azure Data Studio. Click on the Extensions tab on the left. Click in the ‘Search Extensions in Marketplace’ textbox and start typing ‘PostgreSQL’. You should find the PostgreSQL extension from Microsoft. Select the extension and click Install.

.NET Core - The nuget command failed with exit code(1) and error(System.InvalidCastException: Unable to cast object of type 'System.String' to type 'NuGet.Frameworks.NuGetFramework'.

After creating a .NET Core application, I configured our build pipeline to restore the NuGet packages, build the application and publish it. Unfortunately the build failed almost immediately, specifically on the ‘Restore NuGet packages’ step, with the following error message: The nuget command failed with exit code(1) and error(System.InvalidCastException: Unable to cast object of type 'System.String' to type 'NuGet.Frameworks.NuGetFramework'. Here are the related build logs: 2019-03-15T15:58:27.7657780Z Attempting to pack file: D:\b\4\agent\_work\83\s\SOFACore\SOFACore.Logging.Exceptions\SOFACore.Logging.Exceptions.csproj 2019-03-15T15:58:27.7706605Z [command]D:\b\4\agent\_work\_tool\NuGet\4.1.0\x64\nuget.exe pack D:\b\4\agent\_work\83\s\SOFACore\SOFACore.Logging.Exceptions\SOFACore.Logging.Exceptions.csproj -NonInteractive -OutputDirectory D:\b\4\agent\_work\83\a -Properties Configuration=Release -version 1.0.0-CI-20190315-165827 -Verbosit

Azure DevOps (Server) - Exempt from policy enforcement

Last week a colleague asked me to explain the concept of pull requests using our Azure DevOps Server instance. I followed the steps as described here to activate a branch policy on their master branch. Then I made a local change, committed it and tried to push it to the master branch, but instead of getting an error stating that I couldn’t push to this branch, it succeeded! I tried it a few times and every time I was able to push, no matter what policies I activated. Was it a bug? Or did I do something wrong? It turned out to be neither. Let’s find out what happened: Go to the Code –> Branches section inside Azure DevOps (TFS). Click on the 3 dots (…) and choose Branch Security. On the Security screen, I clicked on my user account and had a look at the permissions on the right. I had the ‘Exempt from policy enforcement’ permission enabled. This overrules all policies and explains why I was able to push my changes. Some extra notes from the documentation

Orleans–Getting started series

If you are new to the Virtual Actor model and you want to learn Orleans , the open source actor framework built by Microsoft research, I can recommend the following blog series by Russel Hammett Jr: Getting Started with Microsoft Orleans Microsoft Orleans — Reusing Grains and Grain State Updating Orleans Project to be more ready for new Orleans Examples! Microsoft Orleans — Reporting Dashboard Microsoft Orleans — Code Generation Issue? Microsoft Orleans - Dependency Injection Microsoft Orleans — Reminders and grains calling grains Microsoft Orleans — Easily switching between “development” and “production” configurations. Microsoft Orleans — Observers Microsoft Orleans - Dashboard Update — CPU/Memory Stats

ASP.NET Core Performant logging

One of the hidden features of ASP.NET Core is the support for LoggerMessage. From the documentation: LoggerMessage features create cacheable delegates that require fewer object allocations and reduced computational overhead compared to logger extension methods, such as LogInformation, LogDebug, and LogError. For high-performance logging scenarios, use the LoggerMessage pattern. LoggerMessage provides the following performance advantages over logger extension methods: Logger extension methods require "boxing" (converting) value types, such as int, into object. The LoggerMessage pattern avoids boxing by using static Action fields and extension methods with strongly-typed parameters. Logger extension methods must parse the message template (named format string) every time a log message is written. LoggerMessage only requires parsing a template once when the message is defined. The best way to use it is through some extension methods:
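A minimal sketch of the pattern; the event id, message template and method name are illustrative:

```csharp
using System;
using Microsoft.Extensions.Logging;

public static class LoggerExtensions
{
    // The delegate is created once and cached in a static field:
    // no boxing of the argument, no re-parsing of the template per call.
    private static readonly Action<ILogger, string, Exception> _orderReceived =
        LoggerMessage.Define<string>(
            LogLevel.Information,
            new EventId(1000, nameof(OrderReceived)),
            "Order {OrderId} received");

    // Strongly-typed extension method wrapping the cached delegate
    public static void OrderReceived(this ILogger logger, string orderId) =>
        _orderReceived(logger, orderId, null);
}
```

Call sites then simply use `logger.OrderReceived("12345")` instead of `logger.LogInformation(...)`.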

Angular–Global Error Handling

By default, Angular comes with its own ErrorHandler. This error handler captures all errors inside our Angular app and prevents the app from crashing. If you want to implement your own error handling logic, this is a good place to start. To do so, we create a new class that inherits from ErrorHandler: Inside this error handler we use our own ErrorService: This ErrorService exposes an observable that can be used to subscribe to errors in different places. Finally, the ErrorHandler is registered inside our AppModule:
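A sketch of the pieces described above; the class names, the `report` method and the observable name are my own, not necessarily the post’s:

```typescript
import { ErrorHandler, Injectable, NgModule } from '@angular/core';
import { Subject } from 'rxjs';

@Injectable({ providedIn: 'root' })
export class ErrorService {
  private errors = new Subject<any>();
  // Subscribe to this observable anywhere to react to errors
  readonly errors$ = this.errors.asObservable();
  report(error: any): void { this.errors.next(error); }
}

@Injectable()
export class GlobalErrorHandler extends ErrorHandler {
  constructor(private errorService: ErrorService) { super(); }
  handleError(error: any): void {
    this.errorService.report(error); // push the error onto the service
    super.handleError(error);        // keep Angular's default console logging
  }
}

@NgModule({
  providers: [{ provide: ErrorHandler, useClass: GlobalErrorHandler }]
})
export class AppModule {}
```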

Serilog–Filter expressions

One of the lesser known features inside Serilog is the support for filter expressions. These give you a SQL-like syntax to filter your log messages. To enable this feature you have to install the Serilog.Filters.Expressions NuGet package. More information: https://nblumhardt.com/2017/01/serilog-filtering-dsl/
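A minimal sketch; the expression string is illustrative, and the string overload of ByExcluding comes from the Serilog.Filters.Expressions package:

```csharp
using Serilog;

var logger = new LoggerConfiguration()
    // SQL-like expression: drop all events whose StatusCode property is below 400
    .Filter.ByExcluding("StatusCode < 400")
    .WriteTo.Console() // requires the Serilog.Sinks.Console package
    .CreateLogger();
```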

Serilog–Sub loggers

One of the lesser known features inside Serilog is the support for sub-loggers. These allow you to redirect (part of) the log traffic to different sinks based on certain conditions. To create and use a sub-logger you have to use the WriteTo.Logger() method, on which you can configure a whole new logger with its own enrichers, filters and sinks: In this example, all log data with warning level or above (including data coming from Microsoft) is written to warning.txt, and all Microsoft-related data is logged to microsoft.txt. More information here: https://nblumhardt.com/2016/07/serilog-2-write-to-logger/
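A sketch of the setup the example describes, assuming the Serilog.Sinks.File package for the file sinks:

```csharp
using Serilog;
using Serilog.Filters;

var logger = new LoggerConfiguration()
    .WriteTo.Logger(lc => lc
        .MinimumLevel.Warning()                 // warnings and above, from any source
        .WriteTo.File("warning.txt"))
    .WriteTo.Logger(lc => lc
        .Filter.ByIncludingOnly(Matching.FromSource("Microsoft")) // Microsoft.* sources only
        .WriteTo.File("microsoft.txt"))
    .CreateLogger();
```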

Serilog–Code Analyzer

With the introduction of Roslyn as the compiler platform in Visual Studio, we got support for Roslyn analyzers. If you have never heard of them, read this great introduction: https://andrewlock.net/creating-a-roslyn-analyzer-in-visual-studio-2017/ . Although creating your own analyzer is not that easy, using one is. And there are a lot of situations where an analyzer can prevent you from making some stupid mistakes. One example I liked is for Serilog. Serilog is a structured logging framework where your messages are logged using message templates. Parameters used inside these message templates are serialized and stored as separate properties on the log event, giving you great flexibility in searching and filtering through log data. Here is an example from the Serilog website: The Position and Elapsed properties are stored separately from the message. The problem is that you can easily make a mistake: although the message template expects 2 parameters, it is possible
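A sketch of the kind of mistake such an analyzer catches, based on the Position/Elapsed example the post mentions:

```csharp
using Serilog;

var position = new { Latitude = 25, Longitude = 134 };
var elapsedMs = 34;

// Correct: two placeholders, two arguments
Log.Information("Processed {@Position} in {Elapsed} ms", position, elapsedMs);

// Compiles fine, but the template expects 2 arguments and only gets 1;
// a Serilog analyzer flags this mismatch at design time
Log.Information("Processed {@Position} in {Elapsed} ms", position);
```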

SQL Server–Row vs Page compression

Row compression is one of those performance tricks that can have a big impact on your SQL Server database. I got a question from a colleague about the difference with page compression, so here is the (short) answer: row compression is a subset of page compression. This means that when page compression is enabled, row compression is active as well. So what does page compression add to the table (no pun intended)? Page compression combines the following techniques, in this order: Row compression, Prefix compression, Dictionary compression. The difference between those is well explained in the SQL Server documentation: https://docs.microsoft.com/en-us/previous-versions/sql/sql-server-2012/cc280464(v=sql.110) If you are in doubt whether you should enable page compression or not, the following article will give you some guidance: https://docs.microsoft.com/en-us/previous-versions/sql/sql-server-2008/dd894051(v=sql.100)
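For reference, compression is enabled per table (or index) with a rebuild; the table name is illustrative:

```sql
-- Enable row compression only
ALTER TABLE dbo.Orders REBUILD WITH (DATA_COMPRESSION = ROW);

-- Enable page compression (implies row + prefix + dictionary compression)
ALTER TABLE dbo.Orders REBUILD WITH (DATA_COMPRESSION = PAGE);
```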