

Showing posts from November, 2017

The request was aborted: could not create SSL/TLS secure channel–Part 1

A colleague asked for help with the following problem: he has an ASP.NET MVC website that talks to an ASP.NET Web API backend. In development everything works as expected, but in the acceptance environment he suddenly started to get TLS errors when the HttpClient invoked a call to the backend: The request was aborted: Could not create SSL/TLS secure channel. Let me take you through the journey that brought us to our final solution.

Part 1 – The unexpected .NET Framework update. What we found especially strange was that it had worked before and that the errors only started to appear recently. This put us on the path of investigating what had changed recently. One of the things that happened was an upgrade to .NET Framework 4.6. Could it be? In the .NET documentation we found that as of .NET 4.6, HttpClient defaults to TLS 1.2. Maybe that caused the error? We updated our code to force the system to use TLS 1.0 (see the sketch below). This worked, but now we were using an older (less secure) TLS version.
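The snippet in the original post was an image; a minimal reconstruction using the well-known ServicePointManager API:

```csharp
using System.Net;

// Force all outgoing connections to use TLS 1.0.
// This restored connectivity, but downgrades to an older, less secure protocol.
ServicePointManager.SecurityProtocol = SecurityProtocolType.Tls;
```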

Angular–OnPush Change detection

I always had a hard time explaining how and why OnPush change detection works in Angular. But recently I found the following post at Angular University: https://blog.angular-university.io/onpush-change-detection-how-it-works/ In my opinion, the best explanation of this topic ever!

Angular: Unsubscribe observables

When you subscribe to an observable in Angular, a subscription is created. To avoid memory leaks in your Angular components, it is important that you unsubscribe from any subscription in the OnDestroy lifecycle hook (see the sketch after this list). Although you could think this is a good general rule, there are a lot of exceptions where Angular does the right thing and handles the unsubscribe for you:

- AsyncPipe: if you are using observable streams via the AsyncPipe, you do not need to worry about unsubscribing. The async pipe takes care of subscribing and unsubscribing for you.
- Router observables: the Router manages the observables it provides and localizes the subscriptions. The subscriptions are cleaned up when the component is destroyed, protecting against memory leaks, so we don't need to unsubscribe from the route params observable.
- Http observables: Http observables produce finite values and don't require an unsubscribe.

As a general conclusion, in most cases you don't need to explicitly unsubscribe yourself.
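A minimal sketch of the manual unsubscribe pattern for the cases Angular does not handle for you, using RxJS 5-era imports (the component is made up for illustration):

```typescript
import { Component, OnDestroy, OnInit } from '@angular/core';
import { Subscription } from 'rxjs/Subscription';
import { Observable } from 'rxjs/Observable';
import 'rxjs/add/observable/interval';

@Component({
  selector: 'app-ticker',
  template: '{{ tick }}'
})
export class TickerComponent implements OnInit, OnDestroy {
  tick: number;
  private subscription: Subscription;

  ngOnInit() {
    // A long-lived observable: without an unsubscribe it keeps firing
    // (and keeps the component alive) after the component is destroyed.
    this.subscription = Observable.interval(1000)
      .subscribe(value => this.tick = value);
  }

  ngOnDestroy() {
    // Clean up in the OnDestroy lifecycle hook to avoid a memory leak.
    this.subscription.unsubscribe();
  }
}
```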

Progressive Web App–Redirect from HTTP to HTTPS

I’m currently working on a Progressive Web App (PWA) using ASP.NET Core. After creating our initial setup, I used the Google Lighthouse Chrome extension to check my application. The results looked OK; I only had one failed audit: “Does not redirect HTTP traffic to HTTPS”. Let’s fix this by adding the ASP.NET Core Rewrite middleware. If you need to specify a port, you can add some extra parameters; both variants are sketched below.
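The snippets in the original post were images; a minimal reconstruction using the Microsoft.AspNetCore.Rewrite package (the status code and port are just examples, and app is the IApplicationBuilder in Startup.Configure):

```csharp
using Microsoft.AspNetCore.Rewrite;

// In Startup.Configure, before the static files middleware:
app.UseRewriter(new RewriteOptions().AddRedirectToHttps());

// If you need to specify a status code and a port:
app.UseRewriter(new RewriteOptions().AddRedirectToHttps(301, 44300));
```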

Entity Framework 6.2–Model Cache

Entity Framework 6.2 introduces the concept of a Model Cache. This gives you the ability to use a prebuilt edmx. Why is this useful? By default, Entity Framework (not Core) will generate an EDMX behind the scenes at startup. If you have a rather large EF model, this can take a lot of time. How to configure it? You have to use the new SetModelStore method to apply the DefaultDbModelStore (see the sketch below). This class compares the timestamp of the assembly containing your context against that of the edmx. If they do not match, the model cache is deleted and rebuilt.
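A minimal sketch of the configuration via a DbConfiguration class (the class name is made up; the cache location used here is the directory of the context assembly):

```csharp
using System.Data.Entity;
using System.Data.Entity.Infrastructure;
using System.IO;

public class CachingDbConfiguration : DbConfiguration
{
    public CachingDbConfiguration()
    {
        // Store and load the prebuilt edmx from the application directory.
        // If the context assembly is newer than the cached edmx,
        // the cache is discarded and rebuilt.
        SetModelStore(new DefaultDbModelStore(
            Path.GetDirectoryName(typeof(CachingDbConfiguration).Assembly.Location)));
    }
}
```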

C# 7–Deconstruction

With the introduction of ValueTuple in C# 7, the C# team also introduced support for deconstruction, which allows you to split a ValueTuple into its discrete arguments. The nice thing is that this feature is not limited to tuples; any type can be deconstructed, as long as it has a Deconstruct method with the appropriate out parameters (both cases are sketched below). Remark: the Deconstruct method can also be an extension method.
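The code samples in the original post were images; a minimal reconstruction of both cases (the Person type is made up for illustration):

```csharp
public class Person
{
    public string FirstName { get; }
    public string LastName { get; }

    public Person(string firstName, string lastName)
    {
        FirstName = firstName;
        LastName = lastName;
    }

    // Any type with a matching Deconstruct method can be deconstructed;
    // this method could also be written as an extension method.
    public void Deconstruct(out string firstName, out string lastName)
    {
        firstName = FirstName;
        lastName = LastName;
    }
}

public class Program
{
    public static void Main()
    {
        // Deconstructing a ValueTuple into its discrete arguments:
        (string city, int population) = ("Antwerp", 520000);

        // Deconstructing a custom type via its Deconstruct method:
        var (first, last) = new Person("John", "Doe");
    }
}
```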

ElasticSearch for YAML lovers

By default, the output returned by ElasticSearch is JSON. However, if you like the denser format that YAML offers, it is possible to ask ElasticSearch to output your data as YAML. Just add ‘format=yaml’ as a querystring parameter to your query:

```
GET nusearch/package/_search?format=yaml
{
  "suggest": {
    "package-suggestions": {
      "prefix": "asp",
      "completion": {
        "field": "suggest"
      }
    }
  },
  "_source": {
    "includes": [
      "id",
      "downloadCount",
      "summary"
    ]
  }
}
```

Your output will become:
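The YAML output in the original post was an image. The response carries the same fields as the JSON response, so it looks roughly like this (abbreviated, with hypothetical values):

```yaml
---
took: 5
timed_out: false
_shards:
  total: 2
  successful: 2
  failed: 0
hits:
  total: 0
  max_score: null
  hits: []
```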

TFS 2018 and SQL Server 2017–Multidimensional server mode

Last week I did a test migration for a customer from TFS 2015 to TFS 2018. They had already configured SQL Server 2017 Database Engine Services, Analysis Services, and Reporting Services for me, so I thought I was good to go. However, halfway through the migration process I noticed the following warning appear:

[2017-11-15 14:18:43Z][Warning] An error was encountered while attempting to upgrade either the warehouse databases or the Analysis Services database. Reporting features will not be usable until the warehouse and Analysis Services database are successfully configured. Use the Team Foundation Server Administration console to update the Reporting configuration. Error details: TF400646: Team Foundation Server requires Analysis Services instance installed in the 'Multidimensional' server mode. The Analysis Services instance you supplied (<INSTANCE NAME>) is in 'Tabular' server mode. You can either install another instance of Analysis Services and supply that instance, or reinstall the existing instance in 'Multidimensional' server mode.
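For reference, the server mode of an existing Analysis Services instance is determined by the DeploymentMode property in its msmdsrv.ini configuration file. A sketch of the relevant fragment; note that changing the mode of an instance that is already in use is not supported, so reinstalling is the safe route:

```xml
<!-- msmdsrv.ini, in the Config folder of the Analysis Services instance -->
<ConfigurationSettings>
  <!-- 0 = Multidimensional, 1 = SharePoint, 2 = Tabular -->
  <DeploymentMode>0</DeploymentMode>
</ConfigurationSettings>
```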

PostgreSQL–Case insensitive search

By default, when you use the LIKE operator in PostgreSQL, your query parameter is matched in a case-sensitive manner. This means that the query

```sql
SELECT * FROM "Products" WHERE "Name" LIKE 'Beverag%';
```

will produce different results than

```sql
SELECT * FROM "Products" WHERE "Name" LIKE 'beverag%';
```

A possible solution for this is the use of regular expressions:

```sql
SELECT * FROM "Products" WHERE "Name" ~* 'beverag';
```

This query returns all matches where the name contains the word ‘beverag’, and because it is a case-insensitive search it also matches values like ‘BEVERAGE’.

ADFS–Where to find the issuer thumbprint for WIF (Windows Identity Foundation)?

To validate a new installation of ADFS, we created a small sample app that used Windows Identity Foundation to authenticate against the ADFS server. We got most of the information from our system administrator, but it turned out that the Issuer Thumbprint was missing. As the system administrator wasn’t in the office, we had to find another way to get the thumbprint. Here is what we did: by default, every ADFS server exposes its metadata through a metadata XML. Typically the URL where you can find this metadata XML will be something like https://adfs4.sample.be/federationmetadata/2007-06/federationmetadata.xml. Inside this XML you can find the signing and encryption certificates. To read out the certificate information (and the thumbprint) you have to:

- Create a new text file
- Copy the certificate value into the file
- Save the file with a .cer extension

Now you can open the file and read out the thumbprint value: double-click the file and look at the Thumbprint field on the Details tab.
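If you prefer to read the value programmatically, the same thumbprint can be extracted with a few lines of .NET (a sketch; the file name refers to the .cer file you just saved):

```csharp
using System;
using System.Security.Cryptography.X509Certificates;

class Program
{
    static void Main()
    {
        // Load the exported certificate and print its thumbprint.
        var certificate = new X509Certificate2("adfs-signing.cer");
        Console.WriteLine(certificate.Thumbprint);
    }
}
```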

TFS 2018–Remove ElasticSearch

Here is an update regarding my post http://bartwullems.blogspot.be/2017/05/tfs-2017how-to-uninstall-elasticsearch.html. In TFS 2018 the command to remove your ElasticSearch instance changed a little, and the steps became:

- Open PowerShell as an administrator
- Go to the folder where Configure-TFSSearch.ps1 is installed. In TFS 2018, this is typically C:\Program Files\Microsoft Team Foundation Server 2018\Search\zip
- Run the Configure-TFSSearch script with the remove option: .\Configure-TFSSearch.ps1 -Operation remove

ElasticSearch–Understand the query magic using ‘explain’

Sometimes an ElasticSearch query is invalid or doesn’t return the results you expect. To find out what is going on, you can add the explain parameter to the query string (see the sketch below). In your results you then get an extra explanation section. More information: https://www.elastic.co/guide/en/elasticsearch/guide/master/_validating_queries.html
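The request in the original post was an image; based on the linked documentation, a validate-with-explain request looks roughly like this (the index and query are made up for illustration):

```
GET nusearch/package/_validate/query?explain
{
  "query": {
    "match": {
      "summary": "aspnet core"
    }
  }
}
```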

Using GuidCOMB in SQL Server and PostgreSQL

On a project I’m working on, we expect a really large amount of data. Therefore we decided to switch our ID strategy from integers to GUIDs. The problem is that when you start using GUIDs as part of your database index, the index becomes really fragmented, resulting in longer write times. To solve this, you can use the GuidCOMB technique, where part of the GUID is replaced by a sorted date/time value. This guarantees that values will be sequential and avoids index page splits. NHibernate and Marten support the GuidCOMB technique out-of-the-box, but if you want to use it with other tools you can try RT.Comb, a small .NET Core library that generates “COMB” GUID values in C#. Here is a sample of how to use it in combination with Entity Framework: first create an Entity Framework value generator that uses the RT.Comb library, then apply this generator when an object is added to a DbContext by specifying it in the fluent mapping configuration (both steps are sketched below).
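The snippets in the original post were images; a minimal reconstruction for Entity Framework Core (the entity and class names are made up; RT.Comb's Provider.Sql is used here, so pick the provider matching your database):

```csharp
using System;
using Microsoft.EntityFrameworkCore;
using Microsoft.EntityFrameworkCore.ChangeTracking;
using Microsoft.EntityFrameworkCore.ValueGeneration;
using RT.Comb;

// A value generator that produces sequential "COMB" GUIDs via RT.Comb.
public class CombGuidValueGenerator : ValueGenerator<Guid>
{
    public override bool GeneratesTemporaryValues => false;

    public override Guid Next(EntityEntry entry)
        => Provider.Sql.Create();
}

public class Product
{
    public Guid Id { get; set; }
    public string Name { get; set; }
}

public class StoreContext : DbContext
{
    public DbSet<Product> Products { get; set; }

    protected override void OnModelCreating(ModelBuilder modelBuilder)
    {
        // Apply the generator when a Product is added to the DbContext.
        modelBuilder.Entity<Product>()
            .Property(p => p.Id)
            .HasValueGenerator<CombGuidValueGenerator>();
    }
}
```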

Kestrel error: The connection was closed because the response was not read by the client at the specified minimum data rate.

While running some performance tests on our ASP.NET Core application, after increasing the load to a certain level we saw the following error message appear on the server: The connection was closed because the response was not read by the client at the specified minimum data rate. This error is related to the minimum request body data rate enforced by Kestrel. From the documentation: Kestrel checks every second if data is coming in at the specified rate in bytes/second. If the rate drops below the minimum, the connection is timed out. The grace period is the amount of time that Kestrel gives the client to increase its send rate up to the minimum; the rate is not checked during that time. The grace period helps avoid dropping connections that are initially sending data at a slow rate due to TCP slow-start. The default minimum rate is 240 bytes/second, with a 5 second grace period. A minimum rate also applies to the response. The code to set the request and response limits is sketched below.
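A minimal sketch of how to change these limits when building the web host (the rates are examples, not recommendations, and the usual Startup class is assumed):

```csharp
using System;
using Microsoft.AspNetCore.Hosting;
using Microsoft.AspNetCore.Server.Kestrel.Core;

public class Program
{
    public static void Main(string[] args)
    {
        var host = new WebHostBuilder()
            .UseKestrel(options =>
            {
                // Defaults: 240 bytes/second with a 5 second grace period.
                options.Limits.MinRequestBodyDataRate =
                    new MinDataRate(bytesPerSecond: 100, gracePeriod: TimeSpan.FromSeconds(10));
                options.Limits.MinResponseDataRate =
                    new MinDataRate(bytesPerSecond: 100, gracePeriod: TimeSpan.FromSeconds(10));
            })
            .UseStartup<Startup>()
            .Build();

        host.Run();
    }
}
```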

Azure Storage Explorer–Support for Cosmos DB

Great news! From now on, the Azure Storage Explorer can be used to manage your Cosmos DB databases.

Key features:

- Open a Cosmos DB account in the Azure portal
- Add resources to the Quick Access list
- Search and refresh Cosmos DB resources
- Connect directly to Cosmos DB through a connection string
- Create and delete Databases
- Create and delete Collections
- Create, edit, delete, and filter Documents
- Create, edit, and delete Stored Procedures, Triggers, and User-Defined Functions

Install Azure Storage Explorer: [ Windows ], [ Mac ], [ Linux ]

.NET Core Unit Tests–Enable logging

I noticed that .NET Core unit tests capture the output sent through tracing (via Trace.Write()) and through the console (via Console.Write()). It took me some time before I had the correct code to get the Microsoft.Extensions.Logging data written to my test logs, so here is a small code snippet in case you don’t want to search for it yourself (see the sketch below). Remark: don’t forget to include the Microsoft.Extensions.Logging.Console NuGet package.
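The snippet in the original post was an image; a minimal reconstruction against the 2017-era Microsoft.Extensions.Logging API (the test class name is made up):

```csharp
using Microsoft.Extensions.Logging;

public class MyTests
{
    private readonly ILogger _logger;

    public MyTests()
    {
        // Route Microsoft.Extensions.Logging output to the console,
        // which the test runner captures in the test logs.
        var loggerFactory = new LoggerFactory()
            .AddConsole(LogLevel.Debug);
        _logger = loggerFactory.CreateLogger<MyTests>();
    }

    public void SomeTest()
    {
        _logger.LogInformation("This message shows up in the test output");
    }
}
```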

.NET Core Unit Tests–Using configuration files

Here are the steps to use Microsoft.Extensions.Configuration in your .NET Core unit tests:

- Include the .NET Core Configuration NuGet package: https://www.nuget.org/packages/Microsoft.Extensions.Configuration.Json/
- Copy the appsettings.json to your test project. Don’t forget to set the ‘Copy to output directory’ to ‘Copy always’
- Add the code below to build up your configuration
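The snippet in the original post was an image; a minimal reconstruction (the setting key is made up, adjust it to your appsettings.json):

```csharp
using System.IO;
using Microsoft.Extensions.Configuration;

public class ConfigurationTests
{
    private readonly IConfiguration _configuration;

    public ConfigurationTests()
    {
        // Build the configuration from the appsettings.json copied
        // to the test output directory.
        _configuration = new ConfigurationBuilder()
            .SetBasePath(Directory.GetCurrentDirectory())
            .AddJsonFile("appsettings.json")
            .Build();
    }

    public void CanReadSetting()
    {
        var value = _configuration["ConnectionStrings:DefaultConnection"];
    }
}
```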

Git–Commit changes to a new branch

Has it ever happened to you that you were changing some code in one branch until you realized that you actually wanted to commit on another (new) branch? I expected that this would not be easy to do, but in fact it’s rather simple. Don’t stage your changes; instead, just create a new branch using git checkout -b another-branch. This will create and check out “another-branch”. Now you can stage your files using git add . and commit them using git commit -m <message>. Remark: this works in Visual Studio as well.

TypeScript Index Signatures

I love TypeScript and how it helps me write better JavaScript applications. However, sometimes I struggle with the dynamic world that JavaScript has to offer and the fight for type safety that TypeScript adds to the mix. A situation I had was one where I had some objects, each sharing the same set of properties. However, in some situations extra metadata was added depending on the customer (it’s a multi-tenant solution). So I created an interface for all the shared properties, but what should I do with the (possible) extra metadata? Adding so many different extra properties to the interface and making them all optional didn’t sound ideal. TypeScript allows you to add extra properties to specific objects with the help of index signatures. Adding an index signature to the interface declaration allows you to specify any number of properties for the different objects that you are creating. An example:
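The example in the original post was an image; a minimal sketch (the interface and property names are made up for illustration):

```typescript
interface CustomerRecord {
  id: number;
  name: string;
  // Index signature: allows any number of extra, tenant-specific
  // properties without declaring each one as optional.
  [key: string]: any;
}

const record: CustomerRecord = {
  id: 1,
  name: 'Contoso',
  region: 'EMEA',       // extra metadata for one tenant
  loyaltyLevel: 'gold'  // extra metadata for another
};
```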

.NET Core SignalR Client error: System.IO.FileLoadException: Could not load file or assembly 'System.Runtime.InteropServices.RuntimeInformation

To test a .NET Core SignalR application, I created a sample application (using the full .NET Framework) where I included the Microsoft.AspNetCore.SignalR.Client NuGet package and added the code sketched below. However, when I tried running this application, it failed with the following error message: System.IO.FileLoadException: Could not load file or assembly 'System.Runtime.InteropServices.RuntimeInformation, Version=0.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a' or one of its dependencies. The located assembly's manifest definition does not match the assembly reference. (Exception from HRESULT: 0x80131040) I checked all my assembly references, but they all seemed OK. As a workaround, I was able to avoid the issue by removing the .WithConsoleLogger() line. Anyone who has an idea what can be wrong? Remark: I think it has something to do with the SignalR client targeting .NET Standard 2.0 while my sample application targets .NET Framework 4.7.
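A reconstruction of the client code, based on the 2017 alpha API of Microsoft.AspNetCore.SignalR.Client (method names like WithConsoleLogger belong to that preview and changed in later releases; the URL and hub method are made up):

```csharp
using System;
using System.Threading.Tasks;
using Microsoft.AspNetCore.SignalR.Client;

public class Program
{
    public static void Main(string[] args) => MainAsync().GetAwaiter().GetResult();

    private static async Task MainAsync()
    {
        var connection = new HubConnectionBuilder()
            .WithUrl("http://localhost:5000/chat")
            .WithConsoleLogger() // removing this line avoided the FileLoadException
            .Build();

        // Print every message the hub sends to the "Send" method.
        connection.On<string>("Send", message => Console.WriteLine(message));

        await connection.StartAsync();
        Console.ReadLine();
        await connection.DisposeAsync();
    }
}
```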

Web.config transformations in .NET Core

In a previous post I mentioned that we started to put environment variables inside our web.config files to change the ASPNETCORE_ENVIRONMENT setting inside our ASP.NET Core apps. As we were already using Web Deploy to deploy our ASP.NET Core applications, we decided to use the web.config transformation functionality to set the environment variable in our web.config to the correct value before deploying: we created extra web.{environment}.config files and added the XDT transformation configuration (see the sketch below). However, when we tried to deploy, we noticed that the transformation was not executed and that the original web.config file was used. What did we do wrong? The answer turned out to be “nothing”. Unfortunately, ASP.NET Core projects don’t support the transformation functionality. Luckily, a colleague (thanks Sami!) brought the following library to my attention: https://github.com/nil4/dotnet-transform-xdt. dotnet-transform-xdt is a dotnet CLI tool for applying XML Document Transformation (XDT) to XML files.
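The transform files in the original post were shown as images; a sketch of what such a web.Production.config could look like, assuming the ASPNETCORE_ENVIRONMENT variable is defined in the aspNetCore handler section of web.config:

```xml
<?xml version="1.0"?>
<configuration xmlns:xdt="http://schemas.microsoft.com/XML-Document-Transform">
  <system.webServer>
    <aspNetCore>
      <environmentVariables>
        <!-- Overwrite the value of the ASPNETCORE_ENVIRONMENT variable -->
        <environmentVariable name="ASPNETCORE_ENVIRONMENT"
                             value="Production"
                             xdt:Locator="Match(name)"
                             xdt:Transform="SetAttributes(value)" />
      </environmentVariables>
    </aspNetCore>
  </system.webServer>
</configuration>
```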