Thursday, November 30, 2017

The request was aborted: could not create SSL/TLS secure channel–Part 1

A colleague asked for help with the following problem:

He has an ASP.NET MVC website that talks to an ASP.NET Web API backend. In development everything works as expected, but in the acceptance environment he suddenly started to get TLS errors when the HttpClient invoked a call to the backend:

The request was aborted: Could not create SSL/TLS secure channel.


Let’s take you through the journey that brought us to our final solution.

Part 1 – The unexpected .NET Framework update.

What we found especially strange was that it had worked before and that the errors only started to appear recently. This put us on the path of figuring out what had recently changed. One of the things that happened was an upgrade to .NET Framework 4.6. Could it be?

In the .NET documentation we found that in .NET 4.6 the HttpClient defaults to TLS 1.2. Maybe that caused the error?

We updated our code to force the system to use TLS 1.0:
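In our case that came down to setting the security protocol through ServicePointManager (a sketch; where exactly you put this depends on your application startup code):

```csharp
using System.Net;

// Force all outgoing connections (including HttpClient) to use TLS 1.0.
// Set this once at application startup, before any requests are made.
ServicePointManager.SecurityProtocol = SecurityProtocolType.Tls;
```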

This worked, but now we were using an older (less secure) TLS version. Let’s continue the investigation in another post…

Wednesday, November 29, 2017

Angular–OnPush Change detection

I always had a hard time explaining how and why OnPush change detection works in Angular. But recently I found the following post at Angular University: https://blog.angular-university.io/onpush-change-detection-how-it-works/


In my opinion, the best explanation of this topic ever!

Tuesday, November 28, 2017

Angular: Unsubscribe observables

When you subscribe to an observable in Angular, a subscription is created. To avoid memory leaks in your Angular components it is important that you unsubscribe from any subscription in the OnDestroy lifecycle hook.

Although you could think this is a good general rule, there are a lot of exceptions where Angular does the right thing and handles the unsubscribe for you:

  • AsyncPipe: if you are using observable streams via the AsyncPipe then you do not need to worry about unsubscribing. The async pipe will take care of subscribing and unsubscribing for you.
  • Router observables: The Router manages the observables it provides and localizes the subscriptions. The subscriptions are cleaned up when the component is destroyed, protecting against memory leaks, so we don't need to unsubscribe from the route params Observable.
  • Http observables: Http observables produce finite values and don’t require an unsubscribe.

As a general conclusion, in most cases you don’t need to explicitly call the unsubscribe method. The default behavior of Observable operators is to dispose of the subscription as soon as .complete() or .error() messages are published.
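For the cases where you do manage the subscription yourself, the typical pattern looks like this (a sketch; the DataService and its customers$ stream are made up for the example):

```typescript
import { Component, OnDestroy, OnInit } from '@angular/core';
import { Subscription } from 'rxjs/Subscription';

@Component({
  selector: 'app-customer-list',
  template: '...'
})
export class CustomerListComponent implements OnInit, OnDestroy {
  customers: any[];
  private subscription: Subscription;

  constructor(private dataService: DataService) { }

  ngOnInit() {
    // Subscribing creates a Subscription that we keep a reference to...
    this.subscription = this.dataService.customers$
      .subscribe(customers => this.customers = customers);
  }

  ngOnDestroy() {
    // ...so we can clean it up when the component is destroyed.
    this.subscription.unsubscribe();
  }
}
```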

Monday, November 27, 2017

Progressive Web App–Redirect from HTTP to HTTPS

I’m currently working on a Progressive Web App (PWA) using ASP.NET Core. After creating our initial setup I used the Google Lighthouse Chrome extension to check my application.

The results looked OK, I only had one failed audit: “Does not redirect HTTP traffic to HTTPS”.


Let’s fix this by adding the AspNetCore Rewrite middleware:
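In its simplest form this is one extra line in the Configure method of your Startup class (a sketch):

```csharp
using Microsoft.AspNetCore.Builder;
using Microsoft.AspNetCore.Rewrite;

public void Configure(IApplicationBuilder app)
{
    // Redirect all HTTP traffic to HTTPS before any other middleware runs.
    app.UseRewriter(new RewriteOptions().AddRedirectToHttps());

    // ...rest of the pipeline...
}
```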

If you need to specify a port, you can add some extra parameters:
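For example, to send a permanent redirect to a specific SSL port (the status code and port below are just example values):

```csharp
// 301 = permanent redirect; 44300 is a typical IIS Express SSL port.
app.UseRewriter(new RewriteOptions().AddRedirectToHttps(statusCode: 301, sslPort: 44300));
```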

Friday, November 24, 2017

Entity Framework 6.2–Model Cache

Entity Framework 6.2 introduces the concept of a Model Cache. This gives you the ability to use a prebuilt edmx.

Why is this useful?

By default Entity Framework (not Core) will generate an EDMX behind the scenes at startup. If you have a rather large EF model, this can take a lot of time.

How to configure it?

To configure it, you have to use the new SetModelStore method to apply the DefaultDbModelStore. This class compares the timestamp of your context’s assembly against that of the edmx file. If they do not match, the model cache is deleted and rebuilt.
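A sketch of what that looks like in a DbConfiguration class (the directory where the edmx is cached is up to you; here it is the folder of the executing assembly):

```csharp
using System.Data.Entity;
using System.Data.Entity.Infrastructure;
using System.IO;
using System.Reflection;

public class MyDbConfiguration : DbConfiguration
{
    public MyDbConfiguration()
    {
        // Cache the generated EDMX next to the context's assembly.
        var cachePath = Path.GetDirectoryName(
            Assembly.GetExecutingAssembly().Location);
        SetModelStore(new DefaultDbModelStore(cachePath));
    }
}
```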


Thursday, November 23, 2017

C# 7–Deconstruction

With the introduction of ValueTuple in C# 7, the C# team also introduced support for deconstruction, which allows you to split out a ValueTuple into its discrete arguments:
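For example (a minimal sketch):

```csharp
using System;

public class Program
{
    // A method returning a ValueTuple...
    public static (string first, string last) GetName() => ("John", "Doe");

    public static void Main()
    {
        // ...can be deconstructed into discrete variables:
        var (first, last) = GetName();
        Console.WriteLine(first); // prints "John"
        Console.WriteLine(last);  // prints "Doe"
    }
}
```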

The nice thing is that this feature is not limited to tuples; any type can be deconstructed, as long as it has a Deconstruct method with the appropriate out parameters:
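A sketch with a hypothetical Person class:

```csharp
using System;

public class Person
{
    public string FirstName { get; set; }
    public string LastName { get; set; }

    // Any type with a Deconstruct method that has out parameters
    // can be deconstructed; the compiler picks this method up.
    public void Deconstruct(out string firstName, out string lastName)
    {
        firstName = FirstName;
        lastName = LastName;
    }
}

public class Program
{
    public static void Main()
    {
        var person = new Person { FirstName = "Jane", LastName = "Doe" };

        // The compiler calls Person.Deconstruct behind the scenes:
        var (firstName, lastName) = person;
        Console.WriteLine($"{firstName} {lastName}"); // prints "Jane Doe"
    }
}
```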

Remark: The Deconstruct method can also be an extension method.

Wednesday, November 22, 2017

ElasticSearch for YAML lovers

By default the output returned by ElasticSearch is JSON. However, if you like the more dense format that YAML offers, it is possible to ask ElasticSearch to output your data as YAML. Just add ‘format=yaml’ as a querystring parameter to your query:

GET nusearch/package/_search?format=yaml
{
  "suggest": {
    "package-suggestions": {
      "prefix": "asp",
      "completion": {
        "field": "suggest"
      }
    }
  },
  "_source": {
    "includes": [
      "id",
      "downloadCount",
      "summary"
    ]
  }
}

Your output will become:


Tuesday, November 21, 2017

TFS 2018 and SQL Server 2017–Multidimensional server mode

Last week I did a test migration for a customer from TFS 2015 to TFS 2018. They had already configured SQL Server 2017 Database Services, Analysis Services and Reporting Services for me, so I thought I was good to go.

However halfway through the migration process I noticed the following warning appear:

[2017-11-15 14:18:43Z][Warning] An error was encountered while attempting to upgrade either the warehouse databases or the Analysis Services database. Reporting features will not be usable until the warehouse and Analysis Services database are successfully configured. Use the Team Foundation Server Administration console to update the Reporting configuration. Error details: TF400646: Team Foundation Server requires Analysis Services instance installed in the 'Multidimensional' server mode. The Analysis Services instance you supplied (<INSTANCE NAME>) is in 'Tabular' server mode. You can either install another instance of Analysis Services and supply that instance name, or you can uninstall this instance and install it in the required server mode.

Turns out that in SQL Server 2017 Analysis Services you can choose between 3 possible modes:

  • TABULAR: Relational modeling constructs (model, tables, columns), articulated in tabular metadata object definitions in Tabular Model Scripting Language (TMSL) and Tabular Object Model (TOM) code. This is the default value.
  • MULTIDIMENSIONAL: OLAP modeling constructs (cubes, dimensions, measures).
  • POWERPIVOT: Originally an add-in, but now fully integrated into Excel. Visual modeling only, over an internal Tabular infrastructure. You can import a Power Pivot model into SSDT to create a new Tabular model that runs on an Analysis Services instance.

More information: https://docs.microsoft.com/en-us/sql/analysis-services/comparing-tabular-and-multidimensional-solutions-ssas

Monday, November 20, 2017

PostgreSQL–Case insensitive search

By default when you use the LIKE operator in PostgreSQL, your query parameter is matched in a case-sensitive manner. This means that the query

SELECT * FROM "Products" WHERE "Name" LIKE 'Beverag%'

will produce different results than

SELECT * FROM "Products" WHERE "Name" LIKE 'beverag%'

A possible solution for this could be the use of regular expressions:

SELECT * FROM "Products" WHERE "Name" ~* 'beverag';

This query returns all matches where the name contains the word ‘beverag’ but because it is a case-insensitive search, it also matches things like ‘BEVERAGE’.

Friday, November 17, 2017

ADFS–Where to find the issuer thumbprint for WIF (Windows Identity Foundation)?

To validate a new installation of ADFS, we created a small sample app that used Windows Identity Foundation to authenticate to the ADFS server.

We got most information from our system administrator, but it turned out that the Issuer Thumbprint was missing.

As the system administrator wasn’t in the office, we had to find a different solution to get the thumbprint.

Here is what we did:


  • To read out the certificate information (and the thumbprint) you have to
    • Create a new text file
    • Copy the certificate value into the file
    • Save the file with a .cer extension
  • Now you can open the file, and read out the thumbprint value:
    • Double click on the file
    • Go to the Details tab
    • Scroll to the thumbprint property


    Thursday, November 16, 2017

    TFS 2018– Remove ElasticSearch

    Here is an update regarding my post http://bartwullems.blogspot.be/2017/05/tfs-2017how-to-uninstall-elasticsearch.html.

    In TFS 2018, the command to remove your ElasticSearch instance changed a little and the steps became:

    • Open Powershell as an administrator
    • Go to the folder where Configure-TFSSearch.ps1 is installed. In TFS 2018, this is typically C:\Program Files\Microsoft Team Foundation Server 2018\Search\zip
    • Run the Configure-TFSSearch script with the remove option: ".\Configure-TFSSearch.ps1 -Operation remove"

    Wednesday, November 15, 2017

    ElasticSearch–Understand the query magic using ‘explain’

    Sometimes an ElasticSearch query is invalid or doesn’t return the results you expect. To find out what is going on, you can add the explain parameter to the query string:
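Using the nusearch index from an earlier post as an example, a validation request looks roughly like this (the index, type and field names are only illustrative):

```
GET nusearch/package/_validate/query?explain
{
  "query": {
    "match": { "summary": "asp" }
  }
}
```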


    In your results you get an extra explanation section


    More information: https://www.elastic.co/guide/en/elasticsearch/guide/master/_validating_queries.html

    Tuesday, November 14, 2017

    Using GuidCOMB in SQL Server and PostgreSQL

    On a project I’m working on, we expect to have a really large amount of data. Therefore we decided to switch our ID strategy from integers to GUIDs. The problem is that when you use random GUIDs as part of your database index, the index becomes really fragmented, resulting in longer write times.

    To solve this, you can use the GuidCOMB technique where a part of the GUID is replaced by a sorted date/time value. This guarantees that values will be sequential and avoids index page splits.

    NHibernate and Marten support the GuidCOMB technique out-of-the-box, but if you want to use it with other tools you can try RT.Comb, a small .NET Core library that generates “COMB” GUID values in C#.

    Here is a sample of how to use it in combination with Entity Framework:

    • Let’s first create an Entity Framework Value Generator that uses the RT.Comb library:
    • To apply this generator when an object is added to a DbContext, you can specify it in the Fluent mapping configuration:
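The two steps above could look like this with EF Core (a sketch: the Product entity is made up, and the RT.Comb calls follow the library’s documented API):

```csharp
using System;
using Microsoft.EntityFrameworkCore;
using Microsoft.EntityFrameworkCore.ChangeTracking;
using Microsoft.EntityFrameworkCore.ValueGeneration;

public class Product
{
    public Guid Id { get; set; }
    public string Name { get; set; }
}

// Step 1: a value generator that delegates to RT.Comb.
public class CombGuidValueGenerator : ValueGenerator<Guid>
{
    // The generated values are real keys, not temporary placeholders.
    public override bool GeneratesTemporaryValues => false;

    public override Guid Next(EntityEntry entry)
        => RT.Comb.Provider.Sql.Create(); // use Provider.PostgreSql for PostgreSQL
}

// Step 2: apply it in the fluent mapping configuration of your DbContext.
public class StoreContext : DbContext
{
    public DbSet<Product> Products { get; set; }

    protected override void OnModelCreating(ModelBuilder modelBuilder)
    {
        modelBuilder.Entity<Product>()
            .Property(p => p.Id)
            .HasValueGenerator<CombGuidValueGenerator>();
    }
}
```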

    Friday, November 10, 2017

    Kestrel error: The connection was closed because the response was not read by the client at the specified minimum data rate.

    While running some performance tests on our ASP.NET Core application, after increasing the load to a certain level, we saw the following error messages appear on the server:

    The connection was closed because the response was not read by the client at the specified minimum data rate.

    This error is related to the Minimum request body data rate specified by Kestrel.

    From the documentation:

    Kestrel checks every second if data is coming in at the specified rate in bytes/second. If the rate drops below the minimum, the connection is timed out. The grace period is the amount of time that Kestrel gives the client to increase its send rate up to the minimum; the rate is not checked during that time. The grace period helps avoid dropping connections that are initially sending data at a slow rate due to TCP slow-start.

    The default minimum rate is 240 bytes/second, with a 5 second grace period.

    A minimum rate also applies to the response. The code to set the request limit and the response limit is the same except for having RequestBody or Response in the property and interface names.
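Tuning both limits looks roughly like this in ASP.NET Core 2.x (the numbers below are just example values, not recommendations):

```csharp
using System;
using Microsoft.AspNetCore.Hosting;
using Microsoft.AspNetCore.Server.Kestrel.Core;

var host = new WebHostBuilder()
    .UseKestrel(options =>
    {
        // Minimum rate at which the client must send the request body.
        options.Limits.MinRequestBodyDataRate =
            new MinDataRate(bytesPerSecond: 100, gracePeriod: TimeSpan.FromSeconds(10));

        // Minimum rate at which the client must read the response.
        options.Limits.MinResponseDataRate =
            new MinDataRate(bytesPerSecond: 100, gracePeriod: TimeSpan.FromSeconds(10));
    })
    .UseStartup<Startup>()
    .Build();
```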

    The problem was that we were simulating our load from one machine, which was not capable of reading the responses at the expected minimum data rate. After scaling out our load tests to multiple test agents on different machines, the problem disappeared…

    Thursday, November 9, 2017

    Azure Storage Explorer–Support for Cosmos DB

    Great news! From now on the Azure Storage Explorer can be used to manage your Cosmos DB databases.

    Key features

    • Open Cosmos DB account in the Azure portal
    • Add resources to the Quick Access list
    • Search and refresh Cosmos DB resources
    • Connect directly to Cosmos DB through a connection string
    • Create and delete Databases
    • Create and delete Collections
    • Create, edit, delete, and filter Documents
    • Create, edit, and delete Stored Procedures, Triggers, and User-Defined Functions


    Install Azure Storage Explorer: [Windows], [Mac], [Linux]

    Wednesday, November 8, 2017

    .NET Core Unit Tests–Enable logging

    I noticed that .NET Core unit tests capture the output sent through tracing (via Trace.Write()) and through the console (via Console.Write()).

    It took me some time before I had the correct code to get the Microsoft.Extensions.Logging data written to my test logs.

    So here is a small code snippet in case you don’t want to search for it yourself:
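The snippet is roughly the following (MyService is a made-up class standing in for the code under test; this targets the 2.x logging API):

```csharp
using Microsoft.Extensions.Logging;

// Build a logger factory that writes to the console,
// which the test runner captures in the test output.
var loggerFactory = new LoggerFactory();
loggerFactory.AddConsole(minLevel: LogLevel.Debug);

var logger = loggerFactory.CreateLogger<MyService>();
var sut = new MyService(logger);

// Any logging done by MyService now ends up in the test log.
```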

    Remark: Don’t forget to include the Microsoft.Extensions.Logging.Console NuGet package.

    Tuesday, November 7, 2017

    .NET Core Unit Tests–Using configuration files

    Here are the steps to use Microsoft.Extensions.Configuration in your .NET Core unit tests:
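A sketch of what that setup can look like inside a test (the appsettings.json file name and the ConnectionStrings:Default key are just examples; remember to mark the file as “Copy to Output Directory”):

```csharp
using System.IO;
using Microsoft.Extensions.Configuration;

// Build the configuration from a JSON file next to the test assembly.
var configuration = new ConfigurationBuilder()
    .SetBasePath(Directory.GetCurrentDirectory())
    .AddJsonFile("appsettings.json", optional: false)
    .Build();

// Read a value the same way you would in the application itself.
var connectionString = configuration["ConnectionStrings:Default"];
```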

    Monday, November 6, 2017

    Git–Commit changes to a new branch

    Did it ever happen to you that you were changing some code in one branch until you realized that you actually wanted to commit it on another (new) branch?

    I was expecting that this was not easy to do, but in fact it’s rather easy.

    Don’t stage your changes, instead just create a new branch using

    git checkout -b another-branch

    This will create and checkout “another-branch”.

    Now you can stage your files using

    git add .

    and commit them using

    git commit -m <message>

    Remark: This works in Visual Studio as well

    Friday, November 3, 2017

    TypeScript Index Signatures

    I love TypeScript and how it helps me write better JavaScript applications. However, sometimes I struggle to reconcile the dynamic world that JavaScript has to offer with the type safety that TypeScript adds to the mix.

    A situation I had was where I had some objects, each sharing the same set of properties. However, in some situations extra metadata was added depending on the customer (it’s a multitenant solution). So I created an interface for all the shared properties, but what should I do with the (possible) extra metadata? Adding so many different optional properties to the interface did not sound ideal.

    TypeScript allows you to add extra properties to specific objects with the help of index signatures. Adding an index signature to the interface declaration allows you to specify any number of properties for different objects that you are creating.

    An example:
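A sketch (the property names and the tenant-specific metadata below are made up):

```typescript
// Shared, strongly typed properties plus an index signature that
// allows any number of extra (tenant-specific) properties.
interface CustomerDocument {
  id: number;
  name: string;
  [extra: string]: any;
}

const doc: CustomerDocument = {
  id: 1,
  name: "Contoso",
  region: "EU",         // extra metadata: fine thanks to the index signature
  contractType: "gold"  // idem
};
```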

    Thursday, November 2, 2017

    .NET Core SignalR Client error: System.IO.FileLoadException: Could not load file or assembly 'System.Runtime.InteropServices.RuntimeInformation

    To test a .NET Core SignalR application, I created a sample application (using the full .NET Framework) where I included the Microsoft.AspNetCore.SignalR.Client NuGet package and added the following code:
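The code was roughly the following (the hub URL and method name are made up; this is the preview client API from late 2017, so the exact methods may differ in later versions):

```csharp
using System;
using Microsoft.AspNetCore.SignalR.Client;

var connection = new HubConnectionBuilder()
    .WithUrl("http://localhost:5000/chat")
    .WithConsoleLogger() // removing this line avoided the exception
    .Build();

connection.On<string>("Send", message => Console.WriteLine(message));
await connection.StartAsync();
```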

    However when I tried running this application, it failed with the following error message:

    System.IO.FileLoadException: Could not load file or assembly 'System.Runtime.InteropServices.RuntimeInformation, Version=0.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a' or one of its dependencies. The located assembly's manifest definition does not match the assembly reference. (Exception from HRESULT: 0x80131040)

    I checked all my assembly references but they all seemed OK.

    As a workaround, I was able to avoid the issue by removing the .WithConsoleLogger() line. Anyone who has an idea what can be wrong?

    Remark: I think it has something to do with the SignalR client targeting .NET Standard 2.0 while my sample application targets .NET Framework 4.7. But no clue what exactly is causing it…

    Wednesday, November 1, 2017

    Web.config transformations in .NET Core

    In a previous post I mentioned that we started to put environment variables inside our web.config files to change the ASPNETCORE_ENVIRONMENT setting inside our ASP.NET Core apps. As we were already using Web Deploy to deploy our ASP.NET Core applications, we decided to use the web.config transformations functionality to set the environment variable in our web.config to a correct value before deploying:

    • We created extra web.{environment}.config files


    • And added the Xdt transformation configuration:
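Our transform files looked something like this (“Staging” is just an example value for the environment):

```xml
<?xml version="1.0"?>
<configuration xmlns:xdt="http://schemas.microsoft.com/XML-Document-Transform">
  <system.webServer>
    <aspNetCore>
      <environmentVariables>
        <environmentVariable name="ASPNETCORE_ENVIRONMENT" value="Staging"
                             xdt:Transform="SetAttributes" xdt:Locator="Match(name)" />
      </environmentVariables>
    </aspNetCore>
  </system.webServer>
</configuration>
```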

    However when we tried to deploy, we noticed that the transformation was not executed and that the original web.config file was used.

    What did we do wrong?

    The answer turned out to be “nothing”. Unfortunately ASP.NET Core projects don’t support the transformation functionality. Luckily, a colleague (thanks Sami!) brought the following library to my attention: https://github.com/nil4/dotnet-transform-xdt

    dotnet-transform-xdt is a dotnet CLI tool for applying XML Document Transformation (typically, to ASP.NET configuration files at publish time, but not limited to this scenario).

    That’s exactly what we need!

    How to use dotnet-transform-xdt?

    • Right click on your ASP.NET Core project and choose Edit csproj
    • Add the following line to the list of Package references:

    <DotNetCliToolReference Include="Microsoft.DotNet.Xdt.Tools" Version="2.0.0" />

    • Add the following target before the closing </project>:

    <Target Name="ApplyXdtConfigTransform" BeforeTargets="_TransformWebConfig">
       <PropertyGroup>
         <_SourceWebConfig>$(MSBuildThisFileDirectory)Web.config</_SourceWebConfig>
         <_XdtTransform>$(MSBuildThisFileDirectory)Web.$(Configuration).config</_XdtTransform>
         <_TargetWebConfig>$(PublishDir)Web.config</_TargetWebConfig>
       </PropertyGroup>
       <Exec Command="dotnet transform-xdt --xml &quot;$(_SourceWebConfig)&quot; --transform &quot;$(_XdtTransform)&quot; --output &quot;$(_TargetWebConfig)&quot;" Condition="Exists('$(_XdtTransform)')" />
    </Target>

    • If you now run dotnet publish and examine the Web.config in the publish output folder, a transformed web.config should be there…