
Posts

Showing posts from 2017

WSFederation OWIN - Could not load type 'System.IdentityModel.Tokens.TokenValidationParameters' from assembly 'System.IdentityModel.Tokens.Jwt, Version=5.0.0.127, Culture=neutral, PublicKeyToken=31bf3856ad364e35'.

At one of my clients we are (still) using ASP.NET MVC 5 and Web API 2. To secure these web applications we use the WSFederation OWIN middleware together with ADFS. This combination works nicely and helps us keep our applications secure. Today one of the teams contacted me and complained that the middleware no longer worked. The error message they got looked like this: Could not load type 'System.IdentityModel.Tokens.TokenValidationParameters' from assembly 'System.IdentityModel.Tokens.Jwt, Version=5.0.0.127, Culture=neutral, PublicKeyToken=31bf3856ad364e35'. The root cause of the problem could be found in the version number: they accidentally upgraded the System.IdentityModel.Tokens.Jwt assembly from version 4 to 5. It turns out that version 5 is no longer compatible with the OWIN middleware. After reverting to version 4, everything returned to normal…

TFS 2017 Build- MSTest v2 tests are not recognized

After upgrading our unit tests to MSTest v2 we noticed that our tests were no longer discovered by the VSTest task on our build agent. As a solution, we decided to invoke the test execution ourselves. I therefore added two tasks to our build definition:
- One command line task to execute dotnet test
- One task to collect and publish the test results
In the command line task I configured the following settings. To execute the dotnet command we specify 'dotnet' as the Tool. We also specify the following arguments:
- test: we want to execute the test command
- --no-restore: the package restore already happened in a previous build step and shouldn't be re-executed here
- --no-build: assembly compilation already happened in a previous build step and shouldn't be re-executed here
- --logger:trx: output the test results in the trx format
A last important change is that the 'Continue on error' setting is set to true. If we don't do t
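Put together, the command the build task ends up running looks like this (the project path is an example; your build definition may pass a different working directory):

```shell
dotnet test --no-restore --no-build --logger:trx
```

The trx file produced by the logger is what the "Publish Test Results" task picks up in the next step.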

Designing, building, and operating microservices on Azure

Perfect reading material for the Christmas holidays! Microsoft released some new material together with a reference implementation on how to build a microservices application on top of the Azure platform. Microservices have become a popular architectural style for building cloud applications that are resilient, highly scalable, and able to evolve quickly. To be more than just a buzzword, however, microservices require a different approach to designing and building applications. In this set of articles, we explore how to build and run a microservices architecture on Azure. Topics include: Using Domain Driven Design (DDD) to design a microservices architecture. Choosing the right Azure technologies for compute, storage, messaging, and other elements of the design. Understanding microservices design patterns. Designing for resiliency, scalability, and performance. Building a CI/CD pipeline.

VS Code–Import Cost extension

The more I use VS Code, the more I love it. It is fast and offers an ever growing list of great extensions. One of the extensions I added recently is the Import Cost extension. From the documentation: This extension will display inline in the editor the size of the imported package. The extension utilizes webpack with babili-webpack-plugin in order to detect the imported size. On one of the Angular projects I'm working on, I saw our (minified) vendor.bundle.js file grow to 8MB(!) in size. People were importing any library they found useful without being aware of the extra cost it introduces. With the help of the Import Cost extension, you see the cost early and maybe think twice before you import another library. I'm a fan! More information: https://hackernoon.com/keep-your-bundle-size-under-control-with-import-cost-vscode-extension-5d476b3c5a76

Angular–CompareWith

Angular 4 introduces a new directive that helps you compare select options. This directive is especially useful when your select options are dynamically populated. Let's explain why… Here is a sample component that contains a dropdown with some options: We set the default option by passing the object reference through the FormControl constructor. The problem is that when we repopulate our options list (for example through an HTTP call), the object reference is gone and the model binding of our selected value is lost. To solve this problem we can use the compareWith directive, which no longer compares object references but uses a compare function that returns a boolean instead:
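A minimal sketch of such a compare function (the Country type and property names are hypothetical; in the template you would bind it with [compareWith]="compareCountries" on the select element):

```typescript
// Hypothetical option type used in a <select> bound via formControlName.
interface Country { id: number; name: string; }

// Compare function for [compareWith]: match options by id instead of by
// object reference, so a repopulated options list keeps the selection.
function compareCountries(c1: Country | null, c2: Country | null): boolean {
  return c1 && c2 ? c1.id === c2.id : c1 === c2;
}

// Two distinct object instances that represent the same option:
const fromForm = { id: 1, name: 'Belgium' };
const fromHttp = { id: 1, name: 'Belgium' };
console.log(compareCountries(fromForm, fromHttp)); // true, despite different references
```

Because the comparison is based on a stable key, reloading the options via HTTP no longer breaks the model binding.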

Error CS1525: Invalid expression term 'throw'

On our build server we noticed that one of our builds failed with the following error message: Error CS1525: Invalid expression term 'throw'. When building the project locally in Visual Studio, we had no errors? We found a solution here that worked for us as well: update Microsoft.Net.Compilers to 2.0.1 or greater. We did an Update-All of the NuGet packages at the solution level, after which the error disappeared…

Create a .npmrc file on Windows

Kind of stupid, but Windows Explorer doesn't like it when you try to create a file with only an extension (like .gitignore, .npmrc, …). Windows will give you an error message instead. The trick to get it working is to include a trailing dot as well, i.e. enter the file name as '.npmrc.' - Windows drops the trailing dot and creates '.npmrc'. Don't ask me why it works…
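An alternative that sidesteps the Explorer dialog entirely is creating the file from the command line (the registry URL is just an example value for the file's contents):

```shell
# Creating a dot-only file from a shell works fine; only the Explorer
# rename dialog rejects extension-only names.
echo "registry=https://registry.npmjs.org/" > .npmrc
```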

Lettable operators in RxJs

After upgrading to Angular 5 (and having some trouble with RxJS, but that is for another post), I noticed the introduction of "lettable operators", which can be accessed in rxjs/operators. What is a lettable operator? A lettable operator is basically any function that returns a function with the signature: <T, R>(source: Observable<T>) => Observable<R>. Euhm, what?! Simply put, operators (like filter, map, …) are no longer tied to an Observable directly but can be used with the existing let operator (which explains the name). This means you can no longer use dot-chaining, but will have to compose your operators another way. For that purpose a pipe method is now built into Observable at Observable.prototype.pipe: Why lettable operators? Lettable operators were introduced to solve the following problems with dot-chaining (from the documentation): Any library that imports a patch operator will augment the Observable.prototype for all c
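To make the signature concrete without pulling in RxJS, here is a standalone sketch that models a "stream" as a plain array: each operator is just a function from source to result, and pipe composes them left to right, mirroring the shape of Observable.prototype.pipe (this is an illustration of the composition idea, not RxJS itself):

```typescript
// A lettable operator is just a function from one stream to another;
// we model a "stream" as a plain array to show the composition shape only.
type Operator<T, R> = (source: T[]) => R[];

// pipe composes operators left-to-right, like Observable.prototype.pipe.
function pipe<T>(...operators: Operator<any, any>[]): Operator<T, any> {
  return (source: T[]) => operators.reduce((acc, op) => op(acc), source as any[]);
}

// "Operator factories" in this model, analogous to rxjs/operators:
const map = <T, R>(f: (x: T) => R): Operator<T, R> => src => src.map(f);
const filter = <T>(p: (x: T) => boolean): Operator<T, T> => src => src.filter(p);

const result = pipe<number>(
  filter((n: number) => n % 2 === 0),
  map((n: number) => n * 10)
)([1, 2, 3, 4]);

console.log(result); // [20, 40]
```

With real RxJS the same composition reads `source.pipe(filter(n => n % 2 === 0), map(n => n * 10))`, with the operators imported from rxjs/operators.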

PostgreSQL: Identify your slowest queries

PostgreSQL provides a large list of modules that extend its core functionality. One of these modules is the pg_stat_statements module, which provides a means for tracking execution statistics of all SQL statements executed by a server. Before you can use this module, it must be loaded by adding pg_stat_statements to shared_preload_libraries in postgresql.conf, because it requires additional shared memory. This means that a server restart is needed to add the module. After loading the module you can execute the query below to get the 5 queries with the highest total duration during your performance/benchmarking run: Before the run, it is recommended to reset the pg_stat_statements statistics using the query below, to ensure that you only capture the statements from your performance/benchmarking run:
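The two statements could look like this (column names follow the pg_stat_statements view as documented for PostgreSQL 9.x/10; in newer versions the timing column names differ):

```sql
-- Top 5 statements by total execution time:
SELECT query, calls, total_time, rows
FROM pg_stat_statements
ORDER BY total_time DESC
LIMIT 5;

-- Reset the collected statistics before starting a benchmarking run:
SELECT pg_stat_statements_reset();
```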

System.Data.SqlClient.SqlException : The ntext data type cannot be selected as DISTINCT because it is not comparable

After upgrading to NHibernate 5, one of our integration tests started to fail with the following error message: System.Data.SqlClient.SqlException : The ntext data type cannot be selected as DISTINCT because it is not comparable. The error itself happened inside a query generated by NHibernate: Message: NHibernate.Exceptions.GenericADOException : [ SQL: select distinct supplier1_.SupplierID as supplierid1_138_, supplier1_.Address as address2_138_, supplier1_.City as city3_138_, supplier1_.CompanyName as companyname4_138_, supplier1_.ContactName as contactname5_138_, supplier1_.ContactTitle as contacttitle6_138_, supplier1_.Country as country7_138_, supplier1_.Fax as fax8_138_, supplier1_.HomePage as homepage9_138_, supplier1_.Phone as phone10_138_, supplier1_.PostalCode as postalcode11_138_, supplier1_.Region as region12_138_ from Products product0_ inner join Categories category2_ on product0_.CategoryID=category2_.CategoryID, Suppliers supplier1_ where supplier1_.Supplie

Angular 5 - EmptyError: no elements in sequence

After upgrading to Angular 5, the first run of my application wasn't a big success. I ended up with the following error message when I tried to navigate using the Angular Router: EmptyError: no elements in sequence. The problem turned out not to be related to Angular directly but to RxJS 5.5.3. Reverting to 5.5.2 resolved the problem: npm install rxjs@5.5.2 More information: https://github.com/angular/angular/issues/20752

Angular Update Guide

With the fast cadence at which new versions of Angular are released, it is not always easy to know which steps are necessary to go from one version to another. To help, you can use the Angular Update Guide: https://angular-update-guide.firebaseapp.com/ . This is a small application that asks you the following questions:
- From what version to what version do you want to migrate?
- How complex is your app?
- Do you use ngUpgrade?
- What package manager do you use?
After answering these questions, you can click the Show me how to update! button and you get a step-by-step guide:

NHibernate 5–Async/await

With all the fuss about .NET Core I almost forgot that NHibernate 5 was released a month ago. One of the things I was waiting for was the introduction of async/await to optimize IO-bound methods. And after waiting for a loooooong time, it's finally there:
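A minimal sketch of what the async API looks like (the Customer entity and sessionFactory are hypothetical; the async LINQ extensions come in via the NHibernate.Linq namespace):

```csharp
using System.Collections.Generic;
using System.Linq;
using System.Threading.Tasks;
using NHibernate;
using NHibernate.Linq; // brings in the async LINQ extensions

public class CustomerQueries
{
    // NHibernate 5 adds async counterparts for the IO-bound operations,
    // so a query can now be awaited instead of blocking the thread:
    public async Task<IList<Customer>> GetActiveCustomersAsync(ISessionFactory sessionFactory)
    {
        using (var session = sessionFactory.OpenSession())
        {
            return await session.Query<Customer>()
                                .Where(c => c.IsActive)
                                .ToListAsync();
        }
    }
}
```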

TypeScript: Variable 'require' must be of type 'any', but here has type 'NodeRequire'.

After loading a solution for a code review, I was welcomed by the following error message: Variable 'require' must be of type 'any', but here has type 'NodeRequire'. Google brought me to the following GitHub issue: https://github.com/Microsoft/TypeScript/issues/16298, where the proposed solution was as simple as it was beautiful: there must be another definition of require somewhere. And indeed, after searching through my solution I found another 'declare var require' in a type definition file. After removing it, the error disappeared (and everything still worked).

The request was aborted: could not create SSL/TLS secure channel–Part 3

A colleague asked for help with the following problem: He has an ASP.NET MVC website that talks to an ASP.NET Web API backend. On development everything works as expected, but on the acceptance environment he suddenly started to get TLS errors when the HttpClient invokes a call to the backend: The request was aborted: Could not create SSL/TLS secure channel. Let's take you through the journey that brought us to our final solution. Part 3 - The registry hack. Our acceptance and production environments are still Windows Server 2008 R2. On this OS TLS 1.0 is still the default. We can change this by altering the registry:
- Open the registry editor on your server by running 'regedit'
- Browse to the following location: HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols
- Add the TLS 1.1 and TLS 1.2 keys under Protocols: right click on Protocols and click New -> Key
- Create 2 keys, Client and Server, under both
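The manual steps above can be scripted; a sketch in PowerShell (run elevated; the Enabled/DisabledByDefault DWORD values are the ones commonly documented for SCHANNEL and are not part of the excerpt above, so verify them against Microsoft's documentation before applying; a restart may be required):

```powershell
# Create the TLS 1.1/1.2 keys with Client and Server subkeys and enable them.
$protocols = 'HKLM:\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols'
foreach ($version in 'TLS 1.1', 'TLS 1.2') {
    foreach ($side in 'Client', 'Server') {
        $key = Join-Path $protocols "$version\$side"
        New-Item -Path $key -Force | Out-Null
        New-ItemProperty -Path $key -Name 'Enabled' -Value 1 -PropertyType DWord -Force | Out-Null
        New-ItemProperty -Path $key -Name 'DisabledByDefault' -Value 0 -PropertyType DWord -Force | Out-Null
    }
}
```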

The request was aborted: could not create SSL/TLS secure channel–Part 2

A colleague asked for help with the following problem: He has an ASP.NET MVC website that talks to an ASP.NET Web API backend. On development everything works as expected, but on the acceptance environment he suddenly started to get TLS errors when the HttpClient invokes a call to the backend: The request was aborted: Could not create SSL/TLS secure channel. Let's take you through the journey that brought us to our final solution. Part 2 - The unexpected .NET Framework update (again). As mentioned in Part 1, the problem started to appear after a rollout of .NET Framework 4.6. What made it even stranger is that we didn't see the issue on our production environment. So why didn't it work on our acceptance environment while it did work on production? It turned out that on our production environment another .NET Framework update had been executed and that the behavior of the HttpClient changed (again): From the documentation : Default operating system support

The request was aborted: could not create SSL/TLS secure channel–Part 1

A colleague asked for help with the following problem: He has an ASP.NET MVC website that talks to an ASP.NET Web API backend. On development everything works as expected, but on the acceptance environment he suddenly started to get TLS errors when the HttpClient invokes a call to the backend: The request was aborted: Could not create SSL/TLS secure channel. Let's take you through the journey that brought us to our final solution. Part 1 - The unexpected .NET Framework update. What we found especially strange was that it worked before, and that the errors only started to appear recently. This brought us on the path of what was recently changed. One of the things that happened was an upgrade to .NET Framework 4.6. Could it be? In the .NET documentation we found that in .NET 4.6 the HttpClient defaults to TLS 1.2. Maybe that caused the error? We updated our code to force the system to use TLS 1.0: This worked, but now we were using an older (less secure) TLS version. L
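Forcing the older protocol is typically a one-liner via ServicePointManager; a sketch of what such a temporary workaround looks like (placed early in application startup, and explicitly not a long-term fix, since TLS 1.0 is weaker than 1.2):

```csharp
using System.Net;

public static class TlsWorkaround
{
    public static void ForceTls10()
    {
        // Force TLS 1.0 process-wide for outgoing HttpClient/HttpWebRequest calls.
        ServicePointManager.SecurityProtocol = SecurityProtocolType.Tls;
    }
}
```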

Angular–OnPush Change detection

I always had a hard time explaining how and why the OnPush change detection works in Angular. But recently I found the following post at Angular University: https://blog.angular-university.io/onpush-change-detection-how-it-works/ In my opinion, the best explanation about this topic ever!

Angular: Unsubscribe observables

When you subscribe to an observable in Angular, a subscription is created. To avoid memory leaks in your Angular components it is important that you unsubscribe from any subscription in the OnDestroy lifecycle hook. Although you might think this is a good general rule, there are a lot of exceptions where Angular does the right thing and handles the unsubscribe for you:
- AsyncPipe: if you are using observable streams via the AsyncPipe then you do not need to worry about unsubscribing. The async pipe takes care of subscribing and unsubscribing for you.
- Router observables: the Router manages the observables it provides and localizes the subscriptions. The subscriptions are cleaned up when the component is destroyed, protecting against memory leaks, so we don't need to unsubscribe from the route params observable.
- Http observables: Http observables produce finite values and don't require an unsubscribe.
As a general conclusion, in most cases you don't need to ex
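The manual pattern, for cases where you do need it, looks like the sketch below. To keep it self-contained and runnable, minimal stand-ins replace rxjs's Subscription and Angular's OnDestroy; in a real component these come from 'rxjs' and '@angular/core', and the names are hypothetical:

```typescript
// Minimal stand-ins, just to illustrate the pattern:
class Subscription {
  closed = false;
  unsubscribe(): void { this.closed = true; }
}
interface OnDestroy { ngOnDestroy(): void; }

class TickerComponent implements OnDestroy {
  // Imagine: private subscription = interval(1000).subscribe(...);
  private subscription = new Subscription();

  // Angular calls this hook when the component is destroyed.
  ngOnDestroy(): void {
    this.subscription.unsubscribe();
  }

  get leaking(): boolean { return !this.subscription.closed; }
}

const component = new TickerComponent();
console.log(component.leaking); // true while the component is alive
component.ngOnDestroy();
console.log(component.leaking); // false: no dangling subscription
```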

Progressive Web App–Redirect from HTTP to HTTPS

I’m currently working on a Progressive Web App(PWA) using ASP.NET Core. After creating our initial setup I used the Google Lighthouse chrome extension to check my application. The results looked OK, I only had one failed audit: “Does not redirect HTTP traffic to HTTPS”. Let’s fix this by adding the AspNetCore Rewrite middleware: If you need to specify a port, you can add some extra parameters:
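A sketch of the middleware registration in Startup.Configure (the status code and port values are examples; the AddRedirectToHttps overloads come from the Microsoft.AspNetCore.Rewrite package):

```csharp
using Microsoft.AspNetCore.Builder;
using Microsoft.AspNetCore.Http;
using Microsoft.AspNetCore.Rewrite;

public partial class Startup
{
    public void ConfigureHttpsRedirect(IApplicationBuilder app)
    {
        // Redirect all HTTP traffic to HTTPS:
        app.UseRewriter(new RewriteOptions().AddRedirectToHttps());

        // Or, with an explicit status code and HTTPS port:
        // app.UseRewriter(new RewriteOptions()
        //     .AddRedirectToHttps(StatusCodes.Status301MovedPermanently, 44300));
    }
}
```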

Entity Framework 6.2–Model Cache

Entity Framework 6.2 introduces the concept of a Model Cache. This gives you the ability to use a prebuilt edmx. Why is this useful? By default Entity Framework (not Core) will generate an EDMX behind the scenes at startup. If you have a rather large EF model, this can take a lot of time. How to configure it? You have to use the new SetModelStore method to apply the DefaultDbModelStore. This class compares the timestamp of your context's assembly against the edmx file. If they do not match, the model cache is deleted and rebuilt.
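A sketch of the configuration (the class name is hypothetical; DefaultDbModelStore takes the directory where the cached edmx files live, here assumed to be the application's base directory):

```csharp
using System;
using System.Data.Entity;
using System.Data.Entity.Infrastructure;

// EF discovers DbConfiguration subclasses in the context's assembly.
public class CachingDbConfiguration : DbConfiguration
{
    public CachingDbConfiguration()
    {
        // Load the prebuilt edmx from disk when its timestamp matches the
        // context assembly; otherwise it is regenerated and cached again.
        SetModelStore(new DefaultDbModelStore(AppDomain.CurrentDomain.BaseDirectory));
    }
}
```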

C# 7–Deconstruction

With the introduction of ValueTuple in C# 7, the C# team also introduced support for deconstruction, which allows you to split a ValueTuple into its discrete arguments: The nice thing is that this feature is not limited to tuples; any type can be deconstructed, as long as it has a Deconstruct method with the appropriate out parameters: Remark: the Deconstruct method can also be an extension method.
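A small sketch of both cases (the Point type is an example):

```csharp
public class Point
{
    public int X { get; }
    public int Y { get; }
    public Point(int x, int y) => (X, Y) = (x, y);

    // Any type with a suitable Deconstruct method can be deconstructed:
    public void Deconstruct(out int x, out int y) => (x, y) = (X, Y);
}

public static class DeconstructionDemo
{
    public static void Run()
    {
        // Tuple deconstruction:
        var (min, max) = (1, 10);

        // Custom type deconstruction, calling Point.Deconstruct:
        var (x, y) = new Point(3, 4);
    }
}
```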

ElasticSearch for YAML lovers

By default the output returned by ElasticSearch is JSON. However if you like the denser format that YAML offers, it is possible to ask ElasticSearch to output your data as YAML. Just add 'format=yaml' as a querystring parameter to your query:

GET nusearch/package/_search?format=yaml
{
  "suggest": {
    "package-suggestions": {
      "prefix": "asp",
      "completion": {
        "field": "suggest"
      }
    }
  },
  "_source": {
    "includes": [
      "id",
      "downloadCount",
      "summary"
    ]
  }
}

Your output will become:

TFS 2018 and SQL Server 2017–Multidimensional server mode

Last week I did a test migration for a customer from TFS 2015 to TFS 2018. They had already configured SQL Server 2017 Database Services, Analysis Services and Reporting Services for me, so I thought I was good to go. However, halfway through the migration process I noticed the following warning appear: [2017-11-15 14:18:43Z][Warning] An error was encountered while attempting to upgrade either the warehouse databases or the Analysis Services database. Reporting features will not be usable until the warehouse and Analysis Services database are successfully configured. Use the Team Foundation Server Administration console to update the Reporting configuration. Error details: TF400646: Team Foundation Server requires Analysis Services instance installed in the 'Multidimensional' server mode. The Analysis Services instance you supplied (<INSTANCE NAME>) is in 'Tabular' server mode. You can either install another instance of Analysis Services and supply that instan

PostgreSQL–Case insensitive search

By default when you use the LIKE operator in PostgreSQL, your query parameter is matched in a case-sensitive manner. This means that the query

SELECT * FROM "Products" WHERE "Name" LIKE 'Beverag%'

will produce different results than

SELECT * FROM "Products" WHERE "Name" LIKE 'beverag%'

A possible solution for this is the use of regular expressions:

SELECT * FROM "Products" WHERE "Name" ~* 'beverag';

This query returns all matches where the name contains the word 'beverag', and because it is a case-insensitive search it also matches values like 'BEVERAGE'.

ADFS–Where to find issuer thumbprint for WIF(Windows Identity Foundation)?

To validate a new installation of ADFS, we created a small sample app that used Windows Identity Foundation to authenticate against the ADFS server. We got most information from our system administrator, but it turned out that the Issuer Thumbprint was missing. As the system administrator wasn't in the office, we had to find a different way to get the thumbprint. Here is what we did: By default every ADFS server exposes its metadata through a metadata XML. Typically the URL where you can find this metadata XML will be something like https://adfs4.sample.be/federationmetadata/2007-06/federationmetadata.xml Inside this XML you can find the signing and encryption certificates. To read out the certificate information (and the thumbprint) you have to:
- Create a new text file
- Copy the certificate value into the file
- Save the file with a .cer extension
Now you can open the file and read out the thumbprint value: Double click on th
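The same steps can be scripted instead of using a .cer file; a PowerShell sketch (the URL is the example from above, and the exact element path into the metadata XML may differ per ADFS version, so treat the navigation as an assumption to verify against your own metadata document):

```powershell
# Read the first signing certificate out of the federation metadata and
# print its thumbprint.
$url = 'https://adfs4.sample.be/federationmetadata/2007-06/federationmetadata.xml'
[xml]$metadata = (Invoke-WebRequest -Uri $url -UseBasicParsing).Content

# Navigate to the base64-encoded certificate (path is an assumption):
$base64 = $metadata.EntityDescriptor.RoleDescriptor[0].KeyDescriptor[0].KeyInfo.X509Data.X509Certificate

$bytes = [Convert]::FromBase64String($base64)
$certificate = New-Object System.Security.Cryptography.X509Certificates.X509Certificate2(,$bytes)
$certificate.Thumbprint
```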

TFS 2018– Remove ElasticSearch

Here is an update regarding my post http://bartwullems.blogspot.be/2017/05/tfs-2017how-to-uninstall-elasticsearch.html . In TFS 2018, the command to remove your ElasticSearch instance changed a little and the steps became:
- Open PowerShell as an administrator
- Go to the folder where Configure-TFSSearch.ps1 is installed. In TFS 2018, this is typically C:\Program Files\Microsoft Team Foundation Server 2018\Search\zip
- Run the script with the remove option: ".\Configure-TFSSearch.ps1 -Operation remove"

ElasticSearch–Understand the query magic using ‘explain’

Sometimes an ElasticSearch query is invalid or doesn't return the results you expect. To find out what is going on, you can add the explain parameter to the query string: In your results you get an extra explanation section. More information: https://www.elastic.co/guide/en/elasticsearch/guide/master/_validating_queries.html

Using GuidCOMB in SQL Server and PostgreSQL

On a project I'm working on, we expect a really large amount of data. Therefore we decided to switch our ID strategy from integers to GUIDs. The problem is that when you start using GUIDs as part of your database index, the index becomes really fragmented, resulting in longer write times. To solve this, you can use the GuidCOMB technique, where a part of the GUID is replaced by a sorted date/time value. This guarantees that values will be sequential and avoids index page splits. NHibernate and Marten support the GuidCOMB technique out-of-the-box, but if you want to use it with other tools you can try RT.Comb, a small .NET Core library that generates "COMB" GUID values in C#. Here is a sample of how to use it in combination with Entity Framework: Let's first create an Entity Framework value generator that uses the RT.Comb library: To apply this generator when an object is added to a DbContext, you can specify it in the fluent mapping configuration:
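A sketch of both pieces for EF Core (the Order entity and class names are hypothetical; RT.Comb's SQL Server provider is exposed as RT.Comb.Provider.Sql):

```csharp
using System;
using Microsoft.EntityFrameworkCore;
using Microsoft.EntityFrameworkCore.ChangeTracking;
using Microsoft.EntityFrameworkCore.ValueGeneration;

// A value generator that delegates to RT.Comb's SQL Server-ordered provider:
public class CombGuidValueGenerator : ValueGenerator<Guid>
{
    public override bool GeneratesTemporaryValues => false;

    public override Guid Next(EntityEntry entry) => RT.Comb.Provider.Sql.Create();
}

public class Order { public Guid Id { get; set; } }

public class StoreContext : DbContext
{
    public DbSet<Order> Orders { get; set; }

    // Apply the generator in the fluent mapping configuration, so new
    // entities get a COMB GUID when they are added to the context:
    protected override void OnModelCreating(ModelBuilder modelBuilder)
    {
        modelBuilder.Entity<Order>()
                    .Property(o => o.Id)
                    .HasValueGenerator<CombGuidValueGenerator>();
    }
}
```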

Kestrel error: The connection was closed because the response was not read by the client at the specified minimum data rate.

While running some performance tests on our ASP.NET Core application, after increasing the load to a certain level, we saw the following error messages appear on the server: The connection was closed because the response was not read by the client at the specified minimum data rate. This error is related to the Minimum request body data rate specified by Kestrel. From the documentation : Kestrel checks every second if data is coming in at the specified rate in bytes/second. If the rate drops below the minimum, the connection is timed out. The grace period is the amount of time that Kestrel gives the client to increase its send rate up to the minimum; the rate is not checked during that time. The grace period helps avoid dropping connections that are initially sending data at a slow rate due to TCP slow-start. The default minimum rate is 240 bytes/second, with a 5 second grace period. A minimum rate also applies to the response. The code to set the request limit
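A sketch of how those limits are configured on Kestrel (the numbers are examples, not recommendations; the Limits properties live on KestrelServerOptions):

```csharp
using System;
using Microsoft.AspNetCore.Hosting;
using Microsoft.AspNetCore.Server.Kestrel.Core;

public static class Program
{
    public static IWebHost BuildHost(string[] args) =>
        new WebHostBuilder()
            .UseKestrel(options =>
            {
                // Relax the minimum data rates for request body and response:
                options.Limits.MinRequestBodyDataRate =
                    new MinDataRate(bytesPerSecond: 100, gracePeriod: TimeSpan.FromSeconds(10));
                options.Limits.MinResponseDataRate =
                    new MinDataRate(bytesPerSecond: 100, gracePeriod: TimeSpan.FromSeconds(10));
            })
            .UseStartup<Startup>() // hypothetical Startup class
            .Build();
}
```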

Azure Storage Explorer–Support for Cosmos DB

Great news! From now on the Azure Storage Explorer can be used to manage your Cosmos DB databases. Key features Open Cosmos DB account in the Azure portal Add resources to the Quick Access list Search and refresh Cosmos DB resources Connect directly to Cosmos DB through a connection string Create and delete Databases Create and delete Collections Create, edit, delete, and filter Documents Create, edit, and delete Stored Procedures, Triggers, and User-Defined Functions Install Azure Storage Explorer: [ Windows ], [ Mac ], [ Linux ]

.NET Core Unit Tests–Enable logging

I noticed that .NET Core unit tests capture the output sent through tracing (via Trace.Write()) and through the console (via Console.Write()). It took me some time before I had the correct code to get the Microsoft.Extensions.Logging data written to my test logs. So here is a small code snippet in case you don't want to search for it yourself: Remark: don't forget to include the Microsoft.Extensions.Logging.Console NuGet package.
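A sketch of what such a snippet can look like (the test class name is hypothetical; the AddConsole extension shown here is the Microsoft.Extensions.Logging 2.x-era API from the Console package, which later versions replaced with builder-based configuration):

```csharp
using Microsoft.Extensions.Logging;

public class CalculatorTests
{
    public void LoggingSetupExample()
    {
        // Route Microsoft.Extensions.Logging output to the console, which
        // the test runner captures into the test log:
        var loggerFactory = new LoggerFactory().AddConsole(LogLevel.Debug);
        var logger = loggerFactory.CreateLogger<CalculatorTests>();

        logger.LogInformation("Test started");
    }
}
```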

.NET Core Unit Tests–Using configuration files

Here are the steps to use Microsoft.Extensions.Configuration in your .NET Core unit tests: Include the .NET Core Configuration NuGet package: https://www.nuget.org/packages/Microsoft.Extensions.Configuration.Json/ Copy the appsettings.json to your test project. Don’t forget to set the ‘Copy to output directory’ to ‘Copy always’ Add the following code to build up your configuration:
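The configuration build-up from the last step can look like this (the ConnectionStrings:Default key is a hypothetical example setting; SetBasePath comes in via the file-extensions configuration package):

```csharp
using System.IO;
using Microsoft.Extensions.Configuration;

public static class TestConfiguration
{
    public static IConfigurationRoot Build()
    {
        // appsettings.json is copied to the output directory by the
        // 'Copy always' setting mentioned above:
        return new ConfigurationBuilder()
            .SetBasePath(Directory.GetCurrentDirectory())
            .AddJsonFile("appsettings.json")
            .Build();
    }

    public static string ReadExampleSetting() =>
        Build()["ConnectionStrings:Default"]; // hypothetical key
}
```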

Git–Commit changes to a new branch

Did it ever happen to you that you were changing some code in one branch until you realized that you actually wanted to commit on another (new) branch? I expected this to be hard to fix, but in fact it's rather easy. Don't stage your changes; instead just create a new branch using git checkout -b another-branch This will create and check out "another-branch". Now you can stage your files using git add . and commit them using git commit -m <message> Remark: this works in Visual Studio as well.
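The whole flow, demonstrated in a scratch repository (the path and file names are arbitrary):

```shell
# Set up a throwaway repository with one commit:
rm -rf /tmp/branch-demo && git init -q /tmp/branch-demo && cd /tmp/branch-demo
git config user.email demo@example.com && git config user.name Demo
echo "initial" > file.txt && git add . && git commit -q -m "initial commit"

# An unstaged change made on the wrong branch...
echo "work in progress" >> file.txt

# ...moves along when we create and switch to a new branch:
git checkout -q -b another-branch
git add . && git commit -q -m "work lands on the new branch"
git rev-parse --abbrev-ref HEAD
```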

TypeScript Index Signatures

I love TypeScript and how it helps me write better JavaScript applications. However, sometimes I struggle with the dynamic world that JavaScript has to offer and the type safety that TypeScript adds to the mix. A situation I had was one where I had some objects each sharing the same set of properties. However, in some situations extra metadata was added depending on the customer (it's a multitenant solution). So I created an interface for all the shared properties, but what should I do with the (possible) extra metadata? Adding so many different extra optional properties to the interface didn't sound ideal. TypeScript allows you to add extra properties to specific objects with the help of index signatures. Adding an index signature to the interface declaration allows you to specify any number of properties for the different objects that you are creating. An example:
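A small sketch of the idea (the Parcel type and the tenant-specific property names are hypothetical):

```typescript
// Shared, known properties plus an index signature for tenant-specific metadata:
interface Parcel {
  id: number;
  owner: string;
  [metadata: string]: any; // any number of extra properties is allowed
}

const parcel: Parcel = {
  id: 42,
  owner: 'Bart',
  region: 'Flanders',   // extra, tenant-specific
  surfaceArea: 1250     // extra, tenant-specific
};

console.log(parcel.id);        // 42
console.log(parcel['region']); // Flanders
```

The known properties keep their static types, while the index signature absorbs whatever extra metadata a tenant adds.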

.NET Core SignalR Client error: System.IO.FileLoadException: Could not load file or assembly 'System.Runtime.InteropServices.RuntimeInformation

To test a .NET Core SignalR application, I created a sample application (using the full .NET Framework) where I included the Microsoft.AspNetCore.SignalR.Client NuGet package and added the following code: However, when I tried running this application, it failed with the following error message: System.IO.FileLoadException: Could not load file or assembly 'System.Runtime.InteropServices.RuntimeInformation, Version=0.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a' or one of its dependencies. The located assembly's manifest definition does not match the assembly reference. (Exception from HRESULT: 0x80131040) I checked all my assembly references but they all seemed OK. As a workaround, I was able to avoid the issue by removing the .WithConsoleLogger() line. Anyone who has an idea what can be wrong? Remark: I think it has to do something with the SignalR client which targets .NET Standard 2.0 and my sample application which targets .NET Framework 4.7. B

Web.config transformations in .NET Core

In a previous post I mentioned that we started to put environment variables inside our web.config files to change the ASPNETCORE_ENVIRONMENT setting of our ASP.NET Core apps. As we were already using Web Deploy to deploy our ASP.NET Core applications, we decided to use the web.config transformation functionality to set the environment variable in our web.config to a correct value before deploying: We created extra web.{environment}.config files and added the XDT transformation configuration. However, when we tried to deploy, we noticed that the transformation was not executed and that the original web.config file was used. What did we do wrong? The answer turned out to be "nothing". Unfortunately ASP.NET Core projects don't support the transformation functionality. Luckily, a colleague (thanks Sami!) brought the following library to my attention: https://github.com/nil4/dotnet-transform-xdt dotnet-transform-xdt is a dotnet CLI tool for applying XML D
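Such a web.{environment}.config transform can look like the sketch below (the environment name is an example; the element layout follows the aspNetCore section of an ASP.NET Core web.config):

```xml
<!-- web.Production.config -->
<configuration xmlns:xdt="http://schemas.microsoft.com/XML-Document-Transform">
  <system.webServer>
    <aspNetCore>
      <environmentVariables>
        <!-- Match the variable by name and overwrite only its value: -->
        <environmentVariable name="ASPNETCORE_ENVIRONMENT"
                             value="Production"
                             xdt:Locator="Match(name)"
                             xdt:Transform="SetAttributes(value)" />
      </environmentVariables>
    </aspNetCore>
  </system.webServer>
</configuration>
```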

Visual Studio 2017 (Enterprise) - Where are the Web performance and load testing tools?

Yesterday I wanted to do some performance testing before we put a new application into production. However, when I opened Visual Studio 2017 I couldn't find the Web performance and load testing tools. Are they no longer available in VS 2017? Luckily, they still are, but they are not installed out-of-the-box. Let's open the Visual Studio Installer and fix this:
- Search for Visual Studio Installer and execute it
- Click on More -> Modify
- Go to the Individual components tab, scroll to the Debugging and testing section and select Web performance and load testing tools
- Click Modify to start the installation

C# 7: Lambdas vs Local functions

C# 7 introduces the concept of local functions. Local functions can be nested in other functions, similar to anonymous delegates or lambda expressions. Doesn't this make local functions redundant? Not at all: anonymous functions and lambda expressions have certain restrictions that local functions don't. Here is a list of things a local function can do that lambdas can't:
- Local functions can be called without converting them to a delegate
- Local functions can be recursive
- Local functions can be iterators
- Local functions can be generic
- Local functions have strictly more precise definite assignment rules
- In certain cases, local functions do not need to allocate memory on the heap
More information can be found at: https://docs.microsoft.com/en-us/dotnet/csharp/programming-guide/classes-and-structs/local-functions https://blogs.msdn.microsoft.com/seteplia/2017/10/03/dissecting-the-local-functions-in-
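A small sketch combining two of those points, recursion and iterators (the Fibonacci example is mine, not from the linked posts):

```csharp
using System.Collections.Generic;

public static class LocalFunctionDemo
{
    // The outer method is an iterator; the nested local function is
    // recursive - neither is possible with a plain lambda expression.
    public static IEnumerable<int> Fibonacci(int count)
    {
        int Fib(int n) => n <= 1 ? n : Fib(n - 1) + Fib(n - 2);

        for (var i = 0; i < count; i++)
            yield return Fib(i);
    }
}
```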

Angular: Cancel a pending HTTP request

In case you are wondering how to cancel a pending HTTP request, here is the (simple) answer. You can do this by calling unsubscribe on the Subscription object returned by the subscribe method. An example:

Angular aot build error–No template specified for component

A colleague sent me the following screenshot with an error he got after switching to an AOT build in Angular: Here is the related TypeScript file for this component: The problem was caused by the "template: require('./perceel-list.component.html')" statement in the component file. The AOT build doesn't like it when you try to resolve HTML templates dynamically. Removing the require and just using templateUrl instead solved the problem:

Angular CLI– ng build --prod

Did you know that you can pass a "--prod" parameter when compiling your Angular code using "ng build"? The "--prod" option also has a development counterpart, "--dev". They set the following list of parameters:

Flag               --dev    --prod
--aot              false    true
--environment      dev      prod
--output-hashing   media    all
--sourcemaps       true     false
--extract-css      false    true
--named-chunks     true     false

More information: https://github.com/angular/angular-cli/wiki/build

Swagger–Expose enums as strings in your Web API

By default Swagger exposes enums in your API definitions as numbers, which makes it hard to understand what a specific parameter value means. You can configure Swagger to expose enums using their string names instead. To do so, add the following line to your Swagger configuration: c.DescribeAllEnumsAsStrings();
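In context, a typical Swashbuckle (ASP.NET Web API) setup looks roughly like this (the API title is an example; the surrounding EnableSwagger call is the usual place the line goes):

```csharp
using System.Web.Http;
using Swashbuckle.Application;

public static class SwaggerConfig
{
    public static void Register()
    {
        GlobalConfiguration.Configuration
            .EnableSwagger(c =>
            {
                c.SingleApiVersion("v1", "My API"); // example title
                c.DescribeAllEnumsAsStrings();      // enums as names, not numbers
            })
            .EnableSwaggerUi();
    }
}
```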

Send extra arguments to npm script in package.json

In our package.json we defined some script commands to automate certain actions. However, sometimes we want to pass extra parameters to a script when executing it. This is possible by adding an extra pair of dashes and the extra parameter when executing the command: ngbuild -- --environment=local

.NET Conf 2017

In case you missed .NET Conf 2017, all the videos are available online on Channel 9.

SQL Server Full Text Search–Wildcards

The SQL Server Full Text Search option is really powerful. However, you need to be aware that by default a search is always done on a full word. For example, if you had indexed 'the quick brown fox jumps over the lazy dog' and you search for 'brow', you don't get a result back. To solve this you can use wildcards, but you must put the full search term between quotes. This query will not work:

SELECT BookID, BookTitle FROM Books WHERE CONTAINS(BookTitle, 'brow*')

But this query will:

SELECT BookID, BookTitle FROM Books WHERE CONTAINS(BookTitle, '"brow*"')

Angular i18n issue - Cannot read property 'toLowerCase' of null

After following the steps in the Angular documentation to set up internationalization (i18n) support, I tried to execute my brand new i18n npm command:

PS C:\Projects\test\AngularLocalization\angularlocal> npm run i18n

> angularlocal@0.0.0 i18n C:\Projects\test\AngularLocalization\angularlocal
> ng-xi18n

TypeError: Cannot read property 'toLowerCase' of null
    at Extractor.serialize (C:\Projects\test\AngularLocalization\angularlocal\node_modules\@angular\compiler-cli\src\extractor.js:47:32)
    at C:\Projects\test\AngularLocalization\angularlocal\node_modules\@angular\compiler-cli\src\extractor.js:33:33
    at process._tickCallback (internal/process/next_tick.js:109:7)
    at Module.runMain (module.js:606:11)
    at run (bootstrap_node.js:389:7)
    at startup (bootstrap_node.js:149:9)
    at bootstrap_node.js:502:3
Extraction failed
npm ERR! Windows_NT 10.0.15063
npm ERR! argv "C:\\Program Files\\nod

Impress your colleagues with your knowledge about…Expression Evaluator Format Specifiers

Sometimes when working with C# you discover some hidden gems. Some of them are very useful, for others it is a little bit harder to find a good way to benefit from their functionality. One of those hidden gems that I discovered some days ago is Expression Evaluator Format Specifiers. What is it? Expression Evaluator Format Specifiers come into the picture when you are debugging in Visual Studio. The part of the debugger that processes the language being debugged is known as the expression evaluator (EE). A different expression evaluator is used for each language, though a default is selected if the language cannot be determined. A format specifier, in the debugger, is a special syntax to tell the EE how to interpret the expression being examined. You can read about all of the format specifiers in the documentation. One really useful format specifier is ‘ac’ (always calculate), which forces evaluation of the expression on every step. This is

Seeing the power of types

Most applications I’ve seen don’t take advantage of the power of the type system and fall back to primitive types like string, int, … . But what if you start using the type system to design a more understandable and less buggy application? You don’t believe it is possible? Have a look at the Designing with Types blog series, it will change the way you write your code forever… The complete list of posts:

1. Designing with types: Introduction – Making design more transparent and improving correctness
2. Designing with types: Single case union types – Adding meaning to primitive types
3. Designing with types: Making illegal states unrepresentable – Encoding business logic in types
4. Designing with types: Discovering new concepts – Gaining deeper insight into the domain
5. Designing with types: Making state explicit – Using state machines to ensure correctness
6. Designing with types: Constrained strings – Adding more semantic information to a primitive type
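The series is written in F#, but the core ideas carry over to other typed languages. As an illustration, the “single case union type” pattern from post 2 can be approximated in TypeScript with a branded type; all names below are invented for the example:

```typescript
// A 'branded' type: structurally still a string, but the compiler treats
// EmailAddress as a distinct type, so a plain string can't be passed
// where an EmailAddress is expected.
type EmailAddress = string & { readonly __brand: 'EmailAddress' };

// The only way to obtain an EmailAddress is through this constructor,
// which is also the natural place to enforce invariants.
function createEmailAddress(value: string): EmailAddress {
  if (!value.includes('@')) {
    throw new Error(`Invalid email address: ${value}`);
  }
  return value as EmailAddress;
}

function sendWelcomeMail(to: EmailAddress): string {
  return `Mail sent to ${to}`;
}

const email = createEmailAddress('jane@example.com');
sendWelcomeMail(email);            // OK
// sendWelcomeMail('not-an-email'); // compile error: string is not an EmailAddress
```

Illegal values are rejected at the single construction point, and the compiler then guarantees that every EmailAddress in the program went through that check.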

Angular: Analyze your webpack bundles

To optimize your application it can be useful to investigate all the things that are loaded and used inside your webpack bundles. A great tool to visualize this information is webpack-bundle-analyzer: https://www.npmjs.com/package/webpack-bundle-analyzer From the documentation:

Webpack Bundle Analyzer is a webpack plugin and CLI utility that represents bundle content as a convenient interactive zoomable treemap. This module will help you:
- Realize what's really inside your bundle
- Find out what modules make up the most of its size
- Find modules that got there by mistake
- Optimize it!

How to use it inside your Angular app?
1. Install the package through npm: npm install webpack-bundle-analyzer
2. Update your package.json with an extra command: "analyze": "ng build --prod --stats-json && webpack-bundle-analyzer dist/stats.json"
3. Invoke the command through npm: npm run analyze

IIS Server configs

If you are hosting your ASP.NET applications inside IIS I have a great tip for you: https://github.com/h5bp/server-configs-iis This GitHub project contains a list of boilerplate web.config files that apply some best practices (like security hardening) and take maximal advantage of the powerful functionality that IIS has to offer. It shows and explains how to:
- Apply security through obscurity by not exposing specific information through the headers
- Apply GZIP compression on static content
- Disable tracing
- Secure your cookies
- Cache static content
- Support cache busting
- …

ASP.NET Core–Environment variables

ASP.NET Core references a particular environment variable, ASPNETCORE_ENVIRONMENT, to describe the environment the application is currently running in. This variable can be set to any value you like, but 3 values are used by convention: Development, Staging, and Production. Based on this information the ASP.NET Core configuration system can load specific configuration settings (through .AddJsonFile($"appsettings.{env.EnvironmentName}.json", optional: true)) or execute a specific Startup class or Startup method through the Startup conventions (e.g. a Startup{EnvironmentName} class or a Configure{EnvironmentName}() method inside the Startup class). At one customer we are hosting our ASP.NET Core applications inside IIS. The IIS environment is used both for development and testing, so we want to host the same application twice with a different environment setting. By default the environment setting is loaded from a system level environment variable, which of course conflicts with hosting two differently configured copies on the same machine. The solution is to override the variable per site, for example through the environmentVariables section of the aspNetCore element in each application's web.config.

ASP.NET Core 2.0–Authentication Middleware changes

ASP.NET Core 2.0 introduces a new model for authentication which requires some changes when upgrading your existing ASP.NET Core 1.x applications to 2.0. In ASP.NET Core 1.x, every auth scheme had its own middleware that was registered during Configure(). In ASP.NET Core 2.0, there is now only a single Authentication middleware, and each authentication scheme is registered during ConfigureServices() instead of during Configure(). More information: https://docs.microsoft.com/en-us/aspnet/core/migration/1x-to-2x/identity-2x

Angular 4.3: HTTP Interceptors are back

With the introduction of a new HttpClient in Angular 4.3, an old feature of AngularJS was re-introduced: HTTP interceptors. Interceptors sit between the application and the backend and allow you to transform a request coming from the application before it is actually submitted to the backend. And of course, when a response arrives from the backend, an interceptor can transform it before delivering it to your application logic. This allows us to simplify the interaction with the backend in our Angular app and hide most of the shared logic inside an interceptor. Let’s create a simple example that injects an OAuth token in our requests. To be able to use the interceptor, you’ll also have to register it in your module.
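The idea can be sketched roughly as follows. To keep the snippet self-contained it declares simplified stand-ins for the HttpRequest and HttpHandler types; in a real app you would implement Angular's HttpInterceptor interface and import these types from '@angular/common/http'. The getToken helper is hypothetical:

```typescript
// Simplified stand-ins for the types from '@angular/common/http', so the
// sketch runs standalone; a real interceptor implements Angular's
// HttpInterceptor interface and works with Observable<HttpEvent<any>>.
interface HttpRequest {
  headers: Map<string, string>;
  clone(update: { setHeaders: Record<string, string> }): HttpRequest;
}

interface HttpHandler {
  handle(req: HttpRequest): string;
}

// Hypothetical token store; replace with your own auth service.
function getToken(): string {
  return 'my-oauth-token';
}

// The interceptor: requests are immutable in Angular's HttpClient, so we
// clone the outgoing request and add the Authorization header to the copy.
class AuthInterceptor {
  intercept(req: HttpRequest, next: HttpHandler) {
    const authReq = req.clone({
      setHeaders: { Authorization: `Bearer ${getToken()}` }
    });
    return next.handle(authReq);
  }
}
```

In a real app the registration happens in your NgModule providers, along the lines of { provide: HTTP_INTERCEPTORS, useClass: AuthInterceptor, multi: true }.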

Enabling Application Insights on an existing project

Yesterday I lost some time searching how to enable Application Insights on an existing project in Visual Studio. I thought it was available in the context menu when you right click on your Visual Studio project, but no option could be found there. Turns out you need to go one level deeper:
- Right click on your project
- Click on Add and select Application Insights Telemetry…
- Go through the configuration wizard by clicking on Start Free

Visual Studio 2017 Offline install - “Unable to download installation files”

After creating an offline installer for Visual Studio 2017 using vs_enterprise.exe --layout c:\vs2017offline we were ready to install Visual Studio on our build servers (which don’t have Internet access). However, when we tried to run the installer, it failed after running for a few minutes with the following error message:

Unable to download installation files

This error message was not that useful, as it turned out the problem was not related to missing installation files but to the fact that we forgot to install the required certificates first. To install the certificates:
- Browse to the "certificates" folder inside the layout folder you created (e.g. c:\vs2017offline\certificates)
- Right-click each one and choose Install PFX
- Specify Local machine as the target certificate store
- Leave the password field empty

More information: https://docs.microsoft.com/en-us/visualstudio/install/install-certificates

Team Foundation Server–Upgrade your build agents

If you upgrade your TFS installation to a newer version, a new version of the build agent is available as well. To upgrade your agents, you have 2 options:
- If a new major version of the agent is released, you’ll have to manually delete the old agent and install a new agent.
- If a new minor version of the agent is released, the existing agent is upgraded automatically when it runs a task that requires a newer version of the agent.

If you want to trigger the update manually, you can go to the Agent Pool hub, right click on a Queue and click on Update All Agents. More information at https://docs.microsoft.com/nl-nl/vsts/build-release/concepts/agents/agents#agent-version-and-upgrades

NPGSQL–Relation does not exist

PostgreSQL has great .NET support thanks to the open source Npgsql library. From the home page: Npgsql is an open source ADO.NET Data Provider for PostgreSQL, it allows programs written in C#, Visual Basic, F# to access the PostgreSQL database server. It is implemented in 100% C# code, is free and is open source. In addition, providers have been written for Entity Framework Core and for Entity Framework 6.x. However, I immediately had some problems the moment I tried to execute a query. The strange thing was that my code was almost the same as what could be found on the Getting Started page. The error I got was the following:

Query failed: ERROR: relation "Northwind.Products" does not exist

I tried to execute the same query directly in the PgAdmin tool and indeed I got the same error. What am I doing wrong? The problem is that PostgreSQL by default implicitly converts unquoted identifiers in my query to lowercase. So the following query: SELECT Id,

TypeScript error–Property ‘assign’ does not exists on type ‘ObjectConstructor’

A colleague asked me for help when he got into trouble with his TypeScript code. Although his code looked valid, the TypeScript compiler complained that property ‘assign’ does not exist on type ‘ObjectConstructor’. After some head-scratching, we discovered that there was a “rogue” tsconfig.json at a higher level that set “es5” as the target. Object.assign was added as part of ES2015 (“ES6”), which explains why TypeScript complained. After changing the target to “es6”, the error disappeared.
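As a minimal illustration (all names invented), the following only type-checks when the compiler target is es2015/es6 or higher (or when the es2015 lib is included in tsconfig.json), because that is the language version in which Object.assign was added to ObjectConstructor:

```typescript
interface Settings {
  theme: string;
  language?: string;
}

const defaults: Settings = { theme: 'light' };
const overrides: Partial<Settings> = { language: 'nl' };

// Object.assign exists in the type definitions only when the compiler
// targets ES2015+ (or the es2015 lib is listed in tsconfig.json);
// with target "es5" and no lib entry this line is a compile error.
const merged: Settings = Object.assign({}, defaults, overrides);
```

Alternatively you can keep target "es5" and add "lib": ["dom", "es2015"] to tsconfig.json; that only upgrades the type definitions, so older browsers would then still need a runtime polyfill for Object.assign.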

VSWhere.exe–The Visual Studio Locator

As someone who has built a lot of CI and CD pipelines, one of the struggles I always had when new Visual Studio versions were released was how to make my build server use the correct version of MSBuild when multiple Visual Studio versions were installed. It got a lot better over the years, but even recently I was sitting together with a customer to investigate how we could make the build server understand that the Visual Studio 2017 Build Tools should be used. One of the (badly documented) tricks you could use was scanning the registry for specific registry keys. Luckily, Microsoft recently released a new tool that makes finding your Visual Studio instances a lot easier: vswhere.exe. From the documentation: vswhere is designed to be a redistributable, single-file executable that can be used in build or deployment scripts to find where Visual Studio - or other products in the Visual Studio family - is located. For example, if you know the relative path to MSBuild, you can find