Tuesday, June 30, 2020

Evaluate your business strategies using Microsoft Assessments

Microsoft previewed ‘Microsoft Assessments’, a free, online platform that helps customers evaluate their business strategies.

From the documentation:

Microsoft Assessments is a free, online platform that helps customers in a self-service online manner evaluate their business strategies and workloads, and through curated guidance from Microsoft, they are able to improve their posture in Azure

At the time of writing there are four available assessments:

  • Cloud Journey Tracker: Identify your cloud adoption path based on your needs with this tracker and navigate to relevant content in the Cloud Adoption Framework for Azure.
  • Governance Benchmark: Identify gaps in your organization's current state of governance. Get a personalized benchmark report and curated guidance on how to get started.
  • Microsoft Azure Well-Architected Review: Examine your workload through the lenses of reliability, cost management, operational excellence, security and performance efficiency.
  • Strategic Migration Assessment and Readiness Tool: Prepare for a scale migration to ensure your project is executed smoothly and that you realize the intended benefits.

Every assessment will walk you through a list of questions. As a result you get a report and a list of recommended next steps:

Monday, June 29, 2020

Node issue–Call retries were exceeded

When trying to build an Angular 9 application on the build server, it failed on the ‘Generating ES5 bundles for differential loading’ step with the following error message:

An unhandled exception occurred: Call retries were exceeded

On the developer machines we couldn't reproduce the issue (as always). Inside the angular-errors.log we found the following extra details:

[error] Error: Call retries were exceeded
    at ChildProcessWorker.initialize (\node_modules\@angular-devkit\build-angular\node_modules\jest-worker\build\workers\ChildProcessWorker.js:193:21)
    at ChildProcessWorker.onExit (\node_modules\@angular-devkit\build-angular\node_modules\jest-worker\build\workers\ChildProcessWorker.js:263:12)
    at ChildProcess.emit (events.js:210:5)
    at Process.ChildProcess._handle.onexit (internal/child_process.js:272:12)

We were able to solve the issue by upgrading the Node version on the build server. Hopefully this helps you as well…

Friday, June 26, 2020

ASP.NET Core–Using Value Tuples in Razor

I was wondering if it was possible to use ValueTuples as your Razor model. This turned out to work perfectly!

Here is my ViewComponent method (notice that this also works in Razor Pages and MVC controllers):
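The original snippet is not included here; below is a minimal sketch of what such a ViewComponent method could look like (the component and property names are hypothetical):

```csharp
using Microsoft.AspNetCore.Mvc;

public class UserSummaryViewComponent : ViewComponent
{
    // The view model is a ValueTuple instead of a dedicated model class.
    public IViewComponentResult Invoke()
    {
        (string Name, int MessageCount) model = ("sample user", 5);
        return View(model);
    }
}
```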

And here is my Razor view:
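The view itself is also missing here; a sketch of a Razor view that declares the same ValueTuple as its model (names are hypothetical):

```cshtml
@model (string Name, int MessageCount)

<p>@Model.Name has @Model.MessageCount unread messages.</p>
```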

Thursday, June 25, 2020

NHibernate - CreateMultiCriteria is obsolete

It was code cleanup day, so time to get rid of some nagging warnings I hadn't had time to tackle before. One of the warnings I wanted to get rid of was an NHibernate warning about the CreateMultiCriteria method being obsolete.

This is how my code looked originally:

And this is how I got rid of the CreateMultiCriteria message:
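Neither snippet survives here; below is a hedged before/after sketch based on the NHibernate 5.2 query batch API, assuming an open ISession called session and hypothetical Customer/Order entities:

```csharp
using System.Collections;
using System.Linq;
using NHibernate;
using NHibernate.Criterion;
using NHibernate.Multi;

// Before: the obsolete multi-criteria API
var multi = session.CreateMultiCriteria()
    .Add(DetachedCriteria.For<Customer>())
    .Add(DetachedCriteria.For<Order>());
var results = multi.List();
var customers = ((IList)results[0]).Cast<Customer>().ToList();
var orders = ((IList)results[1]).Cast<Order>().ToList();

// After: the query batch API that replaces it
var batch = session.CreateQueryBatch()
    .Add<Customer>("customers", DetachedCriteria.For<Customer>())
    .Add<Order>("orders", DetachedCriteria.For<Order>());

// GetResult executes the whole batch (once) and returns
// the result set registered under the given key.
var customerResults = batch.GetResult<Customer>("customers");
var orderResults = batch.GetResult<Order>("orders");
```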

Notice that I replaced the CreateMultiCriteria call with a CreateQueryBatch call. The API is a little bit different. Most important to notice is that there is a GetResult method where you can specify what call result you want to get back.

Wednesday, June 24, 2020

ASP.NET Core–Endpoint authorization

Until recently I always used an (empty) [Authorize] attribute on top of my controllers to activate authorization on a specific endpoint (or I used a global AuthorizeFilter).

This will authorize users using the DefaultPolicy which just requires an authenticated user.

With the introduction of endpoint routing there is a new alternative. A disadvantage of the AuthorizeFilter and the Authorize attribute is that they are MVC-only features.

A (better) solution is to use the RequireAuthorization() extension method on IEndpointConventionBuilder:
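The snippet is missing here; a minimal sketch of what this looks like inside Startup.Configure (the policy name in the comment is hypothetical):

```csharp
app.UseRouting();
app.UseAuthentication();
app.UseAuthorization();

app.UseEndpoints(endpoints =>
{
    // Requires the DefaultPolicy (an authenticated user) on every controller endpoint.
    endpoints.MapControllers().RequireAuthorization();

    // A named policy can also be passed explicitly, e.g.:
    // endpoints.MapControllers().RequireAuthorization("AdminOnly");
});
```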

This has the same effect as applying an [Authorize] attribute on every controller.

Tuesday, June 23, 2020

WSFederation–Implementing logout on ADFS

In one of my ASP.NET Core applications we are (still) using WS-Federation as the authentication protocol. While implementing the sign-out functionality I noticed that I was correctly signed out at ADFS level, but that ADFS didn't return me to my application afterwards.

This is handled by the wreply parameter, and this parameter was correctly sent to ADFS.

Here is my logout code:
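My actual logout code isn't shown here; a sketch of a typical WS-Federation sign-out action in ASP.NET Core (the controller shape and redirect path are assumptions):

```csharp
using Microsoft.AspNetCore.Authentication;
using Microsoft.AspNetCore.Authentication.Cookies;
using Microsoft.AspNetCore.Authentication.WsFederation;
using Microsoft.AspNetCore.Mvc;
using System.Threading.Tasks;

public class AccountController : Controller
{
    [HttpPost]
    public async Task<IActionResult> Logout()
    {
        // Remove the local session cookie.
        await HttpContext.SignOutAsync(CookieAuthenticationDefaults.AuthenticationScheme);

        // Trigger sign-out at ADFS; RedirectUri ends up in the wreply parameter.
        await HttpContext.SignOutAsync(
            WsFederationDefaults.AuthenticationScheme,
            new AuthenticationProperties { RedirectUri = "/signedout" });

        return new EmptyResult();
    }
}
```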

After some trial and error I could pinpoint the issue to the following situation: when the reply URL was a subpath of the configured WSFederation endpoint, it worked and I got correctly redirected.

For example:

I guess it makes sense as it is kind of a security measure.

Monday, June 22, 2020

ElasticSearch–Upgrade error - System.IO.IOException: Source and destination path must have identical roots.

When trying to update an ElasticSearch cluster through the Windows Installer (MSI), it always seemed to fail.

In the error logs I found the following message:

System.IO.IOException: Source and destination path must have identical roots. Move will not work across volumes.

   at System.IO.Directory.InternalMove(String sourceDirName, String destDirName, Boolean checkHost)

   at System.IO.Abstractions.DirectoryWrapper.Move(String sourceDirName, String destDirName)

   at Elastic.InstallerHosts.Elasticsearch.Tasks.Install.CreateDirectoriesTask.CreateConfigDirectory(FileSystemAccessRule rule)

   at Elastic.InstallerHosts.Elasticsearch.Tasks.Install.CreateDirectoriesTask.ExecuteTask()

   at Elastic.InstallerHosts.SessionExtensions.Handle(Session session, Func`1 action)

There is a problem with the installer when you are using different volumes for your ElasticSearch application and your ElasticSearch data (which I think is a good practice). In that case the installer always fails as it tries to copy some files from one disk to another.

As a workaround (I tried multiple versions of the Windows installer, but all had the same issue) I installed the ElasticSearch application on the data disk.

Friday, June 19, 2020

.NET Core–Disable a specific compiler warning

In my .NET Core app I wanted Visual Studio to stop complaining about missing XML comments (CS1591 warning).

To suppress the warning I had to add a <NoWarn></NoWarn> configuration to the project file:
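The project file itself isn't shown here; a sketch of what the entry could look like (the target framework and surrounding properties are assumptions):

```xml
<Project Sdk="Microsoft.NET.Sdk.Web">
  <PropertyGroup>
    <TargetFramework>netcoreapp3.1</TargetFramework>
    <GenerateDocumentationFile>true</GenerateDocumentationFile>
    <!-- CS1591: Missing XML comment for publicly visible type or member -->
    <NoWarn>$(NoWarn);1591</NoWarn>
  </PropertyGroup>
</Project>
```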

Thursday, June 18, 2020

The power of tuples and deconstructors

Today I updated a part of my code using a combination of tuples and tuple deconstruction.

This was the original code:

And here it is after applying my changes:
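Neither snippet survives here; a hypothetical before/after sketch of the kind of refactoring described, using out parameters first and a named tuple with deconstruction afterwards (the method and its logic are invented for illustration):

```csharp
// Before: out parameters to return multiple values
public static bool TryParseAddress(string value, out string street, out string city)
{
    var parts = value.Split(',');
    street = parts[0].Trim();
    city = parts.Length > 1 ? parts[1].Trim() : string.Empty;
    return parts.Length > 1;
}

// After: a named tuple as return value...
public static (bool Success, string Street, string City) ParseAddress(string value)
{
    var parts = value.Split(',');
    return (parts.Length > 1,
            parts[0].Trim(),
            parts.Length > 1 ? parts[1].Trim() : string.Empty);
}

// ...consumed with tuple deconstruction at the call site:
// var (success, street, city) = ParseAddress("Baker Street 221B, London");
```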

I like it!

Wednesday, June 17, 2020

Upgrading ElasticSearch–Discovery configuration is required in production mode

While preparing for an ElasticSearch upgrade I checked the Upgrade assistant.

One (critical) issue was mentioned: Discovery configuration is required in production mode

Let’s have a look what the documentation has to mention:

Production deployments of Elasticsearch now require at least one of the following settings to be specified in the elasticsearch.yml configuration file:

  • discovery.seed_hosts
  • discovery.seed_providers
  • cluster.initial_master_nodes
  • discovery.zen.ping.unicast.hosts
  • discovery.zen.hosts_provider

The first three settings in this list are only available in versions 7.0 and above. If you are preparing to upgrade from an earlier version, you must set discovery.zen.ping.unicast.hosts or discovery.zen.hosts_provider.

In our case we don't want to form a multi-node cluster, so we ignore the documentation above and instead set the discovery.type to single-node.
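A minimal sketch of the corresponding elasticsearch.yml entry:

```yaml
# elasticsearch.yml - run this node standalone and skip the production discovery checks
discovery.type: single-node
```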

For more information about when you might use this setting, see Single-node discovery.

Tuesday, June 16, 2020

Azure Monitor / Application Insights : Workbooks

An easy way to get started with Azure Monitor / Application Insights is through ‘Workbooks’.

From the documentation:

Workbooks provide a flexible canvas for data analysis and the creation of rich visual reports within the Azure portal. They allow you to tap into multiple data sources from across Azure, and combine them into unified interactive experiences.

You can build your own workbooks but there is a large list of available templates out-of-the-box that can help you gain insight in your Azure services.

  • To get started, open the Azure Portal.
  • Go to Azure Monitor / Application Insights.
  • Select Workbooks from the menu on the left.
  • You can create a new report or choose one of the existing templates. Let's have a look at the 'Usage through the day' report, for example.
  • You can click on Edit to start customizing the report. Every report can be a combination of text, parameters, graphs and metrics.

Monday, June 15, 2020

Seq - ERR_SSL_PROTOCOL_ERROR

Structured logging is the future, and tools like ElasticSearch and Seq can help you manage and search through this structured log data.

While testing Seq, a colleague told me that he couldn't access Seq. Instead his browser returned the following error:

ERR_SSL_PROTOCOL_ERROR

The problem was that he tried to access the Seq server using HTTPS although this was not activated. By default Seq runs as a Windows service and listens only on HTTP.

To enable HTTPS some extra work needs to be done:

• First make sure you have a valid SSL certificate installed in either the Local Machine or Personal certificate store of your Seq server.
• Open the certificate manager on the server, browse to the certificate and read out the thumbprint value.
• Now open a command prompt on the server and execute the following commands:
  • seq bind-ssl --thumbprint="THUMBPRINT HERE" --port=9001
  • seq config -k api.listenUris -v https://YOURSERVER:9001
  • seq restart

Remark: The '--port' parameter is only necessary when you are not listening on the standard HTTPS port (443).

More information: https://docs.datalust.co/docs/ssl

Friday, June 12, 2020

Try an API call directly in Chrome Dev Tools

Quick tip if you want to test an API call: you can make an HTTP request directly from the Chrome Developer Tools:

• Open your Developer Tools (F12)
• Go to Console
• Enter the following:
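The snippet itself is missing here; a typical fetch call you could paste into the console (the URL is a placeholder):

```javascript
// A helper you can paste into the DevTools console; requests run
// from the page's origin, so cookies are sent automatically.
function callApi(url) {
  return fetch(url, { headers: { 'Accept': 'application/json' } })
    .then(response => response.json());
}

// Example usage in the console:
// callApi('https://api.example.com/users').then(data => console.table(data));
```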

Thursday, June 11, 2020

TypeLoadException: Type 'generatedProxy_5' from assembly 'ProxyBuilder, Version=0.0.0.0, Culture=neutral, PublicKeyToken=null' is attempting to implement an inaccessible interface.

A colleague shared with me the following strange error message he got when he tried to use a .NET Standard library I created:

An unhandled exception occurred while processing the request.

TypeLoadException: Type 'generatedProxy_5' from assembly 'ProxyBuilder, Version=0.0.0.0, Culture=neutral, PublicKeyToken=null' is attempting to implement an inaccessible interface.

   at System.Reflection.Emit.TypeBuilder.CreateTypeNoLock()
   at System.Reflection.Emit.TypeBuilder.CreateTypeInfo()
   at System.Reflection.DispatchProxyGenerator+ProxyBuilder.CreateType()
   at System.Reflection.DispatchProxyGenerator.GenerateProxyType(Type baseType, Type interfaceType)
   at System.Reflection.DispatchProxyGenerator.GetProxyType(Type baseType, Type interfaceType)
   at System.Reflection.DispatchProxyGenerator.CreateProxyInstance(Type baseType, Type interfaceType)
   at System.Reflection.DispatchProxy.Create<T, TProxy>()
   at System.ServiceModel.Channels.ServiceChannelProxy.CreateProxy<TChannel>(MessageDirection direction, ServiceChannel serviceChannel)
   at System.ServiceModel.Channels.ServiceChannelFactory.CreateProxy<TChannel>(MessageDirection direction, ServiceChannel serviceChannel)
   at System.ServiceModel.Channels.ServiceChannelFactory.CreateChannel<TChannel>(EndpointAddress address, Uri via)
   at System.ServiceModel.ChannelFactory<TChannel>.CreateChannel(EndpointAddress address, Uri via)
   at System.ServiceModel.ChannelFactory<TChannel>.CreateChannel()
   at System.ServiceModel.ClientBase<TChannel>.CreateChannel()
   at System.ServiceModel.ClientBase<TChannel>.CreateChannelInternal()
   at System.ServiceModel.ClientBase<TChannel>.get_Channel()
   at IRD3Service.IRD3ServiceClient.MbCoreGetIdentificerendeEenheidAsync(int idIe) in Reference.cs

Inside this library I'm doing a WCF call to get some data from a backend service. WCF internally generates a proxy for the WCF client through the ProxyBuilder, and it is this ProxyBuilder that started to complain…

The problem seems to be that I generated all my proxy types as internal (which should be a good thing). But the ProxyBuilder does not agree with me.

Some research (thanks Google!) brought me to the following possible solutions:

• Change all proxy types from internal to public
• Add an InternalsVisibleToAttribute("ProxyBuilder") to the library

I tried the second approach and it worked! Up to the next problem…
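The second approach boils down to a single assembly-level attribute; a sketch of what that could look like in the library (e.g. in AssemblyInfo.cs):

```csharp
using System.Runtime.CompilerServices;

// Allows the dynamically generated 'ProxyBuilder' assembly to see our internal proxy types.
[assembly: InternalsVisibleTo("ProxyBuilder")]
```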

Wednesday, June 10, 2020

InternalsVisibleTo in your csproj file

I blogged before about how to use the [InternalsVisibleTo] attribute in your .NET Standard/.NET Core project. Today I discovered an alternative approach where you specify the attribute information in your csproj file:
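The project file fragment isn't shown here; a sketch of what it can look like (the target assembly name is hypothetical):

```xml
<ItemGroup>
  <AssemblyAttribute Include="System.Runtime.CompilerServices.InternalsVisibleTo">
    <_Parameter1>MyLibrary.Tests</_Parameter1>
  </AssemblyAttribute>
</ItemGroup>
```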

During compilation of your project an AssemblyInfo.cs file is generated (take a look at your obj folder):

Tuesday, June 9, 2020

ElasticSearch.NET exception after upgrade

After upgrading ElasticSearch.NET to the latest version, my application failed with the following error message:

Could not load type 'Elasticsearch.Net.IInternalSerializerWithFormatter' from assembly 'Elasticsearch.Net, Version=7.0.0.0, Culture=neutral, PublicKeyToken=96c599bbe3e70f5d'.

A look at my packages.config (yes, it is still an old(er) ASP.NET application) showed the following:

<package id="CommonServiceLocator" version="2.0.1" targetFramework="net461" requireReinstallation="true" />
<package id="Elasticsearch.Net" version="7.7.1" targetFramework="net472" />
<package id="Iesi.Collections" version="4.0.1.4000" targetFramework="net461" />
<package id="LazyCache" version="0.7.1.44" targetFramework="net461" />
<package id="Microsoft.CSharp" version="4.6.0" targetFramework="net472" />
<package id="NEST" version="7.1.0" targetFramework="net472" />

The problem was that although I had updated the Elasticsearch.Net NuGet package, I forgot to do the same thing for the NEST high-level client.

To fix it I had to update the NEST NuGet package as well.

Monday, June 8, 2020

GraphQL vs OData

In case you didn't notice yet, I'm a big fan of GraphQL. One of the questions I get a lot (especially from .NET developers) is how it differs from OData.

At first sight they have a lot of similarities and partially try to achieve the same goal, but there are some reasons why I prefer GraphQL over OData.

Let's first have a look at the "official" descriptions:

From odata.org:

OData (Open Data Protocol) is an ISO/IEC approved, OASIS standard that defines a set of best practices for building and consuming RESTful APIs. OData helps you focus on your business logic while building RESTful APIs without having to worry about the various approaches to define request and response headers, status codes, HTTP methods, URL conventions, media types, payload formats, query options, etc. OData also provides guidance for tracking changes, defining functions/actions for reusable procedures, and sending asynchronous/batch requests.

OData RESTful APIs are easy to consume. The OData metadata, a machine-readable description of the data model of the APIs, enables the creation of powerful generic client proxies and tools.

From graphql.org:

GraphQL is a query language for APIs and a runtime for fulfilling those queries with your existing data. GraphQL provides a complete and understandable description of the data in your API, gives clients the power to ask for exactly what they need and nothing more, makes it easier to evolve APIs over time, and enables powerful developer tools.

Sounds familiar?

I can understand that people who have used OData will think it does the same thing, so what makes it different?

Decoupling

OData brought the power of SQL to your URIs at the cost of high coupling. The OData ecosystem was meant to replace your existing REST APIs, and your implementation had a direct technical coupling. GraphQL is more like a backend-for-frontend where you can bring multiple REST APIs together in one uniform interface.

Scaling

Although it is technically feasible to create one OData schema for your whole organization, it would be hard to build and maintain. Compare this with GraphQL Federation, where it becomes easy to create a single data graph for your whole organization.

Adoption

Although OData is an open standard and there are some other big names next to Microsoft who jumped on the bandwagon, I mostly encounter OData usage at companies that use SAP and/or .NET. GraphQL has a much broader adoption across multiple ecosystems and platforms.

I've used OData in the past and I really liked it in the context of WCF Data Services and Silverlight (RIP), but the flexibility, rich ecosystem and amazing tools and solutions (e.g. Apollo) of GraphQL should be enough to convince you…

Remark: I can recommend the following read to go into more detail about the differences: https://jeffhandley.com/2018-09-13/graphql-is-not-odata

Thursday, June 4, 2020

Azure Service Bus Explorer in Azure Portal

Until recently I used the Service Bus Explorer to debug and manage Azure Service Bus. But last week I noticed the following new menu item in Azure Service Bus:

To use the Azure Service Bus Explorer, you need to navigate to the Service Bus namespace on which you want to perform send, peek, and receive operations. Then select either 'Queues' or 'Topics' from the navigation menu. After doing that you should see the 'Service Bus Explorer' option.

Following operations are supported:

• Queues
  • 'Send' to a queue
  • 'Receive' from a queue
  • 'Peek' from a queue
  • 'Receive' from the DeadLetterQueue
  • 'Peek' from the DeadLetterQueue
• Topics
  • 'Send' to a topic
• Subscriptions
  • 'Peek' from a subscription on a topic
  • 'Receive' from a subscription
  • 'Peek' from the DeadLetter subscription
  • 'Receive' from the DeadLetter subscription

To learn more about the Service Bus Explorer tool, please read the documentation.

Application Insights - Stop tracking 404 errors

By default Application Insights will log every 404 error in your web app as an error. I think this is a good default, but what if you don't want to see these 404 errors?

There are 2 options to solve this:

Telemetry Processor

A telemetry processor gives you direct control over what is included or excluded from the telemetry stream.

We can register our new TelemetryProcessor by using the AddApplicationInsightsTelemetryProcessor extension method on IServiceCollection, as shown below:
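The implementation isn't shown here; a sketch of what such a processor and its registration could look like (the class name is hypothetical):

```csharp
using Microsoft.ApplicationInsights.Channel;
using Microsoft.ApplicationInsights.DataContracts;
using Microsoft.ApplicationInsights.Extensibility;

public class Ignore404Processor : ITelemetryProcessor
{
    private readonly ITelemetryProcessor _next;

    public Ignore404Processor(ITelemetryProcessor next) => _next = next;

    public void Process(ITelemetry item)
    {
        // Drop request telemetry with a 404 response code; pass everything else on.
        if (item is RequestTelemetry request && request.ResponseCode == "404")
            return;

        _next.Process(item);
    }
}

// In Startup.ConfigureServices:
// services.AddApplicationInsightsTelemetryProcessor<Ignore404Processor>();
```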

Telemetry Initializer

Telemetry initializers allow you to enrich telemetry with additional information and/or to override telemetry properties set by the standard telemetry modules. By default, any request with a response code >= 400 is flagged as failed. But if we want to treat a 404 as a success, we can provide a telemetry initializer that sets the Success property:

We can register the TelemetryInitializer in our Startup.cs:
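The initializer itself is missing here; a sketch of what it and its registration could look like (the class name is hypothetical):

```csharp
using Microsoft.ApplicationInsights.Channel;
using Microsoft.ApplicationInsights.DataContracts;
using Microsoft.ApplicationInsights.Extensibility;
using Microsoft.Extensions.DependencyInjection;

public class Treat404AsSuccessInitializer : ITelemetryInitializer
{
    public void Initialize(ITelemetry telemetry)
    {
        // Mark 404 requests as successful so they are no longer tracked as failures.
        if (telemetry is RequestTelemetry request && request.ResponseCode == "404")
            request.Success = true;
    }
}

// In Startup.ConfigureServices:
// services.AddSingleton<ITelemetryInitializer, Treat404AsSuccessInitializer>();
```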

The advantage of the Telemetry Initializer is that we still log the 404 event, but no longer as an error.

More information: https://docs.microsoft.com/en-us/azure/azure-monitor/app/api-filtering-sampling

Tuesday, June 2, 2020

Sharing authentication ticket between .NET Core and ASP.NET (Owin)

By default authentication tickets cannot be shared between .NET Core and OWIN. The good news is that it is possible, but we have to take some extra steps:

.NET Core App

On the .NET Core side we have to change the cookie authentication middleware:

• The cookie name should match the name used by the OWIN Cookie Authentication Middleware (.AspNet.SharedCookie for example).
• An instance of a DataProtectionProvider should be initialized to the common data protection key storage location.
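A sketch of these two steps in ConfigureServices (the key ring path and application name are placeholders):

```csharp
using System.IO;
using Microsoft.AspNetCore.Authentication.Cookies;
using Microsoft.AspNetCore.DataProtection;
using Microsoft.Extensions.DependencyInjection;

public void ConfigureServices(IServiceCollection services)
{
    // Both apps must point to the same key ring and use the same application name.
    services.AddDataProtection()
        .PersistKeysToFileSystem(new DirectoryInfo(@"\\server\share\keyring"))
        .SetApplicationName("SharedCookieApp");

    services.AddAuthentication(CookieAuthenticationDefaults.AuthenticationScheme)
        .AddCookie(options =>
        {
            // Must match the cookie name configured on the OWIN side.
            options.Cookie.Name = ".AspNet.SharedCookie";
        });
}
```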

ASP.NET (OWIN) App

On the ASP.NET (OWIN) side we have to install the Microsoft.Owin.Security.Interop package first.

Then we can change the cookie authentication middleware:

• The cookie name should match the name used by the ASP.NET Core Cookie Authentication Middleware (.AspNet.SharedCookie in the example).
• An instance of a DataProtectionProvider should be initialized to the common data protection key storage location.
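A sketch of the OWIN side, based on the interop types from Microsoft.Owin.Security.Interop (the key ring path and application name are the same placeholders as on the .NET Core side):

```csharp
using System.IO;
using Microsoft.AspNetCore.DataProtection;
using Microsoft.Owin.Infrastructure;
using Microsoft.Owin.Security.Cookies;
using Microsoft.Owin.Security.Interop;
using Owin;

public void Configuration(IAppBuilder app)
{
    // Same key ring and application name as the .NET Core app.
    var protectionProvider = DataProtectionProvider.Create(
        new DirectoryInfo(@"\\server\share\keyring"),
        builder => builder.SetApplicationName("SharedCookieApp"));

    var dataProtector = protectionProvider.CreateProtector(
        "Microsoft.AspNetCore.Authentication.Cookies.CookieAuthenticationMiddleware",
        "Cookies",
        "v2");

    app.UseCookieAuthentication(new CookieAuthenticationOptions
    {
        AuthenticationType = "Cookies",
        CookieName = ".AspNet.SharedCookie",
        // Read and write tickets in the ASP.NET Core cookie format.
        TicketDataFormat = new AspNetTicketDataFormat(new DataProtectorShim(dataProtector)),
        CookieManager = new ChunkingCookieManager()
    });
}
```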

Monday, June 1, 2020

ASP.NET Core–Set environment through the command line

ASP.NET Core has built-in support for multiple environments. This makes it easy to load different configuration and apply different middleware depending on the environment.

The typical way to control the environment is through the ASPNETCORE_ENVIRONMENT environment variable.

It is also possible to set the environment by passing it to the dotnet run command as an argument.

To set this up, we have to modify the Program.cs:
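The file itself isn't shown here; a sketch based on the typical ASP.NET Core 3.x template, with the command-line configuration added (the Startup class is assumed):

```csharp
using Microsoft.AspNetCore.Hosting;
using Microsoft.Extensions.Configuration;
using Microsoft.Extensions.Hosting;

public class Program
{
    public static void Main(string[] args) => CreateHostBuilder(args).Build().Run();

    public static IHostBuilder CreateHostBuilder(string[] args)
    {
        // Make command-line arguments (e.g. --environment) available to the host configuration.
        var config = new ConfigurationBuilder()
            .AddCommandLine(args)
            .Build();

        return Host.CreateDefaultBuilder(args)
            .ConfigureWebHostDefaults(webBuilder =>
            {
                webBuilder
                    .UseConfiguration(config)
                    .UseStartup<Startup>();
            });
    }
}
```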

The AddCommandLine method allows us to read configuration values from the command line.

Now we can start the app with dotnet run --environment Development.