
Posts

Showing posts from June, 2020

Evaluate your business strategies using Microsoft Assessments

Microsoft previewed ‘Microsoft Assessments’, a free, online platform that helps customers evaluate their business strategies. From the documentation: Microsoft Assessments is a free, online platform that helps customers evaluate their business strategies and workloads in a self-service, online manner and, through curated guidance from Microsoft, improve their posture in Azure. At the time of writing there are 4 available assessments:

- Cloud Journey Tracker: Identify your cloud adoption path based on your needs with this tracker and navigate to relevant content in the Cloud Adoption Framework for Azure.
- Governance Benchmark: Identify gaps in your organization’s current state of governance. Get a personalized benchmark report and curated guidance on how to get started.
- Microsoft Azure Well-Architected Review: Examine your workload through the lenses of reliability, cost management, operational excellence, security and performance efficiency…

Node issue–Call retries were exceeded

When trying to build an Angular 9 application on the build server, it failed on the ‘Generating ES5 bundles for differential loading’ step with the following error message:

An unhandled exception occurred: Call retries were exceeded

On the developer machines we couldn’t reproduce the issue (as always). Inside the angular-errors.log we found the following extra details:

[error] Error: Call retries were exceeded
    at ChildProcessWorker.initialize (\node_modules\@angular-devkit\build-angular\node_modules\jest-worker\build\workers\ChildProcessWorker.js:193:21)
    at ChildProcessWorker.onExit (\node_modules\@angular-devkit\build-angular\node_modules\jest-worker\build\workers\ChildProcessWorker.js:263:12)
    at ChildProcess.emit (events.js:210:5)
    at Process.ChildProcess._handle.onexit (internal/child_process.js:272:12)

We were able to solve the issue by upgrading the Node.js version on the build server. Hopefully this helps you as well…

ASP.NET Core–Using Value Tuples in Razor

I was wondering if it was possible to use ValueTuples as your Razor model. This turned out to work perfectly! Here is my ViewComponent method together with the corresponding Razor view (notice that this also works in Razor Pages and MVC controllers); a sketch of both is shown below.
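A minimal sketch of what this can look like; the ProductSummary name, the tuple element names and the sample values are assumptions for illustration:

```csharp
using Microsoft.AspNetCore.Mvc;

// Hypothetical view component that returns a ValueTuple as its model.
public class ProductSummaryViewComponent : ViewComponent
{
    public IViewComponentResult Invoke()
    {
        // The tuple element names become available in the view via @Model.Name / @Model.Price.
        var model = (Name: "Sample product", Price: 9.99m);
        return View(model);
    }
}
```

And the matching Razor view (for example Views/Shared/Components/ProductSummary/Default.cshtml):

```cshtml
@model (string Name, decimal Price)

<h2>@Model.Name</h2>
<p>@Model.Price.ToString("C")</p>
```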

NHibernate - CreateMultiCriteria is obsolete

It was code cleanup day, so time to get rid of some nagging warnings I didn’t have time to tackle before. One of them was an NHibernate warning about the CreateMultiCriteria method being obsolete. I got rid of the warning by replacing the CreateMultiCriteria call with a CreateQueryBatch call. The API is a little bit different; most important to notice is that there is a GetResult method where you can specify which query result you want to get back. A before/after sketch is shown below.
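A minimal before/after sketch; the Customer and Order entities and the result keys are assumptions for illustration:

```csharp
using System.Collections.Generic;
using NHibernate;
using NHibernate.Multi; // Add/GetResult extension methods for IQueryBatch

public class CriteriaBatchExample
{
    public void Run(ISession session)
    {
        // Before: the obsolete CreateMultiCriteria API.
        var multi = session.CreateMultiCriteria()
            .Add<Customer>(session.CreateCriteria<Customer>())
            .Add<Order>(session.CreateCriteria<Order>());
        var results = multi.List();
        var customers = (IList<Customer>)results[0];
        var orders = (IList<Order>)results[1];

        // After: the CreateQueryBatch API.
        var batch = session.CreateQueryBatch();
        batch.Add<Customer>("customers", session.CreateCriteria<Customer>());
        batch.Add<Order>("orders", session.CreateCriteria<Order>());

        // GetResult executes the whole batch (once) and returns the result for the given key.
        var batchedCustomers = batch.GetResult<Customer>("customers");
        var batchedOrders = batch.GetResult<Order>("orders");
    }
}
```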

ASP.NET Core–Endpoint authorization

Until recently I always used an (empty) [Authorize] attribute on top of my controllers to activate authorization on a specific endpoint (or I used a global AuthorizeFilter). This authorizes users using the DefaultPolicy, which just requires an authenticated user. With the introduction of endpoint routing there is a new alternative. A disadvantage of the AuthorizeFilter and the Authorize attribute is that they are MVC-only features. A (better) solution is to use the RequireAuthorization() extension method on IEndpointConventionBuilder, as shown below. This has the same effect as applying an [Authorize] attribute on every controller.
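A minimal sketch of this in Startup.Configure (ASP.NET Core 3.x style endpoint routing):

```csharp
using Microsoft.AspNetCore.Builder;

public void Configure(IApplicationBuilder app)
{
    app.UseRouting();

    app.UseAuthentication();
    app.UseAuthorization();

    app.UseEndpoints(endpoints =>
    {
        // Applies the DefaultPolicy to every controller endpoint,
        // equivalent to putting an empty [Authorize] attribute on every controller.
        endpoints.MapControllers().RequireAuthorization();
    });
}
```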

WSFederation–Implementing logout on ADFS

In one of my ASP.NET Core applications we are (still) using WSFederation as the authentication protocol. While implementing the signout functionality I noticed that I was correctly signed out at the ADFS level, but ADFS didn’t return me to my application afterwards. This is handled by the wreply parameter, and this parameter was correctly sent to ADFS (a sketch of my logout code is included after the example below). After some trial and error I could pinpoint the issue to the following situation: the redirect only worked when the reply URL was a subpath of the configured WSFederation endpoint. For example:

- The ADFS WSFederation endpoint for my Relying Party was configured to use https://localhost/example/federationresult/
- If I used https://localhost/example/logoutsuccess/ as reply URL, nothing happened and I stayed on the ADFS logout page.
- If I used https://localhost/example/federationresult/logoutsuccess/ as reply URL, I was correctly redirected back to my application.

I guess it makes sense…
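A minimal sketch of the logout action; the scheme names follow the ASP.NET Core defaults and the controller/action names are assumptions:

```csharp
using Microsoft.AspNetCore.Authentication;
using Microsoft.AspNetCore.Authentication.Cookies;
using Microsoft.AspNetCore.Authentication.WsFederation;
using Microsoft.AspNetCore.Mvc;

public class AccountController : Controller
{
    [HttpPost]
    public IActionResult Logout()
    {
        // The RedirectUri ends up as the wreply parameter sent to ADFS.
        var properties = new AuthenticationProperties
        {
            RedirectUri = Url.Action("LogoutSuccess", "Account")
        };

        // Sign out of the local cookie and trigger the WS-Federation signout redirect.
        return SignOut(properties,
            CookieAuthenticationDefaults.AuthenticationScheme,
            WsFederationDefaults.AuthenticationScheme);
    }
}
```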

ElasticSearch–Upgrade error - System.IO.IOException: Source and destination path must have identical roots.

When trying to update an ElasticSearch cluster through the Windows Installer (MSI), it always seemed to fail. In the error logs I found the following message:

System.IO.IOException: Source and destination path must have identical roots. Move will not work across volumes.
   at System.IO.Directory.InternalMove(String sourceDirName, String destDirName, Boolean checkHost)
   at System.IO.Abstractions.DirectoryWrapper.Move(String sourceDirName, String destDirName)
   at Elastic.InstallerHosts.Elasticsearch.Tasks.Install.CreateDirectoriesTask.CreateConfigDirectory(FileSystemAccessRule rule)
   at Elastic.InstallerHosts.Elasticsearch.Tasks.Install.CreateDirectoriesTask.ExecuteTask()
   at Elastic.InstallerHosts.SessionExtensions.Handle(Session session, Func`1 action)

There is a problem with the installer when you are using different volumes for your ElasticSearch application and your ElasticSearch data (which I think is a good practice). In that case the installer…

Upgrading ElasticSearch–Discovery configuration is required in production mode

While preparing for an ElasticSearch upgrade I checked the Upgrade Assistant. One (critical) issue was mentioned: Discovery configuration is required in production mode. Let’s have a look at what the documentation has to say:

Production deployments of Elasticsearch now require at least one of the following settings to be specified in the elasticsearch.yml configuration file:

- discovery.seed_hosts
- discovery.seed_providers
- cluster.initial_master_nodes
- discovery.zen.ping.unicast.hosts
- discovery.zen.hosts_provider

The first three settings in this list are only available in versions 7.0 and above. If you are preparing to upgrade from an earlier version, you must set discovery.zen.ping.unicast.hosts or discovery.zen.hosts_provider.

In our case we don’t want to form a multi-node cluster, so we ignore the documentation above and instead change discovery.type to single-node, as sketched below. For more information about when you might use this setting…
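A minimal sketch of the relevant elasticsearch.yml change:

```yaml
# elasticsearch.yml - run this node as a standalone, single-node cluster
discovery.type: single-node
```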

Azure Monitor / Application Insights : Workbooks

An easy way to get started with Azure Monitor / Application Insights is through ‘Workbooks’. From the documentation: Workbooks provide a flexible canvas for data analysis and the creation of rich visual reports within the Azure portal. They allow you to tap into multiple data sources from across Azure, and combine them into unified interactive experiences. You can build your own workbooks, but there is also a large list of out-of-the-box templates that can help you gain insight into your Azure services. To get started:

- Open the Azure Portal.
- Go to Azure Monitor / Application Insights.
- Select Workbooks from the menu on the left.

You can create a new report or choose one of the existing templates. Let’s have a look at the ‘Usage through the day’ report, for example. You can click Edit to start customizing the report. Every report can be a combination of text, parameters, graphs and metrics.

Seq - ERR_SSL_PROTOCOL_ERROR

Structured logging is the future, and tools like ElasticSearch and Seq can help you manage and search through this structured log data. While testing Seq, a colleague told me that he couldn’t access Seq; instead his browser returned the following error: ERR_SSL_PROTOCOL_ERROR. The problem was that he tried to access the Seq server using HTTPS although this was not activated. By default Seq runs as a Windows service and listens only on HTTP. To enable HTTPS some extra work needs to be done:

- First make sure you have a valid SSL certificate installed in either the Local Machine or Personal certificate store of your Seq server.
- Open the certificate manager on the server, browse to the certificate and read out the thumbprint value.
- Now open a command prompt on the server and execute the following commands:

seq bind-ssl --thumbprint="THUMBPRINT HERE" --port=9001
seq config -k api.listenUris -v https://YOURSERVER:9001
seq restart

Remark…

TypeLoadException: Type 'generatedProxy_5' from assembly 'ProxyBuilder, Version=0.0.0.0, Culture=neutral, PublicKeyToken=null' is attempting to implement an inaccessible interface.

A colleague shared with me the following strange error message he got when he tried to use a .NET Standard library I created:

An unhandled exception occurred while processing the request.
TypeLoadException: Type 'generatedProxy_5' from assembly 'ProxyBuilder, Version=0.0.0.0, Culture=neutral, PublicKeyToken=null' is attempting to implement an inaccessible interface.
   at System.Reflection.Emit.TypeBuilder.CreateTypeNoLock()
   at System.Reflection.Emit.TypeBuilder.CreateTypeInfo()
   at System.Reflection.DispatchProxyGenerator+ProxyBuilder.CreateType()
   at System.Reflection.DispatchProxyGenerator.GenerateProxyType(Type baseType, Type interfaceType)
   at System.Reflection.DispatchProxyGenerator.GetPr…

InternalsVisibleTo in your csproj file

I blogged before about how to use the [InternalsVisibleTo] attribute in your .NET Standard/.NET Core project. Today I discovered an alternative approach where you specify the attribute information in your csproj file, as sketched below. During compilation of your project an AssemblyInfo.cs file containing the attribute is generated (take a look at your obj folder).
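A minimal sketch of the csproj approach; the MyProject.Tests assembly name is an assumption:

```xml
<ItemGroup>
  <AssemblyAttribute Include="System.Runtime.CompilerServices.InternalsVisibleTo">
    <_Parameter1>MyProject.Tests</_Parameter1>
  </AssemblyAttribute>
</ItemGroup>
```

The generated AssemblyInfo.cs in the obj folder should then contain a line along the lines of [assembly: System.Runtime.CompilerServices.InternalsVisibleTo("MyProject.Tests")].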

ElasticSearch.NET exception after upgrade

After upgrading ElasticSearch.NET to the latest version, my application failed with the following error message:

Could not load type 'Elasticsearch.Net.IInternalSerializerWithFormatter' from assembly 'Elasticsearch.Net, Version=7.0.0.0, Culture=neutral, PublicKeyToken=96c599bbe3e70f5d'.

A look at my packages.config (yes, it is still an old(er) ASP.NET application) showed the following:

<package id="CommonServiceLocator" version="2.0.1" targetFramework="net461" requireReinstallation="true" />
<package id="Elasticsearch.Net" version="7.7.1" targetFramework="net472" />
<package id="Iesi.Collections" version="4.0.1.4000" targetFramework="net461" />
<package id="LazyCache" version="0.7.1.44" targetFramework="net461" />
<package id="Microsoft.CSharp" version="4.6.0" tar…

GraphQL vs OData

In case you didn’t notice yet, I’m a big fan of GraphQL. One of the questions I get a lot (especially from .NET developers) is what the difference is with OData. At first sight they have a lot of similarities and partially try to achieve the same goal, but there are some reasons why I prefer GraphQL over OData. Let’s first have a look at the “official” descriptions. From odata.org: OData (Open Data Protocol) is an ISO/IEC approved, OASIS standard that defines a set of best practices for building and consuming RESTful APIs. OData helps you focus on your business logic while building RESTful APIs without having to worry about the various approaches to define request and response headers, status codes, HTTP methods, URL conventions, media types, payload formats, query options, etc. OData also provides guidance for tracking changes, defining functions/actions for reusable procedures, and sending asynchronous/batch requests. OData RESTful APIs are easy to consume. The OData…

Azure Service Bus Explorer in Azure Portal

Until recently I used the Service Bus Explorer tool to debug and manage Azure Service Bus, but last week I noticed a new menu item in Azure Service Bus. To use the Azure Service Bus Explorer, you need to navigate to the Service Bus namespace on which you want to perform send, peek, and receive operations. Then select either ‘Queues’ or ‘Topics’ from the navigation menu. After doing that you should see the ‘Service Bus Explorer’ option. The following operations are supported:

Queues
- 'Send' to a queue.
- 'Receive' from a queue.
- 'Peek' from a queue.
- 'Receive' from the DeadLetterQueue.
- 'Peek' from the DeadLetterQueue.

Topics
- 'Send' to a topic.

Subscriptions
- 'Peek' from a subscription on a topic.
- 'Receive' from a subscription.
- 'Peek' from the DeadLetter subscription.
- 'Receive' from…

Application Insights - Stop tracking 404 errors

By default Application Insights will log every 404 error in your web app as an error. I think this is a good default, but what if you don’t want to see these 404 errors? There are 2 options to solve this:

- Through a telemetry processor
- Through a telemetry initializer

Telemetry Processor

A telemetry processor gives you direct control over what is included in or excluded from the telemetry stream. We can register our new TelemetryProcessor by using the AddApplicationInsightsTelemetryProcessor extension method on IServiceCollection, as shown in the sketch below.

Telemetry Initializer

Telemetry initializers allow you to enrich telemetry with additional information and/or override telemetry properties set by the standard telemetry modules. By default, any request with a response code >= 400 is flagged as failed. But if we want to treat a 404 as a success, we can provide a telemetry initializer that sets the Success property (also sketched below). We can register the TelemetryInitializer in our…
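A minimal sketch of both options, assuming an ASP.NET Core app with the Application Insights SDK enabled; the class names are mine:

```csharp
using Microsoft.ApplicationInsights.Channel;
using Microsoft.ApplicationInsights.DataContracts;
using Microsoft.ApplicationInsights.Extensibility;

// Option 1: a telemetry processor that drops 404 request telemetry entirely.
public class Ignore404TelemetryProcessor : ITelemetryProcessor
{
    private readonly ITelemetryProcessor _next;

    public Ignore404TelemetryProcessor(ITelemetryProcessor next) => _next = next;

    public void Process(ITelemetry item)
    {
        if (item is RequestTelemetry request && request.ResponseCode == "404")
        {
            return; // swallow the item instead of passing it along the chain
        }

        _next.Process(item);
    }
}

// Option 2: a telemetry initializer that keeps the 404 requests but marks them as successful.
public class Treat404AsSuccessInitializer : ITelemetryInitializer
{
    public void Initialize(ITelemetry telemetry)
    {
        if (telemetry is RequestTelemetry request && request.ResponseCode == "404")
        {
            request.Success = true;
        }
    }
}

// Registration in Startup.ConfigureServices:
// services.AddApplicationInsightsTelemetry();
// services.AddApplicationInsightsTelemetryProcessor<Ignore404TelemetryProcessor>();
// services.AddSingleton<ITelemetryInitializer, Treat404AsSuccessInitializer>();
```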

Sharing authentication ticket between .NET Core and ASP.NET (Owin)

By default authentication tickets cannot be shared between .NET Core and OWIN. The good news is that it is possible, but we have to take some extra steps.

.NET Core App

On the .NET Core side we have to change the cookie authentication middleware (a sketch is included below):
- The cookie name should match the name used by the OWIN Cookie Authentication Middleware (.AspNet.SharedCookie for example).
- An instance of a DataProtectionProvider should be initialized to the common data protection key storage location.

ASP.NET (OWIN) App

On the ASP.NET (OWIN) side we have to install the Microsoft.Owin.Security.Interop package first. Then we can change the cookie authentication middleware:
- The cookie name should match the name used by the ASP.NET Core Cookie Authentication Middleware (.AspNet.SharedCookie in the example).
- An instance of a DataProtectionProvider should be initialized to the common data protection key storage location.
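A minimal sketch of the .NET Core side only; the shared key folder path and application name are assumptions, and the OWIN application needs the matching cookie name and key location through the interop package:

```csharp
using System.IO;
using Microsoft.AspNetCore.Authentication.Cookies;
using Microsoft.AspNetCore.DataProtection;
using Microsoft.Extensions.DependencyInjection;

public class Startup
{
    public void ConfigureServices(IServiceCollection services)
    {
        services.AddAuthentication(CookieAuthenticationDefaults.AuthenticationScheme)
            .AddCookie(options =>
            {
                // Must match the cookie name configured in the OWIN middleware.
                options.Cookie.Name = ".AspNet.SharedCookie";

                // Point data protection to the key ring shared with the OWIN application.
                options.DataProtectionProvider = DataProtectionProvider.Create(
                    new DirectoryInfo(@"\\server\share\keyring"),
                    builder => builder.SetApplicationName("SharedCookieApp"));
            });
    }
}
```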

ASP.NET Core–Set environment through the commandline

ASP.NET Core has built-in support for multiple environments. This makes it easy to load different configuration and apply different middleware depending on the environment. The typical way to control the environment is through the ASPNETCORE_ENVIRONMENT environment variable. It is also possible to set the environment by passing it to the dotnet run command as an argument. To set this up, we have to modify Program.cs, as sketched below. The AddCommandLine method allows us to read configuration values from the command line. Now we can start the app with dotnet run --environment Development.
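A minimal sketch of such a Program.cs, using the WebHostBuilder-style setup and assuming a Startup class; adapt it to your own host configuration:

```csharp
using Microsoft.AspNetCore.Hosting;
using Microsoft.Extensions.Configuration;

public class Program
{
    public static void Main(string[] args)
    {
        // Read configuration values (such as --environment) from the command line.
        var config = new ConfigurationBuilder()
            .AddCommandLine(args)
            .Build();

        var host = new WebHostBuilder()
            .UseConfiguration(config) // maps the 'environment' key onto the hosting environment
            .UseKestrel()
            .UseStartup<Startup>()
            .Build();

        host.Run();
    }
}
```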