
Posts

Showing posts from 2019

SQL Server–Activate Memory Optimized tables

When trying to create a Memory Optimized table in SQL Server, I got the following error message:

Cannot create memory optimized tables. To create memory optimized tables, the database must have a MEMORY_OPTIMIZED_FILEGROUP that is online and has at least one container.

To fix this I first had to add a new memory optimized filegroup to the database:

ALTER DATABASE OntSessionTmp ADD FILEGROUP ontsessiontmp_mod CONTAINS MEMORY_OPTIMIZED_DATA

After creating the filegroup I had to link a container:

ALTER DATABASE ontsessiontmp ADD FILE (name='ontsessiontmp_mod1', filename='F:\Program Files\Microsoft SQL Server\MSSQL13.ONTINST2\MSSQL\DATA\ontsessiontmp_mod1') TO FILEGROUP ontsessiontmp_mod

Now I could switch a table to a Memory Optimized version. Yes!
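With the filegroup and container in place, a table can be created with the MEMORY_OPTIMIZED option. A minimal sketch (table and column names are hypothetical):

```sql
-- Hypothetical example: memory-optimized tables require a primary key
-- with a NONCLUSTERED (or HASH) index; DURABILITY is optional.
CREATE TABLE dbo.SessionCache
(
    SessionId NVARCHAR(100) NOT NULL PRIMARY KEY NONCLUSTERED,
    Payload NVARCHAR(4000) NULL
)
WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA);
```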

Install .NET Core 3.1 SDK on Ubuntu 18.04 inside WSL

I’m a big fan of WSL (Windows Subsystem for Linux) to test my .NET Core applications on Linux. Recently I tried to install the .NET Core 3.1 SDK on my Ubuntu distribution inside WSL:

bawu@ORD02476:~$ sudo apt-get install dotnet-sdk-3.1
Reading package lists... Done
Building dependency tree
Reading state information... Done
E: Unable to locate package dotnet-sdk-3.1
E: Couldn't find any package by glob 'dotnet-sdk-3.1'
E: Couldn't find any package by regex 'dotnet-sdk-3.1'

This didn’t seem to work. It couldn’t find the .NET Core 3.1 SDK inside the package manager. I first tried to refresh the package list:

bawu@ORD02476:~$ sudo apt update
0% [Connecting to archive.ubuntu.com] [Connecting to security.ubuntu.com]
Hit:1 http://archive.ubuntu.com/ubuntu bionic InRelease
Get:2 http://archive.ubuntu.com/ubuntu bionic-updates InRelease [88.7 kB]
Get:3 http://archive.ubuntu.com/ubuntu bionic-backports InRelease …
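The missing piece (paraphrasing Microsoft’s install instructions for Ubuntu 18.04 at the time; verify the URL against the current docs) is registering the Microsoft package repository before installing:

```shell
# Register the Microsoft package repository for Ubuntu 18.04 (bionic)
wget https://packages.microsoft.com/config/ubuntu/18.04/packages-microsoft-prod.deb
sudo dpkg -i packages-microsoft-prod.deb

# Refresh the package list and install the SDK
sudo apt-get update
sudo apt-get install -y apt-transport-https
sudo apt-get install -y dotnet-sdk-3.1
```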

Delete a Windows Service

I’m so spoiled by using TopShelf that I no longer remember how to remove a Windows Service the old-fashioned way. If you have PowerShell 6 or higher, you can do the following: Remove-Service -Name "Your Service Name" An alternative is to directly use the Service Control Manager: sc.exe delete "Your Service Name" Don’t forget the ‘.exe’ when invoking the Service Control Manager inside a PowerShell command window. More information: https://docs.microsoft.com/en-us/dotnet/framework/windows-services/how-to-install-and-uninstall-services

RedisTimeoutException - High ‘in’ value

While investigating a performance issue we noticed that we had a lot of RedisTimeoutExceptions inside our logs:

2019-12-15 16:26:57.425 +01:00 [ERR] Connection id "0HLS1E3OEP520", Request id "0HLS1E3OEP520:0000001D": An unhandled exception was thrown by the application. StackExchange.Redis.RedisTimeoutException: Timeout performing PEXPIRE CookiesStorageAuthSessionStore-4a07a1bb-04e0-442d-9223-e1612967bf2b, inst: 2, queue: 8, qu: 0, qs: 8, qc: 0, wr: 0, wq: 0, in: 14411, ar: 0, clientName: SERVER1, serverEndpoint: Unspecified/redis:6379, keyHashSlot: 4822 (Please take a look at this article for some common client-side issues that can cause timeouts: http://stackexchange.github.io/StackExchange.Redis/Timeouts)

You would think that this indicates there is a performance issue in Redis but this turned out not to be the case. Let’s have a second look at the error message and especially the strange parameters: inst: 2, queue: 8, qu: 0, qs: 8, qc: 0, wr: …
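The excerpt stops before the explanation, but per the StackExchange.Redis timeout guidance linked above, a high `in` value means bytes are sitting unread in the client’s socket buffer — the client, not Redis, can’t keep up, often due to thread pool starvation. One commonly suggested client-side mitigation (an assumption that it applies here; values are illustrative) is raising the minimum thread pool size:

```csharp
// Raise the minimum number of worker threads so bursts of work don't
// have to wait on the thread pool's slow thread-injection rate.
ThreadPool.GetMinThreads(out var workerThreads, out var ioThreads);
ThreadPool.SetMinThreads(Math.Max(workerThreads, 200), ioThreads);
```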

OpenLayers - Use SVG icons for markers

On one of my projects we are using OpenLayers to visualize geographical data. I got stuck when I tried to show markers on a map by using SVG icons. My first attempt looked like this: But on the map I only got a weird black triangle.😕 I tried a lot of alternatives but nothing made any difference. In the end I found out that I could fix it by specifying a size (width and height) inside my SVG file: Remark: Changing the scale and size properties on the style seems to have no effect when using SVGs. So make sure to set an appropriate size in the SVG file.
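The original code screenshot is lost; a minimal marker style using an SVG icon might look roughly like this (icon path, coordinates and anchor are assumptions on my part):

```javascript
import Feature from 'ol/Feature';
import Point from 'ol/geom/Point';
import { Style, Icon } from 'ol/style';

// Hypothetical marker: the SVG file itself must declare a size,
// e.g. <svg width="32" height="32" ...>, otherwise only a black
// shape is rendered.
const marker = new Feature({ geometry: new Point([0, 0]) });
marker.setStyle(new Style({
  image: new Icon({
    src: 'img/marker.svg',
    anchor: [0.5, 1] // tip of the pin points at the coordinate
  })
}));
```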

Disable transformations in ASP.NET Core

By default when you add a web.config file to an ASP.NET Core application, the config file is transformed with the correct processPath and arguments. This can be quite annoying, especially if you want to apply some web.config transformation magic. Inside the documentation, I found that you can prevent the Web SDK from transforming the web.config file by setting the <IsTransformWebConfigDisabled> property in the project file: This disabled the web.config transformation behavior but didn’t solve the problem that Visual Studio updated the web.config file. A colleague sent me another (undocumented?) flag <ANCMPreConfiguredForIIS>: Remark: This setting only works when you are running in IIS InProcess.
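The project file snippets are not visible in this excerpt; put together, the two properties go into the csproj roughly like this (sketch):

```xml
<PropertyGroup>
  <!-- Stop the Web SDK from rewriting web.config on publish -->
  <IsTransformWebConfigDisabled>true</IsTransformWebConfigDisabled>
  <!-- Undocumented(?) flag a colleague suggested; only relevant for IIS InProcess hosting -->
  <ANCMPreConfiguredForIIS>true</ANCMPreConfiguredForIIS>
</PropertyGroup>
```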

Change password in another domain

As a consultant I’m active at multiple clients. For each of these clients I get a domain account to be able to log in to their systems. But as they all enforce a password policy, I have to update a lot of passwords every month (thank god that password managers exist). Here is a quick tip if you want to change the password of an Active Directory account while your PC isn’t part of the same domain. You can still change your password if your PC is able to talk to the domain controller:

Hit CTRL-ALT-DELETE
Select Change a Password

You’ll see the Change a Password screen where the focus is on the old password field. But what is not immediately obvious is that you can change the username (and domain!) in the first field. By entering a different username there you can change the password directly for any account you have access to.

Transport Tycoon exercises for DDD

While searching for something on GitHub I stumbled over the following repo: https://github.com/Softwarepark/exercises/blob/master/transport-tycoon.md This is a set of Domain-Driven Design (DDD) exercises. They take place in the universe of Transport Tycoon, a game "in which the player acts as an entrepreneur in control of a transport company, and can compete against rival companies to make as much profit as possible by transporting passengers and various goods by road, rail, sea and air." If you want to learn about DDD, or practice your DDD skills, this is a great way to start…

ASP.NET Core - Load session state asynchronously

While browsing through the ASP.NET Core documentation I noticed the following section: Load session state asynchronously . The default session provider in ASP.NET Core loads session records from the underlying IDistributedCache backing store asynchronously only if the ISession.LoadAsync method is explicitly called before the TryGetValue , Set , or Remove methods. If LoadAsync isn't called first, the underlying session record is loaded synchronously, which can incur a performance penalty at scale. To have apps enforce this pattern, wrap the DistributedSessionStore and DistributedSession implementations with versions that throw an exception if the LoadAsync method isn't called before TryGetValue , Set , or Remove . Register the wrapped versions in the services container. To avoid this performance penalty, I created two extension methods that do a LoadAsync before reading or writing the session state:
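The extension methods themselves are not shown in this excerpt; a sketch of what they could look like (method names and the byte[] shape are my own choices):

```csharp
using Microsoft.AspNetCore.Http;

public static class SessionExtensions
{
    // Load the session record asynchronously before reading a value
    public static async Task<byte[]> GetValueAsync(this ISession session, string key)
    {
        await session.LoadAsync();
        session.TryGetValue(key, out var value);
        return value;
    }

    // Load the session record asynchronously before writing a value
    public static async Task SetValueAsync(this ISession session, string key, byte[] value)
    {
        await session.LoadAsync();
        session.Set(key, value);
    }
}
```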

Autofac–Configure Container in .NET Core WorkerService

.NET Core 3.0 introduced a new WorkerService template that can be used as a starting point for long running service apps. As the worker service template didn’t use a Startup.cs file like in a traditional ASP.NET Core application, it wasn’t immediately obvious to me where I had to configure my IoC container. So a quick tip for my future self, you can do this using the ConfigureContainer() method on the HostBuilder: Remark: The example above is using Autofac but the approach is similar for other containers.
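The excerpt’s code is missing; with Autofac this looks roughly like the following sketch (AutofacServiceProviderFactory comes from the Autofac.Extensions.DependencyInjection package; the registered types are hypothetical):

```csharp
public static IHostBuilder CreateHostBuilder(string[] args) =>
    Host.CreateDefaultBuilder(args)
        // Plug in Autofac as the service provider factory
        .UseServiceProviderFactory(new AutofacServiceProviderFactory())
        // Configure the Autofac container directly
        .ConfigureContainer<ContainerBuilder>(builder =>
        {
            builder.RegisterType<MyService>().As<IMyService>();
        })
        .ConfigureServices((hostContext, services) =>
        {
            services.AddHostedService<Worker>();
        });
```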

NuGet Restore error - Response status code does not indicate success: 401 (Unauthorized)

When trying to build a project in Visual Studio, it failed while downloading the NuGet packages from our internal Azure Artifacts NuGet store. In the logs I could find the following error message:

C:\Program Files\dotnet\sdk\3.0.100\NuGet.targets(123,5): error : Failed to retrieve information about 'Example.WebApi.Client' from remote source 'http://tfs:8080/tfs/DefaultCollection/_packaging/892779dc-d854-4c9f-8b26-833d52585ae6/nuget/v3/flat2/example.webapi.client/index.json'. [C:\Projects\MapSU\MapSU.Server.sln]
C:\Program Files\dotnet\sdk\3.0.100\NuGet.targets(123,5): error :   Response status code does not indicate success: 401 (Unauthorized). [C:\Projects\MapSU\MapSU.Server.sln]

Directly accessing the Azure Artifacts URL worked without a problem, but when I tried to do this through Visual Studio or through the command line it failed with the error above. I was able to solve the problem by removing the ‘vscredentials’ entry in the Windows Credentials manager…

HTTP Error 500.35 - ANCM Multiple In-Process Applications in same Process

After switching one of our ASP.NET Core applications from out-of-process hosting to InProcess hosting, we got the following error message: This is caused by the fact that we are running multiple applications in IIS in the same application pool. As long as we were running out-of-process this was not an issue. But when we switched to InProcess hosting it turned out that you cannot run multiple In-Process applications in the same application pool. The solution is simple, give each application its own app pool.

.NET Core 3 - Minibook

The people from InfoQ released a free (mini)book about .NET Core . In this book, five authors talk about the current state of .NET Core 3.0 from multiple perspectives. Each author brings their experience and ideas on how different .NET Core 3.0 features are relevant to the .NET ecosystem, both present and future. It covers the following topics: Navigating the .NET Ecosystem - In 2002, .NET was released. Over the next 12+ years, the .NET developer community patiently grew at a seemingly steady pace. Then, things started evolving rapidly. Microsoft anticipated the changing ecosystem and embraced the open-source development mindset, even acquiring GitHub. Interview with Scott Hunter on .NET Core 3.0 - Chris Woodruff talks to Director of Program Management for the .NET platform Scott Hunter about what developers can expect from .NET Core 3. Single Page Applications and ASP.NET Core 3.0 - Web development has changed in the past few years, with the maturity of Angular, Re…

SQL Server–Find biggest tables

Did you know it is really easy to find out which tables in SQL Server use the most disk space? Open SQL Server Management Studio Right click on your database Go to Reports –> Standard Reports Choose the Disk Usage by Table report:
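If you prefer T-SQL over the SSMS report, a commonly used query against the catalog views gives a similar (approximate) overview — a sketch, adapt as needed:

```sql
-- Approximate size per table: row count and total allocated space in KB
SELECT
    t.name AS TableName,
    SUM(p.rows) AS [RowCount],
    SUM(a.total_pages) * 8 AS TotalSpaceKB
FROM sys.tables t
JOIN sys.partitions p ON t.object_id = p.object_id AND p.index_id IN (0, 1)
JOIN sys.allocation_units a ON p.partition_id = a.container_id
GROUP BY t.name
ORDER BY TotalSpaceKB DESC;
```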

Running ElasticSearch in Docker–Memlock check

While updating my docker-compose file to the latest docker images, I noticed I had to set some extra values before I could run my ELK cluster: I had to add an extra bootstrap check: - "bootstrap.memory_lock=true" and set the memlock ulimits: ulimits:     memlock:         soft: -1         hard: -1 The memory lock setting will disable swapping out parts of the JVM heap to disk. Memory swapping is bad for performance and node stability so it should be avoided at all cost. More information: https://www.elastic.co/guide/en/elasticsearch/reference/current/setup-configuration-memory.html The memlock soft and hard values configure the range of memory that ElasticSearch will use. Setting this to -1 means unlimited.
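Put together, the relevant part of the docker-compose service definition looks roughly like this (the image tag is an assumption):

```yaml
elasticsearch:
  image: docker.elastic.co/elasticsearch/elasticsearch:7.4.0
  environment:
    # Disable swapping of the JVM heap to disk
    - bootstrap.memory_lock=true
  ulimits:
    memlock:
      soft: -1
      hard: -1
```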

Publish a .NET Core app using Azure Pipeline

To publish a .NET Core application using Web Deploy, you can use the .NET Core task inside Azure Pipelines. Through the .NET Core task you can invoke the dotnet CLI and the available commands. To use Web Deploy you can use the ‘publish’ command. It took me some time to find the correct arguments, so here they are in case you need them:

dotnet publish --configuration $(BuildConfiguration) /p:PublishProfile=$(BuildConfiguration) /p:Password=$(WebDeployPassword) /p:Username=$(WebDeployUser) /p:AllowUntrustedCertificate=True

Remark: Don’t forget to first create a publish profile (pubxml) file in Visual Studio and commit it to your repo. Git ignored my pubxml by default and it took me some time to figure out why nothing happened…

EditorConfig - Let private fields start with an underscore

I find code consistency really important. It removes a lot of mental burden from the developer. One of the conventions that is quite common in C# is to use camelCase for local variables and _camelCase for private or internal fields. An example: Unfortunately this convention is not automatically enforced by Visual Studio. To fix this you can add an .editorconfig file with the following rules:

dotnet_naming_rule.instance_fields_should_be_camel_case.severity = suggestion
dotnet_naming_rule.instance_fields_should_be_camel_case.symbols = instance_fields
dotnet_naming_rule.instance_fields_should_be_camel_case.style = instance_field_style

dotnet_naming_symbols.instance_fields.applicable_kinds = field

dotnet_naming_style.instance_field_style.capitalization = camel_case
dotnet_naming_style.instance_field_style.required_prefix = _

ASP.NET ReportViewer - ReportViewerWebControl.axd was not found

After migrating an old ASP.NET application to a new IIS server, the application started to throw errors. The app was using a ReportViewer control to show some SQL Server Reporting Services reports. But when I tried to load a specific report, the report viewer kept showing a spinning wheel. When I took a look at the console, I noticed 404 errors for ReportViewerWebControl.axd. The problem was that the original application was hosted on IIS 6. To configure the ReportViewer in IIS 6 an entry for the HTTP handler was added inside the system.web <httpHandlers> section. Starting from IIS 7.0 this section is no longer read. Instead we have to add the ReportViewerWebControl.axd handler to the system.webServer <handlers> section:
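The handler registration looks roughly like this (the assembly version and public key token depend on the ReportViewer release you deploy — the values below are from the 11.0 runtime and should be verified against your install):

```xml
<system.webServer>
  <handlers>
    <add name="ReportViewerWebControlHandler"
         preCondition="integratedMode"
         verb="*"
         path="Reserved.ReportViewerWebControl.axd"
         type="Microsoft.Reporting.WebForms.HttpHandler, Microsoft.ReportViewer.WebForms, Version=11.0.0.0, Culture=neutral, PublicKeyToken=89845dcd8080cc91" />
  </handlers>
</system.webServer>
```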

SignalR–Automatic reconnect

With the release of .NET Core 3.0, Microsoft updated SignalR as well. One of the new features is the re-introduction of Automatic Reconnect. Automatic reconnect was part of the original SignalR for ASP.NET, but wasn’t available (until recently) in ASP.NET Core. The JavaScript client for SignalR can be configured to automatically reconnect using the withAutomaticReconnect method on HubConnectionBuilder. By default, the client won’t automatically reconnect. Without any parameters, withAutomaticReconnect() configures the client to wait 0, 2, 10, and 30 seconds respectively before trying each reconnect attempt, stopping after four failed attempts. You can configure the number of reconnect attempts before disconnecting and change the reconnect timing by passing an array of numbers representing the delay in milliseconds to wait before starting each reconnect attempt: Watch the video here:
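In code this looks like the following (the hub URL is hypothetical):

```javascript
// Default back-off: retries after 0, 2, 10 and 30 seconds, then gives up
const connection = new signalR.HubConnectionBuilder()
    .withUrl("/chathub")
    .withAutomaticReconnect()
    .build();

// Custom back-off: delays in milliseconds before each reconnect attempt
const customConnection = new signalR.HubConnectionBuilder()
    .withUrl("/chathub")
    .withAutomaticReconnect([0, 2000, 10000, 30000])
    .build();
```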

Kubernetes Learning Path–50 days from zero to hero

Microsoft released a learning path for Kubernetes: Kubernetes is taking the app development world by storm. By 2022, more than 75% of global organizations will be running containerized applications in production.* Kubernetes is shaping the future of app development and management—and Microsoft wants to help you get started with it today. This guide is meant for anyone interested in learning more about Kubernetes. In just 50 days, you’ll understand the basics of Kubernetes and get hands-on experience with its various components, capabilities, and solutions, including Azure Kubernetes Service. Go from zero to hero with Kubernetes to set your company up for future app development success. Go check it out here: https://azure.microsoft.com/en-us/resources/kubernetes-learning-path/

GraphQL - A love story

In case you’ve missed my presentation at VISUG XL, you can find the slides here: GraphQL - A love story from bwullems Demos are available at GitHub: https://github.com/wullemsb/visugxlgraphqldemos

ProxyKit–Bad gateway error

We are using ProxyKit as an API proxy for our Geo services. But although it worked perfectly on our development environment and when talking to our GeoServer , it failed with the following exception when we tried to access an ASP.NET Core API through the same proxy: 502 Bad Gateway This turned out to be a bug in ProxyKit version 2.2.1. ProxyKit sets an empty content when executing a GET request. ASP.NET Core doesn’t like this as it expects that GET requests don’t have a body. As a workaround I explicitly set the content to null when executing a GET request: Remark: A new version 2.2.2 was released last week to fix the issue: https://github.com/damianh/ProxyKit/releases/tag/v2.2.2 .
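The workaround code isn’t visible in this excerpt; using ProxyKit’s ForwardTo/Send pipeline it could look something like this (the upstream address is hypothetical):

```csharp
app.RunProxy(async context =>
{
    var forwardContext = context.ForwardTo("http://upstream-api:5000");

    // Workaround for ProxyKit 2.2.1: don't forward an (empty) body on
    // GET requests - ASP.NET Core upstreams reject GETs with a body
    if (HttpMethods.IsGet(context.Request.Method))
    {
        forwardContext.UpstreamRequest.Content = null;
    }

    return await forwardContext.Send();
});
```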

ASP.NET SessionState error: Provider must implement the class 'System.Web.SessionState.SessionStateStoreProviderBase'

After switching to the async SQL Session State Provider one of our projects failed to run. Instead we got a yellow-screen-of-death with the following message: Provider must implement the class 'System.Web.SessionState.SessionStateStoreProviderBase' I had to take one extra manual step to get the async provider up and running. I had to replace the default session state module by an async alternative:
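The module swap happens in web.config, roughly like this (the async module ships in the Microsoft.AspNet.SessionState.SessionStateModule NuGet package — verify the exact version and token attributes against the package you installed):

```xml
<system.webServer>
  <modules>
    <!-- Replace the built-in session state module by the async one -->
    <remove name="Session" />
    <add name="Session"
         preCondition="integratedMode"
         type="Microsoft.AspNet.SessionState.SessionStateModuleAsync, Microsoft.AspNet.SessionState.SessionStateModule, Version=1.1.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35" />
  </modules>
</system.webServer>
```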

Postman–Working with data files

Instead of doing a manual cleanup of my local RabbitMQ cluster I decided to use the Management HTTP API and script the whole process. I decided to use Postman and the collection runner’s data file support. Let’s walk through the steps:

I started by creating a new Collection inside Postman.
Inside this collection I started adding DELETE requests for different objects (exchanges, queues, …) inside RabbitMQ.
I used variables (recognizable by the double mustaches {{variableName}}) inside the URLs. E.g. http://user:pw@localhost:15672/api/exchanges/%2F/{{exchange}}
These variables should be replaced by values from a JSON data file. Create a JSON file containing a list of values. Unfortunately it is not possible to add the data file to your collection, so save it anywhere you want.

Here is what the data file looked like for the above request: Let’s now try to use the collection runner. Click on the Runner button at the top:
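A hypothetical data file for the {{exchange}} variable above — an array of objects whose property names match the variable names used in the requests (the exchange names are made up):

```json
[
  { "exchange": "orders" },
  { "exchange": "invoices" },
  { "exchange": "notifications" }
]
```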

RabbitMQ - Management HTTP API

One nice extra you get when installing the RabbitMQ management plugin is a corresponding HTTP API that allows you to do everything that is possible from the management portal through the API. Remark: API documentation here: https://rawcdn.githack.com/rabbitmq/rabbitmq-management/v3.8.1/priv/www/api/index.html There is only one thing I had to figure out… You can query a specific vhost inside RabbitMQ by adding a vhost segment inside the API path. An example: /api/queues/vhost/name The problem was I had to target the default vhost, which is named “/” by default. Trying to use that “/” inside the API didn’t work and resulted in an object not found error: /api/queues///name The correct way was to add some URL encoding magic: /api/queues/%2F/name

IdentityModel 4 - Request custom grant

While updating my training material about OIDC I noticed that the documentation of IdentityServer didn’t reflect the latest changes in the IdentityModel library. The example found in the documentation was still using the old TokenClient together with a RequestCustomGrantAsync() method: This method no longer exists in version 4 of IdentityModel. Instead you need to use the RequestTokenAsync() extension method on the HttpClient:
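The updated IdentityModel 4 call looks roughly like this (the endpoint address, client id/secret, grant name and parameters are placeholders):

```csharp
using IdentityModel.Client;

var client = new HttpClient();

var response = await client.RequestTokenAsync(new TokenRequest
{
    Address = "https://demo.identityserver.io/connect/token",
    GrantType = "custom_credential",

    ClientId = "client",
    ClientSecret = "secret",

    Parameters =
    {
        { "custom_parameter", "custom value" },
        { "scope", "api1" }
    }
});
```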

ADFS error - The audience restriction was not valid because the specified audience identifier is not present in the acceptable identifiers list of this Federation Service.

After adding a claims provider trust in ADFS, we got the following error message when trying to use the configured 3rd-party IP-STS. The audience restriction was not valid because the specified audience identifier is not present in the acceptable identifiers list of this Federation Service. User Action See the exception details for the audience identifier that failed validation. If the audience identifier identifies this Federation Service, add the audience identifier to the acceptable identifiers list by using Windows PowerShell for AD FS.  Note that the audience identifier is used to verify whether the token was sent to this Federation Service. If you think that the audience identifier does not identify your Federation Service, adding it to the acceptable identifiers list may open a security vulnerability in your system. Exception details: Microsoft.IdentityServer.AuthenticationFailedException: ID1038: The AudienceRestrictionCondition was not valid because the …

Using a scoped service inside a HostedService

When trying to resolve a scoped dependency inside a HostedService, the runtime returned the following error message: System.InvalidOperationException: Cannot consume scoped service ‘IScoped’ from singleton ‘Microsoft.Extensions.Hosting.IHostedService’. The problem is that the IHostedService is a singleton and is created outside a dependency injection scope. Trying to inject any scoped service (e.g. an EF Core DbContext) will result in the error message above. To solve this problem you have to create a dependency injection scope using the IServiceScopeFactory. Within this scope you can use the scoped services:
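The code is missing from this excerpt; a sketch of the pattern (the IScoped service and its DoWorkAsync method are hypothetical):

```csharp
public class Worker : BackgroundService
{
    private readonly IServiceScopeFactory _scopeFactory;

    public Worker(IServiceScopeFactory scopeFactory)
    {
        _scopeFactory = scopeFactory;
    }

    protected override async Task ExecuteAsync(CancellationToken stoppingToken)
    {
        // Create a DI scope; scoped services resolved from it are valid
        // for the duration of the scope instead of failing at resolve time
        using (var scope = _scopeFactory.CreateScope())
        {
            var scoped = scope.ServiceProvider.GetRequiredService<IScoped>();
            await scoped.DoWorkAsync(stoppingToken);
        }
    }
}
```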

IdentityServer 4 - CORS

If an endpoint is called via Ajax calls from JavaScript-based clients, CORS configuration is required. This can be done by setting the AllowedCorsOrigins collection on the client configuration. IdentityServer will consult these values to allow cross-origin calls from the origins. Remark: Be sure to use an origin (not a URL) when configuring CORS. For example: https://foo:123/ is a URL, whereas https://foo:123 is an origin.
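A minimal client configuration sketch (client id and grant type are made up for illustration):

```csharp
new Client
{
    ClientId = "js-client",
    AllowedGrantTypes = GrantTypes.Code,

    // Origins only: scheme + host + port, no path, no trailing slash
    AllowedCorsOrigins = { "https://foo:123" }
};
```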

MassTransit–Batch Message Consumption

A lesser known feature inside MassTransit is the support for batch messages. This can be a really nice feature if you want to process a batch of high-volume smaller messages in a single atomic consume. How does this work? MassTransit will combine multiple messages into a single consume by specifying a window, such as a message count (batch size), a time period, or a combination of both. There are 2 configurable limits: Size: A limit specifying the maximum number of messages which can fit into a single batch will trigger once that many messages are ready to be consumed. The batch size must be less than or equal to any prefetch counts or concurrent message delivery limits in order to reach the size limit. Time: A limit specifying how long to wait for additional messages from the time when the first message is ready, after which the messages ready within that time are delivered as a single batch. The time limit should be well within the lock time of a message, …
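A batch consumer is declared against Batch<T> instead of T; a sketch (the OrderSubmitted message type is hypothetical):

```csharp
public class OrderAuditConsumer : IConsumer<Batch<OrderSubmitted>>
{
    public Task Consume(ConsumeContext<Batch<OrderSubmitted>> context)
    {
        // All messages in the batch are handled in a single consume call
        for (var i = 0; i < context.Message.Length; i++)
        {
            var message = context.Message[i].Message;
            // ... process each OrderSubmitted ...
        }

        return Task.CompletedTask;
    }
}
```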

Azure DevOps - Kanban board limits

Got a question from a customer about the Kanban board in Azure DevOps Server: I noticed that on the Kanban board on the first and last column only a limited number of work items are shown. Is this something that can be configured? Quick answer: no . To limit the number of items on the board, the first and last column of your kanban board will only show 20 work items. To see more work items you need to use the Show more items link in the bottom.

ASP.NET Core 3.0 - Enable Authentication

Quick tip for anyone using ASP.NET Core 3.0 (especially when you did an upgrade from ASP.NET Core 2.x); if you want to enable authentication don’t forget to add the correct middleware. You need both UseAuthentication and UseAuthorization: In earlier versions of ASP.NET Core, authorization support was provided via the [Authorize] attribute. Authorization middleware wasn't available. In ASP.NET Core 3.0, authorization middleware is required (!). Therefore the ASP.NET Core Authorization Middleware (UseAuthorization) should be placed immediately after UseAuthentication. Add the UseAuthorization and UseAuthentication methods AFTER the UseRouting() but BEFORE the UseEndpoints(): A DefaultPolicy is initially configured to require authentication, so no additional configuration is required.
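The middleware ordering described above, as a sketch (MapControllers is just one example of an endpoint registration):

```csharp
public void Configure(IApplicationBuilder app)
{
    app.UseRouting();

    // Order matters: authentication first, then authorization,
    // both after UseRouting and before UseEndpoints
    app.UseAuthentication();
    app.UseAuthorization();

    app.UseEndpoints(endpoints =>
    {
        endpoints.MapControllers();
    });
}
```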

Azure DevOps NuGet error

When trying to restore a NuGet package inside an Azure DevOps build it failed with the following error message:

2019-11-13T09:17:59.8686867Z ##[section]Starting: NuGet restore
2019-11-13T09:17:59.8813825Z ==============================================================================
2019-11-13T09:17:59.8813825Z Task         : NuGet Installer
2019-11-13T09:17:59.8813825Z Description  : Installs or restores missing NuGet packages
2019-11-13T09:17:59.8813825Z Version      : 0.2.31
2019-11-13T09:17:59.8813825Z Author       : Microsoft Corporation
2019-11-13T09:17:59.8813825Z Help         : [More Information](https://go.microsoft.com/fwlink/?LinkID=613747)
2019-11-13T09:17:59.8813825Z ==============================================================================
2019-11-13T09:18:01.6050815Z [command]C:\Windows\system32\chcp.com 65001
2019-11-13T09:18:01.6050815Z Active code page: 65001
2019-11-13T09:18:01.6070347Z Detected NuGet version 3.3.0.212 / 3.3.0
2019-11-13…

ElasticSearch–Decrease the monitoring data

Quick tip to prevent the monitoring index from getting out of control: To decrease the amount of data of the ElasticSearch monitoring you can change the _cluster settings More information: https://www.elastic.co/guide/en/elasticsearch/reference/current/monitoring-settings.html
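A sketch of such a settings change — collect monitoring data less often than the 10-second default (the host/port and the chosen interval are assumptions; setting names per the Elastic monitoring-settings docs linked above):

```shell
curl -X PUT "localhost:9200/_cluster/settings" -H 'Content-Type: application/json' -d'
{
  "persistent": {
    "xpack.monitoring.collection.interval": "30s"
  }
}'
```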

Azure DevOps–How to change the default iteration?

After upgrading to Azure DevOps 2019, I got a question from a customer asking how to change the default iteration used when creating new work items. By default the current iteration is used. If you are creating user stories, this is probably not what you want, as these user stories should first be analyzed, groomed, … before they can be planned inside a specific iteration/sprint. Fortunately this is something that can be changed easily at the team level:

Click on the Show Team Profile icon on the Kanban board
Click on Team settings
Go to Iterations and Areas
Click on Iterations

Now you can change the default iteration however you want:

Fork–A fast and friendly GIT client

Tooling is important and great tools can make a difference. Especially when you are using something as rich and complex as Git can be. Last week I discovered a new Git client: Fork . I was especially impressed by the interactive rebase functionality. Could be a lifesaver if you are afraid to use rebase … Remarks: My current favorite is GitKraken in case you want to know. But I’m giving Fork a try and so should you…

Orleans–Fire and forget

The basic building block in Orleans is a ‘Grain’. A grain is an atomic unit of isolation, distribution, and persistence. A grain perfectly matches the OO principles as it encapsulates state of an entity and encodes its behavior in the code logic. The ‘normal’ way to communicate between Grains is through message passing. This is greatly simplified thanks to the async/await programming model in C# combined with some code generation voodoo. It almost ‘feels’ like normal method invocations. A consequence of the async programming model is that you always should ‘await’ the results. However there are some situations where you don’t care about the results and want to call another grain in a ‘fire-and-forget’ mode. To achieve this you can use the InvokeOneWay() extension method on a GrainReference:
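The code for this is not in the excerpt; as I understand the Orleans GrainExtensions, the call looks something like the sketch below — the grain interface and method are hypothetical, and the exact InvokeOneWay signature should be verified against your Orleans version:

```csharp
// Fire-and-forget: the call returns immediately without awaiting a result
var notifier = grainFactory.GetGrain<INotifierGrain>(0);
notifier.InvokeOneWay(g => g.Notify("something happened"));
```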

Testing–Faking a delay in SQL Server

To test the timeout settings of my application, I needed a query that exceeds a certain time. This was not that hard to achieve in SQL Server thanks to the WAITFOR statement. This statement blocks the execution of a batch, stored procedure, or transaction until a specified time or time interval is reached, or a specified statement modifies or returns at least one row.

----Delay for 10 seconds
WAITFOR DELAY '000:00:10'
SELECT 'Hello finally!'
GO

More information: https://docs.microsoft.com/en-us/sql/t-sql/language-elements/waitfor-transact-sql?view=sql-server-ver15

MassTransit - A convention for the message type {typename} was not found

While preparing a presentation I created a simple demo app where I wanted to publish a message to a queue. My first (naïve) attempt looked like this: This failed with the following error message: A convention for the message type {typename} was not found What was my mistake? In MassTransit you have to make a difference between sending a message and publishing a message. Whereas publishing a message doesn’t require any extra configuration, sending a message does. Why? Because when sending a message you directly target a queue. MassTransit has to know which queue the message should be sent to. We have two options to make this work: 1) You specify the target queue using a convention: 2) You use the ISendEndpointProvider to specify a queue:
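The two options could look like the following sketch (SubmitOrder and the queue address are made up; the address format depends on your MassTransit version and transport):

```csharp
// Option 1: register a convention so MassTransit knows the target queue
EndpointConvention.Map<SubmitOrder>(new Uri("rabbitmq://localhost/submit-order"));
await bus.Send(new SubmitOrder { OrderId = orderId });

// Option 2: resolve the send endpoint explicitly via ISendEndpointProvider
var endpoint = await sendEndpointProvider.GetSendEndpoint(
    new Uri("rabbitmq://localhost/submit-order"));
await endpoint.Send(new SubmitOrder { OrderId = orderId });
```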

Reconfigure the ElasticSearch Windows Service

If you are running ElasticSearch on a Windows cluster you are probably using a Windows Service to run it in the background and make it start automatically at boot time. This Windows Service feature is available out-of-the-box if you use the .zip package . Some settings of ElasticSearch are managed as command-line arguments (e.g. the min and max memory usage). To manipulate these settings when using the Windows Service, you have to go through the ElasticSearch Windows Service Manager:

Go to the ElasticSearch installation folder (e.g. c:\elasticsearch-7.4.0\)
Browse to the bin folder (where you should find an elasticsearch-service.bat file)
Run the following command: elasticsearch-service.bat manager

This will open a GUI where you can manage multiple ElasticSearch settings. Remark: Most changes will require a restart of the service.

ASP.NET Core gRPC–Unary RPC vs Streams

When creating your gRPC implementation you have to be aware of the difference between Unary RPC and Streams. Unary RPC This is the simplest type of RPC, where the client sends a single request and gets back a single response. This is the simplest approach and similar to what we know when using WCF. Streams With streaming we have to make a difference between server streaming, client streaming and bidirectional streaming. A server-streaming RPC is most similar to the Unary RPC, the only difference is that the server sends back a stream of responses instead of a single response after getting the client’s request message. After sending back all its responses, the server’s status details (status code and optional status message) and optional trailing metadata are sent back to complete on the server side. The client completes once it has all the server’s responses. A client-streaming RPC turns things around, the client sends a stream of requests to the server instead of a …
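The four shapes are distinguished in the .proto service definition by the `stream` keyword; a sketch (service and message names are made up):

```protobuf
service Greeter {
  // Unary: single request, single response
  rpc SayHello (HelloRequest) returns (HelloReply);

  // Server streaming: single request, stream of responses
  rpc SayHellos (HelloRequest) returns (stream HelloReply);

  // Client streaming: stream of requests, single response
  rpc CountHellos (stream HelloRequest) returns (HelloReply);

  // Bidirectional streaming
  rpc Chat (stream HelloRequest) returns (stream HelloReply);
}
```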

Why you should not fear rebase…

Last week our junior developers shared their experiences and lessons learned during a ‘dev case’. One thing they all mentioned is that doing a ‘rebase’ in Git is, in their opinion, a bad idea. Although rebasing is a powerful tool, and you have to apply it carefully, there is no reason to avoid it at all cost. Some things to keep in mind before you rebase:

Never rebase commits that have been pushed to a remote origin and shared with others.
Use rebase to catch up with the commits on another branch as you work with a local feature branch.
You can't update a published branch with a push after you've rebased the local branch. You'll need to force push the branch to rewrite the history of the remote branch to match the local history. Never force push branches in use by others.

As a general rule I would only use rebase on local changes that haven't been shared with others. Once you’ve shared your changes, switch to merge instead. To use rebase, …
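The guidelines above can be sketched in commands (branch names assumed; force-pushing only applies to a branch nobody else is working on):

```shell
# Catch up a local feature branch with the latest commits on main
git checkout feature
git rebase main

# Only if the branch was already pushed - and is yours alone -
# rewrite the remote history to match the rebased local history:
git push --force-with-lease origin feature
```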

ASP.NET Core 3.0 - ConfigureContainer magic

Last week I blogged about the changes I had to make to let Autofac work with ASP.NET Core 3.0. Inside my Startup.cs file I had to use the .ConfigureContainer() method: But where is this method coming from? Let’s dig into the ASP.NET Core source code to find out… The source of all magic is the StartupLoader class: https://github.com/aspnet/Hosting/blob/rel/1.1.0/src/Microsoft.AspNetCore.Hosting/Internal/StartupLoader.cs . This class uses reflection to find the following 3 methods in the Startup.cs file: a Configure() method, a ConfigureServices() method and a ConfigureContainer() method. If you want environment-specific setup you can put the environment name after the Configure part, like ConfigureDevelopment , ConfigureDevelopmentServices , and ConfigureDevelopmentContainer . If a method isn’t present with a name matching the environment it’ll fall back to the default. If a ConfigureContainer() method is found, the IServiceProviderFactory<TContainerBuilder> …
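A sketch of a Startup class using these conventions with Autofac (the registered service types are hypothetical):

```csharp
public class Startup
{
    // Found by StartupLoader via reflection
    public void ConfigureServices(IServiceCollection services) { /* ... */ }

    // Only invoked when an IServiceProviderFactory<ContainerBuilder>
    // (e.g. Autofac's) is registered with the host
    public void ConfigureContainer(ContainerBuilder builder)
    {
        builder.RegisterType<MyService>().As<IMyService>();
    }

    // Environment-specific variant: used instead of ConfigureContainer
    // when the environment name is 'Development'
    public void ConfigureDevelopmentContainer(ContainerBuilder builder)
    {
        builder.RegisterType<FakeService>().As<IMyService>();
    }

    public void Configure(IApplicationBuilder app) { /* ... */ }
}
```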

Visual Studio 2019–Code Cleanup

Did you notice the small broom icon at the bottom of your code editor window in Visual Studio? This is the Code Cleanup button. It allows you to apply code styles from an EditorConfig file or from the Code Style options page. (The .editorconfig takes precedence.) To configure the exact Code Cleanup actions, you can click the expander arrow next to the Code Cleanup broom icon and then choose Configure Code Cleanup . After you've configured code cleanup, you can either click on the broom icon or press Ctrl+K, Ctrl+E to run code cleanup.

Visual Studio–Generate an EditorConfig file

I talked about the .editorconfig file a long time ago as a way to standardize code style conventions in your team. These conventions allow Visual Studio to offer automatic style and format fixes to clean up your document. But did you know that in Visual Studio 2019, you can generate an .editorconfig file dynamically based on the style and formatting found in your existing codebase? Go to Tools –> Options –> IntelliCode . Change the EditorConfig inference setting to Enabled . Right-click on your solution in the Solution Explorer and choose Add –> New EditorConfig (IntelliCode) . After you add the file in this way, IntelliCode automatically populates it with code style conventions it infers from your codebase.

Azure DevOps - Publish code as wiki–Advanced features

One of the features I really like in Azure DevOps is to publish your code as a wiki . This allows you to choose a Git repository, branch and folder that contain Markdown files. The Markdown files are then published as pages inside the wiki. Unfortunately, when we take a look at the table of contents (TOC), we see all the Markdown files listed in alphabetical order. Every subfolder is also shown as a wiki page, even when it doesn’t contain any Markdown files. This is probably not what we want. Let’s improve the TOC… Change the page order: to change the order of the files in the TOC, you can add a .order file to the repository. Each .order file defines the sequence of pages contained within a folder. The root .order file specifies the sequence of pages defined at the root level, and for each folder, a .order file defines the sequence of sub-pages added to a parent page. Inside the .order file you specify each file name without the .md extension. An example: README
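For instance, a root `.order` file for a wiki with a few top-level pages could look like this (the page names below are hypothetical; each line is a Markdown file name without the `.md` extension, listed in the order it should appear in the TOC):

```
README
getting-started
architecture
deployment
```

Pages not listed in the `.order` file are appended after the ordered ones.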

.NET Core 3.0 - HTTP/2 support

You may think this is a bad title for this blog post? HTTP/2 support was already available before .NET Core 3.0. So why a blog post with the release of .NET Core 3.0? The reason is that although it was possible to host an ASP.NET Core 2.0 application behind an HTTP/2 endpoint, the HttpClient class didn’t have support for it! There are two ways to enable HTTP/2: Enable HTTP/2 at the instance level To enable HTTP/2 support at the instance level, you can set the DefaultRequestVersion when creating the HttpClient instance. For example, the following code creates an HttpClient instance using HTTP/2 as its default protocol. Of course, even better is to use the HttpClientFactory to create and configure the HttpClient. Enable HTTP/2 at the request level It is also possible to create a single request using the HTTP/2 protocol. Remark: Remember that HTTP/2 needs to be supported by both the server and the client. If either party doesn't support HTTP/2, both
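A sketch of both options, using only the BCL types involved (the target URL is illustrative):

```csharp
using System;
using System.Net.Http;

class Http2Demo
{
    static void Main()
    {
        // Instance level: every request made through this client defaults to HTTP/2.
        var client = new HttpClient
        {
            DefaultRequestVersion = new Version(2, 0)
        };

        // Request level: a single request can also opt in to HTTP/2.
        var request = new HttpRequestMessage(HttpMethod.Get, "https://example.com")
        {
            Version = new Version(2, 0)
        };

        Console.WriteLine(client.DefaultRequestVersion);
        Console.WriteLine(request.Version);
    }
}
```

With HttpClientFactory, the same `DefaultRequestVersion` assignment would go inside the configuration delegate passed to `AddHttpClient`.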

ASP.NET Core 3.0 - Autofac error

After upgrading my ASP.NET Core application to 3.0, I got an error in Startup.cs when I switched to the new HostBuilder: System.NotSupportedException: 'ConfigureServices returning an System.IServiceProvider isn't supported.' Let’s take a look at my ConfigureServices() method. Inside my ConfigureServices() I’m using the Autofac ContainerBuilder to build up my container and return an AutofacServiceProvider. Therefore I had updated the ConfigureServices() method signature to return an IServiceProvider. This worked perfectly in ASP.NET Core 2.x but is no longer allowed when using the new HostBuilder in ASP.NET Core 3.0. Time to take a look at the great Autofac documentation for a solution: https://autofac.readthedocs.io/en/latest/integration/aspnetcore.html#asp-net-core-3-0-and-generic-hosting . OK, so to fix this we have to change our ConfigureServices() method to no longer return an IServiceProvider. Then we have to update the Program.cs to register our AutofacSe
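A sketch of what the Program.cs side of the fix looks like, following the Autofac docs linked above (assumes the Autofac.Extensions.DependencyInjection NuGet package and a conventional `Startup` class):

```csharp
using Autofac.Extensions.DependencyInjection;
using Microsoft.AspNetCore.Hosting;
using Microsoft.Extensions.Hosting;

public class Program
{
    public static void Main(string[] args) =>
        CreateHostBuilder(args).Build().Run();

    public static IHostBuilder CreateHostBuilder(string[] args) =>
        Host.CreateDefaultBuilder(args)
            // Register Autofac as the service provider factory, instead of
            // returning an IServiceProvider from Startup.ConfigureServices.
            .UseServiceProviderFactory(new AutofacServiceProviderFactory())
            .ConfigureWebHostDefaults(webBuilder =>
                webBuilder.UseStartup<Startup>());
}
```

With this in place, `ConfigureServices` returns void again and Autofac-specific registrations move to a `ConfigureContainer(ContainerBuilder builder)` method on the Startup class.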

ASP.NET Core 3.0–Swashbuckle error

Like probably most people in the .NET ecosystem, I’m using Swashbuckle to generate my OpenAPI documentation. (Anyone using NSwag instead?) After upgrading to ASP.NET Core 3.0 (and switching to the 5.0-rc4 prerelease version of Swashbuckle), the following code no longer compiled. I had to replace the Info class, which could no longer be found, with the OpenApiInfo class. This OpenApiInfo class is now part of the Microsoft.OpenApi.Models namespace.
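A sketch of what the registration looks like after the rename, assuming Swashbuckle.AspNetCore 5.x (the title and version values are illustrative):

```csharp
using Microsoft.Extensions.DependencyInjection;
using Microsoft.OpenApi.Models;

public static class SwaggerSetup
{
    public static void AddSwagger(IServiceCollection services)
    {
        services.AddSwaggerGen(c =>
        {
            // OpenApiInfo (Microsoft.OpenApi.Models) replaces the old
            // Info class from Swashbuckle.AspNetCore.Swagger.
            c.SwaggerDoc("v1", new OpenApiInfo
            {
                Title = "My API",
                Version = "v1"
            });
        });
    }
}
```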

Setup the Kubernetes dashboard on Docker for Windows

A useful tool when you are new to Kubernetes is the Kubernetes Dashboard . Unfortunately the Kubernetes Dashboard is not included out-of-the-box with Docker for Windows; however, it can easily be set up for your local cluster. To set up the dashboard, use the following command: kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v1.10.1/src/deploy/recommended/kubernetes-dashboard.yaml The output should look like this: secret/kubernetes-dashboard-certs created serviceaccount/kubernetes-dashboard created role.rbac.authorization.k8s.io/kubernetes-dashboard-minimal created rolebinding.rbac.authorization.k8s.io/kubernetes-dashboard-minimal created deployment.apps/kubernetes-dashboard created service/kubernetes-dashboard created To view the Dashboard in your web browser run: kubectl proxy And navigate to your Kubernetes Dashboard at: http://localhost:8001/api/v1/namespaces

Switch between Kubernetes contexts

Lost some time yesterday figuring out how to switch between different Kubernetes environments. So a quick post, just as a reminder for myself: You can view contexts using the kubectl config command:

kubectl config get-contexts
CURRENT   NAME                 CLUSTER          NAMESPACE
*         docker-desktop       docker-desktop   docker-desktop
          docker-for-desktop   docker-desktop   docker-desktop

You can set the context by specifying the context name:

kubectl config use-context docker-for-desktop

ElasticSearch–Performance testing

When trying to load test our ElasticSearch cluster, we noticed big variations in results that we couldn’t explain based on the changes we made. It turned out that our tests were not executed in comparable situations, as we didn’t clear the ElasticSearch cache. So before running our tests, we cleared the cache using the following command: POST /<myindexname>/_cache/clear?request=true If you want to view what’s inside the Elastic node cache, you can use the following command: GET /_cat/nodes?v&h=id,name,queryCacheMemory,queryCacheEvictions,requestCacheMemory,requestCacheHitCount,requestCacheMissCount,flushTotal,flushTotalTime

GraphQL Rules

As with every technology you give your team, everyone has different opinions and conventions. A style guide becomes an indispensable part of your development organisation; otherwise the ‘tabs vs spaces’ discussion can go on forever. This also applies to GraphQL. So to help you get started, take a look at https://graphql-rules.com/ . The rules and recommendations mentioned there are the result of three years of experience using GraphQL on both the frontend and backend sides. They also include the recommendations and experience of Caleb Meredith (PostGraphQL author, Facebook ex-employee) and Shopify engineers. This guide is intended to be open source and could change in the future: the rules may be improved, changed, or even become outdated. What is written here is a culmination of time and pain suffered from the use of horrible GraphQL schemas.

Cannot create or delete the Performance Category 'C:\Windows\TEMP\tmp3DA0.tmp' because access is denied.

After migrating some .NET applications from an old server to a brand new Windows Server 2019 instance, we stumbled over a range of errors. Yesterday we got one step closer to a solution but we are not there yet. The application still doesn’t work but now we get the following error message: Server Error in '/AppServices' Application. Cannot create or delete the Performance Category 'C:\Windows\TEMP\tmp3DA0.tmp' because access is denied. Description: An unhandled exception occurred during the execution of the current web request. Please review the stack trace for more information about the error and where it originated in the code. Exception Details: System.UnauthorizedAccessException: Cannot create or delete the Performance Category 'C:\Windows\TEMP\tmp3DA0.tmp' because access is denied. ASP.NET is not authorized to access the requested resource. Consider granting access rights to the resource to the ASP.NET req