
Showing posts from January, 2018

ElasticSearch–Searching through arrays of objects

To better understand how to search through array data in your ElasticSearch JSON document, it is important to know how ElasticSearch stores arrays behind the scenes. Take a document that contains a ‘names’ array with a list of name properties (e.g. firstname, lastname, …). Lucene has no concept of inner objects, so when ElasticSearch indexes this document it flattens the object hierarchy into a simple list of field names and values. If you don’t want this behavior, you have to map the array as a nested object. Internally ElasticSearch maps these array objects as separate documents and does a child query. A sketch of the mapping, sample data and flattened result is shown below.
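The original mapping and sample data were not preserved; here is a minimal reconstruction (index and field names are illustrative, using typeless Elasticsearch syntax):

    PUT my_index
    {
      "mappings": {
        "properties": {
          "names": {
            "properties": {
              "firstname": { "type": "text" },
              "lastname":  { "type": "text" }
            }
          }
        }
      }
    }

    PUT my_index/_doc/1
    {
      "names": [
        { "firstname": "John", "lastname": "Doe" },
        { "firstname": "Jane", "lastname": "Smith" }
      ]
    }

Internally the document is stored as independent lists of values per field, so the link between firstname and lastname within one object is lost:

    {
      "names.firstname": [ "John", "Jane" ],
      "names.lastname":  [ "Doe", "Smith" ]
    }

As a result, a query for firstname "John" combined with lastname "Smith" would (incorrectly) match this document. Mapping the array as nested keeps each object separate:

    "names": {
      "type": "nested",
      "properties": {
        "firstname": { "type": "text" },
        "lastname":  { "type": "text" }
      }
    }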

Serilog–Debugging

Last week a colleague mentioned he had some problems with Serilog, a structured logging library for .NET. Most of the time it just worked fine, but he noticed that in some situations no log messages were written to a sink. We first thought it had something to do with misconfigured log levels, but that turned out to be a wrong assumption. In the end, we found the root cause after enabling the Serilog debug log (see the snippet below). After doing that, we discovered that in some cases the following message was logged: Maximum destructuring depth reached. Due to a recursive object in one of our log messages, the JSON serializer gives up and Serilog discards the message.
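The original snippet was not preserved; enabling the Serilog debug log comes down to turning on SelfLog at application startup, for example by routing it to the console:

    using System;
    using Serilog.Debugging;

    // Route Serilog's own diagnostic output to stderr so that problems
    // inside the logging pipeline (such as discarded messages) become visible.
    SelfLog.Enable(Console.Error);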

To REST or not to REST that is the question

There is a lot of discussion going on about REST, how ‘RESTful’ you should be and how everyone is misinterpreting Roy Fielding’s dissertation (yes, even you…). A good read for the weekend is REST is the new SOAP: https://medium.freecodecamp.org/rest-is-the-new-soap-97ff6c09896d And don’t forget to also read the response by Phil Sturgeon to the article above: https://philsturgeon.uk/api/2017/12/18/rest-confusion-explained/ Update (8/2/2018): Phil Sturgeon continued his explanation in another post: https://philsturgeon.uk/api/2018/01/20/rest-confusion-explained-further/

Decrease Docker build time by using .dockerignore

Similar to the .tfignore file I blogged about yesterday, a .dockerignore file exists for Docker. Why is this useful? To answer this question I have to explain a little bit how Docker works. When you are using Docker commands, there are in fact two components at work: a client and a daemon. The client only sends the commands to the daemon, whereas the daemon does all the ‘real’ work. One of the commands you probably used is docker build, which allows you to build an image from a Dockerfile. When you execute this command, the client will send all the files in the directory passed to the command to the daemon. For big projects, this can get very large, resulting in slow builds as you have to wait until the client is done sending all files to the daemon. Most of the time not all files are needed by docker build (e.g. previous bin and obj folders, your .git folder, …). To avoid sending all these files you can create a .dockerignore file in your root directory. This works like a .gitignore file: every file or folder matching one of its patterns is excluded from the build context sent to the daemon, as in the sample below.
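A minimal .dockerignore for a typical .NET project could look like this (patterns are illustrative):

    # Build output from previous local builds
    bin/
    obj/
    # Source control metadata
    .git/
    # IDE state and user-specific files
    .vs/
    *.user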

.tfignore file for .NET developers

I blogged earlier about the existence of the .tfignore file in Team Foundation Server source control, similar to the .gitignore file in Git. Still, there are a lot of developers who are unaware of the existence of this file. With the .tfignore file, you can configure which kinds of files are ignored by placing it in the folder where you want the rules to apply. The effects of the .tfignore file are recursive. However, you can create .tfignore files in sub-folders to override the effects of a .tfignore file in a parent folder. A good sample file to get started with is shown below. To automatically generate a .tfignore file: in the Pending Changes page, in the Excluded Changes section, select the Detected link. The Promote Candidate Changes dialog box appears. Select a file, open its context menu, and choose Ignore this local item, Ignore by extension, Ignore by file name, or Ignore by folder. Choose OK or Cancel to close the Promote Candidate Changes dialog box.
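The original sample file was not preserved; a typical starting point (patterns are illustrative) is:

    # Ignore build output folders
    bin
    obj
    # Ignore user-specific files
    *.user
    *.suo
    # Ignore any packages folder
    packages
    # Do not ignore .dll files in this folder nor in any of its sub-folders
    !*.dll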

TFS Build Error CS1617: Invalid option 'latest' for /langversion; must be ISO-1, ISO-2, Default or an integer in range 1 to 6

After setting the Language version to ‘C# latest minor version’ for my project, our TFS build started to fail with the following error message: Error CS1617: Invalid option 'latest' for /langversion; must be ISO-1, ISO-2, Default or an integer in range 1 to 6. While looking through the build logs I noticed that the build server was still using MSBuild v14 instead of v15. I first thought that my build agents were not up to date, so I triggered an upgrade: http://bartwullems.blogspot.be/2017/09/team-foundation-serverupgrade-your.html This made no difference; the build tasks that were used were not able to discover the newer version of MSBuild on the build server, as they didn’t use VSWhere.exe yet. In the end I decided to do an upgrade of TFS (a TFS 2017 instance) to TFS 2017 Update 3. After doing that, a new version of the Build task was available that correctly found MSBuild v15.0 and allowed me to use the new C# features. Finally!
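For reference, the Visual Studio setting corresponds to a LangVersion property in the project file; only the C# compiler shipped with MSBuild v15 understands the 'latest' value:

    <!-- Written by Project Properties -> Build -> Advanced -> Language version -->
    <PropertyGroup>
      <LangVersion>latest</LangVersion>
    </PropertyGroup>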

TFS Build Server: Error CS5001: Program does not contain a static 'Main' method suitable for an entry point

When trying to build a newly created project on our build server, it failed with the following error message: ##[error]CSC(0,0): Error CS5001: Program does not contain a static 'Main' method suitable for an entry point. I was using the new async Main functionality in C# 7.1, but for a reason I didn’t understand at first, the build server didn’t pick this up. Locally, however, everything was working as expected. The only difference I noticed was that I was building on my machine with a Debug configuration, whereas on the build server a Release configuration was used. And indeed, after changing my local settings to Release, my build started to fail. Lesson learned: you have to change the Language version for every configuration you are using…
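The explanation is that Visual Studio writes the LangVersion property into the configuration-specific property group of the .csproj, so setting it for Debug alone leaves Release on the old language version. A sketch, assuming AnyCPU:

    <PropertyGroup Condition=" '$(Configuration)|$(Platform)' == 'Debug|AnyCPU' ">
      <LangVersion>7.1</LangVersion>
    </PropertyGroup>
    <!-- This second group is what was missing for the Release build -->
    <PropertyGroup Condition=" '$(Configuration)|$(Platform)' == 'Release|AnyCPU' ">
      <LangVersion>7.1</LangVersion>
    </PropertyGroup>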

MassTransit–No queue is created

Yesterday, I was working on a new project where we planned to use MassTransit. After creating the RabbitMQ configuration and publishing a first message, I noticed that an Exchange was created in RabbitMQ but that it was not bound to a queue. As a result, the message was lost. The related code is sketched below. In the RabbitMQ administration the exchange appeared in the list of Exchanges, but when I clicked on it to view the details, I saw that it was not linked to any queue. As I found out, this is expected behavior in MassTransit. From StackOverflow: Publishing a message does not create any queues. Queues are created when receive endpoints are added to a bus. Until the receive endpoints are started, and their topology configured in RabbitMQ (the exchange-exchange-queue bindings), any messages published are not delivered because there are no bindings to any queues. Once the receive endpoints are started, those bindings exist and messages are delivered.
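The original code was not preserved; here is a minimal reconstruction of the setup (bus address, credentials, queue name and message type are illustrative), including the receive endpoint that was missing at first:

    using System;
    using System.Threading.Tasks;
    using MassTransit;

    public class OrderSubmitted
    {
        public Guid OrderId { get; set; }
    }

    public class Program
    {
        public static async Task Main()
        {
            var bus = Bus.Factory.CreateUsingRabbitMq(cfg =>
            {
                var host = cfg.Host(new Uri("rabbitmq://localhost"), h =>
                {
                    h.Username("guest");
                    h.Password("guest");
                });

                // Without a receive endpoint, publishing only creates the
                // exchange; nothing is bound to a queue and the message is lost.
                cfg.ReceiveEndpoint(host, "order-service", e =>
                {
                    e.Handler<OrderSubmitted>(ctx => Task.CompletedTask);
                });
            });

            await bus.StartAsync();
            await bus.Publish(new OrderSubmitted { OrderId = Guid.NewGuid() });
            await bus.StopAsync();
        }
    }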

Entity Framework Core: Log parameter values

Entity Framework Core provides a rich logging mechanism that allows you to see what’s going on behind the scenes. If you register EF Core in your ASP.NET Core application using the AddDbContext method, it will integrate automatically with the logging mechanism of ASP.NET Core. Otherwise you have a little bit of extra work and should register a LoggerFactory yourself (more information here: https://docs.microsoft.com/en-us/ef/core/miscellaneous/logging ). Unfortunately, after enabling it, I still couldn’t see the query parameters that were used by the EF queries. This is a security feature: as query parameters can contain sensitive information, they are excluded from the log messages by default. To include this information you have to explicitly enable it by calling EnableSensitiveDataLogging(), as shown below. From the documentation: Enables application data to be included in exception messages, logging, etc. This can include the values assigned to properties of your entity instances.
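A minimal sketch of where the call goes (the context and connection string are illustrative):

    using Microsoft.EntityFrameworkCore;

    public class OrderContext : DbContext
    {
        protected override void OnConfiguring(DbContextOptionsBuilder optionsBuilder)
        {
            optionsBuilder
                .UseSqlServer("Server=.;Database=Orders;Trusted_Connection=True;")
                // Opt in to logging parameter values; only do this in
                // development, as the values can contain sensitive data.
                .EnableSensitiveDataLogging();
        }
    }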

ElasticSearch debugging–Log request and response messages using NEST

It can sometimes be quite hard to figure out why a specific ElasticSearch query fails or doesn’t return the expected results. As I’m using NEST, the high-level ElasticSearch client, it is not always obvious what exactly is happening behind the scenes. Below is a code snippet that allows you to log the generated request and response messages. Remark: using this code has a performance impact, so only use it for development or testing purposes.
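A reconstruction of such a snippet using NEST’s connection settings (the endpoint URL is illustrative):

    using System;
    using System.Text;
    using Nest;

    var settings = new ConnectionSettings(new Uri("http://localhost:9200"))
        // Keep the request and response bytes around so they can be inspected.
        .DisableDirectStreaming()
        .OnRequestCompleted(callDetails =>
        {
            if (callDetails.RequestBodyInBytes != null)
                Console.WriteLine("Request: " + Encoding.UTF8.GetString(callDetails.RequestBodyInBytes));
            if (callDetails.ResponseBodyInBytes != null)
                Console.WriteLine("Response: " + Encoding.UTF8.GetString(callDetails.ResponseBodyInBytes));
        });

    var client = new ElasticClient(settings);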

Debugging a .NET Core project in VS 2017 - Unable to start program, An operation is not legal in the current state

When trying to debug a .NET Core application using Visual Studio 2017, attaching the debugger suddenly started to fail with the following error message: Unable to start program, An operation is not legal in the current state. Starting without a debugger attached worked, but attaching the debugger later on didn’t. Rebuilding my project, restarting Visual Studio, restarting my PC: none of it brought a solution to the table. This is a known issue; the workaround is to turn off JavaScript debugging for Chrome: go to Tools –> Options –> Debugging –> General and turn off the setting Enable JavaScript Debugging for ASP.NET (Chrome and IE). Remark: A fix will be provided in the Visual Studio 2017 15.6 release.

StructureMap: Using the IoC container inside the Registry

StructureMap has an easy-to-use Registry DSL that can be used in a Registry class. By using (one or more) Registry classes you can group all your IoC plumbing together. Yesterday I had a situation where I wanted to use an object that was already registered in the IoC container in the registration of another class. This is possible by passing a lambda that receives an IContext to the Use() method in the fluent API, as sketched below.
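The original registration was not preserved; here is a sketch with hypothetical types:

    using StructureMap;

    public interface IConnectionFactory { }
    public class ConnectionFactory : IConnectionFactory { }

    public interface IOrderService { }
    public class OrderService : IOrderService
    {
        public OrderService(IConnectionFactory factory) { }
    }

    public class AppRegistry : Registry
    {
        public AppRegistry()
        {
            For<IConnectionFactory>().Use<ConnectionFactory>().Singleton();

            // Resolve the already-registered IConnectionFactory through the
            // IContext while constructing OrderService.
            For<IOrderService>().Use("build OrderService",
                ctx => new OrderService(ctx.GetInstance<IConnectionFactory>()));
        }
    }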

F# with .NET Core in Visual Studio

Microsoft has supported F# on .NET Core from the beginning. Unfortunately, the same could not be said about Visual Studio integration. Before, if you wanted to use F# in combination with .NET Core, you had to use VS Code. With the release of Visual Studio 2017 15.5, this has finally changed and Visual Studio integration is available.

Postman API network

Yesterday I mentioned that reading other people’s code is a great way to extend your skills as a developer. A similar thing can be said about looking at other people’s (REST) APIs… A good starting point is the Postman API Network, a directory of public APIs maintained by the creators of the Postman app. Every API listed in this directory includes a complete Postman collection that can be consumed directly from the Postman app.

SmartHotel360 Demo Apps and Architecture

Looking at other people’s code is a great way to extend your skills as a developer and architect. One great set of examples are the reference apps that Microsoft created for the Connect event last year. Time to check out SmartHotel360 on GitHub. SmartHotel360 is a fictitious smart hospitality company showcasing the future of connected travel. Their vision is to provide:

- Intelligent, conversational, and personalized apps and experiences to guests
- Modern workplace experiences and smart conference rooms for business travelers
- Real-time customer and business insights for hotel managers & investors
- Unified analytics and package deal recommendations for campaign managers

The heart of this application is the cloud – best-in-class tools, data platform, and AI – and the code is built using a microservice-oriented architecture orchestrated with multiple Docker containers. There are various services developed in different languages, including .NET Core.

Taking screenshots of desktop applications using the Test and Feedback extension

Microsoft released a Test and Feedback extension for Chrome and Firefox as a replacement for the exploratory testing feature in Microsoft Test Manager. Earlier versions of this extension were rather limited in functionality, but with the most recent version you can:

- Take screenshots of any application on your desktop (not only inside the current browser)
- Take screen recordings and save them as videos

Getting started: go to the Visual Studio Marketplace ( https://marketplace.visualstudio.com/items?itemName=ms.vss-exploratorytesting-web ) and click on one of the links in the Supported Browser section. I’m using Chrome, so I click on Install. You will be taken to the Google Chrome web store in a new tab. Follow the steps from the Chrome web store to install the extension. After the installation is done, you see a new icon at the top right in your browser, next to the address bar. When you click on the icon, a popup is shown.

The Joel test

While reading the following post, https://www.7pace.com/blog/developer-burnout-how-to-prevent-boredom-blow-ups-and-other-bullshit , I noticed that the author mentioned the Joel test. From the article: Joel Spolsky, author of the popular blog Joel on Software, devised a simple 12-point rubric for determining how good (or terrible) your software development environment is to work in. According to the rubric, everyone should shoot for a 12 out of 12, but anything below a 10 is dangerous territory. Time to do the check yourself…

DNS changed when using a single HttpClient

I blogged before about the fact that it is better to re-use your HttpClient object instead of disposing and creating a new one for every request. There is however one caveat with a global HttpClient: DNS changes are not honored because the connections are re-used. A solution is to set a connection lease timeout through the ServicePointManager, so that a connection is periodically released and recreated, as shown below. This will release the connection after 60 seconds.
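The original snippet was not preserved; here is a reconstruction (the target URI is illustrative):

    using System;
    using System.Net;

    // Limit how long a connection to this endpoint may be kept alive,
    // forcing a new connection (and thus a fresh DNS lookup) afterwards.
    var servicePoint = ServicePointManager.FindServicePoint(new Uri("http://api.example.com"));
    servicePoint.ConnectionLeaseTimeout = 60 * 1000; // milliseconds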

The .NET API Browser

One of the nice additions to the .NET documentation is the API Browser. It gives you a quick and easy way to find a specific method or type throughout multiple versions of .NET (Core). It is not limited to the Base Class Library but also includes search options for specific SDKs like Xamarin, Azure, Roslyn, …

Dotnet test–Set test output

In a previous post I mentioned how to run and publish your Visual Studio unit tests on your build server. By using the dotnet test command with the --logger:trx option, test results were generated in a format that can be published to TFS. One problem we noticed when applying this solution was that the default naming convention sets a unique name (based on the test execution date) for every generated trx file. This sounds fine, but it meant that new trx files were generated for every run of our build pipeline. As a consequence, when trying to import the test results using the Publish Test Results task, test result files from previous builds were imported as well, resulting in an ever-increasing number of published test results. To solve this, I added an extra option to the dotnet test command to explicitly specify the filename: dotnet test --logger trx;logfilename=TEST.trx. In the Publish Test Results task, I updated the wildcard to only include TEST.trx files.
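For clarity, the full command and the matching wildcard (the semicolon usually needs quoting when the command is run in a shell; TEST.trx is the fixed name chosen above):

    dotnet test --logger "trx;logfilename=TEST.trx"

    # Wildcard used in the Publish Test Results task
    **/TEST.trx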