
Posts

Showing posts from 2025

Securing File Uploads Part 4: Malware Scanning with Windows AMSI

Welcome to the final post in our file upload security series. We've covered content type validation, file size validation, and file signature validation—each providing a crucial layer of defense. Today, we're implementing the final and most sophisticated protection: malware scanning using the Windows Antimalware Scan Interface (AMSI). The last line of defense Even after all our previous validation steps, a determined attacker could still upload malicious content: a legitimate PDF with embedded JavaScript exploits, a valid Office document containing malicious macros, an actual image file with embedded steganographic payloads, a genuine archive containing malware, or zero-day exploits targeting file processing libraries. These files pass all our previous validations because they are legitimate file formats—they're just weaponized. This is where malware scanning becomes essential. Why AMSI? Windows Antimalware Scan Interface (AMSI) is a powerful, oft...

Using Personal Access Tokens(PAT) to clone Azure DevOps Git Repositories

When working with Azure DevOps repositories, Personal Access Tokens (PATs) offer an alternative to traditional authentication. Although I would not recommend them for general usage, there are some scenarios where a PAT is a secure option, providing scoped permissions, expiration dates, and the ability to revoke access without changing your primary credentials. I had a situation where I needed to clone a set of Git repositories and run a scan on each repository. As the script would be running for a long time, I thought it would be better to create and use a PAT instead of my own account. Creating a Personal Access Token (PAT) Sign in to your Azure DevOps organization Click on your profile icon in the top right corner Select "Personal access tokens" Click "+ New Token" Configure your token: Give it a meaningful name Set an expiration date Select the organization Under "Scopes," ...
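Once the token exists, it can be used as the password part of an HTTPS clone URL. A minimal shell sketch of the clone loop described above—the organization, project, and repository names are placeholders, and the `build_clone_url` helper is purely illustrative:

```shell
# Hypothetical helper: build the HTTPS clone URL for a repo, embedding the PAT.
build_clone_url() {
  local org="$1" project="$2" repo="$3" pat="$4"
  # Azure DevOps accepts the PAT as the password part of the URL; any
  # non-empty username works ("pat" is used here by convention).
  echo "https://pat:${pat}@dev.azure.com/${org}/${project}/_git/${repo}"
}

# Dry run over a couple of placeholder repositories:
for repo in repo-one repo-two; do
  url=$(build_clone_url "myorg" "myproject" "$repo" "MY_PAT")
  echo "would clone: $url"
  # git clone "$url"   # enable for real use; keep the PAT out of source control
done
```

Keep in mind that a PAT embedded in a URL ends up in the local Git remote configuration, so this pattern is best limited to short-lived scripts with a short-lived token.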

Securing File Uploads Part 3: File Signature Validation

In our previous posts, we covered content type validation and file size validation as the first two layers of defense in our file upload security pipeline. Today, we're diving into what I consider the most critical validation step: file signature validation, also known as "magic number" validation. This is where we stop trusting what files claim to be and start verifying what they actually are. The Problem: files that lie Here's a sobering truth: both content type headers and file extensions are trivially easy to manipulate. An attacker can: Rename malicious.php to harmless.jpg Upload a PHP web shell with the content type set to image/jpeg Disguise an executable as a PDF by simply changing the extension Bypass your content type validation while still delivering malicious payloads Consider this scenario: Your application accepts image uploads for user profiles. You've implemented content type validation that only allows image/jpeg, image/...
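The post itself implements this in .NET, but the idea of magic-number validation is easy to illustrate: read the first few bytes of the file and compare them against known signatures, ignoring the extension entirely. A shell sketch (the helper names are mine, and the two signatures shown are the well-known PNG and JPEG ones):

```shell
# Print the first 4 bytes of a file as uppercase hex, e.g. "89504E47".
file_signature() {
  head -c 4 "$1" | od -An -tx1 | tr -d ' \n' | tr 'a-f' 'A-F'
}

# PNG files always start with the bytes 89 50 4E 47.
is_png() {
  [ "$(file_signature "$1")" = "89504E47" ]
}

# JPEG files start with FF D8 FF (the fourth byte varies per variant).
is_jpeg() {
  case "$(file_signature "$1")" in FFD8FF*) return 0 ;; *) return 1 ;; esac
}
```

A PHP web shell renamed to `harmless.jpg` fails `is_jpeg` immediately, because its content starts with `<?php` rather than the JPEG markers—no matter what its extension or Content-Type header claims.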

Securing File Uploads Part 2: File Size Validation

In the first post of this series, we explored how content type validation serves as the first line of defense against malicious file uploads. Today, we're tackling another critical security concern: file size validation and why it's essential for protecting your application from resource exhaustion attacks. The threat: Death by a thousand uploads File size validation might seem like a simple feature requirement, but it's actually a crucial security control. Without proper size limits, attackers can: Exhaust disk space: Fill up your storage with massive files, causing system failures Consume bandwidth: Drain network resources by uploading gigantic files repeatedly Trigger out-of-memory errors: Crash your application by forcing it to process files larger than available memory Enable denial-of-service attacks: Tie up server resources processing oversized files, preventing legitimate users from accessing your application Inflate storage costs: In c...
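The core of the control is simple: establish a hard byte limit and reject anything over it before the file is processed further. The series implements this inside the application's validation pipeline; as a standalone illustration, here is a shell sketch with an assumed 5 MB limit (the limit and function name are placeholders):

```shell
# Assumed maximum upload size: 5 MB.
MAX_UPLOAD_BYTES=$((5 * 1024 * 1024))

validate_size() {
  local file="$1" size
  size=$(wc -c < "$file")   # portable way to get the byte count
  if [ "$size" -gt "$MAX_UPLOAD_BYTES" ]; then
    echo "rejected: $file is $size bytes (limit $MAX_UPLOAD_BYTES)"
    return 1
  fi
  echo "accepted: $file ($size bytes)"
}
```

In a real web application the check should also apply to the request body itself (so an oversized upload is cut off during streaming rather than after it has been fully buffered), which is what framework-level limits such as ASP.NET Core's request body size limit provide.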

Securing File Uploads: Content Type Validation–A defense against malicious files

File upload functionality is a common feature in web applications, but it's also one of the most common attack vectors. A recent security review of our applications revealed some vulnerabilities in our file upload handling that needed our attention. This is the first post in a series where I'll share how we systematically secured our file upload functionality. The problem The fundamental issue with file uploads is trust. When users upload files, we're essentially allowing them to store content on our servers. Without proper validation, attackers can: Upload malicious scripts disguised as innocent files Bypass security controls by manipulating file extensions Execute server-side code through crafted payloads Consume excessive server resources with oversized files The first line of defense? Content type validation. Our approach: A validation pipeline Rather than implementing ad-hoc validation checks scattered throughout our codebase, we designed a...

Discovering Visual Studio 2026 – Bring your own model

Yes! The new Visual Studio 2026 edition is available in preview (now called Insiders). I'll take some time this week to walk through some of the features I like and maybe some of the rough edges I discover along the way. A feature that has existed for a while in VS Code but hadn't yet made it into Visual Studio has finally arrived in Visual Studio 2026. You are no longer limited to the built-in models that Visual Studio supports; you can now connect to your own language models. Selecting a model To select a different model, go to the models dropdown in the Copilot Chat window: Click on the dropdown and choose Manage Models: This will open up the Bring your own model window: Here you can select a model from any of the available providers: Anthropic, Google, OpenAI, and xAI: After entering your API key, you can select one or more models supported by the chosen provider: Unfortunately, you don't have the option yet to choose a loc...

Visual Studio 2026–The Copilot Profiler Agent

Yes! The new Visual Studio 2026 edition is available in preview (now called Insiders). I'll take some time this week to walk through some of the features I like and maybe some of the rough edges I discover along the way. One feature that is really new and only available inside VS 2026 (at the moment of writing this post) is the new Copilot Profiler agent. The profiler agent should assist with any of the following tasks: Analyzing CPU usage, memory allocations, and runtime behavior Surfacing bottlenecks in your code Generating new BenchmarkDotNet benchmarks Validating fixes with before/after metrics, all in a smooth guided loop And more… A good introduction to the profiler agent can be found here: But of course, we want to try it ourselves… Hello profiler! I have an existing set of BenchmarkDotNet tests that I use to evaluate the performance of some features I'm working on and also to avoid that any change...

SonarQube–The ‘MakeUniqueDir’ task failed unexpectedly

In our efforts to improve the (code) quality of our applications, we started an initiative to get all our teams to integrate their projects into SonarQube. We have had SonarQube running for a long time inside our organization, but adoption remained fragmented. The initiative turned out quite successful, but as a consequence, we encountered some issues with SonarQube. Teams started to complain that their build pipelines became flaky and sometimes resulted in errors. The reported error was related to SonarQube and the message looked like this: Error MSB4018: The 'MakeUniqueDir' task failed unexpectedly. System.UnauthorizedAccessException: Access to the path 'db1_work80_sonarqubeout' is denied. We found out that the problem was related to our build server setup, where we have multiple agents running on the same server. As multiple agents try to execute the 'Prepare Analysis' task, it sometimes fails with the error message above. Further research brought us to the NodeReuse parameter of...

Microsoft.Extensions.AI – Part IX–Semantic kernel integration

Semantic Kernel was the first AI library specifically created to build AI agents and chat experiences in .NET. Later, the .NET team started working on Microsoft.Extensions.AI as a common abstraction layer for integrating AI capabilities into your .NET applications. As a consequence, these two libraries have some overlap and similar abstractions exist in both. This post is part of a blog series. Other posts so far: Part I – An introduction to Microsoft.Extensions.AI Part II – ASP.NET Core integration Part III – Tool calling Part IV – Telemetry integration Part V – Chat history Part VI – Structured output Part VII – MCP integration Part VIII – Evaluations Part VIII – Evaluations (continued) Part IX (this post) – Semantic Kernel integration What now? The good news is that Microsoft is actively working on aligning both libraries and (re)building Semantic Kernel on top of the same Microsoft.Extensions.AI abstractions. This mea...

Discovering Visual Studio 2026 – Code coverage

Yes! The new Visual Studio 2026 edition is available in preview (now called Insiders). I'll take some time this week to walk through some of the features I like and maybe some of the rough edges I discover along the way. Today I want to take a look at Code Coverage in Visual Studio. "Wait… what?!" I hear you think. "Code coverage is not a new feature in Visual Studio." And yes, you are right. But until this version, Code Coverage was only available in the Enterprise Edition of Visual Studio. With Visual Studio 2026, it is finally part of the Community and Professional Editions as well. (I always thought it was strange to call yourself professional but not focus on code coverage.) How to use Code Coverage in Visual Studio So, if you never had the opportunity to use the Code Coverage feature in Visual Studio, let me walk you through the steps. Go to the Test menu and select the Analyze Code Coverage for All Tests option from the menu. Another option is to right-click...

Discovering Visual Studio 2026 – Copilot Actions

Yes! The new Visual Studio 2026 edition is available in preview (now called Insiders). I'll take some time this week to walk through some of the features I like and maybe some of the rough edges I discover along the way. Until recently you had two kinds of interactions with GitHub Copilot: either automatic, through features like AI autocompletion in your editor, next edit suggestions, or the intelligent copy paste feature I talked about yesterday; or manual, by using prompts through one of the available chat modes. With the introduction of Copilot Actions, a third interaction mode is introduced. Copilot Actions Copilot Actions give you direct access to Copilot from the context menu inside the editor without the need to type any prompt. Right now, the list of available actions is limited to the following five: Explain Optimize selection Generate comments Generate tests Add to Chat Remark: The Optimize option is only available when you have...

Discovering Visual Studio 2026–Adaptive paste

Yes! The new Visual Studio 2026 edition is available in preview (now called Insiders). I'll take some time this week to walk through some of the features I like and maybe some of the rough edges I discover along the way. Let's be honest: every developer copies and pastes other code. Typically, after pasting there is some cleanup left to do: correcting styles, adapting to your naming conventions, fixing small errors. This process often comes with extra steps. What if the pasted code were automatically adapted, incorporating one or more of the following actions: Aligning syntax and styling with the document Inferring parameter adjustments Fixing minor errors Supporting language translation, human and code-based Completing patterns or filling in blanks Wouldn't that be great? Enter adaptive paste. Adaptive paste The adaptive paste UI appears when you do a regular paste (CTRL-V). Press the TAB key afterwards to get an Adaptive Paste suggestion. You can also trigger Ada...

Auto update the .NET core versions on your server

.NET Full Framework updates on your server(s) become available as Windows Updates and can be pushed through centralized tools like Microsoft Intune, System Center Configuration Manager (SCCM), or Windows Server Update Services (WSUS), allowing IT ops teams to control update scheduling and minimize unexpected downtime. However, such an option didn't exist for a long time for .NET Core. This changed some time ago when .NET Core updates became available via Microsoft Update as an opt-in(!) feature. How to enable automatic updates for .NET Core Enabling automatic .NET updates on your Windows Server requires modifying the Windows Registry. You have several options depending on your needs: Enable all .NET updates (recommended for most scenarios): [HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\.NET] "AllowAUOnServerOS"=dword:00000001 Version-specific updates: .NET 9.0: [HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\.NET\9.0] .NET 8.0: [HKEY_LOCAL_MACHINE\SOFTWARE\Microsof...
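The inline registry values above can be applied by saving them as a .reg file and importing it (for example with regedit /s, which requires administrative rights). A minimal sketch for the opt-in that covers all .NET versions, using the value name quoted in the post:

```
Windows Registry Editor Version 5.00

[HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\.NET]
"AllowAUOnServerOS"=dword:00000001
```

Once the key is in place, .NET Core servicing updates flow through Microsoft Update alongside the Full Framework patches, so the same Intune/SCCM/WSUS scheduling applies.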

Discovering Visual Studio 2026–Installation

Yes! The new Visual Studio 2026 edition is available in preview (now called Insiders). I'll take some time this week to walk through some of the features I like and maybe some of the rough edges I discover along the way. Start by downloading the version you prefer here: https://visualstudio.microsoft.com/insiders/ After downloading and executing the installer, the Visual Studio Installer is loaded and you are welcomed by a new screen: This is already a great feature, as it allows you to import the configuration settings and extensions from a previous version. We haven't even installed Visual Studio yet, but I'm already happy! The remaining part of the installation process is unchanged; you can select the Workloads and Individual components you like and start the installation. I didn't have the feeling that the installation process was much slower or faster, but don't take that as an official measure. After the installation completed, I opened up an existing sol...

How to get rid of the smartcard popup when interacting with LDAP over SSL

In one of our applications we are connecting with LDAP through System.DirectoryServices.AccountManagement. This code worked fine for years until we had to make the switch from LDAP to LDAPS and incorporate SSL in our connections. Let me start by showing you the original code (or at least a part of it): We thought that making the switch to SSL would be easy. We therefore added ContextOptions.SecureSocketLayer to the ContextOptions enum; however, after doing that, we got a SmartCard popup every time this code was called: I couldn't find a good solution to fix it while keeping the PrincipalContext class. With some help from GitHub Copilot and some research, I discovered that I could get it working when I switched to the underlying LdapConnection and explicitly set the ClientCertificate to null: More information c# - PrincipalContext with smartcard inserted - Stack Overflow c# - How to validate server SSL certificate for LDAP+SSL connection - Stack Overflow

Migrating from XUnit v2 to v3–Troubleshooting

The XUnit team decided to do a major overhaul of the XUnit libraries and created completely new V3 packages. So don't expect backwards compatibility but a significant architectural shift that brings improved performance, better isolation, and modernized APIs to .NET testing. While the migration requires some work, the benefits should make it worthwhile for most projects. In this post I'll share some of the migration problems I encountered and how to fix them. XUnit.Abstractions is not recognized This is an easy one. Xunit.Abstractions has become an internal namespace and should no longer be referenced. Just remove any using Xunit.Abstractions statement from your code. No v3 version of Serilog.Sinks.XUnit After switching to the v3 version of the XUnit packages, I noticed that the old XUnit v2 version was still used somewhere, causing the following compiler error: The type 'FactAttribute' exists in both 'xunit.core, Version=2.4.2.0, Culture=neutral, Publ...

Migrating from XUnit v2 to v3 – Getting started

The XUnit team decided to do a major overhaul of the XUnit libraries and created completely new V3 packages. So don't expect backwards compatibility but a significant architectural shift that brings improved performance, better isolation, and modernized APIs to .NET testing. While the migration requires some work, the benefits should make it worthwhile for most projects. Yesterday I talked about some of the features that I like in the new version. Today I want to walk you through the basic steps needed to migrate an existing V2 project to V3. Understanding the architectural changes Before diving into the migration steps, it's crucial to understand the fundamental changes in xUnit v3 that impact how you'll structure and run your tests. From Libraries to Executables The most significant change in v3 is that test projects are now stand-alone executables rather than libraries that require external runners. This architectural shift solves several problems that plagued ...

Migrating from XUnit v2 to v3 - What’s new?

The XUnit team decided to do a major overhaul of the XUnit libraries and created completely new V3 packages. So don't expect backwards compatibility but a significant architectural shift that brings improved performance, better isolation, and modernized APIs to .NET testing. While the migration requires some work, the benefits should make it worthwhile for most projects. In this post I'll explain some of the reasons why I think you should consider migrating to the v3 version. From libraries to executables The most significant change in v3 is that test projects are now stand-alone executables rather than libraries that require external runners. This architectural shift solves several problems that plagued v2: Dependency Resolution: The compiler now handles dependency resolution at build time instead of runtime Process Isolation: Tests run in separate processes, providing better isolation than the Application Domain approach used in v2 Simplified Execution:...

Writing your own batched sink in Serilog

Serilog is one of the most popular structured logging libraries for .NET, offering excellent performance and flexibility. While Serilog comes with many built-in sinks for common destinations like files, databases, and cloud services, we created a custom sink to guarantee compatibility with an existing legacy logging solution. However, as we noticed some performance issues, we decided to rewrite the implementation to use a batched sink. In this post, we'll explore how to build your own batched sink in Serilog, which can significantly improve performance when dealing with high-volume logging scenarios. At least that is what we are aiming for… Understanding Serilog's batched sink architecture Serilog has built-in batching support and handles most of the complexity of batching log events for you. Internally it will handle things like: Collecting log events in an internal queue Periodically flushing batches based on time intervals or batch size limits Handling backpre...

Ollama– Running LLM’s locally

Ollama remains my go-to tool for running LLMs locally. With the latest release, the Ollama team introduced a user interface. This means you no longer need to use the command line or tools like OpenWebUI to interact with the available language models. After installing the latest release, you are welcomed by a new chat window similar to ChatGPT: Interacting with the model can be done directly through the UI: A history of earlier conversations is stored and available: You can easily switch between models by clicking on the model dropdown. If a model is not yet available locally, you can download it immediately by clicking on the Download icon: If you need a bigger context window, you can now change this directly from the settings: Some other features worth mentioning are file support (simply drag and drop a file onto the chat window) and multimodal support: All this makes the new Ollama app a good starting point to try and interact with the available LLMs local...