
Posts

Showing posts from 2026

Why AI helps you discover new parts of your favorite SDK – The NullLogger

I'll admit it - I first encountered NullLogger in code generated by an AI coding assistant. At first glance, I almost dismissed it as one of those "AI quirks," but then I realized it was actually a real class available in .NET that I'd been missing out on. What is NullLogger? NullLogger is part of Microsoft.Extensions.Logging and implements the null object pattern for logging. It's a logger that does absolutely nothing - all of its methods are no-ops. While that might sound useless at first, it's actually quite valuable. Imagine you have the following code: Now, when testing this code, I would normally mock the logger, but thanks to the built-in NullLogger I no longer have to do that: How to use it In the example above I showed the typical way to get an ILogger<T> instance. There is also a non-generic one to create an ILogger instance: Both are singletons, so you're always getting the same instance - no memory overhe...
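Since the original code sample is cut off in this excerpt, here is a minimal sketch of the pattern. The `OrderService` class is a hypothetical stand-in, but `NullLogger<T>.Instance` and `NullLogger.Instance` are the real singletons from `Microsoft.Extensions.Logging.Abstractions`:

```csharp
using Microsoft.Extensions.Logging;
using Microsoft.Extensions.Logging.Abstractions;

// Hypothetical service with a logger dependency.
public class OrderService
{
    private readonly ILogger<OrderService> _logger;

    public OrderService(ILogger<OrderService> logger) => _logger = logger;

    public int Process(int quantity)
    {
        _logger.LogInformation("Processing {Quantity} items", quantity);
        return quantity * 2;
    }
}

// In a unit test: no mock required, just pass the no-op singleton.
var service = new OrderService(NullLogger<OrderService>.Instance);

// Non-generic variant for plain ILogger dependencies.
ILogger plain = NullLogger.Instance;
```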

Testing your MCP server with Visual Studio HTTP Files

If you're building or testing Model Context Protocol (MCP) servers, you need a quick way to verify your endpoints are working correctly. Visual Studio's HTTP files ( .http ) provide an elegant, code-based approach to API testing. What are Visual Studio HTTP files? HTTP files are plain text files with a .http or .rest extension that let you define and execute HTTP requests directly in your editor. They're supported in Visual Studio, Visual Studio Code (with the REST Client extension), and JetBrains IDEs. Think of them as executable documentation for your API. Why use HTTP files for MCP testing? The Model Context Protocol defines a JSON-RPC 2.0 interface for AI model interactions. Testing these endpoints traditionally meant using tools like Postman or curl commands, but HTTP files offer several advantages: Version control friendly : Store your test requests alongside your code Easy sharing : Team members can run the same tests instantly Fast iteration...
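As a taste of what such a file can look like, here is a hedged sketch of a `.http` request that sends the MCP `initialize` handshake; the endpoint URL and port are placeholders, and the protocol version should match what your server supports:

```http
### Initialize an MCP session (URL and port are assumptions — adjust to your server)
POST http://localhost:5000/mcp HTTP/1.1
Content-Type: application/json
Accept: application/json, text/event-stream

{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "initialize",
  "params": {
    "protocolVersion": "2024-11-05",
    "capabilities": {},
    "clientInfo": { "name": "http-file-test", "version": "1.0" }
  }
}
```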

Passing parameters to a hosted MCP Server in C#

The Model Context Protocol (MCP) enables seamless integration between AI applications and external data sources. When working with MCP servers in C#, you'll often need to pass parameters to configure server connections, specify endpoints, or provide authentication details. This post walks you through the practical approaches to handling URL parameters when connecting to MCP servers. Understanding MCP transport mechanisms Before diving into configuration, it's crucial to understand the two primary ways MCP servers communicate, as this fundamentally impacts how you pass parameters. STDIO Transport involves running the MCP server as a child process that communicates through standard input/output streams. Your MCP client launches the server (typically a Node.js script or executable) and exchanges messages via stdin/stdout. This is the most common approach for local development and desktop applications. Hosted MCP Servers run as independent web services that expose Se...
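The excerpt stops before the configuration code, so here is a minimal, SDK-agnostic sketch of composing an endpoint URL with query parameters using standard .NET types (the host and parameter names are hypothetical):

```csharp
using System;

// Hypothetical MCP endpoint and parameters — replace with your own values.
var builder = new UriBuilder("https://example.com/mcp")
{
    Query = "tenant=contoso&region=westeurope"
};

// Pass builder.Uri to whatever MCP client transport you use.
Console.WriteLine(builder.Uri);
// e.g. https://example.com/mcp?tenant=contoso&region=westeurope
```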

Permission to experiment granted

In my post yesterday about shifting from "why" to "what" questions, I explored how this simple change can transform leadership conversations. Today, I want to focus on one specific question that came up in a related Coaching for Leaders podcast episode with Elizabeth Lotardo: What have you already tried? What makes this question so powerful? When someone comes to you with a problem and you ask, "What have you already tried?" you're sending several key messages simultaneously: You expect initiative. You're not surprised that they've already taken action—you assume it. This presumption of capability builds confidence. Experimentation is valued. By asking what they've tried, not what they've done, you're acknowledging that not everything works on the first attempt. And that's okay. Their attempts matter. Even if their experiments didn't solve the problem, the learning from those attempts is valuable informa...

The power of "What" over "Why"

Leadership isn't about having all the answers—it's about asking the right questions. I was reminded of this while listening to Shannon Minifie, CEO of Box of Crayons, on the Coaching for Leaders podcast (Episode 760), where she explored how the quality of our questions shapes the quality of our leadership. The problem with "Why" As leaders, we're trained to dig deep, to understand root causes. "Why did this happen?" "Why didn't you finish that project?" These questions feel investigative and thorough. But here's what Minifie points out: "why" questions often put people on the defensive. When someone hears "Why did you do that?" their brain doesn't hear curiosity—it hears judgment. They start building walls instead of opening doors. The conversation shifts from exploration to explanation, from possibility to justification. The "What" alternative This connects directly to Michael Bungay Sta...

Azure DevOps Server – Mermaid support improvements

Azure DevOps (Server) has had Mermaid support for a long time. Unfortunately, that support was limited to the Wiki functionality. This meant that if you had markdown files somewhere else in your repo, for example as part of your readme.md file or an architecture decision record (ADR), any mermaid diagram inside these files was not rendered. Some extensions exist in the Visual Studio marketplace that try to solve this, but when using them I noticed that none of these extensions were bug free, and all had limitations or caused problems with other features inside Azure DevOps. With the December 2025 release of Azure DevOps, you no longer need a third-party solution, as this feature is finally available. Create a mermaid diagram in any .md file: And the preview tab will now nicely render your diagram: Nice! More information Markdown Syntax for Files, Widgets, Wikis - Azure DevOps | Microsoft Learn Azure DevOps Server Release Notes - Azure DevOps Server & T...
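For reference, a minimal Mermaid block like the following, placed in any .md file in your repo, now renders in the preview tab:

```mermaid
graph TD
    A[Commit pushed] --> B{CI build passes?}
    B -->|Yes| C[Deploy]
    B -->|No| D[Fix and retry]
```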

Help! Copilot is eating my Fabric capacity.

Last week I shared my understanding of how capacity works in Microsoft Fabric. I had just hit publish when I got a message that I had exceeded my Fabric capacity (again). Time to open the Fabric Metrics dashboard I introduced in my last post about Fabric and see who the culprit is: I opened the Compute tab and hovered over the top item in the list. Turns out that Copilot is eating up a lot of my capacity. Whoops! I decided to disable Copilot for this data warehouse. Therefore, I opened a query in the data warehouse and clicked on the Copilot completions item at the bottom of the screen: This opens the configuration settings for my data warehouse, where I can disable the completions: If you want to disable Copilot completely, you can do that at the tenant level: Remark: If you are still confused about how Fabric capacity exactly works, I found this great post by Tom Keim where he compares CU usage to watts: Just like electricity, where we’re billed by k...

Azure DevOps (Server) – Check repository health and usage

Azure DevOps offers a repository health feature, which allows you to monitor multiple metrics that contribute to the health of your Git repositories. If you are using Azure DevOps Services, you may already know this feature. But with the latest release of Azure DevOps Server (December 2025), it has finally arrived on-premises as well. Reason enough to take a deeper look at it and write this post. Let’s dive in! Why is this feature relevant for my team? Think of your Git repository like a living organism. As it grows with more commits, blobs, branches, and objects, it can become sluggish and unwieldy. Large repositories increase the load on the Azure DevOps infrastructure, affecting performance and user experience. Without proper maintenance, your team could face slower clone times, degraded Git operations, and even service disruptions. The repository health feature now provides visibility into key health metrics and offers actionable recommendations before problems escalate. Getting t...

Understanding Microsoft Fabric Capacity and Throttling – A first attempt

Being new to Microsoft Fabric, one of the topics I found challenging is how Fabric capacity, and especially throttling, works. And what better way to structure my understanding than writing a blog post? Let’s give it a try! Remark: If I made any mistakes, please feel free to let me know so I can update this article. What is Microsoft Fabric Capacity? Microsoft Fabric capacity is the compute and storage resources you purchase to run Fabric workloads. Unlike traditional per-service pricing models, Fabric uses a unified capacity model where you purchase Capacity Units (CUs) that power all Fabric experiences, including Data Engineering, Data Warehouse, Data Science, Real-Time Analytics, Power BI, and Data Factory. When you purchase a Fabric capacity, you're essentially reserving a pool of compute resources that can be shared across different workloads and users within your organization. This capacity is measured in Capacity Units, which represent the computational...

Structuring Projects in Dependency-Track

I promised yesterday that it would be my last post about how we are using Dependency-Track. But it turns out there is some confusion, and a few people asked me the following question: "How should I organize all these projects?" This is a good question, because a well-structured project hierarchy makes the difference between a dashboard that provides clarity and one that creates confusion. In this post, I'll share the project organization strategies we've explored and practical examples you can adapt to your organization. Understanding Dependency-Track's model Dependency-Track organizes work around several key concepts: Projects : The fundamental unit representing a software component. Each project has a name and version. Remark: A project can also be flagged as current. Versions : Projects can have multiple versions representing different releases or deployments. Classifiers : Indicate whether the project is a library, framework, ...

Integrating Dependency-Track into Azure DevOps Pipelines

Welcome to the final post in this Dependency-Track series! We've covered what Dependency-Track is and why we started using it, how to deploy it on Azure Container Apps, and how to configure OIDC authentication with Microsoft Entra ID. Now it's time to put it all together by integrating Dependency-Track into our Azure DevOps CI/CD pipelines. In this post, I'll show you how to automatically generate Software Bills of Materials (SBOMs) and upload them to Dependency-Track. By the end, you'll have a fully automated vulnerability management workflow that provides continuous visibility into your software supply chain. CI/CD architecture We adopted the following approach for integrating Dependency-Track into our CI/CD architecture: During the CI phase, we generate an SBOM as part of the build process using language-specific tools and store it among the other build artifacts. During the CD phase, the code is rolled out in multiple environments with approval checks between...
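As a sketch of the CI step described above, assuming the CycloneDX global tool for .NET (the solution name, paths, and artifact name are placeholders; verify the tool's arguments against the version you install):

```yaml
# CI stage: generate a CycloneDX SBOM and publish it as a build artifact.
steps:
  - script: |
      dotnet tool install --global CycloneDX
      dotnet-CycloneDX MySolution.sln -o $(Build.ArtifactStagingDirectory)/sbom
    displayName: 'Generate SBOM'

  - publish: $(Build.ArtifactStagingDirectory)/sbom
    artifact: sbom
    displayName: 'Publish SBOM artifact'
```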

Configuring Dependency-Track with Microsoft Entra ID (Azure AD) OIDC Authentication

In my previous posts, I introduced Dependency-Track and showed you how to deploy it on Azure Container Apps. Now that you have a working instance, it's time to secure it properly by integrating with your organization's identity provider. In this post, I'll walk you through configuring Dependency-Track to use OpenID Connect (OIDC) authentication with Microsoft Entra ID (formerly Azure Active Directory). This integration will allow your users to log in using their existing corporate credentials, enable single sign-on (SSO), and leverage conditional access policies for enhanced security. Why use OIDC with Microsoft Entra ID? Before diving into the configuration, let's understand the benefits of this integration: Centralized Identity Management : Users authenticate with their existing Microsoft Entra ID accounts, eliminating the need to manage separate credentials for Dependency-Track. Single Sign-On (SSO) : Users already logged into Microsoft services can acc...
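For orientation, the Dependency-Track API server reads its OIDC settings from environment variables. The variable names below follow the Dependency-Track documentation at the time of writing, but verify them against your version; the issuer tenant and client ID are placeholders:

```yaml
# Environment variables for the Dependency-Track API server container (placeholders in <>).
ALPINE_OIDC_ENABLED: "true"
ALPINE_OIDC_ISSUER: "https://login.microsoftonline.com/<tenant-id>/v2.0"
ALPINE_OIDC_CLIENT_ID: "<application-client-id>"
ALPINE_OIDC_USERNAME_CLAIM: "preferred_username"
ALPINE_OIDC_TEAMS_CLAIM: "groups"
ALPINE_OIDC_USER_PROVISIONING: "true"
ALPINE_OIDC_TEAM_SYNCHRONIZATION: "true"
```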

Setting Up Dependency-Track on Azure Container Apps

In my previous post, I introduced Dependency-Track and explained why we chose it to manage our software supply chain security. Now it's time to get practical. In this post, I'll walk you through how we deployed Dependency-Track on Azure Container Apps, including our architecture decisions, configuration choices, and lessons learned along the way. Why Azure Container Apps? Before diving into the setup, let me explain why we chose Azure Container Apps for hosting Dependency-Track. We evaluated several deployment options, including Azure Kubernetes Service (AKS), Azure Container Instances (ACI), and App Service, but Container Apps emerged as the best fit for our needs: Simplified Management : Container Apps abstracts away much of the complexity of Kubernetes while still providing container orchestration capabilities. We don't need to manage nodes, clusters, or complex networking configurations. Cost-Effective : With built-in autoscaling and the ability to scale to zero...
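To give a flavor of the deployment, here is a hedged az CLI sketch: the resource names and sizing are placeholders, the image is the official dependencytrack/apiserver image, and you should verify the flags against your az CLI version.

```
# Create the Container Apps environment (names are placeholders).
az containerapp env create \
  --name dtrack-env \
  --resource-group dtrack-rg \
  --location westeurope

# Run the Dependency-Track API server with external ingress.
az containerapp create \
  --name dtrack-apiserver \
  --resource-group dtrack-rg \
  --environment dtrack-env \
  --image dependencytrack/apiserver:latest \
  --target-port 8080 \
  --ingress external \
  --cpu 2 --memory 4Gi
```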

Dependency-Track: Taking control of our software supply chain

Modern software development relies heavily on third-party dependencies. Whether you're building a Java application with Maven, a JavaScript app with npm, a .NET application with NuGet, or a Python service with pip, you're likely incorporating dozens—if not hundreds—of external libraries into your codebase. While this approach accelerates development, it also introduces significant security and compliance risks that many organizations struggle to manage effectively. This is the first post in a three-part series about how we evolved from OWASP Dependency-Check to Dependency-Track to gain visibility and control over our software supply chain. In this post, I'll introduce what Dependency-Track is and explain why we decided to adopt it. What is Dependency-Track? Dependency-Track is an open-source Component Analysis platform that helps organizations identify and reduce risk in their software supply chain. At its core, it's a continuous monitoring solution that ingest...