Posts

Showing posts from April, 2026

DAB 2.0 Preview: Autoconfiguration with autoentities

If you've been maintaining a large dab-config.json, you know the pain: every table, view, and stored procedure needs its own entities block. Schema grows, config grows. Someone adds a table and forgets to update the config, and suddenly your API is silently missing endpoints. DAB 2.0 Preview introduces autoentities — a pattern-based approach that discovers and exposes database objects automatically, every time DAB starts. This post covers how it works, how to configure it from the CLI, and what to watch for. Getting started As DAB 2.0 is still in preview, you first need to install the preview version: dotnet tool install microsoft.dataapibuilder --prerelease Note: MSSQL data sources only, for now. Initialize a new dab-config.json file if it doesn't exist yet: dotnet dab init Remark: Notice that we prefix dab with dotnet to avoid collisions with the globally installed release version. How it works Instead of defining each entity explicitly, you define one ...
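The excerpt cuts off mid-explanation, so as an illustration only — the property names below (the autoentities section, the include pattern, the per-feature toggles) are assumptions based on the description above, not the preview's confirmed schema — a pattern-based config might look something like:

```json
{
  "data-source": {
    "database-type": "mssql",
    "connection-string": "@env('DATABASE_CONNECTION_STRING')"
  },
  "autoentities": {
    "dbo-tables": {
      "source": { "schema": "dbo", "include": "*" },
      "rest": { "enabled": true },
      "graphql": { "enabled": true }
    }
  }
}
```

The appeal is that a new table matching the pattern gets an endpoint on the next DAB start, with no config change required.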

Fixing the "Newer version of Aspire.Hosting.AppHost required" error

After pulling some NuGet packages into my .NET Aspire project, I ran into this cryptic startup failure: Aspire.Hosting.DistributedApplicationException: Newer version of the Aspire.Hosting.AppHost package is required to run the application. Ensure you are referencing at least version '13.2.2'. at Aspire.Hosting.Dcp.DcpDependencyCheck.EnsureDcpVersion(DcpInfo dcpInfo) at Aspire.Hosting.Dcp.DcpDependencyCheck.GetDcpInfoAsync(...) at Aspire.Hosting.Dcp.DcpHost.StartAsync(...) The app host refuses to start, and the logs point you toward a version check deep inside DCP internals. The error message tells me that a minimum version of the hosting package is required, but no matter how many times I ran dotnet restore or updated NuGet packages through Visual Studio, the error persisted. That's because there are two places that pin the Aspire version. Why updating NuGet packages isn't enough Aspire AppHost projects use a special SDK reference at the very...
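For context, an Aspire AppHost project carries that SDK reference in its .csproj, separate from the PackageReference — so updating the package alone leaves the SDK pin behind. A sketch of the project file (version numbers illustrative, taken from the error message above):

```xml
<Project Sdk="Microsoft.NET.Sdk">

  <!-- The AppHost SDK is pinned here, independently of the NuGet
       package reference below. Both need to agree. -->
  <Sdk Name="Aspire.AppHost.Sdk" Version="13.2.2" />

  <ItemGroup>
    <PackageReference Include="Aspire.Hosting.AppHost" Version="13.2.2" />
  </ItemGroup>

</Project>
```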

Microsoft Agent Framework–Building a multi-agent workflow with DevUI in .NET

Yesterday, I created a minimal .NET project with DevUI and registered a couple of standalone agents. That gets you surprisingly far for interactive testing. But real business scenarios quickly outgrow a single agent: you need data flowing through multiple specialized steps, decisions being made along the way, and a clear picture of the whole pipeline. That's what workflows are for. In this post, we'll build a content review pipeline as a concrete example — a Writer agent drafts a response, a Reviewer agent critiques it, and a deterministic formatting step finalizes the output. All of it visualized in DevUI. Agents vs Workflows — the key distinction The Agent Framework docs put it cleanly: an agent is LLM-driven and dynamic — it decides which tools to call and in what order, based on the conversation. A workflow is a predefined graph of operations, some of which may be AI agents, but the topology is explicit and deterministic. You decide exactly what runs after what. ...
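A rough sketch of the Writer → Reviewer → formatter topology described above — type and method names here approximate the Agent Framework workflow API and may differ from the shipped SDK, so treat this as a shape, not a reference:

```csharp
// Sketch only: WorkflowBuilder/AddEdge approximate the workflow API
// described above; chatClient and formatStep are assumed to exist.
AIAgent writer = chatClient.CreateAIAgent(
    instructions: "Draft a response to the request.", name: "Writer");
AIAgent reviewer = chatClient.CreateAIAgent(
    instructions: "Critique the draft and suggest improvements.", name: "Reviewer");

// The topology is explicit and deterministic: Writer -> Reviewer -> formatter.
var workflow = new WorkflowBuilder(writer)
    .AddEdge(writer, reviewer)
    .AddEdge(reviewer, formatStep)  // formatStep: a deterministic executor
    .Build();
```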

Microsoft Agent Framework - Getting started with DevUI in .NET

If you've been exploring the Microsoft Agent Framework, you've probably seen the Python DevUI example showcased prominently in the docs. DevUI is a fantastic inner-loop tool — it lets you visually inspect your agents: their messages, reasoning steps, tool calls, and conversation state, all in a browser dashboard while you develop locally. Think of it as Swagger UI, but for AI agents. The problem? When I went looking for a .NET / C# equivalent, I couldn't find one. The official Microsoft Learn page for DevUI samples simply said: "DevUI samples for C# are coming soon." — Not great when you're trying to ship. So I built one. This post walks through a complete, working .NET Core example using Microsoft Agent Framework 1.0, with DevUI wired up and ready to go. What is DevUI? DevUI is a lightweight developer dashboard shipped as part of the Microsoft Agent Framework. It is not intended for production — it's a local dev tool, similar in spirit to what...

ADFS policies vs authorization rules - understanding the difference

While preparing our MFA rollout at ADFS level, we started making the switch from classic authorization rules to custom access control policies in ADFS. This post explains the difference and the rationale behind this switch. A tale of two mechanisms When you work with Active Directory Federation Services (ADFS), there are two ways to control what happens when a user tries to authenticate: authorization rules and access control policies . On the surface, they feel similar; both let you define conditions around user access. But under the hood, they represent two distinct generations of the same capability. Understanding the difference matters especially when implementing MFA, because the mechanism you choose affects flexibility, maintainability, and how cleanly your logic can scale. Authorization rules: the classic approach Authorization rules are the original ADFS mechanism, introduced back when claims-based identity was first baked into the platform. They use a proprietary la...
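As a concrete example of the newer mechanism, a built-in access control policy can be assigned to a relying party from PowerShell using the standard ADFS cmdlets ("MyApp" is a placeholder display name for your relying party trust):

```powershell
# Assign a built-in access control policy to a relying party trust.
Set-AdfsRelyingPartyTrust -TargetName "MyApp" `
    -AccessControlPolicyName "Permit everyone and require MFA"

# Verify which policy is applied.
Get-AdfsRelyingPartyTrust -Name "MyApp" |
    Select-Object Name, AccessControlPolicyName
```

Note that once an access control policy is assigned, the classic issuance authorization rules for that relying party are no longer editable — the two mechanisms are mutually exclusive per trust.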

Keep your context short - Manual compaction is now available in VS Code

In my earlier posts about the GitHub Copilot CLI , I already introduced the /compact command, a slash command that summarizes your conversation history to free up context space, letting you keep working in the same session without losing momentum. Well, good news! It's no longer CLI-only. The February 2026 release of VS Code brings /compact directly into the editor, and it's part of a much broader story about making agents actually usable for the kind of long, messy, real-world tasks developers deal with every day. The Context Window problem Here's what happens without context compaction: you start an agent session, ask it to dig into a complex feature, go back and forth a few times, and eventually the conversation grows so large that the model starts losing the thread, or the session simply stops. You're forced to start over, re-explain everything, and lose all the accumulated understanding the agent had built up. Context compaction solves this by summarizing...

Git for Windows 2.49.0 broke my Azure DevOps pushes

After a routine Visual Studio update silently upgraded Git for Windows to version 2.49.0, pushes to Azure DevOps started failing with a cryptic NTLM authentication error — even though our setup was supposed to use Kerberos. Here's what happened and how to fix it in 30 seconds. The symptoms You update Visual Studio (or Git for Windows directly), and suddenly any push to your Azure DevOps remote fails. The error mentions NTLM even though your network is configured for Kerberos. Typical signs: Git push fails immediately after a Visual Studio or Git for Windows update Error references NTLM authentication failure Clones and fetches from the same remote may still work The remote URL is an internal TFS/Azure DevOps server (e.g. https://tfs.yourcompany.com ) What changed Git for Windows 2.49.0 (shipped as MinGit 2.49.0) changed how it negotiates authentication for HTTPS remotes. The new default behaviour causes Git to attempt NTLM where it previously fell through ...
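The excerpt cuts off before the actual fix, so to be explicit: the commands below are diagnostic aids plus a long-standing Git setting relevant to Kerberos/SPNEGO negotiation, offered as plausible knobs to check, not necessarily the post's confirmed 30-second fix.

```shell
# See which auth schemes the server offers and Git/curl negotiates
# during the failing push.
GIT_CURL_VERBOSE=1 git push 2>&1 | grep -i "www-authenticate"

# http.emptyAuth tells curl to attempt Negotiate (Kerberos/SPNEGO)
# without first prompting for a username/password.
git config --global http.emptyAuth true
```

If the verbose output shows only NTLM being attempted, the negotiation order is the place to look.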

Debugging your VSCode agent interactions

If you've spent any time working with GitHub Copilot in agent mode, you've probably hit that frustrating moment: the agent does something unexpected, picks the wrong tool, ignores a prompt file, or just… takes forever, and you have no idea why. Until recently, your only recourse was the raw Chat Debug view : useful, but dense, and not exactly designed for quick diagnosis. That changes with the Agent Debug Log panel, available in preview as of VS Code 1.110. What it is The Agent Debug Log panel shows a chronological event log of everything that happens during a chat session, including tool calls, LLM requests, prompt file discovery, and errors. Think of it as a structured, human-readable trace of your agent's entire thought process, rendered right inside VS Code. It replaces the old Diagnostics chat action with a richer, more detailed view, and is particularly valuable as your agent setups grow in complexity; custom instructions, multiple prompt files, MCP servers, and...

Icons for Tools, Resources, and Prompts–Because a picture is worth a thousand words

With the ever-growing list of MCP servers and supported tools, it is hard to spot the right tool. With the v1.0 release of the official MCP C# SDK, you can make your tools a little easier to discover thanks to the introduction of icon support — tools, resources, and prompts can now carry icon metadata that clients can display in their UIs. Because a picture is worth a thousand words MCP servers expose tools, resources, and prompts through list endpoints (tools/list, resources/list, prompts/list). Up until now, those lists were purely textual — names and descriptions. With icons, client applications like MCP Inspector or AI agent UIs can render visual identifiers alongside each item, making large tool catalogs much easier to navigate at a glance. The simple case: a single icon via attribute The quickest way to add an icon to a tool is through the IconSource parameter on the [McpServerTool] attribute: The same IconSource parameter is available on [McpServerResour...
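Based on the description above, a minimal sketch — IconSource is the parameter the post names; the tool name, icon URL, and implementation around it are illustrative placeholders:

```csharp
using System.ComponentModel;
using ModelContextProtocol.Server;

[McpServerToolType]
public static class WeatherTools
{
    // IconSource points clients at an image to render next to the tool
    // in tools/list results; the URL here is a placeholder.
    [McpServerTool(Name = "get_forecast",
                   IconSource = "https://example.com/icons/forecast.png")]
    [Description("Gets the weather forecast for a location.")]
    public static string GetForecast(string location) =>
        $"Sunny in {location}"; // placeholder implementation
}
```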

GitHub Copilot–Format your code using hooks

After giving a GitHub Copilot training last week where I introduced the concept of hooks, one of the attendees asked me what a good example of a hook would be. Great question! A first use case I could think of: use a hook to format the AI-generated code to match the style preferences and static analysis recommendations specified in an .editorconfig file. Tip: If you are looking for some inspiration, check out the hooks section in Awesome Copilot: awesome-copilot/docs/README.hooks.md at main · github/awesome-copilot What event should we use? There are multiple hook events that you can use: sessionStart, sessionEnd, userPromptSubmitted, preToolUse, postToolUse, and errorOccurred. As the formatting should be done after every code change, postToolUse seems the logical choice. Why not at sessionEnd? postToolUse formats the file immediately after each edit. This means the agent sees clean, correctly structured usings before it reads the file again for its ...
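To make the idea concrete — the exact hooks configuration schema is not shown in this excerpt, so every key below (the matcher, the template placeholder, the file layout) is an illustrative assumption — a postToolUse hook that runs dotnet format after each edit might look like:

```json
{
  "hooks": {
    "postToolUse": [
      {
        "matcher": "editFile",
        "command": "dotnet format --include {{file}}"
      }
    ]
  }
}
```

The effect is that every file the agent touches is immediately reformatted to match the .editorconfig rules, before the agent reads it back.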

CycloneDX 1.7 not yet supported by Dependency-Track

As part of our secure SDLC strategy, we generate an SBOM (Software Bill of Materials) and store it inside Dependency-Track. This gives us a good overview of all our applications, their dependencies, and vulnerable components. However, after upgrading to the latest dotnet-CycloneDX version, our SBOM pipeline broke. The problem When uploading an XML-based Software Bill of Materials (SBOM) to Dependency-Track, we started to encounter a 400 – Bad Request response. The culprit is a version mismatch: a recent update to the dotnet-CycloneDX tool now generates SBOMs in CycloneDX 1.7 format by default — a version that Dependency-Track does not yet support. Dependency-Track validates incoming SBOMs against its supported schemas. When it receives a 1.7 document, schema validation fails and the upload is rejected entirely. The dotnet-CycloneDX package was updated on our build server, silently bumping the default output format from CycloneDX 1.6 to 1.7. No code change, just...

Agents can now verify your UI changes without leaving VS Code

Verifying frontend changes always meant a mental context switch: write code, alt-tab to a browser, poke around in DevTools, switch back. Even with a decent dev server, the loop was still manual — and for AI agents, it was essentially broken. Agents could write unit tests for logic, but verifying whether a button actually renders, whether a dialog triggers, or whether a layout holds up? That required a human in the loop. I first tried to tackle this problem by using the Playwright or Chrome DevTools MCP server, but with the February 2026 release of VS Code (1.110), that changes. Agents can now open, interact with, and inspect your running application directly inside VS Code's integrated browser — closing the development loop without any manual hand-off. How it works When browser agent tools are enabled, Copilot gains access to a set of tools that let it read and interact with pages in the integrated browser. As the agent interacts with the page, it sees updates to page co...

Awesome GitHub Copilot just got awesommer (if that’s a word)

If you've been following the GitHub Copilot ecosystem, you've probably heard of the Awesome GitHub Copilot repo. It launched back in July 2025 with a straightforward goal: give the community a central place to share custom instructions, prompts, and chat modes for tailoring Copilot's AI responses. A lot of people contributed. As a result, the repo now contains 175+ agents, 208+ skills, 176+ instructions, 48+ plugins, 7 agentic workflows, and 3 hooks. And now the maintainers took it one step further and created an Awesome GitHub Copilot website and Learning hub. A website that actually helps you find things The new site lives at awesome-copilot.github.com and wraps the repo in a browsable interface built on GitHub Pages. The headline feature is full-text search across every resource — agents, skills, instructions, hooks, workflows, and plugins — with category filters to narrow things down. Each resource has its own page with a modal preview, so you can see exac...

Shining a light on .NET versions across our organisation with OpenTelemetry

At our organisation, which runs a large fleet of .NET services, a deceptively simple question can be surprisingly hard to answer: which versions of .NET are our apps actually running in production? You'd think this would be easy. It isn't. Services get deployed, teams move on, and before long nobody is quite sure whether that one legacy service is still on .NET 6 — or even .NET Core 3.1. Spreadsheets fall out of date. README files lie. The only source of truth is what's actually running. We solved this with three lines of OpenTelemetry configuration. The problem We run dozens of .NET services across multiple teams. We are in the middle of a push to .NET 10, but we have no reliable, centralised way to see the current state. We wanted to answer questions like: Which services are still on end-of-life .NET versions? Which teams still have work to do? After a migration wave, how do we confirm everything moved? The solution We already had OpenTelemetry set up ac...
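The excerpt does not show the three lines, but a plausible shape — assuming the OpenTelemetry.Resources.ProcessRuntime resource detector package, whose detector stamps the runtime identity onto exported telemetry — would be:

```csharp
// Assumes the OpenTelemetry.Resources.ProcessRuntime package; its detector
// adds process.runtime.name / process.runtime.version /
// process.runtime.description to every exported trace and metric.
builder.Services.AddOpenTelemetry()
    .ConfigureResource(resource => resource.AddProcessRuntimeDetector())
    .WithTracing(tracing => tracing.AddOtlpExporter())
    .WithMetrics(metrics => metrics.AddOtlpExporter());
```

Once those attributes flow into your backend, answering "who is still on .NET 6?" becomes a query over process.runtime.version instead of a spreadsheet hunt.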