
Posts

ADFS policies vs authorization rules - understanding the difference

While preparing our MFA rollout at the ADFS level, we started making the switch from classic authorization rules to custom access control policies in ADFS. This post explains the difference and the rationale behind this switch.

A tale of two mechanisms

When you work with Active Directory Federation Services (ADFS), there are two ways to control what happens when a user tries to authenticate: authorization rules and access control policies. On the surface, they feel similar; both let you define conditions around user access. But under the hood, they represent two distinct generations of the same capability. Understanding the difference matters especially when implementing MFA, because the mechanism you choose affects flexibility, maintainability, and how cleanly your logic can scale.

Authorization rules: the classic approach

Authorization rules are the original ADFS mechanism, introduced back when claims-based identity was first baked into the platform. They use a proprietary la...
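The excerpt cuts off before showing what that proprietary language looks like. For readers unfamiliar with it, a representative issuance authorization rule in ADFS's claim rule language looks like this (the group SID is a placeholder, not a value from the post):

```
@RuleName = "Permit members of a security group"
c:[Type == "http://schemas.microsoft.com/ws/2008/06/identity/claims/groupsid",
   Value == "S-1-5-21-0000000000-0000000000-0000000000-1234"]
 => issue(Type = "http://schemas.microsoft.com/authorization/claims/permit",
          Value = "true");
```

The rule matches an incoming group-SID claim and issues a permit claim; anything this terse quickly becomes hard to maintain, which is part of the rationale for moving to access control policies.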
Recent posts

Keep your context short - Manual compaction is now available in VS Code

In my earlier posts about the GitHub Copilot CLI, I already introduced the /compact command, a slash command that summarizes your conversation history to free up context space, letting you keep working in the same session without losing momentum. Well, good news! It's no longer CLI-only. The February 2026 release of VS Code brings /compact directly into the editor, and it's part of a much broader story about making agents actually usable for the kind of long, messy, real-world tasks developers deal with every day.

The context window problem

Here's what happens without context compaction: you start an agent session, ask it to dig into a complex feature, go back and forth a few times, and eventually the conversation grows so large that the model starts losing the thread, or the session simply stops. You're forced to start over, re-explain everything, and lose all the accumulated understanding the agent had built up. Context compaction solves this by summarizing...

Git for Windows 2.49.0 broke my Azure DevOps pushes

After a routine Visual Studio update silently upgraded Git for Windows to version 2.49.0, pushes to Azure DevOps started failing with a cryptic NTLM authentication error — even though our setup was supposed to use Kerberos. Here's what happened and how to fix it in 30 seconds.

The symptoms

You update Visual Studio (or Git for Windows directly), and suddenly any push to your Azure DevOps remote fails. The error mentions NTLM even though your network is configured for Kerberos. Typical signs:

- Git push fails immediately after a Visual Studio or Git for Windows update
- Error references NTLM authentication failure
- Clones and fetches from the same remote may still work
- The remote URL is an internal TFS/Azure DevOps server (e.g. https://tfs.yourcompany.com)

What changed

Git for Windows 2.49.0 (shipped as MinGit 2.49.0) changed how it negotiates authentication for HTTPS remotes. The new default behaviour causes Git to attempt NTLM where it previously fell through ...

Debugging your VSCode agent interactions

If you've spent any time working with GitHub Copilot in agent mode, you've probably hit that frustrating moment: the agent does something unexpected, picks the wrong tool, ignores a prompt file, or just… takes forever, and you have no idea why. Until recently, your only recourse was the raw Chat Debug view: useful, but dense, and not exactly designed for quick diagnosis. That changes with the Agent Debug Log panel, available in preview as of VS Code 1.110.

What it is

The Agent Debug Log panel shows a chronological event log of everything that happens during a chat session, including tool calls, LLM requests, prompt file discovery, and errors. Think of it as a structured, human-readable trace of your agent's entire thought process, rendered right inside VS Code. It replaces the old Diagnostics chat action with a richer, more detailed view, and is particularly valuable as your agent setups grow in complexity: custom instructions, multiple prompt files, MCP servers, and...

Icons for Tools, Resources, and Prompts–Because a picture is worth a thousand words

With the ever-growing list of MCP servers and supported tools, it is hard to spot the right tool. With the v1.0 release of the official MCP C# SDK, you can make it a little bit easier to discover your tools thanks to the introduction of icon support — tools, resources, and prompts can now carry icon metadata that clients can display in their UIs.

Because a picture is worth a thousand words

MCP servers expose tools, resources, and prompts through list endpoints (tools/list, resources/list, prompts/list). Up until now, those lists were purely textual — names and descriptions. With icons, client applications like MCP Inspector or AI agent UIs can render visual identifiers alongside each item, making large tool catalogs much easier to navigate at a glance.

The simple case: a single icon via attribute

The quickest way to add an icon to a tool is through the IconSource parameter on the [McpServerTool] attribute:

The same IconSource parameter is available on [McpServerResour...
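The excerpt's code sample after "attribute:" was lost in extraction. Based on the IconSource parameter the post names, a hypothetical C# sketch could look like the following (the parameter name comes from the post; the method shape and icon URL are illustrative, not the SDK's exact API):

```csharp
// Hypothetical sketch: IconSource is named in the post, everything else
// (tool name, icon URL, method body) is illustrative.
[McpServerTool(Name = "get_weather",
               IconSource = "https://example.com/icons/weather.png")]
[Description("Returns the current weather for a city.")]
public static string GetWeather(string city)
{
    return $"Sunny in {city}";
}
```

A client that calls tools/list would then receive the icon metadata alongside the tool's name and description and can render it in its UI.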

GitHub Copilot–Format your code using hooks

After giving a GitHub Copilot training last week where I introduced the concept of hooks, one of the attendees asked me what a good example of a hook would be. Great question! A first use case I could think of: use a hook to format the AI-generated code to match the style preferences and static analysis recommendations specified in an .editorconfig file.

Tip: If you are looking for some inspiration, check out the hooks section in Awesome Copilot: awesome-copilot/docs/README.hooks.md at main · github/awesome-copilot

What event should we use?

There are multiple hook events that you can use: sessionStart, sessionEnd, userPromptSubmitted, preToolUse, postToolUse, and errorOccurred. As the formatting should be done after every code change, postToolUse seems the logical choice.

Why not at sessionEnd?

postToolUse formats the file immediately after each edit. This means the agent sees clean, correctly structured usings before it reads the file again for its ...
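To make the idea concrete, here is a minimal Python sketch of what a postToolUse hook script could do. The payload shape ("editedFiles") is an assumption for illustration; the real hook payload schema is defined by GitHub Copilot's hooks documentation, not by this sketch.

```python
import subprocess

def select_targets(payload: dict) -> list[str]:
    """Pick the edited C# files that should be re-formatted.

    The "editedFiles" key is a hypothetical payload field for this sketch.
    """
    return [f for f in payload.get("editedFiles", []) if f.endswith(".cs")]

def run_formatter(paths: list[str]) -> None:
    for path in paths:
        # 'dotnet format' re-applies the .editorconfig style rules to a file.
        subprocess.run(["dotnet", "format", "--include", path], check=False)

# Example payload a postToolUse event might deliver after an edit:
demo = {"toolName": "editFile", "editedFiles": ["src/OrderService.cs", "README.md"]}
print(select_targets(demo))  # → ['src/OrderService.cs']
```

Only the .cs file would be handed to the formatter, so every edit the agent makes comes out already matching the project's style rules.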

CycloneDX 1.7 not yet supported by Dependency-Track

As part of our secure SDLC strategy, we generate an SBOM (Software Bill of Materials) and store it inside Dependency-Track. This gives us a good overview of all our applications, their dependencies, and vulnerable components. However, after upgrading to the latest CycloneDX-dotnet version, our SBOM pipeline turned out to be broken.

The problem

When uploading an XML-based Software Bill of Materials (SBOM) to Dependency-Track, we started to encounter a 400 – Bad Request response. The culprit is a version mismatch: a recent update to the dotnet-CycloneDX tool now generates SBOMs in CycloneDX 1.7 format by default — a version that Dependency-Track does not yet support. Dependency-Track validates incoming SBOMs against its supported schemas. When it receives a 1.7 document, schema validation fails and the upload is rejected entirely. The dotnet-CycloneDX package was updated on our build server, silently bumping the default output format from CycloneDX 1.6 to 1.7. No code change, just...
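A cheap pipeline guard can catch this mismatch before the upload fails. Here is a minimal Python sketch; the set of accepted versions is an assumption (check your Dependency-Track release notes for the real list), and it checks the JSON flavour of CycloneDX for brevity, even though the post uploads XML.

```python
import json

# Versions our Dependency-Track instance accepts: an assumption for this
# sketch, not a value from the post.
SUPPORTED_SPEC_VERSIONS = {"1.4", "1.5", "1.6"}

def sbom_is_uploadable(sbom_text: str) -> bool:
    """Return True if the SBOM's specVersion is one the server can validate."""
    doc = json.loads(sbom_text)
    return doc.get("specVersion") in SUPPORTED_SPEC_VERSIONS

print(sbom_is_uploadable('{"bomFormat": "CycloneDX", "specVersion": "1.6"}'))  # True
print(sbom_is_uploadable('{"bomFormat": "CycloneDX", "specVersion": "1.7"}'))  # False
```

Failing the build with a clear "unsupported specVersion" message is much friendlier than a bare 400 – Bad Request from the server.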