
Posts

Showing posts from 2026

Azure Pipelines–Failed to set a git tag

I mostly use the built-in functionality to set a tag on a specific commit after a successful release. However, in this case I was contacted by a colleague who was using the Git Tag task. Unfortunately, he couldn’t get the task working. A look at the build log made it obvious what the problem was:

Starting: GitTag
==============================================================================
Task         : Git Tag
Description  : A simple task that tags a commit
Version      : 7.0.0
Author       : ATP P&I IT
Help         : tags the current commit with a specified tag. ### Prerequisites * Repository must be VSTS Git. * Allow scripts to access Oauth must be **Enabled** * Project Collection Build Service must have **Contribute** & **Create Tag** set to **Allow** or **Inherit Allow** for that particular repository
=======...

How I built a custom agent skill to configure Application Insights

If you've ever found yourself repeating the same Azure setup ritual — adding the Application Insights SDK, wiring up telemetry, configuring sampling rules — you already know the pain. It's not hard, but it's tedious. Every new service needs the same scaffolding. Every new team member has to learn the same conventions. That's exactly what I solved with a custom skill. Now, when I need to instrument a service, I just tell Copilot to configure Application Insights, and it does everything exactly the way our team expects. No extra prompting, no re-explaining our conventions. It just works. This post explains what Skills are, how they work inside VS Code, and how to build one for your own team — using my Application Insights skill as a hands-on example. What is an agent skill? An agent skill is a folder of instructions, scripts, and reference files that teaches your AI agent how to handle a specific task. Think of it as institutional knowledge made executable. Instea...

I didn't notice this VS Code feature until it made me question how I code

I was working on a refactoring using VS Code the other day when I noticed something I hadn't seen before: a tiny bar chart quietly living in the status bar, tracking my AI vs. manual typing usage over the last 24 hours. It's called AI Statistics, and it shipped in VS Code 1.103. To enable it, open settings and search for "AI stats" — flip the checkbox, and a small gauge appears in the bottom-right of your status bar. Hover over it and you get a breakdown: how much of your recent code came from AI completions versus your own keystrokes. On the surface it sounds like a novelty. But I found myself actually pausing when I saw the numbers. It reframed something I hadn't really thought about consciously: not whether AI coding tools are good or bad, but just how much I'm actually leaning on them day to day. That visibility is weirdly valuable. It's the kind of data point that makes you more intentional — maybe you lean in harder on AI for boilerplate an...

ActionFlix–Because even rom-coms deserve an explosion

Last year, on Valentine's Day, I built a Romantic Movie Generator — an app that turned action movies into sweeping romantic dramas using AI. Die Hard became a tender love story about a man who just wanted to spend Christmas with his wife. It was fun, it was silly, and it required a surprising amount of hand-holding to get the AI to behave. At the time, a colleague took my idea and crafted his own version: Loveflix. This year, my partner made it abundantly clear that another "action movie as romance" project wasn't going to cut it for February 14th. Fair enough. So I did what any reasonable developer does under domestic pressure: I flipped the concept entirely. Building on top of my colleague's version, I created ActionFlix: turn any rom-com into a high-octane action thriller. Because Love Actually is basically a heist movie if you squint hard enough. Same concept, inverted. Sweet home setup, chaos onscreen. Points successfully gained. But here's ...

Building an end-to-end monitoring solution with Azure Arc, Log Analytics and Workbooks - Part 5: Putting it all together

Wow! We covered a lot in this series.

Part 1 – Overview & Architecture
Part 2 – Data collection with Azure Arc
Part 3 – Data persistence in Log Analytics
Part 4 – Data visualization with Azure Workbooks

Time for a wrap-up and some troubleshooting. Let's trace the data flow from start to finish to make sure everything connects:

The Azure Monitor Agent runs on each Arc-enabled on-prem VM.
The Data Collection Rule tells the agent what health data to gather — application pools, Windows services, and scheduled tasks.
The agent collects that data on a regular interval and ships it to Azure.
The DCR routes the incoming data to our custom table (OnPremHealthStatus_CL) in the Log Analytics Workspace.
The Workbook queries that table and renders the dashboard.

If any link in that chain breaks, data stops flowing. The troubleshooting section below covers the most common failure points.

Troubleshooting checklist

No data appearing in the workbook: ...
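A quick way to work through that first checklist item is to query the custom table directly and bypass the workbook entirely. Below is a minimal sketch using the Azure.Monitor.Query SDK against the OnPremHealthStatus_CL table from Part 3. The workspace ID placeholder and the Computer column name are assumptions on my side; adjust them to whatever schema you ended up with.

using System;
using System.Threading.Tasks;
using Azure;
using Azure.Identity;
using Azure.Monitor.Query;
using Azure.Monitor.Query.Models;

class HealthTableCheck
{
    static async Task Main()
    {
        // Assumption: the workspace ID of the Log Analytics Workspace from Part 3.
        const string workspaceId = "<your-workspace-id>";

        var client = new LogsQueryClient(new DefaultAzureCredential());

        // Count recent rows per machine. Assumption: the custom table has a Computer column.
        // If this returns nothing, the problem sits upstream of the workbook
        // (agent, DCR association or table), not in the workbook itself.
        Response<LogsQueryResult> response = await client.QueryWorkspaceAsync(
            workspaceId,
            "OnPremHealthStatus_CL | where TimeGenerated > ago(1h) | summarize Rows = count() by Computer",
            new QueryTimeRange(TimeSpan.FromHours(1)));

        foreach (LogsTableRow row in response.Value.Table.Rows)
        {
            var computer = row["Computer"];
            var rows = row["Rows"];
            Console.WriteLine($"{computer}: {rows} rows in the last hour");
        }
    }
}

If the query returns rows but the workbook still shows nothing, the issue is on the visualization side; if it returns nothing, work backwards through the chain above.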

Copilot Memory in VS Code: Your AI assistant just got smarter

If you've ever found yourself repeatedly correcting GitHub Copilot with the same preferences or re-explaining your team's coding standards in every chat session, the January 2026 release of VS Code brings a possible solution: Copilot Memory. What Is Copilot Memory? Copilot Memory is a new feature that allows GitHub Copilot to remember important context across your coding sessions. Think of it as giving your AI assistant a notebook where it can jot down your preferences, team conventions, and project-specific guidelines—and actually refer back to them later. Released as a preview feature in VS Code version 1.109 (January 2026), Copilot Memory changes how you interact with AI-powered coding assistance by making your conversations with Copilot persistent and personalized. How it works The magic of Copilot Memory happens through a new memory tool that's integrated directly into VS Code's chat interface. Here's how it works: Intelligent detection Copilot a...

Building an end-to-end monitoring solution with Azure Arc, Log Analytics and Workbooks–Part 4: Data visualisation with Azure Workbooks

In Part 1 I explained that we want to set up an application health dashboard to gain insights into the availability and health of the on-premises parts of our applications. Specifically, we want to monitor our application pools, scheduled tasks and Windows services. I introduced the overall architecture and explained the building blocks. Part 2 was all about the data collection part using Azure Arc Data Collection Rules. I continued in Part 3 with our custom table in Log Analytics to persist our data. And today it is time for Part 4, where I share how I visualize all this info using Azure Workbooks. What we're visualizing The workbook is the user-facing piece. Our goal is a dashboard that lets an operator quickly answer three questions: What's running? What's stopped or failed? Which machines need attention? A good health dashboard has two modes: the "glance" mode where an operator can immediately see if anything is wrong, and the "investigate" mod...

VSCode–Finetune your AI instructions with /init

If you're using GitHub Copilot in Visual Studio Code, there's a powerful new feature that can save you time and make your AI-powered development workflow more efficient: the /init command. This slash command provides a quick way to set up custom instructions for your workspace or adapt your existing instructions to the specific project context, helping you establish consistent coding practices and AI responses across your projects. What is the /init command? The /init command is a chat slash command in VSCode that helps you quickly prime your workspace with custom instructions for GitHub Copilot. When you type /init in the chat input box, it automatically generates a .github/copilot-instructions.md file tailored to your workspace. Think of it as a quick-start wizard for setting up AI guidelines that will influence how Copilot generates code and handles development tasks throughout your project. How to use the /init command? Using the /init command is straightforw...

Building an end-to-end monitoring solution with Azure Arc, Log Analytics and Workbooks–Part 3: Data persistence in Log Analytics

In Part 1 I explained that we want to set up an application health dashboard to gain insights into the availability and health of the on-premises parts of our applications. Specifically, we want to monitor our application pools, scheduled tasks and Windows services. I introduced the overall architecture and explained the building blocks. Part 2 was all about the data collection part using Azure Arc Data Collection Rules. Today I’ll focus on how we used a custom table in Log Analytics to persist our data. Why a custom table The built-in Windows event logs in Log Analytics (the Event table) contain a lot of data, but the format isn't optimized for health-status queries. Parsing event log XML to extract service states or scheduled task results on every query adds latency and complexity. When you query the Event table for service state changes, you're filtering through thousands of rows, parsing semi-structured XML from the EventData column, and then correlating multiple ev...

Building an end-to-end monitoring solution with Azure Arc, Log Analytics and Workbooks–Part 2: Data collection with Azure Arc

In Part 1 I explained that we want to set up an application health dashboard to gain insights into the availability and health of the on-premises parts of our applications. Specifically, we want to monitor our application pools, scheduled tasks and Windows services. I introduced the overall architecture and explained the building blocks. Today we'll dive into the first of these blocks: the data collection part, using Azure Arc Data Collection Rules. Understanding Data Collection Rules A Data Collection Rule (DCR) is a declarative configuration object in Azure that defines the full lifecycle of telemetry: what to collect, how to transform it, and where to send it. It's the connective tissue between the Azure Monitor Agent running on your VMs and the Log Analytics Workspace where the data lands. DCRs replaced the older model where agents were configured locally via XML files. The new model is centralized — you define the DCR in Azure, associate it with your VMs, and the agent...

Building an end-to-end monitoring solution with Azure Arc, Log Analytics and Workbooks–Part 1: Overview & Architecture

On-premises VMs don't disappear just because you are working on a cloud strategy. We are running a lot of Windows workloads on-prem — application pools, Windows services, scheduled tasks — and still need visibility into whether they're healthy. Traditional on-prem monitoring solutions could work, but they come with their own operational overhead and are directly tied to our on-premises infrastructure. When an incident happens, having to context-switch between our cloud monitoring stack and our on-prem monitoring stack is not ideal. We wanted a single, cloud-native view into the health of our on-prem workloads without having to lift and shift them into Azure. Azure Arc made this possible by extending Azure's management plane to our on-premises infrastructure. By combining Arc with Log Analytics and Workbooks, we built a unified health dashboard that sits alongside our cloud monitoring, uses the same query language (KQL), and requires no additional on-prem in...

Creating recursion in TPL Dataflow with LinkTo predicates

In the previous post, I showed how to use LinkTo predicates to route messages conditionally across different blocks. Today, we're going to take that concept a step further and do something that surprises most developers the first time they see it: link a block back to itself to create recursion — entirely through the dataflow graph, with no explicit recursive method calls. The core idea Traditional recursion involves a function calling itself. In TPL Dataflow, we achieve the same result structurally: a block's output is linked back to its own input via a predicate. Messages that match the "recurse" condition loop back, while messages that match the "base case" condition flow forward. The dataflow runtime handles the iteration for us. Sounds complicated? An example will make it clear immediately. A good example to illustrate this is walking a directory tree and computing an MD5 hash for every file in it. Directories need to be expanded ...
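The excerpt above cuts off before the code, so here is a minimal sketch of the shape such a self-linking pipeline can take. The block names, the use of a TransformManyBlock for the expansion step, the hypothetical root folder and the very crude completion handling are my own simplifications for illustration, not necessarily what the full post does.

using System;
using System.IO;
using System.Security.Cryptography;
using System.Threading.Tasks;
using System.Threading.Tasks.Dataflow;

class RecursiveHashSketch
{
    static async Task Main()
    {
        // Expands a directory into its children; passes files through unchanged.
        var expand = new TransformManyBlock<string, string>(path =>
            Directory.Exists(path)
                ? Directory.EnumerateFileSystemEntries(path)
                : new[] { path });

        // "Base case": compute an MD5 hash for every file that reaches this block.
        var hash = new ActionBlock<string>(file =>
        {
            using var md5 = MD5.Create();
            using var stream = File.OpenRead(file);
            Console.WriteLine($"{BitConverter.ToString(md5.ComputeHash(stream))}  {file}");
        });

        // The "recursive" link: anything that is a directory loops back into expand.
        expand.LinkTo(expand, path => Directory.Exists(path));

        // The "forward" link: files flow on to the hashing block.
        expand.LinkTo(hash, path => File.Exists(path));

        expand.Post(@"C:\SomeFolder"); // hypothetical root folder

        // Completion is the tricky part of a self-linked graph: the block can't know
        // by itself when no more work will loop back, so a real implementation should
        // track outstanding items before calling Complete(). A fixed delay keeps this
        // sketch short, nothing more.
        await Task.Delay(TimeSpan.FromSeconds(5));
        expand.Complete();
        hash.Complete();
        await hash.Completion;
    }
}

One thing to watch out for: entries matching neither predicate (for example a path that disappears between enumeration and routing) stay buffered in the expand block, so a production version should add a catch-all link, for example to DataflowBlock.NullTarget<string>().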

Creating conditional flows in TPL Dataflow with LinkTo predicates

While building a data processing pipeline with TPL Dataflow, I needed to route messages to different blocks based on specific conditions. The LinkTo method's predicate parameter is the feature I needed to create branching logic in my dataflow network. In this post, I explore how to use predicates to build conditional flows that are both efficient and maintainable. Understanding LinkTo predicates The LinkTo method in TPL Dataflow connects a source block to a target block, creating a pipeline for data to flow through. The method signature includes an optional predicate parameter: The predicate is a function that evaluates each message and returns true if the message should be sent to the target block, or false if it should be offered to the next linked block in the chain. A simple example Let's start with a straightforward example that demonstrates the basic concept. We'll create a pipeline that routes even numbers to one block and odd numbers to another: ...
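Since the excerpt stops right before the code, here is a minimal sketch of the kind of even/odd split described above, assuming a BufferBlock as the source and two ActionBlock targets (the block names are mine, not necessarily those used in the full post).

using System;
using System.Threading.Tasks;
using System.Threading.Tasks.Dataflow;

class EvenOddRoutingSketch
{
    static async Task Main()
    {
        // Two target blocks: one handles even numbers, the other odd numbers.
        var evenBlock = new ActionBlock<int>(n => Console.WriteLine($"Even: {n}"));
        var oddBlock  = new ActionBlock<int>(n => Console.WriteLine($"Odd:  {n}"));

        var source = new BufferBlock<int>();
        var linkOptions = new DataflowLinkOptions { PropagateCompletion = true };

        // Each message is offered to the links in order; the first predicate that
        // returns true receives the message.
        source.LinkTo(evenBlock, linkOptions, n => n % 2 == 0);
        source.LinkTo(oddBlock, linkOptions, n => n % 2 != 0);

        for (int i = 1; i <= 10; i++)
        {
            source.Post(i);
        }

        source.Complete();
        await Task.WhenAll(evenBlock.Completion, oddBlock.Completion);
    }
}

Because the two predicates together cover every possible value, no message can get stuck in the source. If your own conditions don't cover everything, add a final catch-all link (for example to DataflowBlock.NullTarget<int>()) so unmatched messages don't block the pipeline.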