
Posts

Building an end-to-end monitoring solution with Azure Arc, Log Analytics and Workbooks - Part 5: Putting it all together

Wow! We covered a lot in this series.

Part 1 – Overview & Architecture
Part 2 – Data collection with Azure Arc
Part 3 – Data persistence in Log Analytics
Part 4 – Data visualization with Azure Workbooks

Time for a wrap-up and some troubleshooting. Let's trace the data flow from start to finish to make sure everything connects: The Azure Monitor Agent runs on each Arc-enabled on-prem VM. The Data Collection Rule tells the agent what health data to gather: application pools, Windows services, and scheduled tasks. The agent collects that data on a regular interval and ships it to Azure. The DCR routes the incoming data to our custom table (OnPremHealthStatus_CL) in the Log Analytics Workspace. The Workbook queries that table and renders the dashboard.

If any link in that chain breaks, data stops flowing. The troubleshooting section below covers the most common failure points.

Troubleshooting checklist

No data appearing in the workbook: ...
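A quick first check for the "no data" case is to confirm whether the custom table is receiving rows at all; if nothing comes back, the break is upstream of Log Analytics (agent, DCR, or the DCR association). A minimal KQL sketch, assuming the table carries a Computer column (adjust to your actual schema):

OnPremHealthStatus_CL
| where TimeGenerated > ago(1h)                            // only look at the last hour of ingestion
| summarize LastRecord = max(TimeGenerated), Records = count() by Computer
| order by LastRecord desc                                 // machines that went silent drop off this list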
Recent posts

Copilot Memory in VS Code: Your AI assistant just got smarter

If you've ever found yourself repeatedly correcting GitHub Copilot with the same preferences or re-explaining your team's coding standards in every chat session, the January 2026 release of VS Code brings a possible solution: Copilot Memory.

What Is Copilot Memory?

Copilot Memory is a new feature that allows GitHub Copilot to remember important context across your coding sessions. Think of it as giving your AI assistant a notebook where it can jot down your preferences, team conventions, and project-specific guidelines—and actually refer back to them later. Released as a preview feature in VS Code version 1.109 (January 2026), Copilot Memory changes how you interact with AI-powered coding assistance by making your conversations with Copilot persistent and personalized.

How it works

The magic of Copilot Memory happens through a new memory tool that's integrated directly into VS Code's chat interface. Here's how it works:

Intelligent detection

Copilot a...

Building an end-to-end monitoring solution with Azure Arc, Log Analytics and Workbooks–Part 4: Data visualisation with Azure Workbooks

In part 1 I explained that we want to set up an application health dashboard to gain insights into the availability and health of the on-premises parts of our applications. Specifically, we want to monitor our application pools, scheduled tasks and Windows services. I introduced the overall architecture and explained the building blocks. Part 2 was all about the data collection part using Azure Arc Data Collection Rules. I continued in Part 3 with our custom table in Log Analytics to persist our data. And today it is time for Part 4, where I share how we visualize all this info using Azure Workbooks.

What we're visualizing

The workbook is the user-facing piece. Our goal is a dashboard that lets an operator quickly answer three questions: What's running? What's stopped or failed? Which machines need attention? A good health dashboard has two modes: the "glance" mode where an operator can immediately see if anything is wrong, and the "investigate" mod...
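As an illustration of the kind of query a workbook grid could run for the "glance" view, here is a minimal KQL sketch that surfaces the latest reported state per monitored resource. The column names (Computer, ResourceType, ResourceName, Status) are assumptions for illustration; substitute the actual schema of the custom table.

OnPremHealthStatus_CL
| where TimeGenerated > ago(30m)                           // only consider recent samples
| summarize arg_max(TimeGenerated, Status) by Computer, ResourceType, ResourceName
| order by Status asc, Computer asc                        // rows with the same status cluster together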

VSCode–Finetune your AI instructions with /init

If you're using GitHub Copilot in Visual Studio Code, there's a powerful new feature that can save you time and make your AI-powered development workflow more efficient: the /init command. This slash command provides a quick way to set up custom instructions for your workspace or adapt your existing instructions to the specific project context, helping you establish consistent coding practices and AI responses across your projects.

What is the /init command?

The /init command is a chat slash command in VSCode that helps you quickly prime your workspace with custom instructions for GitHub Copilot. When you type /init in the chat input box, it automatically generates a .github/copilot-instructions.md file tailored to your workspace. Think of it as a quick-start wizard for setting up AI guidelines that will influence how Copilot generates code and handles development tasks throughout your project.

How to use the /init command?

Using the /init command is straightforw...

Building an end-to-end monitoring solution with Azure Arc, Log Analytics and Workbooks–Part 3: Data persistence in Log Analytics

In part 1 I explained that we want to set up an application health dashboard to gain insights into the availability and health of the on-premises parts of our applications. Specifically, we want to monitor our application pools, scheduled tasks and Windows services. I introduced the overall architecture and explained the building blocks. Part 2 was all about the data collection part using Azure Arc Data Collection Rules. Today I'll focus on how we used a custom table in Log Analytics to persist our data.

Why a custom table

The built-in Windows event logs in Log Analytics (the Event table) contain a lot of data, but the format isn't optimized for health-status queries. Parsing event log XML to extract service states or scheduled task results on every query adds latency and complexity. When you query the Event table for service state changes, you're filtering through thousands of rows, parsing semi-structured XML from the EventData column, and then correlating multiple ev...
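To make the contrast concrete, this is roughly what the health question looks like against a purpose-built table: no XML parsing, just a summarize over typed columns. The column names here are hypothetical; the real schema is whatever you define for the custom table.

OnPremHealthStatus_CL
| summarize arg_max(TimeGenerated, Status) by Computer, ResourceName   // latest record per resource
| where Status != "Running"                                            // keep only unhealthy items
| project TimeGenerated, Computer, ResourceName, Status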

Building an end-to-end monitoring solution with Azure Arc, Log Analytics and Workbooks–Part 2: Data collection with Azure Arc

In part 1 I explained that we want to set up an application health dashboard to gain insights into the availability and health of the on-premises parts of our applications. Specifically, we want to monitor our application pools, scheduled tasks and Windows services. I introduced the overall architecture and explained the building blocks. Today we'll dive into the first of these blocks: the data collection part using Azure Arc Data Collection Rules.

Understanding Data Collection Rules

A Data Collection Rule (DCR) is a declarative configuration object in Azure that defines the full lifecycle of telemetry: what to collect, how to transform it, and where to send it. It's the connective tissue between the Azure Monitor Agent running on your VMs and the Log Analytics Workspace where the data lands. DCRs replaced the older model where agents were configured locally via XML files. The new model is centralized — you define the DCR in Azure, associate it with your VMs, and the agent...
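The "how to transform it" part of a DCR is itself expressed in KQL: a statement over the virtual source table, supplied through the rule's transformKql property. A hedged sketch of what such an ingestion-time transformation could look like; the column names are illustrative, not the exact schema used in this series:

source
| where isnotempty(Status)                                 // drop records without a status before they are ingested
| project TimeGenerated, Computer, ResourceType, ResourceName, Status   // keep only the columns the destination table needs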

Building an end-to-end monitoring solution with Azure Arc, Log Analytics and Workbooks–Part 1: Overview & Architecture

On-premises VMs don't disappear just because you are working on a cloud strategy. We are running a lot of Windows workloads on-prem — application pools, Windows services, scheduled tasks — and still need visibility into whether they're healthy. Traditional on-prem monitoring solutions could work, but they come with their own operational overhead and are directly tied to our on-premises infrastructure. When an incident happens, we don't want to context-switch between our cloud monitoring stack and our on-prem monitoring stack. It's not ideal. We wanted a single, cloud-native view into the health of our on-prem workloads without having to lift and shift them into Azure. Azure Arc made this possible by extending Azure's management plane to our on-premises infrastructure. By combining Arc with Log Analytics and Workbooks, we built a unified health dashboard that sits alongside our cloud monitoring, uses the same query language (KQL), and requires no additional on-prem in...