
Posts

Accessing Microsoft Fabric data locally with OneLake file explorer

If you've spent any time working with Microsoft Fabric, you know that navigating to the web portal every time you need to inspect, upload, or tweak a file gets old fast. OneLake File Explorer is Microsoft's answer to that friction — a lightweight Windows application that mounts your entire Fabric data estate directly in Windows File Explorer, the same way OneDrive handles your documents. One... what? OneLake is the unified data lake underpinning every Microsoft Fabric tenant. Unlike traditional architectures where teams maintain separate data lakes per domain or business unit, every Fabric tenant gets exactly one OneLake — one place where Lakehouses, Warehouses, KQL databases, and other Fabric items store their data. There's no need to copy data between engines; Spark, SQL, and Power BI all read from the same underlying storage. The organizational hierarchy is straightforward: Tenant → Workspaces → Items (Lakehouses, Warehouses, etc.) → Files/Tables. This maps neat...
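
Once the explorer is syncing, that hierarchy shows up as ordinary folders on disk, so anything that can read a file path can read your Fabric data. Below is a minimal sketch in Python, assuming the default %USERPROFILE%\OneLake sync location; the MyWorkspace and Sales.Lakehouse names are placeholders for illustration, not taken from the post.

```python
# Minimal sketch: walk the locally synced OneLake folder with plain Python.
# Assumptions: OneLake File Explorer is installed and syncing to the default
# %USERPROFILE%\OneLake location; workspace and lakehouse names are placeholders.
import os
from pathlib import Path

onelake_root = Path(os.environ["USERPROFILE"]) / "OneLake"

# The folder structure mirrors Fabric: workspace -> item -> Files/Tables
files_folder = onelake_root / "MyWorkspace" / "Sales.Lakehouse" / "Files"

for path in files_folder.rglob("*"):
    if path.is_file():
        print(path.relative_to(onelake_root))
```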
Recent posts

Azure Pipelines – Failed to set a git tag

I mostly use the built-in functionality to set a tag on a specific commit after a successful release. However, in this case I was contacted by a colleague who was using the Git Tag task . Unfortunately, he couldn’t get the task working. A look at the build log made it obvious what the problem was: Starting: GitTag ============================================================================== Task         : Git Tag Description  : A simple task that tags a commit Version      : 7.0.0 Author       : ATP P&I IT Help         : tags the current commit with a specified tag. ### Prerequisites * Repository must be VSTS Git. * Allow scripts to access Oauth must be **Enabled** * Project Collection Build Service must have **Contribute** & **Create Tag** set to **Allow** or **Inherit Allow** for that particular repository =======...
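
For context, the same prerequisites apply if you skip the marketplace task and tag the commit yourself from a script. Here's a rough sketch in Python, assuming the OAuth token is exposed to the script as SYSTEM_ACCESSTOKEN (in YAML pipelines that usually means mapping env: SYSTEM_ACCESSTOKEN: $(System.AccessToken)); the tag name is a placeholder.

```python
# Sketch: tag the current commit from an Azure Pipelines job with plain git.
# Assumes the repo is checked out, "Allow scripts to access the OAuth token"
# is enabled, and the token is available to the script as SYSTEM_ACCESSTOKEN.
import os
import subprocess

tag_name = "v1.2.3"  # hypothetical tag name
token = os.environ["SYSTEM_ACCESSTOKEN"]

# Create the tag locally, then push it using the build's OAuth token for auth.
subprocess.run(["git", "tag", tag_name], check=True)
subprocess.run(
    [
        "git",
        "-c", f"http.extraheader=AUTHORIZATION: bearer {token}",
        "push", "origin", tag_name,
    ],
    check=True,
)
```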

How I built a custom agent skill to configure Application Insights

If you've ever found yourself repeating the same Azure setup ritual — adding the Application Insights SDK, wiring up telemetry, configuring sampling rules — you already know the pain. It's not hard, but it's tedious. Every new service needs the same scaffolding. Every new team member has to learn the same conventions. That's exactly what I solved with a custom skill. Now, when I need to instrument a service, I just tell Copilot to configure Application Insights, and it does everything exactly the way our team expects. No extra prompting, no re-explaining our conventions. It just works. This post explains what Skills are, how they work inside VS Code, and how to build one for your own team — using my Application Insights skill as a hands-on example. What is an agent skill? An agent skill is a folder of instructions, scripts, and reference files that teaches your AI agent how to handle a specific task. Think of it as institutional knowledge made executable. Instea...
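
To make the "same scaffolding" concrete, here's the kind of wiring the skill takes off your plate. This is a minimal sketch using the Azure Monitor OpenTelemetry distro for Python, not the actual contents of my skill; the APPLICATIONINSIGHTS_CONNECTION_STRING environment variable is the usual way to pass the connection string.

```python
# Minimal sketch of the boilerplate the skill automates: wiring a Python
# service to Application Insights via the Azure Monitor OpenTelemetry distro.
# The connection string is read from an environment variable (placeholder setup).
import os

from azure.monitor.opentelemetry import configure_azure_monitor
from opentelemetry import trace

# One call sets up trace, metric, and log export to Application Insights.
configure_azure_monitor(
    connection_string=os.environ["APPLICATIONINSIGHTS_CONNECTION_STRING"],
)

tracer = trace.get_tracer(__name__)

with tracer.start_as_current_span("startup-check"):
    print("telemetry wired up")
```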

I didn't notice this VS Code feature until it made me question how I code

I was working on a refactoring using VS Code the other day when I noticed something I hadn't seen before: a tiny bar chart quietly living in the status bar, tracking my AI vs. manual typing usage over the last 24 hours. It's called AI Statistics, and it shipped in VS Code 1.103. To enable it, open settings and search for "AI stats" — flip the checkbox, and a small gauge appears in the bottom-right of your status bar. Hover over it and you get a breakdown: how much of your recent code came from AI completions versus your own keystrokes. On the surface it sounds like a novelty. But I found myself actually pausing when I saw the numbers. It reframed something I hadn't really thought about consciously: not whether AI coding tools are good or bad, but just how much I'm actually leaning on them day to day. That visibility is weirdly valuable. It's the kind of data point that makes you more intentional — maybe you lean in harder on AI for boilerplate an...

ActionFlix: Because even rom-coms deserve an explosion

Last year on Valentine's Day I built a Romantic Movie Generator — an app that used AI to turn action movies into sweeping romantic dramas. Die Hard became a tender love story about a man who just wanted to spend Christmas with his wife. It was fun, it was silly, and it required a surprising amount of hand-holding to get the AI to behave. At the time, a colleague took my idea and crafted his own version: Loveflix. This year, my partner made it abundantly clear that another "action movie as romance" project wasn't going to cut it for February 14th. Fair enough. So I did what any reasonable developer does under domestic pressure: I flipped the concept entirely. Building on my colleague's version, I created ActionFlix: turn any rom-com into a high-octane action thriller. Because Love Actually is basically a heist movie if you squint hard enough. Same concept, inverted. Sweet home setup, chaos onscreen. Points successfully gained. But here's ...

Building an end-to-end monitoring solution with Azure Arc, Log Analytics and Workbooks - Part 5: Putting it all together

Wow! We covered a lot in this series. Part 1 – Overview & Architecture. Part 2 – Data collection with Azure Arc. Part 3 – Data persistence in Log Analytics. Part 4 – Data visualization with Azure Workbooks. Time for a wrap-up and some troubleshooting. Let's trace the data flow from start to finish to make sure everything connects: The Azure Monitor Agent runs on each Arc-enabled on-prem VM. The Data Collection Rule tells the agent what health data to gather — application pools, Windows services, and scheduled tasks. The agent collects that data on a regular interval and ships it to Azure. The DCR routes the incoming data to our custom table (OnPremHealthStatus_CL) in the Log Analytics Workspace. The Workbook queries that table and renders the dashboard. If any link in that chain breaks, data stops flowing. The troubleshooting section below covers the most common failure points. Troubleshooting checklist No data appearing in the workbook: ...
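
When the workbook comes up empty, the quickest check is to query the custom table directly and see whether any rows are landing at all. Here's a small sketch with the azure-monitor-query Python client; the workspace ID is a placeholder, and only the standard TimeGenerated column is assumed to exist in the custom table.

```python
# Sanity check for "no data appearing in the workbook": query the custom
# table directly. Sketch only; workspace_id is a placeholder and the table
# name (OnPremHealthStatus_CL) comes from the series.
from datetime import timedelta

from azure.identity import DefaultAzureCredential
from azure.monitor.query import LogsQueryClient

client = LogsQueryClient(DefaultAzureCredential())

query = """
OnPremHealthStatus_CL
| summarize Rows = count(), LastSeen = max(TimeGenerated)
"""

response = client.query_workspace(
    workspace_id="<log-analytics-workspace-id>",  # placeholder
    query=query,
    timespan=timedelta(hours=24),
)

for table in response.tables:
    for row in table.rows:
        print(row)  # zero rows or a stale LastSeen points at the agent/DCR side
```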

Copilot Memory in VS Code: Your AI assistant just got smarter

If you've ever found yourself repeatedly correcting GitHub Copilot with the same preferences or re-explaining your team's coding standards in every chat session, the January 2026 release of VS Code brings a possible solution: Copilot Memory. What Is Copilot Memory? Copilot Memory is a new feature that allows GitHub Copilot to remember important context across your coding sessions. Think of it as giving your AI assistant a notebook where it can jot down your preferences, team conventions, and project-specific guidelines—and actually refer back to them later. Released as a preview feature in VS Code version 1.109 (January 2026), Copilot Memory changes how you interact with AI-powered coding assistance by making your conversations with Copilot persistent and personalized. How it works The magic of Copilot Memory happens through a new memory tool that's integrated directly into VS Code's chat interface. Here's how it works: Intelligent detection Copilot a...