
Posts

Cleaner switch expressions with pattern matching in C#

Ever find yourself mapping multiple string values to the same result? Having been a C# developer for a long time, I sometimes forget that C# has evolved, so I still catch myself chaining case labels or reaching for a dictionary. With pattern matching this is no longer necessary: you can express the mapping inline, declaratively, and with zero repetition. A small example: I was working on a small script that should invoke different actions depending on the environment. Our developers were using different variations of the same environment name, e.g. "tst" alongside "test" and "prd" alongside "prod". We asked to streamline this a long time ago, but as these things go, we still see variations in the wild. This brought me to the following code, which is a perfect example for pattern matching: The or keyword here is a logical pattern combinator, not a boolean operator. It matches if either of the specified pattern...
Recent posts

Run SQL queries on local parquet and delta files using DuckDB

Yesterday I showed how we could query local parquet and delta files using pandas and deltalake. Although these libraries work, once you start loading big parquet files you see your system stall while your memory usage spikes. A colleague suggested giving DuckDB a try. I had never heard of it, but let's discover it together. What is DuckDB? DuckDB is an in-process analytical database — think SQLite, but built for OLAP workloads instead of transactional ones. It runs entirely inside your Python process (no server to spin up, no connection string to manage) and is optimized for the kinds of queries data engineers run every day: large scans, aggregations, joins, and window functions over columnar data. A few things that make it stand out: It reads files directly. You don't import data into DuckDB before querying it. You point it at a Parquet file, a folder of Parquet files, or a Delta table, and it queries them in place. No ETL step, no intermediate copy. It's c...

How to work with OneLake files locally using Python

Last week I shared how you could use the OneLake File Explorer to sync your Lakehouse tables to your local machine. It's a convenient way to get your Parquet and Delta Lake files off the cloud and onto disk — but what do you actually do with them once they're there? In this post, I’ll walk you through how to interact with your locally synced OneLake files using Python. We'll cover four practical approaches, with real code you can drop straight into a notebook. Where are your files? When OneLake File Explorer syncs your files, they land in a path that looks something like this: C:\Users\<you>\OneLake - <workspace name>\<lakehouse name>.Lakehouse\Tables\<table name> Keep that path in mind — you'll be passing it into every example below. Delta Lake tables are stored as folders containing multiple Parquet files plus a _delta_log/ directory, so make sure you're pointing at the table's root folder, not an individual file. Readin...

Accessing Microsoft Fabric data locally with OneLake file explorer

If you've spent any time working with Microsoft Fabric, you know that navigating to the web portal every time you need to inspect, upload, or tweak a file gets old fast. OneLake File Explorer is Microsoft's answer to that friction — a lightweight Windows application that mounts your entire Fabric data estate directly in Windows File Explorer, the same way OneDrive handles your documents. One..what? OneLake is the unified data lake underpinning every Microsoft Fabric tenant. Unlike traditional architectures where teams maintain separate data lakes per domain or business unit, every Fabric tenant gets exactly one OneLake — one place where Lakehouses, Warehouses, KQL databases, and other Fabric items store their data. There's no need to copy data between engines; Spark, SQL, and Power BI all read from the same underlying storage. The organizational hierarchy is straightforward: Tenant → Workspaces → Items (Lakehouses, Warehouses, etc.) → Files/Tables . This maps neat...

Azure Pipelines–Failed to set a git tag

I mostly use the built-in functionality to set a tag on a specific commit after a successful release. However, in this case I was contacted by a colleague who was using the Git Tag task. Unfortunately, he couldn’t get the task working. A look at the build log made it obvious what the problem was:

Starting: GitTag
==============================================================================
Task         : Git Tag
Description  : A simple task that tags a commit
Version      : 7.0.0
Author       : ATP P&I IT
Help         : tags the current commit with a specified tag.

### Prerequisites
* Repository must be VSTS Git.
* Allow scripts to access Oauth must be **Enabled**
* Project Collection Build Service must have **Contribute** & **Create Tag** set to **Allow** or **Inherit Allow** for that particular repository
=======...

How I built a custom agent skill to configure Application Insights

If you've ever found yourself repeating the same Azure setup ritual — adding the Application Insights SDK, wiring up telemetry, configuring sampling rules — you already know the pain. It's not hard, but it's tedious. Every new service needs the same scaffolding. Every new team member has to learn the same conventions. That's exactly what I solved with a custom skill. Now, when I need to instrument a service, I just tell Copilot to configure Application Insights, and it does everything exactly the way our team expects. No extra prompting, no re-explaining our conventions. It just works. This post explains what Skills are, how they work inside VS Code, and how to build one for your own team — using my Application Insights skill as a hands-on example. What is an agent skill? An agent skill is a folder of instructions, scripts, and reference files that teaches your AI agent how to handle a specific task. Think of it as institutional knowledge made executable. Instea...

I didn't notice this VS Code feature until it made me question how I code

I was working on a refactoring using VS Code the other day when I noticed something I hadn't seen before: a tiny bar chart quietly living in the status bar, tracking my AI vs. manual typing usage over the last 24 hours. It's called AI Statistics, and it shipped in VS Code 1.103. To enable it, open settings and search for "AI stats" — flip the checkbox, and a small gauge appears in the bottom-right of your status bar. Hover over it and you get a breakdown: how much of your recent code came from AI completions versus your own keystrokes. On the surface it sounds like a novelty. But I found myself actually pausing when I saw the numbers. It reframed something I hadn't really thought about consciously: not whether AI coding tools are good or bad, but just how much I'm actually leaning on them day to day. That visibility is weirdly valuable. It's the kind of data point that makes you more intentional — maybe you lean in harder on AI for boilerplate an...