
Posts

Building an end-to-end monitoring solution with Azure Arc, Log Analytics and Workbooks–Part 1: Overview & Architecture

On-premises VMs don't disappear just because you are working on a cloud strategy. We are running a lot of Windows workloads on-prem — application pools, Windows services, scheduled tasks — and we still need visibility into whether they're healthy. Traditional on-prem monitoring solutions could work, but they come with their own operational overhead and are tied directly to our on-premises infrastructure. When an incident happens, having to context-switch between our cloud monitoring stack and our on-prem monitoring stack is not ideal. We wanted a single, cloud-native view into the health of our on-prem workloads without having to lift and shift them into Azure. Azure Arc made this possible by extending Azure's management plane to our on-premises infrastructure. By combining Arc with Log Analytics and Workbooks, we built a unified health dashboard that sits alongside our cloud monitoring, uses the same query language (KQL), and requires no additional on-prem in...
Recent posts

Creating recursion in TPL Dataflow with LinkTo predicates

In the previous post, I showed how to use LinkTo predicates to route messages conditionally across different blocks. Today, we're going to take that concept a step further and do something that surprises most developers the first time they see it: link a block back to itself to create recursion — entirely through the dataflow graph, with no explicit recursive method calls. The core idea Traditional recursion involves a function calling itself. In TPL Dataflow, we achieve the same result structurally: a block's output is linked back to its own input via a predicate. Messages that match the "recurse" condition loop back, while messages that match the "base case" condition flow forward. The dataflow runtime handles the iteration for us. Sounds complicated? An example will make it clear immediately. A good example to illustrate this is walking a directory tree and computing an MD5 hash for every file it contains. Directories need to be expanded ...
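The excerpt cuts off before the code, but here is a minimal sketch of the self-linking idea under my own assumptions — the block names, the use of TransformManyBlock, and the catch-all link are mine, not necessarily how the full post implements it:

```csharp
using System;
using System.IO;
using System.Linq;
using System.Security.Cryptography;
using System.Threading.Tasks.Dataflow; // System.Threading.Tasks.Dataflow NuGet package

// One block expands a path: a directory becomes its children, a file passes through unchanged.
var expand = new TransformManyBlock<string, string>(path =>
    Directory.Exists(path)
        ? Directory.EnumerateFileSystemEntries(path)
        : Enumerable.Repeat(path, 1));

// The "base case" block computes an MD5 hash for each file it receives.
var hashFile = new ActionBlock<string>(file =>
{
    using var md5 = MD5.Create();
    using var stream = File.OpenRead(file);
    Console.WriteLine($"{Convert.ToHexString(md5.ComputeHash(stream))}  {file}");
});

// Recurse: anything that is still a directory loops back into the same block.
expand.LinkTo(expand, path => Directory.Exists(path));
// Base case: files flow forward to the hashing block.
expand.LinkTo(hashFile, path => File.Exists(path));
// Catch-all so unmatched messages don't clog the output buffer.
expand.LinkTo(DataflowBlock.NullTarget<string>());

expand.Post(".");
Console.ReadLine(); // completing a self-linked block is its own topic; this sketch just waits
```

Note that completion is the subtle part of a self-linked graph: you can't simply call Complete() on the block, because items may still loop back into it.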

Creating conditional flows in TPL Dataflow with LinkTo predicates

While building a data processing pipeline with TPL Dataflow, I needed to route messages to different blocks based on specific conditions. The LinkTo method's predicate parameter is the feature I needed to create branching logic in my dataflow network. In this post, I explore how to use predicates to build conditional flows that are both efficient and maintainable. Understanding LinkTo predicates The LinkTo method in TPL Dataflow connects a source block to a target block, creating a pipeline for data to flow through. The method signature includes an optional predicate parameter: The predicate is a function that evaluates each message and returns true if the message should be sent to the target block, or false if it should be offered to the next linked block in the chain. A simple example Let's start with a straightforward example that demonstrates the basic concept. We'll create a pipeline that routes even numbers to one block and odd numbers to another: ...
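The excerpt stops right before the snippet, but as a rough sketch (block and variable names are mine) routing evens and odds with two predicated links could look something like this:

```csharp
using System;
using System.Threading.Tasks;
using System.Threading.Tasks.Dataflow; // System.Threading.Tasks.Dataflow NuGet package

var evens = new ActionBlock<int>(n => Console.WriteLine($"Even: {n}"));
var odds  = new ActionBlock<int>(n => Console.WriteLine($"Odd:  {n}"));

var source = new BufferBlock<int>();
var linkOptions = new DataflowLinkOptions { PropagateCompletion = true };

// Each message is offered to the links in the order they were created;
// the first predicate that returns true wins.
source.LinkTo(evens, linkOptions, n => n % 2 == 0);
source.LinkTo(odds,  linkOptions, n => n % 2 != 0);

for (var i = 1; i <= 10; i++)
    source.Post(i);

source.Complete();
await Task.WhenAll(evens.Completion, odds.Completion);
```

One thing to keep in mind with predicated links: a message that matches none of the predicates stays in the source's output buffer, so a catch-all link (for example to DataflowBlock.NullTarget) is often a good safety net.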

Simplifying data movement in Microsoft Fabric

If you've started using Microsoft Fabric, one of the first things you'll want to do is get data into the platform. You could, of course, start creating your own Data Factory pipelines, but there is a less complicated alternative to get started: Microsoft Fabric's Copy Job feature. It offers a streamlined, no-code solution that eliminates the need for complex pipeline development. In this post, we'll explore what Copy Jobs are, why they matter, and how you can leverage them in your data workflows. What is a Copy Job? Copy Job is Microsoft Fabric Data Factory's answer to simplified data movement. It's a purpose-built solution designed to move data from various sources to multiple destinations without requiring you to build traditional data pipelines. Whether you're working with databases, cloud storage, or on-premises systems, Copy Job provides an intuitive, guided experience that handles the complexity for you. At its core, Copy Job addresses a common challenge: ...

Using HTTP Files in VS Code with REST Client

I regularly switch between multiple IDEs, mostly VS Code and Visual Studio but sometimes also Rider or Cursor. One of the Visual Studio features I miss when using VS Code is the support for HTTP files. What if I told you that there is a way to use HTTP files in VS Code? Enter the REST Client extension for Visual Studio Code, a lightweight, powerful tool that lets you test APIs without ever leaving your editor. What is the REST Client extension? Similar to the HTTP support in Visual Studio, the REST Client extension allows you to send HTTP requests and view responses directly in VS Code. It's minimal, scriptable, and integrates seamlessly into your existing workflow. It offers the same functionality as in Visual Studio and more: .http and .rest file extension support, syntax highlighting (request and response), auto completion for methods, URLs, headers, custom/system variables, MIME types and so on, comments (line ...
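For readers who haven't seen the format before, a trivial .http file for the extension looks roughly like this — the URL and payload are placeholders, not taken from the post:

```http
@baseUrl = https://api.example.com

### Get a list of items
GET {{baseUrl}}/items
Accept: application/json

### Create an item
POST {{baseUrl}}/items
Content-Type: application/json

{
  "name": "sample"
}
```

Requests are separated by `###`, file-level variables are declared with `@name = value` and referenced as `{{name}}`, and the extension adds a "Send Request" link above each request so you can run it without leaving the editor.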

Checkout a Git repository using a tag in VSCode

If you're working with Git repositories in Visual Studio Code, you might occasionally need to check out a specific tag—perhaps to review a previous release, test an older version, or understand how the codebase looked at a particular milestone. While VSCode's built-in Git integration is powerful, checking out tags wasn't immediately obvious to me. Let me walk you through the process. What are Git tags? Before diving in, a quick refresher: Git tags are references that point to specific commits in your repository's history. They're commonly used to mark release points (like v1.0.0, v2.1.3, etc.). Unlike branches, tags are meant to be immutable snapshots of your code at a particular moment in time. Checkout a Git tag in VS Code Method 1: Using the Command Palette The quickest way to check out a tag in VSCode is through the Command Palette: Open the Command Palette by pressing Ctrl+Shift+P (Windows/Linux) or Cmd+Shift+P (Mac) Type "Git: Checko...
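For comparison, the same checkout from the integrated terminal (using a hypothetical v1.0.0 tag) boils down to a couple of commands:

```
# make sure the tags are available locally, then list them
git fetch --tags
git tag --list

# check out the tag itself; this leaves the repository in a detached HEAD state
git checkout v1.0.0

# or create a working branch from the tag if you plan to commit changes
git checkout -b fix-from-v1.0.0 v1.0.0
```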

Error when using the Microsoft Fabric Capacity Metrics app

The Microsoft Fabric Capacity Metrics app allows you to monitor your Microsoft Fabric capacities. You can use the app to monitor your capacity consumption and use these insights to decide when to scale (or set up autoscaling). After installing the Microsoft Fabric Capacity Metrics App, I noticed that no data was shown on the Health page: Clicking on the error details showed me the following info: The NoCapacitiesInRegion.Error message mentions that no capacity is available. So, let's have a look at the assigned capacity for this workspace: The cause of this error is not related to having no capacity but to the fact that I had a Power BI Pro capacity assigned instead of a Fabric capacity. After changing to the Fabric capacity, the Health page started to work as expected: Nice! More information What is the Microsoft Fabric Capacity Metrics app? - Microsoft Fabric | Microsoft Learn Understanding Microsoft Fabric Capacity and Throttling–A first attempt