On-premises VMs don't disappear just because you are working on a cloud strategy. We are running a lot of Windows workloads on-prem (application pools, Windows services, scheduled tasks) and still need visibility into whether they're healthy.

Traditional on-prem monitoring solutions could work, but they bring their own operational overhead and are tied directly to our on-premises infrastructure. When an incident happens, we don't want to context-switch between our cloud monitoring stack and our on-prem monitoring stack. It's not ideal.

We wanted a single, cloud-native view into the health of our on-prem workloads without having to lift and shift them into Azure. Azure Arc made this possible by extending Azure's management plane to our on-premises infrastructure. By combining Arc with Log Analytics and Workbooks, we built a unified health dashboard that sits alongside our cloud monitoring, uses the same query language (KQL), and requires no additional on-prem infrastructure.
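To give a flavour of what the Workbook queries look like, here is a minimal KQL sketch, not the exact query from our dashboard. It assumes the Arc-connected machines are reporting heartbeats to the Log Analytics workspace via the Azure Monitor agent, and it flags any machine that hasn't checked in during the last five minutes:

```kusto
// Hypothetical health query: flag connected machines that stopped heartbeating.
// Assumes the Arc-enabled servers send the standard Heartbeat table to this workspace.
Heartbeat
| where TimeGenerated > ago(1h)
| summarize LastHeartbeat = max(TimeGenerated) by Computer, OSType
| extend Status = iff(LastHeartbeat < ago(5m), "Unhealthy", "Healthy")
| order by Status asc, Computer asc
```

Because it's plain KQL against the workspace, the same query can be pinned into a Workbook next to the queries we already run for our cloud resources.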
In the previous post, I showed how to use LinkTo predicates to route messages conditionally across different blocks. Today, we're going to take that concept a step further and do something that surprises most developers the first time they see it: link a block back to itself to create recursion, entirely through the dataflow graph, with no explicit recursive method calls.

The core idea

Traditional recursion involves a function calling itself. In TPL Dataflow, we achieve the same result structurally: a block's output is linked back to its own input via a predicate. Messages that match the "recurse" condition loop back, while messages that match the "base case" condition flow forward. The dataflow runtime handles the iteration for us.

Sounds complicated? An example will make it clear immediately. A good one is walking a directory tree and computing an MD5 hash for every file it contains. Directories need to be expanded into their children, which loop back into the block, while files flow forward to be hashed.
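To make the wiring concrete, here is a minimal, self-contained sketch of the idea (my own illustration, not necessarily the code from the original example). A TransformManyBlock expands a directory into its children; a predicate on the self-link sends child directories straight back into the block, while files flow forward to an ActionBlock that computes their MD5 hash. Because a self-linked block never knows on its own when the recursion is finished, the sketch tracks in-flight paths with a simple counter.

```csharp
// Minimal sketch, assuming .NET 6+ and the System.Threading.Tasks.Dataflow package.
using System;
using System.IO;
using System.Linq;
using System.Security.Cryptography;
using System.Threading;
using System.Threading.Tasks;
using System.Threading.Tasks.Dataflow;

class RecursiveHashing
{
    static async Task Main(string[] args)
    {
        string root = args.Length > 0 ? args[0] : "."; // assumed to be a directory

        // A self-linked block never "runs dry" on its own, so we count the paths
        // still in flight to know when the recursion has fully unwound.
        int pending = 1; // the root itself
        var done = new TaskCompletionSource<bool>();
        void Release()
        {
            if (Interlocked.Decrement(ref pending) == 0)
                done.TrySetResult(true);
        }

        // Base case: hash a single file.
        var hashFile = new ActionBlock<string>(path =>
        {
            using var md5 = MD5.Create();
            using var stream = File.OpenRead(path);
            Console.WriteLine($"{Convert.ToHexString(md5.ComputeHash(stream))}  {path}");
            Release();
        },
        new ExecutionDataflowBlockOptions { MaxDegreeOfParallelism = Environment.ProcessorCount });

        // Recursive case: expand a directory into its children.
        var expandDir = new TransformManyBlock<string, string>(dir =>
        {
            var children = Directory.EnumerateFileSystemEntries(dir)
                                    .Where(p => Directory.Exists(p) || File.Exists(p))
                                    .ToList();
            Interlocked.Add(ref pending, children.Count); // children enter the pipeline...
            Release();                                    // ...and the directory itself leaves it
            return children;
        });

        // The self-link: outputs that are directories loop straight back into the same block.
        expandDir.LinkTo(expandDir, path => Directory.Exists(path));

        // Outputs that are files flow forward to the hashing block.
        expandDir.LinkTo(hashFile, path => File.Exists(path));

        expandDir.Post(root);
        await done.Task; // every directory expanded, every file hashed
    }
}
```

Run it with a directory path as the first argument. The counter is the one piece of bookkeeping the graph can't do for us: we can't simply call Complete() on a block that feeds itself, so we signal completion once every path posted into the pipeline has been consumed.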