
Posts

Cleaner Minimal API Endpoints with [AsParameters]

I only recently started using ASP.NET Core's minimal API style, but one annoyance I've already run into is the "long parameter list" problem. Route handlers that accept five, six, or seven parameters start to feel unwieldy fast. The good news is that the [AsParameters] attribute, introduced in .NET 7, gives you a clean way out.

The problem it solves

Minimal APIs are appealing precisely because they're lightweight — no controllers, no ceremony. But that simplicity starts to break down as your endpoints grow more complex. Consider this example endpoint: That's eight(!) parameters before you've written a single line of business logic. It's hard to read, hard to test, and grows more painful every time requirements change.

Enter [AsParameters]

[AsParameters] lets you group related parameters into a plain C# class or record and bind them all at once. ASP.NET Core inspects the type's constructor and public ...
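To make the idea concrete, here's a minimal sketch of the pattern the excerpt describes. The record, route, and property names are my own illustration — the post's actual example is elided above:

```csharp
using Microsoft.AspNetCore.Builder;
using Microsoft.AspNetCore.Http;

var builder = WebApplication.CreateBuilder(args);
var app = builder.Build();

// One parameter object instead of a long list of handler parameters.
app.MapGet("/products", ([AsParameters] ProductSearch search) =>
    TypedResults.Ok(search));

app.Run();

// Each constructor parameter binds from the query string by name,
// e.g. GET /products?query=shoes&page=2&pageSize=10&sortBy=price
public record ProductSearch(string? Query, int? Page, int? PageSize, string? SortBy);
```

Adding a new search option now means adding one property to the record, not another parameter to the handler signature.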
Recent posts

ASP.NET Core - TryParse error when using Minimal APIs

Minimal APIs are the recommended approach for building fast HTTP APIs with ASP.NET Core. They allow you to build fully functioning REST endpoints with minimal code and configuration. You don't need a controller; you can just declare your API using a fluent approach: This makes it very convenient to build your APIs. However, you need to be aware that a lot of magic is going on behind the scenes when using this approach. And this magic can come back to bite you. That's exactly what happened to me while building an autonomous agent invoked through a webhook.

The code

In my application I created the following API endpoint using the minimal API approach: The minimal API injects an AzureDevOpsWebhookParser that looks like this: Nothing special…

The problem

When I called this endpoint, it failed with the following error message: InvalidOperationException: TryParse method found on AzureDevOpsWebhookParser with incorrect format. Must be a static method with for...
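For context: when a handler parameter's type exposes a public TryParse method, the minimal API binder treats the type as bindable from route and query values, and it expects a specific shape. A hedged sketch of that expected shape, reusing the class name from the post (the real class does actual parsing):

```csharp
public class AzureDevOpsWebhookParser
{
    // The binder looks for: static bool TryParse(string value, out T result)
    // (an overload that also takes an IFormatProvider is recognized too).
    public static bool TryParse(string? value, out AzureDevOpsWebhookParser result)
    {
        result = new AzureDevOpsWebhookParser();
        return value is not null; // real parsing elided
    }
}
```

A TryParse with any other signature triggers the InvalidOperationException quoted above, because the binder finds the method but cannot call it.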

Community is a verb

Over the course of my career, I've been part of multiple initiatives to start internal communities at work. Some of them became something genuinely special — people showing up, contributing, looking forward to the next gathering. Others quietly died after a few months, victims of low attendance and dwindling energy. For a long time I couldn't figure out what separated the successes from the failures. Was it the topic? The timing? The right people involved? I kept searching for the formula. Then I listened to an episode of the ReThinking podcast last week where Adam Grant sat down with Dan Coyle — author of The Culture Code and his new book Flourish — and one thing Coyle said stopped me in my tracks. Community, he pointed out, literally means shared gifts. And shared gifts aren't something you passively receive. They're something you participate in.

We've been thinking about it the wrong way

Maybe you've tried to build an internal community before. You ...

Python vs PySpark notebooks in MS Fabric

Being new to Microsoft Fabric, I noticed that you have multiple options when writing notebooks in Python: run your code with PySpark (backed by a Spark cluster) or with Python (running natively on the notebook's compute). Both options look almost identical on the surface — you're still writing Python syntax either way — but under the hood they behave very differently, and picking the wrong one can cost you time, money, and unnecessary complexity. In this post I try to identify the key differences and give you some heuristics for deciding which engine to reach for.

Python vs PySpark: what's actually different?

When you select PySpark in a Fabric notebook, your code runs on a distributed Apache Spark cluster. Fabric spins up a cluster, distributes your data across multiple worker nodes, and executes transformations in parallel. The core abstraction is the DataFrame (or RDD), and operations are lazy — nothing actually runs until you trigger an action like .show() ...

Cleaner switch expressions with pattern matching in C#

Ever find yourself mapping multiple string values to the same result? Having been a C# developer for a long time, I sometimes forget that C# has evolved, so I still catch myself chaining case labels or reaching for a dictionary. With pattern matching, neither is necessary: you can express things inline, declaratively, and with zero repetition.

A small example

I was working on a small script that should invoke different actions depending on the environment. Our developers were using different variations for the same environment, e.g. "tst" alongside "test" and "prd" alongside "prod". We asked to streamline this a long time ago, but as these things happen, we still see variations in the wild. This brought me to the following code, which is a perfect fit for pattern matching: The or keyword here is a logical pattern combinator, not a boolean operator. It matches if either of the specified pattern...
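A minimal sketch of the shape such a switch expression takes — the environment strings come from the post, the method and result names are my own:

```csharp
// Map variant spellings to one canonical environment name.
Console.WriteLine(Normalize("prd")); // production

static string Normalize(string env) => env switch
{
    "tst" or "test" => "test",
    "prd" or "prod" => "production",
    _ => env // pass unknown values through unchanged
};
```

Compare this with the pre-pattern-matching alternatives: stacked `case "tst": case "test":` labels, or a dictionary you have to declare and populate somewhere else. The `or` combinator keeps every variant on the line where its result is defined.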

Run SQL queries on local parquet and delta files using DuckDB

Yesterday I showed how we could query local parquet and delta files using pandas and deltalake. Although these libraries work, once you start loading big parquet files you'll see your system stall while memory usage spikes. A colleague suggested giving DuckDB a try. I had never heard of it, but let's discover it together.

What is DuckDB?

DuckDB is an in-process analytical database — think SQLite, but built for OLAP workloads instead of transactional ones. It runs entirely inside your Python process (no server to spin up, no connection string to manage) and is optimized for the kinds of queries data engineers run every day: large scans, aggregations, joins, and window functions over columnar data. A few things that make it stand out:

It reads files directly. You don't import data into DuckDB before querying it. You point it at a Parquet file, a folder of Parquet files, or a Delta table, and it queries them in place. No ETL step, no intermediate copy. It's c...

How to work with OneLake files locally using Python

Last week I shared how you could use the OneLake File Explorer to sync your Lakehouse tables to your local machine. It's a convenient way to get your Parquet and Delta Lake files off the cloud and onto disk — but what do you actually do with them once they're there? In this post, I'll walk you through how to interact with your locally synced OneLake files using Python. We'll cover four practical approaches, with real code you can drop straight into a notebook.

Where are your files?

When OneLake File Explorer syncs your files, they land in a path that looks something like this: C:\Users\<you>\OneLake - <workspace name>\<lakehouse name>.Lakehouse\Tables\<table name> Keep that path in mind — you'll be passing it into every example below. Delta Lake tables are stored as folders containing multiple Parquet files plus a _delta_log/ directory, so make sure you're pointing at the table's root folder, not an individual file.

Readin...