A/B testing in .NET using Microsoft Feature Management

I’m currently working on a new feature in one of our microservices. As this new feature could have a large performance impact, we did some performance benchmarks up front with BenchmarkDotNet. The results looked promising, but we were not 100% confident that they were representative of real-life usage. Therefore we decided to implement A/B testing.

In this post, we'll explore what A/B testing is, why it matters, and how to implement it effectively using .NET Feature Management.

What is A/B testing?

A/B testing, also known as split testing, is a method of comparing two versions of a web page, application feature, or user experience to determine which one performs better. In its simplest form, you show version A to half your users and version B to the other half, then measure which version achieves your desired outcome more effectively.

The beauty of A/B testing lies in its scientific approach. Rather than making changes based on opinions or assumptions, you're letting actual user behavior guide your decisions. This leads to improvements backed by real data rather than guesswork.

Common A/B testing scenarios

A/B testing can be applied to virtually any aspect of your application. Here are some popular use cases:

  • User Interface Changes: Testing different button colors, layouts, or navigation structures to improve user engagement and conversion rates.
  • Feature Variations: Comparing different implementations of the same feature to see which approach users prefer or find more effective.
  • Algorithmic Changes: Testing different recommendation algorithms, search ranking methods, or personalization approaches.
  • Pricing and Messaging: Experimenting with different pricing models, promotional offers, or marketing copy to optimize conversion rates.
  • Performance Optimizations: Comparing different technical implementations to see which provides a better user experience while maintaining functionality. (This is our scenario.)

Introduction to .NET Feature Management

There are multiple ways to implement A/B testing in .NET, but we decided to use Microsoft's Feature Management library. This library provides a robust foundation for implementing feature flags and A/B testing in .NET applications. It integrates seamlessly with ASP.NET Core's dependency injection system and configuration providers, making it easy to manage features across different environments.

The Feature Management library supports several key concepts that make A/B testing straightforward. Feature flags allow you to toggle functionality on and off without code deployments. Feature filters provide sophisticated logic for determining when features should be enabled, including percentage-based rollouts perfect for A/B testing. The library also includes built-in support for configuration providers, meaning you can manage your feature flags through appsettings.json or other configuration sources.

Set up feature management in your ASP.NET Core project

Let's start by setting up a new ASP.NET Core project with Feature Management. First, we need to install the necessary NuGet packages:

dotnet add package Microsoft.FeatureManagement.AspNetCore

Next, configure Feature Management in your Program.cs file:
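A minimal registration could look like this. The flag check endpoint and service names are placeholders; the one essential call is AddFeatureManagement(), which wires up IFeatureManager and reads flags from the "FeatureManagement" configuration section by default:

```csharp
using Microsoft.FeatureManagement;

var builder = WebApplication.CreateBuilder(args);

// Register the Feature Management services. Flags are read from the
// "FeatureManagement" section of configuration (appsettings.json,
// environment variables, Azure App Configuration, ...).
builder.Services.AddFeatureManagement();

builder.Services.AddControllers();

var app = builder.Build();
app.MapControllers();
app.Run();
```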

Configure a feature flag

Feature flags are configured through your application's configuration system. Add the following to your appsettings.json:
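A configuration along these lines enables the flag for 50% of evaluations via the built-in Microsoft.Percentage filter (the flag name NewFunctionality matches the one used in the text):

```json
{
  "FeatureManagement": {
    "NewFunctionality": {
      "EnabledFor": [
        {
          "Name": "Microsoft.Percentage",
          "Parameters": {
            "Value": 50
          }
        }
      ]
    }
  }
}
```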

This configuration creates one feature flag. The NewFunctionality feature will be enabled for roughly 50% of evaluations. Note that the built-in Microsoft.Percentage filter is evaluated independently on each check; if you need the same user to consistently see the same variation, use the Microsoft.Targeting filter with a stable user identifier instead.

Integrate the feature flag in your controllers

In your controllers, inject the IFeatureManager service to check feature flag status:
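A sketch of what that can look like. The controller and method names here are hypothetical; the relevant part is injecting IFeatureManager and calling IsEnabledAsync with the flag name:

```csharp
using Microsoft.AspNetCore.Mvc;
using Microsoft.FeatureManagement;

[ApiController]
[Route("api/[controller]")]
public class OrdersController : ControllerBase
{
    private readonly IFeatureManager _featureManager;

    public OrdersController(IFeatureManager featureManager)
    {
        _featureManager = featureManager;
    }

    [HttpGet]
    public async Task<IActionResult> Get()
    {
        // IsEnabledAsync evaluates the flag and its filters at request time.
        if (await _featureManager.IsEnabledAsync("NewFunctionality"))
        {
            // Variation B: the new implementation under test.
            return Ok(await GetOrdersNewAsync());
        }

        // Variation A: the existing implementation.
        return Ok(await GetOrdersLegacyAsync());
    }

    // Hypothetical implementations of both variations.
    private Task<object> GetOrdersNewAsync() => Task.FromResult<object>(new { version = "B" });
    private Task<object> GetOrdersLegacyAsync() => Task.FromResult<object>(new { version = "A" });
}
```

Remember to log which variation each request received, so you can correlate your metrics with the flag assignment afterwards.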

That’s it!

Best practices for A/B testing

Successful A/B testing requires more than just technical implementation. Here are key practices to follow:

  • Statistical Significance: Don't end tests too early. Ensure you have enough data to make statistically significant conclusions. Tools like online sample size calculators can help determine how long to run your tests.
  • Single Variable Testing: Test one change at a time. If you test multiple changes simultaneously, you won't know which change caused any observed differences in behavior.
  • Consistent User Experience: Ensure users see the same variation throughout their session or longer. Inconsistent experiences can confuse users and skew results.
  • Meaningful Metrics: Choose metrics that truly matter to your business goals. High-level metrics like conversion rate or user engagement are often more valuable than vanity metrics.
  • Test Documentation: Document your hypotheses, test parameters, and results. This creates institutional knowledge and helps inform future testing strategies.
  • Gradual Rollouts: Start with small percentages and gradually increase traffic to winning variations. This approach minimizes risk while maximizing learning.

More information

.NET feature flag management - Azure App Configuration | Microsoft Learn

Feature Flags in .NET, from simple to more advanced

Home | BenchmarkDotNet
