One of the best ways to improve the results you get back from GitHub Copilot is by carefully defining your custom instructions. These help the LLM better understand your application, preferred technologies, coding guidelines, and so on. This information is shared with the LLM on every request, so you don't have to repeat all these details in your prompts. But creating such a set of custom instructions can be a challenge. If you are looking for inspiration, here are some possible sources:

- Awesome Copilot Instructions
  - Link: Code-and-Sorts/awesome-copilot-instructions: ✨ Curated list of awesome GitHub copilot-instructions.md files
  - Description: Contains a list of Copilot instructions for different programming languages
- Cursor Rules
  - Link: Free AI .cursorrules & .mdc Config Generator | Open Source Developer Tools
  - Description: Originally created for the Cursor IDE but also applicable when defining custom instructions for GitHub Copilot. No examples for .NET or CSh...
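To make this concrete, here is a minimal sketch of what such a custom instructions file could look like (GitHub Copilot reads it from `.github/copilot-instructions.md` in your repository). Every project detail below is hypothetical and only illustrates the kind of information you might include:

```markdown
# Project overview

This repository contains a .NET 8 microservice for order processing.

## Technologies

- ASP.NET Core minimal APIs
- Entity Framework Core with PostgreSQL

## Coding guidelines

- Use C# 12 features where appropriate.
- Prefer async/await for all I/O-bound operations.
- Write unit tests with xUnit and follow the Arrange-Act-Assert pattern.
```

Keeping the file short and specific tends to work better than an exhaustive rulebook, since the whole file is sent along with every request.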
I'm currently working on a new feature in one of our microservices. As this new feature could have a large performance impact, we ran some performance benchmarks up front with BenchmarkDotNet. The results looked promising, but we were not 100% confident that they would be representative of real-life usage. Therefore we decided to implement A/B testing. Yesterday I showed how to implement A/B testing in .NET using .NET Feature Management. Today I want to continue on the same topic and show you how we added telemetry.

Measuring and analyzing results

The most critical aspect of A/B testing is measuring results. You need to track relevant metrics for each variation to determine which performs better. The simplest way to do this is to fall back on the built-in logging in .NET Core:

Although this is a good starting point, we can simplify and improve this by taking advantage of the built-in telemetry support through OpenTelemetry and/or Application Insights.

Using OpenTelemetry...
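The logging approach mentioned above could be sketched as follows. This is an illustrative example, not the exact code from the microservice: the `CheckoutService` class and the `"NewAlgorithm"` feature name are hypothetical, while `IVariantFeatureManager` and `GetVariantAsync` come from the Microsoft.FeatureManagement library used in yesterday's post:

```csharp
using System.Diagnostics;
using System.Threading;
using System.Threading.Tasks;
using Microsoft.Extensions.Logging;
using Microsoft.FeatureManagement;

public class CheckoutService
{
    private readonly IVariantFeatureManager _featureManager;
    private readonly ILogger<CheckoutService> _logger;

    public CheckoutService(
        IVariantFeatureManager featureManager,
        ILogger<CheckoutService> logger)
    {
        _featureManager = featureManager;
        _logger = logger;
    }

    public async Task ProcessAsync(CancellationToken cancellationToken)
    {
        // Ask the feature manager which variant this request was assigned.
        var variant = await _featureManager.GetVariantAsync(
            "NewAlgorithm", cancellationToken);

        var stopwatch = Stopwatch.StartNew();

        // ... execute the code path for the assigned variant ...

        stopwatch.Stop();

        // Log the variant together with the metric we care about, so the
        // log entries can later be grouped and compared per variant.
        _logger.LogInformation(
            "Feature {FeatureName} used variant {Variant}, completed in {ElapsedMs} ms",
            "NewAlgorithm",
            variant?.Name,
            stopwatch.ElapsedMilliseconds);
    }
}
```

Using structured logging placeholders (rather than string interpolation) matters here: it lets a log backend query and aggregate the duration per variant instead of parsing free-form text.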