The list of available models in GitHub Copilot keeps growing. Whereas last year you could already use GPT-4o, o1, o1-mini, and Claude 3.5 Sonnet, now you can also try OpenAI o3 and Google Gemini 2.0 Flash.

About Gemini 2.0 Flash

The Gemini 2.0 Flash model is a highly efficient large language model (LLM) designed for high-volume, high-frequency tasks. It excels at multimodal reasoning, handling inputs like text, images, and audio, and producing text outputs. With a context window of 1 million tokens, it can process vast amounts of information quickly and accurately. Gemini 2.0 Flash is optimized for speed and practicality, making it ideal for everyday tasks, coding, and complex problem-solving.

GitHub Copilot uses Gemini 2.0 Flash hosted on Google Cloud Platform (GCP). When using Gemini 2.0 Flash, prompts and metadata are sent to GCP, which makes the following data commitment: Gemini doesn't use your prompts, or its responses, as data to train its models.

About OpenAI o3 ...
I hope that you already had a chance to try .NET Aspire, a comprehensive set of tools, templates, and packages designed to help developers build observable, production-ready applications. It enhances the development experience by providing dev-time orchestration, integrations with commonly used services, and robust tooling support. The goal is to simplify the management of multi-project applications, container resources, and other dependencies, making it easier to develop interconnected apps. To make it even better, it includes features like service discovery, connection string management, and environment variable configuration, streamlining the setup process for local development environments. And if that couldn't convince you yet, then maybe the Aspire Dashboard will.

Sounds great, right? Reason enough to add it to your existing projects if you are not using it yet. A great tutorial exists on Microsoft Learn to help you add it to your existing projects. Unfortunately I had a ...
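To illustrate the dev-time orchestration, service discovery, and connection string management mentioned above, here is a minimal sketch of what an Aspire AppHost Program.cs can look like when an existing project is wired in. The project name MyApi and the Redis cache resource are assumptions made for the example, not something from the tutorial; the Redis resource requires the Aspire.Hosting.Redis package in the AppHost project.

```csharp
// Minimal AppHost sketch (C# top-level statements). Assumes a hypothetical
// existing project "MyApi" referenced by the AppHost project, and the
// Aspire.Hosting.Redis package for the cache resource.
var builder = DistributedApplication.CreateBuilder(args);

// A container-backed Redis resource; Aspire manages its lifetime and its
// connection string during local development.
var cache = builder.AddRedis("cache");

// Register the existing project so the dashboard, service discovery, and
// environment variable injection apply to it; WithReference flows the
// cache connection string into the project's configuration.
builder.AddProject<Projects.MyApi>("myapi")
       .WithReference(cache);

builder.Build().Run();
```

With this in place, running the AppHost starts the referenced project and the Redis container together and surfaces them in the Aspire Dashboard.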