
Getting started with the GitHub Copilot SDK in .NET

In the previous post, we talked about why the GitHub Copilot SDK matters: it gives you a production-grade agent harness out of the box, so you can skip building the infrastructure and focus on your actual product. Now let's make it concrete. This post walks through everything you need to get up and running with the SDK in .NET — from prerequisites to a working streaming agent with a custom tool.

What we will build

We’ll keep it simple. By the end of this post you'll have a console application that:

  • Connects to Copilot's agent runtime
  • Sends a prompt and receives a streaming response
  • Has a multi-turn conversation with persistent context
  • Calls a custom tool you define in C#

Prerequisites

You'll need three things before touching any code.

1. .NET 8 or later

The SDK requires .NET 8+. Verify your version:

dotnet --version

2. GitHub Copilot CLI, installed and authenticated

The SDK communicates with the Copilot CLI running as a local process — it doesn't call the API directly. This design keeps your credentials out of application code, which is a meaningful security advantage.

Install the CLI via the official installation guide, then authenticate using the /login slash command.

Verify it's working:

copilot --version

3. A GitHub Copilot subscription

Any Copilot plan works — including the free tier. If you'd rather use your own model API keys (OpenAI, Azure AI Foundry, or Anthropic), BYOK is supported and we'll touch on that at the end.

Note: Every prompt counts against your Copilot subscription's premium request quota. For early experiments this is fine, but if you're building automated workflows that fire many requests, keep an eye on usage.

Step 1: Create the project

Create a new console app and add the SDK NuGet package:

mkdir copilot-demo && cd copilot-demo
dotnet new console
dotnet add package GitHub.Copilot.SDK

The Copilot CLI is bundled automatically with the NuGet package — no separate installation step required.

For anything beyond a quick script, also add:

dotnet add package Microsoft.Extensions.AI

This gives you AIFunctionFactory, which makes registering custom tools clean and ergonomic.

Step 2: Your first message

Open Program.cs and replace its contents with a first message to the agent.
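Here's a minimal sketch of that first message. CopilotClient, CreateSessionAsync, SendAndWaitAsync, and SessionConfig are the names this post leans on throughout; the exact option and return shapes can differ between SDK versions, so treat the details as illustrative and check the package README if something doesn't compile:

using GitHub.Copilot.SDK;

// Create the client; it spawns and talks to the bundled Copilot CLI process.
await using var client = new CopilotClient();

// Start a session pinned to a specific model.
// (The SessionConfig property names here are illustrative.)
await using var session = await client.CreateSessionAsync(new SessionConfig
{
    Model = "gpt-4.1" // any model available on your Copilot plan
});

// Send a prompt and wait for the complete response.
var response = await session.SendAndWaitAsync("Write a haiku about .NET.");
Console.WriteLine(response);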

Run it:

dotnet run

That's all it takes. CopilotClient manages the connection to the Copilot CLI process. CreateSessionAsync starts a conversation with a specific model. SendAndWaitAsync sends your prompt and waits asynchronously until the full response is ready.

Step 3: Add streaming

Waiting for the full response works fine for short answers, but for longer outputs you'll want to stream — displaying content as it arrives makes your app feel much more responsive.

The SDK emits assistant.message_delta events as tokens arrive, and session.idle when the turn is complete. You subscribe to events rather than polling — a clean, event-driven model that fits naturally with .NET.
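Building on the session from step 2, a streaming turn looks roughly like this. The two event names come from the SDK itself; the subscription helpers, the payload types, and the SendAsync call are my sketch of the surface, so verify them against the docs:

var turnDone = new TaskCompletionSource();

// Subscribe before sending so no deltas are missed.
// (On, MessageDeltaEvent, and SessionIdleEvent are illustrative names.)
session.On("assistant.message_delta", (MessageDeltaEvent e) =>
{
    Console.Write(e.Delta); // render tokens as they arrive
});

session.On("session.idle", (SessionIdleEvent _) =>
{
    Console.WriteLine();
    turnDone.TrySetResult(); // the turn is complete
});

await session.SendAsync("Explain async streams in C# in three sentences.");
await turnDone.Task;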

Step 4: Multi-turn conversation

One of the things the SDK handles for you automatically is conversational context. Each CopilotSession maintains state across turns — you don't have to manually track message history or stuff it back into each request.

The agent remembers the full conversation history within the session. Ask a follow-up question and it knows what you were talking about — no extra wiring required.
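Reusing the session from the earlier sketches, a follow-up turn needs no extra plumbing:

// The first turn establishes context inside the session.
var first = await session.SendAndWaitAsync("Who designed the C# language?");
Console.WriteLine(first);

// The second turn refers back to the first; "he" resolves against the
// session's own history, with no manual message tracking on our side.
var followUp = await session.SendAndWaitAsync("What else has he worked on?");
Console.WriteLine(followUp);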

Step 5: Add a custom tool

The agent runtime can call code you define whenever it determines a tool is needed to answer a request. You register tools using AIFunctionFactory.Create() from Microsoft.Extensions.AI.

Here's an example that gives the agent the ability to look up the current weather.
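In this sketch, AIFunctionFactory.Create and the [Description] attributes are standard Microsoft.Extensions.AI; the Tools property on SessionConfig is my assumption about where the session picks the tool up:

using System.ComponentModel;
using GitHub.Copilot.SDK;
using Microsoft.Extensions.AI;

// A plain C# method becomes a tool. The [Description] attributes tell the
// model what the tool does and what each parameter means.
[Description("Gets the current weather for a city.")]
static string GetCurrentWeather(
    [Description("The city to look up, e.g. Brussels")] string city)
{
    // Stubbed for the demo; call a real weather service here.
    return $"It is 14 degrees and cloudy in {city}.";
}

var weatherTool = AIFunctionFactory.Create(GetCurrentWeather, name: "get_current_weather");

await using var client = new CopilotClient();
await using var session = await client.CreateSessionAsync(new SessionConfig
{
    Model = "gpt-4.1",
    Tools = [weatherTool] // assumption: SessionConfig exposes a tool collection
});

var answer = await session.SendAndWaitAsync(
    "Compare the current weather in Brussels and Berlin.");
Console.WriteLine(answer);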

When you send that prompt, the agent decides on its own that it needs to call get_current_weather — once for Brussels, once for Berlin — and incorporates the results into its response. You didn't write any routing logic. The execution loop handles it.

Step 6: Handling permissions

The SDK expects you to always specify a permission request handler. A built-in option is PermissionHandler.ApproveAll, which is convenient for development but not something you want in production. For anything beyond local experiments, you'll want to inspect tool requests before they execute.

The permission framework lets you intercept each tool call by type — shell, write, read, url, custom-tool — and return approved or denied.
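A sketch of what that can look like, building on the client from earlier. The tool-call kinds are the SDK's; the handler property and the decision type below are illustrative stand-ins:

await using var session = await client.CreateSessionAsync(new SessionConfig
{
    Model = "gpt-4.1",
    // Illustrative handler shape; check the SDK for the real delegate type.
    OnPermissionRequest = request => request.Kind switch
    {
        "read"        => PermissionDecision.Approve, // read-only: safe
        "custom-tool" => PermissionDecision.Approve, // our own registered tools
        "shell"       => PermissionDecision.Deny,    // no arbitrary commands
        "write"       => PermissionDecision.Deny,    // no file mutations
        _             => PermissionDecision.Deny     // default-deny the rest
    }
});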

Using BYOK instead of a Copilot subscription

With the recently announced upcoming pricing changes, you may prefer to use your own API keys from OpenAI, Azure AI Foundry, or Anthropic. You can configure the SDK to use those instead.
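This is the part of the sketch I'd verify most carefully against the docs: the provider type and property names below are assumptions, with the key kept out of source code via an environment variable:

await using var client = new CopilotClient();

// Illustrative BYOK configuration; property and type names are assumptions.
await using var session = await client.CreateSessionAsync(new SessionConfig
{
    Provider = new ModelProvider
    {
        Type   = "openai", // or "azure", "anthropic"
        Model  = "gpt-4.1",
        ApiKey = Environment.GetEnvironmentVariable("OPENAI_API_KEY")
    }
});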

BYOK means no GitHub Copilot subscription is required. It's a useful option if you're already paying for model access elsewhere and want to avoid double-billing.

Putting it together

Here's a quick mental model of the three core concepts you've been working with:

  • CopilotClient — manages the connection to the Copilot CLI process running locally. Create one per application (register it as a singleton in DI).
  • CopilotSession — holds a persistent conversational context. Create one per conversation or user session.
  • SessionConfig — where you configure the model, tools, streaming, permissions, and custom instructions.

For production applications, the recommended pattern is to register CopilotClient as a singleton in your DI container and create sessions on demand.
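A minimal ASP.NET Core sketch of that wiring. The /chat endpoint and the ChatRequest record are illustrative; the load-bearing parts are the AddSingleton registration and the per-request CreateSessionAsync:

using GitHub.Copilot.SDK;

var builder = WebApplication.CreateBuilder(args);

// One client per application; it owns the connection to the CLI process.
builder.Services.AddSingleton<CopilotClient>();

var app = builder.Build();

// One session per conversation; here, one per request for simplicity.
app.MapPost("/chat", async (CopilotClient client, ChatRequest request) =>
{
    await using var session = await client.CreateSessionAsync(
        new SessionConfig { Model = "gpt-4.1" });

    var reply = await session.SendAndWaitAsync(request.Prompt);
    return Results.Ok(reply);
});

app.Run();

// Illustrative request payload for the endpoint above.
record ChatRequest(string Prompt);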

Then inject it wherever you need it, and call CreateSessionAsync per request or conversation.

What's next

You now have a working .NET app that connects to the Copilot agent runtime, handles multi-turn conversations, streams responses, and can call your own code as a tool. That's the foundation.

In the next post, we'll go deeper on tool use and customization — how to build tools that do real things (query a database, call an API, read files), how to control which first-party tools are available, and how to think about tool design for agentic workflows.
