
Debugging your VSCode agent interactions

If you've spent any time working with GitHub Copilot in agent mode, you've probably hit that frustrating moment: the agent does something unexpected, picks the wrong tool, ignores a prompt file, or just… takes forever, and you have no idea why.

Until recently, your only recourse was the raw Chat Debug view: useful, but dense, and not exactly designed for quick diagnosis. That changes with the Agent Debug Log panel, available in preview as of VS Code 1.110.

What it is

The Agent Debug Log panel shows a chronological event log of everything that happens during a chat session, including tool calls, LLM requests, prompt file discovery, and errors. Think of it as a structured, human-readable trace of your agent's entire thought process, rendered right inside VS Code. It replaces the old Diagnostics chat action with a richer, more detailed view, and it is particularly valuable as your agent setup grows in complexity: custom instructions, multiple prompt files, MCP servers, and sub-agents all interacting at once.

How to enable it

The panel is off by default. To turn it on, enable the github.copilot.chat.agentDebugLog.enabled setting in VS Code, then open it via the ... menu in the Chat view and select Show Agent Debug Logs. If you also want logs written to disk for offline analysis, enable github.copilot.chat.agentDebugLog.fileLogging.enabled as well.
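In your settings.json, that looks something like this (the setting names come straight from the paragraph above; since this is a preview feature, they may change in later releases):

```jsonc
{
  // Turn on the Agent Debug Log panel (preview)
  "github.copilot.chat.agentDebugLog.enabled": true,

  // Optional: also write debug logs to disk for offline analysis
  "github.copilot.chat.agentDebugLog.fileLogging.enabled": true
}
```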



The agent debug log panel

Open the panel through the ellipsis (...) menu in the Chat view and select Show Agent Debug Logs:


Or use the Command Palette and run Developer: Open Agent Debug Log:

The panel surfaces your session data in three complementary ways:

Logs view

A chronological list of events with timestamps, event types, and summary information. You can expand each event to see full details — the complete system prompt for an LLM request, or the input and output for a tool call. You can switch between a flat list and a tree view that groups events by sub-agent and use filter options to focus on specific event types.

Summary view

Aggregate statistics about the chat session: total tool calls, token usage, error count, and overall duration. Great for a quick health check before diving into the logs.

Agent Flow Chart

A visual graph of the sequence of events and interactions between agents, making it easier to understand complex orchestrations. You can pan and zoom the flow chart and select any node to see details about that event.

Export, Import, and /troubleshoot

You can export a debug session to an OpenTelemetry (OTLP) JSON file to share it with others or analyze it offline, and you can import a previously exported file to view it in the Agent Debug Log panel. This is a big deal for team debugging: you can now hand off a session log the same way you'd share a stack trace.
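Because the export uses the standard OTLP/JSON trace layout (spans nested under resourceSpans, then scopeSpans), you can also post-process a session with a few lines of script. A minimal sketch, assuming that layout; the span names and the presence of a status code in Copilot's export are assumptions, so check a real exported file before relying on them:

```python
import json

def summarize_otlp(payload: dict) -> dict:
    """Count total spans and errored spans in an OTLP/JSON trace export.

    Walks the standard OTLP nesting: resourceSpans -> scopeSpans -> spans.
    A span is counted as an error if its status code is STATUS_CODE_ERROR
    (whether the Copilot export sets this field is an assumption).
    """
    total = errors = 0
    for resource in payload.get("resourceSpans", []):
        for scope in resource.get("scopeSpans", []):
            for span in scope.get("spans", []):
                total += 1
                if span.get("status", {}).get("code") == "STATUS_CODE_ERROR":
                    errors += 1
    return {"spans": total, "errors": errors}

# Tiny inline sample standing in for a real exported session file
sample = {
    "resourceSpans": [{
        "scopeSpans": [{
            "spans": [
                {"name": "llm.request", "status": {}},
                {"name": "tool.call", "status": {"code": "STATUS_CODE_ERROR"}},
            ]
        }]
    }]
}
print(summarize_otlp(sample))  # {'spans': 2, 'errors': 1}
```

In practice you would `json.load` the exported file instead of the inline sample; the walk itself stays the same.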

You can even take it one step further: the /troubleshoot skill reads the JSONL debug log files exported from a chat session and can help you understand why tools or sub-agents were used or skipped, why instructions or skills didn't load, what contributed to slow response times, and whether network connectivity problems occurred.
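Since the on-disk format is JSONL (one JSON event per line), you can also grep through it yourself before reaching for the skill. A rough sketch; the `level` and `message` field names here are purely illustrative, not the documented schema:

```python
import json

def find_errors(jsonl_text: str) -> list[str]:
    """Return the messages of error-level events from a JSONL debug log.

    Field names ('level', 'message') are hypothetical stand-ins --
    inspect an actual exported log for the real schema.
    """
    messages = []
    for line in jsonl_text.splitlines():
        if not line.strip():
            continue  # skip blank lines between events
        event = json.loads(line)
        if event.get("level") == "error":
            messages.append(event.get("message", ""))
    return messages

# Two fabricated events standing in for a real log file
log = "\n".join([
    json.dumps({"level": "info", "message": "tool call started"}),
    json.dumps({"level": "error", "message": "MCP server timed out"}),
])
print(find_errors(log))  # ['MCP server timed out']
```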

Just type /troubleshoot followed by a description of the issue and let the AI analyze its own log:

Conclusion

Agent mode has become the way I interact with Copilot for real, multi-step tasks. The more powerful my setup gets (custom agents, prompt files, MCP tool chains), the harder it becomes to reason about what's actually happening under the hood.

Thanks to the Agent Debug Log panel, I can finally understand what is going on. It brings the kind of observability you expect from your backend services (traces, spans, structured events) directly into the editor. If you're building or debugging anything in agent mode, this panel should be your first stop.

More information

Debug chat interactions
