
GitHub Copilot CLI Tips & Tricks — Part 3: Parallelizing Work

In the previous posts we covered the different CLI modes and session management. This time we're looking at one of Copilot CLI's most powerful features: the /fleet command. If you've ever wished you could clone yourself to tackle several parts of a codebase at once, this is the closest thing to it.


What is /fleet?

When you send a prompt to Copilot CLI, by default a single agent works through the task sequentially. /fleet changes that model entirely.

The /fleet slash command lets Copilot CLI break a complex request into smaller tasks and run them in parallel, so independent subtasks execute concurrently instead of one after another. The main Copilot agent analyzes the prompt and determines whether it can be divided into smaller subtasks. It then acts as an orchestrator, managing the workflow and the dependencies between those subtasks, each of which is handled by a separate subagent.

In practice, this means a task that might take 20 minutes sequentially can complete in a fraction of the time — because independent chunks of work are being executed concurrently.
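As an analogy (this is not Copilot's actual implementation, just a sketch of why independent work parallelizes so well), here is the same idea in plain Python: three subtasks that don't depend on each other finish in roughly the time of the slowest one, not the sum of all three. The task names are made up for illustration.

```python
import time
from concurrent.futures import ThreadPoolExecutor

def subtask(name):
    # Stand-in for an independent unit of work,
    # e.g. a subagent writing one test file.
    time.sleep(0.2)
    return f"{name}: done"

# Hypothetical independent subtasks an orchestrator might fan out.
tasks = ["tests/user_service", "tests/auth_service", "tests/billing_service"]

start = time.perf_counter()
with ThreadPoolExecutor(max_workers=len(tasks)) as pool:
    results = list(pool.map(subtask, tasks))
parallel_time = time.perf_counter() - start

# Three 0.2 s tasks complete in about 0.2 s total instead of 0.6 s,
# because none of them waits on another's output.
print(results)
print(f"elapsed: {parallel_time:.2f}s")
```

The speedup comes entirely from the independence of the chunks, which is why /fleet pays off most on naturally divisible work.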

How to use /fleet

The typical workflow is to use /fleet after creating an implementation plan. Switch into plan mode with Shift+Tab, describe the feature or change you want, and work with Copilot to produce a structured plan. Once the plan is complete, you'll be presented with two options:

  • Accept plan and build on autopilot + /fleet — Copilot immediately spins up subagents and works autonomously to implement the plan without further input.
  • Exit plan mode and prompt myself — you're dropped back to the main prompt, where you can then type /fleet implement the plan to kick things off manually.

The first option is the faster path. The second gives you a moment to review or tweak your prompt before committing.

You can also use /fleet directly without going through plan mode first, by prefixing any prompt with the command:

/fleet add unit tests for every service in src/services/

Copilot will assess whether the work can be parallelized and assign subtasks to subagents accordingly. For something like writing tests across multiple independent service files, this is a natural fit.

Monitoring subagents with /tasks

Once /fleet kicks off, you don't have to sit in the dark wondering what's happening. Use the /tasks slash command to see a list of all background tasks for the current session, including any subtasks being handled by subagents. Navigate the list with the up and down arrow keys. For each subagent task you can:

  • Press Enter to view details — and see a summary of what was done once it completes
  • Press k to kill the process
  • Press r to remove completed or killed subtasks from the list
  • Press Esc to exit the task list and return to the main prompt

This is your control panel while /fleet is running. Make a habit of opening /tasks right after launching /fleet so you can catch a subtask that gets stuck or heads in the wrong direction early.


When to reach for /fleet

Not every task benefits from parallelization. /fleet shines when your work is naturally divisible into independent chunks.

Good candidates:

  • Writing a test suite for an existing feature — each test file can be worked on independently
  • Applying a consistent change across multiple modules (e.g., updating an import path, migrating an API version)
  • Generating boilerplate for several similar components at once
  • Running a refactor across files that don't depend on each other

Poor candidates:

  • Tasks with strict sequential dependencies — if step B needs the output of step A, parallelization won't help and may cause conflicts
  • Ambiguous or exploratory tasks — if the goal isn't clearly defined, subagents may head in diverging directions
  • Small, single-file tasks — the orchestration overhead isn't worth it for simple jobs a single agent can handle quickly
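The dependency point above can be made concrete. An orchestrator can only parallelize subtasks whose prerequisites are already done; everything else has to wait. A minimal sketch using Python's standard-library `graphlib` (the subtask names and dependencies are invented for illustration, not taken from Copilot):

```python
from graphlib import TopologicalSorter

# Hypothetical subtasks: B and C need A's output, D is fully independent.
deps = {
    "A: update shared types": set(),
    "B: migrate service imports": {"A: update shared types"},
    "C: update tests": {"A: update shared types"},
    "D: regenerate docs": set(),
}

ts = TopologicalSorter(deps)
ts.prepare()

# Collect "waves": each wave is the set of subtasks runnable in parallel.
waves = []
while ts.is_active():
    ready = sorted(ts.get_ready())  # all subtasks whose prerequisites are done
    waves.append(ready)
    ts.done(*ready)

print(waves)
# First wave: A and D can run concurrently; B and C must wait for A.
```

If every step depends on the previous one, each wave contains a single task and the schedule degenerates to purely sequential execution — which is exactly why such tasks gain nothing from /fleet.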

When you're using autopilot mode and want the quickest possible completion of a large task, /fleet is the right tool. But if your task cannot be cleanly split into independent subtasks, the main agent will handle it sequentially regardless.

Wrapping up

/fleet is the multiplier that makes Copilot CLI genuinely competitive with human multitasking. Once you've identified a task that parallelizes well, the combination of plan mode + /fleet + autopilot is one of the most productive workflows the CLI offers.

In the next post, we'll look at extending GitHub Copilot agent behavior with hooks.
