With the introduction of agent skills, we can teach our AI agent to handle our most repetitive and specialized workflows. After adding context through an agent.md file, integrating tool calls using MCP, and creating our own agents, this is a logical next step in defining your AI-enabled software development lifecycle. Here's everything you need to know to get started.

## What are agent skills?

Agent skills are folders of instructions, scripts, and resources that GitHub Copilot can load automatically when relevant to your prompt. Think of them as reusable "playbooks" you write once and invoke repeatedly, without having to re-explain the context every time.

Unlike custom instructions, which set broad coding guidelines that apply across nearly every task, skills are meant for specialized, on-demand capabilities: things like running a specific test suite, converting file formats, generating components, or following a custom deployment checklist.

## How to create your firs...
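As a rough sketch of what such a "playbook" folder can look like, here is a minimal skill consisting of a single `SKILL.md` with frontmatter that tells the agent when to load it. The folder path, skill name, and checklist steps below are illustrative assumptions, not an official template:

```markdown
<!-- .github/skills/run-e2e-tests/SKILL.md (path and name are examples) -->
---
name: run-e2e-tests
description: Runs the project's end-to-end test suite and summarizes any failures.
---

# Run the E2E test suite

1. Install dependencies with `npm ci`.
2. Run `npm run test:e2e`.
3. Summarize failing specs and suggest likely causes.
```

Because the frontmatter `description` is what the agent matches against your prompt, writing it as a concrete, task-shaped sentence is what makes the skill load "automatically when relevant."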
GitHub Copilot CLI is my go-to coding agent when I work directly from the terminal. It understands my codebase, proposes edits, runs commands, and helps me move faster without leaving the command line. Because I care about privacy, offline workflows, and custom model experimentation, I decided to try running Copilot CLI entirely on local LLMs using Ollama. No cloud dependency. No API keys. Just my machine, a local model, and my workflow. In this post, I'll walk through how to set it up and how to use it effectively.

## Why combine Copilot CLI with Ollama?

Copilot CLI gives you a powerful agentic interface for your codebase. Ollama gives you a fast, local model runtime with support for dozens of open models. Together, you get:

- Local-first AI coding: keep your code and prompts on your machine
- Predictable performance: no rate limits or network delays
- Model flexibility: swap between Qwen, Llama, Mistral, Gemma, and more
- Agentic workflows: Copilot CLI can edit...
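The setup boils down to running a model locally with Ollama and pointing Copilot CLI at it. The commands below are a sketch under stated assumptions: the model tag is just an example, `OLLAMA_HOST` is Ollama's standard endpoint variable, and the exact way your Copilot CLI version selects a local provider may differ, so check its documentation rather than treating this as definitive:

```shell
# Pull a local coding model (the model tag here is an example choice)
ollama pull qwen2.5-coder:7b

# Start the Ollama server if it is not already running as a service;
# by default it listens on http://localhost:11434
ollama serve &

# Sanity check: list the models Ollama has available locally
ollama list

# Tell tooling where the local endpoint lives (Ollama's standard
# environment variable), then launch Copilot CLI and pick the local
# model from its model selection -- the selection mechanism varies
# by CLI version, so this last step is an assumption
export OLLAMA_HOST="http://localhost:11434"
copilot
```

Keeping the model small (a 7B-class model) is a reasonable starting point: agentic loops issue many requests, so local latency per request matters more than raw benchmark scores.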