
How I built a custom agent skill to configure Application Insights

If you've ever found yourself repeating the same Azure setup ritual — adding the Application Insights SDK, wiring up telemetry, configuring sampling rules — you already know the pain. It's not hard, but it's tedious. Every new service needs the same scaffolding. Every new team member has to learn the same conventions.

That's exactly what I solved with a custom skill. Now, when I need to instrument a service, I just tell Copilot to configure Application Insights, and it does everything exactly the way our team expects. No extra prompting, no re-explaining our conventions. It just works.

This post explains what Skills are, how they work inside VS Code, and how to build one for your own team — using my Application Insights skill as a hands-on example.

What is an agent skill?

An agent skill is a folder of instructions, scripts, and reference files that teaches your AI agent how to handle a specific task. Think of it as institutional knowledge made executable. Instead of pasting a wall of context into every conversation, you define it once in a SKILL.md file, and Copilot (or Claude) loads it automatically when the task is relevant.

Skills can be as simple as a few lines of instructions or as complex as multi-file packages with executable code. The best skills encode your team's standards in a reusable, shareable package — essentially turning Claude from a general-purpose assistant into a specialized expert for a specific workflow.

Skills were originally introduced in Claude but are now also available in VS Code through GitHub Copilot's Agent Skills integration. The format is the same everywhere, so a skill you write once is portable.

How agent skills work in VS Code

Agent Skills is an open standard that works across multiple AI agents, including GitHub Copilot in VS Code, GitHub Copilot CLI, and the GitHub Copilot coding agent.

In practice, here's what happens when you use a skill in VS Code:

  1. You open the Chat panel and start a new conversation.
  2. VS Code detects which skills are available and their descriptions.
  3. When you describe a task, the agent automatically loads the most relevant skill based on those descriptions.
  4. The agent follows your skill's instructions — running bundled scripts if needed — and produces output consistent with your defined standards.

Skills are stored inside .github/skills/ and use a SKILL.md file that defines the skill's behavior.

The anatomy of a SKILL.md file

Every skill starts with a SKILL.md file containing two parts: YAML frontmatter and a Markdown body.

```markdown
---
name: your-skill-name
description: One sentence explaining what the skill does and when to use it.
---

# Skill Title

Instructions, context, examples, and anything else an agent needs to do this job well.
```

The YAML frontmatter holds the required name and description fields. The Markdown body is the second level of detail — your agent accesses it when performing the task.

Building the Application Insights skill

Here's how I structured my Application Insights skill. It handles three things: adding the right NuGet packages, generating the correct configuration boilerplate, and applying our team's conventions around dependency tracking, log levels, and so on.

Start by creating a skill directory at the root of your repository:

mkdir -p .github/skills/application-insights

Create a SKILL.md Markdown file:
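The exact file is specific to our team, but here is a minimal sketch of what it could look like. The skill name and description are taken from the conventions described in this post; the reference file names (references/angular.md and friends) are illustrative, not our literal file layout:

```markdown
---
name: application-insights
description: Use when setting up telemetry, adding the Application Insights SDK, or instrumenting a new Angular, ASP.NET Core, or Worker service with our team's conventions.
---

# Configure Application Insights

Detect the project type first, then follow the matching reference file:

- Angular app: see references/angular.md
- ASP.NET Core app: see references/aspnetcore.md
- Worker service: see references/worker.md

Always apply our conventions: enable dependency tracking, configure sampling,
and gate telemetry for the development environment.
```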

We mainly support Angular, ASP.NET Core, and Worker Service apps, so we isolated the framework-specific instructions in separate files:
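The resulting layout looks roughly like this (the file names under references/ and scripts/ are illustrative):

```
.github/
└── skills/
    └── application-insights/
        ├── SKILL.md
        ├── references/
        │   ├── angular.md
        │   ├── aspnetcore.md
        │   └── worker.md
        └── scripts/
            └── get-connectionstring.ps1
```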

I also included a PowerShell script to fetch the Application Insights connection string from Azure:
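The actual script is tied to our environment, but a minimal sketch of the idea looks like this. It assumes the Az.ApplicationInsights module and a signed-in session; the resource group and resource names are placeholders:

```powershell
# NOTE: hypothetical values; replace with your own resource group and resource name.
$resourceGroup   = "rg-observability"
$appInsightsName = "appi-my-service"

# Requires the Az.ApplicationInsights module and a signed-in session (Connect-AzAccount).
$resource = Get-AzApplicationInsights -ResourceGroupName $resourceGroup -Name $appInsightsName

# Output the connection string so the agent can drop it into the app's configuration.
$resource.ConnectionString
```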

Remark: you'll have to tweak this PowerShell script to your needs, as I hardcoded some configuration values specific to our setup.

Tips & tricks

Write descriptions like trigger conditions. The description isn't a tagline — it's what the agent reads to decide when to load your skill. "Use when setting up telemetry, adding the Application Insights SDK, or instrumenting a new service" is far more effective than "Application Insights helper."

Keep skills focused. Creating separate skills for different workflows is better than a single skill meant to do everything. Focused skills compose better than large ones. I have a separate skill for configuring logging with Serilog, and they work well together automatically.

Include a validation checklist. Adding a checklist at the end of your instructions gives the agent a concrete definition of done. It dramatically reduces the chance of skipped steps.
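For example, my Application Insights skill ends with something along these lines (abbreviated):

```markdown
## Validation checklist

- [ ] Application Insights SDK package added and restored
- [ ] Connection string read from configuration, not hardcoded
- [ ] Sampling configured according to our defaults
- [ ] Telemetry gated for the development environment
- [ ] Build passes and the app starts without telemetry errors
```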

Use examples. Include two or three concrete code examples in your SKILL.md. This shows what success looks like and improves consistency.

Start simple, then add scripts. Your instructions should be structured, scannable, and actionable. Start with basic instructions in Markdown before adding complex scripts. My Application Insights skill worked well as pure Markdown for weeks before I added the project-detection script.

Test with different phrasings. After uploading, try triggering the skill with different prompts: "add telemetry," "instrument this service," "configure observability." If the skill doesn't activate reliably, broaden the description and add more use cases.

And one last tip:

Use the Skill-Creator skill: The skill-creator skill guides you through creating well-structured skills. It asks clarifying questions, suggests description improvements, and helps format instructions properly.

Putting it together

I only started using skills recently, but our teams are already seeing the benefits of automating common tasks this way.

The Application Insights skill I built means I never have to think about whether the sampling config is right, or whether someone remembered to gate telemetry for the development environment. Claude knows our conventions, and it applies them every time.

If you have workflows that follow a consistent pattern — configuring infrastructure, generating boilerplate, enforcing code standards — that's a candidate for a skill. The time to write a good SKILL.md for your team pays for itself the first week.

More information

About Agent Skills - GitHub Docs

agentskills/agentskills: Specification and documentation for Agent Skills

How to create Skills for Claude: steps and examples | Claude
