Custom Agents & Skills
Build your own .md-based extensions for specialized workflows.
What You’ll Learn
Claude Code is not a fixed product. It is an extensible platform. By dropping Markdown files into specific directories, you add entirely new capabilities — slash commands, specialized reviewers, automated workflows, domain-specific assistants.
By the end, you’ll understand:
- The difference between skills and agents
- How to write a skill (user-invoked) and an agent (AI-invoked)
- Design patterns for both
- Tool restrictions for safety
- The lifecycle from invocation to execution
The Problem
Every team has workflows. Deploy procedures, review checklists, testing protocols. This knowledge usually lives in two places: documentation nobody reads, and one person’s head.
"How do we deploy?" → Ask Sarah
"What's our PR checklist?" → It's in the wiki... somewhere
"Database migrations?" → Dave wrote a script, check his home dir
What if this knowledge were AI-executable instructions — not just documentation, but instructions the AI follows step by step, every time?
How It Works
Skills vs Agents
┌──────────────────────────────────────────────────────┐
│ Skills (User-invoked) Agents (AI-invoked) │
│ │
│ Triggered by slash command Triggered by AI via │
│ /deploy, /review Agent tool spawn │
│ │
│ Runs in main context Runs in child context │
│ (instructions injected) (isolated subagent) │
│ │
│ Location: Location: │
│ .claude/skills/name/SKILL.md .claude/agents/x.md │
└──────────────────────────────────────────────────────┘
| Aspect | Skill | Agent |
|---|---|---|
| Triggered by | User (/skill-name) | AI (subagent spawning) |
| Context | Injected into main conversation | Isolated child context |
| Purpose | Step-by-step workflows | Specialized roles (reviewer, explorer) |
| Location | .claude/skills/name/SKILL.md | .claude/agents/name.md |
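The two locations imply a simple discovery rule: skills sit one directory deep under a fixed `SKILL.md` filename, agents are flat `.md` files. A hypothetical loader (an illustrative sketch, not Claude Code's actual implementation) could enumerate both like this:

```python
from pathlib import Path

def discover_extensions(root: str = ".claude"):
    """Sketch: list available skills and agents by their on-disk layout."""
    base = Path(root)
    # Skills: .claude/skills/<name>/SKILL.md  -> invoked as /<name>
    skills = sorted(p.parent.name for p in base.glob("skills/*/SKILL.md"))
    # Agents: .claude/agents/<name>.md        -> spawned by name
    agents = sorted(p.stem for p in base.glob("agents/*.md"))
    return skills, agents
```

With the files from this session on disk, this would report `deploy` as a skill and `security-reviewer` as an agent.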
Skill Anatomy
A skill is a Markdown document with workflow instructions:
# File: .claude/skills/deploy/SKILL.md
# Deploy Skill
1. Run `git status` to check for uncommitted changes
2. Run the full test suite with `npm test`
3. If tests fail, stop and report the failures
4. Run `npm run build` and verify no errors
5. Show the user: test results, build size, files changed
6. Ask: "Ready to deploy? [yes/no]"
7. Only if confirmed: run `git push origin main`
8. Report deployment status
## Rules
- NEVER push without user confirmation
- If any step fails, stop and report
When a user types /deploy, Claude Code finds the matching skill, injects its content, and follows the instructions.
Agent Anatomy
An agent defines a specialized role the AI can spawn as a subagent:
# File: .claude/agents/security-reviewer.md
# Security Reviewer
Review code for security vulnerabilities. Be thorough
but avoid false positives.
## Checklist
- SQL injection: raw queries with user input
- XSS: unescaped output in templates
- Auth bypass: missing middleware on protected routes
- Secrets: hardcoded API keys, passwords, tokens
- Path traversal: unsanitized file paths
## Output Format
For each finding: file, line, severity, issue, fix.
If clean: "No security issues found."
## Tools: Read, Glob, Grep only
The AI spawns this when needed:
{
  "type": "tool_use",
  "name": "Agent",
  "input": {
    "agent": "security-reviewer",
    "prompt": "Review changes in src/api/ for security issues"
  }
}
The Skill Lifecycle
┌──────────────────────────────────────────────────────┐
│ 1. User types /deploy │
│ ▼ │
│ 2. Claude Code scans .claude/skills/ for match │
│ ▼ │
│ 3. Found: .claude/skills/deploy/SKILL.md │
│ ▼ │
│ 4. Skill body injected into conversation context │
│ ▼ │
│ 5. AI reads instructions, follows step by step │
│ ▼ │
│ 6. Workflow completes (or user interrupts) │
└──────────────────────────────────────────────────────┘
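Steps 1–4 of the lifecycle can be sketched in a few lines (hypothetical function names; the real dispatch is internal to Claude Code): resolve the command to a file, read its body, and prepend it to the conversation as context for the model to follow.

```python
from pathlib import Path

def inject_skill(command: str, messages: list, root: str = ".claude") -> list:
    """Sketch of steps 1-4: match /command to a SKILL.md and inject its body."""
    name = command.lstrip("/")                        # "/deploy" -> "deploy"
    skill_file = Path(root) / "skills" / name / "SKILL.md"
    if not skill_file.exists():                       # no match: pass through unchanged
        return messages
    body = skill_file.read_text()
    # The injected body is what the AI then follows step by step (steps 5-6)
    return [{"role": "user", "content": body}] + messages
```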
Skill Design Patterns
Step-by-Step Workflow — numbered sequence with conditionals:
1. Check preconditions
2. Gather information
3. Present plan to user
4. Execute with safety checks
5. Verify results
6. Report summary
Decision Tree — branching based on project type:
Check the project type:
- If `package.json` exists → Node.js
  - If `tsconfig.json` → use `npx tsc --noEmit`
  - Else → use `node --check`
- If `requirements.txt` exists → Python
  - Use `python -m py_compile`
- If `Cargo.toml` exists → Rust
  - Use `cargo check`
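The same branching, expressed as a function (an illustrative sketch; the marker files and commands come straight from the tree above):

```python
from pathlib import Path

def pick_check_command(project_dir: str):
    """Map marker files to a type-check command, mirroring the decision tree."""
    root = Path(project_dir)
    if (root / "package.json").exists():              # Node.js project
        if (root / "tsconfig.json").exists():
            return "npx tsc --noEmit"                 # TypeScript: full type check
        return "node --check"                         # plain JS: syntax check only
    if (root / "requirements.txt").exists():          # Python project
        return "python -m py_compile"
    if (root / "Cargo.toml").exists():                # Rust project
        return "cargo check"
    return None                                       # unknown project type
```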
Template-Based Output — structured content generation:
Generate a changelog using this format:
### [version] - YYYY-MM-DD
#### Added / Changed / Fixed
Fill by analyzing git commits since the last tag.
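A minimal sketch of the fill step, assuming conventional commit prefixes (`feat:`, `fix:`) decide which section a subject line lands in; any other commit falls into Changed:

```python
def group_commits(subjects):
    """Bucket commit subject lines into changelog sections (assumed convention)."""
    sections = {"Added": [], "Changed": [], "Fixed": []}
    for s in subjects:
        if s.startswith("feat:"):
            sections["Added"].append(s.removeprefix("feat:").strip())
        elif s.startswith("fix:"):
            sections["Fixed"].append(s.removeprefix("fix:").strip())
        else:
            sections["Changed"].append(s)
    return sections
```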
Tool Restrictions
Limiting tools follows the principle of least privilege. A reviewer should not edit code. An explorer should not run commands.
## Tools
Allowed: Read, Glob, Grep
Not allowed: Edit, Write, Bash, Agent
When the AI spawns this agent, only the listed tools are accessible. This prevents a security reviewer from accidentally “fixing” the issues it finds.
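Conceptually, the restriction is an allowlist intersection applied when the subagent is created (a sketch of the idea, not the actual enforcement code):

```python
READ_ONLY = {"Read", "Glob", "Grep"}        # from the agent's "## Tools" section

def filter_tools(requested, allowed=READ_ONLY):
    """A subagent only sees the intersection of requested and allowed tools."""
    return set(requested) & set(allowed)
```

An Edit or Bash request from a read-only reviewer simply never reaches dispatch.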
Key Insight
Custom agents and skills encode your team’s workflow knowledge. They turn tribal knowledge into repeatable, consistent AI behavior.
The difference between a mediocre Claude Code setup and a powerful one is not the model — it is the quality of the skills and agents. A well-written deploy skill means every deploy follows the same safety checks. A well-written security reviewer means every PR gets the same scrutiny.
This is “infrastructure as code” applied to AI workflows: workflows as Markdown. No compilation, no framework, no deployment. Drop a file, gain a capability.
Hands-On Example
Build a review skill that spawns a security agent:
The Review Skill (.claude/skills/review/SKILL.md)
# Code Review
1. Run `git diff main...HEAD --stat` to see changed files
2. For each changed file, read the diff and check for:
- Missing error handling
- Hardcoded values that should be config
- Functions longer than 50 lines
- `console.log` statements left in
3. Spawn the security-reviewer agent on changed files
4. Compile findings:
### Summary
[1-2 sentence overview]
### Findings
| File | Line | Severity | Issue |
### Security Review
[Output from security-reviewer agent]
### Recommendation
[Approve / Request Changes]
The Security Agent (.claude/agents/security-reviewer.md)
# Security Reviewer
Check for: SQL injection, XSS, hardcoded secrets,
missing auth middleware, input validation gaps.
Output: file, line, severity, fix for each finding.
If clean: "No security issues found."
Tools: Read, Glob, Grep
Type /review and the AI follows the skill, spawns the security agent, and produces a structured report. Every review is consistent, regardless of who runs it.
What Changed
| Without Extensions | With Custom Agents & Skills |
|---|---|
| Workflows in documentation | Workflows are AI-executable |
| Consistency depends on who runs it | Same steps every time |
| New members need training | Skills encode the training |
| Reviews vary by reviewer | Agents apply the same checklist |
| Adding capabilities requires code | Adding capabilities requires a Markdown file |
Next Session
Session 24 brings it all together with Production Patterns — observability, CI/CD integration, cost management, and the maturity model for scaling AI-assisted development across your organization.