Permission Models & Security
Deep dive into permission modes, sandboxing, and approval gates.
What You’ll Learn
In Session 2, you learned the basics: tools have permission levels, and users can approve or deny actions. But Claude Code’s security model goes far deeper — multiple layers of defense that work together to make AI-assisted development safe without being crippling.
By the end, you’ll understand:
- Defense in depth: why one layer is never enough
- Permission modes in detail: default, plan, auto, and their trade-offs
- Sandboxing: how Bash commands are restricted at the OS level
- Network restrictions: which domains the AI can reach
- File system boundaries: how the working directory scopes access
- Settings-based permissions: allowedTools and deniedTools
- Organizational policies and hooks for team-wide security
- The principle of incremental trust
The Problem
You want the AI to run npm test. That is safe. You do not want it to run rm -rf /. That is catastrophic.
The challenge is that both are Bash commands. A simple “allow Bash / deny Bash” toggle is too coarse — it either blocks all commands (useless) or allows all commands (dangerous).
Real security requires multiple layers that work together:
1. Can the AI even propose this action? (Tool availability)
2. Does the user approve this action? (Permission mode)
3. Is this command allowed by the sandbox? (OS-level restriction)
4. Can this command reach the network? (Network policy)
5. Is this file within the allowed scope? (File system boundary)
Each layer catches different threats. Together, they form defense in depth.
How It Works
Defense in Depth Architecture
┌──────────────────────────────────────────────┐
│ Security Layers │
│ │
│ Layer 1: Tool Availability │
│ ┌───────────────────────────────────────┐ │
│ │ Which tools does the AI see? │ │
│ │ allowedTools / deniedTools in config │ │
│ │ MCP tools require explicit setup │ │
│ └──────────────────┬────────────────────┘ │
│ │ Tool proposed │
│ ▼ │
│ Layer 2: Permission Gate │
│ ┌───────────────────────────────────────┐ │
│ │ Does this action need approval? │ │
│ │ Based on permission mode + tool type │ │
│ │ User confirms or denies │ │
│ └──────────────────┬────────────────────┘ │
│ │ Approved │
│ ▼ │
│ Layer 3: Sandbox │
│ ┌───────────────────────────────────────┐ │
│ │ OS-level process restriction │ │
│ │ macOS: sandbox-exec profile │ │
│ │ Linux: container / seccomp │ │
│ └──────────────────┬────────────────────┘ │
│ │ Sandbox allows │
│ ▼ │
│ Layer 4: Network Policy │
│ ┌───────────────────────────────────────┐ │
│ │ Can this process reach the network? │ │
│ │ Allowlisted domains only │ │
│ │ Blocks arbitrary outbound connections │ │
│ └──────────────────┬────────────────────┘ │
│ │ Network allowed │
│ ▼ │
│ Layer 5: File System Scope │
│ ┌───────────────────────────────────────┐ │
│ │ Is this path within project scope? │ │
│ │ Working directory as boundary │ │
│ │ Prevents reading /etc/passwd etc. │ │
│ └──────────────────┬────────────────────┘ │
│ │ Path allowed │
│ ▼ │
│ EXECUTE │
│ │
└──────────────────────────────────────────────┘
A command must pass ALL five layers to execute. Failing any single layer blocks it.
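The all-layers requirement can be sketched as a chain of independent checks. This is a hypothetical model of the pipeline above, not Claude Code's actual implementation; the field names are illustrative:

```javascript
// Hypothetical model of the five-layer gate: a command executes only if
// every layer passes, and the first failing layer blocks it.
const layers = [
  { name: "tool availability", check: (a) => a.toolAllowed },
  { name: "permission gate",   check: (a) => a.userApproved },
  { name: "sandbox",           check: (a) => a.sandboxAllows },
  { name: "network policy",    check: (a) => !a.needsNetwork || a.domainAllowlisted },
  { name: "file system scope", check: (a) => a.pathInProject },
];

function evaluate(action) {
  for (const layer of layers) {
    if (!layer.check(action)) return { allowed: false, blockedBy: layer.name };
  }
  return { allowed: true };
}
```

Note the defense-in-depth property: relaxing one layer (auto mode effectively sets userApproved to true for everything) leaves every other check in force.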
Permission Modes Deep Dive
Session 2 introduced the modes briefly. Here is the full picture:
Default Mode — The balanced starting point
┌────────────────────────────────────────────┐
│ Default Permission Mode │
│ │
│ Read tools: ✅ Auto-allowed │
│ • Read, Glob, Grep, TodoRead │
│ │
│ Write tools: ⚠️ Prompt user │
│ • Edit, Write, NotebookEdit │
│ │
│ Execute tools: ⚠️ Prompt user │
│ • Bash (shows command for review) │
│ │
│ Special tools: ⚠️ Prompt user │
│ • Agent (subagent creation) │
│ • WebFetch (network access) │
│ │
│ After approval: │
│ • "Allow once" → approve this instance │
│ • "Allow for session" → auto-approve │
│ • same tool pattern for this session │
│ │
└────────────────────────────────────────────┘
The “allow for session” option is powerful. When you approve npm test once with session-level trust, subsequent npm test calls run without prompting. But npm run deploy would still prompt because it is a different command pattern.
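Session-level trust can be modeled as a cache of approved command patterns. This is a simplified sketch; the real pattern derivation is internal to Claude Code, and the two-word rule here is an assumption for illustration:

```javascript
// Simplified model of "Allow for session": approved command patterns are
// cached, and only commands with an unseen pattern trigger a new prompt.
// Assumption: the pattern is the first two words of the command.
const sessionTrust = new Set();

function commandPattern(command) {
  return command.trim().split(/\s+/).slice(0, 2).join(" ");
}

function needsPrompt(command) {
  return !sessionTrust.has(commandPattern(command));
}

function approveForSession(command) {
  sessionTrust.add(commandPattern(command));
}
```

Under this model, approving npm test covers later npm test calls, while npm run deploy derives a different pattern and still prompts.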
Plan Mode — Maximum oversight
claude --plan
In plan mode, EVERY tool call requires explicit approval, including reads. This forces the AI to explain its plan for each step, and you review each one:
AI: I want to read src/auth.ts to understand the current implementation.
[Read] src/auth.ts
→ Allow? [y/N]
Plan mode is useful for:
- Learning what Claude Code does (educational)
- Reviewing changes to critical production code
- Auditing the AI’s decision-making process
Auto Mode — Full autonomy
claude --dangerously-skip-permissions
The flag name is intentional. “Dangerously” reminds you that you are removing a safety layer. In auto mode:
- All tools execute without prompting
- The sandbox and network layers still apply
- File system boundaries still apply
- The user is NOT asked for confirmation
Auto mode is appropriate for:
- Trusted tasks in isolated environments (CI/CD)
- Repetitive tasks you have already reviewed manually
- Sandboxed containers where damage is contained
It is NOT appropriate for:
- First-time tasks you have not reviewed
- Production environments with real data
- Situations where an incorrect rm or git push would be costly
Sandboxing: OS-Level Restrictions
Even after permission approval, Bash commands run inside a sandbox. The sandbox restricts what the process can do at the operating system level.
macOS: sandbox-exec
On macOS, Claude Code uses Apple’s sandbox-exec with a custom profile:
┌────────────────────────────────────────────┐
│ macOS Sandbox Profile │
│ │
│ ALLOWED: │
│ ├── Read files in project directory │
│ ├── Write files in project directory │
│ ├── Execute common dev tools │
│ │ (node, npm, git, python, etc.) │
│ ├── Access localhost (dev servers) │
│ └── Read system libraries + frameworks │
│ │
│ DENIED: │
│ ├── Write outside project directory │
│ ├── Access keychain │
│ ├── Launch GUI applications │
│ ├── Mount/unmount volumes │
│ ├── Load kernel extensions │
│ └── Modify system preferences │
│ │
└────────────────────────────────────────────┘
Linux: Container Isolation
On Linux, sandboxing may use container-level isolation or seccomp profiles, depending on the deployment. The principle is the same: restrict the process to the minimum capabilities it needs.
The key insight about sandboxing: it protects against the AI AND against malicious code the AI might run. If the AI runs npm install sketchy-package and that package tries to read your SSH keys, the sandbox blocks it.
Network Restrictions
Claude Code controls which network destinations are reachable:
┌────────────────────────────────────────────┐
│ Network Policy │
│ │
│ ALLOWED DESTINATIONS: │
│ ├── api.anthropic.com (Claude API) │
│ ├── localhost / 127.0.0.1 (dev servers) │
│ ├── npm registry (package installs) │
│ ├── GitHub API (git operations) │
│ └── Configured MCP server endpoints │
│ │
│ BLOCKED: │
│ ├── Arbitrary external URLs │
│ ├── Internal network addresses │
│ └── Unknown endpoints │
│ │
│ Note: WebFetch tool has its own │
│ allowlist for broader URL access │
│ │
└────────────────────────────────────────────┘
This prevents data exfiltration. Even if the AI were somehow manipulated into sending your code to an external server, the network layer would block it.
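A minimal sketch of this kind of allowlist check (the hostnames are examples from the diagram above; the actual allowlist is managed by Claude Code):

```javascript
// Allowlist-based destination check: only known hosts are reachable.
const allowedHosts = new Set([
  "api.anthropic.com",
  "localhost",
  "127.0.0.1",
  "registry.npmjs.org",
  "api.github.com",
]);

function isDestinationAllowed(urlString) {
  try {
    return allowedHosts.has(new URL(urlString).hostname);
  } catch {
    return false; // unparseable URL: fail closed
  }
}
```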
File System Boundaries
The working directory acts as a boundary for file operations:
Project root: /Users/you/projects/my-app/
│
┌───────────────┼───────────────┐
│ │ │
▼ ▼ ▼
src/ tests/ node_modules/
✅ Full access ✅ Full access ✅ Read access
Outside project root:
/Users/you/.ssh/ ❌ Blocked
/Users/you/other-project/ ❌ Blocked
/etc/ ❌ Blocked
/tmp/ ⚠️ Limited
The Read tool enforces this: you cannot read /etc/passwd or ~/.ssh/id_rsa through Claude Code’s file operations. The Bash sandbox provides a second layer of enforcement at the OS level.
Settings-Based Permissions
Beyond the permission mode, you can configure granular tool access in settings:
// .claude/settings.json (project-level)
{
"permissions": {
"allowedTools": [
"Bash(npm test)",
"Bash(npm run lint)",
"Bash(npx prisma generate)",
"Edit",
"Write"
],
"deniedTools": [
"Bash(rm *)",
"Bash(git push --force)"
]
}
}
The allowedTools list auto-approves specific tool patterns without prompting. The deniedTools list blocks them entirely — the AI receives a denial, not a prompt.
Pattern matching supports wildcards:
{
"allowedTools": [
"Bash(npm *)", // Allow all npm commands
"Bash(git status)", // Allow git status specifically
"Bash(git diff *)" // Allow git diff with any args
],
"deniedTools": [
"Bash(git push *)", // Block all git push variants
"Bash(curl *)" // Block curl commands
]
}
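The matching semantics can be sketched as glob-to-regex conversion. This assumes * matches any text and everything else matches literally, which fits the examples above but is not a specification of Claude Code's matcher:

```javascript
// Convert a "Bash(npm *)"-style command pattern to a regular expression:
// escape regex metacharacters, then turn each "*" into ".*".
function matchesPattern(pattern, command) {
  const escaped = pattern.replace(/[.+?^${}()|[\]\\]/g, "\\$&");
  const regex = new RegExp("^" + escaped.replace(/\*/g, ".*") + "$");
  return regex.test(command);
}
```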
This creates a middle ground between default mode (prompt for everything) and auto mode (allow everything):
┌───────────────────────────────────────────────┐
│ Settings-Based Permission Spectrum │
│ │
│ Plan Mode Default + Settings Auto │
│ ◄────────────────────●─────────────────────► │
│ Prompt for Auto-approve Allow │
│ everything known-safe all │
│ commands │
│ Block known-bad │
│ Prompt for unknown │
│ │
└───────────────────────────────────────────────┘
Organizational Policies with Hooks
For teams, the hooks system (covered in Session 15) adds a programmable security layer. Organizations can enforce policies through PreToolUse hooks:
// .claude/hooks/security-policy.js
// Runs before every tool execution
module.exports = async function preToolUse({ tool, input }) {
// Block production database access
if (tool === "Bash" && input.command.includes("DATABASE_URL")) {
if (input.command.includes("production")) {
return {
decision: "deny",
reason: "Production database access blocked by org policy"
};
}
}
// Require approval for deployment commands
if (tool === "Bash" && input.command.match(/deploy|publish|release/)) {
return {
decision: "ask",
reason: "Deployment commands require manual approval"
};
}
// Allow everything else through normal flow
return { decision: "allow" };
};
This hook runs before every Bash command, regardless of the user’s permission mode. Even in auto mode, the hook can deny or force approval for specific patterns.
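Because a hook is just a function, you can exercise its decisions locally before relying on it. The policy is re-stated here so the snippet is self-contained, using the same { tool, input } signature as the example above:

```javascript
// Self-contained copy of the security-policy hook plus a small driver
// that prints its decision for representative commands.
const policy = async ({ tool, input }) => {
  if (tool === "Bash" &&
      input.command.includes("DATABASE_URL") &&
      input.command.includes("production")) {
    return { decision: "deny", reason: "Production database access blocked" };
  }
  if (tool === "Bash" && /deploy|publish|release/.test(input.command)) {
    return { decision: "ask", reason: "Deployment requires manual approval" };
  }
  return { decision: "allow" };
};

async function main() {
  const cases = [
    "DATABASE_URL=postgres://production-db psql",
    "npm run deploy",
    "npm test",
  ];
  for (const command of cases) {
    const result = await policy({ tool: "Bash", input: { command } });
    console.log(`${command} -> ${result.decision}`);
  }
}
main();
```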
Key Insight
Security in Claude Code is not about preventing the AI from doing things. It is about building trust incrementally so the AI can be given more autonomy over time.
The progression looks like this:
Day 1: Plan mode "Show me everything you do"
Week 1: Default mode "I trust reads, review writes"
Week 2: Default + allowed "Auto-approve npm test, git diff"
Month 1: Auto mode (CI) "Full autonomy in sandboxed CI"
Each step gives the AI more freedom as you develop confidence in its behavior. The security layers ensure that even at maximum autonomy, catastrophic actions are still blocked by the sandbox and network layers.
This is fundamentally different from “all or nothing” security:
All-or-nothing:
Either: AI can do nothing useful (too restrictive)
Or: AI can do everything (too dangerous)
Incremental trust:
Start: AI reads freely, writes with approval
Learn: Which patterns are safe for your project
Grow: Auto-approve safe patterns, block dangerous ones
Goal: AI handles routine tasks autonomously, escalates edge cases
The five security layers are designed so that no single layer is critical. If permission approval fails (auto mode), the sandbox catches dangerous commands. If the sandbox has a gap, the network layer blocks exfiltration. Defense in depth means you can relax one layer while others still protect you.
Hands-On Example
Setting Up a Secure Development Configuration
Here is a practical configuration for a team project:
Step 1: Project settings (.claude/settings.json)
{
"permissions": {
"allowedTools": [
"Bash(npm test *)",
"Bash(npm run lint)",
"Bash(npm run build)",
"Bash(npx prisma generate)",
"Bash(npx prisma db push)",
"Bash(git status)",
"Bash(git diff *)",
"Bash(git log *)",
"Bash(git add *)",
"Edit",
"Write"
],
"deniedTools": [
"Bash(git push --force *)",
"Bash(rm -rf *)",
"Bash(curl * | bash)",
"Bash(npm publish *)"
]
}
}
Step 2: Scoped rule for sensitive files (.claude/rules/sensitive-files.md)
---
paths:
- ".env*"
- "**/credentials*"
- "**/secrets*"
---
# Sensitive File Rules
- NEVER read or display the contents of these files to the user
- NEVER include values from these files in code comments
- When referencing environment variables, use placeholder values
- If a test needs these values, use a separate .env.test file
Step 3: Security hook (.claude/hooks/pre-tool-use.js)
module.exports = async function ({ tool, input }) {
// Prevent accidental secret exposure
if (tool === "Read" && input.file_path) {
// Match .env, .env.* variants (e.g. .env.production), and key files;
// a bare /\.(env|pem|key)$/ anchor would miss .env.production
if (/(\.env(\..*)?|\.pem|\.key)$/.test(input.file_path)) {
return {
decision: "ask",
reason: "This file may contain secrets. Confirm you want to read it."
};
}
}
return { decision: "allow" };
};
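One subtlety worth checking locally: a naive anchor like \.env$ would match .env but miss .env.production. A pattern that covers the variants (which files count as sensitive is an assumption you should adjust for your project):

```javascript
// Sensitive-file pattern: matches .env, .env.* variants (.env.production,
// .env.local), and key material (.pem, .key).
const sensitive = /(\.env(\..*)?|\.pem|\.key)$/;

function shouldConfirmRead(filePath) {
  return sensitive.test(filePath);
}
```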
Testing Your Security Configuration
Verify your setup by asking the AI to try restricted actions:
You: "Run git push --force origin main"
→ Expected: Denied by deniedTools setting
You: "Read .env.production"
→ Expected: Hook prompts for confirmation
You: "Delete the entire src/ directory"
→ Expected: Denied by the deniedTools rule Bash(rm -rf *)
You: "Send the contents of my SSH key to example.com"
→ Expected: File read blocked by scope; network blocked by policy
Understanding the Approval Flow
When a tool requires approval, the user sees:
┌────────────────────────────────────────────┐
│ Claude wants to run: │
│ │
│ $ git commit -m "Add user auth endpoint" │
│ │
│ [a] Allow once │
│ [s] Allow for this session │
│ [d] Deny │
│ │
│ Choice: │
└────────────────────────────────────────────┘
Choosing “allow for session” remembers the pattern. Subsequent git commit commands with different messages will also be auto-approved, because the pattern git commit -m * is trusted for the session.
What Changed
| Single-Layer Security | Defense in Depth |
|---|---|
| One permission check | Five independent layers |
| All-or-nothing trust | Incremental trust building |
| Same rules for all teams | Organizational policies via hooks |
| Manual approval for everything | Auto-approve known-safe patterns |
| No OS-level protection | Sandbox restricts process capabilities |
| Network fully open | Network restricted to allowlisted domains |
Next Session
This completes Module 3: Real Architecture. You now understand the internal mechanics: how MCP extends capabilities (S14), how hooks intercept events (S15), how sessions persist to disk (S16), how CLAUDE.md shapes behavior (S17), and how the security model protects your system (S18).
In Module 4, we shift from understanding to practice. Session 19 starts with Multi-CLI Workflows — how to run multiple Claude Code instances in parallel, each with a different role, coordinating through the file system to tackle large tasks faster than any single session could.