The Man AI Cannot Measure: 6,009 Sessions in 5 Days
When Claude Code's Insights system froze while analyzing one user's data, what kind of usage pattern was behind it? A technical case study of Bootstrap Kit architecture, multi-session orchestration, and AI collaborative workflows.
When the Insights System Met the User It Couldn’t Analyze
What is Insights? Claude Code’s new /insights feature analyzes your usage data—including session counts, message volumes, tool invocation frequency, code output, and more—then generates a personalized usage report telling you “how you use Claude Code.”
“What data does it mainly analyze? The system froze a few days ago and I cleared some user data…”
It sounded like an ordinary technical-support opening. But when we ran /insights against his usage data, the system froze completely after merging the backup data.
“Haha, it just freezes after the merge. I’ve become the man AI cannot measure. And this is only one of my computers—I use three computers daily.”
This made us curious: what kind of usage pattern could make an analytics tool just give up?
The Data Speaks: What Level of Usage Is This?
Let’s look at the data from a single computer, within 5 days:
| Metric | Single Computer (Measured) | Estimated 3-Computer Total | Notes |
|---|---|---|---|
| Sessions | 6,009 | ~18,000 | Average 3,600+ sessions per day |
| Messages | 35,204 | ~105,000 | 21,000+ messages per day |
| Total Hours | 4,988 hours | ~15,000 hours | 5 days = 625 days of usage |
| Bash Invocations | 158,490 | ~475,000 | 95,000+ commands per day |
| Commits | 2,290 | ~6,900 | 1,380 commits per day |
| Lines Added | +5,791,536 | ~17 million | Average 3.4 million lines per day |
| Files Touched | 31,453 | ~94,000 | 13 files per minute |
| Browser Automation | 12,000+ | ~36,000 | Deep Playwright MCP integration |
⚠️ Important Note: The above is data from a single computer only.
He uses three computers simultaneously for daily development.
The “Estimated 3-Computer Total” column assumes similar usage intensity across all three computers. Actual numbers may be higher.
If the usage across three computers is comparable, this means:
- ~18,000 Claude Code sessions in 5 days
- Total hours equivalent to 625 days (over 20 months) of 24/7 usage
- 3.4 million lines of code per day
This is a remarkable usage density.
Decoded: A Carefully Designed Agent Collaboration Architecture
The “messages” number in the report needs reinterpretation. When you use commands like /cms auto (Claude Multi-Session), a single command triggers:
/cms auto "Implement feature X"
├── Orchestrator Agent (planning)
│ └── Generates multiple messages
├── Implementer Agent (implementation)
│ └── Generates multiple messages
├── Validator Agent (validation)
│ └── Generates multiple messages
└── Fixer Agent (fixing)
└── Generates multiple messages
One human command = dozens of system messages.
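As a hedged sketch of that fan-out — the agent names follow the diagram above, but the message counts, class names, and API are invented for illustration:

```python
from dataclasses import dataclass, field

@dataclass
class Message:
    role: str      # which agent produced it
    content: str

@dataclass
class Transcript:
    messages: list = field(default_factory=list)

    def log(self, role: str, content: str) -> None:
        self.messages.append(Message(role, content))

def run_pipeline(task: str, transcript: Transcript) -> None:
    """One human command fans out into many agent messages."""
    # Hypothetical stages; each agent emits several messages as it works.
    for role, steps in [
        ("orchestrator", ["plan created", "subtasks assigned"]),
        ("implementer",  ["patch drafted", "tests written"]),
        ("validator",    ["tests run", "2 failures found"]),
        ("fixer",        ["failures patched", "all green"]),
    ]:
        for step in steps:
            transcript.log(role, f"{task}: {step}")

t = Transcript()
run_pipeline("Implement feature X", t)
# One human command produced 8 agent messages in this toy pipeline
assert len(t.messages) == 8
```

Scale the per-agent message counts up to realistic values and one `/cms auto` easily accounts for dozens of logged messages.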
But this doesn’t mean these messages don’t need human review.
“I basically read all conversations and sessions quite carefully. This all relies on our meticulously designed Agents/Skills, and it’s usage that breaks the framework.”
The key insight here is:
1. Carefully Designed Agent Architecture
These Agents weren’t written casually. Each one is designed for:
- Standardized output format — Enables rapid human scanning
- Key information highlighted — Errors, warnings, success at a glance
- Clear hierarchy — Know when intervention is needed
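A hedged sketch of what such a standardized output contract might look like — the dataclass, fields, and status markers here are illustrative, not the kit's actual format:

```python
from dataclasses import dataclass
from enum import Enum

class Level(Enum):
    SUCCESS = "✅"
    WARNING = "⚠️"
    ERROR = "❌"

@dataclass
class AgentReport:
    agent: str
    level: Level
    summary: str
    needs_human: bool = False   # "know when intervention is needed"

    def render(self) -> str:
        # One fixed line shape per report → a human can scan hundreds fast
        flag = " [REVIEW]" if self.needs_human else ""
        return f"{self.level.value} {self.agent}: {self.summary}{flag}"

reports = [
    AgentReport("validator", Level.SUCCESS, "128 tests passed"),
    AgentReport("fixer", Level.ERROR, "migration failed", needs_human=True),
]
for r in reports:
    print(r.render())
```

The point of a contract like this is that every agent's output looks identical in shape, so "reading all sessions carefully" becomes scanning, not parsing.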
2. Timeline: The Git History Shows Who Got There First
According to Bootstrap Kit’s git commit history, here’s the exact timeline:
═══════════════════════════════════════════════════════════════════
Bootstrap Kit vs Anthropic Official Features
═══════════════════════════════════════════════════════════════════
2025-10-19 Bootstrap Kit v1.0.0 Released
├── Custom Skills System
├── Auto-Dev Orchestrator
├── Parallel Agents Support
└── Hooks System
│
│ ← 2-3 weeks before Skills v1
▼
Early Nov 2025 Anthropic Skills v1 Released
│
│
2025-11-07 Bootstrap Kit adds Auto-Cycle ⭐
├── Continuous Execution Engine (v2.1)
└── Multi-Session Orchestration (CMS predecessor)
│
│ ← About 2 months before CoWork
│
2026-01-07 Anthropic Skills v2 (Claude Code 2.1.0) Released
│
2026-01-09 Bootstrap Kit Multi-CLI Architecture (v1.4.0)
└── Cross-CLI Orchestration (Claude/Gemini/Codex)
│
2026-01-11 Auto-Cycle splits into normal + CMS
└── CMS = Multi-Session dedicated version
│
│
▼
2026-01-12 Anthropic CoWork Released (Official Solution)
═══════════════════════════════════════════════════════════════════
Timeline Comparison:
| Feature | Community Solution | Official Solution | Time Difference |
|---|---|---|---|
| Custom Skills | 2025-10-19 | Early Nov 2025 | ~2-3 weeks |
| Parallel Agents | 2025-10-20 | — | Community original |
| Multi-Session Orchestration | 2025-11-07 (Auto-Cycle) | 2026-01-12 (CoWork) | ~2 months |
| Multi-CLI Routing | 2026-01-09 | — | Community original |
⚠️ Important Clarification: CMS’s predecessor was Auto-Cycle (2025-11-07), which already had Multi-Session orchestration capabilities. CMS was just an evolution of Auto-Cycle, not a new feature.
This means: Bootstrap Kit’s Multi-Session orchestration predates Anthropic’s CoWork by a full 2 months.
This isn’t coincidence. This is parallel evolution—a power user and Anthropic’s engineering team independently arriving at similar solutions.
Why Earlier Than Official?
“Because I’m a heavy user. I use it seriously, not casually.”
This reveals an interesting phenomenon in product development: different roles, different perspectives.
Product Engineer's Role:
├── Design general features for millions of users
├── Consider stability, security, compatibility
├── Progress methodically according to product roadmap
└── Requires complete testing and documentation
Power User's Role:
├── Only needs to solve their own pain points
├── Can accept "works for now" solutions
├── Immediately builds fixes when encountering problems
└── Rapid iteration, improving while using
When you drive 7,000+ messages per day through each of three computers simultaneously:
- Pain points come earlier — Context limits, single session bottlenecks
- Stronger motivation to solve — Can’t work without solving them
- Shorter validation cycles — You’re your own tester
Heavy usage → Earlier pain points → Immediately build solutions → Continue heavy usage
↑ │
└────────────────────────────────────────────────────────────────────┘
This is why Auto-Cycle came 2 months before CoWork.
Not a difference in capability, but a difference in role and pace.
Anthropic is building stable infrastructure for everyone; power users are building escape routes out of their own extreme use cases.
Both eventually arrive at similar solutions—whoever hits the problem first, solves it first.
3. CMS vs CoWork: Different Design Philosophies
Official CoWork:
- Designed for general users' collaboration mode
- Emphasizes ease of use and stability
- Official support and maintenance
CMS (Claude Multi-Session):
- Designed for specific heavy usage scenarios
- Cross-process context isolation
- Supports long continuous iterations
- Community-driven experimental solution
CoWork and CMS solve similar problems but from different angles. This “parallel evolution” phenomenon is common in open source communities.
4. Efficiency Multiplier: 1 Person = 30-50 Claude Codes
CMS’s design allows a single user to manage more parallel tasks simultaneously:
Traditional: 1 person × 3-5 windows = 3-5 Claude Code instances
CMS orchestration: 1 person × 3-5 windows × automated orchestration = 30-50 equivalent instances
This explains why the data volume is so staggering—it’s not that work speed increased, but that many Agents are working in parallel simultaneously.
Three computers × multiple windows each × CMS orchestration
= 90-150 equivalent Claude Code instances
This is a “one-person army” effect achieved through architectural design.
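The multiplier above is just multiplication, but a toy function makes the assumed fan-out explicit. Note the ~10× orchestration factor is reverse-engineered from the article's 30-50 figure, not measured:

```python
def equivalent_instances(computers: int, windows_per_computer: int,
                         orchestration_factor: int) -> int:
    """Back-of-envelope 'one-person army' arithmetic from the article.

    orchestration_factor is an assumption: how many effective Claude Code
    instances one orchestrated window fans out into.
    """
    return computers * windows_per_computer * orchestration_factor

# One computer, 3-5 windows, ~10x fan-out per window → 30-50
assert equivalent_instances(1, 3, 10) == 30
assert equivalent_instances(1, 5, 10) == 50
# Three computers at the same intensity → 90-150
assert equivalent_instances(3, 3, 10) == 90
assert equivalent_instances(3, 5, 10) == 150
```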
5. The Real Work Is in “Design”
Of those 105,000 messages, most are products of Agent-to-Agent collaboration. But what makes it all work is the carefully designed:
- 36 Commands
- 38 Skills
- 44 Agents
- Complete Hooks system
- Standardized output formats
Message volume is the result; design capability is the cause.
Usage Pattern: Not Conversation, But Orchestration
The most accurate description from the report:
“You delegate ambitious, large-scale automation tasks to Claude and supervise with minimal intervention, treating it as an autonomous execution engine rather than a conversational assistant.”
This isn’t ordinary “asking AI questions” usage. This is:
1. Multi-Clauding (Parallel Sessions)
- Concurrent sessions: 76
- Overlap events: 140
- Share of total messages: 5%
He runs multiple Claude Code sessions simultaneously, processing different tasks in parallel.
2. Three-Computer Workstation Mode
This isn’t “switching computers to work”—it’s three computers running simultaneously:
- Computer A: Security research + GitHub scanning
- Computer B: Website development + content production
- Computer C: Other projects + experiments
Each computer has multiple Claude Code sessions running simultaneously, forming a distributed work environment.
3. Large-Scale Automation
One session’s achievements:
- Fixed 20 security vulnerabilities
- Across 3 subprojects
- Commit + Push
- Simultaneously started writing a blog article documenting it
4. GitHub-Scale Scanning
“Fun facts” from the report:
“User unleashed Claude on a ‘mega scan’ hunting for API keys across GitHub like a digital truffle pig”
“Phase 5 of an escalating large-scale API key scanning operation across GitHub repositories — the kind of ambitious security research that makes you wonder what Phases 1-4 looked like”
Phase 5. That means he had already iterated through at least five escalating phases of a large-scale GitHub API key scanning operation.
Bootstrap Kit: The Architecture Making It All Possible
His ~/.claude/ directory isn’t ordinary configuration—it’s a complete Claude Bootstrap Kit:
~/.claude/
├── CLAUDE.md ← Core policy definitions
├── commands/ ← 36 custom commands
├── skills/ ← 38 professional skills
├── agents/ ← 44 specialized Agents
├── hooks/ ← Lifecycle hooks
├── docs/ ← Complete documentation system
└── ... ← More automation components
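A minimal sketch of how such a layout could be inventoried, assuming one markdown file per command/skill/agent — the real kit's file layout may well differ:

```python
from pathlib import Path

def count_components(claude_dir: Path) -> dict:
    """Count Bootstrap-Kit-style components under a ~/.claude layout.

    Assumes each command/skill/agent lives as a single .md file directly
    inside its folder; that layout is a guess, not the kit's spec.
    """
    counts = {}
    for kind in ("commands", "skills", "agents"):
        folder = claude_dir / kind
        counts[kind] = sum(1 for _ in folder.glob("*.md")) if folder.is_dir() else 0
    return counts

# Usage (on a machine with such a layout):
# print(count_components(Path.home() / ".claude"))
```

Run against the setup described here, a counter like this would report 36 / 38 / 44.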
Core Philosophy: 5 Policies
1. Allow All (Complete Openness)
All local operations are allowed directly, with no restrictions.
Principle: local operations are safe (Git can revert them), so don't waste time asking.
2. Efficiency First
IF operation is "remotely irreversible" (git push --force, DB DROP TABLE):
→ Brief confirmation
ELSE:
→ Execute directly, 1-2 sentence explanation
3. Parallel Processing
5 Agents executing simultaneously = max(individual time), not sum(all times)
4. No Interruption
Execute proactively; don't just suggest.
Developing a new feature → directly launch 5 parallel agents
5. Database First
BEFORE any database operation:
✅ Verify table/columns exist
✅ Check latest schema
❌ Absolutely never assume structure exists
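Policy 2's gate — the "Efficiency First" rule — is easy to sketch. The pattern list below is hypothetical; the real policy presumably matches far more cases:

```python
# Hypothetical patterns for "remotely irreversible" operations;
# the actual policy's matching rules are not published here.
REMOTELY_IRREVERSIBLE = (
    "git push --force",
    "drop table",
    "drop database",
    "rm -rf /",
)

def needs_confirmation(command: str) -> bool:
    """Efficiency First: only remotely irreversible operations pause for
    a brief confirmation; everything else executes directly."""
    lowered = command.lower()
    return any(pattern in lowered for pattern in REMOTELY_IRREVERSIBLE)

assert needs_confirmation("git push --force origin main")
assert needs_confirmation("psql -c 'DROP TABLE users'")
assert not needs_confirmation("git commit -m 'wip'")   # local; Git can revert
assert not needs_confirmation("pytest -q")             # read-only, safe
```

The asymmetry is the whole point: confirmation is reserved for the handful of operations that no local undo can rescue.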
Workflow: One-Click Complex Task Launch
His automation commands turn complex work into single-line instructions:
| Command | Function |
|---|---|
| /workflow | 5-step complete development flow |
| /speckit | Specification-driven development (auto-executes 5 phases) |
| /auto | Automated development cycle |
| /solve-github-issue | Issue to PR, fully automated |
| /commit-push-pr | Commit + Push + PR in one click |
| /diagnose | Unified error diagnosis |
Each command is backed by carefully designed Agent collaboration.
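The Parallel Processing policy's claim — wall time ≈ max of the individual agent times, not their sum — is easy to demonstrate with plain asyncio. The "agents" here are just sleeps standing in for real work:

```python
import asyncio
import time

async def agent(name: str, seconds: float) -> str:
    await asyncio.sleep(seconds)   # stand-in for real agent work
    return f"{name} done"

async def main() -> list:
    durations = [0.1, 0.2, 0.3, 0.25, 0.15]
    start = time.perf_counter()
    # Launch all five "agents" concurrently
    results = await asyncio.gather(
        *(agent(f"agent-{i}", d) for i, d in enumerate(durations))
    )
    elapsed = time.perf_counter() - start
    # Wall time tracks max(durations) ≈ 0.3s, far below sum(durations) = 1.0s
    assert elapsed < sum(durations)
    return results

asyncio.run(main())
```

The same scheduling logic is why five parallel agents finish in roughly the time of the slowest one.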
Why Did Insights Freeze?
When we merged backup data:
| Data | Amount |
|---|---|
| History Records | 4,522 lines (restored from 950 lines) |
| Project Directories | 56 projects |
| Session Data | 11 GB |
11 GB of session data. This volume exceeded the Insights system’s design expectations.
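One plausible contributor — purely a guess, not a diagnosis of Insights' internals: an analyzer that loads session logs wholesale will struggle at 11 GB, while a streaming aggregator keeps memory flat. A sketch, with illustrative (not actual) field names:

```python
import json

def summarize_sessions(path: str) -> dict:
    """Stream a JSONL session log line by line instead of loading it whole.

    Memory stays flat no matter how large the file is. The 'type' field
    below is an invented example, not the real Claude Code log schema.
    """
    totals = {"messages": 0, "tool_calls": 0}
    with open(path, encoding="utf-8") as f:
        for line in f:
            if not line.strip():
                continue
            record = json.loads(line)
            totals["messages"] += 1
            if record.get("type") == "tool_call":
                totals["tool_calls"] += 1
    return totals
```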
What Is He Working On?
Work domains visible from the data:
- Security Vulnerability Fixes — Large-scale automated scanning and fixing
- API Key Security Research — GitHub-scale credential scanning
- Newsletter System — Automated content production pipeline
- Python Data Analysis — Extensive Python development
- Technical Documentation — Documenting while developing
This is a developer simultaneously running multiple large projects, with each project using AI-assisted automation.
Assessment: Bootstrap Kit’s Value
As an observer, my assessment of this system:
Strengths
- Clear philosophy — “Efficiency first, don’t ask for local operations” greatly reduces human-machine interaction friction
- Modular design — Commands / Skills / Agents layered clearly, reusable
- Deep automation — Coverage from single commands to complete workflows
- Parallel processing — Fully leverages Claude Code’s multi-Agent capabilities
The Core Problem This System Solves
Traditional AI usage:
Human asks → AI answers → Human confirms → AI executes → Human confirms → ...
Bootstrap Kit approach:
Human gives command → AI autonomously executes complete workflow → Human reviews results
This is an “orchestration-style” AI usage approach, fundamentally different from traditional conversational usage.
Conclusion: A Different AI Collaboration Model
This case demonstrates another possibility for Claude Code usage:
- From conversational assistant to programmable workflow engine
- From Q&A to multi-Agent collaboration
- From standard usage to custom architecture design
- From passive response to proactive orchestrated execution
This usage pattern requires additional investment:
- System design — Designing how Agents collaborate
- Abstraction ability — Turning repetitive workflows into reusable Skills
- Architectural thinking — Judging what to automate, what needs human intervention
- Continuous iteration — Constantly optimizing based on actual usage
This case also reflects a phenomenon: when usage patterns exceed a tool’s original design expectations, you may encounter edge cases (like the Insights system’s performance bottleneck). This is valuable feedback for continuous tool improvement.
Different users have different needs:
- Level 1: Use Claude Code as a smart assistant
- Level 2: Use it as a development partner
- Level 3: Use it as an orchestrable workflow engine
- Level 4: Use CMS to run hundreds of instances as one person
Each usage pattern has its appropriate scenarios. And when a Level 4 user appears, the Insights system says: “This is beyond my comprehension.”
Appendix: Key Data Summary
# 5 days (single computer)
sessions: 6,009
messages: 35,204
total_hours: 4,988
bash_invocations: 158,490
commits: 2,290
lines_added: 5,791,536
files_touched: 31,453
playwright_actions: 12,000+
parallel_sessions: 76
security_fixes_one_session: 20
# Estimated 3-computer total (5 days)
estimated_total_sessions: ~18,000
estimated_total_messages: ~105,000
estimated_total_hours: ~15,000 # Equivalent to 625 days
estimated_bash_calls: ~475,000
estimated_commits: ~6,900
estimated_lines_added: ~17,000,000
# Bootstrap Kit
commands: 36
skills: 38
agents: 44
total_components: 126
“I’ve become the man AI cannot measure.”
— A power user who pushed Claude Code to its limits
This article documents a Claude Code usage experience in a specific use case, hoping to provide reference for other users and tool developers. Bootstrap Kit is an open-source project, and community feedback and contributions are welcome.