TL;DR - Key Takeaways
- Run 5-10 Claude agents in parallel across terminal and browser
- Use Opus 4.5 with thinking mode - slower but requires less steering
- Maintain a CLAUDE.md file as shared memory for mistakes and conventions
- Plan first, execute second - don't let AI start coding until the plan is solid
- Verification is non-negotiable - every change must be tested
The Thread That Broke Developer Twitter
Boris Cherny (@bcherny), the creator of Claude Code at Anthropic, recently shared his complete workflow for AI-assisted development. The numbers are staggering:
In a 30-day period, Boris reported:
- 259 Pull Requests
- 497 Commits
- ~40,000 lines added
- ~38,000 lines removed
All written by Claude Code under his direction.
The 10-Step Workflow
Let me break down each element of Boris's workflow and explain why it works.
1. Parallel Agent Instances
Boris runs 5-10 Claude agents simultaneously:
| Setup | Purpose |
|---|---|
| 5 Terminal instances | Each with separate git checkout |
| 5-10 Browser sessions | Via claude.ai/code |
| Mobile sessions | Start on phone, resume on desktop |
Why this works: Different agents handle different concerns—one refactors, one writes tests, one updates docs. No merge conflicts because each has its own branch.
How to implement:
```bash
# Create multiple working directories
git worktree add ../project-refactor feature/refactor
git worktree add ../project-tests feature/tests
git worktree add ../project-docs feature/docs

# Run Claude Code in each
cd ../project-refactor && claude
cd ../project-tests && claude
cd ../project-docs && claude
```
2. Model Choice: Opus 4.5 with Thinking
Boris exclusively uses Opus 4.5 (not Sonnet) with thinking mode always on.
| Model | Speed | Accuracy | Steering Needed |
|---|---|---|---|
| Sonnet | Fast | Good | More corrections |
| Opus 4.5 | Slower | Better | Less steering |
The insight: Raw speed is a trap. A faster model that needs constant correction is slower than a deliberate model that gets it right the first time.
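If you want to mirror this, the model can be pinned when launching a session. A minimal sketch, assuming your Claude Code version accepts an `opus` alias via the `--model` flag (verify with `claude --help`); how thinking is enabled depends on your version:

```bash
# Launch a session pinned to Opus rather than the default model.
# The "opus" alias is an assumption; substitute the exact model name
# your Claude Code version lists.
claude --model opus
```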
3. The CLAUDE.md File
The CLAUDE.md file is the secret weapon. It's a living document of:
- Mistakes Claude should avoid
- Architectural preferences
- Naming conventions
- Project-specific patterns
Example CLAUDE.md structure:
```markdown
# CLAUDE.md - Project Guidelines

## Architecture
- Use functional components only
- State management via Zustand, not Redux
- All API calls go through /lib/api.ts

## Common Mistakes to Avoid
- Don't use `any` type in TypeScript
- Always handle loading and error states
- Never commit console.log statements

## Naming Conventions
- Components: PascalCase
- Hooks: useCamelCase
- Utils: camelCase

## Testing
- Every component needs a test file
- Use React Testing Library, not Enzyme
```
4. PR Reviews Update CLAUDE.md
The team has automated learning built into their PR process:
```mermaid
flowchart LR
    A[PR Created] --> B[Human Review]
    B --> C{Issue Found?}
    C -->|Yes| D[Tag @claude]
    D --> E[GitHub Action]
    E --> F[Update CLAUDE.md]
    F --> G[Future agents learn]
    C -->|No| H[Merge PR]
```
This is organizational memory at scale. Every mistake becomes a lesson for all future Claude sessions.
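You can approximate this loop with nothing more than the GitHub CLI, assuming your repository has an action (or a teammate) that routes `@claude` mentions back into Claude Code. The PR number and wording below are illustrative:

```bash
# Leave a review comment that asks Claude to turn the mistake into a rule.
# PR number and message are placeholders.
gh pr comment 123 --body "@claude We keep shipping components without loading states. Add a rule to CLAUDE.md so future sessions handle loading and error states by default."
```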
5. Plan Mode First
The workflow is Plan → Refine → Execute:
| Phase | Mode | Human Involvement |
|---|---|---|
| 1. Plan | Plan mode | High - back-and-forth |
| 2. Refine | Plan mode | Medium - clarifications |
| 3. Execute | Auto-accept | Low - monitoring |
Why this matters: Fixing a bad plan is cheap. Fixing bad code from a bad plan is expensive.
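One way to enforce this split from the command line, assuming your Claude Code version supports the `--permission-mode` flag and the mode names shown here (treat both as assumptions and verify with `claude --help`):

```bash
# Phases 1-2: start in plan mode so Claude proposes and refines an approach
# before it can touch any files.
claude --permission-mode plan

# Phase 3: once the plan is agreed, run execution with edits auto-accepted
# (or toggle the mode in-session) and monitor lightly.
claude --permission-mode acceptEdits
```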
6. Slash Commands for Automation
Slash commands turn common workflows into one-liners:
```
.claude/commands/
├── commit-push-pr.md
├── run-tests.md
├── format-all.md
├── deploy-staging.md
└── update-deps.md
```
Example: commit-push-pr.md
```markdown
# Commit, Push, and Create PR
1. Stage all changes
2. Generate commit message from diff
3. Push to current branch
4. Create PR with description from commits
5. Request review from team
```
7. Hooks for Continuous Automation
Hooks trigger automation at specific points in an agent's run (a configuration sketch follows the table):
| Hook Type | Trigger | Example Use |
|---|---|---|
| PostToolUse | After any tool runs | Auto-format code |
| PreToolUse | Before tool runs | Validate safety |
| Stop | When Claude stops | Run test suite |
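Hooks are declared in settings. Here is a rough sketch of what a `PostToolUse` formatter hook might look like in `.claude/settings.json`; the field names and matcher syntax are assumptions based on the documented hooks schema, so check your Claude Code version's docs before copying:

```json
{
  "hooks": {
    "PostToolUse": [
      {
        "matcher": "Edit|Write",
        "hooks": [
          { "type": "command", "command": "npm run format" }
        ]
      }
    ]
  }
}
```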
8. Pre-Approved Permissions
Pre-approving routine commands lets agents work without stopping for permission prompts, while destructive commands stay blocked. Example .claude/settings.json:
```json
{
  "permissions": {
    "allow": [
      "npm install",
      "npm run test",
      "npm run build",
      "git add",
      "git commit",
      "git push",
      "eslint --fix"
    ],
    "deny": [
      "rm -rf",
      "sudo",
      "npm publish"
    ]
  }
}
```
9. Tool Integrations
Claude Code integrates with:
| Tool | Purpose |
|---|---|
| Slack | Communication, status updates |
| BigQuery | Query production data |
| Sentry | Read error logs |
| GitHub | PR management |
| Internal APIs | Custom business logic |
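Most of these hookups run through MCP servers or the GitHub CLI rather than anything built into Claude Code itself. A sketch, assuming the `claude mcp add` subcommand and using a placeholder server command:

```bash
# Register a placeholder MCP server for error-log access; the package name
# after "--" is hypothetical, substitute your real server command.
claude mcp add sentry -- npx -y your-sentry-mcp-server

# Confirm the integration is registered.
claude mcp list
```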
10. Verification is #1
Verification is non-negotiable:
```mermaid
flowchart TB
    A[Claude writes code] --> B[Unit tests pass?]
    B -->|No| A
    B -->|Yes| C[Integration tests?]
    C -->|No| A
    C -->|Yes| D[Browser verification?]
    D -->|No| A
    D -->|Yes| E[Ready for review]
```
What Could Be Improved
While Boris's workflow is impressive, there are areas for enhancement:
1. Better Agent Coordination
Currently, parallel agents are isolated. Future improvements could include:
- Shared context between agents
- Automatic task delegation based on agent specialization
- Conflict detection before branches diverge too far
2. Learning Across Projects
CLAUDE.md is per-repo. What if learnings could be:
- Shared across similar projects (all React apps, all Python services)
- Company-wide mistake databases
- Industry-wide best practice libraries
3. Smarter Verification Pipelines
Today, verification pipelines have to be set up by hand. Possible improvements (a rough sketch of the first idea follows this list):
- Auto-detect what needs testing based on changes
- Visual regression testing without configuration
- Performance impact analysis before merge
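None of this exists out of the box yet, but the first item is easy to approximate: a change-aware script that only runs the test suites whose source trees were touched. The paths and npm script names below are assumptions about a typical project layout:

```bash
#!/usr/bin/env bash
# Run only the test suites whose source directories changed on this branch.
set -euo pipefail
changed=$(git diff --name-only origin/main...HEAD)

if echo "$changed" | grep -q '^src/api/'; then npm run test:api; fi
if echo "$changed" | grep -q '^src/ui/';  then npm run test:ui;  fi
if echo "$changed" | grep -q '^src/lib/'; then npm test;         fi
```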
4. Better Rollback and Recovery
When an agent goes wrong, you want (a git-based approximation follows this list):
- Automatic checkpoint before major changes
- One-click rollback to last known good state
- Blame-aware debugging showing which agent introduced bugs
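Until tooling catches up, plain git gets you most of the way, assuming you tag a known-good state before letting an agent loose (the tag names are illustrative):

```bash
# Checkpoint before the agent starts.
git tag checkpoint/pre-refactor

# ...agent works...

# Inspect what changed since the checkpoint.
git diff --stat checkpoint/pre-refactor

# Roll back the working tree and branch if the agent went off the rails.
git reset --hard checkpoint/pre-refactor
```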
5. Cost Optimization
Running 10 Opus 4.5 instances is expensive. Potential improvements (a wrapper-script sketch follows this list):
- Dynamic model selection based on task complexity
- Agent hibernation when idle
- Batch processing for similar tasks
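A crude version of dynamic model selection can be scripted today: a wrapper that routes small tasks to Sonnet and larger ones to Opus. The size convention, model aliases, and script name are all assumptions:

```bash
#!/usr/bin/env bash
# claude-task.sh (hypothetical): pick the model based on declared task size.
# Usage: ./claude-task.sh small "Fix the typo in the README"
set -euo pipefail
size="$1"; shift

if [ "$size" = "small" ]; then
  model="sonnet"   # cheaper model for routine edits (alias is an assumption)
else
  model="opus"     # deliberate model for complex work
fi

claude --model "$model" "$@"
```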
How to Apply This Today
You don't need to be at Anthropic to use these techniques:
Starter Setup
- Create a CLAUDE.md in your repo
- Set up git worktrees for parallel work
- Create your first slash command
- Configure permissions in settings
Minimum Viable Workflow
```bash
# 1. Create CLAUDE.md
printf "# Project Guidelines\n\n## Rules\n- Use TypeScript\n- Write tests\n" > CLAUDE.md

# 2. Create commands directory
mkdir -p .claude/commands

# 3. Create a simple command
echo "Stage, commit with conventional message, push, create PR" > .claude/commands/ship.md

# 4. Start with plan mode
claude --permission-mode plan "Add user authentication feature"
```
Conclusion
Boris Cherny's workflow represents the cutting edge of AI-assisted development. The key insights:
- Quantity enables quality - More agents mean more parallel progress
- Memory matters - CLAUDE.md turns mistakes into organizational knowledge
- Plan before execute - Upfront thinking saves downstream debugging
- Verify everything - AI-written code needs the same scrutiny as human code
- Automate the repetitive - Slash commands and hooks free you for high-value work
The future isn't AI replacing developers—it's developers orchestrating fleets of AI agents, each specialized, each learning, each contributing to a velocity previously impossible.
Last Updated: January 2026