The father of Claude Code reveals: How to turn Claude into your "virtual development team"?
Original: Boris Cherny, Claude Code developer
Compilation & organization: XiaoHu AI
You may have heard of Claude Code, or even used it to write some code or edit documents. But have you ever wondered: if AI were not just a tool you reach for occasionally, but a formal member of your development process, or even an automated collaboration system, how would that change the way you work?
Boris Cherny, the father of Claude Code, wrote a very detailed post sharing how he uses the tool efficiently, and how he and his team integrate Claude deeply into their day-to-day engineering workflow.
This article organizes and interprets his experience in plain language.
How does Boris make AI an automated partner in his workflow?
Core points:
He introduces his workflow, including:
How to use Claude:
Run multiple Claudes simultaneously: open 5–10 sessions on terminal and web pages to handle tasks in parallel, and also use Claude on mobile.
Don’t mess with default settings: Claude is ready to use out of the box, no need for complicated configuration.
Use the most powerful model (Opus 4.5): although slower, it's smarter and ultimately needs less hand-holding.
Plan before coding (Plan mode): let Claude help you think through before writing, increasing success rate.
After generating code, use tools to check formatting to avoid errors.
How to make Claude smarter over time:
Team maintains a "knowledge base": whenever Claude makes a mistake, the lesson is added so the same mistake isn't repeated.
Automatically train Claude when writing PRs: let Claude review PRs to learn new usage or standards.
Turn frequently used commands into slash commands, which Claude can call automatically, saving repetitive work.
Use “sub-agents” to handle fixed tasks, such as code simplification, feature verification, etc.
How to manage permissions:
Don’t skip permissions casually; instead, set safe commands to be automatically approved.
Sync Claude’s workflow across multiple devices (web, terminal, mobile).
The most important point:
Always provide Claude with a “verification mechanism” so it can confirm whether what it writes is correct.
For example, Claude automatically runs tests, opens browsers to test web pages, and checks if features work.
Claude Code is a “partner,” not just a “tool”
Boris first conveys a core idea: Claude Code is not a static tool, but an intelligent partner that can cooperate with you, learn continuously, and grow together.
It doesn’t require too many complex configurations; it’s very powerful out of the box. But if you’re willing to invest time in building better usage methods, the efficiency gains can be exponential.
Model selection: choose the smartest, not the fastest
Boris uses Claude's flagship model, Opus 4.5, with thinking enabled, for all development tasks.
Although this model is larger and slower than Sonnet, Boris prefers it because:
It has stronger understanding
Better at using tools
Less need for repeated guidance, fewer back-and-forths
Overall, saves more time than using faster models
Insight: true productivity isn’t about execution speed, but about “fewer errors, less rework, less repeated explanation.”
1. Plan mode: use AI to write code, but don’t rush it to “write” immediately
When we open Claude, many people instinctively type "Help me write an interface," "Refactor this code," and so on. Claude will usually write something, but it often goes off track, misses logic, or even misunderstands the requirements.
Boris’s first step is never to let Claude write code directly. He uses the Plan mode—first, jointly formulate the implementation plan with Claude, then proceed to execution.
How does he do it?
When opening a PR, Boris first doesn’t let Claude write code directly, but uses Plan mode:
Describe the goal
Develop a plan together with Claude
Confirm each step
Then let Claude start coding
Whenever he needs to implement a new feature, like “adding rate limiting to an API,” he confirms step-by-step with Claude:
Use middleware, or embed the logic directly?
Should rate-limit configs support dynamic changes?
Is logging needed? What should be returned on failure?
This “plan negotiation” process is like two people drawing a “construction blueprint.”
Once Claude understands the goal clearly, Boris switches to "auto-accept edits" mode, allowing Claude to modify code and submit PRs directly, sometimes without manual confirmation.
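Plan mode and auto-accept are modes built into Claude Code itself. As a rough sketch (the flag and mode names may vary by version), a session can be started directly in plan mode from the command line:
claude --permission-mode plan
Inside the session, the mode can then be switched to auto-accept edits once the plan looks right.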
“Claude’s code quality depends on whether you’ve agreed on the plan before coding.” — Boris
Insight: Instead of repeatedly fixing Claude’s mistakes, it’s better to clarify the roadmap together from the start.
Summary
Plan mode isn’t a waste of time; it’s about stabilizing execution through upfront negotiation. Even with powerful AI, “you need to say it clearly.”
2. Multiple Claudes in parallel: not just one AI, but a virtual dev team
Boris doesn't just use one Claude. His daily setup looks like this:
Run five local Claude sessions in the terminal, each assigned to a different kind of task (refactoring, testing, bug fixing)
Open 5–10 Claudes in browser, working in parallel
Use Claude iOS app on mobile, initiating tasks anytime
Each Claude instance is like a “dedicated assistant”: some responsible for coding, some for document completion, some for long-term background testing.
He even sets up system notifications, so when Claude is waiting for input, he gets alerted immediately.
Why do this?
Each Claude session has its own limited context, so a single window isn't suited to doing everything. Boris splits the work across multiple Claudes running in parallel as different roles, reducing wait times and cross-task interference.
He also uses system notifications to remind himself: “Claude 4 is waiting for your reply,” “Claude 1 finished testing,” managing these AIs like a multi-threaded system.
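As an illustration of how such alerts can be wired up, here is a minimal sketch of a Notification hook in .claude/settings.json (the desktop-notification command and the exact hook schema are assumptions; adjust them for your OS and Claude Code version):
{
  "hooks": {
    "Notification": [
      {
        "hooks": [
          { "type": "command", "command": "notify-send 'Claude Code' 'A session is waiting for your input'" }
        ]
      }
    ]
  }
}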
Analogy
Imagine five smart interns sitting around you, each responsible for a task. You don’t do everything yourself; just “switch people” at key moments to keep the workflow smooth.
Insight: Treating Claude as multiple “virtual assistants,” each handling different tasks, can significantly reduce waiting time and context switching costs.
3. Slash commands: turn your daily routines into Claude shortcuts
Some workflows involve dozens of repetitions:
Modify code → commit → push → create PR
Check build status → notify team → update issue
Sync changes to Web and local sessions
Boris doesn’t want to keep prompting Claude: “Please commit first, then push, then create PR…”
He encapsulates these operations into Slash commands, such as:
/commit-push-pr
Behind these commands are command files stored in the .claude/commands/ folder (prompt templates that can wrap the underlying Bash steps), managed via Git and accessible to the whole team.
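As a rough illustration (this particular file is hypothetical), /commit-push-pr could be defined as a small prompt file at .claude/commands/commit-push-pr.md:
---
description: Commit staged work, push the branch, and open a PR
---
Commit the current changes with a concise commit message, push the branch
to origin, and open a pull request. Use "$ARGUMENTS" as the PR title if provided.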
How does Claude use these commands?
When Claude encounters this command, it doesn’t just “execute instructions,” but understands the workflow it represents, and can automatically perform intermediate steps, pre-fill parameters, and avoid repeated communication.
Key point
Slash commands are like “automatic buttons” you install for Claude. You train it to understand a task flow, then it can execute with one click.
"It's not just me saving time with these commands; Claude saves time too." — Boris
Insight: Don’t repeat prompts every time; abstract high-frequency tasks into commands, so your collaboration with Claude becomes “automated.”
4. Team knowledge base: Claude doesn’t learn from prompts but from team-maintained knowledge
Boris’s team maintains a .claude knowledge base, managed via Git.
It’s like an “internal wiki” for Claude, recording:
Correct writing styles
Team’s best practices
How to fix common issues
Claude automatically references this knowledge base to understand context and judge code style.
What if Claude makes mistakes?
Whenever Claude misunderstands or writes incorrect logic, lessons are added to the knowledge base.
Each team maintains its own version.
Everyone collaborates on editing, and Claude references this knowledge base in real-time.
Example:
If Claude keeps writing incorrect pagination logic, just add the correct pagination standards into the knowledge base, and all users benefit automatically.
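In practice, this kind of knowledge usually lives in Markdown files that Claude Code reads automatically, such as a CLAUDE.md at the root of the repo. A hypothetical entry for the pagination case (the specific conventions here are invented for illustration) might be as simple as:
## Pagination
- All list endpoints use cursor-based pagination, never offset/limit.
- Responses return { items, next_cursor }; next_cursor is null on the last page.
- Default page size is 50, maximum 200.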
Boris’s approach: don’t scold it or shut it down, but “train it once”:
"This code isn't written like that; add it to the knowledge base."
Next time, Claude won’t make the same mistake again.
More importantly, this mechanism isn’t maintained by Boris alone, but contributed and updated weekly by the entire team.
Insight: Using AI isn’t about individual effort, but building a “collective memory” system.
5. Automatic learning: PRs are also Claude’s “training data”
Boris often @Claude in PRs during code reviews, for example:
@claude add this function's style to the knowledge base
With GitHub Actions, Claude automatically learns the intent behind these changes and updates its internal knowledge.
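A rough sketch of such a workflow, assuming the official anthropics/claude-code-action (the trigger, inputs, and required permissions here are assumptions; check the action's README for the exact schema):
name: claude-pr-assistant
on:
  issue_comment:
    types: [created]
jobs:
  claude:
    if: contains(github.event.comment.body, '@claude')
    runs-on: ubuntu-latest
    steps:
      - uses: anthropics/claude-code-action@beta
        with:
          anthropic_api_key: ${{ secrets.ANTHROPIC_API_KEY }}
The workflow also needs write permissions on issues and pull requests so Claude can comment and push changes.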
It’s like “continuous training of Claude”: each review not only merges code but also enhances AI capabilities.
This is no longer after-the-fact maintenance; the AI's learning mechanism is built directly into daily collaboration.
Teams improve code quality via PRs, and Claude’s knowledge level also advances synchronously.
Insight: PRs are not just code review processes—they’re opportunities for AI self-improvement.
6. Subagents: modular execution of complex tasks by Claude
Besides main task workflows, Boris defines some sub-agents to handle common auxiliary tasks.
Sub-agents are automated modules, such as:
code-simplifier: automatically refines code structure after Claude writes
verify-app: runs full tests to verify new code
log-analyzer: analyzes error logs to quickly locate issues
These sub-agents act like plugins, automatically integrated into Claude’s workflow, working collaboratively without repeated prompts.
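In Claude Code, sub-agents are defined as small Markdown files with a front-matter header, typically under .claude/agents/. A hypothetical definition of the code-simplifier agent might look roughly like this (the exact field set may vary by version):
---
name: code-simplifier
description: After a feature is implemented, simplify the new code without changing behavior.
tools: Read, Edit, Grep, Bash
---
You are a refactoring assistant. Review the files changed in the current task,
remove dead code and unnecessary abstraction, and keep all tests passing.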
Insight: Sub-agents are like team members for Claude, upgrading it from “assistant” to “project commander.”
Claude is no longer just a single assistant, but a small manager that can lead a team of its own.
7. PostToolUse Hook: the final gatekeeper for code formatting
In a team, it's hard to ensure everyone writes in a consistent style. Claude's generation ability is strong, but small slips in details such as indentation or stray blank lines are inevitable.
Boris’s solution is to set up a PostToolUse Hook —
Simply put, this is an automatic "post-processing" hook that Claude Code runs after Claude finishes a tool action, such as editing a file.
Its functions include:
Automatically fixing code formatting
Adding missing comments
Handling lint errors to prevent CI failures
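A minimal sketch of such a hook in .claude/settings.json, assuming Prettier as the formatter (the matcher syntax and hook schema may differ across Claude Code versions):
{
  "hooks": {
    "PostToolUse": [
      {
        "matcher": "Edit|Write",
        "hooks": [
          { "type": "command", "command": "npx prettier --write ." }
        ]
      }
    ]
  }
}
A real setup would usually format only the files that were just edited (hooks receive the tool-call details as JSON on stdin) rather than rewriting the whole repo on every edit.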
This step is usually simple but crucial. Like running Grammarly after writing an article, it ensures the work is stable and tidy before submission.
For AI tools, good usability often depends on finishing touches, not just generation power.
8. Permission management: pre-authorization instead of skipping
Boris explicitly states he does not use --dangerously-skip-permissions — a parameter in Claude Code that skips all permission prompts during command execution.
It sounds convenient but can be dangerous, e.g., accidentally deleting files or running wrong scripts.
His alternative approach:
Use /permissions command to explicitly declare trusted commands
Write these permissions into .claude/settings.json
Share these security settings across the team
This is like pre-approving a "whitelist" of operations for Claude. In .claude/settings.json it looks roughly like this:
"permissions": {
  "allow": [
    "Bash(git commit:*)",
    "Bash(npm run build)",
    "Bash(pytest:*)"
  ]
}
When Claude encounters these commands, it executes them directly, with no interruption each time.
This permission mechanism is designed more like a team operating system than a standalone tool: common, safe Bash commands are pre-authorized once, stored in .claude/settings.json, and shared by the whole team.
Insight: AI automation doesn’t mean losing control. Incorporating security policies into automation is the real engineering practice.
9. Multi-tool integration: Claude = multi-capable robot
Boris doesn't just let Claude code locally. He configures Claude to access multiple core platforms via MCP (Model Context Protocol):
How to implement?
MCP configuration is stored in .mcp.json
Claude reads this config at runtime, autonomously executing cross-platform tasks
The whole team shares a single configuration
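A hypothetical .mcp.json entry might look like this (the server name, package, and environment variable are placeholders; the actual servers depend on what your team connects):
{
  "mcpServers": {
    "issue-tracker": {
      "command": "npx",
      "args": ["-y", "my-issue-tracker-mcp-server"],
      "env": { "TRACKER_TOKEN": "${TRACKER_TOKEN}" }
    }
  }
}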
With MCP in place, Claude acts like a robot assistant, helping you to: "Write code → Submit PR → Check results → Notify QA → Report logs."
This is no longer just a traditional AI tool, but a neural hub of the engineering system.
Insight: Don't let AI work only "inside the editor"; it can be the scheduler of your entire system ecosystem.
10. Long tasks handled asynchronously: background agents + plugins + hooks
In real projects, Claude sometimes handles long tasks, such as:
Build + test + deploy
Generate reports + send emails
Running data migration scripts
Boris’s approach is very engineering-oriented:
Three ways to handle long tasks:
After completion, use a background agent to verify results
Use Stop Hook to trigger subsequent actions automatically at task end
Use ralph-wiggum plugin (proposed by @GeoffreyHuntley) to manage long process states
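For the Stop Hook approach above, a minimal sketch in .claude/settings.json might look like this (the schema is approximate, and verify-and-notify.sh is a hypothetical script that re-runs the test suite and posts the result to the team channel):
{
  "hooks": {
    "Stop": [
      {
        "hooks": [
          { "type": "command", "command": "./scripts/verify-and-notify.sh" }
        ]
      }
    ]
  }
}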
In these scenarios, Boris uses:
--permission-mode=dontAsk
or runs tasks in a sandbox to avoid interruptions from permission prompts.
You don't have to watch Claude the whole time; it becomes a trusted collaborator you can safely delegate to.
Insight: AI tools are suitable not only for quick tasks but also for long-term, complex workflows — provided you build a “hosting mechanism” for it.
11. Automatic verification: Claude’s output is only valuable if it can verify itself
Boris’s most important experience is:
Any result from Claude must have a “verification mechanism” to check correctness.
He adds a verification script or hook:
After writing code, Claude automatically runs tests to verify correctness
Simulate user interactions in browsers to verify front-end experience
Automatically compare logs and metrics before and after execution
If it doesn’t pass, Claude automatically modifies and re-executes until it does.
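One simple, hypothetical way to give Claude a single entry point for this loop is a verify script it is told to run after every change, for example in package.json (assuming lint, test, and e2e scripts already exist):
{
  "scripts": {
    "verify": "npm run lint && npm test && npm run e2e"
  }
}
Claude then runs npm run verify after each change and keeps iterating until it passes.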
It’s like Claude has a “closed-loop feedback system” built-in.
This not only improves quality but also reduces cognitive load on humans.
Insight: The true determinant of AI output quality isn’t model size, but whether you’ve designed a “result check mechanism.”
Summary: Not to replace humans, but to collaborate like humans
Boris's approach doesn't rely on "hidden features" or black magic, but on using Claude in an engineering-minded way, upgrading it from a "chat tool" into an efficient component of the work system.
His core features of Claude usage:
Multiple sessions in parallel: clearer task division, higher efficiency
Planning first: Plan mode aligns Claude’s goals better
Knowledge system support: team maintains AI’s knowledge base, continuous iteration
Task automation: Slash commands + sub-agents, making Claude work like a workflow engine
Closed-loop feedback: every output of Claude has verification logic, ensuring stable and reliable results
Boris’s method demonstrates a new way of using AI:
Upgrading Claude from “dialogue assistant” to “automatic programming system”
Turning knowledge accumulation from human brain to AI’s knowledge base
Transforming workflows from manual repetitive operations into scriptable, modular, collaborative automation
This approach doesn’t rely on black magic but on engineering capabilities. You can also learn from it to make your use of Claude or other AI tools more efficient and smarter.
If you often feel “it understands a bit but isn’t reliable,” or “the code it writes always needs my fixing,” the problem might not be Claude itself, but the lack of a mature collaboration mechanism.
Claude can be merely a passable intern, or a stable, reliable engineering partner. It depends on how you use it.