Native Agent Teams and isolated subagent spawning are not the same thing — and conflating them will wreck your workflow. The verdict: use Agent Teams when your tasks talk to each other, use spawning when they don’t. Getting that wrong in either direction costs you hours.
There’s been a lot of noise about “multi-agent AI” as if it’s one thing. It isn’t. Under the hood, there are two genuinely different execution models, and the difference matters more than most devs realise. We’ve been running both patterns in production workflows for months and we’ve got opinions.
What’s Actually Different Here
Spawning subagents gives each task a fresh, isolated context. No shared memory. No coordination overhead. Each agent wakes up, reads its instructions, does the work, dies. Clean. Predictable. Like spinning up Docker containers — they don’t inherently know about each other.
Native Agent Teams are different. They use peer-to-peer messaging between agents, a shared task list, and a lead agent that coordinates workers in real time. Agents can hand off context mid-task, surface blockers to each other, and adapt as the work evolves. It’s less like containers and more like a Slack thread that executes code.

The key insight: they read the same inputs and write the same outputs. In a well-designed workflow, they’re interchangeable at the execution layer. That’s the “two-way door” — you can switch between them freely without redesigning your whole system.
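A minimal sketch of what "interchangeable at the execution layer" can look like. These names (`Executor`, `SpawnExecutor`, `TeamExecutor`) are hypothetical, not Claude Code's actual API — the point is only that both models satisfy the same interface, so swapping them is the two-way door:

```python
from typing import Protocol

class Executor(Protocol):
    """Either execution model: same inputs (task prompts), same outputs."""
    def run(self, tasks: list[str]) -> list[str]: ...

class SpawnExecutor:
    """Isolated subagents: each task runs with a fresh context, no cross-talk."""
    def run(self, tasks: list[str]) -> list[str]:
        # Each result depends only on its own task.
        return [f"done: {t}" for t in tasks]

class TeamExecutor:
    """Agent team: workers can surface context to each other via shared state."""
    def run(self, tasks: list[str]) -> list[str]:
        shared_notes: list[str] = []      # stand-in for peer-to-peer messages
        results = []
        for t in tasks:
            shared_notes.append(t)        # workers see what others are doing
            results.append(f"done: {t}")
        return results

def execute(backend: Executor, tasks: list[str]) -> list[str]:
    # The caller never changes — swapping models is the two-way door.
    return backend.run(tasks)
```

Design your workflow against the interface, not the backend, and the choice below stays reversible.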
When Agent Teams Is a Certified W
The scenario where Agent Teams absolutely cooks is when tasks have dependencies that only reveal themselves during execution. You kick off three parallel agents — frontend, backend, infra — and halfway through, the backend agent discovers the API contract needs to change. In a standard spawning setup? That frontend agent is now writing to a dead spec. No cap, that’s a certified disaster.

Agent Teams wins here because the backend agent can message the frontend agent mid-task and say “hey, the auth response shape changed, update your types.” The lead coordinates the resync. Work continues without a full restart.
The other killer use case is adversarial debugging. We run /team:debug with 3–5 agents, each assigned a competing hypothesis about what’s broken. They run independent investigations, share findings, and the lead synthesises. It’s genuinely faster than a single agent running through possibilities one by one — and the competing hypotheses catch assumptions a solo agent would just accept.
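The shape of that adversarial pattern can be sketched in a few lines — the hypothesis functions here are invented for illustration; in `/team:debug` each would be a full agent running its own investigation:

```python
from concurrent.futures import ThreadPoolExecutor

# Hypothetical competing hypotheses about the same bug.
def check_stale_cache(logs: str) -> tuple[str, bool]:
    return ("stale cache", "cache_hit after deploy" in logs)

def check_race_condition(logs: str) -> tuple[str, bool]:
    return ("race condition", "lock timeout" in logs)

def check_bad_config(logs: str) -> tuple[str, bool]:
    return ("bad config", "unknown key" in logs)

def debug_team(logs: str) -> list[str]:
    """Run all hypotheses in parallel; the 'lead' keeps only supported ones."""
    hypotheses = [check_stale_cache, check_race_condition, check_bad_config]
    with ThreadPoolExecutor(max_workers=len(hypotheses)) as pool:
        findings = list(pool.map(lambda h: h(logs), hypotheses))
    return [name for name, supported in findings if supported]
```

The win isn't raw parallelism — it's that each investigator is committed to a different explanation, so shared assumptions get challenged instead of inherited.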
The rule of thumb: if your agents would benefit from a Slack DM mid-task, use Agent Teams.
When Spawning Is the Right Call
Here’s the thing nobody wants to admit: Agent Teams adds overhead. The coordination layer, the message routing, the lead agent’s orchestration — it’s all compute and latency that you’re paying for even on tasks that don’t need it.
Spawning wins cleanly on truly isolated work. Think: “generate SEO metadata for 50 pages”, “run linting passes on 12 independent modules”, “summarise 30 different documentation sections”. There are zero dependencies between these tasks. Each agent can do its job in isolation with no knowledge of what the others are doing.

We’ve found spawning is also better when you’re on a tight context budget. Each subagent starts fresh — no accumulated token cost from cross-agent messaging history. On a 50-file refactor where every task is “update import paths in this file”, running Agent Teams is lowkey overkill.

The rule of thumb: if you could describe each task in a self-contained prompt with no references to other tasks, spawn.
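That rule of thumb maps directly onto a fan-out: every task is a pure function of its own input. A sketch (threads here are only an analogy — real subagents each get a fresh context, not a shared process; `summarise_page` is a made-up stand-in for a self-contained prompt):

```python
from concurrent.futures import ThreadPoolExecutor

def summarise_page(path: str) -> str:
    # Hypothetical self-contained task: output depends only on this input.
    return f"summary of {path}"

def spawn_batch(paths: list[str]) -> dict[str, str]:
    """Fan out fully independent tasks: no messaging, no shared state."""
    with ThreadPoolExecutor() as pool:
        return dict(zip(paths, pool.map(summarise_page, paths)))
```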
The Sneaky Middle Ground
There’s a pattern that doesn’t fit cleanly into either category — and it’s where most devs make the wrong call. You’ve got 2–3 tasks that are mostly independent but share a constraint or resource. Maybe they all write to the same config file. Maybe they’re deploying to the same infra namespace.
This is where /team:quick earns its place. It’s not full Agent Teams orchestration — it’s a lightweight coordination layer for 2+ independent streams that need just enough awareness of each other to avoid conflicts. Less overhead than full teams, more safety than pure isolation.
The failure mode we see repeatedly: using pure spawning on this middle category, having two agents clobber each other’s writes, and then spending 40 minutes debugging a race condition that never would have happened with 10 minutes of lightweight coordination.
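The classic version of that clobbering is a read–modify–write on a shared config. The "10 minutes of lightweight coordination" is conceptually just a lock around the critical section — a sketch, not how `/team:quick` is implemented internally:

```python
import json
import threading

_config_lock = threading.Lock()   # the "just enough awareness" between streams

def update_config(path: str, key: str, value: str) -> None:
    """Read-modify-write on a shared config file, serialised by a lock.

    Without the lock, two agents can read the same baseline and the second
    write silently drops the first one's change — the 40-minute 'race'.
    """
    with _config_lock:
        with open(path) as f:
            cfg = json.load(f)
        cfg[key] = value
        with open(path, "w") as f:
            json.dump(cfg, f)
```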

The Setup (It’s One Env Var)
If you’re using Claude Code and haven’t tried Agent Teams yet — it’s behind a flag:
In `~/.claude/settings.json`:

```json
{
  "env": {
    "CLAUDE_CODE_EXPERIMENTAL_AGENT_TEAMS": "1"
  }
}
```
Yes, it’s still experimental. Yes, it’s production-ready for the use cases above. The “experimental” label is Anthropic being cautious about the API surface, not about stability.
The commands map cleanly:
| Standard | Agent Teams Equivalent | When to Swap |
|---|---|---|
| /gsd:execute-phase | /team:execute | Tasks have dependencies or mid-work coordination |
| /gsd:quick (2+ streams) | /team:quick | Independent streams that share a resource |
| Manual debugging | /team:debug | Complex bugs with multiple competing hypotheses |
The Decision Framework
Before every multi-agent task, run this mental check:
1. Do my tasks need to talk to each other mid-execution? → Yes: Agent Teams. No: keep reading.
2. Do my tasks share any mutable resource (file, database, API endpoint)? → Yes: /team:quick at minimum. No: keep reading.
3. Are my tasks genuinely self-contained with independent inputs and outputs? → Yes: spawn. You’re done.
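The three questions collapse into a tiny decision function — `choose_model` and its parameter names are illustrative, not part of any tooling:

```python
def choose_model(needs_midtask_talk: bool,
                 shares_mutable_resource: bool,
                 self_contained: bool) -> str:
    """Encode the three-question check, in order: teams > quick > spawn."""
    if needs_midtask_talk:
        return "agent-teams"   # /team:execute
    if shares_mutable_resource:
        return "team-quick"    # /team:quick at minimum
    if self_contained:
        return "spawn"         # isolated subagents, you're done
    # None of the three fit? (My assumption, beyond the framework above:)
    # the task boundaries themselves probably need a rethink.
    return "re-scope"
```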
That’s it. Three questions. The devs who overthink this and default to Agent Teams on everything aren’t being sophisticated — they’re paying a coordination tax on work that doesn’t need it. Certified mid behaviour.

The Actual Point
The framing of “which is better” is the wrong question. They’re tools. A cordless drill and a hammer are both valid — the L is using one where you need the other.
What we’ve shipped with Agent Teams: complex feature builds where frontend/backend/infra agents needed to renegotiate contracts mid-execution. Zero restarts. Coordination worked.
What we’ve shipped with spawning: SEO page generation, documentation passes, parallel module linting. Faster. Cheaper. Simpler.
The two-way door means you never have to commit. Design your workflow so the execution layer is swappable — same inputs, same outputs — and you can tune based on actual task shape, not vibes.
Run /team:debug on your next gnarly bug. You’ll get it.
