ASK KNOX
beta
LESSON 219

Parallel Agent Dispatch — Four Agents, No Merge Conflicts

The bottleneck in most AI-assisted development is sequential execution — one agent finishes, another starts. Parallel dispatch eliminates that bottleneck by assigning agents to non-overlapping file territories. Independent work has no serialization requirement.

12 min read · From Audit to Ship

The standard model for AI-assisted development is sequential: write a prompt, wait for a response, review it, write the next prompt. At each step, one agent is working and one human is waiting. This is a serialization bottleneck. It is also unnecessary.

Most production fixes are independent. The code that implements authentication does not overlap with the code that implements test coverage. The performance fix to a SQLite connection pattern does not touch the same files as the architectural fix to wire a missing subsystem. When work is independent, there is no technical reason to serialize it.

Parallel agent dispatch is the practice of identifying independent work units, assigning them to separate agents, and running those agents simultaneously. The constraint is simple: agents must operate on non-overlapping file territories. When that constraint holds, concurrent work is conflict-free.

The Territorial Assignment Model

Before dispatching agents, map findings to file territories. This is the pre-dispatch analysis that makes parallel execution safe.

For the Principal Broker, 84 findings mapped to roughly four territories:

Security territory: broker/api/auth.py, broker/api/kill_switch.py, scripts/okr_sentinel_metrics.py, scripts/heartbeat-bridge.py, broker/api/escalations.py, broker/api/directives.py

Architecture territory: broker/main.py (wiring changes), broker/safety/kill_switch.py (stub implementations), broker/core/dispatcher.py (shared HTTP client), broker/safety/halt_store.py (connection strategy)

Testing territory: tests/ (all new test files), specifically tests/unit/test_message_pipeline.py, tests/unit/test_kill_switch_api.py

Performance territory: broker/safety/halt_store.py (overlaps with architecture — see handling below), broker/api/directives.py (connection pattern), broker/core/dispatcher.py (shared client)

Notice the overlap between architecture and performance in halt_store.py and dispatcher.py. This is where territorial planning matters. The solution: consolidate overlapping findings into one agent's territory. The architecture agent takes ownership of the connection pattern fixes in those files. The performance agent's findings for those files get merged into the architecture agent's scope. No file is owned by two agents.
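The overlap check can be made mechanical rather than visual. A minimal sketch in shell, assuming each agent's planned territory is kept as a plain file list (the list contents here mirror the territories above; the `/tmp` paths are illustrative):

```shell
# One planned file list per agent, one path per line
printf '%s\n' broker/main.py broker/safety/kill_switch.py \
  broker/core/dispatcher.py broker/safety/halt_store.py > /tmp/arch-files
printf '%s\n' broker/safety/halt_store.py broker/api/directives.py \
  broker/core/dispatcher.py > /tmp/perf-files

# comm -12 prints only the lines common to both sorted inputs: the overlap
overlap=$(comm -12 <(sort /tmp/arch-files) <(sort /tmp/perf-files))

if [ -n "$overlap" ]; then
  echo "Overlap found - consolidate these into one agent's territory:"
  echo "$overlap"
fi
```

Run against the architecture and performance lists, this flags halt_store.py and dispatcher.py — exactly the two files consolidated into the architecture agent's scope.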

The resolution dispatch for the Principal Broker used four agents in two waves:

Wave 1 (all P0 findings, two agents in parallel):

  • Security agent: auth bypass, SQL injection, command injection, escalation endpoint
  • Architecture agent: _revoke_all_tokens stub implementation, _lock_env_files stub implementation, list_escalations state parameter bug

Wave 2 (P1 findings, two agents in parallel after Wave 1 merges):

  • Architecture + Performance agent: wiring finops and feedback into main.py, persistent connections in halt_store and directives, shared HTTP client in dispatcher
  • Testing agent: all new test files, coverage to 90% floor

Wave 1 and Wave 2 are sequential at the wave level because the Wave 2 testing agent needs to test the correct implementations from Wave 1. But within each wave, agents run in parallel.
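Mechanically, a wave is just shell job control: launch each agent's runner in the background, then `wait` for all of them before merging. A sketch — `run_agent` is a hypothetical stand-in for whatever command actually launches an agent against a prompt and a worktree, and the prompt filenames are invented:

```shell
# Hypothetical stub: a real version would block until the agent finishes
run_agent() {
  echo "agent running: prompt=$1 worktree=$2"
}

# Wave 1: both P0 agents in the background; wait blocks until both exit
run_agent security-prompt.md /tmp/broker-security-fixes &
run_agent arch-prompt.md /tmp/broker-arch-fixes &
wait
echo "Wave 1 complete - review and merge before dispatching Wave 2"

# Wave 2: same pattern for the P1 agents
run_agent arch-perf-prompt.md /tmp/broker-arch-perf &
run_agent testing-prompt.md /tmp/broker-testing &
wait
```

The `wait` between waves is the wave-level serialization; everything between one `wait` and the next runs concurrently.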

Worktree Isolation

Git worktrees allow a single repository to be checked out in multiple working directories simultaneously. Each worktree can be on a different branch. This is the mechanism for parallel agent commits without conflict.

Setup:

# Create a worktree on a new branch for the security agent
git worktree add -b security/p0-auth-fixes /tmp/broker-security-fixes main

# Create a worktree on a new branch for the architecture agent
git worktree add -b arch/p0-stub-implementations /tmp/broker-arch-fixes main

# Both branches diverge from the same base (main)
# Each agent works in its own working directory
# Commits are isolated until merged

When agents complete, their branches are reviewed and merged into main sequentially. The worktrees are cleaned up:

git worktree remove /tmp/broker-security-fixes
git worktree remove /tmp/broker-arch-fixes

The worktree model gives you full isolation during development (no file system conflicts) with full integration at merge time (standard Git merge/PR workflow).
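The end-to-end claim — disjoint territories merge cleanly — can be verified in a throwaway repo. A self-contained sketch using the branch names from the setup above (file contents and commit messages are invented for the demo):

```shell
# Build a scratch repo with a base commit
repo=$(mktemp -d); cd "$repo"
git init -q -b main
git config user.email agent@example.com; git config user.name agent
git commit -q --allow-empty -m "base"

# Simulate two agent branches touching disjoint files
git checkout -q -b security/p0-auth-fixes
mkdir -p broker/api && echo "auth fix" > broker/api/auth.py
git add -A && git commit -q -m "P0 auth fixes"

git checkout -q main && git checkout -q -b arch/p0-stub-implementations
mkdir -p broker/safety && echo "stub implemented" > broker/safety/kill_switch.py
git add -A && git commit -q -m "P0 stub implementations"

# Sequential integration: merge each agent branch into main in turn
git checkout -q main
git merge -q --no-ff -m "merge security" security/p0-auth-fixes
git merge -q --no-ff -m "merge arch" arch/p0-stub-implementations
```

Both merges complete without conflict because no file appears on both branches.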

For agents running on the same machine without worktrees, a simpler approach works when territories are clearly assigned: each agent clones the same repository into a separate directory. This is heavier on disk space but identical in behavior for the purposes of parallel development.
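A minimal sketch of the clone-per-agent variant, using a scratch repo as a stand-in for the real one (paths and the branch name are illustrative). For a local source path, `git clone --local` hardlinks objects where it can, so the disk cost is lower than a plain copy:

```shell
# Scratch repo standing in for the shared source repository
src=$(mktemp -d)
git init -q -b main "$src"
git -C "$src" -c user.email=a@example.com -c user.name=a \
  commit -q --allow-empty -m "base"

# Each agent gets a private clone and its own branch
dst=$(mktemp -d -u)                  # unused path for the clone target
git clone -q --local "$src" "$dst"
git -C "$dst" checkout -q -b test/p1-coverage
```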

The Dispatch Prompt Structure

Each agent's dispatch prompt must include three elements:

  1. Explicit scope — which files the agent is authorized to modify
  2. Explicit findings — the P-numbered findings from the audit assigned to this agent
  3. Explicit constraints — what the agent must not touch, even if it discovers related issues

A well-formed dispatch prompt:

You are a fix agent for the Principal Broker codebase. Your scope is:

AUTHORIZED FILES:
- broker/api/auth.py
- broker/api/kill_switch.py (API layer only)
- scripts/okr_sentinel_metrics.py
- scripts/heartbeat-bridge.py

FINDINGS TO FIX:
- P0: Auth middleware accepts any non-empty token (auth.py:46-51)
- P0: Kill switch resume has no authorization (kill_switch.py:95)
- P0: SQL injection via f-string (okr_sentinel_metrics.py:36-40)
- P0: Command injection via daemon name interpolation (heartbeat-bridge.py:89)

DO NOT MODIFY:
- broker/safety/kill_switch.py (the KillSwitch class itself — another agent owns this)
- Any test files (testing agent owns tests/)
- broker/main.py (architecture agent owns this)

If you discover issues outside your scope, document them in SCOPE-EXPANSIONS.md and continue with your assigned scope.

The explicit "DO NOT MODIFY" list is as important as the authorized files list. Without it, an agent that discovers a related issue in an out-of-scope file may "helpfully" fix it — creating overlap with the other agent's work and a potential merge conflict.

Scope Discipline in Practice

During the Principal Broker fix session, the testing agent discovered that properly testing _make_message_handler required the architecture agent to have already wired the finops and feedback subsystems. The testing agent was operating in Wave 2, after Wave 1 committed. This sequencing was by design.

But the testing agent also discovered that achieving 90% coverage required adding tests for the dispatcher's store_in_akashic method — which required mocking an HTTP client that the architecture agent was changing as part of Wave 2. This was a genuine coordination dependency within the same wave.

The resolution: the dispatch prompts were ordered within Wave 2 such that the architecture agent committed first, then the testing agent was dispatched with the updated dispatcher code. This converted a within-wave parallel dispatch into a short sequential step.

The key discipline: when you discover a dependency that was not accounted for in the initial territorial analysis, do not guess or assume. Surface it to the coordinating session and let the human decide the resolution order. The cost of a coordination pause is far lower than the cost of a merge conflict or a test suite written against wrong behavior.

The Non-Overlapping File Test

Before dispatching any parallel agents, run this mental check:

Take the full list of files each agent will modify — one set per agent. Check every pair of sets for a shared file. If any file appears in two sets, one agent's scope needs to shrink or the agents need to be serialized.
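Once agents have actually committed, the same test can be run against the branches instead of the plan, using `git diff --name-only` against the shared base. A self-contained demo in a scratch repo (branch and file names invented for illustration):

```shell
repo=$(mktemp -d); cd "$repo"
git init -q -b main
git config user.email a@example.com; git config user.name a
git commit -q --allow-empty -m "base"

# Two agent branches, each touching its own file
git checkout -q -b agent-a && echo x > a.py && git add -A && git commit -q -m "a"
git checkout -q main
git checkout -q -b agent-b && echo y > b.py && git add -A && git commit -q -m "b"
git checkout -q main

# Files each branch changed relative to the common base (three-dot diff)
git diff --name-only main...agent-a | sort > a-files
git diff --name-only main...agent-b | sort > b-files

# Empty output means no shared files: safe to run (or merge) in parallel
comm -12 a-files b-files
```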

This is not optional rigor. It takes 2 minutes. A merge conflict in a 400-line module can take 30 minutes to resolve correctly, and conflicts in test files are especially painful because the same test class name may appear in the conflict zone.

The file-territory analysis is the pre-flight check for parallel dispatch. Run it every time.

What Parallel Dispatch Eliminates

Sequential fix agents have a specific cost structure:

  • Agent 1 dispatched, Agent 1 completes (15 min)
  • Agent 2 dispatched, Agent 2 completes (15 min)
  • Agent 3 dispatched, Agent 3 completes (15 min)
  • Agent 4 dispatched, Agent 4 completes (15 min)

Total: 60 minutes of work time, 60 minutes of wall-clock time.

With parallel dispatch in two waves:

  • Wave 1: Agent A and Agent B dispatched simultaneously (15 min)
  • Wave 1 merge (5 min)
  • Wave 2: Agent C and Agent D dispatched simultaneously (15 min)
  • Wave 2 merge (5 min)

Total: 60 minutes of work time, 40 minutes of wall-clock time.
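The totals reduce to one line of arithmetic each:

```shell
# Sequential: four agents at 15 minutes each, back to back
echo $(( 4 * 15 ))          # 60 minutes wall-clock

# Two waves: 15 minutes of concurrent agent work plus a 5-minute merge per wave
echo $(( 2 * (15 + 5) ))    # 40 minutes wall-clock
```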

At scale with more agents, the compression is more significant. This is the mechanical source of "compound velocity" — not working faster, but eliminating the serial wait time between agents. The lesson on compound velocity in this track (Lesson 221) walks through the math in detail.

Lesson 219 Drill

Take any two open tickets in your current project. Map the full set of files each ticket touches. Do they overlap? If not, document the dispatch plan: branch names, authorized file lists, findings per agent, and the "DO NOT MODIFY" list for each.

Run them in parallel. Measure the wall-clock time compared to sequential execution. The delta is your parallelism dividend — time reclaimed by eliminating serialization.

Bottom Line

Parallel agent dispatch is not an advanced technique. It is the natural consequence of recognizing that independent work has no serialization requirement. The discipline is in the territorial analysis: assign non-overlapping files, document explicit constraints, surface scope expansions rather than acting on them. When those three practices hold, four agents working simultaneously produce no more conflicts than one agent working alone — and deliver in a fraction of the wall-clock time.