Fast mode is not the default

Practical AI coding governance for engineering teams: speed, review guardrails, and training across tools.

Rogier Muller · May 13, 2026 · 5 min read

The situation

Counter-thesis: the fastest agentic coding setup is usually not the best default for engineering teams.

I believed speed was the cleanest path to developer productivity. I tried to make every task run in the most aggressive mode across Cursor, Claude Code, and Codex, and here is what happened: more edits, more churn, and more review time spent untangling work that should have run slower and safer.

Diagnosis: this is the speed-versus-control trap, the same coordination problem described by Brooks’s Law and by every team that confuses throughput with progress.

The actual thesis: fast mode is a tool, not a policy.

That matters because the main questions are the same across tools: where instructions live, how skills load, what MCP can touch, and which changes need a human review gate. If you are building an AI coding governance practice or running an AI coding workshop, the question is not whether the model can move faster. It is what should be fast, and what must stay standard.

Walkthrough

Failure mode: you make speed the team default. If you shipped AI code, you have hit this: someone turns on the most aggressive mode for everything, then review starts acting like cleanup instead of control. In Cursor, Claude Code, and Codex, the fix is the same: define when fast mode is allowed, and keep standard mode as the default. I call this the Fast-Mode Gate. Put it in the team convention doc and require a reason for exceptions. That cuts down diff noise and keeps reviewers focused on work that actually needs scrutiny. That is tip one.
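
As a sketch, the convention-doc entry can be short. The file name and fields below are hypothetical, not a fixed schema:

# CONVENTIONS.md fragment: Fast-Mode Gate (file name and fields are hypothetical)

- Default: standard mode for every task.
- Fast mode requires: a narrow scope, a reversible change, and a named reviewer.
- Exceptions: log the task, the reason, the paths touched, and who reviewed it.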

Failure mode: your instructions are too flat. One giant root file becomes the place every rule goes to die. Cursor's layered .cursor/rules/*.mdc, Claude Code's CLAUDE.md, and Codex's AGENTS.md all point to the same fix: scope instructions where the work happens. I call this the Local Rules Stack. Use a small root policy, then add nested files for repo-specific or path-specific behavior; a layout sketch follows the fragment below. That gives the model less irrelevant context and makes review easier. That is tip two.

# AGENTS.md fragment: Fast-Only-When-Scoped Rule

- Default to standard-speed execution for routine edits, refactors, and tests.
- Allow fast mode only for narrowly scoped, reversible tasks with a human review checkpoint.
- Require a repo-local instruction file for any path with special safety or architecture constraints.
- If a task touches auth, billing, deployment, or external connectors, do not use fast mode.
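
To make the stack concrete, here is one possible layout using Codex-style nested AGENTS.md files; Cursor expresses the same idea with globbed .cursor/rules/*.mdc files, and Claude Code with per-directory CLAUDE.md files. The paths and comments are hypothetical:

# Local Rules Stack: one possible monorepo layout (paths are hypothetical)

AGENTS.md                     # small root policy: defaults, Fast-Mode Gate
services/payments/AGENTS.md   # path-specific: no fast mode, billing safety constraints
packages/ui/AGENTS.md         # path-specific: component conventions, styling rules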

Failure mode: MCP becomes a blank check. A connector gets added because it is convenient, then nobody remembers what it can reach. The fix is a boundary review before the connector ships. I call this the MCP Boundary Review. Claude Code's MCP docs, Codex's MCP support, and Cursor's connector surfaces all reward the same habit: review scope, permissions, and failure behavior before you trust the integration. That keeps teams from finding out about tool access after a bad action or a surprising prompt path. That is tip three.
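
The review itself can be as small as a named checklist. A minimal sketch, with questions that are a starting point rather than an audit standard:

# MCP Boundary Review: questions before a connector ships (illustrative)

- Scope: which systems and data can this server reach, read, and write?
- Permissions: which tools does it expose, and which need explicit approval?
- Failure behavior: what happens on timeout, bad auth, or a malformed response?
- Prompt surface: can retrieved content inject instructions, and who reviews that path?
- Ownership: who maintains the connector, and who can revoke it?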

Failure mode: review guardrails live in people’s heads. “Just review it carefully” is not a process. Write a review artifact that names what must be checked for agent-authored work. I call this the Agent Review Checklist. For Claude Code, that can be a PR checklist tied to CLAUDE.md and hooks; for Codex, it can be a verification loop around codex exec; for Cursor, it can be a background-agent policy plus scoped rules. Reviewers then look for the same failure classes every time: instruction drift, unsafe connector use, and unverified behavior. That is tip four.
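
One way to write the artifact down is a pull-request template fragment. The wording below is illustrative and maps to the three failure classes named above:

# PULL_REQUEST_TEMPLATE.md fragment: Agent Review Checklist (illustrative)

- [ ] Instruction drift: the change follows the scoped rules for every path it touches.
- [ ] Connector use: no new or broadened MCP access without a boundary review.
- [ ] Verification: tests or a recorded manual check confirm behavior, not just the diff.
- [ ] Mode: fast mode was used only within the Fast-Mode Gate, with the exception logged.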

Failure mode: team training stops at feature demos. Everyone knows the buttons, but nobody knows the operating model. Teach one shared map across tools: instructions, skills, connectors, and verification. Cursor teams should learn .mdc rules and AGENTS.md; Claude teams should learn CLAUDE.md, skills, hooks, and MCP; Codex teams should learn AGENTS.md, the CLI, and the verification loop. That is the core of an AI coding governance program for engineering leaders, and it is the core of an AI coding workshop that actually changes behavior. After that, the team can switch tools without relearning governance from scratch. That is tip five.
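
Written down, the shared map fits on one page. A sketch, using the per-tool artifacts named in this post:

# Shared operating map (sketch)

- Instructions: .cursor/rules/*.mdc (Cursor), CLAUDE.md (Claude Code), AGENTS.md (Codex)
- Skills: Claude Code skills; in other tools, reusable rule or prompt files
- Connectors: MCP servers, each behind an MCP Boundary Review
- Verification: the Agent Review Checklist, hooks, or a loop around codex exec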

Synthesis: fast mode is a lane, not a highway. If you make it the road itself, review will eventually go off the edge.

Tradeoffs and limits

Fast mode is still useful. I would use it for narrow, reversible work, especially when the task is well-bounded and the verification path is cheap. I would not use it as the default for connector-heavy work, architecture changes, or anything that changes shared team conventions.

The limit is simple: governance cannot compensate for vague tasks. If the prompt is underspecified, no rule file, skill, or hook will save you from rework. That is why the thesis stays the same: fast mode is a tool, not a policy.

A practical methodology note: in the Document step, write the rule once, then make it local enough that a reviewer can find it without asking around.

Where to go next

If you are standardizing this across a team, start with the shared governance page at /topics/ai-coding-governance and turn one repo into the pilot.
