ASK KNOX
beta
LESSON 174

The Surgical Iteration Rule

AI doesn't just fix what you ask — it improves what it notices. One missing phrase at the end of every prompt is the difference between a targeted fix and a regression you didn't ask for.

9 min read · Ship, Don't Just Generate

You ask an AI to center a button. It centers the button. It also adjusts the spacing on three other components, changes a font-weight, and renames a CSS class to something "more semantic." All the changes look reasonable. Two of them break things that were working. The PR now has 47 changed lines instead of 3.

No one asked for any of that.

Why This Happens

AI coding assistants are trained to produce high-quality outputs. "High quality" means clean code, consistent patterns, and no obvious issues. When an AI edits a file, it sees the surrounding code — and if it notices something that looks sub-optimal, it is inclined to fix that too.

This is not a bug. It is the training objective working exactly as designed.

The problem is that your codebase does not have a single quality reviewer. It has a history. That "sub-optimal" spacing the AI corrected? It was set that way because it matches a parent container two components up. That renamed CSS class? It is referenced in a Playwright E2E test.

The AI does not know your history. It only knows the local pattern.

The Fix Is One Sentence

Always end AI prompts with: "Fix only these items. Do not change anything else."

That is it. Nine words. They reframe the task from "be helpful" to "be precise." The AI is not less capable with the lock phrase — it is scoped. And scoped AI produces surgical changes instead of helpful regressions.
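One way to make the habit automatic is to append the phrase in code rather than from memory. A minimal sketch — the lock() helper is hypothetical, not part of any AI vendor's API:

```python
LOCK_PHRASE = "Fix only these items. Do not change anything else."

def lock(prompt: str) -> str:
    """Append the lock phrase to a prompt, without duplicating it."""
    prompt = prompt.rstrip()
    if prompt.endswith(LOCK_PHRASE):
        return prompt
    return f"{prompt}\n\n{LOCK_PHRASE}"
```

Routing every prompt through a helper like this means the scope constraint cannot be forgotten on the one prompt where it matters.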

Four Rules That Compound With the Lock Phrase

The lock phrase is the foundation. These four rules build on top of it.

1. Feedback must be specific and measurable.

Not: "The scroll feels off."

Instead: "The hero section scrolls to section 2 where the stat bar appears. The scroll-snap is not engaging — the page floats past section 2 without locking. Add scroll-snap-align: start to the section 2 container. Fix only this. Do not change anything else."

Vague feedback gives the AI latitude to interpret. Specific feedback removes the interpretation problem entirely.

2. One feature per conversation.

Never ask AI to build a multi-feature UI in a single prompt. Each visual feature — geometry, texture, animation, interaction — is a separate conversation. When AI holds the full project in one session, context overload is guaranteed and quality degrades at the edges of what it can track.

3. Integrate sequentially.

Build feature X, verify it works in isolation, then: "Here is the working feature X. Add feature Y without changing X." The phrase "without changing X" is the integration-mode version of the lock phrase. Never mix verification and new work in the same session.

4. Verify scope before accepting.

When AI returns a fix, count the changed files. If the diff is larger than expected, read what changed before accepting. This step takes 30 seconds and stops scope violations before they enter your codebase.
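The 30-second check can even be scripted. A minimal sketch, assuming the input is the output of git diff --numstat against your base branch — parse_numstat, scope_ok, and the 10-line default are illustrative names, not a standard tool:

```python
def parse_numstat(numstat: str) -> dict[str, int]:
    """Parse `git diff --numstat` output into {path: lines_changed}."""
    stats = {}
    for line in numstat.strip().splitlines():
        added, removed, path = line.split("\t")
        # binary files show "-" in both columns; count them as 0 text lines
        a = 0 if added == "-" else int(added)
        r = 0 if removed == "-" else int(removed)
        stats[path] = a + r
    return stats

def scope_ok(numstat: str, expected_files: set[str], max_lines: int = 10) -> bool:
    """True only when the diff touches just the files you asked the AI
    to change, and changes no more lines than the fix should need."""
    stats = parse_numstat(numstat)
    return set(stats) <= expected_files and sum(stats.values()) <= max_lines
```

In practice you would feed it subprocess.check_output(["git", "diff", "--numstat", "main"], text=True) and your expected file set; a False return means read the diff before accepting.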

Before and After: The Prompt Pattern

The pattern is easier to internalize with direct examples.

BAD:  "Fix the button alignment and make the layout cleaner."

GOOD: "The Submit button in LoginForm.tsx is 4px off-center on desktop.
      Center it horizontally within its flex container.
      Fix only this. Do not change anything else."

BAD:  "The stats section looks off on mobile."

GOOD: "On mobile (<768px), StatBar.tsx renders 3 items per row but
      should render 2. Add a responsive grid breakpoint at 768px.
      Fix only this. Do not change anything else."

The difference is not length — it is specificity. The bad prompts are ambiguous about what "done" looks like. The good prompts define exactly one success condition and explicitly close the scope.
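The good prompts above share one shape: file, symptom, fix, lock. That shape can be captured as a template — surgical_prompt is a hypothetical helper, shown only to make the structure explicit:

```python
def surgical_prompt(file: str, symptom: str, fix: str) -> str:
    """Build a scoped prompt: one file, one symptom, one fix, one lock phrase."""
    return (
        f"In {file}: {symptom}\n"
        f"{fix}\n"
        "Fix only this. Do not change anything else."
    )

print(surgical_prompt(
    "LoginForm.tsx",
    "the Submit button is 4px off-center on desktop.",
    "Center it horizontally within its flex container.",
))
```

If a request will not fit the template — more than one file, more than one symptom — that is the signal to split it into separate conversations.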

The Mission Control Incident

This is not theoretical. A UI fix PR in the mission-control portfolio dashboard started as a 3-line change: a card component had an incorrect justify-content value. The AI was asked to fix it.

The PR came back with 47 changed lines.

The AI had noticed sub-optimal flex patterns in four nearby components and "improved" them while it was in the file. The original fix was correct. The improvements introduced a broken layout on the Agents page — align-items was changed to a value that collapsed a flex column on a breakpoint the AI had not tested.

The fix for the fix took longer than the original fix.

The lock phrase was added to every AI prompt for UI work the following day. No such incident has occurred since.

Why This Rule Belongs in Your Default Workflow

The lock phrase is not a workaround for a bad AI. It is calibration for a good one.

AI assistants are capable of precise, targeted changes. They will also drift into broader improvements without constraint. The lock phrase is the constraint. It takes two seconds to write and eliminates the most common class of AI-introduced regression in UI work.

The InDecision frontend team applies this rule to every AI prompt that touches a component file. One feature, one conversation, one lock phrase at the end. The PRs are smaller, the reviews are faster, and the regressions from "helpful" AI changes are gone.

Lesson 174 Drill

Pull up your last five AI-generated UI PRs. For each one:

  1. Count the changed files. Was the change larger than what you asked for?
  2. Were any "bonus" changes made that you did not request?
  3. Did any of those bonus changes introduce issues?

If the answer to question 2 is yes for more than two of them, you need the lock phrase in every future prompt. Write it down somewhere you will see it before typing the next prompt. Make it a reflex before it becomes a rule.

The prompt is not done until the lock phrase is at the end.