AI drift: why outputs shift
Drift is what happens when an AI output slowly moves away from your original intent—across turns, across time, or across “similar” prompts. This page explains where drift comes from and the lightweight constraints that reduce it.
What “AI drift” looks like in practice
Drift is rarely a single obvious mistake. It’s a gradual shift: different assumptions, different emphasis, or a subtly different goal than the one you started with.
1) Goal drift
The output optimizes for a nearby goal that “sounds right,” but isn’t your actual objective.
2) Constraint drift
Important constraints (time, scope, risk, audience) fade across back-and-forth turns.
3) Assumption creep
Small guesses accumulate, and the conversation starts building on them as if they’re true.
4) Tone / policy drift
The style or risk posture changes, even if you didn’t ask for it—especially in long sessions.
Constraints that reduce drift (without heavy process)
You don’t need an elaborate workflow. The goal is to make intent and boundaries explicit—and keep them “sticky” across turns.
These are lightweight forms of governance: they don't remove creativity; they make intent explicit and reviewable.
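One way to make constraints "sticky" is to stop relying on the model remembering them from earlier turns and instead re-send a short brief with every request. The sketch below illustrates the idea; the `Session` class, `BRIEF` text, and message format are illustrative assumptions, not a specific product's API.

```python
# Minimal sketch: pin a constraint brief to every turn so goals, scope,
# and risk posture don't fade as the conversation grows.
# All names here (Session, BRIEF) are hypothetical, for illustration only.

BRIEF = """Goal: draft a migration plan for the billing service.
Constraints:
- Audience: on-call engineers, not executives.
- Scope: database layer only; do not propose API changes.
- Risk posture: conservative; flag anything irreversible.
Label assumptions as assumptions; do not state them as fact."""

class Session:
    """Tracks conversation turns and pins the brief to each request."""

    def __init__(self, brief: str):
        self.brief = brief
        self.turns: list[dict] = []

    def build_request(self, user_message: str) -> list[dict]:
        """Return the message list for the next model call."""
        self.turns.append({"role": "user", "content": user_message})
        # The brief occupies the system slot on *every* call, so the
        # constraints lead the context no matter how long the chat gets.
        return [{"role": "system", "content": self.brief}, *self.turns]

session = Session(BRIEF)
request = session.build_request("Draft the first section of the plan.")
assert request[0]["content"].startswith("Goal:")  # constraints still lead
```

The same pattern works for tone and assumption drift: anything you want to survive the whole session belongs in the pinned brief, not in a one-off turn.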
