AI drift: why outputs shift

Drift is what happens when an AI output slowly moves away from your original intent—across turns, across time, or across “similar” prompts. This page explains where drift comes from and the lightweight constraints that reduce it.

What “AI drift” looks like in practice

Drift is rarely a single obvious mistake. It’s a gradual shift: different assumptions, different emphasis, or a subtly different goal than the one you started with.

1) Goal drift

The output optimizes for a nearby goal that “sounds right,” but isn’t your actual objective.

2) Constraint drift

Important constraints (time, scope, risk, audience) fade across back-and-forth turns.

3) Assumption creep

Small guesses accumulate, and the conversation starts building on them as if they’re true.

4) Tone / policy drift

The style or risk posture changes, even if you didn’t ask for it—especially in long sessions.

Drift isn’t always “hallucination.” Often it’s a reasonable path taken without your explicit approval.

Constraints that reduce drift (without heavy process)

You don’t need an elaborate workflow. The goal is to make intent and boundaries explicit—and keep them “sticky” across turns.

1) Pin the objective and constraints
Start with "Objective:" and "Constraints:", and ask the model to restate them before answering.

2) Force a facts vs. assumptions split
Require two sections, "Known" and "Assumed," so drift is visible.

3) Use a quick "checkback" line
Add: "Before the final answer, list anything you're uncertain about."
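The three techniques above can be sketched as a reusable prompt template. This is a minimal illustration, assuming a generic chat-style model; the function name and wording are hypothetical, not from any specific library.

```python
# Sketch of a drift-resistant prompt template. The function and its
# wording are illustrative; adapt them to your own model interface.

def build_pinned_prompt(objective: str, constraints: list[str], task: str) -> str:
    """Assemble a prompt that pins intent and makes assumptions visible."""
    constraint_lines = "\n".join(f"- {c}" for c in constraints)
    return (
        f"Objective: {objective}\n"
        f"Constraints:\n{constraint_lines}\n\n"
        # Pin: ask the model to restate intent before answering.
        "Before answering, restate the objective and constraints.\n"
        # Split: make facts and guesses separately visible.
        "Structure your answer with two sections: 'Known' and 'Assumed'.\n"
        # Checkback: surface uncertainty before the final answer.
        "Before the final answer, list anything you are uncertain about.\n\n"
        f"Task: {task}"
    )

prompt = build_pinned_prompt(
    objective="Draft a migration plan for the billing service",
    constraints=["two-week timeline", "no customer-facing downtime"],
    task="Outline the migration steps.",
)
print(prompt)
```

Because the objective, constraints, and checkback travel inside every prompt, they stay "sticky" even when a long session would otherwise let them fade.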

These are lightweight forms of governance: they don't remove creativity; they make intent explicit and reviewable.