Governed Prompt Engine

OPERATING GUIDE

iWasGonna™
2026 Edition

How to Use This Document

This is a governed operating guide: accuracy beats style. Work through the six components in order; each section states the rule, shows a good and a bad example, and closes with its governing principle.

1. ROLE

Who the AI must act as.

Rule: The role defines capability, not personality.

Good: “You are a senior B2B product strategist with experience in regulated markets.”

Bad: “You are a genius growth hacker.”

Capability Boundary “The role defines what the AI is qualified to do — not how smart, creative, or confident it sounds. Anything outside this box is human responsibility.”

2. TASK

The specific job to be done.

Rule: One job. One output. No compound requests.

Good: “Identify the primary buying objection preventing adoption.”

Bad: “Analyze everything and give ideas.”

Single-Output Funnel “One task produces one output. Multiple jobs = diluted answers and silent prioritization by the model.”

3. CONTEXT

Only the facts required to do the job.

Rule: Context is inputs, not storytelling.

Include: Business reality, Audience reality, Constraints that actually exist.

Exclude: Aspirations, Hypotheticals, Marketing spin.

Input Filter “Context is raw material, not storytelling. If it doesn’t change the answer, it doesn’t belong.”

4. CONSTRAINTS

What the AI is not allowed to do.

Rule: Constraints reduce hallucinations more than instructions.

Examples: No guarantees, No legal claims, No invented data, No competitor naming, Ask for clarification if inputs are missing.

Guardrails Before Intelligence “Constraints reduce hallucinations more effectively than instructions. What the AI cannot do matters more than what it can.”

5. OUTPUT FORMAT

How the result must be delivered.

Rule: If you don’t specify format, you’ll get prose sludge.

Examples: Bullet list, Table, Step-by-step logic, Decision tree, Short paragraph.

Anti-Prose Lock “If the format isn’t specified, the model defaults to persuasive prose. Format is how you force structure.”

6. GOVERNANCE

The truth filter.

Rule: Governance overrides creativity.

Typical Rules: Use only provided inputs, Do not invent facts, Flag uncertainty, Ask clarifying questions, Optimize for accuracy.

Truth Gate “Governance overrides creativity. A compelling answer that can’t be defended is a failure.”
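The six components above assemble mechanically, in order. A minimal sketch in Python (the section names come from this guide; the helper function and its field values are illustrative, not a prescribed API):

```python
# Assemble a governed prompt from the six components, in the order this
# guide defines them. Everything below the function is sample input.

def build_governed_prompt(role, task, context, constraints, output_format, governance):
    """Return one prompt string with each component explicitly labeled."""
    sections = [
        ("ROLE", role),
        ("TASK", task),
        ("CONTEXT", context),
        ("CONSTRAINTS", constraints),
        ("OUTPUT FORMAT", output_format),
        ("GOVERNANCE", governance),
    ]
    return "\n\n".join(f"{name}:\n{body}" for name, body in sections)

prompt = build_governed_prompt(
    role="You are a senior B2B product strategist with experience in regulated markets.",
    task="Identify the primary buying objection preventing adoption.",
    context="Business reality, audience reality, and real constraints go here.",
    constraints="No guarantees. No invented data. Ask for clarification if inputs are missing.",
    output_format="Bullet list.",
    governance="Use only provided inputs. Flag uncertainty. Optimize for accuracy.",
)
```

Labeling each section keeps the Anti-Prose Lock and Truth Gate visible to the model rather than implied.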

Variable Fill Guide

<business> What actually exists today. Include what is sold/current stage. Exclude future plans.

<persona> The real decision-maker. Include level of sophistication/primary pain.

<offer> The specific thing being evaluated. Include core mechanism and outcome.

<market> The environment the decision happens in. Include constraints that shape behavior.

Present State Only “Only what exists today belongs here. Futures create false confidence.”
Decision Authority “This is the person who says yes or no. If no one can say no, the persona is wrong.”
Mechanism vs Promise “Evaluate the mechanism, not the aspiration.”
Constraint Field “Markets shape behavior through constraints, not labels.”
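The four variables above can be filled programmatically, with a hard failure when an input is missing rather than a silent assumption. A sketch, assuming the angle-bracket placeholder convention used in this guide (the template text and fill values are invented examples):

```python
import re

# Template using the guide's four variables as placeholders.
TEMPLATE = (
    "Business: <business>\n"
    "Persona: <persona>\n"
    "Offer: <offer>\n"
    "Market: <market>"
)

def fill_variables(template, values):
    """Substitute each <variable>; refuse to proceed if any is unfilled."""
    out = template
    for name, value in values.items():
        out = out.replace(f"<{name}>", value)
    leftover = re.findall(r"<(business|persona|offer|market)>", out)
    if leftover:
        raise ValueError(f"Missing inputs {leftover}: ask for them, do not assume.")
    return out

filled = fill_variables(TEMPLATE, {
    "business": "12-person compliance-software firm; Series A; sells annual licenses.",
    "persona": "Head of Compliance; sophisticated buyer; primary pain is audit overhead.",
    "offer": "Automated evidence collection; mechanism is API integration; outcome is faster audits.",
    "market": "Regulated fintech; procurement cycles exceed six months.",
})
```

Raising on a missing variable enforces the Present State Only rule in code: no placeholder survives into the prompt.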

Quality Gate Checklist

Did the AI invent facts or guarantees?
Did it answer a different question than the task?
Did it assume missing inputs instead of asking?
Is the output actionable or just descriptive?
Would you defend this in a meeting?
Meeting Test “If you wouldn’t defend this output in a meeting, don’t trust it.”
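The five questions above form a pass/fail gate: the first three must be answered no, the last two yes. A minimal sketch of enforcing that (the answers still come from a human reviewer; the function names are illustrative):

```python
# The guide's Quality Gate: each question paired with the answer
# required for the output to pass (False = "no", True = "yes").
GATE = [
    ("Did the AI invent facts or guarantees?", False),
    ("Did it answer a different question than the task?", False),
    ("Did it assume missing inputs instead of asking?", False),
    ("Is the output actionable rather than just descriptive?", True),
    ("Would you defend this in a meeting?", True),
]

def passes_quality_gate(answers):
    """answers maps each question to the reviewer's yes/no verdict.
    The gate passes only if every answer matches the required one."""
    return all(answers[question] == required for question, required in GATE)

# A review that answers every question the required way passes the gate.
review = {question: required for question, required in GATE}
```

One wrong answer fails the whole gate: a single invented fact is enough to fail the Meeting Test.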

Hard Rules

Assumed inputs are liabilities. Vague prompts create confident nonsense. Governance is not optional.

Final Principle

AI is confident by default. Governance is how you earn trust.


Confidence vs Trust “AI confidence is free. Trust is engineered.”