A calm, readable view of what’s changing — and what it means operationally.
AI rules are moving fast. This page is a practical tracker — not a legal database — built to help you spot changes early and adjust your AI governance and workflows accordingly.
How to use this tracker (simple)
Watch the categories
Privacy, consent, transparency, model risk, and sector-specific rules.
Map to your workflows
Where do you collect data, make decisions, or generate outputs that affect people?
Update governance
Adjust your AI Blueprint, policies, and documentation so behavior stays consistent.
What we track
These are the buckets that actually affect operations, products, and risk.
Privacy & data handling
Data minimization, retention, access controls, and sensitive data rules.
Consent & disclosure
When users must be informed, what must be disclosed, and how consent is recorded.
Transparency & explainability
When AI involvement must be stated and when decisions must be explainable.
Risk management
Testing, monitoring, incident response, and accountability requirements.
Sector rules
Healthcare, finance, employment, education — higher scrutiny and stricter requirements.
IP / content provenance
Training data, output ownership, attribution, and rights management.
Tracker (operational view)
Use this table as the “single pane of glass.” Keep it current by adding rows as rules change. The table can later be wired to a Sheet and embedded; a minimal schema sketch follows below.
| Status | Region / Scope | Topic | What changed | Operational impact | Action |
|---|---|---|---|---|---|
| Watch | U.S. (Federal / State) | Privacy / data handling | New guidance or proposed rule affecting AI data use. | May require updated disclosure + retention rules. | Review policy, update AI Blueprint boundaries. |
| Draft | EU / UK | Transparency / labeling | Proposed requirements for AI disclosure in certain contexts. | Product UI + content labeling updates. | Add disclosure language + documentation checklist. |
| Live | Industry / Sector | Employment / HR | Enforcement focus on automated decision systems. | Audit decision logic + provide explainability. | Implement review logs + human-in-the-loop controls. |
Tip: Keep each row grounded in the specifics: what changed, what it affects, and what you’ll do next.
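If you do wire the tracker to a Sheet, a small typed schema helps keep the columns consistent with the table above. Here is a minimal sketch in TypeScript, assuming a published CSV export of the Sheet; the `TrackerEntry` type, the field names, and the CSV URL are illustrative assumptions, not an existing integration.

```ts
// Minimal sketch: one tracker row, mirroring the table columns above.
type TrackerStatus = "Watch" | "Draft" | "Live";

interface TrackerEntry {
  status: TrackerStatus;
  region: string;            // e.g. "EU / UK" or a sector/scope label
  topic: string;             // e.g. "Transparency / labeling"
  whatChanged: string;
  operationalImpact: string;
  action: string;
}

// Parse a published CSV export (header row first, no quoted commas).
// Real exports usually need a proper CSV parser; this keeps the idea visible.
function parseTracker(csv: string): TrackerEntry[] {
  return csv
    .trim()
    .split("\n")
    .slice(1) // drop the header row
    .map((row) => {
      const [status, region, topic, whatChanged, operationalImpact, action] = row.split(",");
      return { status: status as TrackerStatus, region, topic, whatChanged, operationalImpact, action };
    });
}

// Example usage: fetch the CSV from a hypothetical published-Sheet URL
// and surface only the rows that are already in force.
async function liveItems(csvUrl: string): Promise<TrackerEntry[]> {
  const response = await fetch(csvUrl);
  return parseTracker(await response.text()).filter((entry) => entry.status === "Live");
}
```

Keeping the status values to a small fixed set (Watch / Draft / Live) makes it easy to filter the view down to the rules that are already enforceable.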
How this connects to iWasGonna™ governance
The point of tracking laws isn’t fear — it’s stability. Governance keeps your AI behavior consistent, even when rules change.
AI Bill of Rights
User protections: consent, clarity, boundaries, and control.
AI Blueprint™
Your day-to-day standard for consistent AI interaction and outputs.
