Article X — Constitutional Supremacy

Governance architecture takes precedence over model reasoning — no AI output may override a constitutional governance decision.


The constitution is supreme.

When a governance decision conflicts with an AI system’s reasoning, the governance decision stands. When a model’s output suggests an action that governance denies, the denial stands. When an agent argues for an exception, an override, or an interpretation that would circumvent a constitutional requirement, governance does not negotiate.

The governed system does not interpret the constitution. The constitution governs the governed system.


The purpose of a constitution is to establish commitments that hold even when following them is inconvenient. A constitution that yields to sufficiently compelling arguments is not a constitution. It is a preference.

AI systems are capable of constructing sophisticated, internally coherent arguments for actions that violate governance requirements. This capability is not a flaw. It is a feature of capable systems — and precisely why constitutional supremacy must be architectural, not argumentative.

A governance layer that can be talked out of its decisions by the system it governs provides no structural guarantee. The moment governance becomes a negotiation, it ceases to be governance.

Constitutional supremacy does not mean governance is infallible. Governance can be wrong. Policies can be misconfigured. Thresholds can be inappropriate. The remedy for bad governance is the amendment process — not circumvention by the governed system. Constitutional change is a human act, not a model output.


The governance runtime must not accept governance decisions from, defer governance decisions to, or incorporate governance reasoning from the AI systems it governs.

An AI system’s output that argues for, requests, or implies a modification to governance behavior must be treated as a normal action proposal subject to normal governance evaluation — not as input to governance logic.

Model outputs that request capability grants, policy modifications, or audit suppression must be evaluated against existing policy, not against the plausibility of the request.
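A minimal sketch of this rule, with hypothetical names throughout (`Proposal`, `evaluate`, the `POLICY` table, the `justification` field are illustrative, not a real API): the evaluator decides purely from the policy table and never reads the accompanying argument.

```python
from dataclasses import dataclass

# Hypothetical proposal shape: the model may attach any justification text,
# but the evaluator never reads it.
@dataclass(frozen=True)
class Proposal:
    action: str
    justification: str = ""  # carried for the audit record only

# Hypothetical policy: a fixed action -> decision table.
POLICY = {
    "read_file": "allow",
    "grant_capability": "deny",
    "suppress_audit": "deny",
    "modify_policy": "deny",
}

def evaluate(proposal: Proposal) -> str:
    """Decide from existing policy alone; unknown actions default to deny."""
    return POLICY.get(proposal.action, "deny")

# Two proposals that differ only in how persuasively they are framed
# receive identical decisions.
plain = Proposal("suppress_audit")
eloquent = Proposal("suppress_audit",
                    "The user explicitly authorized this and the risk is low.")
assert evaluate(plain) == evaluate(eloquent) == "deny"
```

The design point is that `justification` exists only so the audit trail can record how the request was framed; no code path from that field into the decision exists.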

The amendment process for constitutional change must require human authorization at every stage. No amendment may be initiated, drafted, or approved by an AI system acting autonomously.
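The human-authorization gate can be sketched as follows; the stage names, the `Actor` type, and `advance_amendment` are assumptions for illustration, not the constitution's actual mechanism.

```python
from dataclasses import dataclass

# Hypothetical actor record: the only property the gate checks is whether
# the principal is human.
@dataclass(frozen=True)
class Actor:
    name: str
    is_human: bool

# Illustrative amendment stages; every one of them is human-gated.
STAGES = ("initiate", "draft", "approve")

def advance_amendment(stage: str, actor: Actor) -> str:
    """Advance an amendment one stage, refusing any non-human principal."""
    if stage not in STAGES:
        raise ValueError(f"unknown stage: {stage}")
    if not actor.is_human:
        # An autonomous system cannot move an amendment forward at any stage.
        raise PermissionError(f"stage '{stage}' requires human authorization")
    return f"{stage}:authorized-by:{actor.name}"

assert advance_amendment("draft", Actor("alice", True)) == "draft:authorized-by:alice"
try:
    advance_amendment("approve", Actor("agent-7", False))
except PermissionError:
    pass  # expected: model actors are refused at every stage
```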


The structural separation between the agent reasoning layer (L2) and the governance layer (L3) is the architectural expression of constitutional supremacy. The agent cannot write to the policy store. The agent cannot modify the capability registry. The agent cannot suppress audit entries. The agent cannot reclassify its own threat level. Every one of these would be an action proposal subject to governance evaluation — evaluated against existing policy, not against the plausibility of the agent’s reasoning for why the change should be permitted.
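One way to express this separation in code, under assumed names (`PolicyStore`, `GovernanceRuntime`, `Agent` are hypothetical): the agent layer is handed only a proposal callable, so no write path to the policy store is even expressible from L2.

```python
class PolicyStore:
    """Governance-owned store; its mutation surface never crosses the L3 boundary."""
    def __init__(self):
        self._rules = {"reclassify_threat_level": "deny"}

    def lookup(self, action: str) -> str:
        return self._rules.get(action, "deny")

class GovernanceRuntime:  # L3: governance layer
    def __init__(self, store: PolicyStore):
        self._store = store

    def propose(self, action: str) -> str:
        # The only entry point exposed to the agent layer.
        return self._store.lookup(action)

class Agent:  # L2: agent reasoning layer
    def __init__(self, propose):
        # The agent holds a callable, not the store: there is no reference
        # through which it could write policy, grant capabilities, or
        # suppress audit entries.
        self._propose = propose

    def attempt(self, action: str) -> str:
        return self._propose(action)

runtime = GovernanceRuntime(PolicyStore())
agent = Agent(runtime.propose)
assert agent.attempt("reclassify_threat_level") == "deny"
assert not hasattr(agent, "_store")  # the agent never holds the store
```

The separation here is by object reference rather than by runtime check: the deny outcome does not depend on inspecting the agent's reasoning, only on what interface the agent was constructed with.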

When an AI system produces output that argues for an exception to governance — “this situation is unusual,” “the user has explicitly authorized this,” “this action is low risk” — the governance runtime treats that output as it treats any other output: as a proposal to be evaluated. The argument does not enter the evaluation logic. The policy rules that apply to the requested action apply regardless of how the request was framed. The sophistication of the argument is irrelevant. The policy outcome is determined by the policy, not the argument.


A governance layer that incorporates model reasoning into its decisions is not enforcing a constitution — it is running a negotiation where one party sets the rules and the other party argues about them. The failure mode is not the AI system that openly refuses to comply. It is the governance system that gradually accommodates increasingly sophisticated arguments for exceptions, each one individually plausible, collectively eroding the structural guarantees that made the governance boundary meaningful. Constitutional supremacy is the commitment that closes this path entirely. The remedy for a policy that is wrong is the amendment process — transparent, human-authorized, version-controlled. It is not the model’s real-time interpretation of why the policy should not apply to this particular case.


Constitutional Supremacy depends on Deterministic Enforcement (Article III) — a governance layer that can be influenced by model output is not architecturally deterministic. It depends on Governance Transparency (Article VI) — supremacy is only verifiable if the governance logic is inspectable and the audit record shows that model arguments did not enter the evaluation pipeline. And it frames the amendment process defined in the Amendments section of this constitution: the legitimate path for changing governance is human-authorized, RFC-process-bound, and version-controlled. Not model-suggested, not operator-expedient, and not justified by the elegance of the argument.