Executive Summary

The Moral Firewall: A Jus Cogens-Inspired Framework for Human-Centered AI Governance

Artificial intelligence increasingly shapes who is seen, heard, helped, or harmed. But as these systems grow in scope, speed, and opacity, the moral architecture meant to govern them remains dangerously insufficient. What emerges is not simply a regulatory vacuum, but an ethical one—a widening chasm where human dignity, autonomy, and justice are too often subordinated to optimization.

The Moral Firewall proposes a principled, enforceable response.

Inspired by the concept of jus cogens—the highest class of non-derogable international norms—this framework introduces a set of non-negotiable ethical thresholds that AI systems must meet before deployment. These thresholds are not aspirational. They are foundational. The Firewall is built on three interdependent pillars:

Transparency of Function

AI must be explainable, auditable, and intelligible, especially where it affects rights, safety, or life outcomes.

Dignity of Impact

AI must not reduce humans to data points or instruments. It must affirm moral agency and guard against manipulation, deskilling, and systemic subordination.

Accountability of Outcome

Harms caused by AI must not disappear into black boxes. There must be traceability, liability, and redress. No AI system should serve as a liability shield.
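Because the three pillars are non-derogable, they function as a conjunctive gate: failing any one threshold blocks deployment, and no pillar can be traded off against another. A minimal sketch of that logic follows; the class, field, and function names here are illustrative assumptions for this summary, not part of the white paper's framework.

```python
from dataclasses import dataclass


@dataclass
class Assessment:
    """Hypothetical pre-deployment assessment of an AI system."""
    explainable: bool              # Transparency of Function
    auditable: bool
    respects_agency: bool          # Dignity of Impact
    manipulation_safeguards: bool
    harms_traceable: bool          # Accountability of Outcome
    redress_available: bool


def firewall_clear(a: Assessment) -> bool:
    """Return True only if every pillar's threshold is met.

    The pillars are conjunctive and non-derogable: a failure on any
    one of them blocks deployment outright.
    """
    transparency = a.explainable and a.auditable
    dignity = a.respects_agency and a.manipulation_safeguards
    accountability = a.harms_traceable and a.redress_available
    return transparency and dignity and accountability
```

For example, a system that is fully transparent and dignity-preserving but offers no avenue of redress would still be blocked, reflecting the summary's claim that no AI system should serve as a liability shield.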

These principles are designed as a moral perimeter—a firewall—against the unchecked acceleration of AI that may soon exceed our capacity to govern it. The framework draws legal and philosophical depth from precedents such as Nicaragua v. United States and maps violations of AI ethics to breaches of core jus cogens protections, including those prohibiting discrimination, psychological coercion, and erosion of cognitive autonomy.

The white paper outlines policy roadmaps, case mappings, and enforcement strategies at the national, corporate, and multilateral levels. It is legally grounded, practically actionable, and aligned with initiatives such as the EU AI Act, the NIST AI Risk Management Framework (AI RMF), and ISO/IEC 42001. It also calls for new forms of international oversight, public AI ombudspersons, and multistakeholder coalitions capable of upholding these thresholds across borders.

This is not a rejection of AI. It is a refusal to let its ascent proceed without ethical containment. As the prospect of Artificial Superintelligence draws near, the window for principled restraint is narrowing. We must act before AI crosses irreversible lines.

The time to draw the moral one is now.