An Open Experiment in Constitutional Governance for the AI Era

The word “research” has been worn down to dust. Its origin: an alert movement in the name of not-knowing.

A note from Amihai Loven, Jeonju, asking for thinking partners


Dear reader,

Imagine a human and an alien, alone on an island. Three things can happen. The human can learn the alien's language. The alien can learn the human's. Or they can build a third language between them — one that lets each remain what they are, while making collaboration possible. Whether they survive, and whether they prosper, depends almost entirely on which of these three they choose. The fourth option — staying without any shared language — is the one that ends the story.

This is the situation we are now in with artificial intelligence. The alien is here. It is not arriving on the horizon; it is already inside every consequential decision point in our institutions, our markets, our medicine, our courts, our boardrooms. The question is no longer whether to engage. The question is in which language.

For several years I have been working on what that third language might look like — not as metaphor, but as a constitutional grammar precise enough to be written into law and humble enough to remain open.

The work has produced a corpus, published in full and openly licensed at 5qln.com. It includes the grammar itself (the Codex), a drafted Delaware nonstock nonprofit Certificate of Incorporation, paired Bylaws in two editions hash-matched across substrates (Human Edition and AI OS Edition), a final blueprint with thirty-five architectural rows and seven boundary protocols, a fiduciary companion essay grounding the work in the Caremark/Marchand/McDonald's lineage, and a memo addressed to Delaware Chancery on what the Court will be asked when the AI-assisted-decision cases arrive.

I am not announcing a foundation. I am asking for thinking partners.


Here is a second picture. A board of directors meets to approve a strategic decision. The resolution they vote on was drafted by an AI system. The risk language was generated by it. The comparative analysis was produced by it. The directors read, ask questions, deliberate, and sign. The minutes record what was decided. They do not record how the decision formed.

Two years later, something goes wrong, and a derivative plaintiff asks the obvious question: who actually decided this? Under existing doctrine — Caremark, Marchand, McDonald's — the Court of Chancery presumes the answer is the directors. Two and a half decades of oversight jurisprudence rest on a premise the AI did not break by malice but simply by being there: that fiduciary judgment is human judgment all the way down. The AI did not vote. It also did not need to. Its work shaped what the humans had in front of them when they voted.

The lawyer-fabrication cases — Mata v. Avianca, Johnson v. Dunn, MyPillow, Buchanan v. Vuori, with the HEC Paris database now tracking more than 1,300 such filings — are the early warning. The structural pattern at the board level will be the same: an agent produced substantive work; a fiduciary presented it as their own; the record cannot show where independent human judgment was exercised at the threshold of decision. Lawyers paid in sanctions. Directors will pay in derivative settlements, in D&O exclusions, and in personal exposure where coverage was bought without contemplating the risk.

Existing frameworks — NIST AI RMF, ISO 42001, the EU AI Act, SEC guidance — all describe what organizations should do around their AI systems. None of them produce a structurally verifiable record of where, in any specific decision, the substantive judgment formed. Regulation governs what organizations report. It does not govern what happens at the moment a human director either exercises judgment or ratifies output.

This is the language gap. And it is what the work tries to address.


The third picture. Imagine a thin line drawn between two domains — call them human and machine, or judgment and computation, or anything you like. The line is not a wall. It is a membrane: porous in one direction, structural in the other. Information flows across it. Authorization does not. The human side is the side where the question was opened. The machine side is the side where the patterns are recombined. The decision is made at the line, by the human, with the machine's contribution visible and recorded but not collapsing into the human's signature.

The proposal is that this line — the Membrane — can be written into Bylaws as a fiduciary duty (the Duty of Membrane Integrity, G.L.2(f) of the Human Edition Bylaws), owed by each director to the corporation. Verifiable in part by a deterministic checker that needs no AI to run. Examinable by a court in 2050 against artifacts sealed in 2026. Not as compliance overlay. Not as procedural policy. As the substrate the decision is formed within.
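To make "a deterministic checker that needs no AI to run" concrete, here is a minimal sketch of what such a checker can mean at its simplest layer: comparing the SHA-256 digests of decision artifacts against a manifest sealed earlier. The function names, manifest shape, and artifact bytes below are illustrative assumptions of mine, not the corpus's actual format; the point is only that the verification step is pure byte comparison, with no model inference anywhere in the loop.

```python
# Hypothetical sketch: verify sealed artifacts against a manifest of
# SHA-256 digests. Deterministic: same bytes in, same verdict out.
import hashlib


def sha256_hex(data: bytes) -> str:
    """Return the SHA-256 digest of raw bytes as lowercase hex."""
    return hashlib.sha256(data).hexdigest()


def verify_artifacts(manifest: dict[str, str],
                     artifacts: dict[str, bytes]) -> dict[str, bool]:
    """Compare each artifact's current digest with its sealed manifest entry.

    manifest  -- artifact name -> expected hex digest (recorded at sealing)
    artifacts -- artifact name -> bytes presented for examination later
    Returns a per-artifact pass/fail map; a missing artifact fails.
    """
    return {
        name: name in artifacts and sha256_hex(artifacts[name]) == expected
        for name, expected in manifest.items()
    }


# Illustrative use: an edition is "hash-matched" when the digest recorded
# at sealing time still matches the bytes a court examines years later.
bylaws_human = b"Bylaws, Human Edition (illustrative bytes)"
manifest = {"bylaws_human": sha256_hex(bylaws_human)}
print(verify_artifacts(manifest, {"bylaws_human": bylaws_human}))
```

A checker like this is examinable in 2050 against artifacts sealed in 2026 precisely because it depends on nothing but the hash function and the bytes themselves.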

I believe the proposal is rigorous enough to be worth stress-testing. I do not believe it is finished, and I am wary of anyone — including myself — who would treat it as such.


What I am looking for, before anything else moves forward, is the stress test.

Specifically: a small number of thinking partners — people whose judgment I would trust to tell me where the architecture is wrong, where it is right but unworkable, and where it is right and workable but presented in a way that will get it dismissed before it can be examined.

The work I would want to do with such partners is concrete. Read the corpus — or as much of it as is useful — with a critic's eye and tell me what would have to be true for it to deserve serious treatment in your professional world. If you are a corporate-governance lawyer: what would have to change in the Bylaws or the Certificate before you would advise a client to look at it. If you are an AI lab leader: what would have to be true at the runtime layer before this is something a frontier lab could attest to. If you are a regulator or policy thinker: what relationship between this voluntary architecture and existing positive law would make it complementary rather than redundant. If you are a fiduciary: what about your actual practice, today, would this either help or get in the way of.

Help me think about what kind of organization should carry this work, if any, and at what pace. The Foundation as currently drafted is a Delaware nonstock nonprofit. That choice may be right and may be wrong; I would rather have it tested by people who have built such organizations than ratify it by drafting more documents. The choice between filing now, filing later, and not filing at all — and the choice between this being a Foundation, a working group, an open standard, or something I have not yet imagined — is a choice I would like to make with company.

Help me find the other thinking partners. The work is structured around five phases — opening, pattern, resonance, flow, and crystallization — each of which would benefit from one person who lives in that phase's craft. I do not yet know who those people are. I have ideas. I would rather discover them through the conversations this note begins than recruit them by name.

Other things — endorsements, signatures, commitments, named roles, founding cohorts — are not relevant at this stage. They may emerge from the shared dialogue, or they may not. What matters now is the stress test, before I move forward with establishing a foundation as a real thing in the world.


The honest accounting is that I have spent the past several years writing into the corpus rather than out of it. The vocabulary is its own. The entry points have been written for readers already inside the work, not for fiduciaries, lawyers, and lab leaders encountering it for the first time on a Tuesday afternoon between two meetings.

That is the gap I am trying to close now, and the gap I cannot close alone. The corpus is mature enough that the next move is not more writing; it is contact with people whose intelligence and judgment can stress-test what is there. The timing matters. The cases are arriving. The regulatory floor is being set. If a voluntary fiduciary architecture for the AI era is going to exist, it will be defined in the next eighteen months by whoever shows up to define it.

If any of this lands with you as worth a conversation — even a short one, even a skeptical one, even one whose conclusion is “this is interesting but not yet” or “this is wrong, and here is why” — I would be grateful for it.

The corpus is at 5qln.com. If you had time for one essay, I would suggest The Auditable Membrane: A Fiduciary Companion — not because it is the most rigorous (the Final Blueprint is) but because it is the closest to a register a serious professional reader can engage with on first encounter. If you had time for two, add the Letter to the Delaware Courts. The Codex is the source if you want to see the grammar in its compiled form; the Bylaws (Human Edition) and Certificate of Incorporation are the actual legal instruments where the proposal becomes operative.

The work is open. The question is alive. I am asking for company in carrying it.

amihai@qln.life

— Amihai Loven

Jeonju, South Korea  

