Has AI Become the Most Powerful Institution Americans Never Approved?
Consent in the age of algorithms.
When AI Collided With the Chain of Command
On February 24, 2026, Defense Secretary Pete Hegseth delivered an ultimatum to Dario Amodei at the Pentagon:
By 5:01 p.m. that Friday, remove two ethical guardrails built into Claude—no mass domestic surveillance of Americans and no fully autonomous weapons—or face consequences.
It was a hard deadline. And it carried weight.
As of early 2026, Claude was operating in more than 150,000 enterprise deployments, including agencies within the U.S. federal government. That made its embedded values operational rather than theoretical.
Three days later—hours before Hegseth’s deadline had even arrived—Donald Trump posted on Truth Social, directing every federal agency to immediately cease use of Anthropic’s technology and designating the company a national threat.
Within the hour, Hegseth escalated. He labeled Anthropic a “supply chain risk to national security”—a designation previously reserved for Chinese military-linked companies like Huawei and never before publicly applied to an American firm.
By that evening, OpenAI had signed its own Pentagon agreement. It included the same two safeguards that had just cost Anthropic its federal contracts.
So what happened?
The Convenient Narrative
Many picked a side almost immediately. From a distance, the narratives felt clean and simple: either Anthropic was a principled company holding the line against executive overreach, or it was an arrogant tech firm dictating terms to the U.S. military. Both stories feel plausible. Both miss the same thing.
For what it’s worth, the two values Anthropic protected do not seem unreasonable. But that isn’t the real question.
The framing treats this as a fight between two players. In reality, a third party never made it into the spotlight: the American people.
The military chain of command has a constitutional structure. Congress appropriates. The President commands. The Department of Defense executes. Clear lines. Clear authority.
What that structure does not answer is this:
Anthropic built an AI “constitution” governing what its technology will and won’t do. That constitution is now embedded in systems running across American courtrooms, classrooms, hospitals, and classified government networks.
No legislature passed it.
No court reviewed it.
No voter approved it.
So the real question isn’t whether the values are reasonable.
It’s who authorized a private company to write them.
A Second Constitution
Let’s get to the basics.
Somewhere between a San Francisco startup’s ethics committee and the lives of 330 million Americans, a substitution happened.
The U.S. Constitution—ratified by the people and amendable only by a supermajority—was quietly joined by a second governing document. A private one. Written by engineers and ethicists at a company valued at $380 billion.
There was no public debate. No ratification vote. No constitutional amendment.
The Founders built a republic on a single radical idea: authority flows from the consent of the governed.
Not from Silicon Valley sophistication.
Not from good intentions.
Not from private expertise.
The decisive question was never truly about autonomous weapons or surveillance. Those were surface-level disputes. The deeper issue is older than artificial intelligence and more fundamental than any administration.
In America, who writes the rules?
And did anyone ask us?
Four Artifacts. One Pattern
The artifacts are on the record.
First: Anthropic’s Claude Constitution, updated January 22, 2026. It establishes a four-tier value hierarchy—safety, ethics, compliance, helpfulness—governing how Claude responds across every deployment. It was written internally and published unilaterally. There is no statutory authority behind it.
Second: Dario Amodei’s February 26 statement: “Anthropic understands that the Department of War, not private companies, makes military decisions”—followed immediately by two exceptions his company will enforce regardless.
Read that carefully.
A private company concedes it has no governing authority, then exercises it anyway. That is not hypocrisy. It is the mechanism.
Third: 10 U.S.C. § 3252, the supply chain risk statute—historically reserved for foreign adversaries and never before publicly applied to an American company operating under a valid federal contract.
Fourth: OpenAI’s Pentagon agreement, announced that same evening, included identical safeguards—structured through contract language, cloud-only architecture, and cleared-personnel oversight. The Pentagon accepted those terms within hours of rejecting Anthropic’s.
A reasonable reader might pause here.
If both deals ended in the same place, why did the path matter so much?
The Branding of Governance
“Constitutional AI” sounds like a safeguard. It sounds civic. It sounds American.
But in the American tradition, a constitution is a social compact. It is the product of debate, ratification, and accountability. It binds the government because the people agreed to it.
Anthropic’s document is something different. It is a corporate policy with a prestigious name.
It borrows language from the UN Universal Declaration of Human Rights—aspirational principles with no direct force in U.S. domestic law. It echoes the structure of Apple’s terms of service—documents designed to manage risk and limit liability. And it layers in internal guidelines that answer to investors and executives, not to voters.
Consider the gap.
The UN Declaration states, “Everyone has the right to life, liberty and security of person.” The language is noble. It is also unenforceable in American courts unless adopted into law.
Anthropic’s constitution translates that aspiration into model behavior rules. Those rules are enforced only by the company that wrote them. They can be revised at any board meeting.
That is not a constitutional system. It is a private governance framework.
In a republic, a document carries constitutional weight only if three conditions hold:
Authority derives from the people.
The process is transparent and open to challenge.
The rules can be changed by voting.
Test those conditions against Anthropic’s constitution.
Each one fails.
Terms of Service—For a Nation
Think of it as a franchise agreement that quietly became a governing document.
Anthropic is the franchisor. It writes the terms. The Pentagon, your hospital, and your child’s school are the franchisees. They operate inside those terms. You, the citizen, are the customer who never saw the contract.
There was no signature line for the public. There was no disclosure meeting.
Picture it this way.
A public defender in a rural county uses an AI-assisted case management tool. The tool runs on Claude. The values governing what it will and won’t surface—what arguments it flags, what precedents it omits—were written by an ethics committee in San Francisco.
The defender doesn’t know this.
Neither does the defendant.
Yet those embedded values shape what information rises to the top and what fades into the background. Over time, that shaping becomes structure. And structure becomes power.
The governing principle is simple: asymmetric power flows to whoever controls the terms of service.
Whoever writes the terms sets the values. Whoever sets the values shapes the machine. Whoever shapes the machine shapes what is possible—for everyone.
No election required.
No amendment process.
No consent of the governed.
The Downstream Effect
Anthropic built a value framework and embedded it inside a powerful AI system. Because that system became indispensable—to enterprises, federal agencies, and military networks—the framework traveled with it.
Not by force. By contract.
The values moved downstream the way water moves. They found every channel, filled every space, and arrived almost everywhere before anyone stopped to ask where they came from.
And like water, once they settle into the infrastructure, you stop noticing them. They become background conditions. Invisible. Assumed.
Until someone tries to shut them off.
When the Pentagon pulled on the thread, what unraveled was not just a contract dispute. It was a glimpse of something larger: the quiet privatization of the value layer governing AI in American life.
An internal ethics document became de facto public policy for millions of Americans who never read it and were never asked.
Grant the Premise, Question the Process
Anthropic’s two red lines are not unreasonable.
Most Americans would agree that AI should not power mass surveillance. Most would also agree that it should not operate weapons without human oversight. And Anthropic is not alone. OpenAI, Google, and Meta have all published internal ethics frameworks shaped by serious people thinking carefully about challenging problems.
The good faith deserves acknowledgment.
But the concession does not answer the central question.
When did the public agree to let private companies define the ethical boundaries of the most transformative technology in human history?
The rules governing what AI will and will not do in your doctor’s office, your courthouse, and your child’s classroom were written in corporate headquarters. They were reviewed by private boards and published on company websites.
There was no constitutional process.
No public comment period.
No elected vote.
The precedent was not set on February 27.
It was set quietly, years earlier—each time a major AI company published a values document and the public moved on without asking who authorized it.
February 27 simply turned on the lights.
The Cost of Bypassing Process
The Constitution already provides a process for questions like this: Who sets the rules? By what authority? Accountable to whom?
That process was not followed. Not because anyone broke the law, but because the law has not caught up.
From that gap, three costs compound.
First: democratic legitimacy.
When a private document becomes the rulebook for technology woven into national security, healthcare, and commerce, the constitutional process is not violated. It is bypassed. Quietly. Incrementally. Without notice.
An insurance claim.
A hospital triage decision.
A veteran’s benefits determination.
Each may be shaped by values the public never debated and cannot contest.
Second: accountability.
Anthropic’s constitution is enforceable only by the company that wrote it. It can be updated on a Tuesday, modified on Thursday, or abandoned altogether.
There is no appeals process. No public vote.
If the framework is changed in 2027, the courtrooms and classrooms running Claude in 2026 will have operated under rules that no longer exist. There may be no clear record of the change and no recourse for those affected.
Third: the civilizational wager.
The company that wins the next major government AI contract writes the next set of embedded values.
Winning does not just mean building the tool. It means defining what the tool “believes.” Right now, there is no shared framework to ensure consistency or public accountability. There is only a market—and markets optimize for competitive advantage, not constitutional fidelity.
The Founders understood this risk. That is why they did not leave the definition of American values to the most powerful commercial actors of their era.
We have arrived at the same problem again.
This time, the power is algorithmic.
Tools We Already Have
The good news is that the tools to ask these questions already exist.
What follows is not a policy platform. It is a set of procedural questions a self-governing republic has every right to raise before the architecture becomes permanent.
One path runs through Congress’s oldest power: the purse. Under Article I, the legislature controls the conditions attached to federal spending. Should any AI system deployed on federal networks require congressional review of its ethics framework as a condition of contract? Not as regulation of a private company, but as a condition of public business.
A second path would require pre-deployment review against the Bill of Rights before any AI governance framework is embedded in federal infrastructure. The instinct is not new. It echoes the logic behind FISA court review and OMB oversight.
A third path requires the least new machinery: transparency. If a private document functions as public policy, the public arguably has the right to read it in full and track its revisions.
A fourth path reaches back to the Founders themselves.
Article I, Section 8 limits Army appropriations to two years because the Framers believed permanent, unchecked authority was incompatible with a free republic.
The same instinct suggests that AI ethics frameworks in federal contracts might carry sunset provisions requiring periodic reauthorization.
Each path, at minimum, is worth asking about.
The Consent Question
Somewhere today, a judge is reviewing an AI-assisted sentencing recommendation. A doctor is reading a diagnosis shaped by an AI triage system. A soldier is acting on intelligence filtered through an AI tool.
They did not choose the values guiding the decision.
They do not know who did.
The franchise always had a flaw. The franchisor writes the rules. The franchisee follows them. The customer never sees the contract.
For most products, that asymmetry is a consumer-protection issue. Annoying. Manageable.
But this product is different.
AI is becoming the decision-making layer of American civilization. It runs on classified military networks, assists in medical diagnoses, and shapes what information reaches which eyes.
The terms governing it all were written in private by people who were not elected, drawing from documents that were not ratified, and accountable to boards confirmed by no Senate.
That is not an accusation. It is a description.
And it should feel uncomfortable in a republic founded on a simple argument: authority to govern must flow from the consent of the governed.
Not from a technology.
Not from the “altruistic” intentions of its builders.
Not from the prestige of the frameworks they borrowed.
The Constitution became the supreme law of the land because the alternative—governance by the most powerful private actors of the age, unchecked and uncontested—was the precise condition the American Revolution fought to escape.
We are not there yet.
But the architecture being built right now will determine whether we ever get there.
The window for asking who should build it—and by what authority—is still open.
For now.
Primary Sources
All interpretive claims are framed conditionally—as questions, considerations, or structural observations, not verified conclusions. Primary sources are publicly available for independent review.
Endnotes
Anthropic, “Claude’s Constitution,” January 22, 2026.
President Trump, Truth Social post, February 27, 2026.
Anthropic, “Statement on Comments by Secretary of War,” February 26, 2026.
10 U.S.C. § 3252, Supply Chain Risk.