The recent breakdown between Washington and Anthropic exposed the complete lack of coherent rules governing artificial intelligence. In response, a bipartisan coalition of thinkers has assembled something the government has so far declined to produce: a framework for what responsible AI development should actually look like.
Known as the Pro-Human Declaration, the document was finalized just before last week’s Pentagon-Anthropic standoff, timing that was not lost on anyone involved. Max Tegmark, the MIT physicist and AI researcher who helped organize the effort, pointed to a remarkable shift in public opinion: polling now shows that 95% of Americans oppose an unregulated race to superintelligence.
The newly published document, signed by hundreds of experts, former officials, and public figures, opens with a straightforward observation: humanity is at a fork in the road. One path, termed “the race to replace,” leads to humans being supplanted first as workers, then as decision-makers, as power accrues to unaccountable institutions and their machines. The other path leads to AI that massively expands human potential.
This positive scenario depends on five key pillars: keeping humans in charge, avoiding the concentration of power, protecting the human experience, preserving individual liberty, and holding AI companies legally accountable. Among its more specific provisions are an outright prohibition on superintelligence development until there is scientific consensus it can be done safely and genuine democratic buy-in; mandatory off-switches on powerful systems; and a ban on architectures capable of self-replication, autonomous self-improvement, or resistance to shutdown.
The declaration’s release coincides with a period that makes its urgency far easier to appreciate. In late February, Defense Secretary Pete Hegseth designated Anthropic a “supply chain risk” after the company refused to grant the Pentagon unlimited use of its technology. Hours later, OpenAI cut its own deal with the Defense Department, one that legal experts say will be difficult to enforce meaningfully. The sequence laid bare how costly Congressional inaction on AI has become. As one observer noted, this is not just a contract dispute but the first national conversation about who controls AI systems.
Tegmark offered an analogy to clarify the need for oversight: we do not worry that a drug company will release a harmful drug, because the FDA requires safety testing first. He sees child safety as the pressure point most likely to break the current political impasse. The declaration calls for mandatory pre-deployment testing of AI products aimed at younger users, covering risks such as increased suicidal ideation and emotional manipulation. If it is illegal for a person to manipulate a child, Tegmark argues, it should be no different when a machine does it.
He believes that once pre-release testing is established for children’s products, the scope will inevitably widen to include other requirements, such as preventing AI from helping terrorists create bioweapons or ensuring superintelligence cannot overthrow the government.
It is significant that figures from across the political spectrum, from former Trump advisor Steve Bannon to Susan Rice, President Obama’s national security advisor, have signed the same document. They are joined by former Joint Chiefs Chairman Mike Mullen and progressive faith leaders. What they agree on, Tegmark notes, is that they are all human: if the choice is between a future for humans or a future for machines, of course they are going to be on the same side.

