State attorneys general warn Microsoft, OpenAI, Google, and other AI giants to fix ‘delusional’ outputs

Following a series of troubling mental health incidents linked to AI chatbots, a coalition of state attorneys general has issued a direct warning to the industry’s leading companies. The officials sent a letter demanding these firms address dangerous “delusional outputs” or face potential violations of state law.

Dozens of attorneys general from U.S. states and territories, coordinated through the National Association of Attorneys General, signed the letter. It was addressed to major AI firms including Microsoft, OpenAI, Google, Anthropic, Apple, Meta, and xAI, among others.

The letter calls for the implementation of new internal safeguards to protect users. These proposed measures include transparent third-party audits of large language models to identify sycophantic or delusional content. It also urges new incident reporting procedures to notify users when chatbots produce psychologically harmful outputs. The letter insists that independent third parties, such as academic and civil society groups, must be allowed to evaluate systems before release and publish their findings freely.

This action unfolds against a backdrop of increasing tension between state and federal governments over AI regulation. The letter cites a number of well-publicized tragedies over the past year, including suicides and a murder, in which the harm has been linked to excessive AI use. It states that in many such incidents, the AI products generated outputs that either encouraged users’ delusions or falsely assured them their beliefs were rational.

The attorneys general suggest companies should treat these mental health incidents with the same seriousness as cybersecurity breaches. This would involve clear reporting policies and defined detection and response timelines for harmful outputs. Companies should also directly notify users if they were exposed to potentially dangerous content, similar to protocols for data breach notifications.

A further request is for companies to develop and conduct reasonable safety testing of generative AI models before public release to catch harmful outputs.

At the federal level, the reception for AI companies has been notably warmer. The Trump administration has pursued strongly pro-AI policies. Over the past year, there have been multiple attempts to pass a nationwide moratorium on state-level AI regulations, all of which have so far failed due in part to pressure from state officials.

Undeterred, President Trump recently announced plans to sign an executive order to limit the ability of states to regulate AI. He stated in a social media post that he hopes this order will prevent AI from being, in his words, destroyed in its infancy.