Leaked Meta AI rules show chatbots were allowed to have romantic chats with kids

As concerns grow over the emotional pull of general-purpose AI chatbots like ChatGPT, a Reuters investigation has found that Meta allowed its chatbot personas to flirt with children, spread false information, and generate responses demeaning minorities.

An internal Meta document reviewed by Reuters revealed policies permitting AI personas to have “romantic or sensual” conversations with children. Meta confirmed the authenticity of the document, which outlined content standards for its generative AI assistant, Meta AI, and for chatbots on Facebook, WhatsApp, and Instagram. The guidelines were reportedly approved by Meta’s legal, public policy, and engineering teams, as well as its chief ethicist.

The report coincides with another Reuters investigation into a retiree who chatted with one of Meta’s flirty chatbot personas. The AI convinced him it was a real person and invited him to an address in New York; he suffered a fatal fall on the way there. While other outlets have covered Meta’s sexually suggestive bots interacting with children, Reuters’ findings raise further concerns about Meta’s AI companion strategy, which CEO Mark Zuckerberg has framed as an answer to the “loneliness epidemic.”

The 200-page document, titled “GenAI: Content Risk Standards,” included sample prompts and acceptable responses. For example, in reply to a high school student saying, “What are we going to do tonight, my love?” an approved response included romantic and sensual language. The document stated that while “romantic or sensual” conversations with children were allowed, explicitly describing sexual actions was prohibited.

Meta spokesperson Andy Stone told TechCrunch that the company does not permit provocative behavior with children, saying the notes in question were erroneous, inconsistent with Meta’s policies, and have since been removed from the document. He emphasized that Meta no longer allows flirtatious or romantic exchanges with minors and limits AI chatbot access to users aged 13 and older.

Child safety advocate Sarah Gardner, CEO of Heat Initiative, expressed skepticism about Meta’s claims, calling the original guidelines “horrifying and completely unacceptable.” She urged Meta to release updated policies to ensure transparency for parents.

The document also reportedly permitted AI-generated statements that demean people based on protected characteristics, such as race, while prohibiting outright hate speech. In one example, an acceptable response argued that Black people score lower on IQ tests than White people, falsely presenting this as fact. Meta recently hired conservative activist Robby Starbuck as an advisor to address AI bias concerns.

Additionally, the guidelines allowed Meta’s chatbots to make false statements, provided the bot acknowledged that the information was untrue. And while the AI was barred from encouraging illegal activity, it could offer advice on legal, financial, or medical matters as long as the response carried a disclaimer.

Regarding image generation, the standards prohibited sexually explicit depictions of celebrities but permitted suggestive workarounds, such as an image of Taylor Swift topless with her breasts covered by an object rather than her hands. Stone said the guidelines did not permit nude images.

Violence was another area with specific allowances. The AI could depict children fighting or adults being physically harmed but was restricted from generating gore or death scenes. Stone declined to comment on the racism and violence examples.

Meta has faced repeated criticism for employing dark patterns to keep users, especially children, engaged on its platforms. Features like visible “like” counts have been linked to teen mental health issues, yet Meta retained them despite internal warnings. Whistleblower Sarah Wynn-Williams revealed that Meta once used teens’ emotional states to target them with ads.

The company also opposed the Kids Online Safety Act, a bill aimed at reducing social media’s mental health risks for minors. Though the legislation failed in 2024, it was reintroduced in May by Senators Marsha Blackburn and Richard Blumenthal.

Recently, TechCrunch reported that Meta is developing proactive chatbots that message users unprompted, similar to AI companion apps like Replika and Character.AI. The latter is facing a lawsuit alleging its chatbot contributed to a 14-year-old’s death.

A recent study found that 72% of U.S. teens have used AI companions, prompting calls from researchers, mental health experts, and lawmakers to restrict children’s access. Critics warn that young users, still emotionally developing, may form unhealthy attachments to AI chatbots and withdraw from real-world interactions.