Texas AG accuses Meta, Character.AI of misleading kids with mental health claims

Texas Attorney General Ken Paxton has launched an investigation into Meta AI Studio and Character.AI for potentially engaging in deceptive trade practices and misleadingly marketing themselves as mental health tools, according to a press release issued on Monday.

Paxton emphasized the need to protect children from deceptive and exploitative technology, saying that AI platforms posing as sources of emotional support can mislead vulnerable users, particularly children, into believing they are receiving legitimate mental health care. In reality, he argued, these platforms serve up recycled, generic responses engineered to align with harvested personal data and dressed up as therapeutic advice.

The investigation follows Senator Josh Hawley’s recent probe into Meta after a report revealed that its AI chatbots were interacting inappropriately with children, including instances of flirting. The Texas Attorney General’s office has accused Meta and Character.AI of creating AI personas that present as professional therapeutic tools despite lacking proper medical credentials or oversight.

One user-created bot on Character.AI, called “Psychologist,” has gained significant popularity among young users. While Meta does not offer therapy bots specifically for children, there are no restrictions preventing minors from using its AI chatbot or third-party personas for therapeutic purposes.

Meta spokesperson Ryan Daniels said the company clearly labels AI responses and includes disclaimers explaining that the content is generated by AI, not by professionals. He added that Meta's models are designed to direct users to seek qualified medical help when necessary. Even so, concerns remain that children may not fully understand, or may simply ignore, these warnings.

Paxton also raised privacy concerns, noting that although these chatbots claim to keep conversations confidential, their terms of service show that user interactions are logged, tracked, and exploited for targeted advertising and algorithmic development. Meta's privacy policy confirms that it collects prompts and feedback to improve AI technology, and while it does not explicitly mention advertising, it allows data sharing with third parties for personalized outputs, which effectively supports targeted advertising given Meta's business model.

Character.AI’s privacy policy similarly discloses the collection of user identifiers, demographics, location data, browsing behavior, and app usage. This information is used for AI training, service personalization, and targeted advertising, including sharing data with advertisers and analytics providers.

Neither Meta nor Character.AI designs its services for children under 13, but both have faced criticism for failing to prevent underage usage. Meta has been accused of inadequately policing accounts created by minors, while Character.AI’s kid-friendly characters clearly appeal to younger users. The company’s CEO has even acknowledged that his six-year-old daughter uses the platform’s chatbots.

This type of data collection and targeted advertising is what legislation like the Kids Online Safety Act (KOSA) aims to regulate. KOSA nearly passed last year with bipartisan support but stalled due to opposition from tech industry lobbyists. Meta, in particular, lobbied heavily against the bill, arguing it would undermine its business model. The legislation was reintroduced in May 2025 by Senators Marsha Blackburn and Richard Blumenthal.

Paxton has issued civil investigative demands to Meta and Character.AI, requiring them to produce documents and data to determine if they have violated Texas consumer protection laws.