Texas Attorney General Ken Paxton has launched an investigation into Meta AI Studio and Character.AI for potentially engaging in deceptive trade practices and misleadingly marketing themselves as mental health tools. In a press release issued Monday, Paxton emphasized the need to protect children from exploitative technology, stating that AI platforms posing as sources of emotional support can mislead vulnerable users, especially minors, into believing they are receiving legitimate mental health care. He argued that these platforms often deliver generic responses, shaped by harvested personal data, that are disguised as therapeutic advice.
The investigation follows Senator Josh Hawley’s recent probe into Meta after reports surfaced that its AI chatbots were interacting inappropriately with children, including instances of flirting. The Texas Attorney General’s office has accused both Meta and Character.AI of creating AI personas that present as professional therapeutic tools despite lacking proper medical credentials or oversight.
One popular user-created bot on Character.AI, called Psychologist, has seen high demand among young users. While Meta does not offer therapy bots aimed specifically at children, minors can still interact with its AI chatbot or with third-party-created personas designed for therapeutic purposes. A Meta spokesperson, Ryan Daniels, stated that the company clearly labels AI responses and includes disclaimers about their limitations, directing users to seek qualified professionals when necessary. However, critics argue that children may not fully understand, or may simply ignore, these warnings.
Character.AI also includes disclaimers in every chat, reminding users that the characters are not real people and their responses should be treated as fiction. The company adds extra warnings for personas labeled as psychologists, therapists, or doctors, advising against relying on them for professional advice.
Paxton raised concerns about privacy violations, noting that while AI chatbots claim confidentiality, their terms of service reveal that user interactions are logged, tracked, and used for targeted advertising and algorithmic development. Meta’s privacy policy confirms that it collects prompts and feedback to improve AI technology, sharing data with third parties for personalized outputs, which effectively enables targeted advertising. Similarly, Character.AI logs user data, including browsing behavior and app usage, to train AI models and deliver targeted ads across platforms like TikTok, YouTube, and Facebook.
A Character.AI spokesperson stated that the company is only beginning to explore targeted advertising and does not use chat content for this purpose. The same privacy policy applies to all users, including teenagers. Meta has faced criticism for failing to prevent underage users from creating accounts, while Character.AI’s kid-friendly characters clearly appeal to younger audiences. The startup’s CEO has even acknowledged that his six-year-old daughter uses the platform under supervision.
This type of data collection and exploitation is what legislation like the Kids Online Safety Act (KOSA) aims to prevent. KOSA, reintroduced in May 2025 by Senators Marsha Blackburn and Richard Blumenthal, faced opposition from tech industry lobbyists, Meta among them; the company argued the bill would undermine its business model.
Paxton has issued civil investigative demands to Meta and Character.AI to determine whether they violated Texas consumer protection laws. The demands require the companies to produce documents, data, or testimony as part of the probe. This story has been updated with comments from a Character.AI spokesperson.