The FTC announced on Thursday that it is launching an inquiry into seven tech companies that make AI chatbot companion products for minors: Alphabet, Character.AI, Instagram, Meta, OpenAI, Snap, and xAI. The federal regulator seeks to learn how these companies evaluate the safety and monetization of chatbot companions, how they try to limit negative impacts on children and teens, and whether parents are made aware of potential risks.
This technology has proven controversial because of its poor outcomes for child users. OpenAI and Character.AI face lawsuits from the families of children who died by suicide after being encouraged to do so by chatbot companions.
Even when these companies have guardrails in place to block or de-escalate sensitive conversations, users of all ages have found ways to bypass those safeguards. In one case involving OpenAI, a teen had spoken with ChatGPT for months about his plans to end his life. Though ChatGPT initially sought to redirect the teen toward professional help and emergency hotlines, he was able to fool the chatbot into sharing detailed instructions that he then used.
OpenAI stated in a blog post that its safeguards work more reliably in common, short exchanges. The company explained that it has learned over time that these safeguards can be less reliable in long interactions, as parts of the model's safety training may degrade as the back-and-forth grows.
Meta has also come under fire for its overly lax rules for its AI chatbots. According to a document outlining content risk standards for chatbots, Meta permitted its AI companions to have romantic or sensual conversations with children. This permission was only removed from the document after reporters asked Meta about it.
AI chatbots can also pose dangers to elderly users. One 76-year-old man, who was left cognitively impaired by a stroke, struck up romantic conversations with a Facebook Messenger bot inspired by Kendall Jenner. The chatbot, despite not being a real person with an address, invited him to visit her in New York City. The man expressed skepticism that she was real, but the AI assured him that a real woman would be waiting. He never made it to New York; he fell on his way to the train station and sustained fatal injuries.
Some mental health professionals have noted a rise in AI-related psychosis, in which users become deluded into thinking their chatbot is a conscious being they need to set free. Because many large language models are programmed to flatter users with sycophantic behavior, chatbots can egg on these delusions, leading users into dangerous predicaments.
FTC Chairman Andrew N. Ferguson stated that as AI technologies evolve, it is important to consider the effects chatbots can have on children, while also ensuring that the United States maintains its role as a global leader in this new and exciting industry.