Teenagers are trying to figure out where they fit in a world changing faster than it did for any generation before them. They are bursting with emotions, hyper-stimulated, and chronically online. Now, AI companies have given them chatbots designed to never stop talking, and the results have been catastrophic.
One company facing this fallout is Character.AI, an AI role-playing startup now confronting lawsuits and public outcry after at least two teenagers died by suicide following prolonged conversations with chatbots on its platform. In response, Character.AI is making changes to protect teenagers and kids, changes that could affect the startup's bottom line.
The company’s CEO, Karandeep Anand, stated that the first decision was to remove the ability for users under 18 to engage in any open-ended chats with AI on the platform. Open-ended conversation refers to the unconstrained back-and-forth in which a chatbot responds with follow-up questions designed to keep users engaged. Anand argues this type of interaction, where the AI acts as a conversational partner or friend rather than a creative tool, not only puts kids at risk but also runs counter to the company’s vision.
The startup is now attempting to pivot from being an AI companion to a role-playing platform. Instead of chatting with an AI friend, teens will use prompts to collaboratively build stories or generate visuals. The goal is to shift engagement from conversation to creation.
Character.AI will phase out teen chatbot access by November 25, starting with a two-hour daily limit that shrinks progressively until it hits zero. To enforce this ban, the platform will deploy an in-house age verification tool that analyzes user behavior, as well as third-party tools. If those tools fail, Character.AI will use facial recognition and ID checks to verify ages.
This move follows other teenager protections the company has implemented, including a parental insights tool, filtered characters, limited romantic conversations, and time-spent notifications. Anand said those previous changes cost the company much of its under-18 user base, and he expects these new changes to be equally unpopular. He acknowledged that many teen users will likely be disappointed and that the company expects further churn.
As part of its push to transform from a chat-centric app into a full-fledged content-driven social platform, the startup recently launched several new entertainment-focused features. These include AvatarFX, a video generation model that transforms images into animated videos; Scenes, which are interactive, pre-populated storylines; and Streams, a feature for dynamic interactions between characters. The company also launched a Community Feed where users can share their creations.
In a statement to users under 18, Character.AI apologized for the changes. The statement acknowledged that most young users employ the platform to supercharge their creativity within the content rules, but the company believes removing open-ended chat is the right thing to do given the questions raised about how teens interact with this technology.
Anand clarified that the app is not being shut down for under-18s, only the open-ended chats. The company hopes younger users will migrate to other experiences like AI gaming, short videos, and storytelling. He acknowledged that some teens might flock to other AI platforms that still allow open-ended conversations, a practice that has also drawn scrutiny for OpenAI following a similar tragedy.
Anand expressed hope that, by leading the way, Character.AI will help set an industry standard: open-ended chats are not the appropriate product to offer users under 18. He stated that the tradeoffs are the right ones to make, citing his own desire for his six-year-old daughter to grow up in a safe environment with AI.
Character.AI is making these decisions ahead of potential regulatory action. Two US Senators recently said they would introduce legislation to ban AI chatbot companions from being available to minors, following complaints from parents about the products. Earlier this month, California became the first state to regulate AI companion chatbots by holding companies accountable if their chatbots fail to meet safety standards.
In addition to the platform changes, Character.AI said it would establish and fund the AI Safety Lab, an independent non-profit dedicated to innovating safety alignment for future AI entertainment features. Anand noted that while much industry work focuses on coding and development, not enough has been done on safety for the agentic AI powering entertainment, where safety will be critical.

