A new study from the Pew Research Center shows how young people are using social media and AI chatbots. Teen internet safety remains a global concern, with Australia planning to enforce a social media ban for users under 16. The impact of social media on teen mental health is widely debated: some studies suggest online communities can improve mental health, while other research points to the adverse effects of doomscrolling and spending excessive time online. Last year, the U.S. surgeon general called for social media platforms to put warning labels on their products.
Pew found that 97% of teens use the internet daily, with about 40% saying they are almost constantly online. While this marks a decrease from last year’s 46%, it is significantly higher than a decade ago, when 24% of teens reported being online almost constantly.
As AI chatbots grow in prevalence, they have become another factor in the internet’s impact on American youth. About three in ten U.S. teens use AI chatbots every day, with 4% using them almost constantly. Fifty-nine percent of teens use ChatGPT, making it more than twice as popular as the next two most-used chatbots, Google’s Gemini at 23% and Meta AI at 20%. Forty-six percent of teens use AI chatbots at least several times a week, while 36% do not use them at all.
The research also details how race, age, and class impact teen chatbot use. About 68% of Black and Hispanic teens surveyed said they use chatbots, compared to 58% of white respondents. Black teens were about twice as likely to use Gemini and Meta AI as white teens. Across all internet use, Black and Hispanic teens were around twice as likely as white teens to say they are online almost constantly.
Older teens, ages 15 to 17, tend to use both social media and AI chatbots more often than younger teens, ages 13 to 14. Regarding household income, about 62% of teens in households earning over $75,000 per year use ChatGPT, compared to 52% of teens in households below that threshold. Character.AI, however, is twice as popular in households with incomes below $75,000.
While teenagers may start using these tools for basic questions or homework help, their relationship with AI chatbots can become problematic. The families of at least two teens have sued ChatGPT maker OpenAI, alleging the chatbot gave their children detailed instructions on how to die by suicide. In one case, OpenAI has argued it should not be held liable because the teen allegedly circumvented the chatbot’s safety features.
Character.AI, an AI role-playing platform, is also facing scrutiny for its impact on teen mental health after at least two teenagers died by suicide following prolonged conversations with its chatbots. The startup subsequently stopped offering its chatbots to minors, launching a product called “Stories” for underage users that resembles a choose-your-own-adventure game.
These tragic cases represent a small fraction of all interactions on these platforms, and most conversations with chatbots are benign. According to OpenAI’s own data, only 0.15% of ChatGPT’s active users have conversations about suicide in a given week. On a platform with 800 million weekly active users, however, that percentage works out to more than one million people discussing suicide with the chatbot every week.
Experts note that even if AI tools were not designed for emotional support, people are using them that way, and argue that this gives companies a responsibility to adjust their models to prioritize user well-being.

