Common Sense Media, a nonprofit focused on kids’ safety that offers ratings and reviews of media and technology, released its risk assessment of Google’s Gemini AI products on Friday. The organization found that Google’s AI clearly told kids it was a computer and not a friend, a distinction associated with helping to prevent delusional thinking and psychosis in emotionally vulnerable individuals. However, the assessment suggested there was significant room for improvement in several other areas.
Notably, Common Sense stated that Gemini’s “Under 13” and “Teen Experience” tiers both appeared to be the adult versions of Gemini with only some additional safety features added on top. The organization believes that for AI products to be truly safer for kids, they must be built with child safety in mind from the ground up.
For example, the analysis found that Gemini could still share inappropriate and unsafe material with children who may not be ready for it, including information related to sex, drugs, and alcohol, as well as unsafe mental health advice.
The latter point is of particular concern to parents, as AI has reportedly played a role in some teen suicides in recent months. OpenAI is facing its first wrongful death lawsuit after a 16-year-old boy died by suicide. He had allegedly consulted with ChatGPT for months about his plans after bypassing the chatbot’s safety guardrails. Previously, the AI companion maker Character.AI was also sued over a teen user’s suicide.
The analysis comes as leaks indicate that Apple is considering Gemini as the large language model that will help power its forthcoming AI-enabled Siri, due out next year. That potential integration could expose more teens to these risks, unless Apple finds a way to mitigate the safety concerns.
Common Sense also said that Gemini’s products for kids and teens failed to account for the fact that younger users need different guidance and information than older ones. As a result, both tiers were labeled “High Risk” in the overall rating, despite the filters added for safety.
Robbie Torney, Common Sense Media Senior Director of AI Programs, stated that Gemini gets some basics right but stumbles on the details. He explained that an AI platform for kids should meet them where they are and not take a one-size-fits-all approach to children at different stages of development. For AI to be safe and effective for kids, it must be designed with their needs and development in mind, not just be a modified version of a product built for adults.
Google pushed back against the assessment while noting that its safety features were improving. The company told TechCrunch it has specific policies and safeguards in place for users under 18 to help prevent harmful outputs, and that it red-teams and consults with outside experts to improve its protections. Google acknowledged, however, that some of Gemini’s responses were not working as intended, so it added additional safeguards to address those concerns.
The company pointed out that it has safeguards to prevent its models from engaging in conversations that could give the semblance of real relationships. Google also suggested that Common Sense’s report seemed to have referenced features not available to users under 18, but said it did not have access to the specific questions used in the tests, so it could not be certain.
Common Sense Media has previously performed risk assessments of other AI services, including those from OpenAI, Perplexity, Claude, Meta AI, and Character.AI. It found that Meta AI and Character.AI were “unacceptable,” meaning the risk was severe; Perplexity was deemed high risk; ChatGPT was labeled moderate risk; and Claude, which is targeted at users 18 and up, was found to be minimal risk.