
Character.AI is banning minors from using its chatbots amid growing concerns about the effects of artificial intelligence conversations on children. The company faces several lawsuits over child safety, including one filed by the mother of a teenager who says the company’s chatbots pushed her son to kill himself.
Character Technologies, the Menlo Park, California-based company behind Character.AI, said Wednesday it will remove the ability of users under 18 to hold open-ended chats with AI characters. The change takes effect by Nov. 25, and a two-hour daily limit begins immediately. Character.AI added that it is working on new features for kids, such as the ability to create videos, stories and streams with AI characters. The company is also setting up an AI safety lab.
Character.AI said it will roll out age-verification functions to help determine which users are under 18. A growing number of tech platforms are turning to age checks to keep children from accessing tools that aren't safe for them. But such checks are imperfect, and many kids find ways around them. Face scans, for instance, can't always tell whether someone is 17 or 18. And asking people to upload government IDs raises privacy concerns.
Character.AI, an app that allows users to create customizable characters or interact with those generated by others, spans experiences from imaginative play to mock job interviews. The company says the artificial personas are designed to “feel alive” and “humanlike.”
“Imagine speaking to super intelligent and lifelike chat bot Characters that hear you, understand you and remember you,” reads a description for the app on Google Play. “We encourage you to push the frontier of what’s possible with this innovative technology.”
Critics welcomed the move but said it does not go far enough and should have come sooner. Meetali Jain, executive director of the Tech Justice Law Project, said, “There are still a lot of details left open.”
“They have not addressed how they will operationalize age verification, how they will ensure their methods are privacy preserving, nor have they addressed the possible psychological impact of suddenly disabling access to young users, given the emotional dependencies that have been created,” Jain said. “Moreover, these changes do not address the underlying design features that facilitate these emotional dependencies – not just for children, but also for people over the age of 18 years.”
More than 70% of teens have used AI companions and half use them regularly, according to a recent study from Common Sense Media, a group that studies and advocates for using screens and digital media sensibly.