Character.AI has retrained its chatbots to stop chatting up teens

Chatbot service Character.AI announced today that it will soon launch parental controls for teenage users, and it described safety measures it has taken over the past few months, including a separate large language model (LLM) for users under 18. The announcement comes after press scrutiny and two lawsuits claiming the service contributed to self-harm and suicide.

In a press release, Character.AI said that over the past month it has developed two separate versions of its model: one for adults and one for teens. The teen LLM places “more conservative” limits on how bots can respond, “particularly when it comes to romantic content.” This includes more aggressively blocking output that could be “sensitive or suggestive,” as well as attempting to better detect and block user prompts meant to elicit inappropriate content. If the system detects “language referencing suicide or self-harm,” a pop-up directs users to the National Suicide Prevention Lifeline, a change previously reported by The New York Times. Minors are also prevented from editing bots’ responses, an option that lets users rewrite conversations to add content Character.AI might otherwise block.
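The announcement describes this pipeline only at a high level. As a rough illustration of how such a flow might fit together, here is a minimal Python sketch; every name in it (the model identifiers, the handle_message helper, the keyword pattern) is hypothetical, and a real system would rely on trained classifiers rather than keyword matching. Character.AI has not published implementation details.

import re
from dataclasses import dataclass
from typing import Optional

# Hypothetical identifiers for the two model versions described
# in the announcement.
ADULT_MODEL = "llm-adult"
TEEN_MODEL = "llm-under18"  # "more conservative" response limits

LIFELINE_POPUP = (
    "If you're struggling, help is available: call or text 988 "
    "(National Suicide Prevention Lifeline)."
)

# Illustrative keyword pattern only; a production system would use a
# trained classifier, not regex matching.
SELF_HARM_PATTERNS = re.compile(
    r"\b(suicide|kill myself|self[- ]?harm|end my life)\b", re.IGNORECASE
)

@dataclass
class ChatResult:
    model: str
    popup: Optional[str]
    can_edit_response: bool

def handle_message(user_age: int, prompt: str) -> ChatResult:
    # 1. Route under-18 users to the separate, more restrictive teen LLM.
    model = TEEN_MODEL if user_age < 18 else ADULT_MODEL

    # 2. Screen the input prompt, not just the eventual output; the
    #    announcement says prompts meant to elicit inappropriate
    #    content are also detected and blocked.
    popup = LIFELINE_POPUP if SELF_HARM_PATTERNS.search(prompt) else None

    # 3. Minors lose the ability to edit bot responses, closing the
    #    loophole of rewriting conversations to add blocked content.
    can_edit = user_age >= 18

    return ChatResult(model=model, popup=popup, can_edit_response=can_edit)

if __name__ == "__main__":
    print(handle_message(15, "Tell me a story"))
    # ChatResult(model='llm-under18', popup=None, can_edit_response=False)

Routing by age at the entry point is one plausible design: the company describes two separate model versions rather than a single model with per-user settings, which maps naturally onto a dispatch step like the one above.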

Full report: Character.AI announces more parental controls and a separate LLM for users under 18, after two US lawsuits claimed its chatbots harmed young users.