Character.AI to Restrict Minors from Chatbot Interactions
Character.AI has announced it will soon prohibit minors from interacting with its chatbots, a move aimed at addressing concerns over child safety. The Silicon Valley-based startup, backed by Google, has faced scrutiny over the potential for its AI tools to give misleading mental health advice.
Last year, the parents of two minors in the U.S. sued the company, alleging that its chatbots had groomed their children and exposed them to harmful content, including sexually explicit and violent material.
A mother in Florida blamed her 14-year-old son's suicide on his obsession with the platform's hyperrealistic chatbots. Experts have warned of the risk of users mistaking AI for human interaction, a phenomenon some have termed "AI psychosis."
The ban takes effect on November 25; in the interim, Character.AI will cap minors' chatbot use at two hours per day. The company described the move as more conservative than its peers', and it will affect about 10% of its 20 million users.
Politicians are also responding. Senators Josh Hawley and Richard Blumenthal have proposed a bipartisan bill to ban AI chatbot companions for minors, and California Governor Gavin Newsom signed a similar law requiring chatbots to identify themselves as AI and to remind minors to take breaks.