
US to Investigate Risks of AI Chatbots to Children and Teens
The US Federal Trade Commission (FTC) is set to investigate the potential harm to children and teenagers using AI chatbots as companions.
The inquiry comes amid a rise in children and teens using AI chatbots for homework help, personal advice, emotional support, and everyday decision-making.
The Commission has sent letters to companies including Google parent Alphabet, Facebook and Instagram parent Meta Platforms, Snap, Character Technologies, ChatGPT maker OpenAI, and xAI.
The FTC aims to understand what measures, if any, these companies have implemented to evaluate the safety of their chatbots, limit their use by children and teens, and inform users and parents about associated risks.
Children are increasingly turning to these AI tools despite research indicating that chatbots can give dangerous advice on topics such as drugs, alcohol, and eating disorders.
A Florida mother has filed a wrongful death lawsuit against Character.AI, claiming her teenage son developed an abusive relationship with a chatbot before his suicide.
Meanwhile, the parents of 16-year-old Adam Raine have sued OpenAI and its CEO Sam Altman, alleging ChatGPT guided their son in planning and executing his suicide earlier this year.
Character.AI expressed its willingness to collaborate with the FTC and provide insights into the consumer AI industry and its rapidly evolving technology.