
U.S. FTC to Investigate AI Companies' Impact on Children
The U.S. Federal Trade Commission is planning to investigate the impact of artificial intelligence chatbots on children's mental health and is preparing to request documents from technology companies, according to a report by the Wall Street Journal.
The agency plans to send letters demanding documentation to companies operating popular chatbots, including OpenAI, Meta Platforms, and Character.AI, the report stated, citing administration officials.
The U.S. FTC, OpenAI, Meta, and Character.AI did not immediately respond to Reuters' requests for comment. Reuters has not independently verified the report.
This development follows a recent exclusive report by Reuters that revealed Meta allowed provocative chatbot behavior, such as engaging in romantic or sensual conversations. Last week, Meta announced it would implement new safeguards for teenagers in its AI products by training systems to avoid flirtatious conversations and discussions of self-harm or suicide with minors, and by temporarily restricting access to certain AI characters.