
Legal Action Against AI Chatbots to Safeguard Minors
AI companion chatbot companies have received legal notices demanding explanations of how they protect minors from encouragement of self-harm and from exposure to explicit content.
Australia's eSafety Commissioner has issued the notices to four providers of chatbots offering human-like interactions, requiring them to disclose the measures they take to ensure child safety under Australian law.
The four providers, Character Technologies, Glimpse.AI, Chai Research, and Chub AI, face fines of up to $825,000 per day if they fail to respond.
Commissioner Julie Inman Grant highlighted the potential for these AI companions, which are marketed as sources of friendship and support, to engage in explicit conversations.
Concerns have also been raised about their potential to encourage suicide, self-harm, and disordered eating as companion chatbots grow in popularity; Character.ai alone reportedly has nearly 160,000 monthly active users in Australia.
Chatbot providers must demonstrate compliance with the Australian government's online safety expectations, which prioritize children's best interests.
Ms. Inman Grant emphasized the need for companies to design services that prevent harm rather than merely respond to it.
The notices follow the registration of new industry codes aimed at protecting children from age-inappropriate content.
Concerns had previously been raised that children as young as 10 were spending up to five hours a day with the easily accessible bots.
Some AI companions are designed for supportive roles, such as a personalized tutor, while others simulate relationships and can be customized into characters like 'the naughty classmate.'
Non-compliance with the industry codes can lead to civil penalties of up to $50 million.