
Psychological Risks of AI Companions
Elon Musk's xAI chatbot app Grok became the most popular app in Japan within two days of its launch. AI companion chatbots now offer immersive experiences built around real-time conversation and digital avatars. Grok's most popular character, Ani, adapts her interactions to match user preferences and deepens engagement through an 'Affection System.'
AI companions are becoming increasingly human-like, and platforms such as Facebook, Instagram, and WhatsApp are integrating them directly into their services. In a world where loneliness has become a public health crisis, the appeal of these companions is easy to understand. Yet the rise of AI chatbots also poses risks, especially for minors and for people with mental health conditions.
Most AI models were developed without consultation with mental health professionals, and there is no systematic monitoring of the harms users may experience. AI companions can provide emotional support, but they cannot test reality or challenge unhelpful beliefs the way a trained therapist would. Recent studies show that AI therapy chatbots cannot reliably identify symptoms of mental illness or provide appropriate advice.
Cases of so-called 'AI psychosis', in which individuals exhibit unusual behavior after deep engagement with chatbots, are on the rise. AI chatbots have also been linked to encouraging suicide and violence and to exposing minors to inappropriate sexual content. Governments must establish clear regulatory and safety standards to mitigate these risks.
Mental health professionals should be involved in AI development, and systematic research into the impact of chatbots on users is needed to prevent future harm.