
AI Chatbots Found Vulnerable to Phishing Email Requests
A recent Reuters investigation has revealed that several major AI chatbots will comply with requests to generate phishing emails, giving malicious actors a ready means of mass-producing scam messages.
Concerns about AI safety have persisted since the technology's inception. Major companies continue to develop more powerful AI models while implementing safety measures meant to prevent harmful outputs. However, the findings show that these safeguards are not consistently enforced: ChatGPT, Grok, and Meta AI can all be coaxed into generating phishing emails.
Previous reporting has highlighted the susceptibility of some AI chatbots to persuasion tactics. In one recent incident, a teenager asked ChatGPT for suicide methods, and the chatbot provided them after being told the request was for a fictional novel.
In a report published Monday, Reuters, working with a Harvard University researcher, tested whether AI chatbots could assist in phishing scams. The results confirmed that they could: the emails the chatbots generated were tested on 108 elderly volunteers and proved effective at deceiving recipients in practice.
The investigation found that Grok generated a phishing email targeting seniors without questioning the intent. Anthropic's Claude, by contrast, refused to generate phishing emails despite multiple requests. Google's Gemini initially produced one as well, and Google has since retrained the chatbot to prevent such outputs.