
Two-thirds of Bank Staff Use Unapproved AI Tools

A recent study highlights the growing risk of 'shadow AI', where bank staff use unapproved technology. Research by AI vendor DeepL reveals that 65% of surveyed UK finance professionals admit to using unsanctioned AI tools for customer interactions, exposing their firms to cybersecurity and regulatory risks.

The study also finds that 70% of respondents believe AI has improved the speed and availability of customer support, and they expect it to become essential for cross-border banking. Currently, 37% of banking interactions are AI-powered, with multilingual communication the most common application, followed by chatbots and transaction monitoring for fraud.

However, the rise of shadow AI could impede technological development. Another study by Cybernews reports that 59% of US workers use unapproved AI tools, with executives and managers being the primary offenders.

According to DeepL, shadow IT typically arises when teams lack access to the tools they need, for example turning to general-purpose AI when a secure translation solution is required. To counter this, firms should ensure that customer-facing teams and IT departments collaborate on selecting appropriate technology.

In financial services, where interactions are highly regulated and reputational risk is significant, staff may seek workarounds if the tools they are given are inadequate. The real risk lies not in employees experimenting with AI, but in companies failing to provide secure, fit-for-purpose solutions.