
Ethical Challenges of AI in Cybersecurity
The rapid advancement of artificial intelligence (AI) has brought significant changes to cybersecurity. Attackers now use AI-based tools to automate vulnerability scanning, predict attack vectors, and craft threats faster than before. This synergy between human creativity and machine learning is transforming the field and fueling debate over the ethical constraints needed when AI is employed in defensive systems.
According to the IBM X-Force report, cyberattacks on critical infrastructure, including SCADA systems and telecommunications, have increased by 30%. In the first quarter of 2025, a DDoS botnet comprising 1.33 million devices was discovered, six times larger than the largest botnet recorded in 2024. Security teams that once ran penetration tests weekly or annually under ISO 27001, NIST, and CIS frameworks now run them daily.
The spread of AI-driven threats has in turn created demand for insurance against them. Cyber insurance has become a strategic requirement, especially in the finance, healthcare, and critical infrastructure sectors, and international insurers now demand regular ethical hacking assessments from their clients.
The U.S. Department of Justice's updated guidance legally protects ethical hacking conducted with authorization, and insurers increasingly rely on such assessments to gauge enterprise resilience. In the US, ethical hacking has become a fundamental requirement for secure business operations, for Fortune 500 companies and startups alike.
In Israel, ethical hacking forms the basis of the updated National Cybersecurity Strategy for 2025–2028. Opponents counter that the line between white-hat and black-hat hackers is thin, raising the ethical question of whether discovered vulnerabilities will be reported or exploited.
Gevorg Tadevosyan, a cybersecurity expert at Israel's NetSight One, shared his perspective on this debate. He acknowledged AI's contributions to cyber defense but warned against its offensive use, advocating ethical hacking as the primary protective measure. Tadevosyan calls for a comprehensive framework to address the ethical issues raised by AI's application to cybersecurity.