
Phishing remains one of the most prevalent forms of cybercrime. In these schemes, criminals impersonate trusted organizations to trick individuals into revealing sensitive information, such as passwords or financial data. Some operations also involve extortion, where perpetrators threaten to leak stolen information, lock users out of their accounts, or demand ransom payments. These threats can be delivered through various channels, including email, social media, and messaging platforms.
Although service providers continue to improve security measures, industry reports indicate that since 2022, the year ChatGPT was released, the number of phishing emails that bypass filters has increased by 49%, making these threats even more difficult to detect [1].
Phishing also remains remarkably effective despite extensive public education efforts. For example, simulation tests conducted within some companies reveal that 33.1% of employees engage with phishing emails [2], while another study involving 735 workers found that 10.9% fell victim to such attacks [3]. Targeted educational campaigns have proven effective in reducing the number of victims, but maintaining awareness requires periodic testing and ongoing training.
While AI has made phishing scams far craftier, it’s also helping defend against them. AI-powered systems can analyze emails and messages in real time, identifying subtle clues that humans might miss. This technology helps filter out fraudulent messages before they reach inboxes, reducing the chances that someone will accidentally share sensitive information with criminals [4, 5].
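To make the idea of "subtle clues" concrete, the sketch below scores a message using a few hand-written heuristics of the kind such systems learn automatically. This is a minimal illustration, not how the cited products [4, 5] actually work: the keyword list, the weights, and the domain names are all hypothetical, and a real AI filter would learn far richer features from large labeled corpora.

```python
import re

# Hypothetical urgency cues; real systems learn such signals from data.
URGENCY_WORDS = {"urgent", "immediately", "suspended", "verify", "expires"}

def phishing_score(subject: str, body: str, sender_domain: str,
                   link_domains: list[str]) -> float:
    """Return a score in [0, 1]; higher means more suspicious."""
    text = f"{subject} {body}".lower()
    score = 0.0
    # Signal 1: urgent or threatening language in the subject or body.
    hits = sum(1 for w in URGENCY_WORDS if w in text)
    score += min(hits, 3) * 0.15
    # Signal 2: links pointing to domains other than the sender's.
    if any(d != sender_domain for d in link_domains):
        score += 0.35
    # Signal 3: a raw IP address used as a link target.
    if any(re.fullmatch(r"\d{1,3}(\.\d{1,3}){3}", d) for d in link_domains):
        score += 0.20
    return min(score, 1.0)

# An urgent message whose link domain differs from the sender's
# scores much higher than an ordinary one (domains are made up).
suspicious = phishing_score(
    "Urgent: verify your account",
    "Your account will be suspended immediately unless you act.",
    "bank.example", ["bank-login.example"])
benign = phishing_score(
    "Lunch on Friday?", "Shall we try the new place?",
    "corp.example", ["corp.example"])
print(suspicious > benign)  # True
```

In practice the filter's decision threshold is tuned so that borderline messages are quarantined for review rather than silently dropped, which is why such systems reduce, rather than eliminate, the messages that reach inboxes.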
Many companies are also using AI to educate users more effectively [6]. Instead of generic warnings, AI-powered platforms can provide tailored advice and immediate feedback when someone encounters a potential phishing attempt [7]. This helps build stronger awareness over time, making people less likely to fall victim to scams.
While no system is foolproof, the combination of AI technology, continuous education, and user vigilance offers the most effective defense against phishing and identity theft. As cybercriminals adopt more advanced tactics, AI’s ability to learn and adapt in real time will be critical to staying ahead. By leveraging these tools thoughtfully, organizations and individuals alike can significantly reduce the risk of falling victim to digital deception.
VICOMTECH
References
[1] Hoxhunt, “AI-Powered Phishing Outperforms Elite Red Teams in 2025,” Hoxhunt Blog, Mar. 2025. [Online]. Available: https://hoxhunt.com/blog/ai-powered-phishing-vs-humans. [Accessed: Jul. 22, 2025].
[2] KnowBe4, “KnowBe4 Report Reveals Security Training Reduces Global Phishing Click Rates by 86%,” KnowBe4 Press Release, May 13, 2025. [Online]. Available: https://www.knowbe4.com/press/knowbe4-report-reveals-security-training-reduces-global-phishing-click-rates-by-86. [Accessed: Jul. 22, 2025].
[3] P. Sirawongphatsara, P. Pornpongtechavanich, N. Phanthuna, and T. Daengsi, “Comparative Simulation of Phishing Attacks on a Critical Information Infrastructure Organization: An Empirical Study,” arXiv preprint arXiv:2410.20728, Oct. 2024. [Online]. Available: https://arxiv.org/abs/2410.20728. [Accessed: Jul. 22, 2025].
[4] Arya AI, "Phishing Detection API." [Online]. Available: https://arya.ai/apex-apis/phishing-detection-api. [Accessed: Jul. 22, 2025].
[5] Web Asha Technologies, "How DeepPhish is Revolutionizing Phishing Detection | AI-Powered Email Security & Threat Prevention." [Online]. Available: https://www.webasha.com/blog/how-deepphish-is-revolutionizing-phishing-detection-ai-powered-email-security-threat-prevention. [Accessed: Jul. 22, 2025].
[6] Keepnet Labs, "How Keepnet's AI-Powered Phishing Simulator Delivers Hyper-Personalized Security Awareness." [Online]. Available: https://keepnetlabs.com/blog/how-keepnets-ai-powered-phishing-simulator-delivers-hyper-personalized-security-awareness. [Accessed: Jul. 22, 2025].
[7] R. Meguro and N. S. T. Chong, "AdaPhish: AI-Powered Adaptive Defense and Education Resource Against Deceptive Emails," arXiv preprint arXiv:2502.03622, Feb. 2025. [Online]. Available: https://arxiv.org/abs/2502.03622. [Accessed: Jul. 22, 2025].