Phishing Surge: AI Attacks Outpace Human Red Teams

According to a new Hoxhunt report, as of March 2025 artificial intelligence has for the first time demonstrably outperformed top cybersecurity professionals at social engineering.
This revelation stems from a multi-year experiment launched in 2023, in which a specially trained AI agent crafted phishing emails that ultimately proved more effective at deceiving users than scenarios designed by seasoned human red teams. Over the course of the experiment, the balance of effectiveness swung by 55 percentage points in the AI's favor, reshaping the very nature of the fight against phishing.
While two years ago AI lagged behind humans by nearly a third, by November 2024 that margin had narrowed to just 10%. And in March 2025, the AI unexpectedly surpassed human experts across all critical benchmarks. The newly generated attack scenarios became not only more convincing but also highly personalized—factoring in the target’s country, job title, and behavioral patterns. This leap in sophistication was made possible by powerful language models and iterative algorithmic refinement.
The testing program comprised two tracks: generating phishing emails from scratch and enhancing human-designed attacks. Effectiveness was assessed using three outcomes—whether the user recognized and reported the attack, failed to notice it, or fell victim by clicking the link. The primary metric was failure rate: the percentage of users who clicked on the malicious link.
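To make the metric concrete, here is a minimal sketch in Python (the outcome labels and data are illustrative, not from the Hoxhunt report): the failure rate is simply the share of simulated recipients whose outcome was a click.

```python
from collections import Counter

# The report describes three outcomes per simulation: the user reports the
# email, misses it, or clicks the link. These labels are hypothetical.
REPORTED, MISSED, CLICKED = "reported", "missed", "clicked"

def failure_rate(outcomes: list[str]) -> float:
    """Share of users who clicked the simulated phishing link."""
    counts = Counter(outcomes)
    total = sum(counts.values())
    return counts[CLICKED] / total if total else 0.0

# Example: 2 clicks among 50 simulated recipients -> 4.0% failure rate.
outcomes = [CLICKED] * 2 + [MISSED] * 18 + [REPORTED] * 30
print(f"{failure_rate(outcomes):.1%}")  # 4.0%
```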
This metric became the harbinger of a concerning shift. In 2023, 2.9% of users failed to detect AI-crafted attacks, compared to 4.2% for those written by humans. A year later, the gap nearly vanished—2.1% versus 2.3%. By early 2025, the tables had turned: AI-driven attacks yielded a 2.78% failure rate, while human-generated emails scored slightly lower at 2.25%. This signified that AI had become more deceptive—even to employees with over six months of training.
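The headline comparisons follow directly from these failure rates: the relative gap is (AI rate minus human rate) divided by the human rate. A quick check against the figures above reproduces the trajectory described earlier:

```python
# Failure rates quoted above as (AI-generated, human-generated), in percent.
rates = {"2023": (2.9, 4.2), "2024": (2.1, 2.3), "2025": (2.78, 2.25)}

for year, (ai, human) in rates.items():
    gap = (ai - human) / human  # AI effectiveness relative to human red teams
    print(f"{year}: {gap:+.0%}")
# 2023: -31%  (AI lagged by nearly a third)
# 2024: -9%   (a margin of roughly 10%)
# 2025: +24%  (AI ahead; a 55-point swing since 2023)
```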
It’s worth noting that the majority of AI-crafted attacks to date remain within the bounds of ethical testing. In real-world scenarios, the use of generative AI for phishing is still relatively limited. Only 0.7% to 4.7% of phishing emails that bypassed filters in 2024 were AI-generated. Nevertheless, the overall volume of phishing since the advent of ChatGPT has surged by an astonishing 4,151%, and successful filter evasions have risen by nearly 50%.
These figures leave no doubt: the threat landscape is evolving. Traditional compliance-based training is becoming obsolete, giving way to adaptive platforms for managing human risk. Behavior-based education, grounded in real-world attack simulations and fortified with AI tools, is proving far more resilient against both human and machine-driven threats.
The most effective defense now lies in adaptive learning. Deploying AI agents that mimic adversarial tactics, but strictly for training purposes, helps build resistance to social engineering at every organizational level.
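As an illustration only (this is an assumed scheme, not Hoxhunt's or any vendor's actual logic), "adaptive" can be as simple as stepping each user's simulation difficulty up after a reported email and down after a click:

```python
from dataclasses import dataclass, field

# Hypothetical difficulty ladder; real platforms use far richer models.
DIFFICULTIES = ["basic", "intermediate", "advanced"]

@dataclass
class UserProfile:
    level: int = 0                                     # index into DIFFICULTIES
    history: list[str] = field(default_factory=list)   # "reported"/"missed"/"clicked"

def next_simulation(user: UserProfile) -> str:
    """Pick the next simulation difficulty from the most recent outcome."""
    last = user.history[-1] if user.history else None
    if last == "reported":
        user.level = min(user.level + 1, len(DIFFICULTIES) - 1)
    elif last == "clicked":
        user.level = max(user.level - 1, 0)
    return DIFFICULTIES[user.level]

user = UserProfile(history=["reported"])
print(next_simulation(user))  # "intermediate": a report earns a harder drill
```

The point of the sketch is the feedback loop: training difficulty tracks each user's observed behavior rather than a fixed compliance calendar.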
Looking ahead, the widespread proliferation of AI-powered phishing appears inevitable. Once generative phishing tools become more accessible, they will be embedded into phishing-as-a-service models, raising the caliber of mass attacks to what was once achievable only in targeted campaigns.
But until that tipping point arrives, organizations still have a window to prepare. New platforms must unify training, live threat intelligence, and SOC integration to detect even those attacks that elude all filters. There is still time—but it is quickly running out.