
The notion that spam is easily identifiable by clumsy spelling and awkward syntax no longer holds true. Generative neural networks have not merely enhanced the quality of fraudulent correspondence—they have elevated it to a level of linguistic and cultural sophistication that renders it nearly indistinguishable from legitimate communication. Spam no longer appears suspicious—it appears flawless. And that very perfection has become the new red flag.
In the past, cybercriminals were largely confined to English, Spanish, or—at best—formal French, as hiring native speakers of other languages was time-consuming, costly, and often unjustified. Today, with the aid of artificial intelligence, attackers can effortlessly generate messages in any dialect—from Québécois to European Portuguese. The result is fake correspondence so convincing it could have been penned by the neighbor next door.
According to Chester Wisniewski, Global CISO at Sophos, who spoke at the RSAC conference, it is now likely that up to half of all spam is generated by neural networks. He underscores that the grammar and punctuation in such messages are so impeccable that this very polish should, paradoxically, arouse suspicion: unlike machines, humans, regardless of native fluency, inevitably make mistakes.
AI has granted malicious actors yet another formidable advantage: scalability. They are no longer constrained by geography or language. In Quebec, for example, phishing attacks used to fail simply because they arrived in formal French rather than in colloquial Québécois. Now, neural networks craft phrases that resonate with native authenticity. A similar trend is unfolding in Portugal: attackers once concentrated on Brazil as the larger Portuguese-speaking market, but European Portuguese is now equally within reach.
One of the most insidious forms of deception—romance scams—has also evolved. During the initial phases of “courtship,” victims interact with a chatbot that expertly mimics a compassionate, attentive, and seemingly genuine interlocutor. Once rapport is established, a human operator takes over, initiating pleas for assistance, requests for money transfers, or luring victims into fraudulent investment schemes.
Perhaps the most alarming technology to gain mass adoption through AI is deepfake audio. As Wisniewski explains, it is now possible—at negligible cost—to clone any employee’s voice, such as that of an IT support technician, and call colleagues with seemingly legitimate requests for passwords or sensitive information. This form of deception unfolds in real time and requires minimal technical skill.
As for video deepfakes, the situation remains comparatively calm. Wisniewski expressed skepticism toward high-profile incidents such as the widely reported case in Hong Kong, where a deepfaked video call allegedly convinced an employee to transfer $25 million. He suggested this might be more of an attempt to attribute human error to a trendy threat than a true demonstration of current capabilities. Even the most advanced firms have yet to develop realistic, interactive video models, but this, he warns, is merely a matter of time. At the current pace of technological advancement, he predicts, convincing deepfake video calls will become commonplace within two years.
Kevin Brown, Chief Operating Officer of NCC Group, disagrees. He notes that his team of penetration testers has already achieved notable success in producing video deepfakes for targeted operations. The technology exists—it simply hasn’t yet been industrialized. But again, it is only a matter of time.
Wisniewski and Brown are united in one conviction: the time has come to rethink how communication is authenticated. The old hallmarks of phishing—poor spelling, a sense of urgency, incoherent language—are no longer reliable indicators. The future lies in verifying identity through new channels that, for now, remain resistant to real-time manipulation.
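Neither speaker prescribed a specific mechanism, but a minimal sketch of what such an out-of-band check could look like is a time-based one-time code (RFC 6238 TOTP) that two colleagues enroll in advance over a trusted channel. During a suspicious call, the recipient asks the caller to read out the current code, which a cloned voice alone cannot produce. The snippet below is an illustration only, not a recommendation of any particular product; the SHARED_SECRET value is a placeholder, not a real credential.

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(shared_secret_b32: str, period: int = 30, digits: int = 6) -> str:
    """Return the current time-based one-time code (RFC 6238, HMAC-SHA1)."""
    key = base64.b32decode(shared_secret_b32, casefold=True)
    counter = int(time.time()) // period          # 30-second time step
    message = struct.pack(">Q", counter)          # 8-byte big-endian counter
    digest = hmac.new(key, message, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                    # dynamic truncation (RFC 4226)
    value = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(value % 10 ** digits).zfill(digits)

if __name__ == "__main__":
    # Hypothetical secret agreed on in person or over an already-trusted channel.
    SHARED_SECRET = "JBSWY3DPEHPK3PXP"
    print("Code the caller should be able to read out:", totp(SHARED_SECRET))
```

The value of such a check is that it shifts trust away from how a caller sounds or looks and toward possession of a secret exchanged before any attack began, which real-time voice or video synthesis cannot conjure on its own.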