
The FBI has issued a warning about a new wave of attacks involving voice-based deepfakes targeting U.S. officials. Since April 2025, cybercriminals have increasingly used artificial intelligence to generate counterfeit voice recordings impersonating high-ranking individuals, including current and former officials of federal and state agencies.
Attackers are distributing text messages and audio files that convincingly mimic the voices of public officials in an effort to gain the victim’s trust. This strategy, which combines SMS phishing (smishing) and voice phishing (vishing), enables threat actors to extract access credentials and sensitive information. Frequently, the messages prompt recipients to switch to an alternate messaging platform via a malicious link, which leads to the compromise of devices or user accounts.
Once a single official’s account is breached, attackers gain access to that individual’s contacts, enabling a cascade of follow-on attacks that leverage the compromised person’s genuine contact details and voice. This allows cybercriminals to extract even more data, or in some cases money, from other government personnel.
Digital paranoia is the new common sense.
The FBI stresses that receiving a message from a “high-ranking official” should not be grounds for trust, particularly when it involves unexpected requests or financial transfers. To help the public detect and prevent such attacks, the agency has released a set of practical guidelines.
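As a purely illustrative sketch, and not part of the FBI’s guidance, a simple heuristic can flag messages that combine urgent or platform-switch language with a link to an unfamiliar domain, two hallmarks of the smishing lures described above. The allowlist and phrase list below are hypothetical placeholders.

```python
import re
from urllib.parse import urlparse

# Hypothetical allowlist of domains the recipient's organization actually uses.
TRUSTED_DOMAINS = {"example-agency.gov", "example.com"}

# Illustrative phrases that often accompany smishing/vishing lures.
URGENT_PHRASES = ("switch to", "verify your account", "wire", "urgent", "new number")


def flag_suspicious(message: str) -> list[str]:
    """Return the reasons a message resembles a smishing lure, if any."""
    reasons = []
    # Flag any link whose domain is not on the allowlist.
    for url in re.findall(r"https?://\S+", message):
        domain = urlparse(url).netloc.lower()
        if domain not in TRUSTED_DOMAINS:
            reasons.append(f"link to unrecognized domain: {domain}")
    # Flag urgent or platform-switch wording.
    if any(phrase in message.lower() for phrase in URGENT_PHRASES):
        reasons.append("urgent or platform-switch language")
    return reasons


if __name__ == "__main__":
    sample = ("This is the Director. Urgent: switch to this app to continue: "
              "https://secure-chat.example.net/join")
    for reason in flag_suspicious(sample):
        print("warning:", reason)
```

A filter like this only catches the crudest lures; the FBI’s point is that even a message that passes such checks should still be verified through a known, independent channel before any request is acted on.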
The threat of deepfakes is not new. As early as 2021, the FBI predicted that AI-generated voice, image, text, and video manipulation would become integral to malicious information operations. Europol echoed this concern in 2022, warning that deepfakes would soon be widely used in CEO fraud schemes and the fabrication of digital evidence.
In 2024, the U.S. Department of Health and Human Services (HHS) reported similar incidents in which cybercriminals used synthetic voices to deceive IT support staff. Shortly thereafter, LastPass disclosed an attack in which the perpetrators simulated the voice of CEO Karim Toubba in an attempt to trick an employee into taking fraudulent action.
All of this affirms a sobering reality: a voice alone can no longer serve as proof of authenticity. Deepfakes are growing ever more convincing, and traditional methods of detecting deception are becoming obsolete.