
Over the weekend, an audio recording circulated on social media in which U.S. Vice President J.D. Vance appeared to speak disparagingly about Elon Musk, calling him “a man playing the role of a great American leader” who “embarrasses the administration.” On Sunday, Vance’s communications director, William Martin, wrote on X (formerly Twitter) that the recording was “100% fake and definitively not the vice president.” The original post containing the audio has since been deleted.
Disinformation experts from Reality Defender analyzed the recording and concluded it was most likely AI-generated. “We employed multiple audio analysis models. The likelihood of it being a forgery is extremely high,” a company spokesperson said. According to their findings, background noise and reverberation were deliberately added to obscure telltale signs of synthetic audio.
A clip of the recording, posted on TikTok without any indication that AI was involved, garnered more than 2 million views and 8,000 comments, despite TikTok’s policies prohibiting disinformation and requiring clear labeling of AI-generated content. One of the top responses read, “With the rise of AI, I no longer know what to believe.” The clip also spread to YouTube and X.
This is not the first time synthetic audio of well-known political figures has appeared on TikTok. Previously, voice clones of Donald Trump were used to promote financial scams. TikTok has not commented on the latest incident.
It remains unclear what specific software was used to produce the forgery. However, platforms like ElevenLabs make it relatively easy to clone celebrity voices, despite their stated safeguards. In March, Consumer Reports tested six AI voice-generation services and concluded that meaningful restrictions were largely absent.
Although the fake Vance recording is unlikely to have substantial political impact, the incident underscores how easy it has become to create convincing, entirely fabricated audio. Such deepfakes are now deployed not only in politics but also in scams, extortion schemes, and the spread of “AI noise,” a flood of low-quality synthetic content that drowns out reliable information.