AI Fuels Identity Fraud Explosion: Losses Soar to $2.8 Billion Monthly
Experts are raising the alarm: artificial intelligence has transformed identity fraud into a thriving industry. According to the latest quarterly report from AU10TIX, digital platforms are facing an unprecedented surge in sophisticated attacks, particularly targeting social networks, payment systems, and cryptocurrencies. Analysts estimate the monthly losses from such incidents now reach $2.8 billion—an astonishing 340% increase compared to last year.
Criminals have significantly refined their methods. Simple document forgery has given way to synthetic identities, deepfakes, and automated bots capable of bypassing traditional verification systems. Generative AI is now being used to create fake accounts and disseminate disinformation. Researchers have identified over 50 specialized darknet marketplaces where tools for generating synthetic identities are traded.
A sharp rise in bot activity was recorded on social media in the lead-up to the U.S. presidential election. At the start of 2024, these attacks accounted for only 3% of fraudulent campaigns, but by the third quarter, this figure had surged to 28%. The average lifespan of a fake account has increased from 48 hours to 12 days.
The wave of attacks leveraging generative AI, which began in March 2024, reached its peak in September. Cybercriminals have mastered the art of creating convincing fake narratives and provocative content to influence public opinion. According to the analysis, these false narratives are distributed through complex global networks of computers, effectively masking their origins—70% of attacks remain untraceable.
A new trend is the emergence of “multi-layered” synthetic identities, complete with coordinated online histories across multiple platforms. To enhance credibility, disinformation agents create fake accounts on LinkedIn, Instagram, and other social networks, nurturing them with activity for months before deploying them in propaganda efforts.
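Detecting such long-nurtured sock puppets is its own arms race. As a purely illustrative sketch (the account records, field names, and thresholds below are all hypothetical), one simple heuristic looks for near-dormant accounts whose activity bursts begin on the same day:

```python
from datetime import datetime

# Hypothetical cross-platform account records: creation date, activity
# volume before the first coordinated burst, and when that burst began.
accounts = [
    {"id": "p1", "created": datetime(2024, 1, 3), "posts_before_burst": 2,
     "burst_start": datetime(2024, 9, 1)},
    {"id": "p2", "created": datetime(2024, 1, 4), "posts_before_burst": 1,
     "burst_start": datetime(2024, 9, 1)},
    {"id": "p3", "created": datetime(2023, 6, 9), "posts_before_burst": 340,
     "burst_start": datetime(2024, 9, 2)},
]

def flag_coordinated_sleepers(accts, max_prior_posts=5):
    """Group near-dormant accounts by the day their activity burst began;
    several 'sleepers' waking on the same day suggests coordination."""
    by_day = {}
    for a in accts:
        if a["posts_before_burst"] <= max_prior_posts:
            by_day.setdefault(a["burst_start"].date(), []).append(a["id"])
    return {day: ids for day, ids in by_day.items() if len(ids) > 1}

print(flag_coordinated_sleepers(accounts))
# {datetime.date(2024, 9, 1): ['p1', 'p2']}
```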
Another emerging tactic is the use of synthetic selfies: hyper-realistic generated images that mimic real faces precisely enough to deceive biometric security systems. The selfie, once considered a reliable authentication method, is now undermined by advances in generative AI.
Modern synthetic selfies even pass “liveness detection” tests by simulating blinking and facial expressions. AI algorithms can generate short video clips with natural facial movements, rendering traditional verification methods increasingly obsolete. The era of complacency in digital security, it seems, is over.
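To see why blink checks fall so easily, consider the eye-aspect-ratio (EAR) heuristic that many simple liveness systems rely on (Soukupová and Čech, 2016): it reduces “liveness” to a single scalar that dips when the eye closes, so a generated clip only has to modulate that one number. Below is a minimal sketch, assuming eye landmarks have already been extracted by a face-landmark detector; the threshold and sample values are illustrative:

```python
import math

def eye_aspect_ratio(eye):
    """EAR over six eye landmarks (p1..p6); the ratio drops sharply
    when the eye closes."""
    v1 = math.dist(eye[1], eye[5])  # vertical lid distance 1
    v2 = math.dist(eye[2], eye[4])  # vertical lid distance 2
    h = math.dist(eye[0], eye[3])   # horizontal eye width
    return (v1 + v2) / (2.0 * h)

def passes_blink_liveness(ear_per_frame, threshold=0.2, min_blinks=1):
    """Naive liveness test: count dips of the EAR below a threshold."""
    blinks, eye_closed = 0, False
    for ear in ear_per_frame:
        if ear < threshold:
            eye_closed = True
        elif eye_closed:  # eye reopened -> count one full blink
            blinks += 1
            eye_closed = False
    return blinks >= min_blinks

open_eye = [(0, 2), (2, 4), (4, 4), (6, 2), (4, 0), (2, 0)]  # rough landmarks
print(round(eye_aspect_ratio(open_eye), 2))  # ~0.67 for an open eye

# A generated clip only has to modulate this one scalar to pass.
synthetic_ear_series = [0.31, 0.30, 0.12, 0.11, 0.29, 0.32]  # fabricated
print(passes_blink_liveness(synthetic_ear_series))  # True
```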
Criminals have also mastered “template attacks,” where a single forged document serves as a blueprint for generating numerous unique identities. AI tools automatically alter photos, document numbers, and personal details.
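On the defensive side, template attacks leave a telltale fingerprint: many “different” documents share an identical static layout. Here is a minimal sketch of that idea, assuming documents arrive as parsed field dictionaries with a hypothetical layout_hash (e.g., a perceptual hash of the document's non-variable regions):

```python
from collections import defaultdict

# Hypothetical parsed document: OCR'd variable fields plus a perceptual
# hash of the static layout (background, fonts, security pattern).
documents = [
    {"doc_id": "a1", "layout_hash": "9f3c", "name": "J. Smith", "number": "X123"},
    {"doc_id": "a2", "layout_hash": "9f3c", "name": "K. Jones", "number": "X941"},
    {"doc_id": "a3", "layout_hash": "9f3c", "name": "L. Brown", "number": "X877"},
    {"doc_id": "b1", "layout_hash": "27ab", "name": "M. Davis", "number": "Y555"},
]

def flag_template_clusters(docs, max_per_layout=2):
    """Group documents by layout fingerprint; one layout recurring across
    many distinct identities suggests a shared forged template."""
    clusters = defaultdict(list)
    for d in docs:
        clusters[d["layout_hash"]].append(d["doc_id"])
    return {h: ids for h, ids in clusters.items() if len(ids) > max_per_layout}

print(flag_template_clusters(documents))
# {'9f3c': ['a1', 'a2', 'a3']} -> three "different" people, one template
```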
Another advanced tactic is “temporary substitution,” in which legitimate documents belonging to real individuals are used to pass initial verification, after which their details are gradually replaced with synthetic information. This approach can slip past even the most stringent verification systems.
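One way systems can counter this is by auditing post-verification edits: identity fields that keep changing after onboarding are the signature of gradual substitution. A minimal sketch, with a hypothetical audit-log format and illustrative thresholds:

```python
from datetime import date

# Hypothetical audit trail: each entry is an edit to a verified identity
# field made after the account already passed initial checks.
audit_log = [
    {"account": "u42", "field": "date_of_birth",   "changed": date(2024, 5, 2)},
    {"account": "u42", "field": "document_number", "changed": date(2024, 6, 10)},
    {"account": "u42", "field": "name",            "changed": date(2024, 7, 1)},
    {"account": "u77", "field": "address",         "changed": date(2024, 6, 3)},
]

SENSITIVE_FIELDS = {"name", "date_of_birth", "document_number"}

def flag_identity_drift(log, min_sensitive_edits=2):
    """Count post-verification edits to sensitive fields per account;
    repeated edits are the signature of gradual substitution."""
    counts = {}
    for entry in log:
        if entry["field"] in SENSITIVE_FIELDS:
            counts[entry["account"]] = counts.get(entry["account"], 0) + 1
    return [acct for acct, n in counts.items() if n >= min_sensitive_edits]

print(flag_identity_drift(audit_log))  # ['u42']
```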
The payments sector offers a rare bright spot: fraudulent attacks, which accounted for 52% of all incidents in Q2, dropped to 39% in Q3, a decline attributed to increased oversight from regulators and law enforcement agencies.
Predictably, cybercriminals are shifting their focus to the cryptocurrency market. In Q3, 31% of all attacks targeted this sector, making it the second most targeted. These operations are also accelerating: fraudsters can now go from creating a synthetic identity to executing a first scam attempt in as little as six hours.
Decentralized finance (DeFi) platforms are particularly vulnerable. Researchers have uncovered a network of over 10,000 interconnected accounts manipulating decisions in major DeFi projects.
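Finding such networks is essentially a graph problem: accounts that share funding sources (or devices, or IP ranges) collapse into clusters. A minimal sketch using union-find over shared funding wallets; all names are hypothetical and the linkage signal is deliberately simplified:

```python
from collections import defaultdict

# Hypothetical linkage data: which wallet funded each governance account.
funding = {
    "acct1": "walletA", "acct2": "walletA",
    "acct3": "walletB", "acct4": "walletB",
    "acct5": "walletC",
}

parent = {}

def find(x):
    """Union-find root lookup with path compression."""
    parent.setdefault(x, x)
    while parent[x] != x:
        parent[x] = parent[parent[x]]
        x = parent[x]
    return x

def union(a, b):
    parent[find(a)] = find(b)

# Linking each account to its funding wallet transitively links all
# accounts that share a wallet.
for acct, wallet in funding.items():
    union(acct, wallet)

clusters = defaultdict(set)
for acct in funding:
    clusters[find(acct)].add(acct)

print([c for c in clusters.values() if len(c) > 1])
# e.g. [{'acct1', 'acct2'}, {'acct3', 'acct4'}]
```

In practice the linkage signal would combine many more features (shared devices, IP ranges, vote timing), but the clustering step stays the same.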
AU10TIX analysts emphasize that simple document verification is no longer sufficient. They recommend that companies monitor user behavior within their systems, tracking login patterns, traffic sources, and transaction activity to identify potential threats.
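In practice, that recommendation amounts to combining weak behavioral signals into a risk score. A minimal sketch of such scoring, with illustrative weights and thresholds that are not vendor guidance:

```python
from dataclasses import dataclass

@dataclass
class SessionSignals:
    logins_last_hour: int      # burst logins suggest automation
    datacenter_ip: bool        # traffic from hosting ranges, not consumer ISPs
    account_age_hours: float   # synthetic identities act fast (report: ~6 h)
    tx_count_first_day: int    # transaction velocity right after signup

def risk_score(s: SessionSignals) -> float:
    """Combine weak behavioral signals into a 0..1 risk score."""
    score = 0.0
    if s.logins_last_hour > 5:
        score += 0.30
    if s.datacenter_ip:
        score += 0.25
    if s.account_age_hours < 24 and s.tx_count_first_day > 3:
        score += 0.45  # brand-new account already transacting heavily
    return min(score, 1.0)

suspect = SessionSignals(logins_last_hour=9, datacenter_ip=True,
                         account_age_hours=6, tx_count_first_day=7)
print(risk_score(suspect))  # 1.0 -> route to step-up verification
```

Any single signal here is weak on its own; the point of the design is that synthetic identities tend to trip several at once.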