Deepfake Fraud on the Rise: FinCEN Warns Financial Institutions
The Financial Crimes Enforcement Network (FinCEN) has issued a warning to financial institutions regarding emerging fraud schemes involving deepfakes.
Since the beginning of 2023, FinCEN has observed a marked increase in reports from financial organizations highlighting suspicious activities linked to deepfake technology. Malicious actors are leveraging the capabilities of generative artificial intelligence (GenAI) to forge documents and deceive identity verification systems. These schemes frequently involve the creation of counterfeit identification documents, enabling fraudsters to bypass client authentication and verification processes.
Particular emphasis has been placed on the use of deepfakes and generated imagery to circumvent standard authentication methods. For instance, criminals may manipulate or fabricate photographs for counterfeit driver’s licenses, passports, and other identification documents, and may combine authentic and fabricated personally identifiable information (PII) to construct so-called “synthetic identities.” These fabricated profiles are then used to open accounts and conduct subsequent financial transactions.
Identifying Fraud in Financial Institutions
To safeguard against such threats, FinCEN has outlined several indicators that may assist financial organizations in detecting deepfake-related fraud:
- Document discrepancies: Institutions may identify forgery during secondary reviews of customer-submitted documents. For example, an ID photo may appear suspicious or exhibit clear signs of digital manipulation.
- Challenges in identity verification: Certain clients may struggle to convincingly verify their identity or sources of income. Frequent technical glitches during verification could indicate attempts to use pre-recorded videos instead of live communication.
- Unusual account activity: FinCEN advises monitoring accounts for suspicious patterns, such as rapid transactions within short timeframes or transfers to high-risk platforms, including gambling websites or cryptocurrency exchanges.
- Suspicious attempts to bypass verification: Fraudsters might seek to alter communication methods during verification under the pretext of technical issues. The use of webcam plugins could also signal video forgery attempts.
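The “unusual account activity” indicator above is typically operationalized as a velocity check. As a minimal illustrative sketch (the function name, window size, and threshold are assumptions, not FinCEN-prescribed values), a sliding window over transaction timestamps can flag bursts of rapid activity:

```python
from datetime import datetime, timedelta

def flag_rapid_activity(timestamps, max_txns=5, window=timedelta(minutes=10)):
    """Return True if more than max_txns transactions occur within any
    sliding window of the given length. Thresholds are illustrative only."""
    ts = sorted(timestamps)
    start = 0
    for end in range(len(ts)):
        # Shrink the window from the left until it spans at most `window`.
        while ts[end] - ts[start] > window:
            start += 1
        if end - start + 1 > max_txns:
            return True
    return False
```

In practice such a rule would be one signal among many (counterparty risk, transfer destinations such as gambling or cryptocurrency platforms) feeding a broader monitoring program.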
FinCEN further warns of the potential misuse of deepfakes in social engineering schemes. For example, criminals may deploy fake voices or videos to persuade company employees to transfer funds to fraudulent accounts. In one notable case, scammers impersonating a senior executive’s voice managed to secure a transfer of over $25 million to their accounts.
Recommendations for Mitigating Risks
To counter deepfake attacks, FinCEN urges financial institutions to enhance their security measures and adopt advanced authentication practices, such as:
- Multi-factor authentication (MFA): Implementing two or more factors to verify identity, such as one-time passwords or biometric verification.
- Live verification: Conducting real-time audio and video checks to confirm client identities. Although fraudsters may use tools to generate synthetic responses, live interactions can expose inconsistencies.
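One common second factor mentioned above is a one-time password. As a hedged sketch of how time-based one-time passwords work under RFC 6238 (using only the Python standard library; a production system would use a vetted library and secret management):

```python
import base64
import hmac
import struct
import time

def totp(secret_b32, for_time=None, digits=6, step=30):
    """Compute an RFC 6238 time-based one-time password (HMAC-SHA1)
    from a base32-encoded shared secret."""
    if for_time is None:
        for_time = time.time()
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int(for_time) // step          # 30-second time steps
    msg = struct.pack(">Q", counter)         # 8-byte big-endian counter
    digest = hmac.new(key, msg, "sha1").digest()
    offset = digest[-1] & 0x0F               # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)
```

Because the code depends on a shared secret and the current time step, a deepfake alone cannot reproduce it, which is why FinCEN highlights MFA as a complement to visual or voice verification.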
FinCEN also recommends regular staff training to improve awareness of deepfake indicators and phishing attacks. Institutions should additionally assess the risks associated with third-party identity verification providers and apply risk management protocols at every stage of those relationships.