In response to an increase in suspicious activity reporting, the Financial Crimes Enforcement Network (FinCEN) issued FIN-2024-Alert004 on November 13, 2024, to help financial institutions identify fraud schemes involving deepfake media created with generative artificial intelligence (GenAI). “Deepfake media” is a type of synthetic content that uses artificial intelligence/machine learning to create realistic but inauthentic videos, pictures, audio, and text, which fraudsters can use to circumvent identity verification and authentication methods.

FinCEN reports that fraudsters are using GenAI as a low-cost tool to exploit financial institutions’ identity verification processes. According to suspicious activity report (SAR) filings, fraudsters are using GenAI to open accounts used to funnel money and to perpetrate fraud schemes such as check fraud, credit card fraud, authorized push payment fraud, loan fraud, and unemployment fraud.

Deepfake media also may be used in phishing attacks and scams that defraud businesses and consumers by impersonating trusted individuals.

Red flag indicators to detect deepfake media include the following (see the illustrative screening sketch after the list):

  • A customer’s photo is internally inconsistent (e.g., shows visual tells of being altered) or is inconsistent with their other identifying information (e.g., a customer’s date of birth indicates that they are much older or younger than the photo would suggest).
  • A customer presents multiple identity documents that are inconsistent with each other.
  • A customer uses a third-party webcam plugin during a live verification check. Alternatively, a customer attempts to change communication methods during a live verification check due to excessive or suspicious technological glitches during remote verification of their identity.
  • A customer declines to use multifactor authentication to verify their identity.
  • A reverse-image lookup or open-source search of an identity photo matches an image in an online gallery of GenAI-produced faces.
  • A customer’s photo or video is flagged by commercial or open-source deepfake detection software.
  • GenAI-detection software flags the potential use of GenAI text in a customer’s profile or responses to prompts.
  • A customer’s geographic or device data is inconsistent with the customer’s identity documents.
  • A newly opened account or an account with little prior transaction history has a pattern of rapid transactions; high payment volumes to potentially risky payees, such as gambling websites or digital asset exchanges; or high volumes of chargebacks or rejected payments.
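Several of these indicators lend themselves to simple rule-based screening before any manual review. The following Python sketch is purely illustrative and is not drawn from the FinCEN alert: the record fields (for example, a photo_estimated_age value supplied by a separate face-analysis step), the thresholds, and the risky payee categories are all assumptions, and a hit would ordinarily prompt enhanced review rather than an automated decision.

    from dataclasses import dataclass, field
    from datetime import date, datetime, timedelta

    # Hypothetical onboarding record; every field name and threshold below is illustrative.
    @dataclass
    class Applicant:
        date_of_birth: date
        photo_estimated_age: int      # apparent age estimated from the ID photo
        document_country: str         # country shown on the identity document
        device_geo_country: str       # country inferred from IP/device data
        account_opened: datetime
        transactions: list = field(default_factory=list)  # (timestamp, amount, payee_category)

    RISKY_PAYEE_CATEGORIES = {"gambling", "digital_asset_exchange"}

    def deepfake_red_flags(a: Applicant, now: datetime) -> list[str]:
        """Return red-flag labels for an applicant; hits warrant enhanced review, not rejection."""
        flags = []

        # Photo inconsistent with other identifying information (documented vs. apparent age).
        documented_age = (now.date() - a.date_of_birth).days // 365
        if abs(documented_age - a.photo_estimated_age) > 15:
            flags.append("photo_age_mismatch")

        # Geographic or device data inconsistent with the identity documents.
        if a.device_geo_country != a.document_country:
            flags.append("geo_document_mismatch")

        # Newly opened account with rapid transactions or high payments to risky payees.
        if now - a.account_opened < timedelta(days=30):
            recent = [t for t in a.transactions if now - t[0] < timedelta(days=7)]
            risky_total = sum(amount for _, amount, category in recent
                              if category in RISKY_PAYEE_CATEGORIES)
            if len(recent) > 20 or risky_total > 10_000:
                flags.append("new_account_rapid_risky_activity")

        return flags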

FinCEN has identified the following practices as tools that may help reduce a financial institution’s vulnerability to deepfake identity documents:

  • Multifactor authentication (MFA), including phishing-resistant MFA (see the sketch after this list); and
  • Live verification checks in which a customer is prompted to confirm their identity through audio or video.
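As a minimal sketch of the MFA step, the example below uses the pyotp library to enroll and verify a time-based one-time password (TOTP). Note that TOTP codes, while a common second factor, are not phishing-resistant; phishing-resistant MFA such as FIDO2/WebAuthn security keys generally depends on platform or vendor support. The customer identifiers and messages here are hypothetical.

    import pyotp

    # Enrollment: generate a per-customer secret (store it in a secrets vault in practice)
    # and share it with the customer's authenticator app via a provisioning URI / QR code.
    secret = pyotp.random_base32()
    totp = pyotp.TOTP(secret)
    provisioning_uri = totp.provisioning_uri(name="customer@example.com", issuer_name="Example Bank")

    # Verification during a higher-risk action (e.g., account opening or a flagged payment):
    # the customer submits the six-digit code from their authenticator app.
    submitted_code = input("Enter the code from your authenticator app: ")
    if totp.verify(submitted_code, valid_window=1):  # tolerate one 30-second step of clock drift
        print("Second factor accepted.")
    else:
        print("Second factor failed; escalate to a live verification check or manual review.")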

If you would like to remain updated on these issues, please click here to subscribe to Money Laundering Watch.  Please click here to find out about Ballard Spahr’s Anti-Money Laundering Team.