Banks are right to be excited about GenAI. It is poised to accelerate customer service, hyper-personalize customer relationships, streamline regulatory filings, and deliver efficiencies across all financial value chains. At the same time, the easy accessibility and affordability of GenAI tools also present a downside for financial institutions: GenAI is already being exploited by fraudsters and criminals.
Fraudsters are adopting GenAI to devise increasingly complex frauds, and the threat continues to evolve rapidly. Noteworthy examples include synthetic media fraud, deepfakes and voice cloning, and document forgery, often executed via tools like FraudGPT or WormGPT. Chief Compliance Officers (CCOs) and fraud executives face dual pressures: leveraging GenAI to enhance operational efficiency and productivity while also countering the sophisticated fraud schemes enabled by the same technology. Traditional AI methods may not suffice against the advanced GenAI tools used by criminals, making it imperative for executives to deploy GenAI in their own fight against fraud.
How GenAI is Enhancing Fraud Techniques
Criminals have devised disciplined approaches and well-laid-out methodologies to take over customer accounts using sophisticated GenAI algorithms and tools. They may track individuals' behaviors, hack social media accounts, and reach out with genuine-looking communications that highlight system issues or promote new products. LLMs are used to create highly believable emails, while deepfake technology clones voices and creates fraudulent advertisements, misleading customers into financial traps. Earlier this year, authorities in Hong Kong publicized a particularly brazen and advanced fraud scheme: a deepfaked version of a multinational company’s chief financial officer ordered employees to transfer funds during a video conference call, resulting in a loss of more than $25M.
Combatting GenAI Fraud with Multi-Layer Strategies
As GenAI matures, financial institutions must counteract fraudsters with better counter-GenAI measures. Implementing a multi-layer defense strategy that addresses the volume and sophistication of fraud attacks is crucial. Many institutions are already responding by reinvesting in fraud innovation labs and collaborating with technology consultants to create effective prototypes. The first defense layer should protect against account takeover attempts or fraud initiations, while the second defense layer should stop fraudulent transactions initiated by a customer if the first layer fails. These layers should work in tandem to maximize effectiveness. Criminals devising sophisticated schemes using GenAI tools like FraudGPT, WormGPT, and DarkBERT can be deterred faster by creating fraud prevention models using LLMs and applying them to protect each security layer.
For example, one large institution is deploying a multi-layered approach to verify customer identities by cross-checking customer personally identifiable information (PII) and photo images with external sources in real time, tracking device IDs to conduct liveness detection tests, and identifying IP addresses. Continuous monitoring of customer behavior and account activity, along with tracking behavioral aspects during banking transactions, helps detect changes indicative of fraud. Timely notifications via SMS and email inform customers about recent transactions, adding an extra layer of security.
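The two-layer defense described above can be sketched in code. The check names, thresholds, and data fields below are hypothetical placeholders for illustration, not any specific institution's implementation:

```python
# Illustrative sketch of a two-layer fraud defense pipeline.
# All field names and thresholds are hypothetical assumptions.

from dataclasses import dataclass


@dataclass
class LoginAttempt:
    pii_matches_external: bool   # PII cross-checked against external sources
    liveness_passed: bool        # device-based liveness detection result
    known_device: bool           # device ID seen on this account before
    ip_risk_score: float         # 0.0 (clean) to 1.0 (high-risk) IP reputation


def layer_one_account_protection(attempt: LoginAttempt) -> bool:
    """Layer 1: block account takeover at login / session start."""
    if not attempt.pii_matches_external or not attempt.liveness_passed:
        return False
    # An unknown device combined with a risky IP is treated as a takeover signal.
    if not attempt.known_device and attempt.ip_risk_score > 0.7:
        return False
    return True


@dataclass
class Transaction:
    amount: float
    typical_amount: float        # customer's historical average
    new_payee: bool


def layer_two_transaction_check(txn: Transaction) -> bool:
    """Layer 2: stop suspicious transactions even if layer 1 passed."""
    # A large transfer to a never-before-seen payee triggers step-up review.
    if txn.new_payee and txn.amount > 5 * txn.typical_amount:
        return False
    return True
```

The point of the design is that the layers fail independently: a session that clears layer 1 on a compromised but recognized device can still be stopped at layer 2 when the transaction pattern deviates from the customer's history.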
Additionally, FIs are keen to augment their transaction monitoring platforms with robust advanced analytics that enable them to monitor fraud and scams in real time. Such monitoring further solidifies the defensive posture and helps ensure that customer funds cannot be siphoned off by scammers.
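One minimal way to approximate real-time behavioral monitoring is a streaming per-account anomaly score. The sketch below, with assumed thresholds and a deliberately simple statistic, flags transactions that deviate sharply from an account's running history (production platforms would combine many more signals):

```python
# Minimal sketch of real-time transaction monitoring using a running
# z-score per account. Thresholds are illustrative assumptions.

import math
from collections import defaultdict


class StreamingMonitor:
    """Tracks running mean/variance per account (Welford's algorithm)
    and flags transactions that deviate sharply from history."""

    def __init__(self, z_threshold: float = 3.0, min_history: int = 5):
        self.z_threshold = z_threshold
        self.min_history = min_history
        # Per-account state: [count, running mean, sum of squared deviations]
        self.stats = defaultdict(lambda: [0, 0.0, 0.0])

    def observe(self, account: str, amount: float) -> bool:
        """Return True if this transaction looks anomalous for the account."""
        n, mean, m2 = self.stats[account]
        anomalous = False
        if n >= self.min_history:
            std = math.sqrt(m2 / (n - 1)) if n > 1 else 0.0
            if std > 0 and abs(amount - mean) / std > self.z_threshold:
                anomalous = True
        # Update running statistics with the new observation.
        n += 1
        delta = amount - mean
        mean += delta / n
        m2 += delta * (amount - mean)
        self.stats[account] = [n, mean, m2]
        return anomalous
```

Because the state per account is just three numbers, the check runs in constant time per transaction, which is what makes real-time scoring at payment-rail speeds feasible.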
Creating Models with Synthetic Data
Starting with fraud models that mitigate known schemes — such as account takeover, identity theft, synthetic identity fraud, charity fraud, investment fraud, business email compromise, romance scams, and grandparent scams — is essential. This is achievable by using GenAI to create synthetic data that mimics advanced emerging fraud schemes, improving model accuracy through extensive training data. Feeding a fraud prevention model with a vast array of GenAI-created fraud scenarios hones the model's ability to recognize and flag fraudulent activities. By moving quickly to train systems on synthetic data, financial institutions can stay one step ahead of fraudsters, effectively safeguarding their customers and assets.
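The synthetic-data loop can be sketched end to end: generate labeled scenarios, then fit a simple scorer on them. The feature set and generation rules below are hypothetical stand-ins for what a GenAI pipeline would actually synthesize (an LLM would produce far richer scenarios than random feature flips):

```python
# Illustrative sketch: train a toy fraud scorer on synthetic scenarios.
# Features, rates, and the scoring rule are all hypothetical assumptions.

import random

FEATURES = ["new_payee", "urgent_language", "off_hours", "round_amount"]


def generate_synthetic_case(is_fraud: bool, rng: random.Random) -> dict:
    """Emit one synthetic scenario; fraud cases skew toward risk signals."""
    p = 0.8 if is_fraud else 0.15
    case = {f: rng.random() < p for f in FEATURES}
    case["label"] = is_fraud
    return case


def train_feature_weights(cases: list) -> dict:
    """Per-feature weight: ratio of fraud rate to legitimate rate,
    smoothed to avoid division by zero."""
    frauds = [c for c in cases if c["label"]]
    legits = [c for c in cases if not c["label"]]
    weights = {}
    for f in FEATURES:
        fraud_rate = sum(c[f] for c in frauds) / max(len(frauds), 1)
        legit_rate = sum(c[f] for c in legits) / max(len(legits), 1)
        weights[f] = (fraud_rate + 0.01) / (legit_rate + 0.01)
    return weights


def score(case: dict, weights: dict) -> float:
    """Multiply weights of active features; higher means more fraud-like."""
    s = 1.0
    for f in FEATURES:
        if case[f]:
            s *= weights[f]
    return s
```

The same structure carries over when the generator is an LLM instead of a random sampler: the value of the approach is that new scheme variants can be synthesized and folded into training long before enough real-world examples accumulate.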
Guidance and Mandates from Regulators
Regulatory bodies like the FFIEC, the Federal Reserve, the FTC, and the CFTC are encouraging banks to embrace RegTech firms and collaborate with consulting firms to identify use cases and find solutions. While agencies stress data privacy, they also encourage proactive problem-solving. For example, New York state Attorney General Letitia James’s recent order against a large bank emphasizes the need for banks to protect customer funds through process reimagination, technology modernization, or upskilling personnel. As new regulations like the UK's mandatory reimbursements push banks to reimburse fraud victims, financial institutions must invest in and use advanced tools, including GenAI, for fraud detection to keep customer funds safe.
Maintaining a Security Advantage
To stay ahead of these evolving threats, banks need to continuously innovate and implement real-time, AI-powered solutions. In areas like real-time payments, AI is the only way that banks will be able to reliably combat fraud at speed and scale. As fraud schemes (GenAI-driven and otherwise) continue to accelerate, strong yet seamless security measures will become a real differentiator for banks. The banks with best-in-class anti-fraud solutions won’t just be praised by regulators — they will be intentionally sought out by savvy customers who understand the risks of transacting in today’s fast-paced, data-rich economy.