Artificial Intelligence is Changing the Face of Fraud

Banks Use AI to Detect Fraud, Create Synthetic Data for Better Predictive Analytics

Criminals are leveraging the power of AI to change the face of fraud. In addition to horror stories like the finance worker who was tricked into paying $25 million by an AI-powered voice clone of a company executive, scammers are using AI to scale campaigns and create convincing synthetic identities with fake social media profiles and manipulated images and voices.

Deloitte estimates that US banks could lose $40 billion to AI-powered fraud over the next three years.

While criminals have an advantage in the AI race, banks and other financial services companies are responding with greater awareness and caution, and a growing number of organizations are exploring AI tools to improve fraud detection and response to AI-fueled scams.

Use of Artificial Intelligence for Fraud Programs

The banking industry is now on alert for a variety of targeted attacks that use phishing and vishing tactics with realistic, personalized messages developed by generative AI and large language models. With just a few seconds of sample audio, AI tools can replicate any voice pattern and enable caller ID spoofing, which makes fake calls appear to come from legitimate sources.

To improve detection of these scams, software company SAS is working on a pilot project to help financial services organizations use generative AI to analyze call center recordings of fraudulent claims and identify potential fraudsters. David Stewart, director of financial crimes and compliance at SAS, said a recent survey of more than 1,000 fraud prevention professionals found that nearly 90% of respondents plan to add generative AI to their toolkits by next year.

“Larger institutions are overwhelmed by the increasing volume of fraud and need better intelligence to distinguish real claims from fraudulent ones so they can prevent losses and help victims,” Stewart told Information Security Media Group.
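The article does not describe how such call-recording analysis works under the hood. One plausible building block is speaker-embedding comparison: if the same voice turns up behind claims filed under different identities, those claims deserve a closer look. The sketch below is illustrative only, not the SAS pilot; embed_call_audio() is a hypothetical placeholder for any real speaker-embedding model, and only the similarity search is concrete.

```python
# A toy similarity search for repeat voices across fraud claims. The
# embedding step is a hypothetical placeholder, NOT the SAS pilot's method.
import numpy as np
from sklearn.metrics.pairwise import cosine_similarity

def embed_call_audio(path: str) -> np.ndarray:
    """Hypothetical helper: map one recording to a fixed-length voice vector."""
    raise NotImplementedError("plug in a real speaker-embedding model")

def find_repeat_voices(embeddings: dict, threshold: float = 0.85) -> list:
    """Return claim-ID pairs whose callers sound like the same person."""
    ids = list(embeddings)
    sims = cosine_similarity(np.stack([embeddings[i] for i in ids]))
    return [(ids[a], ids[b], float(sims[a, b]))
            for a in range(len(ids))
            for b in range(a + 1, len(ids))
            if sims[a, b] >= threshold]

# Usage: embeddings = {claim_id: embed_call_audio(path) for claim_id, path in calls}
# One voice behind many "unrelated" claims is a strong fraud-ring signal.
```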

Most financial services organizations use historical and internal data to detect fraud patterns. This can limit detection to the types of incidents an institution has already experienced, and machine learning models catch the new methods of ever-evolving fraudsters only after the fraud has been committed. Organizations may also miss broader fraud patterns occurring across different institutions or industries, leading to potential blind spots in fraud programs.

Traditional machine-learning models evaluate transactions as they occur based on historical events, but they struggle with predictive analytics and quickly adapting to new fraud trends. A big part of the problem is that static models need to be recalibrated and adjusted when new data sources are introduced, said David Barnhardt, strategic advisor in the fraud and anti-money laundering practice group at financial services technology research firm Datos Insights (formerly Aite-Novarica Group).

He also said using only internal data can create a fragmented view that ignores broader fraud trends and patterns across products within the financial institution.
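One common operational response to the staleness Barnhardt describes, offered here as an illustration rather than anything he prescribes, is to monitor whether production traffic still resembles the data a static model was trained on and trigger recalibration when it drifts. A minimal sketch using a two-sample Kolmogorov-Smirnov test:

```python
# Monitor feature drift between a model's training data and recent traffic;
# a statistically significant shift is a signal to recalibrate the model.
import numpy as np
from scipy.stats import ks_2samp

def drifted_features(train: dict, recent: dict, alpha: float = 0.01) -> list:
    """Return features whose recent distribution no longer matches training."""
    flagged = []
    for name, train_vals in train.items():
        result = ks_2samp(train_vals, recent[name])   # two-sample KS test
        if result.pvalue < alpha:
            flagged.append(name)
    return flagged

# Example: a new payment rail shifts the transaction-amount distribution
rng = np.random.default_rng(0)
train = {"amount": rng.lognormal(3.0, 1.0, 10_000)}
recent = {"amount": rng.lognormal(3.6, 1.2, 10_000)}
print(drifted_features(train, recent))   # ['amount'] -> time to retrain
```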

To address the limitations of ML-based fraud models, financial services companies are beginning to use generative AI to create synthetic data that can help institutions conduct “what if” analyses of emerging fraud risks.
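The article does not name the synthetic-data tooling involved. As a rough illustration of the idea, the sketch below fits simple per-feature distributions to a hypothetical real_txns transaction frame (with an assumed "amount" column), samples look-alike records, and injects a made-up burst of small scam payments to form a "what if" stress set:

```python
# Fit simple per-feature distributions to real transactions and sample
# look-alike synthetic records; `real_txns` is an assumed pandas DataFrame.
# Real deployments would use a proper generative model (copulas, GANs, LLMs).
import numpy as np
import pandas as pd

rng = np.random.default_rng(42)

def synthesize(real: pd.DataFrame, n: int) -> pd.DataFrame:
    """Sample n synthetic rows mimicking real's per-column distributions."""
    out = {}
    for col in real.columns:
        if real[col].dtype.kind in "if":        # numeric: fit a lognormal
            logs = np.log(real[col].clip(lower=0.01))
            out[col] = rng.lognormal(logs.mean(), logs.std(), n)
        else:                                   # categorical: resample by frequency
            freqs = real[col].value_counts(normalize=True)
            out[col] = rng.choice(freqs.index.to_numpy(), size=n, p=freqs.to_numpy())
    return pd.DataFrame(out)

# "What if" stress set: baseline traffic plus a made-up burst of small,
# sub-threshold scam payments that current rules were never tuned for.
baseline = synthesize(real_txns, 50_000)
scam_burst = synthesize(real_txns, 500)
scam_burst["amount"] = rng.uniform(10, 99, 500)
stress_set = pd.concat([baseline, scam_burst], ignore_index=True)
```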

“The use of digital twins is an advanced form of simulation that can quantify the impact of different scenarios to determine how they will impact banks’ operational fraud controls. The combination of synthetic data and simulation is a useful tool to help firms prepare for the unexpected,” Stewart said.
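To make the digital-twin idea concrete, here is a toy simulation, not SAS's methodology: synthetic legitimate and scam payment amounts are replayed through a single amount-threshold control, and a scenario knob shrinks scam ticket sizes to quantify how the control's detection rate erodes.

```python
# A toy "digital twin" of one fraud control. Replaying synthetic traffic
# through the control under different scenario parameters quantifies how
# each scenario would affect detection and false-positive rates.
import numpy as np

rng = np.random.default_rng(7)

def run_scenario(block_threshold: float, scam_mean: float,
                 n_legit: int = 100_000, n_scam: int = 1_000):
    """Return (detection rate, false-positive rate) for one scenario."""
    legit = rng.lognormal(4.0, 1.0, n_legit)        # assumed legitimate amounts
    scams = rng.lognormal(scam_mean, 0.5, n_scam)   # scenario knob: scam sizing
    detection = (scams > block_threshold).mean()
    false_pos = (legit > block_threshold).mean()
    return detection, false_pos

# Scenario sweep: scammers shrink ticket sizes to slip under the control
for scam_mean in (7.0, 6.0, 5.0):
    det, fpr = run_scenario(block_threshold=500.0, scam_mean=scam_mean)
    print(f"scam_mean={scam_mean}: detects {det:.0%}, false positives {fpr:.2%}")
```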

Stewart said one organization used synthetic data to simulate certain types of high-profile risks. “This allows us to test strategies without the PII concerns associated with real customer data,” he said. One of the organization’s customers is also considering using a synthetic data generator instead of staging a mirror image of a production environment. He said this approach addresses data privacy concerns and the costs associated with building a development environment with real data.

“Ultimately, synthetic data allows us to test more challenging strategies, resulting in better detection rates and better coverage,” Stewart added.

But even when a company uses synthetic data to train models, data management can't take a back seat, Stewart said. Banks will need data managers who understand the viability of certain features, especially in terms of biases that can affect engagement, credit decisions and account-closure decisions.
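One concrete check a data manager might run, offered here as an illustration since the article names no specific test, is the "four-fifths" disparate-impact ratio on a model's decline decisions, computed per group on a scored holdout set with assumed column names:

```python
# A minimal disparate-impact ("four-fifths rule") check on decline decisions.
# Column names `group` and `declined` are assumed, not from the article.
import pandas as pd

def disparate_impact(df: pd.DataFrame, group_col: str, declined_col: str) -> pd.Series:
    """Each group's approval rate relative to the best-treated group.
    Ratios below ~0.8 warrant a review of the features driving decisions."""
    approval = 1 - df.groupby(group_col)[declined_col].mean()
    return approval / approval.max()

# Example: group B is approved at ~84% of group A's rate -- worth reviewing
df = pd.DataFrame({"group": ["A"] * 500 + ["B"] * 500,
                   "declined": [0] * 450 + [1] * 50 + [0] * 380 + [1] * 120})
print(disparate_impact(df, "group", "declined"))
```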

Stewart advocates the use of hybrid machine learning, which combines supervised and unsupervised techniques to identify anomalous behavior that has not previously been tracked. “Anyone who does not use device, geolocation and biometric markers as part of their digital fraud strategy is vulnerable to synthetic identities and first-party fraudsters,” he said.
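As a minimal sketch of the hybrid approach Stewart describes, the example below blends a supervised classifier trained on sparse historical fraud labels with an unsupervised IsolationForest that scores novelty without any labels; the feature set standing in for device and geolocation signals is illustrative, as is the random training data.

```python
# Blend a supervised model (knows labeled, historical fraud patterns) with
# an unsupervised anomaly detector (flags behavior never seen or labeled).
# The four features are illustrative stand-ins for device/geolocation signals.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier, IsolationForest

rng = np.random.default_rng(1)
X = rng.normal(size=(5_000, 4))              # e.g., device age, geo distance, amount, hour
y = (rng.random(5_000) < 0.02).astype(int)   # sparse historical fraud labels

supervised = GradientBoostingClassifier().fit(X, y)
anomaly = IsolationForest(random_state=1).fit(X)   # trained with no labels at all

# Calibrate novelty scores against the training data's range
train_novelty = -anomaly.score_samples(X)          # lower raw score = more anomalous
lo, hi = train_novelty.min(), train_novelty.max()

def hybrid_score(x: np.ndarray, w: float = 0.5) -> np.ndarray:
    """Weighted blend of known-pattern risk and novelty; higher = riskier."""
    known = supervised.predict_proba(x)[:, 1]
    novel = np.clip((-anomaly.score_samples(x) - lo) / (hi - lo + 1e-9), 0, 1)
    return w * known + (1 - w) * novel

print(hybrid_score(rng.normal(size=(3, 4))))
```

The design point of the blend is that the supervised term can only repeat what historical labels taught it, while the anomaly term fires on behavior no label has ever described, which is exactly the gap in history-only fraud models discussed above.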