
In Ryan Schmiedl’s work protecting JPMorgan Chase from all kinds of fraud, business email compromise has recently been the most damaging type of attack.
Fraudsters look for the weakest link, the least protected point, Schmiedl said, and they often find it somewhere inside the corporate client.
“In many cases, they’re attacking businesses because there are a lot of people in corporate entities and they don’t communicate much,” said Schmiedl, the bank’s global head of payments, trust and security, during a panel discussion at Fintech Connect last week.
He oversees JPMorgan’s efforts to detect fraud and financial crime through fraud controls, sanctions screening, know-your-customer checks and other means. Before joining the bank, he held a similar role at Amazon.
“I can’t tell you how many times our clients have been socially engineered,” he said.
Fraudsters often send genuine-looking emails that appear to come from real suppliers or partners. The message might say the company is changing bank accounts and direct the recipient to send money to the new account, often backed by a convincing but bogus website created by the fraudster.
When a bank employee becomes suspicious of a corporate transaction and calls the customer to ask whether they are sure they want to send the money, the customer usually says “yes” because they believe the transaction is legitimate. The customer doesn’t realize they have been scammed until the real supplier calls a few days later to say the payment never arrived.
To catch such incidents, along with the many other types of fraud banks continue to encounter, JPMorgan is using large language models, technology that can process large amounts of text and that underpins ChatGPT, the popular artificial intelligence chatbot.
It’s part of a trend in which many organizations, including banks, payment networks like Swift, and online gambling companies like Caesars Entertainment, are moving from more basic machine learning to advanced artificial intelligence to track down bad actors and suspicious transactions.
JPMorgan’s use of large language models
JPMorgan’s fraud detection technology has evolved from basic business rules and decision trees to machine learning. More recently, the bank has been using AI to extract entities, such as company and people names, from unstructured data and analyze them for signs of fraud. One example is the use of large language models to detect signs of compromise in emails.
“There’s an inherent signal in every email you create,” Schmiedl said. “Actors trying to create fraudulent emails basically tend to use different patterns, and you can learn those patterns with machine learning.”
The bank is using large language models to examine patterns that are close together and those that are far apart to understand context and association.
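As a rough illustration of the idea, and not a description of JPMorgan’s actual system, an embedding model can compare an incoming email against phrasing seen in known business email compromise attempts and flag close matches for review. The model name, example phrases and threshold below are all assumptions.

```python
from sentence_transformers import SentenceTransformer, util

# Illustrative only: a small open-source embedding model stands in for the
# (undisclosed) large language model described in the article.
model = SentenceTransformer("all-MiniLM-L6-v2")

known_bec_phrases = [
    "We have recently changed our bank account, please update the payment details.",
    "This invoice is urgent, wire the funds today to the new account below.",
]
incoming_email = (
    "Hi, our company switched banks last week. Kindly send this month's payment "
    "to the account listed in the attached invoice as soon as possible."
)

known_emb = model.encode(known_bec_phrases, convert_to_tensor=True)
email_emb = model.encode(incoming_email, convert_to_tensor=True)

# Cosine similarity over embeddings captures relationships between words that
# sit far apart in the text, not just exact keyword overlap.
scores = util.cos_sim(email_emb, known_emb)
if scores.max() > 0.6:  # threshold chosen arbitrarily for illustration
    print("Flag for review, similarity:", float(scores.max()))
```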
“We do that for a lot of different things, whether it’s looking at a Telegram note or doing a sanctions screening, where I’m matching the list to the note,” Schmiedl said, without revealing which large language model the bank uses.
For example, a large language model could be used to match a list of sea vessels against multiple data sources and flag cases where an item from the list appears next to a street address, which suggests the match is part of an address rather than a ship, making it a likely false positive.
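A minimal sketch of that kind of context check, using only the Python standard library and entirely made-up vessel names, might fuzzy-match list entries against free text and treat matches that sit inside something resembling a street address as likely false positives:

```python
import difflib
import re

# Hypothetical list of vessel names to screen against free-text records.
VESSELS = ["Ocean Star", "Grand Aurora", "Pacific Dream"]

# A crude street-address pattern: a house number followed by words and a
# common street suffix. Real screening systems use far richer context.
ADDRESS_PATTERN = re.compile(
    r"\b\d{1,5}\s+[A-Z][\w\s]*\b(Street|St|Avenue|Ave|Road|Rd|Drive|Dr)\b"
)

def screen_record(text: str, threshold: float = 0.85) -> list[dict]:
    """Return potential vessel-name matches with a false-positive hint."""
    hits = []
    tokens = text.split()
    for vessel in VESSELS:
        n = len(vessel.split())
        for i in range(len(tokens) - n + 1):
            candidate = " ".join(tokens[i:i + n])
            score = difflib.SequenceMatcher(
                None, vessel.lower(), candidate.lower()
            ).ratio()
            if score >= threshold:
                # If the match sits inside something that looks like a street
                # address, treat it as a likely false positive.
                window = " ".join(tokens[max(0, i - 3):i + n + 3])
                hits.append({
                    "vessel": vessel,
                    "matched_text": candidate,
                    "score": round(score, 2),
                    "likely_false_positive": bool(ADDRESS_PATTERN.search(window)),
                })
    return hits

print(screen_record("Invoice sent to 42 Grand Aurora Avenue, Springfield"))
print(screen_record("Cargo transferred from vessel Grand Aurora at port"))
```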
“Right now we have hundreds of models, and they look at a lot of different things, whether they’re behavioral, whether they’re payment-related, whether they’re new accounts, just assessing risk and trying to figure things out like that,” Schmiedl said.
He said the bank only uses data within its ecosystem to train large language models, noting the dangers of using large language models that collect data from the internet, as ChatGPT does.
“If you start using these models and external data, you start to see that what is presented as fact is not fact,” Schmiedl said. “You have to make sure that the data you have has been vetted and verified, and that it is true.”
Spotting payment fraud
International payments messaging organization Swift is working with a number of technology partners including Google and Microsoft to build new artificial intelligence models, according to Kalyani Bhatia, head of global payments.
“We really believe that this will help us add to our existing rules-based engine and increase the success rate of fraud detection,” she said.
Swift is incorporating AI into some existing products to improve them, she said.
For example, it has a pre-validation service through which a payer’s bank can ask the receiving bank whether a given account is open and valid. Today, this is done through application programming interfaces.
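What such an API interaction might look like, in a purely hypothetical sketch (the endpoint, fields and response shown here are assumptions, not Swift’s actual pre-validation interface):

```python
import requests

# Hypothetical gateway URL; real Swift pre-validation APIs use different
# endpoints, authentication and message formats.
PREVALIDATION_URL = "https://api.example-swift-gateway.com/v1/account-verification"

def prevalidate_account(creditor_account: str, creditor_bank_bic: str, token: str) -> dict:
    """Ask the receiving bank, via the network, whether an account is open and valid."""
    response = requests.post(
        PREVALIDATION_URL,
        headers={"Authorization": f"Bearer {token}"},
        json={
            "creditor_account": creditor_account,
            "creditor_agent_bic": creditor_bank_bic,
        },
        timeout=10,
    )
    response.raise_for_status()
    # Example response shape (illustrative): {"account_status": "open", "name_match": "partial"}
    return response.json()
```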
Swift can apply artificial intelligence to its historical data store of 10 billion transactions per year and find indicators of anomalies, which it then shares with bank members.
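One common way to mine a large historical transaction store for anomaly indicators, shown here only as an illustration with toy data rather than Swift’s actual approach, is an unsupervised model such as an isolation forest:

```python
import numpy as np
from sklearn.ensemble import IsolationForest

# Toy transaction features: amount, hour of day, and payments to this
# beneficiary in the past month. Real feature sets would be far richer.
rng = np.random.default_rng(42)
normal = np.column_stack([
    rng.lognormal(mean=7, sigma=1, size=5000),   # typical amounts
    rng.integers(8, 18, size=5000),              # business hours
    rng.integers(1, 30, size=5000),              # familiar beneficiaries
])

model = IsolationForest(contamination=0.01, random_state=0).fit(normal)

# A large payment at 3 a.m. to a never-before-seen beneficiary.
suspicious = np.array([[250_000.0, 3, 0]])
print(model.decision_function(suspicious))  # lower scores = more anomalous
print(model.predict(suspicious))            # -1 flags an outlier
```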
Swift also has an in-flight transaction screening service called Payment Controls, a rules-based engine that each bank can use to set its own thresholds for transactions that should get a second check. Artificial intelligence could also help improve that system, Bhatia said.
Swift also plans to score each transaction and, after processing, provide bank members with trend analysis and reports on fraud patterns and lessons learned.
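The thresholds in such a rules-based engine can be expressed very simply; the sketch below is an illustrative stand-in with made-up rules, not Swift’s Payment Controls logic:

```python
from dataclasses import dataclass

@dataclass
class Payment:
    amount: float
    currency: str
    destination_country: str
    new_beneficiary: bool

# Illustrative per-bank thresholds, chosen arbitrarily for this example.
BANK_RULES = {
    "amount_limit": {"USD": 100_000, "EUR": 90_000},
    "high_risk_countries": {"XX", "YY"},            # placeholder country codes
    "review_new_beneficiaries_above": 10_000,
}

def needs_second_check(p: Payment, rules: dict) -> bool:
    """Return True if the payment breaches any of the bank's thresholds."""
    if p.amount > rules["amount_limit"].get(p.currency, 50_000):
        return True
    if p.destination_country in rules["high_risk_countries"]:
        return True
    if p.new_beneficiary and p.amount > rules["review_new_beneficiaries_above"]:
        return True
    return False

print(needs_second_check(Payment(150_000, "USD", "DE", False), BANK_RULES))  # True
print(needs_second_check(Payment(5_000, "EUR", "FR", True), BANK_RULES))     # False
```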
Maria Christina Kelly, director of payments and fraud at Caesars Digital, the division of Caesars Entertainment that offers gambling sites and apps, said her team focuses on two broad categories of fraud.
The first is first-party fraud, also known as friendly fraud or account-owner fraud. In those cases, “you had a good time, you felt remorse, and now you charge back,” Kelly said. “That is distinct from third-party fraud, which we call hostile fraud. These people are attacking our site.” Such groups tend to buy consumer data, create fake accounts and use those accounts to extract money from Caesars.
Kelly has turned to third-party vendors to build artificial intelligence-based fraud detection models, which she is now training.
“Losses are a real problem,” Kelly said. “It requires a combination of people who really understand what’s going on, and then making sure the model gets the right material. It takes a lot of lifting, it takes a lot of focus, and you’re not going to go out on day one and have a beautiful model … you have to keep improving it, studying it and feeding it the right data.”
The dark side of artificial intelligence
Like everyone charged with protecting corporate data, Schmiedl, Kelly and Bhatia worry about fraudsters and criminals exploiting AI to commit fraud.
“It’s keeping me up at night,” Schmiedl said. “It’s become an increasingly common problem, and these adversarial attacks or these adversarial machine learning models are growing.”
He said JPMorgan was investing in technology and research to stay ahead.
“It’s a challenge,” he said. “Some of these things require continued investment, continued research, continued time and effort. There are a number of players who are making good progress in some areas,” such as detecting deepfake voices and photos.