December 5, 2023


In his latest annual shareholder letter, JPMorgan Chase CEO Jamie Dimon sounded more like the founder of a fintech start-up than the head of one of the world’s largest banks, a company with a history dating back to 1799. But then again, a focus on innovation is crucial to the longevity of an iconic company like JPMorgan.

“Artificial intelligence (AI) is a remarkable and groundbreaking technology. AI, and the raw materials that feed it, are critical to our company’s future success – the importance of implementing new technology cannot be overemphasized,” Dimon wrote in the letter.

JPMorgan has more than 300 AI use cases in production, covering areas such as marketing, customer experience, risk management, and fraud prevention.

Emerging technologies, including generative AI, large language models (LLMs), and ChatGPT, are also top of mind for the company. “We are envisioning new ways to augment and empower workers with artificial intelligence through human-centric collaboration tools and workflows, leveraging tools such as large language models, including ChatGPT,” Dimon said.

The launch of ChatGPT is reminiscent of the Netscape browser that heralded the Internet revolution of the mid-90s. It is important to note, however, that the adoption of generative AI needs to be part of a deliberate strategy that considers safe, responsible AI and stakeholder needs. While the technology has clear advantages, there are also dangers.

Security and Compliance

It may seem ironic, but earlier this year JPMorgan prohibited employees from using ChatGPT, and the company isn’t the only one. Major financial institutions such as Citigroup, Bank of America, Wells Fargo, and Goldman Sachs have also imposed restrictions on ChatGPT.

That shouldn’t be a surprise, nor should it be a disappointment. With banks having to contend with onerous regulations – know-your-customer (KYC) and anti-money-laundering (AML) laws – it’s important to take a more conservative approach when new technologies emerge. Security and compliance are sacrosanct.

Generative AI tools such as ChatGPT and GPT-4 carry clear risks. For example, models tend to hallucinate, generating content that is false or misleading.

Understanding why generative AI models respond the way they do is also extremely difficult. These systems are essentially “black boxes”: the largest models have hundreds of billions of parameters, making them nearly impossible to interpret.

There are also thorny issues of bias and fairness. Generative AI models are trained on large amounts of publicly available content, such as Wikipedia and Reddit, and they can carry the biases in that content into their output.

Finally, generative AI models are used primarily through APIs. This means banks will be sending information out of their own private data centers, creating compliance risks around privacy and data residency. In fact, security incidents have already occurred. In March of this year, OpenAI disclosed that payment information for its ChatGPT subscription service was exposed. For about 1.2% of subscribers, the incident revealed names, email addresses, and payment addresses, along with the last four digits of credit card numbers and card expiration dates. The leak was traced to a bug in an open-source library used by the service.
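To make the data-residency point concrete, here is a minimal sketch (illustrative, not taken from any bank’s actual implementation) of what calling a hosted model over its public REST endpoint looks like. The request format follows OpenAI’s chat completions API; the key observation is that whatever goes into the prompt leaves the organization’s infrastructure the moment the request is sent.

import os
import requests

# Illustrative only: the prompt text below travels over the public internet to
# a third-party endpoint, so it leaves the bank's own data center.
API_KEY = os.environ["OPENAI_API_KEY"]

payload = {
    "model": "gpt-4",
    "messages": [
        {"role": "user", "content": "Summarize the attached credit policy memo."},
    ],
}

response = requests.post(
    "https://api.openai.com/v1/chat/completions",
    headers={"Authorization": f"Bearer {API_KEY}"},
    json=payload,
    timeout=30,
)
print(response.json()["choices"][0]["message"]["content"])

Even with encryption in transit, the question of where the data is processed and retained remains, which is why many institutions restrict what may be placed in a prompt in the first place.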

Use Cases

Given the challenges and risks associated with generative AI, banks and financial services organizations need to take a cautious approach. That means it’s probably a good idea to avoid client-facing apps, at least for now.

Instead, a better approach is to start with internal use cases, especially those that do not involve personally identifiable information (PII). Marketing would be a good place to start, as creativity is a key strength of generative AI. While the technology isn’t quite ready to produce final drafts, it can help spark ideas and improve the results of marketing campaigns.
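As a hypothetical illustration of the “no PII” guardrail, a lightweight pre-processing step can mask obvious identifiers before a draft or briefing is used in a prompt. The patterns below are deliberately minimal and are no substitute for a proper data-loss-prevention tool; they simply show the idea.

import re

# Hypothetical, minimal redaction pass: masks email addresses plus US-style
# phone and card numbers before the text is sent to an external model.
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "PHONE": re.compile(r"\b\d{3}[-.\s]?\d{3}[-.\s]?\d{4}\b"),
    "CARD": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def redact_pii(text: str) -> str:
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

draft = "Follow up with jane.doe@example.com at 555-123-4567 about the card offer."
print(redact_pii(draft))
# -> Follow up with [EMAIL] at [PHONE] about the card offer.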

Another area of focus is help desk operations. Using natural language prompts, employees can describe their problem, and generative AI can provide helpful answers and even help kick off the resolution process. This reduces costs and increases efficiency.
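As a rough sketch of how such a flow could be wired together (an illustration, not a description of any particular bank’s systems), an internal assistant pairs a narrowly scoped system prompt with a hook into the existing ticketing tool. The chat() and create_ticket() functions below are stand-ins for whichever approved model endpoint and ticketing system an institution actually uses.

import itertools

_ticket_ids = itertools.count(1000)

def chat(messages: list[dict]) -> str:
    # Stand-in for a call to an approved chat model; a real implementation
    # would send `messages` to that endpoint and return the model's reply.
    return "Try restarting the VPN client from the system tray. ESCALATE"

def create_ticket(employee_id: str, description: str) -> str:
    # Stand-in for the ticketing system; returns a made-up ticket id.
    return f"IT-{next(_ticket_ids)}"

SYSTEM_PROMPT = (
    "You are an internal IT help-desk assistant. Answer only questions about "
    "company hardware, software, and access. If a technician is needed, end "
    "your reply with the word ESCALATE."
)

def handle_request(employee_id: str, description: str) -> str:
    reply = chat([
        {"role": "system", "content": SYSTEM_PROMPT},
        {"role": "user", "content": description},
    ])
    if reply.strip().endswith("ESCALATE"):
        ticket_id = create_ticket(employee_id, description)
        reply += f"\nTicket {ticket_id} has been opened for you."
    return reply

print(handle_request("e12345", "My VPN keeps disconnecting every few minutes."))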

Generative AI can also be a useful tool for helping employees gain insights from internal proprietary content. That’s what Morgan Stanley is doing with a pilot project based on OpenAI’s GPT-4 model. The app, which is not trained on any client information, lets financial advisors ask questions that are answered from the firm’s own research reports and commentary.
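The article does not describe how Morgan Stanley built its pilot, but the general pattern behind this kind of internal Q&A tool, often called retrieval-augmented generation, can be sketched simply: rank the firm’s documents by relevance to the advisor’s question, then ask the model to answer only from the retrieved excerpts. The similarity scoring below is a crude word-overlap stand-in for what would normally be embedding-based search.

def similarity(question: str, passage: str) -> float:
    # Crude stand-in for embedding similarity (shared-word overlap). A real
    # system would compare vector embeddings produced by an embedding model.
    q_words = set(question.lower().split())
    p_words = set(passage.lower().split())
    return len(q_words & p_words) / max(len(q_words), 1)

def build_prompt(question: str, passages: list[str], k: int = 3) -> str:
    # Pick the k most relevant passages and frame a grounded prompt for the model.
    ranked = sorted(passages, key=lambda p: similarity(question, p), reverse=True)
    context = "\n\n".join(ranked[:k])
    return (
        "Answer the question using only the research excerpts below. "
        "If they do not contain the answer, say so.\n\n"
        f"Excerpts:\n{context}\n\nQuestion: {question}"
    )

reports = [
    "Our 2023 outlook favors investment-grade credit over high yield.",
    "The semiconductor note raises the sector to overweight on AI demand.",
    "Municipal bond supply is expected to tighten in the second half.",
]
print(build_prompt("What is the view on the semiconductor sector?", reports, k=2))

The assembled prompt is then sent to the chat model; instructing the model to answer only from the supplied excerpts is also a practical way to limit hallucination.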

As generative AI technology becomes more stable, it will be easier to take on more complex projects.

Conclusion

The pace of innovation in generative AI has been impressive, but there are also notable risks, such as hallucination and security concerns. This is why banks need to take a thoughtful approach to this important technology. Rushing in could be a mistake. Instead, a good strategy is to start with generative AI applications for internal purposes that do not involve sensitive data. This can deliver real benefits while allowing time for the technology to mature.