Today’s essay was written by my bank nerd friend Todd Phillips. Todd is an Assistant Professor at the Georgia State University Robinson College of Business.

Generative artificial intelligence (GAI) agents — so-called because they can execute their users’ natural-language commands, just like human agents — appear to be the next big thing, with Citigroup declaring that they “could have a bigger impact on the economy and finance than the internet era.”

Although financial firms use so-called agentic GAI to support the work of their employees and grow their businesses through back-office automation, such software can also be used by individuals and nonfinancial firms to support their everyday tasks, including financial tasks. Ultimately, users may task applications like X’s Grok (given owner Elon Musk’s ambitions to create a financial super app), Apple’s Siri, or Google’s Gemini to pay their utility bills, recommend financial products, or manage investment portfolios with the press of a button on their phones. Firms may task their financial management software, powered by AI, to pay invoices and effectively manage their cash flows, with humans intervening only when necessary.

GAI agents working on behalf of financial institutions’ customers are likely to arrive soon, in large part because the technology needed to create them already exists. Policymakers and financial institutions must wrestle with how these technologies may affect financial services. In this essay, I discuss how GAI agents working for financial institutions’ customers could operate and examine one possible risk they pose: bank runs.

How Personal GAI Agents May Operate

The core premise of GAI agents is that users will be able to direct their agents to take some action on their behalf using natural-language commands, and the software will execute the task. For that to work in financial services, there must be some technological way for the agents to (1) understand natural-language commands and (2) access users’ financial accounts. The tools for doing so are open finance application programming interfaces (APIs) and text-to-code large language models (LLMs).

Readers of this newsletter should already be familiar with open finance APIs, like those provided by Plaid and MX. In brief, these APIs allow financial institutions’ accountholders to access information about and send execution orders to their accounts via third-party applications. Those third-party applications can then serve as access points for customers, offering them functionality similar to what is offered by their institutions’ websites or applications, in addition to whatever additional functions the third-party applications provide.

Readers may not be as familiar with text-to-code LLMs. LLMs create new information from training data in response to the information with which they are prompted. The new information LLMs can create varies and includes new text, images, audio, and video. Text-to-code LLMs, which power tools like Cursor, generate computer code when prompted, allowing anyone to create code without needing to know how to code. Users can simply tell the LLM, in natural language, to create code to accomplish whatever task they have in mind.

The combination of text-to-code LLMs and open finance APIs can allow GAI agents to execute transactions at the user’s command simply by incorporating both into the agents. When a user issues a command (e.g., “Transfer $500 from my savings to my checking account.”), the software can direct the user’s statement to the system’s text-to-code LLM, which generates code that calls the system’s open finance APIs to carry out the request. The software then runs that code, using those APIs to access the user’s accounts and initiate the requested transaction. If GAI agents are always on, they can also engage in real-time monitoring and execute transactions long after users’ commands have been given. A user may be able to give their GAI agent commands that it executes whenever pre-set parameters are met (e.g., “Transfer $500 from my savings to my checking account whenever the checking account drops below $100.”).
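To make the pipeline concrete, here is a minimal sketch of the command-to-execution flow described above. Everything in it is hypothetical: the function names, the plan format, and the stub API client are invented for illustration and do not reflect any real LLM or open finance provider’s interface.

```python
# Illustrative GAI agent pipeline: natural-language command ->
# LLM-generated plan -> open finance API calls. All names are hypothetical.

def llm_generate_plan(command: str) -> list[dict]:
    """Stand-in for a text-to-code LLM call that turns a natural-language
    command into structured API operations. A real agent would prompt an
    LLM here; this stub hard-codes the essay's example command."""
    return [{
        "action": "transfer",
        "from_account": "savings",
        "to_account": "checking",
        "amount_usd": 500,
    }]

class FakeOpenFinanceAPI:
    """Toy client standing in for an open finance API like Plaid's or MX's
    (their real interfaces look nothing like this)."""
    def __init__(self):
        self.log = []

    def transfer(self, src: str, dst: str, amount: int) -> None:
        # A real client would authenticate and hit the provider's endpoint.
        self.log.append((src, dst, amount))

def execute_plan(plan: list[dict], api: FakeOpenFinanceAPI) -> None:
    """Run each LLM-generated step against the API client."""
    for step in plan:
        if step["action"] == "transfer":
            api.transfer(step["from_account"], step["to_account"],
                         step["amount_usd"])

api = FakeOpenFinanceAPI()
plan = llm_generate_plan("Transfer $500 from my savings to my checking account.")
execute_plan(plan, api)
print(api.log)  # [('savings', 'checking', 500)]
```

The always-on variant described above would simply wrap `execute_plan` in a loop that polls account balances and fires whenever the user’s pre-set condition is met.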

This combination is fundamentally different from the fintech apps that have come before, which leveraged open finance APIs but not GAI. Take, for example, Digit (now Oportun), which would analyze each user’s spending habits and migrate excess cash from their checking account to their savings account. The difference offered by GAI agents is their flexibility. Whereas Digit required developers to create software with rigid rules to enable its narrow functionality, GAI agents hold the promise of allowing users to implement the same feature on their own, without knowing how to code, as well as countless other features we haven’t yet thought up.

GAI agents are therefore poised to be much more powerful than existing fintech apps, with much more expansive risks. This is particularly true because they may be able to take users’ vague commands and interpret them in unpredictable ways. The command “transfer $500 from my savings to my checking account whenever the checking account drops below $100” is fairly simple for GAI agents to understand; “maximize my bank account’s interest income” is something else entirely.

Run Risk

Let’s take this a step further and incorporate real-time data feeds, like those used by high-frequency traders, into GAI agents. If this occurs, GAI agents will not only help banks’ customers avoid overdrafts, as in the above example, but also proactively offer product suggestions, such as identifying the best-priced products offered by various financial institutions and recommending them to their users. GAI agents equipped with open finance APIs can not only transfer assets between accounts at the same institution, as in the above example, but can also transfer assets between accounts at different institutions via ACH, RTP, FedNow, or some other method. This means that GAI agents could, for example, identify the bank offering the highest rate from among the various options, take the initiative to open an account for their users, and move users’ assets from their old accounts to their new ones. One can imagine Apple proactively sending iPhone users an offer to have Siri maximize their account interest income in exchange for a small fee. GAI agents could also migrate users’ assets from one bank to another at the earliest sign of trouble.
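The yield-chasing logic such an agent might run against a real-time rate feed can be sketched in a few lines. The banks, rates, and the `min_improvement` threshold below are all invented for the example; only the payment rails (ACH, RTP, FedNow) come from the essay.

```python
# Toy illustration of rate-shopping logic a GAI agent might run.
# All banks, rates, and thresholds are invented for the example.

def best_rate_bank(rate_feed: dict[str, float]) -> str:
    """Pick the bank offering the highest deposit rate."""
    return max(rate_feed, key=rate_feed.get)

def maybe_migrate(current_bank: str, balance: int,
                  rate_feed: dict[str, float],
                  min_improvement: float = 0.25):
    """Propose moving funds only if another bank beats the current rate
    by some margin (the margin is an invented parameter, not something
    any real agent necessarily uses)."""
    target = best_rate_bank(rate_feed)
    if rate_feed[target] - rate_feed[current_bank] >= min_improvement:
        return {"open_account_at": target,
                "transfer_usd": balance,
                "rail": "FedNow"}  # could equally be ACH or RTP
    return None  # stay put: no rate worth the switch

rates = {"Bank A": 4.10, "Bank B": 4.60, "Bank C": 3.95}
print(maybe_migrate("Bank A", 250_000, rates))
# {'open_account_at': 'Bank B', 'transfer_usd': 250000, 'rail': 'FedNow'}
```

Swap the rate feed for a feed of negative news about `current_bank` and the same structure yields the flight-to-safety behavior discussed next.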

This capability poses real and significant risks to financial institutions and the financial system by enabling runs. Although bank accounts have historically been considered sticky, recent events have demonstrated that this is less true now. We all recall the spring 2023 banking panic, in which Silicon Valley Bank’s depositors ran in response to a press release announcing that the bank had sold its portfolio of available-for-sale assets at a loss and would be raising capital from other sources. Uninsured depositors rushed to withdraw $100 billion in two days, and the bank was quickly placed into receivership. GAI agents are poised to make deposits less sticky than they have ever been. Imagine that, rather than networks of venture capitalists encouraging their startups to withdraw funds, a large share of the bank’s uninsured depositors had used GAI agents to manage their deposit accounts with instructions to minimize the risk of loss. Relying on real-time data feeds, the GAI agents would incorporate the bank’s press release and other financial statements into their corpus of information, observe that the bank’s riskiness had increased, and autonomously decide to migrate their users’ assets to other banks. Although each GAI agent would make the decision to move its user’s assets based on an individualized assessment of that user’s needs, the result would be a bank run.

That there is likely to be only a small number of LLMs magnifies these risks. GAI agents built atop the same LLM will respond to information and commands in identical manners. If GAI agents operated on a large number of different code bases, their reactions to the same information and commands would differ; some would observe a press release announcing losses and decide to transfer their principals’ assets, but others would wait for further information before making any sudden move. But because LLMs are incredibly expensive to develop, there will only ever be three or four on the market from which GAI agents may be created, meaning that 25% or more of GAI agents will respond in the same manner to new information, a phenomenon known as herding. If 25% of a bank’s depositors decide to leave at once, that will push the bank into insolvency.
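A toy simulation makes the herding arithmetic concrete. The model shares and agent count below are invented; the point is only that when a large fraction of agents sits atop one model, bad news triggers their withdrawals as a single correlated block rather than a trickle.

```python
# Toy simulation of herding: agents built atop the same underlying LLM
# react identically to a negative signal, so outflows arrive in blocks.
# Model market shares and agent counts are invented for illustration.

import random

random.seed(0)  # deterministic for the example

def simulate_run(n_agents=1000,
                 model_shares=(0.40, 0.30, 0.20, 0.10),
                 fleeing_models=(0,)):
    """Assign each agent to one of a few foundation models; every agent
    built on a 'fleeing' model withdraws at once after bad news.
    Returns the fraction of agent-managed deposits exiting together."""
    models = random.choices(range(len(model_shares)),
                            weights=model_shares, k=n_agents)
    fleeing = sum(1 for m in models if m in fleeing_models)
    return fleeing / n_agents

# With ~40% of agents on the same model, a negative signal pulls roughly
# 40% of these deposits out simultaneously.
print(f"{simulate_run():.0%} of agent-managed deposits exit at once")
```

Diversity of code bases only helps if the underlying models are diverse too; with three or four foundation models, the largest correlated block stays large no matter how many wrappers are built on top.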

Banks’ and Regulators’ Limits

With GAI agents able to rely on open finance APIs, banks may ultimately be required to treat any account connected via API as subject to redemption at any moment. Today, when depositors use software reliant on open finance APIs to access their accounts, financial institutions do not know which piece of software is being used — all banks observe is that the customer has connected the account to some third-party service. Banks may not know whether some GAI agent has been given instructions to manage those accounts. But even if they are made aware (by, for example, signing contracts with API providers that require the identification of which services accountholders are using), banks must assume that all accounts connected to GAI agents are subject to run, even though the GAI agent may never have been given an instruction to maximize yield or minimize losses. Banks could prohibit depositors from using GAI agents to interface with their accounts, but since they may consider it a competitive advantage to allow accountholders to do so, it is not practical to expect banks themselves to stop this run risk.

Like banks themselves, regulators are not in a position to address this run risk, as all available regulatory options have limits. They could impose redemption gates that prevent depositors from withdrawing large quantities of funds when redemption requests are high. This would slow GAI agents’ withdrawals but may ultimately incentivize other depositors to run. They could impose heightened capital or high-quality liquid asset requirements on banks for accounts connected to open finance APIs. This would allow banks to better withstand runs, but would also compress banks’ net interest margins, reducing profitability and maybe even making some — most likely the community banks that rely more on traditional banking than their larger competitors — unprofitable. They could prohibit banks from paying interest on accounts connected to these APIs, but that would prevent beneficial functionality from being used.

What would be ideal is to regulate the GAI agents themselves, but that authority is nonexistent. If GAI agents are able to completely manage users’ bank accounts, that would make them akin to deposit brokers. There are two problems with this. First, GAI agents may not even be deemed deposit brokers as the law currently defines the term. Second, because, in the past, brokered deposits were easily identifiable — customers would transfer cash to the deposit broker, who would place it with various banks — the law only regulates the way that banks interface with brokered deposits, not deposit brokers themselves. Similarly, the Bank Services Company Act (BSCA) is inapplicable. That law regulates companies that are fully owned by insured banks or that provide services to banks. The GAI agents discussed here are neither; if anything, they are providing services to banks’ customers, not the banks themselves. Moreover, even if the provider of a GAI agent has entered into a contract with banks to perform some service, the BSCA is unlikely to apply to other aspects of the firm’s business.

Conclusion

The ability for individuals and nonfinancial firms to have their financial lives assisted by GAI agents holds incredible promise. But it also poses incredible risks. Policymakers should be thinking about how these tools can be used and whether current laws are sufficient to address them. They may not be.

Alex Johnson