The Roman emperor Valens pours money into a coffer. 1600-1614.

3 FINTECH NEWS STORIES

#1: Who Is Responsible for Scams?

What happened?

Meta teamed up with a couple of UK banks to stop scams:

The Fraud Intelligence Reciprocal Exchange (FIRE) is a “threat intelligence sharing program” that lets banks share information with Meta directly so the tech giant can use it to stop scammers, according to a Wednesday (Oct. 2) press release.

NatWest and Metro Bank are the first financial institutions in the U.K. to take part in the program, and others are scheduled to join, the release said.

An early pilot of FIRE brought down a “significant concert ticket scam network” targeting people in the U.K. and the United States, according to the release. Data shared between the banks and Meta allowed the company to remove around 20,000 scam accounts.
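
Meta hasn’t published what these shared records actually look like, so treat the following as a guess, not a spec: a minimal Python sketch (all field names invented) of the kind of de-identified signal a bank’s fraud team might pass to Meta under a program like FIRE.

```python
# Purely illustrative: FIRE's data format isn't public, so every field name
# here is an assumption. The core idea is a de-identified pointer from a
# bank's fraud team to a suspect account on Meta's platforms.

from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class ScamIntelReport:
    reporting_bank: str          # e.g., "NatWest" or "Metro Bank"
    platform_profile_url: str    # the suspected scammer's Facebook/Instagram page
    scam_category: str           # e.g., "purchase_scam:concert_tickets"
    confirmed_by_victim: bool    # did a customer report an actual loss?
    reported_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

report = ScamIntelReport(
    reporting_bank="NatWest",
    platform_profile_url="https://facebook.com/some-ticket-seller",  # hypothetical
    scam_category="purchase_scam:concert_tickets",
    confirmed_by_victim=True,
)
```

Aggregate enough of these reports and you can start clustering them into networks, which is presumably how the pilot got from individual bank referrals to roughly 20,000 removed accounts.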

Shortly after the launch of FIRE, Revolut blasted the initiative as insufficient, arguing that Meta should be compensating scam victims directly:

Revolut said the pact “falls woefully short of what’s required to tackle fraud globally.”

In a statement, Woody Malouf, Revolut’s head of financial crime, said that Meta’s plans to tackle financial fraud on its platforms amount to “baby steps, when what the industry really needs is giant leaps forward.”

“These platforms share no responsibility in reimbursing victims, and so they have no incentive to do anything about it. A commitment to data sharing, albeit needed, simply isn’t good enough,” Malouf added.

So what?     

This is just the latest chapter in a story that’s been playing out between banks and big tech in the UK over the last two years.

To quickly summarize:

  • In late 2022, the Payment Systems Regulator (PSR), a small subsidiary of the Financial Conduct Authority (FCA), proposed that banks and fintech companies should be on the hook for up to £415,000 when their customers lose money to payment scams.
  • Banks and fintech companies absolutely lost their shit over this proposal, arguing that they were only the middlemen moving money where their customers instructed them to (in technical terms, this type of money movement is known as an authorized push payment, or APP). The industry instead pointed the finger at big tech companies, which operate the communication platforms where many of these scams originate. Meta, owner of Facebook, WhatsApp, and many other such platforms, was a particular target. Revolut (which, let’s remember, has a dog in this hunt) released research stating that 60% of reported scams came from Meta.
  • This year (an election year in the UK), government sentiment around the rule shifted thanks to an enormous lobbying push from the financial industry, and the PSR adjusted its proposal. The final rule, which goes into effect today, lowers the maximum reimbursement limit to £85,000, split 50:50 between the sending and receiving banks (a quick sketch of that math follows this list).
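
To make the new math concrete, here is a quick sketch in Python. This is my own simplification, not the PSR’s spec: it ignores details of the full rule, including the gross negligence exception discussed below.

```python
# Back-of-the-envelope model of the final PSR reimbursement rule: APP scam
# losses are reimbursed up to an £85,000 cap, with liability split 50:50
# between the sending and receiving banks. This is a simplification that
# ignores details like the "gross negligence" exception.

CAP_GBP = 85_000

def reimbursement_split(loss_gbp: float) -> dict[str, float]:
    """Return what each bank owes the victim of an APP scam."""
    reimbursed = min(loss_gbp, CAP_GBP)  # losses above the cap stay with the victim
    return {
        "sending_bank_share": reimbursed / 2,
        "receiving_bank_share": reimbursed / 2,
        "unreimbursed_loss": max(0.0, loss_gbp - CAP_GBP),
    }

# A £120,000 scam under the final rule: each bank owes £42,500 and the victim
# absorbs the remaining £35,000. Under the original £415,000 proposal, the
# full loss would have been reimbursed.
print(reimbursement_split(120_000))
```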

Despite the rule going into effect today, there is not yet a formal mechanism for the sending and receiving banks to transfer money back and forth to each other and resolve disputes. So banks in the UK are reportedly going to be handling it manually for the time being (via channels like email), which seems like it will quickly become a complete clusterfuck.

Also, interestingly, the final rule includes an exception for payments authorized due to “gross negligence” on the part of the customer. This is a high bar for banks to meet, as it requires, among other things, that the customer authorized the payment after the bank explicitly told them it was a scam. That might explain why it was reported earlier this year that Revolut had begun requiring customers to take selfies holding up signs that said, “Revolut warned me this is likely a scam, and I am unlikely to get my money back” before it would agree to authorize certain suspicious transactions.

Ultimately, it seems unlikely that regulators in any country will try to force big tech companies to own a portion of the liability for scams that originate from their platforms. It just feels like a bridge too far (though if any regulators would be willing to give it a go, I’d think it would be European regulators).

The conversation that I would prefer to have (in the UK and the U.S.) is about what banks, fintech companies, and big tech companies are doing to stop fraudsters from creating accounts with them in the first place (and to identify and kick off those accounts when fraudsters do manage to sneak in). That’s the part of the story that doesn’t get enough attention, IMHO.

#2: What Happens If We Succeed?

What happened?

An Israeli fintech startup is launching an AI-powered chatbot that gives investment advice:

Tel Aviv-based Bridgewise has been given the green light by the Israel Securities Authority (ISA) to release a chatbot called Bridget later this month that can offer recommendations for which stocks to buy and sell in response to user queries. The startup is working with one of the country’s largest banks, Israel Discount Bank, to roll out the product. It plans to expand to a second Israeli bank’s investment platform in the coming months.

The move represents a significant — and controversial — milestone for generative AI. While global financial institutions have increasingly embraced chatbots for research and customer service in the nearly two years since OpenAI launched ChatGPT, regulators have been wary of the risks of deploying this technology for retail investing.

And Bud, a financial data platform provider in the UK, has announced a consumer banking agent powered by AI:

Our consumer agent is trained to understand a consumer’s financial history and position, and – using that information – continuously and autonomously direct tasks to achieve objectives. 

Today, it’s trained to improve the amount of money a consumer earns in interest, help ensure they meet their financial obligations and avoid entering an unnecessary overdraft by taking direct actions such as moving money autonomously between accounts like checking and savings. 
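
Bud hasn’t published how its agent actually decides anything, so here’s a hypothetical sketch of the simplest version of the behavior it describes. The account structure, buffer threshold, and sweep logic are all my inventions for illustration.

```python
# Hypothetical sketch of an autonomous "sweep" rule like the one Bud
# describes. Nothing here reflects Bud's implementation; the Account type,
# the buffer threshold, and the logic are invented for illustration.

from dataclasses import dataclass

@dataclass
class Account:
    name: str
    balance: float
    apy: float  # annual yield paid on the balance

def sweep(checking: Account, savings: Account, buffer: float = 500.0) -> float:
    """One pass of the agent's loop. Positive return = money pulled into
    checking (overdraft rescue); negative = idle cash pushed to savings."""
    if checking.balance < buffer:
        # Checking is below the safety buffer (or overdrawn): top it up
        # from savings to avoid an overdraft.
        needed = min(buffer - checking.balance, savings.balance)
        checking.balance += needed
        savings.balance -= needed
        return needed
    idle = checking.balance - buffer
    if idle > 0 and savings.apy > checking.apy:
        # Idle cash is earning the lower checking yield; move it to savings.
        checking.balance -= idle
        savings.balance += idle
        return -idle
    return 0.0

checking = Account("checking", 2_100.0, apy=0.001)
savings = Account("savings", 8_000.0, apy=0.045)
sweep(checking, savings)  # pushes 1,600 of idle cash into savings
```

Even this toy version makes the tension obvious: every dollar the agent rescues from an overdraft fee or sweeps into higher-yield savings is revenue the bank doesn’t earn.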

So what?     

The thing I keep getting hung up on with fintech companies’ initial forays into the new and exciting world of agentic financial services is that they don’t seem to be thinking much about the larger implications of what they’re building.

They just look at a specific opportunity, like “a large language model with access to up-to-date market information could probably pick stocks better than most humans” or “an autonomous AI agent with access to a consumer’s financial accounts could do a lot to optimize their finances,” and rush to build it.

They don’t ask, “What happens if we succeed?”

Now, to be clear, startup founders and operators rarely ever spend time thinking about long-term outcomes. That’s not their job!

However, as AI advances at an accelerating rate, it will become more important for founders and operators to think carefully about what success, in a narrow product-led sense, will mean for their customers and for the market as a whole.

For example, AI likely can do a better job picking stocks than humans can, but what happens to the stock market when everyone is using the same foundation models to pick stocks? Will we see herding behavior that increases the risks of bubbles? Will companies begin gaming their earnings to appeal to these AI models? Will these stock-picking AI agents be required to take the SIE and Series 57 exams? How will we know they are acting in the financial interests of their principals, and who is liable if they do not?

The same types of questions apply to the Bud example. It sounds great for the end customer, but why would a bank offer such an agent to its customers if the agent reduced the revenue that the bank was earning from services like overdraft protection? And if consumer banking agents make it easier for customers to chase the best prices in both lending and deposits, how does our fractional reserve banking system survive in its current form?      

#3: Are Big Banks Good for Financial Inclusion?

What happened?

JPMorgan Chase is planning to open 100 new branches in low-income communities:

JPMorgan Chase is working on opening nearly 100 new branches in low-income areas around the country, including America’s inner cities and rural towns where banks have been shrinking their footprint for years. 

Some of these so-called community centers come with the usual fixings—teller windows, ATMs, and bankers’ offices—but also have spaces where the bank will host small businesses and financial literacy workshops, open to the public. After piloting the model five years ago, JPMorgan is now expanding it nationwide.

“This is not just ‘do-gooding,’ this is business,” Dimon, the JPMorgan chief executive, said in an interview. “We measured these branches by number of customers, deposits, investments, and the model works.”

So what?

I give Chase a hard time in this newsletter a lot. Given that it’s the biggest bank in the U.S., I think that level of criticism is perfectly fair.

However, it’s also worth noting when the bank does something that moves the industry in a good direction, and this certainly qualifies.

As Jamie Dimon makes clear in his quotes to the Wall Street Journal, this initiative isn’t charity. Building branches in lower-income communities like Brooklyn, New Orleans, the South Side of Chicago, and the Crenshaw district of Los Angeles can be profitable. The data JPMC shares is compelling:

Data from Chase’s first community center branch in New York’s Harlem show that customers opened more checking accounts there than at any other branch in the neighborhood between 2019 and 2023. Four years after the expanded branch opened in 2019, personal savings balances there had grown 73%, according to JPMorgan.

And it presents a really interesting macro question — is a more consolidated and concentrated banking market actually better from a financial inclusion perspective?

I have no idea.

The established viewpoint is the opposite — having a larger, more diverse banking market populated by lots of community banks and credit unions is the best way to ensure access and inclusion. 

That’s what most regulators and public policy folks would probably say. And that’s what I think (plus fintech!), but depending on how this expanded initiative from Chase plays out (especially in rural communities), I might be willing to update my priors.


2 FINTECH CONTENT RECOMMENDATIONS

#1: Synapse Used Customer Funds For Reserve Requirements, Partner Bank, Ex-Staffers Say (by Jason Mikula, Fintech Business Weekly) 📚  

The more reporting that comes out about how Synapse operated its platform, the worse it gets. This latest reporting from Jason suggests, at least to me, that Synapse committed fraud by using end users’ funds to meet the reserve requirements of its partner banks.

To put it mildly, that’s not how that’s supposed to work, and if it’s true, someone may end up facing criminal liability when all is said and done.

#2: Shadow Banks Can Run—But Not Hide—From Bank Supervisors (by Steven Kelly, Without Warning) 📚

If you’re not an expert in private credit and the evolution of shadow banking since the Great Recession, but you are curious about these topics (this should describe like 98% of you), then you should be reading Steven Kelly. His newsletter (and research) is fantastic.


1 QUESTION TO PONDER

What are some specific examples of how fintech companies and banks have overused or misused their negotiating leverage in BaaS? 

I’m curious to learn what was common when fintechs (and BaaS middleware platforms) had all the power and what has changed now that the leverage has shifted to banks. 

I’m obviously happy to keep any examples you share with me off the record!

Alex Johnson