Editor’s Note — This article is sponsored by Plaid. As with all sponsored content in Fintech Takes, this article was written, edited, and published by me, Alex Johnson. I hope you enjoy it!


Paleontologists use the term Cambrian Explosion to describe the moment 540 million years ago when almost every major animal body type appeared in the fossil record. It was an “explosion” of biological diversity without parallel in Earth’s history. 

2025’s fraud landscape feels eerily similar. 

Within just the last couple of years, large‑language models (LLMs) and cheap generative‑AI tooling have spawned an entire ecosystem of new scams. Production costs have cratered, distribution has become more automated, and the evolutionary pressure on traditional fraud controls is merciless. 

Indeed, fraud attempts reported to the FTC hit $12.5 billion in 2024, up 25% year‑over‑year and likely well below the actual number (fraud is chronically underreported).

Why is this happening? And what can banks and fintech companies do about it?

I’m so glad you asked!

When the marginal costs for the bad guys approach near-zero …

Remember the “Nigerian Prince” scam? Did you (like me) ever wonder why those letters were riddled with typos? 

It turns out that this was an intentional strategy. The grammatical mistakes acted like a filter: anyone savvy enough to notice the errors wasn’t gullible enough to pay the “transfer fee.” The scammers wanted to weed out the folks least likely to fall for it, so they didn’t waste their limited time and money on them.

LLMs have flipped this dynamic on its head. Today, a fraud ring pays a few bucks for FraudGPT or WormGPT, feeds it leaked PII, and gets hundreds of tailored scripts in minutes. Mentions of malicious AI tools on dark‑web forums jumped 200% in 2024, according to Kela’s latest threat‑intel report.

When the marginal cost of deception trends toward zero, false positives stop being a problem. Fraudsters no longer need to self‑select for the gullible; they can blast everyone, trusting that the math is on their side.

Deepfakes & The Perfect Synthetic Identity

Last week, Business Insider detailed how a journalist cloned her own voice for eight bucks, called up her bank, and passed the bank’s voice ID authentication checks. The same piece also mentioned the now-infamous $25 million Hong Kong heist executed entirely over a deep‑faked video call. 

These are not isolated examples. According to a survey from Medius, more than half of businesses in the U.S. and U.K. have been targets of a financial scam powered by deepfake technology, with 43% falling victim to such attacks.

This is an extremely important point to emphasize — it’s not just that LLMs are lowering costs for fraudsters. They are simultaneously raising the quality of the fraudulent content being generated.

Sign up for Fintech Takes, your one-stop-shop for navigating the fintech universe.

Over 41,000 professionals get free emails every Monday & Thursday with highly-informed, easy-to-read analysis & insights.

This field is for validation purposes and should be left unchanged.

No spam. Unsubscribe any time.

The result is that it is now fast, easy, and cheap to spin up the perfect synthetic identity. Need a backstory? Ask the LLM to draft a five‑year Instagram timeline with geotagged photos and consistent slang. Need supporting docs? Spin up pay stubs, utility bills, or a perfect ID document and selfie that passes liveness checks.

Traditional KYC checks — government ID, selfie video, credit bureau header data — were built for attackers who paid real money to forge a driver’s license. They simply were not designed for a world in which the perfect synthetic identity can be instantly and inexpensively conjured by an AI model.


Sponsored by Plaid

Has your verification stack kept pace with evolving fraud? Traditional KYC checks weren’t built for deepfakes and AI-generated identities.


Join us at Plaid Effects 2025 on June 12 at 10am PT to see how Plaid is helping companies fight back—with smarter, multi-layered defenses built for now and what’s next.


Small Banks & Fintechs: Fraudsters’ New Favorite Prey    

When the cost of mounting a fraud attack falls to pennies, it suddenly makes economic sense to target any institution, no matter how small. Indeed, when all else is equal, fraudsters prefer to attack smaller banks and fintech companies because they are much less likely to have robust defenses and well-trained, well-staffed fraud investigation teams. 

According to Auriemma Group, synthetic ID and credit‑washing fraud rings already “disproportionately threaten small and midsize banks.” And that study comes from August of last year. 

Since then, the situation has undoubtedly gotten much worse for small institutions. In fact, in speaking with Plaid while researching this article, I learned that the vast majority of the most novel and innovative attacks being reported to Plaid by its customers are being directed at small banks and fintech companies.

And this brings us to the most important question: how can financial institutions (particularly small banks and fintech startups) survive this Cambrian Explosion in financial services fraud?

Shrink Your Attack Surface  

During the actual Cambrian period, 540 million years ago, the oceans filled with predators sporting eyes, jaws, and armor. Prey species survived by shrinking the exploitable surface area — evolving shells, burrowing habits, and camouflage. 

Financial institutions will have to do something similar, focusing on data that is expensive to fake and dangerous for criminals to manipulate at scale.

Two examples are worth elaborating on:

  1. Behavioral Biometrics. Behavioral biometrics are the unconscious tells that every legitimate customer leaves behind when they move through a digital experience. Keystroke cadence, swipe pressure, hesitation on date-of-birth fields, even the micro‑patterns in how a camera shakes while a selfie is captured — all of these tiny signals add up to a behavioral fingerprint. Crucially, the signals are time‑bound and generated live; a fraudster can clone a voice or Photoshop a driver’s license, but running a real‑time biomechanics emulator that perfectly mimics a human’s erratic thumb rhythm is prohibitively difficult. 
  2. Bank Transaction Data. Deposit accounts come with messy, expensive realities — paychecks that land on alternate Fridays, rent debits that hit at 8:00 AM, grocery swipes clustered around paydays, and the occasional overdraft fee that lingers for a week. Faking that rhythm at scale forces an attacker to float real money for weeks and exposes them to AML compliance reporting and ACH returns. The longer they try to preserve the illusion, the more it costs, until the economics flip and the account becomes a liability instead of an asset.

Sponsored by Plaid

AI, fraud, and credit are ever evolving. Is your data strategy built to keep up?

Effects 2025 is Plaid’s virtual event on turning data into action, with new tools for smarter lending, faster decisions, and better fraud defense.

(Featuring Rocket on how exactly AI is reshaping the path to homeownership.)

Live June 12 at 10am PT.


Implementation Considerations

Leveraging behavioral biometric data and bank transaction data to fight fraudsters sounds compelling in theory. In practice, it’s tricky. It can be difficult to know what to focus on.

Here are three suggestions: 

  1. Instrument what’s already inside the walls. Capture two streams at the source: the fine‑grained behavioral biometrics generated by your digital channels — keystroke cadence, device tilt, scroll velocity — and the real‑time cash‑flow patterns flowing through your ledger or core banking system. You already have a wealth of on-us data that you are (likely) not using to its full potential. Start there.
  2. Join a data‑sharing consortium. Fraud is a negative‑sum game; any intelligence you hoard merely reroutes attackers to the next weakest link. A shared pool of compromised device fingerprints, mule accounts, and recycled scam scripts turns each new incident into a network‑wide early‑warning signal.
  3. Fight LLMs with LLMs. Generative AI models are good at flagging recycled phishing language, exposing AI‑generated documents, and stress‑testing your own chatbots for prompt‑injection holes. Be sure to re‑tune thresholds and retrain models on fresh outcome data frequently. Remember, the goal is to use your LLMs to push attackers’ unit economics underwater before theirs does the same to you.

Conclusion  

Thanks to technology, fraud is no longer a constraint-based business. With near‑zero marginal costs and relentlessly improving models, fraudsters are now empowered to attack significantly larger surface areas while maintaining a positive expected return on their investment.

To survive, banks and fintech companies will need to be just as aggressive in adapting to this new environment, tightening their focus on data that is hard to fake, sharing intelligence with their peers, and enlisting robots to fight the bad guys’ robots.

Anything less, and they risk ending up a fossil.

Alex Johnson
Alex Johnson
In collaboration with:

AI, fraud, and credit are ever evolving. Is your data strategy built to keep up?

Effects 2025 is Plaid’s virtual event on turning data into action, with new tools for smarter lending, faster decisions, and better fraud defense.

(Featuring Rocket on how exactly AI is reshaping the path to homeownership.)

Live June 12 at 10am PT. Register here.

Sign up for Fintech Takes, your one-stop-shop for navigating the fintech universe.

Over 41,000 professionals get free emails every Monday & Thursday with highly-informed, easy-to-read analysis & insights.

This field is for validation purposes and should be left unchanged.

No spam. Unsubscribe any time.