The Bad Guys Have New Social Engineering Superpowers


Reading time: 6 minutes
Listening time: 6.5 minutes

AI-generated phishing emails now achieve a 54% click-through rate, 4.5 times higher than human-written phishing scams. They can clone your CEO's voice on a phone call and deepfake your CFO on a Zoom call. One documented case: a single deepfake voice scam extracted $25.6 million from one firm.

Three reasons to read this edition.

  1. The attacks have moved beyond email. A third of all 2025 social engineering incidents never touch an inbox. If your training starts and ends with "don't click bad links," you're defending against an enemy that no longer exists.
  2. There is a concrete five-step prevention framework, and the order matters. Most institutions skip straight to technology and wonder why employees still get manipulated.
  3. If you run a financial institution and an employee gets tricked because you lacked proper safeguards, the legal exposure is real. Reg E, GLBA, and FFIEC all have teeth. Total reported cybercrime losses reached $20.9 billion in 2025.

Prefer to listen? Check out the audio version.


The threat isn't what it was two years ago.

Social engineering used to be sloppy.

Nigerian prince emails. Misspelled URLs. Obvious fakes that most employees could spot from across the room.

That era is over.

Generative AI lets attackers scrape your LinkedIn profile, your company's press releases, and your executive team's communication patterns, then craft messages that sound exactly like someone you trust. The personalization takes about five minutes. The results are nearly identical to the real thing.

And email is now only one of the channels.

Everyone knows what "phishing" is by now. But what about "vishing"? Fraudulent phone calls using sophisticated voice clones have surged 442% in the past year. "ClickFix" campaigns (fake CAPTCHA prompts that trick employees into executing malicious commands) spiked 1,450% in early 2025.

Nation-state actors now open with casual "chit-chat" conversations before delivering payloads. A third of all 2025 social engineering incidents never touch an inbox.

Financial institutions are now facing all of these channels at once.


The dark web accelerates all of it.

Stolen credentials, phishing kits, and deepfake creation tools are bought and sold in underground marketplaces. Deepfake tool trading jumped 223% in recent years. The technical barrier to launching a sophisticated social engineering campaign has collapsed.


There is an actual plan, though, and it has five steps.

Most institutions respond to social engineering with a single tactic: annual phishing training.

One session. One checkbox. Then back to business until something goes wrong. A single annual session skips the foundation. The Fidelis Security framework sequences five steps in a specific order, and skipping ahead is where most programs break.

You build the culture before you build the controls.

The pyramid below reads bottom-up.
Culture, Training, and Policy form the human foundation.
Technical Controls and Incident Response build on top.
Skip a tier and the structure collapses.

Culture means leadership models secure behaviors. Reporting is encouraged, not penalized. Security champions are visible from the front desk to the C-suite. Employees flag suspicious contacts without fear of blame.

Training is continuous, role-specific, and threat-responsive. Treasury staff trained on wire fraud. HR on pretexting. IT on help desk manipulation. Simulated phishing and vishing campaigns measured quarterly.

Policy removes the ambiguity social engineers exploit. Dual-authorization on financial transactions. Callback verification using pre-registered numbers only. Frictionless incident reporting pathways that are clear and documented.
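The callback rule above has one non-negotiable detail: the verification number comes from your own pre-registered records, never from the request itself. A minimal sketch of that logic, in Python, with illustrative names (`PREREGISTERED_NUMBERS`, `callback_number`, the vendor ID) that are assumptions, not any specific system:

```python
# Sketch of callback verification: confirm a wire instruction only via a
# phone number registered BEFORE the request arrived. The number embedded
# in the incoming request is deliberately ignored, because an attacker
# controls that field. All names here are illustrative.

# Internal registry, populated at vendor onboarding, not at request time.
PREREGISTERED_NUMBERS = {
    "vendor-4471": "+1-555-0142",
}

def callback_number(vendor_id: str, number_in_request: str) -> str:
    """Return the only number staff may dial to verify this request."""
    registered = PREREGISTERED_NUMBERS.get(vendor_id)
    if registered is None:
        # No pre-registered contact on file: the wire holds until one exists.
        raise ValueError(f"No pre-registered number for {vendor_id}; hold the wire.")
    return registered
```

The design choice is the point: a "new" number supplied in an email or invoice never enters the verification path, which is exactly the ambiguity social engineers exploit.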

Technical Controls reduce the attack surface and catch what humans miss. The controls layer operates as defense-in-depth: concentric rings from perimeter to core. Perimeter tools (email filters, brand monitoring, dark web monitoring, patching) form the outer ring. Detection tools (behavioral analytics, AI-powered threat detection, anomaly alerting) form the middle ring. Phishing-resistant MFA and deception technology form the core.

Attackers must penetrate all three rings to reach protected assets.
No single tool covers the full attack surface.

Incident Response means documented playbooks, tested in tabletop exercises, with regulatory reporting workflows built in before the crisis hits. Escalation paths to legal, compliance, and executive leadership are pre-defined.

The 2026 AI threat requires protocols the original framework doesn't cover:

  • Out-of-band verification — Verify all financial instructions through a separate channel (for example, confirming a wire request received by email with a phone call to a pre-registered number) before executing.
  • Safe-word protocols — Pre-established authentication phrases between executives and finance/operations teams, used to confirm identity on live calls.
  • Voice biometric authentication — Real-time deepfake detection technology applied to high-value financial authorization workflows.
  • AI-tell recognition training — Teaching employees to recognize the subtle signs of AI-generated voice or video, including unnatural speech cadence, unusual urgency cues, and mismatched background sounds.
  • MFA bombing countermeasures — Protections against repeated fake login-approval notifications designed to wear down employees into clicking "approve." Auto-lock accounts and alert security after repeated MFA push rejections.

Can your institution check all five?

Every step reinforces the one before it. Policy without culture produces workarounds. Technology without training produces false confidence. Incident response without the four preceding steps produces chaos.


If you run a financial institution and you get tricked, the legal exposure is real.

Social engineering is not an IT problem. For financial institutions, it is a governance exposure with regulatory teeth.

  • Regulation E (rules for electronic fund transfers) requires timely, documented investigations and defensible claim adjudication when customers lose funds to fraud.
  • GLBA (customer data protection law) mandates customer information protection, including employee training and testing as a compliance obligation.
  • The FFIEC (bank examiner cybersecurity standards) Cybersecurity Assessment Tool specifically addresses social engineering in its evaluation framework.
  • NIST SP 800-53 (federal security control catalog) controls SI-4, AT-2, and CA-8 map directly to social engineering risk management.

NAIC Model 668 (insurer data security model law) requires licensed insurers to maintain information security programs that include social engineering controls and employee training — the same framework your own regulators are using to evaluate your operations.

For the first time, the FBI's IC3 report includes a dedicated section on artificial intelligence in cybercrime: 22,364 AI-related complaints totaling $893 million in losses. The tools your employees are being attacked with now have their own federal reporting category.


BEC alone caused $3 billion in losses in 2025. Consumer fraud losses hit $15.9 billion that same year. Total reported cybercrime losses crossed $20.9 billion for the first time.

When an employee gets manipulated into wiring funds to a spoofed vendor, the regulatory question is not "did the attacker fool them?" The question is: did the institution have documented training, tested verification procedures, and a governance-backed response workflow in place before the incident?

The exposure compounds.

Reputational damage accelerates customer attrition. Examiners don't just look at the incident — they open up the entire control environment and ask what the board knew. Insurance carriers investigate whether the institution met policy conditions before paying claims.

What this means for your insurance program.

The governance question lands differently when your Crime policy is sitting on the desk.

Most FI Crime forms carry a social engineering sub-limit (typically $250K to $1M) that was sized before deepfake-authorized wire transfers existed. Voluntary parting exclusions, which allow carriers to deny claims where an employee "willingly" transferred funds, were written for scenarios where the manipulation was obvious in hindsight. When the manipulation is a perfect voice clone of your CEO, the word "voluntary" becomes a coverage dispute.

This is where the five-step framework meets your renewal.

Carriers are already asking whether your institution has documented callback verification procedures, dual-authorization protocols, and tested incident response playbooks. These aren't best-practice suggestions anymore. They're underwriting requirements.


Five questions for your board.

  1. Has management tested our verification procedures against a deepfake voice scenario — not just email phishing — within the last 12 months?
  2. Do we have documented dual-authorization and callback verification protocols that use pre-registered numbers, and when were they last audited?
  3. If an employee authorizes a fraudulent wire transfer tomorrow because of a voice clone, can our GC produce the documented training record, the tested verification procedure, and the governance-backed response workflow that Reg E and GLBA require?
  4. What is our Crime policy's current social engineering sublimit, and does the policy language contemplate AI-generated impersonation?
  5. When was the last time the institution conducted a tabletop exercise that included an AI-assisted social engineering scenario?

Sources: Fidelis Security, CrowdStrike, Unit 42, FBI IC3, FTC

Coming in two weeks: the underwriter's side of this conversation.

If your controls haven't been updated since 2024, they weren't built for this.

The five-step framework gives you the sequence. The counter-AI protocols give you the 2026 upgrade. The five board questions give you the starting point for your next conversation with leadership.

But prevention is only half the equation. The other half is what happens when the underwriter opens your submission.

Jim Kardaras, head of Nationwide's Crime underwriting unit, joins the Wednesday Intelligence series to break down how social engineering is forcing a different underwriting conversation. What Crime underwriters used to look for, what they're asking now, and what your institution needs to have documented before your next submission hits a desk.

A special thanks to today's Boardroom Brief sponsor: Nationwide!

Stay Covered Everybody,

FLIP and Our Friends from the Nationwide FI team

P.S. Want to share this edition via text, email or social media? Simply copy-and-paste the link below:

https://lionspecialty.kit.com/posts/the-bad-guys-have-new-superpowers

And if this briefing was forwarded to you, subscribe directly here.

P.P.S. Nothing in this brief constitutes legal advice. These are the opinions of the founders, offered as market intelligence to help institutions ask sharper questions at their next insurance renewal.

LION Specialty

Everything you need to know to navigate the financial institution insurance market in ≈ 5 minutes per week. Delivered on Fridays.
