Reading time: 6 minutes

AI-generated phishing emails now achieve a 54% click-through rate, 4.5 times higher than human-written phishing scams. Attackers can clone your CEO's voice on a phone call and deepfake your CFO on a Zoom call. In one documented case, a single deepfake voice scam extracted $25.6 million from one firm. Three reasons to read this edition.
Prefer to listen? Check out the audio version.

The threat isn't what it was two years ago.

Social engineering used to be sloppy. Nigerian prince emails. Misspelled URLs. Obvious fakes that most employees could spot from across the room. That era is over. Generative AI lets attackers scrape your LinkedIn profile, your company's press releases, and your executive team's communication patterns, then craft messages that sound exactly like someone you trust. The personalization takes about five minutes. The results are nearly indistinguishable from the real thing.

And email is now only one of the channels. Everyone knows what "phishing" is. But what about "vishing"? Fraudulent phone calls using sophisticated voice clones have surged 442% in the past year. "ClickFix" campaigns, fake CAPTCHA prompts that trick employees into executing malicious commands, spiked 1,450% in early 2025. Nation-state actors now open with casual "chit-chat" conversations before delivering payloads. A third of all 2025 social engineering incidents never touch an inbox.

Here is what financial institutions are now facing: stolen credentials, phishing kits, and deepfake creation tools are bought and sold in underground marketplaces, and deepfake tool trading jumped 223% in recent years. The technical barrier to launching a sophisticated social engineering campaign has collapsed.

There is an actual plan, though, and it has five steps.

Most institutions respond to social engineering with a single tactic: annual phishing training. One session. One checkbox. Then back to business until something goes wrong. A single annual session skips the foundation. The Fidelis Security framework sequences five steps in a specific order, and skipping ahead is where most programs break. You build the culture before you build the controls. The pyramid below reads bottom-up.

Culture means leadership models secure behaviors. Reporting is encouraged, not penalized. Security champions are visible from the front desk to the C-suite.
Employees flag suspicious contacts without fear of blame.

Training is continuous, role-specific, and threat-responsive. Treasury staff trained on wire fraud. HR on pretexting. IT on help desk manipulation. Simulated phishing and vishing campaigns measured quarterly.

Policy removes the ambiguity social engineers exploit. Dual authorization on financial transactions. Callback verification using pre-registered numbers only. Frictionless incident reporting pathways that are clear and documented.

Technical Controls reduce the attack surface and catch what humans miss. The controls layer operates as defense-in-depth: concentric rings from perimeter to core. Perimeter tools (email filters, brand monitoring, dark web monitoring, patching) form the outer ring. Detection tools (behavioral analytics, AI-powered threat detection, anomaly alerting) form the middle ring. Phishing-resistant MFA and deception technology form the core. Attackers must penetrate all three rings to reach protected assets.

Incident Response means documented playbooks, tested in tabletop exercises, with regulatory reporting workflows built in before the crisis hits. Escalation paths to legal, compliance, and executive leadership are pre-defined.

The 2026 AI threat requires protocols the original framework doesn't cover:
Can your institution check all five? Every step reinforces the one before it. Policy without culture produces workarounds. Technology without training produces false confidence. Incident response without all four preceding steps produces chaos.

If you run a financial institution and you get tricked, the legal exposure is real.

Social engineering is not an IT problem. For financial institutions, it is a governance exposure with regulatory teeth. Regulation E (the rules for electronic fund transfers) requires timely, documented investigations and defensible claim adjudication when customers lose funds to fraud. GLBA (the customer data protection law) mandates customer information protection, including employee training and testing as a compliance obligation. The FFIEC Cybersecurity Assessment Tool (bank examiner cybersecurity standards) specifically addresses social engineering in its evaluation framework. NIST SP 800-53 (the federal security control catalog) includes controls (SI-4, AT-2, CA-8) that map directly to social engineering risk management. NAIC Model 668 (the insurer data security model law) requires licensed insurers to maintain information security programs that include social engineering controls and employee training. That is the same framework your own regulators are using to evaluate your operations.

For the first time, the FBI's IC3 report includes a dedicated section on artificial intelligence in cybercrime: 22,364 AI-related complaints totaling $893 million in losses. The tools your employees are being attacked with now have their own federal reporting category.

When an employee gets manipulated into wiring funds to a spoofed vendor, the regulatory question is not "did the attacker fool them?" The question is: did the institution have documented training, tested verification procedures, and a governance-backed response workflow in place before the incident? The exposure compounds. Reputational damage accelerates customer attrition.
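The two verification procedures named above, dual authorization and callback verification against pre-registered numbers, reduce to a simple gate on every outbound wire. Here is a minimal sketch of that logic; the function, field names, and data are hypothetical illustrations, not any vendor's system:

```python
# Hypothetical sketch: dual authorization + callback verification
# for an outbound wire request. All names and numbers are invented.

# Callback numbers registered at vendor onboarding -- never taken
# from the payment request itself, which an attacker controls.
REGISTERED_CALLBACK_NUMBERS = {
    "vendor-001": "+1-555-0100",
}

def approve_wire(request, approvals, callback_number_used):
    """Release a wire only if both controls pass."""
    # Control 1: dual authorization -- two distinct approvers required.
    if len(set(approvals)) < 2:
        return False
    # Control 2: callback verification -- only the pre-registered
    # number counts; a number supplied in the request is ignored.
    registered = REGISTERED_CALLBACK_NUMBERS.get(request["payee_id"])
    if callback_number_used != registered:
        return False
    return True

request = {"payee_id": "vendor-001", "amount": 250_000}

# A spoofed-vendor request fails even with two approvals, because
# the attacker's "updated" callback number isn't the registered one.
print(approve_wire(request, ["alice", "bob"], "+1-555-9999"))  # False
print(approve_wire(request, ["alice", "bob"], "+1-555-0100"))  # True
```

The design point is the one the framework makes in prose: the callback number lives in a record the attacker cannot edit, so a perfect voice clone still fails the gate.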
Examiners don't just look at the incident; they open up the entire control environment and ask what the board knew. Insurance carriers investigate whether the institution met policy conditions before paying claims.

What this means for your insurance program.

The governance question lands differently when your Crime policy is sitting on the desk. Most FI Crime forms carry a social engineering sub-limit (typically $250K to $1M) that was sized before deepfake-authorized wire transfers existed. Voluntary parting exclusions, which allow carriers to deny claims where an employee "willingly" transferred funds, were written for scenarios where the manipulation was obvious in hindsight. When the manipulation is a perfect voice clone of your CEO, the word "voluntary" becomes a coverage dispute.

This is where the five-step framework meets your renewal. Carriers are already asking whether your institution has documented callback verification procedures, dual-authorization protocols, and tested incident response playbooks. These aren't best-practice suggestions anymore. They're underwriting requirements.

Five questions for your board.
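The sub-limit gap is worth making concrete. Using two figures already cited in this edition, the top of the typical $250K-$1M sub-limit range and the $25.6 million deepfake voice-clone loss from the opening, the uncovered exposure is simple arithmetic:

```python
# Back-of-envelope: retained loss when a deepfake-authorized wire
# exceeds a typical social engineering sub-limit. Figures are the
# ones cited in this briefing, not any specific policy.
sub_limit = 1_000_000       # top of the typical $250K-$1M range
deepfake_loss = 25_600_000  # the documented $25.6M voice-clone case

uncovered = deepfake_loss - sub_limit
print(f"Uncovered exposure: ${uncovered:,}")  # Uncovered exposure: $24,600,000
```

A sub-limit sized for pre-deepfake fraud leaves more than 96% of a loss at that scale on the institution's own balance sheet, which is why the limit conversation belongs at renewal, not after a claim.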
Sources: Fidelis, CrowdStrike, Unit 42, FBI IC3, FTC

Coming in two weeks: the underwriter's side of this conversation.

If your controls haven't been updated since 2024, they weren't built for this. The five-step framework gives you the sequence. The counter-AI protocols give you the 2026 upgrade. The five board questions give you the starting point for your next conversation with leadership. But prevention is only half the equation. The other half is what happens when the underwriter opens your submission. Jim Kardaras, head of Nationwide's Crime underwriting unit, joins the Wednesday Intelligence series to break down how social engineering is forcing a different underwriting conversation: what Crime underwriters used to look for, what they're asking now, and what your institution needs to have documented before your next submission hits a desk.

A special thanks to today's Boardroom Brief sponsor: Nationwide!

Stay Covered Everybody,
FLIP and Our Friends from the Nationwide FI team

https://lionspecialty.kit.com/posts/the-bad-guys-have-new-superpowers

And if this briefing was forwarded to you, subscribe directly here.