Part 3: Your Crime insurance requires a human criminal to trigger a covered loss. AI-generated fraud doesn't always have one.


Reading time: 6 minutes
Listening time: 9 minutes

Our line-by-line Silent AI audit continues

This is Part 3 of our three-part Wednesday Intelligence series, The Six-Line Silent AI Audit.

This is the final installment and the complete reference document. Part 1 covered where wrongful act definitions break on AI-reliant board decisions and algorithmic discrimination claims in D&O and EPL policies. Part 2 mapped E&O and Cyber where the professional/product boundary is unsettled, deepfake wire fraud falls between coverage sections, and model poisoning has no standard trigger. If you've wound up here and didn't read the first two installments, links are at the end of this piece...

Today, Part 3 covers the last two lines, Crime/FI bond and Fiduciary Liability, then delivers the full audit framework, governance docs, and what underwriters at leading carriers are now asking institutions at renewal.

Three reasons to read this installment:

  1. A synthetic borrower who doesn't exist cannot commit a "dishonest act" under your Crime policy. AI-generated synthetic identity fraud may have no clean coverage answer in your current program.
  2. Underwriters at leading FI writers are starting to ask for AI system inventories, deepfake detection protocols, and bias audit records. What you can produce at renewal determines your negotiating position, not just your premium.
  3. The four governance documents described in this brief are moving from underwriting preferences to underwriting requirements. Building them now is both a regulatory compliance move and a renewal positioning asset.

Prefer to listen? Check out the audio version.

The Crime situation

Nearly every dishonest act definition in your Crime insurance assumes the perpetrator is a person.

That assumption runs through the last two of your six core policy lines. Both have active claim patterns. Both have coverage triggers built for a world where fraud required a human to carry it out.

A synthetic criminal who does not exist cannot commit a dishonest act.

An insurer wires a return of premium to a customer account built from AI-generated identity documents, fake personal data, and manufactured underwriting history. The customer does not exist. The insurer lost real money. But the FI bond's dishonest act trigger assumes a human perpetrator, and a fake identity built by an AI system does not supply one.

The National Insurance Crime Bureau projects a 49 percent rise in insurance fraud linked to identity theft in 2026. Nearly one in four referred claims now involves a fake identity.

In life insurance alone, synthetic identity fraud makes up 85 percent of identity fraud cases, according to RGA, at an estimated $30 billion a year. FinCEN Alert FIN-2024-Alert004 warned that criminals are using GenAI images with stolen or fake personal data to build synthetic identities for loan fraud, check fraud, and push payment fraud at scale. The alert created a dedicated SAR key term for the pattern.

Insurers face this on both sides. They are targets of synthetic fraud in their own claims shops. They also underwrite the coverage that may not respond when their clients get hit.

The deepfake piece

Deepfake CEO fraud makes the problem worse from the other direction.

Part 2 mapped that scenario through the cyber form. On the crime side, many social engineering sections require a fraudulent instruction from a third party posing as someone with authority. Whether an AI-generated voice counts as posing under the form is an open coverage question.

The voluntary parting exclusion is being tested against deepfake wire transfers right now.

When a tricked employee approves a transfer based on a fake video call, most forms treat that as a voluntary act outside coverage. In December 2025, Coalition became the first major cyber carrier to offer a deepfake response endorsement for forensic analysis and legal support. The endorsement exists because the standard forms do not respond.

Social engineering sublimits of $250,000 to $1 million were set for one-off fraud attempts. AI can generate hundreds of convincing requests at once. Swiss Re warned in its SONAR 2025 report that deepfakes may increasingly drive cyber insurance losses.

The fiduciary piece

Fiduciary liability is the sixth line. It is the least developed. It is also where the governance question will be answered first.

AI-assisted investment advice in retirement plan management raises fiduciary duty questions under ERISA that no court has resolved. The Supreme Court in Hughes v. Northwestern confirmed that ERISA fiduciaries are judged on an ongoing duty of prudence and monitoring. That standard will extend to AI-driven investment tools as they enter plan operations.

The litigation path is already being built.

In Lokken v. UnitedHealth Group, a federal court in February 2025 let breach of contract claims move forward against a health insurer whose AI tool had a 90 percent reversal rate on appealed denials. The insurer kept using it because only 0.2 percent of members appealed. In March 2026, the court ordered broad discovery into how the algorithm was designed, what governance records existed, and whether it was built to replace physician judgment.

Legal analysts have already extended the Lokken framework beyond health insurance.

Policyholders in property, casualty, and liability disputes can now seek discovery into whether AI replaced the adjuster's own judgment. The same path leads to AI-assisted plan decisions. Insurtechs using AI in coverage determinations or claims triage face the same discovery exposure Lokken opened for health insurers.

The U.S. Treasury published its AI in Financial Services report in December 2024 after receiving 103 comment letters from financial firms, fintechs, and trade associations confirming that AI is already used in retirement readiness apps, chatbots, portfolio management, and trade execution. In February 2026, Treasury released a Financial Services AI Risk Management Framework and a shared AI Lexicon. The DOL's September 2024 guidance extended cybersecurity rules to all ERISA-covered plans.

And at least one AI platform is now using machine learning to identify fiduciary breach claims against plan sponsors by scanning plan documents, filings, and court records. AI is accelerating the plaintiffs' bar's ability to spot the exposure before the plan sponsor knows it exists.

This is a 2027 to 2028 litigation risk, but the governance work needs to happen before your next renewal submission.

The governance framework

Underwriters at leading FI writers are already asking for documentation many boards have not yet built.

The coverage issues we've highlighted in this series are not news to the carriers writing your program. At the top of the FI market, the underwriting conversation has moved ahead of many buyers. The institution that arrives at renewal with governance records is in a very different conversation than one that cannot produce them. What you can show at renewal sets your negotiating position. At leading FI writers, governance records are moving from a pricing factor to a condition of coverage.

The four governance documents in this series are moving from preferences to requirements.

  • An AI Oversight Policy defines who owns AI governance, what decisions require committee review, and what the escalation protocol is when AI outputs affect covered activities. This is the document carriers ask for first. For MGAs operating under binding authority, capacity providers are beginning to require it as part of program audit cycles.
  • A Bias Audit Schedule covers any AI tool used in hiring, credit decisions, or benefits. A documented schedule meets emerging state rules under NYC Local Law 144, the Colorado AI Act, and similar laws. It also provides a coverage defense in EPLI and fiduciary claims. One document serves both functions.
  • An Employee Acceptable-Use Policy covers AI tools employees bring in without oversight. This is where most institutions have the least visibility and the most unpriced exposure. Underwriters are asking for this document at EPLI renewals now.
  • An AI Incident Response Playbook defines your protocol when an AI system produces a harmful output. At minimum, carriers expect it to cover AI system identification, failure detection, internal notification chain, carrier notification timing, and evidence preservation. This document is moving from a carrier preference to an underwriting requirement at leading FI programs.

Sources: NICB, FinCEN, Swiss Re, Lokken v. UnitedHealth Group, ArentFox Schiff

A note on vendor contracts...

Vendor contract review is necessary but not sufficient. Model output disclaimers and IP carve-outs in many AI vendor agreements may wipe out recovery long before indemnification caps ever come into play. Have legal review the full contract, not just the indemnification section.

The board question this installment surfaces: has management confirmed in writing which coverage section responds to a synthetic identity fraud loss or an AI-assisted fiduciary decision in your current program?

This completes the three-part series mapping all six lines. The full brief contains the complete gap analysis, governance framework, board questions, and regulatory timeline.

Want to walk through your current program against this framework? We do this work every day for regional insurers, MGAs, insurtechs, and community banks. Book a confidential conversation here.

In Case You Missed It!

A month ago we launched our Six-Line Silent AI Audit series, a three-part Wednesday Intelligence series mapping a financial institution's core policies against the AI exposures most insurance policies were never written to address.

Part 1 covered D&O and EPLI, where "wrongful act" definitions assume a human decided and algorithmic discrimination doesn't map to your form's coverage trigger. Part 2 covered E&O and Cyber, where the professional/product liability boundary for AI-assisted advice remains unsettled in the courts and deepfake wire fraud falls between three coverage sections without triggering any of them cleanly.

Read Part 1 here, or listen to the audio version here.

Read Part 2 here, or listen to the audio version here.

Stay Covered Everybody,

FLIP

P.S. Want to share this edition via text, email or social media? Simply copy-and-paste the link below:

https://lionspecialty.kit.com/posts/your-crime-insurance-dishonest-act-definition-requires-a-human-perpetrator-ai-generated-fraud-doesn-t-have-one

And if this briefing was forwarded to you, subscribe directly here.

P.P.S. Nothing in this brief constitutes legal advice. These are the opinions of the founders, offered as market intelligence to help institutions ask sharper questions at their next insurance renewal.

LION Specialty

Everything you need to know to navigate the financial institution insurance market in ≈ 5 minutes per week. Delivered on Fridays.
