We mapped E&O and Cyber line by line. Four cases spanning three coverage issues and zero settled answers. Part 2 of our Silent AI Audit continues...


Reading time: 4 minutes

Our line-by-line Silent AI audit continues

The operational lines are next.

This is Part 2 of our ongoing three-part Wednesday Intelligence series, The Six-Line Silent AI Audit. Part 1 mapped D&O and EPLI coverage issues. AI-washing is already an SEC enforcement action, and algorithmic discrimination produces claims that don't trigger your EPLI form's coverage definition. If you missed it, the link is at the bottom.

Part 2 moves to E&O and Cyber. These are the lines where AI executes a professional or technical function and it goes wrong. They carry the highest current claim frequency of the six.

Three reasons to read this installment.

  1. The professional/product liability boundary for AI-assisted advice has no settled answer in any court. You'll find out where your carrier stands at claim time if you haven't asked before renewal.
  2. Deepfake wire fraud falls between three insurance programs and may trigger none of them cleanly. Most institutions carry cyber, crime, and social engineering coverage, and none was designed for a CFO authorizing a transfer based on a synthetic video call.
  3. Model poisoning has no coverage trigger in any standard form, and the highest-risk version isn't internal. It's in your vendor ecosystem, where a third-party AI system touching your data could be compromised before your own detection protocols would catch it.

Prefer to listen? Check out the audio version.

The E&O situation:

Every "professional services" definition in your E&O form was written before AI touched client work.

That assumption runs through both of your operational policy lines. Both have active claim patterns working their way through the market. Both have coverage triggers that may break when the loss comes from an AI system instead of a human.

When a professional delivers advice that was generated, augmented, or validated by an AI tool, and that advice turns out to be wrong, the loss occupies contested ground. Is it a covered professional error under your E&O form? Or an uncovered product defect attributable to the AI vendor? No appellate court has resolved this question as it applies to AI-assisted professional services.

The form language compounds the problem.

Some E&O policies define "professional services" by specific enumerated activities. If an AI tool enables a service type not on that list, losses from that activity may fall outside the coverage grant entirely. The definition was written for a fixed set of human professional functions. AI changes what professionals do faster than forms get updated.

AI-driven errors in coverage determinations are already in litigation.

In Lokken v. UnitedHealth Group, a federal court allowed breach of contract claims to proceed against an insurer whose AI tool carried a 90% reversal rate on appealed denials. The insurer continued using it because, according to the complaint, only 0.2% of policyholders appeal. In March 2026, the court ordered discovery. Lokken is a policyholder contract claim, not a professional liability case, and that distinction is the point: the E&O question for carriers and MGAs deploying AI in coverage determinations has not been litigated yet. When your AI system makes a professional-grade determination that turns out to be wrong, does your E&O form treat that as a covered professional error or an uncovered system output?

The vendor who built the tool disclaimed model output accuracy and capped indemnification at 12 months of fees.

That vendor liability wall is being tested from the other direction in Nippon Life v. OpenAI (N.D. Ill., filed March 2026), where an insurer is suing after ChatGPT fabricated a non-existent settlement in a covered disability claim. If the claims survive, it puts a real insurance company on the plaintiff side of the vendor accountability question.

Your organization carries the professional liability. The vendor does not.

For MGAs operating under binding authority, the E&O question has an additional layer. When an AI tool assists an underwriting decision made under a binding authority agreement (BAA) and that decision produces a covered loss, whether the MGA's E&O responds or the loss is treated as a product defect is unresolved. Capacity providers are beginning to incorporate AI governance attestations into program audit cycles, so your E&O form should be reviewed for AI-assisted binding decisions before the next audit.

The Cyber piece:

A CFO authorizes a seven-figure wire transfer after a video call with the CEO.

The CEO was never on the call. The face and voice were AI-generated. The institution has suffered a direct financial loss, and three policy forms in the program could plausibly respond. None was designed for this scenario. In January 2024, the engineering firm Arup lost $25.6 million to exactly this attack pattern.

It is not traditional computer fraud. No system was compromised, and no network was penetrated.

It may qualify as social engineering, but the cyber form's social engineering section typically carries a sublimit of $250,000 to $1 million depending on program size. That sublimit was calibrated for individually crafted fraud attempts. AI can now generate hundreds of convincing wire fraud requests simultaneously, each personalized, each with institutional-quality voice or video.

Carriers at the top of the FI market are beginning to correlate sublimit adequacy to actual wire transfer volume rather than accepting legacy amounts. Your crime form may require unauthorized system access or an identifiable third-party impersonation, and whether an AI-generated synthetic voice constitutes impersonation under the form's language is an active coverage dispute.

Courts in Ernst & Haas Mgmt. Co. v. Hiscox, Inc. and Apache Corp. v. Great American Ins. Co. parsed "computer fraud" and "direct loss" language in email-spoofing cases and reached opposite conclusions: the Ninth Circuit found coverage; the Fifth Circuit denied it. The same deepfake loss produces a different coverage outcome depending on where your institution operates, and that jurisdictional split is a segmentation variable for underwriters, not just a legal footnote. AI-generated phishing at scale compounds the mismatch: attacks that once required a human to craft individually can now be produced with institutional-quality voice and real-time personalization. The sublimit gap appears in both your cyber and crime forms and needs to be negotiated as a pair.

Carriers are also beginning to ask about deepfake detection protocols in supplemental questionnaires, including voice authentication, video verification, and mandatory call-back protocols for high-value wire authorizations.

The supply chain problem:

If an adversary manipulates the training data behind your AI-assisted fraud detection system, the model degrades silently. The resulting losses do not map to any standard cyber coverage section. No widely adopted form has a trigger for a corrupted AI model, and the standard market response is silence.

The larger version of this exposure is in your vendor ecosystem. A third-party AI system that touches your data could be compromised through training data manipulation before your own detection protocols would identify it. Your governance policy governs your systems. It does not govern theirs. Vendor AI supply chain risk requires dedicated contractual review and audit infrastructure that most institutions have not built, and model output disclaimers in many AI vendor contracts may eliminate meaningful recovery before indemnification caps are reached.

The preparation window for your next renewal is now.

This is Part 2 of a three-part series mapping all six lines. Part 3 covers fiduciary liability, crime and FI bond, the full audit framework, governance documentation, and what underwriters at leading FI writers are asking for at renewal.

The board question this installment surfaces: has management confirmed in writing which coverage section responds to an AI-assisted operational error in your current program, and at what limit?

Want to walk through your current program against this framework? We do this work every day for regional insurers, MGAs, insurtechs, and community banks. Book a confidential conversation here.

In case you missed it:

Part 1 of this series mapped D&O and EPLI, the two governance lines where AI breaks in your institution's program first.

AI-washing is already an SEC enforcement action. Algorithmic discrimination produces claims that don't trigger your "wrongful employment act" definition. And AI governance inquiries from state regulators may not fire your investigation cost coverage at all. The coverage questions you need answered before renewal are negotiating objectives, not guaranteed confirmations.

Read it here, or listen to it here.

Stay Covered Everybody,

TASH & FLIP

P.S. Want to share this edition via text, email or social media? Simply copy-and-paste the link below:

https://lionspecialty.kit.com/posts/your-e-o-form-can-t-tell-whether-ai-assisted-advice-is-a-professional-error-or-a-product-defect-neither-can-any-court

And if this briefing was forwarded to you, subscribe directly here.

P.P.S. Nothing in this brief constitutes legal advice. These are the opinions of the founders, offered as market intelligence to help institutions ask sharper questions at their next insurance renewal.

LION Specialty

Everything you need to know to navigate the financial institution insurance market in ≈ 5 minutes per week. Delivered on Fridays.
