Reading time: 6 minutes

Our line-by-line Silent AI audit continues

This is Part 3 of our three-part Wednesday Intelligence series, The Six-Line Silent AI Audit. It is the final installment and the complete reference document. Part 1 covered where wrongful act definitions break down for AI-reliant board decisions and algorithmic discrimination claims in D&O and EPL policies. Part 2 mapped E&O and Cyber, where the professional/product boundary is unsettled, deepfake wire fraud falls between coverage sections, and model poisoning has no standard trigger. If you've wound up here without reading the first two installments, links are at the end of this piece. Today, Part 3 covers the last two lines, Crime/FI Bond and Fiduciary Liability, then delivers the full audit framework, the governance documents, and what underwriters at leading carriers are now asking institutions at renewal. Three reasons to read this installment:
Prefer to listen? Check out the audio version.

The Crime situation

Nearly every dishonest act definition in Crime insurance assumes the perpetrator is a person. That assumption runs through the last two of your six core policy lines. Both have active claim patterns. Both have coverage triggers built for a world where fraud required a human to carry it out. A synthetic criminal who does not exist cannot commit a dishonest act.

Consider the pattern: an insurer wires a return of premium to a customer account built from AI-generated identity documents, fake personal data, and manufactured underwriting history. The customer does not exist. The insurer lost real money. But the FI Bond requires a human perpetrator, and a fake identity built by an AI system does not supply one.

The National Insurance Crime Bureau projects a 49 percent rise in insurance fraud linked to identity theft in 2026. Nearly one in four referred claims now involves a fake identity. In life insurance alone, synthetic identity fraud accounts for 85 percent of identity fraud cases, according to RGA, at an estimated $30 billion a year. FinCEN Alert FIN-2024-Alert004 warned that criminals are using GenAI images with stolen or fake personal data to build synthetic identities for loan fraud, check fraud, and push payment fraud at scale. The alert created a dedicated SAR key term for the pattern.

Insurers face this on both sides. They are targets of synthetic fraud in their own claims shops, and they underwrite the coverage that may not respond when their clients get hit.

The deepfake piece

Deepfake CEO fraud makes the problem worse from the other direction. Part 2 mapped that scenario through the cyber form. On the crime side, many social engineering sections require a fraudulent instruction from a third party posing as someone with authority. Whether an AI-generated voice counts as posing under the form is an open coverage question. The voluntary parting exclusion is being tested against deepfake wire transfers right now.
When a tricked employee approves a transfer based on a fake video call, most forms treat that as a voluntary act outside coverage. In December 2025, Coalition became the first major cyber carrier to offer a deepfake response endorsement covering forensic analysis and legal support. The endorsement exists because the standard forms do not respond. Social engineering sublimits of $250,000 to $1 million were set for one-off fraud attempts; AI can generate hundreds of convincing requests at once. Swiss Re warned in its SONAR 2025 report that deepfakes may increasingly drive cyber insurance losses.

The fiduciary piece

Fiduciary liability is the sixth line. It is the least developed, and it is where the governance question will be answered first. AI-assisted investment advice in retirement plan management raises fiduciary duty questions under ERISA that no court has resolved. The Supreme Court in Hughes v. Northwestern confirmed that ERISA fiduciaries are judged on an ongoing duty of prudence and monitoring. That standard will extend to AI-driven investment tools as they enter plan operations.

The litigation path is already being built. In Lokken v. UnitedHealth Group, a federal court in February 2025 let breach of contract claims move forward against a health insurer whose AI tool had a 90 percent reversal rate on appealed denials. The insurer kept using it because only 0.2 percent of members appealed. In March 2026, the court ordered broad discovery into how the algorithm was designed, what governance records existed, and whether it was built to replace physician judgment. Legal analysts have already extended the Lokken framework beyond health insurance: policyholders in property, casualty, and liability disputes can now seek discovery into whether AI replaced the adjuster's own judgment. The same path leads to AI-assisted plan decisions. Insurtechs using AI in coverage determinations or claims triage face the same discovery exposure Lokken opened for health insurers.
The U.S. Treasury published its AI in Financial Services report in December 2024 after receiving 103 comment letters from financial firms, fintechs, and trade associations confirming that AI is already used in retirement readiness apps, chatbots, portfolio management, and trade execution. In February 2026, Treasury released a Financial Services AI Risk Management Framework and a shared AI Lexicon. The DOL's September 2024 guidance extended cybersecurity rules to all ERISA-covered plans. And at least one AI platform is now using machine learning to find fiduciary breach claims against plan sponsors by scanning plan documents, filings, and court records. AI is speeding up the plaintiff's ability to spot the exposure before the plan sponsor knows it exists. This is a 2027 to 2028 litigation risk, but the governance work needs to happen before your next renewal submission.

The governance framework

Underwriters at leading FI writers are already asking for documentation many boards have not yet built. The coverage issues we've highlighted in this series are not news to the carriers writing your program. At the top of the FI market, the underwriting conversation has moved ahead of many buyers. The institution that arrives at renewal with governance records is in a very different conversation than one that cannot produce them. What you can show at renewal sets your negotiating position. At leading FI writers, governance records are moving from a pricing factor to a condition of coverage, and the four governance documents in this series are moving from preferences to requirements.
Sources: NICB, FinCEN, Swiss Re, Lokken v. UnitedHealth, ArentFox Schiff

A note on vendor contracts: review is necessary but not sufficient. Model output disclaimers and IP carve-outs in many AI vendor agreements may wipe out recovery before indemnification caps are reached. Have legal review the full contract, not just the indemnification section.

The board question this installment surfaces: has management confirmed in writing which coverage section responds to a synthetic identity fraud loss or an AI-assisted fiduciary decision in your current program?

Want to walk through your current program against this framework? We do this work every day for regional insurers, MGAs, insurtechs, and community banks. Book a confidential conversation here.

In Case You Missed It!

A month ago we launched our Six-Line Silent AI Audit series, a three-part Wednesday Intelligence series mapping a financial institution's core policies against the AI exposures most insurance policies were never written to address. Part 1 covered D&O and EPLI, where "wrongful act" definitions assume a human decided and algorithmic discrimination doesn't map to your form's coverage trigger. Part 2 covered E&O and Cyber, where the professional/product liability boundary for AI-assisted advice is unsettled in every court and deepfake wire fraud falls between three coverage sections without triggering any of them cleanly. Read Part 1 here, or listen to the audio version here.

Stay Covered Everybody,
FLIP

And if this briefing was forwarded to you, subscribe directly here.