As AI becomes embedded in how businesses operate, the insurance policies meant to protect them are quietly narrowing. Here’s what the fine print now says and what it doesn’t cover.

Most technology companies assume their Professional Indemnity (PI) and Cyber insurance policies cover them. They pay the premiums, tick the compliance boxes, and file the paperwork. What many never do, however, is check how their insurer now defines “AI-related activity.” That definition quietly reshapes what the policy covers when a claim arrives.

Over the past 18 months, insurers have accelerated the introduction of AI-specific exclusions across both PI and Cyber policy wordings. Some changes are explicit. Many, though, are not. The result is a growing gap between what businesses expect their policy to cover and what it will actually pay out on.

The silent AI exposure problem

The term “silent AI exposure” describes AI-related liability that a policy neither explicitly covers nor explicitly excludes. Historically, this ambiguity worked in the insured’s favour, because insurers tended to read general policy language broadly. That era is ending.

Today, insurers recognise how deeply AI activity sits inside standard software products. Consider the range of exposure: a coding assistant that introduces a vulnerability, a customer-facing chatbot that delivers legally actionable advice, or a fraud-detection model that produces biased outcomes. Each can generate a PI or Cyber claim. Yet each sits in a grey zone unless the policy wording addresses them directly.

The lesson here is not that firms should avoid AI tools. Rather, the moment AI generates a professional output a client relies on, the firm has likely assumed liability, whether its policy reflects that or not.

Copyright carve-outs: the exclusion that’s growing fast

Generative AI has introduced a category of IP risk that traditional PI policies never anticipated: the inadvertent reproduction of copyrighted material. In response, insurers now add copyright carve-outs to many policy wordings. These vary enormously in scope, and most policyholders never notice them until a claim arrives.

Some carve-outs apply only to deliberate reproduction. Others, however, exclude any claim that involves AI-generated content, regardless of intent or the firm’s level of control over the model. As a result, a media company, a marketing agency, or any firm producing AI-assisted content at scale could find its entire content liability exposure sitting outside its policy.

The deeper problem is that most firms using generative AI tools have little visibility into what data those models trained on. Consequently, the trigger for an exclusion can be entirely outside the firm’s control.

Model training data disputes: an emerging battleground

A newer category of exclusion now appears in more sophisticated policy wordings. These provisions carve out claims that arise from training data disputes, including data privacy violations, consent failures, and the unlicensed use of personal or proprietary data in AI development.

This matters because training data liability is no longer theoretical. Litigation is active across multiple jurisdictions, and regulators are building enforcement capacity. Firms that develop proprietary AI models, or that rely on third-party models with unclear training data provenance, carry an exposure that most Cyber policies simply did not account for.

The boundary between a “data breach” and a “training data dispute” is now one of the most contested areas in AI coverage. Firms should not assume existing Cyber protections extend to cover it.

What brokers and risk managers should do now

Action steps
  1. Audit current policy wording for AI-specific language. Don’t rely on last year’s renewal summary. Pull the actual policy schedules and endorsements and search for terms like “artificial intelligence,” “machine learning,” “automated output,” and “generative.” Insurers frequently insert new exclusions at renewal inside endorsement schedules rather than the base policy wording.
  2. Map AI use to policy categories. Build an internal register of every AI tool in use, both proprietary and third-party. For each one, identify the liability pathway: does it generate professional outputs? Does it produce content? Does it train on personal data? Then match each exposure to the relevant policy clause.
  3. Push back on broad carve-outs at renewal. Not all AI exclusions are fixed. Insurers will often narrow carve-outs for well-documented, lower-risk AI uses. Arrive at renewal with specifics: which models the firm uses, what training data provenance looks like, and what human oversight exists. Vague answers tend to produce broad exclusions.
  4. Explore standalone AI liability products. A small but growing market of AI-specific insurance products now exists. For firms with significant AI-generated revenue or active AI model development, a standalone policy may be worth evaluating alongside traditional PI and Cyber cover.
  5. Treat AI governance as an underwriting asset. Firms with documented AI governance frameworks, including model risk policies, human-in-the-loop requirements, and training data records, consistently negotiate better terms at renewal. Governance is no longer just a compliance obligation; it directly affects insurability.
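For teams that want to operationalise the wording audit in step 1, the keyword search can be sketched as a short script. Everything here is illustrative — the term list mirrors the examples above and should be extended to match each insurer’s actual wording, and the sample text is invented; this is a first-pass filter, not a substitute for a broker’s review.

```python
import re

# Terms from the audit checklist; extend with insurer-specific wording.
AI_TERMS = [
    "artificial intelligence",
    "machine learning",
    "automated output",
    "generative",
]

def audit_wording(text, terms=AI_TERMS):
    """Return {term: [line numbers]} for each audit term found in the policy text.

    Matching is case-insensitive and tolerates hyphens between words,
    so "Machine-Learning" still counts as a hit for "machine learning".
    """
    hits = {}
    for lineno, line in enumerate(text.splitlines(), start=1):
        for term in terms:
            # Allow spaces or hyphens between the words of a multi-word term.
            pattern = r"[\s-]+".join(map(re.escape, term.split()))
            if re.search(pattern, line, flags=re.IGNORECASE):
                hits.setdefault(term, []).append(lineno)
    return hits

# Hypothetical endorsement text for illustration only.
sample = (
    "Endorsement 7: Claims arising from Generative or machine-learning "
    "systems are excluded.\n"
    "Section 4: Automated output produced without human review."
)
print(audit_wording(sample))
```

Run against the full policy schedule and every endorsement, not just the base wording — as noted above, insurers often introduce new exclusions at renewal inside endorsement schedules.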

The bottom line

The insurance market is not anti-AI. Insurers want to cover viable businesses, and viable businesses now run on AI. Even so, the market is actively repricing AI-related risk, and its main mechanisms are exclusion clauses and narrowed definitions that most policyholders have not yet noticed.

The firms most at risk are those that enthusiastically adopt AI tools while leaving their insurance programmes on autopilot. The coverage gap rarely appears all at once. Instead, it builds slowly, renewal by renewal, endorsement by endorsement, until a claim arrives and the policy reads differently from what the firm expected.

In the AI era, reading the policy carefully is no longer optional. It is the first act of risk management.

Most firms discover coverage gaps at the worst possible moment. Continuum works with technology businesses and their brokers to identify AI-related blind spots in PI and Cyber insurance before a claim does. Get in touch for a policy review.