Industry Trends

Rethinking Privacy Exposure in Cyber Liability

As organizations rapidly adopt AI tools, privacy liability is re-emerging as a central concern in cyber insurance. Evolving state regulations, third-party AI models, and silent cyber exposure are reshaping how underwriters assess risk and coverage.

March 23, 2026

Cyber insurance began as a privacy-focused product before ransomware shifted the industry’s attention. Now, with the rise of generative AI and evolving state privacy laws, privacy liability is once again at the forefront. As organizations adopt AI tools, underwriting conversations are increasingly focused on how data is used, stored, and managed, and whether a clear plan is in place to address emerging risks.

“Insurance carriers expect an organizational plan that addresses AI and privacy exposures, with considerations for processes and objectives to meet future risks,” said Steve Anderson, Assistant Vice President of Cyber Underwriting at Safety National. “During initial stages of the underwriting process, there are no right or wrong answers. What matters is that there is planning with clear timelines for implementation, so that we can assess the risks based on an organization’s current tactics.”

Here, Steve discusses how privacy exposure has evolved and why silent cyber remains a concern in an all-risk environment.

How has privacy liability evolved within cyber insurance?

Cyber liability insurance has been around since the early 2000s, and when it first entered the market, the primary exposure was third-party liability. At that time, first-party exposure existed but was less prevalent because many insureds operated closed-loop networks with no access to the outside world (i.e., the internet). The client-server model was just emerging, and insureds connected to the outside world through other mechanisms.

At that time, the privacy questions being asked were very basic. Record count was important because it drove the cost of mailings and notifications to affected individuals in the event of a breach. Record counts were typically split between paper and digital formats, as most companies were still converting paper records into digital databases. The breakdown of protected health information (PHI) and personally identifiable information (PII) was critical for evaluating the overall record count. Some of the early claims involved dumpster diving for PII and selling it, in addition to the traditional breach or the lost laptop at the airport.

Ransomware was the primary exposure that brought cyber liability insurance to the forefront. This exposure clearly answered why cyber liability was needed and offered the opportunity for risk transfer with the purchase of an insurance policy. Today, privacy liability has once again become a central issue. Although there is no comprehensive federal framework for privacy protection, many states have enacted strong laws safeguarding individual rights. These regulations have increased scrutiny around how generative AI and language modeling systems are developed, deployed, and governed. They have also introduced additional complexity in the way data is transferred, stored, and managed.

Some organizations remain unaware of the full extent of their privacy liability risks. As a result, underwriters now ask more in-depth questions during calls, including which tools are being used and how data is handled and transferred. Recently, however, organizations have made progress, gaining a clearer understanding of their potential exposure and implementing stronger processes.

These exposures can include:

  • Potential regulatory investigations, enforcement actions, fines, penalties, and related defense costs, where insurable and permitted by applicable law.
  • Potential civil litigation, including claims where a private right of action exists.
  • Data breach costs, including notification, forensic investigations, credit monitoring, and remediation.
  • Data sharing or cross-border transfers that may be inconsistent with applicable law, contractual obligations, or disclosed privacy practices.
  • Improper use of AI tools, such as training models on protected or confidential data without appropriate authorization or a legally sufficient basis.
  • Failure to obtain proper consent or provide required disclosures.
  • Vendor and third-party risk, where a partner’s actions may create potential regulatory, contractual, or litigation exposure for the organization.
  • Reputational damage, which can lead to loss of customers and business opportunities.

In short, exposure can arise at multiple points in the data lifecycle, particularly where governance, oversight, or compliance controls are weak.

Why is “silent cyber” a concern in the context of AI and privacy?

Many cyber policies are written on a broad coverage basis, often with duty-to-defend features and multinational elements, subject to each policy’s specific terms, conditions, exclusions, and territorial limitations. They are often broader than traditional named-peril forms, though structure varies by insurer and wording. When evaluating emerging technologies, whether from a threat perspective or in terms of process and return on investment (ROI), silent cyber is a key issue: stakeholders must try to identify exposures to unforeseen risks that the policy language may not expressly address.

When carriers introduce exclusions, they are often seeking to clarify, limit, or reduce uncertainty around how particular exposures may be treated under the policy. In practice, however, some carriers and reinsurers have taken the position that certain risks may fall within existing coverage, depending on the specific policy wording and facts. Therefore, the real issue is managing aggregate risk. This remains an active debate in the marketplace today.

What are the liability concerns around third-party AI?

Liability concerns surrounding third-party AI arise when an organization relies on AI tools developed, hosted, or trained by an external vendor. Even if the system is not built in-house, the organization using it may still face legal and financial exposure depending on the facts and applicable law. Risks commonly include data privacy and confidentiality issues, such as uploading sensitive information without proper consent, unauthorized reuse of data for training purposes, cross-border transfer violations, or breaches of proprietary information. There are also regulatory and compliance concerns, particularly as state privacy laws and emerging AI governance frameworks impose new requirements for transparency, risk assessments, and oversight.

Brokers and insureds are increasingly effective at incorporating AI into their discussions, recognizing that it represents a significant exposure. At the same time, organizations are starting to keep pace with the technology, implementing processes and safeguards to address the associated risks.