How to prepare as cyberinsurers incorporate AI risks
Some humans in some loops, for starters.
As more companies consider implementing agentic AI, what IT protocols do insurers want to see in place before they’ll write a policy that covers the use of (and potential damages from) the technology?
Cyberinsurance helps organizations protect themselves against costs related to adverse events like ransomware or a data breach. Some insurers now offer AI-specific coverage for scenarios like a chatbot mishap; others see AI as incidental to a breach and don’t mention the technology in policies. A recent report from global insurers’ group Geneva Association found that insurers are adapting cyber and liability policies to include GenAI‑related causes of loss, while “due diligence protocols are being tested to streamline underwriting and claims processes.”
“It remains too early to say whether existing insurance products or new standalone solutions will come to dominate the GenAI risk market,” the report concluded.
We asked insurance pros about the ways that agents can lead to unexpected costs for organizations—as well as the still emerging due diligence designed to satisfy AI underwriters.
In control. Diana Kelley, CISO at agentic AI security platform Noma Security, noted risk-mitigating controls that insurers will likely want from organizations adopting agentic AI:
- Runtime guardrails: Let’s say your billing department has a new inbox agent, and the company receives a message from a supplier disputing a charge, with a link to a cloud-based spreadsheet. An agent designed to summarize email threads may inadvertently send internal data from previous emails or documents to that supplier, or even into their spreadsheet. One important safeguard here, enforced by a policy engine, could be: “Never let an agent parse an email from outside and then take sensitive data and share it before it’s been approved,” Kelley told us. In other words: before an agent takes an external action involving sensitive data, a human should step in to verify the data, its origins, and what the agent is attempting to do with it.
- Least privilege: You’re the CFO of the company, and you’re using agentic AI to gather info for your quarterly report. You probably don’t want the entire company seeing those financials until they’re “run, rerun, and retested,” Kelley warned. Because of that, an agent should not be granted a CFO-level access to information simply because it operates on a CFO’s behalf. Instead, an agent should have task-specific permissions, like read access to financial systems.
- Continuous monitoring: If an agent made a decision, triggered a workflow, or transmitted data, an insurer will likely want traceability: a record of the inputs received, the specific tools invoked, and the outputs produced. Easier said than done! While some AI-aware security platforms offer visibility into agent workflows and model usage, the product space is an emerging one and maturity varies, according to Kelley. (A sketch of how these three controls might fit together follows this list.)
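For illustration, here is a minimal sketch of those three controls working together in code. Everything here is hypothetical: `PolicyEngine`, `AgentAction`, and the tool and domain names are ours, not any vendor’s API, and a production system would rely on a real policy engine and data classifier rather than this toy logic.

```python
# A minimal sketch, not any vendor's API: PolicyEngine, AgentAction, and the
# tool names here are all hypothetical, for illustration only.
import logging
from dataclasses import dataclass, field

logging.basicConfig(level=logging.INFO)
audit = logging.getLogger("agent-audit")

@dataclass
class AgentAction:
    tool: str                  # e.g., "send_email"
    destination: str           # e.g., "billing@supplier-example.com"
    contains_sensitive: bool   # flagged by an upstream data classifier
    payload: str

@dataclass
class PolicyEngine:
    # Least privilege: the agent gets task-specific permissions,
    # not the permissions of the executive it acts for.
    allowed_tools: set = field(
        default_factory=lambda: {"summarize_thread", "send_email"})
    internal_domains: set = field(
        default_factory=lambda: {"example-corp.com"})

    def check(self, action: AgentAction) -> str:
        # Continuous monitoring: record every tool call and destination
        # so an insurer (or incident responder) can trace what happened.
        audit.info("tool=%s dest=%s sensitive=%s",
                   action.tool, action.destination, action.contains_sensitive)

        if action.tool not in self.allowed_tools:
            return "deny"  # outside the agent's task-specific scope

        external = action.destination.split("@")[-1] not in self.internal_domains
        if external and action.contains_sensitive:
            # Runtime guardrail: sensitive data never leaves the org on an
            # agent's say-so; a human verifies the data, its origin, and
            # what the agent is attempting to do with it.
            return "hold_for_human_review"
        return "allow"

engine = PolicyEngine()
print(engine.check(AgentAction(
    tool="send_email",
    destination="billing@supplier-example.com",
    contains_sensitive=True,
    payload="Summary of thread, including last quarter's internal pricing...",
)))  # -> hold_for_human_review
```

The design choice worth noting: the deny/hold decision lives outside the agent, in the policy engine, so a manipulated or hallucinating agent can’t talk its way past the guardrail.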
The call is from outside the house. Use of automation technology by cyberadversaries is also causing losses.
In a report released on December 10, the Identity Theft Resource Center (ITRC) found that AI-powered attacks were the root cause of 41% of small business breaches. ITRC President James Lee told IT Brew that common attacks included AI-driven phishing and social engineering.
He sees insurers wanting organizations to demonstrate training and incident response plans for increasingly autonomous attacks.
Many insurers are still figuring out this nexus of cybersecurity and AI, too. Cyberinsurer Coalition offers protection for those deploying AI systems, but does not have a standalone AI insurance product, according to Michael Phillips, the company’s head of cyber portfolio underwriting. As AI becomes embedded across software and operations, Phillips wrote to IT Brew, “the relevant question isn’t whether there’s a separate AI policy; it’s whether existing insurance products meaningfully address the risks created by AI in real-world environments.”
Phillips noted threat actors’ increasing use of AI to generate convincing phishing messages and deepfakes, along with the emerging threat of AI agents acting on behalf of users, executing “decisions across pricing, transactions, or operations, where a single error or hallucination may propagate rapidly.” He shared controls for those trying to reduce AI-driven risk (one of which, data masking, is sketched in code after the list):
- Human-in-the-loop oversight
- Identity and access management
- Data minimization and masking
- Model monitoring
- Bias and discrimination testing
- Incident response and business continuity
- Vendor and third-party risk management
- Security awareness and training
- Comprehensive AI use policy
- Deepfake response planning
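As one concrete example from that list, data minimization and masking can start as simply as redacting likely PII before a prompt ever reaches a model. The sketch below is ours, not Coalition’s: the regex patterns and function name are illustrative, and a real deployment would lean on a dedicated PII-detection service rather than hand-rolled patterns.

```python
# A minimal sketch, assuming simple regex-based redaction; the patterns and
# function name are illustrative, not a production PII detector.
import re

PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "CARD": re.compile(r"\b(?:\d[ -]?){12,15}\d\b"),
}

def mask_before_model_call(text: str) -> str:
    """Replace likely PII with typed placeholders so prompts that leave the
    organization (and any logged completions) expose less if compromised."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

prompt = ("Refund card 4111 1111 1111 1111 for jane.doe@example.com, "
          "SSN 123-45-6789.")
print(mask_before_model_call(prompt))
# -> Refund card [CARD] for [EMAIL], SSN [SSN].
```

Masking at this boundary pairs naturally with the data-minimization control above: the less raw PII that crosses into a model or its logs, the smaller the blast radius an underwriter has to price.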
AI insurance, please! More than 90% of respondents expressed a need for insurance coverage tailored to AI and GenAI threats, according to the Geneva Association study. More than two-thirds of the 600 global business insurance decision-makers surveyed said they’d pay at least 10% extra in premiums for it.
Not every insurer is rushing to spin up AI-centric policies, though. For example, insurer At-Bay does not provide AI-related insurance or recognize anything specific to the technology in its policies—and that’s intentional, according to company CISO for customers Adam Tyra.
“The fact that you had a loss is what’s important to insurance coverage,” Tyra told IT Brew.