Who’s accountable when there’s an AI disaster?
Who’s to blame when there’s a data leak, bad vibe coding, or a hallucination?
Billy Hurley has been a reporter with IT Brew since 2022. He writes stories about cybersecurity threats, AI developments, and IT strategies.
There’s plenty of potential for “oh, no!” in AI. The following are just some of the ways implementing AI can have unexpected consequences.
- Data leaks. Eager AI users can upload sensitive data to a large language model without realizing the data could be logged and/or used to train the model. (Samsung in 2023 reportedly banned the use of GenAI after an engineer uploaded source code to ChatGPT.)
- Bad vibes. With “vibe-coding” tools, natural-language prompts can lead to errors and imperfect results, like an accidental database deletion. (A guardrail sketch follows this list.)
- Hallucinations. A wrong answer from an AI agent can frustrate customers. (In 2024, a court ruled that Air Canada had to compensate a traveler who was fooled by incorrect information from a chatbot.)
- Prompt injections. By embedding malicious commands in content a model processes, adversaries can trick it into taking damaging actions.
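To make the “bad vibes” scenario concrete, here’s a minimal sketch of a guardrail that holds AI-generated SQL for human review before it runs. Everything in it, patterns and function name included, is a hypothetical illustration, not any vendor’s product:

```python
import re

# Hypothetical guardrail: flag destructive statements in AI-generated SQL
# before they reach a production database. These patterns are a sketch; a
# real review gate would be far more thorough.
DESTRUCTIVE = re.compile(
    r"\b(DROP\s+(TABLE|DATABASE)|TRUNCATE\s+TABLE|DELETE\s+FROM\s+\w+\s*;)",
    re.IGNORECASE,
)

def review_generated_sql(sql: str) -> str:
    """Pass generated SQL through only if it doesn't look destructive."""
    if DESTRUCTIVE.search(sql):
        raise PermissionError(f"Held for human review: {sql!r}")
    return sql

# An unscoped DELETE, no WHERE clause, is exactly the kind of statement
# behind an "accidental database deletion."
review_generated_sql("DELETE FROM customers;")  # raises PermissionError
```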
Any AI deployment involves a cast of actors: the vendor, the buyer, the IT team implementing, the end-users end-using. This raises a vital question we asked analysts and IT pros: Who’s really in trouble if an AI goes awry?
What’s up, documents? Citing the EU AI Act’s obligations for high-risk applications (and varied US litigation outcomes), a May 2025 report from Global Legal Insights ultimately determined that the blame for AI error is likely to fall on humans, not the machines.
“The consistent message from regulators and courts is that, even for autonomous AI, ultimate responsibility must remain anchored to human decision-makers,” the report concluded.
NIST’s Artificial Intelligence Risk Management Framework spreads responsibility around to “AI actors” (including developers, vendors, data evaluators, and end-users) while suggesting that AI developers “should consider proportionally and proactively adjusting their transparency and accountability practices” when severe consequences arise, like “when life and liberty are at stake.” (Looking at you, self-driving car!)
What tech pros say. Nathan Olson, a senior manager on Baker Tilly’s US data solutions team, sees a collaborative model for AI responsibility:
- The business owner or the teams requiring the use cases are accountable for outputs and their review.
- IT owns safe usage of the tools, along with data protection and cybersecurity policies.
- Legal and risk teams ensure practices comply with applicable regulations.
“I think AI problems rarely start with AI. They start with unclear ownership and human over-trust of it,” Olson told IT Brew. “I think that AI control needs to be governed by a collaborative model so you understand: What are the sorts of ways that this can go wrong, and who’s accountable for each way?”
For a situation like an accidental data leak, that shared accountability offers a responsive “brain trust,” Olson said. Here are some of the hard questions that need to be asked:
- From a legal standpoint, are there vendor agreements to request deletion and certification of the deletion?
- From an IT standpoint, what gaps must be sealed off to prevent recurrence?
- Should there be access controls, or outputs that pull only from approved sources? (See the sketch after this list.)
- From a business standpoint, what was the impact on customers?
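On the approved-sources question, here’s a minimal sketch of the idea: retrieval is restricted to a vetted list, so unvetted material never reaches the model. The corpus, source names, and matching logic are all invented for illustration:

```python
# Only documents from vetted sources are eligible for retrieval.
APPROVED_SOURCES = {"refund-policy-v3", "baggage-faq-2025"}

CORPUS = [
    {"source": "refund-policy-v3",
     "text": "Bereavement fares must be requested before travel"},
    {"source": "forum-scrape-2019",
     "text": "Bereavement refunds can be claimed after the flight"},
]

def retrieve(query: str) -> list[str]:
    """Return passages matching the query, restricted to approved sources."""
    words = set(query.lower().split())
    return [
        doc["text"]
        for doc in CORPUS
        if doc["source"] in APPROVED_SOURCES
        and words & set(doc["text"].lower().split())
    ]

# Only the vetted policy line comes back; the stale forum post is dropped
# before a chatbot can repeat it to a customer.
print(retrieve("bereavement refund policy"))
```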
A report from software marketplace G2 found that 26% of “security incidents” and 25% of “hallucinations” were considered “major.” In an email exchange, Tim Sanders, CIO at G2 and author of the study, said that 82% of 1,035 decision-makers claimed to have experienced an incident related to agentic AI, including hallucinations, compliance issues, security problems, reputational impacts, and data leakage.
For IT practitioners looking to protect themselves from liability, Sanders stressed the importance of buyers’ due diligence around AI tools, including vetting the emerging market of third-party “AI guardrails” products. He also advised IT professionals taking on more agentic workflows to consult legal teams on the potential for data-leak insurance. Finally, he recommended practitioners “be very judicious about dribbling out data” for agentic applications, rather than giving a third party wholesale access to systems.
Sanders sees company-based liability issues emerging, but not necessarily liability for the AI provider or a single, erring individual (unless there was “gross individual negligence,” he said).
In a follow-up email, Sanders wrote, “In some cases, the vendor may seek indemnification from liability in their contracts...which companies should negotiate.”
“When there is a data breach, you collected the data as the enterprise, and you have some kind of agreement with…your employees, your customers, your suppliers, to protect that data. So, in the average court of law, you have some liability when there’s a data breach, because you were trusted with that data,” Sanders told us, while also adding that he is not a lawyer.
Shakel Ahmed, founder of insights platform CyberDesserts, also sees accountability landing on “the organization making poor decisions around the implementation of the technology.”
IT professionals must educate users and implement available security measures, like data-leak-prevention tools that watch for sensitive-data uploads (sketched below), to mitigate risks, he said. Teams should also provide vetted tools and prevent employees from using unsanctioned ones. Companies are still exploring complicated questions about the limits and dangers of AI use.
“The technology is moving so fast, the governance and the legality of it is still catching up,” Ahmed said.
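As for the data-leak-prevention tools Ahmed mentioned, here’s a rough sketch of the kind of check one might run on an outbound prompt. The patterns are illustrative only; real DLP products rely on much richer detection, like classifiers, document fingerprinting, and exact matching of known secrets:

```python
import re

# Rough sketch of a data-leak-prevention check on an outbound prompt.
SENSITIVE_PATTERNS = [
    re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),               # US SSN shape
    re.compile(r"-----BEGIN (RSA )?PRIVATE KEY-----"),  # pasted private keys
    re.compile(r"\bAKIA[0-9A-Z]{16}\b"),                # AWS access key ID shape
]

def screen_prompt(prompt: str) -> str:
    """Block an upload to an external LLM if the prompt looks sensitive."""
    for pattern in SENSITIVE_PATTERNS:
        if pattern.search(prompt):
            raise ValueError("Prompt blocked: possible sensitive data detected")
    return prompt

screen_prompt("Summarize this key: AKIAABCDEFGHIJKLMNOP")  # raises ValueError
```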
Top insights for IT pros
From cybersecurity and big data to cloud computing, IT Brew covers the latest trends shaping business tech in our 4x weekly newsletter, virtual events with industry experts, and digital guides.