So, you woke up and your favorite SaaS tool now has an AI capability you didn’t ask for.
An attendee at our recent IT Brew virtual event asked how to track the legacy IT that suddenly gets AI-ified: “Should we be evaluating these tools differently?”
We posed the question to IT industry pros, including Eoin Wickens, director of threat intelligence at HiddenLayer, and Peter Eichelberger, director of IT at Altamont Capital Partners, who spoke with us during the May 2025 presentation.
Below we pulled insightful bits from a body of data to answer the query—you know, like a chatbot.
Matt Radolec, VP, incident response and cloud operations, Varonis: I don’t think of the AI threat much differently than the data sovereignty or the data security threat that’s always looming. I equate this back to when people started using Box and Dropbox and other types of file-sharing services; it created privacy concerns around those cloud providers having access to your data, and whether or not you had the corporate versions (the licensed and paid-for versions) or if you were using private ones…This same problem exists with all of your existing SaaS applications.
Jim Packer, practice lead, data security, data privacy and AI governance, GuidePoint Security: You have to understand exactly what data is flowing to the AI components, where it’s processed, internally or externally, and how it’s stored or transmitted to any kind of third-party AI providers.
Joshua McKenty, co-founder and CEO, Polyguard.ai: I think the safest way to think of this is a model that’s trained on your data has your data embedded in it. They’re called embeddings for a reason…You would want to know that the models that you are being provided access to are only being provided to you, or that you know that there is some sandbox around the way that data is being ingested and consumed.
Eoin Wickens, director of threat intelligence, HiddenLayer: Has there been security audits of their AI-operations pipeline? Is that data secure in transit? Is it secure in rest? Is the AI vulnerable to prompt injection? Is it sharing context between users? Is there a potential for data leakage thereof? These would maybe be some of the questions I would begin to start asking, because I think they’re important for you but also for the vendor to ask themselves.
Top insights for IT pros
From cybersecurity and big data to cloud computing, IT Brew covers the latest trends shaping business tech in our 4x weekly newsletter, virtual events with industry experts, and digital guides.
Packer: You have to evaluate whether any of the legacy vendors are disclosing what AI models they’re using and whether they’re proprietary or third-party, like an OpenAI or Anthropic for example, and then what data-governance controls exist around the AI processing layer at the vendor level. So, this isn’t even at your level. You need to know what the vendor is doing first and foremost before you decide to go into business with that vendor and use that vendor’s technology. And then at your level, you need to then understand your data and how it’s going to be used within that vendor’s environment.
Scott Laliberte, managing director, CISO advisory, Protiviti: The challenge is: How do you find and root out potential AI that’s being used within the organization? I think we’re still in the early stages of that, but there are some solutions that are out there and starting to evolve. A CASB, or cloud access security broker, can be used. Some CASBs have functionality to look for AI that may be interacting with users within the organization. For instance, Microsoft has its AI threat protection that’s built into their Defender for cloud apps, not only to see users interacting with potential AI SaaS, but also what they’re putting into the prompts.
Peter Eichelberger, director of IT, Altamont Capital Partners: Being aware of the tech stack that you’re working in and partnering with your vendors is really what I would look to do, making sure that you’re aware of things that are coming down the pike. Not only that, but also knowing how to deactivate those things that may come up fairly quickly, so that you can take those opportunities for your users to go astray off the table.