Who needs chatbots when you have a panel of IT pros to ask your burning security questions? We asked practitioners: How do you ensure data is secure when everyone is hooked on ChatGPT?
Babson College Chief Information Officer Patty Patria shared how the Massachusetts school enforces restrictions on certain LLMs and trains employees on acceptable use.
“You either need to use what we’re providing to you, or if it’s a student or faculty and they want to use their own proprietary data, use something that we’re paying for that’s closed,” Patria told us in a June 2 story.
Here’s what other industry pros had to say:
Responses have been edited for length and clarity.
Pete Nicoletti, global CISO, Americas, Check Point Software Technologies: ChatGPT has a couple of different levels…The unlicensed [version] goes into a big mosh pit of everybody else’s data. The licensed [version] is going to give you some exclusivity of your data being sequestered.
Melissa Ruzzi, director of AI, AppOmni: Can you 100% always be sure [your data is secure]? No. Never. Can someone open a different browser where they’re not logged in, use a free version, and copy and paste information there? Yes, that can always happen. That’s the big problem. That goes back to the training. But they will only do that if they don’t have an option. Why would you do that on purpose? If you have a ChatGPT enterprise solution for the company, why would you open another browser to go to the free version and put data there?
Joshua McKenty, CEO, Polyguard: Uploading that spreadsheet into ChatGPT isn’t triggering the same red flags in people’s minds of thinking, “Oh, this is basically the same as giving it to a stranger on the internet.”
Cristian Rodriguez, field CTO, Americas, CrowdStrike: I think it’s just more of treating that AI model as an exfiltration point, very similar to what you would do with other DLP initiatives that you have in your environment. A managed approach incorporates protecting your sensitive data from egressing, and then knowing that there’s a way to classify whether that egress was attached to some type of AI model. I think it’s just doubling down on those controls that have existed for quite some time.
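In practice, that classification step can be as plain as a proxy-side rule that labels outbound traffic by destination and payload. Here’s a minimal sketch of the idea; the domain list, patterns, and function name are illustrative stand-ins, not CrowdStrike’s actual tooling:

```python
import re

# Hypothetical blocklist of public AI endpoints; a real deployment would pull
# this from a maintained category feed in the proxy or CASB.
AI_ENDPOINTS = {"chat.openai.com", "api.openai.com", "gemini.google.com"}

# Toy patterns standing in for real DLP classifiers.
SENSITIVE = [
    re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),   # US Social Security number
    re.compile(r"\bAKIA[0-9A-Z]{16}\b"),    # AWS access key ID
]

def classify_egress(host: str, body: str) -> str:
    """Label an outbound request so existing DLP alerting can act on it."""
    to_ai = host.lower() in AI_ENDPOINTS
    leaky = any(p.search(body) for p in SENSITIVE)
    if to_ai and leaky:
        return "block"   # sensitive data headed to an AI model
    if to_ai:
        return "log"     # AI usage observed, nothing sensitive detected
    return "allow"

print(classify_egress("api.openai.com", "ssn: 123-45-6789"))  # -> block
```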
McKenty: The more savvy institutions like JPMorgan Chase, for instance, [said] “Hey, let’s stand up an agent in-house that all of the traders and quants can use as if they’re using ChatGPT. But we know the data is not leaving the building.” And they rushed to do that because they don’t want to be in a position of saying, “No, you can’t use AI.” They want to be able to say, “Hey, instead of using ChatGPT, use this one that we run that’s safe.”
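The appeal of that approach is that the developer experience barely changes. As a rough sketch, an OpenAI-compatible client can simply be pointed at an internal gateway; the URL, token, and model name below are hypothetical, but the pattern keeps prompts on the corporate network:

```python
from openai import OpenAI

# Hypothetical internal gateway; prompts never leave the building, while the
# client code looks identical to calling ChatGPT's API.
client = OpenAI(
    base_url="https://llm.internal.example.com/v1",
    api_key="internal-token",  # issued by the company, not OpenAI
)

resp = client.chat.completions.create(
    model="in-house-model",  # whatever the internal gateway serves
    messages=[{"role": "user", "content": "Summarize today's trade blotter."}],
)
print(resp.choices[0].message.content)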
Matt Radolec, VP, incident response and cloud operations, Varonis: You want to limit the amount of data that that person has access to, and make sure that it’s not outside of their role. And then also, you want to police the prompts…and make sure that people aren’t either asking questions of a ChatGPT that they shouldn’t be, or that they’re not uploading or creating data that’s outside of their job function. Some of the things that we’ve seen, for instance, are IT people looking for salary information, or traders looking for trades made by other traders that would otherwise be highly confidential.
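Policing prompts by role can start as a simple policy check run before a prompt ever reaches the model. A minimal sketch, using Radolec’s own examples (IT staff hunting salary data, traders peeking at colleagues’ trades); the role names and patterns are assumptions for illustration:

```python
import re

# Hypothetical policy: topics each role is NOT cleared to ask about.
OFF_LIMITS = {
    "it":     [re.compile(r"\bsalar(y|ies)\b", re.I)],
    "trader": [re.compile(r"\b(other|colleague).{0,40}\btrades?\b", re.I)],
}

def screen_prompt(role: str, prompt: str) -> bool:
    """Return True if the prompt is allowed for this role, False to flag it."""
    return not any(p.search(prompt) for p in OFF_LIMITS.get(role, []))

print(screen_prompt("it", "What is the CFO's salary?"))  # -> False (flagged)
```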
Chaim Mazal, chief security officer, Gigamon: We don’t allow any uploads of documents to any of these open-source platforms. We also don’t allow any copying and pasting of data into these open-source platforms…anything where we have the introduction of intellectual property into these tools goes against our policy; we disallow it through a couple of smart mechanisms that we put in place. Ideally, just like any other piece of commercial software, if your company finds value in it, you should invest in it. You should pay for it, and it will give you peace of mind.
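One such mechanism might be a forward-proxy rule that rejects file uploads, i.e., multipart POSTs, bound for unsanctioned AI domains. This is a minimal sketch of the concept only; the domain list and logic are assumptions, not Gigamon’s actual controls:

```python
# Hypothetical list of unsanctioned consumer AI platforms.
UNSANCTIONED = {"chat.openai.com", "claude.ai", "gemini.google.com"}

def should_block(method: str, host: str, content_type: str) -> bool:
    """Block document uploads (multipart POSTs) to unsanctioned AI domains."""
    is_upload = method == "POST" and content_type.startswith("multipart/form-data")
    return is_upload and host.lower() in UNSANCTIONED

print(should_block("POST", "claude.ai", "multipart/form-data; boundary=x"))  # -> True
```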