MITRE is giving businesses a forum to securely reveal the AI security skeletons in their closet in exchange for the greater good.
Earlier this month, MITRE announced the launch of its AI incident-sharing initiative, which seeks to boost collective awareness of threats and defenses related to AI-enabled systems. As part of the initiative, the not-for-profit organization has rolled out a public-facing platform where organizations can submit information on incidents involving their AI-enabled systems; in exchange, they receive membership in MITRE’s community of data receivers, which grants access to protected and anonymized data on real-world AI incidents.
The new initiative is a part of MITRE’s Center for Threat-Informed Defense secure AI project, which looks to improve the community knowledge base of threats against AI-enabled systems. Collaborators on the project include CrowdStrike, Microsoft, and Cato Networks.
Quenching a need. Christina Liaghati, MITRE’s department manager for trustworthy and secure AI, told IT Brew that the initiative comes after the organization observed a growing need for real-world data on how AI-enabled systems are being “attacked in the wild.”
“The goal is to get more of that at-scale data outside of an organization’s single view in the hands of the community so they can really see and prioritize which AI assurance risks they’re mitigating as they’re deploying more and more AI-enabled systems,” she said.
Liaghati said that incidents shared with MITRE are sanitized and anonymized, allowing organizations to securely share cybersecurity events without worrying about reputational damage. She added that MITRE deliberately designed its AI-incident-sharing submission form with a limited number of required fields to make it easier for individuals to share information with the not-for-profit and “bring as much of the community up to that increased security awareness and preparedness as possible.”
Group effort. MITRE’s newly launched initiative comes less than a month after the House Science, Space, and Technology Committee approved the AI Incident Reporting and Security Enhancement Act, a bill that would require the National Institute of Standards and Technology to include AI system vulnerabilities in its national vulnerability database and examine the need for voluntary reporting related to AI security and safety incidents.
Liaghati told IT Brew that MITRE is “excited” to see the bill come into play in order to capture more data on vulnerabilities. She added that the proposed bill, MITRE’s AI-incident-sharing initiative, as well as other vulnerability and incident-sharing efforts on the market, all have “a place in the larger landscape” of solutions.
“There’s a big enough set of problems here that I think everybody can be working on different pieces,” she said.