NIST wants your help to secure AI agents
NIST is looking for “concrete examples, best practices, case studies, and actionable recommendations” from agentic developers and other stakeholders who deploy agents.
Brianna Monsanto is a reporter for IT Brew who covers news about cybersecurity, cloud computing, and strategic IT decisions made at different companies.
Closed mouths don’t get fed, especially when it comes to figuring out how to properly secure agentic AI systems. So, it should be no surprise that the National Institute of Standards and Technology (NIST) has turned to the public for answers.
On Jan. 8, the Center for AI Standards and Innovation (CAISI) at the Department of Commerce published a request for information (RFI) seeking commentary from security researchers, agentic developers, and other stakeholders that could bolster its guidelines on agentic AI. (CAISI is housed within NIST and was formerly the US AI Safety Institute.)
What NIST wants to know. In its request, CAISI says it is looking for “concrete examples, best practices, case studies, and actionable recommendations” from stakeholders who have deployed and managed agentic systems. Specifically, the agency wants comments that will help it better understand current risks and vulnerabilities affecting agentic systems, which existing practices may need to be revamped for the budding agentic era, and how to evaluate the security of such technologies, among other things.
NIST + AI. The RFI is one of NIST’s latest efforts to support AI innovation. Last year, NIST published a concept paper proposing guidelines for securing AI systems using its SP 800-53 security-controls framework. In 2023, the agency launched its Trustworthy and Responsible AI Resource Center, a central hub for AI guidance.
Deadline. NIST will accept comments until March 9. According to the RFI, responses may “inform CAISI’s work evaluating the security risks associated with various AI capabilities, assessing security vulnerabilities of AI systems, developing evaluation and assessment measurements and methods, generating technical guidelines and best practices to measure and improve the security of AI systems, and other activities related to the security of AI agent systems.”