Securing AI: Six steps to enable trusted innovation while addressing risk
A roadmap to integrate AI security into enterprise operations, from initial discovery to continuous validation
• 16 min read
Everyone is talking about AI security. But what is it, exactly? The long definition is that AI security is the practice of extending traditional cybersecurity to safeguard AI systems by protecting data, models, and actions from emerging risks like bias, tampering, and adversarial attacks.
The short one: AI security is all about keeping your AI systems safe and sound through cybersecurity.
Businesses are already leveraging agentic AI and GenAI tools to accelerate innovation and accomplish low-level tasks. But application and infrastructure security teams aren’t included early enough to ensure these enhancements and new AI use cases are secure and effective, which leaves organizations exposed to growing AI risks.
In the mad dash to adopt AI, many enterprises have focused on rapid enablement and expanding capabilities, hoping their security programs can keep up. Spoiler alert: They struggle to keep pace. Pilot projects are outpacing protection, scattering accountability, and widening the gap between what AI can do and what organizations can control.
Instead of rebuilding security programs from scratch, organizations are working to apply an AI lens to their existing frameworks—often retroactively—because many AI deployments bypass typical security steps and controls. Other organizations are using what they’ve got, with an upgrade. They’re extending and adapting their current programs with the right updates and consistency.
At a minimum, every organization should establish clear policies and governance before any AI tools are leveraged to avoid building on an inefficient, ineffective, and unstable foundation that undermines long-term security and trust.
It all comes down to one fundamental challenge: AI creates new dependencies and responsibilities that traditional controls weren’t designed to manage. It significantly expands the attack surface, exposing new vulnerabilities like bias, hallucination, model tampering, social influence engines, and unpredictable decision-making.
These systems can also act autonomously on behalf of users, often with elevated privileges—making strong authentication and authorization essential to prevent abuse. Security frameworks must now account for systems that can act autonomously and steadily change how they operate. Securing these systems isn’t just about new policies and tools; it demands a disciplined, programmatic approach that can keep pace with AI’s rapid evolution.
And executing on that mission tops the CEO’s agenda as well: 82% of leaders in a recent KPMG CEO survey* cited cybersecurity as their company’s top threat amid AI’s growing risks.
For security teams, the way forward is to evolve existing programs—strengthening what already works while layering on the governance, validation, and continuous monitoring needed to manage the many “we haven’t seen this before” challenges of AI.
Here are six practical steps to make that happen. Think of it as a roadmap to help you move from high-level discussion to tactical execution.
Ready to see what it takes to establish a scalable, trusted AI security program built on leading strategies, relentless innovation, and an enterprise-wide commitment to trust?
Let’s dig in.
Step 1: Define your AI security strategy
Before any AI or technology initiatives begin, the chief information security officer (CISO) and team need to have a good grasp of the organization’s overall AI and GenAI language models’ plans and goals—and where security fits within that strategy. That’s because AI is moving faster than most security programs can adapt, which means that even early engagement won’t guarantee effective governance.
Effective AI risk management starts with coordinated governance and clear accountability. Security leaders (including CISOs), technology teams (such as CIOs), and business stakeholders should embed visibility and controls into every AI initiative—not as an afterthought, but as part of the design. Large language models need to be integrated into risk frameworks early, with ownership clearly defined and security aligned to enterprise goals. When strategy and security evolve together, organizations scale AI faster and with greater consistency.
That’s why it’s so important to establish a clear, AI-specific security strategy from the start. It brings alignment quickly, defines ownership, sets measurable goals for secure adoption, and matches enterprise ambition with informed governance. That’s how security leaders move from gatekeepers to enablers of innovation—by embedding trust, compliance, and durability into every AI decision from day 1, rather than bolting them on later.
Making it happen
Clarify enterprise AI objectives: Partner with business and data leaders to pinpoint where AI will create value (like operations, customer engagement, finance, etc.) and know what data, technology, and resources need to be leveraged.
Build cross-functional alignment: Establish a working group of stakeholders from security, data, compliance, legal, and key business units to coordinate policy updates and communicate risk priorities to leadership. You know, your AI security dream team.
Define the AI security mandate: Translate enterprise goals into a strategy for securing AI and document accountability through formal AI security responsibilities across domains such as data protection, access governance, model assurance, and continuous monitoring. Don’t forget to document escalation paths for AI-related risk.
Set measurable outcomes: Determine how success will be tracked—for example, a steady reduction in unmanaged use cases and faster validation cycles—and align metrics with enterprise key performance indicators (KPIs).
Integrate cyber risks into the corporate risk register: Formally document and track cybersecurity and AI-related exposures alongside other enterprise risks, closing gaps that are too often missed in traditional risk management.
Step 2: Know where you are
You can’t secure what you don’t know. So once your overall security strategy is in place, your next big priority is visibility. Many organizations advance quickly in AI experimentation but lag a little behind in establishing guardrails and control. Most are still in the early phases of maturity—experimenting, not governing.
Real progress comes when security is built in right from the start. Here’s how: by secure-by-design practices, testing and validation, and continuous runtime monitoring of these solutions throughout the AI life cycle.
A critical goal is to create a single, trusted view of the enterprise’s AI landscape: what systems exist, who owns them, how they operate, and where the potential risks lie. This means identifying and defining everything that qualifies as AI—including systems, processes, and agentic workflows that are often overlooked—and validating which assets truly include AI components.
That comprehensive visibility is the basis for every control, validation, and monitoring decision that follows. Mature AI security programs treat this attention to detail as an ongoing life cycle, from discovery through validation and monitoring, so that both visibility and assurance grow together over time.
Making it happen
Run an AI maturity diagnostic: Begin with a structured assessment across technology, governance, and people. Benchmark against NIST frameworks or other leading standards to identify strengths and close critical gaps.
Map your AI footprint: Inventory all models, datasets, and third-party integrations in use, whether established or experimental. Identify and include “shadow AI” tools that may be operating outside formal oversight. Use this to support compliance reporting and ongoing validation.
Tier your risks: Develop a risk assessment that covers the cross-functional domains as well as cyber. Then document the risk score of the system informed by overarching business criticality, data sensitivity, and exposure level. Apply those tiers to prioritize testing, controls, and monitoring.
Step 3: Strengthen your security framework for AI
AI security is the next evolution of cybersecurity, not a separate discipline. Rather than rebuilding from scratch, leading organizations are expanding their existing programs to account for AI’s materially different risk profile. That means updating core domains—identity and access management, data protection, and application security—to accommodate automated business processes, AI-related data flows, and decision logic.
The goal here is to strengthen the foundation already in place by aligning established cyber practices with the dynamic behavior of AI systems. Once you’ve got it down, this integration helps support AI initiatives in scaling securely, with consistency, transparency, and control.
Making it happen
Detail AI’s impact on existing domains: Review the major cybersecurity arenas through an AI lens: identity, access, privacy, data protection, application security, incident response…you get the picture. Then determine AI’s impact and where processes must evolve to handle things like securing the usage of MCP servers, standardized logging approaches for agents’ invoicing tools, and establishing observability across AI systems and agent calls.
Integrate AI into governance routines: Embed AI-related risk discussions into standing security councils and change management groups. Make it required for all AI use cases to follow the same intake, approval, and documentation processes as any other critical technology.
Extend existing frameworks: Build on what already works. For example, if your organization follows the NIST Risk Management Framework (RMF), align your existing RMF controls and processes with the emerging NIST AI RMF—mapping current security and privacy safeguards to new AI risk areas such as data quality, model transparency, and accountability.
Reinforce accountability: Update policies and job roles so that ownership of AI systems is explicit, from model development and deployment to continuous validation and monitoring.
Automate for scale: As AI adoption grows, introduce automation through AI TRiSM (trust, risk, and security management) or discovery tools to streamline oversight, detect policy violations, and flag unapproved model usage.
Step 4: Build and integrate effective controls
With core security domains updated, the next priority is embedding targeted AI controls. A unified control framework that spans security, privacy, and compliance creates the guardrails to make AI innovation safe and defensible. Controls must fit seamlessly into existing processes, evolving with models and regulations while remaining measurable and auditable.
Organizations that embed AI controls into their established cyber and risk frameworks gain consistency, accelerate validation, and maintain confidence that their overall approach continues to work like it’s supposed to, even as systems and threats evolve.
Making it happen
Map AI risks to recognized frameworks: Align program design with frameworks and standards like the NIST AI RMF, ISO/IEC 42001, and the EU AI Act. This ensures controls address regulatory expectations while staying consistent with enterprise risk strategies.
Define clear control categories: Focus on key areas (e.g., model integrity, data provenance, access management, output validation, auditability, business ownership) and specify how each will be monitored and reported.
Evaluate control effectiveness and resilience: When defining and evaluating AI controls, make sure you have clear timelines for how to move from design to fully operational controls. Include expectations for incident response, business continuity, and disaster recovery planning so the business can maintain operations or quickly resume in the event of a control failure, to keep disruption at a minimum and prioritize resilience.
Avoid the parallel-governance trap: Embed AI control checks directly into existing change management workflows, risk registers, and assurance testing instead of creating a separate process. That helps you maintain order.
Require validation before release: Make independent review and formal attestation of a standard gate before any AI system and agentic workflow goes live, supported by documented testing and sign-off confirming controls are effective.
Step 5: Conduct rigorous validation and testing
Controls are only as effective as the testing behind them.
Validation turns governance from a checklist into a living practice, demonstrating that controls work, risks are contained, and AI systems behave as intended. Testing must be systematic, repeatable, and continuous across the model life cycle.
The goal is to confirm that every AI system, from development to deployment, meets its defined security and compliance standards, and that vulnerabilities are caught ASAP.
Making it happen
Build testing into development: Treat validation as part of the build process, not a final step. Integrate AI testing checkpoints into continuous integration and deployment (CI/CD) pipelines to catch issues before release.
Apply the right depth of scrutiny: Tailor validation to each system’s risk tier—models with higher business impact or sensitive data exposure require deeper, more frequent testing.
Use multiple testing methods: Combine static and dynamic testing (SAST, DAST, and SCA) with AI-specific techniques such as adversarial/red-teaming (including prompt injection and data-poisoning simulations).
Test your response readiness: Run an annual table-top exercise with business executives to make sure there are no gaps in understanding response protocols.
Document and attest: Keep formal records of testing results through AI system cards or equivalent reports. These records build traceability and support both internal assurance and regulatory readiness.
Close the loop on findings: Create a defined feedback process so vulnerabilities identified in validation feed directly into risk remediation, model retraining, or control enhancement.
Step 6: Ensure continuous monitoring and adaptive security
AI models evolve constantly, which means the threats that target them are constantly evolving, too.
Once controls and validation are in place, the next objective to tackle is comprehensive, continuous oversight. Monitoring confirms that systems stay within approved risk thresholds as they learn, retrain, and interact with new data.
The goal? To move from periodic checks to real-time visibility, using automation and AI-driven analytics to detect anomalies early, respond quickly, and sustain assurance as the environment changes.
Making it happen
Establish runtime monitoring: Track model drift, data exfiltration, and performance anomalies in real time. Integrate automated alerts with security operations for unified incident response.
Correlate AI signals with enterprise risk: Feed AI-specific telemetry—access patterns, model outputs, training-data changes—into enterprise risk dashboards to connect technical activity with business impact.
Automate adaptive response: Use machine learning and workflow automation to reevaluate controls dynamically and retrain models when thresholds are breached. This is how you can keep your security posture current with automated support.
Refine through threat intelligence: Integrate insights on emerging AI attack techniques into the monitoring environment to anticipate and mitigate new risks before they get out of control.
Assess and retrain the workforce: Test employees regularly on AI security protocols and retrain as needed to keep awareness and readiness on track with evolving threats.
Extend capabilities and coverage: Extend monitoring capacity through managed detection and response or other 24/7 assurance services. These models combine human expertise with automated analytics to maintain continuous protection at scale.
Building a culture of trusted AI
AI security doesn’t belong to just one team. It’s an enterprise responsibility that depends on shared trust, transparency, and accountability. Every business function has a role in governing how AI is designed, deployed, and refined. But turning that principle into practice requires clear leadership and integration.
That’s where the CISO comes in. Their mission now extends beyond protecting systems to architecting how AI operates safely across the organization. The CISO connects the technical, ethical, and regulatory threads, aligning cyber, data, and compliance teams so every new AI use case enters a controlled, measurable environment.
This doesn’t require owning every decision. Instead, the CISO needs to make sure that each decision falls within consistent, enforceable boundaries that expand as adoption grows.
From there, the mandate becomes execution. Securing AI is the next evolution of the core cybersecurity mission: visibility, validation, and accountability at speed. The best programs build frameworks that can absorb change, automate assurance, and learn as fast as the models they protect.
That’s how CISOs can lead both innovation and protection at the same time—and move trusted AI from aspiration to reality.
How KPMG can help
Implementing an AI security program requires a bit of a trifecta: tools, talent, and testing capacity to sustain AI security. KPMG Cyber Managed Services can help organizations put trusted AI into action through a flexible mix of consulting engagements and managed offerings, including managed, comanaged, and advisory support.
Our services include:
- AI security assessment and readiness: Evaluate AI maturity, identify governance and control gaps, and benchmark against leading frameworks such as the NIST AI RMF.
- Developing IAM capabilities for AI: Assess and uplift your existing IAM governance structures to help ensure coverage for AI systems and NHI (nonhuman identities) for AI agents.
- Managed testing and validation: Conduct continuous red-teaming, adversarial testing, and model validation through the KPMG Cyber Managed Testing platform, combining automation, threat intelligence, and experienced oversight.
- Continuous monitoring and response: Deliver 24/7 visibility into model drift, data exfiltration, and policy violations, integrated with your existing SOC operations.
- Governance integration and automation: Embed AI risk controls and reporting into enterprise risk management systems and automate assurance through AI TRiSM and workflow orchestration tools.
- Program management and enhancement: Provide ongoing support for roadmap execution, control attestation, and regulatory readiness as AI systems scale.
*2025 KPMG U.S. CEO Outlook Survey: https://kpmg.com/us/en/articles/2025/ceo-outlook-gated.html#outlook
Top insights for IT pros
From cybersecurity and big data to cloud computing, IT Brew covers the latest trends shaping business tech in our 4x weekly newsletter, virtual events with industry experts, and digital guides.
By subscribing, you accept our Terms & Privacy Policy.