AI Governance After RSA 2026: What Every Security Leader Needs to Know

Three governance frameworks dominated the conversation at RSA Conference 2026. Each addresses AI risk from a different angle, but together they signal a clear shift: organizations that use AI to build software now face real accountability for what that software does.

Here is what each framework requires, who it affects, and when enforcement begins.

1. What Changed at RSA 2026

OWASP Top 10 for Agentic Applications

Industry Standard — Voluntary

The first governance standard written specifically for AI agents that produce code. It catalogs the ten most critical risks introduced when AI systems generate, modify, or commit source code autonomously — including hallucinated dependencies, policy-violating patterns, and autonomous commits that bypass human review.

Who it affects: Any organization where AI tools contribute code to production repositories. This includes copilot-style assistants, code generation APIs, and autonomous agents that open pull requests or push changes.

Enforcement: Voluntary. Published in 2025, the standard is still new. Because it is the first framework to address agentic code risk specifically, expect it to inform audit criteria and vendor security assessments as AI-assisted development becomes standard practice.

NIST AI Risk Management Framework (AI RMF 1.0)

Federal Standard — Expected

The U.S. federal standard for managing AI risk across its full lifecycle. Organized around four functions — Govern, Map, Measure, Manage — it provides a structured approach for identifying where AI creates risk, quantifying that risk, and implementing controls to reduce it.

Who it affects: Government contractors, organizations in regulated industries (financial services, healthcare, defense), and any company that expects to do business with entities that follow NIST guidance.

Enforcement: Published January 2023. Executive Order 14110 (October 2023) directed federal agencies to use the AI RMF as the basis for managing AI risks, and subsequent OMB guidance (M-24-10 and its successors) requires agencies to document their AI governance policies. For government contractors, these requirements flow down through procurement. For private-sector organizations, adoption is voluntary but increasingly expected during enterprise procurement and vendor assessments.

EU AI Act

EU Regulation — Legally Mandatory

The world's first comprehensive AI law. It classifies AI systems by risk level (unacceptable, high, limited, minimal) and imposes obligations proportional to that risk. High-risk AI systems — which can include AI used in safety-critical software development — require conformity assessments, human oversight mechanisms, and detailed technical documentation.

Who it affects: Any organization that deploys AI systems within the EU or sells AI-powered products to EU customers. Extraterritorial scope means U.S. companies with European customers are in scope.

Enforcement: Already underway. Prohibited practices took effect February 2025. High-risk AI system obligations phase in through August 2027. Penalties reach 35 million euros or 7% of global annual turnover, whichever is higher.

The Common Thread: All three frameworks share a core expectation: if AI contributes to your software, you need to know where, how, and with what safeguards. The era of ungoverned AI-assisted development is ending.

2. Planning for Compliance

Knowing what the frameworks require is step one. Step two is figuring out where your organization stands today and what needs to change. Below are practical starting points for each framework.

Gap Assessment: Questions to Ask Your Team

Before investing in tooling or process changes, start with an honest inventory. These questions surface the gaps that matter most:

  • Where is AI writing code today? Most organizations undercount. Check IDE plugins, CI/CD pipeline tools, internal automation scripts, and third-party platforms that use AI under the hood.
  • What review process exists for AI-generated code? Is it the same as human-authored code, or does it bypass steps? Are reviewers aware when code was AI-generated?
  • Can you produce an evidence trail? If an auditor asked you to demonstrate governance over AI-assisted development for a specific release, could you do it today?
  • How do you handle AI-suggested dependencies? Hallucinated packages are a documented supply chain risk. Is anyone verifying that AI-recommended libraries actually exist and are trustworthy? (A verification sketch follows this list.)
  • Who owns AI governance decisions? If the answer is "nobody specifically," that is itself a finding.
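
For the dependency question above, verification can start small. Here is a minimal sketch in Python, assuming Python packages and the public PyPI JSON API (https://pypi.org/pypi/<name>/json); the package names below are hypothetical.

```python
import urllib.error
import urllib.request

def package_exists(name: str) -> bool:
    """Return True if the package name resolves on PyPI, False on a 404."""
    url = f"https://pypi.org/pypi/{name}/json"
    try:
        with urllib.request.urlopen(url, timeout=10) as resp:
            return resp.status == 200
    except urllib.error.HTTPError as err:
        if err.code == 404:
            return False
        raise  # anything other than "not found" deserves a look

# Hypothetical AI-suggested dependencies; the second is deliberately fake.
suggested = ["requests", "flask-jwt-validatorz"]
for pkg in suggested:
    verdict = "exists" if package_exists(pkg) else "NOT FOUND (possible hallucination)"
    print(f"{pkg}: {verdict}")
```

Existence is only the first gate: typosquatted packages do exist on public registries, so a production check should also weigh package age, maintainers, and download history.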

Stakeholder Mapping

AI governance is not a single-team problem. These are the groups that need to be in the room:

  • Engineering Leadership: Define which AI tools are authorized, enforce usage policies, and ensure AI-generated code meets the same quality and security standards as human-written code.
  • Security / AppSec: Assess new risk vectors introduced by AI tooling. Validate that existing security controls (SAST, SCA, secrets scanning) cover AI-generated outputs.
  • Compliance / GRC: Map framework requirements to existing controls. Identify gaps. Own the evidence collection strategy for audits and regulatory inquiries.
  • Legal: Evaluate the IP implications of AI-generated code. Assess EU AI Act obligations. Review vendor agreements for AI tool providers.
  • Procurement: Update vendor assessment questionnaires to include AI governance. Ensure third-party AI tools meet organizational risk thresholds.

Where the Frameworks Converge: Evidence Requirements

Despite their different origins and enforcement mechanisms, the three frameworks converge on five evidence categories. The breakdown below maps specific requirements from each framework to show where a single governance program can satisfy multiple obligations simultaneously.

Policy Documentation
  • OWASP Agentic (AGENT-01, AGENT-02): Recommends documented policies for agent tool access, scope limitations, and permission boundaries.
  • NIST AI RMF (GOVERN 1.1–1.7): Requires documented organizational policies covering the AI lifecycle. GOVERN 1.1 specifically requires legal and regulatory AI requirements to be "understood, managed, and documented."
  • EU AI Act (Articles 9, 17): Mandatory quality management system with documented policies covering design, testing, risk management, and post-market monitoring. Non-compliance: fines up to 3% of global turnover.

Enforcement Records
  • OWASP Agentic (AGENT-09): Recommends comprehensive logging of agent actions, decisions, tool invocations, and outputs; audit trails that capture what the agent did and why.
  • NIST AI RMF (GOVERN 1.4, MEASURE 2.5): Requires ongoing monitoring with documented results. AI system performance must be "examined and documented" through testing and validation.
  • EU AI Act (Articles 12, 19, 26(5)): Mandatory automatic logging over the AI system's lifetime, with a minimum six-month log retention. This is the most prescriptive enforcement-record requirement across the three frameworks.

Risk Assessments
  • OWASP Agentic: The Top 10 itself is a risk catalog (AGENT-01 through AGENT-10). It provides threat categories and attack scenarios for use in assessments but does not mandate a formal assessment process.
  • NIST AI RMF (MAP 1.1–3.2, MEASURE 1.1–3.3): The most comprehensive risk methodology of the three. The MAP function requires documenting intended uses, identifying potential harms, and contextualizing risks.
  • EU AI Act (Articles 9, 27): Mandatory risk management system operating as a "continuous iterative process" throughout the AI lifecycle. Article 27 requires fundamental rights impact assessments for qualifying deployers.

Incident Response
  • OWASP Agentic (AGENT-09, AGENT-10): Recommends alerting on anomalous behavior, kill switches, and human override capabilities. Incident response is addressed indirectly through logging and containment.
  • NIST AI RMF (MANAGE 4.1): Explicitly names incident response as an expected component and requires post-deployment monitoring plans, including "mechanisms for capturing and evaluating input from users."
  • EU AI Act (Article 73): Mandatory serious incident reporting to market surveillance authorities within 15 days. Article 20 requires corrective action, including potential withdrawal or recall.

Training & Human Oversight
  • OWASP Agentic (AGENT-06, AGENT-10): Recommends human-in-the-loop controls for high-stakes decisions and training users not to blindly trust agent outputs.
  • NIST AI RMF (GOVERN 2.1, 4.1–4.3): Requires documented roles, responsibilities, and organizational competency in AI risk management, with senior leadership accountability expected.
  • EU AI Act (Articles 4, 14, 26(2)): Article 4 (already in force) requires AI literacy for all staff dealing with AI systems. Article 14 requires high-risk systems to be designed for effective human oversight. Article 26(2) requires competent, trained human overseers.

The Convergence Opportunity: An organization that implements automatic logging with retention (EU AI Act Art. 12), structured risk assessment using NIST MAP/MEASURE methodology against OWASP's agentic threat categories, with documented policies and human oversight controls, satisfies the core evidence requirements across all three frameworks simultaneously. One governance program. Three frameworks covered.
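
To make that convergence concrete, here is a minimal sketch of the mapping expressed as data, so coverage can be queried and reported automatically. The control IDs come from the breakdown above; the schema, category keys, and reporting function are illustrative assumptions, and only three of the five categories are shown.

```python
# Illustrative control map: evidence category -> framework -> control IDs.
# The IDs mirror the convergence breakdown above; the schema is an assumption.
CONTROL_MAP: dict[str, dict[str, list[str]]] = {
    "policy_documentation": {
        "OWASP Agentic": ["AGENT-01", "AGENT-02"],
        "NIST AI RMF": ["GOVERN 1.1-1.7"],
        "EU AI Act": ["Art. 9", "Art. 17"],
    },
    "enforcement_records": {
        "OWASP Agentic": ["AGENT-09"],
        "NIST AI RMF": ["GOVERN 1.4", "MEASURE 2.5"],
        "EU AI Act": ["Art. 12", "Art. 19", "Art. 26(5)"],
    },
    "incident_response": {
        "OWASP Agentic": ["AGENT-09", "AGENT-10"],
        "NIST AI RMF": ["MANAGE 4.1"],
        "EU AI Act": ["Art. 73", "Art. 20"],
    },
}

def coverage_report(evidence_category: str) -> None:
    """Print which controls a single evidence artifact helps satisfy."""
    print(f"Evidence category: {evidence_category}")
    for framework, controls in CONTROL_MAP[evidence_category].items():
        print(f"  {framework}: {', '.join(controls)}")

# One logging-and-retention program, three frameworks covered:
coverage_report("enforcement_records")
```

A map like this is also what gives auditors the traceability discussed under framework mapping below: each finding or artifact can point at the exact controls it supports.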

Common Blind Spot: The most frequent gap we see in the market: organizations have governance over their primary AI coding tool but no visibility into secondary usage. A developer using an AI chatbot to generate a utility function, then pasting it into a PR, bypasses every control in the pipeline. Governance needs to account for this.

3. Implementation Considerations

Once you understand the requirements and your gaps, the question becomes how to close them. Here are the decisions that determine whether your AI governance program actually works or becomes shelf-ware.

Build vs. Buy: Evaluation Criteria

Some organizations will build internal governance tooling. Others will buy. Most will do both. The right answer depends on your specific situation, but here are the criteria that matter:

  • Coverage across the development lifecycle. AI-generated code can enter your codebase at the IDE, through pull requests, via CI/CD pipelines, or through automated agents. Point solutions that only cover one stage leave gaps.
  • Evidence generation vs. evidence collection. Tools that generate structured compliance evidence as a byproduct of enforcement are fundamentally different from tools that require manual documentation after the fact. The former scales; the latter does not.
  • Policy flexibility. Your governance requirements will evolve as frameworks mature, as your AI usage expands, and as your risk posture changes. Tooling that hard-codes policies rather than letting you define and update them will create technical debt.
  • Framework mapping. Can the tooling map its findings to specific framework controls (OWASP agentic risks, NIST AI RMF functions, EU AI Act obligations)? Auditors need this traceability.
  • Developer experience. Governance tooling that creates friction will be circumvented. The best implementations are invisible to developers during normal work and only surface when something actually needs attention.

What "Good" Looks Like

Organizations that are ahead on AI governance share a few characteristics:

Continuous enforcement, not periodic scanning. AI-assisted development moves fast. A quarterly scan of your codebase will miss thousands of commits. Effective governance evaluates every change as it happens, at the speed of your development process.

Structured evidence, not manual documentation. When compliance evidence is generated automatically — as a machine-readable artifact tied to specific commits, reviews, and policy evaluations — audit preparation drops from weeks to hours. When evidence lives in spreadsheets maintained by hand, it is always incomplete and usually stale.
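
As a sketch of what evidence-as-a-byproduct can look like, the following example evaluates one toy policy, an allowlist on dependencies newly added to requirements.txt, against a commit and emits a machine-readable record. The policy, allowlist, and record fields are illustrative assumptions, not a prescribed schema.

```python
import json
import subprocess
from datetime import datetime, timezone

APPROVED_DEPS = {"requests", "pydantic"}  # hypothetical allowlist

def new_deps_in_commit(sha: str) -> set[str]:
    """Names of dependencies added to requirements.txt in one commit."""
    diff = subprocess.run(
        ["git", "show", "--unified=0", sha, "--", "requirements.txt"],
        capture_output=True, text=True, check=True,
    ).stdout
    added = [line[1:].strip() for line in diff.splitlines()
             if line.startswith("+") and not line.startswith("+++")]
    # Naive parse: handles "name" and "name==version" pins only.
    return {line.split("==")[0] for line in added if line}

def evaluate(sha: str) -> dict:
    """Evaluate the policy for one commit and return an evidence record."""
    violations = sorted(new_deps_in_commit(sha) - APPROVED_DEPS)
    return {
        "commit": sha,
        "policy": "approved-dependencies-only",
        "result": "fail" if violations else "pass",
        "violations": violations,
        "evaluated_at": datetime.now(timezone.utc).isoformat(),
    }

if __name__ == "__main__":
    # Run per commit in CI; ship the JSON to your evidence store.
    print(json.dumps(evaluate("HEAD"), indent=2))
```

Run on every push, records like this accumulate into exactly the commit-tied audit trail described above, with no manual documentation step.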

Governance as code. The most mature organizations define their AI governance policies the same way they define their infrastructure: as version-controlled, reviewable, testable artifacts. This makes policy changes auditable and rollbacks possible.
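
A hedged sketch of the simplest version of this: the policy lives in the repository as data, and a unit test makes unsafe policy edits fail CI like any other regression. The schema, rule IDs, and invariants below are illustrative assumptions, not a standard.

```python
import unittest

# Version-controlled policy definition (illustrative schema, not a standard).
POLICY = {
    "name": "ai-assisted-development",
    "version": "1.2.0",
    "rules": [
        {"id": "R1", "require": "human_review", "applies_to": "ai_generated"},
        {"id": "R2", "require": "dependency_allowlist", "applies_to": "all"},
    ],
}

class PolicyInvariants(unittest.TestCase):
    """Policy edits must pass these checks before they can merge."""

    def test_every_rule_is_well_formed(self):
        for rule in POLICY["rules"]:
            self.assertIn("id", rule)
            self.assertIn("require", rule)
            self.assertIn("applies_to", rule)

    def test_ai_generated_code_always_requires_human_review(self):
        requirements = {r["require"] for r in POLICY["rules"]
                        if r["applies_to"] in ("ai_generated", "all")}
        self.assertIn("human_review", requirements)

if __name__ == "__main__":
    unittest.main()
```

Because the policy is an ordinary versioned file, every change carries an author, a review, and a diff, and rolling back is a revert rather than an archaeology project.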

Common Pitfalls

Finally, the mistakes we see organizations make repeatedly:

  • Treating AI governance as a point-in-time audit. A gap assessment is a starting point, not an endpoint. AI tooling evolves monthly. New models, new capabilities, new risks. Governance must be continuous.
  • Underestimating the velocity of AI-assisted development. Teams using AI coding tools produce code at 5–10x the rate of manual development. Your governance infrastructure needs to handle that throughput without becoming a bottleneck.
  • Waiting for frameworks to finalize. The EU AI Act's prohibited-practices provisions are already in force, and its AI literacy obligation (Article 4) has applied to all organizations using AI systems since February 2025. The NIST AI RMF already shapes federal procurement expectations. The OWASP Agentic Top 10, while newly published, fills a gap that auditors have been unable to address until now. Organizations that wait for perfect clarity will find themselves catching up to competitors who started early.
  • Bolting AI governance onto existing tools. Traditional SAST and SCA tools were designed for human-written code. AI-generated code introduces different risk patterns — hallucinated packages, insecure-but-plausible patterns, policy violations that are syntactically correct but semantically wrong. Governance tooling needs to understand these distinctions.
  • Ignoring the supply chain dimension. Your organization may govern its own AI usage carefully, but what about your vendors? Your open-source dependencies? AI-generated code is entering the software supply chain at every level. Procurement and vendor management need to be part of the governance conversation.

Where to Start: If you take one step this quarter, make it this: conduct an inventory of every place AI contributes code to your production systems. You cannot govern what you have not mapped. That inventory becomes the foundation for gap assessment, stakeholder alignment, and tooling decisions across all three frameworks.
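
One way to seed that inventory is to scan for the traces AI tooling leaves behind: tool configuration files checked into repositories and AI co-author trailers in commit history. A minimal sketch in Python; the marker filenames and trailer pattern below are starting-point assumptions to adapt, not an exhaustive or authoritative list.

```python
import subprocess
from pathlib import Path

# Traces some AI coding tools leave in a repo (assumed markers; extend locally).
CONFIG_MARKERS = [".cursor", ".aider.conf.yml", ".github/copilot-instructions.md"]
# Co-author trailers some tools and teams add to AI-assisted commits.
TRAILER_PATTERN = r"Co-authored-by:.*(Copilot|Claude|Cursor|aider)"

def scan_repo(repo: Path) -> dict:
    """Report AI-tool config files and AI co-authored commits for one repo."""
    configs = [m for m in CONFIG_MARKERS if (repo / m).exists()]
    log = subprocess.run(
        ["git", "-C", str(repo), "log", "--extended-regexp",
         f"--grep={TRAILER_PATTERN}", "--oneline"],
        capture_output=True, text=True,
    ).stdout
    return {
        "repo": repo.name,
        "ai_tool_configs": configs,
        "ai_coauthored_commits": len(log.splitlines()),
    }

if __name__ == "__main__":
    # Assumes the current directory holds one checkout per repository.
    for path in sorted(Path(".").iterdir()):
        if (path / ".git").exists():
            print(scan_repo(path))
```

A scan like this will undercount, as the blind-spot note above explains, so treat it as the floor of the inventory, not the ceiling, and pair it with a survey of teams and procurement records.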

Chuck McWhirter

Founder & CEO, MergeGuide

Cybersecurity veteran with nearly three decades of experience spanning malware analysis, application security, and security operations. U.S. Air Force veteran (Air Force CERT), CISSP since 2000. Previously led solutions architecture teams at ReversingLabs, McAfee, and ArcSight. Founded MergeGuide to solve the governance gap created by AI-assisted development.
