RSAC 2026: Every Attack Involves AI. And Nobody Owns the Defense.
Table of Contents
- Introduction: The Week Capability Outpaced Control
- Section 1 — The Offense Picture: What SANS Presented at RSAC
- Section 2 — The Defense Gap: Nobody Owns Agent Access
- Section 3 — What Good Looks Like: Early Responses Worth Watching
- Section 4 — The Leadership Imperative: Board-Level Discipline
- Close: The Question Every Leader Must Answer
- Frequently Asked Questions
Introduction: The Week Capability Outpaced Control
RSAC 2026 ended with a clean verdict from the SANS Institute: for the first time in the conference's 25-year history, every single dangerous attack technique on their annual list involves AI.
Not most of them. All of them.
The same week, a CSA survey landed that should be read alongside that finding. When researchers asked enterprise security teams who owns AI agent access at their organizations, the answers exposed a vacuum: 43% of organizations use shared or generic service accounts for their AI agents, and 12% aren't sure how their agents even authenticate.
These two facts belong in the same sentence. Offense is fully AI-enabled. Defense has an ownership vacuum.
Capability has outpaced control — not just at the technical layer, but at the organizational and governance layer. The enterprises that will navigate the next 18 months well are the ones that close this gap now, deliberately and at the leadership level.
Section 1 — The Offense Picture: What SANS Presented at RSAC
The SANS Institute's Top 5 Most Dangerous Attack Techniques keynote at RSAC 2026 (March 24) was the clearest statement yet about where the threat landscape has moved.
The headline: every technique on the list involves AI. Not as a feature — as a core enabler.
Zero-days at token cost. AI-powered fuzzing and vulnerability discovery has compressed the economics of finding exploitable flaws.
454,000 malicious packages. AI-generated malicious code packages have flooded open-source repositories at volumes that overwhelm signature-based detection.
8-minute domain takeover. SANS demonstrated attackers escalating from initial intrusion to full domain admin in 8 minutes using AI-driven attack workflows. Incident response plans written for days-to-weeks timelines are structurally mismatched to this attack speed.
AI-assisted forensics as an attack tool. The Protocol SIFT demonstration was striking: Claude Code completed what SANS described as a 3-day forensic investigation in 14 minutes and 27 seconds. That capability in the hands of threat actors means attackers can analyze compromised environments and plan lateral movement faster than defenders detect the initial intrusion.
The pattern across all five: AI has compressed attack timelines while expanding attack surface coverage. Defenses calibrated to human-speed adversaries are operating out of sync.
Section 2 — The Defense Gap: Nobody Owns Agent Access
Against that offense picture, the CSA survey data reads as a structural vulnerability.
43% of organizations use shared or generic service accounts for their AI agents. Same credential set, multiple agent workloads, no granular identity binding, no per-agent audit trail.
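The alternative to a shared service account is one identity per agent workload, with every action logged against that identity. Here is a minimal sketch of that pattern; the names (`AgentIdentity`, `act`, the scope strings) are illustrative assumptions, not from any specific product or the CSA survey.

```python
import uuid
from datetime import datetime, timezone

class AgentIdentity:
    """One identity per agent workload -- no generic shared credential."""

    def __init__(self, agent_name: str, scopes: list[str]):
        self.agent_id = f"agent-{uuid.uuid4()}"   # unique per workload
        self.agent_name = agent_name
        self.scopes = set(scopes)                 # least-privilege scopes
        self.audit_log: list[dict] = []

    def act(self, action: str, resource: str) -> bool:
        allowed = action in self.scopes
        # Every decision is recorded against THIS agent,
        # not a shared account -- the per-agent audit trail.
        self.audit_log.append({
            "agent_id": self.agent_id,
            "action": action,
            "resource": resource,
            "allowed": allowed,
            "at": datetime.now(timezone.utc).isoformat(),
        })
        return allowed

reporter = AgentIdentity("report-writer", scopes=["read:crm"])
reporter.act("read:crm", "crm/accounts")    # in scope, allowed
reporter.act("write:crm", "crm/accounts")   # out of scope, denied and logged
```

The point is not the specific classes but the binding: with a shared credential, both of those calls would be indistinguishable in the logs from every other agent's activity.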
12% of respondents aren't sure how their agents even authenticate. They've deployed agents. They don't know their credential model.
81% agree that prompt manipulation could expose credentials. The threat is acknowledged. The governance response is absent.
No single function claimed clear ownership of AI agent access. Security said it was a developer responsibility. Developers said it was a security responsibility. In practice: no one owns it.
Cisco surfaced the broader readiness gap at RSAC: 85% of enterprise customers are testing agent pilots, but only 5% have moved agents into production. Security concerns were the dominant reason.
Kiteworks' 2026 data adds the operational dimension: 60% of organizations cannot terminate a misbehaving agent once running. 63% cannot enforce purpose limitations.
Organizations are deploying agents they can't fully identify, running on credentials nobody owns, with no reliable way to stop them if something goes wrong. That is a governance architecture problem — not a security team problem.
Section 3 — What Good Looks Like: Early Responses Worth Watching
CrowdStrike announced the general availability of AIDR (AI Detection and Response) at RSAC, alongside Charlotte AI AgentWorks. The premise: if attackers operate at AI speed, defenders need response tooling that matches it.
Palo Alto Networks announced Prisma AIRS 3.0. The meaningful step is the shift from observation to authorized action — blocking or constraining agents operating outside defined parameters. This is the kill switch capability that 60% of organizations currently lack.
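The kill-switch idea itself is simple, whatever product implements it. The sketch below is a generic illustration, assuming agents run as cancellable loops that check a gate before every action; it is not Prisma AIRS or any vendor API.

```python
import threading

class KillSwitch:
    """Generic sketch: constrain an agent to defined parameters
    and stop it when it exceeds them or an operator pulls the plug."""

    def __init__(self, max_actions: int):
        self.max_actions = max_actions    # purpose limitation: an action budget
        self._stop = threading.Event()
        self.actions_taken = 0

    def permit(self) -> bool:
        """The agent must call this before every action."""
        if self.actions_taken >= self.max_actions:
            self._stop.set()              # budget exhausted: terminate
        if self._stop.is_set():
            return False
        self.actions_taken += 1
        return True

    def terminate(self) -> None:
        self._stop.set()                  # operator-initiated kill

switch = KillSwitch(max_actions=3)
steps = []
while switch.permit():                    # agent loop halts when denied
    steps.append("step")
# steps now holds exactly 3 entries; further permit() calls return False
```

The structural requirement is that the agent cannot act without passing through the gate; a kill switch bolted on after deployment, with no enforcement point, is the 60% gap Kiteworks describes.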
The standards layer is forming. The CSAI Foundation, AAIF, and OWASP's AIVSS are all working on AI agent governance frameworks. None are production-ready enterprise solutions yet. But enterprises that engage with these frameworks now will be positioned for the regulatory environment taking shape.
The Model Context Protocol (MCP) remains the unsolved governance layer. MCP made enterprise agent deployment faster — but a compromised agent operating via MCP can reach more enterprise systems more quickly. The deployment ease and the governance gap are connected.
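One way to narrow that gap is a per-agent tool allowlist sitting in front of the MCP server. MCP standardizes how agents call tools; it does not mandate this layer, so the gateway below is a hypothetical governance wrapper, with all names invented for illustration.

```python
# Hypothetical per-agent tool scoping in front of an MCP server.
# A compromised agent is contained to its declared tool set instead
# of reaching everything the MCP connection exposes.

TOOL_ALLOWLIST: dict[str, set[str]] = {
    "support-agent": {"search_tickets", "read_kb"},
    "finance-agent": {"read_ledger"},
}

def call_tool(agent: str, tool: str, args: dict) -> dict:
    allowed = TOOL_ALLOWLIST.get(agent, set())   # unknown agents get nothing
    if tool not in allowed:
        raise PermissionError(f"{agent} may not call {tool}")
    # In a real deployment, dispatch to the actual MCP server here.
    return {"tool": tool, "args": args}

call_tool("support-agent", "read_kb", {"query": "refund policy"})  # allowed
# call_tool("support-agent", "read_ledger", {})  # would raise PermissionError
```

Deny-by-default is the design choice that matters: the deployment ease MCP provides stays, but each agent's blast radius is declared up front rather than discovered during an incident.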
Section 4 — The Leadership Imperative: Board-Level Discipline
Senator Mark Warner, speaking at the Axios AI Summit this week, cited data showing entry-level job postings are down 35% since 2023. Law firms are no longer hiring first-year associates for document review that AI now handles.
Warner also proposed a data center tax to fund workers displaced by AI, calling AI companies the clearest leverage point for funding the transition. Whether or not that specific policy gains traction, it signals the direction of the regulatory and political environment.
Organizations that have clear AI agent governance frameworks, auditable AI systems, and documented human oversight will be better positioned for whatever regulatory framework emerges.
GitHub's recent training data policy change — which raised questions about enterprise code being used in AI training — illustrates another dimension of shadow governance risk. Organizations may have AI-related policy exposures nobody is tracking because nobody owns the question at the leadership level.
The pattern is consistent: technical capability has moved faster than organizational governance at every layer — security, identity, workforce, and policy. The enterprises that close this gap proactively will have a structural advantage.
Close: The Question Every Leader Must Answer
RSAC 2026 delivered a sharp summary: attackers are fully AI-enabled, defenders are unevenly prepared, and at most organizations nobody owns the governance problem.
The forward question isn't whether AI security will become a board-level priority. The events of this week make that inevitable. The question is whether your organization gets ahead of it or reacts to it.
Here's the diagnostic: At your organization, does any single person own the question of what AI agents can access? Not a team. Not a committee. One person who can answer in 10 seconds.
If you can't name that person, you have the same governance gap the CSA survey found in 43% of organizations — and you're one prompt injection or credential exposure away from finding out what that gap costs.
At your organization, does any single person own the question of what AI agents can access? Drop your answer — or your question — in the comments.
Frequently Asked Questions
What did the SANS Institute reveal about AI at RSAC 2026?
SANS presented their Top 5 Most Dangerous Attack Techniques and noted every technique involves AI — the first time in 25 years. Key demonstrations included an 8-minute breach-to-domain-admin escalation and AI completing a 3-day forensic investigation in 14 minutes 27 seconds.

What is the CSA survey finding about AI agent access controls?
43% of organizations use shared service accounts for AI agents, 12% are unsure how agents authenticate, and 81% agree prompt manipulation could expose credentials. No single organizational function claimed clear ownership of AI agent access.

Why are only 5% of enterprise AI agent pilots in production?
Cisco reported at RSAC 2026 that 85% of enterprise customers are testing AI agent pilots but only 5% have moved to production. Security concerns are the dominant barrier — specifically identity governance, behavioral monitoring, and inability to terminate misbehaving agents.

What is Palo Alto Prisma AIRS 3.0?
Prisma AIRS 3.0 shifts from observing AI agent behavior to taking controlled action — blocking or constraining agents operating outside defined parameters. This provides the kill switch capability that 60% of organizations (Kiteworks 2026) currently lack.

What governance actions should enterprise leaders take after RSAC 2026?
Three actions: (1) Name a single owner for AI agent access governance. (2) Conduct a non-human identity audit to catalog all agent credentials. (3) For every deployed agent, define the kill switch mechanism and behavioral monitoring before the next production deployment.

What is the MCP governance risk?
Model Context Protocol accelerated enterprise agent deployment by standardizing tool access — but a misbehaving agent via MCP can reach more enterprise systems faster. The deployment ease and governance gap are directly connected. MCP governance is the unsolved layer in current enterprise security frameworks.