RSAC 2026 opens today at the Moscone Center in San Francisco. I’m not there in person this year, but I’ve spent the past week tracking every pre-conference announcement, keynote preview, and vendor press release. The signal-to-noise ratio is rough. So here’s my attempt to cut through it for practitioners who want to know what actually matters this week.
The short version: if you work in security, the next four days are wall-to-wall agentic AI. Every major vendor is shipping something. The question isn’t whether agentic AI security is real. It’s whether the industry is building controls fast enough to match the deployment speed.
## The theme this year is unmistakable
RSAC’s official theme is “The Power of Community.” The unofficial theme, based on the keynote lineup and session catalog, is “agentic AI is here and nobody’s ready.”
CrowdStrike CEO George Kurtz is delivering a keynote tomorrow with a preview line that should make every security leader uncomfortable: “By 2027, your smartest employee will be a machine. Most organizations deploy AI agents with less governance than they’d give an intern.” He’s introducing what CrowdStrike is calling “The AI Operational Reality Manifesto,” a peer-driven framework for deploying AI agents without losing board confidence or organizational trust.
Google is framing its entire RSAC presence around what it calls the “Agentic SOC,” with sessions on moving beyond simple automation to AI-driven agents that detect, investigate, and respond to threats autonomously. Mandiant CTO Charles Carmakal is presenting on Wednesday alongside legal counsel from AT&T and Debevoise & Plimpton, which tells you the conversation has moved well past the technical and into incident response, liability, and regulatory exposure.
Cisco’s session today is built around a paradox that resonates: the autonomy that makes AI agents powerful is the same thing that makes them impossible to secure with traditional human-centric controls. Their framing connects identity-centric security, zero trust, and SASE as the convergence point for securing agentic systems.
## Microsoft’s announcements are the most substantive
Of all the announcements heading into this week, Microsoft’s are the ones practitioners should read carefully. They’re not just announcing products. They’re trying to define the security architecture for the agentic era.
Here’s what they’ve shipped:
Microsoft Entra Agent ID assigns a unique identity to every AI agent built with Microsoft Foundry, Copilot Studio, and their Agent 365 ecosystem partners. This is significant because it treats AI agents as first-class identities in the access fabric: not service accounts, not shared credentials, but governed identities with lifecycle management. They also announced a Conditional Access agent with context-aware recommendations and automated least-privilege enforcement.
Zero Trust for AI (ZT4AI) extends Zero Trust principles across the full AI lifecycle, from data ingestion and model training to deployment and agent behavior. They released a new reference architecture, practical patterns and practices documentation, and updated the Zero Trust Workshop with an AI pillar. A Zero Trust Assessment for AI is in development for summer 2026.
Microsoft Sentinel is being positioned as an “agentic defense platform” with data federation powered by Microsoft Fabric, a natural language playbook generator, and a Model Context Protocol (MCP) entity analyzer coming in April.
The reason this matters beyond Microsoft shops: they’re publishing the reference architecture openly. Whether you use Microsoft’s stack or not, the ZT4AI framework is a useful model for how to think about securing AI agents, and it maps to concepts your compliance and risk teams already understand.
## What the data says about the gap
The vendor announcements are one thing. The supporting research is what should keep you up at night.
HiddenLayer published its 2026 AI Threat Landscape Report on March 18, based on a survey of 250 IT and security leaders. The key finding: one in eight reported AI breaches is now linked to agentic systems. That’s not a projection. That’s what orgs are already reporting.
Saviynt’s 2026 CISO AI Risk Report surveyed 235 CISOs and senior security leaders at large enterprises. Among the findings: 47% have observed AI agents exhibiting unintended or unauthorized behavior. Only 5% felt confident they could contain a compromised AI agent. And 78% have no documented policies for creating or removing AI identities.
The Cloud Security Alliance and Oasis Security found that 92% of organizations lack confidence that their legacy IAM tools can manage AI and non-human identity (NHI) risks specifically. This is the structural problem underneath all the RSAC announcements: the identity and access management infrastructure most organizations run today was designed for human users. AI agents don’t authenticate the same way, don’t follow the same session patterns, and don’t respect the same permission boundaries.
## The real story is non-human identity
If I had to pick one thread to follow at RSAC this week, it’s non-human identity governance for AI agents. This is where the gap between deployment speed and security maturity is widest, and it’s where the compliance implications are most immediate.
AI agents hold persistent credentials. They operate at machine scale. They can chain actions across multiple systems without a human in the loop. Traditional IAM treats them like service accounts at best, or ignores them entirely at worst. The result is a growing population of autonomous identities with elevated privileges and minimal oversight.
Microsoft’s Entra Agent ID is one response to this. CyberArk has been framing AI agents as the next evolution of machine identities, though it’s worth noting that several CyberArk speakers had to cancel their RSAC sessions this year, including talks on how AI agents break the browser threat model and how to contain AI agents that escape enterprise sandboxes. Those sessions would have been directly relevant to this conversation, so keep an eye out for them as on-demand content after the conference. The OWASP Practical Guide for Secure MCP Server Development (published February 2026) cataloged the “confused deputy” as a named threat class, where a compromised agent inherits the trust of every agent it communicates with.
For practitioners in GRC and compliance roles, the question is straightforward: does your identity governance program account for AI agents? Can you inventory them? Do you know what permissions they hold? Can you revoke access when something goes wrong?
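Those three questions can be made concrete with even a minimal internal registry. Here's a hedged sketch in Python; the `AgentIdentity` fields, the 90-day review threshold, and the agent names are my assumptions for illustration, not any vendor's schema:

```python
from dataclasses import dataclass, field
from datetime import datetime, timedelta, timezone

@dataclass
class AgentIdentity:
    """Hypothetical record for one AI agent's identity. Fields are illustrative."""
    agent_id: str
    owner: str                              # accountable human or team
    permissions: set = field(default_factory=set)
    last_reviewed: datetime = None
    active: bool = True

class AgentRegistry:
    def __init__(self):
        self._agents = {}

    def register(self, agent: AgentIdentity):
        self._agents[agent.agent_id] = agent

    def inventory(self):
        """Q1: can you inventory your agents?"""
        return [a.agent_id for a in self._agents.values() if a.active]

    def stale_reviews(self, max_age_days=90):
        """Q2: do you know (and periodically review) what permissions they hold?"""
        cutoff = datetime.now(timezone.utc) - timedelta(days=max_age_days)
        return [a.agent_id for a in self._agents.values()
                if a.active and (a.last_reviewed is None or a.last_reviewed < cutoff)]

    def revoke(self, agent_id: str):
        """Q3: can you revoke access when something goes wrong?"""
        agent = self._agents[agent_id]
        agent.permissions.clear()           # strip entitlements immediately
        agent.active = False                # and take it out of the active inventory
```

A toy, obviously, but if your environment can't answer even these queries for its agent population, that's the gap the Saviynt and CSA numbers above are describing.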
If the answer is “not yet,” you’re in the majority. But the window for “not yet” is closing fast.
## Where to focus your time
RSAC has hundreds of sessions and your time is limited. Here’s how I’d prioritize if I were building my schedule around what will have the most practical impact.
Be selective with “AI fights AI” sessions. Several vendors are positioning autonomous AI defense as the answer to autonomous AI threats. The concept has real merit, and some of these products will genuinely move the needle. The challenge is distinguishing mature capabilities from repackaged automation. A good filter: ask whether the vendor can clearly explain what decisions their product makes autonomously versus what still requires human approval. The ones that can answer that clearly are worth your time.
Dig deeper on “agentic SOC” sessions. This is one of the most common terms at the conference this year, and for good reason. The vision of AI-driven detection and response is compelling. The variation in maturity across vendors is significant, though, so it’s worth asking specific questions: what does the agent actually do? What data does it access? What actions can it take without human approval? What’s the blast radius if it’s wrong? The sessions that address those questions head-on will be the most valuable.
Prioritize sessions on agent identity lifecycle. This is where the hardest unsolved problems are right now. Creating an agent identity is straightforward. Governing it through provisioning, permission changes, decommissioning, and incident response is where most organizations have significant gaps. Any session addressing this topic is worth prioritizing because it’s the foundation that everything else depends on.
Prioritize MCP security content. The Model Context Protocol defines how agents connect to enterprise applications, tools, and data. It’s quickly becoming critical infrastructure for agentic AI, and it has real vulnerabilities. CVEs have already been published against MCP implementations. Understanding MCP security is going to be essential for the rest of 2026, and RSAC is the opportunity to get up to speed. The session “Securing MCP: Mitigating New Threats in Agentic AI Deployments” (NCS-W02, Wednesday 9:40 AM, moved to Moscone West 3001) is one to put on your calendar. Also worth catching: “Agents Unleashed: Securing Skills, MCPs and Agents” (PART4-W09, Wednesday 2:25 PM) with Randall Degges from Snyk.
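If you want a concrete pattern to probe vendors on, one of the simplest is a default-deny authorization gate in front of the MCP server. A hedged sketch follows: the agent names, tool names, and allowlist structure are illustrative assumptions (the `tools/call` method name comes from MCP's JSON-RPC framing; everything else is mine):

```python
# Per-agent tool allowlist. In production this would live in policy
# infrastructure, not a module-level dict; this is purely illustrative.
ALLOWLIST = {
    "triage-agent": {"search_tickets", "read_ticket"},
    "remediation-agent": {"read_ticket", "isolate_host"},
}

def authorize_tool_call(agent_id: str, request: dict) -> tuple[bool, str]:
    """Decide whether to forward an MCP JSON-RPC request to the server."""
    if request.get("method") != "tools/call":
        return True, "not a tool call"
    tool = request.get("params", {}).get("name")
    if tool in ALLOWLIST.get(agent_id, set()):
        return True, "allowed"
    # Default-deny: an unknown agent or an unlisted tool is rejected, so a
    # compromised agent can't borrow another agent's trust (confused deputy).
    return False, f"agent {agent_id!r} not authorized for tool {tool!r}"
```

The design point worth pressing vendors on is the default-deny stance: each agent is authorized only for its own tool set, rather than inheriting whatever the gateway or a peer agent can reach.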
Catch the new and updated sessions. The addendum released ahead of the conference shows just how much the agenda is shifting toward agentic AI. A few sessions have been added or updated that are worth highlighting: “The Post-Prompt World: Securing AI Agents That Think for Themselves” (PART2-W02, Wednesday 9:40 AM) now features a CrowdStrike AI Security Engineering speaker. On Thursday, Hack The Box is presenting “Lessons Learned from Humans vs. Agentic AI in Security” (VLG-R05, 12:20 PM), sharing results from the world’s largest AI vs. human CTF benchmark. And “Agentic AI on Trial: Human Identity or Machine Identity?” (IAIS-W02, Wednesday 9:40 AM, moved to Moscone West 3004) directly addresses the identity governance questions that should be top of mind for every CISO this week.
## What this means for compliance frameworks
One thing you won’t hear much about on the RSAC keynote stage: how any of this maps to existing compliance frameworks. That’s the gap practitioners have to bridge on their own.
AI agents that access, process, or could impact sensitive data environments need to be governed with the same rigor as any other identity in scope. That means documented access controls, least privilege enforcement, monitoring, and evidence of review. The frameworks we already work with, whether that’s PCI DSS, SOC 2, ISO 27001, or NIST CSF, don’t have explicit requirements for AI agent governance yet. But the principles translate directly.
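That translation can be captured in something as simple as a mapping table your auditors can review. Here's an illustrative sketch; the control references are approximate and should be verified against the exact framework version in scope for your audit:

```python
# Illustrative mapping of agent-governance practices to the existing
# controls they most closely resemble. Control references are approximate;
# confirm against the framework version you are actually audited on.
AGENT_CONTROL_MAP = {
    "documented access controls": {
        "PCI DSS v4.0": "Requirement 7 (restrict access by need to know)",
        "SOC 2": "CC6.1 (logical access security)",
        "ISO 27001:2022": "A.5.15 (access control)",
        "NIST CSF 2.0": "PR.AA (identity management and access control)",
    },
    "least privilege enforcement": {
        "PCI DSS v4.0": "Requirement 7.2 (least privilege)",
        "SOC 2": "CC6.3 (role-based access)",
        "ISO 27001:2022": "A.5.18 (access rights)",
        "NIST CSF 2.0": "PR.AA-05 (access permissions and authorizations)",
    },
    "monitoring and evidence of review": {
        "PCI DSS v4.0": "Requirement 10 (log and monitor access)",
        "SOC 2": "CC7.2 (monitoring)",
        "ISO 27001:2022": "A.8.15 (logging)",
        "NIST CSF 2.0": "DE.CM (continuous monitoring)",
    },
}

def controls_for(practice: str) -> dict:
    """Look up which existing controls an agent-governance practice maps to."""
    return AGENT_CONTROL_MAP.get(practice, {})
```

The point is not the specific citations; it's that "govern your AI agents" decomposes into control families your assessors already test, which means you can start evidencing it now rather than waiting for AI-specific requirements.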
NIST’s COSAiS project is developing tailored SP 800-53 control overlays for agentic AI use cases, but as of early 2026, those overlays are still in development. MITRE ATLAS needs to expand beyond its classical machine-learning origins to cover agentic kill-chain tactics and multi-agent lateral movement. The EU AI Act’s high-risk system requirements take full effect in 2026.
The compliance landscape is catching up, but it’s not there yet. That means security practitioners need to be proactive about mapping agentic AI risks to existing control frameworks rather than waiting for explicit guidance.
## The bottom line
RSAC 2026 is the conference where agentic AI security goes from niche topic to mainstream concern. The vendor noise is loud. The product announcements are constant. But underneath it all is a real and urgent problem: organizations are deploying autonomous systems faster than they can govern them, and the identity infrastructure most of us rely on wasn’t built for this.
I’ll be watching the keynotes, tracking the announcements, and writing about what matters for practitioners over the next few days. If you’re at Moscone this week, pay attention to the sessions on agent identity, MCP security, and Zero Trust for AI. Skip the marketing theater. And ask every vendor the same question: “What happens when the agent does something it wasn’t supposed to do?”
Because based on what we’ve seen this month, that’s not a hypothetical anymore.
## RSAC 2026 Quick Links
If you’re attending or following along remotely, here are the links worth bookmarking:
Event Essentials
- RSAC 2026 Conference Homepage — March 23-26, Moscone Center, San Francisco
- Agenda at a Glance — full session catalog and daily schedule
- Passes and Rates — All Access, Expo Plus, and Expo pass options
- All Speakers — complete speaker directory
Keynotes to Watch
- Keynote Speaker Lineup — full lineup including Dame Jacinda Ardern, Ben Horowitz, Michael Lewis, and Adam Savage
- Monday, March 23 (3:55 PM PDT): Vasu Jakkal, CVP of Microsoft Security — “Ambient and Autonomous Security: Building Trust in the Agentic AI Era”
- Tuesday, March 24: George Kurtz, CrowdStrike CEO — “The AI Operational Reality Manifesto”
- Wednesday, March 25: Charles Carmakal, Mandiant CTO — incident response panel with AT&T and Debevoise & Plimpton legal counsel
- Thursday, March 26 (9:40 AM): James Lyne, SANS CEO
Vendor Spotlights
- Microsoft at RSAC 2026 — Entra Agent ID, Zero Trust for AI, Sentinel updates
- Cisco at RSAC 2026 — identity-centric security and SASE for agentic systems
Sessions Worth Your Time (updated with last-minute addendum changes)
- Monday, March 23 (1:10 PM): “It’s Getting Real & Hitting the Fan 2026: Real World AI(dentity) Attacks” — Brian Contos, Field CISO, Mitiga (Moscone West 3011)
- Tuesday, March 24 (2:25 PM): “Behavioral Intelligence: When LLMs Become Your Newest Insider Risk” — Teramind (Moscone South Esplanade 153)
- Wednesday, March 25 (8:30 AM): “Unleashing Power, Managing Risk: Security in the AI Era” — Boaz Gelbord, CSO, Akamai (Moscone South Esplanade 153)
- Wednesday, March 25 (9:40 AM): “Securing MCP: Mitigating New Threats in Agentic AI Deployments” — moved to Moscone West 3001
- Wednesday, March 25 (9:40 AM): “The Post-Prompt World: Securing AI Agents That Think for Themselves” — CrowdStrike AI Security Engineering (Moscone South Esplanade 154)
- Wednesday, March 25 (9:40 AM): “Agentic AI on Trial: Human Identity or Machine Identity?” — moved to Moscone West 3004
- Wednesday, March 25 (2:25 PM): “Agents Unleashed: Securing Skills, MCPs and Agents” — Randall Degges, Snyk (Moscone South Esplanade 156)
- Thursday, March 26 (8:30 AM): “From GPU to Grid: Hardening AI Infrastructure for National Security” — new moderator Olaf Groth, Cambrian.ai (Moscone West 2001)
- Thursday, March 26 (12:20 PM): “Lessons Learned from Humans vs. Agentic AI in Security” — Hack The Box (Moscone West 2016) [NEW SESSION]
Remote Access
- Select keynotes will be livestreamed, and all keynote and track sessions will be available on demand approximately four hours after the live presentation.
- Cancelled sessions from CyberArk (“Crashing Comets”, “Pass-ta-Key”, “Shattering the Enterprise Sandbox”) are expected to be available on demand.
