AI agents are running in production right now, autonomously calling APIs, querying databases, and triggering workflows. Most organizations have no idea what access those agents have or who approved it. This is the identity governance problem nobody is ready for.
PCI DSS v4.x wasn’t written with AI in mind, but the framework is more adaptable than it gets credit for. Here’s where the standard holds up, where there’s room to grow, and how the PCI SSC is already engaging with AI through initiatives like The AI Exchange.
After nearly 20 years of operation, the PCI Security Standards Council published its first annual report. It is a surprisingly revealing look at where payment security is headed, from product family restructuring and standards consolidation to AI guidance and global expansion.
When we talk about PCI DSS compliance, the conversation tends to stay clinical. Scoping exercises. Network diagrams. Encryption at rest. But compliance doesn’t exist in a vacuum. It exists because there’s a thriving, industrialized criminal economy on the other end waiting to monetize every gap you leave open.
Rapid7 published a detailed piece of research this month that every QSA, security engineer, and compliance leader should read: an analysis of the carding-as-a-service (CaaS) ecosystem and the underground dump shops that power it. Having spent years on the assessor side of PCI, I want to connect Rapid7's findings directly to what they mean for your cardholder data environment and your scoping decisions.
If you’ve spent any time on LinkedIn or at a cybersecurity conference in the last couple of years, you’ve seen the headlines. “Quantum computing will break all encryption.” “Your data is already at risk.” “The cryptographic apocalypse is coming.”
It makes for great conference talks and even better vendor marketing. But here’s the thing: encryption has always been broken. And every single time, we’ve replaced it with something stronger. The lifecycle of cryptographic algorithms isn’t a flaw in the system; it is the system. So why would quantum computing be any different?
OpenClaw has made remarkable security strides since my January article: it hired dedicated security leadership, patched 40+ vulnerabilities, and partnered with VirusTotal. Then ClawHavoc exposed 341 malicious skills. And now the founder has joined OpenAI. Here’s everything that changed, what still worries me, and how to think about deploying OpenClaw in this new reality.
Security research reveals that OpenClaw (formerly Clawdbot) has fundamental architectural flaws that make it function like malware. With 100,000+ users, exposed instances leaking credentials, and infostealers already targeting it, this viral AI agent proves we need AI governance now.