AI is showing up across DevOps and DevSecOps, and naturally in payment technology: fraud detection pipelines, developer copilots, customer-facing chatbots, internal tooling. If you crack open PCI DSS v4.x looking for explicit AI guidance, you won’t find much yet. That’s not unusual for a standard that was developed before generative AI went mainstream.
As a former PCI QSA and cybersecurity engineer, I’ve spent a lot of time thinking about how the standard maps to real-world environments. PCI DSS v4.x made meaningful progress. It’s a more mature, risk-based framework than its predecessor. And while the standard text doesn’t mention AI by name, the PCI SSC is actively engaging with the topic through initiatives like The AI Exchange blog series, their official guidance on AI in assessments, and coverage in their 2025 annual report.
Here’s where v4.x already holds up, where there’s room to grow, and what organizations deploying AI in payment environments should be thinking about right now.
Where PCI DSS v4.x Actually Gets It Right
Several v4.x changes align well with the AI risk landscape, even if they weren’t written with AI explicitly in mind.
Requirement 6: Secure Software and Systems
The v4.x push toward secure software development practices is well-timed. Requirement 6.2.4 now calls out the need to prevent common software attacks through developer training and secure coding techniques. This maps reasonably well to the AI developer tooling problem.
Developers are using AI coding assistants, including GitHub Copilot, Cursor, and others, to write payment application code. The risk isn’t hypothetical. AI-generated code can introduce vulnerabilities just as human-written code can, and in some cases the failure modes are less obvious. A developer accepting a Copilot suggestion that mishandles PAN data, uses a deprecated cryptographic function, or skips input validation isn’t off the hook because a machine wrote the first draft.
Req 6 gives QSAs a hook here. Code review processes, SAST tooling, and developer training requirements all apply to AI-assisted code. The question organizations should be asking is: does your SDLC explicitly address AI-generated code as a risk vector? Most don’t.
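To make that concrete, here is a minimal sketch of what a pre-commit-style review gate for AI-assisted code might look like. The patterns are illustrative examples of the failure modes mentioned above (PAN in logs, deprecated cryptography, disabled TLS verification), not a substitute for real SAST tooling, and none of the names come from any PCI SSC guidance:

```python
import re

# Hypothetical review-gate patterns for AI-assisted code. A real SAST
# tool covers far more, but the principle is the same: treat AI output
# as untrusted until it passes the same checks as human-written code.
RISKY_PATTERNS = {
    "PAN in log statement": re.compile(
        r"\blog\w*\b.*\b(pan|card_number)\b", re.IGNORECASE),
    "Deprecated hash (MD5/SHA-1)": re.compile(
        r"\b(md5|sha1)\b", re.IGNORECASE),
    "Disabled TLS verification": re.compile(r"verify\s*=\s*False"),
}

def review_snippet(code: str) -> list[str]:
    """Return the names of any risky patterns found in a proposed change."""
    return [name for name, pattern in RISKY_PATTERNS.items()
            if pattern.search(code)]

# Example: a plausible AI suggestion that logs the PAN and skips TLS checks
suggestion = 'logger.info("card_number=%s", pan)\nrequests.post(url, verify=False)'
print(review_snippet(suggestion))
```

The value isn't the regexes; it's wiring a check like this into the same CI gate that human-written code already passes through, so "a machine wrote it" never becomes an exemption.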
Requirements 7 and 8: Access Control and Identity
Reqs 7 and 8 now have stronger language around least privilege, MFA, and service account management. These controls matter when AI systems are querying or processing cardholder data. An AI model or agentic workflow that has access to your CDE is, for all practical purposes, another identity in your environment, and it should be treated as one.
Most organizations haven’t extended their identity governance programs to cover AI systems. Who owns the service account the AI pipeline runs under? What’s the access review cadence? Is MFA even applicable? These aren’t rhetorical questions. They’re the kinds of things that will surface in assessments as the industry matures.
Req 8’s prohibition on shared credentials and requirement for unique IDs per user applies directly to AI service accounts. If your LLM-based fraud detection system is running under a generic shared credential with broad database access, that’s a finding.
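This is straightforward to audit once you have an inventory. As a sketch (the registry and account names below are hypothetical; in practice this data would come from your secrets manager or IAM tooling), a shared-credential check is just a group-by:

```python
from collections import defaultdict

# Hypothetical service-account registry for AI workloads. In a real
# environment this would be pulled from your IAM or secrets inventory.
service_accounts = [
    {"service": "fraud-scoring-llm", "account": "svc_payments_generic"},
    {"service": "chargeback-summarizer", "account": "svc_payments_generic"},
    {"service": "merchant-chatbot", "account": "svc_chatbot_prod"},
]

def find_shared_credentials(accounts):
    """Flag any account ID used by more than one service (a Req 8 finding)."""
    users = defaultdict(list)
    for entry in accounts:
        users[entry["account"]].append(entry["service"])
    return {acct: svcs for acct, svcs in users.items() if len(svcs) > 1}

print(find_shared_credentials(service_accounts))
# {'svc_payments_generic': ['fraud-scoring-llm', 'chargeback-summarizer']}
```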
Requirement 12: Policies, Risk Management, and the Shadow AI Problem
Req 12 is where the shadow AI conversation lives. The requirement’s emphasis on maintaining a comprehensive information security policy and performing targeted risk analyses gives organizations a framework to address AI governance, if they choose to use it.
Shadow AI is the PCI compliance risk I find most concerning. Employees are using consumer LLM tools, including ChatGPT, Gemini, and others, to do their jobs. Customer service agents summarizing disputes. Finance teams building payment reconciliation prompts. Developers asking questions about transaction data structures. Some of those interactions are going to include cardholder data, whether intentionally or not.
Req 12 doesn’t explicitly call out AI tools in its acceptable use policy language, but the framework is there. A targeted risk analysis under 12.3.1 could, and arguably should, address AI tool usage by in-scope personnel. If your AUP doesn’t mention LLMs, that’s a policy gap worth closing before a QSA points it out.
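Policy is the foundation, but a technical guardrail helps too. A common pattern is to screen outbound text for card numbers before it reaches an external LLM: match candidate digit sequences, then confirm with the standard Luhn check to cut false positives. Here is a minimal sketch of that idea (the function names are mine, and real DLP tooling would handle far more formats):

```python
import re

# 13-19 digits, optionally separated by spaces or hyphens, ending on a digit
CANDIDATE = re.compile(r"\b\d(?:[ -]?\d){12,18}\b")

def luhn_valid(digits: str) -> bool:
    """Standard Luhn checksum used to validate card numbers."""
    total, parity = 0, len(digits) % 2
    for i, ch in enumerate(digits):
        d = int(ch)
        if i % 2 == parity:
            d *= 2
            if d > 9:
                d -= 9
        total += d
    return total % 10 == 0

def redact_pans(text: str) -> str:
    """Redact Luhn-valid digit runs before text leaves the organization."""
    def _redact(match):
        digits = re.sub(r"\D", "", match.group())
        return "[REDACTED PAN]" if luhn_valid(digits) else match.group()
    return CANDIDATE.sub(_redact, text)

prompt = "Customer disputes charge on card 4111 1111 1111 1111, order #20240153."
print(redact_pans(prompt))
# Customer disputes charge on card [REDACTED PAN], order #20240153.
```

A filter like this sits naturally in an LLM proxy or gateway, which is also where an "approved tools with guardrails" policy becomes enforceable rather than aspirational.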
Where the Standard Can Improve
AI-specific scoping guidance. PCI DSS has detailed scoping guidance for traditional system components, but nothing that addresses AI systems explicitly yet. Is your LLM-based fraud detection platform a system component? If it processes, stores, or transmits cardholder data, or could impact the security of those systems, then yes, arguably. Clearer scoping language for AI systems would help both assessors and organizations navigate this more confidently.
Guidance on AI model risk. Model poisoning, prompt injection, data leakage through model outputs: these attack vectors don’t appear in v4.x. As AI-driven fraud detection and payment processing become more common, future iterations of the standard could benefit from addressing adversarial AI risks directly.
Chatbot handling of payment flows. AI-powered chatbots guiding customers through payment flows raise interesting scoping questions. What data does the chatbot capture? Where does it go? How is the session managed? These are areas where additional clarity would be valuable as the technology matures.
The Customized Approach as a bridge. One of v4.x’s notable additions is the Customized Approach, which allows organizations to implement controls differently as long as they meet the stated objective. This is a useful vehicle for addressing AI-specific risks today, though the documentation burden means most organizations will default to the defined approach. As AI use cases become better understood, more standardized guidance could complement this flexibility.
The PCI SSC Is Already Engaging With AI
It would be a mistake to look only at the standard text and conclude the Council isn’t paying attention. Outside of the DSS itself, the PCI SSC has been actively building the conversation around AI in payment security.
The most visible effort is The AI Exchange, an ongoing blog series where the Council interviews payment security leaders about how their organizations are adopting and implementing AI. The series has featured Soft Space (February 2026), Bank of America (February 2026), and most recently Checkout.com (March 2026). These aren’t surface-level pieces. The Checkout.com feature, for example, dives into how the company moved from rules-based decisioning to adaptive, ML-driven risk models operating across the full payment lifecycle, covering everything from pre-authorization scoring to AI-powered merchant chatbots. As Checkout.com’s Security Director Jo Vane put it: “AI should augment human expertise, not eliminate it.”
Beyond the blog, the Council published official guidance on Integrating Artificial Intelligence in PCI Assessments, and their 2025 annual report included dedicated coverage of AI’s role in the payment security landscape.
The standard itself may not mention AI by name yet, but the Council is clearly laying the groundwork.
What You Should Be Doing Now
While the standard evolves, the existing framework gives you plenty to work with.
Start with an AI inventory in your CDE and adjacent systems. Know what AI tools are deployed, what data they touch, and what access they have. This is basic asset management applied to a new category of component.
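If it helps to see the shape of such an inventory, here is a minimal sketch. The field names and scoping heuristic are my own illustration, not a PCI SSC schema; the point is simply that AI systems get the same attributes you would track for any other potentially in-scope component:

```python
from dataclasses import dataclass

@dataclass
class AIAssetRecord:
    # Illustrative fields; adapt to your existing asset-management schema.
    name: str                # e.g. "fraud-scoring-model"
    vendor_or_model: str     # e.g. "internal model", "hosted LLM via API"
    data_accessed: list[str] # data classes the system touches
    cde_connected: bool      # does it connect to or impact the CDE?
    owner: str               # accountable team or individual

    def in_pci_scope(self) -> bool:
        """Rough heuristic: touches cardholder data or connects to the CDE."""
        return self.cde_connected or "cardholder_data" in self.data_accessed

inventory = [
    AIAssetRecord("fraud-scoring-model", "internal model",
                  ["cardholder_data"], cde_connected=True, owner="risk-eng"),
    AIAssetRecord("docs-chatbot", "hosted LLM",
                  ["public_docs"], cde_connected=False, owner="dev-experience"),
]
print([a.name for a in inventory if a.in_pci_scope()])
# ['fraud-scoring-model']
```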
Extend your existing controls to AI systems explicitly. Your access control policy, your AUP, your SDLC: update them to reference AI tools and AI-generated outputs. This isn’t extra overhead. It’s closing gaps that a QSA is going to flag eventually.
Treat shadow AI as a data leakage risk. If employees are using consumer LLMs and your organization handles cardholder data, you need a clear policy position on this. “We don’t allow it” only works if you have technical controls to back it up. “We allow approved tools with these guardrails” is a more defensible posture.
Document your risk decisions. PCI DSS v4.x’s risk-based approach means your targeted risk analyses carry more weight than ever. If you’ve assessed AI tooling and made a deliberate decision about how to manage that risk, document it. An undocumented risk decision is indistinguishable from no risk decision at all.
Closing Thoughts
PCI DSS v4.x is a better standard than v3.2.1. The risk-based enhancements, updated cryptographic requirements, and identity improvements are real progress. The standard was developed before generative AI became a mainstream enterprise tool, so the absence of explicit AI language is understandable rather than a shortcoming.
What matters is the trajectory. The PCI SSC is actively engaging with AI through The AI Exchange series, their assessments guidance, and the 2025 annual report. The gap between where the standard is today and where the industry needs it to be is real, but it’s closing.
In the meantime, the framework gives you the structure: secure development, access control, policy management, risk analysis. Organizations that apply these controls to AI explicitly, rather than waiting for the standard to spell it out, will be in the strongest position both for assessments and for actual security posture.
Related links:
- PCI Security Standards Council Official Site
- The AI Exchange: Innovators in Payment Security Featuring Checkout.com
- Integrating Artificial Intelligence in PCI Assessments
