PCI-DSS 11.5.2 - Guidance and Full Technical Deep Dive#
(On-Prem, Hybrid, and Native)#
I remember sitting in my first PCI assessment years ago, watching a QSA flip through pages of documentation. When we got to Requirement 11.5.2, file integrity monitoring, the conversation hit a wall. The requirement seemed straightforward on paper, but translating it into a hybrid environment with on-prem servers, AWS workloads, and network appliances? That’s where the real work begins.
The official PCI-DSS standard gives you the requirement and some basic guidance. But while it's prescriptive about the what, it can't possibly tell you how to implement FIM across your specific environment, especially when you're dealing with a mix of traditional infrastructure and cloud-native services.
Having spent years implementing these controls as a systems engineer and later evaluating them as a QSA, I’ve seen what works and what doesn’t. Today, I’ll walk you through a complete multi-layer FIM strategy that addresses the real complexity of modern infrastructure.
Understanding the Requirement#
Let’s start with what the standard actually says:
Requirement 11.5.2: A change-detection mechanism (for example, file integrity monitoring tools) is deployed as follows:
- To alert personnel to unauthorized modification (including changes, additions, and deletions) of critical files.
- To perform critical file comparisons at least once weekly.
Critical files include:
- System executables
- Application executables
- Configuration and parameter files
- Centrally stored, historical, or archived audit logs
- Additional critical files determined by entity (for example, through risk assessment or other means)
Simple enough, right? But here’s where I’ve seen teams struggle: What exactly are “critical files” in your environment? Is it just /etc/passwd and some binaries? What about your FortiGate firewall ruleset? What about that Lambda function that processes transactions?
The answer is: all of the above. Critical files aren’t just the obvious OS files, they’re anything that, if modified without authorization, could compromise your security posture. In modern environments, that includes network device configurations, cloud infrastructure-as-code, and even cloud service configurations.
The Four-Level Architecture#
After implementing FIM across dozens of environments, from single-datacenter setups to massive hybrid clouds, I’ve found that a layered approach is the only way to achieve comprehensive coverage without drowning in noise.
Here’s the framework:
- Linux OS/WebApp/DB Files → Wazuh (real-time monitoring)
- Network Security Controls → FortiGate, Palo Alto, Cisco (configuration backups + diffing)
- AWS Infrastructure → CloudTrail + Config + CloudWatch (native cloud monitoring)
- Azure Infrastructure → Activity Log + Policy + Monitor (native cloud monitoring)
Each layer addresses different types of critical files and uses tools appropriate to that layer. Let’s dive into each one.
Level 1: Linux OS File Integrity Monitoring#
Why This Matters#
Your Linux systems, whether on-prem VMs, cloud instances, or even containers, are running your business logic. A single unauthorized change to /etc/shadow, a modified SSH config, or a trojaned binary can give attackers persistent access. During assessments, I’ve found evidence of breaches that went undetected for months because FIM wasn’t properly configured or was generating too many false positives to be actionable.
The goal here is real-time detection with actionable alerts. Weekly scans meet the minimum requirement, but in 2026, there’s no reason not to have real-time monitoring.
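To make the weekly minimum concrete before we get into tooling: at its core, a file comparison is just hashing your critical files and diffing the result against a stored baseline. Here's a minimal sketch of that logic (the baseline location is illustrative, and this is to build intuition, not a substitute for a proper agent):

```python
import hashlib
import json
from pathlib import Path

def hash_file(path: str) -> str:
    """Return the SHA-256 digest of a file's contents."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def scan(paths):
    """Build a {path: digest} snapshot for the files that exist."""
    return {p: hash_file(p) for p in paths if Path(p).is_file()}

def compare(baseline, current):
    """Return (modified, added, deleted) paths relative to the baseline."""
    modified = [p for p in baseline if p in current and baseline[p] != current[p]]
    added = [p for p in current if p not in baseline]
    deleted = [p for p in baseline if p not in current]
    return modified, added, deleted

def save_baseline(snapshot, baseline_file="/var/lib/fim/baseline.json"):
    """Persist the snapshot for the next comparison run (illustrative path)."""
    Path(baseline_file).parent.mkdir(parents=True, exist_ok=True)
    Path(baseline_file).write_text(json.dumps(snapshot, indent=2))
```

Notice what this sketch doesn't handle: protecting the baseline itself. An attacker who can rewrite `baseline.json` defeats the whole check, which is one reason agents like Wazuh store and verify their databases centrally.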
Critical Files Baseline#
Before you can monitor changes, you need to know what to monitor. Here’s a baseline that covers the essentials:
# Authentication and access control
/etc/passwd
/etc/shadow
/etc/group
/etc/sudoers
/etc/sudoers.d/*
/etc/ssh/sshd_config
/etc/security/*
# System configuration
/etc/fstab
/etc/hosts
/etc/hostname
/etc/resolv.conf
/etc/sysctl.conf
/etc/rsyslog.conf
/etc/crontab
/etc/cron.d/*
/var/spool/cron/*
# Network configuration
/etc/network/interfaces
/etc/sysconfig/network-scripts/*
/etc/netplan/*
# Critical binaries
/bin/*
/sbin/*
/usr/bin/*
/usr/sbin/*
# Application configs (adjust to your stack)
/etc/apache2/*
/etc/nginx/*
/etc/mysql/*
/etc/postgresql/*
/opt/payment-app/*
# Host-based firewalls
/etc/iptables/*
/etc/firewalld/*
This is a starting point. Your payment application directories, custom scripts, and application-specific configs should be added based on your risk assessment.
Implementing Wazuh for Real-Time FIM#
I recommend Wazuh for Linux FIM because it’s open-source, battle-tested, and integrates well with SIEMs. In enterprise environments, you might use proprietary solutions like Tripwire or Qualys, but the principles remain the same.
Here’s a production-ready Wazuh configuration:
<!-- /var/ossec/etc/ossec.conf on Wazuh Agent -->
<syscheck>
<frequency>3600</frequency> <!-- Hourly full scans as fallback -->
<alert_new_files>yes</alert_new_files>
<!-- Critical OS files - Real-time monitoring -->
<directories check_all="yes" realtime="yes">/etc</directories>
<directories check_all="yes" realtime="yes">/bin</directories>
<directories check_all="yes" realtime="yes">/sbin</directories>
<directories check_all="yes" realtime="yes">/usr/bin</directories>
<directories check_all="yes" realtime="yes">/usr/sbin</directories>
<!-- Payment application -->
<directories check_all="yes" realtime="yes">/opt/payment-app</directories>
<!-- Web servers -->
<directories check_all="yes" realtime="yes">/etc/apache2</directories>
<directories check_all="yes" realtime="yes">/etc/nginx</directories>
<!-- Databases -->
<directories check_all="yes" realtime="yes">/etc/mysql</directories>
<directories check_all="yes" realtime="yes">/etc/postgresql</directories>
<!-- Firewall configs (REQUIRED for PCI) -->
<directories check_all="yes" realtime="yes">/etc/iptables</directories>
<directories check_all="yes" realtime="yes">/etc/firewalld</directories>
<!-- Logs - monitor for deletion attempts -->
<directories check_all="yes">/var/log</directories>
<!-- Exclusions to reduce noise -->
<ignore>/etc/mtab</ignore>
<ignore>/etc/resolv.conf</ignore>
<ignore type="sregex">.log$|.tmp$</ignore>
</syscheck>
The realtime="yes" attribute is critical. It uses inotify to detect changes as they happen, not just during scheduled scans. This is the difference between catching an attacker in the act versus discovering the breach during your weekly scan.
Custom Alert Rules for PCI#
Out-of-the-box Wazuh alerts are useful, but you need to tune them for PCI compliance. Here’s how to create high-priority alerts for critical file changes:
<!-- /var/ossec/etc/rules/local_rules.xml on Wazuh Manager -->
<group name="syscheck,pci_dss_11.5.2">
<rule id="100100" level="12">
<if_sid>550</if_sid>
<field name="file">/etc/shadow</field>
<description>PCI-DSS: CRITICAL - Shadow password file modified</description>
<group>pci_dss_11.5.2,gdpr_IV_35.7.d</group>
</rule>
<rule id="100101" level="10">
<if_sid>550</if_sid>
<field name="file">/etc/passwd</field>
<description>PCI-DSS: Password file modified</description>
<group>pci_dss_11.5.2</group>
</rule>
<rule id="100102" level="10">
<if_sid>550</if_sid>
<field name="file">/etc/iptables</field>
<description>PCI-DSS: Firewall configuration changed</description>
<group>pci_dss_11.5.2,pci_dss_1.2.8</group>
</rule>
<rule id="100103" level="10">
<if_sid>550</if_sid>
<field name="file">/etc/sudoers</field>
<description>PCI-DSS: Sudo configuration changed</description>
<group>pci_dss_11.5.2</group>
</rule>
</group>
These rules trigger high-severity alerts that route to your security team immediately. During assessments, QSAs will want to see that you're not just collecting FIM data, but actively responding to it.
Deployment#
Here’s the quick deployment process:
# Install Wazuh agent (Ubuntu/Debian)
curl -s https://packages.wazuh.com/key/GPG-KEY-WAZUH | gpg --no-default-keyring --keyring gnupg-ring:/usr/share/keyrings/wazuh.gpg --import && chmod 644 /usr/share/keyrings/wazuh.gpg
echo "deb [signed-by=/usr/share/keyrings/wazuh.gpg] https://packages.wazuh.com/4.x/apt/ stable main" | tee /etc/apt/sources.list.d/wazuh.list
apt-get update
apt-get install wazuh-agent
# Configure manager IP
sed -i "s|MANAGER_IP|10.0.1.100|g" /var/ossec/etc/ossec.conf
# Start agent
systemctl enable wazuh-agent
systemctl start wazuh-agent
# Verify FIM is working
tail -f /var/ossec/logs/ossec.log | grep syscheck
SIEM Integration#
FIM alerts are only useful if they reach the right people. Forward Wazuh alerts to your SIEM for correlation and long-term storage:
# Forward Wazuh alerts to Splunk/ELK
# On Wazuh Manager
cat >> /var/ossec/etc/ossec.conf <<EOF
<integration>
<name>splunk</name>
<hook_url>https://splunk.company.com:8088/services/collector</hook_url>
<api_key>YOUR-HEC-TOKEN</api_key>
<alert_format>json</alert_format>
</integration>
EOF
systemctl restart wazuh-manager
Now your FIM alerts are centralized alongside other security events, making correlation and investigation much easier.
Level 2: Network Security Controls#
The Forgotten Critical Files#
Here’s something I see missed constantly during assessments: network device configurations are critical files too.
Your FortiGate firewall rules, Palo Alto security policies, and Cisco ACLs are every bit as critical as /etc/shadow. A single unauthorized firewall rule change can expose your entire CDE to the internet. Yet I’ve walked into assessments where organizations had robust FIM on their servers but zero monitoring of their network devices.
PCI-DSS doesn’t explicitly call out network devices in Requirement 11.5.2, but they absolutely fall under “configuration and parameter files” and “additional critical files determined by entity.” Any QSA worth their salt will ask you about this.
FortiGate Configuration Monitoring#
FortiGate devices don’t have native FIM in the way Linux systems do, so you need to implement configuration backups with change detection. Here’s a production-ready script:
#!/bin/bash
# Daily FortiGate config backup with change detection
FORTIGATE_IP="10.0.1.1"
BACKUP_DIR="/backup/fortigate"
DATE=$(date +%Y%m%d)
mkdir -p "$BACKUP_DIR"
# Pull current config via API (use a dedicated read-only API account;
# -k skips certificate validation - pin the device CA in production)
curl -sk -u "api-user:password" \
"https://$FORTIGATE_IP/api/v2/monitor/system/config/backup?scope=global" \
-o "$BACKUP_DIR/config-$DATE.conf"
# Compare with the previous backup (second-newest file)
PREVIOUS=$(ls -t $BACKUP_DIR/*.conf | sed -n '2p')
if [ -f "$PREVIOUS" ] && ! diff -q "$BACKUP_DIR/config-$DATE.conf" "$PREVIOUS" > /dev/null; then
echo "ALERT: FortiGate configuration changed!" | mail -s "FIM: FortiGate Config Change" security@company.com
# Log to SIEM here
fi
Schedule this via cron to run daily (meeting the weekly requirement with margin). Store backups encrypted and retain them according to your log retention policy, typically 90 days for PCI.
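To tie the scheduling and retention guidance together, the cron entries might look like this (paths and times are illustrative; encryption at rest is often easier to handle at the volume level than per-file):

```shell
# /etc/cron.d/fortigate-fim (illustrative)
# 02:00 daily: pull the config and diff against the previous backup
0 2 * * * root /usr/local/bin/fortigate-backup.sh
# 03:00 daily: enforce ~90-day retention on stored configs
0 3 * * * root find /backup/fortigate -name 'config-*.conf' -mtime +90 -delete
```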
Palo Alto Networks#
Palo Alto’s XML configuration can be pulled via API and compared similarly:
#!/usr/bin/env python3
# palo-fim.py - Palo Alto FIM via API
import glob
import requests
from datetime import datetime

PA_HOST = "10.0.1.2"
API_KEY = "YOUR_API_KEY"
BACKUP_DIR = "/backup/paloalto"

def get_config():
    """Pull current running config from Palo Alto"""
    r = requests.get(
        f"https://{PA_HOST}/api/",
        params={'type': 'export', 'category': 'configuration', 'key': API_KEY},
        verify=False,  # self-signed device cert; pin the CA instead in production
        timeout=30
    )
    r.raise_for_status()
    return r.text

# Fetch and save current config
config = get_config()
filename = f"{BACKUP_DIR}/pa-{datetime.now():%Y%m%d}.xml"
with open(filename, 'w') as f:
    f.write(config)

# Compare with previous backup
backups = sorted(glob.glob(f"{BACKUP_DIR}/*.xml"))
if len(backups) > 1:
    with open(backups[-2]) as f:
        prev = f.read()
    if config != prev:
        print("ALERT: Palo Alto configuration changed!")
        # Send to SIEM, email security team, create ticket
        # Integration code here
The key here is automation and integration. These scripts should run daily, and alerts should route to the same systems as your Linux FIM alerts.
Cisco Devices - RANCID#
For Cisco switches and routers, RANCID (Really Awesome New Cisco confIg Differ) is the industry standard:
# Install RANCID
apt-get install rancid
# Configure device list
cat > /var/lib/rancid/PCI-Network/router.db <<EOF
10.0.1.10:cisco:up
10.0.1.11:cisco:up
10.0.1.12:cisco:up
EOF
# Setup .cloginrc for authentication
cat > ~/.cloginrc <<EOF
add user * admin
add password * {your-password} {enable-password}
add method * ssh
EOF
chmod 600 ~/.cloginrc
# Run RANCID (schedule via cron: 0 2 * * * /usr/bin/rancid-run)
rancid-run
RANCID uses version control (CVS, Subversion, or Git) to track changes, which QSAs love because it provides both change detection and an audit trail.
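RANCID handles the version-control plumbing for supported platforms, but the underlying pattern is simple enough to replicate for devices it doesn't cover: commit each pulled config into a Git repo and alert whenever a commit actually lands. A sketch of that pattern (directory layout and identity are illustrative):

```shell
# Sketch: Git-based change tracking for pulled device configs
snapshot_configs() {
    local repo_dir="$1"   # directory the backup scripts write configs into
    cd "$repo_dir" || return 1
    [ -d .git ] || git init -q
    git add -A
    # commit (and alert) only when something actually changed
    if ! git diff --cached --quiet; then
        git -c user.name=fim -c user.email=fim@localhost \
            commit -qm "config snapshot $(date +%F)"
        echo "ALERT: network config change committed"
        # mail / SIEM forwarding hooks go here
    fi
}
```

Run it from the same cron job that pulls the configs; `git log -p` then gives you the exact before/after diff for every change, which doubles as audit-trail evidence.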
Level 3: AWS Infrastructure Monitoring#
Why Cloud Is Different#
When I transitioned from traditional infrastructure to cloud, I initially tried to apply the same FIM tools. That was a mistake. In AWS, your “critical files” aren’t just files, they’re API calls that modify infrastructure.
Someone changing a security group rule via the console doesn’t touch a file on disk. They make an API call. Traditional FIM tools like AIDE or Tripwire are blind to this. You need cloud-native monitoring.
The AWS FIM Trinity: CloudTrail + Config + CloudWatch#
AWS provides three services that together fulfill FIM requirements:
- CloudTrail - Records every API call (who changed what, when)
- Config - Tracks resource configuration changes and compliance
- CloudWatch - Alerts you when specific changes occur
Here’s how to implement comprehensive AWS FIM:
CloudTrail Setup#
CloudTrail is your audit log for AWS. Every action, whether taken from the console, the CLI, or an SDK, is recorded.
#!/bin/bash
# Enable CloudTrail with proper PCI configuration
# Create S3 bucket for logs (with encryption and access logging)
aws s3api create-bucket \
--bucket pci-cloudtrail-logs-$(aws sts get-caller-identity --query Account --output text) \
--region us-east-1
# Enable bucket encryption
aws s3api put-bucket-encryption \
--bucket pci-cloudtrail-logs-$(aws sts get-caller-identity --query Account --output text) \
--server-side-encryption-configuration '{
"Rules": [{
"ApplyServerSideEncryptionByDefault": {
"SSEAlgorithm": "AES256"
}
}]
}'
# Create CloudTrail (multi-region, log file validation enabled)
aws cloudtrail create-trail \
--name PCI-Master-Trail \
--s3-bucket-name pci-cloudtrail-logs-$(aws sts get-caller-identity --query Account --output text) \
--is-multi-region-trail \
--enable-log-file-validation
# Start logging
aws cloudtrail start-logging --name PCI-Master-Trail
# Verify it's working
aws cloudtrail get-trail-status --name PCI-Master-Trail
Key PCI requirements:
- Multi-region - Catches activity in any region
- Log file validation - Cryptographic proof logs haven’t been tampered with
- Encrypted storage - Protects audit logs at rest
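Enabling log file validation isn't the end of it; you should periodically run the validation and keep the output as evidence. The account ID and dates below are placeholders:

```shell
# Verify the digest chain - proves log files weren't modified or deleted
aws cloudtrail validate-logs \
  --trail-arn arn:aws:cloudtrail:us-east-1:ACCOUNT_ID:trail/PCI-Master-Trail \
  --start-time 2026-01-01T00:00:00Z \
  --end-time 2026-02-01T00:00:00Z
```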
AWS Config for Configuration Compliance#
While CloudTrail tells you what changed, Config tells you whether your current state is compliant. It continuously evaluates your resources against rules.
# Enable AWS Config
aws configservice put-configuration-recorder \
--configuration-recorder name=PCI-Recorder,roleARN=arn:aws:iam::ACCOUNT_ID:role/config-role \
--recording-group allSupported=true,includeGlobalResourceTypes=true
aws configservice put-delivery-channel \
--delivery-channel name=PCI-Channel,s3BucketName=pci-config-bucket
aws configservice start-configuration-recorder --configuration-recorder-name PCI-Recorder
Now deploy compliance rules. Here's a critical one for security groups:
{
"ConfigRuleName": "restricted-ssh",
"Description": "Checks that security groups do not allow unrestricted SSH access (0.0.0.0/0 on port 22)",
"Source": {
"Owner": "AWS",
"SourceIdentifier": "INCOMING_SSH_DISABLED"
},
"Scope": {
"ComplianceResourceTypes": ["AWS::EC2::SecurityGroup"]
}
}
Deploy it:
aws configservice put-config-rule --config-rule file://restricted-ssh-rule.json
AWS Config will now continuously evaluate all security groups. If someone opens port 22 to the world, it's flagged as non-compliant immediately.
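For day-to-day triage (and a handy artifact to export for your QSA), you can query the rule's current evaluation results directly:

```shell
# List resources currently failing the restricted-ssh rule
aws configservice get-compliance-details-by-config-rule \
  --config-rule-name restricted-ssh \
  --compliance-types NON_COMPLIANT \
  --query 'EvaluationResults[].EvaluationResultIdentifier.EvaluationResultQualifier.ResourceId'
```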
CloudWatch Alarms for Real-Time Alerts#
CloudTrail and Config generate data, but you need alerts. CloudWatch bridges that gap.
Here’s an alarm for security group changes:
# Create SNS topic for security alerts
aws sns create-topic --name PCI-Security-Alerts
aws sns subscribe \
--topic-arn arn:aws:sns:us-east-1:ACCOUNT_ID:PCI-Security-Alerts \
--protocol email \
--notification-endpoint security@company.com
# Create metric filter for security group changes
# (assumes the trail delivers events to the /aws/cloudtrail CloudWatch Logs group)
aws logs put-metric-filter \
--log-group-name /aws/cloudtrail \
--filter-name SecurityGroupChanges \
--filter-pattern '{ ($.eventName = AuthorizeSecurityGroupIngress) || ($.eventName = RevokeSecurityGroupIngress) }' \
--metric-transformations \
metricName=SecurityGroupEventCount,metricNamespace=CloudTrailMetrics,metricValue=1
# Create alarm
aws cloudwatch put-metric-alarm \
--alarm-name SecurityGroupChanges \
--alarm-description "Alert on security group modifications" \
--metric-name SecurityGroupEventCount \
--namespace CloudTrailMetrics \
--statistic Sum \
--period 60 \
--threshold 1 \
--comparison-operator GreaterThanOrEqualToThreshold \
--evaluation-periods 1 \
--alarm-actions arn:aws:sns:us-east-1:ACCOUNT_ID:PCI-Security-Alerts
Now any security group change triggers an immediate alert. Your security team investigates, and you document the investigation for audit purposes.
Additional Critical AWS Config Rules#
# IAM password policy compliance
aws configservice put-config-rule --config-rule '{
"ConfigRuleName": "iam-password-policy",
"Source": {
"Owner": "AWS",
"SourceIdentifier": "IAM_PASSWORD_POLICY"
},
"InputParameters": "{\"RequireUppercaseCharacters\":\"true\",\"RequireLowercaseCharacters\":\"true\",\"RequireNumbers\":\"true\",\"MinimumPasswordLength\":\"12\"}"
}'
# Root account MFA
aws configservice put-config-rule --config-rule '{
"ConfigRuleName": "root-account-mfa-enabled",
"Source": {
"Owner": "AWS",
"SourceIdentifier": "ROOT_ACCOUNT_MFA_ENABLED"
}
}'
# S3 bucket encryption
aws configservice put-config-rule --config-rule '{
"ConfigRuleName": "s3-bucket-server-side-encryption-enabled",
"Source": {
"Owner": "AWS",
"SourceIdentifier": "S3_BUCKET_SERVER_SIDE_ENCRYPTION_ENABLED"
},
"Scope": {
"ComplianceResourceTypes": ["AWS::S3::Bucket"]
}
}'
EC2 Instance FIM#
Don’t forget, your EC2 instances still need traditional file-level FIM. Deploy Wazuh agents on EC2 instances just like you would on on-prem servers. CloudTrail monitors the infrastructure; Wazuh monitors the OS.
# Deploy Wazuh agent to EC2 via user data or configuration management
#!/bin/bash
curl -s https://packages.wazuh.com/key/GPG-KEY-WAZUH | gpg --no-default-keyring --keyring gnupg-ring:/usr/share/keyrings/wazuh.gpg --import && chmod 644 /usr/share/keyrings/wazuh.gpg
echo "deb [signed-by=/usr/share/keyrings/wazuh.gpg] https://packages.wazuh.com/4.x/apt/ stable main" | tee /etc/apt/sources.list.d/wazuh.list
apt-get update
apt-get install wazuh-agent -y
sed -i "s|MANAGER_IP|<WAZUH_MANAGER_IP>|g" /var/ossec/etc/ossec.conf
systemctl enable wazuh-agent
systemctl start wazuh-agent
RDS Configuration Changes#
RDS doesn’t give you filesystem access, but parameter group changes are critical:
# CloudWatch alarm for RDS parameter changes
aws logs put-metric-filter \
--log-group-name /aws/cloudtrail \
--filter-name RDSParameterChanges \
--filter-pattern '{ $.eventName = ModifyDBParameterGroup }' \
--metric-transformations \
metricName=RDSParameterChangeCount,metricNamespace=CloudTrailMetrics,metricValue=1
aws cloudwatch put-metric-alarm \
--alarm-name RDSParameterChanges \
--metric-name RDSParameterChangeCount \
--namespace CloudTrailMetrics \
--statistic Sum \
--period 60 \
--threshold 1 \
--comparison-operator GreaterThanOrEqualToThreshold \
--evaluation-periods 1 \
--alarm-actions arn:aws:sns:us-east-1:ACCOUNT_ID:PCI-Security-Alerts
Level 4: Azure Infrastructure Monitoring#
Azure’s FIM Approach#
Azure’s monitoring model is similar to AWS but uses different services: Activity Logs instead of CloudTrail, Azure Policy instead of Config, and Azure Monitor instead of CloudWatch.
Azure Activity Log Configuration#
Activity Logs capture every Azure Resource Manager operation, the Azure equivalent of CloudTrail.
# Create Log Analytics Workspace for centralized logging
az monitor log-analytics workspace create \
--resource-group PCI-ResourceGroup \
--workspace-name PCI-Workspace \
--location eastus
WORKSPACE_ID=$(az monitor log-analytics workspace show \
--resource-group PCI-ResourceGroup \
--workspace-name PCI-Workspace \
--query id -o tsv)
# Create diagnostic settings to send Activity Logs to workspace
az monitor diagnostic-settings create \
--name PCI-Activity-Logs \
--resource $(az account show --query id -o tsv) \
--workspace ${WORKSPACE_ID} \
--logs '[
{
"category": "Administrative",
"enabled": true
},
{
"category": "Security",
"enabled": true
},
{
"category": "Alert",
"enabled": true
},
{
"category": "Policy",
"enabled": true
}
]'
Now all administrative, security, alert, and policy operations are logged centrally.
Azure Policy for Compliance#
Azure Policy ensures your resources stay compliant. Think of it as AWS Config Rules for Azure.
{
"properties": {
"displayName": "Require NSG on Subnets",
"policyType": "Custom",
"mode": "All",
"description": "Ensures all subnets have NSG attached (PCI segmentation requirement)",
"policyRule": {
"if": {
"allOf": [
{
"field": "type",
"equals": "Microsoft.Network/virtualNetworks/subnets"
},
{
"field": "Microsoft.Network/virtualNetworks/subnets/networkSecurityGroup.id",
"exists": "false"
}
]
},
"then": {
"effect": "deny"
}
}
}
}
Deploy it:
# Create policy definition
az policy definition create \
--name require-nsg-on-subnet \
--display-name "Require NSG on Subnets" \
--description "Ensures network segmentation via NSG" \
--rules nsg-policy.json \
--mode All
# Assign policy to subscription
az policy assignment create \
--name enforce-nsg \
--display-name "Enforce NSG on Subnets" \
--policy require-nsg-on-subnet \
--scope /subscriptions/$(az account show --query id -o tsv)
Now attempts to create subnets without NSGs are automatically blocked.
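To show a QSA the policy is actually enforced, you can export the current compliance state for the assignment created above:

```shell
# Summary of compliant vs. non-compliant resources for the assignment
az policy state summarize --policy-assignment enforce-nsg
# Individual non-compliant resources, if any
az policy state list \
  --policy-assignment enforce-nsg \
  --filter "complianceState eq 'NonCompliant'" \
  --query "[].resourceId"
```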
Azure Monitor Alerts#
Create real-time alerts for critical changes:
# Alert on NSG rule changes
az monitor activity-log alert create \
--resource-group PCI-ResourceGroup \
--name NSG-Rule-Change-Alert \
--description "Alert when NSG rules are modified" \
--condition "category=Administrative and operationName=Microsoft.Network/networkSecurityGroups/securityRules/write" \
--action-group /subscriptions/SUBSCRIPTION_ID/resourceGroups/PCI-ResourceGroup/providers/microsoft.insights/actionGroups/SecurityTeam
# Alert on resource group deletions
az monitor activity-log alert create \
--resource-group PCI-ResourceGroup \
--name ResourceGroup-Deletion-Alert \
--description "Alert on resource group deletion attempts" \
--condition "category=Administrative and operationName=Microsoft.Resources/subscriptions/resourceGroups/delete" \
--action-group /subscriptions/SUBSCRIPTION_ID/resourceGroups/PCI-ResourceGroup/providers/microsoft.insights/actionGroups/SecurityTeam
Advanced NSG Monitoring with KQL#
Log Analytics uses KQL (Kusto Query Language) for powerful log analysis. Here’s a query to detect suspicious NSG changes:
AzureActivity
| where OperationNameValue == "MICROSOFT.NETWORK/NETWORKSECURITYGROUPS/SECURITYRULES/WRITE"
| where ActivityStatusValue == "Success"
| extend RuleName = tostring(parse_json(Properties).resource)
| project TimeGenerated, Caller, ResourceGroup, RuleName, OperationNameValue
| order by TimeGenerated desc
Set this as a scheduled alert rule:
# Create action group for alerts
az monitor action-group create \
--resource-group PCI-ResourceGroup \
--name SecurityTeam \
--short-name SecTeam \
--email-receiver name=SecurityTeam email=security@company.com
# Create scheduled query alert
az monitor scheduled-query create \
--resource-group PCI-ResourceGroup \
--name "Suspicious NSG Changes" \
--scopes ${WORKSPACE_ID} \
--condition "count > 0" \
--window-size "PT5M" \
--evaluation-frequency "PT5M" \
--action-groups /subscriptions/SUBSCRIPTION_ID/resourceGroups/PCI-ResourceGroup/providers/microsoft.insights/actionGroups/SecurityTeam \
--severity 2 \
--description "Detects unauthorized NSG rule additions" \
--query "AzureActivity | where OperationNameValue == 'MICROSOFT.NETWORK/NETWORKSECURITYGROUPS/SECURITYRULES/WRITE' and ActivityStatusValue == 'Success'"
Azure VM File Integrity Monitoring#
For VMs running in Azure, you need OS-level FIM just like AWS EC2. You have two options:
- Wazuh agents (consistent with your other environments)
- Azure Change Tracking (native Azure solution)
Here’s the Azure Change Tracking approach:
# Install Azure Monitor Agent on VMs
az vm extension set \
--resource-group CDE-ResourceGroup \
--vm-name payment-web-01 \
--name AzureMonitorLinuxAgent \
--publisher Microsoft.Azure.Monitor \
--enable-auto-upgrade true
# Create Data Collection Rule for FIM
az monitor data-collection rule create \
--resource-group CDE-ResourceGroup \
--name FIM-DCR \
--location eastus \
--rule-file fim-dcr.json
The Data Collection Rule JSON (fim-dcr.json):
{
"location": "eastus",
"properties": {
"dataSources": {
"extensions": [
{
"name": "ChangeTracking-Linux",
"streams": ["Microsoft-ConfigurationChange"],
"extensionName": "ChangeTracking-Linux",
"extensionSettings": {
"enableFiles": true,
"enableSoftware": false,
"enableRegistry": false,
"enableServices": false,
"enableInventory": false,
"fileSettings": {
"fileCollectionFrequency": 900,
"fileInfo": [
{
"path": "/etc/passwd",
"fileSystemType": "File",
"recurse": false
},
{
"path": "/etc/shadow",
"fileSystemType": "File",
"recurse": false
},
{
"path": "/etc/ssh/sshd_config",
"fileSystemType": "File",
"recurse": false
},
{
"path": "/etc/sudoers",
"fileSystemType": "File",
"recurse": false
},
{
"path": "/etc/iptables",
"fileSystemType": "Directory",
"recurse": true
},
{
"path": "/opt/payment-gateway",
"fileSystemType": "Directory",
"recurse": true
}
]
}
}
}
]
},
"destinations": {
"logAnalytics": [
{
"workspaceResourceId": "/subscriptions/${SUBSCRIPTION_ID}/resourceGroups/CDE-ResourceGroup/providers/Microsoft.OperationalInsights/workspaces/PCI-LogAnalytics",
"name": "PCI-Workspace"
}
]
},
"dataFlows": [
{
"streams": ["Microsoft-ConfigurationChange"],
"destinations": ["PCI-Workspace"]
}
]
}
}
Query the Change Tracking data:
ConfigurationChange
| where ConfigChangeType == "Files"
| where Computer contains "payment"
| where FileSystemPath in ("/etc/passwd", "/etc/shadow", "/etc/ssh/sshd_config")
| project
TimeGenerated,
Computer,
FileSystemPath,
ChangeCategory,
PreviousContentChecksum,
ContentChecksum
| order by TimeGenerated desc
Azure Firewall Configuration Backup#
Azure Firewalls are defined as Azure resources, so changes are captured in Activity Logs. But for traditional backup-and-compare FIM:
#!/bin/bash
# Daily Azure Firewall config backup
RESOURCE_GROUP="CDE-ResourceGroup"
FIREWALL_NAME="CDE-AzureFirewall"
BACKUP_DIR="/backup/azure-firewall"
DATE=$(date +%Y%m%d-%H%M%S)
# Export current firewall configuration
az network firewall show \
--resource-group "${RESOURCE_GROUP}" \
--name "${FIREWALL_NAME}" \
--output json > "${BACKUP_DIR}/firewall-config-${DATE}.json"
# Compare with previous backup
PREVIOUS=$(ls -t ${BACKUP_DIR}/*.json 2>/dev/null | sed -n '2p')
if [ -f "$PREVIOUS" ]; then
if ! diff -q "${BACKUP_DIR}/firewall-config-${DATE}.json" "$PREVIOUS" > /dev/null 2>&1; then
echo "ALERT: Azure Firewall configuration changed!" | \
mail -s "FIM: Azure Firewall Config Change" security@company.com
# Log to SIEM, create ticket, etc.
fi
fi
# Cleanup old backups (retain 30 days)
find ${BACKUP_DIR} -name "*.json" -mtime +30 -delete
Schedule via cron: 0 4 * * * /usr/local/bin/azure-firewall-backup.sh
QSA Evidence Package#
When it’s time for your PCI assessment, you need to present more than just “we have FIM running.” QSAs want to see:
- Proof it’s deployed - Screenshots, agent lists, config files
- Proof it’s working - Sample alerts from the past 90 days
- Proof you respond to alerts - Investigation tickets, change records
- Proof of periodic review - Weekly/monthly review meeting notes
Here’s what a complete evidence package looks like:
1. Configuration Files#
11.5.2_Evidence/
├── Configuration/
│ ├── wazuh-ossec.conf
│ ├── wazuh-local-rules.xml
│ ├── fortigate-backup-script.sh
│ ├── palo-alto-fim.py
│ ├── cisco-rancid-config.txt
│ ├── aws-cloudtrail-setup.sh
│ ├── aws-config-rules.json
│ ├── aws-cloudwatch-alarms.json
│ ├── azure-activity-log-alerts.sh
│ ├── azure-policy-definitions.json
│ ├── azure-dcr-fim.json
│ └── azure-firewall-backup.sh
2. Proof of Deployment#
- Screenshot of Wazuh manager showing all agents connected
- Screenshot of Wazuh FIM dashboard showing real-time monitoring active
- AWS CloudTrail console screenshot (enabled, multi-region, log validation on)
- AWS Config rules compliance dashboard screenshot
- Azure Activity Log alert rules list
- Azure Policy assignments showing enforced policies
- FortiGate/Palo Alto last backup timestamps
3. Sample Alerts#
Provide 3-5 recent real alerts with investigation outcomes:
├── Alerts/
│ ├── 2026-02-01_wazuh_shadow_modified.pdf
│ ├── 2026-02-03_aws_sg_change.pdf
│ ├── 2026-02-05_azure_nsg_alert.pdf
│ ├── 2026-02-08_fortigate_rule_change.pdf
│ └── 2026-02-10_palo_policy_update.pdf
Each alert should show:
- When it fired
- What changed
- Who was alerted
- Investigation outcome (authorized change vs. incident)
4. Response Documentation#
- Sample ServiceNow/Jira tickets showing investigations
- Change management records for authorized changes
- Incident response tickets for unauthorized changes (if any)
- Weekly FIM review meeting notes
Example ticket:
INC0012345: FIM Alert - /etc/shadow modified on payment-web-02
Opened: 2026-02-01 14:32:15
Alert Source: Wazuh FIM
Severity: High
Investigation:
- Checked change management system - no approved change
- Contacted server admin team - emergency password reset for locked account
- Verified admin authorization via Slack thread with CISO
- Confirmed legitimate activity
- Documented in change log retroactively
Resolution: Authorized emergency change, closed
Closed: 2026-02-01 15:05:22
5. Testing Evidence#
QSAs want to see that FIM actually works. Provide test results:
# FIM Test - Linux
echo "PCI TEST" >> /etc/passwd
# Wait for alert (should be <1 minute for real-time)
# Screenshot alert received, then revert the test change
# FIM Test - AWS
aws ec2 authorize-security-group-ingress \
--group-id sg-12345678 \
--protocol tcp \
--port 80 \
--cidr 0.0.0.0/0
# Verify CloudWatch alarm fired
# Screenshot SNS email received, then revoke the test rule
Document the tests with screenshots and timestamps proving alerts fired within expected timeframes.
Hybrid Environment Example#
Most organizations aren’t purely on-prem or purely cloud. Here’s a realistic hybrid setup and how to achieve comprehensive coverage:
Environment:
- On-prem Linux servers (payment processing)
- FortiGate edge firewall
- AWS EC2 instances (web tier)
- AWS RDS (database)
- Azure VMs (analytics workloads)
FIM Implementation:
| Layer | Tool | Frequency | Alert Destination |
|---|---|---|---|
| On-Prem Linux | Wazuh | Real-time | SIEM → Email |
| FortiGate | Bash script | Daily | Email on diff |
| AWS EC2 | Wazuh | Real-time | SIEM |
| AWS Infrastructure | CloudTrail + Config | Real-time | CloudWatch → SNS → Email |
| Azure VMs | Wazuh | Real-time | SIEM |
| Azure Infrastructure | Activity Log + Policy | Real-time | Azure Monitor → Email |
Centralization Strategy:
All FIM alerts route to your SIEM (Splunk, ELK, Sentinel) for correlation, long-term storage, and unified reporting. This gives you a single pane of glass for all FIM activity across your hybrid environment.
During audits, you can pull unified FIM reports from your SIEM that cover every layer of your stack.
Key Takeaways#
After implementing FIM across dozens of environments and evaluating hundreds more during assessments, here’s what I’ve learned:
Network configurations ARE critical files. Don’t skip firewall and switch configs. This is a common gap I see during assessments.
Cloud requires cloud-native tools. Don’t try to shoehorn traditional FIM into cloud environments. CloudTrail, Config, Activity Logs, use them. They’re designed for this.
Multi-layered approach is essential. One tool can’t cover OS files, network devices, and cloud infrastructure. Accept that you need multiple tools and integrate them well.
Real-time beats weekly. PCI requires weekly comparisons as a minimum, but real-time monitoring is table stakes in 2026. The technology exists, use it.
Alerts without response fail audits. FIM generates alerts. You must investigate, document, and respond. QSAs will interview your team about alert handling processes.
Document everything. You’ll need configurations, proof of deployment, sample alerts, investigation records, and review meeting notes. Start collecting evidence now, not two weeks before your audit.
Test your FIM regularly. Make a controlled change and verify alerts fire. Document these tests. QSAs love seeing proactive testing.
One final thought: The spirit of Requirement 11.5.2 is detecting unauthorized changes before they cause damage. Whether you use Wazuh, Tripwire, CloudTrail, or custom scripts, the goal is the same, know when something changes, investigate why, and respond appropriately.
Build your FIM program around that principle, and you’ll not only pass audits but actually improve your security posture.
Questions about implementing FIM in your specific environment? Reach out via LinkedIn or email, I’m happy to discuss your architecture.
