Control 4.9: Incident Reporting and Root Cause Analysis — Verification & Testing
Test cases and evidence collection procedures for Copilot incident reporting and root cause analysis.
Test Cases
Test 1: Alert Policy Trigger Verification
- Objective: Confirm that Copilot incident alert policies trigger correctly
- Steps:
  - Review the configured alert policies for Copilot incidents.
  - Simulate a condition that should trigger an alert (e.g., a DLP violation in a Copilot interaction).
  - Verify the alert is generated within the expected timeframe.
  - Confirm the alert notification reaches the designated recipients.
- Expected Result: Alert triggers within the configured timeframe and notifications are delivered.
- Evidence: Alert notification email and alert entry in the Purview alerts dashboard.
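Where the alerts dashboard supports a CSV export, step 3 (latency verification) can be scripted rather than checked by eye. A minimal Python sketch, assuming hypothetical column names (`AlertName`, `TriggeredAt`, `NotifiedAt`) and a one-hour alerting SLA; adjust both to match the actual export and the configured policy:

```python
import csv
import io
from datetime import datetime, timedelta

# Hypothetical CSV export from the Purview alerts dashboard; the column
# names here are assumptions and must be adjusted to the real export.
SAMPLE_EXPORT = """\
AlertName,TriggeredAt,NotifiedAt
Copilot DLP violation,2024-05-01T10:00:00,2024-05-01T10:12:00
Copilot restricted content,2024-05-01T11:00:00,2024-05-01T12:30:00
"""

MAX_LATENCY = timedelta(hours=1)  # assumed alerting SLA

def check_alert_latency(export_text, max_latency=MAX_LATENCY):
    """Return (alert name, latency) for alerts that exceeded the SLA."""
    late = []
    for row in csv.DictReader(io.StringIO(export_text)):
        triggered = datetime.fromisoformat(row["TriggeredAt"])
        notified = datetime.fromisoformat(row["NotifiedAt"])
        if notified - triggered > max_latency:
            late.append((row["AlertName"], notified - triggered))
    return late

print(check_alert_latency(SAMPLE_EXPORT))
```

An empty result means every simulated alert was delivered within the configured timeframe; any rows returned should be attached to the test evidence with an explanation.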
Test 2: Incident Response Workflow Execution
- Objective: Validate the end-to-end incident response workflow for a Copilot incident
- Steps:
  - Simulate a Copilot-related incident scenario (e.g., Copilot surfacing restricted content).
  - Follow the incident response workflow from detection through containment and resolution.
  - Complete the incident report template with all required sections.
  - Verify the RCA is completed within the defined timeline.
- Expected Result: The incident response workflow completes end to end, with every step documented.
- Evidence: Completed incident report with timeline and RCA documentation.
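Step 3 (confirming the report template is fully completed) can be reduced to a checklist comparison. A small sketch, assuming a hypothetical set of required section names; the real list should come from the organization's own incident report template:

```python
# Hypothetical required sections; replace with the sections defined in
# the organization's actual incident report template.
REQUIRED_SECTIONS = {
    "Detection summary",
    "Containment actions",
    "Resolution",
    "Incident timeline",
    "Root cause analysis",
}

def missing_sections(report_sections):
    """Return required template sections absent from a completed report."""
    return sorted(REQUIRED_SECTIONS - set(report_sections))

draft = ["Detection summary", "Containment actions", "Incident timeline"]
print(missing_sections(draft))  # sections still to complete
```

A non-empty result indicates the report cannot yet be accepted as test evidence.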
Test 3: Regulatory Notification Decision Process
- Objective: Verify that the regulatory notification assessment process functions correctly
- Steps:
  - Create a hypothetical scenario involving customer NPI exposure via Copilot.
  - Walk through the regulatory notification assessment criteria.
  - Verify the decision matrix correctly identifies applicable notification requirements.
  - Confirm the CCO approval workflow for notification decisions is functional.
- Expected Result: Notification assessment correctly identifies regulatory obligations and approval workflow functions.
- Evidence: Documented assessment walkthrough with decision rationale.
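The decision-matrix check in step 3 can be made mechanical by encoding each criterion as a condition-to-obligation pair. The sketch below is illustrative only: the trigger keys and mapped obligations are assumptions, not legal guidance, and must be replaced with the firm's documented assessment criteria:

```python
# Illustrative decision matrix; criteria and obligations are assumed
# placeholders, to be replaced with the firm's documented criteria.
NOTIFICATION_MATRIX = [
    # (incident fact, notification obligation to assess)
    ("customer_npi_exposed", "SEC Reg S-P breach notification assessment"),
    ("reportable_event", "FINRA 4530 event report"),
]

def applicable_notifications(incident):
    """Map incident facts to the notification obligations to assess."""
    return [obligation for key, obligation in NOTIFICATION_MATRIX
            if incident.get(key)]

scenario = {"customer_npi_exposed": True, "reportable_event": False}
print(applicable_notifications(scenario))
```

Running the hypothetical NPI-exposure scenario through the matrix and comparing the output to the assessor's manual conclusion gives a documented, repeatable walkthrough for the evidence file.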
Test 4: Anomaly Detection Effectiveness
- Objective: Confirm that anomaly detection scripts correctly identify unusual patterns
- Steps:
  - Run the anomaly detection script against the current audit log data.
  - Review flagged users to determine whether the anomalies are genuine.
  - Verify the threshold settings are appropriate for the organization's usage patterns.
  - Adjust thresholds if the false positive rate is too high or false negatives are observed.
- Expected Result: Anomaly detection identifies genuine unusual patterns with an acceptable false positive rate.
- Evidence: Anomaly detection report with validation notes.
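The production detection scripts referenced in the evidence table run in PowerShell; the Python sketch below only illustrates the kind of threshold logic being validated in steps 3 and 4, using a hypothetical set of per-user daily Copilot interaction counts and an assumed median-multiple cutoff:

```python
from statistics import median

# Hypothetical per-user daily Copilot interaction counts from audit logs.
usage = {
    "alice": 12, "bob": 15, "carol": 14, "dave": 13,
    "eve": 95,   # unusually high volume
}

THRESHOLD_MULTIPLIER = 3  # assumed: flag users above 3x the median count

def flag_anomalies(counts, multiplier=THRESHOLD_MULTIPLIER):
    """Flag users whose activity exceeds multiplier * median."""
    cutoff = multiplier * median(counts.values())
    return sorted(user for user, count in counts.items() if count > cutoff)

print(flag_anomalies(usage))
```

The `multiplier` parameter is the tuning knob step 4 describes: raising it reduces false positives, lowering it catches subtler deviations, and the value chosen should be recorded in the validation notes.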
Evidence Collection
| Evidence Item | Source | Format | Retention |
|---|---|---|---|
| Alert policy configuration | Purview portal | Screenshot | With control documentation |
| Incident response test report | Simulation exercise | Document | 7 years |
| RCA template and completed examples | Incident management | PDF | 7 years |
| Anomaly detection reports | PowerShell | CSV | 1 year |
Compliance Mapping
| Regulation | Requirement | How This Control Helps |
|---|---|---|
| FINRA 4530 | Incident reporting to FINRA | Supports compliance with event reporting obligations |
| SEC Reg S-P | Breach notification | Helps meet breach notification requirements for NPI exposure |
| FFIEC IT Handbook | Incident response and RCA | Supports IT incident management and root cause analysis requirements |
Next Steps