
Control 2.10: Insider Risk Detection for Copilot Usage Patterns

Control ID: 2.10
Pillar: Security & Protection
Regulatory Reference: GLBA 501(b), SOX 404, FINRA 3110
Last Verified: 2026-03-22
Governance Levels: Baseline / Recommended / Regulated


Objective

Configure Microsoft Purview Insider Risk Management (IRM) policies to detect anomalous Copilot usage patterns that may indicate data exfiltration, unauthorized information gathering, or misuse of AI capabilities. Copilot provides a powerful new vector for insider threats — enabling rapid aggregation and summarization of data that would previously require extensive manual effort. This control supports compliance with GLBA safeguard monitoring requirements, SOX internal control obligations, and FINRA supervisory expectations.


Why This Matters for FSI

  • GLBA 501(b) requires monitoring and testing of safeguard effectiveness — insider risk detection for Copilot demonstrates that the firm monitors for internal threats to customer information via AI tools
  • SOX Section 404 requires assessment of internal controls — insider risk policies for Copilot are an internal control designed to detect unauthorized access to financially sensitive information
  • FINRA Rule 3110 (Supervision) requires supervisory systems reasonably designed to prevent and detect violations — detecting anomalous Copilot usage is a supervision measure for AI-assisted activities. Per the FINRA 2026 Annual Regulatory Oversight Report (GenAI Section, December 9, 2025), agentic AI systems require supervisory controls that cover AI workflow engines selecting intermediate actions. IRM policies for Copilot agents directly address this requirement.
  • FINRA Rule 3120 requires designated supervisory personnel to test supervisory procedures — insider risk detection provides testable, evidence-generating supervision of Copilot
  • SEC Enforcement precedent demonstrates that regulators hold firms accountable for insider misuse of information systems — Copilot's ability to aggregate MNPI makes insider risk monitoring essential
  • OCC/Fed Interagency Guidance expects financial institutions to manage AI-related risks, including the risk of misuse by authorized users

Control Description

Microsoft Purview Insider Risk Management uses signals from across Microsoft 365 to identify potentially risky user activity. When integrated with Copilot activity signals, IRM can detect patterns that suggest misuse of AI capabilities. As of December 2025, IRM adds four capabilities that significantly expand coverage for AI and agent governance.

Insider Risk Indicators for Copilot

Risk Indicator | Description | Detection Method | Severity
--- | --- | --- | ---
Bulk data extraction via prompts | User submits many prompts designed to extract and aggregate data | Volume + content analysis | High
Sensitive topic probing | Repeated queries about specific restricted topics (MNPI, client data) | Keyword matching + pattern | High
Cross-segment information gathering | Copilot queries that attempt to access data across business segments | Barrier violation attempts | Critical
Unusual query volume | Significantly more Copilot interactions than peer baseline | Statistical anomaly | Medium
Off-hours Copilot activity | Heavy Copilot usage outside normal working hours | Time-based anomaly | Medium
Download after Copilot summary | User downloads files shortly after Copilot summarizes them | Sequence analysis | High
Copilot + USB/print activity | Copilot data extraction followed by USB or print activity | Multi-signal correlation | Critical
Pre-resignation data gathering | Increased Copilot usage after submitting resignation | HR signal + activity | High
Client data aggregation | Copilot used to compile client lists, account data, or portfolio summaries | Content + pattern | High
AI usage anomaly | Copilot query volume, agent interactions, or AI app usage deviating significantly from behavioral baseline | AI usage indicator category | Medium
Agent interaction spike | Sudden increase in agent interactions beyond peer baseline | AI usage indicator + volume | High
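The "Download after Copilot summary" sequence analysis can be sketched as a windowed join over activity events. The following is a minimal illustration; the event shape and field names are hypothetical, not the actual IRM or audit log schema:

```python
from datetime import datetime, timedelta

def detect_download_after_summary(events, window_minutes=30):
    """Flag users who download a file shortly after Copilot summarized it
    (the 'Download after Copilot summary' indicator in the table above)."""
    flagged = []
    last_summary = {}  # (user, file) -> time of most recent Copilot summary
    for ev in sorted(events, key=lambda e: e["time"]):
        key = (ev["user"], ev["file"])
        if ev["action"] == "CopilotSummary":
            last_summary[key] = ev["time"]
        elif ev["action"] == "FileDownloaded" and key in last_summary:
            gap = ev["time"] - last_summary[key]
            if gap <= timedelta(minutes=window_minutes):
                flagged.append({"user": ev["user"], "file": ev["file"],
                                "gap_minutes": gap.total_seconds() / 60})
    return flagged

# Hypothetical events: alice summarizes then downloads within the window;
# bob downloads with no preceding summary and is not flagged.
events = [
    {"user": "alice", "file": "clients.xlsx", "action": "CopilotSummary",
     "time": datetime(2026, 3, 1, 9, 0)},
    {"user": "alice", "file": "clients.xlsx", "action": "FileDownloaded",
     "time": datetime(2026, 3, 1, 9, 12)},
    {"user": "bob", "file": "deck.pptx", "action": "FileDownloaded",
     "time": datetime(2026, 3, 1, 9, 5)},
]
hits = detect_download_after_summary(events)
```

In production this correlation runs inside IRM itself; a sketch like this is useful mainly for validating test scenarios before relying on alert output.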

IRM Policy Architecture for Copilot

Signal Sources                    IRM Processing                 Outcome
┌──────────────┐
│ Copilot      │──┐
│ Activity Logs│  │
└──────────────┘  │
┌──────────────┐  │
│ Agent        │──┤
│ Interaction  │  │    ┌──────────────────┐    ┌─────────────────┐
│ Logs         │  │    │ Insider Risk     │    │ Alert Generated │
└──────────────┘  │    │ Management       │──→ │                 │
┌──────────────┐  │    │                  │    │ ├─ Low severity  │
│ HR Signals   │──┼──→ │ ├─ Policy match  │    │ ├─ Medium        │
│ (departure,  │  │    │ ├─ Risk scoring  │    │ ├─ High          │
│  PIP, etc.)  │  │    │ └─ Correlation   │    │ └─ Critical      │
└──────────────┘  │    └──────────────────┘    └────────┬────────┘
┌──────────────┐  │                                     │
│ DLP Matches  │──┤                              ┌──────┴──────┐
│              │  │                              │ IRM Triage  │
└──────────────┘  │                              │ Agent +     │
┌──────────────┐  │                              │Investigation│
│ Device       │──┤                              │ Workflow    │
│ Activity     │  │                              └─────────────┘
└──────────────┘  │
┌──────────────┐  │
│ Cloud App    │──┘
│ Activity     │
└──────────────┘

IRM Policy Templates for Copilot

Template | Triggering Event | Key Indicators | FSI Use Case
--- | --- | --- | ---
Data theft by departing users | HR termination/resignation signal | Copilot data gathering + file downloads | Departing advisors gathering client data
General data leaks | No trigger required (continuous) | Unusual Copilot volume + sensitive data access | General insider threat monitoring
Security policy violations | DLP policy match | DLP match in Copilot + continued probing | Repeated attempts to access restricted data via AI
Risky browser usage | No trigger required | Copilot usage + unauthorized cloud storage access | Data exfiltration via AI + personal cloud
Patient data misuse (customizable) | No trigger required | Copilot access to sensitive labeled content | Unauthorized access to client financial data
Risky Agents | Agent activity anomaly (auto-deployed) | Agent data access volume, DLP matches, behavioral anomaly | Agent-level misuse of Copilot Studio and Azure AI Foundry agents

Adaptive Protection Integration

Adaptive Protection dynamically adjusts DLP enforcement based on insider risk levels:

IRM Risk Level | DLP Policy Adjustment | Copilot Impact
--- | --- | ---
Elevated | Enhanced monitoring | Copilot DLP policy tips always shown
High | Block with override | Copilot blocks sensitive content with manager override
Elevated + High | Block without override | Copilot blocks all sensitive content processing

Adaptive Protection also integrates with Conditional Access to dynamically restrict Copilot access for at-risk users. See Control 2.3 for CA integration details.
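The escalation logic can be sketched as a simple lookup. This is an illustration only: the level names mirror the table above, but the enforcement strings and the fallback behavior are assumptions, not the product's actual configuration model:

```python
# Illustrative mapping of Adaptive Protection risk levels to the DLP
# enforcement applied to Copilot; these action strings are assumptions.
ENFORCEMENT = {
    "Elevated": "policy_tips_always_shown",
    "High": "block_with_manager_override",
    "Elevated + High": "block_without_override",
}

def copilot_dlp_action(risk_level, default="audit_only"):
    """Return the DLP enforcement applied to Copilot for an IRM risk level.
    Users with no assigned risk level fall back to baseline auditing."""
    return ENFORCEMENT.get(risk_level, default)
```

The key design point Adaptive Protection embodies is that enforcement is a function of current risk level, not a static per-user setting: as IRM alerts raise or lower a user's level, the Copilot DLP posture changes with it.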


New Capabilities for Agent and AI Governance (December 2025)

Risky Agents Policy Template

Microsoft auto-deploys a Risky Agents policy for all Copilot Studio and Azure AI Foundry agents deployed in the tenant. This auto-deployment is generally available as of December 2025. The Risky Agents policy template itself is in public preview with full general availability targeted for late 2026.

Scope: Currently covers Copilot Studio agents and Azure AI Foundry agents. Microsoft prebuilt agents, third-party agents, and SharePoint agents are not yet included in the auto-deployment scope; coverage for additional agent types is expected to expand.

The Risky Agents policy detects agent-level risk indicators:

  • Agents accessing unusually high volumes of data
  • Agents triggering DLP matches
  • Agents behaving anomalously compared to a behavioral baseline

Why this matters for FSI: Per the FINRA 2026 Annual Regulatory Oversight Report (GenAI Section, December 9, 2025), agentic AI systems require supervisory controls that cover AI workflow engines selecting intermediate actions. Under FINRA Rules 3110 and 3120, supervisory systems must be reasonably designed to prevent and detect violations, and that requirement now extends explicitly to AI-driven agent activities. The Risky Agents auto-deployment provides a foundational supervisory mechanism for Copilot Studio and Azure AI Foundry agents.

Tier recommendations:

Level | Requirement
--- | ---
Baseline | Review the auto-deployed Risky Agents policy; confirm it covers all deployed Copilot Studio and Azure AI Foundry agents; configure alert routing to compliance team
Recommended | Customize risk thresholds for FSI context; add agent-specific alert routing to compliance and risk teams; document agent risk management as part of supervisory procedures
Regulated | Mandatory compliance review of all agent risk alerts within 24 hours; quarterly agent risk assessment reporting to senior management; document Risky Agents oversight in firm supervisory procedures per FINRA Rules 3110/3120

AI Usage Indicator Category

IRM adds an AI usage indicator as a distinct category of risk signals, separate from traditional file access and device indicators:

  • IRM can now correlate AI-specific usage patterns — Copilot query volume, agent interactions, and AI application usage — as distinct risk indicators
  • These indicators feed into existing IRM risk scoring alongside traditional indicators (file downloads, USB usage, HR signals)
  • An unusual spike in AI app usage, combined with traditional exfiltration signals, produces a higher composite risk score
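The compounding effect described above can be illustrated with a toy scoring function. The prefix convention, point values, and the 1.5 amplifier are invented for illustration; IRM's actual risk scoring model is not public:

```python
def composite_risk_score(indicators, ai_boost=1.5):
    """Toy composite scoring: AI usage indicators (keys prefixed 'ai_')
    amplify traditional exfiltration signals rather than replacing them."""
    traditional = sum(v for k, v in indicators.items() if not k.startswith("ai_"))
    ai_usage = sum(v for k, v in indicators.items() if k.startswith("ai_"))
    if traditional and ai_usage:
        # Correlated AI + traditional activity scores higher than the plain sum.
        return (traditional + ai_usage) * ai_boost
    return traditional + ai_usage

usb_only = composite_risk_score({"usb_copy": 40})                        # 40
combined = composite_risk_score({"usb_copy": 40, "ai_query_spike": 20})  # 90.0
```

The point is the shape of the behavior, not the numbers: an AI usage spike on its own contributes modestly, but the same spike alongside a traditional exfiltration signal produces a disproportionately higher composite score.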

Integration: AI usage indicators appear in the IRM policy indicators settings alongside existing indicator categories. Configure sensitivity thresholds appropriate for your Copilot deployment scale — organizations that have fully deployed Copilot will have higher baseline AI usage than those with limited rollout.

AI Usage Indicator | Trigger Condition | Risk Contribution
--- | --- | ---
Copilot query volume spike | Daily interactions >3x peer baseline | Medium
Agent interaction spike | Agent queries >3x prior-week baseline | High
AI application diversity | Access to >5 distinct AI applications in a session | Low–Medium
After-hours AI usage | AI app usage outside normal hours combined with data access | Medium
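As an illustration of the ">3x peer baseline" trigger, here is a minimal sketch assuming a simple mean over peer daily counts; real IRM baselining is more sophisticated (behavioral, per-user, time-weighted):

```python
from statistics import mean

def query_volume_spike(user_today, peer_daily_counts, multiplier=3.0):
    """Return True when today's Copilot interaction count exceeds
    multiplier x the peer-group daily baseline."""
    baseline = mean(peer_daily_counts)
    return user_today > multiplier * baseline

# A user at 50 interactions against a peer baseline of 10/day trips the
# 3x threshold; 25 interactions does not.
spiked = query_volume_spike(50, [8, 10, 12])
normal = query_volume_spike(25, [8, 10, 12])
```

This is also why the threshold guidance above matters: organizations with full Copilot rollout have a higher peer baseline, so the same absolute query volume may not trip the multiplier.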

Data Risk Graphs

Data risk graphs, generally available as of December 2025, provide a visualization capability within IRM that maps the relationships between users, data assets, and AI interactions to identify risk clusters.

Graphs surface patterns such as:

  • Users accessing high volumes of customer data via Copilot
  • Unusual cross-department AI data access patterns (e.g., operations staff querying finance data via Copilot)
  • Clusters of users involved in related data access events that individually appear low-risk but collectively suggest coordinated activity

FSI application: Financial institutions with complex organizational structures (trading, advisory, operations, compliance) can use data risk graphs to identify unusual cross-function data flows surfaced through Copilot. A graph showing multiple operations staff accessing investment banking data via Copilot may warrant investigation under Chinese wall supervisory procedures.

Access path: Microsoft Purview > Insider Risk Management > Data risk graphs (available from the IRM investigation workspace).
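Conceptually, the cross-department pattern above is a graph of user-to-data edges annotated with organizational metadata. A simplified sketch, with a hypothetical event schema (the real graphs are built from IRM signals, not a structure like this):

```python
from collections import defaultdict

def cross_department_edges(access_events):
    """Build user -> dataset edges and flag cross-department access,
    a simplified version of what data risk graphs visualize."""
    edges = defaultdict(set)
    flagged = []
    for ev in access_events:
        edges[ev["user"]].add(ev["dataset"])
        # Access is flagged when the user's department differs from the
        # department that owns the data.
        if ev["user_dept"] != ev["data_dept"]:
            flagged.append((ev["user"], ev["dataset"]))
    return edges, flagged

# Hypothetical events: an operations user touching investment banking
# data is flagged; a finance user in finance data is not.
access = [
    {"user": "ops1", "user_dept": "operations",
     "dataset": "ib_deal_files", "data_dept": "investment_banking"},
    {"user": "fin1", "user_dept": "finance",
     "dataset": "quarterly_close", "data_dept": "finance"},
]
edges, flagged = cross_department_edges(access)
```

The value of the graph view is the aggregation: individually low-risk edges become investigable once several users in one function converge on another function's data.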

IRM Triage Agent (Security Copilot)

The IRM Triage Agent, powered by Security Copilot, is generally available as of December 2025. It automates the initial triage phase of the IRM alert queue:

  • Categorizes alerts by severity based on the activity pattern and available context
  • Recommends investigation priorities, helping investigators focus on the highest-risk alerts first
  • Provides context summaries: what activity was detected, what data was involved, what the user's recent behavior pattern shows
  • Reduces alert fatigue in high-volume IRM environments by pre-processing the alert queue before human investigators review it

FSI application: Large financial institutions with active Copilot deployments generate significant IRM alert volume. The Triage Agent reduces the manual overhead of initial alert processing while maintaining supervisory rigor — investigators review pre-categorized, contextualized alerts rather than raw signals.

Per OCC Bulletin 2025-26 on proportionate model risk management, the IRM Triage Agent represents AI-assisted governance that reduces manual overhead while maintaining appropriate oversight. The Triage Agent itself should be documented as a model per OCC Bulletin 2011-12 (SR 11-7) model risk management guidance, with appropriate validation, ongoing monitoring, and escalation procedures if its recommendations are found to be systematically incorrect.

Tier recommendations:

Level | Requirement
--- | ---
Baseline | Enable Triage Agent in read-only mode; review categorizations weekly; document as a model in the firm's model inventory
Recommended | Enable with auto-categorization; Triage Agent recommendations guide investigator queue prioritization; quarterly review of categorization accuracy
Regulated | Enable with human-in-the-loop validation before alert dismissal; Triage Agent model documented with performance monitoring; any systematic mis-categorization triggers model remediation process per SR 11-7
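The quarterly categorization-accuracy review above reduces to a match rate between agent severities and investigators' final determinations. A minimal sketch (the severity labels are illustrative):

```python
def categorization_accuracy(agent_severities, final_severities):
    """Share of alerts where the Triage Agent's severity matched the
    investigator's final determination; feeds the quarterly accuracy
    review and SR 11-7 style performance monitoring."""
    if len(agent_severities) != len(final_severities):
        raise ValueError("expected one final determination per triaged alert")
    matches = sum(a == f for a, f in zip(agent_severities, final_severities))
    return matches / len(agent_severities)

# 3 of 4 categorizations agreed with the investigator: accuracy 0.75.
acc = categorization_accuracy(
    ["High", "Low", "Medium", "High"],
    ["High", "Low", "High", "High"],
)
```

Tracking this metric over time gives the evidence trail SR 11-7 expects: a sustained drop in accuracy is the trigger for the model remediation process in the Regulated tier.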

Copilot Surface Coverage

M365 Application | IRM Signal Collection | Anomaly Detection | Adaptive Protection | Notes
--- | --- | --- | --- | ---
Microsoft 365 Copilot Chat | Yes | Yes | Yes | Primary monitoring surface
Word | Yes | Yes | Yes | Document creation/editing signals
Excel | Yes | Yes | Yes | Data analysis and export signals
PowerPoint | Yes | Yes | Yes | Presentation creation signals
Outlook | Yes | Yes | Yes | Email drafting and sending signals
Teams | Yes | Yes | Yes | Chat and meeting activity signals
OneNote | Yes | Yes | Yes | Note creation signals
Loop | Yes | Yes | Yes | Collaboration signals
Copilot Pages | Yes | Yes | Yes | Page creation and sharing signals
SharePoint (Agents) | Yes | Yes | Yes | Agent interaction signals
Copilot Studio Agents | Yes | Yes | Yes | Covered by Risky Agents auto-deployment
Azure AI Foundry Agents | Yes | Yes | Yes | Covered by Risky Agents auto-deployment

Governance Levels

Level | Requirement | Rationale
--- | --- | ---
Baseline | Enable IRM with Risky AI usage (preview) as the primary Copilot template (Purview portal > Solutions > Insider Risk Management > Policies > Create policy > Risky AI usage); also enable Data leaks and Data leaks by priority users templates; include Copilot activity as a signal source; enable Generative AI apps indicators (preview) and Risky AI usage indicators (preview) at IRM > Settings > Policy indicators; review auto-deployed Risky Agents policy; enable IRM Triage Agent in read-only mode; configure alerts to notify the compliance team; review alerts weekly; PAYG note: non-M365 AI data in Risky AI usage indicators requires pay-as-you-go billing — M365 Copilot data is included at no additional charge | Provides foundational insider risk detection and agent monitoring — minimum monitoring for Copilot deployment in regulated environments
Recommended | Add Data theft by departing users template with HR connector; configure Adaptive Protection to dynamically adjust Copilot DLP; create custom indicators for FSI-specific patterns (MNPI probing, client data aggregation); configure Copilot-specific risk indicators: SIT matches in prompts, AI responses containing sensitive info, high-volume queries, and JailbreakDetected events; set custom thresholds at IRM > Policies > [policy] > Edit > Indicators > Customize thresholds with Low / Medium / High daily activity counts; enable risk score boosters: Activity above user's typical day, Priority user group member, Potential high-impact user (based on Entra hierarchy + sensitivity label access volume); enable Communication Compliance indicators — auto-creates a CC policy covering Exchange Online, Teams, Viva Engage, and M365 Copilot; integrate DSPM for AI (Purview portal > DSPM for AI) so prompt/response pair content flows as signals into the Risky AI usage template; enable data risk graphs for cross-department access visualization; enable IRM Triage Agent with auto-categorization; customize Risky Agents thresholds; integrate IRM alerts with SIEM; alert review within 24 hours; monthly risk trend analysis | Comprehensive insider risk program incorporating AI-specific indicators and agent governance — suitable for most FSI firms
Regulated | All Recommended requirements plus: Risky Agents alerts reviewed within 24 hours with mandatory investigation tracking; custom policy with all FSI-specific indicators including all Generative AI apps and Risky AI usage indicators; Adaptive Protection enforced at all risk levels; DSPM for AI prompt/response monitoring integrated with IRM investigations for full Copilot interaction visibility; IRM Triage Agent with human-in-the-loop validation; Triage Agent documented as a model per SR 11-7; IRM investigation workflow integrated with legal hold procedures; quarterly IRM effectiveness audit including agent risk assessment and DSPM signal coverage review; annual red team exercise testing Copilot-based insider threat scenarios; IRM metrics in board-level risk reporting | Full insider risk governance with continuous improvement and agent supervisory coverage — designed for firms with active insider threat programs and board-level risk oversight

Setup & Configuration

Step 1: Enable Insider Risk Management

Portal: Microsoft Purview > Insider Risk Management > Settings

  1. Complete the Insider Risk Management setup wizard
  2. Configure privacy settings (anonymization for initial alerts)
  3. Enable required data connectors (HR, DLP, device)

Step 2: Configure the HR Data Connector

Portal: Microsoft Purview > Settings > Data connectors > HR connector

  1. Set up the HR data connector to import termination and resignation dates
  2. Map HR system fields to IRM schema
  3. Configure automated import schedule (daily recommended)
  4. This enables the "Data theft by departing users" template

Step 3: Enable AI Usage Indicators

Portal: Microsoft Purview > Insider Risk Management > Settings > Policy indicators

  1. Navigate to Policy indicators settings
  2. Locate the AI usage indicator category
  3. Enable indicators for Copilot query volume, agent interactions, and AI app usage
  4. Set sensitivity thresholds appropriate for your Copilot deployment scale
  5. Allow 2-4 weeks for behavioral baselines to establish before tuning thresholds

Step 4: Review and Configure Risky Agents Policy

Portal: Microsoft Purview > Insider Risk Management > Policies

  1. Locate the auto-deployed Risky Agents policy in the policy list
  2. Review the policy scope — confirm it lists all Copilot Studio and Azure AI Foundry agents deployed in the tenant
  3. Review default risk thresholds; customize for FSI context if needed
  4. Configure alert routing: route agent risk alerts to both the compliance team and the agent deployment owner
  5. Note agents not yet covered (prebuilt Microsoft agents, third-party, SharePoint agents) and apply compensating monitoring via DSPM for AI or Defender for Cloud Apps

Step 5: Create IRM Policy for Copilot

Portal: Microsoft Purview > Insider Risk Management > Policies > Create policy

Policy 1: General Copilot Risk Monitoring

Setting | Value
--- | ---
Template | General data leaks
Name | FSI-Copilot-Insider-Risk
Users | All Copilot-licensed users
Indicators | All Microsoft 365 indicators + Copilot-specific indicators + AI usage indicators
Thresholds | Use default thresholds initially; tune after 30 days
Alert volume | Medium (balance detection with noise)

Policy 2: Departing User Copilot Monitoring

Setting | Value
--- | ---
Template | Data theft by departing users
Name | FSI-Copilot-Departing-Users
Triggering event | HR connector — resignation or termination date
Indicators | All Microsoft 365 indicators + Copilot + file download + USB/print + AI usage
Thresholds | Lower thresholds (more sensitive) for departing users
Policy timeframe | 90 days before departure date to 30 days after
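The policy timeframe (90 days before the departure date through 30 days after) is straightforward date arithmetic. A minimal sketch, with illustrative function and parameter names:

```python
from datetime import date, timedelta

def in_departure_window(activity_date, departure_date,
                        days_before=90, days_after=30):
    """True when activity falls inside the departing-user monitoring
    window: 90 days before the departure date through 30 days after."""
    start = departure_date - timedelta(days=days_before)
    end = departure_date + timedelta(days=days_after)
    return start <= activity_date <= end

departure = date(2026, 6, 1)
covered = in_departure_window(date(2026, 4, 15), departure)  # inside window
outside = in_departure_window(date(2025, 12, 1), departure)  # before window
```

A check like this is useful when validating that test alerts for a simulated departing user actually fall inside the configured window.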

Step 6: Configure Custom Indicators

Create custom indicators for FSI-specific Copilot risks:

Indicator Name | Signal | Threshold
--- | --- | ---
Copilot MNPI probing | Copilot queries containing MNPI keywords | 5+ in 24 hours
Copilot bulk extraction | Daily Copilot interaction count | 3x peer average
Copilot client data aggregation | Copilot queries referencing client names/accounts | 20+ unique clients in 24 hours
Copilot off-hours usage | Copilot interactions outside business hours | 10+ interactions after hours
Agent data volume anomaly | Agent file-access count above baseline | 5x prior-week average
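The "Copilot MNPI probing" indicator (5+ keyword hits in 24 hours) can be sketched as a trailing-window count. The keyword list below is a placeholder; a real deployment would draw on the firm's restricted list and deal-code terms:

```python
from datetime import datetime, timedelta

# Hypothetical keyword list for illustration only.
MNPI_KEYWORDS = {"merger", "acquisition", "tender offer", "earnings guidance"}

def mnpi_probing_alert(prompts, now, window_hours=24, threshold=5):
    """Evaluate the 'Copilot MNPI probing' indicator: count prompts
    containing MNPI keywords in a trailing window and compare against
    the 5-in-24-hours threshold."""
    cutoff = now - timedelta(hours=window_hours)
    hits = sum(
        1 for ts, text in prompts
        if ts >= cutoff and any(k in text.lower() for k in MNPI_KEYWORDS)
    )
    return hits >= threshold, hits
```

Note that IRM custom indicators are configured in the portal, not coded; a sketch like this is for reasoning about thresholds and for building test scenarios during verification.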

Step 7: Enable Adaptive Protection

Portal: Microsoft Purview > Insider Risk Management > Adaptive Protection

  1. Enable Adaptive Protection
  2. Configure risk levels mapping to DLP enforcement:
     • Elevated risk → Enhanced Copilot DLP monitoring
     • High risk → Block Copilot from processing sensitive content
  3. Set automatic risk level assignment based on IRM alerts
  4. Coordinate with the Conditional Access configuration in Control 2.3 to enable CA-level Copilot access restriction for high-risk users

Step 8: Enable Data Risk Graphs

Portal: Microsoft Purview > Insider Risk Management > Investigations

  1. From the investigation workspace, navigate to the Data risk graphs view
  2. Review available graph visualizations for Copilot and agent interactions
  3. Configure graph time windows appropriate for your investigation cycles
  4. Use graphs to identify cross-department data access patterns before they generate full IRM alerts

Step 9: Enable IRM Triage Agent

Portal: Microsoft Purview > Insider Risk Management > Settings > Triage Agent

  1. Enable the IRM Triage Agent after verifying the feature is entitled and available in your tenant
  2. Configure the Triage Agent to operate in read-only mode initially (Baseline)
  3. Review Triage Agent categorizations for the first 30 days to validate accuracy
  4. After validation, enable auto-categorization to allow Triage Agent recommendations to drive investigator queue order (Recommended)
  5. For Regulated: configure the human-in-the-loop requirement — alerts cannot be dismissed without investigator review of Triage Agent recommendation
  6. Document the Triage Agent in the firm's model inventory per OCC Bulletin 2011-12 (SR 11-7)

Step 10: Configure Alert and Investigation Workflow

  1. Define alert triage responsibilities (compliance team, security team)
  2. Configure alert notification routing (including agent-specific alert routing)
  3. Establish investigation procedures that include:
     • Alert review and initial assessment (with Triage Agent context summary)
     • Activity timeline reconstruction (Copilot interactions + agent interactions + correlated signals)
     • Data risk graph review for cross-department access patterns
     • Escalation criteria (when to involve legal, HR, management)
     • Documentation requirements for regulatory evidence

Financial Sector Considerations

  • Departing Financial Advisors: When registered representatives or financial advisors depart, they may use Copilot to rapidly compile client lists, portfolio summaries, and contact information. IRM policies should trigger enhanced monitoring immediately upon resignation notice. This addresses a long-standing industry concern about "book theft." The AI usage indicator category adds a specific signal for departing users who dramatically increase Copilot usage in their final weeks.
  • MNPI Abuse Detection: Employees with MNPI access who use Copilot to gather related market information may be engaging in insider trading preparation. Custom indicators should detect patterns where users query Copilot about companies they have MNPI about and then conduct related searches. Data risk graphs help visualize these multi-step access patterns.
  • Agent Supervisory Obligations: Per FINRA 2026 Oversight Report, the deployment of Copilot Studio and Azure AI Foundry agents creates supervisory obligations that parallel traditional employee supervision. Firms should treat Risky Agents alerts with the same investigation rigor applied to human user alerts.
  • IRM Triage Agent Model Risk: The Triage Agent is an AI model that makes recommendations about insider risk alert severity. Per OCC Bulletin 2011-12 (SR 11-7), the firm should document the Triage Agent as a model, validate its performance against human investigator judgments, and establish escalation procedures for systematic categorization errors.
  • Privacy Considerations: IRM involves monitoring employee activity, which raises privacy concerns. Financial firms should document the business justification for IRM monitoring (regulatory requirement), communicate monitoring policies to employees, and use anonymization features for initial alert triage.
  • Union and Employment Law: Some jurisdictions have laws governing employee monitoring. Consult with employment counsel to confirm IRM monitoring practices comply with applicable labor and privacy laws.
  • Investigation Documentation: IRM investigations should follow documented procedures that preserve evidence for potential regulatory referrals, employment actions, or litigation. Integrate IRM investigation workflows with the firm's existing internal investigation procedures.
  • Regulatory Reporting: If IRM investigations reveal potential securities law violations (insider trading, market manipulation), the firm may have reporting obligations to FINRA or the SEC. Establish escalation procedures that include legal counsel and compliance leadership.
  • Board Reporting: For Regulated-level implementations, include IRM metrics in quarterly board risk reports. Metrics should include: number of Copilot-related alerts, agent risk alerts, investigations opened, investigations resulting in action, and risk trend analysis.

Verification Criteria

  1. IRM Policy Active: Verify at least one IRM policy is active that includes Copilot activity signals
  2. Risky Agents Policy: Confirm the auto-deployed Risky Agents policy is present and active for Copilot Studio and Azure AI Foundry agents; verify alert routing is configured
  3. AI Usage Indicators: Confirm the AI usage indicator category is enabled and thresholds are set appropriately for the organization's Copilot deployment scale
  4. IRM Triage Agent: Verify the Triage Agent is enabled and producing context summaries for IRM alerts; confirm it is documented in the model inventory
  5. Data Risk Graphs: Confirm data risk graphs are accessible in the investigation workspace and are being used as part of investigation procedures
  6. Signal Collection: Confirm that Copilot interaction data appears in the IRM signal timeline for test users
  7. HR Connector (if applicable): Verify the HR data connector is importing termination/resignation data successfully
  8. Alert Generation: Trigger a test scenario (e.g., high-volume Copilot usage) and confirm an IRM alert is generated
  9. Alert Routing: Verify that IRM alerts are delivered to the designated compliance team within the defined SLA; agent risk alerts route to both compliance and agent deployment owners
  10. Adaptive Protection: Confirm that Adaptive Protection is enabled and that risk level changes result in corresponding DLP enforcement adjustments for Copilot
  11. Custom Indicators: Verify that FSI-specific custom indicators (MNPI probing, client data aggregation, agent data volume) are configured and functional
  12. Investigation Workflow: Confirm documented procedures exist for investigating IRM alerts related to Copilot and agent usage, incorporating Triage Agent recommendations
  13. Privacy Controls: Verify that anonymization is enabled for initial alert triage and that access to IRM investigation data is restricted to authorized investigators
  14. Periodic Review: Confirm that IRM policies are reviewed quarterly with threshold adjustments based on false positive rates and detection effectiveness

Additional Resources