
Control 2.9: Defender for Cloud Apps — Copilot Session Controls

Control ID: 2.9
Pillar: Security & Protection
Regulatory Reference: GLBA 501(b), FFIEC
Last Verified: 2026-02-17
Governance Levels: Baseline / Recommended / Regulated


Objective

Deploy Microsoft Defender for Cloud Apps (MDCA) session policies and access policies to provide real-time monitoring, anomaly detection, and session-level governance for Microsoft 365 Copilot interactions. MDCA extends Conditional Access by enabling granular session controls that can monitor Copilot usage patterns, detect anomalous behavior, and enforce real-time access decisions based on session context. This control supports compliance with GLBA safeguard requirements and FFIEC expectations for continuous monitoring of privileged application access.


Why This Matters for FSI

  • GLBA 501(b) requires monitoring and testing the effectiveness of safeguards — MDCA provides continuous, real-time monitoring of how users interact with Copilot, extending beyond point-in-time authentication controls
  • FFIEC IT Examination Handbook (Information Security) expects continuous monitoring and anomaly detection for critical applications — Copilot's access to enterprise data makes it a high-value monitoring target
  • FFIEC IT Examination Handbook (Audit) expects logging and monitoring of access to sensitive systems — MDCA session logs provide granular visibility into Copilot interaction patterns
  • NYDFS Part 500 (Section 500.06) requires an audit trail designed to detect and respond to cybersecurity events — MDCA alerts on anomalous Copilot usage support this requirement
  • OCC/Fed Interagency Guidance on AI expects institutions to monitor AI tool usage for safety, soundness, and consumer protection — MDCA enables this monitoring for Copilot

Control Description

Microsoft Defender for Cloud Apps integrates with Microsoft Entra Conditional Access to provide session-level controls that go beyond authentication-time decisions. For Copilot, MDCA enables:

MDCA Capability Matrix for Copilot

| Capability | Description | FSI Application |
| --- | --- | --- |
| Session policies | Real-time monitoring and control of Copilot sessions | Detect and respond to sensitive data access during Copilot interactions |
| Access policies | Control access based on session context | Block Copilot access from risky sessions |
| Anomaly detection | Machine learning-based detection of unusual behavior | Identify bulk data extraction attempts via Copilot |
| Activity policies | Alert on specific activity patterns | Notify on high-volume Copilot usage |
| App governance | Monitor OAuth app permissions | Track apps that integrate with Copilot |
| File policies | Monitor files accessed during sessions | Track sensitive file access through Copilot |
| Generative AI app catalog | Inventory and risk assessment of 1,000+ generative AI apps | Identify and govern Shadow AI tools used alongside Copilot |
| Agent threat detection | Detection of compromised or malicious agent behavior in Defender XDR | Protect Copilot agent deployments from threat actors |

Generative AI App Catalog

The Microsoft Defender for Cloud Apps cloud app catalog includes 1,000+ generative AI apps in the generative AI subcategory (part of the overall 31,000+ app catalog). This catalog enables FSI organizations to:

  • Inventory Shadow AI usage: Discover which generative AI applications employees are using without IT authorization alongside sanctioned Copilot deployments
  • Assess risk scores: Each app in the catalog carries a risk score based on security, compliance, and regulatory factors — enabling risk-based governance decisions
  • Apply governance policies: Block or restrict access to high-risk generative AI apps while permitting sanctioned Copilot usage
  • Monitor AI app trends: Track adoption patterns of generative AI tools across the organization to identify emerging Shadow AI risks

Discovery workflow: Microsoft Defender portal > Cloud Apps > Cloud app catalog > Filter: Category = "Generative AI"

FSI relevance: GLBA 501(b) requires safeguards to protect customer information across all technology used to process that data. Employees using unsanctioned generative AI tools (Shadow AI) may inadvertently expose customer information to apps without contractual data protection commitments. The MDCA catalog enables proactive Shadow AI governance.
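The catalog-driven governance decision can be sketched in code. A minimal example, assuming discovered apps are exported as name/risk-score pairs (MDCA scores cloud apps on a 0-10 scale); the thresholds and app names below are illustrative, not an official policy mapping:

```python
# Sketch: triage discovered generative AI apps by MDCA risk score (0-10 scale).
# Thresholds and app names are illustrative assumptions, not official guidance.

def triage_apps(discovered_apps, block_below=4, review_below=7):
    """Partition discovered apps into governance buckets by risk score."""
    buckets = {"block": [], "review": [], "allow": []}
    for app in discovered_apps:
        score = app["risk_score"]
        if score < block_below:
            buckets["block"].append(app["name"])    # high-risk Shadow AI: block
        elif score < review_below:
            buckets["review"].append(app["name"])   # route to security review
        else:
            buckets["allow"].append(app["name"])    # acceptable risk: permit
    return buckets

apps = [
    {"name": "Microsoft 365 Copilot", "risk_score": 9},
    {"name": "UnknownChatTool", "risk_score": 2},   # hypothetical Shadow AI app
    {"name": "SummarizerX", "risk_score": 5},       # hypothetical Shadow AI app
]
print(triage_apps(apps))
```

The same bucket boundaries can then back the MDCA governance policies (block vs. monitor) described above.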

Agent Threat Detection in Microsoft Defender XDR

Microsoft Defender XDR added agent protection capabilities (September 2025), extending XDR threat detection to Copilot agents and other AI agents deployed in the organization:

  • Compromised agent detection: Identify agents exhibiting anomalous behavior that may indicate a compromised or manipulated agent (e.g., prompt injection attacks, data exfiltration patterns)
  • Malicious agent behavior detection: Detect agents performing actions inconsistent with their declared function or scope
  • Unified XDR incident timeline: Agent activity is integrated into the unified Defender XDR incident timeline, enabling security analysts to correlate agent actions with user activity and other security signals
  • Cross-activity correlation: Correlate agent activity with user sign-in events, file access, and network activity to reconstruct the full sequence of an agent-involved security incident

Navigation path: Microsoft Defender portal (security.microsoft.com) > Incidents & alerts > Incidents — filter for agent-related incidents

FSI application: Agentic Copilot workflows (SharePoint agents, Copilot Studio agents) represent new vectors for insider threat and external attack. FINRA's 2026 Annual Regulatory Oversight Report specifically identifies agentic AI monitoring as a supervisory concern under FINRA Rules 3110 and 3120. Agent threat detection in XDR provides the monitoring infrastructure to satisfy these supervisory obligations.
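The cross-activity correlation step above can be illustrated with a small sketch: given agent events and sign-in events exported from the portal, match them by user within a time window to rebuild the incident timeline. The field names (`user`, `time`, `location`, `action`) are assumptions about an export shape, not the Defender XDR schema:

```python
# Sketch: correlate agent activity with user sign-ins inside a time window.
# Event field names are illustrative assumptions about an exported event shape.
from datetime import timedelta

def correlate(agent_events, signin_events, window_minutes=30):
    """For each agent event, list sign-ins by the same user within the window."""
    window = timedelta(minutes=window_minutes)
    timeline = []
    for ae in agent_events:
        matches = [se for se in signin_events
                   if se["user"] == ae["user"]
                   and abs(se["time"] - ae["time"]) <= window]
        timeline.append({"agent_event": ae["action"],
                         "signins": [m["location"] for m in matches]})
    return timeline
```

A sign-in from an unexpected location shortly before anomalous agent activity is exactly the pairing an analyst would pull from the unified incident timeline.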

Conditional Access App Control Integration

MDCA session controls are activated through Conditional Access App Control:

User Access Request
        │
        ▼
Entra Conditional Access
  ┌─────────────────┐
  │ CA Policy:      │
  │ Use Conditional │
  │ Access App      │
  │ Control         │
  └────────┬────────┘
           │
           ▼
  ┌─────────────────────┐
  │  Defender for       │
  │  Cloud Apps         │
  │  Session Proxy      │
  │                     │
  │  ├─ Monitor mode    │
  │  ├─ Block downloads │
  │  ├─ Protect uploads │
  │  └─ Custom session  │
  │     policies        │
  └──────────┬──────────┘
             │
             ▼
     Copilot Session
  (monitored/controlled)

Anomaly Detection Policies for Copilot

| Detection | Description | Risk Level | FSI Concern |
| --- | --- | --- | --- |
| Unusual volume of AI interactions | User submits significantly more Copilot queries than baseline | Medium | Possible bulk data extraction |
| Sensitive data access spike | Copilot sessions accessing more sensitive content than normal | High | Potential data exfiltration via AI |
| Off-hours Copilot usage | Copilot used outside normal business hours | Low-Medium | Potential unauthorized access |
| Multiple location sign-ins | Copilot accessed from multiple geographic locations | High | Potential credential compromise |
| Impossible travel | Copilot sessions from geographically impossible locations | High | Credential theft/sharing |
| Suspicious query patterns | Copilot queries targeting specific sensitive topics | Medium | Social engineering or reconnaissance |
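The impossible-travel detection works by comparing the travel speed implied by two sessions against a plausibility ceiling. A minimal sketch of the heuristic, assuming session events carry coordinates and timestamps; the 900 km/h ceiling is an illustrative airliner-speed bound, not MDCA's internal threshold:

```python
# Sketch: impossible-travel heuristic. Event fields and the 900 km/h ceiling
# are illustrative assumptions, not MDCA's actual detection logic.
import math

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in kilometres."""
    r = 6371.0  # mean Earth radius, km
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlmb / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def impossible_travel(e1, e2, max_kmh=900):
    """Flag two sessions if the implied travel speed exceeds max_kmh."""
    hours = abs((e2["time"] - e1["time"]).total_seconds()) / 3600
    if hours == 0:
        return True  # simultaneous sessions from different places
    km = haversine_km(e1["lat"], e1["lon"], e2["lat"], e2["lon"])
    return km / hours > max_kmh
```

Two Copilot sessions an hour apart in New York and London exceed any plausible speed and would flag; the same pair ten hours apart would not.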

Session Policy Types for Copilot

| Policy Type | Description | Action | Use Case |
| --- | --- | --- | --- |
| Monitor only | Log all Copilot session activities | Audit | Initial visibility during rollout |
| Block on condition | Block Copilot when conditions are met | Block + alert | Block sensitive access from risky sessions |
| Alert on sensitivity | Alert when sensitive content accessed via Copilot | Alert | Real-time notification for compliance team |
| Restrict session | Limit session capabilities | Restrict | Reduce Copilot capabilities for elevated-risk sessions |

Copilot Surface Coverage

| M365 Application | MDCA Session Proxy | Anomaly Detection | Activity Logging | Notes |
| --- | --- | --- | --- | --- |
| Microsoft 365 Copilot Chat | Yes | Yes | Yes | Primary monitoring surface |
| Word | Yes | Yes | Yes | Via Office web app proxy |
| Excel | Yes | Yes | Yes | Via Office web app proxy |
| PowerPoint | Yes | Yes | Yes | Via Office web app proxy |
| Outlook | Yes | Yes | Yes | Via Outlook web proxy |
| Teams | Limited | Yes | Yes | Teams desktop has limited proxy support |
| OneNote | Yes | Yes | Yes | Via OneNote web proxy |
| Loop | Yes | Yes | Yes | Via web app proxy |
| Copilot Pages | Yes | Yes | Yes | Web-based — full proxy support |
| SharePoint (Agents) | Yes | Yes | Yes | Via SharePoint web proxy |

Governance Levels

| Level | Requirement | Rationale |
| --- | --- | --- |
| Baseline | Enable MDCA for M365 apps; create monitor-only session policy for Copilot interactions; enable built-in anomaly detection policies; review MDCA alerts weekly; enable agent monitoring alerts in Defender XDR for Copilot agent deployments | Provides visibility into Copilot usage patterns without impacting user experience — essential first step for understanding AI interaction behavior. Agent monitoring alerts provide baseline coverage for agent-involved incidents. |
| Recommended | Configure Conditional Access App Control for Copilot sessions; create custom session policies for sensitive data access alerts; configure activity policies for high-volume Copilot usage; integrate MDCA alerts with SIEM; alert review within 4 hours for high severity; monthly Copilot usage analysis using MDCA data; use the MDCA generative AI app catalog to identify and govern Shadow AI usage (Defender portal > Cloud app catalog > Generative AI); configure custom detection rules for agent anomalies in Defender XDR | Active monitoring with real-time alerting — suitable for production FSI Copilot deployments where proactive threat detection is needed. Generative AI catalog and agent anomaly detection extend coverage to Shadow AI and agentic threats. |
| Regulated | All Recommended requirements plus: custom anomaly detection tuned for FSI patterns (MNPI probing, bulk extraction); session-level blocking for elevated risk; MDCA investigation workflow integrated with incident response procedures; quarterly MDCA effectiveness review by security team; MDCA configuration included in examination packages; agent threat detection integrated into SOC playbooks with mandatory investigation SLAs (per FINRA Rules 3110/3120 supervisory requirements); Shadow AI governance policy documented with prohibited generative AI app list reviewed quarterly | Full session-level governance with automated response — designed for firms with the highest security monitoring requirements. Agent threat detection and Shadow AI governance satisfy FINRA supervisory obligations for agentic AI. |

Setup & Configuration

Step 1: Enable Defender for Cloud Apps

Portal: Microsoft Defender Portal (security.microsoft.com) > Settings > Cloud Apps

  1. Verify MDCA license is active (included in M365 E5 or available as add-on)
  2. Complete initial MDCA setup if not already configured
  3. Connect M365 apps to MDCA

Step 2: Configure Conditional Access App Control

Portal: Microsoft Entra Admin Center > Protection > Conditional Access > Policies

  1. Create or modify Conditional Access policy for Copilot
  2. Under Session controls, select "Use Conditional Access App Control"
  3. Choose "Monitor only" for initial deployment or "Block downloads" for stricter control

Step 3: Create Session Policies for Copilot

Portal: Microsoft Defender Portal > Cloud Apps > Policies > Policy management > Create policy > Session policy

Policy 1: Monitor Copilot Activity

| Setting | Value |
| --- | --- |
| Name | FSI-Copilot-Session-Monitor |
| Session control type | Monitor only |
| Activity source | App equals "Microsoft 365 Copilot" |
| Action | Log activity |
| Alert | Weekly summary to security team |

Policy 2: Alert on Sensitive Access

| Setting | Value |
| --- | --- |
| Name | FSI-Copilot-Sensitive-Access-Alert |
| Session control type | Monitor only |
| Activity source | App equals "Microsoft 365 Copilot" |
| Content inspection | Sensitivity label equals "Confidential" or higher |
| Action | Alert compliance team immediately |
| Severity | High |

Step 4: Configure Anomaly Detection

Portal: Microsoft Defender Portal > Cloud Apps > Policies > Policy management

  1. Enable built-in anomaly detection policies:
     • Impossible travel
     • Activity from infrequent country
     • Suspicious inbox forwarding rules
  2. Create custom anomaly detection for Copilot-specific patterns:
     • Alert when daily Copilot interaction count exceeds 3x user baseline
     • Alert when Copilot accesses content from 5+ different departments in one session
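The 3x-baseline rule amounts to a simple threshold check, sketched below. The `min_baseline` floor is an illustrative addition so that low-activity users don't alert on noise; MDCA's built-in baselining is more sophisticated than this:

```python
# Sketch: 3x-baseline anomaly check. min_baseline is an illustrative floor,
# not part of the MDCA policy model.
from statistics import mean

def exceeds_baseline(daily_counts, today_count, multiplier=3.0, min_baseline=5.0):
    """True if today's Copilot interaction count exceeds multiplier x baseline.

    daily_counts: historical per-day interaction counts for this user.
    min_baseline: floors the baseline so near-zero users don't alert on noise.
    """
    baseline = max(mean(daily_counts), min_baseline)
    return today_count > multiplier * baseline
```

For a user averaging 10 interactions per day, 40 in a day would flag while 25 would not; a user averaging under 5 per day is measured against the floor instead.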

Step 5: Create Activity Policies

Policy: High-Volume Copilot Usage

| Setting | Value |
| --- | --- |
| Name | FSI-Copilot-High-Volume |
| Activity type | Copilot interaction |
| Repeated activity | More than 100 activities in 1 hour |
| Action | Alert security team + manager |
| Severity | Medium |
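The repeated-activity condition (more than 100 activities in 1 hour) is a sliding-window count. A minimal sketch of that logic, useful for testing candidate thresholds offline against exported activity data before committing them to the policy:

```python
# Sketch: sliding-window high-volume check, mirroring the "more than 100
# activities in 1 hour" policy condition for offline threshold testing.
from collections import deque
from datetime import timedelta

class HighVolumeDetector:
    """Flags when more than `limit` events fall within `window`."""

    def __init__(self, limit=100, window=timedelta(hours=1)):
        self.limit = limit
        self.window = window
        self.events = deque()  # timestamps inside the current window

    def record(self, ts):
        """Record one interaction at timestamp ts; return True if over limit."""
        self.events.append(ts)
        # Drop timestamps that have aged out of the window.
        while self.events and ts - self.events[0] > self.window:
            self.events.popleft()
        return len(self.events) > self.limit
```

Replaying a departing employee's exported activity through this check is a quick way to validate that the 100-per-hour threshold suits the role before enforcement.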

Step 6: Configure SIEM Integration

# MDCA can forward alerts to SIEM via API or SIEM agent
# Configure in Defender Portal > Settings > Cloud Apps > Security extensions > SIEM agents

# For Microsoft Sentinel integration:
# Use the Microsoft Defender for Cloud Apps data connector in Sentinel
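For SIEM agents that expect CEF input, forwarding involves mapping MDCA severities onto CEF's 0-10 severity scale. A hedged sketch of that mapping — the field names and severity values here are illustrative, not the exact format the MDCA SIEM agent emits:

```python
# Sketch: render a simplified MDCA-style alert as a CEF line for SIEM ingestion.
# Field mapping and severity values are illustrative assumptions.
def to_cef(alert):
    """Format an alert dict as CEF:0|vendor|product|version|id|name|severity|ext."""
    severity = {"low": 3, "medium": 6, "high": 9}[alert["severity"]]
    return (
        "CEF:0|Microsoft|Defender for Cloud Apps|1.0|{id}|{name}|{sev}|"
        "suser={user} msg={msg}"
    ).format(id=alert["id"], name=alert["name"], sev=severity,
             user=alert["user"], msg=alert["description"])
```

With Microsoft Sentinel, this translation layer is unnecessary — the data connector ingests MDCA alerts natively.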

Financial Sector Considerations

  • Bulk Data Extraction Detection: Financial firms face the risk of employees using Copilot to rapidly extract and aggregate sensitive data that would be difficult to assemble manually. MDCA anomaly detection should be tuned to detect query patterns that suggest systematic data extraction (e.g., querying all client accounts in a segment sequentially).
  • Departing Employee Monitoring: Insider risk is heightened when employees announce departures. MDCA session policies can apply enhanced monitoring to users flagged by HR as departing, detecting any unusual Copilot usage during their notice period.
  • Trading Desk Surveillance: For proprietary trading desks, MDCA can detect Copilot queries that probe for information about specific securities, which may indicate improper information flow. Integrate MDCA data with the firm's trade surveillance program.
  • Shadow AI Governance via Generative AI Catalog: The MDCA catalog now contains 1,000+ generative AI apps. FSI firms face regulatory exposure when employees use unsanctioned AI tools to process customer or financial data. Use the catalog to identify generative AI tools in use, assess their risk scores, and enforce governance policies that block high-risk Shadow AI while permitting sanctioned Copilot use. GLBA 501(b) safeguard requirements extend to all tools that process customer information, including Shadow AI tools.
  • Agent Threat Detection in SOC: Copilot Studio agents and other organizational agents are now subject to threat detection in Defender XDR. FSI security teams should include agent threat alerts in SOC triage procedures. FINRA's 2026 Annual Regulatory Oversight Report specifically identifies agentic AI as a supervisory concern — SOC integration of agent threats supports FINRA Rules 3110/3120 supervisory obligations.
  • Regulatory Examination Response: During examinations, regulators may request evidence of AI monitoring capabilities. MDCA dashboards, alert history, generative AI app catalog governance policies, and agent threat detection configurations demonstrate comprehensive proactive monitoring of Copilot and AI usage.
  • Teams Desktop Limitations: MDCA's Conditional Access App Control uses a reverse proxy for session-level monitoring, which works best with web applications. Teams desktop client has limited proxy support. Plan for this limitation when designing monitoring coverage.
  • False Positive Management: Financial services users may legitimately use Copilot heavily (e.g., research analysts, portfolio managers). Tune anomaly detection thresholds using a monitoring period before enforcement to establish accurate baselines per role.
  • Integration with SOC: MDCA alerts for Copilot and agent threat detection alerts should be integrated into the firm's Security Operations Center (SOC) workflow with defined response procedures for each alert type, including agent-specific investigation runbooks.

Verification Criteria

  1. MDCA Connectivity: Verify that Microsoft 365 apps are connected to Defender for Cloud Apps and showing activity data
  2. Conditional Access App Control: Confirm at least one Conditional Access policy routes Copilot sessions through MDCA App Control
  3. Session Policy Active: Verify that session monitoring policies for Copilot are enabled and generating logs
  4. Anomaly Detection: Confirm built-in anomaly detection policies are enabled and tuned for the organization's user population
  5. Alert Generation: Trigger a test alert (e.g., access Copilot from an unusual location) and confirm the alert appears in MDCA
  6. Alert Routing: Verify that high-severity MDCA alerts are routed to the appropriate team (security, compliance) within the defined SLA
  7. SIEM Integration: Confirm MDCA alerts and activity logs are flowing to the firm's SIEM or Microsoft Sentinel
  8. Activity Policy Testing: Submit a high volume of Copilot interactions in a test environment and confirm the activity policy triggers
  9. Generative AI App Catalog: Navigate to Defender portal > Cloud Apps > Cloud app catalog > filter by Generative AI category and confirm the catalog is accessible; confirm governance policies exist for high-risk generative AI apps
  10. Agent Threat Detection: Confirm agent monitoring alerts are enabled in Defender XDR; verify at least one agent-related detection rule is configured for Copilot agent deployments
  11. Investigation Workflow: Verify that documented procedures exist for investigating MDCA alerts and agent threat detection alerts related to Copilot usage
  12. Periodic Review: Confirm that MDCA policies are reviewed monthly (Recommended) or quarterly (Regulated) with documented results

Additional Resources