
Control 1.24: Defender AI Security Posture Management (AI-SPM)

Control ID: 1.24
Pillar: Security
Regulatory Reference: OCC 2011-12 (model risk monitoring), Fed SR 11-7 (effective challenge), FINRA Regulatory Notice 25-07 (AI supervisory controls), GLBA 501(b) Safeguards Rule, NYDFS 23 NYCRR Part 500, NIST AI RMF 1.0 (MAP/MEASURE/MANAGE/GOVERN)
Last UI Verified: April 2026
Governance Levels: Baseline / Recommended / Regulated


Agent 365 Architecture Update

The Agent 365 security posture dashboard integrates with Microsoft Defender, providing centralized visibility into agent security risks across platforms. See Unified Agent Governance for the security posture management architecture.

Objective

Implement Microsoft Defender for Cloud AI Security Posture Management (AI-SPM) to gain comprehensive visibility into multi-cloud AI security posture, identify attack paths targeting AI workloads, and maintain an AI Bill of Materials (AI BOM) for agent discovery and risk assessment across Azure, AWS, and GCP environments.


Why This Matters for FSI

  • OCC Bulletin 2011-12 (Model Risk Management): Ongoing model monitoring expectations include identifying changes in the AI system's attack surface, third-party model dependencies, and configuration drift. AI-SPM's continuous discovery and AI Bill of Materials (AI BOM) help meet the bulletin's expectation that firms maintain awareness of model components and risks throughout the model lifecycle.
  • Federal Reserve SR 11-7 (Guidance on Model Risk Management): Effective challenge of AI models depends on independent visibility into model infrastructure, data flows, and security posture. AI-SPM attack path analysis supports independent risk review by surfacing exploitable paths the model owner may not self-report.
  • FINRA Regulatory Notice 25-07 (AI Supervisory Controls): Calls for risk-based supervision of AI systems, comprehensive AI inventories, and third-party AI vendor oversight. AI-SPM agent discovery and AI BOM directly support the inventory requirement; attack path analysis aids supervisory reviewers in identifying risks introduced by configuration changes or new integrations.
  • GLBA 501(b) Safeguards Rule: Requires identification of foreseeable internal and external risks to customer information. AI-SPM helps identify AI-specific risks (prompt injection chains, data exfiltration paths) that may not be visible to traditional CSPM tooling.
  • NYDFS 23 NYCRR Part 500: Risk assessments must consider AI-enabled threats and the firm's own AI-enabled systems. AI-SPM provides the technical evidence to support these assessments.
  • NIST AI RMF 1.0: Maps to all four functions — MAP (agent discovery and AI BOM inventory), MEASURE (attack path analysis and risk factor scoring), MANAGE (prioritized security recommendations and remediation tracking), GOVERN (posture dashboards and trend reporting for governance committees).
  • SEC Reg S-P (amended 2024): Incident response programs must address AI-related risks to customer records; AI-SPM alerts feed incident detection workflows.

No companion solution by design

Not all controls have a companion solution in FSI-AgentGov-Solutions; solution mapping is selective by design. This control is operated via native Microsoft admin surfaces and verified by the framework's assessment-engine collectors. See the Solutions Index for the catalog and coverage scope.

Control Description

Defender AI-SPM provides multi-cloud AI security posture management capabilities that complement Microsoft Purview DSPM for AI. While DSPM for AI (Control 1.6) focuses on data security and compliance within Microsoft 365, AI-SPM addresses the broader attack surface and vulnerability management for AI workloads across cloud platforms.

Relationship to DSPM for AI: AI-SPM and DSPM for AI serve complementary purposes. DSPM for AI monitors how AI applications interact with organizational data (data-centric). AI-SPM identifies vulnerabilities, attack paths, and security misconfigurations in AI infrastructure (security-centric). Organizations using both M365 Copilot/Copilot Studio and Azure AI services should implement both controls.

Key capabilities:

  • Agent discovery: Automatically discovers AI agents across Microsoft Foundry, Copilot Studio, and multi-cloud environments
  • AI Bill of Materials (AI BOM): Inventories AI components, models, SDKs, and dependencies
  • Attack path analysis: Identifies exploitable paths to AI workloads and sensitive data
  • Risk factors: Assesses indirect prompt injection, data exfiltration, and other AI-specific risks
  • Security recommendations: Provides prioritized remediation guidance for AI security gaps
  • Multi-cloud support: Extends visibility to AWS Bedrock, GCP Vertex AI, and other cloud AI services
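To make the AI BOM concrete, here is a minimal sketch of inventory entries and the quarterly reconciliation check against an architecture-of-record. All field names are hypothetical illustrations, not Defender's actual export schema:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class AiBomEntry:
    """One AI BOM line item (hypothetical schema, not Defender's export format)."""
    resource_id: str     # e.g. an Azure resource identifier
    component_type: str  # "model", "sdk", "agent", "dependency"
    name: str
    source: str          # "Azure OpenAI", "Copilot Studio", "AWS Bedrock", ...

def reconcile(bom: list[AiBomEntry], architecture_of_record: set[str]) -> dict:
    """Compare discovered BOM resource IDs against the architecture-of-record.

    Returns shadow resources (discovered but undocumented) and stale records
    (documented but no longer discovered), the two gaps a quarterly
    AI BOM review is meant to surface.
    """
    discovered = {entry.resource_id for entry in bom}
    return {
        "shadow": sorted(discovered - architecture_of_record),
        "stale": sorted(architecture_of_record - discovered),
    }
```

A shadow result feeds the exception process; a stale result means the model inventory attestation is out of date.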

Recent Enhancements (2025-2026)

  • GCP Vertex AI Support (GA November 2025): Full posture management for Google Cloud Vertex AI workloads
  • Agent-Specific Recommendations (January 2026): Targeted security recommendations for Copilot Studio and Agent 365 SDK agents
  • Attack Path Expansion (January 2026): New AI-specific attack path scenarios, including indirect prompt injection chains
  • Agent 365 SDK Discovery (Preview): Blueprint-registered agent inventory and risk assessment

AI-SPM vs. DSPM for AI Comparison

  • Primary focus: AI-SPM covers attack surface and vulnerabilities; DSPM for AI covers data security and compliance
  • Scope: AI-SPM is multi-cloud (Azure, AWS, GCP); DSPM for AI covers Microsoft 365 AI applications
  • Key capabilities: AI-SPM provides attack path analysis and the AI BOM; DSPM for AI provides oversharing assessment and activity monitoring
  • Discovery: AI-SPM inventories agents and infrastructure; DSPM for AI monitors AI interactions
  • Risk assessment: AI-SPM flags security misconfigurations; DSPM for AI flags sensitive data exposure
  • FSI control: AI-SPM is Control 1.24; DSPM for AI is Control 1.6

AI Threat Protection Alerts (GA)

Microsoft Defender now generates specific threat alerts for AI workloads:

  • Jailbreak attempt: Detects prompt injection attempts to bypass agent guardrails
  • Prompt leak: Detects attempts to extract system prompts or instructions
  • Phishing via AI: Detects agents being used to generate phishing content
  • ASCII smuggling: Detects Unicode/ASCII encoding attacks in agent interactions
  • Reconnaissance: Detects systematic probing of agent capabilities and data access

These alerts integrate with Microsoft Sentinel and the Defender XDR incident queue for unified security operations.

  • Copilot Studio and Foundry agent alerts (Preview): Defender can now generate threat alerts specific to Copilot Studio and Microsoft Foundry agents, including alerts for agents discovered in the tenant that haven't been registered in the governance framework
  • Defender for Cloud Apps discovery (Preview): Copilot Studio agents can be discovered and monitored through Microsoft Defender for Cloud Apps, providing shadow agent discovery capabilities for agents that may have been created without governance oversight
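The alert types above each map to a SOC runbook (a requirement restated under Verification Criteria). A minimal triage-routing sketch, where runbook names and severities are illustrative conventions, not Defender defaults:

```python
# Hypothetical triage routing for the AI threat alert types listed above;
# runbook identifiers and severities are illustrative, not Defender defaults.
AI_ALERT_RUNBOOKS = {
    "jailbreak_attempt": ("RB-AI-01 Prompt Injection Triage", "high"),
    "prompt_leak": ("RB-AI-02 System Prompt Exposure", "high"),
    "phishing_via_ai": ("RB-AI-03 AI-Generated Phishing", "critical"),
    "ascii_smuggling": ("RB-AI-04 Encoding Attack Review", "medium"),
    "reconnaissance": ("RB-AI-05 Capability Probing", "medium"),
}

def route_alert(alert_type: str) -> dict:
    """Resolve an incoming AI alert to its SOC runbook and severity.

    Unknown alert types fall through to manual triage so that newly
    introduced Defender alert types are never silently dropped.
    """
    runbook, severity = AI_ALERT_RUNBOOKS.get(
        alert_type, ("RB-AI-00 Manual Triage", "unclassified"))
    return {"alert_type": alert_type, "runbook": runbook, "severity": severity}
```

The fall-through default matters in practice: the alert taxonomy has changed twice in this release cycle alone.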

Key Configuration Points

  • Licensing prerequisite: The Defender CSPM plan (Standard tier) must be enabled in Defender for Cloud on each Azure subscription that hosts AI workloads; AI-SPM is a Defender CSPM extension and is not available on the Foundational CSPM (free) tier
  • AI workload prerequisite: At least one supported AI resource (Azure OpenAI, Azure AI Foundry, Azure AI Services / Cognitive Services, Azure Machine Learning) must exist in the subscription for discovery findings to populate
  • Enable the AI security posture management extension under Defender CSPM settings on each in-scope Azure subscription
  • Configure multi-cloud connectors for AWS Bedrock and GCP Vertex AI (Vertex AI support reached GA November 2025) where applicable
  • Enable Microsoft Threat Intelligence extension on Defender CSPM (required for AI-specific attack path scenarios)
  • Enable Defender for AI Services plan for runtime threat protection complementary to AI-SPM posture findings
  • Configure agent-specific recommendations filter for Copilot Studio and Microsoft Foundry agents (released January 2026)
  • Integrate findings with Microsoft Sentinel via the Defender for Cloud data connector for FSI SIEM correlation
  • Document risk factor thresholds (critical / high / medium) and remediation SLAs aligned to zone requirements in the firm's Written Supervisory Procedures (WSPs)
  • Establish a quarterly review cadence for the AI Bill of Materials (AI BOM) as part of model inventory attestation
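The configuration points above can be expressed as a pre-flight check per subscription. A sketch in which the flag names are hypothetical stand-ins for what a real collector would read from the Defender for Cloud API or Azure Resource Graph:

```python
# Required settings per the configuration checklist above; keys are
# hypothetical stand-ins for values read from Defender for Cloud.
REQUIRED_FLAGS = [
    "defender_cspm_standard",        # Defender CSPM plan on Standard tier
    "ai_spm_extension",              # AI security posture management extension
    "threat_intelligence_extension", # needed for AI-specific attack paths
    "defender_for_ai_services",      # runtime threat protection plan
    "sentinel_connector",            # Defender for Cloud data connector
]

def missing_controls(subscription: dict) -> list[str]:
    """Return the checklist items not yet enabled on a subscription."""
    return [flag for flag in REQUIRED_FLAGS if not subscription.get(flag)]

# A subscription with only the first two items enabled still fails three checks.
sub = {"defender_cspm_standard": True, "ai_spm_extension": True}
```

Running such a check in CI against all in-scope subscriptions turns the bullet list into auditable evidence rather than a one-time setup task.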

Zone-Specific Requirements

  • Zone 1 (Personal): Defender CSPM + AI-SPM extension enabled where Azure AI workloads exist; monthly AI-SPM dashboard review by the AI Governance Lead; agent discovery enabled; critical-severity attack paths remediated within 30 days. Rationale: baseline visibility supports OCC 2011-12 model inventory expectations even for personal-productivity agents.
  • Zone 2 (Team): Zone 1 requirements plus the Defender for AI Services runtime plan enabled; weekly posture review documented in the supervisory log; high-severity attack paths remediated within 14 days and critical within 7 days; multi-cloud connectors configured if any non-Azure AI workloads exist. Rationale: shared agents touch broader data, and FINRA 25-07 expects evidence of a risk-based supervisory cadence.
  • Zone 3 (Enterprise): Zone 2 requirements plus Sentinel integration for AI alerts (required); daily posture review by the SOC; critical attack paths remediated within 72 hours; documented exception process tied to the OCC 2011-12 model risk register; AI BOM reviewed quarterly by the Model Risk Committee. Rationale: customer-facing and revenue-impacting agents require continuous monitoring and Model Risk Committee oversight.
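The remediation SLAs in the zone requirements can be turned into concrete due dates for tracking. A sketch under the assumption that the clock starts when the attack path is first surfaced:

```python
from datetime import datetime, timedelta

# Remediation SLAs from the zone requirements above (zone, severity -> window).
# Zones inherit the lower-zone entries where the text says "Zone N plus".
REMEDIATION_SLA = {
    ("zone1", "critical"): timedelta(days=30),
    ("zone2", "high"): timedelta(days=14),
    ("zone2", "critical"): timedelta(days=7),
    ("zone3", "critical"): timedelta(hours=72),
}

def remediation_due(zone: str, severity: str, detected: datetime) -> datetime:
    """Due date for an attack-path finding.

    Raises KeyError when no SLA is documented for the combination,
    which is itself a signal the WSPs need updating.
    """
    return detected + REMEDIATION_SLA[(zone, severity)]

detected = datetime(2026, 4, 1, 9, 0)
due = remediation_due("zone3", "critical", detected)  # 72 hours later
```

Note that Zone 3's 72-hour window is shorter than Zone 2's 7-day window for the same severity, so zone classification directly drives operational workload.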

Licensing note: Defender CSPM is a paid Defender for Cloud plan billed per-resource. AI-SPM extension is included with Defender CSPM at no additional cost as of April 2026. Defender for AI Services (runtime) is a separately metered plan. Confirm pricing in Azure Portal prior to enabling at scale.


Roles & Responsibilities

  • Entra Security Admin: Enable Defender CSPM and the AI-SPM extension; configure multi-cloud connectors; manage AI-related security recommendations and Defender XDR alert routing
  • Cloud Security Architect: Review attack paths; validate AI BOM accuracy against the architecture-of-record; prioritize remediation; document compensating controls
  • AI Governance Lead: Align AI-SPM findings with the governance framework; chair the quarterly AI BOM review; maintain the mapping between AI-SPM findings and the firm's model risk register
  • SOC Analyst: Triage Defender XDR alerts for AI workloads (jailbreak, prompt leak, ASCII smuggling, reconnaissance); investigate per the Sentinel runbook; escalate per the incident response plan
  • Model Risk Manager: Review AI-SPM findings as input to OCC 2011-12 / SR 11-7 model risk assessments; track remediation evidence for regulatory examinations

Related Controls

  • Control 1.6 (DSPM for AI): Complementary data-centric AI monitoring
  • Control 1.8 (Runtime Protection): Runtime threat detection for agents
  • Control 3.7 (PPAC Security Posture): Power Platform security posture assessment
  • Control 3.1 (Agent Inventory): Agent inventory management
  • Control 3.9 (Sentinel Integration): SIEM integration for AI security events

Implementation Playbooks

Step-by-Step Implementation

This control has detailed playbooks covering implementation, automation, testing, and troubleshooting.


Verification Criteria

Confirm control effectiveness by verifying:

  1. Defender CSPM (Standard tier) is enabled on every Azure subscription that hosts AI workloads (Azure OpenAI, AI Foundry, AI Services, Azure ML)
  2. The AI security posture management extension is toggled on under Defender CSPM settings for each in-scope subscription
  3. AI workload discovery is active and the AI Bill of Materials (AI BOM) lists all known AI resources; reconcile against architecture-of-record quarterly
  4. Attack path analysis surfaces at least one expected AI-specific scenario (e.g., publicly exposed AI endpoint with sensitive data store) in test conditions, confirming the analyzer is producing AI-aware results
  5. AI-specific risk factors (indirect prompt injection susceptibility, sensitive-data exposure) are scored on inventoried agents
  6. Security recommendations for AI workloads are reviewed and remediation tracked per zone SLA (monthly Z1 / weekly Z2 / daily Z3)
  7. Multi-cloud connectors (AWS, GCP) are configured and reporting where non-Azure AI workloads exist
  8. Defender XDR / Sentinel receive AI-specific alert types (jailbreak attempt, prompt leak, ASCII smuggling, reconnaissance) and the SOC has a documented runbook for each
  9. AI BOM and remediation evidence are exported quarterly and retained per the firm's record-retention schedule (typically 6 years for FINRA, 7 years for SOX)
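Criterion 9's retention windows can be computed at export time. A sketch assuming retention runs in whole years from the export date (leap-day edge cases and firm-specific schedules are omitted):

```python
from datetime import date

# Retention periods from verification criterion 9 (years by regime).
RETENTION_YEARS = {"finra": 6, "sox": 7}

def retain_until(export_date: date, regime: str) -> date:
    """Earliest date the exported AI BOM / remediation evidence may be purged.

    Assumes whole-year retention from the export date; a Feb 29 export
    would need explicit leap-day handling, which is omitted here.
    """
    return export_date.replace(year=export_date.year + RETENTION_YEARS[regime])

# A quarterly export on 2026-06-30 under FINRA must be kept through 2032-06-30.
```

When both regimes apply, the longer window governs, so quarterly export jobs should stamp evidence with the maximum applicable retention date.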

Additional Resources

FSI Scope Note

Power Platform Focus: While AI-SPM provides valuable multi-cloud visibility, this framework primarily targets Power Platform and Microsoft 365 AI governance. Organizations should implement AI-SPM when:

  • AI agents call Azure AI services (Azure OpenAI, Cognitive Services)
  • Custom agents are built with Microsoft Foundry
  • Multi-cloud AI workloads exist alongside Copilot Studio agents

For organizations exclusively using Copilot Studio without Azure AI integration, Control 1.6 (DSPM for AI) and Control 3.7 (PPAC Security Posture) may provide sufficient coverage.

Complement with Defender for AI Services (GA)

Defender for AI Services provides runtime threat protection as a complement to AI-SPM's posture management. While AI-SPM identifies misconfigurations and attack paths (proactive), Defender for AI Services detects and blocks threats during agent execution (reactive). Organizations should implement both for defense-in-depth coverage. See Microsoft Learn: Defender for AI Services for details.


Updated: April 2026 | Version: v1.4.0 | UI Verification Status: Current