
Control 1.10: Vendor Risk Management for Microsoft AI Services

Control ID: 1.10
Pillar: Readiness & Assessment
Regulatory Reference: OCC Bulletin 2013-29, Interagency Guidance on Third-Party Relationships (2023), FFIEC IT Handbook (Outsourcing Technology Services), GLBA 501(b)
Last Verified: 2026-02-17
Governance Levels: Baseline / Recommended / Regulated


Objective

Conduct vendor risk management due diligence for Microsoft as the provider of AI services through Microsoft 365 Copilot, including assessment of data processing practices, subprocessor relationships, responsible AI commitments, service reliability, and contractual protections. This control also addresses the assessment of Microsoft's AI subprocessors -- including Anthropic for certain AI services -- through the institution's third-party risk management framework. Vendor risk management for AI services supports compliance with interagency guidance that treats AI vendor relationships with the same rigor as other critical third-party relationships.


Why This Matters for FSI

  • OCC Bulletin 2013-29 (Third-Party Relationships): Requires national banks and federal savings associations to practice effective risk management for all third-party relationships, with heightened expectations for critical activities. Microsoft 365 Copilot -- which processes institutional data through AI models -- qualifies as a critical third-party activity requiring comprehensive due diligence.
  • Interagency Guidance on Third-Party Relationships (2023): Updated guidance from OCC, Federal Reserve, and FDIC reinforces risk management expectations for third-party relationships, including technology providers. AI-specific considerations include model risk, data privacy, and transparency of AI decision-making.
  • FFIEC IT Handbook (Outsourcing Technology Services): Expects institutions to evaluate and manage risks associated with outsourced technology services, including AI services integrated into productivity platforms.
  • GLBA 501(b): Data protection obligations extend to data processed by third-party vendors. Microsoft's handling of institutional data for AI processing must be evaluated against GLBA safeguard requirements.
  • State Regulatory Requirements: Various state banking regulators have issued AI-specific guidance that includes vendor risk management expectations (e.g., New York DFS, California DFPI).

Control Description

Microsoft as AI Vendor

Microsoft's role in the Copilot ecosystem extends beyond that of a traditional SaaS provider. With Copilot, Microsoft is:

| Role | Description | Risk Management Implication |
| --- | --- | --- |
| Infrastructure provider | Azure hosts the compute infrastructure for Copilot processing | Standard cloud vendor risk (availability, security, residency) |
| AI model operator | Microsoft operates the large language models that power Copilot responses | Model risk (accuracy, bias, reliability, transparency) |
| Data processor | Microsoft processes institutional data through AI pipelines (Semantic Index, LLM grounding) | Data processing risk (confidentiality, retention, cross-contamination) |
| Subprocessor manager | Microsoft engages subprocessors (including AI model providers) for certain services | Fourth-party risk (subprocessor data handling, security, contractual flow-down) |

Due Diligence Assessment Areas

| Assessment Area | Key Questions | Evidence Sources |
| --- | --- | --- |
| Data Processing | How does Microsoft process institutional data for Copilot? Is data used to train foundation models? What retention applies to AI processing data? | Microsoft Product Terms, Data Processing Addendum (DPA), Copilot privacy documentation |
| AI Model Governance | What models power Copilot? How are models validated? What testing occurs before model updates? | Microsoft Responsible AI documentation, model cards, transparency reports |
| Subprocessor Management | Who are Microsoft's AI subprocessors? What data flows to subprocessors? What contractual controls exist? | Microsoft Subprocessor List, contractual terms, subprocessor audit reports |
| Security Controls | What security controls protect data during AI processing? How is tenant isolation maintained? | SOC 2 Type II reports, ISO 27001 certification, penetration test results |
| Incident Response | How does Microsoft handle AI-specific incidents (model failures, data leakage, prompt injection)? | Microsoft incident response documentation, SLA commitments |
| Business Continuity | What happens if Copilot AI services are unavailable? What are recovery time objectives? | Microsoft SLA documentation, business continuity certifications |
| Regulatory Compliance | Does Microsoft maintain compliance certifications relevant to FSI (FedRAMP, SOC, ISO)? | Microsoft compliance documentation, compliance certifications |

Subprocessor Assessment: AI Model Providers

Microsoft engages AI model providers as subprocessors for certain services. Financial institutions should assess these fourth-party relationships:

| Consideration | Assessment Focus |
| --- | --- |
| Subprocessor identification | Identify all AI subprocessors in Microsoft's Copilot supply chain (check Microsoft's published subprocessor list) |
| Data flow to subprocessors | Understand what institutional data, if any, flows to subprocessors during AI processing |
| Contractual flow-down | Verify that Microsoft's contractual commitments (data protection, confidentiality, retention) flow down to subprocessors |
| Subprocessor security | Review available security documentation and certifications for AI subprocessors |
| Geographic considerations | Identify where subprocessor processing occurs and whether it aligns with data residency requirements |
| Concentration risk | Assess whether dependency on specific AI subprocessors creates concentration risk |

Note: Anthropic is a Microsoft subprocessor for certain AI services. Financial institutions should include Anthropic in their third-party risk assessment scope when evaluating the full Copilot AI supply chain. Review Microsoft's published subprocessor list for current relationships.

Microsoft's Responsible AI Commitments

| Commitment Area | Microsoft Position | Verification |
| --- | --- | --- |
| No training on customer data | Microsoft states that customer data is not used to train foundation models | Review Microsoft Product Terms, DPA |
| Data boundary | EU Data Boundary and contractual data residency commitments | Review data residency documentation |
| Transparency | Microsoft publishes AI transparency notes and responsible AI principles | Review Responsible AI documentation |
| Content safety | Content filtering and safety systems in Copilot pipeline | Review safety documentation |
| Human oversight | Copilot is designed as an assistant, not an autonomous agent | Architecture documentation review |
| Privacy | Data minimization, purpose limitation for AI processing | Review privacy documentation, DPA |

Risk Assessment Framework for AI Vendors

| Risk Category | Description | Assessment Approach |
| --- | --- | --- |
| Data confidentiality risk | Risk that institutional data is exposed through AI processing | Review data handling practices, encryption, tenant isolation |
| Model risk | Risk that AI model produces inaccurate, biased, or harmful outputs | Review model validation, testing, and monitoring practices |
| Concentration risk | Risk from dependency on single AI vendor for critical capabilities | Assess vendor lock-in, alternative options, exit strategy |
| Regulatory risk | Risk that vendor practices do not meet evolving regulatory expectations | Review compliance certifications, regulatory engagement |
| Operational risk | Risk of AI service disruption affecting business operations | Review SLAs, uptime history, incident response capabilities |
| Reputational risk | Risk of AI-generated content causing reputational damage | Review content safety controls, incident examples |
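A risk framework like the one above ultimately produces a rating the institution can act on. The sketch below shows one way to roll per-category scores into an overall rating; the category keys mirror the framework, but the 1-5 scale, weights, and thresholds are hypothetical and should come from the institution's own risk methodology.

```python
# Illustrative sketch: aggregate per-category AI vendor risk scores
# (1 = low, 5 = high) into an overall rating. Thresholds are assumptions,
# not a prescribed methodology.
RISK_CATEGORIES = [
    "data_confidentiality", "model", "concentration",
    "regulatory", "operational", "reputational",
]

def overall_rating(scores: dict[str, int]) -> str:
    missing = [c for c in RISK_CATEGORIES if c not in scores]
    if missing:
        raise ValueError(f"unscored categories: {missing}")
    avg = sum(scores.values()) / len(scores)
    # A single category at the maximum forces a High rating regardless
    # of the average -- one severe exposure should not be averaged away.
    if max(scores.values()) == 5 or avg >= 4:
        return "High"
    if avg >= 2.5:
        return "Medium"
    return "Low"
```

A "High" result would typically trigger escalation to the risk owner identified in Step 3 below rather than routine acceptance.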

Copilot Surface Coverage

Vendor risk management applies equally to all Copilot surfaces, since the same vendor (Microsoft) and the same AI processing pipeline underlie every surface:

| Copilot Surface | Vendor Risk Relevance | Notes |
| --- | --- | --- |
| Microsoft 365 Copilot Chat | Critical | Broadest data processing scope across all M365 workloads |
| Word / Excel / PowerPoint | High | Content generation from institutional data |
| Outlook | High | Email content processing and generation |
| Teams | High | Meeting and chat content processing |
| SharePoint / OneDrive | High | Document content processing for grounding |
| Copilot Pages | High | AI-generated collaborative content |
| Loop | Medium | Collaborative content processing |
| Viva | Medium | Organizational data analytics |
| Extensibility | High | Third-party connectors may introduce additional vendor risk |

Governance Levels

| Level | Requirement | Rationale |
| --- | --- | --- |
| Baseline | Review Microsoft's published Copilot privacy and data handling documentation. Confirm Microsoft's position on customer data training. Document Microsoft as an AI vendor in the institution's third-party inventory. Review Microsoft's compliance certifications (SOC 2, ISO 27001). | Minimum due diligence to understand Microsoft's AI data handling practices and document the vendor relationship. |
| Recommended | All Baseline requirements plus: conduct formal vendor risk assessment using the institution's third-party risk framework. Review Microsoft's subprocessor list and identify AI subprocessors. Review Microsoft's Data Processing Addendum (DPA) and Product Terms for AI-specific provisions. Assess data residency and cross-border data flow implications. Document risk assessment findings and residual risk acceptance. | Structured vendor risk assessment that applies the institution's existing third-party risk framework to the AI vendor relationship. |
| Regulated | All Recommended requirements plus: engage legal counsel to review Microsoft contractual terms for AI service provisions. Conduct fourth-party risk assessment for AI subprocessors (including Anthropic where applicable). Evaluate Microsoft's AI incident response capabilities. Assess concentration risk and develop contingency plans. Include AI vendor risk in board risk committee reporting. Conduct annual vendor risk re-assessment. Maintain vendor risk documentation in regulatory examination file. | Comprehensive AI vendor risk management that meets interagency guidance expectations with legal review, fourth-party assessment, board reporting, and examination readiness. |

Setup & Configuration

Step 1: Inventory Microsoft as AI Vendor

Add Microsoft 365 Copilot to the institution's third-party vendor inventory with appropriate classification:

| Field | Value |
| --- | --- |
| Vendor name | Microsoft Corporation |
| Service | Microsoft 365 Copilot (AI services) |
| Criticality | Critical |
| Data classification | Processes institutional data including customer information |
| Contract reference | Microsoft Enterprise Agreement / Microsoft Customer Agreement |
| Risk assessment due | Before Copilot deployment |
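If the institution maintains its vendor inventory programmatically, the entry above can be captured as a structured record. This is a minimal sketch; the field names are illustrative and should be mapped onto the institution's actual GRC or vendor-inventory schema.

```python
from dataclasses import dataclass

# Minimal sketch of a third-party inventory record for the AI vendor entry
# described in Step 1. Field names are illustrative, not a standard schema.
@dataclass
class VendorRecord:
    vendor_name: str
    service: str
    criticality: str          # e.g. "Critical", "High", "Medium"
    data_classification: str
    contract_reference: str
    risk_assessment_due: str  # milestone, e.g. "Before Copilot deployment"

copilot_entry = VendorRecord(
    vendor_name="Microsoft Corporation",
    service="Microsoft 365 Copilot (AI services)",
    criticality="Critical",
    data_classification="Processes institutional data including customer information",
    contract_reference="Microsoft Enterprise Agreement / Microsoft Customer Agreement",
    risk_assessment_due="Before Copilot deployment",
)
```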

Step 2: Gather Due Diligence Materials

Collect the following from Microsoft:

  • Microsoft Products and Services Data Protection Addendum (DPA)
  • Microsoft Product Terms (specifically AI services sections)
  • SOC 2 Type II report (via Microsoft Service Trust Portal)
  • ISO 27001 certification
  • Microsoft Subprocessor List
  • Microsoft 365 Copilot privacy documentation
  • Microsoft Responsible AI Transparency Notes
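The collection step above is easy to track as a simple checklist. The sketch below flags which artifacts are still outstanding; the shortened artifact names are assumptions for illustration, not Microsoft's official document titles.

```python
# Hypothetical tracker for the due-diligence artifacts listed above:
# record which documents have been collected and flag the gaps.
DUE_DILIGENCE_ARTIFACTS = [
    "Data Protection Addendum (DPA)",
    "Product Terms (AI services sections)",
    "SOC 2 Type II report",
    "ISO 27001 certification",
    "Subprocessor List",
    "Copilot privacy documentation",
    "Responsible AI Transparency Notes",
]

def outstanding(collected: set[str]) -> list[str]:
    """Return checklist artifacts not yet collected, in checklist order."""
    return [a for a in DUE_DILIGENCE_ARTIFACTS if a not in collected]
```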

Step 3: Conduct Risk Assessment

Apply the institution's standard third-party risk assessment methodology, supplemented with AI-specific assessment areas:

  • Complete standard vendor risk questionnaire
  • Add AI-specific questions (model governance, data training, subprocessors)
  • Score risk across assessment categories
  • Document residual risk and mitigation recommendations
  • Obtain risk acceptance from appropriate authority (CRO, CISO, or designated risk owner)
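The "supplement the standard methodology" step above amounts to appending AI-specific questions to the existing questionnaire. A trivial sketch, with entirely illustrative question text standing in for the institution's real questionnaire:

```python
# Sketch: extend a standard vendor questionnaire with AI-specific questions,
# as Step 3 describes. All question text is illustrative.
STANDARD_QUESTIONS = [
    "Does the vendor hold a current SOC 2 Type II attestation?",
    "Is customer data encrypted in transit and at rest?",
]

AI_QUESTIONS = [
    "Is institutional data used to train foundation models?",
    "Which subprocessors participate in AI processing?",
    "How are model updates validated before release?",
]

def build_questionnaire() -> list[str]:
    """Standard questions first, AI supplement appended."""
    return STANDARD_QUESTIONS + AI_QUESTIONS
```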

Step 4: Review Subprocessors

Navigate to Microsoft's published subprocessor list and:

  1. Identify all subprocessors associated with Microsoft 365 and AI services
  2. Assess subprocessor geographic locations against data residency requirements
  3. Review available security certifications for key subprocessors
  4. Document subprocessor assessment results
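Because the subprocessor list changes over time, the review in Step 4 is worth repeating against a saved snapshot. The sketch below diffs the previously reviewed snapshot against the current published list to surface additions and removals that should trigger re-assessment; the names in the test are placeholders, not Microsoft's actual subprocessors.

```python
# Illustrative sketch: compare the previously reviewed subprocessor snapshot
# against the currently published list. Any addition or removal should
# trigger the Step 4 review again.
def diff_subprocessors(previous: set[str], current: set[str]) -> dict[str, set[str]]:
    return {
        "added": current - previous,    # new subprocessors needing assessment
        "removed": previous - current,  # departures worth documenting
    }
```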

Step 5: Establish Ongoing Monitoring

Configure ongoing vendor risk monitoring:

  • Subscribe to Microsoft's subprocessor change notifications
  • Monitor Microsoft's Service Trust Portal for updated compliance documentation
  • Schedule annual vendor risk re-assessment
  • Track Microsoft AI-related incidents and service disruptions
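The annual re-assessment in the monitoring plan above reduces to a simple due-date check. A minimal sketch, assuming a 365-day cadence as the default:

```python
from datetime import date, timedelta

# Sketch of an annual re-assessment reminder: given the date of the last
# completed vendor risk assessment, report whether the next one is due.
# The 365-day default matches the annual cycle described above.
def reassessment_due(last_assessed: date, today: date,
                     cadence_days: int = 365) -> bool:
    return today >= last_assessed + timedelta(days=cadence_days)
```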

Financial Sector Considerations

  • Interagency Guidance Alignment: The 2023 Interagency Guidance on Third-Party Relationships explicitly applies to technology vendors providing AI services. Financial institutions should document how their Copilot vendor risk assessment aligns with this guidance.
  • Board Oversight: OCC Bulletin 2013-29 requires board oversight of critical third-party relationships. If Copilot is deployed at enterprise scale, the AI vendor relationship warrants board risk committee reporting, at minimum annually.
  • Examination Preparedness: FFIEC examiners increasingly focus on AI vendor management. Maintain a comprehensive vendor risk file for Microsoft's Copilot AI services that is readily accessible during examinations.
  • Concentration Risk: Many financial institutions have deep Microsoft dependencies (Windows, Office, Azure, Dynamics). Adding AI dependency through Copilot increases concentration risk. Document this risk and any mitigation strategies (multi-vendor AI strategy, contingency planning).
  • Contractual Protections: Financial institutions should seek contractual protections specific to AI services, including: right to audit AI processing, notification of model changes, data deletion upon termination, liability for AI-generated errors, and subprocessor change notification rights.
  • Fourth-Party Visibility: Regulatory expectations extend to understanding and managing fourth-party risk (Microsoft's subprocessors). The AI supply chain for Copilot includes foundation model providers, cloud infrastructure, and content safety services -- all of which should be assessed to the extent practical.
  • Data Sovereignty Considerations: Some financial institutions have data sovereignty requirements that restrict where data can be processed. Confirm that Copilot AI processing occurs within agreed-upon geographic boundaries, and understand any exceptions for AI-specific processing.

Verification Criteria

  1. Microsoft is registered in the institution's third-party vendor inventory as an AI service provider with appropriate criticality classification
  2. Microsoft's Copilot data handling documentation has been reviewed and key commitments documented (no training on customer data, data residency, retention)
  3. Microsoft's SOC 2 Type II report and ISO 27001 certification have been reviewed within the past 12 months
  4. Formal vendor risk assessment has been completed using the institution's third-party risk framework (Recommended and Regulated levels)
  5. Microsoft's subprocessor list has been reviewed and AI subprocessors identified (Recommended and Regulated levels)
  6. Microsoft's DPA and Product Terms have been reviewed for AI-specific provisions (Recommended and Regulated levels)
  7. Legal counsel has reviewed contractual terms for AI service provisions (Regulated level)
  8. Fourth-party risk assessment has been conducted for AI subprocessors (Regulated level)
  9. Residual risk has been documented and risk acceptance obtained from appropriate authority
  10. Vendor risk documentation is maintained in the regulatory examination file and includes ongoing monitoring plan (Regulated level)
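Since several criteria above apply only at the Recommended or Regulated level, a review can filter the checklist to the institution's chosen level. In the sketch below, the level assignments follow the parentheticals in the criteria; criteria without an explicit level (1-3 and 9) are assumed to be Baseline, which is an interpretation rather than a stated rule.

```python
# Hypothetical mapping of verification criteria (numbered as above) to the
# governance level at which they first apply. Criteria 1-3 and 9 carry no
# explicit level in the text and are assumed Baseline here.
CRITERIA_LEVELS = {
    1: "Baseline", 2: "Baseline", 3: "Baseline",
    4: "Recommended", 5: "Recommended", 6: "Recommended",
    7: "Regulated", 8: "Regulated",
    9: "Baseline", 10: "Regulated",
}
LEVEL_ORDER = ["Baseline", "Recommended", "Regulated"]

def criteria_for(level: str) -> list[int]:
    """Criteria applicable at the given governance level (cumulative)."""
    rank = LEVEL_ORDER.index(level)
    return [n for n, lvl in CRITERIA_LEVELS.items()
            if LEVEL_ORDER.index(lvl) <= rank]
```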

FSI Copilot Governance Framework v1.2.1 - March 2026