
Control 2.6: Model Risk Management (Alignment with OCC 2011-12/SR 11-7)

Overview

Control ID: 2.6
Control Name: Model Risk Management (Alignment with OCC 2011-12/SR 11-7)
Pillar: Management
Regulatory Reference: OCC 2011-12, Federal Reserve SR 11-7, FINRA Notice 25-07, SOX 302/404
Setup Time: 4-6 hours

Purpose

Model Risk Management (MRM) ensures that AI agents used in financial services are subject to the same rigorous governance as traditional quantitative models. OCC Bulletin 2011-12 and Federal Reserve SR Letter 11-7 establish requirements for model development, validation, and ongoing monitoring. While Copilot Studio agents may not be "models" in the traditional sense, their use in customer-facing or decision-support roles requires similar governance to manage the risk of incorrect outputs, bias, or regulatory non-compliance.

This control addresses key FSI requirements:

  • Model Inventory: Catalog agents that function as models
  • Independent Validation: Third-party review of agent behavior
  • Performance Monitoring: Track output quality over time
  • Bias Detection: Identify and mitigate unfair outcomes
  • Documentation: Complete model development lifecycle records
  • Change Control: Governance for model modifications

Prerequisites

Primary Owner Admin Role: AI Governance Lead
Supporting Roles: Compliance Officer, Power Platform Admin

Required Licenses

| License | Purpose |
|---------|---------|
| Power Platform per-user | Agent development and monitoring |
| Microsoft Purview (any tier) | Audit logging for model governance |
| Azure Monitor (optional) | Advanced performance monitoring |

Required Permissions

| Permission | Scope | Purpose |
|------------|-------|---------|
| Power Platform Admin | Tenant | Agent inventory and oversight |
| Compliance Administrator | Microsoft Purview | Audit log access |
| Model Risk Manager | Business role | Model governance |

Dependencies

Pre-Setup Checklist

  • [ ] Review OCC 2011-12 and SR 11-7 requirements
  • [ ] Identify agents that qualify as "models"
  • [ ] Establish Model Risk Management committee
  • [ ] Define model tiering criteria
  • [ ] Identify independent validation resources

Governance Levels

Baseline (Level 1)

Document model risk assessment for AI agents; establish validation procedures.

Intermediate (Levels 2-3)

Quarterly model monitoring; bias testing; performance vs. baseline tracking.

Regulated/High-Risk (Level 4)

Comprehensive model risk framework per SR 11-7; annual third-party validation; real-time performance monitoring.


Setup & Configuration

Step 1: Define Agent-as-Model Classification

Determine Which Agents Require MRM Governance:

  1. Model Definition per OCC 2011-12:

    A model is a quantitative method, system, or approach that applies statistical, economic, financial, or mathematical theories, techniques, and assumptions to process input data into quantitative estimates.

  2. Agent Classification Criteria:

| Criteria | Model Treatment | Example |
|----------|-----------------|---------|
| Makes decisions affecting customers | Yes - Tier 1 | Credit recommendation agent |
| Provides financial calculations | Yes - Tier 1/2 | Investment calculator agent |
| Influences risk assessments | Yes - Tier 1 | Risk scoring agent |
| Customer-facing recommendations | Yes - Tier 2 | Product recommendation agent |
| Information retrieval only | No | FAQ/knowledge base agent |
| Internal productivity | No | IT help desk agent |

  3. Document Classification:
    Agent Model Classification Form
    
    Agent Name: [Name]
    Agent ID: [ID]
    Business Owner: [Owner]
    
    Classification Decision:
    [ ] Model (requires MRM governance)
    [ ] Non-Model (standard agent governance)
    
    Justification:
    [Explain why agent does/doesn't qualify as model]
    
    Model Tier (if applicable):
    [ ] Tier 1 - High Risk (material business impact)
    [ ] Tier 2 - Medium Risk (significant but limited impact)
    [ ] Tier 3 - Low Risk (minimal business impact)
    
    Approved by: _________________ Date: _________
    Model Risk Manager
    
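The classification criteria above can be sketched as a simple triage helper to pre-screen agents before the Model Risk Manager's review. This is an illustrative sketch only: the function name and parameters are hypothetical, and its output is a suggested starting point, never the final classification decision.

```powershell
# Hypothetical triage helper mirroring the classification criteria table.
# Output is a suggestion for the Model Risk Manager, not a final decision.
function Get-SuggestedModelTier {
    param(
        [bool]$AffectsCustomerDecisions,       # e.g., credit recommendations
        [bool]$ProvidesFinancialCalculations,  # e.g., investment calculators
        [bool]$InfluencesRiskAssessments,      # e.g., risk scoring
        [bool]$CustomerFacingRecommendations   # e.g., product recommendations
    )

    if ($AffectsCustomerDecisions -or $InfluencesRiskAssessments) {
        return "Tier 1 - Model (high risk)"
    }
    if ($ProvidesFinancialCalculations) {
        return "Tier 1/2 - Model (confirm materiality)"
    }
    if ($CustomerFacingRecommendations) {
        return "Tier 2 - Model (medium risk)"
    }
    return "Non-Model (standard agent governance)"
}

# Example: an FAQ agent with no decision influence
Get-SuggestedModelTier -AffectsCustomerDecisions:$false `
    -ProvidesFinancialCalculations:$false `
    -InfluencesRiskAssessments:$false `
    -CustomerFacingRecommendations:$false
# Returns: Non-Model (standard agent governance)
```

Record the helper's suggestion and the final human decision on the classification form above; per the troubleshooting guidance later in this control, ambiguous cases should default to model treatment.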

Step 2: Establish Model Inventory

Create Model Inventory Registry:

  1. Required Inventory Fields:
| Field | Description |
|-------|-------------|
| Model ID | Unique identifier |
| Model Name | Agent/model name |
| Model Tier | 1, 2, or 3 |
| Business Purpose | What the model does |
| Model Owner | Business owner |
| Model Developer | Technical owner |
| Primary Users | Who uses the model |
| Data Inputs | Data sources used |
| Model Outputs | Decisions/recommendations |
| Implementation Date | Go-live date |
| Last Validation | Date of last review |
| Next Validation Due | Scheduled review date |
| Performance Status | Green/Yellow/Red |

  2. Create SharePoint List or Dataverse Table:
    • Power Platform Admin Center → Create new Dataverse table
    • Or SharePoint → Create list with above columns
    • Enable version history for audit trail
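If you choose the SharePoint option, the list can be provisioned with the PnP.PowerShell module. The sketch below assumes that module is installed (`Install-Module PnP.PowerShell`); the site URL, list name, and column subset are illustrative, so extend it with the remaining inventory fields from the table above.

```powershell
# Sketch: provision a Model Inventory list with PnP.PowerShell.
# Site URL and names are examples - adapt to your tenant.
Connect-PnPOnline -Url "https://contoso.sharepoint.com/sites/mrm" -Interactive

New-PnPList -Title "Model Inventory" -Template GenericList

# The built-in Title column can hold Model Name; add key inventory fields
Add-PnPField -List "Model Inventory" -DisplayName "Model ID" `
    -InternalName "ModelID" -Type Text -AddToDefaultView
Add-PnPField -List "Model Inventory" -DisplayName "Model Tier" `
    -InternalName "ModelTier" -Type Choice -Choices "1","2","3" -AddToDefaultView
Add-PnPField -List "Model Inventory" -DisplayName "Model Owner" `
    -InternalName "ModelOwner" -Type User
Add-PnPField -List "Model Inventory" -DisplayName "Next Validation Due" `
    -InternalName "NextValidationDue" -Type DateTime -AddToDefaultView
Add-PnPField -List "Model Inventory" -DisplayName "Performance Status" `
    -InternalName "PerformanceStatus" -Type Choice -Choices "Green","Yellow","Red" -AddToDefaultView

# Version history provides the audit trail needed for examinations
Set-PnPList -Identity "Model Inventory" -EnableVersioning $true
```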

Step 3: Document Model Development

Model Development Documentation Template:

# Model Development Documentation
## [Agent/Model Name]

### 1. Executive Summary
- Model purpose and business objectives
- Key stakeholders and users
- Expected benefits and risks

### 2. Model Scope and Limitations
- Intended use cases
- Populations/scenarios covered
- Known limitations
- Conditions where model should not be used

### 3. Model Design
- Conceptual design approach
- Data sources and inputs
- Processing methodology
- Output specifications
- Error handling approach

### 4. Development Process
- Development timeline
- Team members and roles
- Development environment
- Testing methodology
- Peer review documentation

### 5. Data Description
- Input data sources
- Data quality requirements
- Data preprocessing steps
- Training data (if applicable)
- Ongoing data requirements

### 6. Model Performance
- Performance metrics defined
- Baseline performance levels
- Acceptable performance thresholds
- Monitoring approach

### 7. Implementation Details
- Production environment
- Integration points
- Security controls
- Operational procedures

### 8. Validation Summary
- Initial validation results
- Ongoing validation schedule
- Validation methodology

### 9. Appendices
- Technical specifications
- Test results
- Approval documentation

Step 4: Establish Validation Framework

Independent Validation Requirements:

  1. Validation Tiers:
| Model Tier | Validation Requirements | Frequency |
|------------|-------------------------|-----------|
| Tier 1 | Independent third-party | Annual |
| Tier 2 | Independent internal team | Annual |
| Tier 3 | Self-assessment + review | Biennial |

  2. Validation Scope:

Conceptual Soundness:
  • [ ] Model design is appropriate for intended use
  • [ ] Methodology is theoretically sound
  • [ ] Assumptions are reasonable and documented
  • [ ] Limitations are clearly stated

Data Quality:
  • [ ] Data sources are appropriate
  • [ ] Data quality is acceptable
  • [ ] Data preprocessing is appropriate
  • [ ] Data is representative of use cases

Output Analysis:
  • [ ] Outputs are accurate and reliable
  • [ ] Performance meets expectations
  • [ ] Outputs are consistent over time
  • [ ] Edge cases handled appropriately

Implementation Verification:
  • [ ] Model implemented as designed
  • [ ] Controls are effective
  • [ ] Documentation is complete
  • [ ] Users are properly trained

  3. Validation Report Template:
    # Model Validation Report
    ## [Model Name] - [Validation Date]
    
    ### Validation Scope
    - Validation type: [Initial / Annual / Ad-hoc]
    - Validator: [Name/Firm]
    - Independence statement: [Confirm no development involvement]
    
    ### Summary of Findings
    | Area | Finding | Severity | Recommendation |
    |------|---------|----------|----------------|
    | [Data Quality] | [Example: 2% of test cases failed validation] | [Medium] | [Implement additional input validation] |
    
    ### Detailed Assessment
    [Section for each validation area]
    
    ### Conclusion
    - Overall validation status: [Approved / Conditional / Not Approved]
    - Conditions (if any): [List conditions]
    - Next validation date: [Date]
    
    ### Sign-off
    Validator: _________________ Date: _________
    Model Risk Manager: _________________ Date: _________
    
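The validation frequencies in the tier table translate into a simple scheduling rule: Tier 1 and 2 are revalidated annually, Tier 3 biennially. The sketch below derives each model's next due date from the inventory; the function name and CSV column names (Tier, LastValidation) are illustrative assumptions matching the inventory fields defined in Step 2.

```powershell
# Hypothetical scheduler applying the validation-frequency table:
# Tier 1/2 -> annual, Tier 3 -> biennial.
function Get-NextValidationDue {
    param(
        [ValidateSet("1","2","3")][string]$Tier,
        [datetime]$LastValidation
    )
    $years = if ($Tier -eq "3") { 2 } else { 1 }
    return $LastValidation.AddYears($years)
}

# Flag overdue models in an inventory export (illustrative file/columns)
$inventory = Import-Csv .\ModelInventory.csv
$overdue = $inventory | Where-Object {
    (Get-NextValidationDue -Tier $_.Tier `
        -LastValidation ([datetime]$_.LastValidation)) -lt (Get-Date)
}
$overdue | Select-Object ModelID, ModelName, Tier, LastValidation
```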

Step 5: Configure Performance Monitoring

Portal Path: Power Platform Admin Center → Analytics → Copilot Studio

  1. Define Performance Metrics:
| Metric | Description | Threshold |
|--------|-------------|-----------|
| Response Accuracy | Correct responses / Total | >95% |
| User Satisfaction | CSAT score | >4.0/5.0 |
| Fallback Rate | Escalations to human | <10% |
| Response Time | Average response latency | <2 seconds |
| Error Rate | Failed conversations | <2% |
| Bias Indicators | Demographic disparity | <5% variance |

  2. Create Monitoring Dashboard:
    • Use Power BI connected to Dataverse
    • Include trend charts for each metric
    • Configure alerts for threshold breaches

  3. Automated Monitoring Script:

# Model Performance Monitoring Script
param(
    [string]$AgentId,
    [int]$DaysToAnalyze = 30
)

# Connect to Dataverse
# (Assumes connection established)

# Query conversation logs
$startDate = (Get-Date).AddDays(-$DaysToAnalyze)

# Calculate metrics
$metrics = @{
    TotalConversations = 0
    SuccessfulResolutions = 0
    Escalations = 0
    AverageResponseTime = 0
    UserSatisfactionAvg = 0
}

# (Query and calculate actual values)

# Guard against division by zero when no conversations were logged
if ($metrics.TotalConversations -eq 0) {
    Write-Warning "No conversation data found for agent $AgentId in the last $DaysToAnalyze days."
    return
}

# Calculate performance scores
$resolutionRate = $metrics.SuccessfulResolutions / $metrics.TotalConversations * 100
$escalationRate = $metrics.Escalations / $metrics.TotalConversations * 100

# Determine status
$status = "Green"
if ($resolutionRate -lt 90 -or $escalationRate -gt 15) {
    $status = "Yellow"
}
if ($resolutionRate -lt 80 -or $escalationRate -gt 25) {
    $status = "Red"
}

# Output report
Write-Host "=== Model Performance Report ===" -ForegroundColor Cyan
Write-Host "Agent: $AgentId"
Write-Host "Period: Last $DaysToAnalyze days"
Write-Host "Status: $status"
Write-Host ""
Write-Host "Metrics:"
Write-Host "  Resolution Rate: $([math]::Round($resolutionRate, 2))%"
Write-Host "  Escalation Rate: $([math]::Round($escalationRate, 2))%"
Write-Host "  Avg Response Time: $($metrics.AverageResponseTime)ms"
Write-Host "  User Satisfaction: $($metrics.UserSatisfactionAvg)/5.0"

# Alert if threshold breached
if ($status -ne "Green") {
    Write-Host "`nALERT: Performance threshold breached!" -ForegroundColor $status
    # Send notification (Teams, email, etc.)
}
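The "Bias Indicators" metric in the table above (demographic disparity under 5% variance) can be approximated by comparing favorable-outcome rates across groups. The sketch below is a simplified illustration: the input columns (Group, Favorable) are hypothetical, and production bias testing should follow the methodology your validation team documents under Step 4.

```powershell
# Illustrative disparity check: compare favorable-outcome rates across
# demographic groups and flag when the spread exceeds the 5% threshold.
function Test-DemographicDisparity {
    param(
        [object[]]$Outcomes,          # rows with .Group and .Favorable (1/0)
        [double]$MaxVariancePct = 5.0
    )
    # Per-group favorable-outcome rate
    $rates = $Outcomes | Group-Object Group | ForEach-Object {
        $favorable = ($_.Group | Where-Object { [int]$_.Favorable -eq 1 }).Count
        [pscustomobject]@{
            Group = $_.Name
            Rate  = [math]::Round($favorable / $_.Count * 100, 2)
        }
    }
    # Spread between best- and worst-treated groups
    $stats  = $rates.Rate | Measure-Object -Maximum -Minimum
    $spread = $stats.Maximum - $stats.Minimum
    [pscustomobject]@{
        Rates           = $rates
        SpreadPct       = $spread
        WithinTolerance = ($spread -le $MaxVariancePct)
    }
}
```

A `WithinTolerance` of `$false` should feed the performance-status rating (Yellow/Red) and trigger the bias-remediation review, not just a log entry.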

Step 6: Establish Change Control for Models

Model Change Governance:

  1. Change Classification:
| Change Type | Description | Governance |
|-------------|-------------|------------|
| Material Change | Affects model outputs significantly | Full revalidation |
| Non-Material Change | Minor updates, bug fixes | Abbreviated review |
| Emergency Change | Critical fix | Expedited process |

  2. Material Change Examples:
    • Changes to prompts affecting recommendations
    • New data source integration
    • Modified business logic
    • Significant performance tuning

  3. Change Request Form:

    Model Change Request
    
    Model: [Model Name/ID]
    Requestor: [Name]
    Date: [Date]
    
    Change Description:
    [Detailed description of change]
    
    Change Classification:
    [ ] Material Change
    [ ] Non-Material Change
    [ ] Emergency Change
    
    Justification:
    [Business/technical rationale]
    
    Impact Assessment:
    - Users affected: [Number/Groups]
    - Output changes expected: [Description]
    - Risk level: [High/Medium/Low]
    
    Testing Plan:
    [How change will be tested]
    
    Rollback Plan:
    [How to revert if issues]
    
    Approvals Required:
    [ ] Model Owner
    [ ] Model Risk Manager (material changes)
    [ ] Business Owner
    
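The classification table above can be encoded as a lookup so that intake tooling routes change requests consistently. This is a sketch: the hashtable values paraphrase the governance column and the approval checklist on the form, and the approver lists should be confirmed against your own change policy.

```powershell
# Map each change type to its governance path (paraphrasing the table
# and form above - confirm approver lists against your change policy).
$changeGovernance = @{
    "Material"     = @{ Review    = "Full revalidation"
                        Approvers = @("Model Owner","Model Risk Manager","Business Owner") }
    "Non-Material" = @{ Review    = "Abbreviated review"
                        Approvers = @("Model Owner","Business Owner") }
    "Emergency"    = @{ Review    = "Expedited process"   # with retroactive MRM review
                        Approvers = @("Model Owner") }
}

$requested = "Material"
$path = $changeGovernance[$requested]
Write-Host "Change type '$requested' requires: $($path.Review)"
Write-Host "Approvals: $($path.Approvers -join ', ')"
```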

Step 7: Configure Regulatory Reporting

Regulatory Examination Readiness:

  1. Documentation Package:
    • Model inventory (complete list)
    • Model development documentation
    • Validation reports
    • Performance monitoring reports
    • Change control documentation
    • Issue/finding tracking

  2. Examination Response Templates:

    Model Risk Management Summary for Examination
    
    Total Models in Inventory: [Number]
    - Tier 1 (High Risk): [Number]
    - Tier 2 (Medium Risk): [Number]
    - Tier 3 (Low Risk): [Number]
    
    AI/Agent Models: [Number]
    
    Validation Status:
    - Current (validated within required period): [Number]
    - Overdue: [Number]
    
    Performance Status:
    - Green (meeting thresholds): [Number]
    - Yellow (watch list): [Number]
    - Red (remediation required): [Number]
    
    Open Findings: [Number]
    - High severity: [Number]
    - Medium severity: [Number]
    - Low severity: [Number]
    
    Key Issues/Remediation:
    [Summary of significant issues and status]
    
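The summary template above can be populated automatically from the model inventory export. The sketch below assumes the same illustrative CSV columns used elsewhere in this control (Tier, NextValidationDue, PerformanceStatus); counts for AI/agent models and open findings would come from additional tracking fields not shown here.

```powershell
# Fill the examination summary counts from a model inventory export
$models = Import-Csv .\ModelInventory.csv
$now = Get-Date

@"
Model Risk Management Summary for Examination

Total Models in Inventory: $($models.Count)
- Tier 1 (High Risk): $(($models | Where-Object Tier -eq "1").Count)
- Tier 2 (Medium Risk): $(($models | Where-Object Tier -eq "2").Count)
- Tier 3 (Low Risk): $(($models | Where-Object Tier -eq "3").Count)

Validation Status:
- Current: $(($models | Where-Object { [datetime]$_.NextValidationDue -gt $now }).Count)
- Overdue: $(($models | Where-Object { [datetime]$_.NextValidationDue -le $now }).Count)

Performance Status:
- Green: $(($models | Where-Object PerformanceStatus -eq "Green").Count)
- Yellow: $(($models | Where-Object PerformanceStatus -eq "Yellow").Count)
- Red: $(($models | Where-Object PerformanceStatus -eq "Red").Count)
"@
```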

Step 8: Integrate with Enterprise MRM Program

If Organization Has Existing MRM Program:

  1. Align with Existing Framework:
    • Map agent governance to existing model tiers
    • Use existing validation resources
    • Integrate with model inventory system
    • Align reporting with MRM committee

  2. AI-Specific Considerations:
    • Add AI/agent-specific validation criteria
    • Include bias testing in validation scope
    • Address generative AI output risks
    • Document AI-specific limitations

PowerShell Configuration

Generate MRM Compliance Report

# Model Risk Management Compliance Report
param(
    [string]$ModelInventoryPath,
    [string]$OutputPath = ".\MRM_Report_$(Get-Date -Format 'yyyyMMdd').html"
)

# Load model inventory
$models = Import-Csv -Path $ModelInventoryPath

# Calculate compliance metrics
$totalModels = $models.Count
$tier1 = ($models | Where-Object Tier -eq "1").Count
$tier2 = ($models | Where-Object Tier -eq "2").Count
$tier3 = ($models | Where-Object Tier -eq "3").Count

$currentDate = Get-Date
$validationCurrent = ($models | Where-Object {
    [datetime]$_.NextValidationDue -gt $currentDate
}).Count
$validationOverdue = ($models | Where-Object {
    [datetime]$_.NextValidationDue -le $currentDate
}).Count

$performanceGreen = ($models | Where-Object PerformanceStatus -eq "Green").Count
$performanceYellow = ($models | Where-Object PerformanceStatus -eq "Yellow").Count
$performanceRed = ($models | Where-Object PerformanceStatus -eq "Red").Count

# Generate HTML report
$html = @"
<!DOCTYPE html>
<html>
<head>
<title>Model Risk Management Report</title>
<style>
body { font-family: 'Segoe UI', sans-serif; margin: 20px; }
h1, h2 { color: #0078d4; }
.dashboard { display: flex; gap: 20px; flex-wrap: wrap; margin: 20px 0; }
.card { padding: 20px; background: #f3f2f1; border-radius: 8px; min-width: 150px; }
.card.green { background: #dff6dd; }
.card.yellow { background: #fff4ce; }
.card.red { background: #fed9cc; }
table { width: 100%; border-collapse: collapse; margin-top: 20px; }
th, td { padding: 10px; text-align: left; border-bottom: 1px solid #ddd; }
th { background: #0078d4; color: white; }
.overdue { color: red; font-weight: bold; }
</style>
</head>
<body>
<h1>Model Risk Management Compliance Report</h1>
<p>Report Date: $(Get-Date -Format 'MMMM dd, yyyy')</p>

<h2>Model Inventory Summary</h2>
<div class="dashboard">
<div class="card"><h3>Total Models</h3><p style="font-size:28px;">$totalModels</p></div>
<div class="card"><h3>Tier 1 (High)</h3><p style="font-size:28px;">$tier1</p></div>
<div class="card"><h3>Tier 2 (Medium)</h3><p style="font-size:28px;">$tier2</p></div>
<div class="card"><h3>Tier 3 (Low)</h3><p style="font-size:28px;">$tier3</p></div>
</div>

<h2>Validation Status</h2>
<div class="dashboard">
<div class="card green"><h3>Current</h3><p style="font-size:28px;">$validationCurrent</p></div>
<div class="card red"><h3>Overdue</h3><p style="font-size:28px;">$validationOverdue</p></div>
</div>

<h2>Performance Status</h2>
<div class="dashboard">
<div class="card green"><h3>Green</h3><p style="font-size:28px;">$performanceGreen</p></div>
<div class="card yellow"><h3>Yellow</h3><p style="font-size:28px;">$performanceYellow</p></div>
<div class="card red"><h3>Red</h3><p style="font-size:28px;">$performanceRed</p></div>
</div>

<h2>Model Details</h2>
<table>
<tr><th>Model ID</th><th>Name</th><th>Tier</th><th>Owner</th><th>Last Validation</th><th>Next Due</th><th>Status</th></tr>
$(
$models | ForEach-Object {
    $overdueClass = if ([datetime]$_.NextValidationDue -le $currentDate) { "overdue" } else { "" }
    "<tr><td>$($_.ModelID)</td><td>$($_.ModelName)</td><td>$($_.Tier)</td><td>$($_.ModelOwner)</td><td>$($_.LastValidation)</td><td class='$overdueClass'>$($_.NextValidationDue)</td><td>$($_.PerformanceStatus)</td></tr>"
}
)
</table>
</body>
</html>
"@

$html | Out-File -FilePath $OutputPath -Encoding UTF8
Write-Host "MRM Report generated: $OutputPath" -ForegroundColor Green

Financial Sector Considerations

Regulatory Alignment

| Regulation | Requirement | MRM Implementation |
|------------|-------------|--------------------|
| OCC 2011-12 | Model risk management framework | Agent-as-model governance |
| Fed SR 11-7 | Model validation and monitoring | Independent validation program |
| FINRA 25-07 | AI fairness and transparency | Bias testing and documentation |
| SOX 302/404 | Internal controls | Documented model controls |
| OCC 2021-18 | AI/ML risk management | AI-specific governance |

Zone-Specific Configuration

| Configuration | Zone 1 | Zone 2 | Zone 3 |
|---------------|--------|--------|--------|
| MRM Applicability | Non-model | May be model | Likely model |
| Validation Type | N/A | Internal | Independent third-party |
| Validation Frequency | N/A | Annual | Annual + ongoing |
| Performance Monitoring | Basic | Standard | Real-time |
| Documentation Level | Basic | Comprehensive | Comprehensive |
| Change Governance | Standard | Enhanced | Formal CAB |

FSI Use Case Example

Scenario: Investment Recommendation Agent

MRM Classification:

  • Tier 1 Model (material business impact)
  • Provides investment product recommendations
  • Influences customer financial decisions

MRM Implementation:

  1. Documentation:
    • Complete model development documentation
    • Input data: Customer profile, risk tolerance, investment goals
    • Output: Product recommendations
    • Limitations: Not personalized financial advice

  2. Validation:
    • Annual third-party validation
    • Quarterly performance review
    • Bias testing for demographic fairness
    • Output analysis for recommendation quality

  3. Monitoring:
    • Real-time accuracy tracking
    • User satisfaction monitoring
    • Escalation rate tracking
    • Compliance sampling

  4. Governance:
    • Changes reviewed by Model Risk Committee
    • Material changes require revalidation
    • Quarterly reporting to MRM committee

Verification & Testing

Verification Steps

  1. Classification Verification:
    • [ ] All agents reviewed for model classification
    • [ ] Model/non-model decisions documented
    • [ ] Model tier assignments justified

  2. Documentation Verification:
    • [ ] Model development docs complete
    • [ ] Validation reports current
    • [ ] Change control documentation maintained

  3. Monitoring Verification:
    • [ ] Performance dashboards operational
    • [ ] Alerts configured
    • [ ] Regular reporting in place

  4. Governance Verification:
    • [ ] MRM committee oversight
    • [ ] Validation schedule maintained
    • [ ] Regulatory reporting ready

Compliance Checklist

  • [ ] Agent-as-model classification completed
  • [ ] Model inventory maintained
  • [ ] Validation program established
  • [ ] Performance monitoring active
  • [ ] Change control process documented
  • [ ] Regulatory examination package ready

Troubleshooting & Validation

Issue 1: Unclear Model Classification

Symptoms: Difficulty determining if agent is a "model"

Resolution:

  1. Review OCC 2011-12 model definition
  2. Assess if agent provides quantitative estimates
  3. Evaluate impact on business decisions
  4. Consult Model Risk Management team
  5. When in doubt, treat as model

Issue 2: Limited Validation Resources

Symptoms: Cannot perform independent validation

Resolution:

  1. Identify internal teams not involved in development
  2. Consider second-line risk functions
  3. Engage external validators for Tier 1
  4. Use automated validation tools
  5. Document resource constraints

Issue 3: Performance Data Unavailable

Symptoms: Cannot measure model performance

Resolution:

  1. Enable conversation logging
  2. Configure Dataverse analytics
  3. Implement user feedback collection
  4. Create manual sampling process
  5. Document data limitations

Additional Resources


| Control ID | Control Name | Relationship |
|------------|--------------|--------------|
| 2.5 | Testing and Validation | Pre-deployment validation |
| 2.11 | Bias Testing | Fairness assessment |
| 3.1 | Agent Inventory | Model inventory |
| 3.3 | Compliance Reporting | MRM reporting |

Support & Questions

For implementation support or questions about this control, contact:

  • AI Governance Lead (governance direction)
  • Compliance Officer (regulatory requirements)
  • Technical Implementation Team (platform setup)

Updated: Dec 2025
Version: v1.0 Beta (Dec 2025)
UI Verification Status: ❌ Needs verification