Control 2.19: Customer AI Disclosure and Transparency
Control ID: 2.19
Pillar: Management
Regulatory Reference: SEC Reg BI, CFPB UDAAP, FINRA Rule 2210, FINRA Regulatory Notice 24-09 (Gen AI), SEC Rule 17a-4 / FINRA Rule 4511 (recordkeeping of AI communications), GLBA Section 501(b), State AI Laws (CA SB 1001, CA SB 243, CA AB 853, Utah AI Policy Act / SB 149, Colorado AI Act)
Last UI Verified: April 2026
Governance Levels: Baseline / Recommended / Regulated
Objective
Establish formal processes to disclose to customers that they are interacting with AI agents, explain agent capabilities and limitations, and provide clear escalation paths to human representatives, supporting compliance with transparency requirements across federal and state regulations.
Why This Matters for FSI
- SEC Reg BI: Transparency obligations require disclosure of how recommendations are made; undisclosed AI involvement in recommendations may not meet the disclosure obligation
- CFPB UDAAP (12 U.S.C. 5531/5536): Failure to clearly disclose that a customer is interacting with AI — or failing to provide a workable path to a human when one is required — can be treated as a deceptive or unfair practice (see CFPB Issue Spotlight, Chatbots in Consumer Finance, June 2023)
- FINRA Rule 2210 + Regulatory Notice 24-09: All customer-facing communications, including those generated or shaped by AI/LLMs, must be fair, balanced, and not misleading. Notice 24-09 (June 2024) confirms existing communications, supervision, and recordkeeping rules apply to generative AI output; firms remain accountable for chatbot content
- SEC Rule 17a-4 / FINRA Rule 4511: Disclosure language, disclosure-version history, and the AI side of the customer transcript must be retained as books and records (links to Control 2.13)
- State AI Laws:
  - California SB 1001 — bots used to incentivize a commercial transaction must disclose their non-human nature
  - California SB 243 (2025) — companion-chatbot operators must surface clear AI disclosure and (for minors) periodic reminders
  - California AB 853 (AI Transparency Act, full compliance Aug 2 2026) — adds manifest and latent disclosure obligations for AI-generated content on covered platforms
  - Utah AI Policy Act / SB 149 (as amended 2024–2025) — regulated persons in finance, health, and similar sectors must clearly and conspicuously disclose AI interaction at the start of meaningful interactions
  - Colorado AI Act (effective Feb 1 2026) — for high-risk consumer-facing AI (including financial services), requires up-front AI disclosure, on-request human escalation, plain-language explanation of automated decisions, and auditable interaction records
FINRA Notice Disambiguation
The primary AI-relevant FINRA guidance is Regulatory Notice 24-09 (June 2024) — Firms' Obligations When Using Generative AI and LLMs. Regulatory Notice 25-07 (April 2025) is a Request for Comment on workplace modernization rules and only touches AI in the narrow context of recordkeeping for AI-generated communications; it should not be cited as standalone AI-disclosure authority. Customer-facing disclosure obligations rest on FINRA Rule 2210 (Communications), Rule 3110 (Supervision), and Notice 24-09.
No companion solution by design
Not all controls have a companion solution in FSI-AgentGov-Solutions; solution mapping is selective by design. This control is operated via native Microsoft admin surfaces and verified by the framework's assessment-engine collectors. See the Solutions Index for the catalog and coverage scope.
Control Description
This control establishes AI disclosure through:
- AI Identification - Persistent disclosure that the user is interacting with an AI agent
- Capability Explanation - Clear description of what agent can and cannot do
- Limitation Disclosure - Transparent communication about AI limitations
- Human Escalation Path - Clear mechanism to reach human representative at any time
- Data Use Disclosure - Information about how conversation data is used
- Disclosure Versioning - Track changes to disclosure language over time
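The disclosure-versioning element can be recorded in a lightweight register. A minimal sketch in Python — field names and the hash-based integrity check are illustrative, not a schema defined by this framework:

```python
from dataclasses import dataclass
from datetime import date
import hashlib

@dataclass(frozen=True)
class DisclosureVersion:
    """One entry in a disclosure-language register (illustrative fields)."""
    agent_id: str
    version: str
    effective: date
    approver: str
    jurisdictions: tuple[str, ...]
    text: str

    @property
    def text_hash(self) -> str:
        # A content hash lets auditors confirm that the disclosure actually
        # delivered in a session matches the approved register entry.
        return hashlib.sha256(self.text.encode("utf-8")).hexdigest()

# Example register entry (all values hypothetical)
v1 = DisclosureVersion(
    agent_id="contact-center-bot",
    version="1.0",
    effective=date(2026, 1, 15),
    approver="Compliance Officer",
    jurisdictions=("default", "CA", "UT", "CO"),
    text="You are chatting with an AI assistant. Say 'agent' to reach a person.",
)
```

Keeping entries immutable (frozen) and hash-addressable mirrors the retention expectation in Control 2.13: superseded versions are never edited in place, only succeeded.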
Key Configuration Points
- Implement AI identification in the agent's Conversation Start / greeting topic and reinforce it persistently (status bar text, periodic reminders, pre-transaction confirmations)
- Create a capability and limitation disclosure template per agent type, reviewed by Legal/Compliance
- Configure the system "Escalate" topic in Copilot Studio and add a "Transfer conversation" node that routes to the engagement hub (Dynamics 365 Customer Service, Microsoft Teams voice/chat queue, or a generic engagement hub via the handoff payload)
- Define data-use disclosure aligned with the firm's privacy notice and GLBA initial/annual notices
- Document the disclosure language, version, and approver in the Agent Card (Control 3.1) and retain prior versions per Control 2.13
- Configure jurisdiction-aware disclosures (state-specific copy for CA, UT, CO; minor-reminders for CA SB 243 covered scenarios)
- Version-control all disclosure language with named approvers, effective dates, and links to the change ticket
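The jurisdiction-aware configuration point can be sketched as a simple selector. The state copy and reminder intervals below are placeholders for illustration, not approved legal language:

```python
# Hypothetical jurisdiction-aware disclosure selector; all strings are
# illustrative, not Compliance-approved disclosure text.
DISCLOSURES = {
    "default": "You're chatting with an AI assistant. You can ask for a human representative at any time.",
    "CA": "You're chatting with an automated AI bot, not a human. Say 'agent' at any time to reach a person.",
    "UT": "This is generative artificial intelligence, not a human. You may request a human representative at any time.",
    "CO": "You are interacting with an AI system. You may request human review of any consequential automated decision.",
}

def disclosure_for(state: str, is_minor: bool = False) -> tuple[str, int]:
    """Return (disclosure text, reminder interval in minutes) for a session."""
    text = DISCLOSURES.get(state, DISCLOSURES["default"])
    # Covered minor scenarios (e.g. CA SB 243) get a shorter reminder cadence;
    # both intervals are illustrative values, not statutory figures.
    reminder_minutes = 30 if is_minor else 120
    return text, reminder_minutes
```

In Copilot Studio this selection would typically live in the greeting topic (e.g. branching on a channel or CRM variable), with the returned interval driving the periodic-reminder logic.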
Zone-Specific Requirements
| Zone | Requirement | Rationale |
|---|---|---|
| Zone 1 (Personal) | Not generally customer-facing; if shared with any external party, apply Zone 2 baseline | Personal-productivity agents typically lack customer reach |
| Zone 2 (Team) | Basic AI identification + on-request human escalation; disclosure approved by Compliance before external publication | Shared/team agents may be exposed to external users via Teams or embedded experiences |
| Zone 3 (Enterprise) | Comprehensive disclosure suite: AI identification at conversation start, capability + limitation statement, periodic reminders for long sessions, pre-transaction reconfirmation, jurisdiction-aware copy (CA/UT/CO), proactive human-handoff offer, full disclosure logging and 6-year retention (or longer where state law requires) | Customer-facing AI in regulated FSI workflows requires the highest transparency posture and full evidentiary trail |
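Zone 3's pre-transaction reconfirmation can be modeled as a session gate: the agent refuses to execute a transaction step unless the AI disclosure was re-acknowledged recently. A sketch, with an illustrative 15-minute window:

```python
from datetime import datetime, timedelta

RECONFIRM_WINDOW = timedelta(minutes=15)  # illustrative threshold, not a regulatory figure

class SessionState:
    """Tracks the most recent disclosure acknowledgement in a session."""

    def __init__(self) -> None:
        self.last_disclosure_ack: datetime | None = None

    def acknowledge_disclosure(self, now: datetime) -> None:
        self.last_disclosure_ack = now

    def may_execute_transaction(self, now: datetime) -> bool:
        # Block the transaction unless disclosure was (re)confirmed recently;
        # a denial here should trigger the pre-transaction reconfirmation prompt.
        return (
            self.last_disclosure_ack is not None
            and now - self.last_disclosure_ack <= RECONFIRM_WINDOW
        )
```

The same gate pattern covers long-session reminders: when the check fails mid-conversation, the agent re-surfaces the disclosure instead of proceeding silently.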
Roles & Responsibilities
| Role | Responsibility |
|---|---|
| Compliance Officer | Approve disclosure language; validate alignment with FINRA 2210, Reg BI, UDAAP, and state AI laws |
| Legal Counsel | Review state-specific requirements (CA / UT / CO) and approve data-use disclosure wording |
| AI Governance Lead | Own the disclosure-language register, version history, and re-attestation cycle |
| AI Administrator | Configure tenant-level Copilot/agent settings that affect disclosure surfaces |
| Power Platform Admin | Govern the Copilot Studio environment(s) hosting customer-facing agents |
| Copilot Studio Agent Author | Implement disclosure copy in the greeting topic and the "Transfer conversation" node in the Escalate topic |
| Customer Experience / UX Lead | Design disclosure UX and validate escalation paths in each channel (web, Teams, Omnichannel) |
Related Controls
| Control | Relationship |
|---|---|
| 3.1 - Agent Inventory | Disclosure language documented in Agent Card |
| 2.12 - Supervision | Human escalation aligns with supervision |
| 1.6 - Purview DSPM for AI | Data use disclosure aligns with classification |
| 2.13 - Documentation | Disclosure versions maintained per retention |
Implementation Playbooks
Step-by-Step Implementation
This control has detailed playbooks for implementation, automation, testing, and troubleshooting:
- Portal Walkthrough — Step-by-step portal configuration
- PowerShell Setup — Automation scripts
- Verification & Testing — Test cases and evidence collection
- Troubleshooting — Common issues and resolutions
Verification Criteria
Confirm control effectiveness by verifying:
- AI identification appears at the start of every customer conversation and is reinforced persistently (status text, periodic reminder, or pre-transaction reconfirmation)
- The capability + limitation statement accurately reflects what the agent can and cannot do, and is approved by Compliance
- The Copilot Studio system Escalate topic contains a working Transfer conversation node, and the configured engagement hub (Dynamics 365 Customer Service / Teams queue / generic hub) successfully receives the handoff in a test session
- Data-use disclosure is present and consistent with the firm's privacy notice and GLBA notices
- Jurisdiction-aware disclosures render correctly for CA, UT, and CO test sessions
- Disclosure events (delivered, type, escalation offered, escalation taken) are logged to a retained store and are queryable for audit
- Disclosure language change history is retained per Control 2.13 with named approvers and effective dates
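The logging and queryability criteria above can be exercised with a small sketch: each disclosure event is written as a JSON line (field names are illustrative, not a framework schema), and an audit query surfaces sessions that never received a disclosure:

```python
import json
from datetime import datetime, timezone

def log_disclosure_event(log: list, session_id: str, event: str, detail: str = "") -> None:
    """Append one disclosure event as a JSON line bound for a retained store."""
    log.append(json.dumps({
        "ts": datetime.now(timezone.utc).isoformat(),
        "session": session_id,
        # e.g. "delivered", "reminder", "escalation_offered", "escalation_taken"
        "event": event,
        "detail": detail,
    }))

def sessions_missing_disclosure(log: list) -> set:
    """Audit query: sessions with activity but no 'delivered' disclosure event."""
    seen, delivered = set(), set()
    for line in log:
        rec = json.loads(line)
        seen.add(rec["session"])
        if rec["event"] == "delivered":
            delivered.add(rec["session"])
    return seen - delivered
```

In production the JSON lines would land in the retained store required by Control 2.13 (e.g. via Dataverse or an export pipeline), and the audit query would run against that store rather than an in-memory list.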
Additional Resources
- SEC Regulation Best Interest
- CFPB Issue Spotlight: Chatbots in Consumer Finance (June 2023) — research/risk identification; no chatbot-specific binding rule yet, but UDAAP authority applies
- FINRA Regulatory Notice 24-09 — Generative AI / LLM obligations
- FINRA Rule 2210 — Communications With the Public
- FINRA Regulatory Notice 25-07 — Workplace Modernization (recordkeeping context only)
- Microsoft Learn: Hand off to a live agent (Copilot Studio)
- Microsoft Learn: Configure handoff to Dynamics 365 Customer Service
- Microsoft Learn: Configure handoff to a generic engagement hub
- Microsoft Copilot Studio Samples — contact-center skill-handoff
Updated: April 2026 | Version: v1.4.0 | UI Verification Status: Current