
Control 1.1: Restrict Agent Publishing by Authorization

Control ID: 1.1
Pillar: Security
Regulatory Reference: FINRA 4511(a), FINRA 3110(a)/(b), FINRA Regulatory Notice 25-07 (in comment period — not yet enacted; treat as forward-looking guidance), SEC Rule 17a-4(f), GLBA Safeguards Rule 16 CFR §314.4(c), SOX §404 (ICFR), OCC Bulletin 2023-17, Federal Reserve SR 11-7
Last UI Verified: April 2026
Governance Levels: Baseline / Recommended / Regulated


Objective

Restrict who can publish AI agents to production environments by implementing security group-based authorization, separation of duties, and formal approval workflows.


Why This Matters for FSI

  • FINRA 4511(a) — Books and Records: Records of who created, modified, and published each agent must be preserved. Publishing-authorization controls produce the maker/publisher provenance that supports those records.
  • FINRA 3110(a)/(b) — Supervision: Supervisory systems must designate who may deploy AI capabilities and demonstrate enforcement. Restricting publishing to a defined Publishers group, with logged approvals, supports this.
  • FINRA Regulatory Notice 25-07 (in comment period — proposed): Anticipates explicit AI supervision requirements. Implementing publishing restrictions now positions the firm ahead of likely final rule.
  • SEC Rule 17a-4(f) — Electronic Records: Agent publish events and approval records must be captured in WORM-compliant, query-able storage. The control feeds the data into Purview Audit; retention policies (separate control) provide the WORM characteristic.
  • GLBA Safeguards Rule 16 CFR §314.4(c)(1) — Access Controls: Limiting who can deploy agents that may handle customer NPI is an access-control safeguard.
  • SOX §404 — Internal Controls over Financial Reporting: Agents that touch financial data, reporting workflows, or material disclosures fall in ICFR scope. Restricting publishing supports SoD.
  • OCC Bulletin 2023-17 — Third-Party Risk: Documented governance over third-party AI platform usage, including who can deploy.
  • Federal Reserve SR 11-7 — Model Risk Management: Generative-AI agents are models; SR 11-7 requires controlled deployment, validation, and inventory.
  • NIST SP 800-53 AC-2 / AC-3 / AC-6: Account management, access enforcement, least privilege.
  • NIST AI RMF 1.0 (GOVERN-1.5, GOVERN-3.2): Documented authorization controls and accountability for AI deployment.

Implementation note: No single control "guarantees compliance." This control supports the above obligations when paired with retention (1.7), audit logging (1.7, 1.13), evidence preservation (3.x), and incident response (1.18). Firms must validate that their specific examination posture is met.


No companion solution by design

Not all controls have a companion solution in FSI-AgentGov-Solutions; solution mapping is selective by design. This control is operated via native Microsoft admin surfaces and verified by the framework's assessment-engine collectors. See the Solutions Index for the catalog and coverage scope.

Prerequisites & Licensing

Implementing this control end-to-end requires the following — verify each before starting, or expect partial enforcement:

| Prerequisite | Why | Where to verify |
| --- | --- | --- |
| Microsoft Entra ID P1 (or higher) | Required for dynamic security groups used in maker/publisher gating | M365 Admin Center > Billing > Licenses |
| Power Platform Managed Environments add-on | Required for sharing limits and "Solution checker" governance signals on the maker environment | PPAC > Environments > [env] (look for "Managed" badge) |
| Microsoft Copilot Studio license (per-user or per-tenant) | Required for any agent publishing surface beyond M365 Agent Builder | M365 Admin Center > Billing |
| Microsoft Purview Audit (Standard) | Captures BotUpdateOperation-BotPublish events; default 180-day retention | Purview portal > Audit |
| Microsoft Purview Audit (Premium) + custom retention policy | Required for >180-day retention. Premium alone gives 1 year by default; 7–10 year retention requires an explicit Audit Retention Policy | Purview portal > Audit > Audit retention policies |
| AI Administrator role (added Mar 2025, MC1041454) | Preferred least-privilege role for agent governance — use instead of Global Administrator | Entra > Roles & administrators |
| Sovereign cloud (GCC / GCC-High / DoD) awareness | PowerShell automation must explicitly target sovereign endpoints (-Endpoint usgov, usgovhigh, dod) — commercial defaults silently return 0 environments | See PowerShell Setup playbook |
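The sovereign-cloud caveat above is easy to trip over in automation. The following is a minimal sketch of authenticating against the correct cloud before any environment queries, using the Microsoft.PowerApps.Administration.PowerShell module; the `$CloudEndpoint` parameter name is an assumption for this example.

```powershell
# Sketch: authenticate against the correct cloud before any environment queries.
# Requires the Microsoft.PowerApps.Administration.PowerShell module.
param([string]$CloudEndpoint = 'usgov')   # 'prod' (commercial), 'usgov', 'usgovhigh', 'dod'

Import-Module Microsoft.PowerApps.Administration.PowerShell

# Omitting -Endpoint targets commercial; in GCC / GCC-High / DoD the same call
# silently returns 0 environments.
Add-PowerAppsAccount -Endpoint $CloudEndpoint

$envs = Get-AdminPowerAppEnvironment
if (-not $envs) {
    Write-Warning "No environments returned - verify -Endpoint matches your cloud."
}
```

Running this with an empty result is itself a useful pre-flight check before the governance scripts below.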

Control Description

This control governs who can create and publish AI agents in Microsoft 365 and Power Platform. Microsoft provides native governance controls for agent authoring and publishing — including environment access, security roles, data policies, Managed Environments, and Microsoft 365 admin-center agent controls — but no single tenant-wide switch covers every agent-creation surface. The governance strategy therefore combines preventive controls with containment and approval workflows:

  1. Block Publishing - DLP policies block channel connectors
  2. Restrict Sharing - Disable "Share with Everyone" capability
  3. Route Away - Environment routing directs makers to governed environments

The control implements a "Sterile Default Environment Strategy" where the Default environment has all publishing channels blocked via DLP, combined with security group-based access control for designated maker environments.


Key Configuration Points

Maker Authorization

  • Create security groups: FSI-Agent-Makers-*, FSI-Agent-Publishers-Prod, FSI-Agent-Approvers-Compliance
  • Remove Environment Maker role from "All Users" in each environment
  • Assign Environment Maker only to authorized security groups
  • Restrict Copilot Studio agent creation to specific security groups (controls who can author agents)
  • Configure agent sharing settings to control who can use published agents
  • Configure Managed Environment sharing limits
  • Implement release gates with approval workflows for production publishing
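The security groups listed above can be provisioned with Microsoft Graph PowerShell. This is a sketch only; `FSI-Agent-Makers-Dev` is an assumed instance of the `FSI-Agent-Makers-*` pattern, and naming should follow your tenant's standards.

```powershell
# Sketch: create the maker/publisher/approver security groups via Microsoft Graph.
Import-Module Microsoft.Graph.Groups
Connect-MgGraph -Scopes 'Group.ReadWrite.All'

$groupNames = @(
    'FSI-Agent-Makers-Dev',           # assumed example of FSI-Agent-Makers-*
    'FSI-Agent-Publishers-Prod',
    'FSI-Agent-Approvers-Compliance'
)

foreach ($name in $groupNames) {
    # Security-enabled, non-mail-enabled groups suit role-assignment scenarios.
    New-MgGroup -DisplayName $name `
                -MailEnabled:$false `
                -MailNickname ($name.ToLower() -replace '[^a-z0-9]', '') `
                -SecurityEnabled
}
```

If maker/publisher gating relies on dynamic membership (per the Entra ID P1 prerequisite), add the membership rule when creating each group rather than assigning members statically.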

M365 Admin Center Agent Governance Actions (GA)

The Microsoft 365 Admin Center provides comprehensive agent lifecycle management at Copilot > Agents & connectors > Agents:

| Action | Description | Governance Impact |
| --- | --- | --- |
| Publish | Admin approval required before agents become available | Helps prevent unauthorized agent distribution |
| Activate | Enable agent with governance template application | Applies organizational standards at activation |
| Deploy | Auto-install agents for targeted user groups | Controlled rollout with scope management |
| Pin | Pin selected agents for organization-wide visibility (current Microsoft documentation does not specify a fixed pin limit; verify the live cap in your tenant before publishing internal guidance) | Managed discovery for approved agents |
| Block / Unblock | Prevent or restore agent availability | Immediate risk mitigation capability |
| Delete | Permanently remove agents from the tenant | Lifecycle termination control |
| Approve Updates | Review and approve agent version changes | Change management enforcement |
| Reassign Ownership | Transfer agent ownership between users | Continuity management for departing staff |
| Manage Ownerless | Handle agents without active owners | Orphaned agent governance |
| Export Inventory | Download agent inventory data | Audit and compliance reporting |

The agent inventory view also includes a Risks column that surfaces Entra-based risk alerts for individual agents, enabling administrators to identify and prioritize agents requiring governance review.

Agent-Level Authentication and Access Control

  • Require user authentication for all agents: In Copilot Studio, navigate to each agent's Settings > Security and verify authentication is not set to "No Authentication." Use "Authenticate with Microsoft" (recommended for internal agents using Entra ID) or "Authenticate Manually" (for OAuth-based scenarios)
  • Enforce sign-in for manual authentication: When using "Authenticate Manually," enable the "Require users to sign in" toggle to prevent anonymous interactions with the agent. For "Authenticate with Microsoft," users are already authenticated through Teams and Microsoft 365
  • Restrict agent sharing scope: In Copilot Studio, open the agent and use … > Share to configure who can chat with or collaborate on the agent. Restrict access to designated Copilot Readers or Security Groups. Enforce broader sharing restrictions through Managed Environment sharing rules in Power Platform Admin Center. Do not allow unrestricted access ("Anyone" or "Any multi-tenant") for agents handling non-public data
  • Control generative AI agent publishing at tenant level: In Power Platform Admin Center > Tenant Settings, disable the ability to publish agents that use generative AI features until governance review confirms AI feature controls are in place
  • Block unapproved shared agents: In the M365 Admin Center, open the Copilot Control System / Agents experience and use All agents to review and block agents that have not been through the approval workflow

Copilot Studio Data Policies

Use Copilot Studio data policies to enforce governance controls at the environment, environment-group, or tenant scope:

  • Require maker/user authentication
  • Explicitly allow or block knowledge sources, actions/connectors, skills, HTTP requests, publication to channels, triggers, and Application Insights integration

Data Residency Verification

  • Verify all Power Platform environments used for agent publishing are provisioned in US regions
  • Confirm Copilot Studio geographic data residency aligns with regulatory boundaries
  • Document region configuration as part of environment setup evidence
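The region check can be captured as evidence with the Power Platform admin module. A minimal sketch, assuming prior `Add-PowerAppsAccount` authentication; the `'unitedstates'` region code in `$allowed` is an assumption — verify the actual `Location` values returned in your tenant.

```powershell
# Sketch: export each environment's region as data-residency evidence.
Get-AdminPowerAppEnvironment |
    Select-Object DisplayName, EnvironmentName, Location |
    Export-Csv -Path '.\environment-regions.csv' -NoTypeInformation

# Flag anything outside the expected US regions (adjust the list to your boundary).
$allowed = @('unitedstates')   # assumed region code; confirm against live output
Get-AdminPowerAppEnvironment |
    Where-Object { $_.Location -notin $allowed } |
    ForEach-Object { Write-Warning "Non-US environment: $($_.DisplayName) ($($_.Location))" }
```

The exported CSV doubles as the "documented region configuration" evidence called for above.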

Zone-Specific Requirements

| Zone | Requirement | Rationale |
| --- | --- | --- |
| Zone 1 (Personal) | Any licensed user can create personal agents; no approval required; authentication recommended but not enforced | Low risk, no customer data |
| Zone 2 (Team) | Security group membership required; manager approval before production; authentication required with "Require users to sign in" enabled; sharing restricted to security groups | Internal data access requires accountability |
| Zone 3 (Enterprise) | Strict group membership; Governance Committee + Legal review; quarterly certification; authentication required with "Require users to sign in" enabled; sharing restricted to named security groups only; generative AI publishing disabled until governance review complete | Customer-facing, regulatory examination risk |

Roles & Responsibilities

| Role | Responsibility |
| --- | --- |
| AI Administrator (preferred — MC1041454, Mar 2025) | Day-to-day agent governance: configure agent-publishing controls, review M365 Admin Center agent inventory, manage block/allow decisions. Use this role in preference to Global Admin for ongoing operations. |
| Power Platform Admin | Configure environment security roles (PPAC), Managed Environment sharing limits, Copilot Studio data policies, tenant-level Copilot Studio author settings |
| Dataverse System Admin (per-environment) | Assign Dataverse security roles (Environment Maker, Copilot Author) on Dataverse-backed environments — the PowerShell *-AdminPowerAppEnvironmentRoleAssignment cmdlets do not work on Dataverse environments; assignment must be made via PPAC or the Dataverse API |
| Entra Global Admin | Initial setup and consent only — create the security groups (FSI-Agent-Makers-*, FSI-Agent-Publishers-Prod, FSI-Agent-Approvers-Compliance); thereafter delegate group ownership to a security-group owner and operate under AI Administrator |
| Compliance Officer | Approve production publishes; quarterly access review of FSI-Agent-Publishers-Prod; review audit logs |
| AI Governance Lead | Define approval workflow, governance-tier requirements, exception handling |
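The Compliance Officer's quarterly access review can be supported with a membership export. A sketch using Microsoft Graph PowerShell, assuming the group exists under the exact display name used in this control:

```powershell
# Sketch: export FSI-Agent-Publishers-Prod membership for the quarterly access review.
Import-Module Microsoft.Graph.Groups
Connect-MgGraph -Scopes 'Group.Read.All'

$group = Get-MgGroup -Filter "displayName eq 'FSI-Agent-Publishers-Prod'"
Get-MgGroupMember -GroupId $group.Id -All |
    ForEach-Object { $_.AdditionalProperties['userPrincipalName'] } |
    Set-Content -Path ".\publishers-prod-review-$(Get-Date -Format yyyy-MM-dd).txt"
```

Retaining the dated export alongside the reviewer's sign-off produces the recurring access-review evidence examiners typically request.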

Control Relationship

| Control | Relationship |
| --- | --- |
| 2.1 - Managed Environments | Enables sharing restrictions and governance features |
| 1.2 - Agent Registry | Tracks all published agents |
| 1.7 - Audit Logging | Logs all publishing attempts |
| 2.3 - Change Management | Approval workflow for promotions |

Implementation Playbooks

Step-by-Step Implementation

Detailed playbooks for this control cover implementation, automation, testing, and troubleshooting.

Advanced Implementation: Configuration Hardening Baseline

This control is covered by the Configuration Hardening Baseline, which consolidates SSPM-detectable settings across all 7 mapped controls into a single reviewable checklist with automation classification and evidence export procedures.

Advanced Implementation: Unrestricted Agent Sharing Detector

For continuous detection of overly permissive agent sharing configurations, see the Unrestricted Agent Sharing Detector. This solution scans all Copilot Studio agents for organization-wide sharing, public internet links, unapproved groups, excessive individual shares, and cross-tenant access — with automated approval-based remediation and exception management.

Governance Script: Agent Authentication Enforcement

Test-AgentAuthConfiguration.ps1 validates per-agent authentication configuration against 6 SSPM items with zone-based logic. Checks authentication mode, sign-in enforcement, timing settings, sharing scope, AI feature publishing, and agent approval status — with drift detection and SHA-256 evidence export.

Script Location: scripts/governance/Test-AgentAuthConfiguration.ps1

Governance Script: Publishing Restriction Validation

restrict-agent-publishing.ps1 validates 6 publishing restriction criteria: Environment Maker role removal, authorized security groups, Share with Everyone disabled, DLP connector blocking, Managed Environment sharing limits, and approval workflow status — with SHA-256 evidence export for audit readiness.

Script Location: scripts/governance/restrict-agent-publishing.ps1


Verification Criteria

Confirm control effectiveness by verifying:

  1. Non-authorized users cannot create or publish agents (test with non-member account)
  2. Authorized users can create agents in designated environments
  3. Production publishing requires membership in FSI-Agent-Publishers-Prod
  4. All publish events appear in Microsoft Purview Audit logs
  5. Sharing restrictions block "Share with Everyone" attempts
  6. No Copilot Studio agents are configured with "No Authentication" (Copilot Studio > Agent > Settings > Security)
  7. Agents using manual authentication have "Require users to sign in" enabled
  8. No agents are shared with unrestricted access ("Anyone" or "Any multi-tenant")
  9. Generative AI agent publishing is disabled at tenant level or governance review is documented
  10. Unapproved agents are blocked in M365 Admin Center agent inventory
  11. Each production agent's Protection status is reviewed in Copilot Studio; any "Needs review" status is resolved before publish approval
  12. Audit log retention is configured to meet firm-specific obligations (commonly 6 years per FINRA 4511(b) and SEC 17a-4(b)(4)). Important: Microsoft Purview Audit (Standard) retains audit logs for 180 days; Audit (Premium) defaults to 1 year. Retention beyond 1 year requires an explicit Audit Retention Policy in Purview (configured separately under Purview > Audit > Audit retention policies). Verify the active retention policy covers the required period and includes the BotUpdateOperation-BotPublish operation
  13. Publishing records (audit logs, approval records, group membership changes) are searchable and exportable via Microsoft Purview eDiscovery for regulatory examination readiness
  14. All agent environments are provisioned in US regions with documented data residency confirmation
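Verification item 4 can be spot-checked from Exchange Online PowerShell. A sketch only: the `BotPublish` operation name follows this control's event mapping — confirm the exact operation string returned in your tenant before relying on it for evidence.

```powershell
# Sketch: confirm publish events are landing in the unified audit log.
# Requires the ExchangeOnlineManagement module.
Connect-ExchangeOnline

$results = Search-UnifiedAuditLog `
    -StartDate (Get-Date).AddDays(-30) `
    -EndDate   (Get-Date) `
    -Operations 'BotPublish' `
    -ResultSize 1000

if (-not $results) {
    Write-Warning 'No BotPublish events in the last 30 days - verify auditing is enabled.'
}
$results | Select-Object CreationDate, UserIds, Operations |
    Export-Csv -Path '.\botpublish-evidence.csv' -NoTypeInformation
```

Note that search results are bounded by the active retention policy (see verification item 12); an empty result may indicate expired retention rather than absent publishing activity.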

Additional Resources

Agent Essentials

Note: Agent governance features in the M365 Admin Center are rolling out progressively. Verify feature availability in your tenant.


Implementation Note

Organizations should verify that their implementation meets their specific regulatory obligations. This control supports compliance efforts but requires proper configuration and ongoing validation.

Updated: April 2026 | Version: v1.4.0 | UI Verification Status: Current (April 2026)