In This Article
- The Cost of NOT Shipping Copilot
- Five Microsoft 365 Controls That Turn Shadow AI Into Governed AI
- What FFIEC, OCC, NCUA, and FDIC Examiners Actually Check in 2026
- Control 1: Microsoft Entra ID Conditional Access
- Control 2: Microsoft Purview Audit
- Control 3: Microsoft Purview DLP for Copilot
- Control 4: Microsoft Defender for Cloud Apps
- Control 5: Microsoft Intune
- The Audit Log Is the Receipt
- What Comes Next
- Frequently Asked Questions
Examiners are reading the audit log, not the policy binder. A "no AI" policy in a tenant where Copilot is already running draws the finding. Five Microsoft 365 controls turn that exposure into a clean exam: Conditional Access, Purview Audit, Purview DLP for Copilot, Defender for Cloud Apps, and Intune. This is the step you do not skip before Copilot goes live.
If you are a CISO or IT director at a community bank, credit union, or mortgage lender preparing to deploy Microsoft 365 Copilot Business, the productivity case has already won the room. Your CFO has run the math. Your business heads want it on every desk. The question that lands in your inbox is the one the auditor will ask first: before we flip Copilot on, what does the FFIEC, OCC, NCUA, or FDIC examiner expect to see?
The honest answer is shorter than the binder makes it sound. Five Microsoft 365 controls. Four of them you almost certainly already have configured for other reasons. The fifth, Purview DLP for Copilot, is specific to how Copilot reads tenant data, and configuring it is the difference between a clean Copilot rollout and one that gets paused mid-deployment because an examiner walked in.
This article walks through each of the five controls, what Microsoft Learn documents about how they apply to Copilot, what changed in April 2026 when the OCC, Federal Reserve, and FDIC issued joint guidance on AI governance, and what examiners are actually reading from your tenant when they review AI risk. The five-control framework is the same whether the institution is a $750 million community bank in Texas, a $3 billion credit union in Washington, or a 25-person mortgage lender in Ohio. The proportionality changes; the controls do not.
The article is structured around the rollout sequence a CISO or IT director would actually follow. The first section makes the economic case for shipping Copilot rather than blocking it, because that is the conversation a CISO has with the CFO and CEO before any control discussion begins. The second section introduces the five-control framework as a single page so the rest of the article can map back to it. The third section walks through what the federal banking agencies (OCC, Federal Reserve, FDIC), the FFIEC examination framework, and the NCUA expect to see in 2026, with the relevant state-layer additions for Texas, Colorado, and New York. The five control-specific sections that follow each end by answering the same examiner question: "What does the audit log show?" That repetition is intentional, because that question is the one the examiner is going to ask.
The Cost of NOT Shipping Copilot Is Bigger Than the Cost of Shipping It Wrong
The instinct, when a new technology hits the regulator radar, is to delay. Wait for clearer guidance. Write a "no AI" policy. Block the URL. Let the bigger banks go first.
That instinct is now the trap. The FDIC's 2026 IT examination playbook restructured around five integrated domains, and one of the first findings examiners write up is when documented institution policy diverges from actual tenant practice. A "no AI" policy on file while loan officers are pasting borrower scenarios into a personal ChatGPT tab is a finding waiting to happen. Cyberhaven's 2025 financial-services shadow AI research found that 64% of sensitive AI usage across financial-services workplaces is happening through unsanctioned channels, and Harmonic's Q2 2025 study clocked 45.4% of sensitive workplace AI traffic flowing through personal accounts. Your team is already using AI. The question is whether they are using AI you can audit, or AI you cannot.
The productivity payoff makes this even more pointed. Microsoft's Work Trend Index found knowledge workers handle roughly 117 emails per day. Forrester's July 2025 Total Economic Impact study of Microsoft Teams plus Copilot calculated $58.8 million in net benefits and 243% return on investment in the medium case over three years. A community bank that delays Copilot another full quarter while writing a "no AI" policy is not protecting itself; it is paying twice. Once in lost productivity. Once in an exam finding when the auditor pulls the log and sees the shadow traffic anyway.
The pattern repeats across institutions ABT works with. A credit union waits two quarters to ship Copilot because the compliance team is drafting a policy that does not need to be drafted from scratch; in the same two quarters, the lending team has been pasting borrower context into a personal Claude tab to draft follow-up emails. A community bank blocks the Copilot URL at the firewall; in the same week, three loan officers start using a consumer ChatGPT app on personal phones because the work is still the work, the AI assist is too useful to walk away from, and the firewall block does not reach the phone. A mortgage lender's CISO sets a "let's wait for clearer guidance" position; the guidance lands on April 17, 2026 and the lender is starting from zero on a deployment that should have shipped six months earlier. The delay does not create safety; it creates a longer window during which AI is in the workflow anyway, ungoverned, without the sanctioned Copilot that could have replaced it.
The right diagnostic question for the audit committee is not "how do we make sure AI is safe before we deploy it." The right question is "how do we replace the AI usage already happening in our tenant with AI that we can govern." That is the question the five-control framework answers. It is also the question that lets the CISO and IT director ship a productive tool to their team within a known governance posture, rather than spending the next two quarters proving the negative on a deployment that will eventually happen anyway.
Why This Matters for Financial Institutions
On April 17, 2026, the OCC, Federal Reserve, and FDIC jointly issued revised model risk management guidance that explicitly excludes generative and agentic AI from MRM scope but requires broader enterprise risk governance. Translation: Microsoft 365 Copilot is not a "model" in the MRM sense, so you do not need to validate it like you would a credit-scoring model. But you must govern it under your enterprise risk, vendor management, and information security frameworks. That is the lane the five controls below sit in.
The reframe a CISO needs in the boardroom is this: shipping Copilot well, with five controls in place, is faster, cheaper, and more defensible than blocking it and finding out later that staff routed around the block. The five controls are not heavy lifts. Three of them are clicks inside the Microsoft 365 admin centers. Two of them are policies you already author for other reasons. The work is making sure the policies actually cover the Copilot interaction surface, not the work of inventing new controls.
A second economic point lands when the CFO asks about the shadow AI cost-recovery math. Most financial institutions have at least some portion of staff already paying for consumer ChatGPT Plus or Claude Pro on personal credit cards (the BYOAI rate across knowledge workers sits around 81% per recent UpGuard research). Those personal subscriptions cost the institution nothing on the books and everything on the data-boundary side. When the institution deploys Microsoft 365 Copilot Business at the $10 ABT-channel incremental price through June 30, 2026, the personal subscriptions can be retired because the tenant-bound Copilot now does the same work with proper governance. The math often shows the deployment paying for itself inside the first 60 days from shadow-tool retirement alone, before the productivity gains land on the timesheet. The five controls in this article are what make the retirement defensible: the personal subscriptions go away because the tenant version is governed, not just cheaper.
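The arithmetic is simple enough to sanity-check in a few lines. A minimal sketch, with hypothetical seat counts and the assumption that the retired consumer subscriptions were being expensed or reimbursed rather than absorbed personally:

```python
# Illustrative payback math only. Seat counts, the $20/month consumer tier,
# and the reimbursement assumption are hypotheticals, not ABT or Microsoft
# pricing commitments; the $10 incremental figure is the ABT-channel price
# cited above, valid through June 30, 2026.
copilot_seats = 25
copilot_incremental = 10.00     # $/seat/month incremental over Business Premium
retired_subscriptions = 20      # staff moving off personal ChatGPT Plus / Claude Pro
consumer_tier = 20.00           # $/user/month, typical consumer AI subscription

monthly_spend = copilot_seats * copilot_incremental        # 250.00
monthly_recovered = retired_subscriptions * consumer_tier  # 400.00
print(f"Net monthly delta: ${monthly_recovered - monthly_spend:+,.2f}")
# A positive delta means shadow-tool retirement alone funds the deployment
# from month one, before any productivity gain lands on the timesheet.
```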
One more framing point matters for the boardroom conversation. The five-control framework is not "Microsoft-only" governance. It is governance that happens inside Microsoft 365 because that is where Copilot reads data, the same way a credit-decisioning model's governance happens inside the loan origination system because that is where the model reads data. An institution that already governs Microsoft 365 to the FFIEC baseline already has four of the five controls; the fifth (Purview DLP for Copilot specifically) is a configuration extension on top of an existing DLP program. The audit committee question is not "should we build Microsoft-specific AI governance," it is "have we extended our existing Microsoft 365 governance to cover what Copilot does." The five controls are the answer to that question.
Five Microsoft 365 Controls That Turn Shadow AI Into Governed AI
Microsoft has spent the last 24 months publishing the technical documentation for how Copilot interacts with each of these controls. The configuration is no longer guesswork. The framework an examiner expects to see, mapped to the Microsoft surface that does the work, looks like this.
Before walking through each control individually, it helps to position the framework against the shadow AI problem it is solving. The Cyberhaven and Harmonic figures cited above (64% of sensitive financial-services AI usage through unsanctioned channels; 45.4% of sensitive workplace AI traffic through personal accounts), together with UpGuard's finding that roughly 81% of knowledge workers use AI in some workflow capacity, mean the traffic is already in the institution's network. The five-control framework does not prevent users from wanting AI assistance; it gives them an AI assistant that is governed and replaces the shadow tool the user was reaching for anyway.
The five-control framework reads top to bottom as a rollout sequence. Conditional Access decides who can reach Copilot and from what device. Purview Audit logs every Copilot interaction so the rest is provable. Purview DLP for Copilot keeps sensitive content out of prompts and out of grounding. Defender for Cloud Apps surfaces the shadow AI traffic that Conditional Access did not catch. Intune enforces the device posture that Conditional Access requires. For the broader decision tree on which Copilot tier to buy and how the rollout fits into the institution's productivity strategy, the Microsoft 365 Copilot Business buyer's guide for financial institutions walks through the three buying paths, the consumer-versus-paid-Copilot disambiguator, and the ABT Copilot Pilot Pack offer that anchors this cluster.
If you read those five controls and recognize four of them as already present in your NIST Cybersecurity Framework 2.0 baseline, that is the point. Copilot governance is not a separate regulatory regime; it is the controls you already configured under information security, applied to the specific way Copilot reads tenant data.
The mapping to NIST CSF 2.0 reads cleanly:
- Identify (ID): the Defender for Cloud Apps Cloud Discovery report inventories the AI surface area, sanctioned and unsanctioned.
- Protect (PR): Conditional Access enforces identity and device boundaries (PR.AA, Identity Management, Authentication, and Access Control); Purview DLP for Copilot enforces the data boundary (PR.DS, Data Security); Intune enforces platform security (PR.PS, Platform Security).
- Detect (DE): Purview Audit and Defender for Cloud Apps surface anomalous Copilot activity into the SIEM (DE.CM, Continuous Monitoring).
- Respond (RS): the audit log and Cloud App Catalog enforcement actions support response playbooks.
- Recover (RC): Microsoft Purview Information Protection labels with encryption survive incident response and selective wipe scenarios.
Examiners reading the AI conversation against NIST CSF 2.0 see the five controls cover the relevant categories already.
The mapping to NIST SP 800-53 Rev. 5 also reads cleanly for the NCUA examination conversation:
- Access Control (AC) maps to Conditional Access.
- Audit and Accountability (AU) maps to Purview Audit.
- System and Communications Protection (SC) and Media Protection (MP) map to Purview DLP and Information Protection.
- Configuration Management (CM) maps to Intune.
- Risk Assessment (RA) and Supply Chain Risk Management (SR) cover the Microsoft-as-vendor risk evaluation.
The point is the same as the CSF mapping: the five Microsoft 365 controls cover the relevant 800-53 control families without inventing a new control family for AI.
The examiner is not looking for a Copilot-specific policy. They are looking for evidence that the controls in your information security program actually cover what Copilot does.
The order the controls roll out is not strictly forced (Purview Audit could be enabled before Conditional Access at most institutions, for example), but the framework sits cleanly in the order above because that is the order the examiner reads through during an AI governance conversation. The identity boundary comes first because the examiner wants to know who can reach the tool. The audit log comes second because everything else is only provable if it is logged. The content boundary comes third because the institution decides what Copilot can and cannot reason over. The shadow surface comes fourth because the examiner wants to know what is happening outside the sanctioned tool. The device posture comes fifth because it enforces the identity boundary in practice. A CISO who walks the examiner through the controls in this order is walking the examiner through the same mental model the examiner is already using.
What FFIEC, OCC, NCUA, and FDIC Examiners Actually Check in 2026
The regulator landscape changed three times in the last twelve months. Each shift moved the floor for what an examiner expects to see during an AI governance conversation.
April 17, 2026: OCC Bulletin 2026-13 plus the joint SR letter from the Federal Reserve. The OCC, Federal Reserve, and FDIC jointly issued revised model risk management guidance that explicitly excludes generative and agentic AI from MRM scope. The agencies stated that broader enterprise risk governance frameworks, not MRM, are the right home for Copilot oversight. The practical effect for a CISO: you are not building a model validation package for Copilot. You are documenting how Copilot fits into your existing vendor risk, information security, and audit programs. The five Microsoft 365 controls in this article are the evidence layer for that documentation.
The April 2026 joint guidance was significant for one specific reason. Before April 17, 2026, institutions had legitimate uncertainty about whether Copilot was inside or outside the MRM perimeter that the original 2011 SR 11-7 letter and the 2021 third-party model risk guidance established. The uncertainty was driving some institutions to delay Copilot adoption until the MRM team could finish a validation package that did not actually fit the product. The joint guidance resolved the uncertainty: generative and agentic AI is outside MRM, inside enterprise risk governance. A community bank CISO can now point at the bulletin during the audit committee meeting and confirm that the Copilot deployment is documented inside enterprise risk, vendor management, and information security, not inside MRM. The conversation moves faster.
October 6, 2025: OCC Bulletin 2025-26. The OCC told community banks under $30 billion in assets they have flexibility in model risk practices and will not be criticized solely for the frequency or scope of validation they reasonably determine. For a community bank deploying Copilot, the bulletin is the practical permission slip to scope governance proportionally. You are not running the validation playbook a $250 billion regional runs. You are documenting that Copilot's risks are bounded, that the bounds are enforced by the five controls, and that the controls are evidenced in your audit log.
OCC 2025-26 also addresses the question of risk-based proportionality in a way that directly applies to Copilot deployment scope. A 30-person mortgage lender deploying Copilot to 20 staff has fundamentally different risk dynamics than a 200-person credit union deploying to 130 staff, and both differ from a 2,000-person regional bank deploying to 1,500 staff. The bulletin gives community banks the explicit framing that scope decisions should reflect the size, complexity, and risk profile of the institution. For Copilot, that means the institution scopes audit retention, DLP policy granularity, sensitivity-label coverage, and Conditional Access stringency proportionally. A small mortgage lender does not need the Audit Premium 10-year retention configuration that a regional bank on a multi-year exam cycle would run. A small mortgage lender does need Audit Standard 180-day retention enabled, DLP for Copilot in place, and Conditional Access targeting Office 365 at the same baseline as the larger institution. The proportionality applies to depth, not to whether the control exists.
NCUA AI Compliance Plan. The NCUA requires credit unions to identify, monitor, and measure AI-specific risks, document minimum risk management practices for AI, and maintain termination procedures for non-compliant high-impact AI deployments. The framework is aligned to NIST SP 800-53 Rev. 5. For a credit union, that translates directly: Conditional Access enforces the identity boundary, Purview Audit produces the monitoring evidence, DLP for Copilot enforces the measured controls, Defender for Cloud Apps surfaces unsanctioned AI, and Intune enforces the device posture. The five-control framework is NIST SP 800-53 in Microsoft 365 form factor.
The NCUA AI Compliance Plan also includes a termination-procedures requirement that maps to one specific Microsoft 365 control flow: the institution must demonstrate the ability to revoke Copilot access at the user, role, or tenant level in a documented, evidenced manner. Conditional Access supports user- and role-level revocation through policy exclusion or block-mode policies. Tenant-level revocation is the license unassignment at the Microsoft 365 admin center, which surfaces in the unified audit log as a provisioning event. The institution that has documented both flows in advance, with the audit-log evidence pre-mapped, can respond to an NCUA termination requirement inside the audit response window without operational friction.
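The license-unassignment flow can be scripted and evidenced rather than clicked through. A minimal sketch against Microsoft Graph, assuming an app registration with the User.ReadWrite.All permission; the Copilot SKU GUID is a placeholder the institution looks up in its own /subscribedSkus list:

```python
import requests

GRAPH = "https://graph.microsoft.com/v1.0"
HEADERS = {"Authorization": "Bearer <access-token>",  # acquired via MSAL
           "Content-Type": "application/json"}
COPILOT_SKU_ID = "<copilot-sku-guid>"  # placeholder; read it from /subscribedSkus

def revoke_copilot(user_id: str) -> None:
    """Unassign the Copilot license from one user. The change surfaces in the
    unified audit log as a provisioning event, which is the examiner evidence
    for the NCUA termination-procedures requirement."""
    body = {"addLicenses": [], "removeLicenses": [COPILOT_SKU_ID]}
    resp = requests.post(f"{GRAPH}/users/{user_id}/assignLicense",
                         headers=HEADERS, json=body, timeout=30)
    resp.raise_for_status()
```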
FFIEC IT Examination Handbook. The handbook is the working document examiners use across the federal banking agencies. The relevant booklets for Copilot governance include the Information Security booklet (covering identity, access, encryption, sensitivity labeling, DLP), the Architecture, Infrastructure, and Operations booklet (covering cloud configuration, vendor management, change control), and the Management booklet (covering governance framework, risk assessment, audit program). The handbook does not have a dedicated AI or Copilot booklet, and the April 17, 2026 joint guidance confirms that one is not coming in the near term because AI sits inside enterprise risk governance. The CISO preparing for an FFIEC-aligned examination maps Copilot controls to the existing booklets, not to a new one.
FDIC 2026 IT exam restructure. The FDIC restructured its IT examination process in 2026 around five integrated domains: governance, cybersecurity, business continuity, vendor management, and audit. AI governance is an explicit examination discussion point inside the governance domain. The restructure is consequential because it embedded AI governance into the existing examination process rather than creating a separate AI examination. For an institution preparing for an FDIC IT exam in 2026, the AI conversation happens inside the governance domain interview, with the controls evidence pulled from cybersecurity, vendor management, and audit. The five-control framework is the structural answer the FDIC examiner is looking for during the governance domain conversation.
State Layer (2026)
Texas Department of Banking Industry Notice 2025-01 (January 24, 2025) reinforces the floor: MFA on all access, asset inventory, 30-day patching for critical vulnerabilities, integration of CISA, FS-ISAC, and InfraGard threat intelligence. The notice does not name AI specifically, but every one of its controls is foundational to Copilot governance. Texas TRAIGA took effect January 1, 2026 and creates a safe harbor for financial institutions already examined under federal prudential guidance. Colorado SB24-205 enforcement is stayed after a federal court granted a TRO on April 27, 2026; legislators proposed a rewrite pushing the effective date to January 1, 2027. NY DFS requires MFA for all user access to all information systems as of November 1, 2025, and recommends deepfake-resistant authentication factors. October 2025 supplemental guidance adds AI-specific contractual clauses for third-party providers.
The takeaway across federal and state guidance: the regulators are not waiting for a Copilot-specific exam playbook to write findings. They are reading the existing information security, vendor risk, and governance frameworks against the new AI surface area. The five Microsoft 365 controls below are how Microsoft 365 Copilot fits cleanly into the existing frameworks.
Two additional regulatory threads sit just below the surface of the exam conversation. The first is how the FFIEC handbook booklets above absorb the AI conversation in practice. An examiner reading AI against the existing handbook is reading information security controls (Conditional Access, MFA, sensitivity labels, DLP), infrastructure controls (Defender for Cloud Apps shadow IT discovery, Intune device compliance), and management controls (governance framework, change management, audit program, third-party risk assessment for Microsoft as the Copilot processor).
The second thread is the CFPB's evolving guidance on AI-driven adverse-action communications and on AI use in mortgage underwriting more broadly. The CFPB has issued circulars and supervisory observations confirming that AI-generated adverse-action notices must include the specific reasons for adverse action with the same fidelity a non-AI-generated notice would carry. The downstream implication for Copilot deployment at a mortgage lender or credit union: Copilot can assist with first drafts of adverse-action notices, but the underwriter retains authorship and accountability for the final notice. Copilot is in the loop as a productivity tool; it is not the decision-maker. The same pattern applies for any borrower-facing communication that touches a credit decision. The five controls in this article do not enforce that authorship boundary on their own; the institution enforces it through workflow design and through the audit log evidencing who clicked send on the final notice.
The ECOA Authorship Boundary
Under ECOA 12 CFR §1002.9, adverse-action notices must state the specific principal reasons for the action within 30 days, subject to fair-lending review. Microsoft 365 Copilot can produce a first draft of an adverse-action narrative based on the underwriter's notes and the AUS findings, but the final notice must be authored and reviewed by the underwriter or designated credit professional. Treating Copilot as the final author of an adverse-action notice creates a fair-lending exposure that the five-control framework does not address; the boundary is enforced through underwriting workflow, training, and audit trail. The same boundary applies to any communication that delivers a credit decision to a consumer.
Control 1: Microsoft Entra ID Conditional Access
Microsoft Entra ID Conditional Access is the identity boundary around Copilot. It decides which users can reach Copilot, from what devices, from what networks, with what authentication strength, and in what session shape. The configuration is the most important Copilot governance click your administrator will make. Conditional Access also sits at the top of the five-control framework because it gates the rest. A user who cannot reach the tenant cannot reach Copilot, cannot generate a CopilotInteraction event in the audit log, cannot trigger a DLP for Copilot policy, cannot have their device evaluated by Intune, and cannot show up in the Defender for Cloud Apps activity report. If Conditional Access is misconfigured at the start of the deployment, the rest of the controls do not get a chance to fire.
The detail that catches institutions off-guard: Conditional Access targets the Office 365 cloud app suite, not a separate "Copilot App" target. Copilot reads tenant data through the same Microsoft Graph endpoints as the rest of Microsoft 365. A policy that requires multi-factor authentication, managed devices, and approved networks for Office 365 access covers Copilot automatically. A policy that singles out a "Copilot App" target and leaves Office 365 wide open does not cover Copilot at all. This is documented at Microsoft Learn under Conditional Access overview and the specific guidance on blocking unmanaged Windows devices.
The minimum Conditional Access configuration for Copilot governance (a Graph-schema sketch of this baseline follows the list):
- Require MFA for all users. Aligns with NY DFS November 1, 2025 effective date and FFIEC IT Examination Handbook authentication requirements.
- Block legacy authentication. Legacy auth protocols cannot enforce MFA; allowing them is a back door around the rest of the control.
- Require compliant or hybrid-joined devices for Copilot-eligible users. Compliance is evaluated by Intune (control 5 below); Conditional Access enforces the result.
- Restrict by location where appropriate. Block authentication from countries the institution does not operate in; reduce attack surface from credential stuffing.
- Enforce sign-in frequency. Twelve-hour or twenty-four-hour reauthentication for Copilot-eligible roles aligns with FFIEC session management guidance.
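Expressed in the Microsoft Graph conditionalAccessPolicy schema, the baseline above looks roughly like the following sketch. The display name and break-glass exclusion are placeholders, and the report-only state matches the staging advice later in this section:

```python
import requests

GRAPH = "https://graph.microsoft.com/v1.0"
HEADERS = {"Authorization": "Bearer <access-token>",  # Policy.ReadWrite.ConditionalAccess
           "Content-Type": "application/json"}

policy = {
    "displayName": "Copilot baseline - MFA + compliant device (placeholder name)",
    "state": "enabledForReportingButNotEnforced",  # stage in report-only first
    "conditions": {
        "applications": {"includeApplications": ["Office365"]},  # the suite target
        "users": {"includeUsers": ["All"],
                  "excludeUsers": ["<break-glass-account-id>"]},  # placeholder
        "clientAppTypes": ["all"],
    },
    "grantControls": {"operator": "AND",
                      "builtInControls": ["mfa", "compliantDevice"]},
}
resp = requests.post(f"{GRAPH}/identity/conditionalAccess/policies",
                     headers=HEADERS, json=policy, timeout=30)
resp.raise_for_status()
```

Sign-in frequency and location conditions from the list layer onto the same policy object (the sessionControls and locations properties); they are omitted here to keep the sketch at the minimum shape.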
The control evidence an examiner will ask for: the Conditional Access policy export showing Office 365 as a target, the Entra ID sign-in logs (including report-only results) showing the policies fired for Copilot sign-ins, and the exception report showing which users (if any) are excluded and why.
For institutions with a complex authentication footprint (multiple identity providers, federation with legacy AD FS for some apps, B2B guest users from partner institutions), the Conditional Access policy review for Copilot should include a coverage check against each authentication path. A user who signs in through a federated path that bypasses Conditional Access on the home tenant cannot be brought back under Conditional Access at the resource tenant; the gap exists at the federation trust and needs to be closed at that layer. Microsoft Entra ID supports cross-tenant access settings that enforce inbound MFA and device compliance for B2B guests, which is the control layer for federation paths. The same review applies to legacy auth protocols (POP, IMAP, MAPI/HTTP without modern auth), which Conditional Access cannot enforce against; the only safe control is to disable legacy auth at the tenant level, which Microsoft has been pushing customers toward for several years and which is now the default for new tenants.
One configuration nuance worth surfacing for the deployment team. Conditional Access policies should be staged in report-only mode for at least a week before enforcement, then promoted to grant or block. Report-only mode logs what the policy would have decided without actually denying access, which lets the institution catch over-broad block conditions before they cause a help-desk surge. The Entra ID sign-in log surfaces the report-only outcomes in a dedicated column, and examiners reading the rollout history can see the staging timeline directly from those logs. A Copilot rollout that goes straight from off to full block without report-only staging is a rollout that will generate help-desk tickets the institution did not need to generate, and the examiner will read the help-desk ticket volume as a measure of operational maturity.
The second nuance is the Copilot sign-in flow itself. When a user invokes Copilot inside Word, Excel, Outlook, or Teams, the authentication token used to call Microsoft Graph is the same Office 365 token the user already holds for that application session. Copilot does not negotiate a separate authentication. That is why targeting Office 365 covers the surface and targeting a hypothetical "Copilot App" does not. For a CISO reviewing the policy export before deployment, the audit-evidence question is: does the Office 365 cloud app target in our Conditional Access policy include the suite of services Copilot reads from? Exchange Online, SharePoint Online, OneDrive for Business, Microsoft Teams, and the underlying Microsoft Graph endpoint. The answer should be yes by default in any properly configured tenant, but verifying the policy scope before Copilot deployment is a five-minute check that prevents a finding.
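That five-minute check can also be scripted so it runs before every deployment wave. A sketch, assuming a token with Policy.Read.All; the test is deliberately narrow (an enabled policy targeting Office 365 or all apps that requires MFA), and an institution would extend it to device-compliance grants and exclusion-list review:

```python
import requests

GRAPH = "https://graph.microsoft.com/v1.0"
HEADERS = {"Authorization": "Bearer <access-token>"}

resp = requests.get(f"{GRAPH}/identity/conditionalAccess/policies",
                    headers=HEADERS, timeout=30)
resp.raise_for_status()

def gates_copilot(policy: dict) -> bool:
    """True if the policy is enforced, targets the Office 365 suite (or all
    apps), and requires MFA - the minimum shape that covers Copilot."""
    apps = policy.get("conditions", {}).get("applications") or {}
    targets = apps.get("includeApplications") or []
    grants = (policy.get("grantControls") or {}).get("builtInControls") or []
    return (policy.get("state") == "enabled"
            and ("All" in targets or "Office365" in targets)
            and "mfa" in grants)

covering = [p["displayName"] for p in resp.json()["value"] if gates_copilot(p)]
print("Policies covering the Copilot surface:", covering or "NONE - close this gap")
```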
Control 2: Microsoft Purview Audit
Purview Audit is the foundation control because everything else is only provable if it is logged. Conditional Access decisions, DLP policy hits, Defender for Cloud Apps discoveries, and Intune compliance evaluations all flow through the unified audit log. If audit is not enabled, the institution cannot prove the rest of the program existed.
For Copilot specifically, Purview Audit captures three event types worth knowing by name. CopilotInteraction logs each Copilot prompt with the user identity, timestamp, files and resources accessed, sensitivity labels touched, and the Copilot capability invoked (Word draft, Excel summary, Teams recap, etc.). ConnectedAIAppInteraction logs interactions with connected AI apps inside Microsoft 365. AIAppInteraction covers the broader AI-app surface area. Administrative activity is logged separately under events like UpdateTenantSettings, CreatePlugin, and EnablePromptBook, which let an examiner see who changed Copilot policy and when.
The CopilotInteraction event schema is rich enough to answer most of the audit questions an examiner can credibly ask. The event captures who initiated the Copilot interaction, the timestamp at second-level resolution, the application surface the interaction occurred on (Word, Excel, PowerPoint, Outlook, Teams, OneNote, the Microsoft 365 chat, or a Copilot Studio agent), the documents and resources Copilot grounded on while producing the response, the sensitivity labels assigned to those resources, the URI of the Copilot Studio agent invoked if one was, and the broader workload context. The event does not capture the literal text of the prompt or the response (those are filtered out for privacy reasons by default), but the metadata is sufficient to reconstruct the activity pattern. An examiner asking "what did Copilot do for Sandra in Lending over the last 30 days" gets back a structured event list, not a free-form text dump.
For institutions that need to capture prompt and response text for specific use cases (regulated communication review, legal hold, internal investigations), Microsoft Purview Communication Compliance and the Customer Lockbox features cover those scenarios, layered on top of the standard CopilotInteraction event. The default privacy posture is to not retain prompt and response text; the institution explicitly opts in to retention where the business case requires it. Examiners are aware of this default and do not generally ask for prompt/response text; they ask for the structured event list, which is what the standard configuration produces.
Retention math you need to know
Standard Purview Audit retention is 180 days, raised from 90 days in October 2023. That covers the typical two-quarter look-back examiners use. Audit Premium extends retention to 1 year by default and up to 10 years with the long-term retention add-on, with the option to set custom retention policies per record type. For an institution preparing for a multi-year examination cycle or a litigation hold, Audit Premium is the configuration the auditor will ask about. Microsoft Learn documents the retention math under audit-copilot and audit-log-retention-policies.
The control evidence an examiner will ask for: confirmation that Purview Audit is enabled at the tenant level, the search query showing 30 to 90 days of CopilotInteraction events, the retention policy applied to those events, and the export procedure documented for examiner response. If the auditor asks "show me 30 days of Copilot activity from Sandra in Lending, including which loan files she summarized" and the institution cannot produce that report inside a reasonable response window, the auditor's next question is about the audit program, not about Copilot.
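The "Sandra in Lending" request maps to a single asynchronous query against the Microsoft Graph Audit Log Query API. The sketch below reflects our reading of the current schema; the copilotInteraction record-type value, the required Graph permission, and the example UPN are assumptions to verify against Microsoft Learn before an exam:

```python
import time
import requests
from datetime import datetime, timedelta, timezone

GRAPH = "https://graph.microsoft.com/v1.0"
HEADERS = {"Authorization": "Bearer <access-token>",
           "Content-Type": "application/json"}

end = datetime.now(timezone.utc)
query = {
    "displayName": "Copilot activity - Lending - 30 days",
    "filterStartDateTime": (end - timedelta(days=30)).isoformat(),
    "filterEndDateTime": end.isoformat(),
    "recordTypeFilters": ["copilotInteraction"],            # assumed enum value
    "userPrincipalNameFilters": ["sandra@lender.example"],  # hypothetical UPN
}
created = requests.post(f"{GRAPH}/security/auditLog/queries",
                        headers=HEADERS, json=query, timeout=30)
created.raise_for_status()
query_id = created.json()["id"]

# The search runs asynchronously; poll until it completes.
while requests.get(f"{GRAPH}/security/auditLog/queries/{query_id}",
                   headers=HEADERS, timeout=30).json().get("status") != "succeeded":
    time.sleep(30)

records = requests.get(f"{GRAPH}/security/auditLog/queries/{query_id}/records",
                       headers=HEADERS, timeout=30).json()
for rec in records.get("value", []):
    print(rec.get("createdDateTime"), rec.get("userPrincipalName"),
          rec.get("operation"))
```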
Two operational considerations for the audit team. First, Purview Audit ingestion latency is typically 30 minutes for most event types and up to 24 hours for some, which is well inside the response windows for routine examiner requests. An institution preparing for a multi-day on-site exam should pre-run the standard audit queries the day before the exam begins so the audit data is already cached in the search results panel and exportable in seconds. Second, the audit search results panel has a 50,000-row export limit per search. For a large institution with high Copilot interaction volume, queries should be scoped to a specific role group, date range, or event type to stay under the limit. The Microsoft Purview audit log search API supports programmatic pagination for institutions that need broader exports for litigation hold or internal investigation.
The third consideration is the audit log SIEM integration. Most financial institutions stream the Microsoft Purview unified audit log to a SIEM (Microsoft Sentinel, Splunk, IBM QRadar, or similar) for correlation with other identity, endpoint, and network signals. For Copilot specifically, the SIEM rule library should include detections for anomalous CopilotInteraction volume per user, CopilotInteraction events on high-sensitivity-label content, and administrative events that modify Copilot configuration without an approved change ticket. The detections do not replace Purview Audit as the foundation control; they layer on top to surface the events the audit team actually needs to investigate.
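The detections themselves are usually written in the SIEM's own query language, but the volume rule reduces to simple statistics. A language-neutral sketch of the per-user volume check, over records exported as in the previous example; the three-sigma threshold is an arbitrary starting point to tune:

```python
from collections import Counter
from statistics import mean, stdev

def flag_anomalous_users(events: list[dict], sigma: float = 3.0) -> list[str]:
    """Flag users whose CopilotInteraction count over the window sits more
    than `sigma` standard deviations above the population mean."""
    counts = Counter(e["userPrincipalName"] for e in events)
    if len(counts) < 2:
        return []  # not enough users to establish a baseline
    mu, sd = mean(counts.values()), stdev(counts.values())
    return [user for user, n in counts.items() if sd and n > mu + sigma * sd]
```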
Control 3: Microsoft Purview DLP for Copilot
Purview Data Loss Prevention for Copilot is generally available for the Microsoft 365 Copilot and Copilot Chat location. The control sits in the prompt processing path and the grounding path, which is the unique Copilot-specific configuration this article exists to call out. DLP for Copilot is the control most institutions did not have in place a year ago because the product feature itself reached general availability in early 2026. For institutions that already have a mature DLP program for email and SharePoint, the configuration extension to add Copilot as a location is short. For institutions that have only minimal DLP coverage, the Copilot deployment is a useful forcing function to mature the underlying program.
Three behaviors a CISO needs to understand. First, DLP for Copilot blocks prompt processing when a sensitive information type (SIT) is detected in the prompt itself. A loan officer who tries to paste a borrower's full Social Security Number into a Copilot prompt hits the DLP block; the prompt does not process. Second, DLP for Copilot restricts web (Bing) searches separately, so a user prompt that would otherwise send sensitive content to Bing for grounding is blocked at that hop. Third, DLP for Copilot prevents Copilot from grounding on files and emails carrying restricted sensitivity labels. The citation will still show that a relevant document exists, but the content is not summarized into the Copilot response. The label-based restriction applies to emails sent on or after January 1, 2025 (the cutover date Microsoft published as the start of full label-based Copilot grounding enforcement).
The configuration map for a financial institution looks like this:
- SIT-based prompt blocks for NPI patterns. Social Security Numbers, full credit card numbers, driver's license numbers, account numbers paired with PII. The Microsoft Purview SIT library ships with the common financial-services patterns; the institution tunes the confidence thresholds and adds custom SITs for institution-specific identifiers (the sketch after this list illustrates the confidence logic a SIT encodes).
- Sensitivity-label gating on confidential files. Customer records, audit work-papers, board materials, examiner correspondence, internal investigations, and HR matters get the labels that exclude them from Copilot grounding.
- Web-search restriction for sensitive prompts. Bing grounding is disabled for prompts that would otherwise leak sensitive content outside the tenant.
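Purview SITs are defined in the compliance portal, not in institution code, but the confidence logic they encode is worth internalizing before tuning thresholds. A local illustration only; the regex and keyword list are simplified stand-ins for the real SSN SIT definition:

```python
import re

# Simplified stand-ins for the shipped SSN SIT: a primary pattern plus
# corroborating keywords that raise the confidence tier when found nearby.
SSN_PATTERN = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")
KEYWORDS = ("ssn", "social security", "soc sec")

def sit_confidence(prompt: str) -> str:
    """Return 'high', 'medium', or 'none' in the spirit of SIT confidence tiers."""
    if not SSN_PATTERN.search(prompt):
        return "none"
    return "high" if any(k in prompt.lower() for k in KEYWORDS) else "medium"

print(sit_confidence("Borrower SSN 123-45-6789, draft the follow-up email"))  # high
```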
The control evidence an examiner will ask for: the DLP policy summary showing Microsoft 365 Copilot as a configured location, the policy match report showing which prompt blocks fired over the prior 30 to 90 days, and the sensitivity-label coverage report showing what percentage of confidential content is labeled.
One DLP-specific subtlety worth flagging for the compliance team. Purview DLP for Copilot interacts with Microsoft Purview Information Protection sensitivity labels in two directions. The first direction is grounding: when Copilot is composing a response, the DLP policy evaluates each candidate grounding source against the sensitivity label assigned to that source. If the source is labeled "Confidential - NPI" with a Copilot exclusion rule applied, Copilot will show the citation that the source exists but will not pull the content into the response. The second direction is output labeling: when Copilot generates content from a grounding source carrying a sensitivity label, the generated content can inherit the label automatically. The configuration for label inheritance is at the Information Protection label setting, not the DLP rule. For a financial institution that has not yet rolled out sensitivity labels at scale, the label-rollout work is sometimes the longest lead time inside a Copilot deployment, because labels need to land on the documents before DLP for Copilot has anything to gate against. The pragmatic sequence most institutions follow is: enable Purview Audit, configure Conditional Access, enable DLP for Copilot with SIT-based prompt blocks first, then layer in sensitivity-label-based grounding restrictions as labels reach coverage on confidential content over the following 60 to 90 days.
For institutions that have already invested in Microsoft Purview Information Protection labels (most ABT customers have at the Tier-1 CSP relationship level), the DLP-for-Copilot configuration is a 30-minute exercise rather than a quarter-long project. The labels exist; the DLP rule simply maps Copilot as a new location that respects them.
Get the examiner-ready Copilot audit before you deploy
ABT runs a 90-minute examiner-ready Copilot audit that maps your current Microsoft 365 governance configuration against the five-control framework above, identifies the gaps, and produces a written gap report you can hand to your auditor. Free for ABT Tier-1 Microsoft Cloud Solution Provider customers through June 30, 2026.
Schedule your Copilot examiner-ready audit · Run the Security Grade scan first
Control 4: Microsoft Defender for Cloud Apps
Defender for Cloud Apps is the shadow AI control. Conditional Access and DLP enforce the rules inside Microsoft 365 Copilot; Defender for Cloud Apps surfaces the AI traffic that did not route through Microsoft 365 Copilot in the first place. For an institution that has just stood up Microsoft 365 Copilot Business, Defender for Cloud Apps answers the operational question the audit committee will ask in the first quarterly review: are people still using the unsanctioned AI tools, or did the sanctioned Copilot deployment actually replace them?
The mechanism is the Cloud App Catalog, which Microsoft maintains across 31,000+ apps, including a dedicated Generative AI category with 1,000+ apps. Defender for Cloud Apps ingests traffic data from Microsoft Defender for Endpoint on managed devices and from on-premises log collectors for network egress. It compares that traffic against the catalog and reports which AI apps the institution's users are reaching, which users are reaching them, and at what volume.
The Cloud App Catalog risk scoring covers more than 80 risk attributes per app, including the app's hosting jurisdiction, encryption posture, data residency, SOC 2 attestation status, regulatory certifications, and breach history. For an institution that needs to make a sanctioned-or-unsanctioned decision on a specific AI app the discovery report surfaces, the catalog risk score does most of the analysis work. An IT manager reviewing the discovery report can sort by risk score, see which apps fall below the institution's risk threshold, and unsanction those apps in a single bulk action. The high-risk apps the discovery report typically surfaces in financial services tenants include consumer GenAI tools without enterprise data protection commitments, image generation services that retain prompts indefinitely, and chat-based AI assistants tied to personal email addresses rather than the work account.
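The sort-by-risk-score pass lends itself to a script once the report is exported. A sketch over a hypothetical Cloud Discovery CSV export; the column names and the risk floor are assumptions to match against the institution's actual export and risk appetite:

```python
import csv

RISK_FLOOR = 6.0  # institution's minimum acceptable catalog score (0-10 scale)

with open("discovered_apps.csv", newline="", encoding="utf-8") as f:
    rows = list(csv.DictReader(f))  # assumed columns: App name, Category, Risk score

generative_ai = [r for r in rows if "generative ai" in r["Category"].lower()]
to_review = sorted((r for r in generative_ai if float(r["Risk score"]) < RISK_FLOOR),
                   key=lambda r: float(r["Risk score"]))

for app in to_review:
    print(f'{app["App name"]}: risk {app["Risk score"]} -> candidate to unsanction')
```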
For an institution that has not yet deployed Copilot, the discovery report is usually the most uncomfortable document in the room. It is also the most useful. The institution that ships Copilot the day it appears in the discovery report is shipping into a known traffic pattern; the institution that ships Copilot six months after the report sits unread is shipping while the shadow AI surface has grown.
Three configuration steps inside Defender for Cloud Apps matter for Copilot governance:
- Enable Cloud Discovery against Defender for Endpoint and any on-premises egress log collector so the catalog comparison runs against real institution traffic, not a snapshot.
- Unsanction generative AI apps in bulk that the institution does not approve. Unsanctioning marks the app as out-of-policy and shows up in user warnings and admin reports.
- Auto-block unsanctioned generative AI apps via Defender for Endpoint indicators where the institution has the device coverage to enforce. The auto-block closes the loop between discovery and enforcement.
The control evidence an examiner will ask for: the most recent Cloud Discovery report showing generative AI apps in use, the list of sanctioned and unsanctioned apps, and the enforcement evidence showing that unsanctioned traffic dropped after the auto-block rolled out.
Two practical notes from rolling out Defender for Cloud Apps shadow AI discovery at financial institutions. First, the initial Cloud Discovery report almost always includes legitimate apps the institution uses for non-AI work but that happen to have AI features now (Notion, Grammarly, Otter.ai, certain CRM and marketing tools). The discovery report does not automatically distinguish between sanctioned business use and shadow AI use; the institution makes that classification. The first pass through the report typically takes a Friday afternoon for an IT manager to sort, with periodic re-passes as the catalog grows. Second, the auto-block via Defender for Endpoint indicators only enforces on managed devices. For BYOD scenarios, the auto-block does not reach the personal device, which is why Conditional Access plus Intune app-protection policies for BYOD (control 1 and control 5) cover the gap. An institution that has full Defender for Endpoint coverage on managed devices and full Intune app-protection on personal devices has effectively closed the shadow AI loop for sanctioned Microsoft 365 users.
The shadow AI discovery report is also the single most useful artifact for the board-level AI governance conversation. A community bank CISO who walks into the audit committee meeting with the prior-quarter discovery report (sanctioned use, unsanctioned use trend line, enforcement outcomes) has the same evidence base as a CISO at a $50 billion regional. The discovery report makes AI governance concrete in a room that is otherwise reading abstract policy language. ABT's Copilot examiner-ready audit includes a first-pass run of the discovery report as a starting baseline for institutions that have not yet enabled Cloud Discovery, and the report itself often shifts the deployment timeline forward because the board sees the existing exposure.
Control 5: Microsoft Intune
Intune is the device posture control. Conditional Access decides what device posture is required for Copilot access; Intune evaluates the actual device against the policy and reports compliant or non-compliant back to Entra ID. Together with Conditional Access, Intune closes the loop on the most common Copilot governance gap: the institution requires MFA in policy, but a personal phone with an outdated operating system on an unsecured home network is still reaching the tenant because the device posture is not being enforced.
For financial institutions, the Intune configuration profile usually maps cleanly to the existing endpoint management program the institution already runs. Windows compliance policies enforce BitLocker, Defender for Endpoint enrollment, the most-recent Windows version, screen lock, and password complexity. macOS compliance policies enforce FileVault, Defender for Endpoint on Mac, and the latest macOS version. iOS and Android compliance policies enforce jailbreak/root detection, OS version, screen lock, and Defender for Endpoint on mobile. The Intune configuration is not Copilot-specific; it is the general endpoint management posture, with Conditional Access using the compliance result to gate Copilot access.
Three Intune configurations matter for Copilot governance:
- Compliance policy for Copilot-eligible users. The policy enforces operating system version, encryption, antivirus, firewall, and screen lock. A device that fails the policy is reported as non-compliant and Conditional Access denies Copilot (and the rest of Microsoft 365) access.
- App-protection policies for mobile. On personal mobile devices accessing Microsoft 365 (BYOD), app-protection policies enforce the data boundary inside the Microsoft 365 app suite without requiring full device enrollment. Copy-paste between work apps and personal apps is restricted; data inside the work app is encrypted at rest.
- Settings Catalog policy for Copilot app management. Intune ships a "Remove Microsoft Copilot App" setting that can be applied to specific device groups during phased rollouts, plus the broader Office and Microsoft 365 Apps configuration policies that govern the app behavior on Copilot-eligible devices.
The control evidence an examiner will ask for: the Intune compliance dashboard showing compliant vs. non-compliant Copilot-eligible devices, the enrollment report showing BYOD app-protection coverage, and the configuration policy export for Copilot-relevant Settings Catalog policies.
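The compliance-dashboard evidence can also be pulled programmatically for the quarterly review packet. A sketch against the managedDevices Graph resource, assuming the DeviceManagementManagedDevices.Read.All permission; scoping the result to the Copilot-eligible group is a filter the institution adds:

```python
import requests

GRAPH = "https://graph.microsoft.com/v1.0"
HEADERS = {"Authorization": "Bearer <access-token>"}

url = f"{GRAPH}/deviceManagement/managedDevices"
params = {"$filter": "complianceState eq 'noncompliant'",
          "$select": "deviceName,userPrincipalName,operatingSystem,osVersion"}

noncompliant = []
while url:  # follow @odata.nextLink paging until the result set is exhausted
    page = requests.get(url, headers=HEADERS, params=params, timeout=30).json()
    noncompliant.extend(page.get("value", []))
    url, params = page.get("@odata.nextLink"), None  # nextLink carries the query

for device in noncompliant:
    print(device["userPrincipalName"], device["deviceName"],
          device["operatingSystem"], device["osVersion"])
```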
One BYOD-specific note for institutions where loan officers and processors work outside the branch. App-protection policies in Intune apply to the Microsoft 365 apps installed on the personal device without requiring the device itself to be enrolled in Intune as a managed device. The user keeps their personal phone or tablet personal; the work apps inside the personal device honor encryption, copy-paste restrictions, conditional access, and selective wipe. For Copilot specifically, the Outlook mobile app, Teams mobile app, Word, Excel, and PowerPoint mobile apps all support the app-protection policy. The user invokes Copilot through the work app, and the conversation is bounded by the same data boundary as the rest of the Microsoft 365 work surface on that device. The pattern is materially different from a "block Copilot on mobile" approach, which generates support tickets and shadow workarounds without actually preventing the underlying data flow.
The Intune Settings Catalog has grown substantially in 2025 and 2026. The Copilot-specific configuration surface includes settings that control which Copilot capabilities are available to which user groups, how Copilot Studio agents are deployed, and how Copilot interacts with line-of-business apps inside Microsoft 365. For a 200-person credit union with a mix of branch staff, lending staff, and remote operations staff, the configuration policies can scope Copilot capabilities differently per role group. Lending staff get the full Copilot capability set; branch teller staff get a reduced set focused on Outlook and Teams summarization; operations staff get the full set plus specific Copilot Studio agents that automate internal workflows. The role-based scoping is documented in audit logs as administrative configuration events, which is what the examiner will read.
The Audit Log Is the Receipt
The closing principle behind all five controls: an examiner reads what is logged. A control that exists but is not evidenced in the audit log is a control that cannot be proven, and a control that cannot be proven becomes a finding waiting to happen.
Conditional Access decisions appear in the Entra ID sign-in logs and Conditional Access report. DLP policy hits appear in the Purview compliance portal and the unified audit log. Defender for Cloud Apps discoveries appear in the Cloud Discovery dashboard and the unified audit log. Intune compliance evaluations appear in the Intune device compliance reports and the unified audit log. Copilot interactions appear in the unified audit log under the CopilotInteraction event type. Every one of the five controls feeds back into Purview Audit, which is why audit is the foundation.
The question an examiner asks when reviewing AI governance is almost always a variant of: "Show me 30 days of Copilot interactions, the DLP blocks that fired, and the Conditional Access denials." If the institution can produce that report in a single audit search inside a reasonable response window, the rest of the conversation is about scope and proportionality. If the institution cannot produce that report, the rest of the conversation is about why audit was not enabled, and the AI conversation becomes an audit-program conversation.
The broader Microsoft 365 control plane that builds on the foundation includes Microsoft Purview Data Security Posture Management for AI (DSPM for AI), which surfaces a single dashboard for AI activity across the tenant including sensitive-information exposure in prompts, jailbreak detection, and per-user risk scoring. DSPM for AI is a layer of operational visibility on top of the audit log, not a replacement for the five controls. The institution that has the five controls in place gets the most value from DSPM for AI because the audit signal is rich. The institution that does not have the foundation controls in place sees DSPM for AI as a thin layer over thin data.
Microsoft Agent 365, which became generally available May 1, 2026, provides the agent control plane (registry, lifecycle, observability) for autonomous AI agents in the tenant. For institutions that plan to deploy Copilot Studio agents alongside Microsoft 365 Copilot, Agent 365 is the operational layer that surfaces agent activity into the same audit, DLP, and compliance reporting framework. The five-control framework still applies; Agent 365 extends the visibility to cover the agents Copilot Studio produces. The control evidence layers cleanly: Purview Audit captures the underlying events, DSPM for AI surfaces the risk view, Agent 365 surfaces the agent-specific operational view, and the institution composes the examiner response from all three.
One closing operational note for institutions preparing for examination. The first time an institution actually exercises the "30-day Copilot interaction report plus DLP blocks plus Conditional Access denials" query end to end, the workflow often surfaces a configuration gap that the policies-on-paper review did not catch. A common finding: the DLP policy is in place, but the audit retention does not cover the look-back window the examiner asked for. Another common finding: the Conditional Access policy targets the right cloud app suite, but the exclusion list includes a service account that has more access than it needs. The pre-exam dry run, run two to four weeks before the on-site exam, catches these gaps while the institution still has time to remediate. ABT's Copilot examiner-ready audit includes the dry-run workflow as a deliverable.
The five-control framework summary
Conditional Access sets the identity and device boundary. Purview Audit logs the proof. Purview DLP for Copilot blocks the wrong content from prompts and grounding. Defender for Cloud Apps surfaces the shadow AI that did not route through Microsoft 365. Intune enforces the device posture Conditional Access requires. Four out of the five are controls the institution already configured for other reasons. The work is making sure they actually cover Copilot.
What Comes Next
You have read the framework. The five controls map cleanly to FFIEC, OCC, NCUA, FDIC, NIST SP 800-53 Rev. 5, and the April 17, 2026 joint guidance on AI governance. The next two questions a CISO asks are almost always the same.
Before getting to those questions, two operational realities are worth surfacing. First, configuring the five controls is not a single project; it is an extension of work the institution is already doing. Conditional Access is configured the day the institution stands up Microsoft 365; the work for Copilot is to verify the Office 365 cloud app target covers the Copilot surface. Purview Audit is enabled the day the institution turns on Microsoft 365 E3 or Business Premium; the work for Copilot is to verify standard retention covers the look-back window and to move to Audit Premium if a longer window is required. Purview DLP is configured for email and SharePoint at most institutions; the work for Copilot is to extend the policy locations to include Microsoft 365 Copilot. Defender for Cloud Apps is licensed under E5, EMS E5, Microsoft 365 E5 Security, or as a standalone; the work for Copilot is to enable Cloud Discovery and run the first discovery report. Intune is configured the day the institution stands up endpoint management; the work for Copilot is to verify compliance policy covers the Copilot-eligible user base. None of these is greenfield work. All of it is verification and extension.
Second, the five-control framework is not a checklist the institution finishes once and never revisits. Microsoft adds capabilities to each of the five products on a continuous release cadence, and the regulator landscape continues to evolve as state-level AI guidance lands and federal guidance is updated. The institution that treats the five controls as a quarterly review checkpoint, with the audit-log evidence captured each quarter as the artifact, stays examiner-ready continuously rather than scrambling to assemble evidence the month before an exam. ABT's Copilot governance program includes the quarterly review as a service deliverable for Tier-1 CSP customers, which removes the operational lift from the institution's IT and compliance teams.
First: what does Copilot actually cost, and is the bundle math worth locking in before June 30, 2026? Spoke 1 in this cluster walks through the three-price disclosure ($10 ABT-channel incremental, $18 Microsoft promo standalone, $21 Microsoft standard standalone after June 30) and the shadow-AI cost-recovery math that often makes the rollout self-funding inside the first quarter. See the pricing walk-through for community banks and credit unions. The pricing conversation matters here because the bundle math is what makes the deployment defensible to the CFO and the board: a community bank that ships Copilot at $10 incremental over Business Premium, retires the shadow personal subscriptions, and counts the productivity gain on top is showing the audit committee a tool that pays for itself before quarter-end.
Second: which roles inside our institution should we deploy Copilot to first, and what does productive use actually look like? Spoke 3 walks through the role-by-role use cases for mortgage lenders specifically, from loan officer borrower outreach drafts to underwriter decline narrative first drafts (with the ECOA fair-lending guardrail Copilot cannot cross). See the role-by-role use cases for loan officers, processors, and underwriters. The role analysis matters here because deployment scope is a control decision: the institution decides which user groups get Copilot first, which sensitivity labels apply to their workflows, and which Conditional Access policies gate their access. The five-control framework in this article gives the boundaries; the role analysis fills in which staff land inside which boundaries on day one.
And anchor everything to the pillar: the Microsoft 365 Copilot Business Buyer's Guide for Financial Institutions covers the three-buying-paths decision tree, the disambiguator that separates consumer Copilot from free Copilot Chat from paid Microsoft 365 Copilot, and the ABT Copilot Pilot Pack offer that bundles the 30-day adoption sprint, the AI Readiness Assessment, MortgageGuide beta access, and the examiner-ready gap report. The pillar is also where the deployment offer lives: the Pilot Pack is available through June 30, 2026, with the AI Readiness Assessment included at no charge for existing ABT Tier-1 CSP customers and for new customers who add 11 or more Copilot seats through ABT during the promo window.
The pattern across the cluster is the same one you have read here. Productivity is the lead. Security protects what your team gets done. Governance closes the loop with the examiner. Five Microsoft 365 controls together turn a shadow-AI exposure into a clean audit conversation. The step you do not skip before Copilot goes live is the one most institutions skip first.
Two Ways to Make Sure Your Copilot Deployment Passes Examination
Schedule the 90-minute Copilot examiner-ready audit. ABT's Tier-1 Microsoft CSP team maps your current Microsoft 365 governance configuration against the five-control framework, identifies the gaps, and produces a written gap report you can hand to your auditor. Or start with the Security Grade scan: a free 5-minute external scan that gives you the baseline score for the controls a regulator can see from outside your tenant.
One last note before the FAQ. The five-control framework in this article is a snapshot of what FFIEC, OCC, Federal Reserve, FDIC, and NCUA examiners are reading from Microsoft 365 tenants in 2026. Microsoft's Copilot product surface, Microsoft Purview, Microsoft Defender for Cloud Apps, and Microsoft Intune are all moving products with frequent feature additions. The framework holds because it is mapped to the regulatory expectations (identity boundary, audit log, content boundary, shadow surface, device posture), not to specific Microsoft features that may change names. As Microsoft adds capabilities (DSPM for AI, Agent 365, Copilot Studio agent governance, additional Conditional Access controls), the five categories absorb the new features rather than requiring new categories. The CISO who builds the Copilot governance program around the five categories is building a framework that survives the next two years of Microsoft feature releases, not a framework that gets reset every quarter when Microsoft renames a product.
The institutions that ship Copilot well in 2026 are the ones that read the regulatory landscape as a stable five-category framework, map it to existing Microsoft 365 controls, and treat the work as configuration extension rather than greenfield governance. The institutions that get caught off guard are the ones that wait for a Copilot-specific exam booklet to publish, find shadow AI usage in the tenant when the exam arrives, and discover that the existing Microsoft 365 controls had not been extended to cover Copilot. The difference between the two outcomes is not budget, headcount, or institutional size. It is whether the CISO and IT director read the five controls in this article as a configuration checklist and ran the checklist end to end before the examiner walked in.
For institutions that read this article and recognize that they are already 80% of the way to the framework on the strength of existing Microsoft 365 governance, the remaining 20% is a productive use of the next 30 to 60 days. For institutions that read this article and recognize they have foundation work to do across two or three of the controls, the remaining work is still bounded and predictable. The frameworks are stable. The Microsoft documentation is comprehensive. The regulatory expectations are clear. The work that remains is configuration, evidence capture, and a dry run with the audit team before the on-site exam. The five-control framework turns what looks like AI governance ambiguity into a finite list of clicks, policies, and reports. The CISO who runs the list end to end is the CISO whose Copilot deployment passes examination on the first read.
Frequently Asked Questions
What do examiners expect to see configured before Copilot goes live?
Examiners expect five Microsoft 365 control surfaces to be configured before Copilot accesses customer non-public information: Microsoft Entra ID Conditional Access requiring multi-factor authentication and managed devices; Microsoft Purview Audit logging with at least 180 days of standard retention, or one year by default and up to 10 years with Audit Premium; Microsoft Purview Data Loss Prevention for Copilot with sensitive-information-type prompt blocks and sensitivity-label-based grounding restrictions; Microsoft Defender for Cloud Apps for shadow AI discovery against the Generative AI app catalog; and Microsoft Intune for device compliance and app-protection policies. The configuration baseline maps to the NIST Cybersecurity Framework 2.0, NIST SP 800-53 Rev. 5, the FFIEC IT Examination Handbook, the NCUA AI Compliance Plan, OCC bulletins on third-party and AI risk, and the April 17, 2026 OCC, Federal Reserve, and FDIC joint guidance that excludes generative AI from Model Risk Management scope but requires enterprise risk governance.
How does an institution decide which documents are out of scope for Copilot?
The institution decides which document types are out of scope for Copilot through Microsoft Purview Information Protection sensitivity labels and Microsoft Purview Data Loss Prevention policies. Common categories that financial institutions place out of scope include source-system credit decisioning logic and adverse-action determinations under ECOA 12 CFR §1002.9 (Copilot can assist with first drafts but cannot be the final author), examiner work-papers and confidential supervisory information, internal investigations and HR matters, source-code repositories outside engineering teams, and any document containing customer non-public information that has not been labeled and access-reviewed. The configuration is institution-specific. The pattern is the same across institutions: sensitivity-label the documents that should be in scope, DLP-rule the patterns that should never appear in a prompt, audit-log the access events for examiner response.
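To make the "DLP-rule the patterns" idea concrete, here is an illustrative prescreen in the spirit of a sensitive information type. This is not how Purview DLP is implemented (Purview's built-in types add checksum validation, confidence levels, and proximity evidence); it only demonstrates the pattern-blocking concept:

```python
# Illustrative only: a regex prescreen for two common U.S. sensitive
# information types (SSN, 9-digit routing number). Purview DLP's built-in
# sensitive information types are substantially more sophisticated.
import re

PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "aba_routing": re.compile(r"\b\d{9}\b"),
}

def flag_sensitive(text: str) -> list[str]:
    """Return the names of any sensitive patterns found in a draft prompt."""
    return [name for name, rx in PATTERNS.items() if rx.search(text)]

assert flag_sensitive("Borrower SSN is 123-45-6789") == ["ssn"]
assert flag_sensitive("Summarize the Q3 pipeline") == []
```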
What happens if a user account with Copilot access is compromised?
Copilot follows the compromised user's existing Microsoft 365 permissions. A compromised account can use Copilot to reason over whatever content that account is permissioned to access, the same way a compromised account today can use Outlook search, SharePoint search, or OneDrive sync. The compensating controls are the same controls that limit any account compromise. Microsoft Entra ID Conditional Access requiring multi-factor authentication, managed or hybrid-joined devices, and session controls reduces the likelihood of compromise. Microsoft Purview Information Protection sensitivity labels with encryption follow the document outside the account, so labeled confidential files stay encrypted even if exfiltrated. Microsoft Purview Audit detects anomalous Copilot query patterns through the CopilotInteraction event log. Microsoft Defender for Cloud Apps surfaces unusual data exfiltration patterns to other AI services. Copilot is a productivity tool inside the access boundary; the access boundary is what an attacker would need to break first, which is true with or without Copilot in the tenant.
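The anomalous-query-pattern detection mentioned above can start as simply as an outlier check over an exported set of CopilotInteraction records. A minimal sketch; the UserId field name follows the unified audit log's conventions and should be verified against your own export:

```python
# Flag users whose Copilot interaction volume is a statistical outlier
# relative to peers, given a list of exported audit records (dicts).
from collections import Counter
from statistics import mean, pstdev

def flag_heavy_users(records: list[dict], z_threshold: float = 3.0) -> list[str]:
    """Return users whose interaction count exceeds the z-score threshold."""
    counts = Counter(r["UserId"] for r in records)
    if len(counts) < 2:
        return []                       # not enough users to compare
    mu, sigma = mean(counts.values()), pstdev(counts.values())
    if sigma == 0:
        return []                       # uniform activity, nothing to flag
    return [u for u, c in counts.items() if (c - mu) / sigma > z_threshold]
```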
What does an examiner check first when reviewing Copilot governance?
Examiners typically open the AI governance conversation by asking for the audit log. Microsoft Purview Audit is the first control because every other control flows back through it as evidence. The CopilotInteraction event captures each Copilot prompt, the user identity, the files and resources accessed, and the sensitivity labels touched. From there, examiners check Microsoft Entra ID Conditional Access policies for multi-factor authentication and device compliance, Microsoft Purview Data Loss Prevention for Copilot policy coverage against the institution's defined sensitive information types and sensitivity labels, Microsoft Defender for Cloud Apps for shadow AI discovery against the Generative AI app catalog, and Microsoft Intune for the device compliance and app-protection policies that Conditional Access enforces. The order maps to how the regulators think about risk: identity boundary, log of activity, content boundary, shadow surface, device posture.
Is Microsoft Purview Audit really the foundation control?
Yes. Purview Audit is the foundation control because every other Copilot governance control is only provable through the audit log. Standard Purview Audit retention is 180 days (raised from 90 days in October 2023), which typically covers the look-back examiners use during routine reviews. Audit Premium extends retention to 1 year by default and up to 10 years with the long-term retention add-on, which is the configuration most institutions move to in advance of multi-year exam cycles or litigation hold requirements. For Copilot specifically, the relevant audit events include CopilotInteraction (each Copilot prompt with user, timestamp, files accessed, and labels touched), ConnectedAIAppInteraction, AIAppInteraction, and the administrative events that show who changed Copilot configuration and when. Without Purview Audit, the institution cannot evidence that Conditional Access, DLP for Copilot, Defender for Cloud Apps, or Intune controls actually fired, which means the controls are unprovable to the examiner.
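The 30-day Copilot interaction pull used in the pre-exam dry run can be automated through the Microsoft Graph audit log query API. A minimal sketch, assuming a token with the AuditLogsQuery.Read.All permission; the endpoint path and the recordTypeFilters value follow Microsoft's auditLogQuery resource and auditLogRecordType enumeration, and the exact names should be verified against current Graph documentation:

```python
# Submit an asynchronous audit-log query scoped to Copilot interactions
# for the last 30 days. The API is async: poll the returned query ID,
# then page through its records once the query completes.
from datetime import datetime, timedelta, timezone
import requests

GRAPH = "https://graph.microsoft.com/v1.0"

def submit_copilot_audit_query(token: str) -> str:
    end = datetime.now(timezone.utc)
    start = end - timedelta(days=30)
    body = {
        "displayName": "Copilot interactions - 30 day dry run",
        "filterStartDateTime": start.isoformat(),
        "filterEndDateTime": end.isoformat(),
        "recordTypeFilters": ["copilotInteraction"],
    }
    resp = requests.post(
        f"{GRAPH}/security/auditLog/queries",
        headers={"Authorization": f"Bearer {token}"},
        json=body,
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()["id"]   # poll this query ID for results
```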
Do existing Conditional Access policies already cover Copilot?
Microsoft Entra ID Conditional Access policies that target the Office 365 cloud app suite cover Copilot automatically, because Copilot reads tenant data through the same Microsoft Graph endpoints as the rest of Microsoft 365. A common configuration mistake is to carve out a narrower "Copilot" app target in the policy and leave the Office 365 suite unaddressed, which fails to cover Copilot at all. The recommended minimum configuration for Copilot governance includes requiring multi-factor authentication for all users, blocking legacy authentication, requiring compliant or hybrid-joined devices for Copilot-eligible users (with Intune evaluating compliance), restricting authentication by location where appropriate, and enforcing session controls and sign-in frequency aligned to FFIEC session management guidance. The control evidence examiners ask for is the Conditional Access policy export showing Office 365 as a target, the Entra ID sign-in log showing the policies fired for Copilot sign-ins, and the exception report showing which users (if any) are excluded and why.
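The Office 365 coverage check can itself be scripted against Microsoft Graph. A minimal sketch, assuming a token with the Policy.Read.All permission; it confirms that at least one enabled policy targets the Office 365 app suite (the "Office365" keyword in includeApplications, or "All"):

```python
# Return the display names of enabled Conditional Access policies whose
# application targets cover the Office 365 suite, and therefore Copilot.
import requests

GRAPH = "https://graph.microsoft.com/v1.0"

def policies_covering_office365(token: str) -> list[str]:
    resp = requests.get(
        f"{GRAPH}/identity/conditionalAccess/policies",
        headers={"Authorization": f"Bearer {token}"},
        timeout=30,
    )
    resp.raise_for_status()
    covering = []
    for p in resp.json().get("value", []):
        apps = p.get("conditions", {}).get("applications", {}).get("includeApplications", [])
        if p.get("state") == "enabled" and ("All" in apps or "Office365" in apps):
            covering.append(p["displayName"])
    return covering   # empty list = the gap described above
```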