8 min read
Justin Kirsch · Updated on February 27, 2026
In January 2026, Microsoft confirmed that Copilot — their own AI assistant — was bypassing Data Loss Prevention policies and summarizing confidential emails. Emails with sensitivity labels applied. Emails that DLP rules were supposed to protect. Copilot read them anyway and handed summaries to users who shouldn't have seen the content.
Microsoft tracked it as case CW1226324. They called it a code logic error and deployed a server-side fix. But think about what happened: Microsoft's AI tool broke Microsoft's security controls. Inside Microsoft's own platform. For financial institutions, that means any environment with Copilot licenses assigned could have had loan committee notes, board communications, or member account data surfaced through a pathway that DLP was supposed to block.
Then in February 2026, Microsoft's own security team published a document titled "Copilot Studio Agent Security: Top 10 Risks." Not a competitor's analysis. Not a third-party audit. Microsoft's security researchers telling their own customers: here are the ways AI agents in your environment can go wrong.
If you run a bank, credit union, or mortgage company and your team is evaluating Microsoft Copilot, both of these should be required reading before anyone approves a pilot. The DLP bypass proved that even properly configured controls can fail. And the risks Microsoft identified in their Top 10 land differently when the data in your environment includes Social Security numbers, loan applications, and member account records.
Microsoft released the Copilot Studio Agent Security Top 10 on February 12, 2026. It identifies the ten most significant security risks when organizations build and deploy AI agents using Copilot Studio.
The key risks include:

- Hard-coded credentials embedded in agent configurations
- Over-permissioning that grants agents broader data access than their task requires
- Unreviewed third-party connectors
- Prompt injection attacks that trick agents into unauthorized actions
- Data exfiltration through agent responses
Microsoft's recommendation is direct: treat every AI agent as a first-class identity with human-equivalent governance. That means access reviews, permission scoping, audit logging, and security monitoring, the same controls you'd apply to a human employee with the same data access.
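The "first-class identity" recommendation can be made concrete. Below is a minimal sketch, not a real Entra ID API, showing the idea: an agent gets an explicit, reviewable permission scope, and every access attempt, granted or denied, produces an audit record, exactly as you'd expect for a human employee.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class AgentIdentity:
    """An AI agent treated like a human principal: scoped permissions, full audit."""
    agent_id: str
    allowed_resources: frozenset          # explicit scope, reviewed like a human's
    audit_log: list = field(default_factory=list)

    def access(self, resource: str) -> bool:
        granted = resource in self.allowed_resources
        # Log every attempt, granted or denied, timestamped for examiners.
        self.audit_log.append({
            "agent": self.agent_id,
            "resource": resource,
            "granted": granted,
            "at": datetime.now(timezone.utc).isoformat(),
        })
        return granted

# Hypothetical agent and resource names for illustration.
loan_bot = AgentIdentity("copilot-loan-assistant",
                         frozenset({"sp:loan-operations", "sp:rate-sheets"}))
loan_bot.access("sp:loan-operations")   # granted
loan_bot.access("sp:board-minutes")     # denied, but still logged
```

The point of the sketch is the shape of the control, not the code: a scope someone reviews, and a log someone can query.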
Every risk on Microsoft's list is relevant to financial services, but three of them create distinctly regulatory problems.
Most employees at financial institutions are over-permissioned. They have access to SharePoint sites, Teams channels, and email folders that go beyond what their job function requires. In a pre-Copilot world, that's a manageable risk because employees don't actively browse every file they can technically access.
Copilot changes that equation. It surfaces all data the employee has access to. Ask Copilot a question, and it searches across every SharePoint site, every email, every Teams message the user's permissions allow. Data that was technically accessible but practically hidden becomes instantly retrievable.
For a bank, that means a teller asking Copilot a question could get results that include board meeting minutes, loan committee notes, or HR files if the permission boundaries aren't right. The data was always there. Copilot just made it findable.
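The amplification effect is easy to model. In this sketch (hypothetical group and site names), a user's effective data surface is the union of everything their group memberships reach, and a single stale group membership puts board material inside the boundary a Copilot-style search can reach:

```python
# Hypothetical permission data: group -> SharePoint resources that group can read.
GROUP_ACCESS = {
    "all-staff":      {"sp:hr-handbook", "sp:branch-news"},
    "tellers":        {"sp:teller-procedures"},
    "legacy-project": {"sp:board-minutes", "sp:loan-committee-notes"},  # sprawl
}

def effective_surface(user_groups: set[str]) -> set[str]:
    """Everything an AI assistant can surface: the union of all group grants."""
    surface: set[str] = set()
    for group in user_groups:
        surface |= GROUP_ACCESS.get(group, set())
    return surface

# A teller never removed from a stale project group reaches board material.
teller = effective_surface({"all-staff", "tellers", "legacy-project"})
# "sp:board-minutes" is in `teller` even though the role never needed it.
```

Before Copilot, the teller would have had to go browse that site. After Copilot, one question returns it.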
Financial regulators expect audit trails for data access. Who accessed what, when, and why. AI agents introduce a new category of data access that most institutions haven't incorporated into their audit framework.
When a Copilot agent retrieves member data to answer a question, that access needs to be logged, reviewable, and attributable. If your examiner asks "who accessed this member's loan records in the last 90 days" and the answer includes "an AI agent, but we don't have detailed logs," that's a compliance gap.
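The examiner's question maps directly to a query over a unified audit log, provided AI agents appear in that log as attributable actors. A hedged sketch over hypothetical records:

```python
from datetime import datetime, timedelta, timezone

def accessors_of(records, resource, days=90, now=None):
    """Who (human or AI agent) accessed `resource` in the last `days` days?"""
    now = now or datetime.now(timezone.utc)
    cutoff = now - timedelta(days=days)
    return sorted({r["actor"] for r in records
                   if r["resource"] == resource and r["at"] >= cutoff})

# Hypothetical log entries; note the agent is a named, attributable actor.
now = datetime(2026, 3, 1, tzinfo=timezone.utc)
log = [
    {"actor": "jsmith",              "resource": "loan:4471", "at": now - timedelta(days=10)},
    {"actor": "agent:copilot-loans", "resource": "loan:4471", "at": now - timedelta(days=3)},
    {"actor": "mjones",              "resource": "loan:4471", "at": now - timedelta(days=200)},
]
accessors_of(log, "loan:4471", now=now)  # ['agent:copilot-loans', 'jsmith']
```

If the agent's retrievals aren't in the log with this level of attribution, the answer to the examiner's question has a hole in it.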
If a Copilot agent generates a response to a member inquiry that contains incorrect information about their account, loan terms, or regulatory rights, your institution is responsible for that content. AI-generated responses carry the same regulatory weight as human-generated responses when they reach a customer.
The core issue isn't Copilot itself. It's what Copilot reveals about your existing data governance.
Most financial institutions have permission sprawl. Over the years, SharePoint sites get shared broadly. Teams channels accumulate members. Email distribution groups expand. Nobody cleans up access when people change roles or leave the organization.
Before Copilot, this was a background risk. After Copilot, it's an active exposure. Every over-permissioned user now has an AI assistant that can surface any data within their access boundary in seconds.
Microsoft offers Purview Suite for Copilot (currently at a 50% discount through June 2026) specifically to address this problem. Purview provides data classification, sensitivity labeling, and access governance tools that let you define what data Copilot can and can't surface.
But deploying Purview is a data governance project, not a software installation. You need to classify your data, define sensitivity labels, apply those labels consistently, and then configure Copilot's access boundaries around them. For a financial institution with thousands of documents containing member personally identifiable information (PII), that's a significant undertaking.
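To make the classification work concrete: Purview ships built-in sensitive information types, but the underlying idea is pattern-based detection driving a label decision. This is only an illustration of that idea, with a hypothetical label name; real Purview classifiers also validate number ranges and check for proximity keywords.

```python
import re

# U.S. SSN shape: 3-2-4 digits with hyphens. Deliberately simplified.
SSN = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")

def suggest_label(text: str) -> str:
    """Hypothetical label names; real labels come from your Purview taxonomy."""
    if SSN.search(text):
        return "Confidential - Member PII"
    return "General"

suggest_label("Applicant SSN: 123-45-6789, income verified.")  # PII label
suggest_label("Branch hours change next Monday.")              # "General"
```

The project work is everything around this function: agreeing on the taxonomy, applying labels to thousands of existing documents, and keeping them applied as new ones are created.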
If your institution is evaluating Copilot, these steps should happen before the pilot begins, not after:
Review who has access to what across your Microsoft 365 environment. SharePoint site permissions, Teams channel memberships, OneDrive sharing settings, email distribution groups. Identify and remediate over-permissioning before Copilot amplifies it.
Classify your data by sensitivity level. Member PII, loan records, board materials, financial reports, and HR documents all need explicit sensitivity labels. These labels govern what Copilot can surface and to whom.
Your existing Microsoft 365 security configuration needs to account for Copilot's access patterns. Conditional Access policies should include conditions for AI-assisted data retrieval, not just direct user access.
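Real Entra ID Conditional Access policies are declarative, not code, but the evaluation logic is worth sketching to show "AI-assisted retrieval" as an explicit condition rather than a blind spot. Everything here, field names included, is a toy assumption:

```python
def evaluate(request: dict) -> str:
    """Toy Conditional Access decision for a single access request."""
    if not request.get("mfa"):
        return "block"
    if request.get("legacy_auth"):
        return "block"
    # AI-assisted retrieval of labeled data demands a compliant, managed device.
    if request.get("via_ai_agent") and request.get("sensitivity") == "Confidential":
        return "grant" if request.get("compliant_device") else "block"
    return "grant"

evaluate({"mfa": True, "via_ai_agent": True,
          "sensitivity": "Confidential", "compliant_device": False})  # "block"
```

The design point: the policy distinguishes a user opening a file directly from Copilot retrieving it on the user's behalf, and can apply a stricter bar to the latter.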
Enable and configure audit logging for Copilot interactions. Every query, every data retrieval, every response should be logged in a format your compliance team can review and your examiner can audit.
Document your institution's AI acceptable use policy. Which roles can use Copilot? For which tasks? What data categories are off-limits? What review process applies to AI-generated customer communications? Your policy should build on identity security fundamentals that are already in place. Your examiner will ask about this policy.
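The policy questions above (which roles, which tasks, which data categories) map naturally onto a machine-checkable matrix, which also gives your compliance team something concrete to show an examiner. Roles and category names below are hypothetical:

```python
# Hypothetical acceptable-use matrix: role -> data categories Copilot may touch.
AI_POLICY = {
    "teller":       {"procedures", "rate-sheets"},
    "loan_officer": {"procedures", "rate-sheets", "loan-files"},
    "compliance":   {"procedures", "loan-files", "audit-reports"},
}
# Categories off-limits to Copilot for every role, no exceptions.
OFF_LIMITS = {"board-materials", "hr-records"}

def copilot_allowed(role: str, category: str) -> bool:
    if category in OFF_LIMITS:
        return False
    return category in AI_POLICY.get(role, set())

copilot_allowed("teller", "loan-files")           # False: not in teller's scope
copilot_allowed("loan_officer", "loan-files")     # True
copilot_allowed("compliance", "board-materials")  # False: globally off-limits
```

A table like this doesn't replace the written policy; it makes the written policy testable.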
Microsoft's Copilot Control System provides monitoring for oversharing, anomalous behavior, and potential misuse. But the technology is only one piece. Your institution needs a governance framework that addresses which roles may deploy and use agents, what data categories agents may touch, how AI-generated outputs are reviewed before they reach members, and who audits agent activity on what schedule.
The institutions that deploy Copilot with proper governance will gain a genuine operational advantage. The ones that deploy it without governance will hand their examiner a new category of findings. A provider that understands financial services compliance can help you build the governance framework before the deployment, not after the examination.
Before deploying AI tools, your Microsoft 365 security and data governance need to be solid. ABT's Security Grade Assessment identifies permission gaps, data classification needs, and compliance configuration issues that Copilot would amplify.
Get Your Security Grade

Microsoft published a Top 10 security risks list for Copilot Studio agents in February 2026. Key risks include hard-coded credentials in agent configurations, over-permissioning that grants agents broader data access than needed, unreviewed third-party connectors, prompt injection attacks that trick agents into unauthorized actions, and data exfiltration through agent responses.
Copilot surfaces all data within a user's existing permissions. Most financial institution employees are over-permissioned, with access to SharePoint sites and Teams channels beyond their job function. Copilot makes previously hidden but accessible data instantly retrievable, turning background permission sprawl into active data exposure risk for member PII and sensitive financial records.
Banks should complete a permission audit across their Microsoft 365 environment, classify data with sensitivity labels using Microsoft Purview, review Conditional Access policies for AI access patterns, configure audit logging for Copilot interactions, and document an AI acceptable use policy. These steps must happen before pilot deployment, not after, to prevent data exposure and compliance gaps.
Yes. Copilot introduces new data access patterns that examiners will evaluate under existing regulatory frameworks. The Federal Financial Institutions Examination Council (FFIEC) and National Credit Union Administration (NCUA) expect audit trails for all data access, including AI-assisted retrieval. AI-generated customer communications carry the same regulatory weight as human-generated ones. Institutions need AI governance policies and audit logging that satisfy examination requirements.
Microsoft Purview provides data classification, sensitivity labeling, and governance tools that control what data Copilot can surface. Without Purview, Copilot accesses all data within user permissions without distinction. For financial institutions handling member PII and regulated data, Purview's classification and access controls are prerequisites for safe Copilot deployment. Microsoft currently offers Purview Suite for Copilot at a promotional discount.
Before deploying Copilot, financial institutions should verify that Conditional Access policies enforce multi-factor authentication for all users, block legacy authentication protocols, and include conditions for AI-assisted data retrieval patterns. Data Loss Prevention (DLP) rules should detect and block sharing of member Social Security numbers, account numbers, and loan data through Copilot-generated responses, email, and Teams. Sensitivity labels must be applied to all documents containing regulated data so Copilot respects classification boundaries. DMARC email authentication should be enforced to prevent domain spoofing in any Copilot-triggered email workflows. Audit logging for all Copilot interactions should be retained for at least one year to satisfy FFIEC and NCUA examination requirements.
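That checklist can be expressed as a validation pass over a snapshot of tenant configuration, so readiness becomes a report rather than a meeting. The snapshot field names here are assumptions for illustration, not a real export format:

```python
def readiness_findings(cfg: dict) -> list[str]:
    """Return the checklist items a tenant-config snapshot fails."""
    findings = []
    if not cfg.get("mfa_all_users"):
        findings.append("Conditional Access: MFA not enforced for all users")
    if not cfg.get("legacy_auth_blocked"):
        findings.append("Conditional Access: legacy authentication still allowed")
    if not cfg.get("dlp_ssn_rule"):
        findings.append("DLP: no rule covering Social Security numbers")
    if not cfg.get("dmarc_enforced"):
        findings.append("Email: DMARC not at enforcement")
    if cfg.get("audit_retention_days", 0) < 365:
        findings.append("Audit: Copilot interaction logs retained under one year")
    return findings

readiness_findings({"mfa_all_users": True, "legacy_auth_blocked": True,
                    "dlp_ssn_rule": True, "dmarc_enforced": False,
                    "audit_retention_days": 180})
# -> two findings: DMARC enforcement and audit retention
```

An empty findings list is the bar for starting a pilot; a non-empty one is the remediation plan.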
| Term | Definition |
|---|---|
| Conditional Access | Microsoft Entra ID policies that control who can access resources based on identity, device, location, and risk signals. Used to enforce multi-factor authentication and block unauthorized access patterns. |
| Copilot Studio | Microsoft's platform for building custom AI agents that can access organizational data, call external services, and automate workflows within the Microsoft 365 ecosystem. |
| DLP (Data Loss Prevention) | Microsoft Purview policies that detect and block sensitive data — such as Social Security numbers and account numbers — from leaving the organization through email, Teams, or file sharing. |
| DMARC / DKIM / SPF | Email authentication protocols that verify sender identity and prevent domain spoofing. DMARC (Domain-based Message Authentication, Reporting, and Conformance) builds on DKIM and SPF to provide enforcement and reporting. |
| FFIEC | Federal Financial Institutions Examination Council. Interagency body that sets IT examination standards for banks and credit unions, including the Cybersecurity Assessment Tool used by examiners. |
| MFA (Multi-Factor Authentication) | Security control requiring two or more verification methods to access a system. Typically combines something you know (password), something you have (phone or security key), and something you are (biometrics). |
| Microsoft Purview | Microsoft's data governance suite providing data classification, sensitivity labeling, DLP, and compliance tools. Required for controlling what data Copilot can access and surface. |
| NCUA | National Credit Union Administration. Federal regulator that examines credit unions for safety and soundness, including cybersecurity and data governance practices. |
| PII (Personally Identifiable Information) | Data that can identify an individual, including Social Security numbers, account numbers, loan records, and contact information. Subject to strict handling requirements under financial regulations. |
| Prompt Injection | An attack technique where crafted input tricks an AI agent into performing unauthorized actions, accessing restricted data, or bypassing security boundaries. |
| Sensitivity Labels | Microsoft Purview classification tags applied to documents and data that enforce protection rules — such as encryption, access restrictions, and watermarking — based on the content's sensitivity level. |