
Critical Vulnerability in Microsoft 365 Copilot Exposes Confidential Data to AI Summarization

A significant security oversight in Microsoft 365 Copilot has been uncovered, revealing that the AI assistant has been summarizing confidential emails since late January 2026. This glitch, which bypassed established Data Loss Prevention (DLP) policies and sensitivity labels, allowed the AI to access and process restricted information stored in users' "Sent Items" and "Drafts" folders.

For enterprises relying on Microsoft's ecosystem to maintain strict data governance, this incident highlights the growing complexity of integrating generative AI into secure corporate environments. Microsoft has acknowledged the issue, which it attributes to a code error, and has begun deploying a fix, though the full scope of the impact remains under investigation.

The Mechanics of the Breach

The vulnerability, tracked under the advisory ID CW1226324, specifically affects the Copilot "work tab" chat feature. Under normal operations, Microsoft 365 Copilot is designed to respect organizational compliance boundaries, meaning it should not retrieve or summarize content flagged as confidential or restricted by DLP policies.

However, an "unspecified code error" effectively neutralized these safeguards for specific email categories. When a user interacted with Copilot, the AI was able to pull data from emails that had been explicitly labeled as sensitive, provided those emails were located in the user's sent or draft folders.

The following table outlines the discrepancy between the intended security protocols and the behavior exhibited during this incident:

Table: Security Protocol Failure Analysis

| Feature | Intended Security Behavior | Behavior During Glitch |
| --- | --- | --- |
| Sensitivity Labels | AI blocks access to documents/emails marked "Confidential" or "Internal Only". | AI accessed and summarized labeled emails in Sent/Drafts folders. |
| DLP Policies | Prevents data extraction from protected sources to unauthorized tools. | Policies were bypassed, allowing data to flow into Copilot's context window. |
| Folder Scope | Scanning limited to user-authorized inbox items and safe directories. | Erroneously expanded active scanning to include Sent Items and Drafts. |

This failure is particularly concerning because "Sent Items" and "Drafts" often contain the most sensitive versions of internal communications—unpolished strategic documents, internal financial discussions, or personnel matters—that are not yet ready for broader distribution or have been archived for record-keeping.
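
To make the intended gating concrete, the sketch below shows how a retrieval layer could exclude labeled items from an assistant's context before summarization. It is a simplified illustration only: the EmailItem model, the label names, and the build_copilot_context function are assumptions for the example, not Microsoft's internal implementation.

```python
from dataclasses import dataclass

# Hypothetical, simplified model of a mailbox item; the field names are
# illustrative and do not reflect Microsoft's internal data structures.
@dataclass
class EmailItem:
    subject: str
    body: str
    folder: str               # e.g. "Inbox", "SentItems", "Drafts"
    sensitivity_label: str    # e.g. "General", "Confidential", "Internal Only"

# Labels that should never reach the AI context under the intended policy.
RESTRICTED_LABELS = {"Confidential", "Internal Only"}

def build_copilot_context(items: list[EmailItem]) -> list[EmailItem]:
    """Return only the items an assistant is allowed to summarize.

    The glitch described in advisory CW1226324 effectively skipped a check
    like this for Sent Items and Drafts; the intended behavior is to drop
    any item carrying a restricted sensitivity label, regardless of folder.
    """
    allowed = []
    for item in items:
        if item.sensitivity_label in RESTRICTED_LABELS:
            continue  # label/DLP policy: keep it out of the context window
        allowed.append(item)
    return allowed

if __name__ == "__main__":
    mailbox = [
        EmailItem("Q3 budget draft", "...", "Drafts", "Confidential"),
        EmailItem("Lunch schedule", "...", "Inbox", "General"),
    ]
    context = build_copilot_context(mailbox)
    print([i.subject for i in context])  # -> ['Lunch schedule']
```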

Timeline of Detection and Response

The issue was first detected on January 21, 2026, yet it persisted for several weeks before a comprehensive remediation plan was enacted. Microsoft publicly confirmed the bug in mid-February, stating that the error allowed the AI to ignore the sensitivity labels that IT administrators rely on to secure their digital perimeters.

According to reports, the rollout of the fix began in early February. Microsoft is currently reaching out to a subset of affected users to verify the efficacy of the patch, but it has not provided a definitive timeline for when the issue will be fully resolved globally. The incident is currently flagged as an advisory, a classification often reserved for issues with limited scope, yet for high-compliance industries even a narrow window of data leakage can be serious, so the severity of the exposure may matter more than the number of tenants affected.

Implications for Enterprise AI Adoption

This incident serves as a stark reminder of the "black box" nature of AI integration in legacy IT infrastructures. While organizations are rushing to adopt tools like Microsoft 365 Copilot to boost productivity, the security architecture governing these tools must evolve to handle non-deterministic behaviors.

Key Security Concerns:

  • Erosion of Trust: When security labels are ignored by the underlying infrastructure, trust in automated compliance tools diminishes. IT leaders may hesitate to deploy further AI capabilities without stronger assurances.
  • Shadow Access: The bug illustrates that AI agents often operate with different permissions or access pathways than standard users or legacy search tools, creating "shadow access" points that traditional audits might miss.
  • Compliance Violations: For sectors like healthcare and finance, even a temporary exposure of confidential drafts to an AI summarizer could constitute a reportable data breach under regulations like GDPR or HIPAA.

Recommendations for IT Administrators

In light of this advisory, Creati.ai recommends that IT administrators and security operations centers (SOCs) take immediate proactive steps. While Microsoft is deploying a fix, relying solely on vendor patches is insufficient for high-security environments.

  • Review Advisory CW1226324: Administrators should continuously monitor the status of this specific advisory in the Microsoft 365 Admin Center to confirm when their tenant receives the patch.
  • Audit AI Interactions: Where possible, review audit logs for Copilot interactions that occurred between January 21 and mid-February 2026, specifically looking for queries that touch sensitive keywords (a minimal filtering sketch follows this list).
  • Reinforce Employee Training: Remind staff that while AI tools are powerful, they should avoid inputting or referencing highly sensitive or classified information in AI chat interfaces until vendors can demonstrate that policy enforcement is reliable.
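
The following sketch illustrates one way to scan an exported audit log for Copilot interactions inside the affected window. It assumes a CSV export from the Purview audit search with "CreationDate", "Operations", and "AuditData" columns, ISO-8601 timestamps, and Copilot events whose operation name contains "Copilot"; the file name, column names, and keyword list are assumptions to verify against your own tenant's export.

```python
import csv
from datetime import datetime

EXPORT_PATH = "audit_log_export.csv"           # hypothetical export file name
WINDOW_START = datetime(2026, 1, 21)           # first detection date
WINDOW_END = datetime(2026, 2, 15)             # approximate fix rollout
SENSITIVE_KEYWORDS = ["salary", "acquisition", "patient", "termination"]

def flagged_copilot_events(path: str):
    """Yield (timestamp, matched keywords) for Copilot events worth reviewing."""
    with open(path, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            # Keep only Copilot-related operations (column name may differ
            # in your export; adjust as needed).
            if "copilot" not in (row.get("Operations") or "").lower():
                continue
            # Keep only events inside the exposure window; assumes an
            # ISO-8601 timestamp such as "2026-01-21T10:00:00.0000000Z".
            when = datetime.fromisoformat(row["CreationDate"][:19])
            if not (WINDOW_START <= when <= WINDOW_END):
                continue
            # AuditData is a JSON blob; search it as flat text for keywords.
            text = (row.get("AuditData") or "").lower()
            hits = [kw for kw in SENSITIVE_KEYWORDS if kw in text]
            if hits:
                yield when, hits

if __name__ == "__main__":
    for when, hits in flagged_copilot_events(EXPORT_PATH):
        print(f"{when.isoformat()}  matched: {', '.join(hits)}")
```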

As the industry moves forward, the expectation is that vendors like Microsoft will implement more rigorous "fail-closed" mechanisms—where an AI defaults to denying access if a security policy cannot be verified—rather than "fail-open" errors that expose sensitive data. This bug is likely to drive a new standard of validation for AI permissions within enterprise suites.
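
To make the fail-open versus fail-closed distinction concrete, the sketch below contrasts the two behaviors around a hypothetical policy lookup. The policy_allows function is a stand-in for a DLP or sensitivity-label check, not an actual Copilot or Microsoft Graph API call.

```python
def policy_allows(item_id: str) -> bool:
    """Stand-in for a DLP/sensitivity-label lookup that may itself fail."""
    raise TimeoutError("policy service unreachable")

def fetch_for_summary_fail_open(item_id: str) -> bool:
    # Fail-open: if the policy check errors out, access is granted anyway.
    # This is the failure mode the article warns against.
    try:
        return policy_allows(item_id)
    except Exception:
        return True

def fetch_for_summary_fail_closed(item_id: str) -> bool:
    # Fail-closed: any doubt about the policy verdict means deny access.
    try:
        return policy_allows(item_id)
    except Exception:
        return False

if __name__ == "__main__":
    print(fetch_for_summary_fail_open("draft-42"))    # True  (unsafe default)
    print(fetch_for_summary_fail_closed("draft-42"))  # False (safe default)
```

The design choice is simple but consequential: a fail-closed default turns an outage or bug in the policy layer into a temporary loss of convenience, while a fail-open default turns it into a potential data exposure.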
