The question that started it
A friend running enterprise architecture at a global bank asked me a simple question over the winter: "If we turn on M365 Copilot, what changes for our regulators?"
It is, on the surface, a one-paragraph answer. Microsoft's contract — the Online Services Terms plus the Data Protection Addendum — covers a substantial share of privacy and security obligations. That coverage is real, well-documented, and reflected in the Trust Center.
But the moment you push past the contract and try to map the architecture to the actual obligations a Global Systemically Important Financial Institution carries — model risk under SR 11-7, immutable recordkeeping under 17 CFR 240.17a-4, operational resilience under DORA, conduct supervision under MiFID II Article 16(7), the EU AI Act risk classification — the answer fragments. There is no single Microsoft document that says "here is the residual obligation your firm must operate, regardless of contract." There can't be — that's not Microsoft's job.
So I built one, with MogamboAI's help.
What came out of it
Three artifacts, one cluster.
📄 Research piece — M365 E5 Copilot Architecture & Regulatory Obligations. A practitioner-grade reference: ~15-minute read in HTML, also available as a ~30-page Word document. It maps the base M365 E5 architecture (identity, data, compliance, telemetry, residency planes), overlays the Copilot architecture (orchestrator, model providers, Anthropic mechanics, CoWork extension layer), and walks through the compliance gap. The full document is baselined at v2.2 with revision history; the Word version (rendered via Office Online inside the Lab) is the canonical source.
🧭 Interactive architecture tool. The same model rendered as an animated walkthrough — eleven prompt-lifecycle steps, six trust boundaries (firm perimeter, internet transit, Azure / Microsoft, M365 tenant DPA, AOAI inference, sub-processor), the auth-token chain, data-packet assembly, and the CoWork firm-extension overlay. Click any node for a deep-dive panel. Works in dark and light themes. Use it to explore the topology; use the research piece to understand the obligations that attach to it.
The tool and the research piece are paired by design. The research is the prose argument; the tool is the visual companion. Most enterprise architects will find the tool faster for "show me the data flow" and the research faster for "show me the obligation." Switch between them as the question changes.
The framing that took the longest
The hardest part of the document was not the architecture. The architecture is in Microsoft's docs if you read enough of them. The hardest part was framing where Microsoft's contract ends — because that's where regulator findings live, and that's where the firm has to do its own work.
Specifically: Microsoft's DPA covers Customer Data and Personal Data as those terms are defined. It does not cover:
- (a) the firm's Conditional Access policy correctness,
- (b) sensitivity-labeling completeness,
- (c) DLP rule effectiveness,
- (d) the firm's response to a sub-processor disclosure update,
- (e) any model-risk obligation under SR 11-7 that depends on how the firm itself uses the AI service,
- (f) the operational-resilience exit strategy under DORA,
- (g) supervisory review of AI-assisted client communications under MiFID II / FCA SYSC 8,
- (h) the EU AI Act risk classification of the firm's specific deployment.
None of that is satisfied by Microsoft's contract. All of it requires the firm to do work and produce evidence.
The research piece structures that residual obligation set in three places: Part 3 (Compliance Gap Analysis) for the diagnosis, Part 4 (Recommended Controls Catalog, three tiers) for the prescription, and Appendix D (Architecture-to-Obligation Cross-Reference) for the line-by-line mapping. If you read nothing else in the document, read those three sections.
Three things I want feedback on
Specific asks, ranked by what I'd most like pushback on:
- The xAI-as-independent-processor classification. Microsoft's sub-processor list is the source of truth, and we got the xAI/Grok classification wrong in version 2.0: Grok is reachable via Copilot Studio integration as an independent processor, not as an M365 Copilot sub-processor with DPA flow-down. If you've seen this differently in your tenant, or seen Microsoft's documentation update since 2026-05-05, I want to know.
- The CoWork OBO mechanics. The On-Behalf-Of token mechanics in the firm-extension layer are subtle. The piece argues that the CoWork service must request downstream tokens under the user's identity, not its own service principal — which sounds obvious until you realize this is the most common misconfiguration I've seen in CoWork builds. If you've shipped a different pattern that holds up in front of an Identity-Office review, share the architecture.
- The "Anthropic non-retention" reframing. The piece treats Anthropic's non-retention of tenant content as a Microsoft contractual posture under the DPA, not as an architectural guarantee independently verifiable by the firm. If your supervisory examiners are asking for architectural evidence beyond contractual commitment, what posture are you taking? This is a finding the firm community needs to compare notes on.
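To make the OBO point above concrete, here is a minimal sketch of the token request the firm-extension service would assemble. The parameter names follow the Microsoft identity platform's OAuth 2.0 On-Behalf-Of grant; the function, tenant, and scope values are illustrative placeholders, not the actual CoWork implementation.

```python
# Sketch of the OAuth 2.0 On-Behalf-Of (OBO) token request a firm-extension
# service would assemble before calling a downstream API. Assumption: the
# function name and placeholder values are ours; the form-field names are
# the Microsoft identity platform's documented OBO grant parameters.

def build_obo_request(tenant_id: str, client_id: str, client_secret: str,
                      incoming_user_token: str, downstream_scope: str):
    """Return the token endpoint and form body for the OBO grant.

    The key point: `assertion` carries the *user's* inbound access token,
    so the downstream token is issued for the user's identity. A plain
    client-credentials grant (no assertion, service-principal identity
    only) is the misconfiguration to avoid.
    """
    token_endpoint = (
        f"https://login.microsoftonline.com/{tenant_id}/oauth2/v2.0/token"
    )
    body = {
        "grant_type": "urn:ietf:params:oauth:grant-type:jwt-bearer",
        "client_id": client_id,
        "client_secret": client_secret,
        "assertion": incoming_user_token,  # the user's token, not service creds
        "scope": downstream_scope,
        "requested_token_use": "on_behalf_of",
    }
    return token_endpoint, body
```

In practice a library such as MSAL handles this exchange; the sketch only shows why the audit question "whose identity is in the assertion?" is answerable from the request itself.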
Caveat on the sample classification
The Word document carries an "Internal — Restricted" classification in its template header. That is the classification a firm adopting this document should apply to its own copy — not a claim that the published version is itself restricted material. The document is a reference template; the contents are synthesized from public Microsoft documentation and public regulatory frameworks. Once a firm adopts it as a starting point, its deployment specifics make that copy genuinely internal, and the classification applies.
Apply this under the supervision of in-house Legal, Compliance, and Risk. It is not legal or compliance advice. Send corrections; the piece is open to update with dated revision notes.