
Data is raw material. Context is governed material. If you feed raw, permission-chaotic data into AI and call it context, you’ll get polished outputs that fail audit. Two boundaries matter:
First: bigger context ≠ better context. Bigger context often means diluted signal and increased hallucination risk. Measure context quality the way you measure infrastructure health.
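What might that measurement look like? A minimal sketch, assuming three illustrative signals (freshness, named ownership, duplication); the signal names and equal weights are assumptions for the sketch, not a Microsoft metric:

```typescript
// Hypothetical context-quality score. The signals and weights are
// illustrative assumptions, not a defined standard.
interface ContextSource {
  id: string;
  lastModified: Date;      // staleness signal
  hasNamedOwner: boolean;  // authority signal
  isDuplicate: boolean;    // dilution signal
}

// Score a candidate context set between 0 (noise) and 1 (governed).
function contextQuality(sources: ContextSource[], maxAgeDays = 365): number {
  if (sources.length === 0) return 0;
  const now = Date.now();
  const scores = sources.map((s) => {
    const ageDays = (now - s.lastModified.getTime()) / 86_400_000;
    const freshness = Math.max(0, 1 - ageDays / maxAgeDays);
    const authority = s.hasNamedOwner ? 1 : 0;
    const uniqueness = s.isDuplicate ? 0 : 1;
    return (freshness + authority + uniqueness) / 3;
  });
  // Average quality, not total volume: ten weak sources score worse
  // than three strong ones, which is the point.
  return scores.reduce((a, b) => a + b, 0) / scores.length;
}
```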
Second: if two sources disagree and you haven’t defined precedence, the model will average them into something that never existed. That’s not intelligence. That’s compromise rendered fluently.
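One blunt fix is an ordered precedence table: the highest tier wins, and a same-tier contradiction escalates instead of averaging. A sketch, with hypothetical tier names:

```typescript
// Illustrative precedence tiers; the names are assumptions for the sketch.
// Lower number wins.
const PRECEDENCE = {
  "policy-of-record": 0,
  "team-runbook": 1,
  "shared-drafts": 2,
} as const;

interface Claim {
  sourceTier: keyof typeof PRECEDENCE;
  statement: string;
}

function resolve(claims: Claim[]): Claim | "escalate" {
  if (claims.length === 0) return "escalate";
  const ranked = [...claims].sort(
    (a, b) => PRECEDENCE[a.sourceTier] - PRECEDENCE[b.sourceTier],
  );
  const topTier = PRECEDENCE[ranked[0].sourceTier];
  const top = ranked.filter((c) => PRECEDENCE[c.sourceTier] === topTier);
  // Contradiction at the same tier is a governance gap, not something
  // for the model to blend into fluent compromise.
  const distinct = new Set(top.map((c) => c.statement));
  return distinct.size === 1 ? top[0] : "escalate";
}
```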
3) Why Agents Fail First: Non-determinism meets enterprise entropy

Agents fail before chat does. Why? Because chat can be wrong and ignored. Agents can be wrong and create consequences. Agents choose tools, update records, send emails, provision access. That means ambiguity becomes motion. Typical failure modes:

- Wrong tool choice. The tenant never defined which system owns which outcome. The agent pattern-matches and moves.
- Wrong scope. “Clean up stale vendors” without a definition of stale becomes overreach at scale.
- Wrong escalation. No explicit ownership model? The agent escalates socially, not structurally.
- Hallucinated authority. Blended documents masquerade as binding procedure.

Agents don’t break because they’re immature. They break because enterprise context is underspecified. Autonomy requires evidence standards, scope boundaries, stopping conditions, and escalation rules. Without that, it’s motion without intent.
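Here is a sketch of those four requirements as a checkable policy. The shape is an assumption for illustration, not a Copilot Studio or Microsoft schema:

```typescript
// A guardrail policy an agent must satisfy before acting.
interface AgentPolicy {
  evidenceRequired: number; // minimum independent sources per action
  allowedSystems: string[]; // scope boundary: where the agent may write
  maxActionsPerRun: number; // stopping condition
  escalateTo: string;       // explicit ownership, not a social guess
}

interface ProposedAction {
  targetSystem: string;
  evidenceIds: string[];
}

type Verdict = { ok: true } | { ok: false; escalateTo: string; reason: string };

function checkAction(
  policy: AgentPolicy,
  action: ProposedAction,
  actionsSoFar: number,
): Verdict {
  if (actionsSoFar >= policy.maxActionsPerRun)
    return { ok: false, escalateTo: policy.escalateTo, reason: "stopping condition reached" };
  if (!policy.allowedSystems.includes(action.targetSystem))
    return { ok: false, escalateTo: policy.escalateTo, reason: "out of scope" };
  if (action.evidenceIds.length < policy.evidenceRequired)
    return { ok: false, escalateTo: policy.escalateTo, reason: "insufficient evidence" };
  return { ok: true };
}
```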
4) Graph as Organizational Memory, Not Plumbing
Microsoft Graph is not just APIs. It’s organizational memory. Storage holds files. Memory holds meaning. Graph encodes relationships: who works with whom, which files surface through activity rather than folder paths, which meetings and messages connect them.
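Reading that relational memory is a few calls. A sketch using two real Microsoft Graph v1.0 endpoints (/me/people and /me/drive/recent); token acquisition and error handling are deliberately minimal:

```typescript
// Two Graph v1.0 endpoints that expose relationships, not raw storage.
// Assumes a delegated OAuth token with People.Read and Files.Read.
const GRAPH = "https://graph.microsoft.com/v1.0";

async function graphGet(path: string, token: string): Promise<any> {
  const res = await fetch(`${GRAPH}${path}`, {
    headers: { Authorization: `Bearer ${token}` },
  });
  if (!res.ok) throw new Error(`Graph ${path} failed: ${res.status}`);
  return res.json();
}

async function showRelationships(token: string): Promise<void> {
  // People ranked by relevance to the signed-in user: a relationship,
  // not a record anyone explicitly stored.
  const people = await graphGet("/me/people?$top=5", token);
  // Files surfaced by recent activity rather than folder location.
  const recent = await graphGet("/me/drive/recent?$top=5", token);
  console.log(people.value?.map((p: any) => p.displayName));
  console.log(recent.value?.map((f: any) => f.name));
}
```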
Copilot consumes relational intelligence. But Graph only reflects what the organization leaves behind. If containers are incoherent, memory retrieval becomes probabilistic. If containers are engineered with ownership and authority, retrieval becomes repeatable. Agents need memory to understand context. But memory without trust is dangerous. Which brings us to permissions.

5) Permissions Are the Context Compiler

Permissions don’t just control access. They shape intelligence. Copilot doesn’t negotiate permissions. It inherits them. Over-permissioning creates AI-powered oversharing.
Under-permissioning creates AI mediocrity. Permission drift accumulates through everyday mechanics: sharing links, site copies, migrations, reorgs.
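Auditing that drift is mechanical. A sketch that walks a OneDrive root and flags sharing links with broad scope, using real Graph v1.0 endpoints; the debt heuristic itself is an assumption:

```typescript
// Surfaces permission debt before Copilot does. Assumes a delegated
// token with Files.Read; flags items shared by link.
const GRAPH = "https://graph.microsoft.com/v1.0";

async function graphGet(path: string, token: string): Promise<any> {
  const res = await fetch(`${GRAPH}${path}`, {
    headers: { Authorization: `Bearer ${token}` },
  });
  if (!res.ok) throw new Error(`Graph ${path} failed: ${res.status}`);
  return res.json();
}

async function flagLinkShares(token: string): Promise<void> {
  const items = await graphGet("/me/drive/root/children?$top=50", token);
  for (const item of items.value ?? []) {
    const perms = await graphGet(`/me/drive/items/${item.id}/permissions`, token);
    for (const p of perms.value ?? []) {
      // Broad sharing links are the classic drift vector: "anonymous"
      // and "organization" links outlive their purpose.
      if (p.link && p.link.scope !== "users") {
        console.log(`${item.name}: ${p.link.scope} link (${p.roles?.join(",")})`);
      }
    }
  }
}
```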
When Copilot arrives, it becomes a natural language interface to permission debt. Less eligible context often produces better answers. Least privilege is not ideology. It’s autonomy hygiene. Because agents don’t just read. They act.

6) Prompt Engineering vs Grounding Architecture

Prompting steers conversation. Grounding constrains decisions. Prompts operate at the interaction layer.
Grounding architecture operates at the substrate layer. Substrate wins. The non-negotiable grounding primitive is evidence: if the system can’t show evidence, it must escalate. Web grounding expands the boundary beyond your tenant. Treat it like public search. Prompts don’t control what the system is allowed to know. Permissions and grounding do.
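A sketch of that gate. The shape of "evidence" here is an assumption; the rule is the point: no in-tenant evidence, no autonomous answer.

```typescript
// Evidence-or-escalate as a hard gate, not a prompt instruction.
interface Evidence {
  sourceId: string;
  insideTenant: boolean; // web grounding is treated like public search
  excerpt: string;
}

type Grounded<T> =
  | { kind: "answer"; value: T; evidence: Evidence[] }
  | { kind: "escalate"; reason: string };

function gate<T>(value: T, evidence: Evidence[]): Grounded<T> {
  const tenantEvidence = evidence.filter((e) => e.insideTenant);
  // A claim resting entirely on public web grounding crosses a
  // different trust boundary; hand it to a human.
  if (tenantEvidence.length === 0)
    return { kind: "escalate", reason: "no in-tenant evidence" };
  return { kind: "answer", value, evidence: tenantEvidence };
}
```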
7) Relevance Windows: The Discipline Nobody Budgets For

Relevance windows define eligible evidence per workflow step. Not everything retrievable is admissible. The components: which sources count, how fresh they must be, and which one wins a conflict. More context increases contradictions. Tighter windows increase dependability. If a workflow cannot state “only these sources count,” it isn’t ready for agents.
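Declaring a window can be this small. The field and source names are assumptions; the discipline of writing them down is the point:

```typescript
// A relevance window per workflow step: eligibility is declared,
// not emergent.
interface RelevanceWindow {
  step: string;
  admissibleSources: string[]; // "only these sources count"
  maxAgeDays: number;          // freshness bound
  onConflictPrefer: string;    // which source wins a disagreement
}

interface Candidate {
  source: string;
  ageDays: number;
}

function admissible(window: RelevanceWindow, c: Candidate): boolean {
  return (
    window.admissibleSources.includes(c.source) &&
    c.ageDays <= window.maxAgeDays
  );
}

// Example: the vendor-cleanup task from earlier, with "stale" defined.
const vendorCleanup: RelevanceWindow = {
  step: "identify-stale-vendors",
  admissibleSources: ["erp-vendor-master", "procurement-contracts"],
  maxAgeDays: 540, // "stale" = no recorded activity in ~18 months
  onConflictPrefer: "erp-vendor-master",
};
```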
8) Dataverse as Operational Memory
Microsoft Dataverse is operational memory. State answers the questions an agent otherwise guesses at: what already happened, what is in flight, who approved it, what comes next. Without state, agents loop. With explicit state machines, agents stop guessing. They check. Operational memory reduces hallucinations without touching the model.
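A minimal sketch of an explicit state machine; the states are illustrative, and in practice each record’s state would live in a Dataverse column the agent reads before acting:

```typescript
// Explicit states for an agent-run workflow. The agent checks state;
// it never "remembers" progress in free text.
type State = "drafted" | "approved" | "executed" | "escalated";

const TRANSITIONS: Record<State, State[]> = {
  drafted: ["approved", "escalated"],
  approved: ["executed", "escalated"],
  executed: [],   // terminal: re-running must be a no-op, not a repeat
  escalated: [],  // terminal: a human owns the next move
};

function advance(current: State, next: State): State {
  if (!TRANSITIONS[current].includes(next)) {
    throw new Error(`Illegal transition ${current} -> ${next}`);
  }
  return next;
}

// The gate the agent consults before taking any consequential action.
function canExecute(state: State): boolean {
  return state === "approved";
}
```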
Become a supporter of this podcast: https://www.spreaker.com/podcast/m365-fm-modern-work-security-and-productivity-with-microsoft-365–6704921/support.
If this clashes with how you’ve seen it play out, I’m always curious. I use LinkedIn for the back-and-forth.