
🤖 AI Agents in Production: What Actually Goes Wrong

The conversation begins with a real scenario: a team asks an AI agent to quickly integrate an internal system with Microsoft Graph. What should have been a simple task exposes a cascade of issues: unexpected API calls, unsafe defaults, and behavior that engineers can't easily reproduce or debug.
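The episode does not prescribe a fix, but the "unexpected API calls" failure suggests a simple deterministic gate: a deny-by-default allowlist of the endpoints an agent may touch. A minimal sketch, with illustrative (not episode-specified) Graph endpoint names:

```python
# Hypothetical guardrail: deny-by-default allowlist for the Graph
# endpoints an agent is permitted to call. Endpoint names below are
# illustrative assumptions, not from the episode.
ALLOWED_ENDPOINTS = {
    ("GET", "/v1.0/me"),
    ("GET", "/v1.0/me/messages"),
}

def is_call_allowed(method: str, path: str) -> bool:
    """Return True only if the (method, path) pair is explicitly approved."""
    return (method.upper(), path) in ALLOWED_ENDPOINTS

# A read the task actually needs passes; an unexpected write does not.
print(is_call_allowed("GET", "/v1.0/me/messages"))   # True
print(is_call_allowed("POST", "/v1.0/me/sendMail"))  # False
```

The key design choice is deny-by-default: anything the agent improvises that was not explicitly approved is rejected, rather than trusting the agent to stay in scope.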
🔐 Security, Permissions, and Accidental Chaos

One of the most critical themes is security. The episode emphasizes why traditional security models break down when agents are treated as "junior engineers" rather than untrusted automation.

🧠 Determinism Still Matters (Even With AI)

Despite advances in LLMs, the episode reinforces that deterministic systems are still essential.
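One way to keep determinism in the loop is to validate an agent's proposed action against a fixed schema before anything executes. A sketch, assuming a hypothetical JSON action format (the schema and action names are illustrative):

```python
import json

# Deterministic check on an agent's proposed action before execution.
# The action schema here is an assumption for illustration only.
REQUIRED_KEYS = {"action", "target", "reason"}
SAFE_ACTIONS = {"read", "list"}  # anything else requires human review

def validate_proposal(raw: str) -> dict:
    """Parse and validate agent output; fail loudly instead of guessing."""
    proposal = json.loads(raw)  # malformed output raises immediately
    missing = REQUIRED_KEYS - proposal.keys()
    if missing:
        raise ValueError(f"missing keys: {sorted(missing)}")
    if proposal["action"] not in SAFE_ACTIONS:
        raise ValueError(f"action {proposal['action']!r} needs human approval")
    return proposal
```

The validator, not the model, is the final authority: the agent may propose, but only a deterministic check decides whether the proposal runs.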
AI can assist, but it should never be the final authority without checks.

🛠️ Best Practices for Building AI Agents Safely

The episode turns these failures into practical guidance for engineers shipping agents.
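One concrete practice implied by the "can't easily reproduce or debug" failure is an audit trail of every tool invocation, so agent behavior can be replayed after the fact. A minimal sketch (field and function names are illustrative assumptions):

```python
import time

# Sketch: record every tool invocation so agent behavior can be
# replayed and debugged later. Field names are illustrative.
audit_log: list[dict] = []

def run_tool(name, args, tool):
    """Execute a tool call and log it, even if the call raises."""
    entry = {"ts": time.time(), "tool": name, "args": args}
    try:
        entry["result"] = tool(**args)
        return entry["result"]
    finally:
        audit_log.append(entry)

# Usage: the log captures exactly what the agent did and with what inputs.
result = run_tool("add", {"a": 2, "b": 3}, lambda a, b: a + b)
```

Because the log is append-only and written in a `finally` block, failed calls are captured too, which is precisely what debugging an opaque agent run requires.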
Tools and platforms like GitHub and modern AI APIs from OpenAI can accelerate development, but only when paired with strong engineering discipline.

🎯 Who This Episode Is For

This episode is especially valuable for engineers and teams responsible for deploying AI agents against production systems.
🚀 Final Takeaway

AI agents are powerful, but power without control creates risk. This episode cuts through marketing noise to show what happens when agents meet real infrastructure, real users, and real security constraints. The message is clear: AI agents should augment engineers, not replace engineering judgment.
Become a supporter of this podcast: https://www.spreaker.com/podcast/m365-show-modern-work-security-and-productivity-with-microsoft-365–6704921/support.






