Connecting Claude to D365FSC with MCP

If you’ve been following along with this series, you already know the journey we’ve been on. In Part 1, we manually exported D365 security data, pasted it into a Claude chat window, and let AI do the analysis heavy lifting – effective, but gathering the data is a manual process. In Part 2, we graduated to the Claude API with a C# console app that queried D365 directly via OData and sent the data programmatically – a more automated option, but Claude was still essentially a passenger being handed pre-packaged data rather than a driver who could go get it herself.

Part 3 is where things get genuinely interesting. We’re going to give Claude the keys.

By the end of this post, you’ll have a working MCP server that exposes your D365 F&SC security data as tools Claude can discover and call on its own.

What Is MCP, and Why Should D365 Professionals Care?

MCP stands for Model Context Protocol. It’s an open standard introduced by Anthropic in late 2024 that defines a common way for AI models to interact with external tools and data sources. Think of it as a universal adapter: instead of every AI integration needing its own bespoke API wrapper, MCP gives you one consistent protocol that any MCP-compatible AI client (Claude Desktop, Claude Code, and a growing list of third-party tools) can use to discover and call your tools.

If you’ve ever built a REST API, the mental model is pretty familiar. Your MCP server advertises a list of tools with names, descriptions, and input schemas. The AI client discovers those tools, decides when they’re relevant, and calls them with appropriate parameters. Your server executes the logic and returns the result.
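To make that concrete, here is roughly what a single tool looks like in the `tools/list` response an MCP server returns during discovery. The field names (`name`, `description`, `inputSchema`) come from the MCP specification; the tool name matches the server we build later in this post, and the `top` parameter is just an illustrative input:

```json
{
  "tools": [
    {
      "name": "GetUsers",
      "description": "Returns system users from the D365 F&SC environment via OData.",
      "inputSchema": {
        "type": "object",
        "properties": {
          "top": {
            "type": "integer",
            "description": "Optional maximum number of user records to return."
          }
        }
      }
    }
  ]
}
```

The descriptions matter more than they would in a traditional API: they are the only documentation Claude has when deciding whether a tool is relevant to the user's question.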

The key difference from a traditional API integration is that Claude decides when to call tools and which ones to call. You don’t have to pre-script “first fetch users, then fetch roles, then send everything to Claude.” You describe what each tool does, and Claude reasons about how to use them to answer whatever the user is asking. It’s the difference between handing someone a printed report and giving them live access to the database.

The Three-Layer MCP Architecture

Before we dive into the how, it helps to understand the what. An MCP setup has three players:

  • The host – an MCP-compatible AI client such as Claude Desktop or Claude Code, which discovers and calls tools
  • The MCP server – your code, which advertises the tools and executes them when called
  • The data source – your D365 F&SC environment, reached over OData

Your D365 environment doesn’t know or care that an AI is involved. It just sees authenticated OData requests and returns the data. The MCP server is the middleman that translates between “Claude needs security role data” and “GET <D365>/data/SecurityUserRoles?$filter=xyz”.
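From D365’s perspective, a tool call is just an ordinary OData request. A sketch of what crosses the wire (the environment URL and filter value are placeholders; the bearer token comes from the MSAL client-credential flow configured later in this post):

```http
GET https://<your-env>.operations.dynamics.com/data/SecurityUserRoles?$filter=UserId eq 'ameyer'&$top=50
Authorization: Bearer <access-token-from-MSAL>
Accept: application/json
```

This is exactly the same request shape the Part 2 console app issued; the only thing that changes in Part 3 is *who decides* to issue it.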

Manual vs. API vs. MCP: An Honest Comparison

Now that we know what MCP is, let’s be honest about the tradeoffs across all three approaches. Each one has a legitimate place depending on your situation, and none of them is universally “best.”

Approach 1: Manual (Part 1)

You manually export data from a D365 security report (CSV or Excel) and drop it into a Claude chat window.

When it works well:

  • One-off analysis where setup time would dwarf the actual task
  • You don’t have API credentials or developer access
  • You want to explore a dataset before investing in automation
  • Your audience is a functional consultant who will never touch code

When it breaks down:

  • Data goes stale the moment you export it – you’re analyzing a snapshot, not reality
  • There’s a hard ceiling on data volume (context window limits)
  • Completely manual – no repeatability, no scheduling, no sharing
  • You’ll start recognizing the copy-paste fatigue after your third “quick analysis”

Verdict: Great for exploration and one-off audits. Not a strategy.

Approach 2: Direct API (Part 2)

A C# console app queries D365 via OData, packages the data, and sends it to Claude via the Anthropic API. The code decides what data to fetch, Claude decides what it means.

When it works well:

  • You need repeatable, scheduled analysis (run it nightly, email results)
  • The data requirements are well-defined and don’t change much
  • You want to own the full pipeline end-to-end
  • Token efficiency matters – you control exactly what data gets sent

When it breaks down:

  • You’re essentially writing a query planner in code – the prompts are rigid, the data fetch is pre-scripted
  • Adding a new data source means writing more code and redeploying
  • Claude can’t adapt mid-analysis (“actually, can you also check if this user has the Vendor Payments role?”)
  • Interactive conversations aren’t really possible – it’s a pipeline, not a dialogue

Verdict: Excellent for known, recurring workflows. Brittle for exploratory analysis.

Approach 3: MCP Server (Part 3 – this post)

Claude gets direct access to tools that query D365 on demand. You describe what each tool does; Claude decides when and how to use them.

When it works well:

  • Interactive analysis where the questions aren’t known upfront (“hey Claude, does anything look weird about this user’s access?”)
  • You want Claude to pull exactly the data it needs rather than receiving a data dump
  • Multiple analysts sharing one centrally managed integration
  • The data universe is broad – you don’t want to pre-script every possible query combination
  • Exploratory security reviews where one finding leads to another investigation

When it breaks down:

  • More upfront setup than the other two approaches (though less than you might expect)
  • Token costs can creep up if Claude makes lots of tool calls – which is actually why we’re also covering programmatic tool calling in this post
  • Requires Claude Desktop or another MCP-compatible host for interactive use
  • Not ideal for fully automated, no-human-in-the-loop pipelines where the API approach is cleaner

Verdict: The most powerful and flexible approach for interactive analysis. Pairs well with programmatic tool calling for batch scenarios.

Side-by-Side Summary

|                  | Manual              | Direct API          | MCP Server           |
|------------------|---------------------|---------------------|----------------------|
| Setup effort     | None                | Medium              | Medium               |
| Data freshness   | Stale snapshot      | Real-time           | Real-time            |
| Interactivity    | None                | None                | High                 |
| Repeatability    | None                | High                | Medium               |
| Flexibility      | Low                 | Low                 | High                 |
| Token efficiency | N/A                 | High                | Variable*            |
| Best for         | One-off exploration | Scheduled pipelines | Interactive analysis |

*Token efficiency for MCP improves significantly if you use programmatic tool calling, which caches results for processing.

Two Ways to Run an MCP Server: stdio vs. HTTP/SSE

The MCP protocol supports different transport mechanisms for how the host and server communicate. For this blog post we’re using stdio, but it’s worth understanding both options — especially if you’re thinking about this in an enterprise D365 context.

stdio (What We’re Building Today)

Claude Desktop spawns your MCP server as a child process and communicates through standard input/output pipes. It’s the simplest possible setup: one config file entry pointing at your executable, and Claude handles the rest.

This is the right choice for personal use, developer workstations, and blog post tutorials where you want readers to actually follow along without fighting network config. The tradeoff is that everything runs locally — which is fine if you’re the only one using it, but doesn’t scale to a team.

HTTP/SSE (The Enterprise Option)

Your MCP server runs as an ASP.NET Core web application, and Claude connects to it over HTTP using Server-Sent Events for the streaming channel. Host it in Azure Container Apps, put it behind Azure API Management for auth and rate limiting, and suddenly your entire D365 security team is sharing one centrally managed integration — with audit logs, access controls, and no one needing D365 credentials on their local machine.

The README file includes the one-line code change to flip from stdio to HTTP/SSE at the end of the implementation section. If your organization is serious about operationalizing this kind of AI-assisted security analysis, that’s the direction to grow into.
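For a feel of why the flip is so small, here is a sketch of what the server’s Program.cs looks like, assuming the official ModelContextProtocol C# SDK (still in preview, so builder method names may shift between versions; this is not the exact code from the repo):

```csharp
// Program.cs sketch – stdio-hosted MCP server using the ModelContextProtocol
// C# SDK's hosting extensions. Requires the ModelContextProtocol NuGet package.
using Microsoft.Extensions.DependencyInjection;
using Microsoft.Extensions.Hosting;

var builder = Host.CreateApplicationBuilder(args);

builder.Services
    .AddMcpServer()
    .WithStdioServerTransport()   // stdio: Claude Desktop spawns this exe as a child process
    .WithToolsFromAssembly();     // registers every [McpServerTool] method in this assembly

await builder.Build().RunAsync();

// For the HTTP/SSE flavor, swap the transport registration (the SDK's
// ModelContextProtocol.AspNetCore package provides an HTTP transport) and
// host the result as an ASP.NET Core app instead of a console app.
```

Because tool discovery is attribute-driven, the tool code itself doesn’t change at all when you change transports; only the hosting boilerplate does.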

What We’re Building

With the concepts established, here’s the concrete solution we’re going to build:

D365McpServer – A .NET 9 console app that:

  • Authenticates to D365 F&SC via MSAL (same app registration as Part 2)
  • Exposes four MCP tools: GetUsers, GetSecurityRoles, GetUserRoleAssignments, and RunODataQuery
  • Runs via stdio, spawned by Claude Desktop

D365ProgrammaticClient – A separate .NET 9 console app that:

  • Connects to the MCP server directly (no Claude Desktop required)
  • Demonstrates how to pre-fetch the role schema once, then batch-analyze multiple users without re-sending it on every API call
  • Shows the cross-environment comparison pattern as a natural extension
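The heart of the programmatic client is a simple batch loop: fetch the role schema once, then reuse it while analyzing many users instead of re-sending it on every call. A sketch of the pattern (the `d365` helper, entity names, and model ID are illustrative, not the repo’s exact code; `http` is assumed to be an HttpClient preloaded with the `x-api-key` and `anthropic-version` headers the Anthropic Messages API requires):

```csharp
// Fetch the (large, unchanging) role schema once, not once per user.
var roleSchema = await d365.GetAsync("SecurityRoles");

foreach (var user in usersToAnalyze)
{
    // Per-user data is small; only this part varies between iterations.
    var userRoles = await d365.GetAsync($"SecurityUserRoles?$filter=UserId eq '{user.Id}'");

    var request = new
    {
        model = "claude-sonnet-4-5",   // placeholder model ID
        max_tokens = 1024,
        system = $"You are a D365 security analyst. Role schema:\n{roleSchema}",
        messages = new[]
        {
            new { role = "user", content = $"Review these role assignments for risk:\n{userRoles}" }
        }
    };

    var response = await http.PostAsJsonAsync("https://api.anthropic.com/v1/messages", request);
    Console.WriteLine(await response.Content.ReadAsStringAsync());
}
```

Keeping the schema in the system prompt also makes it a good candidate for Anthropic’s prompt caching, which is where the token savings mentioned in the comparison table come from.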

Both projects use the same appsettings.json config pattern from Part 2, so if you’ve already got that wired up, most of the heavy lifting is done.

Building the MCP Server

I first asked Claude to build the structure of the MCP server as a .NET project, along with instructions for configuring it for use within Claude Desktop (so that we could call this functionality from within the desktop app). Claude broke this into a couple of different projects:

The core of the MCP server is found in the D365SecurityTools class, where we describe what each MCP tool does and write the code that runs when that tool is invoked. This is where you could add functionality if you wanted the solution to query other areas of the system, although it does include a generic ‘RunODataQuery’ tool that Claude can fall back on when none of the other options fit the user request:
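As a sketch of what a tool in D365SecurityTools looks like, assuming the ModelContextProtocol C# SDK’s attribute model (the `D365Client` helper and the method bodies here are illustrative; see the GitHub repo linked below for the real code):

```csharp
// Tools are plain static methods; the SDK turns the [Description] attributes
// into the tool metadata Claude sees during discovery.
using System.ComponentModel;
using ModelContextProtocol.Server;

[McpServerToolType]
public static class D365SecurityTools
{
    [McpServerTool, Description("Gets security role assignments for a D365 F&SC user.")]
    public static async Task<string> GetUserRoleAssignments(
        D365Client d365,   // hypothetical OData helper, injected via DI
        [Description("The D365 user ID to look up.")] string userId)
    {
        // Whatever the method returns is serialized as the tool result sent to Claude.
        return await d365.GetAsync($"SecurityUserRoles?$filter=UserId eq '{userId}'");
    }

    [McpServerTool, Description("Runs an arbitrary OData query when no other tool fits.")]
    public static async Task<string> RunODataQuery(
        D365Client d365,
        [Description("Entity path plus query options, e.g. SystemUsers?$top=10.")] string query)
        => await d365.GetAsync(query);
}
```

Adding a new capability is just adding another attributed method; Claude discovers it automatically the next time the server starts.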

Now we need to tell the MCP server how to connect to your D365 instance. The connection uses a standard client-credential flow with a client ID and client secret, and the settings for this are found in the appsettings.json file for the project. If you have set this up from one of my previous posts you can reuse it; otherwise you will need to set up an Azure App Registration and provide the necessary authentication information:
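The shape of that file looks roughly like this (the section and key names are illustrative; match them to the appsettings.json in the GitHub repo, and keep the secret out of source control):

```json
{
  "D365": {
    "Resource": "https://<your-env>.operations.dynamics.com",
    "TenantId": "<azure-ad-tenant-guid>",
    "ClientId": "<app-registration-client-id>",
    "ClientSecret": "<client-secret>"
  }
}
```

The app registration needs to be added as an Azure AD application inside D365 F&SC (mapped to a user with the appropriate security roles) for the OData calls to succeed, same as in Part 2.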

After we are satisfied, build the D365McpServer project, which will produce a D365McpServer.exe. We then need to let Claude know where this server resides via the claude_desktop_config.json file. You can find it at <UserDirectory>\AppData\Roaming\Claude\claude_desktop_config.json; add your server as an entry there.
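The entry itself is small. Claude Desktop’s config uses an `mcpServers` object keyed by server name, with a `command` pointing at the executable to spawn (the path below is illustrative; use wherever your build output actually lives):

```json
{
  "mcpServers": {
    "d365-security": {
      "command": "C:\\path\\to\\D365McpServer\\bin\\Release\\net9.0\\D365McpServer.exe"
    }
  }
}
```

Note that backslashes in the path must be escaped in JSON, and the key (`d365-security` here) is the name you’ll see in the Developer settings screen.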

Note: I left my MCP server at its default build location. You’re more than welcome to move it to another file path; just be sure you bring the necessary DLLs and the corresponding appsettings.json file with it so the MCP server can connect to your D365 instance. (I’ve created a pre-built package for this on the GitHub link later in the post.)

Once this is done, be sure to fully close and restart Claude Desktop. Then navigate to Settings -> Developer -> Local MCP Servers and validate that your new d365-security MCP server is available and marked as ‘running’.

Note: You should see the same file path here as you put in the claude_desktop_config.json file.

We can now ask Claude questions about D365 data. For example, we can ask ‘show me all users that have more than 5 roles assigned to them’ and see that Claude does indeed utilize our MCP server: it connects to D365 and pulls the data it needs based on the request.

Note: I always err on the side of caution and tell Claude to use the ‘d365-security’ MCP when asking questions, just to make sure it utilizes the tool, although this isn’t actually required.

One thing to keep in mind: because Claude Desktop isn’t explicitly calling an AI API, using the MCP server from there falls under your normal AI subscription usage. If, however, you use the programmatic tool-calling client (which, you’ll notice, requires a Claude API key in its appsettings.json file), that consumes API credits, which are billed separately from your subscription. I’m planning a future blog post doing a deeper dive comparing these two offerings.

GitHub Link

This project can be found on my GitHub at: https://github.com/ameyer505/D365FSC-Security-MCP

Conclusion

Hopefully this series of posts has helped show the different options for using AI to analyze D365 data. Each option has its own pros and cons, so it’s important to know what’s out there so you can make the best choice for your own use case.

The post Connecting Claude to D365FSC with MCP appeared first on Alex Meyer.

Original Post https://alexdmeyer.com/2026/04/13/connecting-claude-to-d365fsc-with-mcp/
