If you’re following the latest AI trends, you know that the term “MCP” has been coming up a lot in recent months.
While AI models continue to advance in reasoning and quality, their capabilities are often constrained by limited access to data. Each new data source requires a custom integration, making truly connected systems difficult to scale.
MCP stands for Model Context Protocol, an open standard introduced by Anthropic with the main goal of standardizing how AI applications connect with external tools, data sources, and systems.
Today it’s often quite difficult to connect an AI system to the external tools and systems needed to perform certain operations. The first answer to this problem was what is called function calling. Function calling lets you connect models to external tools and APIs: instead of only generating text responses, the model understands when to call specific functions and provides the necessary parameters to execute real-world actions. This allows the model to act as a bridge between natural language and real-world actions and data:
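To make this concrete, here is a minimal sketch of function calling with the OpenAI Node SDK (other providers work similarly, with different syntax). The getCustomerBalance tool and its parameters are hypothetical examples used only for illustration:

```typescript
import OpenAI from "openai";

const client = new OpenAI(); // reads OPENAI_API_KEY from the environment

// The developer describes the available functions with a JSON schema.
// "getCustomerBalance" is a hypothetical example tool.
const tools: OpenAI.Chat.Completions.ChatCompletionTool[] = [
  {
    type: "function",
    function: {
      name: "getCustomerBalance",
      description: "Returns the open balance for a given customer",
      parameters: {
        type: "object",
        properties: {
          customerNo: { type: "string", description: "Customer number" },
        },
        required: ["customerNo"],
      },
    },
  },
];

const response = await client.chat.completions.create({
  model: "gpt-4o",
  messages: [{ role: "user", content: "What is the balance of customer 10000?" }],
  tools,
});

// The model does not execute anything: it only emits a structured call
// (function name + JSON arguments) that our application must run itself.
const call = response.choices[0].message.tool_calls?.[0];
if (call) {
  console.log(call.function.name);      // "getCustomerBalance"
  console.log(call.function.arguments); // '{"customerNo":"10000"}'
}
```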
Model Context Protocol (MCP) instead solves this problem by introducing a universal standard for connecting AI to data and tools. With MCP, developers no longer have to reinvent the wheel for every data source they need to use with an LLM, but they can use a standard protocol for that scope:
MCP provides a single plug-and-play interface that any AI model can use to retrieve information from any system or execute tasks in any system.
MCP helps in building agents and complex workflows on top of LLMs. LLMs frequently need to integrate with data and tools, and MCP provides:
As nicely detailed on the official MCP site, MCP follows a client-server architecture where a host application can connect to multiple servers:
An MCP server exposes data sources and MCP clients (AI apps) connect to those servers.
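As an illustration of this architecture, here is roughly what a minimal MCP server looks like with the official TypeScript SDK (a sketch based on the SDK’s quickstart; exact API details may vary between SDK versions):

```typescript
import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";
import { z } from "zod";

// Declare the server with a name and a version.
const server = new McpServer({ name: "demo-server", version: "0.1.0" });

// Expose a tool. Any MCP client (Claude Desktop, GitHub Copilot, Cursor...)
// can discover and invoke it; "echo" is just an illustrative example.
server.tool(
  "echo",
  { message: z.string() },
  async ({ message }) => ({
    content: [{ type: "text", text: `Echo: ${message}` }],
  })
);

// Talk to the host application over stdio.
const transport = new StdioServerTransport();
await server.connect(transport);
```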
MCP includes 4 main components:
The Model Context Protocol (MCP) is set to become the new standard for connecting AI Agents and Assistants to the systems where data resides, including content repositories, business tools, and development environments.
A natural question at this point is: what is the difference between Function Calling and MCP?
Both Function Calling and the Model Context Protocol (MCP) are mechanisms that enable LLMs to interact with external tools or systems, but they differ in their focus, implementation, and scope.
Function Calling is a capability built into many LLMs that allows them to interpret a user’s natural language prompt and generate structured instructions to call predefined functions or APIs. The LLM doesn’t execute the function itself; it outputs a specification (e.g., function name and parameters) that an external application can use to perform the action.
Function Calling focuses on the “generation” phase (translating a prompt into a structured command). It’s LLM-specific and varies across providers (e.g., OpenAI, Azure OpenAI, Anthropic, Google etc.) in terms of syntax and capabilities. It’s about deciding what to call and how to structure it.
Function Calling is implemented within the LLM itself. Developers define a schema (e.g., JSON) of available functions, and the LLM decides when and how to use them based on the prompt.
Function Calling lacks a universal standard. Each LLM provider (OpenAI, Google, etc.) has its own way of structuring function calls, which can complicate integration across models (this is where frameworks like Semantic Kernel or LangChain can help).
Model Context Protocol (MCP) is a standardized framework designed to manage the execution of LLM-generated instructions (like function calls) across external tools or services. It goes beyond just generating the call by providing a consistent protocol for tool discovery, invocation, and response handling. MCP aims to create a uniform way for LLMs to interact with diverse systems.
MCP covers the “execution” phase. It takes the LLM’s output (e.g., a function call) and ensures it’s executed correctly by the target system, handling the logistics of connecting to tools and returning results. MCP is less about generating the call and more about standardizing how it’s carried out.
MCP acts as an intermediary layer. It uses a standardized request format (e.g., JSON-RPC) that applications translate LLM outputs into, ensuring compatibility across tools. It abstracts away the differences in LLM outputs, making execution consistent.
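To give an idea of what this standardization looks like on the wire, here is a sketch (written as TypeScript object literals) of the JSON-RPC 2.0 tools/call exchange described by the MCP specification; the tool name and values are illustrative:

```typescript
// Request sent by the MCP client to the server: one standard shape,
// whatever LLM produced the original function call.
const request = {
  jsonrpc: "2.0",
  id: 1,
  method: "tools/call",
  params: {
    name: "getCustomerBalance",
    arguments: { customerNo: "10000" },
  },
};

// Response returned by the MCP server after executing the tool:
// results always come back in the same standardized structure.
const response = {
  jsonrpc: "2.0",
  id: 1,
  result: {
    content: [{ type: "text", text: "Open balance for customer 10000: 1,250.00" }],
  },
};
```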
MCP was born for standardization. It provides a unified protocol that any LLM or application can adopt, enabling scalability and interoperability across diverse systems and tools.
To summarize the comparison, I can emphasize the following points:

Function Calling:
- translates a natural-language prompt into a structured call (function name + parameters);
- is implemented inside the LLM, with provider-specific syntax and capabilities;
- leaves the execution of the call to your application.

MCP:
- standardizes how tools are discovered, invoked, and how results are returned;
- is model-agnostic: any MCP-compliant client can talk to any MCP server;
- is designed for scalability and interoperability across diverse systems.
If you’re building a simple AI-powered app with one LLM and a few functions, Function Calling alone might suffice. For enterprise-scale integration with many tools and LLMs, MCP’s standardization makes it a valuable option to evaluate.
Does it make sense to have an MCP server for Dynamics 365 Business Central? In my opinion, YES!
An MCP server for Dynamics 365 Business Central lets you integrate ERP-powered actions and ERP data directly into your favorite MCP-compliant AI tools, like Microsoft Copilot, GitHub Copilot, Claude Desktop, and many more.
Here is a quick example (alpha version) of an MCP server that I’ve created for managing, with AI, the AppSource apps and their related updates in a given customer’s tenant (I plan to add more features in a future release).
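I won’t paste the real implementation here, but conceptually one of these tools can be sketched as follows. This is a hypothetical sketch, not my actual code: the tool name, the Admin Center API route, and the token handling are simplified for illustration:

```typescript
import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
import { z } from "zod";

const server = new McpServer({ name: "bc-admin-center", version: "0.1.0" });

// Hypothetical tool: list the apps installed in a given Business Central
// environment by calling the Admin Center API. Authentication is reduced
// to a bearer token read from the environment for brevity.
server.tool(
  "getInstalledApps",
  { environmentName: z.string() },
  async ({ environmentName }) => {
    const res = await fetch(
      `https://api.businesscentral.dynamics.com/admin/v2.21/applications/BusinessCentral/environments/${environmentName}/apps`,
      { headers: { Authorization: `Bearer ${process.env.BC_ADMIN_TOKEN}` } }
    );
    const apps = await res.json();
    return { content: [{ type: "text", text: JSON.stringify(apps) }] };
  }
);
```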
My MCP server is registered in GitHub Copilot, and you can see that some new tools are now available:
When opening the Copilot Chat in Agent mode, you can see that my agent has those 4 tools available to perform tasks (the tools exposed by my MCP server are automatically discovered):
Now I can ask GitHub Copilot which AppSource apps I have installed in my Dynamics 365 Business Central tenant. Here is the response from the AI agent:
As you can see, the agent recognizes that, to answer the question, it needs to use my MCP server, and it then invokes the right tool.
Now let’s see if I have some updates available for those installed apps:
Wow… I have 2 apps to update in my environment!
Let’s update an app with GitHub Copilot:
Done! The selected app is now scheduled for update…
This is just a prototype (consider it an ALPHA version), but as you can see, it can open many doors. We can have tools in the MCP server that read data from your Business Central tables, your corporate network, your files, third-party applications, and more. And the best part is that this newly created Dynamics 365 Business Central Admin Center MCP Service can be used from any AI tool that supports MCP servers.
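For example, a data-reading tool could be sketched along these lines (again a hypothetical sketch: the standard Business Central API is real, but <tenantId> and <environment> are placeholders and authentication is simplified):

```typescript
import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
import { z } from "zod";

const server = new McpServer({ name: "bc-data", version: "0.1.0" });

// Hypothetical tool reading customers through the standard Business Central
// API (v2.0). Replace <tenantId> and <environment> with real values.
server.tool(
  "getCustomers",
  { companyId: z.string() },
  async ({ companyId }) => {
    const baseUrl =
      "https://api.businesscentral.dynamics.com/v2.0/<tenantId>/<environment>/api/v2.0";
    const res = await fetch(`${baseUrl}/companies(${companyId})/customers`, {
      headers: { Authorization: `Bearer ${process.env.BC_API_TOKEN}` },
    });
    const data = await res.json();
    // The standard BC APIs return an OData payload: rows live in "value".
    return { content: [{ type: "text", text: JSON.stringify(data.value) }] };
  }
);
```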
Here, for example, is my MCP service registered in the Cursor agent (a completely different tool):
Cool… isn’t it?
I see a lot of future in this. Expect to see more on this topic, for sure in my upcoming sessions at BC DAY Italy, DynamicsMinds, and BC TechDays.
P.S. here is a quick video with GitHub Copilot + my Dynamics 365 Business Central Admin Center MCP Service in action:
As you can see, MCP represents a powerful new paradigm in AI integration, giving you a clean and reusable framework for plugging LLMs into practically anything.
P.S. please remember that MCP is a very new protocol and is still under active development.
Original Post https://demiliani.com/2025/04/09/an-mcp-server-for-dynamics-365-business-central-why-not/