This article presents a practical adaptation of AI Native Development for AL development in Dynamics 365 Business Central. It is not a normative guide but a pragmatic approach born from innovation projects, designed to transform experimentation into repeatable, auditable engineering practices.
I will share the repository and public project very soon.
Many developers began interacting with AI in an ad-hoc way—firing off prompts and hoping for a useful result. Treating AI as a simple utility becomes an obstacle when building efficient systems. When consistent and reliable outputs are required for complex tasks, improvisation causes inconsistency and makes reproducibility difficult.
AI Native Development emerges to solve this problem: a systematic approach that converts AI experimentation into disciplined engineering practice. It redefines the developer’s role: instead of manually supervising agent conversations, the developer becomes a proactive system architect. The goal is to move from manual supervision to delegation through engineering.
This article explains the framework’s three layers—Markdown Prompt Engineering, Agent Primitives, and Context Engineering—so teams can build robust and reliable AI systems in AL.
At the core of the framework is Markdown Prompt Engineering. Instead of issuing ambiguous natural-language requests, we use Markdown’s semantic structure — headings, lists, emphasis and, crucially, links — to convert natural language into structured instructions. Links act as a primary context-loading mechanism, allowing the agent to inject relevant content from other files directly into its reasoning process.
The main advantage is that this approach actively guides the model’s cognitive process. Rather than guessing what you want, the model follows a logical architecture, producing far more predictable and consistent results.
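As a sketch of what this looks like in practice (the object names, file names, and rules below are invented for illustration, not taken from any real project), a Markdown-structured prompt for an AL task might be:

```markdown
# Task: Add a posting validation to the Customer table

## Context
- Code style rules: [al-code-style.instructions.md](./al-code-style.instructions.md)
- Target object: a `Customer` table extension (hypothetical)

## Constraints
- Do NOT modify base application objects.
- Raise errors with `Error()` and a label, never hard-coded text.

## Expected output
- One `.al` file containing the table extension and its validation trigger.
```

The headings tell the model which part of the text plays which role, the list items enumerate hard constraints, and the link lets the agent pull the style rules into its context instead of you pasting them in.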
Key techniques in this layer include semantic headings to structure intent, lists to enumerate constraints, emphasis to mark priorities, and links to load context from other files.
Although Markdown prompt engineering is a powerful technique, applying it manually to every task is neither sustainable nor scalable, which leads naturally to the second layer: Agent Primitives.
Agent Primitives are the implementation layer: configurable building blocks that systematically apply prompt engineering techniques. Think of them as reusable libraries and configuration modules in traditional software development. They let you compose AI systems from modular, interoperable components.
Primitives encapsulate instructions, roles and workflows into modular files that can be invoked consistently.
| Primitive type | File extension | Purpose |
|---|---|---|
| Agentic workflows | `.prompt.md` | Orchestrate end-to-end processes (e.g. `al-build`, `al-performance`) with human gates. |
| Chat modes | `.chatmode.md` | Role-based specialists (e.g. `al-architect`, `al-debugger`) with explicit CAN/CANNOT boundaries. |
| Instruction files | `.instructions.md` | Concise persistent rules (e.g. `al-code-style`, `al-naming-conventions`). |
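For example, a minimal instruction-file primitive could look like the sketch below (the affix and rules are placeholders, not the toolkit's actual content):

```markdown
---
description: AL naming conventions for this repository
applyTo: "**/*.al"
---

# AL Naming Conventions

- Prefix every object name with the team affix (placeholder: `TSD`).
- Use PascalCase for procedures and variables.
- Name test codeunits `<Feature>Tests`.
```

Because the rules live in a versioned file rather than in someone's chat history, every agent session starts from the same conventions.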
Agent Primitives turn ad-hoc requests into auditable, systematic workflows and become reusable knowledge assets that improve over time. But even the best-designed primitives can fail if the model is exposed to irrelevant information that overwhelms its limited attention, which brings us to the third and final layer.
Large language models have a finite working memory — the “context window.” If that window is filled with irrelevant information, model performance degrades. Context Engineering is the strategic management of that window to maximize an agent’s effectiveness.
Practical aspects of Context Engineering:
- Use a targeted `applyTo` field in frontmatter instead of broad globs (e.g. `al-testing.instructions.md` activates only for files under `**/test/**/*.al`).
- `AGENTS.md` supports consistent context packaging for builds and deployments.

By controlling which guidance the agent sees at each step, Context Engineering reduces noise and error, enabling the primitives and prompt engineering to operate reliably.
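The `applyTo` gating above can be sketched in a few lines of Python. This is a simplified illustration of the idea, not the actual matching logic of any editor or agent runtime: the file names and patterns are hypothetical, and `fnmatch` treats globs more loosely than real `**` semantics.

```python
from fnmatch import fnmatch

# Hypothetical instruction files mapped to their applyTo globs.
INSTRUCTION_FILES = {
    "al-code-style.instructions.md": "**/*.al",
    "al-testing.instructions.md": "**/test/**/*.al",
}

def active_instructions(edited_file: str) -> list[str]:
    """Return the instruction files whose applyTo pattern matches the path."""
    return [
        name
        for name, pattern in INSTRUCTION_FILES.items()
        if fnmatch(edited_file, pattern)
    ]

# A test file pulls in both the general style rules and the testing rules;
# an app file only pulls in the style rules.
print(active_instructions("src/test/unit/CustomerTests.al"))
print(active_instructions("src/app/Customer.Codeunit.al"))
```

The point of the sketch is the shape of the mechanism: each edit loads only the guidance whose pattern matches, so the context window carries testing rules only when a test file is in play.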
The three layers – Markdown Prompt Engineering, Agent Primitives and Context Engineering – do not work in isolation. They combine to create reliable Agentic Workflows, where each layer reinforces the others.
The formula that sums up this synergy is simple but powerful:
Markdown Prompt Engineering + Agent Primitives + Context Engineering = Reliability.
This framework represents a fundamental shift: it allows developers to move from manual, reactive monitoring to proactive systems architecture. Instead of managing conversations, developers build reusable configurations that securely delegate complete workflows to agents. This creates composite intelligence: knowledge assets that improve with every use and scale across the entire organisation, transforming AI from an experimental tool into a trusted engineering partner.
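To make "delegating a complete workflow" concrete, here is a minimal `.prompt.md` sketch modeled on the conventions described above; its contents are hypothetical, not taken from the toolkit:

```markdown
---
description: Build and validate the current AL app
---

# al-build workflow

1. Compile the app and collect compiler warnings.
2. Run all test codeunits under `test/`.
3. STOP and ask the user to confirm before publishing to any
   environment (human validation gate).
```

The explicit STOP step is what keeps delegation safe: the agent runs the mechanical stages unattended, but a human still approves anything with side effects.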
The AI Native AL Development Toolkit is the practical implementation of AI Native Development for AL and Business Central projects. Below is an operational summary of the collection (28 primitives) that explains what it includes and how it is organized.
| Section | Contents | Notes |
|---|---|---|
| Instruction files (`.instructions.md`) | *(see repository)* | Auto-applied via `applyTo` patterns for optimal Context Engineering. |
| Agentic workflows (`.prompt.md`) | *(see repository)* | Complete processes, invocable (e.g. `@workspace use [prompt-name]`), with human validation gates. |
| Chat modes (`.chatmode.md`) | *(see repository)* | Tool boundaries enforce CAN/CANNOT lists for safety. |
| Master guide | `copilot-instructions.md` — master doc with examples and guidance. | Central guide for orchestrating the collection. |

Total: 28 Agent Primitives (7 + 14 + 6 + 1).
Implemented via this toolkit, these layers yield tangible benefits for teams.
Agents stop being a hobby when we treat them as code: versioned artifacts, clear protocols, and automated validations. Applied with discipline, this approach reduces repetitive tasks, improves consistency and preserves operational control. No shortcuts—only processes that work. With rigor, agents move from promise to a reliable engineering partner.
This text reflects my practical adaptation of AI Native Development to AL development: the AI Native AL Development Toolkit. Consider it a starting point to convert experimental practices into reproducible, auditable processes adapted to your organization.
These references collect resources and examples about AI-native practices that inspired some of the ideas and conventions presented in this toolkit. I recommend reviewing the original sources for deeper detail and practical examples.
Subscribe to the channel (it encourages all of this and keeps it going).
Click «like» if you enjoyed it.
If you don’t want to miss anything, you know: press the bell.
Leave me any ideas, doubts, corrections or contributions in the comments. Everything is welcome.
Note:
The content of this article was generated in part with the help of AI, for review, ordering, or summarizing.
The content, ideas, comments, and opinions are entirely human. When a post is based on, or the idea to write it arises from, other content, that content will be referenced, whether official or third-party.
Of course, both the human and the AI can make errors.
I encourage you to point them out in the comments; for more information, see the AI responsibility page of the TechSphereDynamics blog.
Original Post https://techspheredynamics.com/2025/10/21/adapting-the-ai-native-development-architecture-to-dynamics-365-business-central/