Adapting the AI Native Development Architecture to Dynamics 365 Business Central


This article presents a practical adaptation of AI Native Development for AL development in Dynamics 365 Business Central. It is not a normative guide but a pragmatic approach born from innovation projects, designed to transform experimentation into repeatable, auditable engineering practices.

I will share the repository and public project very soon.

From trial and error to reliable engineering

Many developers began interacting with AI in an ad-hoc way, firing off prompts and hoping for a useful result. That works for one-off questions, but treating AI as a simple utility becomes an obstacle when building systems: when consistent, reliable outputs are required for complex tasks, improvisation causes inconsistency and makes results hard to reproduce.

AI Native Development emerges to solve this problem: a systematic approach that converts AI experimentation into a disciplined engineering practice. It redefines the developer’s role: instead of manually supervising agent conversations, the developer becomes a proactive system architect. The goal is to move from manual supervision to delegation through engineering.

This article explains the framework’s three layers—Markdown Prompt Engineering, Agent Primitives, and Context Engineering—so teams can build robust and reliable AI systems in AL.

Layer 1 — Foundation: Markdown Prompt Engineering

At the core of the framework is Markdown Prompt Engineering. Instead of issuing ambiguous natural-language requests, we use Markdown’s semantic structure — headings, lists, emphasis and, crucially, links — to convert natural language into structured instructions. Links act as a primary context-loading mechanism, allowing the agent to inject relevant content from other files directly into its reasoning process.

The main advantage is that this approach actively guides the model’s cognitive process. Rather than guessing what you want, the model follows a logical architecture, producing far more predictable and consistent results.

Key techniques in this layer (combined in the example after the list):

  • Role Activation: Use patterns like “You are an expert in…” to focus the model on a specific domain or specialty, improving relevance and quality.
  • Structured Thinking: Headings and list items define a clear reasoning path; each item becomes a logical step in the model’s execution.
  • Human Validation Gates: Explicit checkpoints in prompts such as “Stop here and request user approval before proceeding” ensure human oversight at critical stages.
  • Tool Integration: Directives like “Use MCP tool [tool-name]” enable deterministic interactions with external tools, so the agent can perform operations rather than only generating text.
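
To make this concrete, here is a minimal sketch of a prompt file that combines all four techniques. The file name, the linked instruction file and the MCP tool name are illustrative assumptions, not files from any official toolkit:

```markdown
<!-- al-performance-review.prompt.md (hypothetical example) -->

# Role
You are an expert AL developer specializing in Dynamics 365 Business Central performance.

# Context
- Apply the rules in [al-performance](../instructions/al-performance.instructions.md).

# Steps
1. Analyze the selected codeunit for late filtering and missing SetLoadFields calls.
2. Propose optimized data-access patterns as a reviewable diff.
3. Stop here and request user approval before proceeding.

# Tools
Use MCP tool al-compiler (hypothetical tool name) to verify that the proposal still builds.
```

Each heading steers one phase of the model’s reasoning, the link injects the relevant rules into context, and step 3 is the human validation gate.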

Markdown Prompt Engineering is powerful, but applying it manually to every task is neither sustainable nor scalable, which leads naturally to the second layer: Agent Primitives.

Layer 2 — Implementation: Agent Primitives

Agent Primitives are the implementation layer: configurable building blocks that systematically apply prompt engineering techniques. Think of them as reusable libraries and configuration modules in traditional software development. They let you compose AI systems from modular, interoperable components.

Primitives encapsulate instructions, roles and workflows into modular files that can be invoked consistently.

| Primitive type | File extension | Purpose |
| --- | --- | --- |
| Agentic workflows | .prompt.md | Orchestrate end-to-end processes (e.g. al-build, al-performance) with human gates. |
| Chat modes | .chatmode.md | Role-based specialists (e.g. al-architect, al-debugger) with explicit CAN/CANNOT boundaries. |
| Instruction files | .instructions.md | Concise persistent rules (e.g. al-code-style, al-naming-conventions). |
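
As an illustration, a chat-mode primitive could look like the following sketch. The frontmatter and the boundary wording are assumptions modeled on the pattern just described, not the toolkit’s actual files:

```markdown
---
description: AL solution architect. Designs extensions; never executes builds.
---

# al-architect

You are an expert AL solution architect for Dynamics 365 Business Central.

## CAN
- Propose object models, table extensions and API designs.
- Reference the instruction files for naming conventions and code style.

## CANNOT
- Execute builds or deployments.
- Modify permission sets without explicit human approval.
```

The explicit CAN/CANNOT lists give the specialist a contract that can be reviewed and versioned like any other code artifact.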

Agent Primitives turn ad-hoc requests into auditable, systematic workflows and become reusable knowledge assets that improve over time. However, even the best-designed primitives can fail if the model’s limited attention is flooded with irrelevant information, a problem solved by the third and final layer.

Layer 3 — Strategy: Context Engineering

Large language models have a finite working memory — the “context window.” If that window is filled with irrelevant information, model performance degrades. Context Engineering is the strategic management of that window to maximize an agent’s effectiveness.

Practical aspects of Context Engineering:

  • Modular instruction loading: Use the applyTo field in frontmatter instead of broad globs (see the sketch after this list).
  • Selective loading: The system loads only the instructions relevant to the current file type (for example, al-testing.instructions.md activates only for files under **/test/**/*.al).
  • Context optimization: Selective loading preserves model capacity for the code that matters, improving comprehension and reducing contradictions.
  • Context portability: The repository structure and a consolidated AGENTS.md support consistent context packaging for builds and deployments.
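
A minimal sketch of such a scoped instruction file, assuming the applyTo frontmatter convention described above (the rules themselves are illustrative):

```markdown
---
applyTo: "**/test/**/*.al"
---

# AL Testing Rules

- Structure every test with Given/When/Then comments.
- Keep one scenario per test procedure and name it after the behavior under test.
```

Because the applyTo glob matches only test files, these rules never consume context while the agent works on production objects.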

By controlling which guidance the agent sees at each step, Context Engineering reduces noise and error, enabling the primitives and prompt engineering to operate reliably.

How the Three Layers Work Together

The three layers – Markdown Prompt Engineering, Agent Primitives and Context Engineering – do not work in isolation. They combine to create reliable Agentic Workflows, where each layer reinforces the others.

The formula that sums up this synergy is simple but powerful:

Markdown Prompt Engineering + Agent Primitives + Context Engineering = Reliability.

This framework represents a fundamental shift: it allows developers to move from manual, reactive monitoring to proactive systems architecture. Instead of managing conversations, developers build reusable configurations that safely delegate complete workflows to agents. This creates composite intelligence: knowledge assets that improve with every use and scale across the entire organisation, transforming AI from an experimental tool into a trusted engineering partner.

AI Native AL Development Toolkit — in brief

The AI Native AL Development Toolkit is the practical implementation of AI Native Development for AL and Business Central projects. Below is an operational summary of the collection (28 primitives) that explains what it includes and how it is organized.

📋 Instructions Files (7)

  • al-guidelines — Master hub
  • al-code-style — Feature-based org, 2-space indent, XML docs
  • al-naming-conventions — PascalCase, 26-char limit
  • al-performance — Early filtering, SetLoadFields, temp tables
  • al-error-handling — TryFunctions, error labels, telemetry
  • al-events — Event subscribers, integration patterns
  • al-testing — AL-Go, test generation rules, Given/When/Then

Note: auto-applied via applyTo patterns for optimal Context Engineering.

🎯 Agentic Workflows (14)

  • al-setup, al-workspace, al-build
  • al-events, al-debug, al-performance
  • al-permissions, al-troubleshoot, al-migrate
  • al-pages, al-workflow, al-spec.create
  • al-performance.triage, al-pr.prepare

Note: complete .prompt.md processes, invocable with human validation gates (e.g. @workspace use [prompt-name]).

💬 Chat Modes (6)

  • al-orchestrator — Smart router / workflow coordinator
  • al-architect — Solution design (no build execution)
  • al-debugger — Deep diagnosis
  • al-tester — Testing strategy, QA
  • al-api — RESTful API design & implementation
  • al-copilot — Copilot feature development

Note: tool boundaries enforce CAN/CANNOT lists for safety.

📖 Integration Guide (1)

  • copilot-instructions.md — Master doc with examples and guidance; the central guide for orchestrating the collection.

Total: 28 Agent Primitives (7 + 14 + 6 + 1)
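
Assuming the standard GitHub Copilot customization layout in VS Code, the collection could be organized in the repository along these lines (a sketch; the actual toolkit may differ):

```text
.github/
  copilot-instructions.md      # integration guide (1)
  instructions/                # 7 *.instructions.md files
    al-guidelines.instructions.md
    al-testing.instructions.md
    ...
  prompts/                     # 14 *.prompt.md workflows
    al-build.prompt.md
    ...
  chatmodes/                   # 6 *.chatmode.md specialists
    al-architect.chatmode.md
    ...
```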

Practical outcome and use case

When implemented via this toolkit, teams obtain:

  • Clear tool boundaries and role separation.
  • Focused context loading so agents see only what matters.
  • Orchestrated workflows with human validation checkpoints.

Conclusion

Agents stop being a hobby when we treat them as code: versioned artifacts, clear protocols, and automated validations. Applied with discipline, this approach reduces repetitive tasks, improves consistency and preserves operational control. No shortcuts—only processes that work. With rigor, agents move from promise to a reliable engineering partner.

Personal note

This text reflects my practical adaptation of AI Native Development to AL development: the AI Native AL Development Toolkit. Consider it a starting point to convert experimental practices into reproducible, auditable processes adapted to your organization.



Note: The content of this article was produced in part with the help of AI, used for review, organization and summarizing. The content, ideas, comments and opinions are entirely human. Where a post is based on, or inspired by, other content, that content is referenced, whether official or third-party. Of course, both the human and the AI contributions can contain errors. I encourage you to point them out in the comments; for more information, see the AI responsibility page of the TechSphereDynamics blog.

Original Post https://techspheredynamics.com/2025/10/21/adapting-the-ai-native-development-architecture-to-dynamics-365-business-central/

Signing-up 3 seconds...