ALDC: A Delivery Model for AI-Assisted AL Development

jarmesto · Business Central

Most AI coding tools generate a file and hope for the best. ALDC — AL Development Collection — takes a different approach: it gives GitHub Copilot and Claude Code a structured delivery model for Business Central AL. Contracts before code, agents with scoped roles, and human approval at every gate.

From Prompt to Spec Contract

In ALDC, no feature starts with a chat prompt. Every task begins with a spec contract that captures three dimensions: the functional requirement (what the user needs), the technical design (how it fits the extension), and the test criteria (how we know it works). The contract is the single source of truth for everything downstream. If it is not in the spec, it does not get built.
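A minimal sketch of what such a contract could look like as data. The shape below is a hypothetical illustration, not ALDC's actual format; the field names and the credit-limit example are invented for clarity:

```python
from dataclasses import dataclass

# Hypothetical spec-contract shape covering the three dimensions named above.
# Field names and the credit-limit scenario are illustrative assumptions.
@dataclass
class SpecContract:
    functional_requirement: str   # what the user needs
    technical_design: str         # how it fits the extension
    test_criteria: list           # how we know it works

contract = SpecContract(
    functional_requirement="Block posting of sales orders that exceed the customer's credit limit",
    technical_design="Subscribe to a posting event and validate the order total against the credit limit",
    test_criteria=[
        "Posting succeeds when the order total is within the credit limit",
        "Posting raises an error when the total exceeds the limit",
    ],
)

# If it is not in the spec, it does not get built: downstream agents read only this.
assert all([contract.functional_requirement, contract.technical_design, contract.test_criteria])
```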

This reverses the default dynamic of AI-assisted coding. Instead of iterating on vague prompts and hoping the output converges, the conversation happens upfront — where it is cheapest to resolve ambiguity.

Orchestration, Not Autonomy

ALDC is built around a Conductor agent that coordinates three scoped subagents: Architect, Developer, and Review. Each has a narrow responsibility, a defined input, and an expected output. The Conductor moves work through gated phases, and no phase closes without explicit human approval.
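The gated flow can be pictured as a loop over scoped phases, where nothing advances without an approval callback standing in for the human gate. All names here are illustrative assumptions, not ALDC's actual API:

```python
# Sketch of a Conductor-style gated pipeline. Phase names come from the
# article; the orchestration code itself is a hypothetical illustration.
from typing import Callable

class GateRejected(Exception):
    """Raised when the human reviewer does not approve a phase."""

def run_pipeline(spec: str, approve: Callable[[str, str], bool]) -> dict:
    artifacts = {}
    phases = [
        ("Architect", lambda s: f"design for: {s}"),
        ("Developer", lambda s: f"tests + code for: {s}"),
        ("Review",    lambda s: f"review package for: {s}"),
    ]
    for role, agent in phases:
        output = agent(spec)
        if not approve(role, output):        # explicit human gate: no phase closes without it
            raise GateRejected(f"{role} phase not approved")
        artifacts[role] = output             # every output is recorded as traceable evidence
    return artifacts

# Usage: an auto-approving callback stands in for the human reviewer here.
result = run_pipeline("credit-limit check", lambda role, output: True)
```

The point of the sketch is the shape, not the stubs: each artifact is keyed to the agent that produced it, which is what makes the pipeline auditable after the fact.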

The model matters because autonomy is not the goal. Traceability is. At any point in the pipeline, you can see which agent produced which artifact, which skills it loaded, and which decisions it recorded. Every output is evidence — not just a generated file.

TDD as Contract, Not Convention

The Developer subagent writes tests before code. Always. This is not a suggested practice; it is enforced by the delivery model. Tests are generated from the spec contract, run against the implementation, and included in the review package.

The effect is practical: AL code reaches human review already proven against its own acceptance criteria. Review becomes about design quality and business fit — not about catching what the agent forgot to check.

Composable Skills, Loaded on Demand

Rather than a single bloated system prompt, ALDC ships 11 composable skills that load only when the task needs them. RDLC report transformation, event-driven integration, upgrade codeunit patterns, AL performance tuning — each lives in its own module, with its own context, activated by the subagent that requires it.

This keeps the working context clean, makes the agent’s behaviour auditable, and lets the framework grow without regressing. New skills plug in; existing ones stay isolated.
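A registry that resolves only the requested modules is one minimal way to picture this. The skill names below come from the article; the lookup API and the one-line descriptions are assumptions:

```python
# Hypothetical on-demand skill registry. Skill names are taken from the
# article; descriptions and the load_skills API are illustrative.
SKILLS = {
    "rdlc-report-transformation": "RDLC report layout transformation guidance",
    "event-driven-integration": "Event publisher/subscriber integration patterns",
    "upgrade-codeunit-patterns": "Upgrade codeunit conventions for data migration",
    "al-performance-tuning": "AL performance guidance: keys, filters, partial loads",
}

def load_skills(task_tags):
    """Return only the skill modules a task actually needs, keeping context lean."""
    return {name: SKILLS[name] for name in task_tags if name in SKILLS}

# A performance-tuning task loads one module; the other ten stay out of context.
context = load_skills(["al-performance-tuning"])
```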

One Framework, Two Tools

ALDC runs in both GitHub Copilot and Claude Code, under MIT license, fully open. The same spec contracts, the same agents, the same skills — no vendor lock-in, no parallel maintenance. Whichever assistant the team prefers, the delivery model stays consistent.

The goal is straightforward: AL code that passes review the first time, with every decision traceable from requirement to merge.

References

AL Development Collection — Official Site
ALDC on GitHub

Closing

AI does not make engineering optional. It makes engineering the differentiator. ALDC is a bet on that idea — that the teams who ship reliable AL code will be the ones who kept contracts, gates, and traceability, and let the agents work inside that structure.

Original Post: https://techspheredynamics.com/2026/04/22/aldc-a-delivery-model-for-ai-assisted-al-development/
