
Most AI coding tools generate a file and hope for the best. ALDC — AL Development Collection — takes a different approach: it gives GitHub Copilot and Claude Code a structured delivery model for Business Central AL. Contracts before code, agents with scoped roles, and human approval at every gate.
In ALDC, no feature starts with a chat prompt. Every task begins with a spec contract that captures three dimensions: the functional requirement (what the user needs), the technical design (how it fits the extension), and the test criteria (how we know it works). The contract is the single source of truth for everything downstream. If it is not in the spec, it does not get built.
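A contract with those three dimensions can be modeled as a simple record. This is a minimal sketch in Python, assuming illustrative field names; it is not ALDC's actual spec format.

```python
from dataclasses import dataclass


@dataclass
class SpecContract:
    # Hypothetical shape of an ALDC-style spec contract; names are illustrative.
    feature: str
    functional_requirement: str   # what the user needs
    technical_design: str         # how it fits the extension
    test_criteria: list           # how we know it works

    def is_complete(self) -> bool:
        # "If it is not in the spec, it does not get built":
        # every dimension must be filled in before any code is written.
        return all([self.feature, self.functional_requirement,
                    self.technical_design, self.test_criteria])


contract = SpecContract(
    feature="Late-payment fee",
    functional_requirement="Charge a fee when a sales invoice is overdue",
    technical_design="New codeunit subscribing to the posting event",
    test_criteria=["Fee posted for overdue invoice",
                   "No fee when the invoice is paid on time"],
)
```

The point of the shape is the completeness check: a contract missing any dimension is rejected before work starts, which is where ambiguity is cheapest to resolve.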
This reverses the default dynamic of AI-assisted coding. Instead of iterating on vague prompts and hoping the output converges, the conversation happens upfront — where it is cheapest to resolve ambiguity.
ALDC is built around a Conductor agent that coordinates three scoped subagents: Architect, Developer, and Reviewer. Each has a narrow responsibility, a defined input, and an expected output. The Conductor moves work through gated phases, and no phase closes without explicit human approval.
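The gated flow can be sketched as a small state machine: phases advance only on explicit approval, and every closed gate is logged. Phase names and the class shape here are assumptions for illustration, not ALDC's real API.

```python
class GatedPipeline:
    # Illustrative Conductor-style gates; names are made up for this sketch.
    PHASES = ["architect", "develop", "review"]

    def __init__(self):
        self.current = 0   # index of the currently open phase
        self.log = []      # traceability: which gates were approved, in order

    def close_phase(self, human_approved: bool) -> str:
        # No phase closes without explicit human approval.
        if not human_approved:
            raise PermissionError(
                f"gate '{self.PHASES[self.current]}' needs human sign-off")
        self.log.append(self.PHASES[self.current])
        self.current += 1
        return "done" if self.current == len(self.PHASES) else self.PHASES[self.current]
```

The approval log is what makes the pipeline auditable after the fact: the order of gates and who signed them off is recorded, not inferred.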
The model matters because autonomy is not the goal. Traceability is. At any point in the pipeline, you can see which agent produced which artifact, which skills it loaded, and which decisions it recorded. Every output is evidence — not just a generated file.
The Developer subagent writes tests before code. Always. This is not a suggested practice; it is enforced by the delivery model. Tests are generated from the spec contract, run against the implementation, and included in the review package.
The effect is practical: AL code reaches human review already proven against its own acceptance criteria. Review becomes about design quality and business fit — not about catching what the agent forgot to check.
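That gate can be expressed as a precondition on the review package: no tests, or any failing test, and the package never reaches a human. A hypothetical sketch, with invented names:

```python
def build_review_package(tests, implementation, results):
    # Hypothetical check: code only reaches human review with passing tests attached.
    if not tests:
        raise ValueError(
            "no tests generated from the spec contract: test-first is enforced")
    failing = [name for name in tests if not results.get(name)]
    if failing:
        raise ValueError(f"implementation not yet proven against: {failing}")
    # The package carries its own evidence, so review can focus on
    # design quality and business fit rather than missing checks.
    return {"code": implementation, "tests": tests, "evidence": results}
```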
Rather than a single bloated system prompt, ALDC ships 11 composable skills that load only when the task needs them. RDLC report transformation, event-driven integration, upgrade codeunit patterns, AL performance tuning — each lives in its own module, with its own context, activated by the subagent that requires it.
This keeps the working context clean, makes the agent’s behaviour auditable, and lets the framework grow without regressing. New skills plug in; existing ones stay isolated.
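Selective loading of this kind can be sketched as a registry lookup: only the skills a task declares enter the working context, and an unknown tag fails loudly instead of silently bloating the prompt. The four skill descriptions come from the article; the tag identifiers and function are made up for illustration.

```python
# Hypothetical registry: tag -> skill module description.
SKILLS = {
    "rdlc_report": "RDLC report transformation",
    "event_integration": "event-driven integration",
    "upgrade_codeunit": "upgrade codeunit patterns",
    "performance": "AL performance tuning",
}


def load_skills(task_tags, registry=SKILLS):
    # Pull only the declared skills into context; everything else stays cold.
    unknown = set(task_tags) - set(registry)
    if unknown:
        raise KeyError(f"no such skill module: {sorted(unknown)}")
    return {tag: registry[tag] for tag in task_tags}
```

Because each loaded skill is named in the result, the context an agent ran with is itself an auditable artifact, which is the traceability claim above in miniature.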
ALDC runs in both GitHub Copilot and Claude Code, under MIT license, fully open. The same spec contracts, the same agents, the same skills — no vendor lock-in, no parallel maintenance. Whichever assistant the team prefers, the delivery model stays consistent.
The goal is straightforward: AL code that passes review the first time, with every decision traceable from requirement to merge.
References
AL Development Collection — Official Site
ALDC on GitHub
Closing
AI does not make engineering optional. It makes engineering the differentiator. ALDC is a bet on that idea — that the teams who ship reliable AL code will be the ones who kept contracts, gates, and traceability, and let the agents work inside that structure.
Original Post: https://techspheredynamics.com/2026/04/22/aldc-a-delivery-model-for-ai-assisted-al-development/