
What an HR agent taught me about developing with the Business Central AI Development Toolkit
The HR Absence Agent had received an email requesting holiday leave. It had identified the employee by their email address. It had navigated to Causes of Absence, found the code HOLIDAY, memorised it along with the employee number, dates and unit of measure. Everything was correct.
And then it froze. A loop. Unable to move on to the next page.
Its decision was this: «I am stuck in a Causes of Absence loop and cannot get to HA Agent Setup. Please assist me.» (It sounds cooler in Shakespeare’s language, but this translation will do.)
It had everything. And it didn’t know what to do with it.


To understand the error, we need to see exactly what the agent saw at that moment. The active page was Causes of Absence, a list with three rows: DAYOFF, HOLIDAY, SICK. The available tools included closing the page, searching for pages in the system, invoking actions, and selecting rows. And it already had causeCode: HOLIDAY memorised.
The problem was that the agent had completed step 3 of its instructions (obtain information on the cause of absence) but had not closed the page before attempting to navigate to the next step. Unable to find a clear path to HA Agent Setup from where it was, it entered a loop instead of using the page search tool to navigate directly.
The fix in the instructions was surgical: add an explicit principle to the guidelines. «If navigation to the required next page fails after closing the current page, use the page search tool to navigate directly.» An instruction that covers that case and all structurally similar cases that will appear in later steps of the flow.
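In context, the GUIDELINES block ends up pairing the original close-before-navigating rule with the new fallback. The first line below is the guideline quoted elsewhere in this article; the exact placement alongside it is mine:

ALWAYS close each page using the tool "Closes the currently active page" before navigating to the next page to avoid navigation loops
If navigation to the required next page fails after closing the current page, use the page search tool to navigate directly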
How the agent’s instructions are structured
The HR Absence Agent’s instructions follow the RESPONSIBILITY / GUIDELINES / INSTRUCTIONS framework proposed by the SDK.
RESPONSIBILITY defines the mission in one sentence: process absence requests received by email, evaluate them against company policies, and record them in BC when operating in Manager mode.
GUIDELINES are the rules of behaviour that the agent must always follow, regardless of the step it is in. Among them, the one that would have prevented this afternoon’s loop:
ALWAYS close each page using the tool "Closes the currently active page" before navigating to the next page to avoid navigation loops
INSTRUCTIONS are the 13 steps of the processing flow, from reading the email to recording the absence or escalating for human review. Each step has its own MEMORIZE rules, error conditions, and exit actions.
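Condensed, the top level of the file looks roughly like this. It is a sketch assembled from the pieces described above; the intermediate step wording is illustrative, not the actual text:

RESPONSIBILITY:
Process absence requests received by email, evaluate them against company policies, and record them in Business Central when operating in Manager mode.

GUIDELINES:
ALWAYS close each page using the tool "Closes the currently active page" before navigating to the next page to avoid navigation loops
[... further always-on behaviour rules ...]

INSTRUCTIONS:
1. Read the incoming email and identify the employee by their email address
[... steps 2 to 12, each with its own MEMORIZE rules, error conditions, and exit actions ...]
13. Record the absence or escalate for human review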
The MEMORIZE keyword is especially important in this SDK. It tells the agent to explicitly retain a piece of data so that it can be referenced in later steps. Without MEMORIZE, the agent may «forget» information between navigation steps. With it, you build persistent state throughout the entire task.
Read "No." → MEMORIZE "employeeNo: {value}"
Read "Full Name" → MEMORIZE "employeeName: {value}"
Read "Status" → MEMORIZE "employeeStatus: {value}"

The bulk of agent logic is not in AL. It is in the instructions. The evaluation criteria for an absence request are defined as follows:
6. Evaluate basic feasibility criteria
a. Check if {employeeStatus} = "Active" → MEMORIZE "criteriaEmployeeActive: {pass or fail}"
b. Check if {privacyBlocked} = false → MEMORIZE "criteriaNotBlocked: {pass or fail}"
c. Check if {fromDate} >= today → MEMORIZE "criteriaValidStartDate: {pass or fail}"
d. Check if {daysRequested} <= {maxDaysPerRequest} → MEMORIZE "criteriaMaxDays: {pass or fail}"
e. Calculate days until {fromDate} from today → MEMORIZE "advanceDays: {value}"
f. Check if {advanceDays} >= {minAdvanceDays} → MEMORIZE "criteriaAdvanceNotice: {pass or fail}"
Each criterion is explicitly memorised as pass or fail. At the end of the flow, the agent evaluates whether all are pass to determine whether the request is VIABLE or NOT VIABLE. If any are fail, it includes them in the response message with a detailed explanation.
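The closing evaluation can then reduce those flags to a single verdict. A sketch of how that step might read (the wording is mine; the criteria names match step 6 above):

Check if {criteriaEmployeeActive} = pass AND {criteriaNotBlocked} = pass AND {criteriaValidStartDate} = pass AND {criteriaMaxDays} = pass AND {criteriaAdvanceNotice} = pass → MEMORIZE "evaluationResult: VIABLE"
Otherwise → MEMORIZE "evaluationResult: NOT VIABLE" and include each failing criterion in the response with a detailed explanation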
The thresholds (maxDaysPerRequest, minAdvanceDays, humanReviewThreshold) are not hardcoded. They reside in the HA Agent Setup table, which the agent reads in step 4. This allows each company to configure its own policies without touching the instructions.
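Step 4 might therefore read along these lines. The field captions are hypothetical; only the threshold names themselves come from the setup table described above:

4. Open HA Agent Setup and read configuration
a. Read "Operation Mode" → MEMORIZE "operationMode: {value}"
b. Read "Max Days Per Request" → MEMORIZE "maxDaysPerRequest: {value}"
c. Read "Min Advance Days" → MEMORIZE "minAdvanceDays: {value}"
d. Read "Human Review Threshold" → MEMORIZE "humanReviewThreshold: {value}"
e. Close the page before continuing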

An important design decision was to separate evaluation from logging. The agent has two configurable modes:
Evaluator: analyses the request, evaluates all criteria, and responds with the result. It does not log anything in BC. This mode is for companies that want assistance with evaluation but prefer to keep logging manual, or that want to validate the agent's behaviour before giving it write permissions.
Manager: does all of the above and, if the evaluation is VIABLE, navigates to Employee Absence, creates the record, requests a review before finalising, and sends the corresponding notifications.
The mode is read from HA Agent Setup in step 4 and stored as operationMode. From there, each action step checks it before executing.
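A write step guarded by the mode could look like this (an illustrative sketch; the step number and wording are mine, based on the Manager-mode behaviour described above):

10. Record the absence in Business Central
a. Check if {operationMode} = "Manager" → if not, skip to the response step without writing anything
b. Navigate to Employee Absence
c. Create a record using {employeeNo}, {causeCode}, {fromDate}, {toDate} and the unit of measure
d. Request a review before finalising, then send the corresponding notifications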
Debugging the agent: what to look for when something goes wrong
Here is the most important methodological change when working with this SDK.
In classic Business Central, when something goes wrong, you open the debugger. There is a deterministic causal chain: this trigger calls this function, this function modifies this field. The error has a line number and an exact cause.

With agents, the questions are different:
What did the agent see? The state of the active page, the available rows, the tools it had at that moment. If the agent made an incorrect decision, it is almost always because what it saw was not what you expected it to see.
What did it have in memory? If a subsequent step fails, check whether the data it needed was correctly stored in the previous step. A poorly specified or missing MEMORIZE is a frequent source of silent failures.
Is the instruction ambiguous? If the agent has to choose between two possible interpretations of an instruction, it will choose one. Not necessarily the one you wanted. Precision in instructions is not optional.
Is it a navigation, decision, or data interpretation failure? Classifying the type of failure before correcting it avoids adding instructions that solve the symptom without resolving the cause.
The rule of thumb I have adopted: when the agent fails, I first read its entire decision and the status it reports before touching anything. Most of the time, the failure is readable if you know what to look for.
I have been using the AI Development Toolkit in public preview for two weeks. That is enough time to notice something I did not expect: my work as a BC developer has changed more profoundly than I anticipated.

Not in the sense that AL no longer matters. Mastery of the system is still essential. I know why the agent got stuck on Causes of Absence because I know how navigation works in BC, which pages open in embedded mode, how the system interprets lookups. Someone who only has experience with LLMs doesn’t have that.
What has changed is the type of problem I solve. Before, I debugged execution. Now I debug reasoning. Before, the fix was a line of code with strict syntax. Now it is a natural language instruction that has to be precise enough to cover the current case and general enough to cover similar cases that have not yet appeared.
That requires a different way of thinking. And for those of us who come from classic development, it can be uncomfortable at first. It seems less rigorous because the artefact you are refining does not compile, has no types, has no syntax errors. But the rigour is there. It takes another form: well-structured instructions, explicit keywords such as MEMORIZE and Reply, documented edge case handling, telemetry to observe behaviour in production.
The feedback cycle has been greatly compressed. In classic AL, the design-code-test cycle took days. With agents, you see behaviour almost in real time and correct it almost in real time. What looks like messy trial and error is actually rapid iteration on a different artefact.
If you haven’t read my previous article, go ahead and take a look. 

Remember, these things help a lot:
Subscribe to the channel (it gives it a real boost).
Click ‘Like’ if you enjoyed it.
If you don’t want to miss anything, you know what to do: click on the bell.
Leave your ideas, questions, corrections, or contributions in the comments. Everything is welcome.
Original Post https://techspheredynamics.com/2026/03/05/i-no-longer-debug-code-i-debug-decisions-agents-in-business-central/






