Dynamics 365 Business Central: be careful on your upgrade code.

It’s the second time that I’ve seen partners asking for information about the following message coming from Dynamics 365 Business Central:

The Dynamics 365 Business Central service update failed.

The scheduled update to Dynamics 365 Business Central version X.Y for the environment Production couldn’t be completed within the update window set on the environment for the scheduled date.

so it’s probably worth spending a minute explaining the cause of this behavior.

When a Dynamics 365 Business Central scheduled update date arrives (and the update is not postponed by the administrator), the update runs automatically within the update window that you specified for the environment. The update window specifies the hours of the day (a from/to range) during which an update can run. All users are disconnected from this environment, and all attempts to sign in during the update are blocked with the message Service is under maintenance.

Any environment that fails to update will be automatically restored to the original application version so that users can connect to it again. The environment is then automatically rescheduled for a new update attempt in seven days. 

The above error message usually occurs in the following situations (at least in my experience):

  1. Problems on Microsoft’s side (resource capacity)
  2. Update window too short (minimum accepted is 6 hours) for handling the partner’s upgrade code.

Point 1 is honestly not so common. Where you should be careful is point 2.

Sometimes I see partners creating extensions for a new Business Central major version that are a full redesign of a complex solution, with lots of changes to the data structure. When doing that, they also create complex upgrade code in upgrade codeunits for handling those breaking changes, data changes, and more. When the extension is ready, they upload the new app version into the production environment from the Extension Management page, setting the Deploy to field to Next major version.

Usually the upgrade code of a PTE (per-tenant extension) completes without problems in a fairly short time. But what if you have complex upgrade code working on really huge tables? The upgrade code can take a long time to execute, and it can overrun the update window. In such cases, you will receive the above error.
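As a reminder of where this upgrade code lives, here is a minimal sketch of an upgrade codeunit (the object ID, names, and tag value are hypothetical). The upgrade tag pattern ensures the data upgrade runs only once per company, even if the upgrade is retried:

```al
// Hypothetical upgrade codeunit sketch: the heavy data operations in
// OnUpgradePerCompany are what can overrun the environment's update window.
codeunit 50100 "MyApp Upgrade"
{
    Subtype = Upgrade;

    trigger OnUpgradePerCompany()
    var
        UpgradeTag: Codeunit "Upgrade Tag";
    begin
        // Skip if this data upgrade has already been performed
        if UpgradeTag.HasUpgradeTag(GetDataUpgradeTag()) then
            exit;

        // ... heavy data upgrade logic goes here ...

        UpgradeTag.SetUpgradeTag(GetDataUpgradeTag());
    end;

    local procedure GetDataUpgradeTag(): Code[250]
    begin
        exit('MYAPP-DataUpgrade-20241210'); // hypothetical tag value
    end;
}
```

The longer the logic inside OnUpgradePerCompany takes on production data volumes, the more likely the environment update is to hit the window limit.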

Telemetry is the best way to discover the real cause of the problem. To find the cause of a failed upgrade, you can use the following KQL query:

traces
| where timestamp > ago(1d)
| where customDimensions.eventId == 'LC0107' 
| project timestamp
, message
// in which environment/company did it happen
, aadTenantId = customDimensions.aadTenantId
, applicationFamily = customDimensions.applicationFamily
, countryCode = customDimensions.countryCode
, environmentName = customDimensions.environmentName
, environmentType = customDimensions.environmentType
// information about the update
, sourceVersion = customDimensions.sourceVersion
, destinationVersion = customDimensions.destinationVersion
, updatePeriodStartDateUtc = customDimensions.updatePeriodStartDateUtc
, updatePeriodEndDateUtc = customDimensions.updatePeriodEndDateUtc
, updateWindowStartTimeUtc = customDimensions.updateWindowStartTimeUtc
, updateWindowEndTimeUtc = customDimensions.updateWindowEndTimeUtc
, ignoreUpdateWindow = customDimensions.ignoreUpdateWindow
, initiatedFrom = customDimensions.initiatedFrom
, totalTime = customDimensions.totalTime
// what happened
, failureReason = customDimensions.failureReason
, failureCode = customDimensions.failureCode
, recovered = customDimensions.recovered

Upgrade code on extensions must be tested carefully, because it can break tenant upgrades.

When you have complex upgrade code working on really huge tables, I absolutely recommend using the DataTransfer data type (often forgotten). I talked about it in the past here.

DataTransfer is an AL data type (introduced in Business Central version 21) that supports bulk transferring of data between SQL-based tables. Instead of operating on a row-by-row model, like the standard Record API does, DataTransfer works at the SQL level by producing SQL code that operates on sets.

DataTransfer is fast (from 50x to 200x faster than traditional AL code, see my measurements here) and it’s the mandatory solution to use if you need to move large sets of records between tables during an upgrade. Complex upgrade code can run for hours if it uses Record variables, while with DataTransfer it can complete in minutes.
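As an illustration, here is a hedged sketch of using DataTransfer in an upgrade codeunit (the table and field names are hypothetical): it copies a field from an old table to a new one in a single set-based operation, joining on the primary key instead of looping over records:

```al
trigger OnUpgradePerCompany()
var
    DT: DataTransfer;
    OldDetail: Record "My Old Sales Detail"; // hypothetical source table
    NewDetail: Record "My New Sales Detail"; // hypothetical destination table
begin
    // One set-based SQL operation instead of a row-by-row loop
    DT.SetTables(Database::"My Old Sales Detail", Database::"My New Sales Detail");
    DT.AddJoin(OldDetail.FieldNo("Entry No."), NewDetail.FieldNo("Entry No."));
    DT.AddFieldValue(OldDetail.FieldNo(Amount), NewDetail.FieldNo(Amount));
    DT.CopyFields(); // updates the matching rows in the destination table
end;
```

Note that each DataTransfer instance can be used for a single operation (one CopyFields or CopyRows call); for multiple transfers, declare a new instance per operation.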

Remember the following:

  • The DataTransfer object can only be used in upgrade codeunits; it throws a runtime error if used outside of them.
  • When the DataTransfer object is used in install codeunits, a check verifies that the install code is running within the scope of an extension installation, meaning that the install code is triggered from the OnInstallAppPerDatabase and OnInstallAppPerCompany events emitted during installation.
  • Because DataTransfer operates in bulk and not on a row-by-row basis, row-based events and triggers won’t be executed.

If using the DataTransfer object is not possible (it’s worth understanding why) and you have huge data operations to execute during the upgrade, the other solution that I usually recommend is to “detach” these operations from the upgrade process itself. To do that, you can:

  • Create a ProcessingOnly report or an Assisted Setup page that handles the data upgrade after the installation of the new extension.
  • Schedule a job queue task from an upgrade codeunit that will handle the data upgrade afterwards.

These two methods have one main disadvantage: you cannot read data from obsoleted (removed) tables and/or fields. But if you’re not in this scenario (maybe you can wait before removing a table/field), they can save your life.
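For the second option, here is a sketch of scheduling a job queue entry from an upgrade codeunit (the codeunit name and the delay are hypothetical); the heavy data operations live in a normal codeunit that the job queue runs after the upgrade completes:

```al
local procedure ScheduleDeferredDataUpgrade()
var
    JobQueueEntry: Record "Job Queue Entry";
begin
    // "MyApp Deferred Data Upgrade" is a hypothetical normal codeunit
    // containing the heavy data operations detached from the upgrade.
    JobQueueEntry.Init();
    JobQueueEntry."Object Type to Run" := JobQueueEntry."Object Type to Run"::Codeunit;
    JobQueueEntry."Object ID to Run" := Codeunit::"MyApp Deferred Data Upgrade";
    // Start shortly after the upgrade completes (here: about 5 minutes)
    JobQueueEntry."Earliest Start Date/Time" := CurrentDateTime() + (5 * 60 * 1000);
    Codeunit.Run(Codeunit::"Job Queue - Enqueue", JobQueueEntry);
end;
```

Because the job runs outside the update window, the environment update itself stays short, at the cost of the data migration completing slightly later.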

Original Post https://demiliani.com/2024/12/10/dynamics-365-business-central-be-careful-on-your-upgrade-code/
