Microsoft provides Business Continuity and Disaster Recovery (BCDR) across all production environments as part of the Dynamics 365 and Power Platform offerings. This aims to minimize outages and disruptions and ensure that your data is protected at all times.
Infrastructure is deployed to an Azure Geography, and a geography is made up of two to three Azure Availability Zones (generally located around 300 miles / 482 km apart). Each Azure Availability Zone hosts its own critical data center infrastructure such as networking, power, and cooling. To ensure resilience across a geography, your environments are replicated across at least two Availability Zones in real time.
Figure 1 – Example of Azure Geography and Availability Zones
If a failure is detected within an Availability Zone, disaster recovery routes traffic to an unaffected Availability Zone. The Recovery Point Objective (RPO) is stated to be near zero, and the Recovery Time Objective (RTO) is stated to be less than five minutes.
As part of the 2025 Release Wave 1, Microsoft released Self-Service Disaster Recovery for Power Platform as a public preview.
Today we’ll be further exploring this capability by running a Disaster Recovery Drill.
Self-Service Disaster Recovery enables organisations to integrate DR steps into their documented BCP plans. It also enables organisations, for the first time, to test their actual recovery times against their documented RTOs and RPOs.
For more information about Cross-Region Self-Service Disaster Recovery, please visit the official documentation here:
https://learn.microsoft.com/en-us/power-platform/admin/business-continuity-disaster-recovery?tabs=new#cross-region-self-service-disaster-recovery-preview
To help you familiarise yourself with this new capability, I will be running through a Disaster Recovery Drill.
The first thing to note is that Disaster Recovery can only be enabled on production-type environments (i.e. Production* and Sandbox, but not Trial or Developer), and the environment must be a Managed Environment.
*During the public preview phase, it is further recommended that this capability be enabled only on Sandbox environments, and that any issues be reported immediately to Microsoft.
When this capability reaches general availability (GA), your organisation will need to be linked to a pay-as-you-go Azure billing plan in order to operate self-service DR. Microsoft explains this by noting that replicated data initially draws from your current Dataverse capacity allowances and then uses pay-as-you-go storage for any capacity overruns.
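As a rough illustration of how that capacity draw-down works (this is my own reading of the behaviour, not an official formula, and the figures below are placeholders):

replica_size_gb = 50.0          # placeholder: size of the replicated environment
remaining_capacity_gb = 40.0    # placeholder: unused Dataverse capacity on the tenant

# Anything the replica needs beyond the remaining allowance would bill as pay-as-you-go storage.
payg_overage_gb = max(0.0, replica_size_gb - remaining_capacity_gb)
print(f"Consumed from existing Dataverse capacity: {min(replica_size_gb, remaining_capacity_gb):.1f} GB")
print(f"Billed to pay-as-you-go storage: {payg_overage_gb:.1f} GB")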
At the time of writing, Microsoft Dynamics 365 Finance and Supply Chain environments are not supported; however, all other Dynamics 365 CE/CRM apps and Power Platform / Dataverse environments are supported.
To enable the capability, open the Power Platform admin center (PPAC), navigate to the desired environment, and enable Disaster Recovery:
Figure 2 – Go to Desired Environment with PPAC
Figure 3 – Enabling DR on Environment
Figure 4 – Configuration in Progress Message
Note: this step takes approximately 48 hours (2 days), and you should receive a notification when it has completed. In my test, however, I did not receive a notification, so you may need to verify manually (a scripted check is sketched after Figure 5).
After 48 hours, you should see the Disaster Recovery pane say “Environment ready for disaster recovery”.
Figure 5 – Disaster Recovery Configuration Complete
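Because the completion notification never arrived in my test, here is a rough way to check the status on a schedule instead of refreshing PPAC. This is only a sketch against the BAP admin environments endpoint: the API version, the disasterRecoveryStatus property name, and the environment ID are assumptions/placeholders that you would need to confirm against your own tenant (dump the raw JSON once to see what the preview actually returns).

import os
import time

import requests

ENVIRONMENT_ID = "00000000-0000-0000-0000-000000000000"   # placeholder: your environment ID
TOKEN = os.environ["BAP_ACCESS_TOKEN"]                     # bearer token for the admin API

URL = (
    "https://api.bap.microsoft.com/providers/Microsoft.BusinessAppPlatform"
    f"/scopes/admin/environments/{ENVIRONMENT_ID}?api-version=2021-04-01"  # api-version is an assumption
)

while True:
    resp = requests.get(URL, headers={"Authorization": f"Bearer {TOKEN}"})
    resp.raise_for_status()
    props = resp.json().get("properties", {})
    dr_state = props.get("disasterRecoveryStatus", "unknown")  # assumed field name - inspect the JSON to confirm
    print(f"{time.strftime('%Y-%m-%d %H:%M:%S')}  DR state: {dr_state}")
    if str(dr_state).lower() in ("enabled", "ready"):
        break
    time.sleep(3600)  # configuration takes up to ~48 hours, so poll hourly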
With your newly configured DR environment in place, to execute a Disaster Recovery Drill you will need to:
Figure 6 – Selecting Drill Option (Disaster Recovery Drill)
Figure 7 – Double Confirmation
Failover is now being initiated. Take note of the time started and the time completed (a small timing helper is sketched after Figure 9).
Figure 8 – DR Drill Initiated
Figure 9 – DR Drill Complete
As you can see from the completion results, the DR drill took exactly 2 minutes and 7 seconds.
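If you want something a little more rigorous than a stopwatch, a tiny helper like the one below captures the start and end times and compares the elapsed duration against your RTO target. The five-minute target is taken from Microsoft's stated RTO; substitute whatever your own BCP plan requires.

from datetime import datetime, timezone

RTO_TARGET_SECONDS = 5 * 60  # Microsoft's stated RTO; swap in your own BCP target

start = datetime.now(timezone.utc)
input("Failover initiated - press Enter once PPAC reports the drill as complete...")
end = datetime.now(timezone.utc)

elapsed = (end - start).total_seconds()
print(f"Started : {start:%Y-%m-%d %H:%M:%S} UTC")
print(f"Finished: {end:%Y-%m-%d %H:%M:%S} UTC")
print(f"Elapsed : {elapsed:.0f} seconds")
print("Within RTO target" if elapsed <= RTO_TARGET_SECONDS else "RTO target exceeded")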
Normally, I like to verify that my network traffic is actually being redirected to another server. In order to do so, you can run a tracert from your local machine.
Figure 10 – Network Traffic Routing to New Server
And I can see that the destination IP has actually changed.
Figure 11 – Failover to Secondary DC
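If you would rather not wade through tracert output, a quick scripted resolution check gives you the same confirmation. This is a minimal sketch: the hostname and the pre-drill address are placeholders for your own environment URL and whatever you recorded before the failover.

import socket

HOSTNAME = "yourorg.crm11.dynamics.com"   # placeholder: your environment's URL
PRE_DRILL_IPS = {"203.0.113.10"}          # placeholder: addresses recorded before the failover

# Resolve the hostname and compare against the addresses captured before the drill.
current_ips = {info[4][0] for info in socket.getaddrinfo(HOSTNAME, 443, proto=socket.IPPROTO_TCP)}
print(f"{HOSTNAME} now resolves to: {', '.join(sorted(current_ips))}")

if current_ips & PRE_DRILL_IPS:
    print("Still resolving to a pre-drill address - DNS may not have propagated yet.")
else:
    print("Resolution has moved away from the pre-drill addresses - failover looks effective.")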
Once you have completed your DR Drill, you can switch back to your primary region.
In order to do so, once again within PPAC:
Figure 12 – Revert to Primary Region
It will then execute the command; again, remember to take note of the start time and the finish time.
The rollback in this example took 3 minutes and 9 seconds, which is still within the acceptable limits in my BCP plan.
Note that whenever a change is made, it may take some time for your ISP to refresh its DNS cache. As a result, some users may still be trying to reach the URL via an outdated IP address. This depends on a variety of factors that are outside of Microsoft's control, so some end users may receive messages such as the following.
If this is the case, the end user can be instructed to wait for their ISP to pick up the change (this may take anywhere from a few minutes to an hour), or you can ask them to run ipconfig /flushdns from the command line to forcibly flush the DNS cache (if the problem is on the client machine rather than at the ISP).
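Here is a minimal sketch of that client-side workaround, assuming a Windows client; the hostname and the stale address are placeholders for your own values.

import socket
import subprocess
import time

HOSTNAME = "yourorg.crm11.dynamics.com"   # placeholder: your environment's URL
STALE_IP = "203.0.113.10"                 # placeholder: the pre-failover address the client is stuck on

subprocess.run(["ipconfig", "/flushdns"], check=True)   # Windows-only local DNS cache flush

# Keep re-resolving until the hostname stops returning the stale address.
for attempt in range(1, 11):
    ip = socket.gethostbyname(HOSTNAME)
    print(f"Attempt {attempt}: {HOSTNAME} -> {ip}")
    if ip != STALE_IP:
        print("Client is now resolving the updated address.")
        break
    time.sleep(60)   # give the ISP / upstream resolver time to refresh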
But eventually, it will be reachable, and you will be able to see your D365 / Power App application as normal.
If you found this article useful, please do drop me a message on LinkedIn.
Original Post http://365lyf.com/self-service-disaster-recovery-for-power-platform-and-d365/