Self-Service Disaster Recovery for Power Platform and D365

Saurav Dhyani (Dyn365CE)

Microsoft provides Business Continuity and Disaster Recovery (BCDR) across all production environments as part of the Dynamics 365 and Power Platform offerings. This aims to minimize outages and disruptions and ensure that your data is protected at all times.

Infrastructure is deployed to an Azure Geography, and a geography is made up of two to three Azure Availability Zones (generally located 300 miles / 482 km away from each other). An Azure Availability Zone houses critical data center infrastructure such as network, power, and cooling. To ensure resilience across a geography, your environments are replicated across at least two Availability Zones in real time.

Figure 1 – Example of Azure Geography and Availability Zones

If a failure is detected within an availability zone, disaster recovery will route traffic to the unaffected availability zone. Recovery Point Objective (RPO) is stated to be near zero, and the Recovery Time Objective (RTO) is stated to be less than 5 minutes.

As part of 2025 Release Wave 1, Microsoft released Self-Service Disaster Recovery for Power Platform as a public preview.

Today we’ll be further exploring this capability by running a Disaster Recovery Drill.

Self-Service Disaster Recovery for Power Platform (Preview)

Self-Service Disaster Recovery enables organisations to integrate DR steps into their documented BCP plans. It also enables organisations, for the first time, to test their actual recovery times against their documented RTOs and RPOs.

For more information about Cross-Region Self-Service Disaster Recovery please visit the official site here:
https://learn.microsoft.com/en-us/power-platform/admin/business-continuity-disaster-recovery?tabs=new#cross-region-self-service-disaster-recovery-preview

To demonstrate this new capability, I will be running a Disaster Recovery Drill.

What to know before Enabling Disaster Recovery on Power Platform / D365

Environment Types

The first thing to note is that Disaster Recovery can only be enabled on production-type environments (i.e. Production* and Sandbox, but not Trial or Developer), and the environment must be a Managed Environment.

*During the Public Preview phase, it is further recommended that this capability is only enabled on Sandbox environments, and that any issues are reported to Microsoft immediately.

Storage and PAYG Azure Billing

When this capability is generally available (GA), your organisation will need to be linked to a Pay-as-you-go (PAYG) Azure billing plan in order to use self-service DR. Microsoft explains that replicated data will initially draw from your current Dataverse capacity allowances and will then use PAYG storage for any capacity overruns.

Unsupported Environments

At the time of writing, Microsoft Dynamics 365 Finance and Supply Chain Management environments are not supported. All other Dynamics 365 CE/CRM and Power Platform / Dataverse environments are supported.

Enabling Disaster Recovery

  1. First, within the Power Platform Admin Center (PPAC), click on your desired environment to enable SSDR capabilities.

    Figure 2 – Go to Desired Environment within PPAC

  2. In the upper-right corner, you will see a new tile called Disaster Recovery. Click Open on this tile to enable the setting.

    Figure 3 – Enabling DR on Environment

  3. Once enabled, you will see a message indicating that DR is now being configured for your environment.

    Figure 4 – Configuration in Progress Message

Note: this step takes approximately 48 hours (two days), and you should receive a notification when it completes. In my test, however, I did not receive one, so you may need to verify manually.

After 48 hours, you should see the Disaster Recovery pane say “Environment ready for disaster recovery”.


Figure 5 – Disaster Recovery Configuration Complete

 

Running a Disaster Recovery Drill

With your DR environment now configured, you can execute a Disaster Recovery Drill as follows:

  1. Open the Disaster Recovery panel, then select the Disaster Recovery Reason (in this case, Disaster Recovery Drill).

    Figure 6 – Selecting Drill Option (Disaster Recovery Drill)

  2. Complete the double confirmation by typing in the full environment name, and select Continue.

    Figure 7 – Double Confirmation

Failover is now initiated. Take note of the start time and the completion time.


Figure 8 – DR Drill Initiated

 


Figure 9 – DR Drill Complete

 

As you can see from the completion results, the failover took exactly 2 minutes and 7 seconds.
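If you would rather measure the outage window yourself than rely on the completion message, a simple client-side polling script can record when the environment stops and resumes responding. Below is a minimal sketch in Python; the environment URL is a placeholder, and treating any HTTP response (even an auth error) as "reachable" is an assumption of mine, not part of the official drill process.

```python
import time
import urllib.request
import urllib.error

# Hypothetical environment URL - substitute your own org's address.
ENV_URL = "https://yourorg.crm.dynamics.com"

def probe(url, timeout=5):
    """Return True if the environment answers at all, False otherwise."""
    try:
        urllib.request.urlopen(url, timeout=timeout)
        return True
    except urllib.error.HTTPError:
        return True   # the server answered, even if with an error status
    except (urllib.error.URLError, OSError):
        return False

def measure_outage(probe_results):
    """Given (timestamp, reachable) tuples in order, return the seconds
    between the first failed probe and the next successful one, or None
    if no outage was observed."""
    start = None
    for ts, ok in probe_results:
        if not ok and start is None:
            start = ts
        elif ok and start is not None:
            return ts - start
    return None

def run_drill_monitor(duration=600, interval=5):
    """Poll the environment for `duration` seconds and report any outage."""
    results = []
    end = time.time() + duration
    while time.time() < end:
        results.append((time.time(), probe(ENV_URL)))
        time.sleep(interval)
    return measure_outage(results)

# Start run_drill_monitor() just before initiating the drill; the return
# value approximates the outage window as seen from your client.
```

The polling interval bounds your measurement precision, so a 5-second interval is a reasonable trade-off against the stated sub-5-minute RTO.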

Failover Verification

Normally, I like to verify that network traffic is actually being redirected to another server. To do so, you can run a tracert from your local machine.
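Alongside tracert, you can capture the address your local resolver returns before and after the failover; once the resolver has picked up the change, the two addresses will differ. A small sketch in Python (the host name is a placeholder for your own org's address):

```python
import socket

# Hypothetical environment host - substitute your own org's address.
ENV_HOST = "yourorg.crm.dynamics.com"

def resolve(host):
    """Return the IPv4 address the local resolver currently holds for host."""
    return socket.gethostbyname(host)

# Record the address before initiating failover, then compare afterwards:
# before = resolve(ENV_HOST)
# ...run the drill...
# after = resolve(ENV_HOST)
# print("Failover visible in DNS:", before != after)
```

Note this only reflects what your local resolver sees; other clients behind other ISPs may pick up the change at different times.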


Figure 10 – Network Traffic Routing to New Server

And I can see that the destination IP has actually changed.


Figure 11 – Failover to Secondary DC

 

Rollback

Once you have completed your DR Drill, you can switch back to your primary region.

To do so, once again within PPAC:

  1. Go to Disaster Recovery and select Switch to primary region.

    Figure 12 – Revert to Primary Region

The command will then execute; remember to take note of the start and finish times.


The rollback in this example took 3 minutes and 9 seconds, still within the acceptable limits of my BCP plan.

Note that whenever a change is made, it may take some time for your ISP to refresh its DNS cache. As a result, some users may still be trying to reach the URL via an outdated IP address. This depends on a variety of factors outside Microsoft's control, so some end users may receive messages such as the following.


If this is the case, the end user can be instructed to wait for their ISP to pick up the change (which may take anywhere from minutes to an hour), or you can ask them to run ipconfig /flushdns from the command line to forcibly flush the DNS cache (this applies when the problem is on the client machine rather than at the ISP).
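If you want to know when a particular client's resolver has picked up the new address, you can poll until the answer changes. A minimal sketch, with illustrative timeout and interval values:

```python
import socket
import time

def wait_for_dns_change(host, old_ip, timeout=3600, interval=30):
    """Poll the local resolver until `host` no longer resolves to `old_ip`.
    Returns the new address, or None if the timeout elapses first."""
    deadline = time.time() + timeout
    while time.time() < deadline:
        try:
            current = socket.gethostbyname(host)
            if current != old_ip:
                return current
        except OSError:
            pass  # transient resolution failure; keep polling
        time.sleep(interval)
    return None
```

A 30-second interval keeps the load negligible while still detecting the change well within the minutes-to-an-hour window described above.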

Eventually, the environment will be reachable again, and you will see your D365 / Power Apps application as normal.


If you found this article useful, please do drop me a message on LinkedIn.

Original Post http://365lyf.com/self-service-disaster-recovery-for-power-platform-and-d365/
