If you clicked on this article, it is not only because of its catchy title, but also because you have surely asked yourself this question, or because it has come up in a discussion with your colleagues. If none of these apply, you may never have run into the issue, in which case you have come to the right place to save yourself some time!
As you can imagine from this introduction, this is not a new topic in the community, and you can find other articles on the subject. My desire to write about it also comes from Alex Shlega’s article: Dataverse dilemma: should it be a flow or should it be a plugin?. My goal is simply to share some thoughts, guidelines, and remarks to set you on the right path.
As a reminder, we are in a Dataverse/Power Apps context, so some information may not be accurate in your own context or may have changed in the meantime. Furthermore, I am expressing my own opinion, based on my experience and impressions.
Whatever the subject, when comparing two things, whether technical features or anything else in life, you need to step back and look at their capabilities, their advantages, and their disadvantages, and only then compare them so that the conclusion actually holds up.
I’m not going to list everything or highlight all the capabilities and interoperability that Cloud Flows allow, but rather highlight some points that I think are worth considering in this approach.
One of the first things to understand is that, depending on the license attached to a flow, its performance will not be the same! Yes, you read that right! Even if it is logical not to get the same performance with a Free plan as with a Per Flow plan, this notion of performance profile is important to consider in cases of high trigger frequency, a large number of actions, large payloads, or simply the maximum number of items you can iterate over in a loop.
In the case of an “Apply to Each” loop, the performance profile has a direct impact: with the Low profile you can process only 5,000 items, against 100,000 for the other profiles.
What is quite interesting is that, by introducing these performance profiles, Microsoft clearly ties a flow’s capabilities to its license, so it may be necessary to switch to a standalone Power Automate plan to get the full capabilities of the service. So don’t hesitate to keep an eye on this.
For your information, you can find out at any time which plan is assigned to you by navigating to the Flows tab of the maker portal and pressing ALT+CTRL+A. Then you just need to look for the license SKU (e.g. "licenseSku": "P2") that has the property "isCurrent": true.
During my various research and experiments, I noted several points and limitations that I consider essential to know when talking about or implementing Cloud Flows.
If you want a complete list of the limitations and capabilities of the service itself (not of every connector), you can check the official documentation: Limits for automated, scheduled, and instant flows
If I started with the low-code side and Power Automate, I must admit it is because it was more interesting for me to dig into that subject than into the historical, pro-code development model of Dataverse that I have been dealing with for years. In the same way, I want to highlight certain points and characteristics of this type of development.
You need to stay vigilant about these last two aspects and understand that a piece of logic triggered synchronously by another one becomes part of the same database transaction: if one fails, the whole transaction fails (which can be a strategy in itself).
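To illustrate what this means on the plug-in side, here is a minimal sketch (the entity, attribute, and business rule are purely hypothetical): when a plug-in is registered on a synchronous stage, throwing an InvalidPluginExecutionException cancels the whole transaction, including the operation that triggered it.

```csharp
using System;
using Microsoft.Xrm.Sdk;

// Illustrative plug-in: registered on a synchronous stage of the Update message,
// it runs inside the same transaction as the triggering operation.
public class BlockNegativeCreditLimit : IPlugin
{
    public void Execute(IServiceProvider serviceProvider)
    {
        var context = (IPluginExecutionContext)serviceProvider
            .GetService(typeof(IPluginExecutionContext));

        if (context.InputParameters.Contains("Target") &&
            context.InputParameters["Target"] is Entity target)
        {
            var creditLimit = target.GetAttributeValue<Money>("creditlimit");
            if (creditLimit != null && creditLimit.Value < 0)
            {
                // Throwing here aborts the whole synchronous transaction:
                // the update that triggered the plug-in is rolled back as well.
                throw new InvalidPluginExecutionException("Credit limit cannot be negative.");
            }
        }
    }
}
```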
Let’s now move on to one of the best parts of this article, where I will highlight some problems and scenarios, often observed in real projects, for which it is necessary to think carefully about this choice or to understand what it implies, along with some recommendations.
The first point that comes to mind is of course maintainability. I think that any consultant who has worked on a recent project has started to introduce Cloud Flows, or has even delivered a project using only this kind of component. Moreover, the democratization of Cloud Flows goes hand in hand with the growing popularity of Low-Code/No-Code. As a result, one can quickly end up with a mountain of Cloud Flows: for example, I once observed a project with about 200 Cloud Flows using the Dataverse trigger and performing only Dataverse actions. You can understand that, in this kind of situation, it becomes extremely complicated to know exactly what each flow does. Added to this is the fact that the bigger the logic, the harder the Cloud Flow is to read (think of the endless scrolling through actions or nested ForEach sequences, etc.).

This becomes particularly problematic when we try to rationalize the triggers, for example when X Cloud Flows are triggered on the update of the same field. It then becomes difficult to know which one will execute before the others (which could impact them) and to understand the different pieces of logic implemented and triggered. Understanding the components in place is crucial on a project, because if you start to have a mix of components firing on the same trigger (Cloud Flows, Workflows, plug-ins…), investigations become complicated.
To avoid this kind of problem, I strongly recommend establishing a naming convention from the start (you can still do it afterwards, but it will be tedious…), for example “TABLE TRIGGER – CRUD OPERATION – FUNCTIONAL ACTIONS”, which would give: “ACCOUNT – UPDATE – UPDATES ADDRESSES OF RELATED CONTACTS”; you can of course be more granular depending on the need. Proper documentation of the implemented logic, with reference to the technical field names, is also key to quickly finding what is implemented for a specific field. (There is an existing tool if you want to generate a Visio file: Flow To Visio – XrmToolBox Addon). If you are extending an existing project, put aside your preferences and opinions to avoid mixing components when it is not necessary: if there are only Cloud Flows, avoid adding a “layer” of plug-ins unless your problem genuinely requires it, of course.
The second point is simply the capabilities of these two components, which are complementary rather than opposed. It is undeniable that the connectors Power Automate provides are a real accelerator for communicating with other systems compared to the same implementation in C#, even more so now that environment variables offer a native integration with Azure Key Vault (feel free to check this blog post: Azure Key Vault Secrets in Dataverse). However, you should not fall into the trap of implementing Cloud Flows to meet integration requirements that demand scalability and performance (in that case, Logic Apps are still preferred). Another example, unfortunately often encountered, is overloading the native Dataverse–SharePoint integration with Cloud Flows that generate folders as soon as a record is created; this forces the user to wait for an indefinite time without getting any feedback from the Cloud Flow itself.

It is nice to be able to resubmit a Cloud Flow run, which is not possible for a plug-in (at least not out of the box), but I must admit that the use of Pre/Post Images often keeps me on plug-ins. Let’s not forget that, in a Cloud Flow with an update trigger, we must imperatively perform a Get Record if we want to retrieve other information from the same record, and nothing guarantees that this information has not been altered in the meantime! I could not mention capabilities without talking about synchronous execution which, as you know, is only possible via plug-ins. Even so, some options remain, like using Power Fx coupled with a Cloud Flow using the Power Apps trigger, launched from a command bar button or from a button in an embedded custom page/canvas app, but this does not cover all cases.
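To illustrate the Pre-Image point, here is a minimal plug-in sketch (the image name and the address1_city attribute are just assumptions for the example): the platform hands you a snapshot of the record as it was before the update, with no extra retrieve and no risk of reading data modified in the meantime.

```csharp
using System;
using Microsoft.Xrm.Sdk;

// Illustrative example: "PreImage" is the name given to the entity image when
// registering the step; address1_city is just an example attribute.
public class TrackCityChange : IPlugin
{
    public void Execute(IServiceProvider serviceProvider)
    {
        var context = (IPluginExecutionContext)serviceProvider
            .GetService(typeof(IPluginExecutionContext));
        var target = (Entity)context.InputParameters["Target"];

        // Snapshot of the record taken by the platform before the update:
        // no extra "Get record" call, and no risk of reading a value that was
        // modified between the trigger and a later retrieve.
        Entity preImage = context.PreEntityImages.Contains("PreImage")
            ? context.PreEntityImages["PreImage"]
            : null;

        var oldCity = preImage?.GetAttributeValue<string>("address1_city");
        var newCity = target.GetAttributeValue<string>("address1_city");

        if (oldCity != newCity)
        {
            // ... react to the change (audit, cascade to related records, etc.)
        }
    }
}
```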
Another aspect is reusability, which particularly applies to projects of a certain size where we can capitalize on certain logic or patterns because they will be reused elsewhere. On the pro-dev side we set up shared projects or common classes; in the Power Automate universe this translates into Child Flows, but at some point you will run into readability and maintainability issues because it generates a multitude of Cloud Flows. There are also intermediate scenarios where we can set up Custom APIs, with the objective of making complex logic (either impossible via a Cloud Flow or requiring a certain robustness) available to makers.
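To give an idea of what a Custom API can look like on the pro-code side, here is a minimal sketch (the API name new_CalculateDiscount, its parameters, and the logic are hypothetical): the complex logic lives in a plug-in bound to the Custom API message, and its input/output parameters are all that makers need to see.

```csharp
using System;
using Microsoft.Xrm.Sdk;

// Illustrative plug-in behind a hypothetical Custom API "new_CalculateDiscount"
// with an input parameter "Amount" (decimal) and an output parameter "Discount" (decimal).
public class CalculateDiscountPlugin : IPlugin
{
    public void Execute(IServiceProvider serviceProvider)
    {
        var context = (IPluginExecutionContext)serviceProvider
            .GetService(typeof(IPluginExecutionContext));

        // Input parameters are defined on the Custom API and exposed to makers.
        var amount = (decimal)context.InputParameters["Amount"];

        // Complex or robustness-sensitive logic that would be painful in a Cloud Flow.
        var discount = amount > 10000m ? amount * 0.10m : 0m;

        // Output parameters are returned to the caller (plug-in, Cloud Flow, Web API...).
        context.OutputParameters["Discount"] = discount;
    }
}
```

A maker can then call this logic from a Cloud Flow (for instance through the Dataverse “Perform an unbound action” action) without knowing anything about the C# behind it.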
Finally, the last point is obviously skills: the pro-dev option requires specific development skills, while Cloud Flows remain low-code. Nevertheless, let’s not forget that many good practices exist to ensure not only the efficiency but also the robustness and consistency of your flows (e.g. Using Filtering Conditions).
As you may have realized, there is no miracle solution; the goal is to determine the best option for a given context, based on a number of variables, by taking a step back rather than following only your personal preference.
Original Post https://www.blog.allandecastro.com/dataverse-power-automate-vs-plug-ins/