Custom security roles on Power Platform are mostly used for Dynamics 365 and model-driven apps, but they also work for canvas apps. By default, an environment without a Dataverse database has two roles: Environment Maker and Environment Admin. However, if you create an environment with a Dataverse database, you get the ability to create custom security roles, plus 10 default roles. I strongly advise against changing the default roles.
Custom administrator role
At the moment of writing this blog it is possible to alter the default Environment Maker role (but not System Administrator). I would not recommend it: Microsoft might push changes to the default roles and overwrite your customizations.
Creating a copy of the System Administrator or Environment Maker role and changing the copy is not an option either. In the background, Microsoft sets the CanEdit privilege on the administrator/environment maker role, and when you copy the role the CanEdit privilege is lost. The CanEdit privilege can only be set by Microsoft.
This practically means that creating a custom administrator role is not possible.
Note: granting a user a role that affects the CanEdit privilege takes an unspecified amount of time to take effect. For example, if you switch from a copied role back to a default role, the change can take 30 minutes to apply.
Custom developer role
Creating a custom developer role is possible if you are willing to accept the following scenario. The developer gets a custom security role granting the required privileges, for example the ability to work with solutions and canvas apps but no export privileges. With only this security role the developer cannot access the environment and is missing the hidden CanEdit role.
The CanEdit role can also be granted by making the developer an owner or co-owner of a canvas app in the environment. If an administrator creates a canvas app and makes the developer co-owner of that app, the developer can access the environment and has the hidden CanEdit role.
Conditional access policies for individual Power Apps will be generally available in September 2022 (currently in public preview) and will give us a lot of control over how users can access Power Apps. With Azure Active Directory Conditional Access, we can add extra layers of security to individual Power Apps that contain sensitive data. In my project we needed a conditional access policy that prevents a canvas app from being opened on any mobile device. With conditional access policies for individual Power Apps we were able to do this.
Select the users and/or group to apply the policy to
Select the authentication context created earlier.
Create the conditions and select all device platforms besides Windows.
Select Block access under Grant to block all the device platforms besides Windows.
Click on Save.
The policy is now created, but still needs to be connected to the canvas app.
Connect the conditional access policy to the canvas app
The policy needs to be connected to the canvas app with PowerShell.
Open PowerShell as an administrator.
Connect PowerShell to the Power Platform with the following command.
Add-PowerAppsAccount
The PowerShell command requires the EnvironmentName (ID of the environment), AppName (ID of the canvas app) and the ID of the authentication context. The IDs in my example are changed for security reasons.
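For reference, the command looks roughly like this (a sketch: the cmdlet comes from the Microsoft.PowerApps.Administration.PowerShell module, and every GUID below is a placeholder, not a real ID — replace them with your own environment, app, and authentication context IDs):

```powershell
# Sketch – assumes the Microsoft.PowerApps.Administration.PowerShell module is installed.
# All GUIDs are placeholders; replace them with your own IDs.
Set-AdminPowerAppConditionalAccessAuthenticationContextIds `
    -EnvironmentName "00000000-0000-0000-0000-000000000000" `
    -AppName "11111111-1111-1111-1111-111111111111" `
    -AuthenticationContextIds "c1"
```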
When validations or manipulations in a model-driven app are too complex for a business rule, you can use JavaScript instead. With JavaScript you can use the Dynamics API to gather information and/or update records. JavaScript only runs on the interface; this means that the validation or manipulation only happens while a user is interacting with the model-driven app.
retrieveRecord
With retrieveRecord you can retrieve a record from a table if you know its ID.
In this example a record from the account table is retrieved and the columns name and revenue are returned. If the call succeeds, the results are displayed in the console; if an error occurs, the error message is displayed in the console.
Xrm.WebApi.retrieveRecord("account", "a8a19cdd-88df-e311-b8e5-6c3be5a8b200", "?$select=name,revenue").then(
    function success(result) {
        console.log("Retrieved values: Name: " + result.name + ", Revenue: " + result.revenue);
        // perform operations on record retrieval
    },
    function (error) {
        console.log(error.message);
        // handle error conditions
    }
);
retrieveMultipleRecords
With retrieveMultipleRecords you can retrieve multiple records from a table based on a filter.
In this example three records from the account table are retrieved and the name column is returned. If the call succeeds, the results are displayed in the console; if an error occurs, the error message is displayed in the console.
// the third argument (3) is the maximum page size
Xrm.WebApi.retrieveMultipleRecords("account", "?$select=name", 3).then(
    function success(result) {
        for (var i = 0; i < result.entities.length; i++) {
            console.log(result.entities[i]);
        }
        console.log("Next page link: " + result.nextLink);
        // perform additional operations on retrieved records
    },
    function (error) {
        console.log(error.message);
        // handle error conditions
    }
);
Expand query to get related records
With the $expand option we can retrieve records related to the record that is returned; this works for both retrieveRecord and retrieveMultipleRecords. Expand uses navigation columns (relationships/lookups) to retrieve the related records.
When using retrieveMultipleRecords you might need an asynchronous function, because the function needs to wait for retrieveMultipleRecords to return its values before continuing. You do this by creating two async functions: one with the main logic and a second one that retrieves the records.
async function xseption(formContext) {
    // assumption: the company profile id comes from the current record;
    // getId() returns the GUID wrapped in curly braces, so strip them
    var companyProfileId = formContext.data.entity.getId().replace(/[{}]/g, "");
    var xseptions = await getXseptions(companyProfileId);
    // do something with the returned records, e.g. xseptions.entities
}

async function getXseptions(guid) {
    var query = "?$select=rc_categorytypeid,rc_xseptionsid" +
        "&$filter=_rc_related_companyprofile_value eq " + guid +
        "&$expand=rc_categorytypeid($select=rc_value)";
    var result = await Xrm.WebApi.retrieveMultipleRecords("rc_xseptions", query);
    return result;
}
Did you know that you can embed (add) a canvas app in a model-driven app? With the embedded canvas app, you can fully use the power of the canvas app inside a model-driven app. In my project I used it to provide the user with the capability to search an Oracle database and select a specific company.
It is very easy to add a canvas app, but I recommend using it only when no other option is viable. The reason for this is that the embedded canvas app needs to be reconnected every time you transfer the solution from one environment to another.
Embed the canvas app
Create / add a canvas app in the same environment as the model-driven app.
Open the form of the entity where the canvas app needs to be embedded.
Click on +Component and select the Canvas app.
Fill in the App ID Static value with the unique ID of the canvas app and click on Done.
You can find the App ID by right clicking on an app and clicking on Details.
Solution deployments
The canvas app is now part of the model-driven app and needs to be in the same solution. When you transfer the solution from the development environment to the test environment, you will need to update the model-driven form manually. The reason for this is that the model-driven app is still connected to the canvas app on development. You will need to change the reference (GUID) to the canvas app in the target environment. And do not forget to share the canvas app with the users.
When sharing information or sending out a notification on Teams I like to use the adaptive card feature. Adaptive Cards are the Teams method of sharing and displaying blocks of information in an easy-to-read manner without the complexity of customizing CSS or HTML to render them. With adaptive cards you can even create polls, show weather information, and create hyperlinks.
Creating a feedback form
Create a Power Automate flow with the required trigger.
Add the Post adaptive card in a chat or channel Teams action.
Select the Group chat (Channel or Group Chat) or provide the chat ID.
Add the following JSON code for an example feedback form with a title, single-line and multi-line text input fields, and two types of choice lists.
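A minimal sketch of such a card is shown below (the ids, titles, and choices are illustrative — adjust them to your scenario):

```json
{
  "type": "AdaptiveCard",
  "$schema": "http://adaptivecards.io/schemas/adaptive-card.json",
  "version": "1.4",
  "body": [
    { "type": "TextBlock", "text": "Feedback form", "size": "Large", "weight": "Bolder" },
    { "type": "Input.Text", "id": "name", "placeholder": "Your name" },
    { "type": "Input.Text", "id": "feedback", "placeholder": "Your feedback", "isMultiline": true },
    {
      "type": "Input.ChoiceSet",
      "id": "rating",
      "style": "compact",
      "choices": [
        { "title": "Good", "value": "good" },
        { "title": "Neutral", "value": "neutral" },
        { "title": "Bad", "value": "bad" }
      ]
    },
    {
      "type": "Input.ChoiceSet",
      "id": "topics",
      "style": "expanded",
      "isMultiSelect": true,
      "choices": [
        { "title": "Usability", "value": "usability" },
        { "title": "Performance", "value": "performance" }
      ]
    }
  ],
  "actions": [
    { "type": "Action.Submit", "title": "Submit" }
  ]
}
```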
Power Automate has many options and possibilities just by clicking through the user interface. But it can do even more with the help of expressions. I think I use the following two expressions the most, empty() and first(). That's why I wanted to share them with you.
Get the first returned value
When looking for a record in Dataverse but the record ID is unknown, we use the List rows action. This action always returns an array of values, even when only one record is returned. If you try to use the returned value to, for example, update the found record, Power Automate automatically generates an Apply to each loop. This can be confusing when developing the flow: why would it loop through all the values when there is just one? With the first() expression we can prevent Power Automate from automatically generating an Apply to each loop.
Add the Dataverse action list rows and configure it to find the unique record.
In my example I look for the unique record based on the unique order id.
When trying to update the found record, the following Apply to each will be generated automatically.
The creation of the Apply to each can be prevented by using the first() expression.
Remove the Apply for each and the Update a row action.
Add a new Update a row action and open the expression tab.
Add the expression first(body('Internal_name_of_the_list_row_action')?['value'])?['internal_field_name'].
This expression returns the specified field from the first item of the value array in the body returned by the List rows action.
In my Cloud Flow I needed to check whether a List rows action returned no values. This is not an option in the drop-down list, but it can be done using the empty() expression.
Add the condition action.
Open the expression tab.
Add the expression empty(outputs('Internal_name_of_the_list_row_action')?['body/value']).
This checks whether the returned body has any values; in other words, it checks if the value array is empty.
When managing multiple Power Automate Cloud Flows it can easily become a big task to figure out what went wrong and where. My client has more than 10 Cloud Flows that run multiple times per day or per hour. The solution is to create a log table (Dataverse) or list (SharePoint) where all the runs are stored. The log table contains all the data an administrator needs to maintain the Cloud Flows. In my example I create a Dataverse table.
The Dataverse log table
In the log table we store the following data.
The Cloud Flow
In every Cloud Flow a Try and a Catch scope are added. All the main actions are in the Try scope; if one of these actions fails, the Catch scope runs.
Add a Scope with the name Try.
Add the Dataverse Add a new row action.
Select the log table by Table name.
Select the Log status Processing (I use Processing, Failed and Successful).
Add the Cloud Flow name to Process name.
Add the following code to the Environment field.
workflow()?['tags']?['environmentName']
Add the following code snippets to External URL, using the fx expression editor for each snippet. Together they create a link to the actual flow run.
workflow()?['tags']?['environmentName']
workflow()?['name']
workflow()?['run']?['name']
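If you prefer a single expression, the three snippets above can be combined with concat() to build the run URL (a sketch; the make.powerautomate.com base URL applies to the public cloud and may differ in sovereign clouds):

```
concat('https://make.powerautomate.com/environments/',
       workflow()?['tags']?['environmentName'],
       '/flows/', workflow()?['name'],
       '/runs/', workflow()?['run']?['name'])
```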
Set the Status Reason to Active.
At the end of the Try scope, add the Dataverse Update a row action.
Select the log table by Table name.
Add the ID of the created log record in the Row ID field.
Select Successful in the Log status field.
Add a Scope with the name Catch.
Add the Filter Array action.
Add the result of the try scope in the From field.
result('Try')
Add the following filter: Status is equal to Failed.
item()?['status']
Add the Dataverse Update a row action.
Select the log table by Table name.
Add the ID of the created log record in the Row ID field.
Select Failed in the Log status field.
Select Inactive in the Status field.
Add the following text to the Log details: Flow log action(s):
Add the following code to the Log details, to show the error message from the try scope.
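The snippet itself is not shown here; a common approach (a sketch, assuming the Filter Array action above is named Filter array) is to write the first failed action's error message into the field:

```
first(body('Filter_array'))?['error']?['message']
```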
Power Automate is one of my favorite tools from the Power Platform. It is extremely versatile and can be used to automate tasks between online services and automate processes ranging from simple to highly complex. In this post, I will share with you 3 advanced expressions I have used recently on my project. One part of the project is to convert XML data to data we can store in Dataverse.
Convert XML to JSON for easy access
For a project I needed to read multiple XML files with millions of rows and store data from the files in Dataverse. XML is harder to use in a flow than JSON, so with a simple expression I transformed the XML to JSON.
Add a compose action with the name XML to JSON with the following code.
json(xml(variables('XML')))
Change the variable(‘XML’) to your XML content or store your XML content in that variable.
Add a Parse JSON action and set the Content to the output of the XML to JSON compose.
Add/create the JSON schema.
Using path in JSON
In most cases when you need to save data from JSON you can use the dynamic content to find it. But sometimes you are looking for a field name that is not unique. In my case I needed a field called country related to the company, but the country field was used multiple times in various blocks. You can select the correct country by using the path (location) of the field in an expression.
Select the JSON through the dynamic content.
Copy the code from the dynamic content to the Expression.
Add the path at the end of the code.
I added .company.country to select the country of the company.
body('Parse_JSON').company.country
Dataverse lookup field
Lookup fields in Dataverse are really useful, but when you select them through the dynamic content the value will be the id not the display value. If the data must be readable for users, you can use the following steps to select the display value.
Add a compose to the flow.
Select the lookup field in the Inputs through the dynamic content.
Copy the code from the dynamic content to the Expression.
Your output looks something like this.
outputs('company')?['body/rc_countrycode']
Add @OData.Community.Display.V1.FormattedValue after rc_countrycode (your field name will be different), so the expression becomes outputs('company')?['body/rc_countrycode@OData.Community.Display.V1.FormattedValue'].
When working with a DTAP (development, test, acceptance, production) setup you might need to know in which environment the Power Automate flow is running. For my solution I needed to know the environment because each environment uses a different gateway and database credentials. In this post, I will share with you how to find the environment GUID and name.
Creating the flow
Create a flow and use the trigger Manually trigger a flow.
Add the action Get Environments under Power Apps for Makers.
Add the Compose action and use the workflow() expression to get the current instance of the flow.
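For reference, the workflow() output looks roughly like this (a trimmed sketch; the exact properties can vary, so base your Parse JSON schema on an actual run's output):

```json
{
  "id": "/providers/Microsoft.ProcessSimple/environments/<environment-guid>/flows/<flow-guid>",
  "name": "<flow-guid>",
  "tags": {
    "environmentName": "<environment-guid>",
    "flowDisplayName": "My flow"
  },
  "run": {
    "name": "<run-id>"
  }
}
```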
Parse the Output in a Parse JSON action.
Initialize a variable with the name environment as a string.
The Value is the EnvironmentName from the Parse JSON output.
Now you have the GUID of the current Environment in the variable.
To find the name of the current environment we need to go through the results of the Get Environments action.
Add a Condition control action and check if the environment variable is equal to the name from the Get Environments actions.
This will automatically add an Apply to each, because the Get Environments action might return more than one environment.
Add a Set variable action in the If yes section and set the variable environment to displayName.
Now you have the Name of the current Environment in the variable.
The final step is to add a Switch control and switch based on the name of the current environment.
Power Automate is one of my favorite tools from the Power Platform. It is extremely versatile and can be used to automate tasks between online services and automate processes ranging from simple to highly complex. In this post, I will share with you 6 tips and tricks when building flows in Power Automate.
Flow templates
A good way to get started with Power Automate is to use a template. Microsoft created a huge library of templates to choose from. You can browse by category to find your scenario, and then follow the steps in the template to create a flow from it. You can also use the templates to figure out how to set up certain actions.
Equal to empty
Over the years many colleagues have asked me: how do I check if a value is empty? You can do this with the null expression! Note that sometimes you need to place the null between quotes ('null').
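For example, in a Condition you can compare a field to null through the expression tab (a sketch; the field name is illustrative):

```
equals(triggerBody()?['address'], null)
```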
Use parallel branches
Most flow builders forget to use parallel branches. With parallel branches you can have two or more actions that run at the same time; the flow will only proceed once all parallel steps have completed. Parallel branches can be very useful for approval flows. For example, you have a request that needs to be approved by both IT and Sales, but the approvals don't need to happen in a particular order. To save time you can run the approvals in parallel.
Use scopes
We can use the Scope action to group actions and make the flow easier to read. There is, however, another great use for them. The Scope action encapsulates a block of actions and inherits the last terminal status (Succeeded, Failed, Cancelled) of the actions inside. Combined with the Configure run after setting, we can create try-and-catch logic in our flows. In this example the second scope (Catch) only runs if the first scope (Try) failed.
Create a scope with some actions
Create a second scope after the first scope
Set the Configure run after setting of the second scope to has failed, is skipped and has timed out.
In this scenario the Catch scope will only run if the Try scope failed.
The flow will look like this.
Add redundant owners
If you have a flow that is used by your entire team, make sure you add a couple of co-owners. Then you will not be bothered during your vacation when the flow breaks. If you keep adding the same colleagues as co-owners, you can consider creating a security group and adding it to the flows. Also make sure you add the co-owners to all the resources required by the flow, for example the shared mailbox that is used by the flow.
Connections
By default, actions use the connection (if required) of the flow's creator. This is not always the best way to set up the connections. For example, if your flow updates a list, you might not want your name to show as the modifier. I recommend using a dedicated account (service account) for most shared flows. The added benefit of a dedicated account is that the flow keeps working even if you change jobs.