Advanced configuration settings – Azure Synapse Link / Export to Data Lake service (Dataverse / Dynamics 365)


The Export to Data Lake service now has some Advanced configuration settings available.

To learn more about the Export to Data Lake service, see the posts here:

https://nishantrana.me/2020/12/10/posts-on-azure-data-lake/

The new settings allow us to configure how the Dataverse / CRM table data is written to Azure Data Lake.

  • In-Place update or upsert (default)
  • Append Only

With the in-place update (the default setting), the file contains the full data set – any update in the source updates the same row in the synced CSV file or data partition, and any record deleted in the source removes the row from the data partition. In Append Only mode, a new row is added for both updates and deletes instead.

For huge volumes of data, Microsoft recommends opting for Append Only mode. This mode is also preferable when an organization wants to incrementally review only the changed data.

The other option is to define the data partition strategy.

  • By Month (default)
  • By Year

With this option, the generated files are partitioned either by year or, more granularly, by month, and the strategy can be specified on a per-table basis.

Microsoft recommends the monthly partition when the data volume is high.
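To make the naming concrete, here is a minimal sketch of which partition file a record would land in under each strategy (assuming partitioning is keyed on each record's createdOn value; the function itself is purely illustrative):

```python
from datetime import datetime

def partition_file_name(created_on: datetime, strategy: str = "Month") -> str:
    """Illustrative only: the CSV partition file a record would land in."""
    if strategy == "Month":
        return f"{created_on:%Y-%m}.csv"  # e.g. 2021-05.csv
    return f"{created_on:%Y}.csv"         # e.g. 2021.csv

print(partition_file_name(datetime(2021, 5, 14), "Month"))  # 2021-05.csv
print(partition_file_name(datetime(2021, 5, 14), "Year"))   # 2021.csv
```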

Now, let us see it in action.

For the Lead table, we haven't enabled the advanced configuration settings and are going with the defaults:

  • Append Only – No
  • Partition Strategy – Month

For Contact, we have enabled the advanced configuration settings and opted for the Partition Strategy as Year.

For Account, we have opted for Append Only as true, for which the Partition Strategy option is disabled and fixed to Year.

The final configuration –

Within the container inside the Storage Account, we can see corresponding folders created per table/entity along with model.json as shown below.

Let us explore the Lead folder –

We can see 2 CSV files created in the format YYYY-MM.csv, i.e. with the month part in the file name, because we had specified the Partition Strategy as Month (the default value).

For Contact and Account, the Partition Strategy was Year, so the files are generated in the format YYYY.csv.
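For reference, the same layout can be inspected programmatically; a minimal sketch using the Azure Data Lake Storage SDK (the connection string and container name below are placeholders):

```python
from azure.storage.filedatalake import DataLakeServiceClient

# Placeholders – use your storage account connection string and container name.
service = DataLakeServiceClient.from_connection_string("<storage-account-connection-string>")
container = service.get_file_system_client("dataverse-<org>-<environment-id>")

# Expected layout: model.json at the root plus one folder per table
# (lead, contact, account, ...) containing YYYY.csv or YYYY-MM.csv files.
for path in container.get_paths(recursive=True):
    print(path.name)
```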

Let us update one of the lead records by appending ‘Updated’ to the last name field.

After the successful sync, we can see the record updated in the .csv / partition.

The same is the case with the contact record.

Now let us update an account record; it had Append Only specified as Yes.

Here we update the Account Name field from Litware to Litware Updated.

After the sync, we can see a new row appended with the updated record, along with the original record.

Let us delete the same account record

As expected, since the table is in Append Only mode, we can see a new row added for the Litware record.

We now have 2 additional rows apart from the original row – one for the update action and the other for the delete action.
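Because an append-only partition accumulates one row per change, downstream consumers typically collapse it to the latest state per record. A minimal sketch using pandas, assuming the first column holds the record GUID, rows are appended in chronological order, and the CSV has no header row (the column metadata lives in model.json):

```python
import pandas as pd

# Read an append-only partition; header=None because the column names live in model.json.
df = pd.read_csv("account/2021.csv", header=None)

# Keep only the last appended row per record GUID (assumed to be column 0).
latest = df.drop_duplicates(subset=0, keep="last")
print(latest.head())
```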

The Export to Data Lake service is Microsoft’s recommended way of synchronizing Dataverse data with external storage, and Microsoft continues to invest in it and add enhancements.

Get all the details here –

https://docs.microsoft.com/en-us/powerapps/maker/data-platform/export-to-data-lake#data-partition-strategy

Hope it helps..

ADFS Time out settings for Microsoft Dynamics CRM


Marcello Tonarelli Blog

Active Directory Federation Services (ADFS) is used by Microsoft Dynamics CRM for an Internet Facing Deployment (IFD).  Relying Parties are used to allow users to be authenticated when trying to access Microsoft Dynamics CRM.

Your session has expired

The default settings require users to re-authenticate every hour if there is no activity. This can quickly become annoying if users have to sign in to CRM several times a day. ADFS gives administrators the ability to increase the timeout and reduce the need for users to repeatedly sign in throughout the day.


Update the timeout using Microsoft PowerShell

To change the timeout value, you will need to update the TokenLifetime value.  This is achieved using PowerShell.  Before you open PowerShell, you will need to find the name of each Relying Party.

Step 1: Find out the name of the relying party

  1. Open AD FS Management
  2. Navigate to…

View original post 116 more words

Retrieving security role privileges changes (audit) using KingswaySoft’s Dynamics 365 SSIS Integration Toolkit


We can specify Source Type as AuditLogs within the CDS Source Component Editor of KingswaySoft to fetch the Audit information.

Get the details :- https://nishantrana.me/2018/10/08/using-kingswaysofts-cds-crm-source-component-to-get-audit-information-in-dynamics-365-ce-ssis/

The Source Type AuditLogs includes the Output type Audit Details (Role Privileges), which can be used to fetch audit information related to security roles.

Audit Details (Role Privileges) contains 4 columns, which we have mapped to columns in Excel:

  • AuditId
  • OldRolePrivileges
  • NewRolePrivileges
  • InvalidRolePrivileges

Let us execute the package

The output –

Here, the 1st row is from when a new security role was created, the 2nd row from when an existing security role was updated, and the 3rd row from when a security role was deleted.

To get the complete information, we need to combine this output with the Primary output using the id (auditid – the first column).

The columns of the Primary Output

After mapping, let us run the package with the Primary Output type.

Below we can see the complete details after combining both outputs using the auditid column.
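If both outputs are exported to files, the same join can also be done outside SSIS; a minimal pandas sketch (the file names and the Primary output column names here are assumptions, while the four role-privilege columns are the ones listed above):

```python
import pandas as pd

# File names are assumptions – e.g. both outputs exported to CSV for this sketch.
role_privileges = pd.read_csv("audit_role_privileges.csv")  # AuditId, OldRolePrivileges, NewRolePrivileges, InvalidRolePrivileges
primary = pd.read_csv("audit_primary.csv")                  # auditid plus the other Primary Output columns

# Join the role-privilege details onto the primary audit rows via the audit id.
combined = primary.merge(role_privileges, left_on="auditid", right_on="AuditId", how="inner")
combined.to_csv("audit_combined.csv", index=False)
```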

Note – to get the Audit Details (Role Privileges) output, we need to use the SOAP service endpoint type.

Also refer –

http://mscrmshop.blogspot.com/2016/06/auditing-security-roles-in-crm.html

http://www.kingswaysoft.com/blog/2019/10/16/Extracting-Audit-Logs-for-Multiple-CRM-Entities

https://nishantrana.me/2018/11/26/ssis-and-microsoft-dynamics-365/

https://community.dynamics.com/crm/b/mscrmcustomization/posts/ms-crm-audit-database-table-details

Hope it helps..


Write batch size, data integration unit, and degree of copy parallelism in Azure Data Factory for Dynamics CRM / 365 Dataset


Let us take a simple example where we are moving contact records (.CSV) stored in Azure File Share to Dataverse or Dynamics 365 (UPSERT).

The CSV file has 50,000 sample contact records (generated using https://extendsclass.com/csv-generator.html) stored in Azure File Storage.

Another option for generating sample data:

https://nishantrana.me/2020/05/26/using-data-spawner-component-ssis-to-generate-sample-data-in-dynamics-365/
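For completeness, a small Python sketch that produces a similar sample contact CSV (the column names are illustrative and should match whatever is mapped in the pipeline):

```python
import csv
import random
import uuid

first_names = ["Alex", "Maria", "John", "Priya", "Wei", "Sara"]
last_names = ["Smith", "Garcia", "Patel", "Chen", "Khan", "Brown"]

with open("contacts.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["firstname", "lastname", "emailaddress1"])  # illustrative column names
    for _ in range(50_000):
        fn, ln = random.choice(first_names), random.choice(last_names)
        writer.writerow([fn, ln, f"{fn.lower()}.{ln.lower()}.{uuid.uuid4().hex[:6]}@example.com"])
```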

The Source in our Data Factory pipeline.

The Sink is our Dynamics 365 / Dataverse sandbox environment, here we are using the Upsert write behavior.

For the Sink, the default Write batch size is 10.

Max concurrent connections specifies the upper limit of concurrent connections that can be established to the data store during the activity run.

Below is our Mapping configuration

The Settings tab of the Copy activity allows us to specify the following:

Data Integration Unit (DIU) is a measure of the power of a single unit of copy execution in Azure Data Factory (a combination of CPU, memory, and network resource allocation).

Degree of copy parallelism specifies the maximum number of parallel copies (threads) the copy activity can use.
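These knobs also surface in the copy activity definition; below is a rough sketch of the relevant properties, written as a Python dict that mirrors the activity JSON (treat the property names and values as illustrative, not an exact payload):

```python
import json

# Rough, illustrative sketch of the copy activity properties discussed above.
copy_activity_settings = {
    "typeProperties": {
        "dataIntegrationUnits": 4,         # DIU – power of a single copy execution unit
        "parallelCopies": 16,              # degree of copy parallelism
        "sink": {
            "type": "DynamicsSink",
            "writeBehavior": "upsert",
            "writeBatchSize": 750,         # records sent to Dataverse per batch
            "maxConcurrentConnections": 4  # upper limit of concurrent connections
        },
    }
}

print(json.dumps(copy_activity_settings, indent=2))
```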

Let us run the pipeline with the default values.

  • Write Batch Size (Sink) – 10
  • Degree of copy parallelism – 10
  • Data integration unit – Auto (4)

The results – it took around 58 minutes to create 50K contact records.

We then ran the pipeline a few more times, specifying different batch sizes and degrees of copy parallelism.

We kept Max concurrent connections blank and the Data Integration Unit as Auto (during our testing, even when we set it to higher values, the DIUs actually used were always 4).

Below are the results we got –

Write Batch Size   Degree of copy parallelism   Data Integration Unit (Auto)   Total Time (Minutes)
100                8                            4                              35
100                16                           4                              29
1000               32                           4                              35

250                8                            4                              35
250                16                           4                              25
250                32                           4                              55

500                8                            4                              38
500                16                           4                              29
500                32                           4                              28

750                8                            4                              37
750                16                           4                              25
750                32                           4                              17

999                8                            4                              36
999                16                           4                              30
999                32                           4                              20

The results show that increasing the batch size and the degree of copy parallelism improves performance in our scenario – from around 58 minutes (roughly 860 records per minute) with the defaults down to 17 minutes (roughly 2,900 records per minute) with a batch size of 750 and a parallelism of 32.

Ideally, we should run a few tests with different combinations before settling on a specific configuration, as the optimal values can vary from scenario to scenario.

On trying to set the batch size to more than 1000, we would get the below error –

ExecuteMultiple Request batch size exceeds the maximum batch size allowed.
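The error points at the Dataverse ExecuteMultiple request that is issued per batch, which allows at most 1,000 records per request. A purely illustrative sketch of the batching idea (the connector handles this internally):

```python
# Illustrative only: each batch becomes one ExecuteMultiple request,
# and ExecuteMultiple allows at most 1000 records per request.
MAX_EXECUTE_MULTIPLE_BATCH = 1000

def chunk(records, batch_size):
    if batch_size > MAX_EXECUTE_MULTIPLE_BATCH:
        raise ValueError("ExecuteMultiple Request batch size exceeds the maximum batch size allowed.")
    for i in range(0, len(records), batch_size):
        yield records[i:i + batch_size]

batches = list(chunk(list(range(50_000)), 999))
print(len(batches))  # 51 batches of up to 999 records each
```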

Also refer –

Optimizing Data Migration – https://community.dynamics.com/crm/b/crminthefield/posts/optimizing-data-migration-integration-with-power-platform

Using Data Factory with Dynamics 365 – https://nishantrana.me/2020/10/21/posts-on-azure-data-factory/

Optimum batch size with SSIS – https://nishantrana.me/2018/06/04/optimum-batch-size-while-using-ssis-integration-toolkit-for-microsoft-dynamics-365/

Hope it helps..


Environment Variables in Flows as Parameters


MG

#powerautomate #environmentvariables

I have written a blog post earlier about using environment variables to store global environment settings, and I was looking to use them in the same way I had described in that post.

Guess what I have found! They are directly exposed as parameters in my flow and can be used directly – no additional calls or anything.

https://docs.microsoft.com/en-us/powerapps/maker/data-platform/environmentvariables

View original post

Fixed – Initial sync status – Not Started – Azure Synapse Link / Export to Data Lake


Recently, while configuring the Export to Data Lake service, we observed that the initial sync status was stuck at Not started for one of the tables.

The Manage tables option was also not working.

All changes for existing tables are temporarily paused when we are in the process of exporting data for new table(s). We will resume writing changes for existing table(s) after we complete exporting data for new table(s) to the Azure data lake.

As it seemed to be stuck forever, we tried unlinking the data lake and linking it back.

Select Yes

We left the Delete data lake file system unchecked.

We then created a new link to the data lake with the same storage account.

We would get the below error –

An error occurred: Container: dataverse-pmaurya105-unqdc8ed1c1df824188bbe2225de96f0 already existed with files for storage account: saazuredatalakecrm. Please clean dataverse-pmaurya105-unqdc8ed1c1df824188bbe2225de96f0

Basically, if we are using the same storage account for linking, we need to first delete or clean the container.

We cleaned the container.
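For reference, if cleaning through the Azure portal or Storage Explorer is tedious, here is a minimal sketch of doing it with the Data Lake Storage SDK (the connection string and container name are placeholders – double-check you are pointing at the right container before deleting anything):

```python
from azure.storage.filedatalake import DataLakeServiceClient

# Placeholders – point these at the storage account and container from the error message.
service = DataLakeServiceClient.from_connection_string("<storage-account-connection-string>")
fs = service.get_file_system_client("dataverse-<org>-<environment-id>")

# Remove everything the previous Export to Data Lake link left behind.
for path in fs.get_paths(recursive=False):
    if path.is_directory:
        fs.delete_directory(path.name)
    else:
        fs.delete_file(path.name)
```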

And tried again.

This time it worked

Alternatively, we can check the Delete data lake file system option while unlinking the data lake.

This will perform the same step – deleting the files within the container.

If that doesn’t work or is not feasible, we should raise a Microsoft support ticket.

https://powerusers.microsoft.com/t5/Microsoft-Dataverse/Added-new-Table-to-Export-to-Data-Lake-Now-Sync-is-blocked/m-p/924560#M11400

Check other posts on Azure Data Lake and Dynamics 365 – 

https://nishantrana.me/2020/12/10/posts-on-azure-data-lake/

Hope it helps..
