With this new preview feature, we can now specify the columns to be downloaded to mobile devices for offline use.
Select Edit for the model-driven app.
Select Settings, then select an existing offline profile or create a new one.
Select a new or existing table for the profile; we can then see the Manage Columns option for it.
We can see the key columns already selected as part of Required Columns.
We can select additional columns from the Other Columns section for our offline profile. The fewer the columns, the faster the app downloads the data for offline use.
One point to note is that this option is available only from the maker portal, not from the Power Platform admin center.
Let us first use the Data Spawner component to generate 100K sample records for a custom table that has just two new custom fields, first name and last name, created and mapped.
Let us first run the package with batch size = 1000, threads = 20, multiplexing users = 5, and the Homogeneous Batch Operation option disabled.
Below is the User Multiplexing option in the Connection Manager.
Here we have defined 5 different application users.
Now let us run the same package with the Homogeneous Batch Operation option checked.
Below are the findings for different combinations of Batch Size, Threads, Multiplexing Users, and Homogeneous Batch Operation for the custom table (100K records).
| Batch Size | Threads | Multiplexing Users | Homogeneous Batch Operation | Duration (mm:ss) |
|---|---|---|---|---|
| 1000 | 20 | 5 | N | 5:48 |
| 1000 | 20 | 5 | Y | 1:54 |
| 500 | 20 | 5 | N | 4:16 |
| 500 | 20 | 5 | Y | 1:29 |
| 250 | 20 | 5 | N | 3:58 |
| 250 | 20 | 5 | Y | 1:38 |
| 100 | 20 | 5 | N | 4:47 |
| 100 | 20 | 5 | Y | 1:58 |
| 500 | 50 | 5 | N | 4:00 |
| 500 | 50 | 5 | Y | 1:24 |
We can see huge performance improvements while using Bulk Operations (the Homogeneous Batch Operation option) for our custom table, with threads around 20 and multiplexing users at 5: the batch-size-1000 run dropped from 5:48 to 1:54, roughly a 3x speedup. Increasing the number of multiplexing users should provide further performance improvement here.
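To put those durations in perspective, here is a quick back-of-the-envelope throughput calculation from the first two rows of the table above:

```python
# Throughput for the 100K-record custom table runs
# (batch size 1000, 20 threads, 5 multiplexing users).
records = 100_000

seconds_without = 5 * 60 + 48  # 5:48 without Homogeneous Batch Operation
seconds_with = 1 * 60 + 54     # 1:54 with Homogeneous Batch Operation

print(records / seconds_without)  # ~287 records/second
print(records / seconds_with)     # ~877 records/second, roughly a 3x speedup
```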
Now let us run it against the Contact table, this time with a 10K-record sample instead of 100K.
Contact table (10K records):

| Batch Size | Threads | Multiplexing Users | Homogeneous Batch Operation | Duration (mm:ss) |
|---|---|---|---|---|
| 500 | 1 | 1 | N | 25:26 |
| 500 | 1 | 1 | Y | 42:14 |
| 100 | 1 | 1 | N | 24:34 |
| 100 | 1 | 1 | Y | 36:06 |
| 100 | 5 | 1 | N | 21:56 |
| 100 | 5 | 1 | Y | 16:45 |
| 100 | 10 | 1 | N | 6:59 |
| 100 | 10 | 1 | Y | 12:54 |
| 100 | 10 | 2 | N | 6:14 |
| 100 | 10 | 2 | Y | 11:28 |
| 100 | 10 | 5 | N | 3:26 |
| 100 | 10 | 5 | Y | 9:36 |
| 100 | 15 | 5 | N | 2:56 |
| 100 | 15 | 5 | Y | 9:57 |
| 100 | 20 | 5 | N | 2:34 |
| 100 | 20 | 5 | Y | 10:17 (ran into a server-side throttling error) |
| 1000 | 20 | 5 | N | 5:30 (ran into a server-side throttling error) |
| 1000 | 20 | 5 | Y | 5:02 (ran into a server-side throttling error) |
| 500 | 20 | 5 | N | 4:20 (ran into a server-side throttling error) |
| 500 | 20 | 5 | Y | 2:36 (ran into a server-side throttling error) |
| 100 | 20 | 1 | N | 18:00 (ran into a server-side throttling error) |
| 100 | 20 | 1 | Y | 11:20 (ran into a server-side throttling error) |
With a higher batch size combined with threads, multiplexing users, and the Homogeneous Batch Operation option, we could get a good performance improvement; however, we ran into server-side throttling errors on increasing the batch size. Notably, unlike the custom table, enabling Homogeneous Batch Operation often made the Contact runs slower (e.g., 2:34 without vs. 10:17 with, at batch size 100 and 20 threads). So with tables having a higher number of fields/relationships, we need to be more careful than with a custom table having fewer relationships and fields.
[CDS Destination] Warning: An exception has occurred while processing the service request, the same request will be attempted again immediately. KingswaySoft.IntegrationToolkit.DynamicsCrm.WebAPI.WebApiServiceException: The underlying connection was closed: A connection that was expected to be kept alive was closed by the server. (Error Type / Reason: KeepAliveFailure, Detailed Message: The underlying connection was closed: A connection that was expected to be kept alive was closed by the server.)
[CDS Destination] Warning: A server side throttling is encountered, the same request will be retried after 5 minutes (as instructed by the returned throttling error message from the server). KingswaySoft.IntegrationToolkit.DynamicsCrm.WebAPI.WebApiServiceException: The remote server returned an error: (429). (Error Type / Reason: 429, Detailed Message: {"error":{"code":"0x80072321","message":"Combined execution time of incoming requests exceeded limit of 1200000 milliseconds over time window of 300 seconds. Decrease number of concurrent requests or reduce the duration of requests and try again later."}})
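The throttling warning above reflects Dataverse's documented service protection limits: the server returns HTTP 429 along with a Retry-After header, and the client is expected to pause for that long before retrying, which is exactly what KingswaySoft does automatically here. As a minimal sketch of that contract for a hand-rolled client (this is not KingswaySoft's internal implementation; the URL, payload, and headers are whatever your integration uses):

```python
import time
import requests

def post_with_throttle_retry(url, payload, headers, max_retries=3):
    """POST to the Dataverse Web API, honoring the Retry-After header
    that accompanies a 429 service protection response."""
    for _ in range(max_retries + 1):
        response = requests.post(url, json=payload, headers=headers)
        if response.status_code != 429:
            response.raise_for_status()
            return response
        # Dataverse reports the back-off in seconds; fall back to 5 minutes
        # (the same pause KingswaySoft applied) if the header is absent.
        time.sleep(int(response.headers.get("Retry-After", 300)))
    raise RuntimeError("Still throttled after retries")
```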
Recently, while trying to run an SSIS package from within SSDT, we started getting the below error. The package had been running without any errors a couple of weeks earlier.
We tried most of the suggested options, but nothing worked, so we eventually tried the Update option. That also didn't work.
Next, we tried the Repair option.
Finally, repairing it worked. It could be because we had installed another .NET-based application, which might have changed a few of the dependent components.
Below we can see the resource cell template (or view) applied, which defines the images, values, and fields displayed for the resource in the Schedule Board.
Now suppose we also want to show the Account (custom field) value, which would make it easier for the dispatcher to schedule resources from within the Schedule Board.
For this, we need to select Board Settings for the Schedule Board.
Navigate to the Other section within the Board Settings.
We first start by adding/defining a new Resource Cell Template and a new Retrieve Resources Query template.
We have added a div tag to the template to show the account name, as sketched below.
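The original markup isn't reproduced here, so as an illustrative sketch only: the added line is a div bound to the account key that the resource query will expose. Both the class names and the new_accountname key below are assumptions, not the exact template:

```html
<!-- Illustrative sketch: 'new_accountname' must match the key exposed by the resource query -->
<div class="resource-role ellipsis">{{new_accountname}}</div>
```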
Save this new template.
Next, add an attribute tag for the account field in the FetchXML so that it retrieves the value of the account; here, the name property holds the schema name of the field, as shown below.
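For example, assuming the custom account lookup's schema name is new_account (a hypothetical name), the tag would look like this:

```xml
<!-- 'new_account' is a placeholder; use the actual schema name of your account field -->
<attribute name="new_account" />
```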
Select Save as new to add the new template.
On refreshing the Schedule Board, we can see the Account value added to the view; however, it shows the GUID of the account record.
To get the label/name, edit the Resource Cell Template and add a UFX bag (UFX directives for querying the data) to fetch the name of the account, as sketched below.
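As a rough sketch only (the exact directive names should be verified against the Universal Resource Scheduling UFX documentation), the idea is to select the lookup's display name into a key the cell template can bind to, reusing the hypothetical new_account schema name from above:

```xml
<!-- Illustrative: pulls the account lookup's display name into 'new_accountname' -->
<new_accountname ufx:select="new_account/@name" />
```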
Update the sample resource query, save the changes, and refresh the Schedule Board.
We can see the GUID replaced by the account name there.
Import Sequence Number is an internal field (whole number) that acts as an identifier of the data import or data migration that created a record, which helps ensure traceability during data import/migration.
Here, when we imported around 50 contact records from myFile0.csv without specifying any value or mapping for the import sequence number, the system generated an import sequence number and associated it with all 50 records created by the import.
Next, we imported 3 more records, and this time we manually specified the value for the import sequence number.
We could see the records created with the value we had specified.
Next, we again removed the import sequence number and ran the import.
We can see the record created with an auto-generated sequence number, i.e., 3.
For records created manually from the application, we can see that the value is null.
As we can see, the primary function of the import sequence number is to track which records were created during a specific import. It allows us to identify records created through import versus those entered manually. It can also be used for troubleshooting when certain records are not created during the import process. When migrating data from another system, we can include a corresponding import sequence number in our source data. This can help establish a one-to-one link between the source record and the newly created record inside Dataverse, allowing better handling of failed rows.
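For instance, when migrating data through the Dataverse Web API rather than the import wizard, the importsequencenumber column can be set explicitly at create time and filtered on afterwards. A minimal sketch, assuming a hypothetical org URL and a valid bearer token:

```python
import requests

# Hypothetical environment URL and token -- replace with your own values.
BASE_URL = "https://yourorg.crm.dynamics.com/api/data/v9.2"
HEADERS = {
    "Authorization": "Bearer <access-token>",
    "Content-Type": "application/json",
}

# Stamp a known import sequence number on the record at create time.
contact = {
    "firstname": "Sample",
    "lastname": "Contact",
    "importsequencenumber": 3,
}
requests.post(f"{BASE_URL}/contacts", json=contact, headers=HEADERS).raise_for_status()

# Later, identify everything created by that import/migration run.
resp = requests.get(
    f"{BASE_URL}/contacts?$select=fullname&$filter=importsequencenumber eq 3",
    headers=HEADERS,
)
print(resp.json()["value"])
```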