Data Flow Transformations in SSIS


SSIS provides various data flow transformation components that make it easy to manipulate the source data before it is sent to the destination for processing.

Below are some of the most frequently used ones:

  • Data Conversion

Used to change the data type of a source column.

Simply select the input column and then specify the target data type for it.

For example, if our source has one of the date columns stored as a string, we can apply the Data Conversion transformation to convert its data type to Date.

  • Derived Column

Can be used to apply an expression to a source column and either create a new column or update an existing one.

For example, if we are interested in the year part of the Modified Date column, we can use the YEAR date/time function to extract it and add it as a new column.
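In SSIS expression syntax, the Derived Column expression for this would look like the following (the exact column name depends on the source; [ModifiedDate] here is illustrative):

```
YEAR([ModifiedDate])
```

The result can then be added as a new column in the Derived Column Transformation Editor.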

  • Percentage Sampling and Row Sampling

To reduce the number of rows in the pipeline.

Percentage Sampling allows us to define the percentage of rows for sampling. It generates two outputs: one for the rows that are selected and another for the unselected rows.

Similarly, Row Sampling allows us to specify the number of rows directly instead of a percentage. It also gives the option to select the columns.

  • Multicast

Multicast can be used to pass the same data to multiple destinations.

  • Sort

The Sort transformation can be used to sort the data. In the case of SQL sources, we can specify sorting in the query itself; however, when working with flat files this transformation can be handy.

  • Aggregate

To perform aggregation on the source data, we can use the Aggregate transformation.

  • Union All

Union All allows us to combine data from multiple sources. The column structure of the sources should be identical. The data from the first source is passed to the destination, followed by the data from the second.

We covered some of the common transformation components in this post; in the next post we’ll try to cover the remaining frequently used transformations.

Hope it helps..

D365 V9{Upgrade}: Client API Change for openWebResource


Ajit Patra

Prior to D365 V9, we were using Xrm.Utility.openWebResource() to open a web resource in a new window. However, going forward in D365 V9 we need to use Xrm.Navigation.openWebResource() to perform the same operation. Let’s take an example:

D365 V8:

D365 V9:
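The original screenshots are not available here; as a sketch of the change (the web resource name and parameters below are illustrative, not from the original post):

```javascript
// D365 V8 (deprecated): open a web resource in a new window
// Xrm.Utility.openWebResource("new_samplepage.html", "id=123", 400, 300);

// D365 V9: the same operation via Xrm.Navigation
// Xrm.Navigation.openWebResource("new_samplepage.html",
//     { width: 400, height: 300, openInNewWindow: true }, "id=123");

// Small helper that builds the V9 argument list, so the shape of the new
// call can be checked outside the CRM form context (where Xrm exists):
function buildOpenWebResourceArgs(name, width, height, data) {
    return [name, { width: width, height: height, openInNewWindow: true }, data];
}
```

In a form script, the helper would be used as `Xrm.Navigation.openWebResource.apply(Xrm.Navigation, buildOpenWebResourceArgs("new_samplepage.html", 400, 300, "id=123"))`.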

Hope it helps !!


How to – Stop SSIS Package Execution in SSISDB


Suppose a package has been running for quite a long time or has been mistakenly run and we would like to stop its execution.

To do so:

  • Right Click on SSISDB
  • Select All Execution Reports

  • Filter it to see all the Running packages

  • We can see all the running packages there.

  • Now to stop any specific package, right click SSISDB and select Active Operations

  • From the Active Operation window, select the running job and click on Stop button to stop its execution.

  • Another option is to use the catalog.stop_operation stored procedure: get the operation id of the running package and execute the stored procedure

Exec catalog.stop_operation  @operation_id = 199915
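To find the operation id, we can query the SSISDB catalog views; a sketch (in catalog.executions, status 2 indicates a currently running execution):

```sql
-- List currently running package executions; execution_id is the
-- operation id to pass to catalog.stop_operation
SELECT execution_id, folder_name, project_name, package_name
FROM catalog.executions
WHERE status = 2;
```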


Hope it helps..

Using Alternate Key to set Lookup in SSIS (KingswaySoft)


Alternate Keys, which were introduced with CRM 2015 Update 1, gave us the ability to upsert and update records by defining one or more fields as an Alternate Key and using those field(s) instead of relying only on the Primary Key.

Alternate Keys can also be used for setting or updating a lookup instead of the GUID.

The CDS\CRM Destination Component in the KingswaySoft SSIS Integration Toolkit gives us the option of specifying the Alternate Key for updating the lookup field.

Suppose we have the Contact entity with the following Alternate Keys defined in it:

  • Key1 on the field lastname
  • Key2 on the 2 fields emailaddress1 and lastname

Within the CDS/CRM Destination Component Editor, for the lookup field, click on the ellipsis for the Text Lookup column. Here we have selected the Parent Contact field inside the Lead entity.

Inside the Text Lookup Editor, select the option “Choose Target Field(s)”, select Alternate Key as the lookup method, and we will see both our alternate keys listed there:

  • Key1 on lastname

  • Key2 on emailaddress1 and lastname

One thing I have observed is that it doesn’t list an alternate key if it is defined using more than 2 fields. It looks like this is by design.

Hope it helps..

Create Azure Machine Learning Web Service using Azure Machine Learning Studio


Azure’s Machine Learning Studio makes it easy to create machine learning based solutions.

To try it for free, go to the Machine Learning Studio home page and sign in with an existing Microsoft Account (or we can make use of an existing Azure Account and add Machine Learning to it).

Azure Machine Learning Studio is an IDE for machine learning that allows us to design, develop, test, and deploy solutions easily using drag-and-drop functionality.

Let us understand the interface of Machine Learning Studio

A project is a collection of Experiments, Web Services, Notebooks, Datasets, and Trained Models grouped together. Experiments are where we get, process, transform, train, and evaluate our data. Web Services are created from our experiments and make the predictive model usable from within other applications. Datasets refer to the data that we use for our experiments.

Let us start with our experiment.

Select Experiments and click on the New button at the bottom. Let us start with a blank experiment.

It loads a blank experiment with a greyed-out workflow diagram.

The typical steps within an experiment include

  • Getting the Data
  • Applying the Algorithm
  • Split and Train the model
  • Score the model
  • Evaluate the model

The first step involves specifying the dataset; here we will be working with the sample Restaurant Ratings dataset already provided. Select and drag the Restaurant Ratings dataset into the experiment designer.

We can right-click it and either download or visualize the data.

The dataset contains the user id, place id, and rating columns.

It basically shows the different restaurants visited by users and the ratings given by them.

If our dataset includes a large number of columns or duplicate rows, and we don’t need all of them for our experiment, we can make use of the “Select Columns in Dataset” module, or the “Remove Duplicate Rows” module to remove duplicate rows, by specifying them as the next step in our flow. We can also use any of the other modules in the Manipulation section, based on our requirement and the dataset that we are dealing with.

Here, as we have only three columns and no duplicate rows, we won’t be using them.

Next, let us apply an appropriate algorithm. Based on User ID and Place ID we want the rating to be predicted, so let us select the Multiclass Decision Jungle algorithm and drag it to the experiment area.

Now let us take a subset of our dataset for training the model, for which we’d use the Split Data module.

Here we have specified our dataset as input to the Split Data module and set the fraction of rows to 0.5, which means that 50 percent of the data will be used for training and the remaining 50 percent will be our test data.

Click on Run to generate the output up to the Split Data module. After a successful run, we’d see a green check on the module.

Now to train the model, drag the Train Model module. Specify the algorithm and the split data as the input to that module and click on Launch column selector.

Select the Rating column which is the column we would like to predict.

Click on Run again.

Next, we’d add the Score Model and Evaluate Model modules to see how the algorithm is performing.

Connect the output of Train Model and the second output of Split Data as inputs to the Score Model.

For Evaluate Model specify the connection line from Score Model as input and click on Run.

After the successful run, right-click the Evaluate Model and select Visualize to see the result.

Evaluation Results:

This completes our Prediction model.

Now we will create and deploy the web service for our prediction which will allow it to be used within the application.

Select Predictive Web Service from the Set Up Web Service button.

This adds the Web Service Input and Output module to our experiment. Run the experiment again and click on Deploy Web Service to deploy the service to Azure.

A successful deployment will open up the Web Service group and will list the details of the web service deployed. We can click on “Test Preview” to test our model.

Click on Enable for the sample data inside the Test page, which will generate the sample data to be used as input for the web service.

After generation of Sample Data, click on Test Request-Response button to see the output.

The Scored Labels column displays the predicted rating based on the data that we have passed in.

Now to use this web service in our application, back in our Web Service dashboard, click on Request/Response to open the documentation page generated for our experiment.

Also note the API Key there, which needs to be passed along with the request.

In the Request-Response API Documentation page, we can see all the details, like the Request URL, Request Headers, Request Body, Response, and also the sample code for calling the web service.
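As a sketch of such a call (the request URL and API key below are placeholders to be taken from the dashboard, and the column names are assumed to match the Restaurant Ratings dataset):

```javascript
// Build the Request-Response body in the classic Azure ML Studio format:
// Inputs -> input1 -> ColumnNames / Values, plus GlobalParameters.
function buildRequestBody(userId, placeId, rating) {
    return {
        Inputs: {
            input1: {
                ColumnNames: ["userID", "placeID", "rating"],
                Values: [[userId, placeId, rating]]
            }
        },
        GlobalParameters: {}
    };
}

// Call the deployed service; apiUrl and apiKey come from the
// web service dashboard and must be replaced with real values.
function callService(apiUrl, apiKey, body) {
    return fetch(apiUrl, {
        method: "POST",
        headers: {
            "Content-Type": "application/json",
            "Authorization": "Bearer " + apiKey
        },
        body: JSON.stringify(body)
    }).then(function (response) { return response.json(); });
}
```

The response contains the Scored Labels value described above.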

Similarly, we can click on Batch Execution link inside the web service Dashboard to access the documentation for Batch API Execution.

Thus we saw how easy it is to build a machine learning solution using Azure Machine Learning Studio.

Hope it helps..

D365: JavaScript and Business Rule on the same field


JavaScript executes first and then the Business Rules ….

Ajit Patra

Recently, for the change of value of a field, we earlier had a business rule, and then we decided to perform the same logic using JavaScript to avoid hard-coding values in the business rule. The logic was to set the values of 2 other fields on the form based on the value selected.
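As a minimal sketch of the kind of handler involved (the field names here are hypothetical, not from the original post):

```javascript
// Hypothetical onChange handler: when the watched field changes,
// set the values of two other fields on the form.
// Field names (new_category, new_fielda, new_fieldb) are illustrative.
function onCategoryChange(executionContext) {
    var formContext = executionContext.getFormContext();
    var category = formContext.getAttribute("new_category").getValue();
    if (category !== null) {
        formContext.getAttribute("new_fielda").setValue("Value A");
        formContext.getAttribute("new_fieldb").setValue("Value B");
    }
}
```

The handler would be registered on the field's OnChange event with "Pass execution context as first parameter" enabled.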

The logic was working fine on DEV as we had deactivated the business rule. However, after deploying to TEST and UAT, it stopped working.

We verified that the JavaScript event handler was enabled for the OnChange event of that field.


While debugging the code using the browser console, we confirmed that the breakpoint was hitting the code block where we were setting the values of the 2 other fields.


After struggling for a few minutes, it came to our mind that a business rule had also been written, which should have been deactivated.


The business rule was basically clearing the values of…
