Preventing Plugin Recursion

When writing plugins, it is essential to know when to prevent the plugin from executing. First, it saves resources: if your plugin doesn’t need to run, stopping it early keeps those resources free for other work. Second, if the plugin executes repeatedly without any checks for recursion, you will receive an infinite loop error and your save will fail. The SDK offers a number of checks and balances you can use to detect, and prevent, recursion.

Register Step on Pre-Operation Stage. There are three stages you can use when registering your plugin step:

  • Pre-Validation triggers outside the CRM transaction and is usually used to check for data errors, since no data has been written to the CRM at this stage.
  • Pre-Operation triggers before the data is saved to the CRM, so you can edit and update your data before it is committed.
  • Post-Operation triggers after the data has been saved to the CRM, so you receive the latest data values after all Pre-Operation steps have completed.

Updating data in the Pre-Operation stage triggers no additional Update message, while updating any values from the Post-Operation stage will trigger another Update.

MSDN: Event Execution Pipeline
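
As a quick illustration, here is a minimal sketch of branching on the stage from inside a plugin. The stage numbers come from the event execution pipeline, and the "description" attribute is just a stand-in for your own logic:

// Stage values in the event execution pipeline:
// 10 = Pre-Validation, 20 = Pre-Operation, 40 = Post-Operation.
if (context.Stage == 20 && context.InputParameters.Contains("Target"))
{
    // In Pre-Operation, changes made to the target are saved with
    // the record and do not fire an additional Update message.
    Entity target = (Entity)context.InputParameters["Target"];
    target["description"] = "Stamped in Pre-Operation";
}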

Select Filtering Attributes. When registering your plugin on an Update message, you have the option to select filtering attributes. If you don’t select any filtering attributes, your plugin will execute every time the entity is updated, whether it is updated in the CRM, by a workflow, by a plugin, or by an external source through the Web Services. By selecting specific fields in the filtering attributes, you are telling the plugin to only trigger if those specific attributes have been updated. One thing to be aware of when using filtering attributes is that the filtered attribute will be present in the target entity. If your plugin then calls an update on the target entity, it will write that attribute back, even if its value has not changed, causing the plugin to trigger again and potentially creating an infinite loop.

[Figure: the Plugin Registration dialog, where filtering attributes are selected]

Filtering Attributes in Plugin. This is essentially the same thing as filtering attributes in the plugin registration process. The difference is that you verify in code that one or more of the fields exist in the target entity, and if none are found, the plugin exits gracefully. This has the same limitation as the filtering attributes: an update to the target entity from within the plugin could cause it to trigger again, risking another infinite loop error.

// Exit gracefully if there is no target entity to inspect.
if (!context.InputParameters.Contains("Target") ||
    !(context.InputParameters["Target"] is Entity))
{
    return;
}

Entity entity = (Entity)context.InputParameters["Target"];

// Exit gracefully if none of the attributes we care about
// were included in this update.
if (!entity.Contains("firstname") &&
    !entity.Contains("lastname"))
{
    return;
}

Compare PreEntityImage to PostEntityImage. This takes the previous step of filtering attributes to the next level. When registering your Update message in the plugin registration, you can specify a pre and post entity image to be loaded into the execution context. The pre image contains the values of the attributes before they were changed in the CRM, while the post image contains the values after they were changed. By comparing an attribute in the pre image to the same attribute in the post image, you can see which values have actually changed. If any of the attributes you want to filter on have changed, you can continue executing the plugin; otherwise, you can exit the plugin gracefully. Since you are storing the data from the entity before and after, this adds overhead to the plugin execution, so it is a good idea to add only the attributes you want to compare to the images, keeping the overhead as low as possible.

MSDN: IExecutionContext.PreEntityImages Property
MSDN: IExecutionContext.PostEntityImages Property
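
Here is a minimal sketch of that comparison. The image names "PreImage" and "PostImage" and the firstname/lastname attributes are assumptions; they must match whatever you chose when registering the images on the step:

// The image names must match the names used when registering the
// images on the plugin step ("PreImage" and "PostImage" are assumed).
Entity preImage = context.PreEntityImages["PreImage"];
Entity postImage = context.PostEntityImages["PostImage"];

// Only the attributes added to the images are available to compare.
string[] watchedAttributes = { "firstname", "lastname" };
bool changed = false;

foreach (string attribute in watchedAttributes)
{
    object before = preImage.Contains(attribute) ? preImage[attribute] : null;
    object after = postImage.Contains(attribute) ? postImage[attribute] : null;

    if (!object.Equals(before, after))
    {
        changed = true;
        break;
    }
}

// Exit gracefully if nothing we care about actually changed.
if (!changed)
{
    return;
}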

Don’t Update the Target in the Post-Operation Stage. The target entity contains the values that were updated at the time the entity was saved. If you are running your plugin in the Pre-Operation stage, you should make all of your updates to the target entity. This ensures your updates are committed to the CRM, and Pre-Operation updates do not trigger an additional Update message. On the other hand, if you are running your plugin in the Post-Operation stage, you do not want to update the target entity. If you do, you will trigger an update using the same attributes, even if they have not changed. Instead, create a new entity with the appropriate id, add only the values that have changed, and call the update function on this entity. This ensures that only the changed attributes get updated in the CRM, and prevents triggering any workflows or plugins that filter on other attributes.

// Create a new entity containing only the attributes that changed,
// rather than updating the full target entity.
Entity updatedEntity = new Entity(context.PrimaryEntityName);
updatedEntity.Id = context.PrimaryEntityId;
updatedEntity["name"] = "New Name";
service.Update(updatedEntity);

Check Execution Depth. The execution context in the SDK has a property called Depth, which tells you how far down the chain of calls the current execution is. If you are saving the record in the CRM directly, the depth will usually be 1, and each subsequent call to the plugin increases the depth by 1. Unfortunately, this isn’t a foolproof method, as the update could be triggered by a workflow or even another plugin. If that happens, the depth could be 2, 3, or even 4 before your plugin is triggered for the first time. If the depth you check against is too low, you run the risk of your plugin not being triggered when necessary; if it is too high, your plugin could run multiple times, using additional resources and execution time.

MSDN: IExecutionContext.Depth Property
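
A basic depth check might look like the sketch below. The threshold of 1 is an assumption that only holds if nothing else legitimately sits between the user’s save and this step:

// Depth is 1 when the plugin is triggered directly by the original
// save; anything greater means another plugin or workflow called us.
// Raise the threshold if legitimate workflows or plugins sit earlier
// in the chain.
if (context.Depth > 1)
{
    return;
}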

Check CorrelationId. The CorrelationId is another parameter exposed by the execution context. This is a Guid that tracks the ID of the plugin or workflow execution. It is a little harder to validate against, but it is an excellent way to keep track of your current execution process and prevent your plugin from triggering multiple times in the same process. Store the CorrelationId in a thread-safe global variable in your plugin, and when the plugin executes a second time, compare the value stored in the global variable with the one passed in the execution context. If the two match, you know you’re running in the same process, and you can exit the plugin gracefully.

MSDN: IExecutionContext.CorrelationId Property
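
One possible shape for that check is sketched below. The class name MyPlugin is hypothetical, the static field plays the role of the "thread-safe global variable," and the lock is there because the CRM may reuse a single plugin instance across requests:

public class MyPlugin : IPlugin
{
    // The "global variable": shared across executions of this plugin
    // instance, guarded by a lock for thread safety.
    private static readonly object _sync = new object();
    private static Guid _lastCorrelationId;

    public void Execute(IServiceProvider serviceProvider)
    {
        IPluginExecutionContext context = (IPluginExecutionContext)
            serviceProvider.GetService(typeof(IPluginExecutionContext));

        lock (_sync)
        {
            // Exit gracefully if we have already run in this process.
            if (context.CorrelationId == _lastCorrelationId)
            {
                return;
            }

            _lastCorrelationId = context.CorrelationId;
        }

        // ... the rest of your plugin logic ...
    }
}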

As you can see, there are many different methods you can use to ensure your plugin steps don’t run more times than necessary. Use one or more of these approaches, and be flexible: what works for one plugin may not work for another, depending on the business logic you’re trying to enforce. Experiment and see what works best for the plugin you’re writing.

Improving Migration Performance

I was recently involved in a data migration project on a very limited timeline. The data would be coming from an older CRM on-premise environment and going to a new CRM Online instance. Although the data was not very complex, there was a significant amount of it and the cutover would need to happen over a single weekend.

After putting together and testing the data maps in Scribe, I was able to benchmark the full runtime: an end-to-end migration would take 5 days. This would need to be greatly reduced in order to fit the 48-hour migration window I was given.

The solution I came up with involved 3 changes to the straightforward import approach.

Distributed Processing

The first involved leveraging CRM Online’s distributed computing architecture. Every operation in the Scribe migration package takes place serially, and since I’m connected using the Scribe CRM Adapter, all operations are passed through the CRM API. This triggers data validation and any business rules that are configured, which can introduce unavoidable processing delays. To get around this bottleneck, I split each of my Scribe migration processes into multiple files, with source queries filtered by contiguous date ranges, and ran them simultaneously. Each copy of the Scribe Workbench establishes its own independent connection with the CRM API, and distributed computing works its magic to turn my serial process into a parallel one.

Pre-Cached Lookups

The second adjustment involved cutting processing time associated with referencing related records, specifically record-owning users. Anyone who has used Scribe to push data to CRM knows that the DBLOOKUP function, while incredibly handy, can put a strain on throughput. Each lookup halts record processing while it reaches out to the CRM API and retrieves data. To avoid this extra processing step I created a table in my local SQL database and populated it with a cached copy of the user names and unique identifiers in the new CRM Online database. I then added another column to this table and filled it with the corresponding identifiers from the older on-premise CRM users. Finally, I adjusted the source queries in my migration processes to join to this new table, so that the necessary owner identifiers would be available for direct mapping within Scribe, removing the need for DBLOOKUPs entirely.

Bulk Imports and Delta Packages

The final change to my migration strategy had to do with timing. Although the previous 2 changes would make the migration jobs fit within the 48-hour window, they wouldn’t provide a lot of room for error. I therefore determined the best approach would be to perform 90% of the migration in the week leading up to the Go-Live weekend, and then wrap up with a few finalization jobs to reconcile changes in the data. I began by importing all records with an Open status. This would allow any major changes to a record’s data to be brought across over the weekend without the need to re-open anything. I then made 2 copies of each of the migration packages. The first set was filtered on CRM’s ModifiedOn date, and would create or update records as necessary with any changes that occurred during the week. The second set was an update-only job that set the final statecode and statuscode. On go-live weekend these two sets were run, one after the other, in about 2 hours.

Time constraints often create an opportunity to improve processing performance. Although this solution was tailored to a specific need, the basic concepts can be applied to most migration scenarios. For assistance with accommodating your migration to Microsoft CRM, contact us today!

Do You Offer the Support Your Employees Deserve?

Many organizations feel that they offer their employees the information they need to succeed and enjoy their jobs. Unfortunately, information is often difficult to locate, spread across multiple locations or challenging to understand. Failing to provide a useful resource platform can cost your organization time, money and productivity.

Designing and developing effective Employee Self Service (ESS) portals and resources is vital to the continued success of your business and the happiness of your employees.

What makes an ESS solution effective?

In order to be useful, your ESS portals and resources must be:

  • Easily accessible: Today’s employee accesses information and systems from multiple locations – the office, the road and from home. Your ESS resources must be accessible from anywhere, and must be easy to reach.
  • Consolidated: Far too often, employees are forced to access multiple systems to find answers. Creating a single repository for key documentation and resources helps your employees quickly find the information they need, without wasted time and effort bouncing from system to system.
  • Organized: An effective catalogue system and search functionality makes it easy for your employees to find the answers they need. Taking a proactive approach to organizing data will help to eliminate much of the frustration related to the search for answers.

If these characteristics don’t describe your current ESS efforts, it may be time to take action.

Make the Investment into Your Employees

A minimal investment of time and money into an effective ESS portal today will save your employees a good deal of frustration and confusion in the future. The Avtex team can help you design and build ESS portals to help employees find information relating to a wide range of issues, including:

  • Human Resources
  • IT and Technology
  • Onboarding
  • Legal
  • Compliance and Regulatory Matters
  • Sales and Marketing

Read more about our ESS services, or contact us to discuss your organization’s needs today.