BizTalk Scheduled Task Adapter - Load Balancing

Apr 26, 2011 at 12:45 PM


Does anyone have experience of working with the Scheduled Task Adapter in a multi-server environment? I was wondering how the adapter behaves if it is configured across multiple servers within a single BizTalk Group. Could you end up with duplicate messages being created? If so, do you need to ensure that receive locations which utilise the adapter are only enabled on a single server within the BizTalk Group?

Apr 26, 2011 at 1:20 PM

You can only have a single instance of any Scheduled Task adapter receive location running at any time. Otherwise you will receive duplicate messages.
You cannot enable or disable receive locations on individual servers; a receive location is either enabled or disabled for all servers in the group.
The way to provide failover in this case is to use a dedicated host for these receive locations, and have its host instance service running on one server and stopped on the other server.
To fail over to another server, simply start the host instance on the backup server. You can use Cluster Services to automate this.

This behaviour is the same as the FTP adapter or any other polling-type adapter, with the exception of the File adapter, which uses file locks to prevent two receive location instances consuming the same file.
The problem can only be overcome if the two receive location instances are aware of each other, i.e. they share a common resource.
The standard tasks do not implement this, but it would be possible to write a custom task with a shared locking mechanism. I would imagine, though, that there would be all sorts of timing issues and race conditions to contend with.
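For illustration, a shared locking mechanism along those lines could be built on a lease table in a database that every server can reach. Below is a minimal sketch in Python; SQLite stands in for the shared database, and the names `task_lock`, `try_acquire` and `make_lock_table` are my own invention, not part of the adapter. A real BizTalk group would more likely lean on something like SQL Server's `sp_getapplock`, and the timing caveats above (clock skew between servers, lease expiry racing the task itself) still apply.

```python
import sqlite3
import time

def make_lock_table(conn):
    """Create the shared lease table if it does not already exist."""
    conn.execute(
        "CREATE TABLE IF NOT EXISTS task_lock ("
        "name TEXT PRIMARY KEY, holder TEXT, expires REAL)"
    )
    conn.commit()

def try_acquire(conn, task_name, holder, lease_seconds=60):
    """Attempt to take the schedule lock for task_name.

    Returns True if this holder won the lock (no row yet, an expired
    lease, or its own still-live lease), False if another instance
    holds an unexpired lease. Only the winner should fire the task.
    """
    now = time.time()
    cur = conn.cursor()
    cur.execute("BEGIN IMMEDIATE")  # serialise concurrent attempts
    try:
        cur.execute(
            "SELECT holder, expires FROM task_lock WHERE name = ?",
            (task_name,),
        )
        row = cur.fetchone()
        if row is None:
            cur.execute(
                "INSERT INTO task_lock (name, holder, expires) "
                "VALUES (?, ?, ?)",
                (task_name, holder, now + lease_seconds),
            )
            conn.commit()
            return True
        other_holder, expires = row
        if other_holder == holder or expires < now:
            # Our own lease, or a dead server's stale lease: take over.
            cur.execute(
                "UPDATE task_lock SET holder = ?, expires = ? "
                "WHERE name = ?",
                (holder, now + lease_seconds, task_name),
            )
            conn.commit()
            return True
        conn.rollback()
        return False
    except Exception:
        conn.rollback()
        raise
```

Each receive location instance would call `try_acquire` at the scheduled time and only create the trigger message when it returns True; the lease expiry lets a surviving server take over if the lock holder dies mid-lease.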

Apr 26, 2011 at 5:16 PM

Hi Greg

Thanks for the prompt reply. At present I am exploring two options to address a particular requirement: one is the use of your BizTalk Scheduled Task adapter; the alternative is the Windows Scheduler, using a batch file to create a trigger message. Both solutions appear to be subject to the same limitation, i.e. I need some mechanism to ensure they only run on a single node.
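For what it's worth, the batch-file approach amounts to dropping a small trigger message into a folder watched by a File receive location. A sketch of that step in Python (the folder path, file naming and message body here are placeholders, not anything the adapter prescribes); note it does nothing to solve the single-node problem, since a scheduler job running on two servers would still produce two trigger files:

```python
import datetime
import os
import tempfile

# Placeholder pickup folder; in practice this would be the path
# watched by the BizTalk File receive location.
PICKUP_FOLDER = r"C:\BizTalk\TriggerPickup"

def write_trigger(folder=PICKUP_FOLDER):
    """Drop a uniquely named trigger message into the pickup folder.

    The file is written under a temporary .tmp name and then renamed,
    so the File adapter never picks up a half-written message.
    """
    os.makedirs(folder, exist_ok=True)
    stamp = datetime.datetime.now().strftime("%Y%m%dT%H%M%S%f")
    body = f'<Trigger timestamp="{stamp}" />'
    fd, tmp_path = tempfile.mkstemp(dir=folder, suffix=".tmp")
    with os.fdopen(fd, "w", encoding="utf-8") as f:
        f.write(body)
    final_path = os.path.join(folder, f"trigger_{stamp}.xml")
    os.rename(tmp_path, final_path)
    return final_path
```

The write-then-rename pattern matters here: the File adapter would otherwise race the scheduler and occasionally consume a partially written file.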

It's back to the drawing board :(.