This thread looks to be a little on the old side and therefore may no longer be relevant. Please see if there is a newer thread on the subject and ensure you're using the most recent build of any software if your question regards a particular product.
This thread has been locked and is no longer accepting new posts. If you have a question regarding this topic, please email us at support@mindscape.co.nz
|
Ivan - In this post (https://www.mindscapehq.com/forums/Post.aspx?ThreadID=3471&PostID=11731) you cautioned against a possible race condition, where two threads could both think a record doesn't exist and create duplicate new records. I have been trying to come up with a way to prevent this, using the tools provided as part of the LS package, and would appreciate some help or insight. One idea I had is to make one column (or a combination of two or more columns) of the table unique, so that the second attempt to save the record fails, thereby failing the second transaction. Barring that, I thought of creating a surrounding TransactionScope and failing the transaction if there is a duplicate record after SaveChanges runs, but that seems to be a lot of thrashing around to check for the occasional error that may occur. Any other ideas? Thanks, Dave |
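[Moderator note] The unique-constraint idea can be sketched in plain SQL terms. A minimal illustration using Python's built-in sqlite3 module, assuming a hypothetical `widgets` table (the table and column names are made up for this example and are not from the LightSpeed model under discussion):

```python
import sqlite3

def ensure_widget(conn, name):
    """Insert a widget if absent; on a unique-constraint violation,
    fall back to reading the row the other thread just created."""
    try:
        with conn:  # each attempt runs in its own transaction
            conn.execute("INSERT INTO widgets (name) VALUES (?)", (name,))
    except sqlite3.IntegrityError:
        pass  # lost the race: the row now exists, so just read it
    row = conn.execute("SELECT id FROM widgets WHERE name = ?",
                       (name,)).fetchone()
    return row[0]

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE widgets (id INTEGER PRIMARY KEY, name TEXT UNIQUE)")

first = ensure_widget(conn, "sprocket")
second = ensure_widget(conn, "sprocket")  # duplicate attempt is absorbed
assert first == second  # both callers end up with the same row
```

The unique constraint turns the race into a catchable error instead of silent duplicate data; the trade-off Ivan raises below is that in a multi-record batch the violation fails the whole transaction, not just the offending row.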
|
|
The unique constraint idea is nice, but some care may be required: as you note, a violation will fail the entire transaction, which could be a large batch. Figuring out which record(s) caused the violation and which were okay may involve a bit of reconciliation. Another strategy to consider is whether you can handle this before you even get to the SaveChanges stage. For example, have a single producer thread which hands out work to data access threads. The producer thread could bucket items so that items that would be duplicates always end up in the same bucket. If each bucket is serviced by a single thread, it's impossible for two threads to end up creating duplicate records (because those records would be in the same bucket and therefore serviced by the same thread). This doesn't readily extend to multiple processes or machines, and of course depends on a producer-consumer model being viable for your application. And again you may feel this is too much overhead if duplicates are going to be very rare. |
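[Moderator note] The bucketing scheme above can be sketched with standard threads and queues. Everything here is illustrative, not LightSpeed API: the string keys stand in for records, and the per-bucket dicts simulate "create the record":

```python
import queue
import threading

NUM_BUCKETS = 4

def worker(q, store):
    # Exactly one worker services each bucket, so the check-then-create
    # on `store` below is race-free: no other thread sees these keys.
    while True:
        key = q.get()
        if key is None:  # sentinel: shut down
            break
        if key not in store:   # "does the record exist?"
            store[key] = True  # "no -> create it" (simulated)

buckets = [queue.Queue() for _ in range(NUM_BUCKETS)]
stores = [{} for _ in range(NUM_BUCKETS)]
threads = [threading.Thread(target=worker, args=(buckets[i], stores[i]))
           for i in range(NUM_BUCKETS)]
for t in threads:
    t.start()

# Producer: route each key to a fixed bucket, so would-be duplicates
# collide in one single-threaded queue instead of in the database.
for key in ["alpha", "beta", "alpha", "gamma", "beta", "alpha"]:
    buckets[hash(key) % NUM_BUCKETS].put(key)

for q in buckets:
    q.put(None)  # one sentinel per worker
for t in threads:
    t.join()

total_created = sum(len(s) for s in stores)
assert total_created == 3  # one record per distinct key, no duplicates
```

The key design point is that the routing function (here `hash(key) % NUM_BUCKETS`) must be deterministic for a given key within the run, so two submissions of the same logical record can never land on different workers.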
|
|
Thanks Ivan, great idea. I need to do some thinking on whether or not that makes sense in our application, but I hadn't thought of that approach. We will generally submit changes that are all part of a single atomic operation, and the number of records is not huge. My thinking is that if the operation fails, we simply retry the entire operation, up to a limited number of times, before failing the whole batch. Duplicates should be rare, so if we get a collision due to a race condition it will probably work the next time we try the operation. Dave |
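[Moderator note] The bounded-retry plan Dave describes can be sketched as a small wrapper. `TransientDuplicateError` and `flaky_save` are stand-ins for a real unique-constraint failure surfacing from SaveChanges; nothing here is LightSpeed API:

```python
class TransientDuplicateError(Exception):
    """Simulates a unique-constraint violation from a lost race."""

def run_with_retry(operation, attempts=3):
    """Run `operation`; on a duplicate-key failure, retry the whole
    atomic batch up to `attempts` times, then give up and re-raise."""
    for attempt in range(attempts):
        try:
            return operation()
        except TransientDuplicateError:
            if attempt == attempts - 1:
                raise  # out of retries: fail the whole batch

calls = {"count": 0}

def flaky_save():
    # Fails on the first try (two threads raced); succeeds on the retry
    # because the row now exists and the duplicate insert isn't repeated.
    calls["count"] += 1
    if calls["count"] == 1:
        raise TransientDuplicateError("unique constraint violated")
    return "saved"

assert run_with_retry(flaky_save) == "saved"
assert calls["count"] == 2  # one collision, one successful retry
```

This matches Dave's reasoning: since duplicates are rare, a collision on attempt one almost always succeeds on attempt two, because the losing writer can now see the record the winner created.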
|