Businesses today develop high-traffic ASP.NET web applications that serve tens of thousands of concurrent users. To handle this kind of load, multiple application servers are deployed in a load-balanced environment. In such a highly concurrent environment, multiple users often try to access and modify the same data, triggering a race condition. A race condition occurs when two or more users try to access and change the same shared data at the same time but end up doing it in the wrong order. This creates a high risk of losing data integrity and consistency. This is where a distributed locking mechanism comes in very handy for achieving data consistency.
Distributed Locking for Data Consistency
NCache provides a distributed locking mechanism in .NET/C# that allows you to lock selected cache items during such concurrent updates. This helps ensure that the correct update order is always maintained. NCache is a distributed cache for .NET that helps your applications handle extreme transaction loads without your database becoming a bottleneck.
But before going into the details of distributed locking, you first need to know that all operations within NCache are themselves thread-safe. NCache operations also avoid race conditions[1] when updating multiple copies of the same data within the cache cluster. Multiple copies of the data occur due to data replication and NCache ensures that all copies are updated in the same correct order, thereby avoiding any race conditions.
With that cleared up, consider the following code to understand how, without a distributed locking mechanism, data integrity can be violated:
```csharp
// BANK WITHDRAWAL APPLICATION
// Fetch BankAccount object from NCache
BankAccount account = cache.Get("Key") as BankAccount; // balance = 30,000

Money withdrawAmount = 15000;

if (account != null && account.IsActive)
{
    // Withdraw money and reduce the balance
    account.Balance -= withdrawAmount;

    // Update cache with new balance = 15,000
    cache.Insert("Key", account);
}
```
Now consider a second application running the following deposit code in parallel:
```csharp
// BANK DEPOSIT APPLICATION
// Fetch BankAccount object from NCache
BankAccount account = cache.Get("Key") as BankAccount; // balance = 30,000

Money depositAmount = 5000;

if (account != null && account.IsActive)
{
    // Deposit money and increment the balance
    account.Balance += depositAmount;

    // Update cache with new balance = 35,000
    cache.Insert("Key", account);
}
```
- Two users simultaneously access the same Bank Account with balance = 30,000
- One user withdraws 15,000 whereas the other user deposits 5,000.
- If done correctly, the end balance should be 20,000.
- If a race condition occurs and is not handled, the balance will be either 15,000 or 35,000, as the code above shows. Here is how this race condition occurs:
- Time t1: User 1 fetches Bank Account with balance = 30,000
- Time t2: User 2 fetches Bank Account with balance = 30,000
- Time t3: User 1 withdraws 15,000 and updates Bank Account balance = 15,000
- Time t4: User 2 deposits 5,000 and updates Bank Account balance = 35,000
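The interleaving above can be reproduced with a minimal sketch in plain C# (no cache involved; the Account class here is hypothetical and stands in for the shared cached object):

```csharp
class Account { public int Balance; }

class RaceConditionDemo
{
    static void Main()
    {
        var shared = new Account { Balance = 30000 };

        // t1, t2: both users read the same snapshot of the account
        int user1Balance = shared.Balance; // 30,000
        int user2Balance = shared.Balance; // 30,000

        // t3: user 1 withdraws 15,000 and writes back
        shared.Balance = user1Balance - 15000; // 15,000

        // t4: user 2 deposits 5,000 and writes back, overwriting user 1
        shared.Balance = user2Balance + 5000; // 35,000

        // Correct result would have been 20,000; the withdrawal is lost
        System.Console.WriteLine(shared.Balance); // prints 35000
    }
}
```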
Either outcome would be disastrous for the bank. To maintain data consistency in such cases, NCache acts as a distributed lock manager[2] and provides you with two types of locking:
- Optimistic Locking (Cache Item Versions) [3]
- Pessimistic Locking (Exclusive Locking)
1. Optimistic Locking (Item Versions)
In optimistic locking, NCache uses cache item versioning. On the server side, every cached object has a version number associated with it, which is incremented on every update. When you fetch a CacheItem object from NCache, it comes with a version number. When you try to update this item in the cache, NCache checks whether your version is the latest. If it is not, NCache rejects the update. This way, only one user's update succeeds and the others fail. Take a look at the following code, which handles the case presented above:
```csharp
// CacheItem encapsulates the value and its metadata.
CacheItem cacheItem = cache.GetCacheItem("Key");
BankAccount account = cacheItem.Value as BankAccount;

if (account != null && account.IsActive)
{
    // Withdraw or deposit money
    account.Balance -= withdrawAmount;
    // account.Balance += depositAmount;

    try
    {
        // Update the balance w.r.t. the item version held by the application.
        // cacheItem.Version is the item version your application holds.
        // OperationFailedException is thrown if there is a version mismatch
        // with the NCache server.
        cache.Insert("Key", account, cacheItem.Version);
    }
    catch (OperationFailedException operationExcep)
    {
        // Item has been updated by another application
        // Retry
    }
}
```
In the above example, if your cache item version is the latest, NCache performs the operation successfully. If not, an OperationFailedException is thrown with a detailed message. In this case, you should re-fetch the latest version and redo your withdrawal or deposit operation.
With optimistic locking, NCache ensures that every write to the distributed cache is consistent with the version each application holds.
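Putting the pieces together, a common pattern is to retry on a version mismatch. The sketch below assumes the same cache, withdrawAmount, and BankAccount from the examples above; the retry count of 3 is an arbitrary choice:

```csharp
int retries = 3;
bool updated = false;

while (!updated && retries-- > 0)
{
    // Re-fetch the item (and its current version) on every attempt
    CacheItem cacheItem = cache.GetCacheItem("Key");
    BankAccount account = cacheItem.Value as BankAccount;

    if (account == null || !account.IsActive)
        break;

    account.Balance -= withdrawAmount;

    try
    {
        // Succeeds only if our version is still the latest
        cache.Insert("Key", account, cacheItem.Version);
        updated = true;
    }
    catch (OperationFailedException)
    {
        // Another application updated the item first; loop and retry
    }
}
```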
2. Pessimistic Locking (Exclusive Locking)
The other way to ensure data consistency is to acquire an exclusive lock on the cached data. This mechanism is called pessimistic locking. It is essentially a writer lock that blocks all other users from reading or writing the locked item.
To clarify further, take a look at the following code:
```csharp
// Instance of the object used to lock and unlock cache items in NCache
LockHandle lockHandle = new LockHandle();

// Specify a time span of 10 sec for which the item remains locked;
// NCache will auto-release the lock after 10 seconds.
TimeSpan lockSpan = new TimeSpan(0, 0, 10);

// acquireLock should be true if you want to acquire the lock
bool acquireLock = true;

try
{
    // If the item fetch is successful, lockHandle will be populated
    // and is used later to unlock the cache item.
    // If the item does not exist, account will be null.
    BankAccount account = cache.Get("Key", lockSpan, ref lockHandle, acquireLock) as BankAccount;

    // Lock acquired; otherwise a LockingException is thrown
    if (account != null && account.IsActive)
    {
        // Withdraw or deposit money
        account.Balance -= withdrawAmount;
        // account.Balance += depositAmount;

        // Insert the data in the cache and release the lock simultaneously.
        // The LockHandle initially used to lock the item must be provided;
        // releaseLock should be true to release the lock, otherwise false.
        bool releaseLock = true;
        cache.Insert("Key", account, lockHandle, releaseLock);
    }
    else
    {
        // The item either does not exist or could not be cast;
        // explicitly release the lock in case of errors
        cache.Unlock("Key", lockHandle);
    }
}
catch (LockingException lockException)
{
    // Lock couldn't be acquired
    // Wait and try again
}
```
Here, we first try to obtain an exclusive lock on the cache item. If successful, we get the object along with the lock handle. If another application had already acquired the lock, a LockingException is thrown; in that case, you must retry fetching the item after a small delay.
Once the lock is acquired while fetching the item, the application can safely perform its operations, knowing that no other application can fetch or update this item as long as the lock is held. To finally update the data and release the lock, we call the Insert API with the same lock handle. This inserts the data into the cache and releases the lock in a single call. After the lock is released, the cached data becomes available to all other applications.
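The acquire-and-retry logic described above can be sketched as a loop. This assumes the same cache, withdrawAmount, and BankAccount as the previous example; the 100 ms delay and 5 attempts are arbitrary choices:

```csharp
LockHandle lockHandle = new LockHandle();
TimeSpan lockSpan = new TimeSpan(0, 0, 10);
BankAccount account = null;

for (int attempt = 0; attempt < 5; attempt++)
{
    try
    {
        // Try to fetch the item and acquire an exclusive lock on it
        account = cache.Get("Key", lockSpan, ref lockHandle, true) as BankAccount;
        break; // lock acquired
    }
    catch (LockingException)
    {
        // Another application holds the lock; wait briefly and retry
        System.Threading.Thread.Sleep(100);
    }
}

if (account != null && account.IsActive)
{
    account.Balance -= withdrawAmount;

    // Update the item and release the lock in one call
    cache.Insert("Key", account, lockHandle, true);
}
```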
Just remember that you should always acquire locks with a timeout. By default, if no timeout is specified, NCache locks the item for an indefinite amount of time. If the application then crashes without releasing the lock, the item remains locked forever. As a workaround you could forcefully release the lock, but this practice is ill-advised.
Failover Support in Distributed Locking
Since NCache is an in-memory distributed cache, it also provides complete failover support so that there is no data loss. In case of a server failure, your client applications keep working seamlessly. Likewise, your locks are replicated and maintained by the replica nodes. If a node fails while one of your applications holds a lock, the lock is automatically propagated to a new node along with its specified properties, e.g., lock expiration.
Conclusion
So which locking mechanism is best for you, optimistic or pessimistic? It depends on your use case and what you want to achieve. Optimistic locking offers better performance than pessimistic locking, especially when your applications are read-intensive. Pessimistic locking, on the other hand, is safer from a data consistency perspective. Choose your locking mechanism carefully. For more details, head over to the website. If you have any questions, visit the support page or post a question on Stack Overflow or the Alachisoft forum.
Sources
[1] http://searchstorage.techtarget.com/definition/race-condition
[2] https://en.wikipedia.org/wiki/Distributed_lock_manager
[3] http://stackoverflow.com/questions/129329/optimistic-vs-pessimistic-locking