Businesses today build high-traffic ASP.NET Core web applications that serve thousands of concurrent users. In a clustered architecture with load-balanced application servers, multiple clients can read and update the same cached data. In these circumstances, race conditions arise: two or more processes fetch the same shared item, modify it, and write it back, each unaware of the other's change.
Such race conditions cause data inconsistency and integrity issues, posing a real risk to applications that rely on real-time data accuracy. Fortunately, with NCache, businesses can leverage distributed locking mechanisms to ensure robust data consistency in a scalable in-memory caching environment.
Distributed Locking for Data Consistency
With NCache’s distributed locking mechanism, you can lock specific cache items during concurrent updates. This prevents race conditions, since only one process can modify an item at a time, ensuring data integrity.
NCache offers two types of locking:
- Optimistic Locking (Cache Item Versioning)
- Pessimistic Locking (Exclusive Locking)
Understanding the Need for Locking
Consider a banking application where two users access the same bank account with a balance of 30,000. One user withdraws 15,000 while the other deposits 5,000. If the race condition is not handled, the final balance may incorrectly become either 15,000 or 35,000 instead of the expected 20,000.
Breakdown of the Race Condition:
- Time t1: User 1 fetches Bank Account with balance = 30,000
- Time t2: User 2 fetches Bank Account with balance = 30,000
- Time t3: User 1 withdraws 15,000 and updates balance = 15,000
- Time t4: User 2 deposits 5,000 and updates balance = 35,000
To prevent such scenarios, NCache’s locking mechanisms ensure that only one update succeeds at a time, keeping cached data consistent under concurrency.
Optimistic Locking (Cache Item Versions)
Optimistic locking uses cache item versioning to manage concurrent updates. Each cached object has a version number, which increments upon every modification. Before updating a cache item, the application retrieves its version number and verifies it before saving any modifications. If another update has changed the version in the meantime, the update is rejected to maintain data integrity.

Figure 1: Optimistic Lock Sequence Diagram
Implementing Optimistic Locking in NCache
To implement optimistic locking, refer to the code sample below:
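NCache’s client APIs (in C#) return an item version with each read and accept it back on insert; rather than reproduce those exact signatures here, the sketch below models the same compare-and-swap idea with a minimal in-memory stand-in. `VersionedCache` and `VersionMismatchError` are hypothetical names for illustration, not the NCache API.

```python
class VersionMismatchError(Exception):
    """Raised when an update is based on a stale item version."""

class VersionedCache:
    """Hypothetical in-memory stand-in mimicking cache item
    versioning (optimistic locking)."""
    def __init__(self):
        self._store = {}  # key -> (value, version)

    def get(self, key):
        """Return the value together with its current version."""
        return self._store[key]

    def insert(self, key, value, expected_version=None):
        """Write only if the caller's version is still current."""
        current = self._store.get(key)
        if expected_version is not None and current is not None:
            if current[1] != expected_version:
                raise VersionMismatchError(key)  # someone updated first
        new_version = (current[1] + 1) if current else 1
        self._store[key] = (value, new_version)
        return new_version

# Two users read the same item, but only the first update succeeds.
cache = VersionedCache()
cache.insert("account:42", 30_000)

balance1, v1 = cache.get("account:42")   # user 1 reads version 1
balance2, v2 = cache.get("account:42")   # user 2 reads version 1

cache.insert("account:42", balance1 - 15_000, expected_version=v1)
try:
    cache.insert("account:42", balance2 + 5_000, expected_version=v2)
except VersionMismatchError:
    # User 2's version is stale: re-read (now 15,000) and retry.
    balance, v = cache.get("account:42")
    cache.insert("account:42", balance + 5_000, expected_version=v)

print(cache.get("account:42")[0])  # 20000
```

The retry-on-conflict loop is what makes optimistic locking cheap: no lock is ever held, and the rare loser simply re-reads and reapplies its change.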
Pessimistic Locking (Exclusive Locking)
Pessimistic locking prevents other users from modifying a cache item until the lock is released. This approach is beneficial in scenarios where strict control over data modification is required.

Figure 2: Pessimistic Lock Sequence Diagram
Implementing Pessimistic Locking in NCache
To implement pessimistic locking in NCache, please see the code sample below:
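NCache’s client exposes exclusive locking through lock handles (in C#); the sketch below imitates that behavior with a hypothetical in-memory stand-in, so `LockableCache`, `LockedError`, and the method names are illustrative, not the NCache API.

```python
import threading
import uuid

class LockedError(Exception):
    """Raised when an item is already exclusively locked."""

class LockableCache:
    """Hypothetical stand-in mimicking exclusive (pessimistic) locking:
    a locked item rejects other writers until the holder unlocks it."""
    def __init__(self):
        self._store = {}   # key -> value
        self._locks = {}   # key -> opaque lock handle
        self._mutex = threading.Lock()

    def lock(self, key):
        """Acquire an exclusive lock; fail if someone else holds it."""
        with self._mutex:
            if key in self._locks:
                raise LockedError(key)
            handle = str(uuid.uuid4())
            self._locks[key] = handle
            return handle

    def get(self, key):
        return self._store[key]

    def insert(self, key, value, lock_handle=None):
        """Write only if the item is unlocked or the caller holds it."""
        with self._mutex:
            held = self._locks.get(key)
            if held is not None and held != lock_handle:
                raise LockedError(key)
            self._store[key] = value

    def unlock(self, key, lock_handle):
        with self._mutex:
            if self._locks.get(key) == lock_handle:
                del self._locks[key]

cache = LockableCache()
cache.insert("account:42", 30_000)

handle = cache.lock("account:42")     # user 1 acquires the lock
try:
    cache.insert("account:42", 99)    # user 2 has no handle: rejected
except LockedError:
    pass

balance = cache.get("account:42")
cache.insert("account:42", balance - 15_000, lock_handle=handle)
cache.unlock("account:42", handle)    # release for other writers

print(cache.get("account:42"))  # 15000
```

Note the lock must always be released (NCache additionally supports lock timeouts, so a crashed client cannot hold an item forever).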
Lock during Get and Release Lock Upon Insert
Another pessimistic locking approach is to lock an item when fetching it and release the lock when updating it. This guarantees the fetched data remains unchanged until the update completes.
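This fetch-and-lock pattern can be sketched as follows, again with a hypothetical in-memory `Cache` class rather than the real NCache client: the read atomically takes the lock, and the write atomically releases it.

```python
import threading

class Cache:
    """Hypothetical stand-in: fetching an item can atomically acquire
    a per-item lock, and inserting can atomically release it."""
    def __init__(self):
        self._store, self._locks = {}, {}
        self._mutex = threading.Lock()

    def get(self, key, acquire_lock=False):
        with self._mutex:
            if acquire_lock:
                if self._locks.get(key):
                    raise RuntimeError(f"{key} is locked")
                self._locks[key] = True   # lock taken during the read
            return self._store[key]

    def insert(self, key, value, release_lock=False):
        with self._mutex:
            self._store[key] = value
            if release_lock:
                self._locks[key] = False  # lock released by the write

cache = Cache()
cache.insert("account:42", 30_000)

balance = cache.get("account:42", acquire_lock=True)  # fetch + lock
# ... no other writer can lock "account:42" here ...
cache.insert("account:42", balance - 15_000, release_lock=True)

print(cache.get("account:42"))  # 15000
```

Because the lock spans the entire read-modify-write cycle, the balance cannot change between the fetch and the update, which is exactly the window the race condition in Figure 2 exploits.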
Conclusion
By employing NCache’s distributed locking features, developers can ensure high data consistency in distributed applications. Pessimistic Locking provides strict control for critical updates, while Optimistic Locking delivers higher performance for low-conflict scenarios. Implementing these strategies ensures data integrity, even under high concurrency workloads, keeping applications reliable and scalable.