We can put a cache server between our application and database to make our applications faster. But that’s not enough when we need to scale our applications. Let’s see two caching patterns for better performance and how NCache implements them.
Scalability through Data Partitioning
With data partitioning, we divide large sets of data into smaller ones and distribute them between nodes. This way, we split reads and writes between nodes, improving the overall performance of our applications. NCache supports different caching topologies. In this context, a topology is a data storage, replication, and client connectivity strategy. There are two topologies that implement data partitioning: the Partitioned and Partition-Replica topologies. In these two topologies, NCache divides data into buckets and places those buckets in the nodes of our cluster.
NCache uses 1000 buckets and divides them equally among the nodes in a cluster. For example, if we start a cache cluster with a single node, NCache assigns all 1000 buckets to that node. If we add a second node, NCache assigns 500 buckets to each of the two nodes. And if we remove a node, NCache redistributes its buckets among the remaining nodes.
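The bucket arithmetic can be sketched independently of NCache: hash each key to one of 1000 buckets, then split the bucket range evenly across nodes. This is only an illustration of the idea; NCache's actual hashing function and bucket map are internal to the product.

```csharp
using System;

// Illustrative only: maps keys to 1000 buckets and buckets to nodes,
// mimicking how a partitioned cache splits data. Not NCache's real algorithm.
public static class BucketMap
{
    public const int TotalBuckets = 1000;

    // Hash a key to one of the 1000 buckets.
    public static int BucketFor(string key) =>
        (key.GetHashCode() & int.MaxValue) % TotalBuckets;

    // Evenly assign bucket ranges to nodes: with 2 nodes,
    // buckets 0-499 go to node 0 and buckets 500-999 to node 1.
    public static int NodeFor(int bucket, int nodeCount) =>
        bucket / (int)Math.Ceiling((double)TotalBuckets / nodeCount);
}
```

With two nodes, each node owns 500 buckets; adding a third rebalances to roughly 334/334/332, which mirrors the even redistribution described above.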
Since NCache divides our data into buckets spread across nodes, a cache client connects to all nodes but performs each read and write directly on the node that contains the item. If a node becomes unavailable, the client reroutes requests to the active nodes so our operations still complete. NCache distributes buckets so that every node holds roughly the same amount of data. This way, we not only split reads and writes between nodes but also increase the storage capacity of our cluster with every node we add.
Thanks to data partitioning, the Partitioned and Partition-Replica topologies scale both transaction load and storage capacity. Of course, these are only two of the supported topologies. NCache offers other topologies with different data storage and replication strategies; for example, some are better suited to read-heavy or write-heavy applications.
Caching Strategies
With data partitioning, we improve the availability and performance of our applications since we can cache more items in our cluster than in a single server. Also, we can improve the performance of our application by choosing how we populate our cache. There are two strategies to populate our cache: cache-aside and Read-Through/Write-Through.
With the cache-aside strategy, our cache server sits next to our database. If the cache doesn't contain an item, our application reads it from the database and stores it in the cache. With this strategy, the cache server never interacts with the database directly. Cache-aside is probably what comes to mind first when we think about caching.
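The cache-aside flow can be sketched with any cache client. Here, a plain dictionary stands in for the cache server and a delegate stands in for the database read; both are stand-ins for illustration, not NCache APIs.

```csharp
using System;
using System.Collections.Generic;

// Cache-aside: the application owns the lookup logic.
// The cache never talks to the database itself.
public static class CacheAside
{
    // Stand-in for a remote cache server.
    private static readonly Dictionary<string, object> Cache = new();

    public static T Get<T>(string key, Func<T> readFromDatabase)
    {
        // 1. Try the cache first.
        if (Cache.TryGetValue(key, out var cached))
            return (T)cached;

        // 2. On a miss, the *application* reads the database...
        var value = readFromDatabase();

        // 3. ...and stores the result for the next reader.
        Cache[key] = value;
        return value;
    }
}
```

A second call with the same key returns the cached value and skips the database entirely.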
Unlike the cache-aside strategy, with the Read-Through/Write-Through strategies, our cache works like the main source of data. Here, our cache reads from and writes data to the database. Therefore, these strategies work better with reference data that we read frequently and change periodically, and with database rows we can easily map to cache items.
With Read-Through/Write-Through, we move some of the data-access code from our application to the cache, making our application code simpler and smaller. NCache supports the Read-Through and Write-Through caching strategies.
Read-Through Caching
NCache uses a custom Read-Through provider to call the underlying database if there’s a cache miss. Also, we can force NCache to always read the database even if we don’t have a cache miss. Our Read-Through provider should implement the IReadThruProvider interface. It contains methods like LoadFromSource and LoadDataTypeFromSource.
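Conceptually, Read-Through differs from cache-aside in that the loader lives inside the cache rather than in the application. The IReadThruProvider interface itself ships with the NCache SDK; the following NCache-independent sketch only mirrors the idea, with a delegate playing the role of LoadFromSource.

```csharp
using System;
using System.Collections.Generic;

// Read-Through sketch: the cache owns the "load from source" step.
// Callers just ask for a key; misses are filled transparently.
public class ReadThroughCache<TValue>
{
    private readonly Dictionary<string, TValue> _store = new();
    private readonly Func<string, TValue> _loadFromSource; // stands in for LoadFromSource

    public ReadThroughCache(Func<string, TValue> loadFromSource) =>
        _loadFromSource = loadFromSource;

    public TValue Get(string key)
    {
        if (_store.TryGetValue(key, out var value))
            return value;              // cache hit

        value = _loadFromSource(key);  // cache miss: the cache reads the database
        _store[key] = value;
        return value;
    }
}
```

Notice the application code shrinks to a single Get call; the data-access logic moved into the cache component, which is exactly the simplification Read-Through promises.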
Once we have a Read-Through provider deployed to our cache server, either via the NCache Manager or PowerShell scripts, we can use it from our client applications by passing a ReadThruOptions object to the Get method, like this:

```csharp
// After having NCache up and running...
var key = "Product:123456";

var readThruOptions = new ReadThruOptions
{
    Mode = ReadMode.ReadThru
};

// Retrieve a cached item with Read-Thru enabled
var data = cache.Get<Product>(key, readThruOptions);

// Do something with the cached product here...
```
Write-Through Caching
On the other hand, with the Write-Through strategy, NCache updates the cache first and only then the database. NCache can update the database either synchronously or asynchronously; NCache calls asynchronous Write-Through updates Write-Behind.
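The difference between the two write modes can be sketched with a queue standing in for the background writer. This is illustrative only; NCache's actual Write-Behind queueing and throttling are internal to the product.

```csharp
using System;
using System.Collections.Generic;

// Write-Through vs Write-Behind: both update the cache first;
// they differ in *when* the database write happens.
public class WriteBehindCache
{
    public readonly Dictionary<string, string> Cache = new();
    public readonly Dictionary<string, string> Database = new(); // stand-in for the real DB
    private readonly Queue<KeyValuePair<string, string>> _pending = new();

    // Write-Through: cache and database are both updated before returning.
    public void InsertWriteThru(string key, string value)
    {
        Cache[key] = value;
        Database[key] = value;
    }

    // Write-Behind: the cache is updated now; the database write is queued.
    public void InsertWriteBehind(string key, string value)
    {
        Cache[key] = value;
        _pending.Enqueue(new KeyValuePair<string, string>(key, value));
    }

    // A background worker would drain this queue, possibly throttled.
    public void DrainPendingWrites()
    {
        while (_pending.Count > 0)
        {
            var op = _pending.Dequeue();
            Database[op.Key] = op.Value;
        }
    }
}
```

With Write-Behind, the caller returns as soon as the cache is updated; the database catches up when the queue drains, which is why writes feel faster at the cost of a short replication delay.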
Our Write-Through provider should implement the IWriteThruProvider interface. It contains the WriteToDataSource method, with overloads for single items, multiple items, and data structures. Our Write-Through provider should support write operations to add, delete, and update items.
Similar to deploying a Read-Through provider, we need to deploy our Write-Through provider to our cache. Once the provider is deployed, our client applications should pass a WriteThruOptions object when inserting items into the cache, like this:

```csharp
// After having NCache up and running...
var product = BuildProduct();
var key = $"Product:{product.ProductId}";
var cacheItem = new CacheItem(product);

var writeThruOptions = new WriteThruOptions
{
    Mode = WriteMode.WriteThru
};

// Add an item with Write-Thru enabled
cache.Insert(key, cacheItem, writeThruOptions);
```
Read-Through and Write-Through help us improve the scalability and performance of our applications. With Read-Through, our cached items stay available because NCache can reload expired items automatically. And with Write-Behind, our application doesn't have to wait for database writes, since NCache updates the database asynchronously and can even throttle those writes, reducing the pressure on our database.
Conclusion
Those are two caching patterns for better performance and scalability: data partitioning and caching strategies, and we can use NCache to implement both. With data partitioning, we split reads and writes between nodes and increase the storage capacity of our cache; NCache's Partitioned and Partition-Replica topologies do exactly that. And with Read-Through/Write-Through, we make our cache server the main source of data, taking some pressure off our database.
To learn more details about data partitioning and Read-Through/Write-Through, check these two guides: Partitioned and Partition-Replica Topologies and Data Source Providers for cache. If you want to benefit from these two caching patterns to scale your applications, give NCache a try.