NCache is designed to deliver optimal performance for your application, and achieving this requires a caching environment that scales seamlessly and cost-effectively. Since NCache operates as an in-memory datastore, the primary constraint is the limited memory available on a single physical server, followed closely by its computational limits. Beyond basic CRUD operations, NCache supports many advanced features such as Pub/Sub, Queries, and criteria-based fetch calls. As client demand for these features grows, so does the need for processing power, which means that sooner or later your cache server will hit its maximum processing limit. When this happens, NCache doesn't leave you stranded; it offers a solution.
Linear Scalability in NCache
When your environment hits the above-mentioned limits, NCache allows you to add one or more server nodes to your cache cluster. You simply add a new physical node to the cluster through the NCache Management Center or the NCache PowerShell tool, which increases the cluster's overall memory and gives you additional resources to handle the growing volume of incoming requests.
This approach ensures Linear Scalability: the more nodes you add, the better your application performs. With NCache, adding nodes to the cluster does not introduce overhead that causes throughput to falter. In recent performance benchmarks, NCache achieved 2 million operations per second with just 5 server nodes! If that isn't a win, it's hard to say what is!
The best part is the dynamic nature of NCache clustering: you can add new nodes to the cluster without interrupting existing processes, applications, or nodes. This seamless scalability ensures uninterrupted operations and easy expansion of your caching infrastructure.
Let’s dig a little deeper and discover what features NCache brings to the scalability table.
Client Operations for Scalability
The NCache client automatically connects to every server node, and the NCache distribution map tells the client exactly which node holds the data it requires. As a result, a client operation does not travel through multiple hops and nodes; it takes one straight hop to the server that has the specific data. This simple yet smart functionality helps your environment scale.
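The idea behind single-hop routing can be illustrated with a minimal Python sketch. Everything here is hypothetical: the node addresses, bucket count, and `node_for_key` function are illustrative stand-ins, not the NCache API (NCache internally distributes keys across a fixed set of hash buckets).

```python
import zlib

NUM_BUCKETS = 4  # illustrative; the real bucket count is much larger

# Distribution map shared with every client: hash bucket -> owning server node.
distribution_map = {0: "10.0.0.1", 1: "10.0.0.2", 2: "10.0.0.1", 3: "10.0.0.2"}

def node_for_key(key: str) -> str:
    """One deterministic hash lookup tells the client which node holds the key."""
    bucket = zlib.crc32(key.encode()) % NUM_BUCKETS
    return distribution_map[bucket]
```

Because every client holds the same map, a read of a given key goes straight to its owning node in a single hop; no server has to forward the request.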
Parallelism for Scalability in Operations
NCache efficiently runs advanced operations like Queries, Bulk operations, Tags, and many others across multiple nodes. Instead of sending these operations to the nodes one at a time, NCache sends each operation to every node in parallel.
For example, when a client wants to query the data stored in the cache, it sends that query to all nodes in the cluster. Every node runs the query locally on its own dataset and returns its results to the client, which merges the results from all server nodes into a single response for the application. This parallel execution not only enhances scalability but also accelerates system performance by distributing the workload across all nodes simultaneously.
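The fan-out-and-merge pattern described above can be sketched in a few lines of Python. This is a conceptual illustration, not NCache code: the `partitions` data, `query_node`, and `parallel_query` names are all invented for the example.

```python
from concurrent.futures import ThreadPoolExecutor

# Each "node" holds a partition of the cached data.
partitions = [
    {"p1": 30, "p2": 55},   # node 1's data
    {"p3": 10, "p4": 70},   # node 2's data
    {"p5": 45},             # node 3's data
]

def query_node(data: dict, threshold: int) -> list:
    """Run the query locally on one node's own dataset."""
    return [k for k, v in data.items() if v > threshold]

def parallel_query(threshold: int) -> list:
    """Fan the query out to all nodes in parallel, then merge the results."""
    with ThreadPoolExecutor(max_workers=len(partitions)) as pool:
        results = list(pool.map(lambda d: query_node(d, threshold), partitions))
    return sorted(k for partial in results for k in partial)
```

Each node only scans its own partition, so adding nodes shrinks the per-node work while the client still gets one merged answer.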
Pipelining for Scaling Operations
NCache uses Pipelining to group operations into chunks that are sent over the network in a single TCP call. This technique reduces the overhead of sending multiple requests individually and waiting for their acknowledgments.
For example, if an NCache client sends 100 operations to the server individually, this results in 100 separate I/O operations. Each one requires a transition from user mode to kernel mode, which consumes a substantial amount of CPU and can degrade application performance. With client-side Pipelining, however, these operations are combined into a single I/O call to the server.
Server-side Pipelining works the same way in reverse: the server receives multiple operations in one call and sends their responses back together as well. The server also tries to generate as many responses to those incoming operations as it can in one go. So the 100 operations that would otherwise require 100 dedicated calls reach the server in one call, and their results come back in one call too. This technique helps the system scale tremendously.
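A toy Python model makes the saving concrete. The `PipelinedChannel` class below is a hypothetical stand-in for the client's network layer, assumed for illustration only: it counts round trips instead of doing real TCP I/O.

```python
class PipelinedChannel:
    """Conceptual sketch: buffer operations, then flush them in one 'network call'."""

    def __init__(self):
        self.buffer = []
        self.network_calls = 0  # round trips actually made

    def send(self, op: str) -> None:
        self.buffer.append(op)  # queued locally, not sent yet

    def flush(self) -> list:
        # All buffered operations travel together in a single call, and the
        # server's responses to the whole batch come back together as well.
        self.network_calls += 1
        responses = [f"ack:{op}" for op in self.buffer]
        self.buffer = []
        return responses
```

With this model, 100 `send` calls followed by one `flush` cost one round trip instead of 100, which is exactly the user-to-kernel-mode saving the text describes.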
Scalable Background Replication in Partition-Replica
When a client performs an update operation on a server node, NCache replicates the change to a replica node for fault tolerance and high availability. This replication occurs in the background without any client involvement, and it is done in bulk to minimize network cost. This approach enhances scalability by offloading the replication workload to the background while ensuring that your cache remains highly available. It's a win-win solution that improves both performance and reliability.
Write-behind Caching for Scalability
When a client writes data, NCache ensures that the data is written both to the cache and to the database. This is achieved through an asynchronous mechanism known as Write-behind. With Write-behind enabled, the server first writes the data to the cache and quickly returns control to the client, minimizing computational delays. In the background, NCache uses a batching system to asynchronously update the database with the cached data. This approach keeps the cache and database synchronized while preserving the high performance of the in-memory cache.
The Write-behind mechanism enhances scalability by offloading database write operations to the background, allowing your application to handle more transactions efficiently without compromising on performance.
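The write-behind flow above can be modeled with a small Python sketch. The `WriteBehindCache` class, the dict standing in for the database, and the flush interval are all hypothetical choices for illustration; this is not how NCache is implemented, only the pattern it describes.

```python
import threading
import time

class WriteBehindCache:
    """Sketch of write-behind: writes hit the in-memory cache immediately and
    are flushed to the 'database' in batches by a background thread."""

    def __init__(self, database: dict, flush_interval: float = 0.1):
        self.cache = {}
        self.database = database
        self._dirty = {}
        self._lock = threading.Lock()
        self._interval = flush_interval
        threading.Thread(target=self._flush_loop, daemon=True).start()

    def put(self, key, value) -> None:
        self.cache[key] = value        # fast in-memory write; control returns now
        with self._lock:
            self._dirty[key] = value   # marked for the next background batch

    def _flush_loop(self) -> None:
        while True:
            time.sleep(self._interval)
            with self._lock:
                batch, self._dirty = self._dirty, {}
            self.database.update(batch)  # one batched database write
```

The caller's `put` cost is a dictionary write, while the slow database I/O happens off the request path in batches, which is why write-behind lets the application absorb more transactions.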
Scalability through Object Pooling in Memory Management
In the .NET environment, when the automatic Garbage Collector (GC) runs, application activity is halted, causing pauses in in-memory data computation. These pauses take a heavy toll on your application's performance, and the more objects you allocate, the more often the GC kicks in and the greater the hit becomes.
To avoid these long GC pauses, NCache, as a native .NET cache, uses an Object Pooling technique for its own memory management. The NCache server pools objects and reuses them instead of creating new ones, which reduces the need to invoke the GC. The less often the GC runs, the more performance you get out of your application, and hence more scalability.
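The pooling idea itself is language-agnostic and can be shown in a few lines. The `ObjectPool` class below is a minimal generic sketch, not NCache's internal pool; pooling a `dict` as the reusable object is purely an assumption for the demo.

```python
class ObjectPool:
    """Minimal object pool sketch: reuse objects instead of allocating new
    ones, reducing garbage-collector pressure."""

    def __init__(self, factory, size: int):
        self._factory = factory
        self._free = [factory() for _ in range(size)]  # pre-allocate once

    def acquire(self):
        # Hand out a pooled object when one is free; allocate only as a fallback.
        return self._free.pop() if self._free else self._factory()

    def release(self, obj) -> None:
        obj.clear()              # reset state before returning to the pool
        self._free.append(obj)

pool = ObjectPool(dict, size=2)
buf = pool.acquire()
buf["item"] = "cached-value"
pool.release(buf)  # the very same object will be handed out again
```

Because hot-path objects are recycled rather than discarded, far fewer allocations reach the garbage collector, which is the mechanism behind the shorter GC pauses described above.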
Client Cache to Induce Scalability
When it comes to scalability, the Client Cache is the most important feature of NCache. It stores your clustered cache's most frequently used data on the same machine where your application runs, giving you a local cache resource that resides close to your application. This local cache serves most of your application's read requests, which inevitably reduces I/O cost. So not only do you get fast access to up-to-date data, but your application also scales up.
This scalability can be optimized further by moving from an OutProc Client Cache to an InProc Client Cache, which eliminates even the inter-process access cost on the local node.
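The read-path saving of a client cache can be sketched like this. The `ClientCache` class and the dict standing in for the clustered cache are invented for illustration; in particular, the synchronization that keeps a real Client Cache consistent with the cluster is deliberately omitted here.

```python
class ClientCache:
    """Sketch of a client-side (near) cache in front of the clustered cache:
    repeat reads are served locally, saving a network round trip each time."""

    def __init__(self, clustered_cache: dict):
        self._local = {}
        self._remote = clustered_cache
        self.remote_reads = 0  # network trips we could not avoid

    def get(self, key):
        if key in self._local:
            return self._local[key]    # local hit: no network I/O
        self.remote_reads += 1
        value = self._remote.get(key)  # miss: fetch once from the cluster...
        self._local[key] = value       # ...and keep a local copy
        return value
```

With a read-heavy workload, only the first access to each hot key touches the network; every repeat read is served from the application's own machine, which is exactly the I/O reduction the text describes.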
Conclusion
While getting the best out of your application, you can run into two major setbacks: either the computational load on your cache grows, or you hit the limits of your data storage. Both can be significantly mitigated by scaling NCache. You have full control over NCache, which is rich with features ready to bring scalability to your environment. So, what are you waiting for? Download NCache today!