Cache Cluster
A cache cluster is a set of interconnected server nodes that behaves as a single cache unit from the outside. Cache clusters are usually preferred in scenarios where performance and scalability are needed. Adding more server nodes to the cluster provides more storage space and higher availability of cached data for large applications.
Fault Tolerance: Due to their interconnected nature, cache clusters can tolerate the runtime failure of any node in the cluster. In cluster topologies where a replica of each server node is maintained on another node, if a node fails, its data is served from the replica, which provides high fault tolerance and availability without affecting any client request.
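As a minimal illustration (not NCache's actual implementation), the Java sketch below assumes each partition has a hypothetical primary owner and replica owner; when the primary is down, the lookup falls back to the replica. The node addresses and the `Owners`/`ownerFor` names are invented for the example.

```java
import java.util.Map;
import java.util.Set;

// Illustrative sketch: each partition has a primary owner and a replica owner.
// When the primary is down, reads are served from the replica.
public class ReplicaFailover {
    record Owners(String primary, String replica) {}

    static String ownerFor(int partition, Map<Integer, Owners> layout, Set<String> liveNodes) {
        Owners o = layout.get(partition);
        if (liveNodes.contains(o.primary())) return o.primary();   // normal case
        if (liveNodes.contains(o.replica())) return o.replica();   // primary failed: use replica
        throw new IllegalStateException("Partition " + partition + " has no live owner");
    }

    public static void main(String[] args) {
        Map<Integer, Owners> layout = Map.of(
            0, new Owners("10.0.0.1:7800", "10.0.0.2:7800"),
            1, new Owners("10.0.0.2:7800", "10.0.0.1:7800"));
        // Node 10.0.0.1 has failed; partition 0 is now served by its replica.
        System.out.println(ownerFor(0, layout, Set.of("10.0.0.2:7800")));
    }
}
```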
Runtime Scalability: Cache clusters are horizontally scalable in terms of server nodes. Server nodes can be added or removed at runtime whenever needed, without any downtime or loss of data.
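A rough sketch of the idea, using invented node names and a simple hash-modulo placement rather than NCache's real distribution map: adding a node only changes which node owns a key, so the cluster keeps serving clients while data is rebalanced.

```java
import java.util.List;

// Illustrative sketch: keys are mapped to nodes by hashing, so when a node is
// added at runtime only the ownership map changes; clients keep using the cache.
public class RuntimeScaling {
    static String ownerOf(String key, List<String> nodes) {
        int bucket = Math.floorMod(key.hashCode(), nodes.size());
        return nodes.get(bucket);
    }

    public static void main(String[] args) {
        List<String> twoNodes   = List.of("NodeA", "NodeB");
        List<String> threeNodes = List.of("NodeA", "NodeB", "NodeC");
        for (String key : List.of("customer:42", "order:7", "product:19")) {
            System.out.printf("%s -> %s, then %s after adding NodeC%n",
                key, ownerOf(key, twoNodes), ownerOf(key, threeNodes));
        }
    }
}
```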
Single Unit View to Clients: Although a cache cluster consists of many peer-to-peer server nodes, the cache client views the cache as a single unit. Whether a server node is removed from or added to the cluster, the client deals with the cache in the same way; connections with other running nodes are established at runtime.
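The sketch below is only illustrative (the addresses and port 9800 are example values): the client is given a cache name and a list of candidate servers, connects to whichever node answers, and leaves cluster membership entirely to the servers.

```java
import java.io.IOException;
import java.net.InetSocketAddress;
import java.net.Socket;
import java.util.List;

// Illustrative sketch: the client treats the cluster as one logical cache and
// simply connects to any reachable node from its configured server list.
public class SingleUnitView {
    static InetSocketAddress connectToAnyNode(List<InetSocketAddress> servers) {
        for (InetSocketAddress server : servers) {
            try (Socket socket = new Socket()) {
                socket.connect(server, 2_000);
                return server;               // whole cluster is reachable through this node
            } catch (IOException tryNext) {
                // this node may have been removed; fall through to the next one
            }
        }
        throw new IllegalStateException("No cache server reachable");
    }

    public static void main(String[] args) {
        List<InetSocketAddress> servers = List.of(
            new InetSocketAddress("10.0.0.1", 9800),
            new InetSocketAddress("10.0.0.2", 9800));
        System.out.println("demoCache reachable via " + connectToAnyNode(servers));
    }
}
```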
TCP Based Connection: All server nodes in a cache cluster are connected through a highly reliable TCP-based channel. A combination of IP address and port is used to form a cache cluster and distinguish it from other clusters.
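For illustration only, the snippet below opens a plain TCP connection to a peer's cluster address; the IP address and port 7800 are example values, and the real channel carries the cluster's own wire protocol.

```java
import java.io.IOException;
import java.net.InetSocketAddress;
import java.net.Socket;

// Illustrative sketch: a cluster member is identified by its IP address and
// cluster port, and members talk to each other over plain TCP connections.
public class ClusterLink {
    public static void main(String[] args) {
        InetSocketAddress peer = new InetSocketAddress("10.0.0.2", 7800);
        try (Socket socket = new Socket()) {
            socket.connect(peer, 5_000);     // 5-second connect timeout
            System.out.println("Joined cluster channel via " + peer);
        } catch (IOException e) {
            System.out.println("Peer unreachable: " + e.getMessage());
        }
    }
}
```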
Coordinator: The first, or senior-most, running node of the cluster is declared the coordinator node. The coordinator node owns the management of membership across the cluster.
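A minimal sketch of the seniority rule, using an invented membership view: the member with the earliest join time is treated as the coordinator.

```java
import java.util.Comparator;
import java.util.List;

// Illustrative sketch: the coordinator is the senior-most member, i.e. the node
// that joined the cluster first in the current membership view.
public class CoordinatorElection {
    record Member(String address, long joinedAtMillis) {}

    static Member coordinator(List<Member> view) {
        return view.stream()
                   .min(Comparator.comparingLong(Member::joinedAtMillis))
                   .orElseThrow();
    }

    public static void main(String[] args) {
        List<Member> view = List.of(
            new Member("10.0.0.3:7800", 1_700_000_300_000L),
            new Member("10.0.0.1:7800", 1_700_000_100_000L),   // joined first -> coordinator
            new Member("10.0.0.2:7800", 1_700_000_200_000L));
        System.out.println("Coordinator: " + coordinator(view).address());
    }
}
```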
Cluster Messaging: Each server in the cache cluster can send messages to any or all other servers in the cluster. Messages can be sent in broadcast, multicast, or unicast mode.
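The following sketch, with hypothetical node names and a `Mode` enum of its own, only shows how the same message can be addressed to one node, a subset of nodes, or every node.

```java
import java.util.List;
import java.util.Set;

// Illustrative sketch: the same cluster message can be addressed to a single
// node (unicast), a subset of nodes (multicast), or every node (broadcast).
public class ClusterMessaging {
    enum Mode { UNICAST, MULTICAST, BROADCAST }

    static List<String> targets(Mode mode, List<String> members, Set<String> subset) {
        return switch (mode) {
            case UNICAST   -> List.of(members.get(0));                   // one chosen destination
            case MULTICAST -> members.stream().filter(subset::contains).toList();
            case BROADCAST -> members;                                   // every cluster member
        };
    }

    public static void main(String[] args) {
        List<String> members = List.of("NodeA", "NodeB", "NodeC");
        System.out.println(targets(Mode.BROADCAST, members, Set.of()));
        System.out.println(targets(Mode.MULTICAST, members, Set.of("NodeB", "NodeC")));
        System.out.println(targets(Mode.UNICAST, members, Set.of()));
    }
}
```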
Data Distribution Strategy: Data distribution in a cache cluster is done according to the specified topology of the cluster. For example, in a partitioned cache, data is distributed among the partitions.
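As a simplified example (a fixed partition count and hash-modulo mapping, not NCache's actual distribution map), each key hashes to a partition and each partition is owned by one node.

```java
import java.util.List;
import java.util.Map;
import java.util.TreeMap;

// Illustrative sketch: in a partitioned topology every key is hashed into a
// partition, and each partition is owned by exactly one server node.
public class PartitionedDistribution {
    static final int PARTITIONS = 4;

    static int partitionOf(String key) {
        return Math.floorMod(key.hashCode(), PARTITIONS);
    }

    public static void main(String[] args) {
        // Partition-to-node assignment; recomputed when membership changes.
        Map<Integer, String> owners = Map.of(0, "NodeA", 1, "NodeB", 2, "NodeA", 3, "NodeB");
        Map<String, String> placement = new TreeMap<>();
        for (String key : List.of("customer:42", "order:7", "product:19", "cart:3")) {
            placement.put(key, owners.get(partitionOf(key)));
        }
        placement.forEach((key, node) -> System.out.println(key + " stored on " + node));
    }
}
```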
Retries on Connection Breakage: A cache cluster can tolerate connection breakages for short periods of time. If a connection between server nodes breaks, reconnection attempts are made for the configured amount of time.
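A sketch of the retry idea, with a made-up peer address, timeouts, and retry window: reconnection is attempted repeatedly until either the channel is restored or the configured window expires.

```java
import java.io.IOException;
import java.net.InetSocketAddress;
import java.net.Socket;
import java.time.Duration;
import java.time.Instant;

// Illustrative sketch: on a connection breakage the node keeps retrying for a
// configured window before treating the peer as gone.
public class ConnectionRetry {
    static boolean reconnect(InetSocketAddress peer, Duration retryWindow) throws InterruptedException {
        Instant deadline = Instant.now().plus(retryWindow);
        while (Instant.now().isBefore(deadline)) {
            try (Socket socket = new Socket()) {
                socket.connect(peer, 2_000);     // per-attempt connect timeout
                return true;                     // channel restored
            } catch (IOException retryLater) {
                Thread.sleep(1_000);             // back off before the next attempt
            }
        }
        return false;                            // window expired: peer is considered gone
    }

    public static void main(String[] args) throws InterruptedException {
        boolean ok = reconnect(new InetSocketAddress("10.0.0.2", 7800), Duration.ofSeconds(10));
        System.out.println(ok ? "Reconnected" : "Peer considered down");
    }
}
```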
Leave Detection: A cache cluster is flexible in terms of adding and removing server nodes. Through a heartbeat mechanism, it detects at runtime when a node leaves the cluster, without affecting cache clients.
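A minimal heartbeat sketch, with an invented `HeartbeatMonitor` class and a 2-second timeout chosen only for the demo: a peer that stays silent past the timeout is treated as having left the cluster.

```java
import java.time.Duration;
import java.time.Instant;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Illustrative sketch: every member records the last heartbeat received from
// each peer; a peer silent past the timeout is treated as having left.
public class HeartbeatMonitor {
    private final Map<String, Instant> lastHeartbeat = new ConcurrentHashMap<>();
    private final Duration timeout;

    HeartbeatMonitor(Duration timeout) { this.timeout = timeout; }

    void onHeartbeat(String node) { lastHeartbeat.put(node, Instant.now()); }

    boolean hasLeft(String node) {
        Instant seen = lastHeartbeat.get(node);
        return seen == null || seen.plus(timeout).isBefore(Instant.now());
    }

    public static void main(String[] args) throws InterruptedException {
        HeartbeatMonitor monitor = new HeartbeatMonitor(Duration.ofSeconds(2));
        monitor.onHeartbeat("NodeB");
        Thread.sleep(3_000);                                            // NodeB stops sending heartbeats
        System.out.println("NodeB left: " + monitor.hasLeft("NodeB"));  // true
    }
}
```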
Message Nagling: Message nagling is used to get the best performance over TCP connections. Rather than making a network trip for every single message, the cluster combines multiple messages together. This option is configurable through the NCache service exe configuration.
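The sketch below illustrates the batching idea only; the `MessageNagler` class and batch size are invented, and the real nagling behavior is controlled through the NCache service configuration rather than application code.

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative sketch of nagling: small messages are queued and sent as one
// batch, saving a network round trip per message.
public class MessageNagler {
    private final List<String> pending = new ArrayList<>();
    private final int batchSize;

    MessageNagler(int batchSize) { this.batchSize = batchSize; }

    void send(String message) {
        pending.add(message);
        if (pending.size() >= batchSize) flush();   // one trip for the whole batch
    }

    void flush() {
        if (pending.isEmpty()) return;
        System.out.println("Sending batch of " + pending.size() + ": " + pending);
        pending.clear();
    }

    public static void main(String[] args) {
        MessageNagler nagler = new MessageNagler(3);
        nagler.send("update key1");
        nagler.send("update key2");
        nagler.send("update key3");   // third message triggers a single combined send
        nagler.flush();               // flush anything still pending
    }
}
```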
See Also
Cache Topologies
Local Cache
Cache Client
Client Cache
Bridge for WAN Replication