Replicated Cache Topology
In a cluster where every server node holds the same copy of the data, the cluster is highly available: it can survive node failures without losing data. For this purpose, NCache provides the Replicated Topology, which ensures that user data is not lost even if multiple servers fail at the same time. In this topology, the cluster can have multiple servers, and every server holds the same copy of the data, so each server is an exact replica of the others. Hence, the simultaneous failure of multiple servers does not cause any data loss.
Note
This feature is also available in NCache Professional.
Synchronous Replication
Whenever a client performs a write operation (add, update, or remove), the operation is broadcast across the cluster and replicated on all cache servers before control returns to the client. The server that receives the operation from the client is responsible for broadcasting it. During this process, a sequence token is obtained from the coordinator server and attached to the operation, so that the operation is applied on all servers in the same order and data consistency is preserved.
If a broadcast write operation fails on any cache server, the failure is also broadcast to all cache servers so that they remove the affected data. This keeps the cluster consistent: if a data item exists in the cache, every server holds the same copy of it.
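The flow above can be illustrated with a small, self-contained sketch in plain Java. This is not the NCache implementation or its API; ReplicaNode, ReplicatedCluster, and the single-process setup are hypothetical simplifications. The receiving server takes a sequence token from the coordinator, applies the write on every node in token order, and, if any node fails, broadcasts a removal so that no node keeps a partially replicated entry.

```java
import java.util.*;
import java.util.concurrent.atomic.AtomicLong;

// Hypothetical model of synchronous replication with a coordinator-issued sequence token.
class ReplicaNode {
    final String name;
    final Map<String, String> store = new TreeMap<>();
    ReplicaNode(String name) { this.name = name; }

    // Apply a write; a node could throw here to simulate a local failure.
    void apply(long sequenceToken, String key, String value) { store.put(key, value); }
    void remove(String key) { store.remove(key); }
}

class ReplicatedCluster {
    private final List<ReplicaNode> nodes;
    private final AtomicLong sequence = new AtomicLong(); // owned by the coordinator

    ReplicatedCluster(List<ReplicaNode> nodes) { this.nodes = nodes; }

    // Synchronous write: control returns to the caller only after every node has applied
    // the operation, or after a compensating removal when any node failed.
    boolean write(String key, String value) {
        long token = sequence.incrementAndGet();      // sequence token from the coordinator
        try {
            for (ReplicaNode node : nodes) {          // broadcast in token order
                node.apply(token, key, value);
            }
            return true;
        } catch (RuntimeException failureOnAnyNode) {
            for (ReplicaNode node : nodes) {          // broadcast the failure: undo everywhere
                node.remove(key);
            }
            return false;
        }
    }
}

public class SyncReplicationDemo {
    public static void main(String[] args) {
        List<ReplicaNode> nodes =
                List.of(new ReplicaNode("S1"), new ReplicaNode("S2"), new ReplicaNode("S3"));
        ReplicatedCluster cluster = new ReplicatedCluster(nodes);
        cluster.write("customer:1001", "Alice");
        nodes.forEach(n -> System.out.println(n.name + " -> " + n.store)); // identical copies
    }
}
```

In the real cluster the token request and the broadcasts travel over the network between independent server processes; the sketch keeps everything in one process only to show the ordering and the compensating removal.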
Because replication is synchronous, this topology does not scale for write operations: the more servers in the cluster, the longer it takes to replicate the data to all cache servers before control returns to the writer application. It is recommended to limit the cluster to 3 servers if you want to avoid any degradation in write performance.
Role of the Coordinator Server
The coordinator server (the senior-most server node) performs multiple tasks such as state transfer, write-behind operations, and data invalidation (for example, expirations and dependencies). Once it decides to remove items from the cache, it asks all other nodes to remove these items from their cache stores as well. When the coordinator server leaves the cluster, the next senior-most server becomes the coordinator and resumes its responsibilities.
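As a rough illustration of the coordinator's invalidation role (again plain Java with hypothetical names, not NCache code), the sketch below lets the senior-most node decide which keys have expired and then instruct every node, including itself, to drop them; when the coordinator leaves, the next senior-most node takes over the same sweep.

```java
import java.util.*;
import java.util.stream.Collectors;

// Hypothetical sketch: the senior-most node coordinates expirations for the whole cluster.
class CacheNode {
    final String name;
    final int seniority;                                     // lower value = joined earlier
    final Map<String, Long> expiryByKey = new HashMap<>();   // key -> absolute expiry time (ms)
    CacheNode(String name, int seniority) { this.name = name; this.seniority = seniority; }

    void put(String key, long ttlMillis) { expiryByKey.put(key, System.currentTimeMillis() + ttlMillis); }
    void removeLocally(String key) { expiryByKey.remove(key); }
}

public class CoordinatorDemo {
    // The coordinator is simply the most senior node still present in the cluster.
    static CacheNode coordinator(List<CacheNode> cluster) {
        return cluster.stream().min(Comparator.comparingInt(n -> n.seniority)).orElseThrow();
    }

    // The coordinator decides which items expired and asks every node to drop them.
    static void runExpirationSweep(List<CacheNode> cluster) {
        CacheNode coord = coordinator(cluster);
        long now = System.currentTimeMillis();
        List<String> expired = coord.expiryByKey.entrySet().stream()
                .filter(e -> e.getValue() <= now)
                .map(Map.Entry::getKey)
                .collect(Collectors.toList());
        for (String key : expired) {
            for (CacheNode node : cluster) node.removeLocally(key);  // broadcast the removal
        }
        System.out.println(coord.name + " removed expired keys: " + expired);
    }

    public static void main(String[] args) throws InterruptedException {
        List<CacheNode> cluster = new ArrayList<>(
                List.of(new CacheNode("S1", 1), new CacheNode("S2", 2), new CacheNode("S3", 3)));
        for (CacheNode n : cluster) n.put("session:42", 10);    // expires almost immediately
        Thread.sleep(20);
        runExpirationSweep(cluster);                            // S1 coordinates the removal

        cluster.remove(0);                                      // S1 leaves the cluster
        for (CacheNode n : cluster) n.put("report:7", 10);
        Thread.sleep(20);
        runExpirationSweep(cluster);                            // S2, next senior-most, takes over
    }
}
```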
Fully Scalable Read Operations
Since all servers hold the same data and clients are distributed among all cache servers, every server can serve the same data to its clients. More servers in the cluster means more read requests can be served simultaneously.
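The following short sketch (hypothetical names, not the NCache client API) shows why reads scale: every server holds the full data set, so each client's reads are answered entirely by the one server it is connected to, and adding servers adds read capacity.

```java
import java.util.*;

// Hypothetical sketch: every server holds the full data set, so any server can answer any read.
public class ScalableReadsDemo {
    public static void main(String[] args) {
        Map<String, String> fullCopy = Map.of("p:1", "Laptop", "p:2", "Phone", "p:3", "Tablet");
        List<Map<String, String>> servers = List.of(
                new HashMap<>(fullCopy), new HashMap<>(fullCopy), new HashMap<>(fullCopy)); // three replicas

        // Clients are spread across the servers; each read is served locally by the connected server.
        String[] clients = {"clientA", "clientB", "clientC", "clientD"};
        for (int i = 0; i < clients.length; i++) {
            int serverIndex = i % servers.size();
            Map<String, String> connectedServer = servers.get(serverIndex);
            System.out.println(clients[i] + " reads p:2 from server S" + serverIndex
                    + " -> " + connectedServer.get("p:2"));
        }
    }
}
```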
Connection Load Balancing
The Replicated Topology has a special feature that auto-balances client connections across the servers to share the client load among cache servers. When a client connects to a server, that server verifies whether all other server nodes have the same number of clients. If another server has fewer clients, it gracefully rejects the connection request and redirects the client to that server. This way, all servers have roughly the same number of clients, and no server is overburdened with more clients than the other cache servers.
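The balancing decision can be sketched as follows. The code is a simplification with hypothetical names; the real servers exchange connection counts over the cluster channel. A server accepts a new client only if no other server currently has fewer clients; otherwise it redirects the client to the least-loaded server.

```java
import java.util.*;

// Hypothetical sketch of connection load balancing between replicated servers.
public class ConnectionBalancingDemo {
    // clientCounts[i] = number of clients currently connected to server i.
    static int acceptOrRedirect(int[] clientCounts, int contactedServer) {
        int leastLoaded = contactedServer;
        for (int i = 0; i < clientCounts.length; i++) {
            if (clientCounts[i] < clientCounts[leastLoaded]) leastLoaded = i;
        }
        // If another server has fewer clients, the contacted server redirects the client there.
        clientCounts[leastLoaded]++;
        return leastLoaded;
    }

    public static void main(String[] args) {
        int[] clientCounts = new int[3];                 // servers S0, S1, S2 start with no clients
        for (int c = 1; c <= 7; c++) {
            int connectedTo = acceptOrRedirect(clientCounts, 0); // every client contacts S0 first
            System.out.println("client" + c + " -> S" + connectedTo
                    + "  counts=" + Arrays.toString(clientCounts));
        }
    }
}
```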
Client Connectivity
In the Replicated Topology, a client is connected to only one server of the cluster at a time. If the connected server goes down, the client automatically connects to another server in the cluster without any human intervention.
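Client-side failover can be sketched like this (the tryConnect helper and the server addresses are hypothetical, not the NCache client API): the client keeps the configured server list and, when its current connection drops, silently moves to the next reachable server.

```java
import java.util.*;

// Hypothetical sketch of client-side failover: one active connection, automatic reconnect.
public class ClientFailoverDemo {
    static final Set<String> DOWN_SERVERS = Set.of("20.200.20.11");   // simulate a failed node

    // Pretend connection attempt; a real client would open a TCP connection here.
    static boolean tryConnect(String server) { return !DOWN_SERVERS.contains(server); }

    // Walk the configured server list until one connection succeeds.
    static String connectToAny(List<String> servers) {
        for (String server : servers) {
            if (tryConnect(server)) return server;
        }
        throw new IllegalStateException("no cache server reachable");
    }

    public static void main(String[] args) {
        List<String> servers = List.of("20.200.20.11", "20.200.20.12", "20.200.20.13");
        String connected = connectToAny(servers);        // first server is down, client lands on the next one
        System.out.println("connected to " + connected);
    }
}
```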
State Transfer
In the Replicated Topology, a state transfer is triggered both when a node joins and when a node leaves the cluster. The state transfer triggered on node leave does very little, since all remaining nodes already have the same data. On node join, however, the newly joined node asks the coordinator server for all of the cached data in order to synchronize itself with the rest of the cluster.
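A minimal sketch of the join-time state transfer, under the same hypothetical-names caveat: the new node pulls the entire data set from the coordinator before it starts serving, while a node leaving requires no data movement because every remaining node already has the full copy.

```java
import java.util.*;

// Hypothetical sketch of state transfer in a replicated cluster.
public class StateTransferDemo {
    public static void main(String[] args) {
        // Existing cluster: every node already holds the full data set.
        Map<String, String> coordinatorStore = new HashMap<>(Map.of("k1", "v1", "k2", "v2", "k3", "v3"));
        Map<String, String> secondNodeStore = new HashMap<>(coordinatorStore);

        // Node leave: nothing to transfer; the remaining nodes already have all the data,
        // so the departed node is simply dropped from the member list.

        // Node join: the new node asks the coordinator for the complete data set
        // before it starts serving clients.
        Map<String, String> newNodeStore = new HashMap<>();
        newNodeStore.putAll(coordinatorStore);           // full-copy state transfer

        System.out.println("all nodes identical: "
                + (newNodeStore.equals(coordinatorStore) && newNodeStore.equals(secondNodeStore)));
    }
}
```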
See Also
Partitioned Topologies
Mirrored Topology
Cache Cluster
Local Cache