Mirror Cache Topology
In the Mirrored Topology, a cache cluster consists of at most two nodes. At any time, one node is active while the other acts as a passive node. The passive node, also known as the backup node, removes the single point of failure: when the active node goes down, the passive node takes over the active role. All read and write operations are performed on the active node, and writes are then replicated asynchronously to the passive node. Unlike the Replicated Topology, which can maintain multiple replicas, replication in this topology is limited to a single backup node.
Note
This feature is also available in NCache Professional.
Mirrored cache clusters are suitable for caching small amounts of data where the user load is not expected to grow. The topology is not scalable for reads or writes, since all client operations are performed on the active node only. It does, however, provide a degree of high availability through replication to the backup node: when the active node leaves the cluster, the passive node automatically takes over as the active node, and all client applications start communicating with the new active node.
Active Node Selection in Mirrored Cache
The senior-most node of the cluster (the node that joined first) is considered the active node. You can also designate the active node yourself, whether the cluster is stopped or running; however, changing the active node of a running cluster requires a cluster restart for the change to take effect. If the active node leaves the cluster, the passive node automatically becomes the active node. When the previously active node comes back online, it rejoins the cluster as the passive node.
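The selection rules above can be modeled as a small simulation. This is a minimal, self-contained Python sketch of the join/leave behavior (the `MirroredCluster` class and its methods are illustrative names, not part of any NCache API): the senior-most node by join order is active, a rejoining node is always junior, and removing the active node promotes the passive one.

```python
from collections import deque


class MirroredCluster:
    """Toy model of active-node selection in a two-node mirrored cluster.

    Illustrative only; not the NCache implementation. Nodes are kept in
    join order, so index 0 is the senior-most (active) node.
    """

    def __init__(self):
        self.nodes = deque()

    def join(self, node):
        if len(self.nodes) >= 2:
            raise RuntimeError("a mirrored cluster cannot exceed two nodes")
        self.nodes.append(node)  # a (re)joining node is always the junior node

    def leave(self, node):
        self.nodes.remove(node)  # if the active node leaves, the passive is promoted

    @property
    def active(self):
        return self.nodes[0] if self.nodes else None

    @property
    def passive(self):
        return self.nodes[1] if len(self.nodes) > 1 else None
```

For example, if `NodeA` joins before `NodeB`, `NodeA` is active; when `NodeA` leaves, `NodeB` is promoted, and a returning `NodeA` rejoins as the passive node.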
Asynchronous Replication
Clients connect directly to the active node only, while the passive node maintains a backup of the active node's data. The active node is responsible for keeping this backup up to date: every write operation performed on the active node is placed in a background replication queue, and dedicated background threads pick operations from this queue in chunks and replicate them to the passive node. The client receives a response for a write operation as soon as it completes successfully on the active node; replication to the passive node happens afterward.
Note
The client application does not experience any degradation in performance, since operations are replicated asynchronously from the active node to the passive node.
Because replication is asynchronous, there is a window for data loss: if the active node goes down before the queued operations have been replicated, those operations are lost.
Role of Coordinator Server
The active node of the cache cluster acts as the coordinator server, responsible for tasks such as state transfer, write-behind operations, and data invalidations (expirations and dependencies). When the coordinator decides to remove items from the cache, it instructs the passive node to remove them from its cache store as well. If the coordinator server (active node) leaves the cache cluster, the passive node becomes the coordinator and takes over these responsibilities.
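As an example of the coordinator's invalidation duty, the sketch below models TTL-based expiration: the coordinator expires items from its own store and then tells the passive node to drop them too. This is a minimal Python illustration with assumed names (`Coordinator`, `evict_expired`, a dict as the passive store), not the NCache expiration mechanism.

```python
import time


class Coordinator:
    """Toy sketch of the coordinator's invalidation role (illustrative only).

    Items carry an absolute expiry time. When the coordinator evicts an
    expired item locally, it also removes it from the passive node's store.
    """

    def __init__(self, passive_store):
        self.store = {}                  # key -> (value, expires_at)
        self.passive_store = passive_store

    def put(self, key, value, ttl):
        expires_at = time.monotonic() + ttl
        self.store[key] = (value, expires_at)
        self.passive_store[key] = value

    def evict_expired(self):
        now = time.monotonic()
        expired = [k for k, (_, exp) in self.store.items() if exp <= now]
        for key in expired:
            del self.store[key]                 # remove from the active store...
            self.passive_store.pop(key, None)   # ...then from the passive store
        return expired
```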
Client Connectivity
In the Mirrored Topology, clients connect to the active node of the cluster only; client connections to the passive node are blocked. A client does not need connections to both nodes, since they contain the same data. As stated earlier, when the active node goes down, the passive node automatically becomes the active node, and clients automatically establish connections with it.
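The failover behavior from the client's perspective can be sketched as follows. This is an illustrative Python simulation (the `MirrorClient` and `StubNode` classes are hypothetical; the real NCache client library handles reconnection internally): the client talks to the node it believes is active and, on a connection failure, retries against the promoted node.

```python
class StubNode:
    """Hypothetical stand-in for a cache server node."""

    def __init__(self, data, up=True):
        self.data = data
        self.up = up

    def get(self, key):
        if not self.up:
            raise ConnectionError("node is down")
        return self.data[key]


class MirrorClient:
    """Sketch of client-side failover in a mirrored cluster (illustrative only)."""

    def __init__(self, nodes):
        self.nodes = nodes    # [node believed active, node believed passive]
        self.current = 0      # index of the node we believe is active

    def get(self, key):
        try:
            return self.nodes[self.current].get(key)
        except ConnectionError:
            self.current = 1 - self.current   # fail over to the promoted node
            return self.nodes[self.current].get(key)
```

Because both nodes hold the same data, the retried read returns the same result the original active node would have.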
State Transfer
State transfer is triggered when a second node joins the cache cluster in the Mirrored Topology. The coordinator server (active node) then synchronizes its data with the newly joined (passive) node.
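Conceptually, this synchronization amounts to copying the active node's data set to the fresh passive node before normal replication resumes. The Python sketch below shows that idea in its simplest form (the `Node` class and `transfer_state` function are hypothetical names; actual NCache state transfer is a managed, incremental process, not a one-shot bulk copy).

```python
class Node:
    """Hypothetical cache node with a plain dict as its store."""

    def __init__(self):
        self.store = {}


def transfer_state(active, new_node):
    """Sketch of state transfer on join (illustrative only): the active
    (coordinator) node synchronizes its full data set to the newly
    joined passive node."""
    new_node.store.update(active.store)  # bulk copy of the active node's data
    return new_node                      # now a fully synchronized passive node
```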
See Also
Partitioned Topologies
Replicated Topology
Cache Cluster
Local Cache