Around the globe, containerization has largely won out over virtualization, bringing consistent performance and reliability to applications running across different computing environments. Containers isolate application code together with all of its dependencies, ensuring uniform behavior on any infrastructure. Amazon, like many other cloud platforms, is rapidly maturing its Elastic Kubernetes Service (EKS) to meet the growing computing needs of AWS customers.
While Amazon Elastic Kubernetes Service is a fully managed, secure, and reliable service, it still requires considerable manual configuration to manage clusters. To keep your application performing well in such an environment, consider NCache. As an in-memory caching solution, NCache boosts your application's performance and adds scalability by reducing latency in your Elastic Kubernetes Service cluster.
NCache Deployment Architecture in Elastic Kubernetes Service
The basic structure of how NCache fits into your EKS cluster is straightforward:
- Load Balancer: An AWS Classic Load Balancer routes HTTP requests to an ingress controller within the EKS cluster.
- Pods: Inside the EKS cluster, the pods running the cache servers are mapped to a Cache Discovery Service, which gives client applications access to the pods running the cache service.
- Gateway Service: The NCache Remote Monitoring Gateway service (an NGINX Ingress Controller) provides the load balancer configuration that directs traffic to specific pods with sticky sessions enabled.
- Applications: The remaining part of the cluster comprises various client applications, each deployed in its own environment and connected to the cache cluster through the Cache Discovery Service.
The flow of requests and the structure of an Elastic Kubernetes Service cluster with NCache deployed are shown in the diagram below.
Steps to Deploy NCache in Your EKS Cluster
This blog takes you through deploying NCache inside your AWS Elastic Kubernetes cluster.
Step 1: Create NCache Resources
Your first step should be to deploy NCache resources inside EKS. Deploying NCache will allow you to perform all management operations in your cluster.
You can deploy NCache with the help of certain YAML files. These files contain specific information necessary for seamless NCache functionality inside EKS. These files are:
- NCache Deployment File: This file contains the required pod and image specifications, i.e., the replica count, image repository, ports, etc.
- NCache Service File: This file creates a service on top of the deployment to expose the NCache servers.
- NCache Ingress File: This file contains the information needed to create a sticky session between a client application and the Management Center inside the Kubernetes cluster.
Among these files, the most critical is the NCache Deployment YAML file, detailed below:
```yaml
kind: Deployment
apiVersion: apps/v1beta1
metadata:
  name: ncache-deployment
  labels:
    app: ncache
spec:
  replicas: 2
  template:
    metadata:
      labels:
        app: ncache
    spec:
      nodeSelector:
        "beta.kubernetes.io/os": linux
      containers:
        - name: ncache
          image: docker.io/alachisoft/ncache:enterprise-server-linux-5.0.2
          ports:
            - name: management-tcp
              containerPort: 8250
            - name: management-http
              containerPort: 8251
            - name: client-port
              containerPort: 9800
```
Once you create this deployment by executing the following command from your AWS-configured command line, Kubernetes will create the number of pods specified under the replicas key. Each of these pods runs a container built from the image provided with the image key; in this case, that is the path to the NCache Enterprise server image on Docker Hub. The ports section lists all the ports NCache services need to function in the cluster.
```shell
kubectl create -f [dir]/filename.yaml
```
Refer to the NCache documentation on creating resources in EKS for further details.
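To confirm the deployment succeeded, you can query the resources it just created. A quick sketch, assuming the deployment name and app=ncache label from the sample YAML above:

```shell
# Check that the NCache deployment exists and both replicas are ready
kubectl get deployment ncache-deployment

# List the pods created from the deployment; each should be Running
kubectl get pods -l app=ncache
```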
Step 2: Create NCache Discovery Service
After successfully executing the previous step, you must create a discovery service that exposes your NCache resources to the client applications. Outside a Kubernetes cluster, clients communicate with servers through static IP addresses. Inside the cluster, however, every deployed pod is assigned a dynamic IP address at runtime, which the clients cannot know in advance. This mismatch causes communication problems inside the cluster, where client applications fail to identify the NCache servers, making a headless discovery service essential.
This service resolves the issue by exposing the IP addresses of the NCache servers to the client applications. These clients use these IPs to create the required cache handles and to start performing cache operations. To let all clients connect to the headless service with ease, create and deploy a Cache Discovery YAML file as provided:
```yaml
kind: Service
apiVersion: v1 # depends on underlying Kubernetes version
metadata:
  name: cacheserver
  labels:
    app: cacheserver
spec:
  clusterIP: None
  selector:
    app: ncache # same label as provided in the ncache deployment yaml
  ports:
    - name: management-tcp
      port: 8250
      targetPort: 8250
    - name: management-http
      port: 8251
      targetPort: 8251
    - name: client-port
      port: 9800
      targetPort: 9800
```
What makes this service headless is the clusterIP field set to None, which means the service is specific to NCache and is not accessible outside the EKS cluster. The selector, set to app: ncache, lets the service identify all pods labeled ncache and expose their IPs to the clients. A client only needs a single IP address to connect, because the server it connects to shares the IP addresses of all the servers that are part of that cache cluster. Once the file is modified, execute the following command from the AWS command line interface.
```shell
kubectl create -f [dir]/cachediscovery.yaml
```
For a detailed step-by-step deployment, follow the NCache documentation on creating discovery services.
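A quick way to verify the headless service is wired up correctly is to inspect its endpoints; the service name cacheserver comes from the sample YAML above. A sketch:

```shell
# The headless service itself has no cluster IP (CLUSTER-IP shows None)
kubectl get service cacheserver

# Its endpoints, however, should list the IP:port pairs of every pod
# matched by the app=ncache selector
kubectl get endpoints cacheserver
```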
Step 3: Create Access for NCache Management
Set up an ingress controller to allow NCache management access from outside the cluster. This controller abstracts away standard load balancer container deployment strategies. A frequently used ingress controller is the NGINX Controller, which, when deployed, creates all the services required to expose NCache services outside the cluster.
To deploy the NGINX Ingress Controller in your EKS cluster, you must create multiple file deployments, as discussed below.
- NGINX Mandatory file: This base file is necessary to run the NGINX Controller, which in this case acts as a load balancer inside your EKS cluster. You can find this file on GitHub.
- NGINX Service file: This file contains the information on the Layer7 load balancer that exposes the NGINX Ingress Controller to the external Kubernetes environment.
- NGINX Config file: This file contains all the parameters required to configure the Layer7 load balancer.
Among these files, the NGINX Service YAML is the one that contains the port information required to create a load balancer aware of NCache management access, as shown below:
```yaml
kind: Service
apiVersion: v1
metadata:
  name: ingress-nginx
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
  annotations:
    ...
spec:
  type: LoadBalancer
  selector:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
  ports:
    - name: http
      port: 80
      protocol: TCP
      targetPort: http
    - name: https
      port: 443
      protocol: TCP
      targetPort: http
```
All these files are YAML files, which makes them easy to deploy inside the EKS cluster. All you need to do is run the following command for each file from the AWS-configured command line tool.
```shell
kubectl create -f [dir]/<filename>.yaml
```
On execution, this command will create a load balancer that enables stickiness inside your cluster. For detailed information, refer to the documentation on creating access for NCache management.
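Once the NGINX service is up, AWS provisions the load balancer and attaches a public DNS name to it; that name is what you use to reach NCache management from outside the cluster. A sketch, assuming the service name and namespace from the YAML above:

```shell
# The EXTERNAL-IP column shows the DNS name of the AWS load balancer
kubectl get service ingress-nginx --namespace ingress-nginx

# Or extract just the hostname with jsonpath
kubectl get service ingress-nginx --namespace ingress-nginx \
  -o jsonpath='{.status.loadBalancer.ingress[0].hostname}'
```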
Step 4: Create Cache Cluster
Now that your deployments and services are in place, you need to create an NCache cluster so your clients can connect to the cache servers. Next, deploy the NCache service and the NCache Management Center, which is fully integrated for management operations; you can use it to create and manage your clustered cache. Note that the IP addresses of the server nodes you add must match the IP addresses the Kubernetes cluster assigned to the server pods. You can retrieve these IP addresses and details by executing the kubectl get pods command from the AWS command-line tool.
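As a sketch of that last point, assuming the app=ncache label from the deployment YAML, you can list the server pods along with their assigned IPs, or pull out just the IPs to use as server-node addresses:

```shell
# Full pod listing, including each pod's assigned IP address
kubectl get pods -l app=ncache -o wide

# Or just the IPs, one per server pod
kubectl get pods -l app=ncache -o jsonpath='{.items[*].status.podIP}'
```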
Step 5: Create Client Application Deployment
NCache client deployment, like NCache resource deployment, specifies the number of running client image containers, the private Docker Hub repository where the application is stored, ports, and other details. This information is essential for creating a fully functional client container.
To access the client application from a private repository, you need to provide login credentials each time. To simplify this, you can create a secrets.yaml file containing your login information, which only needs to be populated once and is accessible to every client resource. Refer to the NCache documentation on creating client deployments for detailed steps and YAML file examples.
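If your client image lives in a private Docker Hub repository, the standard Kubernetes approach is an image-pull secret that deployments reference under imagePullSecrets. A hedged sketch (the secret name regcred is an assumption for illustration, not from the NCache docs):

```shell
# Create a docker-registry secret once; client deployments can then
# reference it instead of prompting for credentials on every pull
kubectl create secret docker-registry regcred \
  --docker-server=https://index.docker.io/v1/ \
  --docker-username=<your-username> \
  --docker-password=<your-password>
```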
The actual client deployment is also defined in a YAML file, which includes all the information necessary for deploying your client application (or applications) in your EKS cluster. Here is an example:
```yaml
kind: Deployment
apiVersion: apps/v1beta1 # it depends on the underlying Kubernetes version
metadata:
  name: client
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: client
    spec:
      nodeSelector:
        "beta.kubernetes.io/os": linux
      containers:
        - name: client
          image: # Your docker client image here
          ports:
            - name: app-port
              containerPort: 80
            # In case of NCache client installation, add the following remaining ports
            - name: management-tcp
              containerPort: 8250
            - name: management-http
              containerPort: 8251
            - name: client-port
              containerPort: 9800
```
The deployment process involves:
- Accessing the image from the repository.
- Reading the client secrets file from the NCache secret resource for authentication.
- Pulling the image and deploying it in a container with the client application running.
Because this is a deployment and not a service, you need to start the client application yourself by executing a command inside the pod from the AWS CLI:
```shell
kubectl exec --namespace=ncache client-podname -- /app/<clientapplication>/run.sh democlusteredcache cacheserver
```
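The client-podname placeholder above is the generated name of the client pod. A quick way to look it up and then follow the application's output, assuming the app=client label from the sample client deployment:

```shell
# Find the generated pod name for the client deployment
kubectl get pods -l app=client --namespace=ncache

# Follow the client application's output once it is running
kubectl logs -f client-podname --namespace=ncache
```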
Step 6: Monitor NCache Cluster
At this point, you have NCache set up in a fully functional Amazon EKS cluster, and with it you have gained high availability, scalability, reliability, and more. However, NCache offers even more. Within the EKS cluster, amid operations, data storage, and transfers, NCache provides tools to monitor cache activity. These tools help you assess your cluster’s health, performance, network issues, and more. Check out the NCache Monitor for a graphical depiction of real-time performance and NCache Statistics for performance metrics.
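If you have not exposed the management ports through the ingress yet, a quick way to reach the monitoring tools from your workstation is kubectl port-forward; 8251 is the management-http port declared in the deployment YAML, and the pod name below is a placeholder. A sketch:

```shell
# Forward the NCache management HTTP port from one of the server pods
# to your local machine, then browse to http://localhost:8251
kubectl port-forward ncache-server-podname 8251:8251
```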
Step 7: Scaling NCache Cluster
NCache allows you to scale your cluster up or down at runtime, providing the scalability needed to enhance the overall performance of your application. For instance, if the cache cluster is receiving requests faster than its nodes can keep up with the growing number of transactions, NCache lets you add more servers to accommodate the load. To see how to add or remove server nodes from the cache cluster at runtime, all while staying inside the EKS cluster, check out our documentation on Adding Cache Servers in EKS and Removing Cache Servers from EKS.
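On the Kubernetes side, adding capacity starts with scaling the deployment. Note that this only creates additional pods; the new servers must still be joined to the cache cluster through NCache management, as described in the linked documentation. A sketch, assuming the names from the deployment YAML:

```shell
# Scale the NCache deployment from 2 replicas to 3
kubectl scale deployment ncache-deployment --replicas=3

# Confirm the new pod and note its IP for adding it to the cache cluster
kubectl get pods -l app=ncache -o wide
```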
Conclusion
This article walked you through a step-by-step deployment of NCache in an Amazon EKS cluster. In short, NCache provides a first-rate environment for running your application. So, what are you waiting for? Deploy NCache in your EKS cluster right now and see the results for yourself.