
Managing an EKS Cluster with Vagrant, AWS, and Kubernetes Tools

Continuing from our initial setup of a virtual environment using Vagrant and configuring AWS CLI, eksctl, kubectl, and Helm, this guide will delve deeper into managing a Kubernetes cluster on AWS EKS. We’ll cover cluster creation, scaling, autoscaling, and role-based access control (RBAC).

Creating and Managing an EKS Cluster

Cluster Configuration File (cluster.yaml):

A cluster.yaml file is used to define the configuration for your EKS cluster. Below is a basic example structure:

apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig
metadata:
  name: eks-cluster
  region: us-west-2
nodeGroups:
  - name: ng-1
    instanceType: t3.medium
    desiredCapacity: 3
    minSize: 1
    maxSize: 4

Create the EKS Cluster:

Use eksctl to create the cluster as defined in the cluster.yaml file.

eksctl create cluster -f cluster.yaml

Check Cluster and Node Group Status:

eksctl get cluster
eksctl get nodegroup --cluster eks-cluster

Scaling the Node Group:

To scale the node group to 5 nodes:

eksctl scale nodegroup --cluster=eks-cluster --nodes=5 --name=ng-1

To scale back to 3 nodes:

eksctl scale nodegroup --cluster=eks-cluster --nodes=3 --name=ng-1

Verify Node Status:

sudo kubectl get nodes

Deleting the Node Group and Cluster:

To delete the node group:

eksctl delete nodegroup --cluster=eks-cluster --name=ng-1 --approve

To delete the cluster:

eksctl delete cluster --name eks-cluster

Deploying Applications and Enabling Autoscaling

Deployment Configuration (deployment.yaml):

A deployment.yaml file defines your application deployment on Kubernetes.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: test-deployment
spec:
  replicas: 3
  selector:
    matchLabels:
      app: test
  template:
    metadata:
      labels:
        app: test
    spec:
      containers:
      - name: test-container
        image: nginx:1.17.1
        ports:
        - containerPort: 80

Applying the Deployment:

sudo kubectl apply -f deployment.yaml
sudo kubectl get pods

Setting Up Cluster Autoscaler:

Apply the autoscaler configuration:

sudo kubectl apply -f https://raw.githubusercontent.com/kubernetes/autoscaler/master/cluster-autoscaler/cloudprovider/aws/examples/cluster-autoscaler-autodiscover.yaml

Annotate the cluster-autoscaler deployment so its own pod is not evicted:

sudo kubectl -n kube-system annotate deployment.apps/cluster-autoscaler cluster-autoscaler.kubernetes.io/safe-to-evict="false"

Edit the autoscaler deployment to set the cluster name and update the image version:

sudo kubectl -n kube-system edit deployment.apps/cluster-autoscaler

Update the --node-group-auto-discovery flag to reflect your cluster name, and set the image tag to the Cluster Autoscaler release that matches your cluster's Kubernetes version.
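After editing, the relevant part of the container spec should look roughly like the sketch below. The image tag shown is a placeholder, and the trailing tag in the auto-discovery flag must match the cluster name from cluster.yaml:

```yaml
    spec:
      containers:
        - name: cluster-autoscaler
          # Placeholder tag — use the release matching your Kubernetes minor version
          image: registry.k8s.io/autoscaling/cluster-autoscaler:v1.26.2
          command:
            - ./cluster-autoscaler
            - --cloud-provider=aws
            # Discover node-group ASGs tagged for this cluster;
            # "eks-cluster" matches the name defined in cluster.yaml
            - --node-group-auto-discovery=asg:tag=k8s.io/cluster-autoscaler/enabled,k8s.io/cluster-autoscaler/eks-cluster
```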

Check Autoscaler Logs and Node Status:

sudo kubectl -n kube-system logs deployment.apps/cluster-autoscaler
sudo kubectl get nodes

Scaling Deployment Replicas:

sudo kubectl scale --replicas=3 deployment/test-deployment
sudo kubectl get pods

Configuring RBAC for Secure Access

Role and RoleBinding Configuration Files:

Create role.yaml and rolebinding.yaml to define roles and permissions for users.

Example role.yaml:

apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: default
  name: pod-reader
rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["get", "watch", "list"]

Example rolebinding.yaml:

apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: read-pods
  namespace: default
subjects:
- kind: User
  name: testadminuser
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io

Apply the RBAC Configuration:

sudo kubectl apply -f role.yaml
sudo kubectl apply -f rolebinding.yaml
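As a quick sanity check, you can ask the API server whether the bound user now has exactly the permissions the Role grants; `kubectl auth can-i` with `--as` impersonates the user without switching credentials:

```
sudo kubectl auth can-i list pods --namespace default --as testadminuser
sudo kubectl auth can-i delete pods --namespace default --as testadminuser
```

The first command should answer "yes" (pod-reader grants get/watch/list on pods); the second should answer "no", since the role does not grant delete.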

Managing AWS IAM Users and Kubernetes Authentication

Configuring AWS for testadminuser:

Set up AWS CLI with testadminuser:

aws configure
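To keep the two identities separate, it can help to configure each as a named profile instead of overwriting the default credentials. The transcript below is a sketch with placeholder values:

```
aws configure --profile testadminuser
# AWS Access Key ID [None]: <testadminuser access key>
# AWS Secret Access Key [None]: <testadminuser secret key>
# Default region name [None]: us-east-2
# Default output format [None]: json
```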

Edit the AWS Auth ConfigMap to map the IAM user to Kubernetes roles:

sudo kubectl -n kube-system get configmap aws-auth -o yaml > aws-auth-configmap.yaml
vim aws-auth-configmap.yaml

Add the following under mapUsers:

  mapUsers: |
    - userarn: arn:aws:iam::xxxxxxxxx:user/testadminuser
      username: testadminuser
      groups:
        - system:masters

Apply the updated ConfigMap:

sudo kubectl apply -f aws-auth-configmap.yaml -n kube-system

Verify the configuration:

sudo kubectl -n kube-system get cm aws-auth
sudo kubectl -n kube-system describe cm aws-auth

Switch AWS Profiles and Verify Access:

Update the ~/.aws/credentials file to include profiles for both users:

[clusteradmin]
aws_access_key_id=…..
aws_secret_access_key=…..

[testadminuser]
aws_access_key_id=…..
aws_secret_access_key=…..
region=us-east-2
output=json

Check the current identity, switch to the clusteradmin profile, and verify access:

aws sts get-caller-identity
export AWS_PROFILE="clusteradmin"
aws sts get-caller-identity
sudo kubectl get nodes
sudo kubectl -n kube-system get pods

This comprehensive guide covers the advanced setup and management of an EKS cluster, including cluster creation, scaling, autoscaling, and role-based access control. By following these steps, you can efficiently manage your Kubernetes infrastructure, ensuring scalability, security, and operational efficiency. As a DevOps professional, mastering these tools and workflows is essential for effective infrastructure management and deployment automation.

Ali Imran
Over the past 20+ years, I have been working as a software engineer, architect, and programmer, creating, designing, and programming various applications. My main focus has always been to achieve business goals and transform business ideas into digital reality. I have successfully solved numerous business problems and increased productivity for small businesses as well as enterprise corporations through the solutions that I created. My strong technical background and ability to work effectively in team environments make me a valuable asset to any organization.
