Managing an EKS Cluster with Vagrant, AWS, and Kubernetes Tools

Continuing from our initial setup, where we built a virtual environment with Vagrant and configured the AWS CLI, eksctl, kubectl, and Helm, this guide delves deeper into managing a Kubernetes cluster on AWS EKS. We’ll cover cluster creation, scaling, autoscaling, and role-based access control (RBAC).

Creating and Managing an EKS Cluster

Cluster Configuration File (cluster.yaml):

A cluster.yaml file is used to define the configuration for your EKS cluster. Below is a basic example structure:

apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig
metadata:
  name: eks-cluster
  region: us-west-2
nodeGroups:
  - name: ng-1
    instanceType: t3.medium
    desiredCapacity: 3
    minSize: 1
    maxSize: 4
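
If you would rather have EKS manage the node group lifecycle (AMI updates, graceful draining), eksctl also accepts a managedNodeGroups section. The sketch below is an untested variant of the same configuration; the group name mng-1 is hypothetical:

apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig
metadata:
  name: eks-cluster
  region: us-west-2
managedNodeGroups:
  - name: mng-1            # hypothetical name, for illustration only
    instanceType: t3.medium
    desiredCapacity: 3
    minSize: 1
    maxSize: 4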

Create the EKS Cluster:

Use eksctl to create the cluster as defined in the cluster.yaml file.

eksctl create cluster -f cluster.yaml
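
Cluster creation typically takes 15 to 20 minutes. eksctl writes the new cluster's credentials into your kubeconfig automatically; if you later need to regenerate them (for example, for another user or machine), the standard AWS CLI command is:

aws eks update-kubeconfig --region us-west-2 --name eks-cluster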

Check Cluster and Node Group Status:

eksctl get cluster
eksctl get nodegroup --cluster eks-cluster

Scaling the Node Group:

To scale the node group to 5 nodes:

eksctl scale nodegroup --cluster=eks-cluster --nodes=5 --name=ng-1

To scale back to 3 nodes:

eksctl scale nodegroup --cluster=eks-cluster --nodes=3 --name=ng-1

Verify Node Status:

kubectl get nodes
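
To watch nodes join or leave while a scaling operation is still in progress, the --watch flag streams updates instead of printing a single snapshot:

kubectl get nodes --watch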

Deleting the Node Group and Cluster:

To delete the node group:

eksctl delete nodegroup --cluster=eks-cluster --name=ng-1 --approve

To delete the cluster:

eksctl delete cluster --name eks-cluster
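
By default, eksctl returns once deletion has been initiated. If you want the command to block until the underlying CloudFormation stacks are gone, eksctl supports a wait flag:

eksctl delete cluster --name eks-cluster --wait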

Deploying Applications and Enabling Autoscaling

Deployment Configuration (deployment.yaml):

A deployment.yaml file defines your application deployment on Kubernetes.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: test-deployment
spec:
  replicas: 3
  selector:
    matchLabels:
      app: test
  template:
    metadata:
      labels:
        app: test
    spec:
      containers:
      - name: test-container
        image: nginx:1.17.1
        ports:
        - containerPort: 80
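
A Deployment on its own is only reachable from inside the cluster. To expose the nginx pods, a Service is the usual companion object; the sketch below (the name test-service is hypothetical) provisions an AWS load balancer in front of them, and is applied the same way as the deployment manifest in the next step:

apiVersion: v1
kind: Service
metadata:
  name: test-service        # hypothetical name, for illustration only
spec:
  type: LoadBalancer        # on EKS this provisions an AWS load balancer
  selector:
    app: test
  ports:
  - port: 80
    targetPort: 80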

Applying the Deployment:

kubectl apply -f deployment.yaml
kubectl get pods

Setting Up Cluster Autoscaler:

Apply the autoscaler configuration:

kubectl apply -f https://raw.githubusercontent.com/kubernetes/autoscaler/master/cluster-autoscaler/cloudprovider/aws/examples/cluster-autoscaler-autodiscover.yaml

Annotate the autoscaler deployment so the Cluster Autoscaler does not evict its own pod:

kubectl -n kube-system annotate deployment.apps/cluster-autoscaler cluster-autoscaler.kubernetes.io/safe-to-evict="false"

Edit the autoscaler deployment to set the cluster name and update the image version:

kubectl -n kube-system edit deployment.apps/cluster-autoscaler

In the editor, update the --node-group-auto-discovery flag so its tag filter ends with your cluster name (k8s.io/cluster-autoscaler/eks-cluster in this example), and set the image tag to the latest Cluster Autoscaler release that matches your cluster's Kubernetes minor version.
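
After editing, the relevant part of the container spec should look roughly like the sketch below. The tag v1.28.2 is a placeholder (pick the release matching your Kubernetes version), and the full manifest carries additional flags omitted here:

spec:
  containers:
  - name: cluster-autoscaler
    image: registry.k8s.io/autoscaling/cluster-autoscaler:v1.28.2   # placeholder tag
    command:
    - ./cluster-autoscaler
    - --cloud-provider=aws
    - --node-group-auto-discovery=asg:tag=k8s.io/cluster-autoscaler/enabled,k8s.io/cluster-autoscaler/eks-cluster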

Check Autoscaler Logs and Node Status:

kubectl -n kube-system logs deployment.apps/cluster-autoscaler
kubectl get nodes

Scaling Deployment Replicas:

kubectl scale --replicas=3 deployment/test-deployment
kubectl get pods

Configuring RBAC for Secure Access

Role and RoleBinding Configuration Files:

Create role.yaml and rolebinding.yaml to define roles and permissions for users.

Example role.yaml:

apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: default
  name: pod-reader
rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["get", "watch", "list"]

Example rolebinding.yaml:

apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: read-pods
  namespace: default
subjects:
- kind: User
  name: testadminuser
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io

Apply the RBAC Configuration:

kubectl apply -f role.yaml
kubectl apply -f rolebinding.yaml
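
Before switching credentials, you can check that the binding behaves as intended by impersonating the user (this assumes your current identity is allowed to impersonate, which the cluster creator's admin credentials are):

kubectl auth can-i list pods --namespace default --as testadminuser
kubectl auth can-i delete pods --namespace default --as testadminuser

The first command should answer yes and the second no, since pod-reader grants only get, watch, and list.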

Managing AWS IAM Users and Kubernetes Authentication

Configuring AWS for testadminuser:

Set up AWS CLI with testadminuser:

aws configure
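
If the default profile already holds another identity, the same command can write the new keys under a named profile instead (the profile name clusteradmin is reused later in this guide):

aws configure --profile clusteradmin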

Edit the AWS Auth ConfigMap to map the IAM user to Kubernetes roles:

kubectl -n kube-system get configmap aws-auth -o yaml > aws-auth-configmap.yaml
vim aws-auth-configmap.yaml

In the data section, add (or extend) the mapUsers key:

  mapUsers: |
    - userarn: arn:aws:iam::xxxxxxxxx:user/testadminuser
      username: testadminuser
      groups:
        - system:masters

Apply the updated ConfigMap:

kubectl apply -f aws-auth-configmap.yaml -n kube-system

Verify the configuration:

kubectl -n kube-system get cm aws-auth
kubectl -n kube-system describe cm aws-auth
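
As an alternative to hand-editing the ConfigMap, eksctl can write the same mapping for you; a sketch using the placeholder account ID from above:

eksctl create iamidentitymapping \
  --cluster eks-cluster \
  --region us-west-2 \
  --arn arn:aws:iam::xxxxxxxxx:user/testadminuser \
  --username testadminuser \
  --group system:masters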

Switch AWS Profiles and Verify Access:

Update the ~/.aws/credentials file to include profiles for both users:

[default]
# credentials for testuser
aws_access_key_id=.....
aws_secret_access_key=.....
region=us-east-2
output=json

[clusteradmin]
# credentials for testadminuser
aws_access_key_id=.....
aws_secret_access_key=.....
region=us-east-2
output=json

Switch to the clusteradmin profile and verify that both the AWS identity and the cluster access change accordingly:

aws sts get-caller-identity
export AWS_PROFILE="clusteradmin"
aws sts get-caller-identity
kubectl get nodes
kubectl -n kube-system get pods

This comprehensive guide covers the advanced setup and management of an EKS cluster, including cluster creation, scaling, autoscaling, and role-based access control. By following these steps, you can efficiently manage your Kubernetes infrastructure, ensuring scalability, security, and operational efficiency. As a DevOps professional, mastering these tools and workflows is essential for effective infrastructure management and deployment automation.

Ali Imran
https://ITsAli.com