Managed K8s – Amazon EKS cluster and worker nodes via CloudFormation

This demo post is about deploying an Amazon EKS cluster with worker nodes, launched into a new VPC. Amazon EKS runs the Kubernetes control plane instances across multiple Availability Zones to ensure high availability, and automatically detects and replaces unhealthy control plane instances. Amazon EKS is certified Kubernetes conformant, so you can use existing tooling and plugins from partners and the Kubernetes community.

Why use Amazon EKS?

  1. Hybrid container deployments – run highly available and scalable K8s clusters, with their worker nodes, on Amazon Web Services while maintaining full compatibility with your existing K8s deployments.
  2. Microservices – run microservices applications with deep integrations to AWS services, while getting access to out-of-the-box Kubernetes (K8s) functionality.
  3. Lift and shift migration – easily migrate existing applications to Amazon EKS without needing to refactor your code or tooling.
  4. Authentication – it just works with IAM.

Pre-requisites:

Ensure the following components are installed and set up before starting with Amazon EKS:

  1. AWS CLI – while you can use the AWS Console to create a cluster in EKS, the AWS CLI is easier. The minimum required version is 1.16.73 (quick version checks follow this list).
  2. kubectl – used for communicating with the cluster API server. For installation instructions, see the Kubernetes documentation.
  3. aws-iam-authenticator – allows IAM authentication with the Kubernetes cluster.
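
To confirm everything is in place before moving on, each tool can report its version; these quick sanity checks use the standard flags for each tool:

aws --version
kubectl version --client
aws-iam-authenticator help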

To install aws-iam-authenticator with Homebrew

The easiest way to install the aws-iam-authenticator is with Homebrew.

  1. If you do not already have Homebrew installed on your Mac, install it with the following command:

/bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/master/install.sh)"

  2. Install the aws-iam-authenticator with the following command:

brew install aws-iam-authenticator

  3. Test that the aws-iam-authenticator binary works:

aws-iam-authenticator help

Set up a new IAM role with EKS permissions

  • Go to the IAM Management Console.
  • Click Create role.
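
The console walks you through selecting EKS as the trusted service. If you prefer the CLI, here is a minimal sketch of the same role setup, assuming a hypothetical role name of eksServiceRole; the trust policy allows the EKS service to assume the role, and the two managed policies are the ones EKS required at the time of writing:

# Create the role with a trust policy for the EKS service (role name is hypothetical)
aws iam create-role \
  --role-name eksServiceRole \
  --assume-role-policy-document '{"Version":"2012-10-17","Statement":[{"Effect":"Allow","Principal":{"Service":"eks.amazonaws.com"},"Action":"sts:AssumeRole"}]}'

# Attach the EKS managed policies to the role
aws iam attach-role-policy --role-name eksServiceRole --policy-arn arn:aws:iam::aws:policy/AmazonEKSClusterPolicy
aws iam attach-role-policy --role-name eksServiceRole --policy-arn arn:aws:iam::aws:policy/AmazonEKSServicePolicy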

Infra as Code – AWS CloudFormation

We will shortly use templates to provision the Kubernetes master control plane and its associated worker nodes. Before that, I have already performed the steps below to create a blank Git branch and push the upstream commit to GitHub.

  1. Create a new Git repo.
  2. Create a new feature branch.
  3. Push the empty branch upstream with the git command below:

git push --set-upstream origin feature/deploy-eks-cluster-vpc

There are plenty of AWS CloudFormation templates available for provisioning EKS clusters and worker nodes. I have picked mine from the AWS official documentation. These can be further modified and supplied with other parameters on an as-needed basis.

https://github.com/varunmaggo/IaCTemplates

To execute this infrastructure as code, we switch over to AWS CloudFormation and create new stacks for the EKS cluster and the worker nodes.
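
From the CLI, creating a stack from one of those templates looks roughly like this; a minimal sketch, where the stack name eks-vpc and the local template file name are hypothetical:

# Create the VPC stack from a template saved locally (names are hypothetical)
aws cloudformation create-stack \
  --stack-name eks-vpc \
  --template-body file://amazon-eks-vpc.yaml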

Worker network configuration needed for the EKS cluster.

We will need the VPC, subnet and security group IDs below when we run the AWS commands for cluster provisioning.
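
These values come out as stack outputs, so one way to capture them is to query the VPC stack (reusing the hypothetical eks-vpc stack name from above):

aws cloudformation describe-stacks \
  --stack-name eks-vpc \
  --query 'Stacks[0].Outputs'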

Switch over to ZSH to provision the EKS cluster. More info is here: https://docs.aws.amazon.com/eks/latest/userguide/create-cluster.html
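
A minimal sketch of the create call, using the cluster name demo (used later in this post), the service role from earlier, and placeholder subnet and security group IDs that would come from the VPC stack outputs:

# Create the EKS control plane (account ID, subnet and security group IDs are placeholders)
aws eks create-cluster \
  --name demo \
  --role-arn arn:aws:iam::111122223333:role/eksServiceRole \
  --resources-vpc-config subnetIds=subnet-aaaa1111,subnet-bbbb2222,securityGroupIds=sg-cccc3333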

It takes a couple of minutes before the cluster status turns to successfully created (ACTIVE).
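
One way to check progress without refreshing the console is to poll the cluster status; it reads CREATING until the control plane is up, then ACTIVE:

aws eks describe-cluster --name demo --query cluster.status --output text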

Next, we update our kubeconfig file with the information for the new cluster so kubectl can communicate with it. To do this, we use the AWS CLI update-kubeconfig command:

aws eks --region ap-southeast-2 update-kubeconfig --name demo
We can now test our configuration using the kubectl get svc command:

kubectl get svc

Next, we create the CloudFormation stack for the worker nodes. Its parameters capture handy information: the cluster name, the worker node instance sizes, the volume size, and the associated VPC and subnets.
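
Creating the worker node stack from the CLI follows the same pattern; a sketch with hypothetical stack and template names and one illustrative parameter (check the template you picked for its actual parameter names). Note the extra capability flag, since the worker template creates an IAM role for the nodes:

# Create the worker node stack (stack/template names are hypothetical)
aws cloudformation create-stack \
  --stack-name demo-worker-nodes \
  --template-body file://amazon-eks-nodegroup.yaml \
  --parameters ParameterKey=ClusterName,ParameterValue=demo \
  --capabilities CAPABILITY_IAM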

Once the worker nodes are created, we need to ensure we can communicate with them from our machines. For that, we need to make use of the ARN of the instance role.

We execute the curl command to download the YAML file, make the code change, and apply it:

curl -O https://amazon-eks.s3-us-west-2.amazonaws.com/cloudformation/2019-01-09/aws-auth-cm.yaml
kubectl apply -f aws-auth-cm.yaml
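
The code change is replacing the rolearn placeholder in the downloaded file with the NodeInstanceRole ARN taken from the worker node stack outputs. The shape of the file, with a hypothetical account ID and role name filled in, looks like this:

apiVersion: v1
kind: ConfigMap
metadata:
  name: aws-auth
  namespace: kube-system
data:
  mapRoles: |
    # Replace the rolearn below with your worker node instance role ARN (this one is a placeholder)
    - rolearn: arn:aws:iam::111122223333:role/demo-worker-nodes-NodeInstanceRole-EXAMPLE
      username: system:node:{{EC2PrivateDNSName}}
      groups:
        - system:bootstrappers
        - system:nodes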

The next step is to see the worker nodes. In the previous step, we just added the ARN for the worker node role; the worker nodes themselves were already created, so we just have to list them using native Kubernetes commands.

To see the worker nodes, we execute the command below:

kubectl get nodes
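
It can take a minute or two for the nodes to register and reach the Ready status; adding the --watch flag keeps the listing open while they come up:

kubectl get nodes --watch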

That’s it!

That’s all that is needed to start playing with the K8s bits, kubectl-ing along! This concludes this post; the next one continues with deploying a container image and debugging its logs.
