Managed K8s – Azure AKS – Deploy WordPress CMS with Dockerized image

This demo blog post is about deploying the WordPress CMS by creating a Docker image from scratch and using Azure Container Registry (ACR) to privately store the dockerized image. We then use managed K8s, Azure Kubernetes Service (AKS), to deploy this image onto an AKS cluster and its worker nodes.

Approach

  1. Build the docker image for WordPress
  2. Use Azure container registry to store the dockerized image
  3. Create Azure Database for MySQL
  4. Create AKS Cluster and deploy the image
  5. Create Kubernetes manifest file with the desired state for WordPress CMS.

Prerequisites

  1. Microsoft Azure account – a free trial works.
  2. Azure CLI
  3. kubectl – the Kubernetes CLI
  4. Docker Desktop for macOS
  5. The latest WordPress release, downloaded from wordpress.org, to build the Docker image

Build Docker Image

Download the latest WordPress release. Create a new directory named WordPress for your project and use a simple folder structure.

Extract the compressed file to view the contents of the package.

Rename wp-config-sample.php to wp-config.php

Edit wp-config.php so that it reads the database host, username, and password from environment variables supplied by the Kubernetes manifest file.
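A minimal sketch of that change, assuming the conventional WORDPRESS_DB_* variable names; the names only need to match whatever the manifest sets:

// read DB settings from environment variables injected by the
// Kubernetes deployment (variable names are assumptions)
define( 'DB_NAME', getenv('WORDPRESS_DB_NAME') );
define( 'DB_USER', getenv('WORDPRESS_DB_USER') );
define( 'DB_PASSWORD', getenv('WORDPRESS_DB_PASSWORD') );
define( 'DB_HOST', getenv('WORDPRESS_DB_HOST') );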

Before creating the Dockerfile, we just need to restructure the folder as below.

Create a blank Dockerfile

Create a new Dockerfile with the contents below. The file sets up an Apache web server with PHP and enables the mysqli extension.
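A minimal sketch of such a Dockerfile, assuming the official php:7.4-apache base image and a WordPress folder sitting next to the Dockerfile; the original may differ in versions and paths:

# Apache web server with PHP (base image version is an assumption)
FROM php:7.4-apache

# enable the mysqli extension WordPress uses to talk to MySQL
RUN docker-php-ext-install mysqli

# copy the WordPress files into Apache's document root
COPY ./WordPress/ /var/www/html/

EXPOSE 80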

Before executing the docker build command, ensure Docker Desktop is installed and running.

docker build --tag myblog:latest .


It’ll take some time to build and tag the image. Once it’s completed, we can open up Docker Desktop and view the latest image added to our machine’s disk: myblog with the tag latest is added here.

Use ACR – Azure Container Registry to store the dockerized image
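The az acr create command below assumes the wordpress-project resource group already exists. If it doesn't, a quick sketch of creating it first (the region is illustrative):

# create the resource group (region is illustrative)
az group create --name wordpress-project --location australiaeast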

az acr create --resource-group wordpress-project \
  --name wpAzureContainerRegistry2020 --sku Basic

Log in to the ACR that was just created.
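For example, using the Azure CLI:

az acr login --name wpAzureContainerRegistry2020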

Tag and push the dockerized image to this ACR:

docker tag 7eb3872a0449 wpazurecontainerregistry2020.azurecr.io/7eb3872a0449:latest
docker push wpazurecontainerregistry2020.azurecr.io/7eb3872a0449:latest

Create AKS cluster

az aks create --resource-group wordpress-project --name wordpresscluster --node-count 1 --generate-ssh-keys

Also attach the ACR registry to this same AKS cluster so the cluster can pull the image.

az aks update -n wordpresscluster -g wordpress-project \
  --attach-acr wpAzureContainerRegistry2020

Install kubectl via az aks install-cli, then pull the cluster credentials so kubectl commands work against the new cluster.
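A sketch of those two steps, using the resource group and cluster names from above:

# install kubectl and merge the cluster credentials into ~/.kube/config
az aks install-cli
az aks get-credentials --resource-group wordpress-project --name wordpresscluster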

kubectl get nodes 

Once the nodes are set up, we need to ensure Azure Database for MySQL is up and running.
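This can be done from the portal or the CLI. A sketch of the CLI route, on the single-server offering of the time (server name, admin credentials, and SKU are illustrative):

# create an Azure Database for MySQL server (illustrative values)
az mysql server create --resource-group wordpress-project \
  --name wordpress-mysql-2020 --location australiaeast \
  --admin-user wpadmin --admin-password <your-password> \
  --sku-name GP_Gen5_2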

After successfully creating the Azure Database for MySQL, switch back to VS Code and apply the Kubernetes manifest with the desired state for WordPress.
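The post's manifest isn't shown, so here is a minimal sketch of what it could look like: a Deployment that pulls the image pushed to ACR above, plus a LoadBalancer Service, with illustrative database values wired into the WORDPRESS_DB_* environment variables that wp-config.php reads (file name and values are assumptions):

# wordpress.yaml – illustrative manifest; replace placeholder values
apiVersion: apps/v1
kind: Deployment
metadata:
  name: wordpress
spec:
  replicas: 1
  selector:
    matchLabels:
      app: wordpress
  template:
    metadata:
      labels:
        app: wordpress
    spec:
      containers:
        - name: wordpress
          image: wpazurecontainerregistry2020.azurecr.io/7eb3872a0449:latest
          ports:
            - containerPort: 80
          env:
            - name: WORDPRESS_DB_HOST
              value: "<mysql-server>.mysql.database.azure.com"
            - name: WORDPRESS_DB_USER
              value: "wpadmin@<mysql-server>"
            - name: WORDPRESS_DB_PASSWORD
              value: "<your-password>"
            - name: WORDPRESS_DB_NAME
              value: "wordpress"
---
apiVersion: v1
kind: Service
metadata:
  name: wordpress
spec:
  type: LoadBalancer
  ports:
    - port: 80
  selector:
    app: wordpress

kubectl apply -f wordpress.yaml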

kubectl get service

It takes some time for all the pods to be set up and running, and for the service to be assigned an external IP.

Browse to the external IP [20.53.109.149] from the service output above and it just works: it loads up the WordPress admin installation page.

That’s It. 

That’s all, one more demo post done with the K8s bits, Happy kubectl K8s!

Managed K8s – Amazon EKS cluster and worker nodes via CloudFormation

This demo post is about deploying an Amazon EKS cluster with worker nodes, both launched into a new VPC. EKS runs the Kubernetes control plane instances across multiple Availability Zones to ensure high availability, and it automatically detects and replaces unhealthy control plane instances. Amazon EKS is certified Kubernetes conformant, so you can use existing tooling and plugins from partners and the Kubernetes community.

Why use Amazon EKS?

  1. Hybrid container deployments – run highly available and scalable K8s clusters with their worker nodes on Amazon Web Services while maintaining full compatibility with your existing K8s deployments
  2. Microservices – run microservices applications with deep integrations to AWS services, while getting access to out-of-the-box Kubernetes (K8s) functionality
  3. Lift and shift migration – easily lift, shift, and migrate existing applications to Amazon EKS without needing to refactor your code or tooling
  4. Authentication just works with IAM

Prerequisites

Ensure the following components are installed and set up before starting with Amazon EKS:

  1. AWS CLI – while you can use the AWS Console to create a cluster in EKS, the AWS CLI is easier. The minimum required version is 1.16.73.
  2. kubectl – used for communicating with the cluster API server. See the Kubernetes documentation for installation instructions.
  3. aws-iam-authenticator – allows IAM authentication with the Kubernetes cluster

To install aws-iam-authenticator with Homebrew

The easiest way to install the aws-iam-authenticator is with Homebrew.

  1. If you do not already have Homebrew installed on your Mac, install it with the following command:
/bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/master/install.sh)"
  2. Install the aws-iam-authenticator with the following command:
brew install aws-iam-authenticator
  3. Test that the aws-iam-authenticator binary works:
aws-iam-authenticator help

Set up a new IAM role with EKS permissions

  • Go to the IAM Management Console.
  • Click Create role and set up a role with EKS permissions.

Infra as Code – AWS CloudFormation

We will use some templates to provision the control plane for Kubernetes and its associated worker nodes. Before that, I have already performed the steps below to create a blank git branch and push the upstream commit to GitHub.

  1. Create a new git repo
  2. Create a new feature branch
  3. Push the empty branch upstream:

git push --set-upstream origin feature/deploy-eks-cluster-vpc

There are plenty of AWS CloudFormation templates available for provisioning EKS clusters and worker nodes. I have picked one from the official AWS documentation. These can be further modified and supplied with other parameters on an as-needed basis.

https://github.com/varunmaggo/IaCTemplates

To execute this infra as code, we switch over to AWS CloudFormation and create a new stack for the EKS cluster and worker nodes.
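The same stack can also be created from the CLI. A sketch, assuming the sample VPC template under the 2019-01-09 S3 path used later in this post (the template file name is an assumption):

# create the VPC stack for the EKS cluster (template name is an assumption)
aws cloudformation create-stack --stack-name eks-vpc \
  --template-url https://amazon-eks.s3-us-west-2.amazonaws.com/cloudformation/2019-01-09/amazon-eks-vpc-sample.yaml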

Worker network configuration needed for the EKS cluster.

We will need the VPC, subnet, and security group IDs below when we run the AWS commands for cluster provisioning.

Switch over to zsh to provision the EKS cluster. More info here: https://docs.aws.amazon.com/eks/latest/userguide/create-cluster.html
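A sketch of the create-cluster call, with placeholders for the IAM role ARN and the VPC resources captured from the CloudFormation outputs above:

# placeholders <...> come from the IAM role and CloudFormation outputs
aws eks create-cluster --region ap-southeast-2 --name demo \
  --role-arn arn:aws:iam::<account-id>:role/<eks-service-role> \
  --resources-vpc-config subnetIds=<subnet-1>,<subnet-2>,securityGroupIds=<sg-id>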

It takes a couple of minutes before the cluster status turns to ACTIVE.

Next, we proceed with updating our kubeconfig file with the information on the new cluster so that kubectl can communicate with it.

To do this, we use the AWS CLI update-kubeconfig command

aws eks --region ap-southeast-2 update-kubeconfig --name demo

We can now test our configuration using the kubectl get svc command:
kubectl get svc




This is handy information about the cluster and worker nodes: instance sizing, volume size, and the associated VPC and subnets.

Once the worker nodes are created, we need to ensure we can communicate with them from our machine. For that, we make use of the ARN of the node instance role.

We execute the curl command to download the YAML file, then make the code change to add the instance role ARN.

curl -O https://amazon-eks.s3-us-west-2.amazonaws.com/cloudformation/2019-01-09/aws-auth-cm.yaml
kubectl apply -f aws-auth-cm.yaml
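For reference, the section of aws-auth-cm.yaml being edited looks like this; the rolearn placeholder gets replaced with the instance role ARN from the worker node stack outputs:

apiVersion: v1
kind: ConfigMap
metadata:
  name: aws-auth
  namespace: kube-system
data:
  mapRoles: |
    - rolearn: <ARN of instance role (not instance profile)>
      username: system:node:{{EC2PrivateDNSName}}
      groups:
        - system:bootstrappers
        - system:nodes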

The next step is to see the worker nodes. In the previous step, we just added the ARN for the worker nodes; the nodes themselves were already created, so we just have to list them using native Kubernetes commands.

To see the worker nodes, we execute the command below:

kubectl get nodes

That’s It.

That’s all that’s needed to start playing with the K8s bits, kubectl-ing along! This concludes this post; the next one continues with deploying a container image and debugging the logs.