Dockerize MuleSoft & Deploy with Harness – Canary Deployment

Overview and Scope

This demo post is about containerising the MuleSoft Enterprise Edition “mule-ee-distribution” standalone runtime, using CentOS as the base image and adding multiple layers, including the required Java binaries, via a Dockerfile. A 30-day trial is used for this demo implementation. The focus is on containerising the runtime and using that image in a Harness CI/CD workflow and pipeline, with Docker Hub used to store the image.

Harness CI/CD is used to build the infrastructure and configure the Application, Services, and Environments using workflows and pipelines. This post uses the Canary Deployment model, a pattern for rolling out releases to a subset of users or servers: first deploy the change to a small subset of servers, test it, and then roll the change out to the rest.

Prerequisites:-

  1. Docker desktop for Mac
  2. Kubernetes – kubectl – https://kubernetes.io/docs/tasks/tools/
  3. Harness trial account – for building Infrastructure using workflows and pipelines.
  4. MuleSoft Enterprise Edition – standalone trial from http://www.mulesoft.org
  5. Java Binaries – AdoptOpenJDK [openjdk8-binaries]
  6. IDE – VSCode for building dockerfile
  7. Homebrew – Package Manager for macOS

Dockerize the Mulesoft – Create dockerfile with multiple layers

In order to containerise MuleSoft, we need the below dependencies in the Dockerfile.

  • Java binaries are prerequisites.
  • CentOS is used as the base image.
  • Ports are exposed for communication; below are the requirements for this demo post.

Port                    Description
8081-8082, 8091-8092    Required by the Mule apps
1098                    Mule JMX port (must match the config file)
5000                    Mule remote debugger
7777                    Required for the Mule agent

  • Required volumes – mount points are defined below (a sample docker run wiring these up follows the table).

Mount point          Description
/opt/mule/apps       Application deployment directory
/opt/mule/domains    Domains deployment directory
/opt/mule/conf       Configuration directory
/opt/mule/logs       Logs directory
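For reference, once the image from the next section is built, a docker run command wiring up these ports and mount points could look like the sketch below; the image name and host paths are placeholders, not values from the original post.

docker run -d --name mule-ee \
  -p 8081-8082:8081-8082 \
  -p 8091-8092:8091-8092 \
  -p 1098:1098 \
  -p 5000:5000 \
  -p 7777:7777 \
  -v "$(pwd)/apps":/opt/mule/apps \
  -v "$(pwd)/domains":/opt/mule/domains \
  -v "$(pwd)/conf":/opt/mule/conf \
  -v "$(pwd)/logs":/opt/mule/logs \
  <dockerhub-user>/mule-ee:4.3.0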

Docker Image Packaging for Mule ESB – http://www.mulesoft.org

Create a Dockerfile that will contain the MuleSoft standalone runtime:

FROM         centos:latest

# Define docker build ARGS
ARG JAVA_BINARIES=https://github.com/AdoptOpenJDK/openjdk8-binaries/releases/download/jdk8u265-b01/OpenJDK8U-jdk_x64_linux_hotspot_8u265b01.tar.gz
ARG RUNTIME_VERSION=4.3.0

# Define environment variables
ENV   JAVA_BINARIES $JAVA_BINARIES
ENV   TMP_DIR /tmp/
ENV   JAVA_HOME /opt/jdk
ENV   PATH $JAVA_HOME/bin:${PATH}
ENV   MULE_ARGS "${MULE_ARGS:-start}"
ENV   RUNTIME_VERSION $RUNTIME_VERSION
ENV   MULE_HOME /opt/mule

WORKDIR     /opt

# Install required binaries
RUN curl -L ${JAVA_BINARIES} -o jdk.tar.gz && \
  mkdir jdk && \
  tar xf jdk.tar.gz -C jdk --strip-components 1 && \
  rm -rf jdk.tar.gz && \
  curl -L http://s3.amazonaws.com/new-mule-artifacts/mule-ee-distribution-standalone-${RUNTIME_VERSION}.tar.gz -O && \
  tar xvf mule-ee-distribution-standalone-${RUNTIME_VERSION}.tar.gz && \
  rm -rf mule-ee-distribution-standalone-${RUNTIME_VERSION}.tar.gz && \
  ln -s /opt/mule-enterprise-standalone-${RUNTIME_VERSION} mule  && \
  adduser mule && \
  chown -R mule:mule /opt/mule-enterprise-standalone-${RUNTIME_VERSION} 
  
# Define mount points
VOLUME ["/opt/mule/logs", "/opt/mule/conf", "/opt/mule/apps", "/opt/mule/domains","/opt/mule/libs"]

# HTTP Service Port
# Required by the Mule Apps
EXPOSE      8081-8082
EXPOSE      8091-8092

# Mule remote debugger
EXPOSE      5000

# Mule agent 
EXPOSE      7777

# Mule JMX port (match Mule config file)
EXPOSE      1098

# Start Mule runtime
USER mule

# Execute the command (shell form is used so that $MULE_ARGS is expanded at runtime;
# the exec form would pass the literal string "$MULE_ARGS" to the binary)
CMD /opt/mule/bin/mule $MULE_ARGS

In order to save time, I have pushed this image to Docker Hub; it will be used in the Harness CI/CD pipeline in the following steps.
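The build-and-push itself is standard Docker CLI; a quick sketch, assuming a hypothetical Docker Hub namespace:

# Build, log in, and push (replace <dockerhub-user> with your namespace)
docker build -t <dockerhub-user>/mule-ee:4.3.0 .
docker login
docker push <dockerhub-user>/mule-ee:4.3.0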

Setup and Configure Harness CI/CD

The high-level steps are:

  1. Install the Harness Delegate
  2. Connect to your Infrastructure
  3. Connect to Artifact Repositories [dockerhub is used for this post]
  4. Configure your Application, Services, and Environments
  5. Build your Workflows and Pipeline then deploy!

For the Canary Deployment, we start with 30% of the instances in the first phase.

Deploy

Using Harness CI/CD, we create a new deployment and use the latest tag for the service created above.

Canary Deployment success – after a few minutes of running, we see the container image successfully rolled out.

Next Steps

MuleSoft setup and configuration is a separate topic altogether; it will be part of a following demo post.

Deploy Apache Solr – Harness on Local K8s via Docker registry

This demo post is about deploying Apache Solr on local Kubernetes (Minikube) with a Harness Kubernetes Deployment. It uses a Harness Delegate with Minikube and creates the infrastructure definition, workflow, and pipeline to deploy Apache Solr. The container image will be pulled from the Docker public registry.

  • Apache Solr – designed for scalability and fault tolerance. Solr is widely used for enterprise search and analytics use cases and has an active development community and regular releases. Solr runs as a standalone full-text search server.
  • Harness.io – supports Kubernetes deployments to rapidly onboard and deploy microservices, with support for Canary, Helm, Istio, and Kubernetes pods and nodes.
  • Minikube – run Kubernetes locally. Minikube implements a local Kubernetes cluster on macOS, Linux, and Windows.

Prerequisites

  • Harness account – I have used a professional trial account – link here
  • Docker Desktop – this demo is using 3.10
  • kubectl installed – this demo uses Homebrew on Mac for the installation.
  • Minikube – used for local Kubernetes application development; it supports all the Kubernetes features needed for this demo's sandbox environment.
  • Docker Registry – Apache Solr is pulled from registry.hub.docker.com/library/solr:latest

Local K8s Setup with Minikube

Minikube is downloaded and installed with sudo access.

curl -LO https://storage.googleapis.com/minikube/releases/latest/minikube-darwin-amd64

sudo install minikube-darwin-amd64 /usr/local/bin/minikube

Check the installed Minikube version and start Minikube.
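For reference, the two commands are:

minikube version
minikube start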

Harness Local Delegate Setup

We set up a trial account with Harness – https://harness.io/free-trial/

The Harness Delegate is a service that runs in your local network or VPC to connect your artifact, infrastructure, verification, and other providers with the Harness Manager.

We need a Kubernetes YAML file that can be applied to create the Delegate workload.

If you downloaded the Delegate, extract the YAML file’s folder from the download and then navigate to the harness-delegate-kubernetes folder that you extracted:

tar -zxvf harness-delegate-kubernetes.tar.gz
cd harness-delegate-kubernetes

In the below file, we will be using the below info throughout the demo.

  • Namespace – harness-delegate
  • Resource CPU – 1
  • Resource memory – 8 GiB

We switch to the folder and execute the below commands

Check that kubectl is installed, then create the required resources/workloads:

kubectl apply -f harness-delegate.yaml

Check that the pods were created in the required namespace:

kubectl get pods -n harness-delegate

After waiting a few minutes, the pod status is still Pending. To view the events for the pod, we can describe it:

kubectl describe pods minik8s-qjrewn-0 -n harness-delegate

There is a FailedScheduling event for the pod due to insufficient memory.

Reconfigure Minikube with more resources.

minikube stop 
minikube start --cpus 4 --memory 8192

We check the events for the workload while the container is being created.

Since the Harness Delegate is all set up, we can add a Kubernetes Cluster cloud provider; this cluster will be used later for creating workloads.

Harness Workflow and Pipelines

In order to deploy Apache Solr, we will use a Harness workflow and pipeline, pulling the container image from the Docker public registry:

  • Create an Application
  • Set up an environment for the infra definition
  • Create a workflow and the pipeline that executes it.

Screenshots below for the steps performed.

Next step is to create Services, Environments, Workflow and Pipelines.

We will be using Docker Public Repository. Docker Registry – Apache Solr is pulled from registry.hub.docker.com/library/solr:latest

Infrastructure Definition is created with the cloud providers and Deployment type.

Since this is a simple image deployment, a single-stage pipeline can be demoed. Based on complexity, we can add further stages.

Start a new deployment with the latest service tag. This will trigger the workflow/pipeline for automated rolling deployment.

In the image below, we can see the status of the rolling deployment; it takes a couple of minutes to complete. Meanwhile, we can also have a look at the Minikube dashboard.

In the image below, we can see the rolling deployment is completed.

  • Deployment – harness-example-deployment
  • Namespace – harness-example

That’s It. 

That's all that's needed to start playing with the K8s, Harness, and Docker bits!

Managed K8s – Azure AKS – Deploy WordPress CMS with Dockerized image

This demo blog post is about deploying WordPress CMS by creating a Docker image from scratch, using Azure Container Registry to privately store the dockerized image, and then using managed K8s – Azure Kubernetes Service – to deploy that image onto an AKS cluster and its worker nodes.

Approach

  1. Build the docker image for WordPress
  2. Use Azure container registry to store the dockerized image
  3. Create Azure Database for MySQL
  4. Create AKS Cluster and deploy the image
  5. Create Kubernetes manifest file with the desired state for WordPress CMS.

Prerequisites

  1. Microsoft Azure Account – free trial works.
  2. Azure CLI
  3. Kubernetes – Link here
  4. Docker Desktop for MacOS
  5. A link to WordPress to download and build the Docker image

Build Docker Image

Grab the latest free version of WordPress. Create a new directory named WordPress for your project and use this simple folder structure.

Open the compressed file to view the contents for the package.

Rename wp-config-sample.php to wp-config.php

The wp-config.php file is edited to read the database host, username, and password from the Kubernetes manifest file.

Before creating the Dockerfile, we just need to restructure the folders as below.

Create a blank Dockerfile

Create a new Dockerfile; the file sets up an Apache web server with PHP and enables the mysqli extension.
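Since the original code snippet was a screenshot, below is a minimal sketch of such a Dockerfile, assuming the official php:7.4-apache base image and a wordpress/ folder in the build context:

cat > Dockerfile <<'EOF'
FROM php:7.4-apache
# Enable the mysqli extension that WordPress uses to talk to MySQL
RUN docker-php-ext-install mysqli
# Copy the WordPress files into Apache's web root
COPY wordpress/ /var/www/html/
EXPOSE 80
EOF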

Before executing the docker build command, ensure Docker Desktop is installed and running successfully.

docker build --tag myblog:latest .


It'll take some time to build the image with tags. Once it's completed, we can open up Docker Desktop and view the latest image added on our machine. myblog with the tag latest is added here.

Use ACR – Azure Container Registry to store the dockerized image

az acr create --resource-group wordpress-project \
  --name wpAzureContainerRegistry2020 --sku Basic

Log in to the ACR that was just created.
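Using the registry name created above, the login is:

az acr login --name wpAzureContainerRegistry2020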

Tag and Push the dockerized image to this ACR.

docker tag 7eb3872a0449 wpazurecontainerregistry2020.azurecr.io/7eb3872a0449:latest
docker push wpazurecontainerregistry2020.azurecr.io/7eb3872a0449:latest

Create AKS cluster

az aks create --resource-group wordpress-project --name wordpresscluster --node-count 1 --generate-ssh-keys

Also attach the ACR registry to this same AKS cluster (the cluster name matches the one created above):

az aks update -n wordpresscluster -g wordpress-project \
  --attach-acr wpAzureContainerRegistry2020

Install kubectl with the az aks install-cli command and pull the cluster credentials so kubectl commands can be executed.
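A short sketch of that flow, reusing the resource group and cluster name from the steps above:

az aks install-cli
az aks get-credentials --resource-group wordpress-project --name wordpresscluster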

kubectl get nodes 

Once the nodes are set up, we need to ensure Azure Database for MySQL is up and running.

After successfully creating the Azure Database for MySQL, back to VSCode.
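The original manifest appears as a screenshot; here is a minimal sketch of what such a manifest could look like, applied via a heredoc. The image reference matches the tag pushed above, while the env variable names and database values are placeholders that must match whatever wp-config.php reads:

kubectl apply -f - <<'EOF'
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myblog
spec:
  replicas: 1
  selector:
    matchLabels:
      app: myblog
  template:
    metadata:
      labels:
        app: myblog
    spec:
      containers:
      - name: myblog
        image: wpazurecontainerregistry2020.azurecr.io/7eb3872a0449:latest
        ports:
        - containerPort: 80
        env:
        - name: DB_HOST        # placeholder - Azure Database for MySQL host
          value: "<server>.mysql.database.azure.com"
        - name: DB_USER        # placeholder
          value: "<db-user>"
        - name: DB_PASSWORD    # placeholder
          value: "<db-password>"
---
apiVersion: v1
kind: Service
metadata:
  name: myblog
spec:
  type: LoadBalancer
  selector:
    app: myblog
  ports:
  - port: 80
    targetPort: 80
EOF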

kubectl get service

It takes some time for all the pods to be set up and running successfully.

Use the external IP [20.53.109.149] from the above screenshot and it just works – it loads the WordPress admin installation page.

That’s It. 

That's all – one more demo post done with the K8s bits. Happy kubectl, K8s!

Managed K8s – Amazon EKS cluster and worker nodes via Cloud Formation

This demo post is about deploying an Amazon EKS cluster with worker nodes, launched into a new VPC. EKS runs Kubernetes control plane instances across multiple Availability Zones to ensure high availability, and it automatically detects and replaces unhealthy control plane instances. Amazon EKS is certified Kubernetes conformant, so you can use existing tooling and plugins from partners and the Kubernetes community.

Why use Amazon EKS?

  1. Hybrid container deployments – run highly available and scalable K8s clusters with their worker nodes on Amazon Web Services, maintaining full compatibility with your existing K8s deployments.
  2. Microservices – run microservices applications with deep integrations to AWS services, while getting access to out-of-the-box Kubernetes (K8s) functionality.
  3. Lift and shift migration – easily migrate existing applications to Amazon EKS without needing to refactor your code or tooling.
  4. Authentication just works with IAM.

Pre-requisites:-

Ensure the following components are installed and set up before starting with Amazon EKS:

  1. AWS CLI – while you can use the AWS Console to create a cluster in EKS, the AWS CLI is easier. The minimum required version is 1.16.73.
  2. Kubectl – used for communicating with the cluster API server. For further instructions on installing, click here.
  3. AWS-IAM-Authenticator – to allow IAM authentication with the Kubernetes cluster

To install aws-iam-authenticator with Homebrew

The easiest way to install the aws-iam-authenticator is with Homebrew.

  1. If you do not already have Homebrew installed on your Mac, install it with the following command:
/bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/master/install.sh)"
  2. Install the aws-iam-authenticator with the following command:
brew install aws-iam-authenticator
  3. Test that the aws-iam-authenticator binary works:
aws-iam-authenticator help

Set up a new IAM role with EKS permissions

  • Go to the IAM Management Console.
  • Click Create role.

Infra as Code – AWS Cloud Formation

We use some templates to provision the master control plane for Kubernetes and its associated worker nodes. Before that, I performed the below steps to create a blank git branch and push the upstream commit on GitHub.

  1. Create a new Git repo
  2. Create a new feature branch
  3. Push the empty branch upstream with the git command below

git push --set-upstream origin feature/deploy-eks-cluster-vpc

There are plenty of AWS CloudFormation templates available for provisioning AWS EKS clusters and worker nodes. I have picked one from the AWS official documentation. These can be further modified and supplied with other parameters on an as-needed basis.

https://github.com/varunmaggo/IaCTemplates

In order to execute this infra as code, we switch over to AWS CloudFormation and create a new stack for the EKS cluster and worker nodes.

Worker network configuration is needed for the EKS cluster.

We will need the below VPC, subnets, and security groups when we run the AWS commands for cluster provisioning.

Switch over to ZSH to provision the EKS cluster. More info here – https://docs.aws.amazon.com/eks/latest/userguide/create-cluster.html
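As a sketch, the CLI call looks like the below; the role ARN, subnets, and security group are placeholders to be taken from the CloudFormation outputs above:

aws eks create-cluster \
  --region ap-southeast-2 \
  --name demo \
  --role-arn arn:aws:iam::<account-id>:role/<eks-service-role> \
  --resources-vpc-config subnetIds=<subnet-1>,<subnet-2>,securityGroupIds=<sg-id>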

It takes a couple of minutes before the cluster status turns to successfully created.

Next, we update our kubeconfig file with the new cluster's information so kubectl can communicate with it. To do this, we use the AWS CLI update-kubeconfig command:

aws eks --region ap-southeast-2 update-kubeconfig --name demo

We can now test our configuration using the kubectl get svc command:

kubectl get svc




This is handy information for the cluster name and worker nodes: instance sizing, volume size, and the associated VPC and subnets.

Once the worker nodes are created, we need to ensure we can communicate with them from our machine. For that, we need the ARN of the instance role.

We execute the curl command to download the YAML file, then make the code change (adding the instance role ARN) before applying it:

curl -O https://amazon-eks.s3-us-west-2.amazonaws.com/cloudformation/2019-01-09/aws-auth-cm.yaml
kubectl apply -f aws-auth-cm.yaml

The next step is to see the worker nodes. In the previous step, we just added the ARN for the worker nodes; the nodes themselves were already created, so we just have to list them using native Kubernetes commands.

In order to see the worker nodes, we execute the below command:

kubectl get nodes

That’s It.

That's all that's needed to start playing with the K8s bits – kubectl along with K8s! This concludes this post; the next one continues with deploying a container image and debugging the logs.

Deploy multiple docker container to Azure container registry

This post is about deploying multiple Docker containers to Azure Container Registry. The below demo customer manager application and microservices will be containerised using Docker and then pushed over to Azure Container Registry as a private registry.

The application uses the below tech stack and is forked from https://github.com/DanWahlin/Angular-Docker-Microservices

Tech stack           Version/toolset
Frontend app         Angular 6, Node.js, MongoDB
Container/imaging    Docker Desktop
Web server           Nginx
Microservices        ASP.NET Core, PostgreSQL
Private registry     Azure Container Registry

Prerequisites

  • Homebrew on Mac
  • Git version control
  • npm installed – v14.2.0
  • Docker Community Edition installed – https://docs.docker.com/docker-for-mac/install/
  • An account in Microsoft Azure to create a container registry using the Azure portal.

Build the source code on local

Check that Git is installed, then clone the GitHub repository into a local working folder.

git clone https://github.com/varunmaggo/Angular-Docker-Microservices

Once the git clone is completed, the local folder has the required files. I use VSCode for editing the configuration files.

In order to build the solution locally, we install the local packages/dependencies. Execute the below commands; it takes a few minutes to install the required packages.

  1. Install Angular CLI: npm install @angular/cli -g
  2. Run npm install at the root of the project
  3. Run npm install in ./microservices/node

Once the packages are installed, just execute the command ng build.

Dockerize the Application/Micro-services

Now we need to add and configure a docker-compose.yml file; since we have multiple services, multiple ports need to be configured as well.

Before building via Docker, we add Docker Compose support to the project, along with multiple Dockerfiles, which are as follows (see the sketch after this list):

  • container_name: nginx – .docker/nginx.dockerfile
  • container_name: nodeapp – .docker/node.development.dockerfile
  • container_name: aspnetcoreapp – .docker/aspnetcore.development.dockerfile
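Since the compose file itself is shown in a screenshot, below is a minimal sketch of what a docker-compose.yml wiring up these three containers might look like; the ports and build contexts are assumptions, not the repo's exact values:

cat > docker-compose.yml <<'EOF'
version: "3"
services:
  nginx:
    container_name: nginx
    build:
      context: .
      dockerfile: .docker/nginx.dockerfile
    ports:
      - "8080:80"
  nodeapp:
    container_name: nodeapp
    build:
      context: .
      dockerfile: .docker/node.development.dockerfile
    ports:
      - "3000:3000"
  aspnetcoreapp:
    container_name: aspnetcoreapp
    build:
      context: .
      dockerfile: .docker/aspnetcore.development.dockerfile
    ports:
      - "5000:5000"
EOF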

Build the solution via Docker commands.

Run docker-compose build

Once the solution builds successfully, all the images are visible. Also check the tagging, which is crucial when the Docker images are pushed later. I ran this demo yesterday, so my images show yesterday's creation date.

Run docker images

Run docker-compose up

It builds, (re)creates, starts, and attaches to containers for a service.

Now the application has started and is accessible on localhost.

Also, we can see in the Docker dashboard the available ports and the services' availability.

Create Azure Container Registry to push dockerized application/micro-services

Open the Microsoft Azure portal to create the Azure Container Registry – https://portal.azure.com

We create a new Azure Container Registry – MiniCRM.

minicrmdemo.azurecr.io

docker login minicrmdemo.azurecr.io

Before pushing an image to the ACR registry, we must tag it with the fully qualified name of the ACR login server – in this case, minicrmdemo.azurecr.io.

docker tag nginx minicrmdemo.azurecr.io/angular-docker-microservices:latest 

Similar docker tag commands can be executed for the other images. The next step is to push the Docker containers to ACR.

docker push minicrmdemo.azurecr.io/angular-docker-microservices:latest

Next Demo

To use this Docker image from the container registry, we can launch a web app in Azure; that will be a separate post in itself.

Multi-Cloud with Terraform – Any infra, anywhere provision

This post is about provisioning Amazon Web Services EC2 instances using Terraform infrastructure as code; we can create reproducible infrastructure for Dev, UAT, SIT, and Production environments by automating Terraform scripts. Terraform allows you to split your configuration into as many files as you wish.

We can execute these same scripts against multiple cloud providers – Microsoft Azure, Amazon Web Services, GCP, etc. Terraform, as an infrastructure-as-code tool, makes this task simple and easy to perform. We can use Terraform to provision and manage any cloud, infrastructure, or service.

Terraform infrastructure is configured and provisioned using HCL – the HashiCorp Configuration Language. Below are the most commonly used commands for HCL infra provisioning.

Command     Purpose
init        Initialize the working directory
plan        Execute the plan for the resources managed by the .tf files
apply       Build or change the infrastructure that is defined as code
version     Print the Terraform version
destroy     Destroy the infrastructure managed by Terraform
validate    Validate the Terraform files against the targeted schema
graph       Output a visual graph of Terraform resources

There can be multiple reasons for a client to go for a multi-cloud approach:-

  • Cost considerations
  • Vendor diversity
  • Vendor leverage
  • Desire to use unique services
  • Need for a particular region [low latency]

Prerequisites

Once you download the Terraform binaries, ensure the environment variables are set up. I have set up my variables, considering I placed the binary in the C drive. Run a version check at the command prompt as in the below screenshot.
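The check itself is just:

terraform version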

Implementation – Demo for AWS provisioning

To keep the length of the post limited, and since most of the code is very similar, I will limit this demo to AWS. A similar approach can be adopted for Azure: Terraform has separate providers for Microsoft Azure that can be used to configure infrastructure through the Azure Resource Manager APIs.

More info here – https://www.terraform.io/docs/providers/azurerm/index.html

This configuration provisions infrastructure on Amazon Web Services. I will post the .tf file below for AWS, which needs to be executed using an AWS access key and secret access key.

I have added the Azure Terraform plugin as a Visual Studio Code extension for IntelliSense, but you can use any other tool for the same.

Note – AWS Access Key and AWS Secret Access Key should be configured on the host running this Terraform configuration.

export AWS_ACCESS_KEY_ID=XXXXXXXXXXXXXXXXXXXX
export AWS_SECRET_ACCESS_KEY=XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX

Below is the code required to provision via the AWS provider:-

provider "aws" {
  region = "eu-west-1"
}

module "aws_vpc" {
  source  = "terraform-aws-modules/vpc/aws"
  version = "1.5.1"

  name                 = "${var.configuration_name}-vpc"
  cidr                 = "10.0.0.0/16"
  enable_dns_hostnames = true
  enable_dns_support   = true
  azs                  = ["eu-west-1a", "eu-west-1b", "eu-west-1c"]
  public_subnets       = ["10.0.101.0/24", "10.0.102.0/24", "10.0.103.0/24"]
}

module "aws_asg" {
  source  = "terraform-aws-modules/autoscaling/aws"
  version = "2.0.0"

  name            = "${var.configuration_name}-asg"
  image_id        = "${data.aws_ami.amazon_linux.id}"
  instance_type   = "t2.nano"
  security_groups = ["${aws_security_group.sg.id}"]
  user_data       = "${data.template_file.web_server_aws.rendered}"
  load_balancers  = ["${module.aws_elb.this_elb_id}"]

  root_block_device = [
    {
      volume_size = "8"
      volume_type = "gp2"
    },
  ]

  vpc_zone_identifier = "${module.aws_vpc.public_subnets}"

  health_check_type         = "EC2"
  min_size                  = 3
  max_size                  = 3
  desired_capacity          = 3
  wait_for_capacity_timeout = 0
}

module "aws_elb" {
  source  = "terraform-aws-modules/elb/aws"
  version = "1.4.1"

  name            = "elb"
  subnets         = ["${module.aws_vpc.public_subnets}"]
  security_groups = ["${aws_security_group.sg.id}"]
  internal        = false

  listener = [
    {
      instance_port     = "80"
      instance_protocol = "HTTP"
      lb_port           = "80"
      lb_protocol       = "HTTP"
    },
  ]

  health_check = [
    {
      target              = "HTTP:80/"
      interval            = 30
      healthy_threshold   = 2
      unhealthy_threshold = 2
      timeout             = 5
    },
  ]
}

resource "aws_security_group" "sg" {
  name        = "${var.configuration_name}-sg"
  description = "security group for ${var.configuration_name}"
  vpc_id      = "${module.aws_vpc.vpc_id}"

  ingress {
    from_port   = 80
    to_port     = 80
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  egress {
    from_port   = 0
    to_port     = 65535
    protocol    = "udp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  egress {
    from_port   = 0
    to_port     = 65535
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }
}

data "aws_ami" "amazon_linux" {
  most_recent = true

  filter {
    name   = "name"
    values = ["amzn-ami-hvm-*-x86_64-gp2"]
  }

  filter {
    name   = "owner-alias"
    values = ["amazon"]
  }
}

data "template_file" "web_server_aws" {
  template = "${file("${path.module}/web-server.tpl")}"

  vars {
    cloud = "aws"
  }
}

data "aws_availability_zones" "available" {
  state = "available"
}

Commands to be executed for Terraform:

  1. Initialise Terraform: terraform init
  2. Execute a plan: terraform plan -out=1.tfplan
  3. Execute an apply: terraform apply 1.tfplan
  4. Destroy once you are done with the task at hand and no longer need the resources: terraform destroy

Upcoming Demo – Azure provisioning

This is pending for another post; I will put up the Azure demo with screenshots separately.

Azure Active Directory – Bulk user import to a new organisation via .csv with PowerShell

As part of this demo post, the objective is to use PowerShell to do a bulk user import into Azure Active Directory. We have a specific TenantId in an organisation, and PowerShell makes this task a lot easier.

As a test sample, we will take 10 users that are in the .csv file to be imported via PowerShell.
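For illustration, the first rows of such a .csv might look like this; the column names mirror the properties the script below reads, and the values are made up:

DisplayName,MailNickName,UserPrincipalName,PasswordProfile,City,Country,Department,JobTitle,Mobile
Jane Citizen,jane.citizen,jane.citizen@VMADDemo.OnMicrosoft.com,P@ssw0rd!2020,Sydney,Australia,IT,Engineer,0400000000
John Citizen,john.citizen,john.citizen@VMADDemo.OnMicrosoft.com,P@ssw0rd!2020,Sydney,Australia,HR,Analyst,0400000001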

For this demo, we create a new organization below and a separate domain.

  • Organization Name – VanillaCaffeine
  • Domain Name – VMADDemo.OnMicrosoft.com



Open Windows PowerShell ISE with admin rights and install the below module. It will be used to connect to Azure Active Directory from your local machine.

Install-Module AzureAD

We connect to the Azure cloud and note the tenant id for this demo post.


The below PowerShell reads the .csv file from your local drive, loops through the list of records, and imports them into Azure Active Directory.

The PowerShell scripts and template are on the GitHub link:

https://github.com/varunmaggo/PowerShellScripts

[CmdletBinding()]
Param(
    [Parameter(Position = 0, Mandatory = $True, HelpMessage = 'CSV file')]
    [Alias('CSVFile')]
    [string]$FilePath,
    [Parameter(Position = 1, Mandatory = $false, HelpMessage = 'Put Credentials')]
    [Alias('Cred')]
    [PSCredential]$Credential,
    # MFA Account for Azure AD Account
    [Parameter(Position = 2, Mandatory = $false, HelpMessage = 'MFA enabled?')]
    [Alias('2FA')]
    [Switch]$MFA,
    [Parameter(Position = 3, Mandatory = $false, HelpMessage = 'Azure AD Group Name')]
    [Alias('AADGN')]
    [string]$AadGroupName
)

Function Install-AzureAD {
    Set-PSRepository -Name PSGallery -InstallationPolicy Trusted -Verbose:$false
    Install-Module -Name AzureAD -AllowClobber -Verbose:$false
}

# Read the user records from the CSV file
Try {
    $CSVData = @(Import-CSV -Path $FilePath -ErrorAction Stop)
    Write-Verbose "Successfully imported entries from $FilePath"
    Write-Verbose "Total no. of entries in CSV are : $($CSVData.count)"
}
Catch {
    Write-Verbose "Failed to read from the CSV file, PS $FilePath Exiting!"
    Break
}

# Load the AzureAD module, installing it if it is not present
Try {
    Import-Module -Name AzureAD -ErrorAction Stop -Verbose:$false | Out-Null
}
Catch {
    Write-Verbose "Azure AD PowerShell Module not found..."
    Write-Verbose "Installing Azure AD PowerShell Module..."
    Install-AzureAD
}

# Connect to Azure AD with the tenant used for this demo
Try {
    Write-Verbose "Connecting to Azure AD..."
    if ($MFA) {
        Connect-AzureAD -TenantId efcb2733-e012-4628-bae4-a96147285b5a -ErrorAction Stop | Out-Null
    }
    Else {
        Connect-AzureAD -TenantId efcb2733-e012-4628-bae4-a96147285b5a
    }
}
Catch {
    Write-Verbose "Cannot connect to Azure AD. Please check your credentials. Exiting!"
    Break
}

Foreach ($Entry in $CSVData) {
    # Verify that mandatory properties are defined for each object
    $DisplayName = $Entry.DisplayName
    $MailNickName = $Entry.MailNickName
    $UserPrincipalName = $Entry.UserPrincipalName
    $Password = $Entry.PasswordProfile

    If (!$DisplayName) {
        Write-Warning '$DisplayName is not provided. Continue to the next record'
        Continue
    }
    If (!$MailNickName) {
        Write-Warning '$MailNickName is not provided. Continue to the next record'
        Continue
    }
    If (!$UserPrincipalName) {
        Write-Warning '$UserPrincipalName is not provided. Continue to the next record'
        Continue
    }
    If (!$Password) {
        Write-Warning "Password is not provided for $DisplayName in the CSV file!"
        $Password = Read-Host -Prompt "Enter desired Password" -AsSecureString
        $BSTR = [System.Runtime.InteropServices.Marshal]::SecureStringToBSTR($Password)
        $Password = [System.Runtime.InteropServices.Marshal]::PtrToStringAuto($BSTR)
        $PasswordProfile = New-Object -TypeName Microsoft.Open.AzureAD.Model.PasswordProfile
        $PasswordProfile.Password = $Password
        $PasswordProfile.EnforceChangePasswordPolicy = 1
        $PasswordProfile.ForceChangePasswordNextLogin = 1
    }
    Else {
        $PasswordProfile = New-Object -TypeName Microsoft.Open.AzureAD.Model.PasswordProfile
        $PasswordProfile.Password = $Password
        $PasswordProfile.EnforceChangePasswordPolicy = 1
        $PasswordProfile.ForceChangePasswordNextLogin = 1
    }

    # Create the user and optionally add them to an Azure AD group
    Try {
        New-AzureADUser -DisplayName $DisplayName `
            -AccountEnabled $true `
            -MailNickName $MailNickName `
            -UserPrincipalName $UserPrincipalName `
            -PasswordProfile $PasswordProfile `
            -City $Entry.City `
            -Country $Entry.Country `
            -Department $Entry.Department `
            -JobTitle $Entry.JobTitle `
            -Mobile $Entry.Mobile | Out-Null
        Write-Verbose "$DisplayName : AAD Account is created successfully!"
        If ($AadGroupName) {
            Try {
                $AadGroupID = Get-AzureADGroup -SearchString "$AadGroupName"
            }
            Catch {
                Write-Error "$AadGroupName : does not exist. $_"
                Break
            }
            $ADuser = Get-AzureADUser -ObjectId "$UserPrincipalName"
            Add-AzureADGroupMember -ObjectId $AadGroupID.ObjectID -RefObjectId $ADuser.ObjectID
            Write-Verbose "Assigning the user $DisplayName to Azure AD Group $AadGroupName"
        }
    }
    Catch {
        Write-Error "$DisplayName : Error occurred $_"
    }
}

Once authentication is successful, we can log in to the Azure portal to cross-check the user list. In our sandbox area, all the sampled users are present in the Azure Active Directory Users blade.

AzureDevOps Build Pipeline – OWASP Dependency Check with universal packages

G'day, this blog post is about adding an OWASP dependency check to an Azure build pipeline, with Universal Packages used to contain all the build output. The SonarQube plugin runs via MSBuild.exe and checks the quality of the continuous integration.

OWASP Dependency Check – the purpose of dependency check is to check your dependencies for known vulnerabilities. It works cross-platform and integrates well with SonarQube. It works both in Azure DevOps (online) and Server (on-premise).

This plugin can be downloaded here – https://marketplace.visualstudio.com/items?itemName=InfoSupport.infosupport-owasp-dependecy-checker

For this demo project, we create a new project in Azure DevOps, LegacyToVogue, and scope it to the build pipeline for the continuous integration part.

We have an empty Azure repository here; we will quickly import the code from GitHub.

After importing the source code from GitHub, we have all the required files.

Once the source code is available, we will create the Azure build pipeline and later add the required plugins. Also, move over to the Azure Marketplace and download the required plugins.

For Universal Packages – according to the Microsoft definition, Universal Packages store one or more files together in a single unit that has a name and version. You can publish Universal Packages from the command line by using the Azure CLI. In simple terms, we will use them to hold the build output and later consume it in the release pipeline.
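A publish from the CLI might look like the sketch below, assuming the azure-devops CLI extension is installed and using placeholder organisation/feed/package names:

az extension add --name azure-devops
az artifacts universal publish \
  --organization https://dev.azure.com/<org> \
  --feed <feed-name> \
  --name <package-name> \
  --version 0.0.1 \
  --description "Build output" \
  --path ./drop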

Let's create an artifacts feed; we will use this for storing the build output.



Let's move over to our Azure build pipeline and add a new task for OWASP Dependency Check.



Let's also add the Universal Package task to ensure we are storing the build output with all the required dependencies to be used in the release pipeline. Please have the destination feed ready in Azure Artifacts.

Once the tasks are done and the YAML task code matches the above, we trigger the build and see the outcome.

Azure DNS – Custom domain names

G'day, this blog post is about using custom domain names in Azure. I have recently bought a domain from GoDaddy and would like to hook up my website to it.

So this blog post will walk through how to go about it.

Prerequisites:-

  • Have an Azure web application ready.
  • Buy a domain name and have access to the DNS registry for your domain provider
  • Have patience for 24 hours, because nameservers take propagation time to reflect changes.

Get Started

To map a custom DNS name to a web-based application, your App Service plan must be a paid tier. In this step, you make sure that the App Service app is in a supported pricing tier.

Log in to your Azure portal – portal.azure.com – and find the tier with which your application is deployed.

This application is fairly light – a blog site.

Map Domain –

We can use either a CNAME record or an A record to map a custom DNS name to App Service. The preference should be CNAME records for all custom DNS names except root domains (use A records for those). For the root domain we need the two records below (an illustrative sketch follows the list).

  • An A record to map to the app’s IP address.
  • TXT record to map to the app’s default domain name <app_name>.azurewebsites.net. App Service uses this record only at configuration time, to verify that you own the custom domain. This can be deleted later on.
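As an illustration, the two records might look like this (the values are placeholders, following the description above):

Record type    Host    Value
A              @       <app-ip-address>
TXT            @       <app_name>.azurewebsites.net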

Sign in to the website of your domain provider; I have used GoDaddy as the domain registrar.

Every domain provider has its own DNS records interface, so consult the provider's documentation. Look for areas of the site labeled Domain Name or DNS Configuration; often on that page there is a link named something like Zone file, DNS Records, or Advanced configuration.

Also, let's add a TXT record.

End of post

Please be aware that depending on your DNS provider, it can take up to 48 hours for DNS entry changes to propagate. You can verify that the DNS propagation is working as expected by using http://digwebinterface.com/.

Serverless Azure Function – Run a cron job using time trigger

This post is about using a serverless Azure Function to run a batch job or cron job. There are many instances where we can execute a cron job using an Azure Function with a time trigger.

Serverless Azure Functions allow us to schedule a custom block of code to execute at the time the client wants and gain immediate visibility of logging in the Azure portal. Since this is serverless, we need not bother about setting up infrastructure: no virtual machines to set up or IIS to configure in order to host any service.

We will create a demo project that runs as a batch job and fetches the BBC News RSS feed for the Asia region, then sends an email using SMTP settings for Gmail. So every 6 hours this cron job will sniff the BBC News RSS feed and email the top headlines to my Gmail inbox.

Before we create the new project and proceed, please ensure you have installed the Azure development workload for Visual Studio 2017.

Azure Functions give us multiple trigger types, which we can use as templates:

1. HTTPTrigger – trigger the execution of your code by using an HTTP request.
2. TimerTrigger – execute cleanup or other batch tasks on a predefined schedule.
3. CosmosDBTrigger – process Azure Cosmos DB documents when they are added or updated in collections of a NoSQL database.
4. BlobTrigger – process Azure Storage blobs when they are added to containers. You might use this function for image resizing.
5. QueueTrigger – respond to messages as they arrive in an Azure Storage queue.

We need to add System.ServiceModel to the references to use SyndicationFeed, which fetches and reads RSS feeds.

PM> Install-Package System.ServiceModel.Syndication

We would like to run this Azure Function every 6 hours, so 4 times a day; we select the time trigger as the Azure Function type.

For testing, we change the time trigger to 20 seconds; in the below image we see the cron job executed every 20 seconds.
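For reference, the NCRONTAB schedule expressions (six fields, with seconds first) for these two cases are:

0 0 */6 * * *     # every 6 hours
*/20 * * * * *    # every 20 seconds (testing only)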

Publish Azure Function to Azure Cloud

We will publish the Azure Function to the cloud. We need to create a new publish profile in Azure for this Azure time trigger function.

We will name our Azure function as RSSFeedSniffer

After being successfully published, it can be reached at
https://rssfeedsniffer.azurewebsites.net/

It can also be seen in the Azure portal dashboard.