Dockerized Mulesoft & deploy with Harness – Canary Deployment

Overview and Scope

This demo post is about containerising the Mulesoft Enterprise Edition “mule-ee-distribution” standalone runtime, using CentOS as the base image and adding multiple layers, including the required Java binaries, through a Dockerfile. A 30-day trial is used for this demo implementation. The focus is on containerising the runtime and then using that image in a Harness CI/CD workflow and pipeline, with Docker Hub used to store the image.

Harness CI/CD is used to build the infrastructure and to configure the Application, Services and Environments using workflows and pipelines. This post uses the Canary Deployment model, a pattern for rolling out releases to a subset of users or servers: the change is first deployed to a small subset of servers and tested, and then rolled out to the rest of the servers.

Prerequisites:-

  1. Docker desktop for Mac
  2. Kubernetes – kubectl – https://kubernetes.io/docs/tasks/tools/
  3. Harness trial account – for building Infrastructure using workflows and pipelines.
  4. Mulesoft Enterprise Edition – standalone trial from http://www.mulesoft.org
  5. Java Binaries – AdoptOpenJDK [openjdk8-binaries]
  6. IDE – VSCode for building dockerfile
  7. Homebrew – Package Manager for macOS

Dockerize Mulesoft – Create a Dockerfile with multiple layers

In order to containerise Mulesoft, we need the below dependencies in the Dockerfile.

  • Java binaries are a prerequisite.
  • CentOS is used as the base image.
  • Ports are exposed for communication; the requirements for this demo post are:
      • Ports 8081-8082, 8091-8092 – required by the Mule apps
      • Port 1098 – Mule JMX port (must match the Mule config file)
      • Port 5000 – Mule remote debugger
      • Port 7777 – required by the Mule agent
  • Required volumes – the mount points are defined as below:
      • /opt/mule/apps – application deployment directory
      • /opt/mule/domains – domains deployment directory
      • /opt/mule/conf – configuration directory
      • /opt/mule/logs – logs directory

Docker image packaging for Mule ESB – http://www.mulesoft.org

Create a Dockerfile that will contain the Mulesoft standalone runtime:

FROM         centos:latest

# Define docker build ARGS
ARG JAVA_BINARIES=https://github.com/AdoptOpenJDK/openjdk8-binaries/releases/download/jdk8u265-b01/OpenJDK8U-jdk_x64_linux_hotspot_8u265b01.tar.gz
ARG RUNTIME_VERSION=4.3.0

# Define environment variables
ENV JAVA_BINARIES $JAVA_BINARIES
ENV TMP_DIR /tmp/
ENV JAVA_HOME /opt/jdk
ENV PATH $JAVA_HOME/bin:${PATH}
ENV MULE_ARGS "${MULE_ARGS:-start}"
ENV RUNTIME_VERSION $RUNTIME_VERSION
ENV MULE_HOME /opt/mule

WORKDIR     /opt

# Install required binaries
RUN curl -L ${JAVA_BINARIES} -o jdk.tar.gz && \
  mkdir jdk && \
  tar xf jdk.tar.gz -C jdk --strip-components 1 && \
  rm -rf jdk.tar.gz && \
  curl -L http://s3.amazonaws.com/new-mule-artifacts/mule-ee-distribution-standalone-${RUNTIME_VERSION}.tar.gz -O && \
  tar xvf mule-ee-distribution-standalone-${RUNTIME_VERSION}.tar.gz && \
  rm -rf mule-ee-distribution-standalone-${RUNTIME_VERSION}.tar.gz && \
  ln -s /opt/mule-enterprise-standalone-${RUNTIME_VERSION} mule  && \
  adduser mule && \
  chown -R mule:mule /opt/mule-enterprise-standalone-${RUNTIME_VERSION} 
  
# Define mount points
VOLUME ["/opt/mule/logs", "/opt/mule/conf", "/opt/mule/apps", "/opt/mule/domains","/opt/mule/libs"]

# HTTP Service Port
# Required by the Mule Apps
EXPOSE      8081-8082
EXPOSE      8091-8092

# Mule remote debugger
EXPOSE      5000

# Mule agent 
EXPOSE      7777

# Mule JMX port (match Mule config file)
EXPOSE      1098

# Start Mule runtime
USER mule

# Execute the start command (shell form so that $MULE_ARGS is expanded at runtime)
CMD /opt/mule/bin/mule $MULE_ARGS

To save time, I have already pushed this image to Docker Hub; it will be used in the Harness CI/CD pipeline in the following steps.
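If you want to build and push the image yourself, the commands below are a minimal sketch; the Docker Hub repository name is a placeholder, so substitute your own account and repository.

# Build the image from the Dockerfile above (run from its directory)
docker build -t mule-ee:4.3.0 .

# Tag and push to Docker Hub (<dockerhub-user>/mule-ee is only an example repository)
docker tag mule-ee:4.3.0 <dockerhub-user>/mule-ee:4.3.0
docker push <dockerhub-user>/mule-ee:4.3.0

# Optional local smoke test, mapping the exposed HTTP ports
docker run --rm -p 8081:8081 -p 8082:8082 <dockerhub-user>/mule-ee:4.3.0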

Setup and Configure Harness CI/CD

The high-level setup and configuration steps are:

  1. Install the Harness Delegate
  2. Connect to your Infrastructure
  3. Connect to Artifact Repositories [dockerhub is used for this post]
  4. Configure your Application, Services, and Environments
  5. Build your Workflows and Pipeline then deploy!

For the Canary Deployment, we start by rolling out to 30% of the instances.

Deploy

Using Harness CI/CD, we create a new deployment and use the latest tag for the service created above.

Canary Deployment success – after a few minutes of running, we can see the container image has been successfully rolled out.
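If the target infrastructure is a Kubernetes cluster (as in the prerequisites), the rollout can also be double-checked from the command line; the namespace and workload names below are placeholders for whatever Harness shows in the deployment logs.

kubectl get pods -n <harness-namespace> -o wide
kubectl rollout status deployment/<mule-deployment> -n <harness-namespace>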

Next Steps

Mulesoft setup and configuration is a separate topic altogether, and it will be part of a following demo post.

Deploy Apache Solr – Harness on Local K8s via Docker registry

This demo post is about deploying Apache Solr on local Kubernetes (Minikube) with a Harness Kubernetes Deployment. It uses a Harness Delegate with Minikube and creates an infrastructure definition, workflow and pipeline to deploy Apache Solr. The container image will be pulled from the Docker public registry.

  • Apache Solr – Solr is designed for scalability and fault tolerance. It is widely used for enterprise search and analytics use cases, has an active development community and regular releases, and runs as a standalone full-text search server.
  • Harness.io – supports Kubernetes deployments for rapidly onboarding and deploying microservices, with support for Canary, Helm and Istio strategies across Kubernetes pods and nodes.
  • Minikube – run Kubernetes locally. Minikube implements a local Kubernetes cluster on macOS, Linux, and Windows.

Prerequisite

  • Harness account – I have used a professional trial account – link here
  • Docker Desktop – this demo is using 3.10
  • kubectl installed – this demo uses Homebrew on Mac for the installation
  • Minikube – used for local Kubernetes application development; it supports all the Kubernetes features this demo needs in a sandbox environment
  • Docker registry – Apache Solr is pulled from registry.hub.docker.com/library/solr:latest

Local K8s Setup with Minikube

Minikube is downloaded and installed with sudo access.

curl -LO https://storage.googleapis.com/minikube/releases/latest/minikube-darwin-amd64

sudo install minikube-darwin-amd64 /usr/local/bin/minikube

Check the installed Minikube version and start Minikube.
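A minimal check, assuming Minikube and kubectl are already on the PATH:

minikube version
minikube start

# Confirm the local cluster is reachable
kubectl get nodes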

Harness Local Delegate Setup

We set up a trial account with Harness – link here: https://harness.io/free-trial/

The Harness Delegate is a service that runs in your local network or VPC to connect your artifact, infrastructure, verification and other providers with the Harness Manager.

We need the Kubernetes YAML file that can be applied to create the Delegate workload.

If you downloaded the Delegate, extract the YAML file’s folder from the download and then navigate to the harness-delegate-kubernetes folder that you extracted:

tar -zxvf harness-delegate-kubernetes.tar.gz
cd harness-delegate-kubernetes

In the file below, we will be using the following values throughout the demo.

  • Namespace – harness-delegate
  • Resource CPU – 1
  • Resource memory – 8 GiB

We switch to the folder and execute the commands below.

Check that kubectl is installed, then create the required resources/workloads:

kubectl apply -f harness-delegate.yaml

Check the pods created in the required namespace.

kubectl get pods -n harness-delegate

After waiting a few minutes, the pod status is still Pending. To view the events for the pod, we can describe it:

kubectl describe pods minik8s-qjrewn-0 -n harness-delegate

Scheduling the pod onto the node has failed due to insufficient memory.

Reconfigure the Minikube resources:

minikube stop 
minikube start --cpus 4 --memory 8192

We check the events for the workload while the container is being created.
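A couple of commands that help here (the pod name is the one shown by kubectl get pods above):

# Watch events in the delegate namespace while the container is created
kubectl get events -n harness-delegate --sort-by=.metadata.creationTimestamp

# Re-check the pod once Minikube has more resources
kubectl get pods -n harness-delegate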

Now that the Harness Delegate is set up, we can add a Kubernetes Cluster cloud provider; this cluster will be used later on for creating workloads.

Harness Workflow and Pipelines

In order to deploy Apache Solr, we will use a Harness workflow and pipeline, fetching the Solr image from the Docker public registry. We will:

  • Create an Application
  • Setup environment for the infra definition
  • Create workflow to execute the pipeline.

Screenshots below show the steps performed.

The next step is to create Services, Environments, Workflows and Pipelines.

We will be using the Docker public repository; the Apache Solr image is pulled from registry.hub.docker.com/library/solr:latest

The Infrastructure Definition is created with the cloud provider and deployment type.

Since this is a simple image deployment, a single-stage pipeline is enough for the demo. Based on the complexity, further stages can be added.

Start a new deployment with the latest service tag. This triggers the workflow/pipeline for an automated rolling deployment.

In the image below, we can see the status of the rolling deployment; it takes a couple of minutes to complete. Meanwhile, we can also have a look at the Minikube dashboard.

In the image below, we can see the rolling deployment is complete; it can also be verified with kubectl, as sketched after the list.

  • Deployment – harness-example-deployment
  • Namespace – harness-example
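Using the names above, the result can be verified with kubectl; the port-forward assumes Solr's default port 8983.

kubectl get deployment harness-example-deployment -n harness-example
kubectl get pods -n harness-example

# Reach the Solr admin UI locally (Solr listens on 8983 by default)
kubectl port-forward deployment/harness-example-deployment 8983:8983 -n harness-example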

That’s it! That’s all that is needed to start playing with the K8s, Harness and Docker bits!

Managed K8s – Azure AKS – Deploy WordPress CMS with Dockerized image

This demo blog post is about deploying the WordPress CMS by creating a Docker image from scratch and using Azure Container Registry to store the dockerized image privately. We then use managed Kubernetes (Azure Kubernetes Service) to deploy this image onto an AKS cluster and its worker nodes.

Approach

  1. Build the docker image for WordPress
  2. Use Azure container registry to store the dockerized image
  3. Create Azure Database for MySQL
  4. Create AKS Cluster and deploy the image
  5. Create Kubernetes manifest file with the desired state for WordPress CMS.

Prerequisites

  1. Microsoft Azure Account – free trial works.
  2. Azure CLI
  3. Kubernetes – Link here
  4. Docker Desktop for MacOS
  5. A WordPress download link, used to build the Docker image

Build Docker Image

Grab the latest free version of WordPress. Create a new WordPress directory for your project and use this simple folder structure.

Open the compressed file to view the contents of the package.

Rename wp-config-sample.php to wp-config.php

The screenshot below shows wp-config.php reading the database host, username and password from values supplied by the Kubernetes manifest file.

Before creating the Dockerfile, we just need to restructure the folders as below.

Create a blank Dockerfile

Create a new Dockerfile and copy the code snippet from the screenshot below. The file sets up an Apache web server with PHP and enables the mysqli extension.

Before executing the docker build command, ensure Docker Desktop is installed and running successfully.

docker build --tag myblog:latest .


It will take some time to build the image with its tags. Once it is completed, we can open up Docker Desktop and view the latest image added on our machine: myblog with the tag latest has been added here.

Use ACR – Azure Container Registry to store the dockerized image

az acr create --resource-group wordpress-project \
  --name wpAzureContainerRegistry2020 --sku Basic

Log in to the ACR that was just created.
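A minimal sketch of the login, using the registry name created above (Docker Desktop must be running):

az acr login --name wpAzureContainerRegistry2020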

Tag and push the dockerized image to this ACR.

docker tag 7eb3872a0449 wpazurecontainerregistry2020.azurecr.io/7eb3872a0449:latest
docker push wpazurecontainerregistry2020.azurecr.io/7eb3872a0449:latest

Create AKS cluster

az aks create --resource-group wordpress-project --name wordpresscluster --node-count 1 --generate-ssh-keys

Also attach the ACR registry to this same AKS cluster.

az aks update -n wordpresscluster -g wordpress-project \
  --attach-acr wpAzureContainerRegistry2020

Install the az aks install-cli package so we can execute kubectl commands against the cluster.
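The typical sequence, using the resource group and cluster names from the commands above, is roughly:

# Install kubectl via the Azure CLI, then fetch credentials for the new cluster
az aks install-cli
az aks get-credentials --resource-group wordpress-project --name wordpresscluster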

kubectl get nodes 

Once the nodes are set up, we need to ensure the Azure Database for MySQL is up and running.

After successfully creating the Azure Database for MySQL, we head back to VSCode and apply the Kubernetes manifest.
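Applying the manifest is a single command; wordpress-deployment.yaml is only a placeholder name for the manifest described in the Approach section (a Deployment plus a LoadBalancer Service for WordPress).

kubectl apply -f wordpress-deployment.yaml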

kubectl get service

It takes some time for all the pods to be set up and running successfully.

Browse to the same external IP [20.53.109.149] from the screenshot above and it just works: the WordPress admin installation page loads up.

That’s it! One more demo post done with the K8s bits. Happy kubectl K8s!

Managed K8s – Amazon EKS cluster and worker nodes via Cloud Formation

This demo post is about deploying an Amazon EKS cluster with worker nodes, launched into a new VPC. EKS runs the Kubernetes control plane instances across multiple Availability Zones to ensure high availability, and automatically detects and replaces unhealthy control plane instances. Amazon EKS is certified Kubernetes conformant, so you can use existing tooling and plugins from partners and the Kubernetes community.

Why use Amazon EKS?

  1. Hybrid container deployments – run highly available and scalable K8s clusters with their worker nodes on Amazon Web Services, while maintaining full compatibility with your existing K8s deployments
  2. Microservices – run microservices applications with deep integrations to AWS services, while getting access to out-of-the-box Kubernetes (K8s) functionality
  3. Lift and shift migration – easily lift & shift existing applications to Amazon EKS without needing to refactor your code or tooling
  4. Authentication just works with IAM

Pre-requisites:-

Ensure the following components are installed and set up before starting with Amazon EKS:

  1. AWS CLI – while you can use the AWS Console to create a cluster in EKS, the AWS CLI is easier. The minimum required version is 1.16.73.
  2. Kubectl – used for communicating with the cluster API server. For further instructions on installing, click here.
  3. AWS-IAM-Authenticator – to allow IAM authentication with the Kubernetes cluster

To install aws-iam-authenticator with Homebrew

The easiest way to install the aws-iam-authenticator is with Homebrew.

  1. If you do not already have Homebrew installed on your Mac, install it with the following command: /bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/master/install.sh)"
  2. Install the aws-iam-authenticator with the following command: brew install aws-iam-authenticator
  3. Test that the aws-iam-authenticator binary works: aws-iam-authenticator help

Set up a new IAM role with EKS permissions

  • Go to the IAM Management Console.
  • Click Create role.

Infra as Code – AWS Cloud Formation

We use CloudFormation templates to provision the Kubernetes control plane and its associated worker nodes. Before that, I have already performed the below steps to create a blank Git branch and push the upstream commit to GitHub.

  1. Create a new Git repo
  2. Create a new feature branch
  3. Push the empty branch upstream:

git push --set-upstream origin feature/deploy-eks-cluster-vpc

There are plenty of AWS CloudFormation templates available for provisioning AWS EKS clusters and worker nodes. I have picked one from the AWS official documentation. These can be further modified and supplied with other parameters on an as-needed basis.

https://github.com/varunmaggo/IaCTemplates

In order to execute this infra-as-code, we switch over to AWS CloudFormation and create a new stack for the EKS cluster and worker nodes.
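The console is used in this post; a rough CLI equivalent (with the template URL and stack name as placeholders) would be:

aws cloudformation create-stack \
  --stack-name eks-cluster-vpc \
  --template-url <template-url-from-the-repo-above> \
  --capabilities CAPABILITY_IAM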

The worker network configuration is needed for the EKS cluster.

We will need the VPC, subnet and security group values below when we run the AWS commands for cluster provisioning.

Switch over to ZSH to provision the EKS cluster; a sketch of the create-cluster call is shown below. More info here – https://docs.aws.amazon.com/eks/latest/userguide/create-cluster.html
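A sketch of the create-cluster call, with the role ARN, subnet IDs and security group taken from the CloudFormation stack outputs above as placeholders:

aws eks create-cluster --region ap-southeast-2 --name demo \
  --role-arn arn:aws:iam::<account-id>:role/<eks-service-role> \
  --resources-vpc-config subnetIds=<subnet-1>,<subnet-2>,securityGroupIds=<sg-id>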

It takes a couple of minutes before the cluster status shows it has been successfully created.

Next, we update our kubeconfig file with the information on the new cluster so kubectl can communicate with it.

To do this, we use the AWS CLI update-kubeconfig command:

aws eks --region ap-southeast-2 update-kubeconfig --name demo

We can now test our configuration using the kubectl get svc command:

kubectl get svc




This gives handy information: the cluster name, the worker nodes, instance sizes, volume size, and the associated VPC and subnets.

Once the worker nodes are created, we need to ensure we can communicate with them from our machines; for that we need the ARN of the instance role.

We execute the curl command to download the YAML file, add the instance role ARN to it, and apply it:

curl -O https://amazon-eks.s3-us-west-2.amazonaws.com/cloudformation/2019-01-09/aws-auth-cm.yaml
kubectl apply -f aws-auth-cm.yaml

The next step is to see the worker nodes. In the previous step we only added the ARN for the worker node role; the worker nodes themselves were already created, so we just have to query them using native Kubernetes commands.

In order to see the worker nodes, we execute the below command:-

kubectl get nodes

That’s it! That’s all that is needed to start playing with the K8s bits, kubectl and all. This concludes this post; the next one continues with deploying a container image and debugging the logs.

Deploy multiple docker container to Azure container registry

This post is about deploying multiple Docker containers to Azure Container Registry. The demo customer manager application and microservices below will be containerised using Docker and then pushed to Azure Container Registry as a private registry.

The application uses the tech stack below and is forked from https://github.com/DanWahlin/Angular-Docker-Microservices

Tech stack and toolset:

  • Frontend app – Angular 6, Node.js, MongoDB
  • Container/imaging – Docker Desktop
  • Web server – Nginx
  • Microservices – ASP.NET Core / PostgreSQL
  • Private registry – Azure Container Registry

Prerequisites

  • Homebrew on Mac
  • Git version control
  • npm installed (v14.2.0)
  • Install Docker Community Edition – https://docs.docker.com/docker-for-mac/install/
  • Have an account in Microsoft Azure to create a container registry using the Azure portal.

Build the source code locally

Clone the source code onto your local working directory.

Check that Git is installed and clone the GitHub repository into your local working folder.

git clone https://github.com/varunmaggo/Angular-Docker-Microservices

Once the git clone has completed, the local folder will have the required files. I use VSCode for editing the configuration files.

In order to build the solution locally, we install the local packages/dependencies. Execute the commands below; it takes a few minutes to install the required packages/dependencies.

  1. Install Angular CLI: npm install @angular/cli -g
  2. Run npm install at the root of the project
  3. Run npm install in ./microservices/node

Once the packages are installed, just execute the command ng build.

Dockerize the Application/Micro-services

Now we need to add and configure the docker-compose.yml file; since we have multiple services, multiple ports need to be configured as well.

Before building via Docker, we add Docker Compose support to the project, along with multiple Dockerfiles, which are as follows:-

  • container_name: nginx: .docker/nginx.dockerfile
  • container_name: nodeapp: .docker/node.development.dockerfile
  • container_name: ‘aspnetcoreapp’: .docker/aspnetcore.development.dockerfile

Build the solution via Docker commands.

Run docker-compose build

Once the solution builds successfully, all the images are visible. Also check the tags, which are crucial when the Docker images are pushed later on. I ran this demo yesterday, so my images show yesterday’s creation date.

Run docker images

Run docker-compose up

It builds, (re)creates, starts, and attaches to containers for a service.

Now the application has started and is accessible on localhost.

Also, in the Docker dashboard we can see the containers, their mapped ports and service availability.

Create Azure Container Registry to push dockerized application/micro-services

Open the Microsoft Azure portal to create the Azure Container Registry: https://portal.azure.com

We create a new Azure Container Registry – MiniCRM.
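The portal is used here; a roughly equivalent CLI call (the resource group name is an assumption for this demo) would be:

az acr create --resource-group <resource-group> --name minicrmdemo --sku Basic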

minicrmdemo.azurecr.io

docker login minicrmdemo.azurecr.io

Before pushing an image to the ACR registry, we must tag it with the fully qualified name of the ACR login server – in this case, minicrmdemo.azurecr.io.

docker tag nginx minicrmdemo.azurecr.io/angular-docker-microservices:latest 

Similar docker tag commands can be executed for the other images. The next step is to push the Docker containers to ACR.

docker push minicrmdemo.azurecr.io/angular-docker-microservices:latest

Next Demo

To use this Docker image from the registry, we can launch a web app in Azure; that will be a separate post in itself.

AzureDevOps Build Pipeline – OWASP Dependency Check with universal packages

G’day, this blog post is about adding an OWASP Dependency Check task to an Azure build pipeline, with Universal Packages used to hold all the build output. The SonarQube scanner for MSBuild will perform the quality checks as part of continuous integration.

OWASP Dependency Check – the purpose of Dependency Check is to check your dependencies for known vulnerabilities. It works cross-platform, integrates well with SonarQube, and works both in Azure DevOps (online) and Azure DevOps Server (on-premises).

This plugin can be downloaded here – https://marketplace.visualstudio.com/items?itemName=InfoSupport.infosupport-owasp-dependecy-checker

For this demo, we create a new project in Azure DevOps, LegacyToVogue, and scope it to the build pipeline for the continuous integration part.

We have an empty Azure repository here, so we will quickly import the code from GitHub.

After importing the source code from GitHub, we have all the required files.

Once the source code is available we create the Azure build pipeline, and later on we add the required plugins; also move over to the Azure Marketplace and download them.

For Universal Packages – according to the Microsoft definition, Universal Packages store one or more files together in a single unit that has a name and version. You can publish Universal Packages from the command line by using the Azure CLI, as sketched below. In simple terms, we will use a Universal Package to hold the build output and later consume it in the release pipeline.
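As a sketch of that CLI route (organisation, feed and package names are placeholders, and the azure-devops CLI extension is assumed to be installed):

# az extension add --name azure-devops   # one-off setup
az artifacts universal publish \
  --organization https://dev.azure.com/<your-org> \
  --feed <your-feed> \
  --name legacytovogue-build \
  --version 0.0.1 \
  --path ./drop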

Let’s create an artefacts feed; we will use it for storing the build output.



Let’s move over to our Azure build pipeline and add a new task for OWASP Dependency Check.



Let’s also add the Universal Package task, to ensure we store the build output with all the required dependencies for use in the release pipeline. Please have the destination feed ready in Azure Artifacts.

Once the tasks are done and the YAML task code matches the above, we trigger the build and see the outcome.

Azure DNS – Custom domain names

G’Day, this blog post is about using custom domain names in Azure. I have recently bought a TLD (domain) from GoDaddy and would like to hook my website up to it.

So this blog post will walk through how to go about it.

Prerequisites:-

  • Have an Azure web application ready.
  • Buy a domain name and have access to the DNS registry for your domain provider
  • Have patience for about 24 hours, because nameservers take their propagation time to reflect changes.

Get Started

To map a custom DNS name to a web-based application, your App Service plan must be a paid tier. In this step, you make sure that the App Service app is in a supported pricing tier.

Log in to your Azure portal – portal.azure.com – and find the tier with which your application is deployed.

This application is fairly light: a blog site.

Map Domain –

We can use either a CNAME record or an A record to map a custom DNS name to App Service. The preference should be to use CNAME records for all custom DNS names except root domains, which use A records. For a root domain we need the two records below; the App Service side of the mapping can then be added via the CLI, as sketched after this list.

  • An A record to map to the app’s IP address.
  • A TXT record to map to the app’s default domain name <app_name>.azurewebsites.net. App Service uses this record only at configuration time, to verify that you own the custom domain; it can be deleted later on.
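Once the records exist at the registrar, the hostname can be bound to the App Service from the CLI as well; the app and resource group names below are placeholders.

az webapp config hostname add \
  --webapp-name <app_name> \
  --resource-group <resource_group> \
  --hostname www.<your-domain>.com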

Sign in to the website of your domain provider; I have used GoDaddy as the domain registrar.

Every domain provider has its own DNS records interface, so consult the provider’s documentation. Look for areas of the site labeled Domain Name or DNS Configuration; on that page, you can often find a link named something like Zone File, DNS Records, or Advanced Configuration.

Also, let’s add a TXT record.

End of post

Please be aware that, depending on your DNS provider, it can take up to 48 hours for the DNS entry changes to propagate. You can verify that the DNS propagation is working as expected by using http://digwebinterface.com/ or the dig commands below.
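The same check can be done locally with dig (the domain is a placeholder):

dig +short www.<your-domain>.com CNAME
dig +short <your-domain>.com A
dig +short <your-domain>.com TXT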

Serverless Azure Function – Run a cron job using time trigger

This post is about using a serverless Azure Function for running a batch job or cron job. There are many instances where we can execute a cron job using an Azure Function with a time trigger.

A serverless Azure Function allows us to schedule a custom block of code to execute at the time the client wants, and gain immediate visibility of the logging in the Azure portal. Since this is serverless, we need not bother about setting up infrastructure: no virtual machines to set up and no IIS to configure in order to host the service.

We will create a demo project that runs as a batch job, fetching the BBC News RSS feed for the Asia region and sending an email using the SMTP settings for Gmail. So every 6 hours this cron job will sniff the BBC News RSS feed and email the top headlines to my Gmail inbox.

Before we create the new project and proceed, please ensure you have installed the Azure development workload for Visual Studio 2017.

Azure Functions gives us multiple trigger types, which we can use as templates:

1. HTTPTrigger – to trigger the execution of your code by using an HTTP request.
2. TimerTrigger – to execute cleanup or other batch tasks on a predefined schedule.
3. CosmosDBTrigger – to process Azure Cosmos DB documents when they are added or updated in collections in a NoSQL database.
4. BlobTrigger – to process Azure Storage blobs when they are added to containers. You might use this function for image resizing.
5. QueueTrigger – to respond to messages as they arrive in an Azure Storage queue.

We need to add System.ServiceModel.Syndication to the references in order to use SyndicationFeed, which fetches and reads RSS feeds.

PM> Install-Package System.ServiceModel.Syndication

We would like to run this Azure Function every 6 hours, so 4 times a day, and therefore select the timer trigger as the function type. The schedule is an NCRONTAB expression; for every 6 hours this is typically 0 0 */6 * * *.

For testing we change the timer trigger to every 20 seconds (*/20 * * * * *); in the image below we see the cron job being executed every 20 seconds.

Publish Azure Function to Azure Cloud

We will publish the Azure Function to the cloud. We need to create a new publish profile in Azure to publish this timer-triggered Azure Function.

We will name our Azure function as RSSFeedSniffer

After being successfully published, it can be reached at
https://rssfeedsniffer.azurewebsites.net/

It can also be seen in Azure portal dashboard.

Azure DevOps – Angular 5 with .Net WebApi

This article shows how to set up Azure DevOps CI/CD pipelines for a full-stack application. I use a multilayered solution that shows tasks based on users. Below is the technology stack that is used.

  • Frontend UI – Angular 5
  • Middleware APIs- ASP.NET WebAPI
  • RDBMS – SQL Server 2012
  • Dependency Injection – Unity
  • ORM – EntityFramework
  • Framework – .NET Core

I have the code hosted on GitHub as a public repo.

Our projects in the solution would look like below:-


First, create an Azure DevOps account by going to the Azure DevOps web portal: https://dev.azure.com

As for pricing, Azure DevOps is free for open-source projects and small projects (up to five users).

I have created Tracker as a private project, which I will be using for my Azure CI/CD pipeline.

Once you create a project, there is an option on the left-hand side which gives us a submenu to show the files and their metadata.

After I click Files, I do not see any files yet, since I am using Git version control and the source is hosted on GitHub. I fire these two commands in the command prompt to push my latest changes:

git remote add origin https://github.com/varunmaggo/Tracker.git
git push -u origin --all

Now the code is pushed; the next step is to create the Azure DevOps pipeline from the option on the left-hand side. We need to create the Build and Release pipelines separately.

Again, go to the Build option, where we need to create a build agent with multiple configuration steps (the equivalent shell commands are sketched after the list):

  • Install the node package manager
  • Install the Angular CLI
  • Build the Angular packages
  • NuGet restore to install packages for the WebAPI solution
  • Run the unit tests
  • And finally, publish the artefacts
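For reference, the shell commands behind those tasks look roughly like the following; the solution file name and Angular CLI flags are assumptions for this repo layout.

npm install -g @angular/cli
npm install
ng build --prod
nuget restore Tracker.sln
ng test --watch=false
# artefact publishing is handled by the Publish Build Artifacts task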

Once we are set with the build pipeline, we need to proceed to the Release pipeline.

In order to make it simple, I would use the staging environment. In real-world scenarios, we can have dev, stage and prod separately.

In the image below, we select the build source type, which is our build pipeline, and we would like to