Deploy multiple Docker containers to Azure Container Registry

This post is about deploying multiple Docker containers to Azure Container Registry. The demo customer manager application and microservices below will be containerised using Docker and then pushed to Azure as a private container registry.

The application, forked from https://github.com/DanWahlin/Angular-Docker-Microservices, uses the tech stack below:

Tech Stack          Version/Toolset
Frontend App        Angular 6, Node.js, MongoDB
Container/Imaging   Docker Desktop
Web Server          Nginx
Microservices       ASP.NET Core/PostgreSQL
Private Registry    Azure Container Registry

Prerequisites

  • Homebrew on Mac
  • Git version control
  • npm installed (v14.2.0)
  • Install Docker Community Edition – https://docs.docker.com/docker-for-mac/install/
  • Have an account in Microsoft Azure to create a container registry using the Azure portal.

Build the source code locally

Clone the source code into your local working directory.

Check that Git is installed, then clone the GitHub repository into a working folder on your local machine.

git clone https://github.com/varunmaggo/Angular-Docker-Microservices

Once the git clone completes, the local folder will have the required files. I use VS Code for editing the configuration files.

To build the solution locally, we install the required packages/dependencies. Execute the commands below; it takes a few minutes to install everything.

  1. Install Angular CLI: npm install @angular/cli -g
  2. Run npm install at the root of the project
  3. Run npm install in ./microservices/node

Once the packages are installed, execute ng build.

Dockerize the Application/Micro-services

Now we need to add and configure a docker-compose.yml file. Since we have multiple services, multiple ports need to be configured as well.

Before building via Docker, we add Docker Compose support to the project, along with multiple Dockerfiles, as follows (a minimal compose sketch is shown after the list):

  • container_name: nginx – .docker/nginx.dockerfile
  • container_name: nodeapp – .docker/node.development.dockerfile
  • container_name: aspnetcoreapp – .docker/aspnetcore.development.dockerfile
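
For reference, here is a minimal docker-compose.yml sketch wiring up the three containers above (the host ports and build contexts are illustrative assumptions, not the repo's exact values):

version: '3'
services:
  nginx:
    container_name: nginx
    build:
      context: .
      dockerfile: .docker/nginx.dockerfile
    ports:
      - "80:80"        # reverse proxy in front of the apps
  nodeapp:
    container_name: nodeapp
    build:
      context: .
      dockerfile: .docker/node.development.dockerfile
    ports:
      - "3000:3000"    # Node.js API
  aspnetcoreapp:
    container_name: aspnetcoreapp
    build:
      context: .
      dockerfile: .docker/aspnetcore.development.dockerfile
    ports:
      - "5000:5000"    # ASP.NET Core microservice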

Build the solution via Docker commands.

Run docker-compose build

Once the solution builds successfully, all the images are visible. Also check the tags, which are crucial when the images are pushed later. (I ran this demo a day earlier, so my images show yesterday's creation date.)

Run docker images

Run docker-compose up

It builds, (re)creates, starts, and attaches to containers for a service.

Now the application has started and is accessible on localhost.

We can also check the Docker dashboard for the exposed ports and service availability.

Create Azure Container Registry to push dockerized application/micro-services

Open the Microsoft Azure portal (https://portal.azure.com) to create an Azure Container Registry.

We create a new Azure Container Registry – MiniCRM – with the login server:

minicrmdemo.azurecr.io

docker login minicrmdemo.azurecr.io

Before pushing an image to the ACR registry, we must tag it with the fully qualified name of the ACR login server; in this case, minicrmdemo.azurecr.io.

docker tag nginx minicrmdemo.azurecr.io/angular-docker-microservices:latest 

Similar docker tag commands can be executed for the other images. The next step is to push the tagged images to ACR.

docker push minicrmdemo.azurecr.io/angular-docker-microservices:latest
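
The same tag-and-push pattern applies to the remaining images; for example (the local image names are assumptions based on the compose services above):

docker tag nodeapp minicrmdemo.azurecr.io/nodeapp:latest
docker push minicrmdemo.azurecr.io/nodeapp:latest
docker tag aspnetcoreapp minicrmdemo.azurecr.io/aspnetcoreapp:latest
docker push minicrmdemo.azurecr.io/aspnetcoreapp:latest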

Next Demo

To consume this Docker image from the registry, we can launch a web app in Azure; that will be a separate post in itself.

Multi-Cloud with Terraform – Provision any infrastructure, anywhere

This post is about provisioning Amazon Web Services EC2 instances using Terraform. With automated Terraform scripts we can create reproducible infrastructure for Dev, UAT, SIT, and Production environments. Terraform allows you to split your configuration into as many files as you wish.

We can execute the same scripts against multiple cloud providers: Microsoft Azure, Amazon Web Services, GCP, etc. Terraform, as an infrastructure-as-code tool, makes this simple, letting us provision and manage any cloud, infrastructure, or service.

Terraform infrastructure is configured and provisioned using HCL, the HashiCorp Configuration Language. Below are the most commonly used commands for provisioning.

Command     Purpose
init        Initialize the working directory
plan        Show the execution plan for the resources managed by the .tf files
apply       Build or change the infrastructure defined in the code
version     Print the Terraform version
destroy     Destroy the infrastructure managed by Terraform
validate    Validate the Terraform files against the target schema
graph       Output a visual graph of Terraform resources

There can be multiple reasons for a client to adopt a multi-cloud approach:

  • Cost considerations
  • Vendor diversity
  • Vendor leverage
  • Desire to use unique services
  • Need for a particular region (low latency)

Prerequisites

Once you download the Terraform binary, ensure the environment variables are set up. I have set up my variables, having placed the binary in the C: folder. Run a version check at the command prompt, as in the screenshot below.
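
For example, on Windows the setup can be sketched as below (assuming the binary was extracted to C:\terraform):

setx PATH "%PATH%;C:\terraform"
terraform version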

Implementation – Demo for AWS provisioning

To keep the post short, and since most of the code is very similar, I will limit this demo to AWS; a similar approach can be adopted for Azure. Terraform has a separate provider for Microsoft Azure, which configures infrastructure through the Azure Resource Manager APIs.

More info here – https://www.terraform.io/docs/providers/azurerm/index.html

This configuration provisions infrastructure on Amazon Web Services. The .tf file for AWS is below; it needs to be executed with AWS access keys and secret access keys configured.

I have added the Azure Terraform extension to Visual Studio Code for IntelliSense, but you can use any other tool.

Note – AWS Access Key and AWS Secret Access Key should be configured on the host running this Terraform configuration.

export AWS_ACCESS_KEY_ID=XXXXXXXXXXXXXXXXXXXX
export AWS_SECRET_ACCESS_KEY=XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX

Below is the code required to provision with the AWS provider:

provider "aws" {
  region = "eu-west-1"
}

module "aws_vpc" {
  source  = "terraform-aws-modules/vpc/aws"
  version = "1.5.1"

  name                 = "${var.configuration_name}-vpc"
  cidr                 = "10.0.0.0/16"
  enable_dns_hostnames = true
  enable_dns_support   = true
  azs                  = ["eu-west-1a", "eu-west-1b", "eu-west-1c"]
  public_subnets       = ["10.0.101.0/24", "10.0.102.0/24", "10.0.103.0/24"]
}

module "aws_asg" {
  source  = "terraform-aws-modules/autoscaling/aws"
  version = "2.0.0"

  name            = "${var.configuration_name}-asg"
  image_id        = "${data.aws_ami.amazon_linux.id}"
  instance_type   = "t2.nano"
  security_groups = ["${aws_security_group.sg.id}"]
  user_data       = "${data.template_file.web_server_aws.rendered}"
  load_balancers  = ["${module.aws_elb.this_elb_id}"]

  root_block_device = [
    {
      volume_size = "8"
      volume_type = "gp2"
    },
  ]

  vpc_zone_identifier = "${module.aws_vpc.public_subnets}"

  health_check_type         = "EC2"
  min_size                  = 3
  max_size                  = 3
  desired_capacity          = 3
  wait_for_capacity_timeout = 0
}

module "aws_elb" {
  source  = "terraform-aws-modules/elb/aws"
  version = "1.4.1"

  name            = "elb"
  subnets         = ["${module.aws_vpc.public_subnets}"]
  security_groups = ["${aws_security_group.sg.id}"]
  internal        = false

  listener = [
    {
      instance_port     = "80"
      instance_protocol = "HTTP"
      lb_port           = "80"
      lb_protocol       = "HTTP"
    },
  ]

  health_check = [
    {
      target              = "HTTP:80/"
      interval            = 30
      healthy_threshold   = 2
      unhealthy_threshold = 2
      timeout             = 5
    },
  ]
}

resource "aws_security_group" "sg" {
  name        = "${var.configuration_name}-sg"
  description = "security group for ${var.configuration_name}"
  vpc_id      = "${module.aws_vpc.vpc_id}"

  ingress {
    from_port   = 80
    to_port     = 80
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  egress {
    from_port   = 0
    to_port     = 65535
    protocol    = "udp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  egress {
    from_port   = 0
    to_port     = 65535
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }
}

data "aws_ami" "amazon_linux" {
  most_recent = true

  filter {
    name   = "name"
    values = ["amzn-ami-hvm-*-x86_64-gp2"]
  }

  filter {
    name   = "owner-alias"
    values = ["amazon"]
  }
}

data "template_file" "web_server_aws" {
  template = "${file("${path.module}/web-server.tpl")}"

  vars {
    cloud = "aws"
  }
}

data "aws_availability_zones" "available" {
  state = "available"
}

Commands to be executed for Terraform:

  1. Initialise Terraform: terraform init
  2. Execute a plan: terraform plan -out=1.tfplan
  3. Execute an apply: terraform apply 1.tfplan
  4. Destroy when you are done and no longer need the resources: terraform destroy

Upcoming Demo – Azure provisioning

This is pending for another post; I will cover it as a demo with screenshots separately.

Azure Active Directory – Bulk user import to a new organisation via .csv with PowerShell

As a part of this demo post, the objective is to use PowerShell to do a bulk user import into Azure Active Directory. We have a specific TenantId in an organisation, and PowerShell makes the import a bit easier.

As a test sample, we will take 10 users from a .csv file to be imported via PowerShell.

For this demo, we create a new organization below and a separate domain.

  • Organization Name – VanillaCaffeine
  • Domain Name – VMADDemo.OnMicrosoft.com



Open Windows PowerShell ISE with admin rights and install the module below. It will be used to connect to Azure Active Directory from your local machine.

Install-Module AzureAD

We connect to Azure and note the tenant ID for this demo post.


The PowerShell script below reads the .csv file from your local drive, iterates through the records, and imports them into Azure Active Directory.

The PowerShell scripts and CSV template are available on GitHub:

https://github.com/varunmaggo/PowerShellScripts
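
For reference, an illustrative CSV layout matching the columns the script reads (DisplayName, MailNickName, UserPrincipalName, PasswordProfile, City, Country, Department, JobTitle, Mobile):

DisplayName,MailNickName,UserPrincipalName,PasswordProfile,City,Country,Department,JobTitle,Mobile
Jane Doe,jdoe,jdoe@VMADDemo.OnMicrosoft.com,P@ssw0rd!234,Sydney,Australia,IT,Engineer,0400000000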

[CmdletBinding()]
Param(
    [Parameter(Position = 0, Mandatory = $True, HelpMessage = 'CSV file')]
    [Alias('CSVFile')]
    [string]$FilePath,
    [Parameter(Position = 1, Mandatory = $false, HelpMessage = 'Put Credentials')]
    [Alias('Cred')]
    [PSCredential]$Credential,
    # MFA account for the Azure AD account
    [Parameter(Position = 2, Mandatory = $false, HelpMessage = 'MFA enabled?')]
    [Alias('2FA')]
    [Switch]$MFA,
    [Parameter(Position = 3, Mandatory = $false, HelpMessage = 'Azure AD Group Name')]
    [Alias('AADGN')]
    [string]$AadGroupName
)
Function Install-AzureAD {
    Set-PSRepository -Name PSGallery -InstallationPolicy Trusted -Verbose:$false
    Install-Module -Name AzureAD -AllowClobber -Verbose:$false
}
Try {
    $CSVData = @(Import-CSV -Path $FilePath -ErrorAction Stop)
    Write-Verbose "Successfully imported entries from $FilePath"
    Write-Verbose "Total no. of entries in CSV are : $($CSVData.count)"
}
Catch {
    Write-Verbose "Failed to read from the CSV file, PS $FilePath Exiting!"
    Break
}
Try {
    Import-Module -Name AzureAD -ErrorAction Stop -Verbose:$false | Out-Null
}
Catch {
    Write-Verbose "Azure AD PowerShell Module not found…"
    Write-Verbose "Installing Azure AD PowerShell Module…"
    Install-AzureAD
}
Try {
    Write-Verbose "Connecting to Azure AD…"
    if ($MFA) {
        Connect-AzureAD -TenantId efcb2733-e012-4628-bae4-a96147285b5a -ErrorAction Stop | Out-Null
    }
    Else {
        Connect-AzureAD -TenantId efcb2733-e012-4628-bae4-a96147285b5a
    }
}
Catch {
    Write-Verbose "Cannot connect to Azure AD. Please check your credentials. Exiting!"
    Break
}
Foreach ($Entry in $CSVData) {
    # Verify that mandatory properties are defined for each object
    $DisplayName = $Entry.DisplayName
    $MailNickName = $Entry.MailNickName
    $UserPrincipalName = $Entry.UserPrincipalName
    $Password = $Entry.PasswordProfile

    If (!$DisplayName) {
        Write-Warning '$DisplayName is not provided. Continue to the next record'
        Continue
    }
    If (!$MailNickName) {
        Write-Warning '$MailNickName is not provided. Continue to the next record'
        Continue
    }
    If (!$UserPrincipalName) {
        Write-Warning '$UserPrincipalName is not provided. Continue to the next record'
        Continue
    }
    If (!$Password) {
        Write-Warning "Password is not provided for $DisplayName in the CSV file!"
        $Password = Read-Host -Prompt "Enter desired Password" -AsSecureString
        $BSTR = [System.Runtime.InteropServices.Marshal]::SecureStringToBSTR($Password)
        $Password = [System.Runtime.InteropServices.Marshal]::PtrToStringAuto($BSTR)
        $PasswordProfile = New-Object -TypeName Microsoft.Open.AzureAD.Model.PasswordProfile
        $PasswordProfile.Password = $Password
        $PasswordProfile.EnforceChangePasswordPolicy = 1
        $PasswordProfile.ForceChangePasswordNextLogin = 1
    }
    Else {
        $PasswordProfile = New-Object -TypeName Microsoft.Open.AzureAD.Model.PasswordProfile
        $PasswordProfile.Password = $Password
        $PasswordProfile.EnforceChangePasswordPolicy = 1
        $PasswordProfile.ForceChangePasswordNextLogin = 1
    }
    Try {
        New-AzureADUser -DisplayName $DisplayName `
            -AccountEnabled $true `
            -MailNickName $MailNickName `
            -UserPrincipalName $UserPrincipalName `
            -PasswordProfile $PasswordProfile `
            -City $Entry.City `
            -Country $Entry.Country `
            -Department $Entry.Department `
            -JobTitle $Entry.JobTitle `
            -Mobile $Entry.Mobile | Out-Null
        Write-Verbose "$DisplayName : AAD Account is created successfully!"
        If ($AadGroupName) {
            Try {
                $AadGroupID = Get-AzureADGroup -SearchString "$AadGroupName"
            }
            Catch {
                Write-Error "$AadGroupName : does not exist. $_"
                Break
            }
            $ADuser = Get-AzureADUser -ObjectId "$UserPrincipalName"
            Add-AzureADGroupMember -ObjectId $AadGroupID.ObjectID -RefObjectId $ADuser.ObjectID
            Write-Verbose "Assigning the user $DisplayName to Azure AD Group $AadGroupName"
        }
    }
    Catch {
        Write-Error "$DisplayName : Error occurred $_"
    }
}

Once authentication succeeds, we can log in to the Azure portal to cross-check the user list. All the sampled users are present in the Azure Active Directory Users area.

Azure DevOps Build Pipeline – OWASP Dependency Check with Universal Packages

G’day, this blog post is about adding an OWASP dependency check to an Azure build pipeline, with Universal Packages used to hold the build output. The SonarQube Scanner for MSBuild performs the quality analysis during continuous integration.

OWASP Dependency Check – The purpose of Dependency Check is to scan your dependencies for known vulnerabilities. It works cross-platform, integrates well with SonarQube, and works both in Azure DevOps (online) and Azure DevOps Server (on-premises).

This plugin can be downloaded here – https://marketplace.visualstudio.com/items?itemName=InfoSupport.infosupport-owasp-dependecy-checker

For this demo, we create a new project in Azure DevOps, LegacyToVogue, and scope it to the build pipeline for the continuous integration part.

We have an empty Azure repository here; we will quickly import the code from GitHub.

After importing, we have all the required files from GitHub.

Once the source code is available, we will create the Azure build pipeline and then add the required plugins. Head over to the Azure Marketplace to download them.

For Universal Packages – according to Microsoft's definition, Universal Packages store one or more files together in a single unit that has a name and version. You can publish Universal Packages from the command line by using the Azure CLI, as sketched below. In simple terms, we will use one to hold the build output and later consume it from the release pipeline.
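
A publish sketch with the Azure CLI (assuming the azure-devops CLI extension is installed and a feed named legacytovogue-feed exists; the names and version here are illustrative):

az artifacts universal publish \
    --organization https://dev.azure.com/<your-org> \
    --feed legacytovogue-feed \
    --name build-output \
    --version 0.0.1 \
    --description "Build output for the release pipeline" \
    --path .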

Let's create an artefacts feed; we will use it for storing the build output.



Let's move over to our Azure build pipeline and add a new task for OWASP Dependency Check.



Let's also add a Universal Packages task to ensure we store the build output with all the required dependencies for the release pipeline. Have the destination feed ready in Azure Artifacts.

Once the tasks are in place and the YAML matches the above, we trigger the build and review the outcome.

Azure DNS – Custom domain names

G’day, this blog post is about using custom domain names in Azure. I recently bought a domain from GoDaddy and would like to hook my website up to it.

So this blog post will walk through how to go about it.

Prerequisites:-

  • Have an Azure web application ready.
  • Buy a domain name and have access to the DNS registry for your domain provider.
  • Have patience for up to 24 hours, because nameserver changes take time to propagate.

Get Started

To map a custom DNS name to a web-based application, your App Service plan must be a paid tier. In this step, you make sure the App Service app is in a supported pricing tier.

Log in to the Azure portal – portal.azure.com – and check the tier your application is deployed on.

This application is fairly light: a blog site.

Map Domain –

We can use either a CNAME record or an A record to map a custom DNS name to App Service. The preference should be CNAME records for all custom DNS names except root domains, which use A records. We need (example records follow the list):

  • An A record to map to the app's IP address.
  • A TXT record to map to the app's default domain name <app_name>.azurewebsites.net. App Service uses this record only at configuration time, to verify that you own the custom domain; it can be deleted later.
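
At the registrar, the records might look like this (values are illustrative; <app_name> is your App Service name):

Type    Host    Value
A       @       <your app's external IP address>
TXT     @       <app_name>.azurewebsites.net
CNAME   www     <app_name>.azurewebsites.net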

Sign in to your domain provider's website; I used the GoDaddy domain registrar.

Every domain provider has its own DNS records interface, so consult the provider's documentation. Look for areas of the site labeled Domain Name or DNS Configuration. Often on that page you can find a link named something like Zone File, DNS Records, or Advanced Configuration.

Also, let's add a TXT record.

End of post

Please be aware that depending on your DNS provider it can take up to 48 hours for DNS entry changes to propagate. You can verify that the propagation is working as expected by using http://digwebinterface.com/.

Serverless Azure Function – Run a cron job using time trigger

This post is about using a serverless Azure Function to run a batch or cron job. There are many instances where we can execute a cron job using an Azure Function with a time trigger.

A serverless Azure Function lets us schedule a custom block of code to execute at the desired time, with immediate visibility of the logs in the Azure portal. Since it is serverless, we need not bother with infrastructure: no virtual machines to set up, no IIS to configure to host the service.

We will create a demo project that runs as a batch job, fetches the BBC News RSS feed for the Asia region, and sends an email using Gmail's SMTP settings. Every 6 hours, the cron job sniffs the feed and emails the top headlines to my Gmail inbox.

Before we create the new project and proceed, please ensure you have installed the Azure development workload for Visual Studio 2017.

Azure Functions gives us multiple trigger types, which we can use as templates:

  1. HTTPTrigger – Trigger the execution of your code by using an HTTP request.
  2. TimerTrigger – Execute cleanup or other batch tasks on a predefined schedule.
  3. CosmosDBTrigger – Process Azure Cosmos DB documents when they are added or updated in collections in a NoSQL database.
  4. BlobTrigger – Process Azure Storage blobs when they are added to containers. You might use this function for image resizing.
  5. QueueTrigger – Respond to messages as they arrive in an Azure Storage queue.

We need to add a reference to System.ServiceModel.Syndication to use SyndicationFeed, which fetches and parses RSS feeds:

PM> Install-Package System.ServiceModel.Syndication
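
A minimal sketch of reading the feed with SyndicationFeed (the feed URL and headline count are illustrative assumptions):

using System;
using System.Linq;
using System.ServiceModel.Syndication;
using System.Xml;

public static class FeedReader
{
    public static string ReadTopHeadlines()
    {
        // BBC News Asia feed; swap in any RSS URL
        string feedUrl = "http://feeds.bbci.co.uk/news/world/asia/rss.xml";
        using (XmlReader reader = XmlReader.Create(feedUrl))
        {
            SyndicationFeed feed = SyndicationFeed.Load(reader);
            // Take the first five headlines, one per line
            return string.Join(Environment.NewLine,
                feed.Items.Take(5).Select(i => i.Title.Text));
        }
    }
}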

We would like to run this Azure Function every 6 hours, i.e. 4 times a day, so we select the timer trigger as the function type.

For testing, we change the time trigger to 20 seconds; in the image below we see the cron job executing every 20 seconds. A sketch of the function follows.
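
Here is a sketch of the timer-triggered function using the v1 (Visual Studio 2017-era) attribute syntax; the NCRONTAB expression "0 0 */6 * * *" fires every 6 hours:

using System;
using Microsoft.Azure.WebJobs;
using Microsoft.Azure.WebJobs.Host;

public static class RSSFeedSniffer
{
    [FunctionName("RSSFeedSniffer")]
    // For local testing, swap the schedule for "*/20 * * * * *" to fire every 20 seconds
    public static void Run([TimerTrigger("0 0 */6 * * *")] TimerInfo myTimer, TraceWriter log)
    {
        log.Info($"RSS feed sniffer executed at: {DateTime.Now}");
        // Fetch the feed and email the headlines here
    }
}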

Publish Azure Function to Azure Cloud

We will publish the Azure Function to the cloud. We need to create a new publish profile in Azure to publish this time-triggered function.

We will name our Azure function as RSSFeedSniffer

After it is published successfully, the browser is redirected to
https://rssfeedsniffer.azurewebsites.net/

It can also be seen in the Azure portal dashboard.

Azure Cosmos DB – Angular with WebAPI

This article is about using Azure Cosmos DB with Angular and WebAPI. Why Azure Cosmos DB? Any web, mobile, gaming, or IoT application that needs to manage extensive amounts of data, reads, and writes is a great use case.

A few differences between a document DB and a relational database:

Document DB                                  Relational Database
De-normalized data (JSON key-value pairs)    Normalized data (plain SQL queries)
Referential integrity NOT enforced           Referential integrity ENFORCED through normalization and relationships
Mixed data in a collection                   Uniform data in tables
Flexible schema                              Schema is not so flexible
SQL-like language as well as JavaScript      Pure T-SQL

More info here – https://azure.microsoft.com/en-ca/services/cosmos-db/

We will create a project – a beer tracker – using Angular, WebAPI, and Cosmos DB. We use the Azure Cosmos DB emulator to avoid configuring the Azure Cosmos DB service, which would make this post really long.

Download Azure CosmosDB Emulator here – https://docs.microsoft.com/en-us/azure/cosmos-db/local-emulator

This is how the final demo will look:

Please note that in this application we use the Cosmos DB local emulator instead of the real Azure Cosmos DB service. Once installed properly, it should appear as below in your browser; mine works in Chrome.

We will start with the WebAPI: create a new WebAPI project in Visual Studio 2017. After that we will hook Azure Cosmos DB into the WebAPI, and the last part will be the frontend UI in Angular.

In the step below, we install the Microsoft.Azure.DocumentDB NuGet package. DocumentDB is a schema-free NoSQL document database service designed for modern mobile and web applications.

Install-Package Microsoft.Azure.DocumentDB -Version 2.2.3

We need to enable CORS in our WebAPI to allow requests from the front-end Angular application. For that, install Microsoft.AspNet.WebApi.Cors via NuGet:

Install-Package Microsoft.AspNet.WebApi.Cors -Version 5.2.7

Let's go to App_Start and update WebApiConfig.cs:

public static class WebApiConfig
{
    public static void Register(HttpConfiguration config)
    {
        // Web API configuration and services
        // Web API routes
        config.MapHttpAttributeRoutes();

        EnableCorsAttribute cors = new EnableCorsAttribute("*", "*", "*");
        config.EnableCors(cors);

        config.Routes.MapHttpRoute(
            name: "DefaultApi",
            routeTemplate: "api/{controller}/{id}",
            defaults: new { id = RouteParameter.Optional }
        );
    }
}

We will also update the web.config of our WebAPI project to use the Cosmos DB emulator. Later it can be changed to the Azure Cosmos DB keys, with no impact on the rest of this demo.

<!--config keys for cosmos DB-->
<add key="endpoint" value="https://localhost:8081" />  
<add key="authKey" value="C2y6yDjf5/R+ob0N8A7Cgv30VRDJIWEHLM+4QDU5DE2nQ9nDuVTqobD4b8mGGyPMbIZnqyMsEcaGQy67XIw/Jw==" />  
<add key="database" value="AngularBeerMeetup" />  
<add key="collection" value="BeerMeetupCollection" />
<!--config keys for cosmos DB-->

Now let's add the model and API controllers.

namespace BeerMeetupSolution.Models
{
    using Newtonsoft.Json;

    public class BeerMeetup
    {
        [JsonProperty(PropertyName = "id")]
        public string Id { get; set; }

        [JsonProperty(PropertyName = "uid")]
        public string UId { get; set; }

        [JsonProperty(PropertyName = "location")]
        public string Location { get; set; }

        [JsonProperty(PropertyName = "brand")]
        public string Brand { get; set; }

        [JsonProperty(PropertyName = "cheers")]
        public string Cheers { get; set; }
    }
}

We also create a DocumentDBRepository class to keep the Cosmos DB CRUD operations in one place. These methods will be called by the API controllers.

public static class DocumentDBRepository<T> where T : class
{
    private static readonly string DatabaseId = ConfigurationManager.AppSettings["database"];
    private static readonly string CollectionId = ConfigurationManager.AppSettings["collection"];
    private static DocumentClient client;
    
public static async Task<T> GetItemAsync(string id)
    {
        try
        {
            Document document = await client.ReadDocumentAsync(UriFactory.CreateDocumentUri(DatabaseId, CollectionId, id));
            return (T)(dynamic)document;
        }
        catch (DocumentClientException e)
        {
            if (e.StatusCode == System.Net.HttpStatusCode.NotFound)
            {
                return null;
            }
            else
            {
                throw;
            }
        }
    }

    public static async Task<IEnumerable<T>> GetItemsAsync()
    {
        IDocumentQuery<T> query = client.CreateDocumentQuery<T>(
            UriFactory.CreateDocumentCollectionUri(DatabaseId, CollectionId),
            new FeedOptions { MaxItemCount = -1 })
            .AsDocumentQuery();

        List<T> results = new List<T>();
        while (query.HasMoreResults)
        {
            results.AddRange(await query.ExecuteNextAsync<T>());
        }

        return results;
    }

    public static async Task<IEnumerable<T>> GetItemsAsync(Expression<Func<T, bool>> predicate)
    {
        IDocumentQuery<T> query = client.CreateDocumentQuery<T>(
            UriFactory.CreateDocumentCollectionUri(DatabaseId, CollectionId),
            new FeedOptions { MaxItemCount = -1 })
            .Where(predicate)
            .AsDocumentQuery();

        List<T> results = new List<T>();
        while (query.HasMoreResults)
        {
            results.AddRange(await query.ExecuteNextAsync<T>());
        }

        return results;
    }

    public static async Task<T> GetSingleItemAsync(Expression<Func<T, bool>> predicate)
    {
        IDocumentQuery<T> query = client.CreateDocumentQuery<T>(
            UriFactory.CreateDocumentCollectionUri(DatabaseId, CollectionId),
            new FeedOptions { MaxItemCount = -1 })
            .Where(predicate)
            .AsDocumentQuery();
        List<T> results = new List<T>();
        results.AddRange(await query.ExecuteNextAsync<T>());
        return results.SingleOrDefault();
    }

    public static async Task<Document> CreateItemAsync(T item)
    {
        return await client.CreateDocumentAsync(UriFactory.CreateDocumentCollectionUri(DatabaseId, CollectionId), item);
    }

    public static async Task<Document> UpdateItemAsync(string id, T item)
    {
        return await client.ReplaceDocumentAsync(UriFactory.CreateDocumentUri(DatabaseId, CollectionId, id), item);
    }

    public static async Task DeleteItemAsync(string id)
    {
        await client.DeleteDocumentAsync(UriFactory.CreateDocumentUri(DatabaseId, CollectionId, id));
    }

    public static void Initialize()
    {
        client = new DocumentClient(new Uri(ConfigurationManager.AppSettings["endpoint"]), ConfigurationManager.AppSettings["authKey"]);
        CreateDatabaseIfNotExistsAsync().Wait();
        CreateCollectionIfNotExistsAsync().Wait();
    }

    private static async Task CreateDatabaseIfNotExistsAsync()
    {
        try
        {
            await client.ReadDatabaseAsync(UriFactory.CreateDatabaseUri(DatabaseId));
        }
        catch (DocumentClientException e)
        {
            if (e.StatusCode == System.Net.HttpStatusCode.NotFound)
            {
                await client.CreateDatabaseAsync(new Database { Id = DatabaseId });
            }
            else
            {
                throw;
            }
        }
    }

    private static async Task CreateCollectionIfNotExistsAsync()
    {
        try
        {
            await client.ReadDocumentCollectionAsync(UriFactory.CreateDocumentCollectionUri(DatabaseId, CollectionId));
        }
        catch (DocumentClientException e)
        {
            if (e.StatusCode == System.Net.HttpStatusCode.NotFound)
            {
                await client.CreateDocumentCollectionAsync(
                    UriFactory.CreateDatabaseUri(DatabaseId),
                    new DocumentCollection { Id = CollectionId },
                    new RequestOptions { OfferThroughput = 1000 });
            }
            else
            {
                throw;
            }
        }
    }
}

Here are the APIs, which can be tested with Fiddler or Postman. All these methods are exposed as endpoints accessible over HTTP. We have also used a WebAPI route prefix, which helps the Angular code resolve the URLs.

namespace BeerMeetupSolution.Controllers
{
    [RoutePrefix("api/beermeetup")]
    public class BeerMeetupController : ApiController
    {
    [HttpGet]
    public async Task<IEnumerable<Models.BeerMeetup>> GetAsync()
    {

        IEnumerable<Models.BeerMeetup> value = await DocumentDBRepository<Models.BeerMeetup>.GetItemsAsync();
        return value;
    }

    [HttpPost]
    public async Task<Models.BeerMeetup> CreateAsync([FromBody] Models.BeerMeetup objbm)
    {
        if (ModelState.IsValid)
        {
            await DocumentDBRepository<Models.BeerMeetup>.CreateItemAsync(objbm);
            return objbm;
        }
        return null;
    }
    public async Task<string> Delete(string uid)
    {
        try
        {
            Models.BeerMeetup item = await DocumentDBRepository<Models.BeerMeetup>.GetSingleItemAsync(d => d.UId == uid);
            if (item == null)
            {
                return "Failed";
            }
            await DocumentDBRepository<Models.BeerMeetup>.DeleteItemAsync(item.Id);
            return "Success";
        }
        catch (Exception ex)
        {
            return ex.ToString();
        }
    }
    public async Task<Models.BeerMeetup> Put(string uid, [FromBody] Models.BeerMeetup o)
    {
        try
        {
            if (ModelState.IsValid)
            {
                Models.BeerMeetup item = await DocumentDBRepository<Models.BeerMeetup>.GetSingleItemAsync(d => d.UId == uid);
                if (item == null)
                {
                    return null;
                }
                o.Id = item.Id;
                await DocumentDBRepository<Models.BeerMeetup>.UpdateItemAsync(item.Id, o);
                return o;
            }
            return null;
        }
        catch (Exception ex)
        {
            return null;
        }

    }
}
}

Now we switch to the front end: the Angular part. Ensure you have npm installed and configured on your system.

Type into the command prompt:

ng new AngularUI 

It will take some time for Angular CLI to create a new project and once it completes we will switch to Visual Studio Code.

The boilerplate generated by the Angular CLI will be ready, and we can start building our frontend code.

Add a model for BeerMeetup:

export class BeerMeetup {
  uid: string;
  location: string;
  brand: string;
  cheers: string;
}

After the model, we add a service with the CRUD methods:

  
import { Injectable } from '@angular/core';
import { HttpClient } from '@angular/common/http';

import { BeerMeetup } from './beermeetup';

const api = 'http://localhost:53090/api';

@Injectable()
export class BeerMeetupService {
  constructor(private http: HttpClient) { }

  getBM() {
    return this.http.get<Array<BeerMeetup>>(`${api}/beermeetup`);
  }

  deleteBM(beermeetup: BeerMeetup) {
    return this.http.delete(`${api}/beermeetup?uid=${beermeetup.uid}`);
  }

  addBM(beermeetup: BeerMeetup) {
    return this.http.post<BeerMeetup>(`${api}/beermeetup/`, beermeetup);
  }

  updateBM(beermeetup: BeerMeetup) {
    return this.http.put<BeerMeetup>(`${api}/beermeetup?uid=${beermeetup.uid}`, beermeetup);
  }
}

Lastly, we add the component:

  
import { Component, OnInit } from '@angular/core';

import { BeerMeetup } from './beermeetup';
import { BeerMeetupService } from './beermeetup.service';

@Component({
  selector: 'app-ohs',
  templateUrl: './beermeetup.component.html'
})
export class BeerMeetupComponent implements OnInit {
  addingBM = false;
  deleteButtonSelected = false;
  meetups: BeerMeetup[] = [];
  selectedBM: BeerMeetup;

  constructor(private beermeetupService: BeerMeetupService) { }

  ngOnInit() {
    this.getBM();
  }

  cancel() {
    this.addingBM = false;
    this.selectedBM = null;
  }

  deleteBM(meetup: BeerMeetup) {
    this.deleteButtonSelected = true;
    if (!confirm('Are you sure you want to delete this meetup?')) {
      return;
    }
    this.beermeetupService.deleteBM(meetup).subscribe(() => {
      this.meetups = this.meetups.filter(m => m !== meetup);
      if (this.selectedBM === meetup) {
        this.selectedBM = null;
      }
    });
  }

  getBM() {
    return this.beermeetupService.getBM().subscribe(meetups => {
      this.meetups = meetups;
    });
  }

  enableAddMode() {
    this.addingBM = true;
    this.selectedBM = new BeerMeetup();
  }

  onSelect(meetup: BeerMeetup) {
    if (!this.deleteButtonSelected) {
      this.addingBM = false;
      this.selectedBM = meetup;
    }
    this.deleteButtonSelected = false;
  }

  save() {
    if (this.addingBM) {
      this.beermeetupService.addBM(this.selectedBM).subscribe(created => {
        this.addingBM = false;
        this.selectedBM = null;
        this.meetups.push(created);
      });
    } else {
      this.beermeetupService.updateBM(this.selectedBM).subscribe(() => {
        this.addingBM = false;
        this.selectedBM = null;
      });
    }
  }
}

Once this is done, let's switch over to app.module.ts and ensure our component and service are registered:

  
import { BrowserModule } from '@angular/platform-browser';
import { NgModule } from '@angular/core';
import { FormsModule } from '@angular/forms';
import { HttpClientModule } from '@angular/common/http';

import { AppComponent } from './app.component';
import { BeerMeetupService } from './beermeetup.service';
import { BeerMeetupComponent } from './beermeetup.component';

@NgModule({
  declarations: [
    AppComponent,
    BeerMeetupComponent
  ],
  imports: [
    BrowserModule,
    FormsModule,
    HttpClientModule
  ],
  providers: [BeerMeetupService],
  bootstrap: [AppComponent]
})
export class AppModule { }



Now move to the Visual Studio Code terminal and run:

ng serve -o

Azure DevOps – Angular 5 with .Net WebApi

This article shows how to set up Azure DevOps CI/CD pipelines for a full-stack application. I use a multilayered solution that shows tasks based on users. Below is the technology stack used.

  • Frontend UI – Angular 5
  • Middleware APIs- ASP.NET WebAPI
  • RDBMS – SQL Server 2012
  • Dependency Injection – Unity
  • ORM – EntityFramework
  • Framework – .NET Core

I have the code hosted on GitHub as a public repo.

Our projects in the solution look like below:


First, create an Azure DevOps account in the Azure web portal: https://dev.azure.com

As for pricing, Azure DevOps is free for open source projects and small projects (up to five users).

I have created Tracker as a private project, which I will be using for my Azure CI/CD pipeline.

Once you create a project, there is an option on the left-hand side with a submenu showing files and their metadata.

After I click Files, I don't see any files yet, since I am using Git version control with the source hosted on GitHub. I run these two commands at the command prompt to push my latest changes:

git remote add origin https://github.com/varunmaggo/Tracker.git
git push -u origin --all

Now the code is pushed, and the next step is to create the Azure DevOps pipeline via the option on the left-hand side. We need to create the build and release pipelines separately.

Go to the Build option, where we need to create a build agent with multiple tasks (a YAML sketch of these steps follows the list):

  • Install the node package manager
  • Install the Angular CLI
  • Build packages for Angular
  • NuGet restore to install packages for the WebAPI solution
  • Run unit tests
  • Finally, publish artefacts
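
A YAML sketch of those build steps (the task names are standard Azure Pipelines tasks; the agent image and Angular project path are assumptions):

trigger:
- master

pool:
  vmImage: 'vs2017-win2016'

steps:
- task: NodeTool@0
  inputs:
    versionSpec: '10.x'
- script: npm install -g @angular/cli
  displayName: 'Install Angular CLI'
- script: npm install && ng build
  workingDirectory: 'Tracker.Web'   # assumed Angular project folder
  displayName: 'Build Angular packages'
- task: NuGetCommand@2
  inputs:
    restoreSolution: '**/*.sln'
- task: VSTest@2
  displayName: 'Run unit tests'
- task: PublishBuildArtifacts@1
  inputs:
    PathtoPublish: '$(Build.ArtifactStagingDirectory)'
    ArtifactName: 'drop'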

Once we are set with the build pipeline, we proceed to the release pipeline.

To keep it simple, I use a staging environment. In real-world scenarios, we can have dev, stage, and prod separately.

In the image below, we select the build source type, which is our build pipeline.

JWT Authentication for WebAPI

This post is about securing a WebAPI using JWT token-based authentication. JWT stands for JSON Web Tokens. JSON Web Tokens are an open, industry-standard method for representing claims securely between two parties. In token-based authentication, the user sends a username and password, and in exchange gets a token that can be used to authenticate requests.

A JWT token looks like:

Header.Payload.Signature

HEADER PAYLOAD SIGNATURE
AAAAAAAAAAAAA. BBBBBBBBBBBBBBBBB. CCCCCCCCCCCCC
<base64-encoded header>.<base64-encoded claims>.<base64-encoded signature>

.NET has built-in support for JWT tokens in the namespace below.

using System.IdentityModel.Tokens.Jwt;

JWT token has three sections:

  • Header: JSON format which is encoded as a base64
  • Claims: JSON format which is encoded as a base64.
  • Signature: Created and signed based on Header and Claims which is encoded as a base64.

In the project below, we will see how JWT token authentication is implemented.

Step 1 – A browser client sends an HTTP request with the username and password. This is validated using a WebAPI filter attribute:

AuthorizationFilterAttribute

Step 2 – The server validates the username and password and completes a handshake. Post handshake, the server generates the token and sends it to the client.

The code below generates the token for the user (client).


We need to add the below two NuGet packages from the NuGet Package Manager:

Install-Package Microsoft.IdentityModel.Tokens -Version 5.4.0   
Install-Package System.IdentityModel.Tokens.Jwt -Version 5.4.0
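
The generation code itself can be sketched as below (the secret, issuer, audience, and claims are illustrative placeholders):

using System;
using System.Collections.Generic;
using System.IdentityModel.Tokens.Jwt;
using System.Security.Claims;
using System.Text;
using Microsoft.IdentityModel.Tokens;

public static class TokenGenerator
{
    // Demo key only – keep real keys in configuration, at least 32 chars for HS256
    private const string Secret = "ThisIsADemoSigningKeyOfAtLeast32Chars!";

    public static string GenerateToken(string username)
    {
        var key = new SymmetricSecurityKey(Encoding.UTF8.GetBytes(Secret));
        var credentials = new SigningCredentials(key, SecurityAlgorithms.HmacSha256);

        var token = new JwtSecurityToken(
            issuer: "demo-issuer",
            audience: "demo-audience",
            claims: new List<Claim> { new Claim(ClaimTypes.Name, username) },
            expires: DateTime.UtcNow.AddHours(1),
            signingCredentials: credentials);

        return new JwtSecurityTokenHandler().WriteToken(token);
    }
}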

Step 3 – Check for token validation

We use the System.IdentityModel.Tokens.Jwt library for generating and validating tokens. To implement JWT in WebAPI, we create an authentication filter that executes before every request; it verifies the token in the request header and allows or denies the resource based on it.
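
A minimal validation sketch the filter could call (same illustrative secret, issuer, and audience as the generation code above):

using System.IdentityModel.Tokens.Jwt;
using System.Security.Claims;
using System.Text;
using Microsoft.IdentityModel.Tokens;

public static class TokenValidator
{
    public static ClaimsPrincipal Validate(string token)
    {
        var parameters = new TokenValidationParameters
        {
            ValidIssuer = "demo-issuer",
            ValidAudience = "demo-audience",
            IssuerSigningKey = new SymmetricSecurityKey(
                Encoding.UTF8.GetBytes("ThisIsADemoSigningKeyOfAtLeast32Chars!")),
            ValidateLifetime = true
        };

        // Throws a SecurityTokenException when the token is invalid or expired
        return new JwtSecurityTokenHandler()
            .ValidateToken(token, parameters, out SecurityToken validatedToken);
    }
}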