Zero Downtime Jenkins Continuous Deployment with Terraform on AWS

When your app’s next iteration is ready to deploy, you have two choices: either stop the entire application and deploy the new version manually every time or build an automated zero downtime CI/CD deployment pipeline once.

In this article, Toptal Freelance DevOps Engineer Gaurav Kohli demonstrates the latter using the Jenkins-powered continuous deployment pipeline of a three-tier web application built in Node.js, deployed on AWS Cloud, and using Terraform as an infrastructure orchestrator.


In today’s internet-driven world, where practically everything needs to be up 24/7, reliability is key. This translates into close to zero downtime for your websites, dodging the dreaded “404 Not Found” error page and other service disruptions while you roll out your newest release.

Suppose you’ve built a new application for your client, or maybe yourself, and have managed to get a good user base that likes your application. You’ve gathered feedback from your users, and you go to your developers and ask them to build new features and make the application ready for deployment. With that ready, you can either stop the entire application and deploy the new version or build a zero downtime CI/CD deployment pipeline which would do all the tedious work of pushing a new release to users without manual intervention.

In this article, we will focus on the latter: how to build a continuous deployment pipeline for a three-tier web application written in Node.js, deployed on AWS Cloud, with Terraform as the infrastructure orchestrator. We’ll be using Jenkins for the continuous deployment part and Bitbucket to host our codebase.

Code Repository

We will be using a demo three-tier web application for which you can find the code here.

The repo contains code for both the web and the API layer. It’s a simple application wherein the web module calls one of the endpoints in the API layer, which internally fetches information about the current time from the database and returns it to the web layer.

The structure of the repo is as follows:

  • API: Code for the API layer
  • Web: Code for the web layer
  • Terraform: Code for infrastructure orchestration using Terraform
  • Jenkins: Terraform code for provisioning the Jenkins server used for the CI/CD pipeline

Now that we understand what we need to deploy, let’s discuss what we have to do to deploy this application on AWS, and then we’ll talk about how to make that part of the CI/CD pipeline.

Baking Images

Since we are using Terraform as the infrastructure orchestrator, it makes the most sense to have prebaked images for each tier or application you want to deploy. For that, we will use another HashiCorp product: Packer.

Packer is an open-source tool that builds machine images, in our case an Amazon Machine Image (AMI) that will be used for deployment on AWS. It can build images for different platforms such as AWS EC2, VirtualBox, VMware, and others.

Here is a snippet of the Packer config file (terraform/packer-ami-api.json) used to create an AMI for the API layer:

{
  "builders": [{
    "type": "amazon-ebs",
    "region": "eu-west-1",
    "source_ami": "ami-844e0bf7",
    "instance_type": "t2.micro",
    "ssh_username": "ubuntu",
    "ami_name": "api-instance {{timestamp}}"
  }],
  "provisioners": [
    {
      "type": "shell",
      "inline": ["mkdir api", "sudo apt-get update", "sudo apt-get -y install npm nodejs-legacy"],
      "pause_before": "10s"
    },
    {
      "type": "file",
      "source": "../api/",
      "destination": "api"
    },
    {
      "type": "shell",
      "inline": ["cd api", "npm install"],
      "pause_before": "10s"
    }
  ]
}

And you need to run the following command to create the AMI:

packer build -machine-readable packer-ami-api.json

We will be running this command from the Jenkins build later in this article. In a similar fashion, we will be using the Packer config file (terraform/packer-ami-web.json) for the web layer as well.
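
The corresponding command for the web layer is the same, just pointing at the other config file:

packer build -machine-readable packer-ami-web.json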

Let’s go through the above Packer config file and understand what it is trying to do.

  1. As mentioned earlier, Packer can be used to build images for many platforms, and since we are deploying our application to AWS, we will use the “amazon-ebs” builder, as that is the easiest one to get started with.
  2. The second part of the config takes a list of provisioners, which are essentially scripts or code blocks you can use to configure your image.
    • Step 1 runs a shell provisioner to create an API folder and install Node.js on the image using the inline property, which is a set of commands you want to run.
    • Step 2 runs a file provisioner to copy our source code from the API folder onto the instance.
    • Step 3 again runs a shell provisioner, but this time uses a script property to specify a file (terraform/scripts/install_api_software.sh) with the commands that need to be run.
    • Step 4 copies to the instance a config file needed for CloudWatch, which is installed in the next step.
    • Step 5 runs a shell provisioner to install the AWS CloudWatch agent. The input to this command is the config file copied in the previous step. We’ll talk about CloudWatch in detail later in the article. (A sketch of how steps 3-5 might be declared follows this list.)
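
For reference, here is a minimal sketch of how provisioner steps 3-5 might be declared. Only install_api_software.sh is named in the repo; the CloudWatch-related file names below are assumptions used purely for illustration:

{
  "type": "shell",
  "script": "scripts/install_api_software.sh",
  "pause_before": "10s"
},
{
  "type": "file",
  "source": "scripts/cloudwatch-agent-config.json",
  "destination": "cloudwatch-agent-config.json"
},
{
  "type": "shell",
  "script": "scripts/install_cloudwatch_agent.sh",
  "pause_before": "10s"
}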

So, in essence, the Packer config contains info about which builder you want and then a set of provisioners which you can define in any order depending on how you want to configure your image.

Setting Up a Jenkins Continuous Deployment Server

Next, we will look into setting up a Jenkins server which will be used for our CI/CD pipeline. We will be using Terraform and AWS for setting this up as well.

The Terraform code for setting up Jenkins is inside the folder jenkins/setup. Let’s go through some of the interesting things about this setup.

  1. AWS credentials: You can either provide the AWS access key ID and secret access key to the Terraform AWS provider (instance.tf), or you can give the location of a credentials file to the shared_credentials_file property of the AWS provider.
  2. IAM role: Since we will be running Packer and Terraform from the Jenkins server, they will need access to the S3, EC2, RDS, IAM, load balancing, and autoscaling services on AWS. So we can either provide our credentials on Jenkins for Packer and Terraform to access these services, or we can create an IAM instance profile (iam.tf) and launch the Jenkins instance with it.
  3. Terraform state: Terraform has to maintain the state of the infrastructure in a file somewhere. With S3 (backend.tf), you can keep it there, so you can collaborate with coworkers, and anyone can change and deploy since the state is maintained in a remote location (see the sketch after this list).
  4. Public/private key pair: You will need to upload the public key of your key pair along with the instance so that you can SSH into the Jenkins instance once it is up. We have defined an aws_key_pair resource (key.tf) in which you specify the location of your public key using Terraform variables.
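
To illustrate, here is a minimal sketch of what the S3 backend and key pair declarations might look like. The bucket name and region match the setup below, while the state key, the resource name, and the PATH_TO_PUBLIC_KEY variable are assumptions:

terraform {
  backend "s3" {
    bucket = "node-aws-jenkins-terraform"
    key    = "jenkins/terraform.tfstate"
    region = "eu-west-1"
  }
}

resource "aws_key_pair" "mykeypair" {
  key_name   = "mykeypair"
  # Hypothetical variable pointing at your public key file
  public_key = "${file("${var.PATH_TO_PUBLIC_KEY}")}"
}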

Steps for setting up Jenkins:

Step 1: To keep the remote state of Terraform, you need to manually create a bucket in S3 which can be used by Terraform. This is the only step done outside of Terraform. Make sure you run aws configure before running the command below to specify your AWS credentials.

aws s3api create-bucket --bucket node-aws-jenkins-terraform --region eu-west-1 --create-bucket-configuration LocationConstraint=eu-west-1

Step 2: Run terraform init. This will initialize the Terraform state, configure it to be stored on S3, and download the AWS provider plugin.

Step 3: Run terraform apply. This will check all the Terraform code, create a plan, and show how many resources will be created once this step finishes.

Step 4: Type yes, and then the previous step will start creating all the resources. After the command finishes, you will get the public IP address of the Jenkins server.
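
Put together, steps 2-4 boil down to something like this (the jenkins/setup directory comes from the repo layout described earlier):

cd jenkins/setup
terraform init    # configures the S3 backend and downloads the AWS provider
terraform apply   # shows the plan; type yes to create the resources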

Step 5: SSH into the Jenkins server using your private key. ubuntu is the default username for Ubuntu AMIs, which is what this setup uses. Use the IP address returned by the terraform apply command.

ssh -i mykey ubuntu@34.245.4.73

Step 6: Open the Jenkins web UI by going to http://34.245.4.73:8080. The initial admin password can be found on the server at /var/lib/jenkins/secrets/initialAdminPassword.
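
You can print that password over the SSH session from the previous step; sudo is typically required because the file is owned by the jenkins user:

sudo cat /var/lib/jenkins/secrets/initialAdminPassword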

Step 7: Choose “Install Suggested Plugins” and Create an Admin user for Jenkins.

Setting Up the CI Pipeline Between Jenkins and Bitbucket

  1. For this, we need to install the Bitbucket plugin in Jenkins. Go to Manage Jenkins → Manage Plugins and install the Bitbucket plugin from the list of available plugins.
  2. On the Bitbucket repo side, go to Settings → Webhooks and add a new webhook pointing at your Jenkins server (see the example URL after this list). This hook will send all changes in the repository to Jenkins, which will trigger the pipelines.
    Adding a webhook to Jenkins continuous deployment via Bitbucket
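
With the Bitbucket plugin installed, Jenkins listens for these notifications on the plugin’s hook endpoint, so the webhook URL typically looks like the following (host and port taken from the earlier example; adjust them for your own Jenkins instance):

http://34.245.4.73:8080/bitbucket-hook/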

Jenkins Pipeline to Bake/Build Images

  1. The next step is to create pipelines in Jenkins.
  2. The first pipeline is a Freestyle project, which will be used to build the application AMIs using Packer.
  3. You need to specify the credentials and URL for your Bitbucket repository.
    Add credentials to Bitbucket
  4. Specify the build trigger.
    Configuring the build trigger
  5. Add two build steps, one to build the AMI for the API module and the other to build the AMI for the web module.
    Adding AMI build steps
  6. Once this is done, save the Jenkins project. Now, when you push anything to your Bitbucket repository, it will trigger a new build in Jenkins, which will create the AMIs and push a Terraform file containing the AMI ID of each image to the S3 bucket, as you can see from the last two lines of the build step (a fuller sketch of the whole step follows the snippet):
echo 'variable "WEB_INSTANCE_AMI" { default = "'${AMI_ID_WEB}'" }' > amivar_web.tf
aws s3 cp amivar_web.tf s3://node-aws-jenkins-terraform/amivar_web.tf
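
For reference, a full build step for the web module might look roughly like the sketch below. Only the last two lines appear verbatim in the project; the awk/cut parsing of Packer’s machine-readable output is a common pattern and is shown here as an assumption:

# Hypothetical Jenkins "Execute shell" build step for the web module
cd terraform
# Build the AMI and extract its ID from Packer's machine-readable output (assumed parsing)
AMI_ID_WEB=$(packer build -machine-readable packer-ami-web.json | awk -F, '$0 ~ /artifact,0,id/ {print $6}' | cut -d: -f2)
# Publish the AMI ID as a Terraform variable file for the deployment project
echo 'variable "WEB_INSTANCE_AMI" { default = "'${AMI_ID_WEB}'" }' > amivar_web.tf
aws s3 cp amivar_web.tf s3://node-aws-jenkins-terraform/amivar_web.tf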

Jenkins Pipeline to Trigger Terraform Script

Now that we have the AMIs for the API and web modules, we will trigger a build to run the Terraform code that sets up the entire application, and later we’ll go through the components of that Terraform code which let this pipeline deploy the changes with zero downtime.

  1. We create another Freestyle Jenkins project, nodejs-terraform, which will run the Terraform code to deploy the application.
  2. First, we create a “secret text” credential in the global credentials domain, which will be used as an input to the Terraform script. Since we don’t want to hard-code the password for the RDS service inside Terraform and Git, we pass that property using Jenkins credentials.
    Creating a secret for use with Terraform CI/CD
  3. You need to define the credentials and URL similarly to the other project.
  4. In the build trigger section, we link this project with the previous one so that this project starts when the previous one finishes.
    Link projects together
  5. Then we configure the credentials we added earlier using bindings, so they are available in the build step.
    Configuring bindings
  6. Now we are ready to add a build step, which will download the Terraform variable files (amivar_api.tf and amivar_web.tf) uploaded to S3 by the previous project and then run the Terraform code to build the entire application on AWS. A sketch of such a build step follows this list.
    Adding the build script
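
A minimal sketch of such a build step, assuming the RDS password is exposed to the shell as RDS_PASSWORD via the credentials binding above (the variable name and the -auto-approve flag are assumptions, not necessarily what the project uses):

cd terraform
# Fetch the AMI variable files produced by the image-baking project
aws s3 cp s3://node-aws-jenkins-terraform/amivar_api.tf amivar_api.tf
aws s3 cp s3://node-aws-jenkins-terraform/amivar_web.tf amivar_web.tf
# Deploy; RDS_PASSWORD comes from the Jenkins credentials binding
terraform init
terraform apply -auto-approve -var "RDS_PASSWORD=${RDS_PASSWORD}"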

If everything is configured properly, pushing any code to your Bitbucket repository should now trigger the first Jenkins project, followed by the second, and you should have your application deployed to AWS.

Terraform Zero Downtime Config for AWS

Now let’s discuss what it is in the Terraform code that makes this pipeline deploy the code with zero downtime.

The first thing to note is that Terraform provides lifecycle configuration blocks for resources, within which you have a create_before_destroy flag, which literally means that Terraform should create a new resource of the same type before destroying the current one.

We exploit this feature in the aws_autoscaling_group and aws_launch_configuration resources: aws_launch_configuration configures which type of EC2 instance should be provisioned and how software is installed on it, while the aws_autoscaling_group resource provides an AWS autoscaling group.

An interesting catch here is that every resource in Terraform must have a unique name and type combination, so unless the new aws_autoscaling_group and aws_launch_configuration get different names, it won’t be possible to create them before destroying the current ones.

Terraform handles this constraint by providing a name_prefix property on the aws_launch_configuration resource. Once this property is defined, Terraform adds a unique suffix to each aws_launch_configuration name, and you can then use that unique name to name the aws_autoscaling_group resource.

You can check the code for all of the above in terraform/autoscaling-api.tf:

resource "aws_launch_configuration" "api-launchconfig" {
  name_prefix          = "api-launchconfig-"
  image_id             = "${var.API_INSTANCE_AMI}"
  instance_type        = "t2.micro"
  security_groups      = ["${aws_security_group.api-instance.id}"]

  user_data = "${data.template_file.api-shell-script.rendered}"

  iam_instance_profile = "${aws_iam_instance_profile.CloudWatchAgentServerRole-instanceprofile.name}"

  connection {
    user = "${var.INSTANCE_USERNAME}"
    private_key = "${file("${var.PATH_TO_PRIVATE_KEY}")}"
  }

  lifecycle {
    create_before_destroy = true
  }

}

resource "aws_autoscaling_group" "api-autoscaling" {
  name = "${aws_launch_configuration.api-launchconfig.name}-asg"

  vpc_zone_identifier  = ["${aws_subnet.main-public-1.id}"]
  launch_configuration = "${aws_launch_configuration.api-launchconfig.name}"
  min_size             = 2
  max_size             = 2
  health_check_grace_period = 300
  health_check_type = "ELB"
  load_balancers = ["${aws_elb.api-elb.name}"]
  force_delete = true

  lifecycle {
    create_before_destroy = true
  }

  tag {
    key = "Name"
    value = "api ec2 instance"
    propagate_at_launch = true
  }
}

The second challenge with zero downtime deployments is making sure your new deployment is ready to start receiving requests. Just deploying and starting a new EC2 instance is not enough in some situations.

To solve that problem, aws_launch_configuration has a user_data property, which supports the native AWS autoscaling user_data mechanism, through which you can pass any script you would like to run at startup of new instances in the autoscaling group. In our example, we tail the log of the app server and wait for the startup message to appear. You could also check the HTTP server and see when it is up.

until tail /var/log/syslog | grep 'node ./bin/www' > /dev/null; do sleep 5; done
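
The user_data script itself is rendered from a template referenced by the launch configuration shown earlier (data.template_file.api-shell-script). A rough sketch of how that wiring might look, with the template file name and variables as assumptions:

data "template_file" "api-shell-script" {
  # Hypothetical template file containing the startup and wait logic shown above
  template = "${file("scripts/api-user-data.sh")}"

  vars {
    API_PORT = "${var.API_PORT}"
  }
}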

Along with that, you can enable an ELB health check at the aws_autoscaling_group resource level, which makes sure newly added instances pass the ELB check before Terraform destroys the old ones. This is what the ELB health check for the API layer looks like; it checks that the /api/status endpoint returns success.

resource "aws_elb" "api-elb" {
  name = "api-elb"
  subnets = ["${aws_subnet.main-public-1.id}"]
  security_groups = ["${aws_security_group.elb-securitygroup.id}"]
  listener {
    instance_port = "${var.API_PORT}"
    instance_protocol = "http"
    lb_port = 80
    lb_protocol = "http"
  }
  health_check {
    healthy_threshold = 2
    unhealthy_threshold = 2
    timeout = 3
    target = "HTTP:${var.API_PORT}/api/status"
    interval = 30
  }

  cross_zone_load_balancing = true
  connection_draining = true
  connection_draining_timeout = 400
  tags {
    Name = "my-elb"
  }
}

Summary and Next Steps

This brings us to the end of this article. Hopefully, by now, you either have your application deployed and running with a zero-downtime CI/CD pipeline using Jenkins and Terraform best practices, or you are slightly more comfortable exploring this territory and making your deployments require as little manual intervention as possible.

In this article, the deployment strategy used is called Blue-Green deployment: we have a current installation (Blue) which receives live traffic while we deploy and test the new version (Green), and then we switch over once the new version is ready. Aside from this strategy, there are other ways of deploying your application, which are explained nicely in the article Intro to Deployment Strategies. Adopting another strategy is now as simple as reconfiguring your Jenkins pipeline.

Also, in this article, I assumed that all the new changes in the API, web, and data layers are compatible, so you don’t have to worry about the new version talking to an older one. But, in reality, that might not always be the case. To solve that problem, while designing your new release/features, always think about a backward compatibility layer, or else you will need to tweak your deployments to handle that situation as well.

Integration testing is also missing from this deployment pipeline. As you don’t want anything released to end users without being tested, it’s definitely something to keep in mind when the time comes to apply these strategies to your own projects.

If you’re interested in learning more about how Terraform works and how you can deploy to AWS using the technology, I recommend Terraform AWS Cloud: Sane Infrastructure Management, where fellow Toptaler Radosław Szalski explains Terraform and then shows you the steps needed to configure a multi-environment, production-ready Terraform setup for a team.

Understanding the basics

  • What is Terraform?

    Terraform is a tool which makes it easy to write version-controlled infrastructure code. You can use it to orchestrate infrastructure on more than 100 different service providers, such as AWS, Alibaba Cloud, GCP, Azure, OpenStack, and many more.

  • What is CI/CD and what are its benefits?

    Continuous integration and continuous deployment (CI/CD) is a practice wherein you integrate and test your software on every code change. That code is then deployed to production.

    The main benefit is that it reduces manual work and the chances of human error during deployments.

  • What are different deployment strategies?

    Depending on your product and your technical implementation, you can choose a rolling strategy, recreate strategy, blue-green deployment, A/B testing, canary deployment, or shadow strategy.

  • What is Packer?

    Packer is a tool which makes it easy to build machine images for different platforms, such as AWS EC2, VirtualBox, and VMware.

  • What can you do with Jenkins?

    Jenkins is a continuous integration tool which enables software teams to build integration pipelines for their projects. You can customize your Jenkins-powered pipelines to include different software development processes like building, testing, and staging, as well as performing static analysis of your code.

  • Is Terraform a language?

    No. Terraform is a tool which uses the HashiCorp Configuration Language (HCL) to describe your infrastructure as code. HCL is a declarative language: you define the desired state, not the steps needed to get there.
