Infrastructure as Code

CI/CD stands for Continuous Integration/Continuous Deployment, which is a software development approach that emphasizes the automation of building, testing, and deploying code changes to production as quickly and frequently as possible. CI/CD aims to streamline the software development process, increase collaboration among team members, and reduce the risk of errors and conflicts during development and deployment.

CI/CD flow

The key phrases from the description above are “automation” and “deploying changes” to production “quickly” and “frequently.” Infrastructure as Code supports CI/CD processes by deploying development, test, QA, or production infrastructure environments using code. More importantly, these can be ad-hoc environments: except for production, which most likely needs to run 24/7, all the other environments can be quickly built when required, just as quickly dismantled, and left destroyed until the underlying infrastructure is needed again.

Software engineers care about the application’s code, not so much about the underlying infrastructure that supports the application. Management, however, cares very much about cost, and the security team cares about the security of that underlying infrastructure. This is where IaC comes to the rescue: it can deploy infrastructure through CI/CD processes in minutes, using a repeatable, version-controlled, pre-approved, and security-compliant configuration.

This is where “Infrastructure as Code (IaC)” benefits everyone.

Benefits of IaC include:

  • Consistency: IaC enables consistent and repeatable infrastructure provisioning across multiple environments and deployments.
  • Scalability: IaC makes it easy to scale infrastructure up or down as needed, based on changes in demand or usage patterns.
  • Automation: IaC reduces manual efforts and errors by automating repetitive tasks such as configuration and deployment.
  • Version control: IaC allows infrastructure changes to be version-controlled, making it easy to track changes, roll back to previous versions, and collaborate among team members.
  • Cost savings: IaC can reduce infrastructure costs by enabling developers and system administrators to optimize infrastructure usage and avoid overprovisioning.

Infrastructure as code (IaC) is a software development practice that involves managing and provisioning IT infrastructure using code and automation tools. Instead of manually setting up and configuring servers, networks, and other infrastructure components, IaC enables developers and system administrators to define their infrastructure in code, which can be version-controlled, tested, and deployed like any other software.

Believe it or not, IaC also gives developers, administrators, and engineers more freedom during the development or testing of new systems. IaC allows developers and engineers to deploy pre-approved infrastructure in support of their development and test environments quickly, efficiently, and without manual deployment methods; it can be completely automated. IaC infrastructure deployments can even be included as a procedural step in tools such as Jenkins, CircleCI, GitLab, etc.

Reduction in operating expense
Infrastructure as Code (IaC) can create a complete infrastructure in minutes in a repeatable, consistent, and agreed configuration. In contrast, manual configuration takes far longer and is prone to errors. Engineers using agreed IaC platforms also reduce the probability of deploying infrastructure that is not required or is improperly sized.

Better use of Time (Manpower cost savings)
An automated installation allows everyone involved to focus on critical, high-value tasks instead of spending half a day manually setting up infrastructure. Worse, manually built infrastructure tends to become permanent: because manual setup is error-prone and time-consuming, there is a tendency to leave it running. With IaC, infrastructure can be deployed only when required and destroyed until it is needed again.

Disposable Environments (CapEx cost savings)
By improving velocity, IaC makes building infrastructure more efficient, allowing someone to quickly set up a complete environment in minutes (not hours).

Terraform (& Ansible) to the rescue

An article by one of the engineers at Gruntwork.io is an excellent read on why Gruntwork uses Terraform as opposed to other tools. It logically walks through choosing between Chef, Puppet, Ansible, SaltStack, CloudFormation, and Terraform.

First, let me state for the record, this process does not belong exclusively to software development (not by a long shot). Just having the ability for any IT department to create test and production environments that utilize a documented, repeatable, standard configuration, and easily migrated from test into production, in my opinion, has to be attractive to any IT shop.

“Infrastructure as Code” scripts work with the most popular cloud platforms and on-premise platforms. Note: While Infrastructure as Code works with many platforms, the scripts are not automatically transferable from one platform to another platform. 

Terraform is platform-agnostic; you can use it to manage bare-metal servers or cloud servers on AWS, Google Cloud Platform, OpenStack, and Azure, or on-premises private clouds such as VMware vSphere, OpenStack, or CloudStack. In Terraform lingo, the supported platforms are called providers.

Terraform and Ansible empower conventional businesses, software development businesses, and small startups alike to deploy standardized, immutable, and repeatable infrastructure into an on-premises data center or cloud environment using Infrastructure as Code. The code is placed under configuration management and stored in a repository so that every engineer can deploy the same infrastructure configuration from development through QA testing and release into production.

AWS Auto-Scaling Groups

Deploy an Auto Scale Group and Application Load Balancer in AWS

AWS no longer provides a NAT AMI. Because this exercise relies on AWS NAT AMIs, it will not work at this time.

Application Load Balancer

This exercise will demonstrate using Terraform to deploy an AWS Auto Scaling Group and an application load balancer.

A simple website that shows EC2 Instance

I have created a simple HTML page that displays some information about the AWS EC2 instance hosting it. When you connect to our load balancer, the load balancer routes you to one of the EC2 instances within the auto-scaling group, and the web page shows that server’s details. If you close the web page and reconnect to the load balancer, you will most likely see different host details, demonstrating that the load balancer distributes connections across different servers.
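The page source itself is not reproduced here, but as a hypothetical sketch, a bootstrap script could build such a page from the EC2 instance metadata service (the file path and page layout are assumptions, not the repository’s actual code):

```shell
#!/bin/bash
# Hypothetical sketch: build a status page from EC2 instance metadata.
# Assumes Apache is installed and serving /var/www/html.
ID=$(curl -s http://169.254.169.254/latest/meta-data/instance-id)
IP=$(curl -s http://169.254.169.254/latest/meta-data/local-ipv4)
AZ=$(curl -s http://169.254.169.254/latest/meta-data/placement/availability-zone)

cat <<EOF > /var/www/html/index.html
<h1>Served by $(hostname)</h1>
<p>Instance: $ID | Private IP: $IP | AZ: $AZ</p>
EOF
```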

The web page will look something like the following:

ALB – Application Load Balancer

A bit of information on AWS load balancers first:

  • Classic Load Balancer – Layer 4/7 (HTTP/TCP/SSL traffic)
  • Network Load Balancer – Layer 4 (TLS/TCP/UDP traffic)
  • Application Load Balancer – Layer 7 (HTTP/HTTPS traffic)

Classic Load Balancer (CLB) – AWS recommends that you do not use their classic load balancer. The classic load balancer will eventually be deprecated.

Network Load Balancer (NLB) – The network load balancer works at layers 3 & 4 (network and transport layers). The NLB only cares about TLS, TCP, or UDP traffic and port numbers; it simply forwards requests, whereas the application load balancer examines the contents of the HTTP request header to determine where to route them. In other words, the NLB distributes traffic based on network variables, such as IP address and destination port.

Operating at layer 4 (TCP) and below, it is not designed to consider anything at the application layer, such as content type, cookie data, custom headers, user location, or application behavior.

Application Load Balancer (ALB) – The application load balancer distributes requests based on multiple variables, from the network layer to the application layer. The ALB can route HTTP and HTTPS traffic based on host- or path-based rules. Like an NLB, each target can be on a different port.

The NLB bases its route decisions solely on network and TCP-layer variables and has no awareness of the application. Generally, a network load balancer will determine “availability” based on the ability of a server to respond to ICMP ping or to complete the three-way TCP handshake correctly. Whereas, an application load balancer goes much deeper and can determine availability based on a successful HTTP GET of a particular page and the verification that the content is as expected based on the input parameters.

ASG – Auto Scaling Group

Using an Auto-Scaling Group (ASG), the web servers can automatically scale up or down according to the load on the servers. However, this is a demonstration and not written for production deployments, so this code does not scale servers based on demand; instead, it is written to scale to a desired capacity.

I also have some code about using Terraform to deploy a Load Balanced WordPress Server with ASG and EFS as the persistent storage. That will probably be my next post.

This code can be found in my GitHub repository.


Features

Application Load Balancer – to distribute load among more than one server
Auto Scaling Group – with launch template and ELB health check
Simple web servers – that display EC2 instance data such as Region, ID, and IP address
Using Terraform to deploy infrastructure as code into the AWS cloud

All resources created in this exercise are compliant with an AWS Free Tier Plan

The resources are free only if you don’t leave them running! There is a limit of EC2 hours allowed per month!

You might incur a charge if you leave the Application Load Balancer running for very long. I usually spin this up, prove that it works for about 10 minutes, then run “terraform destroy” to ensure I’ve accomplished this exercise for free.

This exercise will perform the following tasks:

  • Create a VPC with two public and two private subnets
  • Create NAT instances (instead of a NAT Gateway), security groups, and network routing
  • Create an Auto Scaling Group with a launch template
    • The Auto Scaling Group will create EC2 instances running Apache Webpage
      • I created a webpage displaying EC2 ID, EC2 hostname, Region, and private IP address. It will demonstrate which EC2 server you connect to via the load balancer by showing its unique IP address.
  • Create an Application Load Balancer that automatically registers the EC2 servers created by the Auto Scaling Group

Requirements

So let’s get started

In previous exercises, I demonstrated Terraform using modular code. In this exercise, the code will not be modular. All of the code will be placed in one folder.

So first, create your folder to place our code, a folder named “ALB-Website,” perhaps?

Building the VPC

You do not need to create a Terraform remote state for this exercise. However, as a best practice, I use an S3 bucket to hold Terraform’s remote state. I will also write code that provides output data so that, if I need a jump server to troubleshoot an EC2 server in the private network, I can use my modular code to deploy a server I call “the Controller” (a jump server).

vpc.tf
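The full vpc.tf is in the repository; as a minimal sketch of its core (the CIDR blocks, availability zones, and resource names here are assumptions):

```hcl
resource "aws_vpc" "main" {
  cidr_block           = "10.0.0.0/16"
  enable_dns_hostnames = true

  tags = { Name = "ALB-Website" }
}

resource "aws_internet_gateway" "igw" {
  vpc_id = aws_vpc.main.id
}

# Two public subnets, one per availability zone
resource "aws_subnet" "public-1" {
  vpc_id                  = aws_vpc.main.id
  cidr_block              = "10.0.1.0/24"
  availability_zone       = "us-west-1a"
  map_public_ip_on_launch = true
}

resource "aws_subnet" "public-2" {
  vpc_id                  = aws_vpc.main.id
  cidr_block              = "10.0.2.0/24"
  availability_zone       = "us-west-1c"
  map_public_ip_on_launch = true
}

# Two private subnets for the web servers
resource "aws_subnet" "private-1" {
  vpc_id            = aws_vpc.main.id
  cidr_block        = "10.0.3.0/24"
  availability_zone = "us-west-1a"
}

resource "aws_subnet" "private-2" {
  vpc_id            = aws_vpc.main.id
  cidr_block        = "10.0.4.0/24"
  availability_zone = "us-west-1c"
}
```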

variables.tf

In the “variables.tf” file, we usually declare a default value for each variable. For this exercise, though, I’m creating a “terraform.tfvars” file. This allows us to add “terraform.tfvars” to our “.gitignore” file, which lists files that Git should not publish to GitHub. We do this so that certain values are not made public. By listing “terraform.tfvars” in “.gitignore”, we tell Git not to publish that file, so we can safely assign sensitive values to our variables there, such as my public IP address, which I really don’t want to be publicly available on GitHub.

Terraform will, by default, look for a “terraform.tfvars” file in your folder. When declared variables do not include a default assignment (as is the case above), their values must be supplied there.

terraform.tfvars
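The repository’s tfvars file holds real values; a hypothetical example (the variable names and values below are assumptions) looks like this:

```hcl
# Hypothetical example values -- replace with your own (this file stays out of Git)
region       = "us-west-1"
my_public_ip = "203.0.113.25/32"   # your workstation's public IP, for SSH access
key_name     = "my-ec2-keypair"    # an existing EC2 key pair in your account
```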

security_groups.tf
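As a sketch of what the security groups file typically contains (the names and rules are assumptions; the real file is in the repository), an ALB security group admitting HTTP from anywhere might look like:

```hcl
resource "aws_security_group" "alb" {
  name   = "website-alb-sg"
  vpc_id = aws_vpc.main.id

  # Allow HTTP from anywhere to the load balancer
  ingress {
    from_port   = 80
    to_port     = 80
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  # Allow all outbound traffic
  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }
}
```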

bootstrap_nat.sh

Lines 24-33 query the AWS API to retrieve the latest AMI for an EC2 instance configured as a NAT server. You will notice that at first I retrieved the AMI manually and hardcoded the AMI ID (using the method in the following paragraph); then I realized that all I have to do is query the AWS API for the latest AWS NAT instance AMI ID.

Note: How to manually get an AMI ID for an AWS NAT server:
To find a “NAT AMI” for your AWS region, open the AWS Management Console and go to the EC2 service. Select the AWS region of your choice in the menu bar. In the left-hand panel, click on “AMIs” under Amazon Machine Images. Select “Public Images,” filter on “amzn-ami-vpc-nat,” find the most recent creation date, and copy the AMI ID to use as your NAT AMI image for your selected region.
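The automated lookup described above can be expressed as a Terraform data source; a sketch (keeping in mind that Amazon no longer publishes new NAT AMIs, so this may find only old images, or none, in some regions):

```hcl
# Query AWS for the most recent Amazon-provided NAT instance AMI
data "aws_ami" "nat" {
  most_recent = true
  owners      = ["amazon"]

  filter {
    name   = "name"
    values = ["amzn-ami-vpc-nat-*"]
  }

  filter {
    name   = "virtualization-type"
    values = ["hvm"]
  }
}
```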


Creating the Auto-Scaling Group

One of the early decisions in using an Auto-Scaling Group (ASG) is how the ASG will determine “load” in order to scale up or down according to the load on our application. I’m not going to cover the different health checks used to determine load in this exercise; that will be a post for a later date. Suffice it to say that I’ve selected Elastic Load Balancing health checks, which check whether the load balancer reports an instance as healthy, confirming that the instance is available to handle requests to our website.

Building the Auto Scaling Group Code

First, we need to declare to Terraform that we are creating an ASG resource and give our ASG a name.

Then declare the health check and parameters of our health check.

You’ll notice we’ve stated the health check type as “ELB” and provided the ALB target group name and ARN (the ALB target group Terraform code is below). We have also selected 300 seconds (5 minutes) as a grace period. And finally, “force_delete = true”: we are telling AWS that if any of our website servers are unhealthy for more than 5 minutes, it should delete the server, which then causes the ASG to build another server to meet our desired capacity.

Our next step is to declare ASG sizing by stating the minimum, maximum, and desired number of servers.

Next, and perhaps the most essential part of our ASG code, is to declare whether we are using a launch configuration or a launch template. We are going to use a launch template. Amazon Web Services recommends launch templates but still supports (at the time of this writing) launch configurations.

We need to inform Terraform which launch template to use. We’ll use the “$Latest” version in this case, as we have only one template version. Specifying a particular version of the template is useful for Blue/Green deployments, for example.

Now, let’s put all of this together.

asg.tf
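Putting the pieces described above together, a sketch of the ASG resource (the target group, launch template, and subnet names are assumptions):

```hcl
resource "aws_autoscaling_group" "website" {
  name_prefix         = "website-asg-"
  vpc_zone_identifier = [aws_subnet.private-1.id, aws_subnet.private-2.id]

  # Sizing: minimum, maximum, and desired number of servers
  min_size         = 2
  max_size         = 4
  desired_capacity = 2

  # Health comes from the load balancer; servers unhealthy past the
  # grace period are deleted and replaced to meet desired capacity
  health_check_type         = "ELB"
  health_check_grace_period = 300
  force_delete              = true
  target_group_arns         = [aws_lb_target_group.website.arn]

  launch_template {
    id      = aws_launch_template.website.id
    version = "$Latest"
  }
}
```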

Building the Launch Template

I always throw in a bit of code to obtain the latest data for an AMI (in this case, Ubuntu vs. Amazon Linux). This sets up the ability for Terraform to query AWS API for data regarding the latest regional AMI image to use in a Launch template.

We first declare the type of resource and the resource name.

Then we declare the image ID, instance type, lifecycle rule, and security groups. Note the usage of “data.aws_ssm_parameter”, which obtains the AMI by telling Terraform to query the AWS API’s SSM parameters for the AMI ID. The EC2 key pair is not required in the launch template; however, I always include the key name so that I can SSH into the servers if I need to troubleshoot the deployment.

We, of course, need to configure our servers

We’ll be using launch templates to configure our servers. For an ASG, the launch template’s user-data must be base64-encoded. So in this example, we add a line that tells Terraform to render a template file and encode it with Base64:

Next up is the reference to a template file with the extension “.tpl”. A bootstrap.tpl looks the same as a simple shell script like bootstrap.sh; the contents are identical. The difference is how Terraform handles the file in a launch template: the “.tpl” extension allows us to pass external variables into the script (which I will demonstrate in my next exercise).
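A sketch of the launch template described above (the SSM parameter path, variable names, and security-group name are assumptions; the path shown is AWS’s public parameter for the latest Amazon Linux 2 AMI):

```hcl
# Ask AWS for the latest Amazon Linux 2 AMI ID via a public SSM parameter
data "aws_ssm_parameter" "ami" {
  name = "/aws/service/ami-amazon-linux-latest/amzn2-ami-hvm-x86_64-gp2"
}

resource "aws_launch_template" "website" {
  name_prefix            = "website-"
  image_id               = data.aws_ssm_parameter.ami.value
  instance_type          = "t2.micro"
  key_name               = var.key_name   # optional, but handy for SSH troubleshooting
  vpc_security_group_ids = [aws_security_group.web.id]

  # The launch template API requires user data to be base64-encoded;
  # templatefile() renders bootstrap.tpl first
  user_data = base64encode(templatefile("${path.module}/bootstrap.tpl", {}))

  lifecycle {
    create_before_destroy = true
  }
}
```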

We need an IAM policy for our server

I have created some code to create an HTML page for our auto-scaled Apache Web Servers. The code will allow our Web server to show information about the host server, specifically the EC2 details. The launched server web page is just a few lines that display EC2 attributes. The HTML will show the region, the AMI ID, the server’s hostname, and the IP address. Seeing host server information will demonstrate which server our browser has been connected to via our load balancer.

So first, let’s create the IAM policy that enables our servers to describe the EC2 host details. I have two JSON files: one creates a role that our EC2 servers can assume for the service “ec2.amazonaws.com”; the second creates the IAM policy allowing the action “ec2:Describe*”. Our Terraform code below creates the role, the policy, and the instance profile for our EC2 servers.
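A sketch of that Terraform code, with the two JSON policies inlined via jsonencode for brevity (the resource names are assumptions):

```hcl
resource "aws_iam_role" "ec2_describe" {
  name = "website-ec2-describe"

  # Allow EC2 instances to assume this role
  assume_role_policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Effect    = "Allow"
      Action    = "sts:AssumeRole"
      Principal = { Service = "ec2.amazonaws.com" }
    }]
  })
}

resource "aws_iam_role_policy" "ec2_describe" {
  name = "website-ec2-describe"
  role = aws_iam_role.ec2_describe.id

  # Permit the servers to read EC2 details for the status page
  policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Effect   = "Allow"
      Action   = "ec2:Describe*"
      Resource = "*"
    }]
  })
}

resource "aws_iam_instance_profile" "website" {
  name = "website-profile"
  role = aws_iam_role.ec2_describe.name
}
```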

Now let’s put this all together (asg.tf)


Create the Application Load Balancer (ALB)

To build our application load balancer, we need to create several key elements:

  • ALB
  • ALB listener
  • Target

ALB

The Terraform code to create an Application Load Balancer requires several key components:

  • Resource type and name
    • resource “aws_lb” “website-alb”
  • Type of load balancer (an application load balancer in this exercise)
    • “load_balancer_type = “application”
  • Name of the load balancer in AWS; this must be unique, so we have a random string generator in our code and append the string to the ALB name
  • Public or private load balancer (internal vs. external)
    • “internal = false” (so it will be publicly addressable from the internet)
  • Subnets to host our load balancer (since this will be a public-facing load balancer, Terraform will place it into our public subnets)
    • “subnets = [aws_subnet.public-1.id, aws_subnet.public-2.id]”
  • The security group or groups for our load balancer

alb.tf
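Assembled from the bullet points above, a sketch of the ALB (the random-string resource and security-group name are assumptions):

```hcl
resource "random_string" "alb_suffix" {
  length  = 6
  special = false
  upper   = false
}

resource "aws_lb" "website-alb" {
  # Append a random suffix so the name is unique in AWS
  name               = "website-alb-${random_string.alb_suffix.result}"
  load_balancer_type = "application"
  internal           = false
  subnets            = [aws_subnet.public-1.id, aws_subnet.public-2.id]
  security_groups    = [aws_security_group.alb.id]
}
```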

ALB Listener

Key components of our listener:

  • resource type and name
    • resource “aws_lb_listener” “website-alb-listener”
  • Make reference to which ALB is using this “listener” setup
    • “load_balancer_arn = aws_lb.website-alb.arn”
  • Specify port and protocol
  • Default action (forward, redirect, fixed response, and/or authenticate)
    • Some of these actions can be combined, for example, forward and authenticate. This exercise will be using a simple webpage, so we will simply forward requests to our Web servers
    • We will use the action “forward” and set stickiness to false. Stickiness is when our ALB sends clients to the same server in the auto-scaling group in case the client gets disconnected. Since this is a simple demonstration, we don’t care which server a user is connected to when hitting refresh or reconnecting to our servers.

alb_listener.tf
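A sketch of the listener (the target group name is an assumption):

```hcl
resource "aws_lb_listener" "website-alb-listener" {
  load_balancer_arn = aws_lb.website-alb.arn
  port              = 80
  protocol          = "HTTP"

  # Simply forward requests to the target group of web servers
  default_action {
    type             = "forward"
    target_group_arn = aws_lb_target_group.website.arn
  }
}
```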

The ALB target

When you create a target group, you specify its target type, which determines the type of target you specify when registering targets with this target group. After you create a target group, you cannot change its target type.

The following are the possible target types:

  • Instance (The targets are specified by instance ID)
  • IP (The targets are IP addresses)
  • Lambda (The target is a Lambda function)
  • Use the attachment function (in our case to an Auto-scaling Group ARN)

We could simply place a couple of servers in one or more availability zones and list their instance IDs or IP addresses. Or we could list the CIDR blocks of one or more private subnets, making the target any server in those subnets. You could do this, but in our exercise, we want to use an auto-scaling group (ASG) and have the ALB health checks work with the ASG to rebuild servers if they become unhealthy. Therefore, we do not want to point at IP addresses or instance IDs.

alb_target.tf
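A sketch of the target group (the name, ports, and health-check settings are assumptions). The ASG registers its instances against this group, so no instance IDs or IPs are listed, and stickiness is disabled so reconnecting lands you on different servers:

```hcl
resource "aws_lb_target_group" "website" {
  name     = "website-tg"
  port     = 80
  protocol = "HTTP"
  vpc_id   = aws_vpc.main.id

  health_check {
    path    = "/"
    matcher = "200"
  }

  # No stickiness: we want reconnects to reach different servers
  stickiness {
    type    = "lb_cookie"
    enabled = false
  }
}
```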

output.tf
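The output file exposes, at minimum, the load balancer’s DNS name used later in this exercise; a sketch:

```hcl
output "lb_dns_name" {
  description = "Public DNS name of the application load balancer"
  value       = aws_lb.website-alb.dns_name
}
```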

Deploy our Resources using Terraform

Be sure to edit the variables in terraform.tfvars (currently, it has bogus values)

If you are placing this into any other region than us-west-1, you will have to change the AMI ID for the NAT instances in the file “vpc.tf”.

In your terminal, go to the VPC folder and execute the following commands:

  1. terraform init
  2. terraform validate
  3. terraform apply

Once the deployment is successful, the terminal will output something like the following output:

Copy the lb_dns_name, without the quotes, and paste the DNS name into any browser. If you have followed along and placed all of the code correctly, you should see something like the following:

Screen Shot

Note: Sometimes servers in an ASG take a few minutes to configure. If you get an error from the website, wait a couple of minutes and try again.

Open up your AWS Management Console and go to the EC2 dashboard. Configure the dashboard to show the tag column “Name.” A great way to identify your resources is by using tags!

If you have configured the dashboard to display the “Name” tag column, you should quickly be able to see TWO NAT instances named “Test-NAT1” and “Test-NAT2” and TWO servers named “Website_ASG”.

As an experiment, perhaps you would like to expand the number of Web servers. We can manually change the desired capacity, and the Auto Scaling Group will automatically scale the number of servers up or down to match.

The AWS CLI command is as follows:
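The original command is not reproduced here; a reconstruction using the standard AWS CLI call for changing an ASG’s desired capacity:

```shell
aws autoscaling set-desired-capacity \
    --auto-scaling-group-name ASG_Name \
    --desired-capacity 4
```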

Where ASG_Name in the command line above is the name of your Auto Scaling Group (taken from the Terraform output, without the quotes, of course). If you successfully execute the command in your terminal, you should eventually see four instances with the tag name “Website_ASG” in the EC2 dashboard, demonstrating our ability to manually change the number of servers from two to four.

Once completed with this exercise, feel free to remove all resources by issuing the following command in the terminal:

  • terraform destroy

This is not for production!

All public websites should have an application firewall between the Web server and its internet connection; this exercise doesn’t create one. So do not use this configuration for production.

All websites should have monitoring and a method to scrape log events to detect and alert for potential problems with the deployment.

This exercise uses resources compatible with the AWS Free Tier plan. It does not have sufficient compute sizing to support a production workload.

It is a good idea to remove all resources when you have completed this exercise so as not to incur costs.

AWS Classic Load balancer

Using Infrastructure as Code with Terraform to create an AWS Load-balanced website

OOPS: Things change. The code in GitHub was completely operational; now it doesn’t work, because it was based on Amazon NAT AMIs that are no longer available.

All of the Terraform code for this exercise is in my GitHub repository.

Features

  • AWS Classic Load Balancer
  • VPC using NAT instances instead of NAT gateways
  • Docker Containers running on EC2 instances

This exercise creates a load-balanced website (similar to the previous exercise) but with essential differences (NAT Instances instead of NAT gateway and using Docker container instead of a custom AMI as a web server).

  • AWS as a cloud provider
  • Compliant with the Free Tier plan
  • Using Terraform to create the deployment Infrastructure as Code
  • The ability to provision resources into AWS using “modular code”
  • Four Web Servers behind a Classic load balancer
  • Ability to launch or destroy bastion host (jump server) only when required
    • Can add/remove bastion host (jump server) at any time without impact to other resources (Bastion Hosts – Provides administrators SSH access to servers located in a private network)

Difference – NAT Instance instead of NAT gateway

One of the differences between this code and the code sample in the previous exercise is that we’ll use NAT instances instead of a NAT gateway. A NAT gateway incurs costs even when using AWS under a Free Tier plan. It might only be a dollar or two per day, but it is still a cost. So, just for grins, I’ve created a VPC that uses AWS NAT instances to save a couple of dollars. A NAT instance does not match the performance of an AWS NAT gateway, so it is probably not a good solution for production. But considering we are simply running test environments, a NAT instance that performs a bit slower and saves a few dollars is fine with me!
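As a sketch of the difference (resource names are assumptions), a NAT instance is an ordinary EC2 instance with source/destination checking disabled, and the private subnet’s route table points at its network interface instead of a NAT gateway:

```hcl
resource "aws_instance" "nat1" {
  ami                    = data.aws_ami.nat.id
  instance_type          = "t2.micro"
  subnet_id              = aws_subnet.public-1.id
  vpc_security_group_ids = [aws_security_group.nat.id]
  source_dest_check      = false   # required so the instance can forward traffic

  tags = { Name = "Test-NAT1" }
}

resource "aws_route_table" "private-1" {
  vpc_id = aws_vpc.main.id

  # Default route for the private subnet goes through the NAT instance
  route {
    cidr_block           = "0.0.0.0/0"
    network_interface_id = aws_instance.nat1.primary_network_interface_id
  }
}
```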

Docker-based website

In the previous exercise, we used a custom AMI saved into our EC2 AMI library. A custom-built AMI works well because it allows us to customize an EC2 instance with our application and configuration and save it as a dedicated AMI image in our AWS account. A custom AMI enables greater control from a release management standpoint because our team has control of the composition of an AMI image.

However, creating a custom AMI and then saving an AMI into our EC2 library produces costs even when using a Free Tier plan. While it is great to use a custom AMI, it’s also essential to save money when we are simply studying AWS deployments within a Free Tier plan.

Docker to the rescue. We can create a custom docker container with our specific application and/or configuration like a custom AMI.

We will be using a boot script to install Docker and launch a Docker container, saving costs by not using a custom AMI image.

I’ve created a few websites (to use as docker containers). These containers utilize website templates that are free to use under a Creative Commons license. We’ll use one of my docker containers in this exercise with the intent to eventually jump into using docker containers in ECS and EKS deployments in future activities.

The change from NAT gateway to NAT instance has an impact on our VPC configuration.

VPC Changes

  1. We will use standard Terraform AWS resource code instead of a Terraform module to create the VPC.
  2. We also had to change the security groups code from Terraform modules to Terraform resource code, along with the methods of referencing AWS resources instead of modules.
  3. The Terraform outputs had to change as well to reflect the above.

ELB changes

  1. We will use standard Terraform AWS resource code instead of the Terraform community module to create a classic load balancer.

Requirements

Note:

If you performed the previous exercise, you might be tempted to reuse the same VPC code. Unfortunately, since we are using NAT instances instead of a NAT gateway, we need new code to create this VPC. The other modules in this exercise are explicitly written with references to this type of VPC, found below.

So let us get started

First, please create the following folder structure shown below.

VPC

The following code “vpc.tf”, “var.tf”, and “security_groups.tf” will be created and placed into the VPC folder.

The code below creates a VPC, two public subnets, two private subnets, two NAT instances (one for each public subnet), routing for the public subnets, and routing for the private subnets.

Create the VPC code file “vpc.tf”

Variables for VPC module (var.tf)

Security Groups (security_groups.tf)

Outputs for the VPC module (output.tf)

Code for Classic Load Balancer and Docker web servers (ELB-Web.tf)

The following code, “elb-web.tf”, “var.tf”, and “bootstrap_docker.sh”, will create an AWS classic load balancer and four web servers (two in each public subnet). These files must be placed into a separate folder, as the code is written to be modular and obtains data from the Terraform remote state outputs. It literally will not work if placed into the same folder as the VPC code.
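A sketch of the classic load balancer and web servers (the variable names are assumptions; the real code reads these values from the VPC remote state):

```hcl
resource "aws_elb" "web" {
  name            = "docker-web-elb"
  subnets         = var.public_subnet_ids
  security_groups = [var.elb_security_group_id]

  listener {
    instance_port     = 80
    instance_protocol = "http"
    lb_port           = 80
    lb_protocol       = "http"
  }

  health_check {
    target              = "HTTP:80/"
    interval            = 30
    timeout             = 5
    healthy_threshold   = 2
    unhealthy_threshold = 2
  }

  # Register the four web servers with the load balancer
  instances = aws_instance.web[*].id
}

# Four web servers, two per public subnet
resource "aws_instance" "web" {
  count                  = 4
  ami                    = var.ami_id
  instance_type          = "t2.micro"
  subnet_id              = element(var.public_subnet_ids, count.index % 2)
  vpc_security_group_ids = [var.web_security_group_id]
  user_data              = file("${path.module}/bootstrap_docker.sh")
}
```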

The load-balanced web servers will be running a Docker container as a web server.

Variables for ELB-Web (variables.tf)

Bootstrap to install and run Docker container (file name “bootstrap_docker.sh”)

#!/bin/bash
sudo yum -y update
sudo amazon-linux-extras install -y docker
sudo usermod -a -G docker ec2-user
sudo systemctl start docker

sudo docker run -d --name mywebsite -p 80:80 surfingjoe/mywebsite:latest
sudo hostnamectl set-hostname Docker-server

Controller

You do not even need the following code for the load-balanced web servers to work. But because the VPC code is different from the previous exercise, I’m including the code for a jump server (aka bastion host, or, as I call it, a controller, because I use the jump server to deploy Ansible configurations on occasion). A jump server is also sometimes necessary to SSH into servers on a private network when analyzing failed deployments. It certainly comes in handy to have one!

The following files will be placed into a separate folder, in this case named “controller”. The files “controller.tf”, “variables.tf”, and “bootstrap_controller.sh” will create the jump server (Controller).

Once again, this is modular code and won’t work if these files are placed into the same folder as the VPC code. The code depends on output data placed into the Terraform remote-state S3 bucket and references that output data as inputs to the controller code.

Create file “controller.tf”

Note: I have some code commented out in case you want the controller to be an Ubuntu server instead of an Amazon Linux server. I’ve used both flavors over time, so my module allows me to choose at deployment time by changing which lines are commented out.

Create the variables file “variables.tf”

Create the bootstrap “bootstrap_controller.tf”

#!/bin/bash
sudo yum -y update

sudo hostnamectl set-hostname Controller
sudo yum install -y unzip
sudo yum install -y awscli
sudo amazon-linux-extras list | grep ansible2
sudo amazon-linux-extras enable ansible2

Provisioning

  1. Be sure to change the S3 bucket name in S3_policy.tf (lines 16 & 17) to your own S3 bucket name
  2. Be sure to change the test.tfvars in the VPC folder, variables of your choice
  3. Be sure to change the test.tfvars in the ELB-WEB folder, to variables of your choice
  4. Be sure to change the main.tf lines 11-13 with the configuration for your S3 bucket to store terraform backend state
  5. In your terminal, go to the VPC folder and execute the following commands:
    1. terraform init
    2. terraform validate
    3. terraform apply
  6. In your terminal, go to the elb-web folder and execute the following commands:
    1. terraform init
    2. terraform validate
    3. terraform apply

That is it! We have launched, and should now have, a load-balanced static website with resilience across availability zones, and within each zone at least two web servers for high availability.

If you want to actually test the load balancer, feel free to read up on how to use AWS Route 53 to route traffic to an AWS ELB load balancer.

The controller (bastion host), can be launched at any time. Quite often, I’ll launch the controller to troubleshoot a test deployment.

It goes without saying, but it has to be said anyway. This is not for production!

All public websites should have some type of application firewall in between the Web Server and its internet connection!

All websites should have monitoring and a method to scrape log events to detect potential problems with the deployment.

It is a good idea to remove the EC2 instances and the ELB when you are finished with the exercise so as not to incur costs.
