
OOPS: Things change. The code in the GitHub repository was fully operational when this was written, but it no longer works because it was based on Amazon NAT instance AMIs that are no longer available.
All of the Terraform code for this exercise is in the GitHub repository.
This exercise creates a load-balanced website (similar to the previous exercise) but with two essential differences: NAT instances instead of a NAT gateway, and a Docker container instead of a custom AMI as the web server.
Features
- AWS Classic Load Balancer
- VPC using NAT instances instead of NAT gateways
- Docker containers running on EC2 instances
- AWS as the cloud provider
- Compliant with the Free Tier plan
- Terraform used to create the deployment as Infrastructure as Code
- The ability to provision resources into AWS using "modular code"
- Four web servers behind a Classic Load Balancer
- Ability to launch or destroy a bastion host (jump server) only when required
- Can add or remove the bastion host (jump server) at any time without impact to other resources (a bastion host provides administrators SSH access to servers located in a private network)
Difference – NAT Instance instead of NAT gateway
One of the differences between this code and the code sample in the previous exercise is that we'll use NAT instances instead of a NAT gateway. A NAT gateway incurs costs even when using AWS under a Free Tier plan. It might only be a dollar or two per day, but it is still a cost. So, just for grins, I've created a VPC that uses NAT instances to save a couple of dollars. A NAT instance cannot match the performance of an AWS NAT Gateway, so it is probably not a good solution for production. But since we are simply running test environments, a NAT instance that performs a bit slower and saves a few dollars is fine with me!
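For reference, here is a minimal sketch of what a NAT instance looks like in Terraform. This is not the exact code from the repository; the AMI variable, names, and references are placeholders (the matching private-subnet route appears in the VPC sketch further down).

# A NAT instance is an ordinary EC2 instance in a public subnet with
# source/destination checking disabled so it can forward traffic for
# the private subnets.
resource "aws_instance" "nat_a" {
  ami                    = var.nat_ami_id              # a NAT-capable AMI (placeholder)
  instance_type          = "t2.micro"                  # stays inside the Free Tier
  subnet_id              = aws_subnet.public_a.id      # lives in a public subnet
  vpc_security_group_ids = [aws_security_group.nat.id]
  source_dest_check      = false                       # the key difference from a normal instance

  tags = { Name = "nat-instance-a" }
}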
Docker-based website
In the previous exercise, we used a custom AMI saved into our EC2 AMI library. A custom-built AMI works well because it allows us to customize an EC2 instance with our application and configuration and save it as a dedicated AMI image in our AWS account. A custom AMI enables greater control from a release management standpoint because our team has control of the composition of an AMI image.
However, creating a custom AMI and then saving it into our EC2 library incurs costs even under a Free Tier plan. While it is great to use a custom AMI, it's also essential to save money when we are simply studying AWS deployments within a Free Tier plan.
Docker to the rescue. We can create a custom Docker container with our specific application and/or configuration, much like a custom AMI.
We will be using a boot script to install Docker and launch a Docker container, saving costs by not using a custom AMI image.
I've created a few websites (to use as Docker containers). These containers use website templates that are free to use under a Creative Commons license. We'll use one of my Docker containers in this exercise, with the intent of eventually moving on to Docker containers in ECS and EKS deployments in future exercises.
The change from a NAT gateway to NAT instances has an impact on our VPC configuration.
VPC Changes
- We will use standard Terraform AWS resource code instead of a Terraform module to create the VPC.
- The security group code also changes from Terraform modules to Terraform resource code, along with the way other AWS resources are referenced (resource attributes instead of module outputs).
- The Terraform outputs had to be changed as well to reflect the changes above.
ELB changes
- We will use standard Terraform AWS resource code instead of the Terraform community module to create a classic load balancer.
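To make the module-versus-resource difference concrete, here is a hedged sketch; the commented-out module call only illustrates the previous style, and the names and values are placeholders.

# Previous exercise: a community module did the heavy lifting.
# module "elb" {
#   source  = "terraform-aws-modules/elb/aws"
#   name    = "web-elb"
#   subnets = module.vpc.public_subnets
# }

# This exercise: a plain aws_elb resource written out by hand
# (the subnet and security group IDs come from the VPC outputs, as shown later).
resource "aws_elb" "web" {
  name            = "web-elb"
  subnets         = var.public_subnet_ids
  security_groups = [var.web_security_group_id]

  listener {
    lb_port           = 80
    lb_protocol       = "http"
    instance_port     = 80
    instance_protocol = "http"
  }
}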
Requirements
- Must have an AWS account
- Install the AWS CLI, configure the AWS CLI, and install Terraform
- AWS Administrator account, or an account with the following permissions:
  - create, read, and write an S3 bucket
  - create an IAM profile
  - create a VPC, subnets, and security groups
  - create a load balancer and internet gateway
  - create EC2 images and manage EC2 resources
- EC2 key pair for the region
- Create an S3 bucket for Terraform remote state (see the backend sketch after this list)
- Using DRY code
- Using modular code
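For the remote-state requirement above, here is a minimal sketch of the backend block each folder carries; the bucket name, key, and region are placeholders and must match the bucket you create (these are the same values the Provisioning section has you edit in main.tf).

terraform {
  backend "s3" {
    bucket = "my-terraform-state-bucket"   # placeholder: your own bucket name
    key    = "vpc/terraform.tfstate"       # placeholder: use a different key per folder
    region = "us-east-1"                   # placeholder: your region
  }
}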
Note:
If you performed the previous exercise, you might be tempted to try and reuse the same VPC code. Unfortunately, because we are using NAT instances instead of a NAT gateway, we need new code to create this VPC. The other modules in this exercise are written explicitly with references to the VPC defined below.
So let us get started
First, please create the folder structure shown below.
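Based on the folders and files named throughout this exercise, the layout is roughly the following (the repository is the authority on exact names; the main.tf, S3_policy.tf, and test.tfvars files referenced in the Provisioning section live alongside these):

vpc/
  vpc.tf
  var.tf
  security_groups.tf
  output.tf
elb-web/
  elb-web.tf
  variables.tf
  bootstrap_docker.sh
controller/
  controller.tf
  variables.tf
  bootstrap_controller.sh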
VPC
The following files, "vpc.tf", "var.tf", "security_groups.tf", and "output.tf", will be created and placed in the VPC folder.
The code below creates a VPC, two public subnets, two private subnets, two NAT instances (one for each public subnet), routing for the public subnets, and routing for the private subnets.
Create the VPC code file "vpc.tf"
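The repository contains the complete file; below is a hedged, abbreviated sketch of the pattern so you can see its shape. Only one subnet per tier is shown, and the names, CIDR variables, and NAT instance reference are placeholders consistent with the earlier NAT sketch.

resource "aws_vpc" "main" {
  cidr_block           = var.vpc_cidr
  enable_dns_support   = true
  enable_dns_hostnames = true
  tags                 = { Name = "exercise-vpc" }
}

resource "aws_internet_gateway" "igw" {
  vpc_id = aws_vpc.main.id
}

# One public and one private subnet shown; the real file creates two of each,
# spread across two availability zones.
resource "aws_subnet" "public_a" {
  vpc_id                  = aws_vpc.main.id
  cidr_block              = var.public_subnet_cidrs[0]
  availability_zone       = var.azs[0]
  map_public_ip_on_launch = true
}

resource "aws_subnet" "private_a" {
  vpc_id            = aws_vpc.main.id
  cidr_block        = var.private_subnet_cidrs[0]
  availability_zone = var.azs[0]
}

# Public route table: default route to the internet gateway.
resource "aws_route_table" "public" {
  vpc_id = aws_vpc.main.id

  route {
    cidr_block = "0.0.0.0/0"
    gateway_id = aws_internet_gateway.igw.id
  }
}

resource "aws_route_table_association" "public_a" {
  subnet_id      = aws_subnet.public_a.id
  route_table_id = aws_route_table.public.id
}

# Private route table: default route through the NAT instance from the earlier sketch.
resource "aws_route_table" "private_a" {
  vpc_id = aws_vpc.main.id

  route {
    cidr_block           = "0.0.0.0/0"
    network_interface_id = aws_instance.nat_a.primary_network_interface_id
  }
}

resource "aws_route_table_association" "private_a" {
  subnet_id      = aws_subnet.private_a.id
  route_table_id = aws_route_table.private_a.id
}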
Variables for VPC module (var.tf)
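A hedged sketch of the kind of inputs the VPC code needs; the names and defaults are placeholders rather than the repository's exact values.

variable "vpc_cidr" {
  description = "CIDR block for the VPC"
  type        = string
  default     = "10.0.0.0/16"
}

variable "public_subnet_cidrs" {
  description = "CIDR blocks for the two public subnets"
  type        = list(string)
  default     = ["10.0.1.0/24", "10.0.2.0/24"]
}

variable "private_subnet_cidrs" {
  description = "CIDR blocks for the two private subnets"
  type        = list(string)
  default     = ["10.0.101.0/24", "10.0.102.0/24"]
}

variable "azs" {
  description = "Availability zones to spread the subnets across"
  type        = list(string)
  default     = ["us-east-1a", "us-east-1b"]
}

variable "nat_ami_id" {
  description = "AMI used for the NAT instances"
  type        = string
}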
Security Groups (security_groups.tf)
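Again a hedged sketch rather than the repository's exact rules: the NAT security group must let traffic from the private subnets out to the internet, and the web security group must accept HTTP.

# Security group for the NAT instances: accept traffic from the private
# subnets and let everything out to the internet.
resource "aws_security_group" "nat" {
  name   = "nat-instance-sg"
  vpc_id = aws_vpc.main.id

  ingress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = var.private_subnet_cidrs
  }

  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }
}

# Security group for the load-balanced web servers: HTTP in, everything out.
resource "aws_security_group" "web" {
  name   = "web-sg"
  vpc_id = aws_vpc.main.id

  ingress {
    from_port   = 80
    to_port     = 80
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }
}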
Outputs for the VPC module (output.tf)
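These outputs are what the other folders read back through Terraform remote state; a hedged sketch of the sort of values they expose (the output names are placeholders that the later sketches reuse):

output "vpc_id" {
  value = aws_vpc.main.id
}

# The real module outputs both subnets per tier; the sketch above created only one of each.
output "public_subnet_ids" {
  value = [aws_subnet.public_a.id]
}

output "private_subnet_ids" {
  value = [aws_subnet.private_a.id]
}

output "web_security_group_id" {
  value = aws_security_group.web.id
}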
Code for the Classic Load Balancer and Docker web servers (elb-web.tf)
The following files, "elb-web.tf", "variables.tf", and "bootstrap_docker.sh", will create an AWS Classic Load Balancer and four web servers (two in each public subnet). These files need to be placed into a separate folder, because the code is written to be modular and obtains its inputs from Terraform remote state output data. It literally will not work if placed into the same folder as the VPC code.
The load-balanced web servers will run a Docker container as the web server. If you want to test the load balancer, feel free to read up on How to use AWS Route 53 to route traffic to an AWS ELB load balancer.
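To show how the modular wiring described above works, here is a hedged sketch: a terraform_remote_state data source reads the VPC outputs from the S3 bucket, and those values feed the load balancer and the web instances. The bucket, key, output names, AMI variable, and counts are placeholders.

# Read the VPC folder's outputs out of the shared S3 remote state.
data "terraform_remote_state" "vpc" {
  backend = "s3"
  config = {
    bucket = "my-terraform-state-bucket"   # placeholder: your state bucket
    key    = "vpc/terraform.tfstate"       # placeholder: the VPC folder's state key
    region = "us-east-1"
  }
}

# Four web servers, each booting the Docker bootstrap script shown below.
resource "aws_instance" "web" {
  count                  = 4
  ami                    = var.ami_id
  instance_type          = "t2.micro"
  key_name               = var.key_name
  subnet_id              = element(data.terraform_remote_state.vpc.outputs.public_subnet_ids, count.index % 2)
  vpc_security_group_ids = [data.terraform_remote_state.vpc.outputs.web_security_group_id]
  user_data              = file("${path.module}/bootstrap_docker.sh")

  tags = { Name = "docker-web-${count.index}" }
}

# Classic load balancer spread across the public subnets, registered with all four instances.
resource "aws_elb" "web" {
  name            = "docker-web-elb"
  subnets         = data.terraform_remote_state.vpc.outputs.public_subnet_ids
  security_groups = [data.terraform_remote_state.vpc.outputs.web_security_group_id]
  instances       = aws_instance.web[*].id

  listener {
    lb_port           = 80
    lb_protocol       = "http"
    instance_port     = 80
    instance_protocol = "http"
  }

  health_check {
    target              = "HTTP:80/"
    interval            = 30
    timeout             = 5
    healthy_threshold   = 2
    unhealthy_threshold = 2
  }
}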
Variables for ELB-Web (variables.tf)
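A hedged sketch of the inputs this folder takes; the names are illustrative and match the instance sketch above.

variable "ami_id" {
  description = "Amazon Linux 2 AMI for the web servers"
  type        = string
}

variable "key_name" {
  description = "EC2 key pair used to SSH into the web servers"
  type        = string
}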
Bootstrap to install and run Docker container (file name “bootstrap_docker.sh”)
#!/bin/bash
# Update packages and install Docker from the Amazon Linux extras repository
sudo yum -y update
sudo amazon-linux-extras install -y docker
# Allow ec2-user to run docker without sudo, then start the Docker daemon
sudo usermod -a -G docker ec2-user
sudo systemctl start docker
# Pull and run the website container, publishing it on port 80
sudo docker run -d --name mywebsite -p 80:80 surfingjoe/mywebsite:latest
sudo hostnamectl set-hostname Docker-server
Controller
It is not even required to create the following code for the load-balanced web servers to work. But because the VPC code is different from the previous exercise, I'm including the code for a jump server (aka bastion host, or as I call it, a controller, because I use the jump server to deploy Ansible configurations on occasion). A jump server is also sometimes necessary to SSH into servers on a private network when analyzing failed deployments. It certainly comes in handy to have a jump server!
The following files will be placed into a separate folder, in this case named "controller". The files "controller.tf", "variables.tf", and "bootstrap_controller.sh" will create the jump server (controller).
Once again, this is modular code and won't work if these files are placed into the same folder as the VPC code. The code depends on output data stored in the Terraform remote state S3 bucket, and it references that output data as inputs to the controller code.
Create file “controller.tf”
Note: I have some code commented out in case you want the controller to be an Ubuntu server instead of an Amazon Linux server. I've used both flavors over time, so the module lets me choose at deployment time by changing which lines are commented out.
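A hedged sketch of the pattern, including the sort of commented-out Ubuntu alternative the note describes. The AMI filters, state bucket details, and names are placeholders, not the repository's exact code.

# Read the VPC outputs from remote state, just like the elb-web code.
data "terraform_remote_state" "vpc" {
  backend = "s3"
  config = {
    bucket = "my-terraform-state-bucket"   # placeholder: your state bucket
    key    = "vpc/terraform.tfstate"
    region = "us-east-1"
  }
}

# Amazon Linux 2 AMI (the default choice).
data "aws_ami" "amazon_linux" {
  most_recent = true
  owners      = ["amazon"]

  filter {
    name   = "name"
    values = ["amzn2-ami-hvm-*-x86_64-gp2"]
  }
}

# # Ubuntu alternative: uncomment this block and swap the ami reference below.
# data "aws_ami" "ubuntu" {
#   most_recent = true
#   owners      = ["099720109477"]   # Canonical
#
#   filter {
#     name   = "name"
#     values = ["ubuntu/images/hvm-ssd/ubuntu-focal-20.04-amd64-server-*"]
#   }
# }

resource "aws_instance" "controller" {
  ami                         = data.aws_ami.amazon_linux.id   # or data.aws_ami.ubuntu.id
  instance_type               = "t2.micro"
  key_name                    = var.key_name
  subnet_id                   = element(data.terraform_remote_state.vpc.outputs.public_subnet_ids, 0)
  associate_public_ip_address = true                           # reachable for SSH from your workstation
  # vpc_security_group_ids    = [aws_security_group.ssh.id]    # attach an SSH-only security group here
  user_data                   = file("${path.module}/bootstrap_controller.sh")

  tags = { Name = "Controller" }
}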
Create the variables file "variables.tf"
Create the bootstrap script "bootstrap_controller.sh"
#!/bin/bash
# Update packages and set a recognizable hostname
sudo yum -y update
sudo hostnamectl set-hostname Controller
# Tools used when administering the environment
sudo yum install -y unzip
sudo yum install -y awscli
# Enable the ansible2 topic from Amazon Linux extras, then install Ansible
sudo amazon-linux-extras list | grep ansible2
sudo amazon-linux-extras enable ansible2
sudo yum clean metadata
sudo yum install -y ansible
Provisioning
- Be sure to change the S3 bucket name in S3_policy.tf (lines 16 & 17) to your own S3 bucket name
- Be sure to change the test.tfvars in the VPC folder to values of your choice
- Be sure to change the test.tfvars in the ELB-WEB folder to values of your choice
- Be sure to change main.tf lines 11-13 to the configuration for the S3 bucket that stores your Terraform backend state
- In your terminal, go to the VPC folder and execute the following commands:
terraform init
terraform validate
terraform apply
- In your terminal, go to the elb-web folder and execute the following commands:
terraform init
terraform validate
terraform apply
That is it. We should now have a load-balanced static website with resilience across availability zones, and at least two web servers within each zone for high availability.
If you want to actually test the load balancer, feel free to read up on How to use AWS Route 53 to route traffic to an AWS ELB load balancer.
The controller (bastion host), can be launched at any time. Quite often, I’ll launch the controller to troubleshoot a test deployment.
It goes without saying, but it has to be said anyway. This is not for production!
All public websites should have some type of application firewall in between the Web Server and its internet connection!
All websites should have monitoring and a method to scrape log events to detect potential problems with the deployment.
It is a good idea to remove the EC2 instances and the ELB when you are finished with the exercise so as not to incur costs.