AWS Classic Load Balancer

Using Infrastructure as Code with Terraform to create an AWS Load-balanced website

OOPS: Things change. The code in GitHub was completely operational when this was written. It no longer works as-is because it was based on Amazon NAT instance AMIs that are no longer available.

All of the Terraform code for this exercise is in a GitHub repository.

Features

  • AWS Classic Load Balancer
  • VPC using NAT instances instead of NAT gateways
  • Docker Containers running on EC2 instances

This exercise creates a load-balanced website (similar to the previous exercise) but with essential differences: NAT instances instead of a NAT gateway, and a Docker container instead of a custom AMI as the web server.

  • AWS as a cloud provider
  • Compliant with the Free Tier plan
  • Using Terraform to create the deployment Infrastructure as Code
  • The ability to provision resources into AWS using “modular code.”
  • Four Web Servers behind a Classic load balancer
  • Ability to launch or destroy a bastion host (jump server) only when required
    • The bastion host can be added or removed at any time without impact to other resources (a bastion host provides administrators SSH access to servers located in a private network)

Difference – NAT Instance instead of NAT gateway

One of the differences between this code and the code sample in the previous exercise is that we’ll use NAT instances instead of a NAT gateway. A NAT gateway incurs costs even when using AWS under a Free Tier plan. It might only be a dollar or two per day, but it is still a cost. So, just for grins, I’ve created a VPC that uses AWS NAT instances to save a couple of dollars. A NAT instance does not match the performance of an AWS NAT gateway, so it is probably not a good solution for production. But considering we are simply running test environments, a NAT instance that performs a bit slower and saves a few dollars is fine with me!

Docker-based website

In the previous exercise, we used a custom AMI saved into our EC2 AMI library. A custom-built AMI works well because it allows us to customize an EC2 instance with our application and configuration and save it as a dedicated AMI image in our AWS account. A custom AMI enables greater control from a release management standpoint because our team has control of the composition of an AMI image.

However, creating a custom AMI and saving it into our EC2 library incurs costs even when using a Free Tier plan. While it is great to use a custom AMI, it’s also essential to save money when we are simply studying AWS deployments within a Free Tier plan.

Docker to the rescue. We can build a custom Docker container with our specific application and/or configuration, much like a custom AMI.

We will be using a boot script to install Docker and launch a Docker container, saving costs by not using a custom AMI image.

I’ve created a few websites to use as Docker containers. These containers use website templates that are free to use under a Creative Commons license. We’ll use one of my Docker containers in this exercise, with the intent of eventually moving to Docker containers in ECS and EKS deployments in future activities.

The change from a NAT gateway to NAT instances has an impact on our VPC configuration.

VPC Changes

  1. We will use standard Terraform AWS resource code instead of a Terraform module. Hence we’ll be using standard Terraform code to create the VPC.
  2. The security group code also had to change from Terraform modules to Terraform resource code, along with the way AWS resources (rather than modules) are referenced.
  3. Terraform outputs had to be changed as well to reflect the above changes.

ELB changes

  1. We will use standard Terraform AWS resource code instead of the Terraform community module to create a classic load balancer.

Requirements

Note:

If you performed the previous exercise, you might be tempted to reuse the same VPC code. Unfortunately, because we are using NAT instances instead of a NAT gateway, new code is required to create this VPC. The other modules in this exercise are written explicitly with references to the VPC code found below.

So let us get started

First, please create the following folder structure shown below.
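Based on the files created in the rest of this exercise, the layout looks like the following (folder names are my choice; any names work as long as the three groups of files stay in separate folders):

```text
.
├── vpc/
│   ├── vpc.tf
│   ├── var.tf
│   ├── security_groups.tf
│   ├── output.tf
│   └── bootstrap_nat.sh
├── elb-web/
│   ├── elb-web.tf
│   ├── variables.tf
│   └── bootstrap_docker.sh
└── controller/
    ├── controller.tf
    ├── variables.tf
    └── bootstrap_controller.sh
```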

VPC

The following files, “vpc.tf”, “var.tf”, “security_groups.tf”, and “output.tf”, will be created and placed into the VPC folder.

The code below creates a VPC, two public subnets, two private subnets, two NAT instances (one for each public subnet), routing for the public subnets, and routing for the private subnets.

Create the VPC code file “vpc.tf”

# ----------  Assign the Provisioner --------
terraform {
  required_providers {
    aws = {
      source = "hashicorp/aws"
    }
  }
}

# ----------  Store Terraform Backend in S3 Bucket --------
terraform {
  backend "s3" {
    bucket = "Change to your S3 bucket name"
    key    = "terraform.tfstate"
    region = "Change to your region"
  }
}

# ----------  Region -----------------
provider "aws" {
  region = var.region
}
data "aws_region" "current" {}

# ------------------ Create the VPC -----------------------
resource "aws_vpc" "my-vpc" {
  cidr_block           = var.vpc_cidr
  enable_dns_support   = true
  enable_dns_hostnames = true
  tags = {
    Name  = "${var.environment}-VPC"
    Stage = "${var.environment}"
    Owner = "${var.your_name}"
  }
}

# --------------------- Public Subnet #1 -------------------
resource "aws_subnet" "public-1" {
  vpc_id                  = aws_vpc.my-vpc.id
  map_public_ip_on_launch = true
  availability_zone       = var.av-zone1
  cidr_block              = var.public_cidr
  tags = {
    Name  = "${var.environment}-public-1"
    Stage = "${var.environment}"
    Owner = "${var.your_name}"
  }
}

# --------------------- Public Subnet #2 ---------------------
resource "aws_subnet" "public-2" {
  vpc_id                  = aws_vpc.my-vpc.id
  map_public_ip_on_launch = true
  availability_zone       = var.av-zone2
  cidr_block              = var.public_cidr2
  tags = {
    Name  = "${var.environment}-public-2"
    Stage = "${var.environment}"
    Owner = "${var.your_name}"
  }
}

# ----------------- Internet Gateway -----------------------
resource "aws_internet_gateway" "test-igw" {
  vpc_id = aws_vpc.my-vpc.id

  tags = {
    Name  = "${var.environment}-IGW"
    Stage = "${var.environment}"
    Owner = "${var.your_name}"
  }
}

# ------------------ Setup Route table to IGW  -----------------
resource "aws_route_table" "public-route" {
  vpc_id = aws_vpc.my-vpc.id

  route {
    cidr_block = "0.0.0.0/0"
    gateway_id = aws_internet_gateway.test-igw.id
  }
  tags = {
    Name  = "${var.environment}-Public-Route"
    Stage = "${var.environment}"
    Owner = "${var.your_name}"
  }
}

# ----------- Setup Public subnet Route table association -------
resource "aws_route_table_association" "public-1-assoc" {
  subnet_id      = aws_subnet.public-1.id
  route_table_id = aws_route_table.public-route.id
}

# -------- Setup Public subnet #2 Route table association -------
resource "aws_route_table_association" "public-2-assoc" {
  subnet_id      = aws_subnet.public-2.id
  route_table_id = aws_route_table.public-route.id
}

# --------------------- Private Subnet #1 -------------------
resource "aws_subnet" "private-1" {
  vpc_id                  = aws_vpc.my-vpc.id
  map_public_ip_on_launch = false
  availability_zone       = var.av-zone1
  cidr_block              = var.private_cidr
  tags = {
    Name  = "${var.environment}-private-1"
    Stage = "${var.environment}"
    Owner = "${var.your_name}"
  }
}

# --------------------- Private Subnet #2 ---------------------
resource "aws_subnet" "private-2" {
  vpc_id                  = aws_vpc.my-vpc.id
  map_public_ip_on_launch = false
  availability_zone       = var.av-zone2
  cidr_block              = var.private_cidr2
  tags = {
    Name  = "${var.environment}-private-2"
    Stage = "${var.environment}"
    Owner = "${var.your_name}"
  }
}

# --------------- Setup NAT for Private Subnet traffic --------------------
resource "aws_instance" "nat" {
  ami = "ami-084f9c6fa14e0b9a5" # AWS NAT instance, publish date: 2022-05-04
  # Alternative AMI IDs (note that AMI IDs are region-specific):
  # "ami-0c0b98943e04de199"  # publish date: 2022-03-07
  # "ami-0046c079820366dc3"  # publish date: 2021-07-22
  # "ami-082cb501db4725a9b"  # VNS3 NATe Free (NAT gateway appliance)
  instance_type               = var.instance_type
  subnet_id                   = aws_subnet.public-1.id
  vpc_security_group_ids      = ["${aws_security_group.nat-sg.id}", "${aws_security_group.controller-ssh.id}"]
  associate_public_ip_address = true
  source_dest_check           = false
  user_data                   = file("bootstrap_nat.sh")
  monitoring                  = true
  key_name                    = var.aws_key_name

  tags = {
    Name  = "${var.environment}-NAT1"
    Stage = "${var.environment}"
    Owner = "${var.your_name}"
  }
}
# --------------- Setup NAT for Private Subnet 2 traffic ------------------
resource "aws_instance" "nat2" {
  ami = "ami-084f9c6fa14e0b9a5" # AWS NAT instance, publish date: 2022-05-04
  # Alternative AMI IDs (note that AMI IDs are region-specific):
  # "ami-0c0b98943e04de199"  # publish date: 2022-03-07
  # "ami-0046c079820366dc3"  # publish date: 2021-07-22
  # "ami-082cb501db4725a9b"  # VNS3 NATe Free (NAT gateway appliance)
  instance_type               = var.instance_type
  subnet_id                   = aws_subnet.public-2.id
  vpc_security_group_ids      = ["${aws_security_group.nat-sg.id}", "${aws_security_group.controller-ssh.id}"]
  associate_public_ip_address = true
  source_dest_check           = false

  user_data  = file("bootstrap_nat.sh")
  monitoring = true
  key_name   = var.aws_key_name

  tags = {
    Name  = "${var.environment}-NAT2"
    Stage = "${var.environment}"
    Owner = "${var.your_name}"
  }
}

# ------------------ Setup Route to NAT  -----------------
resource "aws_route_table" "nat-route" {
  vpc_id = aws_vpc.my-vpc.id

  route {
    cidr_block = "0.0.0.0/0"
    # "instance_id" was removed from route blocks in AWS provider v5;
    # route through the NAT instance's primary network interface instead
    network_interface_id = aws_instance.nat.primary_network_interface_id
  }
  tags = {
    Name  = "${var.environment}-Private-Route-1"
    Stage = "${var.environment}"
    Owner = "${var.your_name}"
  }
}

resource "aws_route_table_association" "private-route-association" {
  subnet_id      = aws_subnet.private-1.id
  route_table_id = aws_route_table.nat-route.id
}

# ------------------ Setup Route to NAT2  -----------------
resource "aws_route_table" "nat-route-2" {
  vpc_id = aws_vpc.my-vpc.id

  route {
    cidr_block = "0.0.0.0/0"
    # "instance_id" was removed from route blocks in AWS provider v5;
    # route through the NAT instance's primary network interface instead
    network_interface_id = aws_instance.nat2.primary_network_interface_id
  }
  tags = {
    Name  = "${var.environment}-Private-Route-2"
    Stage = "${var.environment}"
    Owner = "${var.your_name}"
  }
}

resource "aws_route_table_association" "private-route-association-2" {
  subnet_id      = aws_subnet.private-2.id
  route_table_id = aws_route_table.nat-route-2.id
}
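The NAT instances above reference a boot script, “bootstrap_nat.sh”, via user_data. If you launch from the AWS NAT AMI, forwarding and masquerading are already configured and the file can be a no-op; the sketch below is my assumption of what such a script would do on a plain Amazon Linux AMI, not the exact file from the repository:

```shell
#!/bin/bash
# Sketch for a plain Amazon Linux AMI (the AWS NAT AMI already does this):
# turn the instance into a NAT by enabling IP forwarding and masquerading
# traffic that arrives from the private subnets.
sudo sysctl -w net.ipv4.ip_forward=1
echo "net.ipv4.ip_forward = 1" | sudo tee /etc/sysctl.d/90-nat.conf
sudo /sbin/iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE
```

This only works together with source_dest_check = false, which is already set on both NAT instance resources above.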

Variables for VPC module (var.tf)

variable "aws_key_name" {
  type    = string
  default = "Your Key Name"
}
variable "region" {
  type    = string
  default = "Your Region"
}
variable "environment" {
  description = "User selects environment"
  type        = string
  default     = "Test"
}
variable "your_name" {
  description = "Your Name?"
  type        = string
  default     = "Your Name"
}
variable "av-zone1" {
  type    = string
  default = "us-west-1a"
}
variable "av-zone2" {
  type    = string
  default = "us-west-1c"
}
variable "ssh_location" {
  type        = string
  description = "My Public IP Address"
  default     = "Your Public IP address"
}
variable "vpc_cidr" {
  type    = string
  default = "10.0.0.0/16"
}
variable "public_cidr" {
  type    = string
  default = "10.0.1.0/24"
}
variable "public_cidr2" {
  type    = string
  default = "10.0.2.0/24"
}
variable "private_cidr" {
  type    = string
  default = "10.0.101.0/24"
}
variable "private_cidr2" {
  type    = string
  default = "10.0.102.0/24"
}
variable "instance_type" {
  type    = string
  default = "t2.micro"
}
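The provisioning steps later refer to a test.tfvars file for supplying your own values. A minimal example (all values are placeholders; note that ssh_location must be in CIDR form, e.g. your public IP with a /32 suffix) might look like:

```hcl
# test.tfvars -- example values only; substitute your own
region       = "us-west-1"
aws_key_name = "my-ec2-keypair"
your_name    = "Jane Doe"
ssh_location = "203.0.113.25/32" # your public IP in CIDR notation
av-zone1     = "us-west-1a"
av-zone2     = "us-west-1c"
```

Apply it with: terraform apply -var-file=test.tfvars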

Security Groups (security_groups.tf)

# -------------- Security Group for Controller -----------------------
resource "aws_security_group" "controller-ssh" {
  name        = "ssh"
  description = "allow SSH from MyIP"
  vpc_id      = aws_vpc.my-vpc.id
  ingress {
    protocol    = "tcp"
    from_port   = 22
    to_port     = 22
    cidr_blocks = ["${var.ssh_location}"]

  }

  egress {
    protocol    = "-1"
    from_port   = 0
    to_port     = 0
    cidr_blocks = ["0.0.0.0/0"]
  }
  tags = {
    Name  = "${var.environment}-SSH_SG"
    Stage = "${var.environment}"
    Owner = "${var.your_name}"
  }
}
# -------------- Security Group for NAT instances -----------------------
resource "aws_security_group" "nat-sg" {
  name        = "nat-sg"
  description = "Allow traffic to pass from the private subnet to the internet"
  vpc_id      = aws_vpc.my-vpc.id

  ingress {
    from_port   = 80
    to_port     = 80
    protocol    = "tcp"
    cidr_blocks = ["${var.private_cidr}", "${var.private_cidr2}"]
  }
  ingress {
    from_port   = 8000
    to_port     = 8000
    protocol    = "tcp"
    cidr_blocks = ["${var.private_cidr}", "${var.private_cidr2}"]
  }
  ingress {
    from_port   = 443
    to_port     = 443
    protocol    = "tcp"
    cidr_blocks = ["${var.private_cidr}", "${var.private_cidr2}"]
  }
  ingress {
    from_port = 22
    to_port   = 22
    protocol  = "tcp"
    #cidr_blocks = ["0.0.0.0/0"]
    security_groups = ["${aws_security_group.controller-ssh.id}"]
  }
  ingress {
    from_port   = -1
    to_port     = -1
    protocol    = "icmp"
    cidr_blocks = ["${var.vpc_cidr}"]
  }

  egress {
    from_port   = 80
    to_port     = 80
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }
  egress {
    # outbound on 8000, matching the ingress rule above
    # (this block was duplicated as a second ingress rule in the original)
    from_port   = 8000
    to_port     = 8000
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }
  egress {
    from_port   = 443
    to_port     = 443
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }
  egress {
    from_port   = 22
    to_port     = 22
    protocol    = "tcp"
    cidr_blocks = ["${var.vpc_cidr}"]
  }
  egress {
    from_port   = -1
    to_port     = -1
    protocol    = "icmp"
    cidr_blocks = ["${var.vpc_cidr}"]
  }

  tags = {
    Name  = "${var.environment}-NAT-Sg"
    Stage = "${var.environment}"
    Owner = "${var.your_name}"
  }
}
# -------------- Security Group for Docker Servers -----------------------
resource "aws_security_group" "docker-sg" {
  name        = "Docker-SG"
  description = "allow SSH from Controller and HTTP from my IP"
  vpc_id      = aws_vpc.my-vpc.id
  ingress {
    protocol        = "tcp"
    from_port       = 22
    to_port         = 22
    security_groups = ["${aws_security_group.controller-ssh.id}"]
  }

  ingress {
    from_port   = 80
    to_port     = 80
    protocol    = "tcp"
    security_groups = ["${aws_security_group.lb-sg.id}"]
  }

  ingress {
    from_port   = 443
    to_port     = 443
    protocol    = "tcp"
    security_groups = ["${aws_security_group.lb-sg.id}"]
  }

  egress {
    protocol    = "-1"
    from_port   = 0
    to_port     = 0
    cidr_blocks = ["0.0.0.0/0"]
  }
  tags = {
    Name  = "${var.environment}-web-SG"
    Stage = "${var.environment}"
    Owner = "${var.your_name}"
  }
}

# -------------- Security Group for Load Balancer -----------------------
resource "aws_security_group" "lb-sg" {
  name        = "${var.environment}-lb-SG"
  description = "allow HTTP and HTTPS"
  vpc_id      = aws_vpc.my-vpc.id
  ingress {
    from_port   = 80
    to_port     = 80
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }
  egress {
    protocol    = "-1"
    from_port   = 0
    to_port     = 0
    cidr_blocks = ["0.0.0.0/0"]
  }
  tags = {
    Name  = "${var.environment}-lb-SG"
    Stage = "${var.environment}"
    Owner = "${var.your_name}"
  }
}

Outputs for the VPC module (output.tf)

These outputs publish the values that the elb-web and controller modules read through terraform_remote_state. The security group outputs are lists because they feed list arguments such as vpc_security_group_ids and the ELB’s security_groups.

# -------------- Outputs consumed via terraform_remote_state ------------
output "aws_region" {
  value = data.aws_region.current.name
}
output "public_subnet_1" {
  value = aws_subnet.public-1.id
}
output "public_subnet_2" {
  value = aws_subnet.public-2.id
}
output "private_subnet_1" {
  value = aws_subnet.private-1.id
}
output "private_subnet_2" {
  value = aws_subnet.private-2.id
}
output "lb_security_group_id" {
  value = [aws_security_group.lb-sg.id]
}
output "docker-sg_id" {
  value = [aws_security_group.docker-sg.id]
}
output "Controller-sg_id" {
  value = [aws_security_group.controller-ssh.id]
}

Code for Classic Load Balancer and Docker web servers (elb-web.tf)

The following files, “elb-web.tf”, “variables.tf”, and “bootstrap_docker.sh”, will create an AWS Classic Load Balancer and four web servers (two in each private subnet). These files need to be placed into a separate folder, as the code is written to be modular and obtains its inputs from Terraform remote state output data. It will not work if placed into the same folder as the VPC code.

The load-balanced web servers will run a Docker container as the web server. If you want to test the load balancer, feel free to read up on How to use AWS Route 53 to route traffic to an AWS ELB load balancer.

# ------------- Stipulate provider -----------------------------
terraform {
  required_providers {
    aws = {
      source = "hashicorp/aws"
    }
  }
}
# ------------- Configure the S3 backend for Terraform State ---
data "terraform_remote_state" "vpc" {
  backend = "s3" 
  config = {
    bucket = "Your S3 bucket name"
    key    = "terraform.tfstate"
    region = "Your region for the S3 bucket"
  }
}
# ------------- Get the Region data ----------------------------
provider "aws" {
  region = data.terraform_remote_state.vpc.outputs.aws_region
}
data "aws_availability_zones" "available" {
  state = "available"
}
# ------------- Get the latest Amazon Linux AMI data -----------
data "aws_ami" "amazon_linux" {
  most_recent = true
  owners      = ["amazon"]

  filter {
    name   = "name"
    values = ["amzn2-ami-hvm-*-x86_64-gp2"]
  }
}
# ------------- Create the Load Balancer ------------------------
resource "aws_elb" "elb" {
  name            = "test-elb"
  subnets         = [data.terraform_remote_state.vpc.outputs.public_subnet_1, data.terraform_remote_state.vpc.outputs.public_subnet_2]
  security_groups = data.terraform_remote_state.vpc.outputs.lb_security_group_id
  internal        = false

  listener {
    instance_port     = 80
    instance_protocol = "http"
    lb_port           = 80
    lb_protocol       = "http"
  }

  health_check {
    healthy_threshold   = 2
    unhealthy_threshold = 2
    timeout             = 3
    target              = "HTTP:80/"
    interval            = 30
  }

  instances                   = [aws_instance.docker1.id, aws_instance.docker2.id, aws_instance.docker3.id, aws_instance.docker4.id]
  cross_zone_load_balancing   = true
  idle_timeout                = 400
  connection_draining         = true
  connection_draining_timeout = 400

  tags = {
    Name  = "${var.environment}-elb"
    Stage = "${var.environment}"
    Owner = "${var.your_name}"
  }
}
# ------------- Create the Docker Web Servers-------------------
resource "aws_instance" "docker1" {
  ami                    = data.aws_ami.amazon_linux.id
  instance_type          = var.instance_type
  key_name               = var.aws_key_name
  subnet_id              = data.terraform_remote_state.vpc.outputs.private_subnet_1
  vpc_security_group_ids = data.terraform_remote_state.vpc.outputs.docker-sg_id
  user_data              = file("bootstrap_docker.sh")
  private_ip             = "10.0.101.10"
  tags = {
    Name  = "${var.environment}-Docker_Server1"
    Stage = "${var.environment}"
    Owner = "${var.your_name}"
  }
}
resource "aws_instance" "docker2" {
  ami                    = data.aws_ami.amazon_linux.id
  instance_type          = var.instance_type
  key_name               = var.aws_key_name
  subnet_id              = data.terraform_remote_state.vpc.outputs.private_subnet_1
  vpc_security_group_ids = data.terraform_remote_state.vpc.outputs.docker-sg_id
  user_data              = file("bootstrap_docker.sh")
  private_ip             = "10.0.101.11"
  tags = {
    Name  = "${var.environment}-Docker_Server2"
    Stage = "${var.environment}"
    Owner = "${var.your_name}"
  }
}
resource "aws_instance" "docker3" {
  ami                    = data.aws_ami.amazon_linux.id
  instance_type          = var.instance_type
  key_name               = var.aws_key_name
  subnet_id              = data.terraform_remote_state.vpc.outputs.private_subnet_2
  vpc_security_group_ids = data.terraform_remote_state.vpc.outputs.docker-sg_id
  user_data              = file("bootstrap_docker.sh")
  private_ip             = "10.0.102.20"
  tags = {
    Name  = "${var.environment}-Docker_Server3"
    Stage = "${var.environment}"
    Owner = "${var.your_name}"
  }
}
resource "aws_instance" "docker4" {
  ami                    = data.aws_ami.amazon_linux.id
  instance_type          = var.instance_type
  key_name               = var.aws_key_name
  subnet_id              = data.terraform_remote_state.vpc.outputs.private_subnet_2
  vpc_security_group_ids = data.terraform_remote_state.vpc.outputs.docker-sg_id
  user_data              = file("bootstrap_docker.sh")
  private_ip             = "10.0.102.21"
  tags = {
    Name  = "${var.environment}-Docker_Server4"
    Stage = "${var.environment}"
    Owner = "${var.your_name}"
  }
}
# ------------- Output the load balancer DNS name --------------
output "elb-dns" {
  value = aws_elb.elb.dns_name
}
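After terraform apply completes in the elb-web folder, the load balancer’s DNS name can be pulled from the output and tested from your own terminal (assuming Terraform 0.15 or later for the -raw flag):

```shell
# Print the ELB DNS name from the elb-dns output
terraform output -raw elb-dns
# Hit the load balancer and show only the HTTP status code
curl -s -o /dev/null -w '%{http_code}\n' "http://$(terraform output -raw elb-dns)/"
```

Keep in mind the instances take a few minutes to pass the ELB health check before requests succeed.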

Variables for ELB-Web (variables.tf)

variable "aws_region" {
  type    = string
  default = "Your region"
}

variable "aws_key_name" {
  type    = string
  default = "Your key name"
}
variable "instance_type" {
  description = "Type of EC2 instance to use"
  type        = string
  default     = "t2.micro"
}
variable "environment" {
  description = "User selects environment"
  type        = string
  default     = "Test"
}
variable "your_name" {
  description = "Your Name?"
  type        = string
  default     = "Your Name"
}

Bootstrap to install and run Docker container (file name “bootstrap_docker.sh”)

#!/bin/bash
sudo yum -y update
sudo amazon-linux-extras install -y docker
sudo usermod -a -G docker ec2-user
sudo systemctl start docker

sudo docker run -d --name mywebsite -p 80:80 surfingjoe/mywebsite:latest
sudo hostnamectl set-hostname Docker-server
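To confirm a web server came up correctly, you can SSH to the instance (through the jump server described below) and check the container started by the boot script; the container name matches the docker run command above:

```shell
# Show the status of the container launched by bootstrap_docker.sh
sudo docker ps --filter name=mywebsite --format '{{.Names}}: {{.Status}}'
# Confirm the site answers locally on port 80 (an HTTP 200 is expected)
curl -s -o /dev/null -w '%{http_code}\n' http://localhost/
```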

Controller

It is not required to create the following code for the load-balanced web servers to work. But because the VPC code differs from the previous exercise, I’m including the code for a jump server (aka bastion host, or as I call it, a controller, because I occasionally use the jump server to deploy Ansible configurations). A jump server is also sometimes necessary to SSH into servers on a private network when analyzing failed deployments. It certainly comes in handy to have one!

The following files will be placed into a separate folder, in this case named “controller”. The files “controller.tf”, “variables.tf”, and “bootstrap_controller.sh” will create the jump server (controller).

Once again, this is modular code, and it won’t work if these files are placed into the same folder as the VPC code. The code depends on output data placed into the Terraform remote state S3 bucket and references that output data as inputs to the controller code.

Create file “controller.tf”

Note: I have some code commented out in case you want the controller to be an Ubuntu server instead of an Amazon Linux server. I’ve used both flavors over time, and the module lets me choose at deployment time by changing which lines are commented out.

terraform {
  required_providers {
    aws = {
      source = "hashicorp/aws"
    }
  }
}
#------------------------- State terraform backend location-----
data "terraform_remote_state" "vpc" {
  backend = "s3"
  config = {
    bucket = "Change to your S3 bucket name"
    key    = "terraform.tfstate"
    region = "Change to your region"
  }
}
# --------------------- Determine region from backend data -----
provider "aws" {
  region = data.terraform_remote_state.vpc.outputs.aws_region
}
# #--------- Get Ubuntu 20.04 AMI image (SSM Parameter data) ---
# data "aws_ssm_parameter" "ubuntu-focal" {
#   name = "/aws/service/canonical/ubuntu/server/20.04/stable/current/amd64/hvm/ebs-gp2/ami-id"
# }

data "aws_ami" "amazon_linux" {
  most_recent = true
  owners      = ["amazon"]

  filter {
    name   = "name"
    values = ["amzn2-ami-hvm-*-x86_64-gp2"]
  }
}
# Creating controller node
resource "aws_instance" "controller" {
  #ami                    = data.aws_ssm_parameter.ubuntu-focal.value # from SSM Paramater
  ami = data.aws_ami.amazon_linux.id
  instance_type          = var.instance_type
  subnet_id              = data.terraform_remote_state.vpc.outputs.public_subnet_1
  vpc_security_group_ids = data.terraform_remote_state.vpc.outputs.Controller-sg_id
  user_data              = file("bootstrap_controller.sh")
  private_ip             = "10.0.1.10"
  monitoring             = true
  key_name               = var.key
  tags = {
    Name  = "${var.environment}-Controller"
    Stage = "${var.environment}"
    Owner = "${var.your_name}"
  }
}
output "Controller" {
  value = [aws_instance.controller.public_ip]
}

Create the Variables file “variables.tf”

variable "aws_region" {
  type    = string
  default = "your region"
}

variable "key" {
  type    = string
  default = "your EC2 key" 
}
variable "instance_type" {
  description = "Type of EC2 instance to use"
  type        = string
  default     = "t2.micro"
}
variable "environment" {
  description = "User selects environment"
  type        = string
  default     = "Test"
}
variable "your_name" {
  description = "Your Name?"
  type        = string
  default     = "your name"
}

Create the bootstrap “bootstrap_controller.sh”

#!/bin/bash
sudo yum -y update

sudo hostnamectl set-hostname Controller
sudo yum install -y unzip
sudo yum install -y awscli
sudo amazon-linux-extras enable ansible2
sudo yum install -y ansible # enabling the topic only makes the package available

Provisioning

  1. Be sure to change the S3 bucket name and region in the backend "s3" block of vpc.tf, and in the terraform_remote_state blocks of elb-web.tf and controller.tf, to your own S3 bucket configuration
  2. Be sure to change the variables in the VPC folder (var.tf, or a test.tfvars file) to values of your choice
  3. Be sure to change the variables in the ELB-WEB folder (variables.tf, or a test.tfvars file) to values of your choice
  4. In your terminal, go to the VPC folder and execute the following commands:
    1. terraform init
    2. terraform validate
    3. terraform apply
  5. In your terminal, go to the elb-web folder and execute the same commands:
    1. terraform init
    2. terraform validate
    3. terraform apply

That is it! We should now have a load-balanced static website with resilience across availability zones, and at least two web servers within each zone for high availability.

If you want to actually test the load balancer, feel free to read up on How to use AWS Route 53 to route traffic to an AWS ELB load balancer.

The controller (bastion host), can be launched at any time. Quite often, I’ll launch the controller to troubleshoot a test deployment.

It goes without saying, but it has to be said anyway: this is not for production!

All public websites should have some type of application firewall in between the Web Server and its internet connection!

All websites should have monitoring and a method to scrape log events to detect potential problems with the deployment.

It is a good idea to remove the EC2 instances and the ELB when you are finished with the exercise so as not to incur costs.