Deploy a simple WordPress in AWS

Using Terraform and Ansible to create a simple WordPress Deployment

WordPress Diagram

The code discussed in this article can be found in my public GitHub repository.


Features

Deploy WordPress using Infrastructure as Code into AWS

Terraform – Terraform’s HCL (Infrastructure as Code) will be used to deploy the resources in AWS.

Ansible – Ansible will be used to install and configure the MariaDB database server in the private subnet and the WordPress server in the public subnet.

All the AWS resources created below fit within the AWS Free Tier plan.

But don’t leave them running! The Free Tier allows only a limited number of EC2 hours per month, so destroy the resources when you are done.

This exercise will perform the following tasks using Terraform HCL code:

  • A module to create a VPC with security groups and routing
  • A module to create two S3 buckets
    • We’ll copy the Ansible files to one of the S3 buckets
    • We’ll put Terraform remote state in the other S3 bucket
  • Build code that creates EC2 instances for a WordPress server and a MariaDB server using Terraform
  • Create an IAM policy that allows an EC2 instance to copy files from the S3 bucket
  • Build code that creates the “Controller” server
  • Use the Controller to execute Ansible playbooks for configuring WordPress and MariaDB

Requirements

  • Must have an AWS account
  • Install and configure the AWS CLI
  • Install Terraform
  • An EC2 key pair in your region (for connecting to instances over SSH)
  • Using your own domain is recommended (register a new domain in Route 53 or with any registrar service of your choice)
  • AWS Administrator account or an account with the following permissions:
    • Create IAM policies
    • Create and manage records in Route 53
    • Create VPC, subnets, routing, and security groups
    • Create and manage EC2 resources
    • Create an EC2 key pair for the region
    • Create and manage an S3 Bucket for Terraform Remote State
    • Create an S3 bucket for the Ansible playbooks and other configuration files

So let’s get started

Please create the folder structure shown below:
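
For reference, a minimal layout matching the folders used in the rest of this exercise (the names are only a suggestion) can be created like this:

# One folder per Terraform module, plus one folder for the Ansible files
mkdir VPC Servers Ansible Controller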

S3 Buckets

An S3 bucket can be created using the AWS Management Console, or even faster with the AWS Command Line Interface (AWS CLI); example commands follow the list below.

You may not require server-side encryption or versioning for this exercise. If multiple people use the same S3 bucket, you should consider enabling versioning. If your team treats certain parameters as sensitive information, I suggest enabling server-side encryption.

An S3 bucket name must be globally unique; that literally means a name not used by any other bucket in any AWS account or region.

  • Create an S3 bucket for the Terraform remote state
    • It can be any bucket name of your choice, but of course, I recommend something like “your-nickname-terraform-state” (bucket names may only contain lowercase letters, numbers, hyphens, and periods)
  • Create an S3 bucket to hold configuration files for WordPress, MariaDB, and the Ansible playbooks
    • I recommend a name like “your-nickname-ansible-files”
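
If you take the CLI route, the following commands are one way to create and optionally harden the buckets; the bucket names and region are placeholders, so substitute your own:

# Create the two buckets (names must be globally unique)
aws s3 mb s3://your-nickname-terraform-state --region us-west-1
aws s3 mb s3://your-nickname-ansible-files --region us-west-1

# Optional: enable versioning and default encryption on the state bucket
aws s3api put-bucket-versioning --bucket your-nickname-terraform-state \
  --versioning-configuration Status=Enabled
aws s3api put-bucket-encryption --bucket your-nickname-terraform-state \
  --server-side-encryption-configuration '{"Rules":[{"ApplyServerSideEncryptionByDefault":{"SSEAlgorithm":"AES256"}}]}'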

Creating the VPC

The VPC module will create:

  • One public subnet for the WordPress website and another server I call the Controller
  • One private subnet for the database server
  • A NAT instance (instead of a NAT gateway) and the associated routing
  • Security Groups
  • Output values that other modules will use as input data

Make sure you have configured an S3 bucket for Terraform Remote State and name the bucket something like “name-terraform-states.”

Create the file “vpc.tf” in the VPC folder

# ----------  Stipulate AWS as Cloud provider --------
terraform {
  required_providers {
    aws = {
      source = "hashicorp/aws"
    }
  }
}

# ----------  Store Terraform Backend in S3 Bucket --------
terraform {
  backend "s3" {
    bucket = "Your Terraform state bucket name here"
    key    = "terraform.tfstate"
    region = "Your region"
  }
}

# ----------  Region -----------------
provider "aws" {
  region = var.region
}
data "aws_region" "current" {}

# ------------------ Create the VPC -----------------------
resource "aws_vpc" "my-vpc" {
  cidr_block           = var.vpc_cidr
  enable_dns_support   = true
  enable_dns_hostnames = true
  tags = {
    Name  = "${var.environment}-VPC"
    Stage = "${var.environment}"
    Owner = "${var.your_name}"
  }
}

# --------------------- Public Subnet -------------------
resource "aws_subnet" "public" {
  vpc_id                  = aws_vpc.my-vpc.id
  map_public_ip_on_launch = true
  availability_zone       = var.av-zone1
  cidr_block              = var.public_cidr
  tags = {
    Name  = "${var.environment}-public"
    Stage = "${var.environment}"
    Owner = "${var.your_name}"
  }
}

# ----------------- Internet Gateway -----------------------
resource "aws_internet_gateway" "test-igw" {
  vpc_id = aws_vpc.my-vpc.id

  tags = {
    Name  = "${var.environment}-IGW"
    Stage = "${var.environment}"
    Owner = "${var.your_name}"
  }
}

# ------------------ Setup Route table to IGW  -----------------
resource "aws_route_table" "public-route" {
  vpc_id = aws_vpc.my-vpc.id

  route {
    cidr_block = "0.0.0.0/0"
    gateway_id = aws_internet_gateway.test-igw.id
  }
  tags = {
    Name  = "${var.environment}-Public-Route"
    Stage = "${var.environment}"
    Owner = "${var.your_name}"
  }
}

# ----------- Setup Public subnet Route table association -----
resource "aws_route_table_association" "public-assoc" {
  subnet_id      = aws_subnet.public.id
  route_table_id = aws_route_table.public-route.id
}

# --------------------- Private Subnet  -------------------
resource "aws_subnet" "private" {
  vpc_id                  = aws_vpc.my-vpc.id
  map_public_ip_on_launch = false
  availability_zone       = var.av-zone1
  cidr_block              = var.private_cidr
  tags = {
    Name  = "${var.environment}-private"
    Stage = "${var.environment}"
    Owner = "${var.your_name}"
  }
}

# --------------- Setup NAT for Private Subnet traffic ---------
resource "aws_instance" "nat" {
  ami = "ami-084f9c6fa14e0b9a5" # AWS NAT instance Publish date: 2022-05-04 
  instance_type               = var.instance_type
  subnet_id                   = aws_subnet.public.id
  vpc_security_group_ids      = ["${aws_security_group.nat-sg.id}", "${aws_security_group.controller-ssh.id}"]
  associate_public_ip_address = true
  source_dest_check           = false
  user_data                   = file("bootstrap_nat.sh")
  monitoring                  = true
  key_name                    = var.aws_key_name

  tags = {
    Name  = "${var.environment}-NAT Instance"
    Stage = "${var.environment}"
    Owner = "${var.your_name}"
  }
}

# ------------------ Setup Route to NAT  -----------------
resource "aws_route_table" "nat-route" {
  vpc_id = aws_vpc.my-vpc.id

  route {
    cidr_block  = "0.0.0.0/0"
    instance_id = aws_instance.nat.id
  }
  tags = {
    Name  = "${var.environment}-Private-Route"
    Stage = "${var.environment}"
    Owner = "${var.your_name}"
  }
}

resource "aws_route_table_association" "private-route-association" {
  subnet_id      = aws_subnet.private.id
  route_table_id = aws_route_table.nat-route.id
}
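
Create “bootstrap_nat.sh” in the VPC folder

The vpc.tf above references this user-data file. You can take it from the repository; because the AMI used above is AWS’s pre-configured NAT instance image, a minimal placeholder along these lines will also do the job:

#!/bin/bash
# The NAT instance AMI already has IP forwarding and masquerading configured;
# just patch the instance and set a hostname
sudo yum -y update
sudo hostnamectl set-hostname nat-instance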

Create the file “variables.tf” in the VPC folder

variable "aws_key_name" {
  type    = string
  default = "your key name"
}
variable "region" {
  type    = string
  default = "your region"
}
variable "environment" {
  description = "User selects environment"
  type        = string
  default     = "Test"
}
variable "your_name" {
  description = "Your Name?"
  type        = string
  default     = "Your Name"
}
variable "av-zone1" {
  type    = string
  default = "us-west-1a"
}
variable "av-zone2" {
  type    = string
  default = "us-west-1c"
}
variable "ssh_location" {
  type        = string
  description = "My Public IP Address"
  default     = "Your IP address"
}
variable "vpc_cidr" {
  type    = string
  default = "10.0.0.0/16"
}
variable "public_cidr" {
  type    = string
  default = "10.0.1.0/24"
}
variable "private_cidr" {
  type    = string
  default = "10.0.101.0/24"
}
variable "instance_type" {
  type    = string
  default = "t2.micro"
}

Create the file “security_groups.tf” in the VPC folder

# -------------- Security Group for Controller -----------------
resource "aws_security_group" "controller-ssh" {
  name        = "ssh"
  description = "allow SSH from MyIP"
  vpc_id      = aws_vpc.my-vpc.id
  ingress {
    protocol    = "tcp"
    from_port   = 22
    to_port     = 22
    cidr_blocks = ["${var.ssh_location}"]
  }
  egress {
    protocol    = "-1"
    from_port   = 0
    to_port     = 0
    cidr_blocks = ["0.0.0.0/0"]
  }
  tags = {
    Name  = "${var.environment}-SSH_SG"
    Stage = "${var.environment}"
    Owner = "${var.your_name}"
  }
}
# -------------- Security Group for NAT instances --------------
resource "aws_security_group" "nat-sg" {
  name        = "nat-sg"
  description = "Allow traffic to pass from the private subnet to the internet"
  vpc_id      = aws_vpc.my-vpc.id

  ingress {
    from_port   = 80
    to_port     = 80
    protocol    = "tcp"
    cidr_blocks = ["${var.private_cidr}"]
  }
  ingress {
    from_port   = 443
    to_port     = 443
    protocol    = "tcp"
    cidr_blocks = ["${var.private_cidr}"]
  }
  ingress {
    from_port = 22
    to_port   = 22
    protocol  = "tcp"
    #cidr_blocks = ["0.0.0.0/0"]
    security_groups = ["${aws_security_group.controller-ssh.id}"]
  }
  ingress {
    from_port   = -1
    to_port     = -1
    protocol    = "icmp"
    cidr_blocks = ["${var.vpc_cidr}"]
  }

  egress {
    from_port   = 80
    to_port     = 80
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }
  egress {
    from_port   = 443
    to_port     = 443
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }
  egress {
    from_port   = 22
    to_port     = 22
    protocol    = "tcp"
    cidr_blocks = ["${var.vpc_cidr}"]
  }
  egress {
    from_port   = -1
    to_port     = -1
    protocol    = "icmp"
    cidr_blocks = ["${var.vpc_cidr}"]
  }

  tags = {
    Name  = "${var.environment}-NAT-Sg"
    Stage = "${var.environment}"
    Owner = "${var.your_name}"
  }
}
# -------------- Security Group for WordPress Server -----------
resource "aws_security_group" "wp-sg" {
  name        = "WordPress-SG"
  description = "allow SSH from Controller and HTTP & HTTPS from my IP"
  vpc_id      = aws_vpc.my-vpc.id
  ingress {
    protocol        = "tcp"
    from_port       = 22
    to_port         = 22
    security_groups = ["${aws_security_group.controller-ssh.id}"]
  }

  ingress {
    from_port   = 80
    to_port     = 80
    protocol    = "tcp"
    cidr_blocks = ["${var.ssh_location}"]
  }

  ingress {
    from_port   = 443
    to_port     = 443
    protocol    = "tcp"
    cidr_blocks = ["${var.ssh_location}"]
  }

  egress {
    protocol    = "-1"
    from_port   = 0
    to_port     = 0
    cidr_blocks = ["0.0.0.0/0"]
  }
  tags = {
    Name  = "${var.environment}-WordPress-SG"
    Stage = "${var.environment}"
    Owner = "${var.your_name}"
  }
}
resource "aws_security_group" "mysql-sg" {
  name        = "MySQL-SG"
  description = "allow SSH from Controller and MySQL from WordPress server"
  vpc_id      = aws_vpc.my-vpc.id
  ingress {
    protocol    = "tcp"
    from_port   = 22
    to_port     = 22
    security_groups  = ["${aws_security_group.controller-ssh.id}"]
    }

    ingress {
    from_port   = 3306
    to_port     = 3306
    protocol    = "tcp"
    security_groups  = ["${aws_security_group.wp-sg.id}"]
    }

  egress {
    protocol    = "-1"
    from_port   = 0
    to_port     = 0
    cidr_blocks = ["0.0.0.0/0"]
    }
    tags = {
    Name          = "${var.environment}-MySQL-SG"
    Stage         = "${var.environment}"
    Owner         = "${var.your_name}"
  }
}

Create “output.tf” in the VPC folder

output "aws_region" {
  description = "AWS region"
  value       = data.aws_region.current.name
}
output "vpc_id" {
  description = "VPC ID"
  value       = aws_vpc.my-vpc.id
}
output "public_subnet" {
  description = "Public Subnet"
  value       = aws_subnet.public.id
}
output "private_subnet" {
  description = "Private Subnet"
  value       = aws_subnet.private.id
}
output "Controller-sg" {
  description = "Security group IDs for Controller"
  value       = [aws_security_group.controller-ssh.id]
}
output "wordpress-sg" {
  description = "Security group IDs for WordPress"
  value       = [aws_security_group.wp-sg.id]
}
output "mysql-sg" {
  description = "Security group IDs for MySQL"
  value       = [aws_security_group.mysql-sg.id]
}

Create a folder named “Servers”

We are creating an EC2 instance for WordPress in the public subnet and an EC2 instance for MariaDB in the private subnet.

Create the file “servers.tf” in the Servers folder

# ----------  Stipulate AWS as Cloud provider --------
terraform {
  required_providers {
    aws = {
      source = "hashicorp/aws"
    }
  }
}
#------------------------- State terraform backend location-----
data "terraform_remote_state" "vpc" {
  backend = "s3" 
  config = {
    bucket = "your-bucket-name-terraform-states"
    key    = "terraform.tfstate"
    region = "us-west-1"
  }
}
# --------------------- Determine region from backend data -----
provider "aws" {
  region = data.terraform_remote_state.vpc.outputs.aws_region
}
#--------- Get Ubuntu 20.04 AMI image (SSM Parameter data) -----
data "aws_ssm_parameter" "ubuntu-focal" {
  name = "/aws/service/canonical/ubuntu/server/20.04/stable/current/amd64/hvm/ebs-gp2/ami-id"
}
# Creating MariaDB server
resource "aws_instance" "mariadb" {
  ami                    = data.aws_ssm_parameter.ubuntu-focal.value # from SSM Parameter
  instance_type          = var.instance_type
  subnet_id              = data.terraform_remote_state.vpc.outputs.private_subnet
  vpc_security_group_ids = data.terraform_remote_state.vpc.outputs.mysql-sg
  private_ip             = "10.0.101.30"
  user_data              = file("bootstrap_db.sh")
  monitoring             = true
  key_name               = var.key

    tags = {
    Name          = "${var.environment}-MariaDB"
    Stage         = "${var.environment}"
    Owner         = "${var.your_name}"
  }
}
# Creating WordPress server
resource "aws_instance" "Wordpress" {
  ami                    = data.aws_ssm_parameter.ubuntu-focal.value # from SSM Parameter
  instance_type          = var.instance_type
  subnet_id              = data.terraform_remote_state.vpc.outputs.public_subnet
  vpc_security_group_ids = data.terraform_remote_state.vpc.outputs.wordpress-sg
  private_ip             = "10.0.1.20"
  user_data              = file("bootstrap_wp.sh")
  key_name               = var.key

    tags = {
    Name          = "${var.environment}-Wordpress"
    Stage         = "${var.environment}"
    Owner         = "${var.your_name}"
  }
}

Create “bootstrap_db.sh”

#!/bin/bash
sudo apt-get update
sudo apt-get -y upgrade
hostnamectl set-hostname mariadb

Create “bootstrap_wp.sh”

#!/bin/bash
sudo apt-get update
sudo apt-get -y upgrade
hostnamectl set-hostname WordPress

Create a folder named Ansible

There are nine files placed in the Ansible folder. All the Ansible files are in my GitHub repository. Be sure to edit the appropriate files as described below to personalize your choices for things like the DB password.

  • ansible.cfg
  • hosts.ini
  • provision-db.yml
  • provision-wp.yml

The other five files will be placed into the MariaDB server and the WordPress server by Ansible.

  • Files for MariaDB
    • 50-server.cnf
    • vars.yml
  • Files for WordPress
    • dir.conf
    • example.com.conf
    • wordpress.zip

Edit “vars.yml” to reflect your choices of USERNAME, PASSWORD, DBNAME, NEW_ADMIN, NEW_ADMIN_PASSWORD. Ensure you also edit wp-config.php in the “wordpress.zip” archive to reflect the same.

Uncompress wordpress.zip and edit “wp-config.php” so that lines 23, 26, and 29 reflect your choices of DB_NAME, DB_USER, and DB_PASSWORD. Make sure they match “vars.yml”.
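
A quick way to do this from a terminal, assuming the archive unpacks into a wordpress/ folder and using the editor of your choice:

# Unpack, edit the DB settings, then repack the archive
unzip wordpress.zip
nano wordpress/wp-config.php   # set DB_NAME, DB_USER, and DB_PASSWORD to match vars.yml
zip -r wordpress.zip wordpress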

Create an S3 bucket for the above files

If you have not already created an S3 bucket to hold the ansible files, please do so now.

Copy the files from the Ansible folder of my GitHub repository to the S3 bucket.

When we run terraform apply, these files will automatically be copied from the S3 bucket onto the Controller server as part of bootstrap_controller.sh. So make sure you have created the S3 bucket and placed the Ansible files into it before running the terraform apply command.
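
For example, from the root of your project (the bucket name is a placeholder; the WordPress/ prefix matches the path the Controller’s bootstrap script copies from):

# Upload the Ansible folder contents under the WordPress/ prefix
aws s3 cp ./Ansible s3://your-nickname-ansible-files/WordPress --recursive

# Confirm the upload
aws s3 ls s3://your-nickname-ansible-files/WordPress/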

Create a folder named Controller

A Controller is where all the magic happens. We are creating a jump server (I call it a controller).

After deploying the VPC infrastructure and placing the Ansible files into an S3 bucket, we create three servers (WordPress, MariaDB, and Controller).

We will then use SSH to connect to our Controller and run Ansible playbooks to configure MariaDB on the database server and the WordPress settings on the WordPress server.

Create the “controller.tf” file in the controller folder

terraform {
  required_providers {
    aws = {
      source = "hashicorp/aws"
    }
  }
}

#------------------------- State terraform backend location---------------------
data "terraform_remote_state" "vpc" {
  backend = "s3"
  config = {
    bucket = "surfingjoes-terraform-states"
    key    = "terraform.tfstate"
    region = "us-west-1"
  }
}

# --------------------- Determine region from backend data -------------------
provider "aws" {
  region = data.terraform_remote_state.vpc.outputs.aws_region
}

# #--------- Get Ubuntu 20.04 AMI image (SSM Parameter data) -------------------
# data "aws_ssm_parameter" "ubuntu-focal" {
#   name = "/aws/service/canonical/ubuntu/server/20.04/stable/current/amd64/hvm/ebs-gp2/ami-id"
# }

data "aws_ami" "amazon_linux" {
  most_recent = true
  owners      = ["amazon"]

  filter {
    name   = "name"
    values = ["amzn2-ami-hvm-*-x86_64-gp2"]
  }
}

# Creating controller node
resource "aws_instance" "controller" {
  #ami                    = data.aws_ssm_parameter.ubuntu-focal.value # from SSM Paramater
  ami = data.aws_ami.amazon_linux.id
  instance_type          = var.instance_type
  subnet_id              = data.terraform_remote_state.vpc.outputs.public_subnet
  vpc_security_group_ids = data.terraform_remote_state.vpc.outputs.Controller-sg
  iam_instance_profile   = "${aws_iam_instance_profile.assume_role_profile.name}" 
  user_data              = file("bootstrap_controller.sh")
  private_ip             = "10.0.1.10"
  monitoring             = true
  key_name               = var.key

  tags = {
    Name  = "${var.environment}-Controller"
    Stage = "${var.environment}"
    Owner = "${var.your_name}"
  }
}

output "Controller" {
  value = [aws_instance.controller.public_ip]
}

Create the “variables.tf” file in the Controller folder

variable "aws_region" {
  type    = string
  default = "us-west-1"
}

variable "key" {
  type    = string
  default = "Mykey" #be sure to update with the name of your EC2 Key pair for your region
}
variable "instance_type" {
  description = "Type of EC2 instance to use"
  type        = string
  default     = "t2.micro"
}
variable "environment" {
  description = "User selects environment"
  type        = string
  default     = "Test"
}
variable "your_name" {
  description = "Your Name?"
  type        = string
  default     = "Joe"
}

Create the file “bootstrap_controller.sh”

#!/bin/bash
sudo yum -y update
hostnamectl set-hostname Controller
sudo yum install -y unzip
sudo yum install -y awscli
sudo amazon-linux-extras list | grep ansible2
sudo amazon-linux-extras enable ansible2
sudo yum install -y ansible
aws s3 cp s3://your-bucket-name-ansible/WordPress /home/ec2-user/WordPress --recursive

Create the S3 policy file “s3_policy.tf” in the Controller folder

This gives our Controller permission to copy files from our S3 bucket. Be sure to edit the ARN information to reflect the appropriate S3 bucket name for your deployment. (A quick way to verify the permission is shown after the code below.)

resource "aws_iam_policy" "copy-policy" {
  name        = "copy-ansible-files"
  description = "IAM policy to allow copy files from S3 bucket"

  policy = <<EOF
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:PutObject",
                "s3:GetObject",
                "s3:ListBucket"
            ],

      "Resource": ["arn:aws:s3:::bucket-name-ansible", "arn:aws:s3:::bucke-name-ansible/*"]
    }
  ]
}
EOF
}

resource  "aws_iam_role" "assume-role" {
    name                = "assume-role"
    description         = "IAM policy that allows assume role"
    assume_role_policy  = <<EOF
{
    "Version": "2012-10-17",
    "Statement": [
      {
        "Action": "sts:AssumeRole",
        "Principal": {"Service": "ec2.amazonaws.com"},
        "Effect": "Allow",
        "Sid": ""
      }
    ]
}
EOF
}

resource "aws_iam_role_policy_attachment" "assign-copy-policy" {
  role       = aws_iam_role.assume-role.name
  policy_arn = aws_iam_policy.copy-policy.arn
  depends_on = [aws_iam_policy.copy-policy]
}

resource "aws_iam_instance_profile" "assume_role_profile" {
  name = "assume_role_profile"
  role = aws_iam_role.assume-role.name
}
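
Once the Controller is running with this instance profile attached, a quick way to verify the policy is to SSH to the Controller and list the bucket (again, the bucket name is a placeholder):

aws s3 ls s3://bucket-name-ansible/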

Provisioning

  1. Make an S3 bucket for Terraform Remote state.
    • Be sure to use the name of that S3 bucket in “vpc.tf”, “servers.tf”, and “controller.tf”
  2. Make an S3 bucket for the Ansible files.
    • Be sure to change the S3 bucket name in the “Resource” ARNs of s3_policy.tf, shown above, to your S3 bucket name for the Ansible files.
  3. Be sure to change the variables in the VPC folder with variables of your choice.
  4. Be sure to change the variables in the server folder to variables of your choice.
  5. Change the vars.yml to reflect your dbname, user name, etc.
  6. Be sure to uncompress the wordpress.zip file, edit wp-config.php to reflect the dbname and user name of your choice, update the example.com.conf file to your domain name, and edit its contents to reference your domain as well. Once the edits are complete, compress the files back into wordpress.zip.
  7. Be sure to copy the ansible files into the bucket you created for the Ansible files
  8. In your terminal, go to the VPC folder and execute the following commands:
    1. terraform init
    2. terraform validate
    3. terraform apply
  9. In your terminal, go to the Servers folder and execute the following commands:
    1. terraform init
    2. terraform validate
    3. terraform apply
  10. In your terminal, go to the Controller folder and execute the following commands:
    1. terraform init
    2. terraform validate
    3. terraform apply

Running the Ansible configuration on the Controller

First, we are going to set up our SSH credentials. This assumes you have already created an EC2 key pair, referenced its name in the Terraform code above, and placed the private key file in your ~/.ssh directory.

  • Add the EC2 key pair to your SSH agent by issuing the following command
    • ssh-add ~/.ssh/your-key-name.pem
  • Then connect to the Controller with the “-A” option of the SSH command to forward the authentication agent to the Controller. “-A” lets us connect to the Controller and then jump to the servers inside the AWS public and private subnets, and it also lets the Ansible playbooks use the same EC2 key pair to manage the MariaDB and WordPress servers.
    • ssh -A ec2-user@1.2.3.4 (where 1.2.3.4 represents the public IP address of the Controller)
  • Once connected to the Controller, change the directory to the WordPress directory (this directory may not exist if you connect to the Controller too soon, as it takes a couple of minutes for the bootstrap to configure the Controller)
    • cd WordPress
  • First, we can configure the MariaDB server with Ansible (if a playbook cannot connect, see the connectivity check after this list)
    • ansible-playbook provision-db.yml
    • For some reason, the handler at the end of the Ansible playbook doesn’t restart the database, and WordPress gets a “cannot connect to database” error. If this does happen to you (most likely, I might add), then we need to connect to the DB server and restart MariaDB
      • ssh ubuntu@10.0.101.30
      • sudo -i
      • systemctl restart mariadb
    • The final step is to run the Ansible playbook to configure the WordPress server
      • ansible-playbook provision-wp.yml
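
If either playbook cannot reach its server, a quick sanity check from the WordPress directory on the Controller is an Ansible ping against the inventory; this assumes hosts.ini lists both servers and that your SSH agent was forwarded with “-A”:

ansible all -i hosts.ini -m ping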

If you want to actually test WordPress with your domain, you’ll need to create a DNS record for it. Go to Route 53, register a domain (or use one you already own), create an “A record”, and point it at the public IP address of the WordPress instance.
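
If you prefer to script the DNS change, the AWS CLI can create the record; the hosted zone ID, record name, and IP address below are placeholders:

# Find the hosted zone ID for your domain
aws route53 list-hosted-zones --query "HostedZones[].{Name:Name,Id:Id}"

# Create (or update) an A record pointing at the WordPress public IP
aws route53 change-resource-record-sets --hosted-zone-id Z0123456789EXAMPLE \
  --change-batch '{"Changes":[{"Action":"UPSERT","ResourceRecordSet":{"Name":"blog.example.com","Type":"A","TTL":300,"ResourceRecords":[{"Value":"3.4.5.6"}]}}]}'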

Or, if you do not want to use your domain (I recommend the practice of using your domain, but hey, what do I know, hee hee), open your browser and put in the public IP address of the WordPress server.

Open a browser and type in your domain.

If you have followed the exercise correctly, then you should see the following

Initial WordPress Screen

This is not for production!

All public websites should have an application firewall between the web server and its internet connection; this exercise doesn’t create an application firewall. So do not use this configuration for production.

All websites should have monitoring and a method to scrape log events to detect and alert for potential problems with the deployment.

This exercise uses resources compatible with the AWS Free Tier plan. It does not have sufficient compute sizing to support a production workload.

It is a good idea to remove all resources when you have completed this exercise, so as not to incur costs.
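
One way to tear everything down is to run terraform destroy in each module folder, in the reverse order of creation:

cd Controller && terraform destroy
cd ../Servers && terraform destroy
cd ../VPC && terraform destroy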

Create AWS load-balanced website using a custom AMI image

Load balanced Website servers

Repository

All of the Terraform code for this exercise is in my GitHub repository.

Features

  • AWS as cloud provider
  • Compliant with Free Tier plan
  • The ability to provision resources into AWS using “modular code.”
  • Using a community module to create the VPC, public and private subnets
  • Four EC2 Web Servers behind a Classic load balancer
  • Ability to launch or destroy bastion host (jump server) only when needed
    • Can add/remove bastion host (jump server) at any time without impact to other resources (Bastion Hosts – Provides administrators SSH access to servers located in a private network)

Requirements

  • Must have an AWS account
  • Install and configure the AWS CLI, and install Terraform
  • AWS Administrator account or an account with the following permissions:
    • Privilege to create, read & write an S3 bucket
    • Privilege to create an IAM profile
    • Privilege to create a VPC, subnets, and security groups
    • Privilege to create a load balancer, internet gateway, and NAT gateway
    • Privilege to create EC2 images and manage EC2 resources
    • An EC2 key pair for the region
  • Create an S3 Bucket for Terraform State
  • In the previous exercise, we created a web server configured with a static website and saved it as an EC2 image. We will need the AMI ID of that image for this exercise.

Infrastructure

New Infrastructure

Dry Code (reusable and repeatable)

Dry code (the principle of “do not repeat yourself”) means creating lines of code once and using or referencing that code many times. The benefit to everyone is re-usable code. 

  • Someone writes a bit of code and puts the code in a shared location
  • This allows other team members to copy the code or make references to the code
  • Everyone uses the same code but varies the utilization of code with variables

In the case of AWS deployments with Terraform, the same referenced code can create smaller or fewer resources in a test environment, while different variable values deploy larger resources, or a greater scale of resources, in production.

It makes sense to test and validate code in a test environment, then deploy the same code in production using variables that change the parameters of deployment.

We can accomplish dry code in Terraform by placing the “Infrastructure as Code” in a shared location such as Git, GitHub, AWS S3 buckets, shared files on your network, or a folder structure on your workstation, and then using the shared code in different deployments simply by changing the variables for each environment.

Independent and modular

Modular coding allows code to be deployed “independent” of other code. For example, the ability to launch and test security groups, load balancers, EC2 instances, or containers as deployment modules, with or without dependencies on other resources.

Consider a bastion host (I call it a “Controller” as I also use a bastion host to run Ansible code). Using modular code we can launch a jump server (bastion-host) using Terraform, do some administration using SSH into some private servers, and when finished, we can shut down the controller. Meanwhile, Infrastructure launched with other modular code remains operational and not impacted by our addition and subsequent removal of a bastion host.

The Secret ingredient to modular terraform (Outputs, Inputs)

Output/Input – Seriously, the secret to modular and reusable Terraform code is wrapping our heads around putting code into a folder, outputting certain parameters from that code into a remote state, and then using those outputs as parameter inputs elsewhere. Hence, we are passing data between modules. For example, the code that creates a VPC will include an output of the VPC ID, and other modules learn the VPC ID by reading it from Terraform’s output.

Location, Location, Location – The other secret is to place the outputs in a location other modules can read as input data, for example by placing the remote state in an S3 bucket.

Using AWS S3 bucket

The diagram above represents storing Terraform state in an AWS S3 bucket. Create a Terraform Output parameter, which is placed into Terraform’s state file. Another module then gets the data.

Say, for example, we create a VPC and use an output statement as follows:

output "vpc_id" {
  description = "Output VPC ID"
  value       = module.vpc.vpc_id
}

Another module will then know which VPC to use by reading the VPC ID data:

vpc_id = data.terraform_remote_state.vpc.outputs.vpc_id 

So one module outputs the property value of an AWS resource using an output statement with a name (in this case “vpc_id”), and another module reads that value from the Terraform state by referencing the same output name.


So let us get started

First, please create the folder structure shown below.

After creating the folders, we will place code into each folder and then run “terraform apply” a few times to demonstrate the independence of modular Terraform code.
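
A minimal layout, assuming the three module folders used in the rest of this exercise, can be created like this:

# One folder per Terraform module
mkdir VPC elb-web controller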


Create VPC.tf (in the VPC folder)

Note: this code is using a community module for the creation of a VPC. See the registry of community modules at:
https://registry.terraform.io/namespaces/terraform-aws-modules.

I like the community-built AWS VPC Terraform module because it can create a VPC with public and private subnets, an internet gateway, and a NAT gateway with just a few lines of code.

However, to my knowledge, it is not written or supported by HashiCorp. It is written and supported by antonbabenko. I’m sure it’s a great module, and I personally use it, but I don’t know enough about it to recommend it for production usage. I have done some rudimentary tests; it works great and makes it far easier to produce the VPC and subnets in my test account. But treat this module like any other community or open-source code: do your own research before using it in production.

vpc.tf

terraform {
  required_providers {
    aws = {
      source = "hashicorp/aws"
    }
  }
}

terraform {
  backend "s3" {
    bucket = "randomName-terraform-states"
    key    = "terraform.tfstate"
    region = "us-west-1"                    # Change to the region you selected for your S3 bucket
  }
}

provider "aws" {
  region = var.aws_region
}

data "aws_availability_zones" "available" {
  state = "available"
}

data "aws_region" "current" { }

module "vpc" {
  source  = "terraform-aws-modules/vpc/aws"
  version = "3.6.0"

  cidr            = var.vpc_cidr_block
  azs             = data.aws_availability_zones.available.names
  private_subnets = slice(var.private_subnet_cidr_blocks, 0, 2)
  public_subnets  = slice(var.public_subnet_cidr_blocks, 0, 2)
  # database_subnets= slice(var.database_subnet_cidr_blocks, 0, 2)
  enable_dns_support = true
  enable_nat_gateway = true
  #enable_vpn_gateway = false
  single_nat_gateway = true
    tags = {
    Name          = "${var.environment}-VPC"
    Stage         = "${var.environment}"
    Owner         = "${var.your_name}"
  }
}

Note: This will create a NAT gateway, which is not free in the AWS Free Tier; there will be a cost! For example, about a dollar per day in the us-west-1 region if left running.

Create variables.tf (in the VPC folder)

variable "aws_region" {
  description = "AWS region"
  type        = string
}
variable "environment" {
  description = "User selects environment"
  type = string
}
variable "your_name" {
  description = "Your Name?"
  type = string

}
variable "ssh_location" {
  type        = string
  description = "My Public IP Address"
}

variable "vpc_cidr_block" {
  description = "CIDR block for VPC"
  type        = string
  default     = "10.0.0.0/16"
}

variable "public_subnet_cidr_blocks" {
  description = "Available cidr blocks for public subnets"
  type        = list(string)
  default = [
    "10.0.1.0/24",
    "10.0.2.0/24",
    "10.0.3.0/24",
    "10.0.4.0/24",
    "10.0.5.0/24",
    "10.0.6.0/24",
    "10.0.7.0/24",
    "10.0.8.0/24"
  ]
}

variable "private_subnet_cidr_blocks" {
  description = "Available cidr blocks for private subnets"
  type        = list(string)
  default = [
    "10.0.101.0/24",
    "10.0.102.0/24",
    "10.0.103.0/24",
    "10.0.104.0/24",
    "10.0.105.0/24",
    "10.0.106.0/24",
    "10.0.107.0/24",
    "10.0.108.0/24"
  ]
}

variable "database_subnet_cidr_blocks" {
  description = "Available cidr blocks for database subnets"
  type        = list(string)
  default = [
    "100.201.0/24",
    "100.202.0/24",
    "100.203.0/24",
    "100.204.0/24",
    "100.205.0/24",
    "100.206.0/24",
    "100.207.0/24",
    "100.208.0/24"
  ]
}
variable "public_subnet_count" {
  description = "Number of public subnets"
  type        = number
  default     = 2
}

variable "private_subnet_count" {
  description = "Number of private subnets"
  type        = number
  default     = 2
}

variable "database_subnet_count" {
  description = "Number of database subnets"
  type        = number
  default     = 2
}

Note: No “default” settings for the following variables.

  • aws_region
  • environment
  • your_name
  • ssh_location

When a variable is created without a “default”, “terraform apply” will ask for your input for each variable that has no default setting. This allows an admin to stipulate a region of choice upon execution, and it lets us tag a deployment as “Test” or “Development”. Using a variable with no default for my public IP address (named “ssh_location” in this exercise) allows you to input your public IP address without embedding it in code. Hence, we can deploy the same code into different regions and environments simply by changing the inputs to the variables.

Instead of inputting answers manually for the above variables every time the code is executed, a common practice is to create an answer file using “.tfvars”. For example, we can create a “test.tfvars” file and pass it to the apply command:
“terraform apply -var-file=test.tfvars”
The file would look something like the following:

test.tfvars

your_name    = "Joe"
ssh_location = "1.2.3.4/32"
environment  = "Test"
aws_region   = "us-west-1"

Note: A benefit of putting your answers into a file like “test.tfvars” is that you can keep them private by adding “*.tfvars” to .gitignore. A .gitignore file tells Git to ignore the stated file patterns, which ensures your sensitive data is not pushed to Git or GitHub.
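
For example, from the root of your repository:

# Keep variable answer files out of version control
echo "*.tfvars" >> .gitignore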

Create security_groups.tf (in vpc folder)

Create a security group for the controller in the same folder “VPC”.

# -------------- Security Group for bastion host -----------------------
resource "aws_security_group" "controller-ssh" {
  name        = "ssh"
  description = "allow SSH from MyIP"
  vpc_id      = module.vpc.vpc_id
  ingress {
    protocol    = "tcp"
    from_port   = 22
    to_port     = 22
    cidr_blocks = ["${var.ssh_location}"]

  }

  egress {
    protocol    = "-1"
    from_port   = 0
    to_port     = 0
    cidr_blocks = ["0.0.0.0/0"]
  }
    tags = {
    Name          = "${var.environment}-Controller-SG"
    Stage         = "${var.environment}"
    Owner         = "${var.your_name}"
  }
}
# -------------- Security Group for ELB Web Servers -----------------------
resource "aws_security_group" "elb_web_sg" {
  name        = "${var.environment}-elb_web_sg"
  description = "allow SSH from Controller and HTTP from my IP"
  vpc_id      = module.vpc.vpc_id
  ingress {
    protocol    = "tcp"
    from_port   = 22
    to_port     = 22
    #security_groups  = ["sg-09812181ec902d546"]
    security_groups  = ["${aws_security_group.controller-ssh.id}"]
    }

    ingress {
    from_port   = 80
    to_port     = 80
    protocol    = "tcp"
    security_groups = ["${aws_security_group.lb-sg.id}"]
    }

  egress {
    protocol    = "-1"
    from_port   = 0
    to_port     = 0
    cidr_blocks = ["0.0.0.0/0"]
    }
    tags = {
    Name          = "${var.environment}-elb_web_sg"
    Stage         = "${var.environment}"
    Owner         = "${var.your_name}"
  }
}

# -------------- Security Group for Load Balancer -----------------------
resource "aws_security_group" "lb-sg" {
  name        = "${var.environment}-lb-SG"
  description = "allow HTTP and HTTPS"
  vpc_id      = module.vpc.vpc_id

    ingress {
    from_port   = 80
    to_port     = 80
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
    }

  egress {
    protocol    = "-1"
    from_port   = 0
    to_port     = 0
    cidr_blocks = ["0.0.0.0/0"]
    }
    tags = {
    Name          = "${var.environment}-lb-SG"
    Stage         = "${var.environment}"
    Owner         = "${var.your_name}"
  }
}

“Outputs.tf” provides output data that other modules will use as input data, for example the elb-web module later in this exercise.

Outputs

Outputs.tf

# ------ Output Region ------------------------------
output "aws_region" {
  description = "AWS region"
  value       = data.aws_region.current.name
}
# ------- Output VPC ID ------------------------------
output "vpc_id" {
  description = "Output VPC ID"
  value       = module.vpc.vpc_id
}
# ------- Output Controller Security Group ID --------
output "Controller-sg_id" {
  description = "Security group IDs for Controller"
  value       = [aws_security_group.controller-ssh.id]
}
# ---- Output Load Balancer Security Group ID --------
output "lb_security_group_id" {
  description = "Security group IDs for load balancer"
  value       = [aws_security_group.lb-sg.id]
}
# ------- Output Web Servers Security Group ID --------
output "elb_web-sg_id" {
  description = "Security group IDs for elb-Web servers"
  value       = [aws_security_group.elb_web_sg.id]
}
# ------- Output Public Subnet Group IDs -------------
output "public_subnet_ids" {
  description = "Public subnet IDs"
  value       = module.vpc.public_subnets
}
# ------- Output Private Subnet Group IDs ------------
output "private_subnet_ids" {
  description = "Private subnet IDs"
  value       = module.vpc.private_subnets
}

As shown above, “outputs.tf” provides output data for: aws_region, vpc_id, Controller-sg_id, lb_security_group_id, elb_web-sg_id, public_subnet_ids, and private_subnet_ids.

After running “terraform apply -var-file=test.tfvars”, you will see the above outputs displayed in the terminal console.
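
You can also read individual outputs back later, from the same folder, with the terraform output command:

terraform output vpc_id
terraform output public_subnet_ids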


New Module and New Folder
Load Balancer and distributed Web Servers

We are going to provision the Elastic Load Balancer and Web Servers from a different folder. A separate folder automatically becomes a module to Terraform. This module is isolated, and we can provision using this module from another workstation or even using a different privileged IAM user within an AWS account.

If you want to actually test the load-balancer feel free to read up on How to use AWS route 53 to route traffic to an AWS ELB load balancer.

Create a new folder “elb-web”, cd into the directory, and let’s get started.

elb-web.tf

terraform {
  required_providers {
    aws = {
      source = "hashicorp/aws"
    }
  }
}

# ------------- Configure the S3 backend for Terraform State -----------
data "terraform_remote_state" "vpc" {
  backend = "s3" 
  config = {
    bucket = "randomName-terraform-states"
    key    = "terraform.tfstate"
    region = "us-west-1"
  }
}

# ------------ Pull the remote state data to determine region ----------
provider "aws" {

  region = data.terraform_remote_state.vpc.outputs.aws_region
}

So we begin by stating that AWS is the cloud platform and that the HashiCorp AWS provider will be used. We then point at the S3 bucket that holds the remote state and acquire our first piece of input data from it, “data.terraform_remote_state.vpc.outputs.aws_region”, which tells this module which region to use.

Inputs

elb-web.tf – continued

module "elb_http" {
  source  = "terraform-aws-modules/elb/aws"
  version = "3.0.0"

  # Ensure load balancer name is unique
  name = "lb-${random_string.lb_id.result}-${var.environment}-lb"

  internal = false

  security_groups = data.terraform_remote_state.vpc.outputs.lb_security_group_id 
  subnets         = data.terraform_remote_state.vpc.outputs.public_subnet_ids # pulling remote state data to obtain the public subnet IDS

  number_of_instances = length(aws_instance.web)
  instances           = aws_instance.web.*.id

  listener = [{
    instance_port     = "80"
    instance_protocol = "HTTP"
    lb_port           = "80"
    lb_protocol       = "HTTP"
  }]

  health_check = {
    target              = "HTTP:80/index.html"
    interval            = 10
    healthy_threshold   = 3
    unhealthy_threshold = 10
    timeout             = 5
  }
}

The code above uses another community module. In this case, the “Elastic Load Balancer (ELB) Terraform module“. This module was also written and supported by antonbabenko.

elb-web.tf – continued

resource "aws_instance" "web" {
  ami = "ami-08f38617285ff6cbd" # this is my AMI ID from previous exercise - an EC2 instance configured with a static website and saved as an EC2 image 
  count = var.instances_per_subnet * length(data.terraform_remote_state.vpc.outputs.private_subnet_ids)
  instance_type = var.instance_type
  key_name               = var.key
  # get the subnet IDs from remote state S3 buckets
  subnet_id              = data.terraform_remote_state.vpc.outputs.public_subnet_ids[count.index % length(data.terraform_remote_state.vpc.outputs.private_subnet_ids)]
  vpc_security_group_ids = data.terraform_remote_state.vpc.outputs.elb_web-sg_id # security group created earlier by the VPC module
  tags = {
    Name          = "${var.environment}-Static_Web_Server"
    Stage         = "${var.environment}"
    Owner         = "${var.your_name}"
  }
}

“Count” is a resource argument that tells Terraform how many EC2 instances to create. Here the count is instances_per_subnet (2) multiplied by the number of subnets (2), so four instances are created, and the count.index modulo expression spreads them evenly across the two public subnets.

Note: once again, we are using the remote state to obtain the subnet information from the VPC module; the outputs placed into the Terraform remote state S3 bucket are read back with the “terraform_remote_state” data source.

variables.tf (for elb-web folder)

variable "instances_per_subnet" {
  description = "Number of EC2 instances in each private subnet"
  type        = number
  default     = 2
}

variable "instance_type" {
  description = "Type of EC2 instance to use"
  type        = string
  default     = "t2.micro"
}

variable "environment" {
  description = "User selects environment"
  type = string
  default = "Test"
}

variable "key" {
  type    = string
}

variable "your_name" {
  description = "Your Name?"
  type        = string
}

variable "ssh_location" {
  type        = string
  description = "My Public IP Address"
}

variable "controller_sg" {
  type = string
}

variable "lb_sg" {
  type = string
}

test.tfvars

your_name       = "Your Name"
ssh_location    = "1.2.3.4/32"
environment     = "Test"
key             = "Your EC2 key pair"

New Module and New Folder
Controller

Create and cd into a directory named “controller”. We will create three files: controller.tf, s3_policy.tf, and variables.tf

controller.tf

Note: We do not have to create or launch the controller for the load-balanced website to work. The controller (jump server) is handy if you want to SSH into one of the private servers for maintenance or troubleshooting. You don’t really need it, until you need it. hehe!

terraform {
  required_providers {
    aws = {
      source = "hashicorp/aws"
    }
  }
}

#------------------------- State terraform backend location---------------------
data "terraform_remote_state" "vpc" {
  backend = "s3" 
  config = {
    bucket = "Your bucket name"  # be sure to update with name of your bucket
    key    = "terraform.tfstate"
    region = "us-west-1"
  }
}

# --------------------- Determine region from backend data -------------------
provider "aws" {
  region = data.terraform_remote_state.vpc.outputs.aws_region
}

#--------- Get Ubuntu 20.04 AMI image (SSM Parameter data) -------------------
data "aws_ssm_parameter" "ubuntu-focal" {
  name = "/aws/service/canonical/ubuntu/server/20.04/stable/current/amd64/hvm/ebs-gp2/ami-id"
}


# Creating controller node
resource "aws_instance" "controller" {
  ami                    = data.aws_ssm_parameter.ubuntu-focal.value # from SSM Paramater
  instance_type          = var.instance_type
  subnet_id              = data.terraform_remote_state.vpc.outputs.public_subnet_ids[0]
  vpc_security_group_ids = data.terraform_remote_state.vpc.outputs.Controller-sg_id
  iam_instance_profile   = "${aws_iam_instance_profile.assume_role_profile.name}" 
  user_data              = file("bootstrap_controller.sh")
  private_ip             = "10.0.1.10"
  monitoring             = true
  key_name               = var.key

    tags = {
    Name          = "${var.environment}-Controller"
    Stage         = "${var.environment}"
    Owner         = "${var.your_name}"
  }
}

output "Controller" {
  value = [aws_instance.controller.public_ip]
}

s3_policy.tf

The S3 policy is not required for a jump server. However, we might need some files for common maintenance of server configuration using Ansible, and I like to place these files into an S3 bucket so that Ansible playbooks can be applied to multiple servers. An S3 policy allows our jump server (the Controller) access to an S3 bucket.

# ------------ Create the actual S3 read & copy files policy ----
resource "aws_iam_policy" "copy-policy" {
  name        = "S3_Copy_policy"
  description = "IAM policy to allow copy files from S3 bucket"

  policy = <<EOF
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:PutObject",
                "s3:GetObject",
                "s3:ListBucket"
            ],
      "Resource": ["arn:aws:s3:::S3-bucket-for-Ansible-Files",
                    "arn:aws:s3:::S3-bucket-for-Ansible-Files/*"]
    }
  ]
}
EOF
}

# ------------------ create assume role -----------------
resource "aws_iam_role" "assume-role" {
  name               = "assume-role"
  description        = "IAM policy that allows assume role"
  assume_role_policy = <<EOF
{
    "Version": "2012-10-17",
    "Statement": [
      {
        "Action": "sts:AssumeRole",
        "Principal": {"Service": "ec2.amazonaws.com"},
        "Effect": "Allow",
        "Sid": ""
      }
    ]
}
EOF
}
# ------------ attach the role to the policy ----------------
resource "aws_iam_role_policy_attachment" "assign-copy-policy" {
  role       = aws_iam_role.assume-role.name
  policy_arn = aws_iam_policy.copy-policy.arn
  depends_on = [aws_iam_policy.copy-policy]
}

# ------------ create a profile to be used by EC2 instance ----
resource "aws_iam_instance_profile" "assume_role_profile" {
  name = "assume_role_profile"
  role = aws_iam_role.assume-role.name
}

variables.tf

variable "key" {
  type    = string
  default = "EC2 key pair name"  #be sure to update with the name of your EC2 Key pair for your region
}
variable "instance_type" {
  description = "Type of EC2 instance to use"
  type        = string
  default     = "t2.micro"
}
variable "environment" {
  description = "User selects environment"
  type = string
  default = "Test"
}
variable "your_name" {
  description = "Your Name?"
  type = string
  default = "Your Name"
}

Provisioning

  1. Be sure to change the S3 bucket name in the “Resource” ARNs of s3_policy.tf, shown above, to your S3 bucket name
  2. Be sure to change the test.tfvars in the VPC folder to values of your choice
  3. Be sure to change the test.tfvars in the elb-web folder to values of your choice
  4. Be sure to change the backend "s3" block in vpc.tf with the configuration for the S3 bucket that stores your Terraform backend state
  5. In your terminal, go to the VPC folder and execute the following commands:
    1. terraform init
    2. terraform validate
    3. terraform apply -var-file=test.tfvars
  6. In your terminal, go to the elb-web folder and execute the following commands:
    1. terraform init
    2. terraform validate
    3. terraform apply -var-file=test.tfvars

That is it! We have launched a load-balanced static website with resilience across availability zones, and within each zone at least two web servers for high availability.

The controller (bastion host), can be launched at any time. Quite often, I’ll launch the controller to troubleshoot a test deployment.

It goes without saying, but it has to be said anyway. This is not for production!

All public websites should have some type of application firewall in between the Web Server and its internet connection!

All websites should have monitoring and a method to scrape log events to detect potential problems with the deployment.

It is a good idea to remove the EC2 instances, the ELB, and the NAT gateway when you are finished with the exercise, so as not to incur costs.
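
A possible teardown sequence, run from the project root (skip the controller step if you never launched it; the elb-web and VPC modules take the same var file used to apply them):

cd controller && terraform destroy
cd ../elb-web && terraform destroy -var-file=test.tfvars
cd ../VPC && terraform destroy -var-file=test.tfvars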