Terraform – Scalable WordPress in AWS, using an ALB, ASG, and EFS

Using Terraform to deploy an auto-scaled WordPress site in AWS, with an application load balancer, while using EFS as storage for the WordPress front-end servers

Load balanced and Auto-Scaled WordPress deployment

This exercise will build an auto-scaled WordPress solution using EFS as the persistent storage solution. An auto-scaled front end can expand the number of front-end servers to handle growth in the number of users during peak hours. We also need a load balancer that automatically distributes users amongst the front-end servers to accommodate load distribution.

Ideally, we should use a scaling solution based on demand. I could write the ASG to scale based on demand, but demonstrating that behavior by generating client load (representing peak hours) could incur a substantial cost, and I’m trying to keep my exercises “compliant with a Free Tier plan.” Soooo, simply using an AWS ASG with a desired capacity will be the solution for today.

Ideally, we should also use RDS for our database, which can scale based on demand. Using one MariaDB server that does not scale to user load kind of defeats the purpose of a scalable architecture. However, I’ve written this exercise to demonstrate deploying auto-scaled WordPress front-end servers with an EFS shared file service, not to present an ideal production architecture. Soooo, one MariaDB server that is Free Tier compliant is our plan for today.

Why are we using EFS?

When scaling to more than one WordPress front-end server, we’ll need a method to keep track of users across the front-end servers. We need storage common to all front-end servers to ensure each auto-scaled WordPress server is aware of user settings, activity, and configuration. AWS provides a shared file storage service called Elastic File System (EFS). EFS is a serverless file storage system and supports NFS versions 4.0 and 4.1, so recent versions of Amazon Linux, Red Hat, CentOS, and macOS can mount EFS as an NFS share. Amazon EC2 and other AWS compute instances running in multiple Availability Zones within the same AWS Region can access the file system, so many users can access and share a common data source.

Each front-end server using EFS has access to shared storage, allowing each server to have all user settings, configuration, and activity information.

Docker

We will be using Docker containers for our WordPress and MariaDB servers. The previous WordPress exercise used Ansible to configure servers with WordPress and MariaDB, but since we are using auto-scaling, I would like a method to deploy WordPress quickly rather than relying on scripts or playbooks in this exercise. Docker to the rescue.

This exercise will be using official Docker images “WordPress” and “MariaDB.”

Terraform

We will be using Terraform to construct our AWS resources. Our Terraform code will build a new VPC, two public subnets, two private subnets, and the associated routing and security groups. Terraform will also construct our ALB, ASG, EC2, and EFS resources.

Requirements

  • Must have an AWS account
  • Install AWS CLI
  • Configure AWS CLI
  • Install Terraform
  • An EC2 key pair (for connecting to instances using SSH)
  • AWS Administrator account or an account with the following permissions:
    • create VPC, subnets, routing, and security groups
    • create EC2 Instances and manage EC2 resources
    • create auto-scaling groups and load balancers
    • create and manage EFS and EFS mount points

GitHub Repository

https://github.com/surfingjoe/Wordpress-deployment-into-AWS-with-EFS-ALB-ASG-and-Docker

Building our Scaled WordPress Solution

vpc.tf
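The full file is in the repository; a trimmed sketch of the kind of resources vpc.tf creates (the names, CIDR math, and AZ lookups below are illustrative, and the NAT instances and private routing are omitted for brevity) looks something like this:

data "aws_availability_zones" "available" {
  state = "available"
}

resource "aws_vpc" "main" {
  cidr_block           = var.vpc_cidr
  enable_dns_hostnames = true
  tags = { Name = "${var.nickname}-vpc" }
}

resource "aws_internet_gateway" "igw" {
  vpc_id = aws_vpc.main.id
}

# Two public and two private subnets, one of each per Availability Zone
resource "aws_subnet" "public" {
  count                   = 2
  vpc_id                  = aws_vpc.main.id
  cidr_block              = cidrsubnet(var.vpc_cidr, 8, count.index)
  availability_zone       = data.aws_availability_zones.available.names[count.index]
  map_public_ip_on_launch = true
  tags = { Name = "${var.nickname}-public-${count.index}" }
}

resource "aws_subnet" "private" {
  count             = 2
  vpc_id            = aws_vpc.main.id
  cidr_block        = cidrsubnet(var.vpc_cidr, 8, count.index + 2)
  availability_zone = data.aws_availability_zones.available.names[count.index]
  tags = { Name = "${var.nickname}-private-${count.index}" }
}

# Public subnets route to the internet gateway; the private route tables
# (pointing at the NAT instances) are omitted from this sketch
resource "aws_route_table" "public" {
  vpc_id = aws_vpc.main.id
  route {
    cidr_block = "0.0.0.0/0"
    gateway_id = aws_internet_gateway.igw.id
  }
}

resource "aws_route_table_association" "public" {
  count          = 2
  subnet_id      = aws_subnet.public[count.index].id
  route_table_id = aws_route_table.public.id
}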

vpc_variables.tf

Security

The load balancer security group will only allow HTTP inbound traffic from my public IP address (at the time of this writing, for this exercise). I may later alter this exercise to include configuring a domain with Route 53 and a certificate for that domain so that we can use HTTPS-encrypted traffic instead of HTTP. Using a custom domain incurs costs because Route 53 domain registration and hosted zones are not included in a Free Tier plan, so I might add managing Route 53 with Terraform as an optional configuration later.

The WordPress Security group will only allow HTTP inbound traffic from the ALB security group and SSH only from the Controller security group.

The MySQL group will only allow MySQL protocol from the WordPress security group and SSH protocol from the Controller security group.

The optional Controller security group will only allow SSH inbound from my public IP address.

security_groups.tf
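A condensed sketch of the inbound rules described above (resource and variable names are assumptions, and the MySQL group plus all egress rules are omitted for brevity):

resource "aws_security_group" "alb" {
  name   = "${var.nickname}-alb-sg"
  vpc_id = aws_vpc.main.id

  ingress {
    description = "HTTP from my public IP only"
    from_port   = 80
    to_port     = 80
    protocol    = "tcp"
    cidr_blocks = [var.my_public_ip]   # e.g. "203.0.113.10/32"
  }
}

resource "aws_security_group" "controller" {
  name   = "${var.nickname}-controller-sg"
  vpc_id = aws_vpc.main.id

  ingress {
    description = "SSH from my public IP only"
    from_port   = 22
    to_port     = 22
    protocol    = "tcp"
    cidr_blocks = [var.my_public_ip]
  }
}

resource "aws_security_group" "wordpress" {
  name   = "${var.nickname}-wordpress-sg"
  vpc_id = aws_vpc.main.id

  ingress {
    description     = "HTTP from the ALB security group only"
    from_port       = 80
    to_port         = 80
    protocol        = "tcp"
    security_groups = [aws_security_group.alb.id]
  }

  ingress {
    description     = "SSH from the controller security group only"
    from_port       = 22
    to_port         = 22
    protocol        = "tcp"
    security_groups = [aws_security_group.controller.id]
  }
}

# The MySQL group follows the same pattern: 3306 from aws_security_group.wordpress
# and 22 from aws_security_group.controller.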

efs.tf

We are writing the Terraform code to create a general-purpose EFS deployment. You’ll note that I’m using a variable called “nickname” to create a unique EFS name. We are using “general purpose” performance mode and “bursting” throughput mode to stay within the Free Tier plan and not incur costs. You’ll also notice that we are creating a mount target in each private subnet so that our EC2 instances can make NFS mounts to the EFS file system.
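A minimal sketch of that EFS code, assuming the subnet and variable names used in the earlier sketches:

resource "aws_efs_file_system" "wordpress" {
  creation_token   = "${var.nickname}-efs"
  performance_mode = "generalPurpose"   # free-tier friendly
  throughput_mode  = "bursting"
  tags = { Name = "${var.nickname}-efs" }
}

# One mount target per private subnet so instances in either AZ can reach EFS over NFS
resource "aws_efs_mount_target" "wordpress" {
  count           = 2
  file_system_id  = aws_efs_file_system.wordpress.id
  subnet_id       = aws_subnet.private[count.index].id
  security_groups = [aws_security_group.efs.id]   # assumed SG allowing NFS (2049) from the WordPress group
}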

wordpress.tf

The method of creating an auto-scaled WordPress deployment uses the same kind of Terraform code found in my previous exercise. If you would like more discussion of the key attributes and decisions involved in writing Terraform code for an Auto Scaling Group, please refer to my previous article.

Notice that I added a dependency on MariaDB in the code. It is not required, and the code will work with or without this dependency, but I like the idea of telling Terraform that I want our database to be active before creating WordPress.

Notice that we assign variables for EFS ID, dbhost, database name, the admin password, and the root password in the launch template.
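A trimmed sketch of that pattern follows. The resource names, sizing values, and the way dbhost is supplied are assumptions, and the target group is assumed to be defined in alb_target.tf; the repository has the full version.

resource "aws_launch_template" "wordpress" {
  name_prefix            = "${var.nickname}-wordpress-"
  image_id               = var.ami_id
  instance_type          = "t2.micro"
  key_name               = var.key_name
  vpc_security_group_ids = [aws_security_group.wordpress.id]

  # Hand the EFS ID, database host/name, and passwords to the bootstrap template
  user_data = base64encode(templatefile("${path.module}/bootstrap_wordpress.tpl", {
    efs_id           = aws_efs_file_system.wordpress.id
    dbhost           = var.dbhost   # or aws_instance.mariadb.private_ip
    dbname           = var.dbname
    db_password      = var.db_password
    db_root_password = var.db_root_password
  }))
}

resource "aws_autoscaling_group" "wordpress" {
  name                = "${var.nickname}-wordpress-asg"
  desired_capacity    = 2
  min_size            = 2
  max_size            = 4
  vpc_zone_identifier = aws_subnet.private[*].id
  target_group_arns   = [aws_lb_target_group.wordpress.arn]

  launch_template {
    id      = aws_launch_template.wordpress.id
    version = "$Latest"
  }

  tag {
    key                 = "Name"
    value               = "Wordpress_ASG"
    propagate_at_launch = true
  }

  # Not strictly required, but tells Terraform to build the database first
  depends_on = [aws_instance.mariadb]
}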

vars.tf

This covers the variables needed for WordPress and MariaDB servers.
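For example, the declarations might look like this (names match the assumptions used in the sketches above):

variable "nickname"         { type = string }   # short name used in resource names and tags
variable "ami_id"           { type = string }   # AMI for the WordPress and MariaDB hosts
variable "key_name"         { type = string }   # EC2 key pair used for SSH
variable "dbhost"           { type = string }   # address WordPress uses to reach MariaDB
variable "dbname"           { type = string }   # WordPress database name
variable "db_password"      { type = string }   # WordPress database user password
variable "db_root_password" { type = string }   # MariaDB root password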

bootstrap_wordpress.tpl

This template will be used to configure each WordPress server with Docker and launch the WordPress Docker container, using the associated variables to set the EFS ID, dbhost, database name, admin password, and root password.

mariadb.tf

Notice that we are once again passing variables to our bootstrap by using a launch template.
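Sketched in the same style as the WordPress launch template (the names, arguments, and whether the database ends up as a single instance or an ASG of one are assumptions):

resource "aws_launch_template" "mariadb" {
  name_prefix            = "${var.nickname}-mariadb-"
  image_id               = var.ami_id
  instance_type          = "t2.micro"
  key_name               = var.key_name
  vpc_security_group_ids = [aws_security_group.mysql.id]

  # Same pattern as the WordPress servers: the bootstrap gets its variables here
  user_data = base64encode(templatefile("${path.module}/bootstrap_mariadb.tpl", {
    dbname           = var.dbname
    db_password      = var.db_password
    db_root_password = var.db_root_password
  }))
}

resource "aws_instance" "mariadb" {
  subnet_id = aws_subnet.private[0].id
  launch_template {
    id      = aws_launch_template.mariadb.id
    version = "$Latest"
  }
  tags = { Name = "Test-MariaDB" }
}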

bootstrap_mariadb.tpl

alb.tf

alb_target.tf

output.tf

terraform.tfvars

This file will be used to assign values to our variables. I have placed dummy values in the code below; of course, you will want to change them.
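For example (placeholder values only, using the assumed variable names from above):

nickname         = "Test"
vpc_cidr         = "10.0.0.0/16"
my_public_ip     = "203.0.113.10/32"
ami_id           = "ami-0123456789abcdef0"
key_name         = "my-keypair"
dbhost           = "10.0.2.10"
dbname           = "wordpress"
db_password      = "ChangeMe123!"
db_root_password = "ChangeMeToo456!"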

Deploy our Resources using Terraform

Be sure to edit the variables in terraform.tfvars (currently, it has bogus values)

If you are deploying this into any region other than us-west-1, you will have to change the AMI ID for the NAT instances in the file “vpc.tf”.

In your terminal, go to the VPC folder and execute the following commands:

  1. terraform init
  2. terraform validate
  3. terraform apply

Once the deployment is successful, the terminal will display output similar to the following:

Copy the lb_dns_name, without the quotes, and paste the DNS name into any browser. If you have followed along and placed all of the code correctly, you should see something like the following:

Screen Shot

Notice: Sometimes servers in an ASG take a few minutes to configure. If you get an error from the website, wait a couple of minutes and try again.

Open up your AWS Management Console and go to the EC2 dashboard. Be sure to configure your EC2 dashboard to show the tag column for the tag “Name”. A great way to identify your resources is by using tags!

If you have configured the dashboard to display the “Name” tag column in your EC2 dashboard, you should quickly be able to see one instance each with the tag names “Test-MariaDB” and “Test-NAT2”, and TWO instances with the tag name “Wordpress_ASG”.

As an experiment, perhaps you would like to expand the number of web servers. We can manually change the desired capacity, and the Auto Scaling Group will automatically scale the number of servers up or down to match.

The AWS CLI command is as follows:
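For example (ASG_Name is a placeholder for your Auto Scaling Group’s name):

aws autoscaling set-desired-capacity --auto-scaling-group-name ASG_Name --desired-capacity 4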

Where ASG_Name in the command line above is the name of your Auto Scaling Group (you can find it in the Auto Scaling Groups panel or in the Terraform output). If you successfully executed the command in your terminal, you should eventually see FOUR instances with the tag name “WordPress_ASG” in the EC2 dashboard. It does take a few minutes for the change to complete, demonstrating our ability to manually change the number of servers from two to four.

Now, go to your EC2 dashboard. Select one of the “WordPress_ASG” instances, open the “Instance state” drop-down, and select “Stop instance”. Your instance will stop, and the Auto Scaling Group and load balancer health checks will detect that one of the instances is no longer working. The Auto Scaling Group will automatically take it out of service and create a new instance.

Now go to the Auto Scaling Groups panel (found in the EC2 dashboard, left-hand pane, under “Auto Scaling”). Click on the “Activity” tab. Within a few minutes you should see an activity announcing:

“an instance was taken out of service in response to an EC2 health check indicating it has been terminated or stopped.”

The next activity will be the launch of a new instance. How about that! It’s working just as we designed the ASG to do. The ASG automatically keeps our desired number of servers in a healthy state by creating new instances whenever one becomes unhealthy.


Once completed with this exercise, feel free to remove all resources by issuing the following command in the terminal:

  • terraform destroy

This is not for production!

All public websites should have security protection with a firewall (not just a security group). Since this is just an exercise that you can run in an AWS Free Tier account, I do not recommend the use of this configuration for production.

Most cloud deployments should have monitoring in place to detect and alert someone should an event occur to any resource that requires remediation. This exercise does not include any monitoring.

It is a good idea to remove all resources when you have completed this exercise so as not to incur costs.

AWS Classic Load Balancer

Using Infrastructure as Code with Terraform to create an AWS Load-balanced website

OOPS: Things change. The code in GitHub was completely operational; now it doesn’t work. It was based on Amazon NAT instance AMIs that are no longer available.

All of the Terraform code for this exercise is in the GitHub repository.

Features

  • AWS Classic Load Balancer
  • VPC using NAT instances instead of NAT gateways
  • Docker Containers running on EC2 instances

This exercise creates a load-balanced website (similar to the previous exercise) but with essential differences: NAT instances instead of a NAT gateway, and a Docker container instead of a custom AMI as the web server.

  • AWS as a cloud provider
  • Compliant with the Free Tier plan
  • Using Terraform to create the deployment Infrastructure as Code
  • The ability to provision resources into AWS using “modular code.”
  • Four Web Servers behind a Classic load balancer
  • Ability to launch or destroy bastion host (jump server) only when required
    • Can add/remove bastion host (jump server) at any time without impact to other resources (Bastion Hosts – Provides administrators SSH access to servers located in a private network)

Difference – NAT Instance instead of NAT gateway

One of the differences between this code and the code sample in the previous exercise is that we’ll use NAT instances instead of a NAT gateway. A NAT gateway incurs costs even when using AWS under a Free Tier plan. It might only be a dollar or two per day, but it is still a cost. So just for grins, I’ve created a VPC that uses NAT instances to save a couple of dollars. A NAT instance does not compare to the performance of an AWS NAT gateway, so it is probably not a good solution for production. Considering we are simply running test environments, a NAT instance that performs a bit slower and saves a few dollars is fine with me!

Docker-based website

In the previous exercise, we used a custom AMI saved into our EC2 AMI library. A custom-built AMI works well because it allows us to customize an EC2 instance with our application and configuration and save it as a dedicated AMI image in our AWS account. A custom AMI enables greater control from a release management standpoint because our team has control of the composition of an AMI image.

However, creating a custom AMI and then saving it into our EC2 library incurs costs even when using a Free Tier plan. While it is great to use a custom AMI, it’s also essential to save money when we are simply studying AWS deployments within a Free Tier plan.

Docker to the rescue. We can create a custom Docker container with our specific application and/or configuration, much like a custom AMI.

We will be using a boot script to install Docker and launch a Docker container, saving costs by not using a custom AMI image.

I’ve created a few websites (to use as docker containers). These containers utilize website templates that are free to use under a Creative Commons license. We’ll use one of my docker containers in this exercise with the intent to eventually jump into using docker containers in ECS and EKS deployments in future activities.

The change from NAT gateway to NAT instance has an impact on our VPC configuration.

VPC Changes

  1. We will use standard Terraform AWS resource code instead of a Terraform module to create the VPC.
  2. The security group code also had to change from Terraform modules to Terraform resource code, along with the method of referencing AWS resources instead of module outputs (see the snippet after this list).
  3. Terraform outputs had to be changed as well to reflect the above changes.
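In practice, the change is mostly in how other files refer to the VPC. For example (illustrative names):

resource "aws_security_group" "web" {
  name   = "web-sg"
  # Previously (module-based VPC): vpc_id = module.vpc.vpc_id
  vpc_id = aws_vpc.main.id   # now we reference the aws_vpc resource directly
}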

ELB changes

  1. We will use standard Terraform AWS resource code instead of the Terraform community module to create a classic load balancer.

Requirements

Note:

If you performed the previous exercise, you might be tempted to reuse the same VPC code. Unfortunately, because we are using NAT instances instead of a NAT gateway, we need new code to create this VPC. The other modules in this exercise are explicitly written with references to this type of VPC, found below.

So let us get started

First, please create the folder structure shown below.

VPC

The following code files, “vpc.tf”, “var.tf”, and “security_groups.tf”, will be created and placed into the VPC folder.

The code below creates a VPC, two public subnets, two private subnets, two NAT instances (one for each public subnet), routing for the public subnets, and routing for the private subnets.
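The NAT-instance wiring is the part that differs most from a NAT-gateway VPC; a condensed sketch (the NAT AMI variable, security group, and subnet references are assumptions):

# One NAT instance per public subnet (the NAT AMI is region-specific)
resource "aws_instance" "nat" {
  count                  = 2
  ami                    = var.nat_ami_id
  instance_type          = "t2.micro"
  subnet_id              = aws_subnet.public[count.index].id
  vpc_security_group_ids = [aws_security_group.nat.id]   # assumed SG allowing traffic from the private subnets
  source_dest_check      = false                         # required so the instance can forward traffic

  tags = { Name = "NAT-${count.index + 1}" }
}

# Each private subnet sends outbound traffic through the NAT instance in its own AZ
resource "aws_route_table" "private" {
  count  = 2
  vpc_id = aws_vpc.main.id

  route {
    cidr_block           = "0.0.0.0/0"
    network_interface_id = aws_instance.nat[count.index].primary_network_interface_id
  }
}

resource "aws_route_table_association" "private" {
  count          = 2
  subnet_id      = aws_subnet.private[count.index].id
  route_table_id = aws_route_table.private[count.index].id
}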

Create the VPC code file “vpc.tf”

Variables for VPC module (var.tf)

Security Groups (security_groups.tf)

Outputs for the VPC module (output.tf)

Code for Classic Load Balancer and Docker web servers (ELB-Web.tf)

The following files, “elb-web.tf”, “var.tf”, and “bootstrap_docker.sh”, will create an AWS Classic Load Balancer and four web servers (two in each public subnet). These files need to be placed into a separate folder because the code is written to be modular and obtains its inputs from Terraform remote state output data. It will not work if placed into the same folder as the VPC code.
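The wiring between the folders looks roughly like this (the bucket, key, and output names are assumptions; adjust them to match your backend configuration):

# Read the VPC folder's outputs from its remote state in S3
data "terraform_remote_state" "vpc" {
  backend = "s3"
  config = {
    bucket = "my-terraform-state-bucket"   # your backend bucket
    key    = "vpc/terraform.tfstate"       # the VPC folder's state key
    region = "us-west-1"
  }
}

# Then use those outputs as inputs to this folder's resources, for example:
#   subnets         = data.terraform_remote_state.vpc.outputs.public_subnet_ids
#   security_groups = [data.terraform_remote_state.vpc.outputs.web_sg_id]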

The load-balanced web servers will be running a docker container as a web server. If you want to test the load balancer, feel free to read up on How to use AWS route 53 to route traffic to an AWS ELB load balancer.

Variables for ELB-Web (variables.tf)

Bootstrap to install and run Docker container (file name “bootstrap_docker.sh”)

#!/bin/bash
# Install and start Docker on Amazon Linux 2
sudo yum -y update
sudo amazon-linux-extras install -y docker
sudo usermod -a -G docker ec2-user
sudo systemctl start docker

# Run the website container, publishing port 80, and rename the host
sudo docker run -d --name mywebsite -p 80:80 surfingjoe/mywebsite:latest
sudo hostnamectl set-hostname Docker-server

Controller

The following code is not even required for the load-balanced web servers to work. But because the VPC code is different from the previous exercise, I’m including the code for a jump server (aka bastion host, or, as I call it, a controller, because I occasionally use the jump server to deploy Ansible configurations). A jump server is also sometimes necessary for SSHing into servers on a private network when analyzing failed deployments. It certainly comes in handy to have a jump server!

The following files will be placed into a separate folder, in this case named “controller”. The files “controller.tf”, “variables.tf”, and “bootstrap_controller.sh” will create the jump server (controller).

Once again, this is modular code and won’t work if these files are placed into the same folder as the VPC code. The code depends on output data being stored in the Terraform remote state S3 bucket, and it references that output data as inputs to the controller code.

Create file “controller.tf”

Note: I have some code commented out in case you want the controller to be an Ubuntu server instead of an Amazon Linux server. I’ve used both flavors over time, and my module allows me to choose at deployment time by manipulating which lines are commented out.
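The idea is simply two AMI lookups, one of which stays commented out (illustrative, not the repository’s exact code):

# Amazon Linux 2 (default)
data "aws_ami" "controller" {
  most_recent = true
  owners      = ["amazon"]
  filter {
    name   = "name"
    values = ["amzn2-ami-hvm-*-x86_64-gp2"]
  }
}

# Ubuntu alternative: comment out the block above and uncomment this one
# data "aws_ami" "controller" {
#   most_recent = true
#   owners      = ["099720109477"]   # Canonical
#   filter {
#     name   = "name"
#     values = ["ubuntu/images/hvm-ssd/ubuntu-focal-20.04-amd64-server-*"]
#   }
# }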

Create the variables file “variables.tf”

Create the bootstrap “bootstrap_controller.tf”

#!/bin/bash
# Basic controller (jump server) setup on Amazon Linux 2
sudo yum -y update
sudo hostnamectl set-hostname Controller

# Tools used for administration and deployments
sudo yum install -y unzip
sudo yum install -y awscli
sudo amazon-linux-extras list | grep ansible2
sudo amazon-linux-extras enable ansible2
# (run "sudo amazon-linux-extras install -y ansible2" if you also want Ansible installed)

Provisioning

  1. Be sure to change the S3 bucket name in S3_policy.tf (lines 16 & 17) to your S3 bucket name
  2. Be sure to change test.tfvars in the VPC folder to values of your choice
  3. Be sure to change test.tfvars in the ELB-WEB folder to values of your choice
  4. Be sure to change main.tf lines 11-13 with the configuration for your S3 bucket that stores the Terraform backend state
  5. In your terminal, go to the VPC folder and execute the following commands:
    1. terraform init
    2. terraform validate
    3. terraform apply
  6. In your terminal, go to the elb-web folder and execute the following commands:
    1. terraform init
    2. terraform validate
    3. terraform apply

That is it! We have launched, and should now have, a load-balanced static website with resilience across Availability Zones and at least two web servers in each zone for high availability.

If you want to actually test the load balancer, feel free to read up on How to use AWS Route 53 to route traffic to an AWS ELB load balancer.

The controller (bastion host), can be launched at any time. Quite often, I’ll launch the controller to troubleshoot a test deployment.

It goes without saying, but it has to be said anyway. This is not for production!

All public websites should have some type of application firewall in between the Web Server and its internet connection!

All websites should have monitoring and a method to scrape log events to detect potential problems with the deployment.

It is a good idea to remove the EC2 instances and the ELB when you are finished with the exercise so as not to incur costs.
