
This exercise will build an auto-scaled WordPress solution that uses EFS as the persistent storage layer. An auto-scaled front end can expand the number of front-end servers to handle growth in the number of users during peak hours. We also need a load balancer that automatically distributes users amongst the front-end servers.
Ideally, we should use a scaling solution based on demand. I could write an ASG that scales on demand, but demonstrating it by generating client demand (representing peak load) could incur a substantial cost, and I’m trying to keep my exercises compliant with a Free Tier plan. Soooo, an AWS ASG with a fixed desired capacity will be the solution for today.
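For reference, a demand-based approach would be a target-tracking scaling policy attached to the Auto Scaling Group we create later in wordpress.tf. A minimal sketch of what that could look like is below; the policy name and the 50% CPU target are my own illustrative assumptions and are not part of this exercise.
# Hypothetical demand-based scaling policy (not used in this exercise)
resource "aws_autoscaling_policy" "wordpress_cpu_target" {
  name                   = "wordpress-cpu-target"   # illustrative name
  autoscaling_group_name = aws_autoscaling_group.wordpress_asg.name
  policy_type            = "TargetTrackingScaling"
  target_tracking_configuration {
    predefined_metric_specification {
      predefined_metric_type = "ASGAverageCPUUtilization"
    }
    target_value = 50.0   # assumed target; tune to your expected load
  }
}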
Ideally, we should also use RDS for our database, which can scale based on demand. Using one MariaDB server that does not scale with user load somewhat defeats the purpose of a scalable architecture. However, I wrote this exercise to demonstrate deploying scaled WordPress front-end servers with an EFS shared file service, not to present an ideal production architecture. Soooo, one Free Tier-compliant MariaDB server is our plan for today.
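For completeness, a managed MariaDB on RDS would look roughly like the sketch below. The identifier, instance class, and storage size are my own assumptions, the resource would also need a DB subnet group and security group in this VPC, and it is not part of this exercise.
# Hypothetical RDS alternative to the single MariaDB EC2 instance (not used here)
resource "aws_db_instance" "wordpress_db" {
  identifier          = "wordpress-db"    # illustrative name
  engine              = "mariadb"
  instance_class      = "db.t3.micro"     # assumed small instance class
  allocated_storage   = 20                # GiB, assumed
  db_name             = var.dbname
  username            = var.user
  password            = var.password
  skip_final_snapshot = true              # convenient for a lab, not for production
}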
Why are we using EFS?
When scaling to more than one WordPress front-end server, we need a way to keep user state consistent amongst the front-end servers. We need storage common to all front-end servers so that each auto-scaled WordPress server is aware of user settings, activity, and configuration. AWS provides a shared file storage system called Amazon Elastic File System (EFS). EFS is a serverless file storage service that supports NFS versions 4.0 and 4.1, so recent versions of Amazon Linux, Red Hat, CentOS, and macOS can mount it like any other NFS file system. Amazon EC2 and other AWS compute instances running in multiple Availability Zones within the same AWS Region can access the file system, so many clients can access and share a common data source.
Each front-end server using EFS has access to shared storage, allowing each server to have all user settings, configuration, and activity information.
Docker
We will be using Docker containers for our WordPress and MariaDB servers. The previous WordPress exercise used Ansible to configure the servers with WordPress and MariaDB. But since we are auto-scaling here, I would like a way to deploy WordPress quickly rather than running scripts or playbooks on each new instance. Docker to the rescue.
This exercise will be using official Docker images “WordPress” and “MariaDB.”
Terraform
We will be using Terraform to construct our AWS resources. Our Terraform code will build a new VPC, two public subnets, two private subnets, and the associated routing and security groups. Terraform will also construct our ALB, ASG, EC2, and EFS resources.
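One thing the code below leaves implicit is version pinning. If you want reproducible runs, you can add a block like the following sketch to vpc.tf; the version constraints shown are my assumptions, so adjust them to match what you have installed.
# Optional: pin Terraform and provider versions (constraints below are assumptions)
terraform {
  required_version = ">= 1.0"
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = ">= 4.0"
    }
    random = {
      source  = "hashicorp/random"   # used by the random_pet resource in alb.tf
      version = ">= 3.0"
    }
  }
}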
Requirements
- Must have an AWS account
- Install AWS CLI, Configure AWS CLI, Install Terraform
- An EC2 Key Pair (for connecting to instances over SSH)
- AWS Administrator account or an account with the following permissions:
- create VPC, subnets, routing, and security groups
- create EC2 Instances and manage EC2 resources
- create auto-scaling groups and load balancers
- create and manage EFS and EFS mount points
GitHub Repository
https://github.com/surfingjoe/Wordpress-deployment-into-AWS-with-EFS-ALB-ASG-and-Docker
Building our Scaled WordPress Solution
vpc.tf
provider "aws" {
region = var.region
}
data "aws_availability_zones" "all" {}
terraform {
backend "s3" {
bucket = "nickname-terraform-states"
key = "terraform.tfstate"
region = "us-west-1"
}
}
data "aws_availability_zones" "available" {
state = "available"
}
data "aws_region" "current" {}
module "vpc" {
source = "terraform-aws-modules/vpc/aws"
#version => "2.64.0"
name = "${var.environment}-vpc"
cidr = var.vpc_cidr_block
azs = data.aws_availability_zones.available.names
private_subnets = slice(var.private_subnet_cidr_blocks, 0, var.private_subnet_count)
public_subnets = slice(var.public_subnet_cidr_blocks, 0, var.public_subnet_count)
intra_subnets = slice(var.intra_subnet_cidr_blocks, 0, var.intra_subnet_count)
enable_dns_support = true
enable_dns_hostnames = true
single_nat_gateway = true
enable_nat_gateway = true
enable_vpn_gateway = false
tags = {
Name = "${var.environment}-VPC"
Stage = "${var.environment}"
Owner = "${var.your_name}"
}
}
vpc_variables.tf
variable "region" {
description = "The region Terraform deploys your instances"
type = string
}
variable "ssh_location" {
type = string
description = "My Public IP Address"
}
variable "vpc_cidr_block" {
description = "CIDR block for VPC"
type = string
default = "10.0.0.0/16"
}
variable "public_subnet_count" {
description = "Number of public subnets."
type = number
}
variable "private_subnet_count" {
description = "Number of private subnets."
type = number
}
variable "intra_subnet_count" {
description = "Number of private subnets"
type = number
}
variable "public_subnet_cidr_blocks" {
description = "Available cidr blocks for public subnets"
type = list(string)
default = [
"10.0.1.0/24",
"10.0.2.0/24",
"10.0.3.0/24",
"10.0.4.0/24",
"10.0.5.0/24",
"10.0.6.0/24",
"10.0.7.0/24",
"10.0.8.0/24",
]
}
variable "private_subnet_cidr_blocks" {
description = "Available cidr blocks for private subnets"
type = list(string)
default = [
"10.0.101.0/24",
"10.0.102.0/24",
"10.0.103.0/24",
"10.0.104.0/24",
"10.0.105.0/24",
"10.0.106.0/24",
"10.0.107.0/24",
"10.0.108.0/24",
]
}
variable "intra_subnet_cidr_blocks" {
description = "Available cidr blocks for database subnets"
type = list(string)
default = [
"10.0.201.0/24",
"10.0.202.0/24",
"10.0.203.0/24",
"10.0.204.0/24",
"10.0.205.0/24",
"10.0.206.0/24",
"10.0.207.0/24",
"10.0.208.0/24"
]
}
Security

The load balancer security group will only allow HTTP inbound traffic from my public IP address (in this exercise). I may later alter this exercise to include configuring a domain with Route 53 and a certificate for that domain so that we can use HTTPS-encrypted traffic instead of HTTP. A custom domain incurs costs because Route 53 hosted zones and domain registration are not included in the Free Tier plan, so I might write up managing Route 53 with Terraform as an optional configuration later. (I sketch what the HTTPS listener could look like after the alb.tf section below.)
The WordPress Security group will only allow HTTP inbound traffic from the ALB security group and SSH only from the Controller security group.
The MySQL group will only allow MySQL protocol from the WordPress security group and SSH protocol from the Controller security group.
The optional Controller will only allow SSH inbound from My Public IP address.
security_groups.tf
resource "aws_security_group" "controller-ssh" {
name = "Controller-SG"
description = "allow SSH from my location"
vpc_id = module.vpc.vpc_id
ingress {
protocol = "tcp"
from_port = 22
to_port = 22
cidr_blocks = ["${var.ssh_location}"]
}
egress {
protocol = "-1"
from_port = 0
to_port = 0
cidr_blocks = ["0.0.0.0/0"]
}
tags = {
Name = "${var.environment}-Controller-SG"
Stage = "${var.environment}"
Owner = "${var.your_name}"
}
}
resource "aws_security_group" "web-sg" {
name = "Web-SG"
description = "allow HTTP from Load Balancer, & SSH from controller"
vpc_id = module.vpc.vpc_id
ingress {
protocol = "tcp"
from_port = 22
to_port = 22
security_groups = ["${aws_security_group.controller-ssh.id}"]
}
ingress {
from_port = 80
to_port = 80
protocol = "tcp"
security_groups = ["${aws_security_group.alb-sg.id}"]
}
egress {
protocol = "-1"
from_port = 0
to_port = 0
cidr_blocks = ["0.0.0.0/0"]
}
tags = {
Name = "${var.environment}-Web-SG"
Stage = "${var.environment}"
Owner = "${var.your_name}"
}
}
resource "aws_security_group" "alb-sg" {
name = "ALB-SG"
description = "allow Http, HTTPS"
vpc_id = module.vpc.vpc_id
ingress {
from_port = 80
to_port = 80
protocol = "tcp"
cidr_blocks = ["${var.ssh_location}"]
}
ingress {
from_port = 443
to_port = 443
protocol = "tcp"
cidr_blocks = ["${var.ssh_location}"]
}
egress {
protocol = "-1"
from_port = 0
to_port = 0
cidr_blocks = ["0.0.0.0/0"]
}
tags = {
Name = "${var.environment}-ALB-SG"
Stage = "${var.environment}"
Owner = "${var.your_name}"
}
}
resource "aws_security_group" "efs-sg" {
name = "ingress-efs-sg"
vpc_id = module.vpc.vpc_id
// NFS
ingress {
security_groups = ["${aws_security_group.controller-ssh.id}", "${aws_security_group.web-sg.id}"]
from_port = 2049
to_port = 2049
protocol = "tcp"
}
egress {
security_groups = ["${aws_security_group.controller-ssh.id}", "${aws_security_group.web-sg.id}"]
from_port = 0
to_port = 0
protocol = "-1"
}
tags = {
Name = "${var.environment}-MyEFS-SG"
Stage = "${var.environment}"
Owner = "${var.your_name}"
}
}
resource "aws_security_group" "MySQL-sg" {
name = "MySQL-SG"
description = "allow SSH from Controller and MySQL from my IP and from web servers"
vpc_id = module.vpc.vpc_id
ingress {
protocol = "tcp"
from_port = 22
to_port = 22
security_groups = ["${aws_security_group.controller-ssh.id}"]
}
ingress {
from_port = 3306
to_port = 3306
protocol = "tcp"
security_groups = ["${aws_security_group.web-sg.id}"]
}
egress {
protocol = "-1"
from_port = 0
to_port = 0
cidr_blocks = ["0.0.0.0/0"]
}
tags = {
Name = "${var.environment}-MySQL-SG"
Stage = "${var.environment}"
Owner = "${var.your_name}"
}
}
efs.tf
We are writing the Terraform code to create a general-purpose EFS deployment. You’ll note that I’m using a variable called “nickname” to create a unique EFS creation token. We are using the “general purpose” performance mode and “bursting” throughput mode to stay within the Free Tier plan and not incur costs. You’ll also notice that we create a mount target in each intra subnet so that our EC2 instances can make NFS mounts to the EFS file system.
resource "aws_efs_file_system" "my_efs" {
creation_token = "${var.nickname}-efs"
performance_mode = "generalPurpose"
throughput_mode = "bursting"
encrypted = "true"
tags = {
Name = "${var.environment}-MyEFS"
Stage = "${var.environment}"
Owner = "${var.your_name}"
}
}
resource "aws_efs_mount_target" "efs-mt-A" {
file_system_id = aws_efs_file_system.my_efs.id
subnet_id = module.vpc.intra_subnets[0]
security_groups = ["${aws_security_group.efs-sg.id}"]
}
resource "aws_efs_mount_target" "efs-mt-B" {
file_system_id = aws_efs_file_system.my_efs.id
subnet_id = module.vpc.intra_subnets[1]
security_groups = ["${aws_security_group.efs-sg.id}"]
}
wordpress.tf
The method of creating an auto-scaled WordPress deployment uses the same kind of Terraform code found in my previous exercise. If you would like more discussion of the key attributes and decisions involved in writing Terraform for an Auto Scaling Group, please refer to my previous article.
Notice that I added a dependency on MariaDB in the code. It is not required; the code works with or without the dependency, but I like the idea of telling Terraform that I want our database active before creating WordPress.
Notice that we pass variables for the EFS ID, database host, database name, database user, and password into the bootstrap template that is rendered into the launch template’s user data.
#--------- Get Amazon Linux 2 AMI image -------------------
data "aws_ami" "amazon_linux" {
most_recent = true
owners = ["amazon"]
filter {
name = "name"
values = ["amzn2-ami-hvm-*-x86_64-gp2"]
}
}
# --------- Create Launch Template ----------------------------
resource "aws_launch_template" "wordpress" {
image_id = data.aws_ami.amazon_linux.id
instance_type = var.instance_type
key_name = var.key
vpc_security_group_ids = ["${aws_security_group.web-sg.id}"]
user_data = base64encode("${data.template_file.bootstrap.rendered}")
depends_on = [aws_efs_file_system.my_efs, aws_instance.mariadb]
lifecycle { create_before_destroy = true }
#monitoring = true
}
# ------------ Create Auto Scaling Group ----------------------
resource "aws_autoscaling_group" "wordpress_asg" {
launch_template {
name = aws_launch_template.wordpress.name
version = aws_launch_template.wordpress.latest_version
}
vpc_zone_identifier = module.vpc.private_subnets
min_size = 2
max_size = 6
desired_capacity = 2
tag {
key = "Name"
value = "Wordpress_ASG"
propagate_at_launch = true
}
depends_on = [aws_instance.mariadb]
}
data "template_file" "bootstrap" {
template = file("bootstrap_wordpress.tpl")
vars = {
efs_id = "${aws_efs_file_system.my_efs.id}"
dbhost = "${aws_instance.mariadb.private_ip}"
user = var.user
password = var.password
dbname = var.dbname
domain_name = var.domain_name
}
}
vars.tf
This covers the variables needed for WordPress and MariaDB servers.
variable "instance_type" {
description = "Type of EC2 instance to use"
type = string
default = "t2.micro"
}
variable "environment" {
description = "User selects environment"
type = string
}
variable "your_name" {
description = "Your Name?"
type = string
}
variable "key" {
description = "EC2 Key Pair Name"
type = string
}
variable "user" {
description = "SQL User for WordPress"
type = string
}
variable "dbname" {
description = "Database name for WordPress"
type = string
}
variable "password" {
description = "User password for WordPress"
type = string
}
variable "root_password" {
description = "User password for WordPress"
type = string
}
variable "domain_name" {
description = "My Domain Name"
type = string
}
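One declaration is missing from vars.tf as shown here: efs.tf and terraform.tfvars both reference var.nickname. If your copy of the repository does not declare it elsewhere, add something like the following (the description wording is mine):
# Referenced by the EFS creation token in efs.tf and set in terraform.tfvars
variable "nickname" {
  description = "Short nickname used to build a unique EFS creation token"
  type        = string
}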
bootstrap_wordpress.tpl
This bootstrap script (rendered by Terraform from the template above) configures each WordPress server with Docker and launches the WordPress container, using the passed-in variables for the EFS ID, database host, database name, database user, and password.
#!/bin/bash
sudo yum -y update
hostnamectl set-hostname wordpress
# ----- Install AWS EFS Utilities --------------------
yum install -y amazon-efs-utils
# ----- Create EFS Mount --------------------
mkdir /efs
mount -t efs ${efs_id}:/ /efs
# ----- Edit fstab so EFS automatically loads on reboot
echo ${efs_id}:/ /efs efs defaults,_netdev 0 0 >> /etc/fstab
# Install & Run Docker --------------------
sudo amazon-linux-extras install -y docker
sudo usermod -a -G docker ec2-user
sudo systemctl start docker
# ----- Install docker compose --------------------
sudo curl -L "https://github.com/docker/compose/releases/download/1.29.2/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose
sudo chmod +x /usr/local/bin/docker-compose
sudo ln -s /usr/local/bin/docker-compose /usr/bin/docker-compose
# ----- Docker run wordpress with assigning env variables -------
docker run -d -e WORDPRESS_DB_HOST=${dbhost} -e WORDPRESS_DB_PASSWORD=${password} -e WORDPRESS_DB_USER=${user} -e WORDPRESS_DB_NAME=${dbname} -v /efs/wordpress:/var/www/html -p 80:80 wordpress:latest
mariadb.tf
Notice that we are once again passing variables to our bootstrap by using a launch template.
# Create the MariaDB database server
resource "aws_instance" "mariadb" {
ami = data.aws_ami.amazon_linux.id
instance_type = var.instance_type
subnet_id = module.vpc.private_subnets[1]
vpc_security_group_ids = ["${aws_security_group.MySQL-sg.id}"]
user_data = data.template_file.bootstrap-db.rendered
monitoring = true
key_name = var.key
depends_on = [aws_efs_file_system.my_efs]
tags = {
Name = "${var.environment}-MariaDB"
Stage = "${var.environment}"
Owner = "${var.your_name}"
}
}
data "template_file" "bootstrap-db" {
template = file("bootstrap_mariadb.tpl")
vars = {
efs_id = "${aws_efs_file_system.my_efs.id}"
root_password = var.root_password
user = var.user
password = var.password
dbname = var.dbname
}
}
bootstrap_mariadb.tpl
#!/bin/bash
sudo yum -y update
hostnamectl set-hostname mariadb
# ----- Install AWS EFS Utilities ---------------
yum install -y amazon-efs-utils
# ----- Create the EFS Mount --------------------
mkdir /efs
mount -t efs ${efs_id}:/ /efs
mkdir -p /efs/mariadb
# ----- Edit fstab so EFS automatically loads on reboot ---
echo ${efs_id}:/ /efs efs defaults,_netdev 0 0 >> /etc/fstab
# ----- Install & Run Docker ---------------------
sudo amazon-linux-extras install -y docker
sudo usermod -a -G docker ec2-user
sudo systemctl start docker
docker run --name mariadb -e MYSQL_ROOT_PASSWORD=${root_password} -e MYSQL_USER=${user} -e MYSQL_PASSWORD=${password} -e MYSQL_DATABASE=${dbname} -p 3306:3306 -d -v /efs/mariadb:/var/lib/mysql docker.io/library/mariadb
alb.tf
resource "random_pet" "app" {
length = 2
separator = "-"
}
resource "aws_lb" "wordpress-alb" {
name = "main-app-${random_pet.app.id}-lb"
internal = false
load_balancer_type = "application"
subnets = module.vpc.public_subnets
security_groups = ["${aws_security_group.alb-sg.id}"]
}
resource "aws_lb_listener" "wordpress-alb-listner" {
load_balancer_arn = aws_lb.wordpress-alb.arn
port = "80"
protocol = "HTTP"
default_action {
type = "forward"
forward {
target_group {
arn = aws_lb_target_group.wordpress-target.arn
}
stickiness {
enabled = true
duration = 1
}
}
}
}
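If you later add the optional Route 53 domain and certificate mentioned in the Security section, the HTTPS side of that change would look roughly like the sketch below. The var.certificate_arn variable is my own assumption (it would hold the ARN of an ACM certificate in the same region) and is not part of this exercise.
# Hypothetical HTTPS listener for the optional domain/certificate setup (not used here)
resource "aws_lb_listener" "wordpress-alb-https" {
  load_balancer_arn = aws_lb.wordpress-alb.arn
  port              = 443
  protocol          = "HTTPS"
  ssl_policy        = "ELBSecurityPolicy-2016-08"
  certificate_arn   = var.certificate_arn   # assumed variable, not declared in this exercise
  default_action {
    type             = "forward"
    target_group_arn = aws_lb_target_group.wordpress-target.arn
  }
}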
alb_target.tf
resource "aws_lb_target_group" "wordpress-target" {
name = "wordpress-${random_pet.app.id}-lb"
port = 80
protocol = "HTTP"
vpc_id = module.vpc.vpc_id
health_check {
port = 80
protocol = "HTTP"
timeout = 5
interval = 10
}
}
# ----- Create a new ALB Target Group attachment. ------
resource "aws_autoscaling_attachment" "asg_attachment_website" {
autoscaling_group_name = aws_autoscaling_group.wordpress_asg.id
lb_target_group_arn = aws_lb_target_group.wordpress-target.arn
}
output.tf
output "Controller-sg_id" {
value = [aws_security_group.controller-ssh.id]
}
output "vpc_id" {
description = "Output VPC ID"
value = module.vpc.vpc_id
}
output "public_subnet_ids" {
description = "Public subnet IDs"
value = module.vpc.public_subnets
}
output "private_subnet_ids" {
description = "Private subnet IDs"
value = module.vpc.private_subnets
}
output "lb_dns_name" {
value = aws_lb.wordpress-alb.dns_name
}
output "Auto_Scaling_Group_Name" {
value = aws_autoscaling_group.wordpress_asg.name
}
terraform.tfvars
This file will be used to assign values to our variables. I have placed dummy values in the code below; of course, you will want to change them.
your_name = "Your name"
ssh_location = "1.2.3.4/32"
root_password = "Password"
user = "wordpress"
password = "Password"
dbname = "Wordpress"
environment = "Test"
key = "Your EC2 Key name"
region = "us-west-1"
public_subnet_count = "2"
private_subnet_count = "2"
intra_subnet_count = "2"
nickname = "Your nickname"
domain = "Your domain name"
Deploy our Resources using Terraform
Be sure to edit the variables in terraform.tfvars (currently, it has bogus values)
If you are placing this into any other region than us-west-1, you will have to change the AMI ID for the NAT instances in the file “vpc.tf”.
In your terminal, go to the VPC folder and execute the following commands:
terraform init
terraform validate
terraform apply
Once the deployment is successful, the terminal will output something like the following output:
Auto_Scaling_Group_Name = "terraform-20220624191901645100000004"
Controller-sg_id = [
"sg-03fbbf2bf5df75562",
]
aws_region = "us-west-1"
lb_dns_name = "main-app-nearby-lab-lb-73970083.us-west-1.elb.amazonaws.com"
vpc_id = "vpc-0ae0cd8eef3139128"
Copy the lb_dns_name, without the quotes, and paste the DNS name into any browser. If you have followed along and placed all of the code correctly, you should see something like the following:

Notice: Sometimes servers in an ASG take a few minutes to configure. If you get an error from the website, wait a couple of minutes and try again.
Open up your AWS Management Console and go to the EC2 dashboard. Be sure to configure the dashboard to display the “Name” tag column. Tags are a great way to identify your resources!
With the “Name” tag column displayed, you should quickly see one instance named “Test-MariaDB”, one named “Test-NAT2”, and TWO servers named “Wordpress_ASG”.
As an experiment, perhaps you would like to expand the number of web servers. We can manually change the desired capacity, and the Auto Scaling Group will scale the number of servers up or down to match.
The AWS CLI command is as follows:
aws autoscaling set-desired-capacity \
--auto-scaling-group-name ASG_Name \
--desired-capacity 4 \
--honor-cooldown
Replace ASG_Name in the command above with the Auto_Scaling_Group_Name value from the Terraform output (without the quotes, of course). If the command succeeds, you should eventually see FOUR instances with the tag name “Wordpress_ASG” in the EC2 dashboard. It does take a few minutes for the change to take effect, but it demonstrates our ability to manually change the number of servers from two to four.
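A variation, if you prefer to keep the instance count under Terraform’s control instead of changing it out of band: expose the desired capacity as a variable and re-apply. A minimal sketch follows (the variable name is my own). Note that whatever you set with the CLI will be reset to the value in wordpress.tf the next time you run terraform apply.
# Hypothetical variable-driven desired capacity (not part of the exercise code)
variable "web_desired_capacity" {
  description = "Number of WordPress instances the ASG should keep running"
  type        = number
  default     = 2
}
# ...and in the aws_autoscaling_group "wordpress_asg" block:
#   desired_capacity = var.web_desired_capacity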
Now, go to your EC2 dashboard. Select one of the “Wordpress_ASG” instances, open the “Instance state” drop-down, and select “Stop instance”. The instance will stop, and the Auto Scaling Group and load balancer health checks will detect that one of the instances is no longer working. The Auto Scaling Group will automatically take it out of service and create a new instance.
Now go to the Auto Scaling Groups panel (in the EC2 dashboard, left-hand pane under “Auto Scaling”) and click the “Activity” tab. Within a few minutes, you should see an activity announcing:
“an instance was taken out of service in response to an EC2 health check indicating it has been terminated or stopped.”
The next activity will be launching a new instance. How about that! Working just as we designed the ASG to do. The ASG automatically keeps our desired number of servers healthy by creating new instances when one becomes unhealthy.
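As written, the ASG relies on EC2 status checks. If you also want it to replace instances that fail the load balancer’s HTTP health check, you could add the two arguments below inside the aws_autoscaling_group "wordpress_asg" block; these are optional additions of my own, not part of the exercise code.
  health_check_type         = "ELB"   # treat ALB health-check failures as unhealthy, not just EC2 checks
  health_check_grace_period = 300     # seconds to let the bootstrap finish before health checks count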
Once you have completed this exercise, feel free to remove all resources by issuing the following command in the terminal:
- terraform destroy

This is not for production!
All public websites should be protected by a firewall (not just a security group). Since this is just an exercise that you can run in an AWS Free Tier account, I do not recommend using this configuration for production.
Most cloud deployments should have monitoring in place to detect events that require remediation and alert someone. This exercise does not include any monitoring.
It is a good idea to remove all resources when you have completed this exercise so as not to incur costs.
Hi Joseph. Thanks so much for sharing all your work here.
I would like to deploy a WordPress site, hosted on an AWS instance in eu-central-1. I’m VERY new to AWS and Terraform (but keen to learn).
In your notes above (‘Deploy our Resources using Terraform’) you mention:
“If you are placing this into any other region than us-west-1, you will have to change the AMI ID for the NAT instances in the file ‘vpc.tf’.” That sounds like me.
But when I look at the code for ‘vpc.tf’, I don’t see mention of AMI ID for the NAT instances. Could you help me out here? I realise the AMI ID needs to go in there somewhere… but where/how?
Thanks again.
Hello Andrew,
Sorry for the confusion; I accidentally copied that note from another post where I created resources using a dedicated NAT instance instead of a NAT Gateway. In vpc.tf, you’ll see the line “single_nat_gateway = true”. This establishes the use of an AWS NAT Gateway, so you do not have to find a unique NAT AMI for your region.
Note: For deployments that are compliant with the AWS Free Tier Plan, I avoid using NAT Gateways because they are not part of the free tier and will have a small cost per day.
The code as is (at least at the time of writing the article) works, and you can ignore that statement in my article. I will edit the article and remove the statement about having to change the NAT for a different region.
PS: Since I wrote the article, AWS no longer provides a NAT EC2 resource; they are no longer publishing updated NAT AMI images. Because of that, I have created my own NAT build, and I will attempt to publish a new post in the next couple of days with the code to create a custom NAT. Changing the code to use a custom NAT will then avoid the cost of using AWS NAT Gateways.