Using Terraform and Ansible to create a simple WordPress Deployment

The code discussed in this article can be found in my public GitHub repository.
Features
Deploy WordPress using Infrastructure as Code into AWS
Terraform deploys the infrastructure as code: the VPC, EC2 instances, IAM policy, and S3-backed state.
Ansible will be used to build and configure the MariaDB database server in the private subnet and the WordPress server in the public subnet.
All the AWS resources created below fit within an AWS Free Tier plan. But don’t leave them running! The Free Tier only includes a limited number of EC2 hours per month.
This exercise will perform the following tasks using Terraform HCL code:
- A module to create a VPC with security groups and routing
- A module to create two S3 buckets
- We’ll copy some Ansible code to one of the S3 buckets
- We’ll put Terraform remote state in the other S3 bucket
- Build code that creates EC2 instances for a WordPress server and a MariaDB server using Terraform
- Create an IAM policy that allows an EC2 instance to copy files from the S3 bucket
- Build code that creates the “Controller” server
- Use the Controller to execute Ansible playbooks for configuring WordPress and MariaDB
Requirements
- Must have an AWS account
- Install and configure the AWS CLI
- Install Terraform
- An EC2 key pair for your region (for connecting over SSH)
- I recommend using your own domain (either register a new domain in Route 53 or use any registrar service of your choice)
- AWS Administrator account or an account with the following permissions:
- Create IAM policies
- Create and Manage changes in Route 53
- Create VPC, subnets, routing, and security groups
- Create and manage EC2 resources
- Create an EC2 key pair for the region
- Create and manage an S3 Bucket for Terraform Remote State
- Create an S3 bucket for the Ansible playbooks and other configuration files
So let’s get started
Please create the following folder structure: a project folder containing subfolders named VPC, Servers, Ansible, and Controller.
S3 Buckets
An S3 bucket can be created using the AWS Management Console, or, even faster, with the AWS Command Line Interface (AWS CLI).
You may not require server-side encryption or versioning for this exercise. If multiple people use the same S3 bucket, you should consider enabling versioning. If your team treats specific parameters as sensitive information, I suggest server-side encryption.
An S3 bucket name must be globally unique, meaning literally unique across all AWS regions and accounts.
- Create an S3 bucket for the Terraform remote state
- It can be any S3 bucket name of your choice, but of course, I recommend something like “your-nickname-terraform-state” (note that bucket names cannot contain underscores or uppercase letters)
- Create an S3 bucket to hold configuration files for WordPress, MariaDB, and Ansible playbooks
- I recommend a name like “your-nickname-ansible-files”
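Both buckets can be created from the CLI in a few commands. A sketch, assuming placeholder bucket names and the us-west-1 region (note that regions other than us-east-1 require the LocationConstraint argument):

```shell
# Placeholders: substitute your own bucket names and region
aws s3api create-bucket \
  --bucket your-nickname-terraform-state \
  --region us-west-1 \
  --create-bucket-configuration LocationConstraint=us-west-1

aws s3api create-bucket \
  --bucket your-nickname-ansible-files \
  --region us-west-1 \
  --create-bucket-configuration LocationConstraint=us-west-1

# Optional: versioning on the state bucket is a good idea if several people share it
aws s3api put-bucket-versioning \
  --bucket your-nickname-terraform-state \
  --versioning-configuration Status=Enabled
```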
Creating the VPC
The VPC module will create:
- One public subnet for the WordPress website and another server I call the Controller
- One private subnet for the database server
- A NAT instance (instead of a NAT gateway) and the associated routing
- Security groups
- Outputs that other modules will use to obtain data
Make sure you have created the S3 bucket for Terraform remote state, named something like “your-nickname-terraform-state.”
Create the following code in “vpc.tf” in the VPC folder:
# ---------- Stipulate AWS as Cloud provider --------
terraform {
  required_providers {
    aws = {
      source = "hashicorp/aws"
    }
  }
}

# ---------- Store Terraform Backend in S3 Bucket --------
terraform {
  backend "s3" {
    bucket = "Your Terraform state bucket name here"
    key    = "terraform.tfstate"
    region = "Your region"
  }
}

# ---------- Region -----------------
provider "aws" {
  region = var.region
}

data "aws_region" "current" {}

# ------------------ Create the VPC -----------------------
resource "aws_vpc" "my-vpc" {
  cidr_block           = var.vpc_cidr
  enable_dns_support   = true
  enable_dns_hostnames = true

  tags = {
    Name  = "${var.environment}-VPC"
    Stage = var.environment
    Owner = var.your_name
  }
}

# --------------------- Public Subnet -------------------
resource "aws_subnet" "public" {
  vpc_id                  = aws_vpc.my-vpc.id
  map_public_ip_on_launch = true
  availability_zone       = var.av-zone1
  cidr_block              = var.public_cidr

  tags = {
    Name  = "${var.environment}-public"
    Stage = var.environment
    Owner = var.your_name
  }
}

# ----------------- Internet Gateway -----------------------
resource "aws_internet_gateway" "test-igw" {
  vpc_id = aws_vpc.my-vpc.id

  tags = {
    Name  = "${var.environment}-IGW"
    Stage = var.environment
    Owner = var.your_name
  }
}

# ------------------ Setup Route table to IGW -----------------
resource "aws_route_table" "public-route" {
  vpc_id = aws_vpc.my-vpc.id

  route {
    cidr_block = "0.0.0.0/0"
    gateway_id = aws_internet_gateway.test-igw.id
  }

  tags = {
    Name  = "${var.environment}-Public-Route"
    Stage = var.environment
    Owner = var.your_name
  }
}

# ----------- Setup Public subnet Route table association -----
resource "aws_route_table_association" "public-assoc" {
  subnet_id      = aws_subnet.public.id
  route_table_id = aws_route_table.public-route.id
}

# --------------------- Private Subnet -------------------
resource "aws_subnet" "private" {
  vpc_id                  = aws_vpc.my-vpc.id
  map_public_ip_on_launch = false
  availability_zone       = var.av-zone1
  cidr_block              = var.private_cidr

  tags = {
    Name  = "${var.environment}-private"
    Stage = var.environment
    Owner = var.your_name
  }
}

# --------------- Setup NAT for Private Subnet traffic ---------
resource "aws_instance" "nat" {
  ami                         = "ami-084f9c6fa14e0b9a5" # AWS NAT instance, publish date: 2022-05-04
  instance_type               = var.instance_type
  subnet_id                   = aws_subnet.public.id
  vpc_security_group_ids      = [aws_security_group.nat-sg.id, aws_security_group.controller-ssh.id]
  associate_public_ip_address = true
  source_dest_check           = false # must be off so the instance can forward traffic for other hosts
  user_data                   = file("bootstrap_nat.sh")
  monitoring                  = true
  key_name                    = var.aws_key_name

  tags = {
    Name  = "${var.environment}-NAT Instance"
    Stage = var.environment
    Owner = var.your_name
  }
}

# ------------------ Setup Route to NAT -----------------
resource "aws_route_table" "nat-route" {
  vpc_id = aws_vpc.my-vpc.id

  route {
    cidr_block = "0.0.0.0/0"
    # "instance_id" is deprecated in recent AWS provider versions; newer
    # providers may require
    # network_interface_id = aws_instance.nat.primary_network_interface_id
    instance_id = aws_instance.nat.id
  }

  tags = {
    Name  = "${var.environment}-Private-Route"
    Stage = var.environment
    Owner = var.your_name
  }
}

resource "aws_route_table_association" "private-route-association" {
  subnet_id      = aws_subnet.private.id
  route_table_id = aws_route_table.nat-route.id
}
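One caveat on the NAT instance above: the hard-coded AMI ID is region-specific. If you deploy to a different region, you can look up Amazon’s latest published NAT instance image there with the AWS CLI; a hedged sketch (the filter pattern matches Amazon’s amzn-ami-vpc-nat image naming):

```shell
aws ec2 describe-images \
  --owners amazon \
  --region us-west-1 \
  --filters "Name=name,Values=amzn-ami-vpc-nat-*" \
  --query 'sort_by(Images, &CreationDate)[-1].[ImageId,Name]' \
  --output text
```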
}
Create the file “variables.tf” in the VPC folder
variable "aws_key_name" {
  type    = string
  default = "your key name"
}

variable "region" {
  type    = string
  default = "your region"
}

variable "environment" {
  description = "User selects environment"
  type        = string
  default     = "Test"
}

variable "your_name" {
  description = "Your Name?"
  type        = string
  default     = "Your Name"
}

variable "av-zone1" {
  type    = string
  default = "us-west-1a"
}

variable "av-zone2" {
  type    = string
  default = "us-west-1c"
}

variable "ssh_location" {
  type        = string
  description = "My Public IP Address"
  default     = "Your IP address" # use CIDR notation, e.g. "203.0.113.7/32"
}

variable "vpc_cidr" {
  type    = string
  default = "10.0.0.0/16"
}

variable "public_cidr" {
  type    = string
  default = "10.0.1.0/24"
}

variable "private_cidr" {
  type    = string
  default = "10.0.101.0/24"
}

variable "instance_type" {
  type    = string
  default = "t2.micro"
}
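Rather than editing the defaults above in place, you can keep your personal values in a “terraform.tfvars” file alongside vpc.tf; Terraform loads it automatically. Every value below is a placeholder:

```hcl
aws_key_name = "my-ec2-keypair" # name of your EC2 key pair
region       = "us-west-1"
environment  = "Test"
your_name    = "Jane"
ssh_location = "203.0.113.7/32" # your public IP in CIDR notation
```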
Create the file “security_groups.tf” in the VPC folder
# -------------- Security Group for Controller -----------------
resource "aws_security_group" "controller-ssh" {
  name        = "ssh"
  description = "allow SSH from MyIP"
  vpc_id      = aws_vpc.my-vpc.id

  ingress {
    protocol    = "tcp"
    from_port   = 22
    to_port     = 22
    cidr_blocks = [var.ssh_location]
  }

  egress {
    protocol    = "-1"
    from_port   = 0
    to_port     = 0
    cidr_blocks = ["0.0.0.0/0"]
  }

  tags = {
    Name  = "${var.environment}-SSH_SG"
    Stage = var.environment
    Owner = var.your_name
  }
}

# -------------- Security Group for NAT instances --------------
resource "aws_security_group" "nat-sg" {
  name        = "nat-sg"
  description = "Allow traffic to pass from the private subnet to the internet"
  vpc_id      = aws_vpc.my-vpc.id

  ingress {
    from_port   = 80
    to_port     = 80
    protocol    = "tcp"
    cidr_blocks = [var.private_cidr]
  }

  ingress {
    from_port   = 443
    to_port     = 443
    protocol    = "tcp"
    cidr_blocks = [var.private_cidr]
  }

  ingress {
    from_port = 22
    to_port   = 22
    protocol  = "tcp"
    #cidr_blocks = ["0.0.0.0/0"]
    security_groups = [aws_security_group.controller-ssh.id]
  }

  ingress {
    from_port   = -1
    to_port     = -1
    protocol    = "icmp"
    cidr_blocks = [var.vpc_cidr]
  }

  egress {
    from_port   = 80
    to_port     = 80
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  egress {
    from_port   = 443
    to_port     = 443
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  egress {
    from_port   = 22
    to_port     = 22
    protocol    = "tcp"
    cidr_blocks = [var.vpc_cidr]
  }

  egress {
    from_port   = -1
    to_port     = -1
    protocol    = "icmp"
    cidr_blocks = [var.vpc_cidr]
  }

  tags = {
    Name  = "${var.environment}-NAT-Sg"
    Stage = var.environment
    Owner = var.your_name
  }
}

# -------------- Security Group for WordPress Server -----------
resource "aws_security_group" "wp-sg" {
  name        = "WordPress-SG"
  description = "allow SSH from Controller and HTTP & HTTPS from my IP"
  vpc_id      = aws_vpc.my-vpc.id

  ingress {
    protocol        = "tcp"
    from_port       = 22
    to_port         = 22
    security_groups = [aws_security_group.controller-ssh.id]
  }

  ingress {
    from_port   = 80
    to_port     = 80
    protocol    = "tcp"
    cidr_blocks = [var.ssh_location]
  }

  ingress {
    from_port   = 443
    to_port     = 443
    protocol    = "tcp"
    cidr_blocks = [var.ssh_location]
  }

  egress {
    protocol    = "-1"
    from_port   = 0
    to_port     = 0
    cidr_blocks = ["0.0.0.0/0"]
  }

  tags = {
    Name  = "${var.environment}-WordPress-SG"
    Stage = var.environment
    Owner = var.your_name
  }
}

resource "aws_security_group" "mysql-sg" {
  name        = "MySQL-SG"
  description = "allow SSH from Controller and MySQL from WordPress server"
  vpc_id      = aws_vpc.my-vpc.id

  ingress {
    protocol        = "tcp"
    from_port       = 22
    to_port         = 22
    security_groups = [aws_security_group.controller-ssh.id]
  }

  ingress {
    from_port       = 3306
    to_port         = 3306
    protocol        = "tcp"
    security_groups = [aws_security_group.wp-sg.id]
  }

  egress {
    protocol    = "-1"
    from_port   = 0
    to_port     = 0
    cidr_blocks = ["0.0.0.0/0"]
  }

  tags = {
    Name  = "${var.environment}-MySQL-SG"
    Stage = var.environment
    Owner = var.your_name
  }
}
Create “output.tf” in the VPC folder
output "aws_region" {
  description = "AWS region"
  value       = data.aws_region.current.name
}

output "vpc_id" {
  description = "VPC ID"
  value       = aws_vpc.my-vpc.id
}

output "public_subnet" {
  description = "Public Subnet"
  value       = aws_subnet.public.id
}

output "private_subnet" {
  description = "Private Subnet"
  value       = aws_subnet.private.id
}

output "Controller-sg" {
  description = "Security group IDs for Controller"
  value       = [aws_security_group.controller-ssh.id]
}

output "wordpress-sg" {
  description = "Security group IDs for WordPress"
  value       = [aws_security_group.wp-sg.id]
}

output "mysql-sg" {
  description = "Security group IDs for MySQL"
  value       = [aws_security_group.mysql-sg.id]
}
Create a folder named “Servers”
We are creating an EC2 instance for WordPress in the public subnet and another EC2 instance, for MariaDB, in the private subnet.
Create the file “servers.tf” in the Servers folder
# ---------- Stipulate AWS as Cloud provider --------
terraform {
  required_providers {
    aws = {
      source = "hashicorp/aws"
    }
  }
}

#------------------------- State terraform backend location-----
data "terraform_remote_state" "vpc" {
  backend = "s3"
  config = {
    bucket = "your-bucket-name-terraform-states"
    key    = "terraform.tfstate"
    region = "us-west-1"
  }
}

# --------------------- Determine region from backend data -----
provider "aws" {
  region = data.terraform_remote_state.vpc.outputs.aws_region
}

#--------- Get Ubuntu 20.04 AMI image (SSM Parameter data) -----
data "aws_ssm_parameter" "ubuntu-focal" {
  name = "/aws/service/canonical/ubuntu/server/20.04/stable/current/amd64/hvm/ebs-gp2/ami-id"
}

# Creating MariaDB server
resource "aws_instance" "mariadb" {
  ami                    = data.aws_ssm_parameter.ubuntu-focal.value # from SSM Parameter
  instance_type          = var.instance_type
  subnet_id              = data.terraform_remote_state.vpc.outputs.private_subnet
  vpc_security_group_ids = data.terraform_remote_state.vpc.outputs.mysql-sg
  private_ip             = "10.0.101.30"
  user_data              = file("bootstrap_db.sh")
  monitoring             = true
  key_name               = var.key

  tags = {
    Name  = "${var.environment}-MariaDB"
    Stage = var.environment
    Owner = var.your_name
  }
}

# Creating WordPress server
resource "aws_instance" "Wordpress" {
  ami                    = data.aws_ssm_parameter.ubuntu-focal.value # from SSM Parameter
  instance_type          = var.instance_type
  subnet_id              = data.terraform_remote_state.vpc.outputs.public_subnet
  vpc_security_group_ids = data.terraform_remote_state.vpc.outputs.wordpress-sg
  private_ip             = "10.0.1.20"
  user_data              = file("bootstrap_wp.sh")
  key_name               = var.key

  tags = {
    Name  = "${var.environment}-Wordpress"
    Stage = var.environment
    Owner = var.your_name
  }
}
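You can sanity-check that SSM parameter lookup from your terminal before applying; it should print the current Ubuntu 20.04 AMI ID for your configured region:

```shell
aws ssm get-parameters \
  --names /aws/service/canonical/ubuntu/server/20.04/stable/current/amd64/hvm/ebs-gp2/ami-id \
  --query 'Parameters[0].Value' \
  --output text
```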
Create “bootstrap_db.sh”
#!/bin/bash
sudo apt-get update
sudo apt-get -y upgrade
hostnamectl set-hostname mariadb
Create “bootstrap_wp.sh”
#!/bin/bash
sudo apt-get update
sudo apt-get -y upgrade
hostnamectl set-hostname WordPress
Create a folder named Ansible
Nine files are placed in the Ansible folder. All the files for Ansible are in my GitHub repository. Be sure to edit the appropriate files as stated below to personalize your choices for things like the DB password.
- ansible.cfg
- hosts.ini
- provision-db.yml
- provision-wp.yml
The other five files will be placed onto the MariaDB server and the WordPress server by Ansible.
- Files for MariaDB
- 50-server.cnf
- vars.yml
- Files for WordPress
- dir.conf
- example.com.conf
- wordpress.zip
Edit “vars.yml” to reflect your choices of USERNAME, PASSWORD, DBNAME, NEW_ADMIN, and NEW_ADMIN_PASSWORD. Ensure you also edit wp-config.php in the “wordpress.zip” archive to match.
Uncompress wordpress.zip and edit “wp-config.php”: lines 23, 26, and 29 should reflect your choices of DB_NAME, DB_USER, and DB_PASSWORD. Make sure they match “vars.yml.”
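If you prefer to script that edit, sed can rewrite the three define() lines. The sketch below runs against a stand-in fragment of wp-config.php so you can see the substitution; the DB values are placeholders that must match your vars.yml:

```shell
# Stand-in fragment with the same define() shape as WordPress's wp-config.php
mkdir -p wordpress-demo
cat > wordpress-demo/wp-config.php <<'EOF'
define( 'DB_NAME', 'database_name_here' );
define( 'DB_USER', 'username_here' );
define( 'DB_PASSWORD', 'password_here' );
EOF

# Placeholder values -- use the same ones you put in vars.yml
DB_NAME="wordpress"
DB_USER="wpuser"
DB_PASSWORD="changeme"

# Replace whatever is currently between the quotes after each key
sed -i \
  -e "s/'DB_NAME', *'[^']*'/'DB_NAME', '${DB_NAME}'/" \
  -e "s/'DB_USER', *'[^']*'/'DB_USER', '${DB_USER}'/" \
  -e "s/'DB_PASSWORD', *'[^']*'/'DB_PASSWORD', '${DB_PASSWORD}'/" \
  wordpress-demo/wp-config.php

cat wordpress-demo/wp-config.php
```

Run the same sed lines against the real wp-config.php after unzipping wordpress.zip, then zip it back up.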
Create an S3 bucket for the above files
If you have not already created an S3 bucket to hold the ansible files, please do so now.
Copy the files from the Ansible folder of my GitHub repository to the S3 bucket.
When we run terraform apply, bootstrap_controller.sh automatically copies these files from the S3 bucket onto the Controller. So make sure you have created the bucket and placed the Ansible files into it before running terraform apply.
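Uploading the folder is a single recursive copy. A sketch, assuming a placeholder bucket name; the WordPress/ prefix must match the path that bootstrap_controller.sh downloads from:

```shell
aws s3 cp ./Ansible s3://your-bucket-name-ansible/WordPress --recursive
```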
Create a folder named Controller
A Controller is where all the magic happens. We are creating a jump server (I call it a controller).
After deploying the VPC infrastructure and placing the Ansible files into an S3 bucket, we create three servers (WordPress, MariaDB, and Controller).
We will then SSH to the Controller and run Ansible playbooks to configure MySQL on the MariaDB server and WordPress settings on the WordPress server.
Create the “controller.tf” file in the Controller folder
terraform {
  required_providers {
    aws = {
      source = "hashicorp/aws"
    }
  }
}

#------------------------- State terraform backend location---------------------
data "terraform_remote_state" "vpc" {
  backend = "s3"
  config = {
    bucket = "your-bucket-name-terraform-states"
    key    = "terraform.tfstate"
    region = "us-west-1"
  }
}

# --------------------- Determine region from backend data -------------------
provider "aws" {
  region = data.terraform_remote_state.vpc.outputs.aws_region
}

# #--------- Get Ubuntu 20.04 AMI image (SSM Parameter data) -------------------
# data "aws_ssm_parameter" "ubuntu-focal" {
#   name = "/aws/service/canonical/ubuntu/server/20.04/stable/current/amd64/hvm/ebs-gp2/ami-id"
# }

data "aws_ami" "amazon_linux" {
  most_recent = true
  owners      = ["amazon"]

  filter {
    name   = "name"
    values = ["amzn2-ami-hvm-*-x86_64-gp2"]
  }
}

# Creating controller node
resource "aws_instance" "controller" {
  #ami                   = data.aws_ssm_parameter.ubuntu-focal.value # from SSM Parameter
  ami                    = data.aws_ami.amazon_linux.id
  instance_type          = var.instance_type
  subnet_id              = data.terraform_remote_state.vpc.outputs.public_subnet
  vpc_security_group_ids = data.terraform_remote_state.vpc.outputs.Controller-sg
  iam_instance_profile   = aws_iam_instance_profile.assume_role_profile.name
  user_data              = file("bootstrap_controller.sh")
  private_ip             = "10.0.1.10"
  monitoring             = true
  key_name               = var.key

  tags = {
    Name  = "${var.environment}-Controller"
    Stage = var.environment
    Owner = var.your_name
  }
}

output "Controller" {
  value = [aws_instance.controller.public_ip]
}
Create the “variables.tf” file in the Controller folder
variable "aws_region" {
  type    = string
  default = "us-west-1"
}

variable "key" {
  type    = string
  default = "Mykey" # be sure to update with the name of your EC2 key pair for your region
}

variable "instance_type" {
  description = "Type of EC2 instance to use"
  type        = string
  default     = "t2.micro"
}

variable "environment" {
  description = "User selects environment"
  type        = string
  default     = "Test"
}

variable "your_name" {
  description = "Your Name?"
  type        = string
  default     = "Joe"
}
Create the file “bootstrap_controller.sh”
#!/bin/bash
sudo yum -y update
hostnamectl set-hostname Controller
sudo yum install -y unzip
sudo yum install -y awscli
sudo amazon-linux-extras list | grep ansible2
sudo amazon-linux-extras enable ansible2
sudo yum install -y ansible
aws s3 cp s3://your-bucket-name-ansible/WordPress /home/ec2-user/WordPress --recursive
Create the file “S3_policy.tf” in the Controller folder
This gives our Controller permission to copy files from our S3 bucket. Be sure to edit the ARNs to reflect the S3 bucket name for your deployment.
resource "aws_iam_policy" "copy-policy" {
  name        = "copy-ansible-files"
  description = "IAM policy to allow copy files from S3 bucket"

  policy = <<EOF
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "s3:PutObject",
        "s3:GetObject",
        "s3:ListBucket"
      ],
      "Resource": ["arn:aws:s3:::bucket-name-ansible", "arn:aws:s3:::bucket-name-ansible/*"]
    }
  ]
}
EOF
}

resource "aws_iam_role" "assume-role" {
  name        = "assume-role"
  description = "IAM policy that allows assume role"

  assume_role_policy = <<EOF
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Action": "sts:AssumeRole",
      "Principal": {"Service": "ec2.amazonaws.com"},
      "Effect": "Allow",
      "Sid": ""
    }
  ]
}
EOF
}

resource "aws_iam_role_policy_attachment" "assign-copy-policy" {
  role       = aws_iam_role.assume-role.name
  policy_arn = aws_iam_policy.copy-policy.arn
  depends_on = [aws_iam_policy.copy-policy]
}

resource "aws_iam_instance_profile" "assume_role_profile" {
  name = "assume_role_profile"
  role = aws_iam_role.assume-role.name
}
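As an aside, the heredoc JSON above works, but Terraform’s jsonencode() function is a common alternative because syntax errors surface at plan time. The same copy policy as a sketch (bucket name still a placeholder):

```hcl
resource "aws_iam_policy" "copy-policy" {
  name        = "copy-ansible-files"
  description = "IAM policy to allow copy files from S3 bucket"

  policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Effect = "Allow"
      Action = ["s3:PutObject", "s3:GetObject", "s3:ListBucket"]
      Resource = [
        "arn:aws:s3:::bucket-name-ansible",
        "arn:aws:s3:::bucket-name-ansible/*",
      ]
    }]
  })
}
```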
Provisioning
- Make an S3 bucket for Terraform Remote state.
- Be sure to use that bucket name in “vpc.tf,” “servers.tf,” and “controller.tf”
- Make an S3 bucket for the Ansible files.
- Be sure to change the S3 bucket name in the Resource ARNs of S3_policy.tf, shown above, to your S3 bucket name for the Ansible files.
- Be sure to change the variables in the VPC folder to values of your choice.
- Be sure to change the variables in the Servers folder to values of your choice.
- Change vars.yml to reflect your DB name, user name, etc.
- Be sure to uncompress the wordpress.zip file and edit wp-config.php to reflect the DB name and user name of your choice. Rename example.com.conf to your domain name and edit its contents to match. Once the edits are complete, compress the files back into wordpress.zip.
- Be sure to copy the Ansible files into the bucket you created for them
- In your terminal, go to the VPC folder and execute the following commands:
terraform init
terraform validate
terraform apply
- In your terminal, go to the Servers folder and execute the following commands:
terraform init
terraform validate
terraform apply
- In your terminal, go to the Controller folder and execute the following commands:
terraform init
terraform validate
terraform apply
Running the Ansible configuration on the Controller
First, we are going to set up our SSH credentials. This assumes you have already created an EC2 key pair, referenced it in the Terraform code above, and placed the key file in your ~/.ssh directory.
- Add EC2 key pair into SSH credentials by issuing the following command
- ssh-add ~/.ssh/your-key-name.pem
- Then connect to the Controller with the “-A” option of the SSH command to forward the authentication agent. “-A” lets us connect to the Controller and then jump to the servers inside the AWS public and private subnets, and it also lets the Ansible playbooks use the very same EC2 key pair to manage the MariaDB and WordPress servers.
- ssh -A ec2-user@1.2.3.4 (where 1.2.3.4 represents the public IP address of the Controller)
- Once connected to the Controller, change to the WordPress directory (this directory may not exist if you connect to the Controller too soon, as the bootstrap takes a couple of minutes to configure the Controller)
- cd WordPress
- First, we can configure the MariaDB server with Ansible
- ansible-playbook provision-db.yml
- For some reason, the handler at the end of the Ansible playbook doesn’t restart the database service, and WordPress gets a “cannot connect to database” error. If this does happen to you (most likely, I might add), connect to the DB server and restart the service (note that service names are case-sensitive; on Ubuntu the MariaDB unit is “mariadb”)
- ssh ubuntu@10.0.101.30
- sudo systemctl restart mariadb
- The final step is to run the Ansible playbook to configure the WordPress server
- ansible-playbook provision-wp.yml
If you want to actually test WordPress, you’ll need to create a DNS record for your domain. Go to Route 53, register a domain if needed, create an “A record,” and assign it the public IP address of the WordPress instance.
Or, if you do not want to use your own domain (I recommend the practice of using one, but hey, what do I know, hee hee), open your browser and enter the public IP address of the WordPress server.
Open a browser and type in your domain.
If you have followed the exercise correctly, you should see the WordPress setup screen.
This is not for production!
All public websites should have a web application firewall between the web server and its internet connection. This exercise doesn’t create one, so do not use this configuration for production.
All websites should have monitoring and a method to scrape log events to detect and alert for potential problems with the deployment.
This exercise uses resources compatible with the AWS Free Tier plan. It does not have sufficient compute sizing to support a production workload.
It is a good idea to remove all resources when you have completed this exercise, so as not to incur costs.





