Terraform “Reusable Modules”

Create reusable modules in Terraform to deploy resources in AWS. An example of two teams using the same code to deploy separate AWS resources.

In the previous examples, I’ve shown more and more complicated deployments of AWS infrastructure and resources. The method used in previous examples works great if I am the only person to use the code. Just me, myself, and I deploying architectural wonders into the cloud. Well, that is just not reality, now is it?

The reality is that we create technical solutions by working together as a team. After all, a team forms better solutions when its members collaborate and share their knowledge and expertise.

This exercise will show elements of continuous integration (CI) by creating reusable modules and continuous delivery (CD) by deploying them.

The code discussed in this example is posted in my GitHub repository

What is discussed in this Post

Caveats & assumptions

Caveat: The VPC reusable module is simple, creating private and public subnets across only two availability zones in any region. It doesn’t accommodate choosing more than two availability zones, nor does it offer choices like enabling or disabling IPv6, setting up a VPN gateway, etc.

Caveat: The Docker website is also very simple. It is a Docker container running a website developed for this demonstration. You can technically substitute your own Docker container or any generic website container.

Assumption: To keep costs down, we are using the latest NAT instance AMI published by AWS. NAT gateways are not covered by the free tier and incur costs for every hour they run, so I opt for NAT instances instead of NAT gateways to save a bit of change.

Assumption: You are interested in methods for creating modular code. This blog post discusses not just the how and the why of modular coding, but also the logic behind some of the requirements as I understand them.

Reusable modules

Simply put, a “module” is a folder. We put some Terraform code into that folder, and Terraform treats the contents of the folder as a module. The folders (Terraform modules) separate our code into logical groups.

Think of modules as putting the pieces together to make a complete solution.

In the chart above, we have an example of five different folders. Each folder represents a module, and each module contributes to a complete deployment solution.

We can have one developer publish the VPC and Security Groups modules, have another develop and publish the Web Template, and yet another create the Auto Scaling Group (ASG) and Application Load Balancer (ALB) modules. Finally, a deployment team pulls the different modules from a published location and deploys them to production.


We’ll start by understanding the use of reusable modules (DRY code, “Don’t Repeat Yourself”). As we progress in writing Infrastructure as Code, we need to share code between teams and environments like production, development, and Quality Assurance. This exercise will create a “reusable module” for a VPC and another “reusable module” that creates an EC2 instance as a web server.

We will then simulate a Development team using the reusable modules to deploy resources into a development environment. After deploying resources for the development team, we will simulate a Quality Assurance (QA) team using the same modules (with different variables) to deploy a QA environment. The Development team and the QA team will use the same modules (DRY code) to deploy different resources, in the same or different regions, using the same AWS account with different credentials, or even launching from different accounts.

Terraform’s mechanism for reusable code is the module block’s source argument.

The source argument in a module block tells Terraform where to find the source code for the desired child module. Terraform uses this during the module installation step of terraform init to download the source code to a directory on local disk so that it can be used by other Terraform commands.

HashiCorp Terraform

In this exercise, we will place our Terraform code in a shared location and, as is normal practice, refer to that shared location as the “source module.” We can create a “source module” in any folder. The folder can be on your local drive or in a source code management system (SCM) like GitHub, Artifactory, or Bitbucket. A “source module” can also be any network folder on your local area network (LAN) or wide area network (WAN), so long as you, the user, have permission to read and write to the shared folder.

I believe the best place for reusable code is a source code management system (SCM) like GitHub, BitBucket, GitLab, or Artifactory. At the time of this writing, my personal preference is to use GitHub.

We create a reference to a source module by putting a statement in Terraform like the following (which becomes the module configuration block):

module "<name> {
  source   = "<path to folder>"
  variable = "<values we need to pass to the module>"
}

Remember that the module’s “name” can be any name you desire when declaring the module. It does not have to be the same or similar to the source code for the source module to work.
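
To illustrate, here is a sketch of the same VPC source module declared twice under different names, once from a local path and once from a GitHub repository. The GitHub organization, repository, and tag below are placeholders (not the repository used in this exercise), and required variables are omitted for brevity:

module "network_from_local_path" {
  # Local folder as the module source
  source = "../modules/vpc"
}
module "network_from_github" {
  # GitHub repository as the module source; pin to a tag or commit in practice
  source = "github.com/<your-org>/<your-repo>//modules/vpc?ref=v1.0.0"
}

Both blocks point at the same VPC code; only the names and the source locations differ.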

Why are we using S3 remote Terraform state and DynamoDB?

Let’s use an example of a three-tier application that is under development. The first tier is a front-end Web service for our customers. Another tier is the application layer that performs ERP1 services, and the third tier will hold the database (back-end services).

We have a developer (Developer-A) responsible for developing and releasing changes to our front-end web service. Another developer (Developer-B) is responsible for developing the ERP1 application service. Both developers have access to make changes in the development environment. Both developers can launch, create and destroy resources in the development environment.

Both developers perform most of their work offline, using the AWS cloud development environment only on a limited basis. Developer A is ready to test his changes and runs terraform init and terraform apply to create the environment. The development environment is now running in AWS and operational.

On the very same day, Developer B makes a major change to the ERP application server: he wants to move the ERP server to a different subnet. Developer B modifies his version of the reusable module and executes the change with terraform init and terraform apply, moving the ERP server to a different subnet. Suddenly Developer A, who is working in the same environment, observes major errors on the front-end servers he had already deployed, because Developer B moved the application servers; Developer B’s change impacted Developer A’s development test.

Developer B went into our reusable module after Developer A had already used the same module to launch the AWS resources. Terraform happily made the changes, which caused Developer A to see unexpected failures. If we keep our Terraform remote state in an AWS S3 bucket and use DynamoDB to lock that state, Developer B would be prevented from executing changes to the AWS resources while Developer A holds the lock on the Terraform state. Developer B would then need to communicate and coordinate any necessary change with Developer A.

By putting a Lock on the S3 remote state, we can prevent team members from making a change that impacts AWS resources without coordination between members.

DynamoDB’s locking of Terraform state doesn’t prevent us from making changes to our resources; it simply prevents other team members from making unexpected changes while a deployment is in progress.


OK, let’s get started and set up the folders

Let’s create our folder structure before getting started. The first folder, named “Modules,” will hold the reusable modules, and the second folder, named “Teams,” will be used by our team members. The third folder, named “MGMT,” holds a few things to help us manage our Terraform state.

Reusable modules folder structure

You can place the “Modules” folder and the “Teams” folder anywhere. For example, you can put the “modules folder” and its content on a separate computer from the “Teams folder.”

For brevity, why don’t we keep it simple for now and place everything in a folder structure like the following:

reusable_modules_exercise
 └Modules
   └VPC
   └Docker_website
 └Teams
   └Quality_Assurance
   └Development
 └MGMT
   └S3_Bucket
     └create_S3_bucket
     └Dev_remote_state
     └QA_remote_state
     └Create_DynamoDB_Table
   └Bastion_Host

Creating the AWS S3 bucket, Terraform state file, and DynamoDB table

Before using an S3 bucket and a Terraform remote state file, we should create the bucket and the remote state file independently and, most importantly, create the DynamoDB table for locking the remote state before creating any AWS resources that utilize it.

We will create one AWS S3 bucket, two Terraform state files (one for our Development team and one for our QA team), and one DynamoDB table that keeps the lock data for our Terraform remote state.

Creating the S3 bucket

Change directory to the folder ~/reusable_modules_exercise/mgmt/S3_bucket/create_s3_bucket

S3_bucket.tf
provider "aws" {
  region = "us-west-1"
}
# ----- Create an S3 bucket for remote state ----------
resource "aws_s3_bucket" "terraform-state" {
  bucket = "<unique_name>-terraform-states"
  lifecycle { prevent_destroy = true }
  tags = {
    Name = "Remote State Bucket"
  }
}
resource "aws_s3_bucket_server_side_encryption_configuration" "Encryption" {
  bucket = aws_s3_bucket.terraform-state.id
  rule {
    apply_server_side_encryption_by_default {
      sse_algorithm = "AES256"
    }
  }
}
resource "aws_s3_bucket_versioning" "bucket_versioning" {
  bucket = aws_s3_bucket.terraform-state.id
  versioning_configuration {
    status = "Enabled"
  }
}

Reminder! Be sure to change the name of the bucket to a unique name of your choice.

After creating this file, perform terraform init, terraform validate, and terraform apply to create the S3 bucket.

A few things to note about our “S3_bucket.tf”. The line lifecycle { prevent_destroy = true } prevents someone from accidentally destroying the S3 bucket with Terraform.

resource "aws_s3_bucket_server_side_encryption_configuration": This block of code enables server-side encryption. You certainly want to read up on your encryption choices, either “Amazon S3-managed keys (SSE-S3)” or “AWS Key Management Service keys (SSE-KMS)”; I recommend reading the Terraform Registry and the Amazon docs. As you can see, by setting sse_algorithm to “AES256” I’m letting AWS create and manage the key for our bucket with Amazon S3-managed keys (SSE-S3).

resource "aws_s3_bucket_versioning" "bucket_versioning": This code block establishes whether to use versioning on the S3 bucket. Versioning allows reverting to a previous version of the Terraform state, so from a disaster recovery standpoint it makes sense to enable it. When teams use reusable modules without a DynamoDB lock, you most definitely want to version your code with a source code management system like GitHub. There is nothing wrong with enabling bucket versioning by default. You might never need to revert to a previous version of the Terraform remote state UNTIL you need it, and boy, you’ll wish you had versioning in place when that happens, especially in a production deployment, maybe not so much in a development environment.

resource "aws_s3_bucket_public_access_block": You might see some examples applying this resource via Terraform. Personally, I recommend skipping this optional block of code for an S3 bucket. By default, public access is denied for all S3 buckets unless you specifically allow it (for example, when turning an S3 bucket into a static website). AWS denying public access by default is perfect for a Terraform remote state S3 bucket, so I recommend leaving the block out.

Creating the Remote state files

Remote state for the development team

Change directory to the folder ~/terraform/reusable_modules_exercise/mgmt/s3_bucket/Dev_remote_state

dev_remote_state.tf
provider "aws" {
  region = "us-west-1"
}
# ------------ configure remote state  -------------------------
terraform {
  backend "s3" {
    bucket = "<unique-name>-terraform-states"
    key    = "development-terraform.tfstate"
    region = "us-west-1"
  }
}

After creating this file, perform terraform init, terraform validate, and terraform apply to create the remote state file for our development team.

Don’t worry if Terraform says that nothing happened. If this is the first time executing this code, it does, in fact, create the “tfstate” file.

This code should be in its own folder, separate from the code that creates the S3 bucket, because the bucket must already exist before the “tfstate” file can be placed in it.

 

Remote state for the QA team

Change directory to the folder ~/terraform/reusable_modules_exercise/mgmt/s3_bucket/QA_remote_state

test_remote_state.tf
provider "aws" {
  region = "us-west-1"
}
# ------------ configure remote state  -------------------------
terraform {
  backend "s3" {
    bucket = "<unique-name>-terraform-states"
    key    = "qa-terraform.tfstate"
    region = "us-west-1"
  }
}

After creating this file, perform terraform init, terraform validate, and terraform apply to create the remote state file for our QA team.

Creating the DynamoDB table

Change directory to the folder ~/terraform/reusable_modules_exercise/mgmt/s3_bucket/Create_DynamoDB_Table

Create_DynamoDB_table.tf
provider "aws" {
  region = "us-west-1"
}
# ----------  Create DynamoDB for Locking S3 state -------------
resource "aws_dynamodb_table" "test" { 
  name = "test_db_locks" 
  billing_mode = "PAY_PER_REQUEST" 
  hash_key = "LockID" 
  attribute {
     name = "LockID" 
     type = "S" 
     } 
} 

The secret to creating this DynamoDB table is the “hash_key.” When Terraform is pointed at a DynamoDB table, it writes a lock entry for each Terraform remote state into the table, using the hash_key “LockID” as the primary key. Yes, that’s right, we need only ONE DynamoDB table to handle multiple Terraform states. We will use the DynamoDB table twice in this exercise: the Development team has a unique “tfstate” file, and the QA team has its own unique “tfstate” file. Terraform simply creates a new unique “LockID” entry for each Terraform state file.

Once again, I recommend separating this code into its own folder, primarily because we need only one table for all our teams. Development, Test, QA, and Production deployments can all use the same DynamoDB table because each will have its own “tfstate” file and a unique LockID in the table.
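
One more note on locking: the backend blocks we created above store state in S3 but do not reference the DynamoDB table, so as written the lock is not enforced. To have Terraform acquire the lock during plan and apply, the backend block needs a dynamodb_table argument. A minimal sketch of the development team’s backend with locking enabled (the bucket name is still your placeholder):

terraform {
  backend "s3" {
    bucket         = "<unique-name>-terraform-states"
    key            = "development-terraform.tfstate"
    region         = "us-west-1"
    dynamodb_table = "test_db_locks" # enables state locking for this state file
  }
}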

Code for Reusable Modules

Reusable modules are “child modules” because when we execute terraform init, the reusable modules are downloaded into the calling module’s working directory (the calling module becomes the “parent” module). There is a relationship between the parent module and the child module. The parent module uses the following:

module "Name" {
  source = "path to child module"
  variable = "value"
}

When we run terraform init, Terraform knows where to get the child module because of the source argument in the module block (shown above). Terraform downloads the reusable module from the source into the current working directory and configures it with the values stipulated by the variable assignments shown above.

The Terraform workflow is different from previous exercises. Here are a few pointers:

  • Remember that the module doing all the work is the reusable module, the “source (child) module,” which is downloaded into the current directory.
  • The “Parent” module calls the “child module” (reusable module) and passes variables to the child module.
  • We are using Terraform remote state, BUT there is a really big caveat as to how we use it in this scenario:
    • In previous exercises, we used “inputs” and “outputs” with Terraform remote state. Here we still use outputs, but we use the remote state primarily to lock our configuration rather than to pass inputs and outputs to and from the remote state file.

Code that creates our reusable modules

So now that we have created our S3 bucket, the Terraform state file, and a DynamoDB table, we are ready to set up some code as a reusable module.

Change directory to ~/reusable_modules_exercise/modules

Now let’s create our first reusable module, the VPC. We will start with a terraform_remote_state S3 bucket configuration. It is important to use variables for the bucket name and even more important to use a variable for the key name. Why, you might ask? Well, that’s a great question; let me explain. ☺︎

It is recommended that each team use a unique Terraform state file. Terraform writes the state data to a remote data store, which can then be shared between all members of a team. We want to separate our team environments because they usually have different requirements and processes. Also, each team usually requires its own control of release and configuration management. Therefore, each team will use a unique Terraform state file.

Me

We are going to use a lot of variables. Since we are using the same reusable code for different teams, we need a way to change the configuration of AWS resources per each team’s requirements. Hence, we use variables so each team can apply a variance to an AWS resource.

Examples of variance

Size: A development team might use a “t2.micro” instance type for an AWS EC2 resource, but our Production team needs a larger “t3.large” instance type.

Stage: We need to differentiate between development, QA, and production, so we’ll use a variable to populate a tag called “Stage” that identifies the team owning each new resource. We will take advantage of this in other modules by filtering on the tag to identify which team manages which resources.

Region: Our teams might be in different regions, so we’ll enable deployments into different regions using the same code by setting a “region” variable.

Variables are key! Defining what needs to be configured for the different teams is a very important element when planning the use of reusable code.

Using Variables in reusable modules

Let’s start with an understanding of the usage of variables.

  • Reusable modules may have variables declared and used only in the module.
  • Reusable modules will have variables declared in the parent module and passed to the reusable module. This is exactly how we create a variance in deploying a reusable module.
    • For example, a development team uses region (us-west-1), and the QA team uses region (us-east-1). We will create our variable in the reusable module, the parent module, and the parent module’s configuration block to accomplish the variance.
      • reusable module declares – variable “region” {}
      • parent module also declares – variable “region” {}
      • parent module assigns a value to the variable in the module’s configuration block. See below:
module "foo" {
  source = "../foobar"
  region = "us-west-1"
}

There is one more point about variables. When we want to prevent sensitive information from being published on GitHub, we move the assignment of a value into a private file like “terraform.tfvars”.

In the module configuration block below, we would normally assign values to variables directly, in this case giving “bucket” the value “my-bucket-terraform-states.” However, I don’t want the general public to know the name of my S3 bucket. Instead, I reference a variable in the configuration block and put the value in a file named “terraform.tfvars”. We also set up a special file called “.gitignore” to instruct Git to ignore “terraform.tfvars” when pushing to GitHub. Hence, the bucket name will not be published on GitHub and becomes a privately assigned value.

module "foo" {
  source        = "../foobar"

  region        = "us-west-1"
  bucket        = var.bucket-name
  instance_type = var.instance_type
  state-key     = "development-terraform.tfstate" 
}

For example, in the line of code (instance_type = var.instance_type) in the example above, we use a variable where we would normally assign a value.

With any module, even a simple variable like “instance_type” needs to be declared, assigned to a resource, and given a value.

But when using reusable modules, the variable declaration, the assignment to a resource, and the value given to the variable will be spread across at least three, possibly four, different files.

The first rule is to declare the variable in both the parent and child modules. We assign a value to the variable in a configuration block in the parent module.

Type                             Module                                        File
Declare variable                 Parent module                                 teams/…/variables.tf
Declare variable                 Reusable module                               modules/vpc/variables.tf
Assign variable to a resource    Reusable module                               modules/vpc/vpc.tf
Assign a value to the variable   Parent module (module configuration block)    teams/…/vpc.tf

To summarize:

The parent module and the child module must both declare a variable that is going to be configured in the parent module and assigned to a resource in the child (reusable) module:
variable "instance_type" {}

The child (reusable) module will assign a variable to a resource:
instance_type = var.instance_type

Normally, the parent module then assigns a value to the variable in the parent module configuration block:

module "example"
  source = "../foobar"
  instance_type = "t2.micro"
} 

But when it’s sensitive information, we skip the above step and assign the value in Terraform’s variable definitions file, “terraform.tfvars”.

Let’s pretend that “instance_type” is sensitive information and we do not want its value published to GitHub. Instead of assigning a value in the module’s configuration block, as shown above, we pass the buck to “terraform.tfvars”: we reference the variable in the configuration block and assign the value in “terraform.tfvars”, as shown in the example below:

module "example"
  source = "../foobar"
  instance_type = var.instance_type
} 

Then assign the value in the terraform.tfvars file:
instance_type = "t2.micro"

So let’s start with the first reusable module

The first reusable module will be an AWS Virtual Private Cloud (VPC) module.

First, we must decide what is configurable when creating the VPC. Different teams will want some control over the VPC configuration. So what would they want to configure (variance):

  • We want the S3 remote state bucket, State key, bucket region, and DynamoDB assignment to be configurable, as we want each team to manage their own VPC and the VPC Remote State
  • We need a tag to identify which team the VPC belongs to and a tag as to who takes ownership of the VPC
  • We want the region to be configurable by our teams
  • We want the NAT instance to have configurable sizing as per Team requirements
  • We might want the SSH inbound CIDR block to change as our teams might be in different regions and networks. Therefore, we need the SSH inbound CIDR block (I call it SSH_location) to be configurable by our teams
  • We probably want a different EC2 Key pair per team, especially if they are in different regions. I’d go so far as to say that production should be managed from a different account, using different EC2 key pairs and unique IAM policies. So we need the EC2 key pair configurable with reusable code.

As per the above discussion, we must declare variables in both the parent and child modules that allow different teams to apply their configuration (variance) to the reusable modules.

We will then assign a value to each variable in the parent module.

Remember: All folders are considered Modules in Terraform

So first, we create a “variables.tf” file in ALL reusable (child) modules:
~/terraform/reusable_modules_exercise/modules/vpc/variables.tf
~/terraform/reusable_modules_exercise/modules/Docker_website/variables.tf
and we’ll create the same variables file in ALL parent modules. For the development team:
~/terraform/reusable_modules_exercise/teams/development/variables.tf
and the same file for the QA team:
~/terraform/reusable_modules_exercise/teams/quality_assurance/variables.tf

Variables that are declared and configured only in the reusable module

Note: In a future version, I might try my hand at doing what some of the more famous community VPC modules do, creating a subnet per AZ and/or letting you stipulate how many subnets (two vs. four, for example). For now, the subnet layout is hard-coded into the VPC module.

Note 2: We want to use our own VPC code simply because we want to use NAT instances instead of NAT gateways, which isn’t an option in the community modules I’ve looked at.

VPC (reusable module)

Change directory to ~/terraform/reusable_modules_exercise/modules/vpc, and include the following files: vpc.tf, variables.tf, security_groups.tf, and outputs.tf (documented below and included in my GitHub repository).

variables.tf (in the reusable module)
variable "bucket" {
  description = "Name of the S3 bucket that will be holding Terraform Remote State"
  type = string
}
variable "state-key" {
  description = "Name of the file for the terraform state key"
  type = string
}
variable "dynamodb_table" {
  description = "Name to be assigned to the DynamoDB table"
  type = string
}
variable "region" {
  description = "Region where VPC will be located"
  type = string
}
variable "bucket-region" {
  description = "Region where S3 bucket is placed"
  type = string
}
variable "ec2-key" {
  description = "Regional EC2 key used by the team"
  type = string
}
variable "instance_type" {
  description = "EC2 instance type"
  type = string
}
variable "ssh_location" {
  description = "CIDR block allowed SSH access into resource"
  type = string
}
variable "environment" {
  description = "Identify the Team's Environment i.e. QA or Development"
  type = string
}
variable "owner_name" {
  description = "Name to be used on all the resources as deployment owner"
  type        = string
}
variable "enable_ipv6" {
  description = "Requests an Amazon-provided IPv6 CIDR block with a /56 prefix length for the VPC. You cannot specify the range of IP addresses, or the size of the CIDR block."
  type        = bool
  default     = false
}
variable "enable_dns_hostnames" {
  description = "Should be true to enable DNS hostnames in the VPC"
  type        = bool
  default     = true
}
variable "enable_dns_support" {
  description = "Should be true to enable DNS support in the VPC"
  type        = bool
  default     = true
}
variable "map_public_ip_on_launch" {
  description = "Whether to map the public IP on launch. "
  type = bool
  default     = true
}
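
Note that vpc.tf and Security_Groups.tf below also reference a set of CIDR variables (var.vpc_cidr, var.public_cidr, var.public_cidr2, var.private_cidr, and var.private_cidr2). These are variables declared and configured only inside the reusable module; the values I use are in the GitHub repository, but as a minimal sketch (the CIDR ranges below are assumptions, not necessarily the ones in the repository), the declarations look like this:

variable "vpc_cidr" {
  description = "CIDR block for the VPC"
  type        = string
  default     = "10.0.0.0/16"
}
variable "public_cidr" {
  description = "CIDR block for public subnet 1"
  type        = string
  default     = "10.0.1.0/24"
}
variable "public_cidr2" {
  description = "CIDR block for public subnet 2"
  type        = string
  default     = "10.0.2.0/24"
}
variable "private_cidr" {
  description = "CIDR block for private subnet 1"
  type        = string
  default     = "10.0.3.0/24"
}
variable "private_cidr2" {
  description = "CIDR block for private subnet 2"
  type        = string
  default     = "10.0.4.0/24"
}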
Security_Groups.tf

The following code establishes security groups for our (VPC) reusable module.

The security group for the NAT instances allows HTTP and HTTPS only from the private subnets (thus allowing any instances in the private subnets to reach out to the internet for updates, patches, and new software).

The security group for the Docker server allows HTTP and HTTPS from my public IP address (the ssh_location variable) and all traffic outbound to the internet. Allowing all traffic outbound is typical of a public subnet.

We are placing our Docker server in the public subnet, which is OK for this exercise. Technically, we don’t need the NAT instances or the private subnets because we only place one EC2 instance in one public subnet. Just for grins, I kept the private subnets.

# -------------- Security Group for NAT instances --------------
resource "aws_security_group" "nat-sg" {
  name        = "nat-sg"
  description = "Allow traffic to pass from the private subnet to the internet"
  vpc_id      = aws_vpc.my-vpc.id

  ingress {
    from_port   = 80
    to_port     = 80
    protocol    = "tcp"
    cidr_blocks = ["${var.private_cidr}", "${var.private_cidr2}"]
  }
  ingress {
    from_port   = 443
    to_port     = 443
    protocol    = "tcp"
    cidr_blocks = ["${var.private_cidr}", "${var.private_cidr2}"]
  }
  ingress {
    from_port   = -1
    to_port     = -1
    protocol    = "icmp"
    cidr_blocks = ["${var.vpc_cidr}"]
  }
  egress {
    from_port   = 80
    to_port     = 80
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }
  egress {
    from_port   = 443
    to_port     = 443
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }
  egress {
    from_port   = -1
    to_port     = -1
    protocol    = "icmp"
    cidr_blocks = ["${var.vpc_cidr}"]
  }
  tags = {
    Name  = "NAT-Sg"
    Stage = "${var.environment}"
    Owner = "${var.owner_name}"
  }
}
# ----- Security Group for Web Server in Public Network --------
resource "aws_security_group" "web-sg" {
  name        = "Web-SG"
  description = "allow HTTP and HTTPS"
  vpc_id      = aws_vpc.my-vpc.id
  ingress {
    from_port   = 80
    to_port     = 80
    protocol    = "tcp"
    cidr_blocks = ["${var.ssh_location}"]
  }
  ingress {
    from_port   = 443
    to_port     = 443
    protocol    = "tcp"
    cidr_blocks = ["${var.ssh_location}"]
  }
  egress {
    protocol    = "-1"
    from_port   = 0
    to_port     = 0
    cidr_blocks = ["0.0.0.0/0"]
  }
  tags = {
    Name  = "Web-SG"
    Stage = "${var.environment}"
    Owner = "${var.owner_name}"
  }
}
vpc.tf (reusable module)
#--Get Terraform Remote State from Parent Module -------
data "terraform_remote_state" "Terraform-Remote-State" {
  backend = "s3"

  config = {
    bucket         = var.bucket
    key            = var.state-key
    region         = var.bucket-region
    dynamodb_table = var.dynamodb_table

  }
}
# ---  Get an AMI to use for NAT instance -------------
data "aws_ami" "amazon_nat" {
  most_recent = true
  owners      = ["amazon"]

  filter {
    name   = "name"
    values = ["amzn-ami-vpc-nat*"]
  }
}
# ----------  Get current region data -----------------
data "aws_region" "current" {}
# ----------  Get availability zones ------------------
data "aws_availability_zones" "available" {
  state = "available"
}
# ------------------ Create the VPC -----------------------
resource "aws_vpc" "my-vpc" {
  cidr_block           = var.vpc_cidr
  enable_dns_support   = var.enable_dns_support
  enable_dns_hostnames = var.enable_dns_hostnames
  tags = {
    Name  = "My-VPC"
    Stage = "${var.environment}"
    Owner = "${var.owner_name}"
  }
}
# ----------------- Internet Gateway -----------------------
resource "aws_internet_gateway" "IGW" {
  vpc_id = aws_vpc.my-vpc.id

  tags = {
    Name  = "${var.environment}-IGW"
    Stage = "${var.environment}"
    Owner = "${var.owner_name}"
  }
}
# ------------------ Setup Route table to IGW  -----------------
resource "aws_route_table" "public-route" {
  vpc_id = aws_vpc.my-vpc.id

  route {
    cidr_block = "0.0.0.0/0"
    gateway_id = aws_internet_gateway.IGW.id
  }
  tags = {
    Name  = "${var.environment}-Public_route"
    Stage = "${var.environment}"
    Owner = "${var.owner_name}"
  }
}

# ********************  Public Subnet **********************
# --------------------- Public Subnet #1 -------------------
resource "aws_subnet" "public-1" {
  vpc_id                  = aws_vpc.my-vpc.id
  map_public_ip_on_launch = var.map_public_ip_on_launch
  availability_zone       = data.aws_availability_zones.available.names[0]
  cidr_block              = var.public_cidr
  tags = {
    Name  = "public_subnet-1"
    Stage = "${var.environment}"
    Owner = "${var.owner_name}"
  }
}
# --------------------- Public Subnet #2 ---------------------
resource "aws_subnet" "public-2" {
  vpc_id                  = aws_vpc.my-vpc.id
  map_public_ip_on_launch = var.map_public_ip_on_launch
  availability_zone       = data.aws_availability_zones.available.names[1]
  cidr_block              = var.public_cidr2
  tags = {
    Name  = "public_subnet-2"
    Stage = "${var.environment}"
    Owner = "${var.owner_name}"
  }
}
# ----------- Associate route to IGW for public subnet #1 -------
resource "aws_route_table_association" "public-1-assoc" {
  subnet_id      = aws_subnet.public-1.id
  route_table_id = aws_route_table.public-route.id
}
# -------- Associate route to IGW for public subnet #2 -------
resource "aws_route_table_association" "public-2-assoc" {
  subnet_id      = aws_subnet.public-2.id
  route_table_id = aws_route_table.public-route.id
}
# **** Establish NAT Instances and Routes to NAT ***********
# --------------- Setup NAT Instance #1 --------------------
resource "aws_instance" "nat" {
  ami                         = data.aws_ami.amazon_nat.id
  instance_type               = var.instance_type
  subnet_id                   = aws_subnet.public-1.id
  vpc_security_group_ids      = ["${aws_security_group.nat-sg.id}"]
  associate_public_ip_address = true
  source_dest_check           = false
  monitoring                  = true
  key_name                    = var.ec2-key
  tags = {
    Name  = "${var.environment}-NAT1"
    Stage = "${var.environment}"
    Owner = "${var.owner_name}"
  }
}
# --------------- Setup NAT Instance #2 ------------------
resource "aws_instance" "nat2" {
  ami                         = data.aws_ami.amazon_nat.id
  instance_type               = var.instance_type
  subnet_id                   = aws_subnet.public-2.id
  vpc_security_group_ids      = ["${aws_security_group.nat-sg.id}"]
  associate_public_ip_address = true
  source_dest_check           = false
  monitoring                  = true
  key_name                    = var.ec2-key
  tags = {
    Name  = "${var.environment}-NAT2"
    Stage = "${var.environment}"
    Owner = "${var.owner_name}"
  }
}
# ------------------ Setup Route to NAT  -----------------
resource "aws_route_table" "nat-route" {
  vpc_id = aws_vpc.my-vpc.id

  route {
    cidr_block = "0.0.0.0/0"
    # instance_id = aws_instance.nat.id
    network_interface_id = aws_instance.nat.primary_network_interface_id
  }
  tags = {
    Name  = "${var.environment}-route_to_nat1"
    Stage = "${var.environment}"
    Owner = "${var.owner_name}"
  }
}
# ------------------ Setup Route to NAT2  -----------------
resource "aws_route_table" "nat-route-2" {
  vpc_id = aws_vpc.my-vpc.id

  route {
    cidr_block = "0.0.0.0/0"
    # instance_id = aws_instance.nat2.id
    network_interface_id = aws_instance.nat2.primary_network_interface_id
  }
  tags = {
    Name  = "${var.environment}-route_to_nat2"
    Stage = "${var.environment}"
    Owner = "${var.owner_name}"
  }
}
# ************* Create Private Subnets **********************
# --------------------- Private Subnet #1 -------------------
resource "aws_subnet" "private-1" {
  vpc_id                  = aws_vpc.my-vpc.id
  map_public_ip_on_launch = false
  availability_zone       = data.aws_availability_zones.available.names[0]
  cidr_block              = var.private_cidr
  tags = {
    Name  = "private_subnet-1"
    Stage = "${var.environment}"
    Owner = "${var.owner_name}"
  }
}
# -------- Associate private subnet 1 to NAT 1 route -------
resource "aws_route_table_association" "private-route-association" {
  subnet_id      = aws_subnet.private-1.id
  route_table_id = aws_route_table.nat-route.id
}
# --------------------- Private Subnet #2 ---------------------
resource "aws_subnet" "private-2" {
  vpc_id                  = aws_vpc.my-vpc.id
  map_public_ip_on_launch = false
  availability_zone       = data.aws_availability_zones.available.names[1]
  cidr_block              = var.private_cidr2
  tags = {
    Name  = "private_subnet-2"
    Stage = "${var.environment}"
    Owner = "${var.owner_name}"
  }
}
# -------- Associate private subnet 2 to NAT 2 route -------
resource "aws_route_table_association" "private-route-association-2" {
  subnet_id      = aws_subnet.private-2.id
  route_table_id = aws_route_table.nat-route-2.id
}
Outputs.tf
output "region" {
  description = "AWS region"
  value       = data.aws_region.current.name
}
output "vpc_id" {
  description = "VPC ID"
  value       = aws_vpc.my-vpc.id
}
output "public_subnet_1" {
  description = "Public Subnet 1"
  value       = aws_subnet.public-1.id
}
output "public_subnet_2" {
  description = "Public Subnet 2"
  value       = aws_subnet.public-2.id
}
output "private_subnet_1" {
  description = "Private Subnet 1"
  value       = aws_subnet.private-1.id
}
output "private_subnet_2" {
  description = "Private Subnet 2"
  value       = aws_subnet.private-2.id
}
output "NAT_sg_id" {
  description = "Security group ID for nat-sg"
  value      = aws_security_group.nat-sg.id
}
output "web-sg_id" {
  description = "Security group ID for web-sg"
  value       = [aws_security_group.web-sg.id]
}

Docker_website (reusable module)

Our teams will use this module to deploy an AWS EC2 instance with scripts to install Docker and launch one Docker container that I created and published publicly in Docker Hub.

Several features to understand about this reusable module:

  • There is a dependency that the team’s VPC is already deployed
  • The module first communicates with AWS API to get data about the team’s VPC
    • For instance, data “aws_vpcs” “vpc” gets data for all VPCs in the region
    • Our data query to the API includes a filter so that it returns only the VPC whose Stage tag matches the environment value set by the parent module. For instance, if the parent module sets var.environment = "development", then our query returns only the ID of the VPC created by our development team.
  • You will notice that we have similar queries to find the team’s public subnet and the team’s security group for a web server.

Change directory to ~/terraform/reusable_modules_exercise/modules/Docker_website and create the following files: docker.tf, variables.tf, bootstrap_docker_web.sh, outputs.tf

docker.tf
#------------------------- State terraform backend location-----
data "terraform_remote_state" "Terraform-State" {
  backend = "s3"
  config = {
    bucket         = var.bucket
    key            = var.state-key
    region         = var.bucket-region
    dynamodb_table = var.dynamodb_table
  }
}
# ----------------------- Get existing VPC ---------------------
data "aws_vpcs" "vpc" {
  tags = {
    Stage = var.environment
    Name  = "My-VPC"
  }
}
# ----------------------- Get region data ----------------------
data "aws_region" "current" {}
# ----------------------- Get existing Public Subnet -----------
data "aws_subnet" "public_subnets" {
  tags = {
    Stage = var.environment
    Name  = "public_subnet-1"
  }
}
# ---- Get existing Security Group for Web server --------------
data "aws_security_group" "web-sg" {

  tags = {
    Stage = var.environment
    Name  = "Web-SG"
  }
}
#--------- Get most recent Amazon Linux2 image -----------------
data "aws_ami" "amazon_linux" {
  most_recent = true
  owners      = ["amazon"]

  filter {
    name   = "name"
    values = ["amzn2-ami-hvm-*-x86_64-gp2"]
  }
}
# -------------------- Creating Web Server ---------------------
resource "aws_instance" "web-server" {
  ami                    = data.aws_ami.amazon_linux.id
  instance_type          = var.instance_type
  subnet_id              = data.aws_subnet.public_subnets.id
  vpc_security_group_ids = [data.aws_security_group.web-sg.id]
  #subnet_id              = data.terraform_remote_state.dev_vpc.outputs.public_subnet_1
  #vpc_security_group_ids = [data.terraform_remote_state.dev_vpc.outputs.web-sg_id]

  user_data = file("${path.module}/bootstrap_docker_web.sh")
  #user_data  = file("bootstrap_docker_web.sh")
  monitoring = true
  key_name   = var.ec2-key

  tags = {
    Name  = "Web-Server"
    Stage = "${var.environment}"
    Owner = "${var.owner_name}"
  }
}
variables.tf
variable "bucket" {}
variable "state-key" {}
variable "dynamodb_table" {}
variable "region" {}
variable "bucket-region" {}
variable "ec2-key" {}
variable "instance_type" {}
variable "ssh_location" {}
variable "environment" {}
variable "owner_name" {}
bootstrap_docker_web.sh
#!/bin/bash
sudo yum install -y curl
sudo yum -y update
sudo amazon-linux-extras install -y docker
sudo usermod -a -G docker ec2-user
sudo systemctl start docker
sudo docker run -d --name MyWebsite -p 80:80  surfingjoe/mywebsite
sudo hostnamectl set-hostname Docker-server
outputs.tf
output "Web_IP" {
  value = [aws_instance.web-server.public_ip]
}

Creating code for the parent modules

Now comes the fun part. This code might appear similar to some community modules developed and published by different companies. Many community modules are complex because they try to solve every permutation someone might require of the module. For instance, many community VPC modules try to accommodate someone who may or may not require a VPN or a Direct Connect link to their VPC. Most published community modules also let you choose how many availability zones to deploy subnets into.

The VPC module in this example, both the child module and the parent module, has simple requirements because my goal is only to demonstrate how to create a module. Simplicity is the easiest way to reach a broader audience, right?

I already have a more complex demonstration planned for my next blog post: a method for different teams (development, QA, etc.) to deploy an auto-scaled and load-balanced WordPress website using EFS for persistent storage, built from these reusable modules. Soon to be published.

So first, let’s look at the variables that configure the reusable modules’ AWS resources for each team’s specific requirements.

  • The development team requires its own remote state file (and possibly its own S3 bucket), so it will declare the necessary variables and assign values unique to the development team
  • The same applies to an EC2-key pair, EC2 instance type, in-bound SSH CIDR block (SSH-Location), etc.
  • Some of the variables will be assigned a value in the parent module’s configuration block
  • Some sensitive variables will be assigned a value in our “terraform.tfvars” file.

Let’s start with the Development team

Change directory to ~/terraform/reusable_modules_exercise/teams/development and add the following files.

variables.tf (development team)

The variables for our Development team

variable "bucket" {
  description = "Name of the S3 bucket that will be holding Terraform Remote State"
  type = string
}
variable "state-key" {
  description = "Name of the file for the terraform state key"
  type = string
}
variable "dynamodb_table" {
  description = "Name to be assigned to the DynamoDB table"
  type = string
}
variable "region" {
  description = "Region where VPC will be located"
  type = string
}
variable "bucket-region" {
  description = "Region where S3 bucket is placed"
  type = string
}
variable "ec2-key" {
  description = "Regional EC2 key used by the team"
  type = string
}
variable "instance_type" {
  description = "EC2 instance type"
  type = string
}
variable "ssh_location" {
  description = "CIDR block allowed SSH access into resource"
  type = string
}
variable "environment" {
  description = "Identify the Team's Environment i.e. QA or Development"
  type = string
}
variable "owner_name" {
  description = "Name to be used on all the resources as deployment owner"
  type        = string
}
variable "enable_ipv6" {
  description = "Requests an Amazon-provided IPv6 CIDR block with a /56 prefix length for the VPC. You cannot specify the range of IP addresses, or the size of the CIDR block."
  type        = bool
  default     = false
}
variable "enable_dns_hostnames" {
  description = "Should be true to enable DNS hostnames in the VPC"
  type        = bool
  default     = true
}
variable "enable_dns_support" {
  description = "Should be true to enable DNS support in the VPC"
  type        = bool
  default     = true
}
variable "map_public_ip_on_launch" {
  description = "Whether to map the public IP on launch. "
  type = bool
  default     = true
}

terraform.tfvars

Our Development team’s sensitive values will be declared in the file “terraform.tfvars”. Teams can utilize the same S3 bucket for Terraform remote state; it is the “state-key” that must be unique for each team.

region               = "us-west-1"
environment          = "development"
instance_type        = "t2.micro"
ec2-key              = "<EC2 Key pair for development team>"
ssh_location         = "<Your public IP address>"
owner_name           = "<Your name, your team's name, or email>"
bucket               = "<The name of your S3 bucket>"
state-key            = "development-terraform.tfstate"
dynamodb_table       = "test_db_locks"
bucket-region        = "<the region for the S3 bucket>"
enable_dns_hostnames = true
enable_dns_support   = true
enable_ipv6          = false

main.tf (parent module for development)

We are going to declare the VPC module and the Docker_website module. In this file (parent module), we will declare the source (path) of the child modules and the configuration to be applied to the child modules (by giving values to variables).

Note: The module configuration block named “Docker_web” below has the line depends_on = [module.dev_vpc]. When putting together different modules, first the VPC and then our Docker website, Terraform cannot easily determine the dependency on its own. Without “depends_on,” Terraform will try to deploy both modules simultaneously, and without the VPC already in place, our Docker website will fail. This is easily fixed by the “depends_on” statement, which tells Terraform the VPC module must be completed before executing the “Docker_web” module.

terraform {
  required_version = ">= 0.13.0"
  required_providers {
    aws = {
      source = "hashicorp/aws"
    }
  }
}
provider "aws" {
  region = var.region
}
module "dev_vpc" {
  source = "../../modules/vpc"

  bucket               = var.bucket
  state-key            = var.state-key
  dynamodb_table       = var.dynamodb_table
  bucket-region        = var.bucket-region
  region               = var.region
  environment          = var.environment
  owner_name           = var.owner_name
  ec2-key              = var.ec2-key
  instance_type        = var.instance_type
  ssh_location         = var.ssh_location
  enable_ipv6          = false
  enable_dns_support   = true
  enable_dns_hostnames = true
  map_public_ip_on_launch = true
}

module "Docker_web" {
  source = "../../modules/Docker_website"

  depends_on     = [module.dev_vpc]
  bucket         = var.bucket
  state-key      = var.state-key
  dynamodb_table = var.dynamodb_table
  bucket-region  = var.bucket-region
  region         = var.region
  environment    = var.environment
  owner_name     = var.owner_name
  ec2-key        = var.ec2-key
  instance_type  = var.instance_type
  ssh_location   = var.ssh_location
}

Parent Module outputs

Yes, we have already declared outputs in the reusable module. But with reusable modules, if we want to see the outputs, we have to declare them in our parent module as well. Just like variables, outputs have to be declared in both the child and parent modules.

outputs.tf
output "region" {
  description = "AWS region"
  value       = module.dev_vpc.region
}
output "vpc_id" {
  description = "VPC ID"
  value       = module.dev_vpc.vpc_id
}
output "public_subnet_1" {
  description = "Public Subnet 1"
  value       = module.dev_vpc.public_subnet_1
}
output "public_subnet_2" {
  description = "Public Subnet 2"
  value       = module.dev_vpc.public_subnet_2
}
output "private_subnet_1" {
  description = "Private Subnet 1"
  value       = module.dev_vpc.private_subnet_1
}
output "private_subnet_2" {
  description = "Private Subnet 2"
  value       = module.dev_vpc.private_subnet_2
}
output "NAT_sg_id" {
  description = "Security group ID for mat-sg"
  value       = module.dev_vpc.NAT_sg_id
}
output "Web-IP" {
  description = "Security group ID for RDS-sg"
  value       = module.Docker_web.Web_IP
}

Create Quality Assurance Parent Module

Change directory to ~/terraform/reusable_modules_exercise/teams/quality_assurance and add the following files: main.tf, variables.tf, terraform.tfvars, output.tf.

Variables for the quality assurance team

You might notice the “variables.tf” file for the QA team is exactly the same as the development team’s “variables.tf”. That is because both teams are calling the same reusable modules. The magic happens when we assign values to the variables.

variables.tf
variable "bucket" {
  description = "Name of the S3 bucket that will be holding Terraform Remote State"
  type = string
}
variable "state-key" {
  description = "Name of the file for the terraform state key"
  type = string
}
variable "dynamodb_table" {
  description = "Name to be assigned to the DynamoDB table"
  type = string
}
variable "region" {
  description = "Region where VPC will be located"
  type = string
}
variable "bucket-region" {
  description = "Region where S3 bucket is placed"
  type = string
}
variable "ec2-key" {
  description = "Regional EC2 key used by the team"
  type = string
}
variable "instance_type" {
  description = "EC2 instance type"
  type = string
}
variable "ssh_location" {
  description = "CIDR block allowed SSH access into resource"
  type = string
}
variable "environment" {
  description = "Identify the Team's Environment i.e. QA or Development"
  type = string
}
variable "owner_name" {
  description = "Name to be used on all the resources as deployment owner"
  type        = string
}
variable "enable_ipv6" {
  description = "Requests an Amazon-provided IPv6 CIDR block with a /56 prefix length for the VPC. You cannot specify the range of IP addresses, or the size of the CIDR block."
  type        = bool
  default     = false
}
variable "enable_dns_hostnames" {
  description = "Should be true to enable DNS hostnames in the VPC"
  type        = bool
  default     = true
}
variable "enable_dns_support" {
  description = "Should be true to enable DNS support in the VPC"
  type        = bool
  default     = true
}
variable "map_public_ip_on_launch" {
  description = "Whether to map the public IP on launch. "
  type = bool
  default     = true
}
terraform.tfvars

Again, this is where our QA team creates the variances required by their team. You’ll note that I give an example of our QA team using the “us-west-2” region instead of “us-west-1”, which the development team uses. Also, note that I have stipulated an instance type of “t3.micro” to demonstrate another variance between teams.

region               = "us-west-2"
environment          = "QA"
instance_type        = "t3.micro"
ec2-key              = "<EC2 Key pair for QA team>"
ssh_location         = "<Your public IP address>"
owner_name           = "<Your name, your team's name, or email>"
bucket               = "<The name of your S3 bucket>"
state-key            = "qa-terraform.tfstate"
dynamodb_table       = "test_db_locks"
bucket-region        = "<the region for the S3 bucket>"
enable_dns_hostnames = true
enable_dns_support   = true
enable_ipv6          = false
main.tf (parent configuration module for QA team)
terraform {
  required_version = ">= 0.13.0"
  required_providers {
    aws = {
      source = "hashicorp/aws"
    }
  }
}
provider "aws" {
  region = var.region
}
module "QA_vpc" {
  source = "../../modules/vpc"

  bucket                  = var.bucket
  state-key               = var.state-key
  dynamodb_table          = var.dynamodb_table
  bucket-region           = var.bucket-region
  region                  = var.region
  environment             = var.environment
  owner_name              = var.owner_name
  ec2-key                 = var.ec2-key
  instance_type           = var.instance_type
  ssh_location            = var.ssh_location
  enable_ipv6             = false
  enable_dns_support      = true
  enable_dns_hostnames    = true
  map_public_ip_on_launch = true

}

module "Docker_web" {
  source = "../../modules/Docker_website"

  depends_on     = [module.QA_vpc]
  bucket         = var.bucket
  state-key      = var.state-key
  dynamodb_table = var.dynamodb_table
  bucket-region  = var.bucket-region
  region         = var.region
  environment    = var.environment
  owner_name     = var.owner_name
  ssh_location   = var.ssh_location
  ec2-key        = var.ec2-key
  instance_type  = var.instance_type
}
outputs.tf
output "region" {
  description = "AWS region"
  value       = module.QA_vpc.region
}
output "vpc_id" {
  description = "VPC ID"
  value       = module.QA_vpc.vpc_id
}
output "public_subnet_1" {
  description = "Public Subnet 1"
  value       = module.QA_vpc.public_subnet_1
}
output "public_subnet_2" {
  description = "Public Subnet 2"
  value       = module.QA_vpc.public_subnet_2
}
output "private_subnet_1" {
  description = "Private Subnet 1"
  value       = module.QA_vpc.private_subnet_1
}
output "private_subnet_2" {
  description = "Private Subnet 2"
  value       = module.QA_vpc.private_subnet_2
}
output "Web-IP" {
  description = "Security group ID for RDS-sg"
  value       = module.Docker_web.Web_IP
}

Deployment

Be sure to update the “terraform.tfvars” file with your settings. The GitHub repository does not include these files, so you will have to create one for the development team and another for the QA team.

Please change the directory to ~/terraform/reusable_modules_exercise/teams/development

Perform the following terraform actions:

  • terraform init
  • terraform validate
  • terraform apply

Once completed, Terraform will have deployed our reusable code into AWS in the region specified by the settings configured in the parent module.

Then change the directory to ~/terraform/reusable_modules_exercise/teams/quality_assurance

And perform the following actions:

  • terraform init
  • terraform validate
  • terraform apply

Once completed, Terraform will have deployed the reusable code for Quality Assurance. If you configured the Quality Assurance deployment with a different region, the same types of AWS resources are installed in a different region using the same reusable code.


Once completed with this exercise, feel free to remove all resources by issuing the following command in the terminal:

Change the directory to each team’s directory and perform the following destroy task. We don’t want to leave our EC2 instances running and forget about them.

AWS allows 750 free tier EC2 hours per month. This exercise runs six EC2 instances (three for each team); left running, they will use up your allowance of free EC2 hours in about five days.

  • terraform destroy

This is not for production!

All public websites should have a web application firewall between the web server and its internet connection; this exercise doesn’t create one. So do not use this configuration for production.

Most cloud deployments should have monitoring in place to detect events that require remediation and alert someone. This exercise does not include any monitoring.

It is a good idea to remove all resources when you have completed this exercise so as not to incur costs.

1 Enterprise resource planning (ERP) refers to a type of software that organizations use to manage day-to-day business activities such as accounting, procurement, project management, risk management and compliance, and supply chain operations.

Using Terraform to create SSL Certificate for an AWS Load Balancer

Load Balance Web Servers using SSL for My domain

AWS Certificate Manager is a service that lets you easily provision, manage, and deploy public and private Secure Sockets Layer/Transport Layer Security (SSL/TLS) certificates for use with AWS services and your internal connected resources. SSL/TLS certificates are used to secure network communications and establish the identity of websites over the Internet as well as resources on private networks. AWS Certificate Manager removes the time-consuming manual process of purchasing, uploading, and renewing SSL/TLS certificates.

from AWS Certificate Manager Docs

AWS Certificate Manager Pricing

Public SSL/TLS certificates provisioned through AWS Certificate Manager are free.

Overview

This exercise will build an auto-scaling group (ASG) of web servers, using almost exactly the same code as my previous exercise.

The critical difference in this exercise is that we will add Terraform instructions to change our domain settings in AWS Route 53 and create a valid AWS SSL certificate using AWS Certificate Manager to enable SSL traffic to our website (HTTPS).

Prerequisites

  • You must have or purchase a domain for this exercise
    • It can be a domain purchased from any domain service, or you can buy a domain with AWS Route 53
    • You must also ensure Route 53 is configured as the name server (DNS) service for the domain.
  • Terraform Installed
  • AWS account and AWS CLI installed and configured

The Code

Please clone or fork the code from my previous exercise in the GitHub repository.

Make a directory called terraform and change into it (on a Mac, cd ~/terraform). Then clone or fork my repository into the terraform directory. You should now have a directory “ALB_ASG_Website_using_NAT_instances,” so change into that directory.

Now we are going to add a file called “Route53.tf” using our favorite editor (in my case, Visual Studio Code).

provider "aws" {
  alias = "account_route53" # Specific to your setup
}
# This creates an SSL certificate
resource "aws_acm_certificate" "cert" {
  domain_name       = "<your domain>"
  validation_method = "DNS"
}
# DNS record for ACM certificate validation (proves we own the domain)
resource "aws_route53_record" "cert_validation" {
  name    = tolist(aws_acm_certificate.cert.domain_validation_options)[0].resource_record_name
  type    = "CNAME"
  zone_id = aws_route53_record.MyDomain.zone_id
  records = [tolist(aws_acm_certificate.cert.domain_validation_options)[0].resource_record_value]
  ttl     = 60
}
# Waits for the ACM certificate to be validated using the DNS record above
resource "aws_acm_certificate_validation" "cert_validation" {
  certificate_arn         = aws_acm_certificate.cert.arn
  validation_record_fqdns = [aws_route53_record.cert_validation.fqdn]
}
# Standard route53 DNS record for Domain pointing to an ALB
resource "aws_route53_record" "MyDomain" {
  zone_id = "<zone id of your domain>"
  name    = "<your domain>"
  type    = "A"
  alias {
    name                   = aws_lb.website-alb.dns_name
    zone_id                = aws_lb.website-alb.zone_id
    evaluate_target_health = true
  }
}

Be sure to change <your domain> into the exact domain registered in Route 53, for example, “example.com.” If you want to use something like “www.example.com,” it must already be registered exactly as “www.example.com” in Route 53. Also, be sure to get the “Zone ID” of your domain from Route 53 and replace <zone id of your domain> within the above “route53.tf” code.
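If you prefer not to hardcode the zone ID, one optional variation (not part of the original code) is to look it up with the aws_route53_zone data source; a minimal sketch, assuming “example.com” is your Route 53 hosted zone:

# Look up the hosted zone by name instead of pasting its ID
data "aws_route53_zone" "selected" {
  name         = "example.com." # your domain, including the trailing dot
  private_zone = false
}

You could then reference data.aws_route53_zone.selected.zone_id wherever the code above expects <zone id of your domain>.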

That is it; the above code will create a certificate in AWS Certificate Manager, add the necessary DNS entry for validation, and validate the certificate automatically.

Well, OK, one more change needs to be made

While researching how to use Terraform to automate adding an SSL certificate to our load balancer, every example I found missed a critical component to get this working. I lost a few hours troubleshooting and banged my head on the desk before realizing the ALB listener had never been changed to accept HTTPS. I suppose the writers assumed that everyone knows an ALB listener has to change if we use HTTPS traffic instead of HTTP traffic. However, that tidbit of information wasn’t included in any articles I found on the internet. Oh well, onward and upwards!

Change the “alb_listener.tf” file

Delete the existing “alb_listener.tf” and add a new “alb_listener.tf” with the following content:

resource "aws_lb_listener" "website-alb-listener" {
  load_balancer_arn = aws_lb.website-alb.arn
  port              = "443"
  protocol          = "HTTPS"
  ssl_policy        = "ELBSecurityPolicy-2015-05"
  certificate_arn   = aws_acm_certificate.cert.arn
  default_action {
    type = "forward"
    forward {
      target_group {
        arn = aws_lb_target_group.website-target.arn
      }
      stickiness {
        enabled  = false
        duration = 1
      }
    }
  }
}
resource "aws_lb_listener" "redirect" {
  load_balancer_arn = aws_lb.website-alb.arn
  port              = "80"
  protocol          = "HTTP"
  default_action {
    type = "redirect"
    redirect {
      port        = "443"
      protocol    = "HTTPS"
      status_code = "HTTP_301"
    }
  }
}

Our new listener forwards HTTPS traffic arriving at the load balancer to the target group. The code also automatically redirects any HTTP traffic to HTTPS, forcing all traffic to be protected by SSL/TLS in transit.


The resources are free only if you don’t leave them running! There is a limit of EC2 hours allowed per month!

This is not for production!

All public websites should have an application firewall between the web server and its internet connection; this exercise doesn’t create the application firewall, so do not use this configuration for production.

All websites should have monitoring and a method to scrape log events to detect and alert for potential problems with the deployment.

This exercise uses resources compatible with the AWS Free Tier plan. It does not have sufficient compute sizing to support a production workload.

It is a good idea to remove all resources when you have completed this exercise so as not to incur costs.

Terraform – Scalable WordPress in AWS, using an ALB, ASG, and EFS

Using Terraform to deploy an auto-scaled WordPress site in AWS, with an application load balancer, while using EFS as storage for WordPress front end servers

Load balanced and Auto-Scaled WordPress deployment

This exercise will build an auto-scaled WordPress solution using EFS as the persistent storage. An auto-scaled front end can expand the number of front-end servers to handle growth in the number of users during peak hours. We also need a load balancer that automatically distributes users amongst the front-end servers to spread the load.

Ideally, we should use a scaling solution based on demand. I could write demand-based scaling for the ASG, but demonstrating it by generating client load (representing peak demand) could incur a substantial cost, and I’m trying to keep my exercises compliant with a Free Tier plan. Soooo, simply using an AWS ASG with a desired capacity will be the solution for today.

Ideally, we should also use RDS for our database, which can scale based on demand. Using one MariaDB server that does not scale to user load kind of defeats the purpose of a scalable architecture. However, I’ve written this exercise to demonstrate deploying scaled WordPress front-end servers with an EFS shared file service, not an ideal production architecture. Soooo, one MariaDB server that is Free Tier compliant is our plan for today.

Why are we using EFS?

When scaling more than one WordPress front-end server, we’ll need a method to keep track of users amongst the front-end servers. We need storage common to all front-end servers to ensure each auto-scaled WordPress server is aware of user settings, activity, and configuration. AWS provides a shared file storage system called Elastic File System (EFS). EFS is a serverless file storage service that supports NFS versions 4.0 and 4.1, so the latest versions of Amazon Linux, Red Hat, CentOS, and macOS can mount EFS as an NFS share. Amazon EC2 and other AWS compute instances running in multiple Availability Zones within the same AWS Region can access the file system, so many servers can access and share a common data source.

Each front-end server using EFS has access to shared storage, allowing each server to have all user settings, configuration, and activity information.

Docker

We will be using Docker containers for our WordPress and MariaDB servers. The previous WordPress exercise used Ansible to configure servers with WordPress and MariaDB. But we are using auto-scaling, so I would like a method to deploy WordPress quickly rather than scripts or playbooks in this exercise—Docker to the rescue.

This exercise will be using official Docker images “WordPress” and “MariaDB.”

Terraform

We will be using Terraform to construct our AWS resources. Our Terraform code will build a new VPC, two public subnets, two private subnets, and the associative routing and security groups. Terraform will also construct our ALB, ASG, EC2, and EFS resources.

Requirements

  • Must have an AWS account
  • Install AWS CLI, Configure AWS CLI, Install Terraform
  • An EC2 Key Pair for AWS CLI (for connecting using SSH protocol)
  • AWS Administrator account or an account with the following permissions:
    • create VPC, subnets, routing, and security groups
    • create EC2 Instances and manage EC2 resources
    • create auto-scaling groups and load balancers
    • create and manage EFS and EFS mount points

GitHub Repository

https://github.com/surfingjoe/Wordpress-deployment-into-AWS-with-EFS-ALB-ASG-and-Docker

Building our Scaled WordPress Solution

vpc.tf

provider "aws" {
  region = var.region
}
data "aws_availability_zones" "all" {}
terraform {
  backend "s3" {
    bucket = "nickname-terraform-states"
    key    = "terraform.tfstate"
    region = "us-west-1"
  }
}
data "aws_availability_zones" "available" {
  state = "available"
}
data "aws_region" "current" {}
module "vpc" {
  source = "terraform-aws-modules/vpc/aws"
  #version => "2.64.0"
  name = "${var.environment}-vpc"
  cidr = var.vpc_cidr_block
  azs             = data.aws_availability_zones.available.names
  private_subnets = slice(var.private_subnet_cidr_blocks, 0, var.private_subnet_count)
  public_subnets  = slice(var.public_subnet_cidr_blocks, 0, var.public_subnet_count)
  intra_subnets   = slice(var.intra_subnet_cidr_blocks, 0, 2)
  enable_dns_support   = true
  enable_dns_hostnames = true
  single_nat_gateway   = true
  enable_nat_gateway   = true
  enable_vpn_gateway   = false
  tags = {
    Name  = "${var.environment}-VPC"
    Stage = "${var.environment}"
    Owner = "${var.your_name}"
  }
}

vpc_variables.tf

variable "region" {
  description = "The region Terraform deploys your instances"
  type        = string
}
variable "ssh_location" {
  type        = string
  description = "My Public IP Address"
}
variable "vpc_cidr_block" {
  description = "CIDR block for VPC"
  type        = string
  default     = "10.0.0.0/16"
}
variable "public_subnet_count" {
  description = "Number of public subnets."
  type        = number
}
variable "private_subnet_count" {
  description = "Number of private subnets."
  type        = number
}
variable "intra_subnet_count" {
  description = "Number of private subnets"
  type        = number
}
variable "public_subnet_cidr_blocks" {
  description = "Available cidr blocks for public subnets"
  type        = list(string)
  default = [
    "10.0.1.0/24",
    "10.0.2.0/24",
    "10.0.3.0/24",
    "10.0.4.0/24",
    "10.0.5.0/24",
    "10.0.6.0/24",
    "10.0.7.0/24",
    "10.0.8.0/24",
  ]
}
variable "private_subnet_cidr_blocks" {
  description = "Available cidr blocks for private subnets"
  type        = list(string)
  default = [
    "10.0.101.0/24",
    "10.0.102.0/24",
    "10.0.103.0/24",
    "10.0.104.0/24",
    "10.0.105.0/24",
    "10.0.106.0/24",
    "10.0.107.0/24",
    "10.0.108.0/24",
  ]
}
variable "intra_subnet_cidr_blocks" {
  description = "Available cidr blocks for database subnets"
  type        = list(string)
  default = [
    "10.0.201.0/24",
    "10.0.202.0/24",
    "10.0.203.0/24",
    "10.0.204.0/24",
    "10.0.205.0/24",
    "10.0.206.0/24",
    "10.0.207.0/24",
    "10.0.208.0/24"
  ]
}

Security

The load balancer security group will only allow HTTP inbound traffic from my public IP address (in this exercise) at the time of this writing. I will possibly alter this exercise to include the configuration of a domain using Route 53 and a certificate for that domain, such that we can use HTTPS encrypted traffic instead of HTTP traffic. Using a certificate incurs costs because a Route 53 certificate for a domain is not included in a free tier plan. Therefore, I might write managing Route 53 using Terraform as an optional configuration later.

The WordPress Security group will only allow HTTP inbound traffic from the ALB security group and SSH only from the Controller security group.

The MySQL group will only allow MySQL protocol from the WordPress security group and SSH protocol from the Controller security group.

The optional Controller will only allow SSH inbound from My Public IP address.

security_groups.tf

resource "aws_security_group" "controller-ssh" {
  name        = "Controller-SG"
  description = "allow SSH from my location"
  vpc_id      = module.vpc.vpc_id
  ingress {
    protocol    = "tcp"
    from_port   = 22
    to_port     = 22
    cidr_blocks = ["${var.ssh_location}"]
  }

  egress {
    protocol    = "-1"
    from_port   = 0
    to_port     = 0
    cidr_blocks = ["0.0.0.0/0"]
  }
  tags = {
    Name  = "${var.environment}-Controller-SG"
    Stage = "${var.environment}"
    Owner = "${var.your_name}"
  }
}
resource "aws_security_group" "web-sg" {
  name        = "Web-SG"
  description = "allow HTTP from Load Balancer, & SSH from controller"
  vpc_id      = module.vpc.vpc_id
  ingress {
    protocol        = "tcp"
    from_port       = 22
    to_port         = 22
    security_groups = ["${aws_security_group.controller-ssh.id}"]
  }
  ingress {
    from_port       = 80
    to_port         = 80
    protocol        = "tcp"
    security_groups = ["${aws_security_group.alb-sg.id}"]

  }

  egress {
    protocol    = "-1"
    from_port   = 0
    to_port     = 0
    cidr_blocks = ["0.0.0.0/0"]
  }
  tags = {
    Name  = "${var.environment}-Web-SG"
    Stage = "${var.environment}"
    Owner = "${var.your_name}"
  }
}
resource "aws_security_group" "alb-sg" {
  name        = "ALB-SG"
  description = "allow Http, HTTPS"
  vpc_id      = module.vpc.vpc_id

  ingress {
    from_port   = 80
    to_port     = 80
    protocol    = "tcp"
    cidr_blocks = ["${var.ssh_location}"]
  }

  ingress {
    from_port   = 443
    to_port     = 443
    protocol    = "tcp"
    cidr_blocks = ["${var.ssh_location}"]
  }

  egress {
    protocol    = "-1"
    from_port   = 0
    to_port     = 0
    cidr_blocks = ["0.0.0.0/0"]
  }
  tags = {
    Name  = "${var.environment}-ALB-SG"
    Stage = "${var.environment}"
    Owner = "${var.your_name}"
  }
}
resource "aws_security_group" "efs-sg" {
  name   = "ingress-efs-sg"
  vpc_id = module.vpc.vpc_id

  // NFS
  ingress {
    security_groups = ["${aws_security_group.controller-ssh.id}", "${aws_security_group.web-sg.id}"]
    from_port       = 2049
    to_port         = 2049
    protocol        = "tcp"
  }

  egress {
    security_groups = ["${aws_security_group.controller-ssh.id}", "${aws_security_group.web-sg.id}"]
    from_port       = 0
    to_port         = 0
    protocol        = "-1"
  }
  tags = {
    Name  = "${var.environment}-MyEFS-SG"
    stage = "${var.environment}"
    Owner = "${var.your_name}"
  }
}
resource "aws_security_group" "MySQL-sg" {
  name        = "MySQL-SG"
  description = "allow SSH from Controller and MySQL from my IP and from web servers"
  vpc_id      = module.vpc.vpc_id
  ingress {
    protocol        = "tcp"
    from_port       = 22
    to_port         = 22
    security_groups = ["${aws_security_group.controller-ssh.id}"]
  }

  ingress {
    from_port = 3306
    to_port   = 3306
    protocol  = "tcp"

    security_groups = ["${aws_security_group.web-sg.id}"]
  }

  egress {
    protocol    = "-1"
    from_port   = 0
    to_port     = 0
    cidr_blocks = ["0.0.0.0/0"]
  }
  tags = {
    Name  = "${var.environment}-MySQL-SG"
    Stage = "${var.environment}"
    Owner = "${var.your_name}"
  }
}

efs.tf

We are writing the Terraform code to create a general-purpose EFS deployment. You’ll note that I’m using a variable called “nickname” to create a unique EFS name. We are using “general purpose” performance and “bursting” throughput mode to stay within free tier plans and not incur costs. You’ll notice that we are creating a mount point in each private subnet so that our EC2 instances can make NFS mounts to an AWS EFS service.

resource "aws_efs_file_system" "my_efs" {
  creation_token   = "${var.nickname}-efs"
  performance_mode = "generalPurpose"
  throughput_mode  = "bursting"
  encrypted        = "true"
  tags = {
    Name  = "${var.environment}-MyEFS"
    stage = "${var.environment}"
    Owner = "${var.your_name}"
  }
}
resource "aws_efs_mount_target" "efs-mt-A" {
  file_system_id  = aws_efs_file_system.my_efs.id
  subnet_id       = module.vpc.intra_subnets[0]
  security_groups = ["${aws_security_group.efs-sg.id}"]
}
resource "aws_efs_mount_target" "efs-mt-B" {
  file_system_id  = aws_efs_file_system.my_efs.id
  subnet_id       = module.vpc.intra_subnets[1]
  security_groups = ["${aws_security_group.efs-sg.id}"]
}

wordpress.tf

The method of creating an auto-scaled WordPress deployment uses the same kind of Terraform code found in my previous exercise. If you would like more discussion of the key attributes and decisions involved in writing Terraform for an Auto Scaling Group, please refer to my previous article.

Notice that I added a dependency on MariaDB in the code. It is not required; the code will work with or without this dependency, but I like the idea of telling Terraform that I want our database to be active before creating WordPress.

Notice that we assign variables for EFS ID, dbhost, database name, the admin password, and the root password in the launch template.

#--------- Get Amazon Linux 2 AMI image  -------------------
data "aws_ami" "amazon_linux" {
  most_recent = true
  owners      = ["amazon"]
  filter {
    name   = "name"
    values = ["amzn2-ami-hvm-*-x86_64-gp2"]
  }
}
# ---------  Create Launch Template ----------------------------
resource "aws_launch_template" "wordpress" {
  image_id               = data.aws_ami.amazon_linux.id
  instance_type          = var.instance_type
  key_name               = var.key
  vpc_security_group_ids = ["${aws_security_group.web-sg.id}"]
  user_data              = base64encode("${data.template_file.bootstrap.rendered}")
  depends_on             = [aws_efs_file_system.my_efs, aws_instance.mariadb]
  lifecycle { create_before_destroy = true }
  #monitoring             = true
}

# ------------ Create  Auto Scaling Group ----------------------
resource "aws_autoscaling_group" "wordpress_asg" {
  launch_template {
    name    = aws_launch_template.wordpress.name
    version = aws_launch_template.wordpress.latest_version
  }
  vpc_zone_identifier = module.vpc.private_subnets
  min_size            = 2
  max_size            = 6
  desired_capacity    = 2
  tag {
    key                 = "Name"
    value               = "Wordpress_ASG"
    propagate_at_launch = true
  }
  depends_on            = [aws_instance.mariadb]
}
data "template_file" "bootstrap" {
  template = file("bootstrap_wordpress.tpl")
  vars = {
    efs_id   = "${aws_efs_file_system.my_efs.id}"
    dbhost   = "${aws_instance.mariadb.private_ip}"
    user     = var.user
    password = var.password
    dbname   = var.dbname
    domain_name = var.domain_name
  }
}

vars.tf

This covers the variables needed for WordPress and MariaDB servers.

variable "instance_type" {
  description = "Type of EC2 instance to use"
  type        = string
  default     = "t2.micro"
}
variable "environment" {
  description = "User selects environment"
  type        = string
}
variable "your_name" {
  description = "Your Name?"
  type        = string
}
variable "key" {
  description = "EC2 Key Pair Name"
  type        = string
}
variable "user" {
  description = "SQL User for WordPress"
  type        = string
}
variable "dbname" {
  description = "Database name for WordPress"
  type        = string
}
variable "password" {
  description = "User password for WordPress"
  type        = string
}
variable "root_password" {
  description = "User password for WordPress"
  type        = string
}
variable "domain_name" {
  description = "My Domain Name"
  type        = string
}

bootstrap_wordpress.tpl

This bootstrap template, rendered by Terraform, configures each WordPress server with Docker and launches the WordPress Docker container, passing in the EFS ID, database host, database name, database user, and database password.

#!/bin/bash
sudo yum -y update
hostnamectl set-hostname wordpress
# ----- Install AWS EFS Utilities --------------------
yum install -y amazon-efs-utils
# ----- Create EFS Mount --------------------
mkdir /efs
mount -t efs ${efs_id}:/ /efs
# ----- Edit fstab so EFS automatically loads on reboot
echo ${efs_id}:/ /efs efs defaults,_netdev 0 0 >> /etc/fstab
# Install & Run Docker --------------------
sudo amazon-linux-extras install -y docker
sudo usermod -a -G docker ec2-user
sudo systemctl start docker
# ----- Install docker compose --------------------
sudo curl -L "https://github.com/docker/compose/releases/download/1.29.2/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose
sudo chmod +x /usr/local/bin/docker-compose
sudo ln -s /usr/local/bin/docker-compose /usr/bin/docker-compose
# ----- Docker run wordpress with assigning env variables  ------- 
docker run -d -e WORDPRESS_DB_HOST=${dbhost} -e WORDPRESS_DB_PASSWORD=${password} -e WORDPRESS_DB_USER=${user} -e WORDPRESS_DB_NAME=${dbname}  -v /efs/wordpress:/var/www/html -p 80:80 wordpress:latest

mariadb.tf

Notice that we are once again passing variables to our bootstrap by using a launch template.

# Creating the MariaDB database instance
resource "aws_instance" "mariadb" {
  ami                    = data.aws_ami.amazon_linux.id
  instance_type          = var.instance_type
  subnet_id              = module.vpc.private_subnets[1]
  vpc_security_group_ids = ["${aws_security_group.MySQL-sg.id}"]
  user_data  = data.template_file.bootstrap-db.rendered
  monitoring = true
  key_name   = var.key
  depends_on = [aws_efs_file_system.my_efs]

  tags = {
    Name  = "${var.environment}-MariaDB"
    Stage = "${var.environment}"
    Owner = "${var.your_name}"
  }
}

data "template_file" "bootstrap-db" {
  template = file("bootstrap_mariadb.tpl")
  vars = {
    efs_id        = "${aws_efs_file_system.my_efs.id}"
    root_password = var.root_password
    user          = var.user
    password      = var.password
    dbname        = var.dbname
  }
}

bootstrap_mariadb.tpl

#!/bin/bash
sudo yum -y update
hostnamectl set-hostname mariadb
# ----- Install AWS EFS Utilities ---------------
yum install -y amazon-efs-utils
# ----- Create the EFS Mount --------------------
mkdir /efs
mkdir /efs/mariadb
mount -t efs ${efs_id}:/ /efs
# ----- Edit fstab so EFS automatically loads on reboot ---
echo ${efs_id}:/ /efs efs defaults,_netdev 0 0 >> /etc/fstab
# ----- Install & Run Docker ---------------------
sudo amazon-linux-extras install -y docker
sudo usermod -a -G docker ec2-user
sudo systemctl start docker
docker run --name mariadb -e MYSQL_ROOT_PASSWORD=${root_password} -e MYSQL_USER=${user} -e MYSQL_PASSWORD=${password} -e MYSQL_DATABASE=${dbname} -p 3306:3306 -d -v /efs/mariadb:/var/lib/mysql docker.io/library/mariadb

alb.tf

resource "random_pet" "app" {
  length    = 2
  separator = "-"
}
resource "aws_lb" "wordpress-alb" {
  name               = "main-app-${random_pet.app.id}-lb"
  internal           = false
  load_balancer_type = "application"
  subnets            = module.vpc.public_subnets
  security_groups    = ["${aws_security_group.alb-sg.id}"]
}
resource "aws_lb_listener" "wordpress-alb-listner" {
  load_balancer_arn = aws_lb.wordpress-alb.arn
  port              = "80"
  protocol          = "HTTP"
  default_action {
    type = "forward"
    forward {
      target_group {
        arn = aws_lb_target_group.wordpress-target.arn
      }
      stickiness {
        enabled  = true
        duration = 1
      }
    }
  }
}

alb_target.tf

resource "aws_lb_target_group" "wordpress-target" {
  name     = "wordpress-${random_pet.app.id}-lb"
  port     = 80
  protocol = "HTTP"
  vpc_id   = module.vpc.vpc_id
  health_check {
    port     = 80
    protocol = "HTTP"
    timeout  = 5
    interval = 10
  }
}
# ----- Create a new ALB Target Group attachment. ------
resource "aws_autoscaling_attachment" "asg_attachment_website" {
  autoscaling_group_name = aws_autoscaling_group.wordpress_asg.id
  lb_target_group_arn    = aws_lb_target_group.wordpress-target.arn
}

output.tf

output "Controller-sg_id" {
  value       = [aws_security_group.controller-ssh.id]
}
output "vpc_id" {
  description = "Output VPC ID"
  value       = module.vpc.vpc_id
}
output "public_subnet_ids" {
  description = "Public subnet IDs"
  value       = module.vpc.public_subnets
}
output "private_subnet_ids" {
  description = "Private subnet IDs"
  value       = module.vpc.private_subnets
}
output "lb_dns_name" {
  value = aws_lb.wordpress-alb.dns_name
}
output "Auto_Scaling_Group_Name" {
  value = aws_autoscaling_group.wordpress_asg.name
}

terraform.tfvars

This file will be used to assign values to our variables. I have placed dummy values in the code below; of course, you will want to change them.

your_name             = "Your name"
ssh_location          = "1.2.3.4/32"
root_password         = "Password"
user                  = "wordpress"
password              = "Password"
dbname                = "Wordpress"
environment           = "Test"
key                   = "Your EC2 Key name"
region                = "us-west-1"
public_subnet_count   = "2"
private_subnet_count  = "2"
intra_subnet_count    = "2"
nickname              = "Your nickname"
domain_name           = "Your domain name"

Deploy our Resources using Terraform

Be sure to edit the variables in terraform.tfvars (currently, it has bogus values)

If you are placing this into any other region than us-west-1, you will have to change the AMI ID for the NAT instances in the file “vpc.tf”.

In your terminal, go to the VPC folder and execute the following commands:

  1. terraform init
  2. terraform validate
  3. terraform apply

Once the deployment is successful, the terminal will output something like the following output:

Auto_Scaling_Group_Name = "terraform-20220624191901645100000004"
Controller-sg_id = [
  "sg-03fbbf2bf5df75562",
]
aws_region = "us-west-1"
lb_dns_name = "main-app-nearby-lab-lb-73970083.us-west-1.elb.amazonaws.com"
vpc_id = "vpc-0ae0cd8eef3139128"

Copy the lb_dns_name, without the quotes, and paste the DNS name into any browser. If you have followed along and placed all of the code correctly, you should see something like the following:

Screen Shot

Notice: Servers in an ASG sometimes take a few minutes to configure. If you get an error from the website, wait a couple of minutes and try again.

Open up your AWS Management Console and go to the EC2 dashboard. Be sure to configure your EC2 dashboard to show the tag column “Name”. A great way to identify your resources is using TAGS!!

If you have configured the dashboard to display the tag column "Name", you should quickly be able to see one instance with the tag name "Test-MariaDB", one with "Test-NAT2", and TWO servers with the tag name "Wordpress_ASG".

As an experiment, perhaps you would like to expand the number of web servers. We can manually change the desired capacity, and the Auto Scaling Group will scale the number of servers up or down to match.

The AWS CLI command is as follows:

aws autoscaling set-desired-capacity \
    --auto-scaling-group-name ASG_Name \
    --desired-capacity 4 \
    --honor-cooldown

Where ASG_Name in the command above is the terminal’s output of Auto_Scaling_Group_Name (without the quotes, of course). If the command executes successfully, you should eventually see FOUR instances with the tag name “Wordpress_ASG” in the EC2 dashboard; it does take a few minutes for the change to complete. This demonstrates our ability to manually change the number of servers from two to four.

Now, go to your EC2 dashboard. Select one of the “Wordpress_ASG” instances, open the “Instance state” drop-down, and select “Stop Instance”. The instance will stop, and the Auto Scaling Group and load balancer health checks will see that one of the instances is no longer working. The Auto Scaling Group will automatically take it out of service and create a new instance.

Now go to the Auto Scaling Groups panel (found in the EC2 dashboard, left-hand pane under “Auto Scaling”) and click on the “Activity” tab. Within a few minutes you should see an activity announcing:

“an instance was taken out of service in response to an EC2 health check indicating it has been terminated or stopped.”

The next activity will be to start a new instance. How about that! Working just like we designed the ASG to do for us: the ASG automatically keeps our desired number of servers healthy by creating a new instance whenever one becomes unhealthy.


Once completed with this exercise, feel free to remove all resources by issuing the following command in the terminal:

  • terraform destroy

This is not for production!

All public websites should have security protection with a firewall (not just a security group). Since this is just an exercise that you can run in an AWS Free Tier account, I do not recommend using this configuration for production.

Most cloud deployments should have monitoring in place to detect and alert someone should an event occur to any resource that requires remediation. This exercise does not include any monitoring.

It is a good idea to remove all resources when you have completed this exercise so as not to incur costs.

Create AWS load-balanced website using a custom AMI image

Load balanced Website servers

Repository

All of the Terraform code for this exercise is in my GitHub repository.

Features

  • AWS as cloud provider
  • Compliant with Free Tier plan
  • The ability to provision resources into AWS using “modular code.”
  • Using a community module to create the VPC, public and private subnets
  • Four EC2 Web Servers behind a Classic load balancer
  • Ability to launch or destroy bastion host (jump server) only when needed
    • Can add/remove the bastion host (jump server) at any time without impact to other resources (a bastion host provides administrators SSH access to servers located in a private network)

Requirements

  • Must have an AWS account
  • Install AWS CLI, Configure AWS CLI, Install Terraform
  • AWS Administrator account or an account with the following permissions:
    • Privilege to create, read & write an S3 bucket
    • Privilege to create an IAM profile
    • Privilege to create VPC, subnets, and security groups
    • Privilege to create security groups
    • Privilege to create a load balancer, internet gateway, and NAT gateway
    • Privilege to create EC2 images and manage EC2 resources
    • EC2 Key Pair for the region
  • Create an S3 Bucket for Terraform State
  • In the previous exercise, we created a web server configured with a static website and saved it as an EC2 image (AMI). We will need the AMI ID of that image for this exercise.

Infrastructure

New Infrastructure

Dry Code (reusable and repeatable)

Dry code (the principle of “do not repeat yourself”) means creating lines of code once and using or referencing that code many times. The benefit to everyone is re-usable code. 

  • Someone writes a bit of code and puts the code in a shared location
  • This allows other team members to copy the code or make references to the code
  • Everyone uses the same code but varies the utilization of code with variables

In the case of AWS deployments with Terraform, referenced code applied to a test environment using variables will create smaller or fewer resources in a test environment. In contrast, the same code with variables would deploy a larger resource or a greater scale of resources in production.

It makes sense to test and validate code in a test environment, then deploy the same code in production using variables that change the parameters of deployment.

We can accomplish dry code in Terraform by placing the “Infrastructure as Code” in a shared location such as Git, GitHub, AWS S3 buckets, shared files on your network, or a folder structure on your workstation, and then using the shared code in different deployments simply by varying the input variables for each environment. A rough sketch of the idea follows.
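As a sketch only (the module path and variable names are hypothetical, not part of this exercise), the same shared module can be consumed twice with different inputs to produce differently sized deployments:

# Hypothetical example: one shared module, two deployments sized by variables
module "web_test" {
  source        = "../modules/web_cluster" # hypothetical shared module folder
  environment   = "Test"
  instance_type = "t2.micro"
  server_count  = 2
}

module "web_prod" {
  source        = "../modules/web_cluster"
  environment   = "Production"
  instance_type = "m5.large"
  server_count  = 6
}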

Independent and modular

Modular coding allows code to be deployed “independent” of other code. For example, the ability to launch and test security groups, load balancers, EC2 instances, or containers as deployment modules, with or without dependencies on other resources.

Consider a bastion host (I call it a “Controller” as I also use a bastion host to run Ansible code). Using modular code we can launch a jump server (bastion-host) using Terraform, do some administration using SSH into some private servers, and when finished, we can shut down the controller. Meanwhile, Infrastructure launched with other modular code remains operational and not impacted by our addition and subsequent removal of a bastion host.

The Secret ingredient to modular terraform (Outputs, Inputs)

Output/Input – Seriously, the secret to modular and reusable Terraform code is wrapping our heads around putting code into a folder, using output statements to publish certain parameters from that code into a remote state, and then using those outputted parameters from the remote state as inputs elsewhere. Hence, we are passing data between modules. For example, the code that creates a VPC will include an output of the VPC ID, and other modules will learn the VPC ID by reading it from Terraform’s output.

Location, Location, Location – The other secret is to place the output in a location other modules can read as input “data”, for example by placing the remote state in an S3 bucket, as sketched just below.
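The “placing” side is just the backend configuration we will use later in vpc.tf; a minimal sketch (the bucket name and region are placeholders):

# Write this module's state to an S3 bucket so other modules can read its outputs
terraform {
  backend "s3" {
    bucket = "randomName-terraform-states"
    key    = "terraform.tfstate"
    region = "us-west-1"
  }
}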

Using AWS S3 bucket

The diagram above represents storing Terraform state in an AWS S3 bucket: one module creates a Terraform output, which is written into Terraform’s state file, and another module then reads that data.

Say for example we create a VPC and use an output statement as follows;

output "vpc_id" {
  description = "Output VPC ID"
  value       = module.vpc.vpc_id
}

Another module will know what VPC to use by getting the data about the VPC ID;

vpc_id = data.terraform_remote_state.vpc.outputs.vpc_id 

So one module outputs the property value of an AWS resource using an output statement with a name (in this case, “vpc_id”); another module gets the value of that AWS resource by reading the Terraform state and referencing the output name (again, “vpc_id”). The data source sketch below shows where that data comes from.
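For completeness, the data source that makes data.terraform_remote_state.vpc available looks something like the following sketch (the bucket details are placeholders; the real version appears later in elb-web.tf):

# Read the VPC module's state file so its outputs can be used as inputs here
data "terraform_remote_state" "vpc" {
  backend = "s3"
  config = {
    bucket = "randomName-terraform-states"
    key    = "terraform.tfstate"
    region = "us-west-1"
  }
}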


So let us get started

First, please create the following folder structure shown below.

After creating the folders, we will place code into each folder and then use “Terraform apply” a few times to demonstrate the independence of modular Terraform code.


Create VPC.tf (in the VPC folder)

Note: this code is using a community module for the creation of a VPC. See the registry of community modules at:
https://registry.terraform.io/namespaces/terraform-aws-modules.

I like the community-built module AWS VPC Terraform module because it can create a VPC with public and private subnets, an internet gateway, and a Nat gateway with just a few lines of code.

However, to my knowledge, it is not written or supported by HashiCorp; it is written and supported by antonbabenko. I’m sure it’s a great module, and I personally use it, but I don’t know enough about it to recommend it for production usage. I have done some rudimentary tests; it works great and makes it far easier to produce the VPC and subnets in my test account. But treat this module like any other community or open-source code before using it in production, and do your own research.

vpc.tf

terraform {
  required_providers {
    aws = {
      source = "hashicorp/aws"
    }
  }
}

terraform {
  backend "s3" {
    bucket = "randomName-terraform-states"
    key    = "terraform.tfstate"
    region = "us-west-1"                    # Change to the region you selected for your S3 bucket
  }
}

provider "aws" {
  region = var.aws_region
}

data "aws_availability_zones" "available" {
  state = "available"
}

data "aws_region" "current" { }

module "vpc" {
  source  = "terraform-aws-modules/vpc/aws"
  version = "3.6.0"

  cidr            = var.vpc_cidr_block
  azs             = data.aws_availability_zones.available.names
  private_subnets = slice(var.private_subnet_cidr_blocks, 0, 2)
  public_subnets  = slice(var.public_subnet_cidr_blocks, 0, 2)
  # database_subnets= slice(var.database_subnet_cidr_blocks, 0, 2)
  enable_dns_support = true
  enable_nat_gateway = true
  #enable_vpn_gateway = false
  single_nat_gateway = true
    tags = {
    Name          = "${var.environment}-VPC"
    Stage         = "${var.environment}"
    Owner         = "${var.your_name}"
  }
}

Note: This will create a NAT gateway that is not free in the AWS Free Tier; there will be a cost: for example, about a dollar per day in the us-west-1 region if left running.

Create variables.tf (in the VPC folder)

variable "aws_region" {
  description = "AWS region"
  type        = string
}
variable "environment" {
  description = "User selects environment"
  type = string
}
variable "your_name" {
  description = "Your Name?"
  type = string

}
variable "ssh_location" {
  type        = string
  description = "My Public IP Address"
}

variable "vpc_cidr_block" {
  description = "CIDR block for VPC"
  type        = string
  default     = "10.0.0.0/16"
}

variable "public_subnet_cidr_blocks" {
  description = "Available cidr blocks for public subnets"
  type        = list(string)
  default = [
    "10.0.1.0/24",
    "10.0.2.0/24",
    "10.0.3.0/24",
    "10.0.4.0/24",
    "10.0.5.0/24",
    "10.0.6.0/24",
    "10.0.7.0/24",
    "10.0.8.0/24"
  ]
}

variable "private_subnet_cidr_blocks" {
  description = "Available cidr blocks for private subnets"
  type        = list(string)
  default = [
    "10.0.101.0/24",
    "10.0.102.0/24",
    "10.0.103.0/24",
    "10.0.104.0/24",
    "10.0.105.0/24",
    "10.0.106.0/24",
    "10.0.107.0/24",
    "10.0.108.0/24"
  ]
}

variable "database_subnet_cidr_blocks" {
  description = "Available cidr blocks for database subnets"
  type        = list(string)
  default = [
    "100.201.0/24",
    "100.202.0/24",
    "100.203.0/24",
    "100.204.0/24",
    "100.205.0/24",
    "100.206.0/24",
    "100.207.0/24",
    "100.208.0/24"
  ]
}
variable "public_subnet_count" {
  description = "Number of public subnets"
  type        = number
  default     = 2
}

variable "private_subnet_count" {
  description = "Number of private subnets"
  type        = number
  default     = 2
}

variable "database_subnet_count" {
  description = "Number of database subnets"
  type        = number
  default     = 2
}

Note: No “default” settings for the following variables.

  • aws_region
  • environment
  • your_name
  • ssh_location

When a variable has no “default”, “terraform apply” will ask for your input for each variable that lacks one. This allows an admin to choose a region at execution time, to tag a deployment as “Test” or “Development”, and to supply “My public IP address” (named ssh_location in this exercise) without embedding the IP address in the code. Hence, we can deploy the same code into different regions and environments simply by changing the variable inputs.

Instead of typing answers for the above variables every time the code is executed, a common practice is to create an answer file using “.tfvars”. For example, we can create a “test.tfvars” file and pass it to the apply command:
terraform apply -var-file=test.tfvars
And the file would look something like the following:

test.tfvars

your_name       = "Joe"
ssh_location    = "1.2.3.4/32"
environment     = "Test"
aws_region      = "us-west-1"

Note: A benefit of putting your answers into a file like “test.tfvars” is that you can keep them private by adding “*.tfvars” to .gitignore. A .gitignore file forces git to ignore the stated file patterns when pushing to GitHub, which ensures your sensitive data is not copied into Git or GitHub.

Create security_groups.tf (in vpc folder)

Create the security groups for the controller, the web servers, and the load balancer in the same folder “VPC”.

# -------------- Security Group for bastion host -----------------------
resource "aws_security_group" "controller-ssh" {
  name        = "ssh"
  description = "allow SSH from MyIP"
  vpc_id      = module.vpc.vpc_id
  ingress {
    protocol    = "tcp"
    from_port   = 22
    to_port     = 22
    cidr_blocks = ["${var.ssh_location}"]

  }

  egress {
    protocol    = "-1"
    from_port   = 0
    to_port     = 0
    cidr_blocks = ["0.0.0.0/0"]
  }
    tags = {
    Name          = "${var.environment}-Controller-SG"
    Stage         = "${var.environment}"
    Owner         = "${var.your_name}"
  }
}
# -------------- Security Group for ELB Web Servers -----------------------
resource "aws_security_group" "elb_web_sg" {
  name        = "${var.environment}-elb_web_sg"
  description = "allow SSH from Controller and HTTP from my IP"
  vpc_id      = module.vpc.vpc_id
  ingress {
    protocol    = "tcp"
    from_port   = 22
    to_port     = 22
    #security_groups  = ["sg-09812181ec902d546"]
    security_groups  = ["${aws_security_group.controller-ssh.id}"]
    }

    ingress {
    from_port   = 80
    to_port     = 80
    protocol    = "tcp"
    security_groups = ["${aws_security_group.lb-sg.id}"]
    }

  egress {
    protocol    = "-1"
    from_port   = 0
    to_port     = 0
    cidr_blocks = ["0.0.0.0/0"]
    }
    tags = {
    Name          = "${var.environment}-elb_web_sg"
    Stage         = "${var.environment}"
    Owner         = "${var.your_name}"
  }
}

# -------------- Security Group for Load Balancer -----------------------
resource "aws_security_group" "lb-sg" {
  name        = "${var.environment}-lb-SG"
  description = "allow HTTP and HTTPS"
  vpc_id      = module.vpc.vpc_id

    ingress {
    from_port   = 80
    to_port     = 80
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
    }

  egress {
    protocol    = "-1"
    from_port   = 0
    to_port     = 0
    cidr_blocks = ["0.0.0.0/0"]
    }
    tags = {
    Name          = "${var.environment}-lb-SG"
    Stage         = "${var.environment}"
    Owner         = "${var.your_name}"
  }
}

“Outputs.tf” will be used as data for other modules to consume as “input” data, for example, the elb-web module (elb-web folder).

Outputs

Outputs.tf

# ------ Output Region ------------------------------
output "aws_region" {
  description = "AWS region"
  value       = data.aws_region.current.name
}
# ------- Output VPC ID ------------------------------
output "vpc_id" {
  description = "Output VPC ID"
  value       = module.vpc.vpc_id
}
# ------- Output Controller Security Group ID --------
output "Controller-sg_id" {
  description = "Security group IDs for Controller"
  value       = [aws_security_group.controller-ssh.id]
}
# ---- Output Load Balancer Security Group ID --------
output "lb_security_group_id" {
  description = "Security group IDs for load balancer"
  value       = [aws_security_group.lb-sg.id]
}
# ------- Output Web Servers Security Group ID --------
output "elb_web-sg_id" {
  description = "Security group IDs for elb-Web servers"
  value       = [aws_security_group.elb_web_sg.id]
}
# ------- Output Public Subnet Group IDs -------------
output "public_subnet_ids" {
  description = "Public subnet IDs"
  value       = module.vpc.public_subnets
}
# ------- Output Private Subnet Group IDs ------------
output "private_subnet_ids" {
  description = "Private subnet IDs"
  value       = module.vpc.private_subnets
}

As shown above, “outputs.tf” provides output data for: aws_region, vpc_id, Controller-sg_id, lb_security_group_id, elb_web-sg_id, public_subnet_ids, and private_subnet_ids.

After running “terraform apply -var-file=test.tfvars”, you will see the above outputs displayed in the terminal console.


New Module and New Folder
Load Balancer and distributed Web Servers

We are going to provision the Elastic Load Balancer and Web Servers from a different folder. A separate folder automatically becomes a module to Terraform. This module is isolated, and we can provision using this module from another workstation or even using a different privileged IAM user within an AWS account.

If you want to actually test the load balancer, feel free to read up on how to use AWS Route 53 to route traffic to an AWS ELB load balancer.

Create a new folder “elb-web”, cd into the directory, and let’s get started.

elb-web.tf

terraform {
  required_providers {
    aws = {
      source = "hashicorp/aws"
    }
  }
}

# ------------- Configure the S3 backend for Terraform State -----------
data "terraform_remote_state" "vpc" {
  backend = "s3" 
  config = {
    bucket = "randomName-terraform-states"
    key    = "terraform.tfstate"
    region = "us-west-1"
  }
}

# ------------ Pull the remote state data to determine region ----------
provider "aws" {

  region = data.terraform_remote_state.vpc.outputs.aws_region
}

So we begin by declaring that AWS is the cloud platform and that HashiCorp’s AWS provider is the provider source. We then point at the S3 bucket holding the VPC module’s remote state and acquire our first input from that state, data.terraform_remote_state.vpc.outputs.aws_region, which tells this module which region to deploy into.

Inputs

elb-web.tf – continued

module "elb_http" {
  source  = "terraform-aws-modules/elb/aws"
  version = "3.0.0"

  # Ensure load balancer name is unique
  name = "lb-${random_string.lb_id.result}-${var.environment}-lb"

  internal = false

  security_groups = data.terraform_remote_state.vpc.outputs.lb_security_group_id 
  subnets         = data.terraform_remote_state.vpc.outputs.public_subnet_ids # pulling remote state data to obtain the public subnet IDS

  number_of_instances = length(aws_instance.web)
  instances           = aws_instance.web.*.id

  listener = [{
    instance_port     = "80"
    instance_protocol = "HTTP"
    lb_port           = "80"
    lb_protocol       = "HTTP"
  }]

  health_check = {
    target              = "HTTP:80/index.html"
    interval            = 10
    healthy_threshold   = 3
    unhealthy_threshold = 10
    timeout             = 5
  }
}

The code above uses another community module. In this case, the “Elastic Load Balancer (ELB) Terraform module“. This module was also written and supported by antonbabenko.

elb-web.tf – continued

resource "aws_instance" "web" {
  ami = "ami-08f38617285ff6cbd" # this is my AMI ID from previous exercise - an EC2 instance configured with a static website and saved as an EC2 image 
  count = var.instances_per_subnet * length(data.terraform_remote_state.vpc.outputs.private_subnet_ids)
  instance_type = var.instance_type
  key_name               = var.key
  # get the subnet IDs from the remote state in the S3 bucket
  subnet_id              = data.terraform_remote_state.vpc.outputs.private_subnet_ids[count.index % length(data.terraform_remote_state.vpc.outputs.private_subnet_ids)]
  vpc_security_group_ids = data.terraform_remote_state.vpc.outputs.elb_web-sg_id # web server security group created by the VPC module
  tags = {
    Name          = "${var.environment}-Static_Web_Server"
    Stage         = "${var.environment}"
    Owner         = "${var.your_name}"
  }
}

“Count” is a resource argument that tells Terraform how many EC2 instances to create, and the length of the subnet list tells us how many subnets to spread them across. In this case, we have two private subnets and two instances per subnet, so the “count” configuration places four EC2 instances across the two private subnets (see the worked example below).
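A quick worked example of that arithmetic, using the defaults in this exercise:

# instances_per_subnet = 2 and two private subnets:
#   count            = 2 * 2 = 4 instances
#   count.index      = 0, 1, 2, 3
#   count.index % 2  = 0, 1, 0, 1   (instances alternate between the two subnets)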

Note: once again, we are using the remote state to obtain the private subnet information from the VPC module, by reading the outputs placed into the Terraform remote state S3 bucket with the “terraform_remote_state” data source.

variables.tf (for elb-web folder)

variable "instances_per_subnet" {
  description = "Number of EC2 instances in each private subnet"
  type        = number
  default     = 2
}

variable "instance_type" {
  description = "Type of EC2 instance to use"
  type        = string
  default     = "t2.micro"
}

variable "environment" {
  description = "User selects environment"
  type = string
  default = "Test"
}

variable "key" {
  type    = string
}

variable "your_name" {
  description = "Your Name?"
  type        = string
}

variable "ssh_location" {
  type        = string
  description = "My Public IP Address"
}

variable "controller_sg" {
  type = string
}

variable "lb_sg" {
  type = string
}

test.tfvars

your_name       = "Your Name"
ssh_location    = "1.2.3.4/32"
environment     = "Test"
key             = "Your EC2 key pair"

New Module and New Folder
Controller

Create and cd into a directory named “controller”. We will create three files: controller.tf, s3_policy.tf, and variables.tf

controller.tf

Note: We do not have to create or launch the controller for the load-balanced website to work. The controller (jump server) is handy if you want to SSH into one of the private servers for maintenance or troubleshooting. You don’t really need it, until you need it. hehe!

terraform {
  required_providers {
    aws = {
      source = "hashicorp/aws"
    }
  }
}

#------------------------- State terraform backend location---------------------
data "terraform_remote_state" "vpc" {
  backend = "s3" 
  config = {
    bucket = "Your bucket name"  # be sure to update with name of your bucket
    key    = "terraform.tfstate"
    region = "us-west-1"
  }
}

# --------------------- Determine region from backend data -------------------
provider "aws" {
  region = data.terraform_remote_state.vpc.outputs.aws_region
}

#--------- Get Ubuntu 20.04 AMI image (SSM Parameter data) -------------------
data "aws_ssm_parameter" "ubuntu-focal" {
  name = "/aws/service/canonical/ubuntu/server/20.04/stable/current/amd64/hvm/ebs-gp2/ami-id"
}


# Creating controller node
resource "aws_instance" "controller" {
  ami                    = data.aws_ssm_parameter.ubuntu-focal.value # from SSM Parameter
  instance_type          = var.instance_type
  subnet_id              = data.terraform_remote_state.vpc.outputs.public_subnet_ids[0]
  vpc_security_group_ids = data.terraform_remote_state.vpc.outputs.Controller-sg_id
  iam_instance_profile   = "${aws_iam_instance_profile.assume_role_profile.name}" 
  user_data              = file("bootstrap_controller.sh")
  private_ip             = "10.0.1.10"
  monitoring             = true
  key_name               = var.key

    tags = {
    Name          = "${var.environment}-Controller"
    Stage         = "${var.environment}"
    Owner         = "${var.your_name}"
  }
}

output "Controller" {
  value = [aws_instance.controller.public_ip]
}

s3_policy.tf

The S3 policy is not required for a jump server. However, I like to keep files for common server configuration maintenance (Ansible playbooks) in an S3 bucket so they can be applied to multiple servers, and an S3 policy allows our jump server (controller) access to that S3 bucket.

# ------------ Create the actual S3 read & copy files policy ----
resource "aws_iam_policy" "copy-policy" {
  name        = "S3_Copy_policy"
  description = "IAM policy to allow copy files from S3 bucket"

  policy = <<EOF
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:PutObject",
                "s3:GetObject",
                "s3:ListBucket"
            ],
      "Resource": ["arn:aws:s3:::S3-bucket-for-Ansible-Files",
                    "arn:aws:s3:::S3-bucket-for-Ansible-Files/*"]
    }
  ]
}
EOF
}

# ------------------ create assume role -----------------
resource "aws_iam_role" "assume-role" {
  name               = "assume-role"
  description        = "IAM policy that allows assume role"
  assume_role_policy = <<EOF
{
    "Version": "2012-10-17",
    "Statement": [
      {
        "Action": "sts:AssumeRole",
        "Principal": {"Service": "ec2.amazonaws.com"},
        "Effect": "Allow",
        "Sid": ""
      }
    ]
}
EOF
}
# ------------ attach the role to the policy ----------------
resource "aws_iam_role_policy_attachment" "assign-copy-policy" {
  role       = aws_iam_role.assume-role.name
  policy_arn = aws_iam_policy.copy-policy.arn
  depends_on = [aws_iam_policy.copy-policy]
}

# ------------ create a profile to be used by EC2 instance ----
resource "aws_iam_instance_profile" "assume_role_profile" {
  name = "assume_role_profile"
  role = aws_iam_role.assume-role.name
}

variables.tf

variable "key" {
  type    = string
  default = "EC2 key pair name"  #be sure to update with the name of your EC2 Key pair for your region
}
variable "instance_type" {
  description = "Type of EC2 instance to use"
  type        = string
  default     = "t2.micro"
}
variable "environment" {
  description = "User selects environment"
  type = string
  default = "Test"
}
variable "your_name" {
  description = "Your Name?"
  type = string
  default = "Your Name"
}

Provisioning

  1. Be sure to change the S3 bucket name in s3_policy.tf (the two Resource ARNs in the copy policy) to your S3 bucket name
  2. Be sure to change the test.tfvars in the VPC folder, variables of your choice
  3. Be sure to change the test.tfvars in the ELB-WEB folder, to variables of your choice
  4. Be sure to change the backend “s3” configuration (bucket, key, region) in vpc.tf, and the matching terraform_remote_state config in elb-web.tf and controller.tf, to point at your S3 bucket for the Terraform backend state
  5. In your terminal, go to the VPC folder and execute the following commands:
    1. terraform init
    2. terraform validate
    3. terraform apply -var-file=test.tfvars
  6. In your terminal, go to the elb-web folder and execute the following commands:
    1. terraform init
    2. terraform validate
    3. terraform apply -var-file=test.tfvars

That is it. We have launched a load-balanced static website with resilience across availability zones, and within each zone we have at least two web servers for high availability.

The controller (bastion host), can be launched at any time. Quite often, I’ll launch the controller to troubleshoot a test deployment.

It goes without saying, but it has to be said anyway. This is not for production!

All public websites should have some type of application firewall in between the Web Server and its internet connection!

All websites should have monitoring and a method to scrape log events to detect potential problems with the deployment.

It is a good idea to remove the EC2 instances and the ELB when you are finished with the exercise, so as not to incur costs.

Create an AWS website & Bastion Host with Terraform

STATIC WEB SERVER AND A bastion host (jump server)

Requirements & installation of Terraform

The following must be installed and configured for this exercise:

Install AWS CLI

Configure AWS CLI

Install Terraform

Note: You don't have to install the requirements on your desktop. You can use a virtual desktop for your development environment with tools like Oracle VirtualBox, VMware Workstation or Player, VMware Fusion or Parallels on a Mac, or perhaps an AWS WorkSpaces or AWS Cloud9 environment.

This example creates a static web server and a controller (otherwise called a bastion host or even a jump server). I like to call it a controller because, in later exercises, I will use the controller to execute an Ansible configuration of public and private AWS EC2 servers. For now, though, this exercise keeps it simple and creates a jump server (bastion host):

  • It demonstrates restricting SSH & HTTP traffic.
    • In the case of the web server, it allows SSH only from the controller (jump server)
    • In the case of the web server, it allows HTTP only from My Public IP address.
    • In the case of the controller, it allows SSH only from My Public IP address.
  • And this example creates a very real static webserver.

It is a common practice to put web servers into a private network and then provide a reverse proxy or load balancer between the web servers and the internet. Private servers cannot be directly accessed from the internet. To access a private server for administration, it is common to use a bastion host (aka jump server): you SSH to the jump server, and from the jump server you SSH into the private servers.

This exercise uses only one public subnet and technically doesn’t require a bastion host (aka jump server) for server administration. Creating a VPC with a private network requires a NAT gateway or NAT instances placed into a public subnet so that the private subnet can pull updates or download software from the internet. A NAT gateway will incur costs in AWS even with a Free Tier plan, so I’m writing this code to give an example of a jump server that can be used in a Free Tier exercise at no cost.


The code for this VPC is the same as the previous exercise, and the approach is explained there. You can copy the contents of the previous exercise and make a few changes to each file. There are two extra files in this exercise: the S3 policy file and the files for the static website.

Or you can clone the code for this exercise from my Github repository.


VPC.tf

# --------- Setup the VPC -------------------------
resource "aws_vpc" "my-vpc" {
  cidr_block           = var.vpc_cidr
  enable_dns_support   = true
  enable_dns_hostnames = true
  tags = {
    Name  = "My VPC"
    Stage = "Test"
  }
}
# --------- Setup an Internet Gateway --------------
resource "aws_internet_gateway" "my-igw" {
  vpc_id = aws_vpc.my-vpc.id
  tags = {
    Name = "My IGW"
  }
}
# --------- Setup a public subnet -------------------
resource "aws_subnet" "public-1" {
  vpc_id                  = aws_vpc.my-vpc.id
  map_public_ip_on_launch = true
  availability_zone       = var.public_availability_zone
  cidr_block              = var.public_subnet_cidr

  tags = {
    Name  = "Public-Subnet-1"
    Stage ="Test"
  }
}

# -------- Setup a route to the Internet ----------------
resource "aws_route_table" "public-route" {
  vpc_id = aws_vpc.my-vpc.id

  route {
    cidr_block = "0.0.0.0/0"
    gateway_id = aws_internet_gateway.my-igw.id
  }
  tags = {
    Name = "Public-Route"
  }
}
# ---------- associate internet route to public subnet ----
resource "aws_route_table_association" "public-1-assoc" {
  subnet_id      = aws_subnet.public-1.id
  route_table_id = aws_route_table.public-route.id
}

Variables.tf

The code for variables.tf is almost the same as the previous exercise. The change to variables.tf is the addition of a variable for an AWS key pair and a variable for a Public IP Address.

You will need to configure the “ssh_location” variable with an IP address: your public IP address. If you don’t know your public IP address, open a browser and type “what is my IP address” into the address bar; the browser will then show your public IP address. Set the variable to your IP address with a /32 subnet mask (i.e. “1.2.3.4/32”).

This exercise provides a connection to the new EC2 instance named “controller” using SSH. So be sure to create an AWS EC2 Key Pair within the region you will be using for this exercise, and update the variable “key” with your existing EC2 key pair name (i.e. an EC2 Key Pair named testkey.pem becomes “testkey”). If you prefer not to edit the defaults in place, both values can also be passed on the command line, as shown in the sketch after the variable listing below.

variable "region" {
    type=string
    description="AWS region for placement of VPC"
    default="us-west-1"
}
variable "vpc_cidr" {
    type=string
    default="10.0.0.0/16"
}
variable "public_subnet_cidr" {
    type=string
    default="10.0.1.0/24"
}
variable "public_availability_zone"{
    type = string
    default="us-west-1a"
}
variable "instance_type" {
    type = string
    default = "t2.micro"
}
variable "key" {
  type    = string
  default = "Your AWS Key Name for the region"  
}
variable "ssh_location" {
  type        = string
  description = "My Public IP Address"
  default     = "1.2.3.4/32"
}
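
If you would rather not edit the defaults in place, the key pair name and your public IP address can also be passed at apply time. A minimal sketch, assuming a key pair named "testkey" and a placeholder IP address:

# Quick way to find your public IP from the terminal
curl https://checkip.amazonaws.com

# Pass both values as variables instead of editing variables.tf
terraform apply -var="key=testkey" -var="ssh_location=203.0.113.25/32"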

Main.tf

The code for main.tf in this exercise is almost the same as the previous exercise, except we are adding an EC2 instance named controller. Take note of the controller’s security group, which is using a new security group called “controller-sg.” We’ll discuss that security group in the Security_groups.tf discussion below.

Another change is the outputs. We are adding the web server’s “private_ip” to the outputs because we’ll need the private IP to SSH into the web server by connecting to the controller and jumping from the controller into the web server. We also output the controller’s public IP address.

Also, the controller has its own “bootstrap_controller.sh” file. It doesn’t do much; it just updates the OS and apt packages upon launching the instance.

The “bootstrap_web.sh” is different from the first exercise. It runs an update and upgrade of the OS and apt packages upon launching the instance, installs Apache and the AWS CLI, and copies the static website files I’ve created from an S3 bucket into Apache’s folder /var/www/html (a sketch of this script appears after the main.tf listing below).

provider "aws" {
  region = var.region
}
#Get Linux Ubuntu using SSM Parameter 
data "aws_ssm_parameter" "ubuntu-focal" {
  name = "/aws/service/canonical/ubuntu/server/20.04/stable/current/amd64/hvm/ebs-gp2/ami-id"
}
# Creating Web server
resource "aws_instance" "web" {
  ami                    = data.aws_ssm_parameter.ubuntu-focal.value
  instance_type          = var.instance_type
  subnet_id              = aws_subnet.public-1.id
  vpc_security_group_ids = [aws_security_group.web-sg.id]
  iam_instance_profile   = aws_iam_instance_profile.assume_role_profile.name
  key_name               = var.key
  user_data              = file("bootstrap_web.sh")
  tags = {
    Name  = "Basic-Web-Server"
    Stage = "Test"
  }
}
# Creating controller node
resource "aws_instance" "controller" {
  ami                    = data.aws_ssm_parameter.ubuntu-focal.value
  instance_type          = var.instance_type
  subnet_id              = aws_subnet.public-1.id
  vpc_security_group_ids = [aws_security_group.controller-sg.id]
  user_data              = file("bootstrap_controller.sh")
  key_name               = var.key
  tags = {
    Name = "Controller"
    Stage = "Test"
  }
}
output "web" {
  value = [aws_instance.web.public_ip, aws_instance.web.private_ip]
}
output "Controller" {
  value = [aws_instance.controller.public_ip]
}
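
The bootstrap scripts themselves are not listed here (they are in the repository), but a minimal sketch of what bootstrap_web.sh does, assuming a placeholder bucket name, looks roughly like this:

#!/bin/bash
# Sketch of bootstrap_web.sh (assumed content -- see the repository for the actual file).
# User data runs as root, so sudo is not required.
apt-get update -y && apt-get upgrade -y    # update the OS and apt packages
apt-get install -y apache2 awscli          # install Apache and the AWS CLI
# Copy the static website files from S3 into Apache's document root.
# Replace "my-website-bucket" with the name of your own bucket.
aws s3 cp s3://my-website-bucket/ /var/www/html/ --recursive
systemctl enable --now apache2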

Security_Groups.tf

Our code creates two security groups, “web-sg” and “controller-sg.”

The first security group, “web-sg,” allows HTTP into the web server, but only from your public IP address. The code also establishes a rule that enables SSH into the web server, but only from the controller: you SSH from your IP address to the controller and then jump from the controller to the web server. This makes our web server a bit more secure in any environment because it restricts who can establish an admin session on the web server and how.

Take note of the unique method of controlling ingress within the web security group “web-sg.” In the SSH ingress block, I have replaced “cidr_blocks” with “security_groups.” This states that any resource assigned to the security group “controller-sg” is allowed an ingress connection (in this case, SSH).

Using security_groups instead of a “cidr_block” as an ingress rule provides an excellent method of controlling ingress to our EC2 instances. As you know, assigning a “cidr_block” sets a group of IP addresses. Most published code examples show an ingress of 0.0.0.0/0, allowing anyone or any device inbound access. Opening inbound traffic from the entire internet into our test environment might be a very convenient way of writing code examples; still, it most certainly is not a good practice.

As stated earlier, both EC2 instances in this exercise are in a public subnet and do not require a jump server. I prefer to write exercises that simulate potential real-world examples as early in the coding practice as reasonably possible. One of those practices is using a security group as ingress to web servers instead of a “cidr_block.”


resource "aws_security_group" "web-sg" {
  vpc_id      = aws_vpc.my-vpc.id
  description = "Allows HTTP"
  ingress {
    from_port   = 22
    to_port     = 22
    protocol    = "tcp"
    security_groups  = ["${aws_security_group.controller-sg.id}"]
  }
  ingress {
    from_port   = 80
    to_port     = 80
    protocol    = "tcp"
    cidr_blocks = ["${var.ssh_location}"]
  }
  egress {
    from_port   = 0
    to_port     = 0
    protocol    = -1
    cidr_blocks = ["0.0.0.0/0"]
  }
  tags = {
    name  = "SecurityGroup-Web"
    Stage = "Test"
  }
}

resource "aws_security_group" "controller-sg" {
  vpc_id      = aws_vpc.my-vpc.id
  description = "Allows SSH from MyIP"
  ingress {
    from_port   = 22
    to_port     = 22
    protocol    = "tcp"
    cidr_blocks = ["${var.ssh_location}"]
  }
  egress {
    from_port   = 0
    to_port     = 0
    protocol    = -1
    cidr_blocks = ["0.0.0.0/0"]
  }
  tags = {
    Name  = "SecurityGroup-SSH"
    Stage = "Test"
  }
}

Using S3 bucket repository for website files

AWS S3 is a great place to store standard content for a team to utilize as shared storage. Therefore, we will create an S3 bucket to hold our static website files, and this code will copy the files from that bucket into our web server’s content folder.

So we’ll copy the website files into an S3 bucket and create an instance profile that allows the EC2 instance to read and copy files from S3.

Create an S3 bucket using AWS CLI

We need an S3 bucket to hold the website files. Go ahead and create a bucket using the AWS Management Console, or use the AWS command-line interface to create a new bucket.
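
If you take the CLI route, creating the bucket is a single command. Bucket names are globally unique, so substitute your own name for the placeholder used here:

# Create the bucket in your working region (the name is a placeholder)
aws s3 mb s3://my-website-bucket --region us-west-1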

Github sample website files

My Github repository has a file called “Static_Website_files.zip.” You are most certainly invited to use it for your test website, or create your own static website files. Just know that you’ll, of course, need to unarchive the zip file before using its contents.

s3_policy.tf

resource "aws_iam_policy" "copy-policy" {
  name        = "copy-anible-files"
  description = "IAM policy to allow copy files from S3 bucket"

  policy = <<EOF
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:PutObject",
                "s3:GetObject",
                "s3:ListBucket"
            ],
      "Resource": ["arn:aws:s3:::change the name to your S3 bucket name",
                    "arn:aws:s3:::change the name to your S3 bucket name/*"]
    }
  ]
}
EOF
}


resource "aws_iam_role" "assume-role" {
  name               = "assume-role"
  description        = "IAM policy that allows assume role"
  assume_role_policy = <<EOF
{
    "Version": "2012-10-17",
    "Statement": [
      {
        "Action": "sts:AssumeRole",
        "Principal": {"Service": "ec2.amazonaws.com"},
        "Effect": "Allow",
        "Sid": ""
      }
    ]
}
EOF
}

resource "aws_iam_role_policy_attachment" "assign-copy-policy" {
  role       = aws_iam_role.assume-role.name
  policy_arn = aws_iam_policy.copy-policy.arn
  depends_on = [aws_iam_policy.copy-policy]
}


resource "aws_iam_instance_profile" "assume_role_profile" {
  name = "assume_role_profile"
  role = aws_iam_role.assume-role.name
}

Copy website files into the new S3 bucket

The AWS command-line interface is a quick way to get the files into the bucket. I have a file on Github that you can download and use as the files for the website. Download and unarchive the file “Static_Website_files.zip” into a temporary folder and use the AWS S3 copy command (shown below) to copy the files into the new bucket, or use the AWS Management Console to copy the files into the bucket. Once the files are in S3, the bootstrap user data of the EC2 instance “web” will automatically install the website files from your bucket into the Apache folder /var/www/html.
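
A rough example of the copy commands, assuming the unzipped files sit in a local folder named static_website and reusing the placeholder bucket name from earlier:

# Recursively copy the website files into the bucket, then list them to verify
aws s3 cp ./static_website s3://my-website-bucket/ --recursive
aws s3 ls s3://my-website-bucket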

Configuration – reminders

Be sure to configure the following in variables.tf
  • Place your public IP address as the default for the variable “ssh_location.”
  • Place your regional EC2 Key Pair name as the default for the variable “key.”
Be sure to configure the S3 bucket name in s3_policy.tf
  • Don’t forget to create an S3 bucket and place the static website files into the bucket
  • Don’t forget to place the ARN of your S3 bucket into s3_policy.tf

Launching the VPC and Web Server

After installing the requisite software and files and configuring the variables, run the following commands in a terminal:

  • terraform init
    • Causes Terraform to install the necessary provider plugins, in this case to support AWS provisioning
  • terraform validate
    • Validates the AWS provisioning code
  • terraform apply
    • Performs the AWS provisioning of the VPC and Web Server

After Terraform finishes provisioning the new VPC, security groups, and web server, it will output the public IP address of the new web server in the terminal window. Go ahead and copy the IP address, paste it into a browser, and you should see something like the image below:

Once you have finished with this example, run the following command:

  • terraform destroy (to remove the VPC and Web Server)

It goes without saying, but it has to be said anyway. This is not for production!

All public websites should have some type of application firewall in between the Web Server and its internet connection!

It is a good idea to remove an EC2 instance when you are finished with it, so as not to incur costs for leaving an instance running.


Creating a VPC manually

Two public and two private subnets

Step One – Create VPC

  1. Sign into AWS Console https://console.aws.amazon.com/vpc/
  2. Select your choice of an AWS region
    • e.g. I’m from Los Angeles and choose Northern California (us-west-1) as my region of choice
  3. Click on VPCs (under Resources by Region)
  4. In VPC settings, type in the following:
    • Name Tag = “New-VPC”
    • IPv4 CIDR block = “10.0.0.0/16”
    • Keep the defaults for the rest of the VPC form
    • Click Create VPC
    • After the New-VPC is created click on the New-VPC ID to see the details of the VPC
    • Notice that by default DNS hostnames are disabled. Enabling them is not strictly necessary; however, many tutorials rely on DNS hostnames, so it might be a good idea to change the setting to “enabled”
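
For reference, the same step can be done with the AWS CLI. A rough sketch, where the VPC ID returned by the first command stands in as a placeholder in the ones that follow:

# Create the VPC, tag it with a Name, and enable DNS hostnames
aws ec2 create-vpc --cidr-block 10.0.0.0/16
aws ec2 create-tags --resources vpc-0123456789abcdef0 --tags Key=Name,Value=New-VPC
aws ec2 modify-vpc-attribute --vpc-id vpc-0123456789abcdef0 --enable-dns-hostnames "{\"Value\":true}"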

Step One (continued) – Creating the Subnets

Step 1a – Public Subnet A

  1. On the left Navigation Pane – find and choose Subnets
  2. Select Create subnet
  3. Under VPC – click “Select a VPC” and choose the new VPC created, called “New-VPC”
  4. Under Subnet settings
    • Type in a subnet name, i.e. “New-Public-Subnet-A”
    • Type in a CIDR block for the IP range you would like to create for this subnet:
      • i.e. 10.0.1.0/25
  5. Under Availability Zone
    • Note: In region US-WEST-1 there are only two availability zones, us-west-1a and us-west-1c
    • Choose us-west-1a
  6. Keep the defaults for the rest of the Subnet Form
  7. Click Create Subnet

Step 1b – Public Subnet B

  1. On the left Navigation Pane – find and choose Subnets
  2. Select Create subnet
  3. Under VPC – click “Select a VPC” and choose the new VPC created, called “New-VPC”
  4. Under Subnet settings
    • Type in a subnet name, i.e. “New-Public-Subnet-B”
    • Type in a CIDR block for the IP range you would like to create for this subnet:
      • i.e. 10.0.1.128/25
  5. Under Availability Zone
    • Note: Now we will choose an availability zone other than the one selected for Public-Subnet-A
    • Choose us-west-1c
  6. Keep the defaults for the rest of the Subnet Form
  7. Click Create Subnet

Step 1c – Private Subnet A

  1. On the left Navigation Pane – find and choose Subnets
  2. Select Create subnet
  3. Under VPC – click “Select a VPC” and choose the new VPC created, called “New-VPC”
  4. Under Subnet settings
    • Type in a subnet name, i.e. “New-Private-Subnet-A”
    • Type in a CIDR block for the IP range you would like to create for this subnet:
      • i.e. 10.0.2.0/25
  5. Under Availability Zone
    • Note: In region US-WEST-1 there are only two availability zones, us-west-1a and us-west-1c
    • Choose us-west-1a
  6. Keep the defaults for the rest of the Subnet Form
  7. Click Create Subnet

Step 1d – Private Subnet B

  1. On the left Navigation Pane – find and choose Subnets
  2. Select Create subnet
  3. Under VPC – click “Select a VPC” and choose the new VPC created, called “New-VPC”
  4. Under Subnet settings
    • Type in a subnet name, i.e. “New-Private-Subnet-B”
    • Type in a CIDR block for the IP range you would like to create for this subnet:
      • i.e. 10.0.2.128/25 (128 IP addresses available for this subnet)
  5. Under Availability Zone
    • Note: Now we will choose an availability zone other than the one selected for Private-Subnet-A
    • Choose us-west-1c
  6. Keep the defaults for the rest of the Subnet Form
  7. Click Create Subnet
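
For reference, the CLI equivalent for any one of the four subnets looks roughly like this (shown for New-Public-Subnet-A; repeat with the other names, CIDR blocks, and availability zones, substituting your own VPC and subnet IDs for the placeholders):

# Create the subnet in the new VPC, then tag it with a Name
aws ec2 create-subnet --vpc-id vpc-0123456789abcdef0 --cidr-block 10.0.1.0/25 --availability-zone us-west-1a
aws ec2 create-tags --resources subnet-0aaa1111aaa1111aa --tags Key=Name,Value=New-Public-Subnet-A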

Check it out – We have a new VPC with four subnets

Hurray!! We now have a New-VPC and four subnets. BUT let’s take a closer look at our subnet communications, because we are not done yet: we still need to lay out the communication rules for our subnets.

Select the ID of any of the subnets, and the AWS console will show all the details for the selected subnet. Notice that a route table and a network ACL were automatically created for the new subnet. The route table allows routes between all the subnets via the local 10.0.0.0/16 route, but there is no route to the internet yet. So we have subnets that can talk to each other but cannot talk to the rest of the world. Guess we aren’t done yet.

The next steps are equally important. We need an internet gateway to allow inbound/outbound traffic for our public networks, and another gateway to allow outbound traffic for our private networks.

As well, we need routing and firewall rules. So we have to install an Internet gateway, a NAT gateway (or NAT instances), update the routing tables to/from the gateways, and create security groups to allow inbound traffic such as SSH, HTTP, and HTTPS.

Step Two – Setup an Internet Gateway

  • If you don’t have it open already, go to the AWS VPC console
  • In the left hand navigation pane, select Internet Gateways
  • Then click Create Internet Gateway
  • Under Name Tag, give it a name, ie. New-Internet-Gateway
  • Keep the default settings for the rest of the form
  • Click Create Internet Gateway
  • The console will show that the gateway has been created, and will show the ID of the gateway
  • In the upper right hand corner, click Attach to a VPC
  • In the VPC box, under available VPCs, click on Select a VPC and your New-VPC will automatically be displayed. Click on your New-VPC to select it
  • Then click Attach internet gateway
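
For reference, the CLI equivalent of this step, with placeholder gateway and VPC IDs:

# Create the internet gateway, tag it, and attach it to the VPC
aws ec2 create-internet-gateway
aws ec2 create-tags --resources igw-0123456789abcdef0 --tags Key=Name,Value=New-Internet-Gateway
aws ec2 attach-internet-gateway --internet-gateway-id igw-0123456789abcdef0 --vpc-id vpc-0123456789abcdef0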

Step Three – Update the Internet routing

  • If you don’t have it open already, go to the AWS VPC console and select VPCs, then select your “New-VPC” by clicking on the VPC ID of “New-VPC”
  • Then click the route table ID shown under the Main route table (this will select the route table for your new vpc)
  • You should now see the details of a route table for your new VPC. Click the Edit Routes tab
  • Click Add route
  • Under Destination, type 0.0.0.0/0
  • Under Target, click the down arrow and your new Internet Gateway should automatically be displayed. Select your new internet gateway
  • Click Save routes
  • Close the screen that pops up
  • Now find and click on the Subnet Associations Tab
    • Notice: The table states that you have no subnet associations and therefore:
      • The following subnets have not been explicitly associated with any route tables and are therefore associated with the main route table:
  • So we need to make sure we associate the public subnets with this route table (not the private subnets, we’ll fix them in just a bit)
  • Click on Edit Subnet Associations button
  • Select New-Public-Subnet-A and New-Public-Subnet-B
  • Then click Save
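
For reference, the CLI equivalent of this step, with placeholder route table, gateway, and subnet IDs:

# Add a default route to the internet gateway in the main route table
aws ec2 create-route --route-table-id rtb-0123456789abcdef0 --destination-cidr-block 0.0.0.0/0 --gateway-id igw-0123456789abcdef0
# Explicitly associate the two public subnets with that route table
aws ec2 associate-route-table --route-table-id rtb-0123456789abcdef0 --subnet-id subnet-0aaa1111aaa1111aa
aws ec2 associate-route-table --route-table-id rtb-0123456789abcdef0 --subnet-id subnet-0bbb2222bbb2222bb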

Step Four – Create a NAT Gateway

CAUTION: So far, nothing in the first three steps incurs any charges. However, for some strange reason, a NAT Gateway (unlike the Internet Gateway) IS NOT FREE! YOU WILL BE CHARGED THE MOMENT YOU CREATE A NAT GATEWAY. So don’t leave the NAT Gateway running for very long unless you are willing to pay about $1.00 or more per day; in the US regions it costs roughly a nickel per hour.

An alternative is to use a NAT instance (an EC2 instance specially configured as a NAT). AWS Free Tier allows 750 hours of t2.micro run time per month, hence a NAT instance is a good choice in a Free Tier account. The creation of a NAT instance will be covered as an alternative below. That said, a NAT Gateway is a managed AWS service that is scalable and more efficient at routing traffic to the internet, and in my opinion it is worth a few cents to leave it running for a few hours.

  • Go to the AWS VPC console
  • In the left hand navigation pane select NAT Gateways
  • Click Create NAT Gateway
  • In the NAT gateway settings under Name type New-NAT-Gateway
  • Under Subnet, click Select a subnet, and select New-Public-Subnet-A
  • Alongside the Elastic IP allocation ID field is a button, Allocate Elastic IP; click that button and it will automatically allocate an Elastic IP
    • Caution: if you delete a NAT Gateway, its Elastic IP Address might still exist but not be associated.
    • AWS does NOT charge for an Elastic IP address that is allocated and associated, therefore during the lifetime of your NAT gateway, there is no extra charge for an Elastic IP address
      • But, AWS DOES CHARGE for an Elastic IP address that IS NOT associated. If you delete the NAT gateway, make sure you don’t have an Elastic IP address just hanging out by itself with no association (it will cost you money).
  • Click Create NAT gateway
  • Ideally, in a production VPC design, we would repeat the creation of a NAT gateway in the other public subnet (New-Public-Subnet-B). However, for the purposes of this tutorial, and because most of us will be testing with a Free Tier AWS account, a single NAT gateway will suffice.
    • A second NAT gateway in another availability zone gives our architecture resiliency: if an event in one availability zone forces a service outage for resources within that zone, the NAT gateway in the other zone will still be working.
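
For reference, the CLI equivalent of this step, with placeholder subnet and allocation IDs:

# Allocate an Elastic IP, then create the NAT gateway in New-Public-Subnet-A
aws ec2 allocate-address --domain vpc
aws ec2 create-nat-gateway --subnet-id subnet-0aaa1111aaa1111aa --allocation-id eipalloc-0123456789abcdef0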

Step Five – Create a route table for Private Subnets via our new NAT gateway

  • Go to the AWS VPC console
  • In the left hand navigation pane select Route Tables
  • Click Create Route Table button
  • Type Private Route Table for Name Tag
  • For VPC, click the down arrow and Select our New-VPC
  • Click Create
  • Click on the route table ID in the screen that pops up
  • Click the Routes tab
  • Click Edit Routes
  • Click Add route
  • Type 0.0.0.0/0 for the Destination
  • Under Target click the down arrow and select our New-NAT-Gateway
  • Click Save Routes
  • Close the screen that pops up
  • Click the Subnet Associations tab
  • Click the Edit Subnet Associations button
  • Select New-Private-Subnet-A and New-Private-Subnet-B
  • Then click Save
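
For reference, the CLI equivalent of this step, with placeholder route table, NAT gateway, and subnet IDs:

# Create the private route table, send 0.0.0.0/0 through the NAT gateway,
# and associate the two private subnets with it
aws ec2 create-route-table --vpc-id vpc-0123456789abcdef0
aws ec2 create-route --route-table-id rtb-0ccc3333ccc3333cc --destination-cidr-block 0.0.0.0/0 --nat-gateway-id nat-0123456789abcdef0
aws ec2 associate-route-table --route-table-id rtb-0ccc3333ccc3333cc --subnet-id subnet-0ddd4444ddd4444dd
aws ec2 associate-route-table --route-table-id rtb-0ccc3333ccc3333cc --subnet-id subnet-0eee5555eee5555ee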

A Working VPC with two public and two private subnets is now operational

Optional – Testing the new VPC with a bastion host

  • See the page Create Security group and set up an “allow SSH” security group
  • See the page Create an EC2 instance and set up an EC2 instance in either one of the public subnets with a public IP address; assign it the “allow SSH” security group created in the first step, and give the new EC2 instance a tag Key=”Name”, Value=”Bastion Host“.
    • Note: bastion host is a server whose purpose is to provide access to a private network from an external network, such as the Internet.
  • Jot down the Private IP address of the new EC2 instance (the private IP address will be used in the next step)
  • Create another new security group that allows SSH only from the private IP address of the bastion host created above, and name it “SSH-Bastion”
  • Create another EC2 Instance in a private subnet, without a public IP Address.
    • Any server installed into a private subnet should not have a public IP address. Without a public IP address, we eliminate the ability to connect to the EC2 instance from the internet (hence why it is called “private”)
    • We need another avenue to connect to a private server, which is why we created the bastion host. We’ll connect to a bastion host, and then SSH from the Bastion host to a private server
  • Ideally by now, you have created an AWS Key Pair, for example “testkey.pem”, and you have already copied the key pair to an appropriate folder. This instruction assumes that you have the key located in the hidden folder ~/.ssh.
    • At the command line, type in:
ssh-add ~/.ssh/testkey.pem
  • Note: the above line assumes the location of your private key; change the path if your private key is located somewhere besides the ~/.ssh folder
    • ssh-add is a command for adding SSH private keys into the SSH authentication agent for implementing single sign-on with SSH. The agent process is called ssh-agent
    • Note: this allows us to connect to bastion host, and then from the bastion host connect to a private server (without having to copy our private keys to the bastion host)
  • Now we connect to the Bastion host using the following command
ssh -A ip-address

Where “ip-address” is the public IP address of the bastion host (the -A flag forwards your SSH agent so your key is available on the bastion host)

  • And now we connect to a private server, once connected to the bastion host
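
For example (the addresses are placeholders; the default user is ubuntu on Ubuntu AMIs and ec2-user on Amazon Linux):

# From your machine, with the key already loaded into ssh-agent:
ssh -A ubuntu@<bastion-public-ip>
# Then, from the bastion host, hop to the private server's private IP:
ssh ubuntu@<private-server-ip>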

CleanUP

Once finished with this exercise, be sure to delete the following. You do not want to leave the resources from this tutorial running, or they will consume your allocation of Free Tier hours; the NAT Gateway especially is not free within a Free Tier account.

Note: If you did use a NAT gateway, it will cost you less than a dollar (at today’s pricing in the us-west region) to run it for a few hours

  • Terminate the EC2 instances
  • Delete the new Security Groups
    • Note: It’s OK to leave security groups in place; security groups are free in AWS
  • Delete the NAT Gateway (especially remember to delete the NAT gateway, it is not free)
  • Release all Elastic IP addresses
  • Delete the VPC
    • Note: It’s OK to leave a VPC with subnets in place
    • A VPC and its subnets are Free on any AWS account