
Requirements & installation of Terraform
The following must be installed and configured for this exercise:
- Terraform
- The AWS CLI, configured with credentials for your AWS account
- An AWS EC2 key pair in the region you will use
- An SSH client for connecting to the instances
Note: You don't have to install the requirements on your desktop. You can set up your development environment on a virtual desktop using tools such as Oracle VirtualBox, VMware Workstation or Player, or VMware Fusion or Parallels on a Mac. Perhaps even an AWS WorkSpaces or AWS Cloud9 environment.
This example creates a static web server and a controller (otherwise called a bastion host or jump server). I like to call it a controller because, in later exercises, I will use it to run an Ansible configuration of public and private AWS EC2 servers. For now, though, this exercise keeps it simple and creates a jump server (bastion host):
- It demonstrates restricting SSH and HTTP traffic.
- The web server allows SSH only from the controller (jump server).
- The web server allows HTTP only from your public IP address.
- The controller allows SSH only from your public IP address.
- And the example creates a working static web server.
It is common practice to put web servers into a private network and provide a reverse proxy or load balancer between the web server and the internet. Private servers cannot be directly accessed from the internet. To access a private server for administration, it is common to use a bastion host (aka jump server): you SSH to the jump server and, from the jump server, SSH into the private servers.
This exercise uses only one public subnet and technically doesn’t require a bastion host (aka jump server) for server administration. Creating a VPC with a private subnet requires a NAT gateway or NAT instance placed into a public subnet so that the private subnet can pull updates or download software from the internet, and a NAT gateway incurs costs in AWS even on a Free Tier plan. So I’m writing this code to give an example of a jump server in a Free Tier exercise that incurs no cost.
The code for this VPC is the same as in the previous exercise, where it is explained in detail. You can copy the contents of the previous exercise and make a few changes to each file. There are two extra files in this exercise: the S3 policy file and the files for the static website.
Or you can clone the code for this exercise from my GitHub repository.
VPC.tf
# --------- Setup the VPC -------------------------
resource "aws_vpc" "my-vpc" {
  cidr_block           = var.vpc_cidr
  enable_dns_support   = true
  enable_dns_hostnames = true

  tags = {
    Name  = "My VPC"
    Stage = "Test"
  }
}

# --------- Setup an Internet Gateway --------------
resource "aws_internet_gateway" "my-igw" {
  vpc_id = aws_vpc.my-vpc.id

  tags = {
    Name = "My IGW"
  }
}

# --------- Setup a public subnet -------------------
resource "aws_subnet" "public-1" {
  vpc_id                  = aws_vpc.my-vpc.id
  map_public_ip_on_launch = true
  availability_zone       = var.public_availability_zone
  cidr_block              = var.public_subnet_cidr

  tags = {
    Name  = "Public-Subnet-1"
    Stage = "Test"
  }
}

# -------- Setup a route to the Internet ----------------
resource "aws_route_table" "public-route" {
  vpc_id = aws_vpc.my-vpc.id

  route {
    cidr_block = "0.0.0.0/0"
    gateway_id = aws_internet_gateway.my-igw.id
  }

  tags = {
    Name = "Public-Route"
  }
}

# ---------- associate internet route to public subnet ----
resource "aws_route_table_association" "public-1-assoc" {
  subnet_id      = aws_subnet.public-1.id
  route_table_id = aws_route_table.public-route.id
}
Variables.tf
The code for variables.tf is almost the same as in the previous exercise. The changes are the addition of a variable for an AWS key pair and a variable for your public IP address.
You will need to configure “ssh_location” with your public IP address. If you don’t know your public IP address, open a browser and search for “what is my IP address”; the results will show your public IP address. Set the variable to your IP address with a /32 subnet mask (e.g., “1.2.3.4/32”).
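Alternatively, from a terminal, AWS’s checkip endpoint will echo your public IP address:

# Prints your public IP; add the /32 yourself when setting the variable
curl https://checkip.amazonaws.com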
This exercise connects to the new EC2 instance named “controller” using SSH, so be sure to create an AWS EC2 key pair within the region you will be using for this exercise, and update the variable “key” with your key pair name (e.g., an EC2 key pair file named testkey.pem means the key pair name is “testkey”).
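If you don’t already have a key pair in the region, one way to create one is with the AWS CLI. A quick sketch, where “testkey” is a placeholder name and the region matches the variables below:

# Create a key pair and save the private key locally
aws ec2 create-key-pair --key-name testkey --region us-west-1 \
  --query 'KeyMaterial' --output text > testkey.pem
chmod 400 testkey.pem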
variable "region" {
type=string
description="AWS region for placement of VPC"
default="us-west-1"
}
variable "vpc_cidr" {
type=string
default="10.0.0.0/16"
}
variable "public_subnet_cidr" {
type=string
default="10.0.1.0/24"
}
variable "public_availability_zone"{
type = string
default="us-west-1a"
}
variable "instance_type" {
type = string
default = "t2.micro"
}
variable "key" {
type = string
default = "Your AWS Key Name for the region"
}
variable "ssh_location" {
type = string
description = "My Public IP Address"
default = "1.2.3.4/32"
}
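If you prefer not to edit the defaults directly, Terraform will also automatically load values from a terraform.tfvars file. A minimal sketch, where the key name and IP address are placeholders:

# terraform.tfvars
key          = "testkey"
ssh_location = "1.2.3.4/32"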
Main.tf
The code for main.tf in this exercise is almost the same as in the previous exercise, except we are adding an EC2 instance named controller. Take note of the controller’s security group assignment, which uses a new security group called “controller-sg.” We’ll discuss that security group in the Security_Groups.tf discussion below.
Another change is the outputs. We add the web server’s “private_ip” to the outputs because we’ll need the private IP to SSH into the web server by connecting to the controller and jumping from it. We also add an output for the controller’s public IP address.
Also, the controller has its own “bootstrap_controller.sh” file. It doesn’t do much; it just updates the OS and apt packages when the instance launches.
The “bootstrap_web.sh” is different from the first exercise. It runs an update and upgrade of the OS and apt packages when the instance launches, and it also installs Apache and the AWS CLI and copies the static website files from an S3 bucket into Apache’s document root, /var/www/html.
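For reference, here is a minimal sketch of what a bootstrap_web.sh along these lines can look like; it assumes Ubuntu 20.04, and “your-bucket-name” is a placeholder for the bucket you create later in this exercise:

#!/bin/bash
# Update and upgrade the OS and apt packages
apt-get update -y && apt-get upgrade -y
# Install Apache and the AWS CLI
apt-get install -y apache2 awscli
# Copy the static website files from the S3 bucket (placeholder name)
# into Apache's document root
aws s3 cp s3://your-bucket-name /var/www/html --recursive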
provider "aws" {
region = var.region
}
#Get Linux Ubuntu using SSM Parameter
data "aws_ssm_parameter" "ubuntu-focal" {
name = "/aws/service/canonical/ubuntu/server/20.04/stable/current/amd64/hvm/ebs-gp2/ami-id"
}
# Creating Web server
resource "aws_instance" "web" {
ami = data.aws_ssm_parameter.ubuntu-focal.value
instance_type = var.instance_type
subnet_id = aws_subnet.public-1.id
vpc_security_group_ids = ["${aws_security_group.web.id}"]
iam_instance_profile = "${aws_iam_instance_profile.assume_role_profile.name}"
key_name = var.key
user_data = file("bootstrap_web.sh")
tags = {
Name = "Basic-Web-Server"
Stage = "Test"
}
}
# Creating controller node
resource "aws_instance" "controller" {
ami = data.aws_ssm_parameter.ubuntu-focal.value
instance_type = var.instance_type
subnet_id = aws_subnet.public-1.id
vpc_security_group_ids = ["${aws_security_group.controller.id}"]
user_data = file("bootstrap_controller.sh")
key_name = var.key
tags = {
Name = "Controller"
Stage = "Test"
}
}
output "web" {
value = [aws_instance.web.public_ip, aws_instance.web.private_ip]
}
output "Controller" {
value = [aws_instance.controller.public_ip]
}
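Once the instances are up, you can use these outputs to jump through the controller into the web server. A minimal sketch, assuming an SSH agent is running; testkey.pem and the bracketed addresses are placeholders for your key and the Terraform output values:

# Load the key so both hops can authenticate through the agent
ssh-add testkey.pem
# -J (OpenSSH 7.3+) uses the controller as a jump host
ssh -J ubuntu@<controller-public-ip> ubuntu@<web-private-ip>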
Security_Groups.tf

Our code creates two security groups, “web-sg” and “controller-sg.”
The first security group, “web-sg,” allows HTTP into the web server, but only from your public IP address. It also establishes a rule that allows SSH, but only from the controller; your IP address can SSH to the controller, and from the controller you can jump to the web server. This makes our web server a bit more secure in any environment because it restricts who can establish an admin session on the web server, and how.
Take note of the method of controlling ingress within the web security group “web-sg.” In the SSH ingress block, I have replaced “cidr_blocks” with “security_groups.” This states that any resource assigned to the security group “controller-sg” is allowed an ingress connection (in this case, SSH).
Using “security_groups” instead of a “cidr_blocks” ingress rule provides an excellent method of controlling ingress to our EC2 instances. Assigning a “cidr_blocks” rule opens access to a range of IP addresses, and most published code examples show an ingress of 0.0.0.0/0, allowing anyone or any device inbound access. Opening inbound traffic to the entire internet might be a convenient way of writing code examples, but it most certainly is not good practice.
As stated earlier, both EC2 instances in this exercise are in a public subnet and do not require a jump server. But I prefer to write exercises that simulate potential real-world setups as early in coding practice as reasonably possible. One of those practices is using a security group, rather than a “cidr_blocks” rule, as the SSH ingress to web servers.
resource "aws_security_group" "web-sg" {
vpc_id = aws_vpc.my-vpc.id
description = "Allows HTTP"
ingress {
from_port = 22
to_port = 22
protocol = "tcp"
security_groups = ["${aws_security_group.controller-sg.id}"]
}
ingress {
from_port = 80
to_port = 80
protocol = "tcp"
cidr_blocks = ["${var.ssh_location}"]
}
egress {
from_port = 0
to_port = 0
protocol = -1
cidr_blocks = ["0.0.0.0/0"]
}
tags = {
name = "SecurityGroup-Web"
Stage = "Test"
}
}
resource "aws_security_group" "controller-sg" {
vpc_id = aws_vpc.my-vpc.id
description = "Allows SSH from MyIP"
ingress {
from_port = 22
to_port = 22
protocol = "tcp"
cidr_blocks = ["${var.ssh_location}"]
}
egress {
from_port = 0
to_port = 0
protocol = -1
cidr_blocks = ["0.0.0.0/0"]
}
tags = {
Name = "SecurityGroup-SSH"
Stage = "Test"
}
}
Using an S3 bucket repository for website files
AWS S3 is a great place to store standard code for a team to utilize as shared storage. We are therefore creating an S3 bucket to hold our static website files, and the bootstrap code will copy the files from the bucket into our web server’s content folder.
So, we’ll copy the website files into an S3 bucket and create an instance profile that allows an EC2 instance to read and copy files from S3.
Create an S3 bucket using the AWS CLI
We need an S3 bucket to hold the website files. Go ahead and create a bucket using the AWS Management Console, or use the AWS command-line interface to create one.
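For example, with the AWS CLI (the bucket name is a placeholder and must be globally unique; the region matches the variables above):

aws s3 mb s3://your-bucket-name --region us-west-1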
GitHub sample website files
My GitHub repository has a file called “Static_Website_files.zip.” You are most certainly invited to use it for your test website, or create your own static website files. Just remember, of course, to unarchive the zip file before using its contents.
s3_policy.tf
resource "aws_iam_policy" "copy-policy" {
name = "copy-anible-files"
description = "IAM policy to allow copy files from S3 bucket"
policy = <<EOF
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": ["s3:PutObject",
"s3:GetObject",
"s3:ListBucket"
],
"Resource": ["arn:aws:s3:::change the name to your S3 bucket name",
"arn:aws:s3:::change the name to your S3 bucket name/*"]
}
]
}
EOF
}
resource "aws_iam_role" "assume-role" {
name = "assume-role"
description = "IAM policy that allows assume role"
assume_role_policy = <<EOF
{
"Version": "2012-10-17",
"Statement": [
{
"Action": "sts:AssumeRole",
"Principal": {"Service": "ec2.amazonaws.com"},
"Effect": "Allow",
"Sid": ""
}
]
}
EOF
}
resource "aws_iam_role_policy_attachment" "assign-copy-policy" {
role = aws_iam_role.assume-role.name
policy_arn = aws_iam_policy.copy-policy.arn
depends_on = [aws_iam_policy.copy-policy]
}
resource "aws_iam_instance_profile" "assume_role_profile" {
name = "assume_role_profile"
role = aws_iam_role.assume-role.name
}
Copy website files into the new S3 bucket
The AWS command-line interface is a quick way to get the files into the bucket. Download and unarchive “Static_Website_files.zip” from my GitHub repository into a temporary folder, then use the AWS S3 copy command to copy the files into the new bucket, or use the AWS Management Console instead. Once the files are in S3, the bootstrap user data of the EC2 instance “web” will automatically install the website files from your bucket into the Apache folder /var/www/html.
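For example, from the temporary folder holding the unarchived website files (“your-bucket-name” is again a placeholder):

aws s3 cp . s3://your-bucket-name/ --recursive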
Configuration – reminders
Be sure to configure the following in variables.tf:
- Place your public IP address as the default for the variable “ssh_location.”
- Place your regional EC2 key pair name as the default for the variable “key.”
Be sure to configure the S3 bucket name in s3_policy.tf:
- Don’t forget to create an S3 bucket and place the static website files into the bucket.
- Don’t forget to place the ARN of your S3 bucket into s3_policy.tf.
Launching the VPC and Web Server
After installing the requisite software and files and configuring the variables, run the following commands in a terminal:
- terraform init
  - Installs the necessary provider plugins, in this case to support AWS provisioning
- terraform validate
  - Validates the AWS provisioning code
- terraform apply
  - Performs the AWS provisioning of the VPC and web server
After Terraform finishes provisioning the new VPC, security groups, and web server, it will output the public IP address of the new web server in the terminal window. Go ahead and copy the IP address, paste it into a browser, and you should see something like the image below:
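If you need the addresses again later, Terraform can reprint them at any time:

terraform output        # show all outputs
terraform output web    # just the web server's public and private IPs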

Once you have finished with this example, run the following command:
- terraform destroy (to remove the VPC and web server)

It goes without saying, but it has to be said anyway: this is not for production!
All public websites should have some type of application firewall between the web server and its internet connection!
It is a good idea to remove EC2 instances when you are finished with them, so as not to incur costs for leaving them running.