Deploying Webserver on AWS using Terraform + EFS Storage

In this project, we will write Terraform code that automatically goes to AWS and configures various services. It will launch an EC2 instance and create one EFS storage, which will be mounted on the instance. After this, a web server will be configured on the instance and exposed to the outside world. All of these steps, along with a few others such as creating a security group, a key pair, and an S3 bucket with an image uploaded into it, will be done entirely by one Terraform code with a single command.

This project demonstrates end-to-end automation using technologies including Terraform, HCL, Git & GitHub, and AWS with its various services such as EC2 (instance, key pair, security group, snapshot), S3 (bucket, object), CloudFront, EFS, etc. Even if you are new to these technologies, it should still be a fun read if you don't focus too much on the code.

◉ About the Project (steps to be performed)

@ Task 2: Create/launch the application using Terraform

1. Create the key pair and a security group that allows inbound traffic on port 80.

2. Launch an EC2 instance. In this EC2 instance, use the key and security group created in step 1.

3. Create one EFS storage volume, attach it to the launched EC2 instance & mount it on a directory.

4. Get the code uploaded by the developer to GitHub and copy it into the /var/www/html folder for deployment.

5. Create an S3 bucket, copy/deploy the static images into it, and change the permission to public readable.

6. Create a CloudFront distribution using the S3 bucket (which contains the images) and use the CloudFront URL to update the code.

7. Launch the application for testing from the code itself.

◉ Flowchart of the Task:

◉ I have performed the task and attached screenshots for your reference:

Step 1: First, write the Terraform code, which is the basis of this project.

# Set the provider for cloud services, here AWS

provider "aws" {
  region  = "ap-south-1"
  profile = "samar"
}
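
# (Optional) Pinning the providers:

- not in the original write-up, but on Terraform 0.13+ it is good practice to declare the providers this code relies on. A minimal sketch:

terraform {
  required_providers {
    # aws for the infrastructure, tls for key generation,
    # null for the provisioner-only resources used later
    aws = {
      source = "hashicorp/aws"
    }
    tls = {
      source = "hashicorp/tls"
    }
    null = {
      source = "hashicorp/null"
    }
  }
}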

# Creating a Security Group with inbound access on port 22 (SSH), port 80 (HTTP) & port 2049 (NFS, needed for the EFS mount)

resource "aws_security_group" "sg_http_ssh" {
  name        = "sg_http_ssh"
  description = "Access to inbound traffic"

  ingress {
    description = "HTTP support"
    from_port   = 80
    to_port     = 80
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  ingress {
    description = "SSH support"
    from_port   = 22
    to_port     = 22
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  # NFS traffic between the instance and the EFS mount target uses
  # TCP port 2049; without this rule the mount below would hang
  ingress {
    description = "NFS support for EFS"
    from_port   = 2049
    to_port     = 2049
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }

  tags = {
    Name = "my sg"
  }
}

# Creating a key pair (in “pem” format) to access the instance over the SSH protocol

resource "tls_private_key" "mykey1" {
  algorithm = "RSA"
}

resource "aws_key_pair" "key_access" {
  key_name   = "mykey"
  public_key = tls_private_key.mykey1.public_key_openssh
  depends_on = [
    tls_private_key.mykey1,
  ]
  tags = {
    Name = "access key"
  }
}
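
- at this point the private key lives only in the Terraform state. A small optional addition (not in the original code; it needs the hashicorp/local provider, and "mykey.pem" is an illustrative filename) saves it to disk so we can also SSH into the instance manually:

resource "local_file" "private_key" {
  # Write the generated key next to the Terraform code, with
  # read-only permissions as SSH expects
  content         = tls_private_key.mykey1.private_key_pem
  filename        = "mykey.pem"
  file_permission = "0400"
}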

# Creating an S3 bucket:

- to keep the static content of our website, since S3 storage is durable and secure. By default the bucket is private and no one can access it, so we make it publicly readable by setting the “public-read” ACL.

resource "aws_s3_bucket" "mybucket" {
  bucket = "samar3199bucket"
  acl    = "public-read"
  depends_on = [
    aws_key_pair.key_access,
  ]
  versioning {
    enabled = true
  }
  tags = {
    Name        = "My bucket"
    Environment = "Dev"
  }
}

# Adding an object (here, an image) to the S3 bucket

resource "aws_s3_bucket_object" "my_object" {
  bucket = aws_s3_bucket.mybucket.bucket
  key    = "image.png"
  source = "C:/Users/Admine/Desktop/cloud_task2/image.png"
  acl    = "public-read"
  depends_on = [
    aws_s3_bucket.mybucket,
  ]
}

# Creating CloudFront of S3 bucket containing the image:

— so that we get a unique URL that clients across the globe can access with low latency. We will put this URL in the website's HTML code.

resource "aws_cloudfront_distribution" "cf_s3" {
  depends_on = [
    aws_s3_bucket_object.my_object,
  ]

  origin {
    domain_name = aws_s3_bucket.mybucket.bucket_regional_domain_name
    origin_id   = "S3-samar3199bucket"

    custom_origin_config {
      http_port              = 80
      https_port             = 443
      origin_protocol_policy = "match-viewer"
      origin_ssl_protocols   = ["TLSv1", "TLSv1.1", "TLSv1.2"]
    }
  }

  enabled = true

  default_cache_behavior {
    allowed_methods  = ["DELETE", "GET", "HEAD", "OPTIONS", "PATCH", "POST", "PUT"]
    cached_methods   = ["GET", "HEAD"]
    target_origin_id = "S3-samar3199bucket"

    forwarded_values {
      query_string = false

      cookies {
        forward = "none"
      }
    }

    viewer_protocol_policy = "allow-all"
    min_ttl                = 0
    default_ttl            = 3600
    max_ttl                = 86400
  }

  restrictions {
    geo_restriction {
      restriction_type = "none"
    }
  }

  viewer_certificate {
    cloudfront_default_certificate = true
  }
}

# Very important part: launching a Linux instance (OS)

resource "aws_instance" "my_instance" {
  ami               = "ami-0447a12f28fddb066"
  instance_type     = "t2.micro"
  availability_zone = "ap-south-1a"
  key_name          = aws_key_pair.key_access.key_name
  security_groups   = [aws_security_group.sg_http_ssh.name]

  provisioner "remote-exec" {
    connection {
      agent       = false
      type        = "ssh"
      user        = "ec2-user"
      private_key = tls_private_key.mykey1.private_key_pem
      # "self" is required here: a resource cannot reference its own
      # attributes by name without creating a dependency cycle
      host        = self.public_ip
    }
    inline = [
      "sudo yum install httpd git -y",
      "sudo systemctl restart httpd",
      "sudo systemctl enable httpd",
    ]
  }

  depends_on = [
    aws_cloudfront_distribution.cf_s3,
  ]

  tags = {
    Name = "myos"
  }
}

# Creating EFS storage

resource "aws_efs_file_system" "my_nfs" {
  depends_on = [
    aws_security_group.sg_http_ssh,
    aws_instance.my_instance,
  ]
  creation_token = "my_nfs"
  tags = {
    Name = "my_nfs"
  }
}

# Attaching EFS storage to EC2 instance:

resource "aws_efs_mount_target" "alpha" {
  depends_on = [
    aws_efs_file_system.my_nfs,
  ]
  file_system_id  = aws_efs_file_system.my_nfs.id
  subnet_id       = aws_instance.my_instance.subnet_id
  security_groups = [aws_security_group.sg_http_ssh.id]
}

output "myin_ip" {
  value = aws_instance.my_instance.public_ip
}

# Getting into the instance through SSH protocol:

— then mounting the attached EFS storage on the document root of the Apache webserver, i.e., /var/www/html. EFS is a network file system (NFS), so unlike an EBS volume it exposes no block device and needs no formatting; it is simply mounted over the network. Then we clone the code from the SCM, here GitHub; we keep only the dynamic part of the website (the code) there, not static content like images and videos.

resource "null_resource" "nullremote1" {
  depends_on = [
    aws_efs_mount_target.alpha,
  ]
  connection {
    type        = "ssh"
    user        = "ec2-user"
    private_key = tls_private_key.mykey1.private_key_pem
    host        = aws_instance.my_instance.public_ip
  }
  provisioner "remote-exec" {
    inline = [
      # EFS is mounted over NFS rather than formatted and attached
      # like an EBS block device
      "sudo mount -t nfs4 -o nfsvers=4.1 ${aws_efs_file_system.my_nfs.dns_name}:/ /var/www/html",
      "sudo rm -rf /var/www/html/*",
      "sudo git clone https://github.com/Samarps/task2_HMCloud.git /var/www/html/",
      "sudo rm -f /var/www/html/*.jpg /var/www/html/*.png /var/www/html/*.jpeg",
    ]
  }
}
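
- if the mount worked, the EFS share shows up as an nfs4 filesystem on the document root. A quick manual check (illustrative only; mykey.pem comes from the optional local_file sketch above, and the public IP from the myin_ip output):

> ssh -i mykey.pem ec2-user@<public_ip> df -hT /var/www/html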

# Retrieving Public IP of the Instance:

resource "null_resource" "instance_ip" {
  provisioner "local-exec" {
    command = "echo ${aws_instance.my_instance.public_ip} > publicip.txt"
  }
  depends_on = [
    null_resource.nullremote1,
  ]
}

# Launching the website over Chrome:

— once the whole infrastructure is installed and configured, this resource automatically opens the website in the Chrome browser by entering the URL itself.

resource "null_resource" "nulllocal2" {
  depends_on = [
    null_resource.instance_ip,
  ]
  provisioner "local-exec" {
    # "start chrome" is Windows-specific; it opens the site in Chrome
    command = "start chrome ${aws_instance.my_instance.public_ip}"
  }
}

Step 2: Now let's create the website code in HTML with some CSS, containing the CloudFront URL of the image. The direct S3 URL of the image (object) looks like:

https://<bucket_name>.s3.<region>.amazonaws.com/<object_name>

Bucket name: samar3199bucket; Region: ap-south-1 (Mumbai); Image name: image.png. In the HTML we use the CloudFront URL instead, which replaces the bucket endpoint with the distribution's domain (something like dXXXXXXXX.cloudfront.net/image.png).
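
The Terraform code above never prints that distribution domain. A small optional output (not in the original code) exposes it, so it can be pasted straight into the website's HTML:

output "cf_domain" {
  # Domain of the CloudFront distribution, e.g. dXXXXXXXX.cloudfront.net
  value = aws_cloudfront_distribution.cf_s3.domain_name
}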

Step 3: Save the webpage code & some static content, like the image, on GitHub:

Link of the Github repo: https://github.com/Samarps/task2_HMCloud.git

Step 4: We will use the Terraform code created above to launch the whole infrastructure; it will then automatically launch the website for us.

# Initialize the terraform code:

— this command will initialize the terraform code & download the required plugins from the internet.

> terraform init

# Validate the terraform code:

> terraform validate

# Execute or apply the terraform code:

> terraform apply -auto-approve

The whole infrastructure launched successfully:

# The Website deployed:

— the web server is configured on the instance in AWS and the website is finally deployed. The Terraform code automatically opened the website in the Chrome browser for us, without any involvement on our side. We built a project based on end-to-end automation.

# Let's see the services that were configured for us automatically, just by using one Terraform code:

A new key pair & security group were created and used…

An S3 bucket with an image (set to public visibility), and a CloudFront distribution for that bucket giving a unique URL to access the image, were configured automatically…

An EC2 instance was launched along with EFS storage, which was mounted on that instance. The webserver was configured automatically, with the webpages copied to the document root of the Apache webserver from the GitHub repo…

◉ Conclusion:

In this project, we achieved end-to-end automation using various technologies including Terraform, HCL, Git & GitHub, and AWS with its various services such as EC2 (instance, key pair, security group, snapshot), S3 (bucket, object), CloudFront, EFS, etc. This project was somewhat similar to my last project, but here we learnt how to use EFS storage. We created the complete infrastructure using Terraform and finally destroyed it as well, each with just one command.
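
For completeness, the teardown is a single command in the same style as the ones used earlier:

> terraform destroy -auto-approve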

It was a great experience overall to successfully complete this project, and along the way I learnt a lot, even from the moments when my code failed, which taught me even more. I would like to thank Sir Vimal Daga for providing such an amazing project to work on.