Provision resources in different cloud providers using Terraform
It's been a while since I penned something down. I have been enjoying my break from all the certifications I had to prep for over the summer. Anyhow, let's get started.
Introduction:
Previously, we saw how to provision Cosmos DB using Terraform.
Now, through this blog, I want to do a simple PoC (Proof of Concept) with Terraform: implementing a “wordpress” application that consumes resources from multiple cloud providers, just to show what we can do with Terraform. Going forward, multi-cloud usage is likely to become the new norm in the industry.
We will create a “wordpress” application as a pod in GKE (Google Kubernetes Engine) which uses MySQL provisioned through the RDS service in AWS. We will use the Kubernetes provider to connect to GKE and create the pod, service and other required resources.
We will use AWS and GCP for this PoC. I will not go into much detail on how to set up free accounts in AWS and GCP. Run “aws configure” and “gcloud auth login” in your local terminal or shell and set up your accounts so that Terraform can use those credentials to create resources.
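For reference, the local setup looks roughly like this (the project name is a placeholder, same as in main.tf below):
aws configure
gcloud auth login
gcloud auth application-default login
gcloud config set project <Your project name>
The application-default login is what the Terraform Google provider typically picks up when no explicit credentials file is configured.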
Assuming you have taken care of the account setup, let's now dig into the details.
The code used for this PoC is saved in a Git repo at this location -> https://github.com/kprasant/multi-cloud-tf
main.tf
provider "google" {
project = "<Your project name>"
}
provider "aws" {
region = "ca-central-1"
}
provider "kubernetes" {
load_config_file = "false"host = "https://${google_container_cluster.primary.endpoint}"
username = "admin"
password = "<At least 15 chars>"
client_certificate = base64decode(
google_container_cluster.primary.master_auth[0].client_certificate,
)
client_key = base64decode(
google_container_cluster.primary.master_auth[0].client_key,
)
cluster_ca_certificate = base64decode(
google_container_cluster.primary.master_auth[0].cluster_ca_certificate,
)
}
Through this file we are basically telling Terraform that we are going to use the GCP, AWS and Kubernetes providers, along with a few details like the project or region. Running “terraform init” with this file alone would initialize the configuration. We will come back to the “kubernetes” provider shortly.
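A side note: if you are on Terraform 0.13 or newer, you would also typically pin the providers in a required_providers block. The version constraint below is only a suggestion; in particular, load_config_file was removed in the 2.x releases of the Kubernetes provider, so this configuration assumes a 1.x version:
terraform {
  required_providers {
    google     = { source = "hashicorp/google" }
    aws        = { source = "hashicorp/aws" }
    kubernetes = { source = "hashicorp/kubernetes", version = "~> 1.13" }
  }
}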
Let’s start with the MySQL creation in AWS. For this we will refer to the file rds.tf in the repo. We are creating a VPC with two subnets, subnet1 and subnet2. Since we are connecting multiple clouds, the only way to connect them is over the internet, so we will create an internet gateway (IGW) in the VPC and set up a route for the subnets.
Please note that I am ignoring security-related details for now, since this is just a PoC. Do not use this repo or its contents for PRODUCTION. A sample security group is created with a rule that opens port 3306 of the DB to the public, so that our “wordpress” pod can connect to this DB.
resource "aws_vpc" "db-vpc" {
cidr_block = "10.1.0.0/16"
instance_tenancy = "default"
enable_dns_hostnames = true
}resource "aws_subnet" "subnet1" {
vpc_id = aws_vpc.db-vpc.id
cidr_block = "10.1.1.0/24"
availability_zone = "ca-central-1a"
}resource "aws_subnet" "subnet2" {
vpc_id = aws_vpc.db-vpc.id
cidr_block = "10.1.2.0/24"
availability_zone = "ca-central-1b"
}resource "aws_route_table_association" "subnet1-rt" {
subnet_id = aws_subnet.subnet1.id
route_table_id = aws_vpc.db-vpc.default_route_table_id
depends_on = [ aws_subnet.subnet1 ]
}resource "aws_route_table_association" "subnet2-rt" {
subnet_id = aws_subnet.subnet2.id
route_table_id = aws_vpc.db-vpc.default_route_table_id
depends_on = [ aws_subnet.subnet2 ]
}resource "aws_internet_gateway" "igw" {
vpc_id = aws_vpc.db-vpc.id
}resource "aws_route" "route-igw" {
route_table_id = aws_vpc.db-vpc.default_route_table_id
destination_cidr_block = "0.0.0.0/0"
gateway_id = aws_internet_gateway.igw.id
depends_on = [ aws_internet_gateway.igw ]
}resource "aws_default_security_group" "sg_rds" {
vpc_id = aws_vpc.db-vpc.idingress {
from_port = 3306
to_port = 3306
protocol = "tcp"
cidr_blocks = ["0.0.0.0/0"]
}
egress {
from_port = 0
to_port = 0
protocol = "-1"
cidr_blocks = ["0.0.0.0/0"]
}
}resource "aws_db_subnet_group" "rds-sg" {
name = "mysql-sg"
subnet_ids = [aws_subnet.subnet1.id , aws_subnet.subnet2.id]
}resource "aws_db_instance" "rds" {
allocated_storage = 20
storage_type = "gp2"
engine = "mysql"
engine_version = "5.7"
instance_class = "db.t2.micro"
name = "trialdb"
username = "admin"
password = "admin1234"
parameter_group_name = "default.mysql5.7"
db_subnet_group_name = aws_db_subnet_group.rds-sg.name
publicly_accessible = true
skip_final_snapshot = true
depends_on = [ aws_db_subnet_group.rds-sg ]
}
gke.tf
I have used/modified the lesson posted on the Terraform website to create a basic cluster in Google Kubernetes Engine -> https://learn.hashicorp.com/tutorials/terraform/gke
Here we are basically creating a cluster with one node. It will be created in the “default” VPC with an “n1-standard-1” machine. Please set a password before running the code. For this PoC I have set up a master user/pass and enabled “issue_client_certificate”, just so that we can build the kube config content (in main.tf) and create the pod, service, PVC etc. using Terraform itself.
Typically we wouldn't do this; it's not secure to create a master user/pass or to enable issuing client certificates. This should be done through IAM and RBAC. With that said, you can now go back and notice that we are actually providing the kube config content dynamically to the Kubernetes Terraform provider instead of using a file path.
variable "gke_username" {
default = "admin"
description = "gke username"
}variable "region" {
default = "us-central1-a"
}variable "gke_password" {
default = "<15 chars in length>"
description = "gke password"
}variable "gke_num_nodes" {
default = 1
description = "number of gke nodes"
}# GKE cluster
resource "google_container_cluster" "primary" {
name = "trial-gke"
location = var.regionremove_default_node_pool = true
initial_node_count = 1master_auth {
username = var.gke_username
password = var.gke_passwordclient_certificate_config {
issue_client_certificate = true
}
}
}# Separately Managed Node Pool
resource "google_container_node_pool" "primary_nodes" {
name = "${google_container_cluster.primary.name}-node-pool"
location = var.region
cluster = google_container_cluster.primary.name
node_count = var.gke_num_nodesnode_config {
oauth_scopes = [
"https://www.googleapis.com/auth/logging.write",
"https://www.googleapis.com/auth/monitoring",
]labels = {
env = "trials"
}# preemptible = true
machine_type = "n1-standard-1"
tags = ["gke-node", "trials-gke"]
metadata = {
disable-legacy-endpoints = "true"
}
}
}output "kubernetes_cluster_name" {
value = google_container_cluster.primary.name
description = "GKE Cluster Name"
}
wordpress.tf
Using this file, we now create a deployment with one “wordpress” pod and provide the DB details to the pod using environment variables. Then we add a LoadBalancer-type service to expose our site to the public. We also add a persistent volume claim to provide storage to the pod and persist the blog content.
Finally, we output the IP address of the load balancer service so we can access our blog.
resource "kubernetes_service" "svc" {
metadata {
name = "wp-svc"
labels = {
app = "wp"
}
}
spec {
selector = {
app = "wp"
}
type = "LoadBalancer"
port {
port = "80"
}
}
depends_on = [ google_container_node_pool.primary_nodes ]
}resource "kubernetes_persistent_volume_claim" "pvc" {
metadata {
name = "wp-pvc"
}
spec {
access_modes = ["ReadWriteOnce"]
resources {
requests = {
storage = "1Gi"
}
}
}
depends_on = [ google_container_node_pool.primary_nodes ]
}resource "kubernetes_deployment" "deploy" {
metadata {
name = "wp-deploy"
labels = {
app = "wp"
}
}spec {
replicas = 1selector {
match_labels = {
app = "wp"
}
}template {
metadata {
labels = {
app = "wp"
}
}spec {
container {
image = "wordpress"
name = "wordpress"
port {
name = "wordpress"
container_port = "80"
}
volume_mount {
name = "wordpress-persistent-storage"
mount_path = "/var/www/html"
}
env {
name = "WORDPRESS_DB_HOST"
value = aws_db_instance.rds.address
}
env {
name = "WORDPRESS_DB_USER"
value = "admin"
}
env {
name = "WORDPRESS_DB_PASSWORD"
value = "admin1234"
}
env {
name = "WORDPRESS_DB_NAME"
value = "trialdb"
}
}
volume {
name = "wordpress-persistent-storage"
persistent_volume_claim {
claim_name = "wp-pvc"
}
}
}
}
}
depends_on = [ kubernetes_service.svc , kubernetes_persistent_volume_claim.pvc ]
}output "ip" {
value = kubernetes_service.svc.load_balancer_ingress.0.ip
}
Implementation:
These are the commands that I have used. They are the same ones we used before for provisioning Cosmos DB.
terraform init

terraform plan -out wp.json

terraform validate
Success! The configuration is valid.
This might be something new compared to the last blog; it's a good idea to run this command so that Terraform validates our files.
terraform apply "wp.json"
This should take around 10–12 minutes to provision all the required resources for our PoC. Any issues while running these commands would need to be fixed before moving forward. As you can see, all resources are created and the final output “ip” gives us the address to reach our wordpress application.
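If you miss the address in the apply logs, you can print it again at any time:
terraform output ip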


You should be able to see the database in the RDS console, with the connection details and our DB “trialdb”.
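If you prefer the command line over the console, the AWS CLI can also list the endpoint (assuming it is configured as described earlier):
aws rds describe-db-instances --region ca-central-1 --query "DBInstances[*].Endpoint.Address"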

And finally, as after every PoC, please run the command below to remove all the resources we have created, so that we don't incur unnecessary charges.
terraform destroy

Conclusion:
You might have already noticed that I have used a lot of hard-coded values in this exercise. You could use variables and make this more reusable.
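For example, the DB password could be pulled out into a variable instead of being hard-coded in rds.tf and wordpress.tf (a minimal sketch; the variable name is my own choice):
variable "db_password" {
  description = "Master password for the RDS instance"
  sensitive   = true # requires Terraform 0.14+
}
The aws_db_instance and the WORDPRESS_DB_PASSWORD env value would then reference var.db_password, and the actual value could be passed at apply time with -var or a .tfvars file.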
Security-wise, RDS access should be confined to just the pod or the GKE cluster using the security group. You could add a DNS layer and SSL to access our blog. And much more…
This is a very basic PoC to show what we can do with Terraform using multiple providers, which are not limited to just cloud providers.
Hope this helps you with your work, certifications or research. Please “Clap” if this has been useful.