Solo Development & Terraform
When you work alone (or even on a small team), it is important to get as much leverage out of your tools as possible, because you have no capacity to spare for inefficiency. Version control, managed services, and a well-tuned developer environment can all contribute to a vastly more productive day as a solo developer.
As I was setting up infrastructure for the project, I kept a notes file open side-by-side with a terminal: I would issue commands, note down each command and its result, and iterate. This was very much a trial-and-error exercise. It would be easy to get lost in all the configuration and end up with an overprovisioned environment, or worse, an environment with glaring security holes. The note-taking allowed me to revert changes and prune down to what was strictly necessary, but even this setup was suboptimal. If I had to recreate the environment, I would have to review the notes and replay the commands exactly. If I had to make a configuration change? Well, hopefully find-and-replace would be sufficient.
This was something I needed to get a handle on, especially if I end up expanding and hiring developers. I can’t just hand off a notes file and wish them luck. Fortunately, tools like Terraform exist, and they serve to make your infrastructure declarative and version-controlled, so it is much easier to review what you did and manage your environments safely and securely.
I’ll walk through provisioning a Kubernetes cluster with Terraform. This is the start of a sample stack that adheres closely to the one I’m actually using; the repository will contain the Terraform setup as I build it, starting with a GKE cluster.
This assumes terraform is installed and, since I am working on GCP, that gcloud is properly configured.
First, we set up our provider requirements; for now, that is just Google:
# versions.tf
terraform {
  required_providers {
    google = {
      source  = "hashicorp/google"
      version = "3.52.0"
    }
  }
  required_version = "~> 0.14"
}
The provider is the cloud resource plugin specific to an IaaS/PaaS/SaaS solution.
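Note that the cluster definitions below reference the project and region variables, so the Google provider itself also needs to be configured at some point. A minimal sketch of what that block might look like (exact fields depend on how your credentials are set up):

```hcl
# provider.tf -- a minimal sketch; credential handling may differ in your setup
provider "google" {
  project = var.project_id
  region  = var.region
  zone    = var.zone
}
```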
I will also need some variables in my script to configure my cluster. This is accomplished through Terraform variables.
# variables.tf
variable "project_id" {
  description = "project id"
}

variable "region" {
  description = "region"
}

variable "zone" {
  description = "zone"
}
Since these variables do not have default values, Terraform will prompt for a value when running any command that needs them.
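To avoid retyping the values on every run, they can also be placed in a terraform.tfvars file, which Terraform loads automatically (the values below are placeholders):

```hcl
# terraform.tfvars -- placeholder values, substitute your own
project_id = "my-project-id"
region     = "us-central1"
zone       = "us-central1-a"
```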
Now to the main event. I’ve added comments inline where appropriate for the script, so it should be relatively self-explanatory.
# main.tf
variable "gke_username" {
  default     = ""
  description = "gke username"
}

variable "gke_password" {
  default     = ""
  description = "gke password"
}

# Provisioning a "google_container_cluster" resource named "primary"
resource "google_container_cluster" "primary" {
  # These values get pulled from our variables.tf file
  name     = "${var.project_id}-gke"
  location = var.zone

  # Discard the initial node pool, because we will create our own
  remove_default_node_pool = true
  initial_node_count       = 1

  master_auth {
    username = var.gke_username
    password = var.gke_password

    client_certificate_config {
      issue_client_certificate = false
    }
  }
}

# Provisioning a "google_container_node_pool" resource named "primary_nodes"
# We do this separately so we can have finer control over our node pool
resource "google_container_node_pool" "primary_nodes" {
  name     = "${google_container_cluster.primary.name}-node-pool"
  location = var.zone

  # The cluster for this node pool is the cluster we created above
  cluster = google_container_cluster.primary.name

  # And start this node pool with two nodes
  node_count = 2

  node_config {
    oauth_scopes = [
      "https://www.googleapis.com/auth/logging.write",
      "https://www.googleapis.com/auth/monitoring",
    ]

    # We start with some tiny, preemptible instances for cost purposes
    preemptible  = true
    machine_type = "e2-micro"
  }
}

# Finally, output our GKE cluster name (not necessary but nice to see)
output "gke_cluster_name" {
  value = google_container_cluster.primary.name
}
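If a fixed node_count later becomes limiting, the node pool can be given an autoscaling block instead; a sketch of that variant, with illustrative bounds:

```hcl
# Alternative node pool: let GKE scale between bounds (bounds are illustrative)
resource "google_container_node_pool" "primary_nodes" {
  name     = "${google_container_cluster.primary.name}-node-pool"
  location = var.zone
  cluster  = google_container_cluster.primary.name

  autoscaling {
    min_node_count = 1
    max_node_count = 3
  }

  node_config {
    preemptible  = true
    machine_type = "e2-micro"
  }
}
```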
Terraform has four basic commands: init, plan, apply, and destroy. init sets up the Terraform workspace, pulling down any required plugins. plan stages the changes and gives you a chance to review what operations would be performed. apply actually executes the changes against your providers. destroy tears the infrastructure back down. This workflow makes infrastructure changes reviewable and repeatable, and makes environments easy to reproduce and duplicate.
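In shell terms, a typical cycle looks like this; writing the plan to a file makes apply execute exactly the plan that was reviewed:

```shell
terraform init              # download required providers, prepare the workspace
terraform plan -out=tfplan  # stage the changes and review them
terraform apply tfplan      # execute exactly the reviewed plan
terraform destroy           # tear the infrastructure back down
```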
Now to have Terraform build our cluster. terraform init pulls down our required plugins and prepares our workspace.
jameslewis@~/src/sample_stack/sample-stack-tf on <main> % terraform init
Initializing the backend...
Initializing provider plugins...
- Finding hashicorp/google versions matching "3.52.0"...
- Installing hashicorp/google v3.52.0...
- Installed hashicorp/google v3.52.0 (signed by HashiCorp)
Terraform has made some changes to the provider dependency selections recorded
in the .terraform.lock.hcl file. Review those changes and commit them to your
version control system if they represent changes you intended to make.
Terraform has been successfully initialized!
You may now begin working with Terraform. Try running "terraform plan" to see
any changes that are required for your infrastructure. All Terraform commands
should now work.
If you ever set or change modules or backend configuration for Terraform,
rerun this command to reinitialize your working directory. If you forget, other
commands will detect it and remind you to do so if necessary.
terraform apply prompts for the required variables and describes what it will do. Before execution it requires confirmation, which helps prevent accidental changes. It then begins provisioning the cluster and node pool:
jameslewis@~/src/sample_stack/sample-stack-tf on <main> % terraform apply
var.project_id
project id
Enter a value: <XXX>
var.region
region
Enter a value: <YYY>
var.zone
zone
Enter a value: <ZZZ>
An execution plan has been generated and is shown below.
Resource actions are indicated with the following symbols:
+ create
<cut for brevity>
Plan: 2 to add, 0 to change, 0 to destroy.
Do you want to perform these actions?
Terraform will perform the actions described above.
Only 'yes' will be accepted to approve.
Enter a value: yes
google_container_cluster.primary: Creating…
<cut for brevity>
google_container_node_pool.primary_nodes: Creation complete after 1m13s [id=projects/<project>/locations/us-central1-a/clusters/<project>-gke/nodePools/<project>-gke-node-pool]
Apply complete! Resources: 2 added, 0 changed, 0 destroyed.
Outputs:
gke_cluster_name = "<project>-gke"
And it prints out our cluster name. Success!
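With the cluster up, the natural next step is pointing kubectl at it. Assuming the gcloud SDK from the prerequisites, that looks something like this (the cluster name and zone are whatever the output and your variable values say):

```shell
# Write cluster credentials into your kubeconfig, then sanity-check the nodes
gcloud container clusters get-credentials <project>-gke --zone us-central1-a
kubectl get nodes
```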
Now, since GKE is not free, I want to tear all this infrastructure back down. terraform destroy handles that:
jameslewis@~/src/sample_stack/sample-stack-tf on <main> % terraform destroy
var.project_id
project id
Enter a value: <project>
var.region
region
Enter a value: us-central1
var.zone
zone
Enter a value: us-central1-a
An execution plan has been generated and is shown below.
Resource actions are indicated with the following symbols:
- destroy
<cut for brevity>
Do you really want to destroy all resources?
Terraform will destroy all your managed infrastructure, as shown above.
There is no undo. Only 'yes' will be accepted to confirm.
Enter a value: yes
google_container_node_pool.primary_nodes: Destroying...
<cut for brevity>
google_container_cluster.primary: Destruction complete after 2m3s
Destroy complete! Resources: 2 destroyed.
And now I am back where I started. Congratulations! We did it!
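One last housekeeping note: since the point is to keep this setup in version control, the local state and plugin artifacts should stay out of the repository. A common .gitignore sketch for Terraform projects:

```
# .gitignore -- common Terraform ignores
.terraform/
*.tfstate
*.tfstate.backup
```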