open-nomad/terraform/gcp
Seth Hoenig f7c0e078a9 build: update golang version to 1.18.2
This PR updates to Go 1.18.2. It also updates the versions of hclfmt
and go-hclogfmt, which include newer dependencies necessary for dealing
with Go 1.18.

The hcl v2 branch is now 'nomad-v2.9.1+tweaks2', to include a fix for
newer macOS versions: 8927e75e82
2022-05-25 10:04:04 -05:00
Provision a Nomad cluster on GCP

To get started, you will need a GCP account.

Welcome

This tutorial will teach you how to deploy Nomad clusters to the Google Cloud Platform using Packer and Terraform.

Includes:

  • Installing HashiCorp Tools (Nomad, Consul, Vault, Terraform, and Packer).
  • Installing the GCP SDK CLI Tools, if you're not using Cloud Shell.
  • Creating a new GCP project, along with a Terraform Service Account.
  • Building a golden image using Packer.
  • Deploying a cluster with Terraform.

Install HashiCorp Tools

Nomad

Download Nomad from HashiCorp's releases site by copying and pasting this snippet into the terminal:

curl "https://releases.hashicorp.com/nomad/0.12.4/nomad_0.12.4_linux_amd64.zip" -o nomad.zip
unzip nomad.zip
sudo mv nomad /usr/local/bin
nomad --version

Consul

Download Consul from HashiCorp's releases site by copying and pasting this snippet into the terminal:

curl "https://releases.hashicorp.com/consul/1.8.3/consul_1.8.3_linux_amd64.zip" -o consul.zip
unzip consul.zip
sudo mv consul /usr/local/bin
consul --version

Vault

Download Vault from HashiCorp's releases site by copying and pasting this snippet into the terminal:

curl "https://releases.hashicorp.com/vault/1.5.3/vault_1.5.3_linux_amd64.zip" -o vault.zip
unzip vault.zip
sudo mv vault /usr/local/bin
vault --version

Packer

Download Packer from HashiCorp's releases site by copying and pasting this snippet into the terminal:

curl "https://releases.hashicorp.com/packer/1.6.2/packer_1.6.2_linux_amd64.zip" -o packer.zip
unzip packer.zip
sudo mv packer /usr/local/bin
packer --version

Terraform

Download Terraform from HashiCorp's releases site by copying and pasting this snippet into the terminal:

curl "https://releases.hashicorp.com/terraform/0.13.1/terraform_0.13.1_linux_amd64.zip" -o terraform.zip
unzip terraform.zip
sudo mv terraform /usr/local/bin
terraform --version
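Each snippet above follows the same pattern: download `TOOL_VERSION_linux_amd64.zip` from releases.hashicorp.com, unzip it, and move the binary into place. As a sketch, that URL construction can be captured in a small hypothetical helper (the function name is made up; the naming scheme is taken from the snippets above):

```shell
# Hypothetical helper: build the release-zip URL for a HashiCorp tool,
# following the naming scheme used by the snippets above.
hashicorp_url() {
  local tool="$1" version="$2"
  echo "https://releases.hashicorp.com/${tool}/${version}/${tool}_${version}_linux_amd64.zip"
}

# Example: reproduce the Nomad URL from the first snippet.
hashicorp_url nomad 0.12.4
```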

Install and Authenticate the GCP SDK Command Line Tools

If you are using Cloud Shell, you already have gcloud set up and can safely skip this step.

To install the GCP SDK Command Line Tools, follow Google's installation instructions for your operating system.

After installation, authenticate gcloud with the following command:

gcloud auth login

Create a New Project

Generate a project ID with the following command:

export GOOGLE_PROJECT="nomad-gcp-$(head -c 5 /dev/urandom | xxd -p)"
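If xxd isn't available (it ships with vim, not coreutils), the same random suffix can be produced with od, and the result checked against GCP's project-ID rules (6 to 30 characters, lowercase letters, digits, and hyphens, starting with a letter). A sketch:

```shell
# Sketch: generate the random suffix with od (part of coreutils) instead
# of xxd, then sanity-check the ID against GCP's project-ID rules.
suffix="$(head -c 5 /dev/urandom | od -An -tx1 | tr -d ' \n')"
GOOGLE_PROJECT="nomad-gcp-${suffix}"

# 6-30 chars, lowercase letters/digits/hyphens, must start with a letter.
echo "$GOOGLE_PROJECT" | grep -Eq '^[a-z][a-z0-9-]{5,29}$' && echo "ok: $GOOGLE_PROJECT"
```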

Using that project ID, create a new GCP project:

gcloud projects create $GOOGLE_PROJECT

And then set your gcloud config to use that project:

gcloud config set project $GOOGLE_PROJECT

Next, let's link a billing account to that project. To determine what billing accounts are available, run the following command:

gcloud alpha billing accounts list

Locate the ACCOUNT_ID for the billing account you want to use, and set the GOOGLE_BILLING_ACCOUNT environment variable. Replace the XXXXXXX with the ACCOUNT_ID you located with the previous command output:

export GOOGLE_BILLING_ACCOUNT="XXXXXXX"

Then link the GOOGLE_BILLING_ACCOUNT to the previously created GOOGLE_PROJECT:

gcloud alpha billing projects link "$GOOGLE_PROJECT" --billing-account "$GOOGLE_BILLING_ACCOUNT"

Enable Compute API

In order to deploy VMs to the project, we need to enable the Compute Engine API:

gcloud services enable compute.googleapis.com

Create Terraform Service Account

Finally, let's create a Terraform service account and its account.json credentials file:

gcloud iam service-accounts create terraform \
    --display-name "Terraform Service Account" \
    --description "Service account to use with Terraform"
gcloud projects add-iam-policy-binding "$GOOGLE_PROJECT" \
    --member serviceAccount:"terraform@$GOOGLE_PROJECT.iam.gserviceaccount.com" \
    --role roles/editor
gcloud iam service-accounts keys create account.json \
    --iam-account "terraform@$GOOGLE_PROJECT.iam.gserviceaccount.com"
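As a quick sanity check, the key file gcloud writes is JSON that contains, among other fields, a `"type": "service_account"` entry and a `private_key`. A rough grep check (the field names are part of Google's service-account key format; the exact spacing gcloud emits is an assumption, and the function name is made up):

```shell
# Rough check that a file looks like a GCP service-account key.
# Field names come from Google's key format; the spacing in the
# grep pattern is an assumption about gcloud's JSON output.
key_looks_valid() {
  grep -q '"type": "service_account"' "$1" && grep -q '"private_key"' "$1"
}
```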

⚠️ Warning

The account.json credentials file gives privileged access to this GCP project. Be careful to avoid leaking these credentials by accidentally committing them to version control systems such as git, or storing them where they are visible to others. In general, storing these credentials on an individually operated, private computer (like your laptop) or in your own GCP cloud shell is acceptable for testing purposes. For production use, or for teams, use a secrets management system like HashiCorp Vault. For this tutorial's purposes, we'll be storing the account.json credentials on disk in the cloud shell.

Now set the full path of the newly created account.json file as the GOOGLE_APPLICATION_CREDENTIALS environment variable:

export GOOGLE_APPLICATION_CREDENTIALS=$(realpath account.json)

Ensure Required Environment Variables Are Set

Before moving on to the next steps, ensure the following environment variables are set:

  • GOOGLE_PROJECT with your selected GCP project ID.
  • GOOGLE_APPLICATION_CREDENTIALS with the full path to the Terraform Service Account account.json credentials file created in the last step.
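A missing variable here tends to produce confusing failures later, so a small guard function can verify both are exported before continuing (a sketch; the function name is made up):

```shell
# Sketch: fail early if a required environment variable is unset or empty.
check_env() {
  if [ -z "$(printenv "$1")" ]; then
    echo "missing required environment variable: $1" >&2
    return 1
  fi
  echo "ok: $1"
}

# Usage: check_env GOOGLE_PROJECT && check_env GOOGLE_APPLICATION_CREDENTIALS
```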

Build HashiStack Golden Image with Packer

Packer is HashiCorp's open source tool for creating identical machine images for multiple platforms from a single source configuration. The machine image created here can be customized through modifications to the build configuration file and the shell script.

Use the following command to build the machine image:

packer build packer.json

Provision a cluster with Terraform

Change into the env/us-east environment directory:

cd env/us-east

Initialize Terraform:

terraform init

Plan infrastructure changes with Terraform:

terraform plan -var="project=${GOOGLE_PROJECT}" -var="credentials=${GOOGLE_APPLICATION_CREDENTIALS}"

Apply infrastructure changes with Terraform:

terraform apply -auto-approve -var="project=${GOOGLE_PROJECT}" -var="credentials=${GOOGLE_APPLICATION_CREDENTIALS}"
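To avoid repeating the two -var flags on every command, the same values can be written to a terraform.tfvars file, which Terraform loads automatically. A sketch; the variable names are taken from the flags above, and the fallback values are placeholders for illustration only:

```shell
# Sketch: persist the -var values in terraform.tfvars so plan/apply/destroy
# can run without flags. Fallback values below are placeholders only;
# use your real $GOOGLE_PROJECT and credentials path.
: "${GOOGLE_PROJECT:=nomad-gcp-example}"
: "${GOOGLE_APPLICATION_CREDENTIALS:=/tmp/account.json}"

cat > terraform.tfvars <<EOF
project     = "${GOOGLE_PROJECT}"
credentials = "${GOOGLE_APPLICATION_CREDENTIALS}"
EOF

cat terraform.tfvars
```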

Access the Cluster

To access the Nomad, Consul, or Vault web UI inside the cluster, create an SSH tunnel using gcloud. Run these commands to open tunnels to all of the UIs available in the cluster; each starts an SSH tunnel as a background process in your current shell:

gcloud compute ssh hashistack-server-0 --zone=us-east1-c --tunnel-through-iap -- -f -N -L 127.0.0.1:4646:127.0.0.1:4646
gcloud compute ssh hashistack-server-0 --zone=us-east1-c --tunnel-through-iap -- -f -N -L 127.0.0.1:8200:127.0.0.1:8200
gcloud compute ssh hashistack-server-0 --zone=us-east1-c --tunnel-through-iap -- -f -N -L 127.0.0.1:8500:127.0.0.1:8500
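The three commands differ only in the forwarded port, so they can be generated in a loop. A sketch that just prints each command; drop the echo indirection (call gcloud directly) to actually open the tunnels:

```shell
# Sketch: build the same tunnel command for each UI port
# (4646 Nomad, 8200 Vault, 8500 Consul) instead of repeating it.
tunnel_cmd() {
  echo "gcloud compute ssh hashistack-server-0 --zone=us-east1-c --tunnel-through-iap -- -f -N -L 127.0.0.1:$1:127.0.0.1:$1"
}

for port in 4646 8200 8500; do
  tunnel_cmd "$port"
done
```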

After running those commands, you can use Cloud Shell's Web Preview feature on ports 4646 (Nomad), 8200 (Vault), and 8500 (Consul) to open each UI.

If you're not using Cloud Shell, browse to http://localhost:4646, http://localhost:8200, and http://localhost:8500 instead.

If you want to try out any of the optional steps with the Vault CLI later on, set this helper variable:

export VAULT_ADDR=http://localhost:8200

Next Steps

You have deployed a Nomad cluster to GCP! 🎉


After You Finish

Come back here when you're done exploring Nomad and the HashiCorp stack. In the next section, you'll learn how to clean up and destroy the demo infrastructure you've created.

Conclusion

You have deployed a Nomad cluster to GCP!

Destroy Infrastructure

To destroy all the demo infrastructure:

terraform destroy -auto-approve -var="project=${GOOGLE_PROJECT}" -var="credentials=${GOOGLE_APPLICATION_CREDENTIALS}"

Delete the Project

Finally, to completely delete the project:

gcloud projects delete $GOOGLE_PROJECT

Alternative: Use the GUI

If you prefer to delete the project using GCP's Cloud Console, you can do so from the Cloud Resource Manager page.