# Terraform provisioner for end to end tests
This folder contains Terraform resources for provisioning a Nomad cluster on AWS for end-to-end tests. It uses a Nomad binary identified by its commit SHA, stored in a shared S3 bucket that Nomad team developers can access. The commit SHA can be from any branch pushed to the remote.
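The provisioning scripts pull that build down from the bucket onto the cluster nodes. As a rough illustration only, assuming a hypothetical bucket name and key layout (the real values live in the Terraform and Packer configuration):

```sh
# Sketch only: the bucket name and key layout here are assumptions,
# not the real values used by this repo.
NOMAD_SHA=<nomad_sha>
aws s3 cp "s3://nomad-team-dev-builds/builds-oss/nomad_${NOMAD_SHA}.tar.gz" .
tar -xzf "nomad_${NOMAD_SHA}.tar.gz"
sudo install -m 0755 nomad /usr/local/bin/nomad
```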
Use envchain to store your AWS credentials.
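If you haven't created the namespace yet, envchain can prompt for the values and store them in your OS keychain (the `nomadaws` namespace name matches the commands below):

```sh
# One-time setup: prompts for each value and stores it under the
# "nomadaws" namespace.
envchain --set nomadaws AWS_ACCESS_KEY_ID AWS_SECRET_ACCESS_KEY
```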
```sh
$ cd e2e/terraform/
$ TF_VAR_nomad_sha=<nomad_sha> envchain nomadaws terraform apply
```
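Alternatively, Terraform variables such as `nomad_sha` can be set in the `terraform.tfvars` file instead of on the command line; a minimal sketch:

```hcl
# terraform.tfvars -- the SHA of a build pushed to the shared S3 bucket.
nomad_sha = "<nomad_sha>"
```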
After this step completes, you should have a Nomad client address to point the end-to-end tests in the `e2e` folder at.
## SSH
Terraform will output node IPs that can be accessed via SSH:
```sh
ssh -i keys/nomad-e2e-*.pem ubuntu@${EC2_IP_ADDR}
```
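The node IPs come from the Terraform outputs, so they can be re-printed from local state at any time:

```sh
# Re-print the Terraform outputs, including the node IP addresses.
terraform output
```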
## Teardown
The Terraform state file stores all the information needed for teardown, so the `nomad_sha` variable doesn't need to reference a valid build during `terraform destroy`.
```sh
$ cd e2e/terraform/
$ envchain nomadaws TF_VAR_nomad_sha=yyyzzz terraform destroy
```