Provision a Nomad cluster on AWS

Prerequisites

To get started, you will need an AWS account, an AWS access key ID and secret access key for API access, and an EC2 key pair (SSH key) in the region you plan to use.

Set the AWS environment variables

$ export AWS_ACCESS_KEY_ID=[AWS_ACCESS_KEY_ID]
$ export AWS_SECRET_ACCESS_KEY=[AWS_SECRET_ACCESS_KEY]
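
If you have the AWS CLI installed (it is not required by these templates), you can confirm that the credentials are picked up from the environment; aws sts get-caller-identity simply echoes back the account and identity the keys belong to:

$ aws sts get-caller-identity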

Build an AWS machine image with Packer

Packer is HashiCorp's open source tool for creating identical machine images for multiple platforms from a single source configuration. The Terraform templates included in this repo reference a publicly available Amazon Machine Image (AMI) by default. The AMI can be customized by modifying the build configuration script and packer.json.
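
For orientation, an amazon-ebs builder stanza in a Packer template typically looks like the sketch below. The values shown are illustrative placeholders, not the actual contents of this repo's packer.json:

{
  "builders": [
    {
      "type": "amazon-ebs",
      "region": "us-east-1",
      "source_ami": "ami-xxxxxxxx",
      "instance_type": "t2.medium",
      "ssh_username": "ubuntu",
      "ami_name": "nomad-{{timestamp}}"
    }
  ]
}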

Use the following command to build the AMI:

$ packer build packer.json
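
If the build fails with template errors, Packer's built-in syntax check can be run at any time against the same file:

$ packer validate packer.json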

Provision a cluster with Terraform

cd to an environment subdirectory:

$ cd env/us-east

Update terraform.tfvars with the name of your SSH key pair and, if you built a custom AMI, with its AMI ID:

region                  = "us-east-1"
ami                     = "ami-6ce26316"
instance_type           = "t2.medium"
key_name                = "KEY_NAME"
server_count            = "3"
client_count            = "4"

You can also modify the region, instance_type, server_count, and client_count. At least one client and one server are required.
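
Any of these variables can also be overridden on the command line when running plan or apply, without editing terraform.tfvars. The counts below are only an example:

$ terraform plan -var 'server_count=3' -var 'client_count=5'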

Provision the cluster:

$ terraform init
$ terraform get
$ terraform plan
$ terraform apply
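
When the apply completes, Terraform prints any output values defined by the configuration (such as instance addresses). Assuming such outputs are defined here, they can be displayed again later with:

$ terraform output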

Access the cluster

SSH to one of the servers using its public IP:

$ ssh -i /path/to/private/key ubuntu@PUBLIC_IP
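
Once connected, you can sanity-check the cluster. Assuming the image built above runs Nomad (and, as the hashistack module name suggests, likely Consul as well), the following standard commands list the servers and client nodes; on older Nomad releases the hyphenated forms nomad server-members and nomad node-status apply instead:

$ nomad server members
$ nomad node status
$ consul members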

The infrastructure provisioned for this test environment is configured to allow all traffic over port 22. This is not recommended for production deployments.
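
For anything beyond a short-lived test, consider restricting SSH ingress to a known CIDR range. A minimal sketch of such a rule using the standard aws_security_group_rule resource is shown below; the resource name, referenced security group, and CIDR are illustrative, not the names used in modules/hashistack:

resource "aws_security_group_rule" "allow_ssh" {
  type              = "ingress"
  from_port         = 22
  to_port           = 22
  protocol          = "tcp"
  cidr_blocks       = ["203.0.113.0/24"]                  # replace with your own address range
  security_group_id = "${aws_security_group.primary.id}"  # illustrative; point at the module's security group
}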

Next Steps

See the README in the parent terraform directory of this repository for next steps.