
Provision a Nomad cluster on AWS with Packer & Terraform

Use this to easily provision a Nomad sandbox environment on AWS with Packer and Terraform. Consul and Vault are also installed (colocated for convenience). The intention is to allow easy exploration of Nomad and its integrations with the HashiCorp stack. This is not meant to be a production ready environment. A demonstration of Nomad's Apache Spark integration is included.

Setup

Clone this repo and (optionally) use Vagrant to bootstrap a local staging environment:

$ git clone git@github.com:hashicorp/nomad.git
$ cd nomad/terraform/aws
$ vagrant up && vagrant ssh

The Vagrant staging environment pre-installs Packer, Terraform and Docker.

Prerequisites

You will need an AWS account, your AWS access credentials (an access key ID and secret access key), and an SSH key pair in the region you plan to deploy to.

If you are using the Vagrant environment, you will also need to copy your private SSH key into it.
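
One way to copy the key (a sketch, assuming the box is already up and your key is named KEY.pem) is to use the SSH configuration that Vagrant generates for its default machine:

$ vagrant ssh-config > /tmp/vagrant-ssh-config
$ scp -F /tmp/vagrant-ssh-config ~/.ssh/KEY.pem default:~/.ssh/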

Set environment variables for your AWS credentials:

$ export AWS_ACCESS_KEY_ID=[ACCESS_KEY_ID]
$ export AWS_SECRET_ACCESS_KEY=[SECRET_ACCESS_KEY]
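
Both Packer and Terraform will fail partway through if these variables are missing, so it can be worth checking up front. A minimal, hypothetical pre-flight check (not part of this repo):

```shell
# Hypothetical helper: succeeds only when both AWS credential
# variables are exported and non-empty.
check_aws_env() {
  [ -n "$AWS_ACCESS_KEY_ID" ] && [ -n "$AWS_SECRET_ACCESS_KEY" ]
}

if check_aws_env; then
  echo "AWS credentials found"
else
  echo "Set AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY first"
fi
```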

Provision

cd to one of the environment subdirectories:

$ cd env/us-east

Update terraform.tfvars with your SSH key name and the path to your key file:

region                  = "us-east-1"
ami                     = "ami-28a1dd3e"
instance_type           = "t2.medium"
key_name                = "KEY"
key_file                = "/home/vagrant/.ssh/KEY.pem"
server_count            = "3"
client_count            = "4"

For example:

region                  = "us-east-1"
ami                     = "ami-28a1dd3e"
instance_type           = "t2.medium"
key_name                = "hashi-us-east-1"
key_file                = "/home/vagrant/.ssh/hashi-us-east-1.pem"
server_count            = "3"
client_count            = "4"

Note that a pre-provisioned, publicly available AMI is used by default. To provision your own customized AMI with Packer, follow the instructions here. You will need to replace the AMI ID in terraform.tfvars with your own.
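
The flat name = "value" layout shown above is also easy to script against. As a sketch, a hypothetical helper (not part of this repo) to read a value back out of the file:

```shell
# Hypothetical helper: print the value of one variable from a
# terraform.tfvars file written in the simple `name = "value"` layout.
tfvar() {
  # $1 = tfvars file, $2 = variable name
  sed -n "s/^$2[[:space:]]*=[[:space:]]*\"\(.*\)\"[[:space:]]*$/\1/p" "$1"
}
```

For example, `tfvar terraform.tfvars key_name` would print hashi-us-east-1 for the example file above.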

Provision the cluster:

$ terraform get
$ terraform plan
$ terraform apply
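
If you prefer, Terraform can save the plan to a file and apply exactly that plan, which guards against the configuration changing between the two steps:

$ terraform plan -out=tf.plan
$ terraform apply tf.plan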

Access the cluster

SSH to a server using its public IP. For example:

$ ssh -i /home/vagrant/.ssh/KEY.pem ubuntu@SERVER_PUBLIC_IP

The AWS security group is configured by default to allow all traffic over port 22. This is not recommended for production deployments.
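
For a longer-lived deployment, the SSH ingress rule can be restricted to your own address. A sketch of what that looks like in Terraform (the resource name and CIDR below are placeholders, not taken from this repo's configuration):

```hcl
resource "aws_security_group" "primary" {
  name = "nomad-demo"

  ingress {
    from_port   = 22
    to_port     = 22
    protocol    = "tcp"
    # Your workstation's IP instead of 0.0.0.0/0.
    cidr_blocks = ["203.0.113.10/32"]
  }
}
```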

Run a few basic commands to verify that Consul and Nomad are up and running properly:

$ consul members
$ nomad server-members
$ nomad node-status

Optionally, initialize and unseal Vault:

$ vault init -key-shares=1 -key-threshold=1
$ vault unseal
$ export VAULT_TOKEN=[INITIAL_ROOT_TOKEN]

The vault init command above creates a single Vault unseal key. For a production environment, it is recommended that you create at least three unseal key shares and securely distribute them to independent operators.
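
For example, initializing with three key shares and a two-share unseal threshold would look like:

$ vault init -key-shares=3 -key-threshold=2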

Getting started with Nomad & the HashiCorp stack

See:

Apache Spark integration

Nomad is well suited to analytical workloads, given its performance characteristics and first-class support for batch scheduling. Apache Spark is a popular data processing engine that is architected to support third-party schedulers. We maintain a fork that natively integrates Nomad with Spark. Sample job files and documentation are included here, and are also provisioned into the cluster itself under the HOME directory.