# Provision a Nomad cluster on AWS with Packer & Terraform
Use this repo to provision a Nomad sandbox environment on AWS with [Packer](https://packer.io) and [Terraform](https://terraform.io). [Consul](https://www.consul.io/intro/index.html) and [Vault](https://www.vaultproject.io/intro/index.html) are also installed (colocated with Nomad for convenience). The intention is to allow easy exploration of Nomad and its integrations with the HashiCorp stack. This is not meant to be a production-ready environment. A demonstration of [Nomad's Apache Spark integration](examples/spark/README.md) is included.
## Setup
Clone this repo and (optionally) use [Vagrant](https://www.vagrantup.com/intro/index.html) to bootstrap a local staging environment:
```bash
$ git clone git@github.com:hashicorp/nomad.git
$ cd nomad/terraform/aws
$ vagrant up && vagrant ssh
```
The Vagrant staging environment pre-installs Packer, Terraform and Docker.
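Once inside the Vagrant box, a quick sanity check confirms the tooling is available (exact version output will vary):

```bash
# Verify the pre-installed tooling inside the Vagrant staging environment
packer version
terraform version
docker --version
```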
Note that a pre-provisioned, publicly available AMI is used by default. To provision your own customized AMI with [Packer](https://www.packer.io/intro/index.html), follow the instructions [here](aws/packer/README.md), then replace the AMI ID in `terraform.tfvars` with your own.
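As a rough sketch of that workflow, assuming the Packer template is named `packer.json` and the Terraform variable is called `ami` (check the Packer README for the actual names in your checkout):

```bash
# Build a custom AMI (template name is an assumption; see aws/packer/README.md)
cd packer
packer build packer.json
# On success, Packer prints the new AMI ID, e.g. "us-east-1: ami-0123456789abcdef0"

# Point Terraform at the new AMI; the "ami" variable name is an assumption.
# Alternatively, edit terraform.tfvars by hand.
cd ..
sed -i 's/^ami[[:space:]]*=.*/ami = "ami-0123456789abcdef0"/' terraform.tfvars
```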
The `vault init` command used to bootstrap Vault in this environment creates a single [Vault unseal key](https://www.vaultproject.io/docs/concepts/seal.html). For a production environment, it is recommended that you create at least three unseal key shares and securely distribute them to independent operators.
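For reference, a production-style initialization might look like the following (a sketch; on newer Vault versions the equivalent command is `vault operator init`):

```bash
# Create 3 unseal key shares, any 2 of which are required to unseal Vault
vault init -key-shares=3 -key-threshold=2
```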
## Getting started with Nomad & the HashiCorp stack
Nomad is well-suited for analytical workloads, given its performance characteristics and first-class support for batch scheduling. Apache Spark is a popular data processing engine and framework that was architected to support third-party schedulers. We maintain a fork of Spark that natively integrates Nomad as a scheduler. Sample job files and documentation are included [here](examples/spark/README.md) and are also provisioned onto the cluster itself under the `HOME` directory.
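As an illustration, submitting a job to Nomad from the forked Spark distribution on a cluster node looks roughly like this (a hypothetical invocation; the jar path and executor count are illustrative, so consult the Spark example README for the exact job files and options):

```bash
# Run the SparkPi example against the Nomad scheduler
spark-submit \
  --class org.apache.spark.examples.SparkPi \
  --master nomad \
  --conf spark.executor.instances=4 \
  lib/spark-examples*.jar 100
```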