# Provision a Nomad cluster on AWS with Packer & Terraform
Use this to easily provision a Nomad sandbox environment on AWS with [Packer](https://packer.io) and [Terraform](https://terraform.io). [Consul](https://www.consul.io/intro/index.html) and [Vault](https://www.vaultproject.io/intro/index.html) are also installed (colocated for convenience). The intention is to allow easy exploration of Nomad and its integrations with the HashiCorp stack. This is not meant to be a production ready environment. A demonstration of [Nomad's Apache Spark integration](examples/spark/README.md) is included.
## Setup
Clone this repo and (optionally) use [Vagrant](https://www.vagrantup.com/intro/index.html) to bootstrap a local staging environment:
```bash
$ git clone git@github.com:hashicorp/nomad.git
$ cd nomad/terraform/aws
$ vagrant up && vagrant ssh
```
The Vagrant staging environment pre-installs Packer, Terraform and Docker.
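
If you want to confirm the staging box is ready before moving on, the version commands below should all succeed (the exact versions installed by the Vagrant provisioner will vary):

```bash
# Inside the Vagrant box: verify the pre-installed tooling is on the PATH
$ packer version
$ terraform version
$ docker --version
```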
### Prerequisites
You will need the following:
- AWS account
- [API access keys](http://aws.amazon.com/developers/access-keys/)
- [SSH key pair](http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-key-pairs.html)

Set environment variables for your AWS credentials:
```bash
$ export AWS_ACCESS_KEY_ID=[ACCESS_KEY_ID]
$ export AWS_SECRET_ACCESS_KEY=[SECRET_ACCESS_KEY]
```
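
If you happen to have the AWS CLI installed (it is not required for this guide), you can optionally confirm that the exported credentials resolve to the expected account before provisioning anything:

```bash
# Optional sanity check -- requires the AWS CLI
$ aws sts get-caller-identity
```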
## Provision
`cd` to one of the environment subdirectories:
```bash
$ cd env/us-east
```
Update terraform.tfvars with your SSH key name:
```bash
region = "us-east-1"
ami = "ami-28a1dd3e"
instance_type = "t2.medium"
key_name = "KEY"
server_count = "3"
client_count = "4"
```
For example:
```bash
region = "us-east-1"
ami = "ami-28a1dd3e"
instance_type = "t2.medium"
key_name = "hashi-us-east-1"
server_count = "3"
client_count = "4"
```
Note that a pre-provisioned, publicly available AMI is used by default. To provision your own customized AMI with [Packer](https://www.packer.io/intro/index.html), follow the instructions [here](aws/packer/README.md). You will need to replace the AMI ID in terraform.tfvars with your own.
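
The Packer build itself is documented in the linked README; the rough flow looks like the sketch below. The directory and template filename here are assumptions, so follow [aws/packer/README.md](aws/packer/README.md) for the authoritative steps:

```bash
# Sketch only -- see aws/packer/README.md for the real instructions
$ cd ../../packer            # assumed location of the Packer template
$ packer build packer.json   # prints the new AMI ID on success
# then set ami = "ami-XXXXXXXX" in env/us-east/terraform.tfvars
```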
Provision the cluster:
```bash
$ terraform get
$ terraform plan
$ terraform apply
```
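
Once `terraform apply` completes, the configuration's outputs are the quickest way to find the addresses you need. The specific output name below is an assumption; run `terraform output` with no arguments to see what this configuration actually exports:

```bash
# List all outputs from the applied configuration
$ terraform output

# Read a single value (output name is illustrative)
$ terraform output server_public_ips
```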
## Access the cluster
SSH to a server using its public IP. For example:
```bash
$ ssh -i /home/vagrant/.ssh/KEY.pem ubuntu@SERVER_PUBLIC_IP
```
The AWS security group is configured by default to allow all traffic over port 22. This is not recommended for production deployments.
Run a few basic commands to verify that Consul and Nomad are up and running properly:
```bash
$ consul members
$ nomad server-members
$ nomad node-status
```
Optionally, initialize and unseal Vault:
```bash
$ vault init -key-shares=1 -key-threshold=1
$ vault unseal
$ export VAULT_TOKEN=[INITIAL_ROOT_TOKEN]
```
The `vault init` command above creates a single [Vault unseal key](https://www.vaultproject.io/docs/concepts/seal.html). For a production environment, it is recommended that you create at least three unseal key shares and securely distribute them to independent operators.
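
For example, a more production-like split (the exact share and threshold counts are a policy choice rather than anything this sandbox requires) might look like:

```bash
# Five key shares, any three of which can unseal Vault
$ vault init -key-shares=5 -key-threshold=3

# Three different operators each supply one share
$ vault unseal
$ vault unseal
$ vault unseal
```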
## Getting started with Nomad & the HashiCorp stack
See:
* [Getting Started with Nomad](https://www.nomadproject.io/intro/getting-started/jobs.html)
* [Consul integration](https://www.nomadproject.io/docs/service-discovery/index.html)
* [Vault integration](https://www.nomadproject.io/docs/vault-integration/index.html)
* [consul-template integration](https://www.nomadproject.io/docs/job-specification/template.html)
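
As a quick first exercise on the cluster, the stock example job that ships with Nomad exercises the Docker driver and Consul service registration. The commands below use the standard Nomad CLI; the generated `example.nomad` file is only a starting point:

```bash
# Generate the example job specification and submit it
$ nomad init
$ nomad run example.nomad

# Watch the allocation come up and confirm the service registration in Consul
$ nomad status example
$ curl http://localhost:8500/v1/catalog/services
```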
## Apache Spark integration
Nomad is well-suited for analytical workloads, given its performance characteristics and first-class support for batch scheduling. Apache Spark is a popular data processing engine/framework that has been architected to use third-party schedulers. We maintain a fork that natively integrates Nomad with Spark. Sample job files and documentation are included [here](examples/spark/README.md) and also provisioned into the cluster itself under the `HOME` directory.