# Provision a Nomad cluster on AWS with Packer & Terraform

Use this to easily provision a Nomad sandbox environment on AWS with
[Packer](https://packer.io) and [Terraform](https://terraform.io).
[Consul](https://www.consul.io/intro/index.html) and
[Vault](https://www.vaultproject.io/intro/index.html) are also installed
(colocated for convenience). The intention is to allow easy exploration of
Nomad and its integrations with the HashiCorp stack. This is *not* meant to be
a production-ready environment. A demonstration of [Nomad's Apache Spark
integration](examples/spark/README.md) is included.

## Setup

Clone this repo and (optionally) use [Vagrant](https://www.vagrantup.com/intro/index.html)
to bootstrap a local staging environment:

```bash
$ git clone git@github.com:hashicorp/nomad.git
$ cd terraform/aws
$ vagrant up && vagrant ssh
```
The Vagrant staging environment pre-installs Packer, Terraform, and Docker.
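Once inside the box, you can confirm the tools are on the `PATH` with their standard version commands:

```bash
$ packer version
$ terraform version
$ docker --version
```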
### Prerequisites
You will need the following:
- AWS account
- [API access keys](http://aws.amazon.com/developers/access-keys/)
- [SSH key pair](http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-key-pairs.html)
Set environment variables for your AWS credentials:
```bash
$ export AWS_ACCESS_KEY_ID=[ACCESS_KEY_ID]
$ export AWS_SECRET_ACCESS_KEY=[SECRET_ACCESS_KEY]
```
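Terraform will fail partway through a run if either variable is missing, so it can help to fail fast. The guard below is optional and not part of the original setup, just a small POSIX-shell sketch:

```bash
# check_aws_env: print an error and return non-zero if either AWS
# credential variable is unset or empty; run it before `terraform plan`.
check_aws_env() {
  [ -n "${AWS_ACCESS_KEY_ID:-}" ] || { echo "AWS_ACCESS_KEY_ID is not set" >&2; return 1; }
  [ -n "${AWS_SECRET_ACCESS_KEY:-}" ] || { echo "AWS_SECRET_ACCESS_KEY is not set" >&2; return 1; }
}
```

Running `check_aws_env && terraform plan` makes the failure mode explicit instead of surfacing mid-run.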
## Provision a cluster
`cd` to an environment subdirectory:
```bash
$ cd env/us-east
```
Update `terraform.tfvars` with your SSH key name:
```bash
region = "us-east-1"
ami = "ami-a780afdc"
instance_type = "t2.medium"
key_name = "KEY_NAME"
server_count = "3"
client_count = "4"
```
Note that a pre-provisioned, publicly available AMI is used by default
(for the `us-east-1` region). To provision your own customized AMI with
[Packer](https://www.packer.io/intro/index.html), follow the instructions
[here](aws/packer/README.md). You will need to replace the AMI ID in
`terraform.tfvars` with your own. You can also modify the `region`,
`instance_type`, `server_count`, and `client_count`. At least one client and
one server are required.
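If you do build your own AMI, a `terraform.tfvars` along these lines would point the cluster at it (placeholder values; substitute your own AMI ID and the region it was built in):

```bash
region        = "us-west-2"   # must match the region the AMI was built in
ami           = "AMI_ID"      # your Packer build output
instance_type = "t2.medium"
key_name      = "KEY_NAME"
server_count  = "3"
client_count  = "4"
```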
Provision the cluster:
```bash
$ terraform get
$ terraform plan
$ terraform apply
```
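When you are finished with the sandbox, tear it down so the EC2 instances stop accruing charges (Terraform will prompt for confirmation before destroying anything):

```bash
$ terraform destroy
```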
## Access the cluster
SSH to one of the servers using its public IP:
```bash
$ ssh -i /path/to/key ubuntu@PUBLIC_IP
```
Note that the AWS security group is configured by default to allow all traffic
over port 22. This is *not* recommended for production deployments.
Run a few basic commands to verify that Consul and Nomad are up and running
properly:
```bash
$ consul members
$ nomad server-members
$ nomad node-status
```
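As a further smoke test, you can submit a job. One sketch uses Nomad's built-in example: `nomad init` generates a self-contained `example.nomad` job file that runs Redis under the Docker driver:

```bash
$ nomad init
$ nomad run example.nomad
$ nomad status example
```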
Optionally, initialize and unseal Vault:
```bash
$ vault init -key-shares=1 -key-threshold=1
$ vault unseal
$ export VAULT_TOKEN=[INITIAL_ROOT_TOKEN]
```
The `vault init` command above creates a single
[Vault unseal key](https://www.vaultproject.io/docs/concepts/seal.html) for
convenience. For a production environment, it is recommended that you create at
least five unseal key shares and securely distribute them to independent
operators. The `vault init` command defaults to five key shares and a key
threshold of three. If you provisioned more than one server, the others will
become standby nodes (but should still be unsealed). You can query the active
and standby nodes independently:
```bash
$ dig active.vault.service.consul
$ dig active.vault.service.consul SRV
$ dig standby.vault.service.consul
```
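These queries assume the system resolver forwards `.consul` lookups to Consul. If they return nothing, query the local Consul agent's DNS interface directly (it listens on port 8600 by default):

```bash
$ dig @127.0.0.1 -p 8600 active.vault.service.consul
```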
## Getting started with Nomad & the HashiCorp stack
See:
* [Getting Started with Nomad](https://www.nomadproject.io/intro/getting-started/jobs.html)
* [Consul integration](https://www.nomadproject.io/docs/service-discovery/index.html)
* [Vault integration](https://www.nomadproject.io/docs/vault-integration/index.html)
* [consul-template integration](https://www.nomadproject.io/docs/job-specification/template.html)
## Apache Spark integration
Nomad is well-suited for analytical workloads, given its performance
characteristics and first-class support for batch scheduling. Apache Spark is a
popular data processing engine/framework that has been architected to use
third-party schedulers. The Nomad ecosystem includes a [fork that natively
integrates Nomad with Spark](https://github.com/hashicorp/nomad-spark). A
detailed walkthrough of the integration is included [here](examples/spark/README.md).