
Provision a Nomad cluster in the Cloud

Use this repo to easily provision a Nomad sandbox environment on AWS or Azure with Packer and Terraform. Consul and Vault are also installed (colocated for convenience). The intention is to allow easy exploration of Nomad and its integrations with the HashiCorp stack. This is not meant to be a production-ready environment. A demonstration of Nomad's Apache Spark integration is included.

Setup

Clone the repo and optionally use Vagrant to bootstrap a local staging environment:

$ git clone git@github.com:hashicorp/nomad.git
$ cd nomad/terraform
$ vagrant up && vagrant ssh

The Vagrant staging environment pre-installs Packer, Terraform, Docker, and the Azure CLI.
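Once inside the box, you can confirm the tooling is available (a quick sanity check; the exact versions installed will depend on when the box was provisioned):

```shell
# Inside the Vagrant box: verify the pre-installed tools are on the PATH
packer --version
terraform --version
docker --version
az --version
```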

Provision a cluster

  • Follow the steps in the aws/ directory to provision a cluster on AWS.
  • Follow the steps in the azure/ directory to provision a cluster on Azure.

Continue with the steps below after a cluster has been provisioned.

Test

Run a few basic status commands to verify that Consul and Nomad are up and running properly:

$ consul members
$ nomad server-members
$ nomad node-status
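If those commands report healthy members, you can go one step further and schedule a real job. The commands below are a sketch of a minimal smoke test: `nomad init` generates a self-contained `example.nomad` job file (a Redis task in this Nomad release), which you can then run and inspect:

```shell
# Generate the stock example job file in the current directory
nomad init

# Submit the job and watch the scheduler place it
nomad run example.nomad

# Check allocation status for the job
nomad status example
```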

Unseal the Vault cluster (optional)

To initialize and unseal Vault, run:

$ vault init -key-shares=1 -key-threshold=1
$ vault unseal
$ export VAULT_TOKEN=[INITIAL_ROOT_TOKEN]

The vault init command above creates a single Vault unseal key for convenience. For a production environment, it is recommended that you create at least five unseal key shares and securely distribute them to independent operators. The vault init command defaults to five key shares and a key threshold of three. If you provisioned more than one server, the others will become standby nodes but should still be unsealed. You can query the active and standby nodes independently:

$ dig active.vault.service.consul
$ dig active.vault.service.consul SRV
$ dig standby.vault.service.consul
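With `VAULT_TOKEN` exported, you can verify the unsealed cluster end to end by writing and reading back a throwaway secret. This sketch assumes the default `secret/` mount of this Vault release; the `hello`/`value=world` names are illustrative only:

```shell
# Write a test secret, then read it back to confirm Vault is serving requests
vault write secret/hello value=world
vault read secret/hello
```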

See the Getting Started guide for an introduction to Vault.

Getting started with Nomad & the HashiCorp stack

Use the official Getting Started guides for Nomad, Consul, and Vault to get started with Nomad and its HashiCorp integrations.

Apache Spark integration

Nomad is well-suited for analytical workloads, given its performance characteristics and first-class support for batch scheduling. Apache Spark is a popular data processing engine/framework that has been architected to use third-party schedulers. The Nomad ecosystem includes a fork that natively integrates Nomad with Spark. A detailed walkthrough of the integration is included in the examples/ directory.
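As a taste of what the integration looks like, a Spark job can be submitted with Nomad as the cluster manager via the forked distribution's `spark-submit`. The command below is a hedged sketch: the jar path is a placeholder, and the executor count is arbitrary; see the walkthrough for the exact invocation:

```shell
# Submit the SparkPi example to Nomad (jar path is a placeholder)
spark-submit \
  --class org.apache.spark.examples.SparkPi \
  --master nomad \
  --deploy-mode cluster \
  --conf spark.executor.instances=4 \
  /path/to/spark-examples.jar 100
```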