e2e: provision cluster entirely through Terraform (#8748)

Have Terraform run the target-specific `provision.sh`/`provision.ps1` script
rather than the test runner code, which had to be customized for each
distro. Use Terraform's detection of changed variable values so that we can
re-run provisioning and re-install Nomad only on the hosts that need the
change.

Allow the configuration "profile" (a well-known directory) to be set by a
Terraform variable. The default configurations are installed at Packer build
time and symlinked into the live configuration directory by the provision
script. Detect changes in file contents so that we only upload custom
configuration files that have changed between Terraform runs.
Tim Gross 2020-09-18 11:27:24 -04:00 committed by GitHub
parent c8ce887fb2
commit 9d37233eaf
15 changed files with 647 additions and 157 deletions

View File

@@ -4,69 +4,31 @@ This package contains integration tests. Unlike tests alongside Nomad code,
these tests expect there to already be a functional Nomad cluster accessible
(either on localhost or via the `NOMAD_ADDR` env var).
See [`framework/doc.go`](framework/doc.go) for how to write tests.
The `NOMAD_E2E=1` environment variable must be set for these tests to run.
## Provisioning Test Infrastructure on AWS
The `terraform/` folder has provisioning code to spin up a Nomad cluster on
AWS. You'll need both Terraform and AWS credentials to set up AWS instances on
which e2e tests will run. See the
[README](https://github.com/hashicorp/nomad/blob/master/e2e/terraform/README.md)
for details. The number of servers and clients is configurable, as is the
specific build of Nomad to deploy and the configuration file for each client
and server.
## Provisioning Local Clusters
To run tests against a local cluster, you'll need to make sure the following
environment variables are set:
* `NOMAD_ADDR` should point to one of the Nomad servers
* `CONSUL_HTTP_ADDR` should point to one of the Consul servers
* `NOMAD_E2E=1`
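For example (the `environment` output is defined in `e2e/terraform/outputs.tf`,
and its description suggests this invocation):

```sh
# from the ./e2e/terraform directory, after `terraform apply`:
$(terraform output environment)

# then run the tests from the e2e directory:
cd .. && go test -v .
```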
_TODO: the scripts in `./bin` currently work only with Terraform, it would be
nice for us to have a way to deploy Nomad to Vagrant or local clusters._
## Running
@@ -98,7 +60,7 @@ names in the full name of the tests:
go test -v . -run 'TestE2E/Consul/\*consul\.ScriptChecksE2ETest/TestGroup'
                           ^        ^       ^                   ^
                           |        |       |                   |
                   Component        |       |                   Test func
                                    |       |
                           Go Package       Struct
```
@@ -118,49 +80,38 @@ Run `terraform output` for IP addresses and details.
### ...Deploy a Cluster of Mixed Nomad Versions
The `variables.tf` file describes the `nomad_sha`, `nomad_version`, and
`nomad_local_binary` variables that cover most circumstances. But if you want
to deploy mixed Nomad versions, you can provide a list of versions in your
`terraform.tfvars` file.
For example, if you want to provision 3 servers all using Nomad 0.12.1, and 2
Linux clients using 0.12.1 and 0.12.2, you can use the following variables:
```hcl
# will be used for servers
nomad_version = "0.12.1"
# will override the nomad_version for Linux clients
nomad_version_client_linux = [
"0.12.1",
"0.12.2"
]
```
### ...Deploy Custom Configuration Files
Set the `profile` field to `"custom"` and put the configuration files in
`./terraform/config/custom/` as described in the
[README](https://github.com/hashicorp/nomad/blob/master/e2e/terraform/README.md#Profiles).
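For example, a hypothetical custom override that enables the `raw_exec` driver
on all Linux clients might look like:

```sh
mkdir -p terraform/config/custom/nomad/client-linux
cat <<'EOF' > terraform/config/custom/nomad/client-linux/raw-exec.hcl
# hypothetical example config; files here get symlinked into /etc/nomad.d
client {
  options = {
    "driver.raw_exec.enable" = "1"
  }
}
EOF
```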
### ...Deploy More Than 4 Linux Clients
Use the `"custom"` profile as described above.
### ...Change the Nomad Version After Provisioning
You can update the `nomad_sha` or `nomad_version` variables, or simply rebuild
the binary you have at the `nomad_local_binary` path so that Terraform picks
up the changes. Then run `terraform plan`/`terraform apply` again. This will
update Nomad in place, making the minimum amount of changes necessary.
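For example, a sketch of the local-binary flow, assuming `make dev` at the
repo root places a fresh build at `pkg/linux_amd64/nomad`:

```sh
# from the e2e/terraform directory
(cd ../.. && make dev)
terraform apply -var="nomad_local_binary=../../pkg/linux_amd64/nomad"
```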

View File

@@ -2,16 +2,31 @@ NOMAD_SHA ?= $(shell git rev-parse HEAD)
PKG_PATH = $(shell pwd)/../../pkg/linux_amd64/nomad
dev-cluster:
terraform apply -auto-approve \
-var="nomad_sha=$(NOMAD_SHA)" \
-var-file=terraform.tfvars.dev
terraform output environment
dev-cluster-from-local:
terraform apply -auto-approve \
-var="nomad_local_binary=$(PKG_PATH)" \
-var-file=terraform.tfvars.dev
terraform output environment
clean:
terraform destroy -auto-approve
full-cluster:
terraform apply -auto-approve \
-var="nomad_sha=$(NOMAD_SHA)" \
-var-file=terraform.tfvars
plan-dev-cluster:
terraform plan \
-var="nomad_sha=$(NOMAD_SHA)" \
-var-file=terraform.tfvars.dev
plan-full-cluster:
terraform plan \
-var="nomad_sha=$(NOMAD_SHA)" \
-var-file=terraform.tfvars
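# Example usage (NOMAD_SHA defaults to HEAD, per the top of this file):
#   make plan-dev-cluster                  # review the planned changes
#   make dev-cluster                       # provision a dev cluster
#   make dev-cluster NOMAD_SHA=<full-sha>  # provision a specific pushed commit
#   make clean                             # tear everything down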

View File

@@ -1,16 +1,11 @@
# Terraform infrastructure
This folder contains Terraform resources for provisioning a Nomad cluster on
EC2 instances on AWS to use as the target of end-to-end tests.
Terraform provisions the AWS infrastructure assuming that EC2 AMIs have
already been built via Packer. It deploys a specific build of Nomad to the
cluster along with configuration files for Nomad, Consul, and Vault.
## Setup
@@ -28,6 +23,7 @@ instance_type = "t2.medium"
server_count = "3"
client_count = "4"
windows_client_count = "1"
profile = "dev-cluster"
```
Run Terraform apply to deploy the infrastructure:
@@ -37,6 +33,67 @@ cd e2e/terraform/
terraform apply
```
> Note: You will likely see "Connection refused" or "Permission denied" errors
> in the logs as the provisioning script run by Terraform hits an instance
> where the ssh service isn't yet ready. That's ok and expected; they'll get
> retried. In particular, Windows instances can take a few minutes before ssh
> is ready.
## Nomad Version
You'll need to pass one of the following variables, either in your
`terraform.tfvars` file or as a command line argument (ex. `terraform apply
-var 'nomad_version=0.10.2+ent'`).
* `nomad_local_binary`: provision this specific local binary of Nomad. This is
a path to a Nomad binary on your own host. Ex. `nomad_local_binary =
"/home/me/nomad"`.
* `nomad_sha`: provision this specific sha from S3. This is a Nomad binary
identified by its full commit SHA that's stored in a shared s3 bucket that
Nomad team developers can access. That commit SHA can be from any branch
that's pushed to remote. Ex. `nomad_sha =
"0b6b475e7da77fed25727ea9f01f155a58481b6c"`
* `nomad_version`: provision this version from
[releases.hashicorp.com](https://releases.hashicorp.com/nomad). Ex. `nomad_version
= "0.10.2+ent"`
## Profiles
The `profile` field selects from a set of configuration files for Nomad,
Consul, and Vault by uploading the files found in `./config/<profile>`. The
profiles are as follows:
* `full-cluster`: This profile is used for nightly E2E testing. It assumes at
least 3 servers and includes a unique config for each Nomad client.
* `dev-cluster`: This profile is used for developer testing of a more limited
set of clients. It assumes at least 3 servers but uses one config for all the
Linux Nomad clients and one config for all the Windows Nomad clients.
* `custom`: This profile is used for one-off developer testing of more complex
interactions between features. You can build your own custom profile by
writing config files to the `./config/custom` directory, which are protected
by `.gitignore`.
For each profile, application (Nomad, Consul, Vault), and agent type
(`server`, `client-linux`, or `client-windows`), the agent gets the following
configuration files, ignoring any that are missing:
* `./config/<profile>/<application>/*`: base configurations shared between all
servers and clients.
* `./config/<profile>/<application>/<type>/*`: base configurations shared
between all agents of this type.
* `./config/<profile>/<application>/<type>/indexed/*<index>.<ext>`: a
configuration for that particular agent, where the index value is the index
of that agent within the total count.
For example, with the `full-cluster` profile, the 2nd Nomad server would get
the following configuration files:
* `./config/full-cluster/nomad/base.hcl`
* `./config/full-cluster/nomad/server/indexed/server-1.hcl`
The directory `./config/full-cluster/nomad/server` has no configuration files,
so that's safely skipped.
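For a concrete sense of the layout, a hypothetical `dev-cluster` profile
(illustrative file names) might contain:

```sh
config/
  dev-cluster/
    nomad/
      base.hcl             # shared by every Nomad agent
      server/
        server.hcl         # shared by all servers
      client-linux/
        client.hcl         # shared by all Linux clients
    consul/
      consul.json          # shared by every Consul agent
    vault/
      vault.hcl            # shared by every Vault agent
```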
## Outputs
After deploying the infrastructure, you can get connection information
@@ -50,8 +107,6 @@ about the cluster:
client node IPs.
- `terraform output windows_clients` will output the list of Windows
client node IPs.
## SSH

e2e/terraform/nomad.tf Normal file
View File

@@ -0,0 +1,91 @@
module "nomad_server" {
source = "./provision-nomad"
depends_on = [aws_instance.server]
count = var.server_count
platform = "linux_amd64"
profile = var.profile
role = "server"
index = count.index
# The specific version of Nomad deployed will default to whichever one of
# nomad_sha, nomad_version, or nomad_local_binary is set, but if you want to
# deploy multiple versions you can use the nomad_*_server variables to
# provide a list of builds
nomad_version = count.index < length(var.nomad_version_server) ? var.nomad_version_server[count.index] : var.nomad_version
nomad_sha = count.index < length(var.nomad_sha_server) ? var.nomad_sha_server[count.index] : var.nomad_sha
nomad_local_binary = count.index < length(var.nomad_local_binary_server) ? var.nomad_local_binary_server[count.index] : var.nomad_local_binary
connection = {
type = "ssh"
user = "ubuntu"
host = "${aws_instance.server[count.index].public_ip}"
port = 22
private_key = "${path.root}/keys/${local.random_name}.pem"
}
}
# TODO: split out the different Linux targets (ubuntu, centos, arm, etc.) when
# they're available
module "nomad_client_linux" {
source = "./provision-nomad"
depends_on = [aws_instance.client_linux]
count = var.client_count
platform = "linux_amd64"
profile = var.profile
role = "client-linux"
index = count.index
# The specific version of Nomad deployed will default to whichever one of
# nomad_sha, nomad_version, or nomad_local_binary is set, but if you want to
# deploy multiple versions you can use the nomad_*_client_linux
# variables to provide a list of builds
nomad_version = count.index < length(var.nomad_version_client_linux) ? var.nomad_version_client_linux[count.index] : var.nomad_version
nomad_sha = count.index < length(var.nomad_sha_client_linux) ? var.nomad_sha_client_linux[count.index] : var.nomad_sha
nomad_local_binary = count.index < length(var.nomad_local_binary_client_linux) ? var.nomad_local_binary_client_linux[count.index] : var.nomad_local_binary
connection = {
type = "ssh"
user = "ubuntu"
host = "${aws_instance.client_linux[count.index].public_ip}"
port = 22
private_key = "${path.root}/keys/${local.random_name}.pem"
}
}
# TODO: split out the different Windows targets (2016, 2019) when they're
# available
module "nomad_client_windows" {
source = "./provision-nomad"
depends_on = [aws_instance.client_windows]
count = var.windows_client_count
platform = "windows_amd64"
profile = var.profile
role = "client-windows"
index = count.index
# The specific version of Nomad deployed will default to whichever one of
# nomad_sha, nomad_version, or nomad_local_binary is set, but if you want to
# deploy multiple versions you can use the nomad_*_client_windows
# variables to provide a list of builds
nomad_version = count.index < length(var.nomad_version_client_windows) ? var.nomad_version_client_windows[count.index] : var.nomad_version
nomad_sha = count.index < length(var.nomad_sha_client_windows) ? var.nomad_sha_client_windows[count.index] : var.nomad_sha
connection = {
type = "ssh"
user = "Administrator"
host = "${aws_instance.client_windows[count.index].public_ip}"
port = 22
private_key = "${path.root}/keys/${local.random_name}.pem"
}
}

View File

@@ -35,7 +35,7 @@ EOM
output "environment" {
description = "get connection config by running: $(terraform output environment)"
value = <<EOM
export NOMAD_ADDR=http://${aws_instance.server[0].public_ip}:4646
export CONSUL_HTTP_ADDR=http://${aws_instance.server[0].public_ip}:8500
export NOMAD_E2E=1

View File

@@ -11,7 +11,12 @@ Options (use one of the following):
--nomad_sha SHA full git sha to install from S3
--nomad_version VERSION release version number (ex. 0.12.4+ent)
--nomad_binary FILEPATH path to file on host
Options for configuration:
--config_profile FILEPATH path to config profile directory
--role ROLE role within config profile directory
--index INDEX count of instance, for profiles with per-instance config
--nostart do not start or restart Nomad
EOF
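# For example, Terraform might assemble an invocation like this one
# (illustrative values):
#   /opt/provision.sh --nomad_version 0.12.4 --config_profile dev-cluster --role client-linux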
@@ -25,6 +30,10 @@ PLATFORM=linux_amd64
START=1
install_fn=
NOMAD_PROFILE=
NOMAD_ROLE=
NOMAD_INDEX=
install_from_s3() {
# check that we don't already have this version
if [ "$(command -v nomad)" ]; then
@@ -36,16 +45,13 @@ install_from_s3() {
aws s3 cp --quiet "$S3_URL" nomad.tar.gz
sudo tar -zxvf nomad.tar.gz -C "$INSTALL_DIR"
set_ownership
}
install_from_uploaded_binary() {
# we don't need to check for reinstallation here because we do it at the
# user's end so that we're not copying it up if we don't have to
sudo cp "$NOMAD_UPLOADED_BINARY" "$INSTALL_PATH"
set_ownership
}
install_from_release() {
@@ -59,7 +65,6 @@ install_from_release() {
curl -sL --fail -o /tmp/nomad.zip "$RELEASE_URL"
sudo unzip -o /tmp/nomad.zip -d "$INSTALL_DIR"
set_ownership
}
set_ownership() {
@@ -67,6 +72,45 @@ set_ownership() {
sudo chown root:root "$INSTALL_PATH"
}
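# sym DIR PATTERN DEST: symlink each regular file in DIR matching PATTERN into DEST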
sym() {
find "$1" -maxdepth 1 -type f -name "$2" 2>/dev/null \
| sudo xargs -I % ln -fs % "$3"
}
install_config_profile() {
if [ -d /tmp/custom ]; then
rm -rf /opt/config/custom
sudo mv /tmp/custom /opt/config/
fi
# we're removing the whole directory and recreating to avoid
# any quirks around dotfiles that might show up here.
sudo rm -rf /etc/nomad.d
sudo rm -rf /etc/consul.d
sudo rm -rf /etc/vault.d
sudo mkdir -p /etc/nomad.d
sudo mkdir -p /etc/consul.d
sudo mkdir -p /etc/vault.d
sym "${NOMAD_PROFILE}/nomad/" '*' /etc/nomad.d
sym "${NOMAD_PROFILE}/consul/" '*' /etc/consul.d
sym "${NOMAD_PROFILE}/vault/" '*' /etc/vault.d
if [ -n "$NOMAD_ROLE" ]; then
sym "${NOMAD_PROFILE}/nomad/${NOMAD_ROLE}/" '*' /etc/nomad.d
sym "${NOMAD_PROFILE}/consul/${NOMAD_ROLE}/" '*' /etc/consul.d
sym "${NOMAD_PROFILE}/vault/${NOMAD_ROLE}/" '*' /etc/vault.d
fi
if [ -n "$NOMAD_INDEX" ]; then
sym "${NOMAD_PROFILE}/nomad/${NOMAD_ROLE}/indexed/" "*${NOMAD_INDEX}*" /etc/nomad.d
sym "${NOMAD_PROFILE}/consul/${NOMAD_ROLE}/indexed/" "*${NOMAD_INDEX}*" /etc/consul.d
sym "${NOMAD_PROFILE}/vault/${NOMAD_ROLE}/indexed/" "*${NOMAD_INDEX}*" /etc/vault.d
fi
}
while [[ $# -gt 0 ]]
do
opt="$1"
@@ -89,6 +133,21 @@ opt="$1"
install_fn=install_from_uploaded_binary
shift 2
;;
--config_profile)
if [ -z "$2" ]; then echo "Missing profile parameter"; usage; fi
NOMAD_PROFILE="/opt/config/${2}"
shift 2
;;
--role)
if [ -z "$2" ]; then echo "Missing role parameter"; usage; fi
NOMAD_ROLE="$2"
shift 2
;;
--index)
if [ -z "$2" ]; then echo "Missing index parameter"; usage; fi
NOMAD_INDEX="$2"
shift 2
;;
--nostart)
# for initial packer builds, we don't want to start Nomad
START=0
@@ -98,6 +157,16 @@ opt="$1"
esac
done
# call the appropriate installation function
if [ -n "$install_fn" ]; then
$install_fn
fi
if [ -n "$NOMAD_PROFILE" ]; then
install_config_profile
fi
if [ $START == "1" ]; then
# sudo systemctl restart vault
sudo systemctl restart consul
sudo systemctl restart nomad
fi

View File

@@ -72,6 +72,7 @@ mkdir_for_root $NOMAD_PLUGIN_DIR
sudo mv /tmp/linux/nomad.service /etc/systemd/system/nomad.service
echo "Install Nomad"
sudo mv /tmp/config /opt/
sudo mv /tmp/linux/provision.sh /opt/provision.sh
sudo chmod +x /opt/provision.sh
/opt/provision.sh --nomad_version $NOMADVERSION --nostart

View File

@@ -46,6 +46,11 @@
{
"type": "windows-restart"
},
{
"type": "file",
"source": "../config",
"destination": "/opt"
},
{
"type": "file",
"source": "./windows/provision.ps1",

View File

@@ -19,6 +19,11 @@
"source": "./linux",
"destination": "/tmp/linux"
},
{
"type": "file",
"source": "../config",
"destination": "/tmp/config"
},
{
"type": "shell",
"script": "./linux/setup.sh"

View File

@@ -2,21 +2,27 @@ param(
[string]$nomad_sha,
[string]$nomad_version,
[string]$nomad_binary,
[string]$config_profile,
[string]$role,
[string]$index,
[switch]$nostart = $false
)
Set-StrictMode -Version latest
$ErrorActionPreference = "Stop"
$usage = @"
Usage: provision.ps1 [options...]
Options (use one of the following):
-nomad_sha SHA full git sha to install from S3
-nomad_version VERSION release version number (ex. 0.12.4+ent)
-nomad_binary FILEPATH path to file on host
Options for configuration:
-config_profile FILEPATH path to config profile directory
-role ROLE role within config profile directory
-index INDEX count of instance, for profiles with per-instance config
-nostart do not start or restart Nomad
"@
$RunningAsAdmin = ([Security.Principal.WindowsPrincipal] [Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
@@ -61,10 +67,6 @@ function InstallFromS3 {
New-Item -ItemType Directory -Force -Path C:\opt\nomad.d
New-Item -ItemType Directory -Force -Path C:\opt\nomad
Write-Output "Installed Nomad."
} Catch {
Write-Error "Failed to install Nomad."
$host.SetShouldExit(-1)
@@ -80,10 +82,6 @@ function InstallFromUploadedBinary {
New-Item -ItemType Directory -Force -Path C:\opt\nomad.d
New-Item -ItemType Directory -Force -Path C:\opt\nomad
Write-Output "Installed Nomad."
} Catch {
Write-Error "Failed to install Nomad."
$host.SetShouldExit(-1)
@@ -119,10 +117,6 @@ function InstallFromRelease {
New-Item -ItemType Directory -Force -Path C:\opt\nomad.d
New-Item -ItemType Directory -Force -Path C:\opt\nomad
Write-Output "Installed Nomad."
} Catch {
Write-Error "Failed to install Nomad."
$host.SetShouldExit(-1)
@@ -130,7 +124,48 @@
}
}
function ConfigFiles($src, $dest) {
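# symlink every regular file in $src into $dest; the Windows analog of sym() in provision.sh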
Get-ChildItem -Path "$src" -Name -Attributes !Directory -ErrorAction Ignore`
| ForEach-Object { `
New-Item -ItemType SymbolicLink -Path "${dest}\$_" -Target "${src}\$_" }
}
function InstallConfigProfile {
if ( Test-Path -Path 'C:\tmp\custom' -PathType Container ) {
Remove-Item 'C:\opt\config\custom' -Force -ErrorAction Ignore
Move-Item -Path 'C:\tmp\custom' -Destination 'C:\opt\config\custom' -Force
}
$cfg = "C:\opt\config\${config_profile}"
Remove-Item "C:\opt\nomad.d\*" -Force -ErrorAction Ignore
Remove-Item "C:\opt\consul.d\*" -Force -ErrorAction Ignore
ConfigFiles "${cfg}\nomad" "C:\opt\nomad.d"
ConfigFiles "${cfg}\consul" "C:\opt\consul.d"
if ( "" -ne $role ) {
ConfigFiles "${cfg}\nomad\${role}" "C:\opt\nomad.d"
ConfigFiles "${cfg}\consul\${role}" "C:\opt\consul.d"
}
if ( "" -ne $index ) {
ConfigFiles "${cfg}\nomad\${role}\indexed\*${index}*" "C:\opt\nomad.d"
ConfigFiles "${cfg}\consul\${role}\indexed\*${index}*" "C:\opt\consul.d"
}
}
function CreateConsulService {
New-Service `
-Name "Consul" `
-BinaryPathName "C:\opt\consul.exe agent -config-dir C:\opt\consul.d" `
-StartupType "Automatic" `
-ErrorAction Ignore
}
function CreateNomadService {
New-NetFirewallRule `
-DisplayName 'Nomad HTTP Inbound' `
-Profile @('Public', 'Domain', 'Private') `
@@ -145,21 +180,27 @@ function StartNomad {
-BinaryPathName "C:\opt\nomad.exe agent -config C:\opt\nomad.d" `
-StartupType "Automatic" `
-ErrorAction Ignore
}
if ( "" -ne $nomad_sha ) {
InstallFromS3
CreateNomadService
}
if ( "" -ne $nomad_version ) {
InstallFromRelease
CreateNomadService
}
if ( "" -ne $nomad_binary ) {
InstallFromUploadedBinary
CreateNomadService
}
if ( "" -ne $config_profile) {
InstallConfigProfile
}
if (!($nostart)) {
CreateConsulService
CreateNomadService
Restart-Service "Consul"
Restart-Service "Nomad"
}

View File

@@ -0,0 +1,119 @@
locals {
provision_script = var.platform == "windows_amd64" ? "C:/opt/provision.ps1" : "/opt/provision.sh"
custom_path = abspath("${var.config_path}/custom/")
custom_config_files = compact(setunion(
fileset(local.custom_path, "nomad/*.hcl"),
fileset(local.custom_path, "nomad/${var.role}/*.hcl"),
fileset(local.custom_path, "nomad/${var.role}/indexed/*${var.index}.hcl"),
fileset(local.custom_path, "consul/*.json"),
fileset(local.custom_path, "consul/${var.role}/*.json"),
fileset(local.custom_path, "consul${var.role}indexed/*${var.index}*.json"),
fileset(local.custom_path, "vault/*.hcl"),
fileset(local.custom_path, "vault${var.role}*.hcl"),
fileset(local.custom_path, "vault${var.role}indexed/*${var.index}.hcl"),
))
# abstract-away platform-specific parameter expectations
_arg = var.platform == "windows_amd64" ? "-" : "--"
}
resource "null_resource" "provision_nomad" {
depends_on = [
null_resource.upload_custom_configs,
null_resource.upload_nomad_binary
]
# no need to re-run if nothing changes
triggers = {
script = data.template_file.provision_script.rendered
}
# Run the provisioner as a local-exec'd ssh command as a workaround for
# Windows remote-exec zero-byte scripts bug:
# https://github.com/hashicorp/terraform/issues/25634
#
# The retry behavior and explicit PasswordAuthentication flag here are to
# workaround a race with the Windows userdata script that installs the
# authorized_key. Unfortunately this still results in a bunch of "permission
# denied" errors while waiting for those keys to be configured.
provisioner "local-exec" {
command = "until ssh -o PasswordAuthentication=no -o KbdInteractiveAuthentication=no -o LogLevel=ERROR -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -i ${var.connection.private_key} -p ${var.connection.port} ${var.connection.user}@${var.connection.host} ${data.template_file.provision_script.rendered}; do sleep 5; done"
}
}
data "template_file" "provision_script" {
template = "${local.provision_script}${data.template_file.arg_nomad_sha.rendered}${data.template_file.arg_nomad_version.rendered}${data.template_file.arg_nomad_binary.rendered}${data.template_file.arg_profile.rendered}${data.template_file.arg_role.rendered}${data.template_file.arg_index.rendered}"
}
data "template_file" "arg_nomad_sha" {
template = var.nomad_sha != "" ? " ${local._arg}nomad_sha ${var.nomad_sha}" : ""
}
data "template_file" "arg_nomad_version" {
template = var.nomad_version != "" ? " ${local._arg}nomad_version ${var.nomad_version}" : ""
}
data "template_file" "arg_nomad_binary" {
template = var.nomad_local_binary != "" ? " ${local._arg}nomad_binary ${var.nomad_local_binary}" : ""
}
data "template_file" "arg_profile" {
template = var.profile != "" ? " ${local._arg}config_profile ${var.profile}" : ""
}
data "template_file" "arg_role" {
template = var.role != "" ? " ${local._arg}role ${var.role}" : ""
}
data "template_file" "arg_index" {
template = var.index != "" ? " ${local._arg}index ${var.index}" : ""
}
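# For example (illustrative values), on linux_amd64 the rendered command might be:
#   /opt/provision.sh --nomad_sha 0b6b475e7da77fed25727ea9f01f155a58481b6c --config_profile full-cluster --role server --index 1
# while on windows_amd64 the same template renders with single-dash flags:
#   C:/opt/provision.ps1 -nomad_version 0.12.4 -config_profile full-cluster -role client-windows -index 0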
resource "null_resource" "upload_nomad_binary" {
count = var.nomad_local_binary != "" ? 1 : 0
depends_on = [null_resource.upload_custom_configs]
triggers = {
nomad_binary_sha = filemd5(var.nomad_local_binary)
}
connection {
type = "ssh"
user = var.connection.user
host = var.connection.host
port = var.connection.port
private_key = var.connection.private_key
timeout = "15m"
}
provisioner "file" {
source = var.nomad_local_binary
destination = "/tmp/nomad"
}
}
resource "null_resource" "upload_custom_configs" {
count = var.profile == "custom" ? 1 : 0
triggers = {
hashes = "${join(",", [for file in local.custom_config_files : filemd5(file)])}"
}
connection {
type = "ssh"
user = var.connection.user
host = var.connection.host
port = var.connection.port
private_key = var.connection.private_key
timeout = "15m"
}
provisioner "file" {
source = local.custom_path
destination = "/tmp/"
}
}

View File

@@ -0,0 +1,58 @@
variable "platform" {
type = string
description = "Platform ID (ex. \"linux_amd64\" or \"windows_amd64\")"
default = "linux_amd64"
}
variable "nomad_version" {
type = string
description = "Nomad release version (ex. \"0.10.3\")"
default = ""
}
variable "nomad_sha" {
type = string
description = "Nomad build full SHA (ex. \"fef22bdbfa094b5d076710354275e360867261aa\")"
default = ""
}
variable "nomad_local_binary" {
type = string
description = "Path to local Nomad build (ex. \"/home/me/bin/nomad\")"
default = ""
}
variable "profile" {
type = string
description = "The name of the configuration profile (ex. 'full-cluster')"
default = ""
}
variable "role" {
type = string
description = "The role in the configuration profile for this instance (ex. 'client-linux')"
default = ""
}
variable "index" {
type = string # note that we have string here so we can default to ""
description = "The count of this instance for indexed configurations"
default = ""
}
variable "config_path" {
type = string
description = "The path to the config directory"
default = "../config"
}
variable "connection" {
type = object({
type = string
user = string
host = string
port = number
private_key = string
})
description = "ssh connection information for remote target"
}

View File

@@ -3,3 +3,4 @@ instance_type = "t2.medium"
server_count = "3"
client_count = "4"
windows_client_count = "1"
profile = "full-cluster"

View File

@@ -3,3 +3,8 @@ instance_type = "t2.medium"
server_count = "3"
client_count = "2"
windows_client_count = "0"
profile = "dev-cluster"
# Example overrides:
# nomad_local_binary = "../../pkg/linux_amd/nomad"
# nomad_local_binary_client_windows = ["../../pkg/windows_amd64/nomad.exe"]

View File

@@ -13,11 +13,6 @@ variable "availability_zone" {
default = "us-east-1a"
}
variable "indexed" {
description = "Different configurations per client/server"
default = true
}
variable "instance_type" {
description = "The AWS instance type to use for both clients and servers."
default = "t2.medium"
@@ -38,11 +33,6 @@ variable "windows_client_count" {
default = "1"
}
variable "nomad_sha" {
description = "The sha of Nomad to write to provisioning output"
default = ""
}
variable "aws_assume_role_arn" {
description = "The AWS IAM role to assume (not used by human users)"
default = ""
@@ -57,3 +47,87 @@ variable "aws_assume_role_external_id" {
description = "The AWS IAM external ID to assume (not used by human users)"
default = ""
}
variable "profile" {
description = "A default Nomad/Consul/Vault configuration profile"
type = string
default = ""
}
# ----------------------------------------
# The specific version of Nomad deployed will default to whichever one of
# nomad_sha, nomad_version, or nomad_local_binary is set
variable "nomad_sha" {
description = "The sha of Nomad to provision"
default = ""
}
variable "nomad_version" {
description = "The release version of Nomad to provision"
default = ""
}
variable "nomad_local_binary" {
description = "The path to a local binary to provision"
default = ""
}
# ----------------------------------------
# If you want to deploy multiple versions you can use these variables to
# provide a list of builds to override the values of nomad_sha, nomad_version,
# or nomad_local_binary. Most of the time you can ignore these variables!
variable "nomad_version_server" {
description = "A list of Nomad versions to deploy to servers, to override nomad_version"
type = list(string)
default = []
}
variable "nomad_sha_server" {
description = "A list of Nomad SHAs to deploy to servers, to override nomad_sha"
type = list(string)
default = []
}
variable "nomad_local_binary_server" {
description = "A list of Nomad SHAs to deploy to servers, to override nomad_sha"
type = list(string)
default = []
}
variable "nomad_version_client_linux" {
description = "A list of Nomad versions to deploy to Linux clients, to override nomad_version"
type = list(string)
default = []
}
variable "nomad_sha_client_linux" {
description = "A list of Nomad SHAs to deploy to Linux clients, to override nomad_sha"
type = list(string)
default = []
}
variable "nomad_local_binary_client_linux" {
description = "A list of Nomad SHAs to deploy to Linux clients, to override nomad_sha"
type = list(string)
default = []
}
variable "nomad_version_client_windows" {
description = "A list of Nomad versions to deploy to Windows clients, to override nomad_version"
type = list(string)
default = []
}
variable "nomad_sha_client_windows" {
description = "A list of Nomad SHAs to deploy to Windows clients, to override nomad_sha"
type = list(string)
default = []
}
variable "nomad_local_binary_client_windows" {
description = "A list of Nomad SHAs to deploy to Windows clients, to override nomad_sha"
type = list(string)
default = []
}