Merge branch 'master' of github.com:hashicorp/nomad

This commit is contained in:
Diptanu Choudhury 2016-10-31 17:50:58 -07:00
commit f0afdde81f
19 changed files with 741 additions and 178 deletions

View file

@ -1,12 +1,29 @@
## 0.5.0 (Unreleased)
__BACKWARDS INCOMPATIBILITIES:__
* jobspec: Extracted the disk resources from the task to the task group. The
new block is named `ephemeral_disk`. Nomad will automatically convert
existing jobs, but newly submitted jobs should use the new block for disk
resources [GH-1710, GH-1679]
IMPROVEMENTS:
* core: Support for gossip encryption [GH-1791]
* core: Introduce node SecretID which can be used to minimize the available
surface area of RPCs to malicious Nomad Clients [GH-1597]
* core: Vault integration to handle secure introduction of tasks [GH-1583,
GH-1713]
* core: New `set_contains` constraint to determine if a set contains all
specified values [GH-1839]
* core: Scheduler version enforcement disallows different scheduler version
from making decisions simultaneously [GH-1872]
* core: Add `sticky` volumes which inform the scheduler to prefer placing
updated allocations on the same node and to reuse the `local/` and
`alloc/data` directory from previous allocation allowing semi-persistent
data and allow those folders to be synced from a remote node [GH-1654,
GH-1741]
* agent: Add DataDog telemetry sink [GH-1816]
* agent: Allow Consul health checks to use bind address rather than advertise
[GH-1866]
* api: Support TLS for encrypting Raft, RPC and HTTP APIs [GH-1853]
* api: Implement blocking queries for querying a job's evaluations [GH-1892]
* cli: `nomad alloc-status` shows allocation creation time [GH-1623]
@ -15,28 +32,44 @@ IMPROVEMENTS:
* client: Enforce shared allocation directory disk usage [GH-1580]
* client: Introduce a `secrets/` directory to tasks where sensitive data can
be written [GH-1681]
* client/jobspec: Add support for templates that can render static files,
dynamic content from Consul and secrets from Vault [GH-1783]
* driver: Export `NOMAD_JOB_NAME` environment variable [GH-1804]
* driver/docker: Docker For Mac support [GH-1806]
* driver/docker: Support Docker volumes [GH-1767]
* driver/docker: Allow Docker logging to be configured [GH-1767]
* driver/lxc: Support for LXC containers [GH-1699]
* driver/rkt: Support network configurations [GH-1862]
* driver/rkt: Support rkt volumes (rkt >= 1.0.0 required) [GH-1812]
BUG FIXES:
* core: Fix case where dead nodes were not properly handled by System
scheduler [GH-1715]
* agent: Handle the SIGPIPE signal preventing panics on journalctl restarts
[GH-1802]
* api: Disallow filesystem APIs to read paths that escape the allocation
directory [GH-1786]
* cli: `nomad run` failed to run on Windows [GH-1690]
* cli: `alloc-status` and `node-status` work without access to task stats
[GH-1660]
* cli: `alloc-status` does not query for allocation statistics if node is down
[GH-1844]
* client: Folder permissions are dropped even when not running as root [GH-1888]
* client: Prevent race when persisting state file [GH-1682]
* client: Retry recoverable errors when starting a driver [GH-1891]
* client: Fix old services not getting removed from consul on update [GH-1668]
* client: Artifact download failures will be retried before failing tasks
[GH-1558]
* client: Fix a crash related to stats publishing when driver hasn't started
yet [GH-1723]
* client: Fix a memory leak in the executor that caused failed allocations
[GH-1762]
* client: Chroot environment is only created once, avoid potential filesystem
errors [GH-1753]
* client: Failures to download an artifact are retried according to restart
policy before failing the allocation [GH-1653]
* client/executor: Prevent race when updating a job configuration with the
logger [GH-1886]
* client/fingerprint: Fix inconsistent CPU MHz fingerprinting [GH-1366]
* discovery: Fix old services not getting removed from Consul on update
[GH-1668]
@ -269,7 +302,7 @@ BUG FIXES:
[GH-906]
* consul: Remove concurrent map access [GH-874]
* driver/exec: Stopping tasks with more than one pid in a cgroup [GH-855]
* executor/linux: Add /run/resolvconf/ to chroot so DNS works [GH-905]
* client/executor/linux: Add /run/resolvconf/ to chroot so DNS works [GH-905]
## 0.3.0 (February 25, 2016)

View file

@ -148,9 +148,70 @@ job "example" {
}
}
# The "artifact" stanza instructs Nomad to download an artifact from a
# remote source prior to starting the task. This provides a convenient
# mechanism for downloading configuration files or data needed to run the
# task. It is possible to specify the "artifact" stanza multiple times to
# download multiple artifacts.
#
# For more information and examples on the "artifact" stanza, please see
# the online documentation at:
#
# https://www.nomadproject.io/docs/job-specification/artifact.html
#
# artifact {
# source = "http://foo.com/artifact.tar.gz"
# options {
# checksum = "md5:c4aa853ad2215426eb7d70a21922e794"
# }
# }
# The "logs" stanza instructs the Nomad client on how many log files and
# the maximum size of those logs files to retain. Logging is enabled by
# default, but the "logs" stanza allows for finer-grained control over
# the log rotation and storage configuration.
#
# For more information and examples on the "logs" stanza, please see
# the online documentation at:
#
# https://www.nomadproject.io/docs/job-specification/logs.html
#
# logs {
# max_files = 10
# max_file_size = 15
# }
# The "resources" stanza describes the requirements a task needs to
# execute. Resource requirements include memory, disk space, network,
# cpu, and more. This ensures the task will execute on a machine that
# contains enough resource capacity.
#
# For more information and examples on the "resources" stanza, please see
# the online documentation at:
#
# https://www.nomadproject.io/docs/job-specification/resources.html
#
resources {
cpu = 500 # 500 MHz
memory = 256 # 256MB
network {
mbits = 10
port "db" {}
}
}
# The "service" stanza instructs Nomad to register this task as a service
# in the service discovery engine, which is currently Consul. This will
# make the service addressable after Nomad has placed it on a host and
# port.
#
# For more information and examples on the "service" stanza, please see
# the online documentation at:
#
# https://www.nomadproject.io/docs/job-specification/service.html
#
service {
name = "global-redis-check"
tags = ["global", "cache"]
port = "db"
check {
@ -161,31 +222,38 @@ job "example" {
}
}
# The "template" stanza instructs Nomad to manage a template, such as
# a configuration file or script. This template can optionally pull data
# from Consul or Vault to populate runtime configuration data.
#
# For more information and examples on the "template" stanza, please see
# the online documentation at:
#
# https://www.nomadproject.io/docs/job-specification/template.html
#
# template {
# data = "---\nkey: {{ key \"service/my-key\" }}"
# destination = "local/file.yml"
# change_mode = "signal"
# change_signal = "SIGHUP"
# }
# The "vault" stanza instructs the Nomad client to acquire a token from
# a HashiCorp Vault server. The Nomad servers must be configured and
# authorized to communicate with Vault. By default, Nomad will inject
# the token into the job via an environment variable and make the token
# available to the "template" stanza. The Nomad client handles the renewal
# and revocation of the Vault token.
#
# For more information and examples on the "vault" stanza, please see
# the online documentation at:
#
# https://www.nomadproject.io/docs/job-specification/vault.html
#
# vault {
# policies = ["cdn", "frontend"]
# change_mode = "signal"
# change_signal = "SIGHUP"
# }
# Controls the timeout between signalling a task it will be killed

View file

@ -61,8 +61,7 @@ before starting the task.
## `artifact` Examples
The following examples only show the `artifact` stanzas. Remember that the
`artifact` stanza is only valid in the placements listed above.
### Download File

View file

@ -157,8 +157,7 @@ constraint {
## `constraint` Examples
The following examples only show the `constraint` stanzas. Remember that the
`constraint` stanza is only valid in the placements listed above.
### Kernel Data

View file

@ -41,8 +41,8 @@ automatically be converted to strings.
## `env` Examples
The following examples only show the `env` stanzas. Remember that the
`env` stanza is only valid in the placements listed above.
### Coercion

View file

@ -51,8 +51,8 @@ job "docs" {
## `group` Examples
The following examples only show the `group` stanzas. Remember that the
`group` stanza is only valid in the placements listed above.
### Specifying Count
@ -99,8 +99,8 @@ group "example" {
}
```
[task]: /docs/job-specification/task.html "Nomad task Job Specification"
[job]: /docs/job-specification/job.html "Nomad job Job Specification"
[constraint]: /docs/job-specification/constraint.html "Nomad constraint Job Specification"
[meta]: /docs/job-specification/meta.html "Nomad meta Job Specification"
[restart]: /docs/job-specification/restart.html "Nomad restart Job Specification"

View file

@ -33,6 +33,10 @@ job "docs" {
"my-key" = "my-value"
}
periodic {
# ...
}
priority = 100
region = "north-america"
@ -44,10 +48,6 @@ job "docs" {
update {
# ...
}
}
```
@ -87,8 +87,21 @@ job "docs" {
- `update` <code>([Update][update]: nil)</code> - Specifies the task's update
strategy. When omitted, rolling updates are disabled.
- `vault_token` `(string: "")` - Specifies the Vault token that proves the
submitter of the job has access to the specified policies in the
[`vault`][vault] stanza. This field is only used to transfer the token and is
not stored after job submission.
!> It is **strongly discouraged** to place the token as a configuration
parameter like this, since the token could be checked into source control
accidentally. Users should set the `VAULT_TOKEN` environment variable when
running the job instead.
## `job` Examples
The following examples only show the `job` stanzas. Remember that the
`job` stanza is only valid in the placements listed above.
### Docker Container
This example job starts a Docker container which runs as a service. Even though
@ -138,18 +151,61 @@ job "docs" {
}
resources {
cpu = 20
}
}
}
}
```
### Consuming Secrets
This example shows a job which retrieves secrets from Vault and writes those
secrets to a file on disk, which the application then consumes. Nomad handles
all interactions with Vault.
```hcl
job "docs" {
datacenters = ["default"]
group "example" {
task "cat" {
driver = "exec"
config {
command = "cat"
args = ["local/secrets.txt"]
}
template {
data = "{{ secret \"secret/data\" }}"
destination = "local/secrets.txt"
}
vault {
policies = ["secret-readonly"]
}
resources {
cpu = 20
}
}
}
}
```
When submitting this job, you would run:
```
$ VAULT_TOKEN="..." nomad run example.nomad
```
[constraint]: /docs/job-specification/constraint.html "Nomad constraint Job Specification"
[group]: /docs/job-specification/group.html "Nomad group Job Specification"
[meta]: /docs/job-specification/meta.html "Nomad meta Job Specification"
[periodic]: /docs/job-specification/periodic.html "Nomad periodic Job Specification"
[task]: /docs/job-specification/task.html "Nomad task Job Specification"
[update]: /docs/job-specification/update.html "Nomad update Job Specification"
[vault]: /docs/job-specification/vault.html "Nomad vault Job Specification"
[scheduler]: /docs/runtime/schedulers.html "Nomad Scheduler Types"

View file

@ -54,8 +54,7 @@ For information on how to interact with logs after they have been configured, pl
## `logs` Examples
The following examples only show the `logs` stanzas. Remember that the
`logs` stanza is only valid in the placements listed above.
### Configure Defaults
@ -79,4 +78,4 @@ logs {
}
```
[logs-command]: /docs/commands/logs.html "Nomad logs command"

View file

@ -56,8 +56,8 @@ automatically be converted to strings.
## `meta` Examples
The following examples only show the `meta` stanzas. Remember that the
`meta` stanza is only valid in the placements listed above.
### Coercion

View file

@ -81,8 +81,7 @@ The label of the port is just text - it has no special meaning to Nomad.
## `network` Examples
The following examples only show the `network` stanzas. Remember that the
`network` stanza is only valid in the placements listed above.
### Bandwidth

View file

@ -48,8 +48,7 @@ consistent evaluation when Nomad spans multiple time zones.
## `periodic` Examples
The following examples only show the `periodic` stanzas. Remember that the
`periodic` stanza is only valid in the placements listed above.
### Run Daily

View file

@ -60,8 +60,7 @@ job "docs" {
## `resources` Examples
The following examples only show the `resources` stanzas. Remember that the
`resources` stanza is only valid in the placements listed above.
### Disk Space

View file

@ -0,0 +1,276 @@
---
layout: "docs"
page_title: "service Stanza - Job Specification"
sidebar_current: "docs-job-specification-service"
description: |-
The "service" stanza instructs Nomad to register the task as a service using
the service discovery integration.
---
# `service` Stanza
<table class="table table-bordered table-striped">
<tr>
<th width="120">Placement</th>
<td>
<code>job -> group -> task -> **service**</code>
</td>
</tr>
</table>
The `service` stanza instructs Nomad to register the task as a service using the
service discovery integration. This section of the documentation will discuss the
configuration, but please also read the
[Nomad service discovery documentation][service-discovery] for more detailed
information about the integration.
```hcl
job "docs" {
group "example" {
task "server" {
service {
tags = ["leader", "mysql"]
port = "db"
check {
type = "tcp"
port = "db"
interval = "10s"
timeout = "2s"
}
check {
type = "script"
name = "check_table"
command = "/usr/local/bin/check_mysql_table_status"
args = ["--verbose"]
interval = "60s"
timeout = "5s"
}
}
}
}
}
```
This section of the documentation only covers the job file options for
configuring service discovery. For more information on the setup and
configuration to integrate Nomad with service discovery, please see the
[Nomad service discovery documentation][service-discovery]. There are steps you
must take to configure Nomad. Simply adding this configuration to your job file
does not automatically enable service discovery.
## `service` Parameters
- `check` <code>([Check](#check-parameters): nil)</code> - Specifies a health
check associated with the service. This can be specified multiple times to
define multiple checks for the service. At this time, Nomad supports the
`script`<sup><small>1</small></sup>, `http` and `tcp` checks.
- `name` `(string: "<job>-<group>-<task>")` - Specifies the name of this
service. If not supplied, this will default to the name of the job, group, and
task concatenated together with a dash, like `"docs-example-server"`. Each
service must have a unique name within the cluster. Names must adhere to
[RFC-1123 §2.1](https://tools.ietf.org/html/rfc1123#section-2) and are limited
to alphanumeric and hyphen characters (i.e. `[a-z0-9\-]`), and be less than 64
characters in length.
In addition to the standard [Nomad interpolation][interpolation], the
following keys are also available:
- `${JOB}` - the name of the job
- `${GROUP}` - the name of the group
- `${TASK}` - the name of the task
- `${BASE}` - shorthand for `${JOB}-${GROUP}-${TASK}`
- `port` `(string: required)` - Specifies the label of the port on which this
service is running. Note this is the _label_ of the port and not the port
number. The port label must match one defined in the [`network`][network]
stanza.
- `tags` `(array<string>: [])` - Specifies the list of tags to associate with
this service. If this is not supplied, no tags will be assigned to the service
when it is registered.
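As an illustration of the parameters above, a service can combine an
interpolated `name` with tags and a port label (a hypothetical sketch; the
"db" label is assumed to be defined in the task's [`network`][network] stanza):

```hcl
service {
  # ${TASK} interpolates to the task name, so a task named "server"
  # would register a service named "server-cache"
  name = "${TASK}-cache"
  tags = ["global", "cache"]
  port = "db"
}
```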
### `check` Parameters
- `args` `(array<string>: [])` - Specifies additional arguments to the
`command`. This only applies to script-based health checks.
- `command` `(string: <varies>)` - Specifies the command to run for performing
the health check. The script must exit 0 for passing, 1 for warning, or any
other value for a failing health check. This is required for script-based
health checks.
~> **Caveat:** The command must be the path to the command on disk, and no
shell exists by default. That means operators like `||` or `&&` are not
available. Additionally, all arguments must be supplied via the `args`
parameter. To achieve the behavior of shell operators, specify the command
as a shell, like `/bin/bash`, and then use `args` to run the check.
- `initial_status` `(string: <enum>)` - Specifies the originating status of the
service. Valid options are the empty string, `passing`, `warning`, and
`critical`.
- `interval` `(string: <required>)` - Specifies the frequency of the health checks
that Consul will perform. This is specified using a label suffix like "30s"
or "1h". This must be greater than or equal to "1s".
- `name` `(string: "service: <name> check")` - Specifies the name of the health
check.
- `path` `(string: <varies>)` - Specifies the path of the HTTP endpoint which
Consul will query to determine the health of a service. Nomad will automatically
add the IP of the service and the port, so this is just the relative URL to
the health check endpoint. This is required for http-based health checks.
- `port` `(string: <required>)` - Specifies the label of the port on which the
check will be performed. Note this is the _label_ of the port and not the port
number. The port label must match one defined in the [`network`][network]
stanza. If a port value was declared on the `service`, this will inherit from
that value if not supplied. If supplied, this value takes precedence over the
`service.port` value. This is useful for services which operate on multiple
ports.
- `protocol` `(string: "http")` - Specifies the protocol for the http-based
health checks. Valid options are `http` and `https`.
- `timeout` `(string: <required>)` - Specifies how long Consul will wait for a
health check query to succeed. This is specified using a label suffix like
"30s" or "1h". This must be greater than or equal to "1s".
- `type` `(string: <required>)` - This indicates the check types supported by
Nomad. Valid options are `script`, `http`, and `tcp`.
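Taken together, a minimal `tcp` check needs only a type, a port label, an
interval, and a timeout (a sketch mirroring the example at the top of this
page):

```hcl
check {
  type     = "tcp"
  port     = "db"
  interval = "10s"
  timeout  = "2s"
}
```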
## `service` Examples
The following examples only show the `service` stanzas. Remember that the
`service` stanza is only valid in the placements listed above.
### Basic Service
This example registers a service named "load-balancer" with no health checks.
```hcl
service {
name = "load-balancer"
port = "lb"
}
```
This example must be accompanied by a [`network`][network] stanza which defines
a static or dynamic port labeled "lb". For example:
```hcl
resources {
network {
mbits = 10
port "lb" {}
}
}
```
### Check with Bash-isms
This example shows a common mistake and correct behavior for custom checks.
Suppose a health check like this:
```shell
$ test -f /tmp/file.txt
```
In this example, `test` is not actually a command (binary) on the system; it is
a shell built-in in bash. Thus, the following **would not work**:
```hcl
service {
check {
type = "script"
command = "test -f /tmp/file.txt" # THIS IS NOT CORRECT
}
}
```
Nomad will attempt to find an executable named `test` on your system, but no
such binary exists; `test` is a bash built-in. Additionally, it is not possible
to specify the arguments in a single string. Here is the correct solution:
```hcl
service {
check {
type = "script"
command = "/bin/bash"
args = ["-c", "test -f /tmp/file.txt"]
}
}
```
The `command` is actually `/bin/bash`, since that is the process we are
running. The arguments to that command make up the script itself, with each
argument provided as a value in the `args` array.
### HTTP Health Check
This example shows a service with an HTTP health check. This will query the
service on the IP and port registered with Nomad at `/_healthz` every 5 seconds,
giving the service a maximum of 2 seconds to return a response. Any non-2xx code
is considered a failure.
```hcl
service {
check {
type = "http"
port = "lb"
path = "/_healthz"
interval = "5s"
timeout = "2s"
}
}
```
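The `protocol` and `initial_status` parameters described above can refine such
a check; for example, a hypothetical check that queries over HTTPS and starts
in the `critical` state until its first probe passes:

```hcl
check {
  type           = "http"
  protocol       = "https"
  port           = "lb"
  path           = "/_healthz"
  interval       = "5s"
  timeout        = "2s"
  initial_status = "critical"
}
```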
### Multiple Health Checks
This example shows a service with multiple health checks defined. All health
checks must be passing in order for the service to register as healthy.
```hcl
service {
check {
type = "http"
port = "lb"
path = "/_healthz"
interval = "5s"
timeout = "2s"
}
check {
type = "http"
protocol = "https"
port = "lb"
path = "/_healthz"
interval = "5s"
timeout = "2s"
}
check {
type = "script"
command = "/usr/local/bin/pg-tools"
args = ["verify", "database", "prod", "up"]
interval = "5s"
timeout = "2s"
}
}
```
- - -
<sup><small>1</small></sup><small> Script checks are not supported for the
[qemu driver][qemu] since the Nomad client does not have access to the file
system of a task for that driver.</small>
[service-discovery]: /docs/service-discovery/index.html "Nomad Service Discovery"
[interpolation]: /docs/runtime/interpolation.html "Nomad Runtime Interpolation"
[network]: /docs/job-specification/network.html "Nomad network Job Specification"
[qemu]: /docs/drivers/qemu.html "Nomad qemu Driver"

View file

@ -68,6 +68,9 @@ job "docs" {
## `task` Examples
The following examples only show the `task` stanzas. Remember that the
`task` stanza is only valid in the placements listed above.
### Docker Container
This example defines a task that starts a Docker container as a service. Docker
@ -111,7 +114,7 @@ task "server" {
}
resources {
cpu = 20
}
}
```
@ -140,7 +143,7 @@ task "server" {
}
resources {
cpu = 20
}
}
```

View file

@ -0,0 +1,125 @@
---
layout: "docs"
page_title: "template Stanza - Job Specification"
sidebar_current: "docs-job-specification-template"
description: |-
The "template" block instantiates an instance of a template renderer. This
creates a convenient way to ship configuration files that are populated from
Consul data, Vault secrets, or just general configurations within a Nomad
task.
---
# `template` Stanza
<table class="table table-bordered table-striped">
<tr>
<th width="120">Placement</th>
<td>
<code>job -> group -> task -> **template**</code>
</td>
</tr>
</table>
The `template` block instantiates an instance of a template renderer. This
creates a convenient way to ship configuration files that are populated from
Consul data, Vault secrets, or just general configurations within a Nomad task.
```hcl
job "docs" {
group "example" {
task "server" {
template {
source = "local/redis.conf.tpl"
destination = "local/redis.conf"
change_mode = "signal"
change_signal = "SIGINT"
}
}
}
}
```
Nomad utilizes a tool called [Consul Template][ct]. For a full list of the
API template functions, please see the [Consul Template README][ct].
## `template` Parameters
- `source` `(string: "")` - Specifies the path to the template to be rendered.
One of `source` or `data` must be specified, but not both. This source can
optionally be fetched using an [`artifact`][artifact] resource. This template
must exist on the machine prior to starting the task; it is not possible to
reference a template inside of a Docker container, for example.
- `destination` `(string: required)` - Specifies the location where the
resulting template should be rendered, relative to the task directory.
- `data` `(string: "")` - Specifies the raw template to execute. One of `source`
or `data` must be specified, but not both. This is useful for smaller
templates, but we recommend using `source` for larger templates.
- `change_mode` `(string: "restart")` - Specifies the behavior Nomad should take
if the rendered template changes. The possible values are:
- `"noop"` - take no action (continue running the task)
- `"restart"` - restart the task
- `"signal"` - send a configurable signal to the task
- `change_signal` `(string: "")` - Specifies the signal to send to the task as a
string like `"SIGUSR1"` or `"SIGINT"`. This option is required if the
`change_mode` is `signal`.
- `splay` `(string: "5s")` - Specifies a random amount of time to wait between
0ms and the given splay value before invoking the change mode. This is
specified using a label suffix like "30s" or "1h", and is often used to
prevent a thundering herd problem where all task instances restart at the same
time.
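Putting these parameters together, a template that signals its task on change
and staggers reloads across instances might look like this (a sketch; the
source path and signal are assumptions):

```hcl
template {
  source        = "local/redis.conf.tpl"
  destination   = "local/redis.conf"
  change_mode   = "signal"
  change_signal = "SIGHUP"
  splay         = "30s" # wait 0-30s before signaling to avoid a thundering herd
}
```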
## `template` Examples
The following examples only show the `template` stanzas. Remember that the
`template` stanza is only valid in the placements listed above.
### Inline Template
This example uses an inline template to render a file to disk. This file watches
various keys in Consul for changes:
```hcl
template {
data = "---\nkey: {{ key \"service/my-key\" }}"
destination = "local/file.yml"
}
```
It is also possible to use heredocs for multi-line templates, like:
```hcl
template {
data = <<EOH
---
key: {{ key "service/my-key" }}
EOH
destination = "local/file.yml"
}
```
### Remote Template
This example uses an [`artifact`][artifact] stanza to download an input template
before passing it to the template engine:
```hcl
artifact {
source = "https://example.com/file.yml.tpl"
destination = "local/file.yml.tpl"
}
template {
source = "local/file.yml.tpl"
destination = "local/file.yml"
}
```
[ct]: https://github.com/hashicorp/consul-template "Consul Template by HashiCorp"
[artifact]: /docs/job-specification/artifact.html "Nomad artifact Job Specification"

View file

@ -43,8 +43,7 @@ job "docs" {
## `update` Examples
The following examples only show the `update` stanzas. Remember that the
`update` stanza is only valid in the placements listed above.
### Serial Upgrades

View file

@ -0,0 +1,96 @@
---
layout: "docs"
page_title: "vault Stanza - Job Specification"
sidebar_current: "docs-job-specification-vault"
description: |-
The "vault" stanza allows the task to specify that it requires a token from a
HashiCorp Vault server. Nomad will automatically retrieve a Vault token for
the task and handle token renewal for the task.
---
# `vault` Stanza
<table class="table table-bordered table-striped">
<tr>
<th width="120">Placement</th>
<td>
<code>job -> group -> task -> **vault**</code>
</td>
</tr>
</table>
The `vault` stanza allows the task to specify that it requires a token from a
[HashiCorp Vault][vault] server. Nomad will automatically retrieve a
Vault token for the task and handle token renewal for the task.
```hcl
job "docs" {
group "example" {
task "server" {
vault {
policies = ["cdn", "frontend"]
change_mode = "signal"
change_signal = "SIGUSR1"
}
}
}
}
```
If a `vault` stanza is specified, the [`template`][template] stanza can interact
with Vault as well.
## `vault` Parameters
- `change_mode` `(string: "restart")` - Specifies the behavior Nomad should take
if the Vault token changes. The possible values are:
- `"noop"` - take no action (continue running the task)
- `"restart"` - restart the task
- `"signal"` - send a configurable signal to the task
- `change_signal` `(string: "")` - Specifies the signal to send to the task as a
string like `"SIGUSR1"` or `"SIGINT"`. This option is required if the
`change_mode` is `signal`.
- `env` `(bool: true)` - Specifies if the `VAULT_TOKEN` environment variable
should be set when starting the task.
- `policies` `(array<string>: [])` - Specifies the set of Vault policies that
the task requires. The Nomad client will generate a Vault token that is
limited to those policies.
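For example, the `env` parameter above can keep the token out of the task's
environment entirely, relying on the `template` stanza for access (a
hypothetical sketch):

```hcl
vault {
  policies = ["secret-readonly"]
  env      = false # do not set VAULT_TOKEN in the task environment
}
```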
## `vault` Examples
The following examples only show the `vault` stanzas. Remember that the
`vault` stanza is only valid in the placements listed above.
### Retrieve Token
This example tells the Nomad client to retrieve a Vault token. The token is
available to the task via the canonical environment variable `VAULT_TOKEN`. The
resulting token will have the "frontend" Vault policy attached.
```hcl
vault {
policies = ["frontend"]
}
```
### Signal Task
This example shows signaling the task instead of restarting it.
```hcl
vault {
policies = ["frontend"]
change_mode = "signal"
change_signal = "SIGINT"
}
```
[restart]: /docs/job-specification/restart.html "Nomad restart Job Specification"
[template]: /docs/job-specification/template.html "Nomad template Job Specification"
[vault]: https://www.vaultproject.io/ "Vault by HashiCorp"

View file

@ -11,8 +11,8 @@ description: |-
Nomad schedules workloads of various types across a cluster of generic hosts.
Because of this, placement is not known in advance and you will need to use
service discovery to connect tasks to other services deployed across your
cluster. Nomad integrates with [Consul][] to provide service discovery and
monitoring.
Note that in order to use Consul with Nomad, you will need to configure and
install Consul on your nodes alongside Nomad, or schedule it as a system job.
@ -20,128 +20,29 @@ Nomad does not currently run Consul for you.
## Configuration
To enable Consul integration, please see the
[Nomad agent Consul integration](/docs/agent/config.html#consul_options)
configuration.
## Service Definition Syntax
The service block in a task definition defines a service which Nomad will
register with Consul. Multiple service blocks are allowed in a task definition,
allowing a task that exposes multiple ports to register multiple services.
### Example
A brief example of a service definition in a task:
```hcl
group "database" {
task "mysql" {
driver = "docker"
service {
tags = ["master", "mysql"]
port = "db"
check {
type = "tcp"
interval = "10s"
timeout = "2s"
}
check {
type = "script"
name = "check_table"
command = "/usr/local/bin/check_mysql_table_status"
args = ["--verbose"]
interval = "60s"
timeout = "5s"
}
}
resources {
cpu = 500
memory = 1024
network {
mbits = 10
port "db" {}
}
}
}
}
```
* `name`: An explicit name for the service. Nomad will replace `${JOB}`,
  `${TASKGROUP}` and `${TASK}` with the name of the job, task group or task,
  respectively. `${BASE}` expands to the equivalent of
  `${JOB}-${TASKGROUP}-${TASK}`, and is the default name for a service.
  Each service defined for a given task must have a distinct name, so if
  a task has multiple services only one of them can use the default name
  and the others must be explicitly named. Names must adhere to
  [RFC-1123 §2.1](https://tools.ietf.org/html/rfc1123#section-2), are
  limited to alphanumeric and hyphen characters (i.e. `[a-z0-9\-]`), and
  must be fewer than 64 characters in length.
* `tags`: A list of tags associated with this Service. String interpolation is
supported in tags.
* `port`: Optional; associates a port with the service. If specified, the port
  label must match one defined in the resources block. This can be the label
  of either a dynamic or a static port.
* `check`: A check block defines a health check associated with the service.
  Multiple check blocks are allowed for a service. Nomad supports the `script`,
  `http` and `tcp` Consul checks. Script checks are not supported for the
  `qemu` driver since the Nomad client does not have access to the file system
  of a task using the Qemu driver.
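As a sketch of the naming rule above (the port labels here are assumptions), a task exposing two ports can register two services, only one of which may keep the default `${BASE}` name:

```hcl
service {
  # No name given: defaults to ${JOB}-${TASKGROUP}-${TASK}
  port = "http"
}

service {
  # A second service on the same task must be named explicitly
  name = "${BASE}-admin"
  port = "admin"
}
```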
### Check Syntax
* `type`: The type of the check. Valid options are currently `script`, `http`
  and `tcp`.
* `name`: The name of the health check.
* `interval`: The frequency with which Consul will perform the health check.
* `timeout`: How long Consul will wait for a health check query to succeed.
* `path`: The path of the HTTP endpoint which Consul will query to determine
  the health of a service when the check type is `http`. Nomad will add the IP
  of the service and the port; users only need to supply the relative URL of
  the health check endpoint.
* `protocol`: The protocol for HTTP checks. Valid options are `http` and
  `https`. Defaults to `http`.
* `port`: The label of the port on which the check will be performed. The label
specified here has to be defined in the resource block of the task.
* `command`: The command that the Nomad client runs to perform script-based
  health checks.
* `args`: Additional arguments to the `command` for script based health checks.
* `initial_status`: An optional parameter to set the initial status of the
  check. Valid options are the empty string, `passing`, `warning`, and
  `critical`.
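Putting these parameters together, a hypothetical `http` check (the `path` value and port label are assumptions for illustration) might look like:

```hcl
service {
  port = "http"

  check {
    type           = "http"
    path           = "/_health"  # Nomad adds the IP and port automatically
    protocol       = "http"      # the default; "https" is also valid
    interval       = "10s"
    timeout        = "2s"
    initial_status = "passing"
  }
}
```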
To configure a job to register with service discovery, please see the
[`service` job specification documentation][service].
## Assumptions
- Consul 0.6.4 or later is needed for using the Script checks.
- Consul 0.6.0 or later is needed for using the TCP checks.
- The service discovery feature in Nomad depends on operators making sure that
  the Nomad client can reach the Consul agent.
- Nomad assumes that it controls the life cycle of all the externally
  discoverable services running on a host.
- Tasks running inside Nomad also need to reach out to the Consul agent if
  they want to use any of the Consul APIs. For example, a task running inside a
  Docker container in bridge mode won't be able to talk to a Consul agent
  running on the loopback interface of the host since the container in the bridge mode
@@ -149,3 +50,6 @@ group "database" {
network namespace of the host. There are a couple of ways to solve this: one
is to run the container in host networking mode; another is to make the Consul
agent listen on an interface in the network namespace of the container.
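The host-networking workaround above can be sketched with the Docker driver (the task name and image are assumptions; `network_mode` is the Docker driver's option for selecting the container network):

```hcl
task "cache" {
  driver = "docker"

  config {
    image = "redis:3.2"

    # Share the host's network namespace so the task can reach a Consul
    # agent listening on the host's loopback interface
    network_mode = "host"
  }
}
```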
[consul]: https://www.consul.io/ "Consul by HashiCorp"
[service]: /docs/job-specification/service.html "Nomad service Job Specification"
@@ -75,12 +75,21 @@
<li<%= sidebar_current("docs-job-specification-restart")%>>
<a href="/docs/job-specification/restart.html">restart</a>
</li>
<li<%= sidebar_current("docs-job-specification-service")%>>
<a href="/docs/job-specification/service.html">service</a>
</li>
<li<%= sidebar_current("docs-job-specification-task")%>>
<a href="/docs/job-specification/task.html">task</a>
</li>
<li<%= sidebar_current("docs-job-specification-template")%>>
<a href="/docs/job-specification/template.html">template</a>
</li>
<li<%= sidebar_current("docs-job-specification-update")%>>
<a href="/docs/job-specification/update.html">update</a>
</li>
<li<%= sidebar_current("docs-job-specification-vault")%>>
<a href="/docs/job-specification/vault.html">vault</a>
</li>
</ul>
</li>