Fix formatting

Seth Vargo 2016-10-03 18:23:50 -04:00
parent 6dea6df919
commit c3f06658b6
GPG Key ID: 905A90C2949E8787
5 changed files with 471 additions and 418 deletions


@@ -21,55 +21,55 @@ environment variables.
<th>Description</th>
</tr>
<tr>
<td><tt>NOMAD_ALLOC_DIR</tt></td>
<td>Path to the shared alloc directory</td>
</tr>
<tr>
<td><tt>NOMAD_TASK_DIR</tt></td>
<td>Path to the local task directory</td>
</tr>
<tr>
<td><tt>NOMAD_MEMORY_LIMIT</tt></td>
<td>The task's memory limit in MB</td>
</tr>
<tr>
<td><tt>NOMAD_CPU_LIMIT</tt></td>
<td>The task's CPU limit in MHz</td>
</tr>
<tr>
<td><tt>NOMAD_ALLOC_ID</tt></td>
<td>The allocation ID of the task</td>
</tr>
<tr>
<td><tt>NOMAD_ALLOC_NAME</tt></td>
<td>The allocation name of the task</td>
</tr>
<tr>
<td><tt>NOMAD_ALLOC_INDEX</tt></td>
<td>The allocation index; useful to distinguish instances of task groups</td>
</tr>
<tr>
<td><tt>NOMAD_TASK_NAME</tt></td>
<td>The task's name</td>
</tr>
<tr>
<td><tt>NOMAD_IP_&lt;label&gt;</tt></td>
<td>The IP of the port with the given label</td>
</tr>
<tr>
<td><tt>NOMAD_PORT_&lt;label&gt;</tt></td>
<td>The port value with the given label</td>
</tr>
<tr>
<td><tt>NOMAD_ADDR_&lt;label&gt;</tt></td>
<td>The IP:Port pair of the port with the given label</td>
</tr>
<tr>
<td><tt>NOMAD_HOST_PORT_&lt;label&gt;</tt></td>
<td>The host port for the given label if the port is port mapped</td>
</tr>
<tr>
<td><tt>NOMAD_META_&lt;key&gt;</tt></td>
<td>The metadata of the task</td>
</tr>
</table>
@@ -136,7 +136,7 @@ directories can be read through the following environment variables:

The job specification also allows you to specify a `meta` block to supply arbitrary
configuration to a task. This allows you to easily provide job-specific
configuration even if you use the same executable unit in multiple jobs. These
key-value pairs are passed through to the job as `NOMAD_META_<key>=<value>`,
where `key` is UPPERCASED from the job specification.
Currently there is no enforcement that the meta values be lowercase, but using
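
For instance (a minimal sketch; the key and value are illustrative), a `meta` block like the following would surface in the task's environment as `NOMAD_META_FOO=bar`:

```hcl
meta {
  # Becomes NOMAD_META_FOO=bar in the task's environment,
  # since keys are uppercased.
  foo = "bar"
}
```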


@@ -9,83 +9,116 @@ description: |-

# Job Specification

Jobs can be specified either in [HCL](https://github.com/hashicorp/hcl) or JSON.
HCL is meant to strike a balance between human readable and editable, and
machine-friendly.

For machine-friendliness, Nomad can also read JSON configurations. In general,
we recommend using the HCL syntax.

## HCL Syntax

For a detailed description of HCL general syntax, [see this
guide](https://github.com/hashicorp/hcl#syntax). Here we cover the details of
the Job specification for Nomad:

```hcl
# This declares a job named "docs". There can be exactly one
# job declaration per job file.
job "docs" {
  # Specify this job should run in the region named "us". Regions
  # are defined by the Nomad servers' configuration.
  region = "us"

  # Spread the tasks in this job between us-west-1 and us-east-1.
  datacenters = ["us-west-1", "us-east-1"]

  # Run this job as a "service" type. Each job type has different
  # properties. See the documentation below for more examples.
  type = "service"

  # Specify this job to have rolling updates, one-at-a-time, with
  # 30 second intervals.
  update {
    stagger      = "30s"
    max_parallel = 1
  }

  # A group defines a series of tasks that should be co-located
  # on the same client (host). All tasks within a group will be
  # placed on the same host.
  group "webs" {
    # Specify the number of these tasks we want.
    count = 5

    # Create an individual task (unit of work). This particular
    # task utilizes a Docker container to front a web application.
    task "frontend" {
      # Specify the driver to be "docker". Nomad supports
      # multiple drivers.
      driver = "docker"

      # Configuration is specific to each driver.
      config {
        image = "hashicorp/web-frontend"
      }

      # The service block tells Nomad how to register this service
      # with Consul for service discovery and monitoring.
      service {
        # This tells Consul to monitor the service on the port
        # labeled "http". Since Nomad allocates high dynamic port
        # numbers, we use labels to refer to them.
        port = "http"

        check {
          type     = "http"
          path     = "/health"
          interval = "10s"
          timeout  = "2s"
        }
      }

      # It is possible to set environment variables which will be
      # available to the job when it runs.
      env {
        DB_HOST = "db01.example.com"
        DB_USER = "web"
        DB_PASS = "loremipsum"
      }

      # Specify the maximum resources required to run the job,
      # including CPU, memory, and bandwidth.
      resources {
        cpu    = 500 # MHz
        memory = 128 # MB

        network {
          mbits = 100

          # This requests a dynamic port named "http". This will
          # be something like "46283", but we refer to it via the
          # label "http".
          port "http" {}

          # This requests a static port on 443 on the host. This
          # will restrict this task to running once per host, since
          # there is only one port 443 on each host.
          port "https" {
            static = 443
          }
        }
      }
    }
  }
}
```
This is a fairly simple example job, but demonstrates many of the features and
syntax of the job specification. The primary "objects" are the job, task group,
and task. Each job file has only a single job; however, a job may have multiple
task groups, and each task group may have multiple tasks. Task groups are a set
of tasks that must be co-located on a machine. Groups with a single task and
count of one can be declared outside of a group, which is created implicitly.
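
To make that hierarchy concrete, here is a bare-bones sketch (names and the image are illustrative only, not part of the example above) showing one job containing two task groups, one of which has two tasks:

```hcl
job "example" {
  datacenters = ["dc1"]

  group "group-a" {
    # Two tasks co-located on the same client.
    task "task-1" {
      driver = "docker"
      config {
        image = "hashicorp/http-echo"
      }
    }

    task "task-2" {
      driver = "docker"
      config {
        image = "hashicorp/http-echo"
      }
    }
  }

  group "group-b" {
    task "task-3" {
      driver = "docker"
      config {
        image = "hashicorp/http-echo"
      }
    }
  }
}
```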
Constraints can be specified at the job, task group, or task level to restrict
where a task is eligible for running. An example constraint looks like:
@@ -93,8 +126,8 @@ where a task is eligible for running. An example constraint looks like:

```hcl
# Restrict to only nodes running linux
constraint {
  attribute = "${attr.kernel.name}"
  value     = "linux"
}
```
@@ -103,11 +136,9 @@ This metadata is opaque to Nomad and can be used for any purpose, including
defining constraints on the metadata. Metadata can be specified by:

```hcl
# Setup ELB via metadata and setup foo
meta {
  foo                = "bar"
  elb_mode           = "tcp"
  elb_check_interval = "10s"
}
```
@@ -152,26 +183,25 @@ The `job` object supports the following keys:

<a id="update"></a>

* `update` - Specifies the task's update strategy. When omitted, rolling
updates are disabled. The `update` block supports the following keys:

* `max_parallel` - integer that specifies the number of tasks that can be
updated at the same time.

* `stagger` - introduces a delay between sets of task updates and is given as
a time duration. If stagger is provided as an integer, seconds are
assumed. Otherwise the "s", "m", and "h" suffix can be used, such as "30s".

Here is an example `update` block:

```hcl
update {
  # Update 3 tasks at a time.
  max_parallel = 3

  # Wait 30 seconds between updates.
  stagger = "30s"
}
```
@@ -182,7 +212,7 @@ The `job` object supports the following keys:
timezone to ensure consistent evaluation when Nomad Servers span multiple
time zones. The `periodic` block is optional and supports the following keys:

* `enabled` - determines whether the periodic job will spawn child
jobs. `enabled` defaults to true if the block is included.
* `cron` - A cron expression configuring the interval the job is launched
@@ -190,21 +220,21 @@ The `job` object supports the following keys:
[here](https://github.com/gorhill/cronexpr#implementation) for full
documentation of supported cron specs and the predefined expressions.

* <a id="prohibit_overlap">`prohibit_overlap`</a> - this can
be set to true to enforce that the periodic job doesn't spawn a new
instance of the job if any of the previous jobs are still running. It
defaults to false.

Here is an example `periodic` block:

```hcl
periodic {
  # Launch every 15 minutes
  cron = "*/15 * * * * *"

  # Do not allow overlapping runs.
  prohibit_overlap = true
}
```
### Task Group
@@ -250,19 +280,19 @@ The `task` object supports the following keys:
task transitions to the dead state. [Click
here](/docs/jobspec/servicediscovery.html) to learn more about services.

* `env` - A map of key/value representing environment variables that will be
passed along to the running process. Nomad variables are interpreted when set
in the environment variable values. See the table of interpreted variables
[here](/docs/jobspec/interpreted.html).

For example, the below environment map will be reinterpreted:

```hcl
env {
  # The value will be interpreted by the client and set to the
  # correct value.
  NODE_CLASS = "${nomad.class}"
}
```
* `resources` - Provides the resource requirements of the task.
@@ -304,13 +334,13 @@ The `network` object supports the following keys:
* `mbits` (required) - The number of MBits in bandwidth required.

* `port` - a repeatable object that can be used to specify both
dynamic ports and reserved ports. It has the following format:

```hcl
port "label" {
  # If the `static` field is omitted, a dynamic port is assigned.
  static = 6539
}
```
@@ -332,35 +362,34 @@ The `restart` object supports the following keys:
time duration using the `s`, `m`, and `h` suffixes, such as `30s`. A random
jitter of up to 25% is added to the delay.

* `mode` - Controls the behavior when the task fails more than `attempts` times
in an interval. Possible values are listed below:

  * `delay` - delay the next restart until the next `interval` is reached.

  * `fail` - do not restart the task again on failure.

The default `batch` restart policy is:

```hcl
restart {
  attempts = 15
  delay    = "15s"
  interval = "168h" # 7 days
  mode     = "delay"
}
```

The default non-batch restart policy is:

```hcl
restart {
  interval = "1m"
  attempts = 2
  delay    = "15s"
  mode     = "delay"
}
```
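
These defaults can be overridden by declaring an explicit `restart` block in the task group; here is a sketch with illustrative values (the group name and numbers are not from the defaults above):

```hcl
group "webs" {
  # Stop retrying after 5 failures within a 10 minute window.
  restart {
    attempts = 5
    interval = "10m"
    delay    = "30s"
    mode     = "fail"
  }

  # tasks for this group follow...
}
```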
### Constraint
@@ -369,9 +398,9 @@ The `constraint` object supports the following keys:
* `attribute` - Specifies the attribute to examine for the
constraint. See the table of attributes [here](/docs/jobspec/interpreted.html#interpreted_node_vars).

* `operator` - Specifies the comparison operator. Defaults to equality, and can
be `=`, `==`, `is`, `!=`, `not`, `>`, `>=`, `<`, `<=`. The ordering is
compared lexically. The following are equivalent:

  * `=`, `==` and `is`
  * `!=` and `not`
@@ -390,11 +419,11 @@ The `constraint` object supports the following keys:
the attribute. This sets the operator to "regexp" and the `value`
to the regular expression.
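
For example, a constraint matching only Linux or Darwin kernels could be written with the explicit operator form (the pattern itself is illustrative):

```hcl
constraint {
  attribute = "${attr.kernel.name}"
  # "regexp" compares the attribute against the regular expression in value.
  operator  = "regexp"
  value     = "^(linux|darwin)$"
}
```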
* `distinct_hosts` - `distinct_hosts` accepts a boolean value and defaults to
`false`. If set, the scheduler will not co-locate any task groups on the same
machine. This can be specified as a job constraint which applies the
constraint to all task groups in the job, or as a task group constraint which
scopes the effect to just that group.
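
As a sketch of that shorthand (the boolean form is assumed from the description above), a job-level constraint that keeps this job's task groups on separate machines:

```hcl
# Placed at the job level, this applies to all task groups in the job.
constraint {
  distinct_hosts = true
}
```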
Placing the constraint at both the job level and at the task group level is
redundant since when placed at the job level, the constraint will be applied
@@ -414,10 +443,10 @@ The `logs` object configures the log rotation policy for a task's `stdout` and
`MB`.

If the amount of disk resource requested for the task is less than the total
amount of disk space needed to retain the rotated set of files, Nomad will
return a validation error when a job is submitted.

```hcl
logs {
  max_files     = 3
  max_file_size = 10 # Size is in MB
}
```
@@ -459,31 +488,31 @@ The `artifact` object supports the following keys:
[here](https://github.com/hashicorp/go-getter/tree/ef5edd3d8f6f482b775199be2f3734fd20e04d4a#protocol-specific-options-1).

An example is given below:

```hcl
options {
  # Validate the downloaded artifact
  checksum = "md5:c4aa853ad2215426eb7d70a21922e794"

  # S3 options for downloading artifacts from S3
  aws_access_key_id     = "<id>"
  aws_access_key_secret = "<secret>"
  aws_access_token      = "<token>"
}
```

An example of downloading and unzipping an archive is as simple as:

```hcl
artifact {
  # The archive will be extracted before the task is run,
  # making it easy to ship configurations with your binary.
  source = "https://example.com/my.zip"

  options {
    checksum = "md5:7f4b3e3b4dd5150d4e5aaaa5efada4c3"
  }
}
```
#### S3 examples
@@ -491,24 +520,27 @@ S3 has several different types of addressing and more detail can be found
[here](http://docs.aws.amazon.com/AmazonS3/latest/dev/UsingBucket.html#access-bucket-intro).
S3 region specific endpoints can be found
[here](http://docs.aws.amazon.com/general/latest/gr/rande.html#s3_region).

Path based style:

```hcl
artifact {
  source = "https://s3-us-west-2.amazonaws.com/my-bucket-example/my_app.tar.gz"
}
```

or to override automatic detection in the URL, use the S3-specific syntax:

```hcl
artifact {
  source = "s3::https://s3-eu-west-1.amazonaws.com/my-bucket-example/my_app.tar.gz"
}
```

Virtual hosted based style:

```hcl
artifact {
  source = "my-bucket-example.s3-eu-west-1.amazonaws.com/my_app.tar.gz"
}
```


@@ -15,142 +15,142 @@ which will emit a JSON version of the job.

## JSON Syntax

Below is an example of a JSON object that submits a `periodic` job to Nomad:

```json
{
  "Job": {
    "Region": "global",
    "ID": "example",
    "Name": "example",
    "Type": "batch",
    "Priority": 50,
    "AllAtOnce": false,
    "Datacenters": [
      "dc1"
    ],
    "Constraints": [
      {
        "LTarget": "${attr.kernel.name}",
        "RTarget": "linux",
        "Operand": "="
      }
    ],
    "TaskGroups": [
      {
        "Name": "cache",
        "Count": 1,
        "Constraints": null,
        "Tasks": [
          {
            "Name": "redis",
            "Driver": "docker",
            "User": "foo-user",
            "Config": {
              "image": "redis:latest",
              "port_map": [
                {
                  "db": 6379
                }
              ]
            },
            "Constraints": null,
            "Env": {
              "foo": "bar",
              "baz": "pipe"
            },
            "Services": [
              {
                "Name": "cache-redis",
                "Tags": [
                  "global",
                  "cache"
                ],
                "PortLabel": "db",
                "Checks": [
                  {
                    "Id": "",
                    "Name": "alive",
                    "Type": "tcp",
                    "Command": "",
                    "Args": null,
                    "Path": "",
                    "Protocol": "",
                    "Interval": 10000000000,
                    "Timeout": 2000000000
                  }
                ]
              }
            ],
            "Resources": {
              "CPU": 500,
              "MemoryMB": 256,
              "DiskMB": 300,
              "IOPS": 0,
              "Networks": [
                {
                  "ReservedPorts": [
                    {
                      "Label": "rpc",
                      "Value": 25566
                    }
                  ],
                  "DynamicPorts": [
                    {
                      "Label": "db"
                    }
                  ],
                  "MBits": 10
                }
              ]
            },
            "Meta": {
              "foo": "bar",
              "baz": "pipe"
            },
            "KillTimeout": 5000000000,
            "LogConfig": {
              "MaxFiles": 10,
              "MaxFileSizeMB": 10
            },
            "Artifacts": [
              {
                "GetterSource": "http://foo.com/artifact.tar.gz",
                "GetterOptions": {
                  "checksum": "md5:c4aa853ad2215426eb7d70a21922e794"
                },
                "RelativeDest": "local/"
              }
            ]
          }
        ],
        "RestartPolicy": {
          "Interval": 300000000000,
          "Attempts": 10,
          "Delay": 25000000000,
          "Mode": "delay"
        },
        "Meta": {
          "foo": "bar",
          "baz": "pipe"
        }
      }
    ],
    "Update": {
      "Stagger": 10000000000,
      "MaxParallel": 1
    },
    "Periodic": {
      "Enabled": true,
      "Spec": "* * * * *",
      "SpecType": "cron",
      "ProhibitOverlap": true
    },
    "Meta": {
      "foo": "bar",
      "baz": "pipe"
    }
  }
}
```
@@ -189,21 +189,23 @@ The `Job` object supports the following keys:
and defaults to `service`. To learn more about each scheduler type visit
[here](/docs/jobspec/schedulers.html).

* `Update` - Specifies the task's update strategy. When omitted, rolling
updates are disabled. The `Update` object supports the following attributes:

* `MaxParallel` - `MaxParallel` is given as an integer value and specifies
the number of tasks that can be updated at the same time.

* `Stagger` - `Stagger` introduces a delay between sets of task updates and
is given in nanoseconds.

An example `Update` block:

```json
{
  "Update": {
    "MaxParallel": 3,
    "Stagger": 10000000000
  }
}
```
@@ -230,13 +232,15 @@ The `Job` object supports the following keys:
An example `periodic` block:

```json
{
  "Periodic": {
    "Spec": "*/15 * * * * *",
    "SpecType": "cron",
    "Enabled": true,
    "ProhibitOverlap": true
  }
}
```
### Task Group
@@ -286,10 +290,12 @@ The `Task` object supports the following keys:
For example, the below environment map will be reinterpreted:

```json
{
  "Env": {
    "NODE_CLASS": "${nomad.class}"
  }
}
```
* `KillTimeout` - `KillTimeout` is a time duration in nanoseconds. It can be
@@ -437,7 +443,7 @@ The `Constraint` object supports the following keys:
* `Operand` - Specifies the test to be performed on the two targets. It takes on the
following values:
* `regexp` - Allows the `RTarget` to be a regular expression to be matched.
* `distinct_host` - If set, the scheduler will not co-locate any task groups on the same
@@ -468,10 +474,12 @@ If the amount of disk resource requested for the task is less than the total
amount of disk space needed to retain the rotated set of files, Nomad will
return a validation error when a job is submitted.

```json
{
  "LogConfig": {
    "MaxFiles": 3,
    "MaxFileSizeMB": 10
  }
}
```
@@ -506,30 +514,31 @@ The `Artifact` object supports the following keys:
[here](https://github.com/hashicorp/go-getter/tree/ef5edd3d8f6f482b775199be2f3734fd20e04d4a#protocol-specific-options-1).

An example is given below:

```json
{
  "GetterOptions": {
    "checksum": "md5:c4aa853ad2215426eb7d70a21922e794",
    "aws_access_key_id": "<id>",
    "aws_access_key_secret": "<secret>",
    "aws_access_token": "<token>"
  }
}
```

An example of downloading and unzipping an archive is as simple as:

```json
{
  "Artifacts": [
    {
      "GetterSource": "https://example.com/my.zip",
      "GetterOptions": {
        "checksum": "md5:7f4b3e3b4dd5150d4e5aaaa5efada4c3"
      }
    }
  ]
}
```
#### S3 examples
@@ -541,28 +550,37 @@ S3 region specific endpoints can be found
[here](http://docs.aws.amazon.com/general/latest/gr/rande.html#s3_region).

Path based style:

```json
{
  "Artifacts": [
    {
      "GetterSource": "https://s3-us-west-2.amazonaws.com/my-bucket-example/my_app.tar.gz"
    }
  ]
}
```

or to override automatic detection in the URL, use the S3-specific syntax:

```json
{
  "Artifacts": [
    {
      "GetterSource": "s3::https://s3-eu-west-1.amazonaws.com/my-bucket-example/my_app.tar.gz"
    }
  ]
}
```

Virtual hosted based style:

```json
{
  "Artifacts": [
    {
      "GetterSource": "my-bucket-example.s3-eu-west-1.amazonaws.com/my_app.tar.gz"
    }
  ]
}
```


@@ -32,16 +32,14 @@ port will be allocated dynamically by the scheduler, and your service will have
to read an environment variable (see below) to know which port to bind to at
startup.

```hcl
task "example" {
  resources {
    network {
      port "http" {}
      port "https" {}
    }
  }
}
```
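
As a rough sketch of how a task might consume those allocations (assuming the `exec` driver and that the `NOMAD_PORT_<label>` runtime variables described in the environment variable documentation can be interpolated into `args`; the binary and flag names are illustrative):

```hcl
task "example" {
  driver = "exec"

  config {
    command = "/usr/local/bin/my-service"
    # The values are filled in from the dynamically allocated ports
    # labeled "http" and "https".
    args    = ["-http-port=${NOMAD_PORT_http}", "-https-port=${NOMAD_PORT_https}"]
  }

  resources {
    network {
      port "http" {}
      port "https" {}
    }
  }
}
```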
@@ -51,17 +49,15 @@ Static ports bind your job to a specific port on the host they're placed on.
Since multiple services cannot share a port, the port must be open in order to
place your task.

```hcl
task "example" {
  resources {
    network {
      port "dns" {
        static = 53
      }
    }
  }
}
```
@@ -75,7 +71,7 @@ discovery, and used for the name of the environment variable that indicates
which port your application should bind to. For example, we've labeled this
port `http`:

```hcl
port "http" {}
```
@@ -94,15 +90,17 @@ means that your application can listen on a fixed port (it does not need to
read the environment variable) and the dynamic port will be mapped to the port
in your container or VM.

```hcl
task "example" {
  driver = "docker"

  port "http" {}

  config {
    port_map = {
      http = 8080
    }
  }
}
```
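
For reference, the JSON example on the jobspec page expresses the same idea with the port requested under `resources`; a fuller sketch of that shape in HCL might look like (the image and label are taken from that example):

```hcl
task "example" {
  driver = "docker"

  config {
    image = "redis:latest"
    port_map = {
      # Inside the container the service listens on 6379; Nomad maps
      # the dynamically allocated host port labeled "db" to it.
      db = 6379
    }
  }

  resources {
    network {
      mbits = 10
      port "db" {}
    }
  }
}
```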


@@ -11,8 +11,8 @@ description: |-
Nomad schedules workloads of various types across a cluster of generic hosts.
Because of this, placement is not known in advance and you will need to use
service discovery to connect tasks to other services deployed across your
cluster. Nomad integrates with [Consul](https://www.consul.io) to provide
service discovery and monitoring.
Note that in order to use Consul with Nomad, you will need to configure and
install Consul on your nodes alongside Nomad, or schedule it as a system job.
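
A rough sketch of the second option, running Consul itself as a Nomad `system` job (the driver, binary location, and arguments are illustrative and a real agent would need cluster-specific configuration):

```hcl
job "consul" {
  datacenters = ["dc1"]

  # The system scheduler runs one instance on every eligible client node.
  type = "system"

  group "consul" {
    task "agent" {
      driver = "exec"

      config {
        command = "/usr/local/bin/consul"
        # Illustrative only; a production agent needs join addresses,
        # a persistent data dir, and so on.
        args    = ["agent", "-dev"]
      }

      resources {
        cpu    = 100
        memory = 128
      }
    }
  }
}
```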
@@ -34,37 +34,42 @@ ports.
A brief example of a service definition in a Task:

```hcl
group "database" {
  task "mysql" {
    driver = "docker"

    service {
      tags = ["master", "mysql"]
      port = "db"

      check {
        type     = "tcp"
        interval = "10s"
        timeout  = "2s"
      }

      check {
        type     = "script"
        name     = "check_table"
        command  = "/usr/local/bin/check_mysql_table_status"
        args     = ["--verbose"]
        interval = "60s"
        timeout  = "5s"
      }
    }

    resources {
      cpu    = 500
      memory = 1024

      network {
        mbits = 10
        port "db" {}
      }
    }
  }
}
```