---
layout: docs
page_title: group Stanza - Job Specification
sidebar_title: group
description: |-
  The "group" stanza defines a series of tasks that should be co-located on the
  same Nomad client. Any task within a group will be placed on the same client.
---

# `group` Stanza

<Placement groups={['job', 'group']} />

The `group` stanza defines a series of tasks that should be co-located on the
same Nomad client. Any [task][] within a group will be placed on the same
client.

```hcl
job "docs" {
  group "example" {
    # ...
  }
}
```

## `group` Parameters

- `constraint` <code>([Constraint][]: nil)</code> -
  This can be provided multiple times to define additional constraints.

- `affinity` <code>([Affinity][]: nil)</code> - This can be provided
  multiple times to define preferred placement criteria; see the Affinity
  and Spread example below.

- `spread` <code>([Spread][spread]: nil)</code> - This can be provided
  multiple times to define criteria for spreading allocations across a
  node attribute or metadata. See the
  [Nomad spread reference](/docs/job-specification/spread) for more details.

- `count` `(int)` - Specifies the number of instances that should be running
  under this group. This value must be non-negative. This defaults to the
  `min` value specified in the [`scaling`](/docs/job-specification/scaling)
  block, if present; otherwise, this defaults to `1`.

- `ephemeral_disk` <code>([EphemeralDisk][]: nil)</code> - Specifies the
  ephemeral disk requirements of the group. Ephemeral disks can be marked as
  sticky and support live data migrations; see the Ephemeral Disk example
  below.

- `meta` <code>([Meta][]: nil)</code> - Specifies a key-value map that
  annotates the task group with user-defined metadata.

- `migrate` <code>([Migrate][]: nil)</code> - Specifies the group strategy
  for migrating off of draining nodes; see the Migrate example below. Only
  service jobs with a count greater than 1 support migrate stanzas.

- `reschedule` <code>([Reschedule][]: nil)</code> - Specifies a rescheduling
  strategy. Nomad will attempt to schedule the task on another node if any
  of the group allocation statuses become "failed"; see the Restart and
  Reschedule Policies example below.

- `restart` <code>([Restart][]: nil)</code> - Specifies the restart policy for
  all tasks in this group. If omitted, a default policy exists for each job
  type, which can be found in the [restart stanza documentation][restart].

- `shutdown_delay` `(string: "0s")` - Specifies the duration to wait when
  stopping a group's tasks. The delay occurs between Consul deregistration
  and sending each task a shutdown signal. Ideally, services would fail
  health checks once they receive a shutdown signal. Alternatively,
  `shutdown_delay` may be set to give in-flight requests time to complete
  before shutting down. A group-level `shutdown_delay` will run regardless
  of whether there are any defined group services; see the Shutdown Delay
  example below. In addition, tasks may have their own
  [`shutdown_delay`](/docs/job-specification/task#shutdown_delay), which
  waits between deregistering task services and stopping the task.

- `stop_after_client_disconnect` `(string: "")` - Specifies a duration
  after which a Nomad client that cannot communicate with the servers
  will stop allocations based on this task group. By default, a client
  will not stop an allocation until explicitly told to by a server. A
  client that fails to heartbeat to a server within the
  [`heartbeat_grace`] window, and any allocations running on it, will be
  marked "lost" and Nomad will schedule replacement
  allocations. However, these replaced allocations will continue to
  run on the non-responsive client; an operator may desire that these
  replaced allocations are also stopped in this case, for example,
  allocations requiring exclusive access to an external resource. When
  `stop_after_client_disconnect` is specified, the Nomad client will
  stop the allocations after this duration. The Nomad client process
  must be running for this to occur.

- `task` <code>([Task][]: <required>)</code> - Specifies one or more tasks to run
  within this group. This can be specified multiple times to add additional
  tasks to the group.

- `vault` <code>([Vault][]: nil)</code> - Specifies the set of Vault policies
  required by all tasks in this group. Overrides a `vault` block set at the
  `job` level; see the Vault Policies and Volumes example below.

- `volume` <code>([Volume][]: nil)</code> - Specifies the volumes that are
  required by tasks within the group.

## `group` Examples

The following examples only show the `group` stanzas. Remember that the
`group` stanza is only valid in the placements listed above.

### Specifying Count

This example specifies that 5 instances of the tasks within this group should be
running:

```hcl
group "example" {
  count = 5
}
```
### Tasks with Constraint

This example shows two abbreviated tasks with a constraint on the group. This
restricts the tasks to nodes with the `amd64` (64-bit x86) CPU architecture.

```hcl
group "example" {
  constraint {
    attribute = "${attr.cpu.arch}"
    value     = "amd64"
  }

  task "cache" {
    # ...
  }

  task "server" {
    # ...
  }
}
```
### Metadata

This example shows arbitrary user-defined metadata on the group:

```hcl
group "example" {
  meta {
    "my-key" = "my-value"
  }
}
```
### Stop After Client Disconnect

This example shows how `stop_after_client_disconnect` interacts with
other stanzas. For the `first` group, after the default 10 second
[`heartbeat_grace`] window expires and 90 more seconds pass, the
server will reschedule the allocation. The client will wait 90 seconds
before sending a stop signal (`SIGTERM`) to the `first-task`
task. After 15 more seconds, because of the task's `kill_timeout`, the
client will send `SIGKILL`. The `second` group does not have
`stop_after_client_disconnect`, so the server will reschedule the
allocation after the 10 second [`heartbeat_grace`] expires. It will
not be stopped on the client, regardless of how long the client is out
of touch.

Note that if the servers' clocks are not closely synchronized with
each other, the server may reschedule the group before the client has
stopped the allocation. Operators should ensure that clock drift
between servers is as small as possible.

Note also that a group using this feature will be stopped on the
client if the Nomad server cluster fails, since the client will be
unable to contact any server in that case. Groups opting in to this
feature are therefore exposed to an additional runtime dependency and
potential point of failure.

```hcl
group "first" {
  stop_after_client_disconnect = "90s"

  task "first-task" {
    kill_timeout = "15s"
  }
}

group "second" {

  task "second-task" {
    kill_timeout = "5s"
  }
}
```
[task]: /docs/job-specification/task 'Nomad task Job Specification'
[job]: /docs/job-specification/job 'Nomad job Job Specification'
[constraint]: /docs/job-specification/constraint 'Nomad constraint Job Specification'
[spread]: /docs/job-specification/spread 'Nomad spread Job Specification'
[affinity]: /docs/job-specification/affinity 'Nomad affinity Job Specification'
[ephemeraldisk]: /docs/job-specification/ephemeral_disk 'Nomad ephemeral_disk Job Specification'
[`heartbeat_grace`]: /docs/configuration/server/#heartbeat_grace
[meta]: /docs/job-specification/meta 'Nomad meta Job Specification'
[migrate]: /docs/job-specification/migrate 'Nomad migrate Job Specification'
[reschedule]: /docs/job-specification/reschedule 'Nomad reschedule Job Specification'
[restart]: /docs/job-specification/restart 'Nomad restart Job Specification'
[vault]: /docs/job-specification/vault 'Nomad vault Job Specification'
[volume]: /docs/job-specification/volume 'Nomad volume Job Specification'
|