---
layout: docs
page_title: resources Block - Job Specification
description: |-
  The "resources" block describes the requirements a task needs to execute.
  Resource requirements include memory, cpu, and more.
---

# `resources` Block

<Placement groups={['job', 'group', 'task', 'resources']} />

The `resources` block describes the requirements a task needs to execute.
Resource requirements include memory, CPU, and more.

```hcl
job "docs" {
  group "example" {
    task "server" {
      resources {
        cpu    = 100
        memory = 256

        device "nvidia/gpu" {
          count = 2
        }
      }
    }
  }
}
```
## `resources` Parameters

- `cpu` `(int: 100)` - Specifies the CPU required to run this task in MHz.

- `cores` <code>(`int`: &lt;optional&gt;)</code> - Specifies the number of CPU cores
  to reserve specifically for the task. This may not be used with `cpu`. The behavior
  of setting `cores` is specific to each task driver (e.g. [docker][docker_cpu], [exec][exec_cpu]).

- `memory` `(int: 300)` - Specifies the memory required in MB.

- `memory_max` <code>(`int`: &lt;optional&gt;)</code> - Optionally, specifies the
  maximum memory the task may use, if the client has excess memory capacity, in MB.
  See [Memory Oversubscription](#memory-oversubscription) for more details.

- `device` <code>([Device][]: &lt;optional&gt;)</code> - Specifies the device
  requirements. This may be repeated to request multiple device types.
## `resources` Examples

The following examples only show the `resources` blocks. Remember that the
`resources` block is only valid in the placements listed above.

### Cores
This example specifies that the task requires 2 reserved cores. With this
block, Nomad will find a client with enough spare capacity to reserve 2 cores
exclusively for the task. Unlike the `cpu` field, the task will not share CPU
time with any other tasks managed by Nomad on the client.

```hcl
resources {
  cores = 2
}
```

If `cores` and `cpu` are both defined in the same `resources` block, validation
of the job will fail.

### Memory
This example specifies the task requires 2 GB of RAM to operate. 2 GB is the
equivalent of 2000 MB:

```hcl
resources {
  memory = 2000
}
```

### Devices
This example shows a device constraint as specified in the [device][] block
which requires two NVIDIA GPUs to be made available:

```hcl
resources {
  device "nvidia/gpu" {
    count = 2
  }
}
```

## Memory Oversubscription
Setting task memory limits requires balancing the risk of interrupting tasks
against the risk of wasting resources. If a task memory limit is set too low,
the task may exceed the limit and be interrupted; if the task memory is too
high, the cluster is left underutilized.

To help maximize cluster memory utilization while allowing a safety margin for
unexpected load spikes, Nomad 1.1 lets job authors set two separate memory
limits, shown together in the example after this list:

* `memory`: the reserve limit to represent the task's typical memory usage —
  this number is used by the Nomad scheduler to reserve and place the task

* `memory_max`: the maximum memory the task may use, if the client has excess
  available memory, and the task may be terminated if it exceeds this limit
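
For reference, a minimal sketch of how the two limits look together in a
task's `resources` block (the values are illustrative only):

```hcl
resources {
  # Reserve 256 MB; the scheduler uses this value to place the allocation.
  memory = 256

  # Let the task burst up to 512 MB when the client has spare memory.
  memory_max = 512
}
```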
If a client's memory becomes contended or low, the operating system will
pressure the running tasks to free up memory. If the contention persists, Nomad
may kill oversubscribed tasks and reschedule them to other clients. The exact
mechanism for memory pressure is specific to the task driver, operating system,
and application runtime.

The new max limit attribute is currently supported by the official `docker`,
`exec`, and `java` task drivers. Consult the documentation of
community-supported task drivers for their memory oversubscription support.

Memory oversubscription is opt-in. Nomad operators can enable [Memory Oversubscription in the scheduler
configuration](/nomad/api-docs/operator/scheduler#update-scheduler-configuration). Enterprise customers can use [Resource
Quotas](/nomad/tutorials/governance-and-policy/quotas) to limit the memory
oversubscription.
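
One way to opt in, sketched below under the assumption of Nomad 1.1 or later,
is the server agent's `default_scheduler_config` block, which only seeds the
scheduler configuration when a cluster is first bootstrapped; on a running
cluster, use the scheduler configuration API linked above instead:

```hcl
# Server agent configuration sketch. `default_scheduler_config` only takes
# effect when the cluster is first bootstrapped; afterwards, update the
# scheduler configuration through the API referenced above.
server {
  enabled          = true
  bootstrap_expect = 3

  default_scheduler_config {
    memory_oversubscription_enabled = true
  }
}
```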
To avoid degrading the cluster experience, we recommend examining and monitoring
resource utilization and considering the following suggestions:

* Set `oom_score_adj` for Linux host services that aren't managed by Nomad, e.g.
  Docker, logging services, and the Nomad agent itself. For Systemd services, you
  can use the [`OOMScoreAdj` field](https://github.com/hashicorp/nomad/blob/v1.0.0/dist/systemd/nomad.service#L25).

* Monitor hosts for memory utilization and set alerts on Out-Of-Memory errors.

* Set the [client `reserved`](/nomad/docs/configuration/client#reserved) with enough
  memory for host services that aren't managed by Nomad as well as a buffer
  for the memory excess. For example, if the client reserved memory is 1GB,
  the allocations on the host may exceed their soft memory limit by almost
  1GB in aggregate before the memory becomes contended and allocations get
  killed. A minimal client configuration sketch follows this list.
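
A sketch of the client `reserved` block from the last suggestion, holding back
1GB of memory for the host (the value is illustrative; size it to your host
services plus the oversubscription buffer you want):

```hcl
client {
  enabled = true

  # Memory (in MB) withheld from Nomad's scheduling calculations, left for
  # host services and as a buffer for tasks exceeding their soft memory limit.
  reserved {
    memory = 1024
  }
}
```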
[device]: /nomad/docs/job-specification/device 'Nomad device Job Specification'
[docker_cpu]: /nomad/docs/drivers/docker#cpu
[exec_cpu]: /nomad/docs/drivers/exec#cpu