---
layout: docs
page_title: 'Drivers: Exec'
description: The Exec task driver is used to run binaries using OS isolation primitives.
---

# Isolated Fork/Exec Driver

Name: `exec`

The `exec` driver is used to simply execute a particular command for a task.
However, unlike [`raw_exec`](/nomad/docs/drivers/raw_exec), it uses the
underlying isolation primitives of the operating system to limit the task's
access to resources. While simple, the `exec` driver can invoke any command,
so it can be used to call scripts or other wrappers which provide higher-level
features.

## Task Configuration

```hcl
task "webservice" {
  driver = "exec"

  config {
    command = "my-binary"
    args    = ["-flag", "1"]
  }
}
```

The `exec` driver supports the following configuration in the job spec:

- `command` - The command to execute. Must be provided. If executing a binary
  that exists on the host, the path must be absolute and within the task's
  [chroot](#chroot) or in a [host volume][] mounted with a
  [`volume_mount`][volume_mount] block. The driver will make the binary
  executable and will search, in order:

  - The `local` directory within the task directory.
  - The task directory.
  - Any mounts, in the order listed in the job specification.
  - The `usr/local/bin`, `usr/bin` and `bin` directories inside the task
    directory.

  If executing a binary that is downloaded from an
  [`artifact`](/nomad/docs/job-specification/artifact), the path can be
  relative to the allocation's root directory.

- `args` - (Optional) A list of arguments to the `command`. References to
  environment variables or any
  [interpretable Nomad variables](/nomad/docs/runtime/interpolation) will be
  interpreted before launching the task.

- `pid_mode` - (Optional) Set to `"private"` to enable PID namespace isolation
  for this task, or `"host"` to disable isolation. If left unset, the behavior
  is determined from the [`default_pid_mode`][default_pid_mode] in plugin
  configuration. A combined example of `pid_mode` and `ipc_mode` appears after
  this list.

!> **Warning:** If set to `"host"`, other processes running as the same user
will be able to access sensitive process information like environment
variables.

- `ipc_mode` - (Optional) Set to `"private"` to enable IPC namespace isolation
  for this task, or `"host"` to disable isolation. If left unset, the behavior
  is determined from the [`default_ipc_mode`][default_ipc_mode] in plugin
  configuration.

!> **Warning:** If set to `"host"`, other processes running as the same user
will be able to make use of IPC features, like sending unexpected POSIX
signals.

- `cap_add` - (Optional) A list of Linux capabilities to enable for the task.
  Effective capabilities (computed from `cap_add` and `cap_drop`) must be a
  subset of the allowed capabilities configured with [`allow_caps`][allow_caps].

  ```hcl
  config {
    cap_add = ["net_raw", "sys_time"]
  }
  ```

- `cap_drop` - (Optional) A list of Linux capabilities to disable for the task.
  Effective capabilities (computed from `cap_add` and `cap_drop`) must be a
  subset of the allowed capabilities configured with [`allow_caps`][allow_caps].

  ```hcl
  config {
    cap_drop = ["all"]
    cap_add  = ["chown", "sys_chroot", "mknod"]
  }
  ```
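As a minimal sketch of the namespace options above, the following task
explicitly requests private PID and IPC namespaces (the values that
`default_pid_mode` and `default_ipc_mode` default to). The binary path shown is
hypothetical:

```hcl
task "isolated" {
  driver = "exec"

  config {
    # Hypothetical absolute path to a binary on the host.
    command  = "/bin/my-binary"

    # Isolate the task's PID and IPC namespaces.
    pid_mode = "private"
    ipc_mode = "private"
  }
}
```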
## Examples

To run a binary present on the node:

```hcl
task "example" {
  driver = "exec"

  config {
    # When running a binary that exists on the host, the path must be absolute.
    command = "/bin/sleep"
    args    = ["1"]
  }
}
```

To execute a binary downloaded from an
[`artifact`](/nomad/docs/job-specification/artifact):

```hcl
task "example" {
  driver = "exec"

  config {
    command = "name-of-my-binary"
  }

  artifact {
    source = "https://internal.file.server/name-of-my-binary"

    options {
      checksum = "sha256:abd123445ds4555555555"
    }
  }
}
```

## Capabilities

The `exec` driver implements the following
[capabilities](/nomad/docs/concepts/plugins/task-drivers#capabilities-capabilities-error).

| Feature              | Implementation |
| -------------------- | -------------- |
| `nomad alloc signal` | true           |
| `nomad alloc exec`   | true           |
| filesystem isolation | chroot         |
| network isolation    | host, group    |
| volume mounting      | all            |

## Client Requirements

The `exec` driver can only be run on Linux clients where Nomad is running as
root. `exec` is limited to this configuration because resource isolation is
currently only guaranteed on Linux. Further, the host must have cgroups mounted
properly in order for the driver to work.

If you are receiving the error:

```
* Constraint "missing drivers" filtered <> nodes
```

and using the exec driver, check to ensure that you are running Nomad as root.
This also applies to running Nomad in `-dev` mode.

## Plugin Options

- `default_pid_mode` `(string: optional)` - Defaults to `"private"`. Set to
  `"private"` to enable PID namespace isolation for tasks by default, or
  `"host"` to disable isolation.

!> **Warning:** If set to `"host"`, other processes running as the same user
will be able to access sensitive process information like environment
variables.

- `default_ipc_mode` `(string: optional)` - Defaults to `"private"`. Set to
  `"private"` to enable IPC namespace isolation for tasks by default, or
  `"host"` to disable isolation.

!> **Warning:** If set to `"host"`, other processes running as the same user
will be able to make use of IPC features, like sending unexpected POSIX
signals.

- `no_pivot_root` `(bool: optional)` - Defaults to `false`. When `true`, the
  driver uses `chroot` for file system isolation without `pivot_root`. This is
  useful for systems where the root is on a ramdisk.

- `allow_caps` - A list of allowed Linux capabilities. Defaults to

  ```hcl
  ["audit_write", "chown", "dac_override", "fowner", "fsetid", "kill", "mknod",
   "net_bind_service", "setfcap", "setgid", "setpcap", "setuid", "sys_chroot"]
  ```

  which is modeled after the capabilities allowed by
  [Docker by default][docker_caps] (without [`NET_RAW`][no_net_raw]). This
  allows the operator to control which capabilities tasks can obtain through
  the [`cap_add`][cap_add] and [`cap_drop`][cap_drop] options. Supports the
  value `"all"` as a shortcut for allow-listing all capabilities supported by
  the operating system.

!> **Warning:** Allowing more capabilities beyond the default may lead to
undesirable consequences, including untrusted tasks being able to compromise
the host system.
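Plugin options are set in the client agent configuration rather than in the job
specification. As a minimal sketch, assuming the standard `plugin "exec"` block
in the agent configuration file, the options above could be set as follows (the
values are illustrative only):

```hcl
plugin "exec" {
  config {
    # Keep the documented default namespace isolation explicit.
    default_pid_mode = "private"
    default_ipc_mode = "private"

    # Restrict tasks to a narrower capability set than the default allow list.
    allow_caps = ["chown", "kill", "setgid", "setuid"]
  }
}
```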
## Client Attributes

The `exec` driver will set the following client attributes:

- `driver.exec` - This will be set to "1", indicating the driver is available.

## Resource Isolation

The resource isolation provided varies by the operating system of the client
and the configuration. On Linux, Nomad uses cgroups and a chroot to isolate the
resources of a process, and as such the Nomad agent must be run as root.

Some Linux distributions do not boot with all required cgroups enabled by
default. You can see which cgroups are enabled by reading `/proc/cgroups`, and
verifying that all of the following cgroups are enabled:

```
$ awk '{print $1 " " $4}' /proc/cgroups
#subsys_name enabled
cpuset 1
cpu 1
cpuacct 1
blkio 1
memory 1
devices 1
freezer 1
net_cls 1
perf_event 1
net_prio 1
hugetlb 1
pids 1
```

### Chroot

The chroot is populated with data in the following directories from the host
machine:

```
[
  "/bin",
  "/etc",
  "/lib",
  "/lib32",
  "/lib64",
  "/run/resolvconf",
  "/sbin",
  "/usr",
]
```

The task's chroot is populated by linking or copying the data from the host
into the chroot. Note that this can take considerable disk space. Since Nomad
v0.5.3, the client manages garbage collection locally, which mitigates any
issue this may create.

This list is configurable through the agent client
[configuration file](/nomad/docs/configuration/client#chroot_env).

### CPU

Nomad limits exec tasks' CPU based on CPU shares. CPU shares allow containers
to burst past their CPU limits. CPU limits will only be imposed when there is
contention for resources. When the host is under load, your process may be
throttled to stabilize QoS, depending on how many shares it has. You can see
how many CPU shares are available to your process by reading
[`NOMAD_CPU_LIMIT`][runtime_env]. 1000 shares are approximately equal to 1 GHz.

Please keep the implications of CPU shares in mind when you load test workloads
on Nomad.

If the [`cores`][cores] resource is set, the task is given an isolated,
reserved set of CPU cores to make use of. The total set of cores the task may
run on is the private set combined with the variable set of unreserved cores.
The private set of CPU cores is available to your process by reading
[`NOMAD_CPU_CORES`][runtime_env].

[default_pid_mode]: /nomad/docs/drivers/exec#default_pid_mode
[default_ipc_mode]: /nomad/docs/drivers/exec#default_ipc_mode
[cap_add]: /nomad/docs/drivers/exec#cap_add
[cap_drop]: /nomad/docs/drivers/exec#cap_drop
[no_net_raw]: /nomad/docs/upgrade/upgrade-specific#nomad-1-1-0-rc1-1-0-5-0-12-12
[allow_caps]: /nomad/docs/drivers/exec#allow_caps
[docker_caps]: https://docs.docker.com/engine/reference/run/#runtime-privilege-and-linux-capabilities
[host volume]: /nomad/docs/configuration/client#host_volume-block
[volume_mount]: /nomad/docs/job-specification/volume_mount
[cores]: /nomad/docs/job-specification/resources#cores
[runtime_env]: /nomad/docs/runtime/environment#job-related-variables
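To tie the CPU model described above back to the job specification, the
following `resources` blocks sketch the two alternative approaches; the values
are illustrative only, and `cpu` and `cores` are used separately, not together:

```hcl
# Share-based CPU: roughly 1 GHz of burstable CPU, throttled only under contention.
resources {
  cpu = 1000
}

# Reserved cores: the task gets two dedicated cores plus any unreserved cores.
resources {
  cores = 2
}
```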