---
layout: docs
page_title: client Stanza - Agent Configuration
description: |-
  The "client" stanza configures the Nomad agent to accept jobs as assigned
  by the Nomad server, join the cluster, and specify driver-specific
  configuration.
---

# `client` Stanza

The `client` stanza configures the Nomad agent to accept jobs as assigned by
the Nomad server, join the cluster, and specify driver-specific configuration.

```hcl
client {
  enabled = true
  servers = ["1.2.3.4:4647", "5.6.7.8:4647"]
}
```

## `client` Parameters

- `alloc_dir` `(string: "[data_dir]/alloc")` - Specifies the directory to use
  for allocation data. By default, this is the top-level
  [data_dir](/docs/configuration#data_dir) suffixed with "alloc", like
  `"/opt/nomad/alloc"`. This must be an absolute path.

- `chroot_env` ([ChrootEnv](#chroot_env-parameters): nil) - Specifies a
  key-value mapping that defines the chroot environment for jobs using the
  Exec and Java drivers.

- `enabled` `(bool: false)` - Specifies if client mode is enabled. All other
  client configuration options depend on this value.

- `max_kill_timeout` `(string: "30s")` - Specifies the maximum amount of time
  a job is allowed to wait to exit. Individual jobs may customize their own
  kill timeout, but it may not exceed this value.

- `disable_remote_exec` `(bool: false)` - Specifies if the client should
  disable remote task execution for tasks running on this client.

- `meta` `(map[string]string: nil)` - Specifies a key-value map that annotates
  the node with user-defined metadata.

- `network_interface` `(string: varied)` - Specifies the name of the interface
  to force network fingerprinting on. When run in dev mode, this defaults to
  the loopback interface. When not in dev mode, the interface attached to the
  default route is used. The scheduler chooses from these fingerprinted IP
  addresses when allocating ports for tasks. This value supports
  [go-sockaddr/template format][go-sockaddr/template]; see the sketch after
  this list. If no non-local IP addresses are found, Nomad could fingerprint
  link-local IPv6 addresses depending on the client's
  [`"fingerprint.network.disallow_link_local"`](#fingerprint-network-disallow_link_local)
  configuration value.

- `cpu_total_compute` `(int: 0)` - Specifies an override for the total CPU
  compute. This value should be set to `# Cores * Core MHz`. For example, a
  quad-core running at 2 GHz would have a total compute of 8000 (4 \* 2000).
  Most clients can determine their total CPU compute automatically, and thus
  in most cases this should be left unset.

- `memory_total_mb` `(int: 0)` - Specifies an override for the total memory.
  If set, this value overrides any detected memory.

- `min_dynamic_port` `(int: 20000)` - Specifies the minimum dynamic port to be
  assigned. Individual ports and ranges of ports may be excluded from dynamic
  port assignment via [`reserved`](#reserved-parameters) parameters.

- `max_dynamic_port` `(int: 32000)` - Specifies the maximum dynamic port to be
  assigned. Individual ports and ranges of ports may be excluded from dynamic
  port assignment via [`reserved`](#reserved-parameters) parameters.

- `node_class` `(string: "")` - Specifies an arbitrary string used to
  logically group client nodes by user-defined class. This can be used during
  job placement as a filter.

- `options` ([Options](#options-parameters): nil) - Specifies a key-value
  mapping of internal configuration for clients, such as for driver
  configuration.
- `reserved` ([Reserved](#reserved-parameters): nil) - Specifies that Nomad
  should reserve a portion of the node's resources from receiving tasks. This
  can be used to target a certain capacity usage for the node. For example, a
  value equal to 20% of the node's CPU could be reserved to target a CPU
  utilization of 80%.

- `servers` `(array: [])` - Specifies an array of addresses to the Nomad
  servers this client should join. This list is used to register the client
  with the server nodes and advertise the available resources so that the
  agent can receive work. This may be specified as an IP address or a DNS
  name, with or without the port. If the port is omitted, the default port of
  `4647` is used.

- `server_join` ([server_join][server-join]: nil) - Specifies how the Nomad
  client will connect to Nomad servers. The `start_join` field is not
  supported on the client. The `retry_join` fields may directly specify the
  server address or use go-discover syntax for auto-discovery. See the
  [server_join][server-join] documentation for more detail.

- `state_dir` `(string: "[data_dir]/client")` - Specifies the directory to use
  to store client state. By default, this is the top-level
  [data_dir](/docs/configuration#data_dir) suffixed with "client", like
  `"/opt/nomad/client"`. This must be an absolute path.

- `gc_interval` `(string: "1m")` - Specifies the interval at which Nomad
  attempts to garbage collect terminal allocation directories.

- `gc_disk_usage_threshold` `(float: 80)` - Specifies the disk usage percent
  which Nomad tries to maintain by garbage collecting terminal allocations.

- `gc_inode_usage_threshold` `(float: 70)` - Specifies the inode usage percent
  which Nomad tries to maintain by garbage collecting terminal allocations.

- `gc_max_allocs` `(int: 50)` - Specifies the maximum number of allocations
  which a client will track before triggering a garbage collection of terminal
  allocations. This will _not_ limit the number of allocations a node can run
  at a time; however, after `gc_max_allocs` is reached, every new allocation
  will cause terminal allocations to be garbage collected.

- `gc_parallel_destroys` `(int: 2)` - Specifies the maximum number of parallel
  destroys allowed by the garbage collector. This value should be relatively
  low to avoid high resource usage during garbage collections.

- `no_host_uuid` `(bool: true)` - By default a random node UUID will be
  generated, but setting this to `false` will use the system's UUID. Before
  Nomad 0.6 the default was to use the system UUID.

- `cni_path` `(string: "/opt/cni/bin")` - Sets the search path that is used
  for CNI plugin discovery. Multiple paths can be searched using
  colon-delimited paths.

- `cni_config_dir` `(string: "/opt/cni/config")` - Sets the directory where
  CNI network configuration is located. The client will use this path when
  fingerprinting CNI networks. Filenames should use the `.conflist` extension.

- `bridge_network_name` `(string: "nomad")` - Sets the name of the bridge to
  be created by Nomad for allocations running with bridge networking mode on
  the client.

- `bridge_network_subnet` `(string: "172.26.64.0/20")` - Specifies the subnet
  from which the client will allocate IP addresses.

- `artifact` ([Artifact](#artifact-parameters): varied) - Specifies controls
  on the behavior of task [`artifact`](/docs/job-specification/artifact)
  stanzas.

- `template` ([Template](#template-parameters): nil) - Specifies controls on
  the behavior of task [`template`](/docs/job-specification/template) stanzas.

- `host_volume` ([host_volume](#host_volume-stanza): nil) - Exposes paths from
  the host as volumes that can be mounted into jobs.

- `host_network` ([host_network](#host_network-stanza): nil) - Registers
  additional host networks with the node that can be selected when port
  mapping.

- `cgroup_parent` `(string: "/nomad")` - Specifies the cgroup parent under
  which cgroup subsystems managed by Nomad will be mounted. Currently this
  only applies to the `cpuset` subsystem. This field is ignored on non-Linux
  platforms.
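To illustrate several of these parameters together, here is a minimal sketch;
the node class, dynamic port range, and interface expression below are
illustrative assumptions, not recommendations:

```hcl
client {
  enabled = true

  # Hypothetical grouping and dynamic port range, for illustration only.
  node_class       = "batch"
  min_dynamic_port = 20000
  max_dynamic_port = 32000

  # A go-sockaddr/template expression that selects the interface
  # attached to the default route for fingerprinting.
  network_interface = "{{ GetDefaultInterfaces | attr \"name\" }}"
}
```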
### `chroot_env` Parameters

Drivers based on [isolated fork/exec](/docs/drivers/exec) implement file
system isolation using chroot on Linux. The `chroot_env` map allows the chroot
environment to be configured using source paths on the host operating system.
The mapping format is:

```text
source_path -> dest_path
```

The following example specifies a chroot which contains just enough to run the
`ls` utility:

```hcl
client {
  chroot_env {
    "/bin/ls"           = "/bin/ls"
    "/etc/ld.so.cache"  = "/etc/ld.so.cache"
    "/etc/ld.so.conf"   = "/etc/ld.so.conf"
    "/etc/ld.so.conf.d" = "/etc/ld.so.conf.d"
    "/etc/passwd"       = "/etc/passwd"
    "/lib"              = "/lib"
    "/lib64"            = "/lib64"
  }
}
```

When `chroot_env` is unspecified, the `exec` driver will use a default chroot
environment with the most commonly used parts of the operating system. Please
see the [Nomad `exec` driver documentation](/docs/drivers/exec#chroot) for the
full list.

As of Nomad 1.2, Nomad will never attempt to embed the `alloc_dir` in the
chroot as doing so would cause infinite recursion.

### `reserved` Parameters

- `cpu` `(int: 0)` - Specifies the amount of CPU to reserve, in MHz.

- `cores` `(int: 0)` - Specifies the number of CPU cores to reserve.

- `memory` `(int: 0)` - Specifies the amount of memory to reserve, in MB.

- `disk` `(int: 0)` - Specifies the amount of disk to reserve, in MB.

- `reserved_ports` `(string: "")` - Specifies a comma-separated list of ports
  to reserve on all fingerprinted network devices. Ranges can be specified by
  using a hyphen separating the two inclusive ends. See also
  [`host_network`](#host_network-stanza) for reserving ports on specific host
  networks.

### `artifact` Parameters

- `http_read_timeout` `(string: "30m")` - Specifies the maximum duration in
  which an HTTP download request must complete before it is canceled. Set to
  `0` to not enforce a limit.

- `http_max_size` `(string: "100GB")` - Specifies the maximum size allowed for
  artifacts downloaded via HTTP. Set to `0` to not enforce a limit.

- `gcs_timeout` `(string: "30m")` - Specifies the maximum duration in which a
  Google Cloud Storage operation must complete before it is canceled. Set to
  `0` to not enforce a limit.

- `git_timeout` `(string: "30m")` - Specifies the maximum duration in which a
  Git operation must complete before it is canceled. Set to `0` to not enforce
  a limit.

- `hg_timeout` `(string: "30m")` - Specifies the maximum duration in which a
  Mercurial operation must complete before it is canceled. Set to `0` to not
  enforce a limit.

- `s3_timeout` `(string: "30m")` - Specifies the maximum duration in which an
  S3 operation must complete before it is canceled. Set to `0` to not enforce
  a limit.
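As a sketch of how these limits can be tightened, the following uses
illustrative values rather than recommended ones:

```hcl
client {
  artifact {
    # Cancel HTTP downloads that take longer than five minutes and
    # reject artifacts larger than 2 GB (illustrative values).
    http_read_timeout = "5m"
    http_max_size     = "2GB"

    # Apply the same bound to Git and S3 operations.
    git_timeout = "5m"
    s3_timeout  = "5m"
  }
}
```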
### `template` Parameters

- `function_denylist` `([]string: ["plugin", "writeToFile"])` - Specifies a
  list of template rendering functions that should be disallowed in job specs.
  By default the `plugin` and `writeToFile` functions are disallowed as they
  allow unrestricted root access to the host.

- `disable_file_sandbox` `(bool: false)` - Allows templates access to
  arbitrary files on the client host via the `file` function. By default,
  templates can access files only within the [task working directory].

- `max_stale` `(string: "87600h")` - This is the maximum interval to allow
  "stale" data. If `max_stale` is set to `0`, only the Consul leader will
  respond to queries, and requests that reach a follower will be forwarded to
  the leader. In large clusters with many requests, this is not as scalable.
  This option allows any follower to respond to a query, so long as the
  last-replicated data is within this bound. Higher values result in less
  cluster load, but are more likely to have outdated data. This default of 10
  years (`87600h`) matches the default Consul configuration.

- `wait` `(map: { min = "5s" max = "4m" })` - Defines the minimum and maximum
  amount of time to wait before attempting to re-render a template. Consul
  Template re-renders templates whenever rendered variables from Consul,
  Nomad, or Vault change. However, in order to minimize how often tasks are
  restarted or reloaded, Nomad will configure Consul Template with a backoff
  timer that will tick on an interval equal to the specified `min` value.
  Consul Template will always wait at least as long as the `min` value
  specified. If the underlying data has not changed between two tick
  intervals, Consul Template will re-render. If the underlying data has
  changed, Consul Template will delay re-rendering until the underlying data
  stabilizes for at least one tick interval, or the configured `max` duration
  has elapsed. Once the `max` duration has elapsed, Consul Template will
  re-render the template with the data available at the time. This is useful
  to enable in systems where Consul is in a degraded state, or the referenced
  data values are changing rapidly, because it will reduce the number of times
  a template is rendered. This configuration is also exposed in the _task
  template stanza_ to allow overrides per task.

  ```hcl
  wait {
    min = "5s"
    max = "4m"
  }
  ```

- `wait_bounds` `(map: nil)` - Defines client-level lower and upper bounds for
  per-template `wait` configuration. If the individual template configuration
  has a `min` lower than `wait_bounds.min` or a `max` greater than
  `wait_bounds.max`, the bounds will be enforced, and the template `wait` will
  be adjusted before being sent to `consul-template`.

  ```hcl
  wait_bounds {
    min = "5s"
    max = "10s"
  }
  ```

- `block_query_wait` `(string: "5m")` - This is the amount of time to wait for
  the results of a blocking query. Many endpoints in Consul support a feature
  known as "blocking queries". A blocking query is used to wait for a
  potential change using long polling.

- `consul_retry` `(map: { attempts = 0 backoff = "250ms" max_backoff = "1m" })` -
  This controls the retry behavior when an error is returned from Consul. The
  template runner will not exit in the face of failure. Instead, it uses
  exponential back-off and retry functions to wait for the Consul cluster to
  become available, as is customary in distributed systems.

  ```hcl
  consul_retry {
    # This specifies the number of attempts to make before giving up. Each
    # attempt adds the exponential backoff sleep time. Setting this to
    # zero will implement an unlimited number of retries.
    attempts = 0

    # This is the base amount of time to sleep between retry attempts. Each
    # retry sleeps for an exponent of 2 longer than this base. For 5 retries,
    # the sleep times would be: 250ms, 500ms, 1s, 2s, then 4s.
    backoff = "250ms"

    # This is the maximum amount of time to sleep between retry attempts.
    # When max_backoff is set to zero, there is no upper limit to the
    # exponential sleep between retry attempts.
    # If max_backoff is set to 10s and backoff is set to 1s, sleep times
    # would be: 1s, 2s, 4s, 8s, 10s, 10s, ...
    max_backoff = "1m"
  }
  ```

- `vault_retry` `(map: { attempts = 0 backoff = "250ms" max_backoff = "1m" })` -
  This controls the retry behavior when an error is returned from Vault.
  Consul Template is highly fault tolerant, meaning it does not exit in the
  face of failure. Instead, it uses exponential back-off and retry functions
  to wait for the cluster to become available, as is customary in distributed
  systems.

  ```hcl
  vault_retry {
    # This specifies the number of attempts to make before giving up. Each
    # attempt adds the exponential backoff sleep time. Setting this to
    # zero will implement an unlimited number of retries.
    attempts = 0

    # This is the base amount of time to sleep between retry attempts. Each
    # retry sleeps for an exponent of 2 longer than this base. For 5 retries,
    # the sleep times would be: 250ms, 500ms, 1s, 2s, then 4s.
    backoff = "250ms"

    # This is the maximum amount of time to sleep between retry attempts.
    # When max_backoff is set to zero, there is no upper limit to the
    # exponential sleep between retry attempts.
    # If max_backoff is set to 10s and backoff is set to 1s, sleep times
    # would be: 1s, 2s, 4s, 8s, 10s, 10s, ...
    max_backoff = "1m"
  }
  ```

- `nomad_retry` `(map: { attempts = 0 backoff = "250ms" max_backoff = "1m" })` -
  This controls the retry behavior when an error is returned from Nomad.
  Consul Template is highly fault tolerant, meaning it does not exit in the
  face of failure. Instead, it uses exponential back-off and retry functions
  to wait for the cluster to become available, as is customary in distributed
  systems.

  ```hcl
  nomad_retry {
    # This specifies the number of attempts to make before giving up. Each
    # attempt adds the exponential backoff sleep time. Setting this to
    # zero will implement an unlimited number of retries.
    attempts = 0

    # This is the base amount of time to sleep between retry attempts. Each
    # retry sleeps for an exponent of 2 longer than this base. For 5 retries,
    # the sleep times would be: 250ms, 500ms, 1s, 2s, then 4s.
    backoff = "250ms"

    # This is the maximum amount of time to sleep between retry attempts.
    # When max_backoff is set to zero, there is no upper limit to the
    # exponential sleep between retry attempts.
    # If max_backoff is set to 10s and backoff is set to 1s, sleep times
    # would be: 1s, 2s, 4s, 8s, 10s, 10s, ...
    max_backoff = "1m"
  }
  ```
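Putting several of these parameters together, a minimal sketch of a
client-level `template` configuration might look like the following; the
values are illustrative, not recommendations:

```hcl
client {
  template {
    # The default denylist, shown explicitly for illustration.
    function_denylist = ["plugin", "writeToFile"]

    # Batch rapid changes: wait at least 5s, and at most 4m,
    # before re-rendering a template.
    wait {
      min = "5s"
      max = "4m"
    }

    # Retry Consul errors indefinitely with exponential backoff.
    consul_retry {
      attempts    = 0
      backoff     = "250ms"
      max_backoff = "1m"
    }
  }
}
```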
### `host_volume` Stanza

The `host_volume` stanza is used to make volumes available to jobs. The key of
the stanza corresponds to the name of the volume for use in the `source`
parameter of a `"host"` type [`volume`](/docs/job-specification/volume) and
ACLs.

```hcl
client {
  host_volume "ca-certificates" {
    path      = "/etc/ssl/certs"
    read_only = true
  }
}
```

#### `host_volume` Parameters

- `path` `(string: "", required)` - Specifies the path on the host that should
  be used as the source when this volume is mounted into a task. The path must
  exist on client startup.

- `read_only` `(bool: false)` - Specifies whether the volume should only ever
  be allowed to be mounted `read_only`, or if it should be writeable.
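For context, this is a sketch of how a job might consume the
`ca-certificates` volume defined above; the job, group, task, and driver
shown are hypothetical:

```hcl
job "example" {
  group "web" {
    # Request the host volume registered in the client configuration.
    volume "certs" {
      type      = "host"
      source    = "ca-certificates"
      read_only = true
    }

    task "app" {
      driver = "docker"

      config {
        image = "nginx:alpine" # hypothetical image
      }

      # Mount the requested volume into the task's filesystem.
      volume_mount {
        volume      = "certs"
        destination = "/etc/ssl/certs"
        read_only   = true
      }
    }
  }
}
```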
### `host_network` Stanza

The `host_network` stanza is used to register additional host networks with
the node that can be used when port mapping. The key of the stanza corresponds
to the name of the network used in the
[`host_network`](/docs/job-specification/network#host-networks).

```hcl
client {
  host_network "public" {
    cidr           = "203.0.113.0/24"
    reserved_ports = "22,80"
  }
}
```

#### `host_network` Parameters

- `cidr` `(string: "")` - Specifies a CIDR block of addresses to match
  against. If an address is found on the node that is contained by this CIDR
  block, the host network will be registered with it.

- `interface` `(string: "")` - Filters searching of addresses to a specific
  interface.

- `reserved_ports` `(string: "")` - Specifies a comma-separated list of ports
  to reserve on all addresses associated with this network. Ranges can be
  specified by using a hyphen separating the two inclusive ends.
  [`reserved.reserved_ports`](#reserved_ports) are also reserved on each host
  network.

## `client` Examples

### Common Setup

This example shows the most basic configuration for a Nomad client joined to a
cluster.

```hcl
client {
  enabled = true

  server_join {
    retry_join     = ["1.1.1.1", "2.2.2.2"]
    retry_max      = 3
    retry_interval = "15s"
  }
}
```

### Reserved Resources

This example shows a sample configuration for reserving resources on the
client. This is useful if you want to allocate only a portion of the client's
resources to jobs.

```hcl
client {
  enabled = true

  reserved {
    cpu            = 500
    memory         = 512
    disk           = 1024
    reserved_ports = "22,80,8500-8600"
  }
}
```

### Custom Metadata, Network Speed, and Node Class

This example shows a client configuration which customizes the metadata,
network speed, and node class. The scheduler can use this information while
processing [constraints][metadata_constraint]. The metadata is completely user
configurable; the values below are for illustrative purposes only.

```hcl
client {
  enabled    = true
  node_class = "prod"

  meta {
    owner           = "ops"
    cached_binaries = "redis,apache,nginx,jq,cypress,nodejs"
    rack            = "rack-12-1"
  }
}
```

[plugin-options]: #plugin-options
[plugin-stanza]: /docs/configuration/plugin
[server-join]: /docs/configuration/server_join 'Server Join'
[metadata_constraint]: /docs/job-specification/constraint#user-specified-metadata 'Nomad User-Specified Metadata Constraint Example'
[task working directory]: /docs/runtime/environment#task-directories 'Task directories'
[go-sockaddr/template]: https://godoc.org/github.com/hashicorp/go-sockaddr/template
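As a final sketch, a job could use the metadata and node class above during
placement; the job name is hypothetical, and the attribute syntax follows the
[constraints][metadata_constraint] documentation:

```hcl
job "cache" {
  # Only run on nodes whose client configuration advertises
  # the matching user-defined rack metadata.
  constraint {
    attribute = "${meta.rack}"
    value     = "rack-12-1"
  }

  # Restrict placement to the "prod" node class.
  constraint {
    attribute = "${node.class}"
    value     = "prod"
  }
}
```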