
---
layout: docs
page_title: "server Stanza - Agent Configuration"
sidebar_current: docs-agent-configuration-server
description: |-
  The "server" stanza configures the Nomad agent to operate in server mode to
  participate in scheduling decisions, register with service discovery, handle
  join failures, and more.
---

# `server` Stanza

Placement: **server**

The `server` stanza configures the Nomad agent to operate in server mode to participate in scheduling decisions, register with service discovery, handle join failures, and more.

```hcl
server {
  enabled          = true
  bootstrap_expect = 3
  retry_join       = ["1.2.3.4", "5.6.7.8"]
}
```

## `server` Parameters

- `bootstrap_expect` `(int: required)` - Specifies the number of server nodes to wait for before bootstrapping. It is most common to use the odd-numbered integers 3 or 5 for this value, depending on the cluster size. A value of 1 does not provide any fault tolerance and is not recommended for production use cases.

- `data_dir` `(string: "[data_dir]/server")` - Specifies the directory to use for server-specific data, including the replicated log. By default, this is the top-level `data_dir` suffixed with "server", like `"/opt/nomad/server"`. This must be an absolute path.

- `enabled` `(bool: false)` - Specifies if this agent should run in server mode. All other server options depend on this value being set.

- `enabled_schedulers` `(array<string>: [all])` - Specifies which sub-schedulers this server will handle. This can be used to restrict the evaluations that worker threads will dequeue for processing.

- `encrypt` `(string: "")` - Specifies the secret key to use for encryption of Nomad server's gossip network traffic. This key must be 16 bytes of base64-encoded data. The provided key is automatically persisted to the data directory and loaded automatically whenever the agent is restarted. This means that to encrypt Nomad server's gossip protocol, this option only needs to be provided once on each agent's initial startup sequence. If it is provided after Nomad has been initialized with an encryption key, then the provided key is ignored and a warning will be displayed. See the Nomad encryption documentation for more details on this option and its impact on the cluster.

- `node_gc_threshold` `(string: "24h")` - Specifies how long a node must be in a terminal state before it is garbage collected and purged from the system. This is specified using a label suffix like `"30s"` or `"1h"`.

- `num_schedulers` `(int: [num-cores])` - Specifies the number of parallel scheduler threads to run. This can be as many as one per core, or `0` to disallow this server from making any scheduling decisions. This defaults to the number of CPU cores.

- `protocol_version` `(int: 1)` - Specifies the Nomad protocol version to use when communicating with other Nomad servers. This value is typically not required as the agent internally knows the latest version, but may be useful in some upgrade scenarios.

- `rejoin_after_leave` `(bool: false)` - Specifies if Nomad will ignore a previous leave and attempt to rejoin the cluster when starting. By default, Nomad treats leave as a permanent intent and does not attempt to join the cluster again when starting. This flag allows the previous state to be used to rejoin the cluster.

- `retry_join` `(array<string>: [])` - Specifies a list of server addresses to retry joining if the first attempt fails. This is similar to `start_join`, but is only invoked if the initial join attempt fails. The list of addresses will be tried in the order specified, until one succeeds. After one succeeds, no further addresses will be contacted. This is useful for cases where we know the address will become available eventually. Use `retry_join` as a replacement for `start_join`; do not use both options. See the server address format section for more information on the format of the string.

- `retry_interval` `(string: "30s")` - Specifies the time to wait between retry join attempts.

- `retry_max` `(int: 0)` - Specifies the maximum number of join attempts to be made before exiting with a return code of 1. By default, this is set to 0 which is interpreted as infinite retries.

- `start_join` `(array<string>: [])` - Specifies a list of server addresses to join on startup. If Nomad is unable to join with any of the specified addresses, agent startup will fail. See the server address format section for more information on the format of the string.
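
As a sketch, the gossip and join parameters described above might be combined as follows. The encryption key shown is a placeholder; a real key can be generated with `nomad keygen`, and the addresses and retry values are illustrative:

```hcl
server {
  enabled          = true
  bootstrap_expect = 3

  # Placeholder 16-byte base64 key -- generate a real one with `nomad keygen`.
  encrypt = "cg8StVXbQJ0gPvMd9o7yrg=="

  # Retry the listed servers every 15 seconds, giving up (and exiting with
  # return code 1) after 10 failed attempts.
  retry_join     = ["10.0.0.10", "10.0.0.11"]
  retry_interval = "15s"
  retry_max      = 10
}
```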

## Server Address Format

This section describes the acceptable syntax and format for describing the location of a Nomad server. There are many ways to reference a Nomad server, including directly by IP address and resolving through DNS.

### Directly via IP Address

It is possible to address another Nomad server using its IP address. This is done in the `ip:port` format, such as:

```text
1.2.3.4:5678
```

If the port option is omitted, it defaults to the Serf port, which is 4648 unless configured otherwise:

```text
1.2.3.4 => 1.2.3.4:4648
```

### Via Domains or DNS

It is possible to address another Nomad server using its DNS address. This is done in the `address:port` format, such as:

```text
nomad-01.company.local:5678
```

If the port option is omitted, it defaults to the Serf port, which is 4648 unless configured otherwise:

```text
nomad-01.company.local => nomad-01.company.local:4648
```
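
Putting the two formats together, a `retry_join` list can mix IP addresses and DNS names, with or without explicit ports. The addresses and hostname below are hypothetical:

```hcl
server {
  enabled = true

  # Mixed address formats; entries without a port default to Serf port 4648.
  retry_join = [
    "1.2.3.4",                # IP address, implicit :4648
    "5.6.7.8:5678",           # IP address with explicit port
    "nomad-01.company.local", # DNS name, implicit :4648
  ]
}
```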

## `server` Examples

### Common Setup

This example shows a common Nomad agent server configuration stanza. The two IP addresses could also be DNS names, and should point to the other Nomad servers in the cluster.

```hcl
server {
  enabled          = true
  bootstrap_expect = 3
  retry_join       = ["1.2.3.4", "5.6.7.8"]
}
```

### Configuring Data Directory

This example shows configuring a custom data directory for the server data.

```hcl
server {
  data_dir = "/opt/nomad/server"
}
```

### Automatic Bootstrapping

The Nomad servers can automatically bootstrap if Consul is configured. For a more detailed explanation, please see the automatic Nomad bootstrapping documentation.
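
As a rough sketch of what this looks like, assuming a Consul agent is running locally on its default port, the agent's `consul` stanza lets servers discover each other without a static `retry_join` list (see the Consul integration documentation for the full set of options):

```hcl
# Agent configuration: with a local Consul agent available, Nomad servers
# can discover and join each other automatically.
consul {
  address          = "127.0.0.1:8500"
  server_auto_join = true
}

server {
  enabled          = true
  bootstrap_expect = 3
}
```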

### Restricting Schedulers

This example shows restricting the schedulers that are enabled as well as the maximum number of cores to utilize when participating in scheduling decisions:

```hcl
server {
  enabled            = true
  enabled_schedulers = ["batch", "service"]
  num_schedulers     = 7
}
```
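
Conversely, per the `num_schedulers` description above, setting it to `0` yields a server that participates in the cluster but makes no scheduling decisions; a sketch:

```hcl
server {
  enabled        = true
  # This server replicates state and participates in leader election,
  # but dequeues no evaluations for scheduling.
  num_schedulers = 0
}
```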