---
layout: "docs"
page_title: "Nomad Client and Server Requirements"
sidebar_current: "docs-cluster-requirements"
description: |-
  Learn how to manually bootstrap a Nomad cluster using the server-join
  command. This section also discusses Nomad federation across multiple
  datacenters and regions.
---

# Cluster Requirements

## Resources (RAM, CPU, etc.)

**Nomad servers** may need to be run on large machine instances. We suggest
having 8+ cores, 32 GB+ of memory, 80 GB+ of disk, and significant network
bandwidth. The core count and network recommendations ensure high throughput,
as Nomad relies heavily on network communication and the servers both manage
all the nodes in the region and perform scheduling. The memory and disk
requirements exist because Nomad stores all state in memory and writes two
snapshots of this data to disk. Disk should therefore be at least twice the
memory available to the server when deploying a high-load cluster.

**Nomad clients** support reserving resources on the node that should not be
used by Nomad. This should be used to target a specific resource utilization
per node and to reserve resources for applications running outside of Nomad's
supervision, such as Consul and the operating system itself.

Please see the [reservation configuration](/docs/agent/config.html#reserved) for
more detail.
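
As a minimal sketch, a client agent might reserve resources with a `reserved`
block in its configuration file; the values and filename below are
illustrative, not recommendations:

```hcl
# client.hcl -- illustrative client agent configuration
client {
  enabled = true

  # Reserve a slice of the node for Consul, the operating system, and other
  # processes running outside of Nomad's supervision.
  reserved {
    cpu            = 500       # MHz
    memory         = 512       # MB
    disk           = 1024      # MB
    reserved_ports = "22,8500" # ports Nomad should never allocate to tasks
  }
}
```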
## Network Topology

**Nomad servers** are expected to have sub 10 millisecond network latencies
between each other to ensure liveness and high throughput scheduling. Nomad
servers can be spread across multiple datacenters if they have low latency
connections between them to achieve high availability.

For example, on AWS every region is composed of multiple availability zones
with very low latency links between them, so every zone can be modeled as a
Nomad datacenter, and every zone can run a single Nomad server; these servers
can then be connected to form a quorum and a region.
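
As a sketch of this layout, each server agent would share a region name but
carry a zone-specific datacenter; the names and filename here are hypothetical:

```hcl
# server-us-east-1a.hcl -- one server per availability zone (illustrative)
region     = "us-east"    # one Nomad region spanning the AWS region
datacenter = "us-east-1a" # this agent's availability zone

server {
  enabled = true
}
```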
Nomad servers use Raft for state replication, and because Raft is strongly
consistent it needs a quorum of servers to function. We therefore recommend
running an odd number of Nomad servers in a region, usually three to five. A
cluster of three servers can withstand the failure of one server, and a cluster
of five can withstand two. Adding more servers to the quorum adds time to
replicate state and hence decreases throughput, so we don't recommend having
more than seven servers in a region.
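
For instance, a three-server region could be bootstrapped by giving each
server the following stanza (a sketch; adjust the count to match your
deployment):

```hcl
# server.hcl -- illustrative server agent configuration
server {
  enabled = true

  # Wait for three servers to join before electing a leader,
  # giving a quorum that tolerates one server failure.
  bootstrap_expect = 3
}
```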
**Nomad clients** do not have the same latency requirements as servers since
they do not participate in Raft. Clients can therefore have 100+ millisecond
latency to their servers. This allows a single set of Nomad servers to service
clients spread geographically across a continent, or even across the world in
the case of a single "global" region with many datacenters.
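
A geographically distant client simply points at the servers for its region;
a minimal sketch, where the datacenter name and server addresses are
placeholders:

```hcl
# client.hcl -- client agent in a remote datacenter (illustrative)
region     = "global"
datacenter = "ap-southeast-2a"

client {
  enabled = true

  # The servers can be on another continent; clients tolerate 100+ ms latency.
  servers = ["nomad-server-1.example.com:4647", "nomad-server-2.example.com:4647"]
}
```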