Tim Gross 9ed75e1f72
client: de-duplicate alloc updates and gate during restore (#17074)
When client nodes are restarted, all allocations that have been scheduled on the
node have their modify index updated, including terminal allocations. There are
several contributing factors:

* The `allocSync` method that updates the servers isn't gated on first contact
  with the servers. This means that if a server updates the desired state while
  the client is down, the `allocSync` races with the `Node.ClientGetAlloc`
  RPC. This will typically result in the client updating the server with "running"
  and then immediately thereafter "complete".

* The `allocSync` method unconditionally sends the `Node.UpdateAlloc` RPC, even
  when it's possible to assert that the server has definitely seen the client
  state. The allocrunner may queue up updates even if we gate sending them, so
  we end up with a race between the allocrunner updating its internal state
  to overwrite the previous update and `allocSync` sending the bogus or duplicate
  update.

This changeset adds tracking of server-acknowledged state to the
allocrunner. This state gets checked in the `allocSync` before adding the update
to the batch, and updated when `Node.UpdateAlloc` returns successfully. To
implement this we need to be able to equality-check the updates against the last
acknowledged state. We also need to add the last acknowledged state to the
client state DB, otherwise we'd drop unacknowledged updates across restarts.

The client restart test has been expanded to cover a variety of allocation
states, including allocs stopped before shutdown, allocs stopped by the server
while the client is down, and allocs that have been completely GC'd on the
server while the client is down. I've also bench tested scenarios where the task
workload is killed while the client is down, resulting in a failed restore.

Fixes #16381
2023-05-11 09:05:24 -04:00

README.md

Nomad (License: MPL 2.0)


Nomad is a simple and flexible workload orchestrator to deploy and manage containers (Docker, Podman), non-containerized applications (executable, Java), and virtual machines (QEMU) across on-premises and cloud environments at scale.

Nomad is supported on Linux, Windows, and macOS. A commercial version of Nomad, Nomad Enterprise, is also available.

Nomad provides several key features:

  • Deploy Containers and Legacy Applications: Nomad's flexibility as an orchestrator enables an organization to run containers, legacy, and batch applications together on the same infrastructure. Via pluggable task drivers, Nomad brings core orchestration benefits to legacy applications without needing to containerize them.

  • Simple & Reliable: Nomad runs as a single binary and is entirely self-contained, combining resource management and scheduling into a single system. Nomad does not require any external services for storage or coordination. Nomad automatically handles application, node, and driver failures. Nomad is distributed and resilient, using leader election and state replication to provide high availability in the event of failures.

  • Device Plugins & GPU Support: Nomad offers built-in support for GPU workloads such as machine learning (ML) and artificial intelligence (AI). Nomad uses device plugins to automatically detect and utilize resources from hardware devices such as GPUs, FPGAs, and TPUs.

  • Federation for Multi-Region, Multi-Cloud: Nomad was designed to support infrastructure at a global scale. Nomad supports federation out-of-the-box and can deploy applications across multiple regions and clouds.

  • Proven Scalability: Nomad is optimistically concurrent, which increases throughput and reduces latency for workloads. Nomad has been proven to scale to clusters of 10K+ nodes in real-world production environments.

  • HashiCorp Ecosystem: Nomad integrates seamlessly with Terraform, Consul, and Vault for provisioning, service discovery, and secrets management.

Quick Start

Testing

See Learn: Getting Started for instructions on setting up a local Nomad cluster for non-production use.

Optionally, find Terraform manifests for bringing up a development Nomad cluster on a public cloud in the terraform directory.

Production

See Learn: Nomad Reference Architecture for recommended practices and a reference architecture for production deployments.

Documentation

Full, comprehensive documentation is available on the Nomad website: https://www.nomadproject.io/docs

Guides are available on HashiCorp Learn.

Roadmap

A timeline of major features expected for the next release or two can be found in the Public Roadmap.

This roadmap is a best guess at any given point, and both release dates and projects in each release are subject to change. Do not take any of these items as commitments, especially ones later than one major release away.

Contributing

See the contributing directory for more developer documentation.