Clients periodically fingerprint Vault and Consul to ensure the server has
updated attributes in the client's fingerprint. If the client can't reach
Vault/Consul, the fingerprinter clears the attributes and requires a node
update. Although this seems like correct behavior so that we can detect
intentional removal of Vault/Consul access, it has two serious failure modes:
(1) If a local Consul agent is restarted to pick up configuration changes and the
client happens to fingerprint at that moment, the client will update its
fingerprint, triggering evaluations for all its jobs and for every system job
in the cluster.
(2) If a client loses Vault connectivity, the same thing happens. But the
consequences are much worse in the Vault case because Vault is not run as a
local agent, so Vault connectivity failures are highly correlated across the
entire cluster. A 15 second Vault outage will cause a new `node-update`
evaluation for every system job on the cluster times the number of nodes, plus
one `node-update` evaluation for every non-system job on each node. On large
clusters of thousands of nodes, we've seen this create a large backlog of evaluations.
This changeset updates the fingerprinting behavior to keep the last fingerprint
if Consul or Vault queries fail. This prevents a storm of evaluations at the
cost of requiring a client restart if Consul or Vault is intentionally removed
from the client.
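As a rough illustration of the new behavior (the names below are hypothetical
sketch names, not the actual fingerprinter code), the pattern is:

```go
package fingerprint

// Minimal sketch of keep-last-fingerprint-on-error; queryFunc, Fingerprinter,
// and lastAttrs are illustrative names only, not Nomad's fingerprinter API.

// queryFunc asks the upstream (Consul or Vault) for its current attributes.
type queryFunc func() (map[string]string, error)

// Fingerprinter caches the last successful set of attributes.
type Fingerprinter struct {
	query     queryFunc
	lastAttrs map[string]string
}

// Fingerprint returns fresh attributes when the upstream is reachable and
// otherwise keeps the last known attributes, so a transient outage does not
// clear the fingerprint and force a node update.
func (f *Fingerprinter) Fingerprint() map[string]string {
	attrs, err := f.query()
	if err != nil {
		// Keep the previous fingerprint. The trade-off: intentionally
		// removing Consul/Vault now requires a client restart to clear
		// these attributes.
		return f.lastAttrs
	}
	f.lastAttrs = attrs
	return attrs
}
```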
* test: add e2e for non-overlapping placements
Followup to #10446
Fails (as expected) against 1.3.x at the wait for blocked eval (because
the allocs are allowed to overlap).
Passes against 1.4.0-beta.1 (as expected).
* Update e2e/overlap/overlap_test.go
Co-authored-by: James Rasell <jrasell@users.noreply.github.com>
This PR splits up the nomad/mock package into more files. Specific features
that have a lot of mocks get their own file (e.g. acl, variables, csi, connect, etc.).
Beyond that, functions that return jobs, allocs, and nodes live in the job, alloc,
and node files respectively. Lastly, the remaining mocks and helpers stay in mock.go
* scheduler: Fix bug where the scheduler would treat multiregion jobs as paused for job types that don't use deployments
Co-authored-by: Tim Gross <tgross@hashicorp.com>
* button styles
* Further styles including global toggle adjustment
* sidebar funcs and header
* Functioning task logs in high-level sidebars
* same-lineify the show tasks toggle
* Changelog
* Full-height sidebar calc in css, plz drop soon container queries
* Active status and query params for allocations page
* Reactive shouldShowLogs getter and added to client and task group pages
* Higher order func passing, thanks @DingoEatingFuzz
* Non-service job types get allocation params passed
* Keyframe animation for task log sidebar
* Acceptance test
* A few more sub-row tests
* Lintfix
In Nomad 1.2.6 we shipped `eval list`, which accepts a `-json` flag, and
deprecated the use of `eval status` without an evaluation ID, with an upgrade
note that it would be removed in Nomad 1.4.0. This changeset completes that
work.
The `operator debug` command writes JSON files from API responses as a single
line containing an array of JSON objects. But some of these files can be
extremely large (GBs) for large production clusters, which makes it difficult
to parse them using typical line-oriented Unix command line tools that can
stream their inputs without consuming a lot of memory.
For collections that are typically large, instead emit newline-delimited JSON.
This changeset includes some first-pass refactoring of this command. It breaks
up monolithic methods that validate a path, create a file, serialize objects,
and write them to disk into smaller functions, some of which can now be
standalone to take advantage of generics.
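A sketch of the newline-delimited output, using a hypothetical generic helper
rather than the command's real function names:

```go
package main

import (
	"encoding/json"
	"io"
	"os"
)

// writeNDJSON emits one JSON object per line so the output can be streamed
// through line-oriented tools (grep, awk, jq -c) without loading the whole
// array into memory. The helper name and signature are illustrative only.
func writeNDJSON[T any](w io.Writer, items []T) error {
	enc := json.NewEncoder(w) // Encode writes a trailing newline per object
	for _, item := range items {
		if err := enc.Encode(item); err != nil {
			return err
		}
	}
	return nil
}

func main() {
	type alloc struct {
		ID   string `json:"ID"`
		Node string `json:"NodeID"`
	}
	allocs := []alloc{{ID: "a1", Node: "n1"}, {ID: "a2", Node: "n2"}}
	_ = writeNDJSON(os.Stdout, allocs)
}
```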
Operators may have a setup whereby the TLS config comes from a
source other than Nomad-specific environment variables. In this case,
we should attempt to identify the scheme using the config setting
as a fallback.
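A hedged sketch of the fallback (the function and parameter names are
illustrative, not the actual client code):

```go
package config

// inferScheme is a sketch only, not the real Nomad api package code.
// If no scheme was set explicitly (e.g. via NOMAD_ADDR), fall back to the
// TLS config: when TLS material is configured, assume https.
func inferScheme(explicitScheme string, tlsConfigured bool) string {
	if explicitScheme != "" {
		return explicitScheme
	}
	if tlsConfigured {
		return "https"
	}
	return "http"
}
```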
* cleanup: refactor MapStringStringSliceValueSet to be cleaner
* cleanup: replace SliceStringToSet with actual set
* cleanup: replace SliceStringSubset with real set
* cleanup: replace SliceStringContains with slices.Contains
* cleanup: remove unused function SliceStringHasPrefix
* cleanup: fixup StringHasPrefixInSlice doc string
* cleanup: refactor SliceSetDisjoint to use real set
* cleanup: replace CompareSliceSetString with SliceSetEq
* cleanup: replace CompareMapStringString with maps.Equal
* cleanup: replace CopyMapStringString with CopyMap
* cleanup: replace CopyMapStringInterface with CopyMap
* cleanup: fixup more CopyMapStringString and CopyMapStringInt
* cleanup: replace CopySliceString with slices.Clone
* cleanup: remove unused CopySliceInt
* cleanup: refactor CopyMapStringSliceString to be generic as CopyMapOfSlice
* cleanup: replace CopyMap with maps.Clone
* cleanup: run go mod tidy
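To illustrate the flavor of these replacements, here is a standalone sketch
(not the helper package itself) of the stdlib `slices` and `maps` generics
doing the work of the removed helpers:

```go
package main

// Requires Go 1.21+ for the stdlib slices and maps packages.
import (
	"fmt"
	"maps"
	"slices"
)

func main() {
	// slices.Contains replaces a hand-rolled SliceStringContains helper.
	datacenters := []string{"dc1", "dc2"}
	fmt.Println(slices.Contains(datacenters, "dc1")) // true

	// maps.Equal replaces CompareMapStringString; maps.Clone replaces the
	// various CopyMapString* helpers.
	meta := map[string]string{"team": "platform"}
	metaCopy := maps.Clone(meta)
	fmt.Println(maps.Equal(meta, metaCopy)) // true

	// slices.Clone replaces CopySliceString.
	fmt.Println(slices.Clone(datacenters))
}
```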
The concurrent gate access test is flaky since it depends on the order
of operations of two concurrent goroutines. Despite the heavy bias
towards one of the results, it's still possible to end the execution
with a closed gate.
I believe this case was created to test an earlier implementation where
the gate state was stored and mutated internally, so the access had to
be protected by a lock. However, the final implementation changed this
approach to be only channel-based, so there is no need for this flaky
test anymore.
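For context, a minimal sketch of what a purely channel-based gate can look
like (illustrative only, not the package's actual implementation):

```go
package gate

// Gate is a one-shot, channel-based gate: the open/closed state lives in
// whether the channel has been closed, so no mutex is needed.
// This is a sketch only, not Nomad's actual gate code.
type Gate struct {
	open chan struct{}
}

// NewClosed returns a gate that starts closed.
func NewClosed() *Gate {
	return &Gate{open: make(chan struct{})}
}

// Open releases every current and future waiter. Calling Open more than once
// is not supported in this sketch.
func (g *Gate) Open() {
	close(g.open)
}

// WaitCh returns a channel that is closed once the gate opens, suitable for
// use in a select statement.
func (g *Gate) WaitCh() <-chan struct{} {
	return g.open
}
```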
* Added task links to various alloc tables
* Lintfix
* Border collapse and added to task group page
* Logs icon temporarily removed and localStorage added
* Mock task added to test
* Delog
* Two asserts in new test
* Remove commented-out code
* Changelog
* Removing args.allocation deps
In the event a single test fails to clean up properly after
itself, all other tests will fail as they attempt to create ACL
policies with the same names. This change ensures they use unique
ACL names, so when a single test fails, it is easy to identify that the
problem lies with that test rather than with the suite.
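The pattern is simply to give each test's policy a unique suffix, for example
(a sketch only; the real suite may use a UUID helper instead):

```go
package testutil

import (
	"crypto/rand"
	"encoding/hex"
	"fmt"
)

// uniquePolicyName returns a per-test ACL policy name with a random suffix,
// so a test that fails to clean up after itself cannot collide with another
// test creating a policy of the same name. Illustrative sketch only.
func uniquePolicyName(prefix string) string {
	b := make([]byte, 4)
	if _, err := rand.Read(b); err != nil {
		panic(err) // crypto/rand should not fail in practice
	}
	return fmt.Sprintf("%s-%s", prefix, hex.EncodeToString(b))
}
```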
* test: don't use loop vars in goroutines
fixes a data race in the test
* test: copy objects in statestore before mutating
fixes data race in test
* test: @lgfa29's semgrep rule for loops/goroutines
Found 2 places where we were improperly using loop variables inside
goroutines (see the sketch below).
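The fix itself is the classic pre-Go-1.22 loop-variable copy, sketched
standalone below (Go 1.22 later changed loops to scope the variable per
iteration):

```go
package main

import (
	"fmt"
	"sync"
)

func main() {
	ids := []string{"a", "b", "c"}
	var wg sync.WaitGroup

	for _, id := range ids {
		id := id // copy the loop variable so each goroutine sees its own value
		wg.Add(1)
		go func() {
			defer wg.Done()
			fmt.Println(id)
		}()
	}
	wg.Wait()
}
```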
This test is broken in CircleCI only. It works on GHA in both 20.04 and 22.04
and has been verified to work on real Nomad; temporarily commenting it out so
that we don't block unrelated CI runs.
WIP to fix in https://github.com/hashicorp/nomad/pull/14600
The rewrite refactors the suite to use the new style along with
other recent testing improvements. In order to ensure the spread
tests do not impact each other, there is new cleanup functionality
to ensure both the job and allocations are removed from state
before the test exits completely.
The `TestVarGetCommand` test uses the wrong namespace in the autocomplete
test. The namespace is only validated when quota enforcement is in place (or,
more typically, by ACL checks), so the test only fails in the ENT repo test runs.
* First attempt at stabilizing percy snapshots with faker
* Tokens seed moved to before management token generation
* Faker seed only in token test
* moving seed after storage clear
* And again, but back to no faker seeding
* Isolated seed and temporary log
* Setting seed(1) wherever we're snapshotting, or before establishing cluster scenarios
* Deliberate noop to see if percy is stable
* Changelog entry
* scheduler: stopped-yet-running allocs are still running
* scheduler: test new stopped-but-running logic
* test: assert nonoverlapping alloc behavior
Also add a simpler Wait test helper to improve line numbers and save a few
lines of code.
* docs: tried my best to describe #10446
it's not concise... feedback welcome
* scheduler: fix test that allowed overlapping allocs
* devices: only free devices when ClientStatus is terminal
* test: output nicer failure message if err==nil
Co-authored-by: Mahmood Ali <mahmood@hashicorp.com>
Co-authored-by: Michael Schurter <mschurter@hashicorp.com>