This PR updates Nomad's Consul service client to do map comparisons
using maps.Equal instead of reflect.DeepEqual. The bug fix addresses how
DeepEqual treats nil maps as different from empty maps, when in this case
they should be treated the same.
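To illustrate the difference (a minimal standalone sketch, not the Nomad code, using the standard library maps package; the golang.org/x/exp/maps version behaves the same): reflect.DeepEqual distinguishes a nil map from an empty one, while maps.Equal only compares keys and values.
```
package main

import (
	"fmt"
	"maps"
	"reflect"
)

func main() {
	var nilMeta map[string]string    // nil map
	emptyMeta := map[string]string{} // empty, non-nil map

	// reflect.DeepEqual treats a nil map and an empty map as different.
	fmt.Println(reflect.DeepEqual(nilMeta, emptyMeta)) // false

	// maps.Equal only compares keys and values, so nil and empty are equal.
	fmt.Println(maps.Equal(nilMeta, emptyMeta)) // true
}
```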
* One-time tokens are not replicated between regions, so we don't want to enforce
the version check across all of serf, just members in the same region.
* Scheduler: Disconnected client handling is specific to a single region, so we
don't want to enforce the version check across all of serf, just members in
the same region.
* Variables: enforce version check in Apply RPC
* Cleans up a bunch of legacy checks.
This changeset is specific to 1.4.x and the changes for previous versions of
Nomad will be manually backported in a separate PR.
In #14821 we fixed a panic that can happen if a leadership election happens in
the middle of an upgrade. That fix checks that all servers are at the minimum
version before initializing the keyring (which blocks evaluation processing
during the upgrade). But the check we implemented is over the serf membership,
which includes servers in any federated regions, which don't necessarily have
the same upgrade cycle.
Filter the version check by the leader's region.
Also bump up the log levels of major keyring operations.
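A minimal sketch of the region-scoped check, assuming serf members expose their region and build version via tags named "region" and "build" (the package and function names are illustrative, not the actual Nomad helpers):
```
package helper // illustrative only

import (
	version "github.com/hashicorp/go-version"
	"github.com/hashicorp/serf/serf"
)

// regionMeetsMinimumVersion reports whether every alive server in the given
// region is at or above minVersion. Members of other federated regions are
// ignored, since they may be on a different upgrade cycle.
func regionMeetsMinimumVersion(members []serf.Member, region string, minVersion *version.Version) bool {
	for _, m := range members {
		if m.Status != serf.StatusAlive || m.Tags["region"] != region {
			continue
		}
		build, err := version.NewVersion(m.Tags["build"])
		if err != nil || build.LessThan(minVersion) {
			return false
		}
	}
	return true
}
```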
TestOverlap keeps failing in the nightly e2e test with unhelpful output like:
```
Failed
=== RUN TestOverlap
overlap_test.go:92: Followup job overlap93ee1d2b blocked. Sleeping for the rest of overlap48c26c39's shutdown_delay (9.2/10s)
overlap_test.go:105: 1500/2000 retries reached for github.com/hashicorp/nomad/e2e/overlap.TestOverlap (err=timed out before an allocation was found for overlap93ee1d2b)
overlap_test.go:105: timeout: timed out before an allocation was found for overlap93ee1d2b
--- FAIL: TestOverlap (38.96s)
```
I have not been able to replicate it in my own e2e cluster, so I added
the EvalDump helper to print detailed eval information like:
```
=== RUN TestOverlap
1/1 Job overlap7b0e90ec Eval c38c9919-a4f0-5baf-45f7-0702383c682a
Type: service
TriggeredBy: job-register
Deployment:
Status: pending ()
NextEval:
PrevEval:
BlockedEval:
-- No placement failures --
QueuedAllocs:
SnapshotIdx: 0
CreateIndex: 96
ModifyIndex: 96
...
```
Hopefully helpful when debugging other tests as well!
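The helper is roughly the following shape (a hedged sketch against the Nomad api package; the real EvalDump may print more fields and format them differently):
```
package e2eutil // illustrative placement

import (
	"fmt"
	"strings"

	"github.com/hashicorp/nomad/api"
)

// EvalDump returns a human-readable summary of every evaluation for a job,
// so failing e2e tests can log scheduler state instead of just timing out.
func EvalDump(client *api.Client, jobID string) (string, error) {
	evals, _, err := client.Jobs().Evaluations(jobID, nil)
	if err != nil {
		return "", err
	}

	var sb strings.Builder
	for i, eval := range evals {
		fmt.Fprintf(&sb, "%d/%d Job %s Eval %s\n", i+1, len(evals), jobID, eval.ID)
		fmt.Fprintf(&sb, "  Type:         %s\n", eval.Type)
		fmt.Fprintf(&sb, "  TriggeredBy:  %s\n", eval.TriggeredBy)
		fmt.Fprintf(&sb, "  Deployment:   %s\n", eval.DeploymentID)
		fmt.Fprintf(&sb, "  Status:       %s (%s)\n", eval.Status, eval.StatusDescription)
		fmt.Fprintf(&sb, "  BlockedEval:  %s\n", eval.BlockedEval)
		fmt.Fprintf(&sb, "  QueuedAllocs: %v\n", eval.QueuedAllocations)
		fmt.Fprintf(&sb, "  CreateIndex:  %d\n", eval.CreateIndex)
		fmt.Fprintf(&sb, "  ModifyIndex:  %d\n", eval.ModifyIndex)
	}
	return sb.String(), nil
}
```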
* test: simplify overlap job placement logic
Trying to fix #14806.
Both the previous approach and this one worked on e2e clusters I
spun up.
* simplify code flow
Nomad runs one logmon process and also one docker_logger process for each
running allocation. A naive look at memory usage shows 10-30 MB of RSS, but a
closer look shows that most of this memory (ex. all but ~2MB for logmon) is
shared (`Shared_Clean` in Linux pmap).
But a heap dump of docker_logger shows that it currently has an extra ~2500 KiB
of heap (anonymously-mapped unshared memory) used for init blocks coming from
the agent code (ex. mostly regexes from go-version, structs, and the Consul
SDK). The packages for running logmon, docker_logger, and executor have an init
block that parses `os.Args` to drop into their own logic, which prevents them
from loading all the rest of the agent code and saves on memory, so this was
unexpected.
It looks like we accidentally reordered the imports in main, undoing some of the
work originally done in 404d2d4c98f1df930be1ae9852fe6e6ae8c1517e. This changeset
restores the ordering. A follow-up heap dump shows this saves ~2MB of unshared
RSS per docker_logger process.
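The dispatch pattern those subprocess packages rely on looks roughly like this (package and function names are hypothetical; the point is that the process exits before the heavier agent packages get to run their init-time allocations):
```
// Package logmondispatch is a hypothetical illustration of the pattern
// described above, not Nomad's actual package layout.
package logmondispatch

import (
	"fmt"
	"os"
)

func init() {
	// If the binary was launched as "nomad logmon", run only the logmon
	// logic and exit. Because the process exits here, the init blocks of
	// packages not yet initialized never run, so their init-time heap
	// (regex compilation, struct tables, SDK setup) is never allocated.
	if len(os.Args) > 1 && os.Args[1] == "logmon" {
		runLogmon()
		os.Exit(0)
	}
}

// runLogmon stands in for the real subprocess entrypoint.
func runLogmon() {
	fmt.Println("logmon subprocess would run here")
}
```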
* helpers: lockfree lookup of nobody user on linux and darwin
This PR continues the nobody user lookup saga, by making the nobody
user lookup lock-free on linux and darwin.
Doing the lookup in an init block originally broke the agent on Windows,
where we must avoid doing the lookup at all. We can get around that
breakage by only doing the lookup on linux/darwin, where the nobody
user is going to exist (sketched below).
Also return the nobody user by value so that a copy is created that
cannot be modified by callers of Nobody().
* helper: move nobody code into unix file
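A minimal sketch of that shape, assuming a helper package like the following (the package name and panic behavior are assumptions, not the exact Nomad code):
```
//go:build linux || darwin

// Package users is an illustrative sketch of the lock-free lookup described
// above.
package users

import "os/user"

// nobody is looked up once at package init. On linux and darwin the nobody
// user always exists, so a failed lookup indicates a broken system.
var nobody user.User

func init() {
	u, err := user.Lookup("nobody")
	if err != nil {
		panic("unable to look up the nobody user: " + err.Error())
	}
	nobody = *u
}

// Nobody returns the nobody user by value, so callers get a copy they
// cannot use to modify the cached entry.
func Nobody() user.User {
	return nobody
}
```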
This PR adds a jobspec mutator to constrain jobs making use of checks
in the nomad service provider to nomad clients of at least v1.4.0.
Before, in a mixed client version cluster it was possible to submit
an NSD job making use of checks and for that job to land on an older,
incompatible client node.
Closes #14862
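A hedged sketch of the mutation using the api package types (the real mutator operates on the server-side structs and may differ in detail; only group-level services are inspected here for brevity):
```
package mutators // illustrative only

import "github.com/hashicorp/nomad/api"

// addNSDCheckConstraint sketches the mutation described above: if a group
// uses checks with the nomad service provider, add a constraint so the
// group only lands on clients new enough to understand NSD checks.
func addNSDCheckConstraint(job *api.Job) {
	constraint := &api.Constraint{
		LTarget: "${attr.nomad.version}",
		Operand: "semver",
		RTarget: ">= 1.4.0",
	}
	for _, tg := range job.TaskGroups {
		for _, svc := range tg.Services {
			if svc.Provider == "nomad" && len(svc.Checks) > 0 {
				tg.Constraints = append(tg.Constraints, constraint)
				break
			}
		}
	}
}
```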
This PR removes the assertion around when the 'task' field of
a check may be set. Starting in Nomad 1.4 we automatically set
the task field on all checks in support of the NSD checks feature.
This is causing validation problems elsewhere, e.g. when a group
service using the Consul provider sets 'task' it will fail
validation that worked previously.
The assertion that 'task' be left unset was only about making sure
job submitters weren't expecting some particular behavior, but in
practice it is causing bugs now that we need the task field for more
than it was originally added for.
We can simply update the docs, noting when a task field set by
job submitters actually has an effect.
When community members comment on long-closed issues, there are a number of
failure modes that make for a bad experience for them:
* Their comments are often missed entirely because notification settings make it
impractical for most developers to read comments on inactive issues.
* In our experience, the problem is only rarely a regression; because failures
are complex, totally different code paths can result in symptoms that initially
appear to be the same but turn out to be completely different under close
examination. This is particularly the case for issues fixed in very old
versions (sometimes 2 or more years old).
The Terraform core team uses a bot that locks issues after only 30 days. But
because we typically close issues automatically on PR merge and don't have
rolling releases, it'd frequently happen that unreleased fixes would have locked
comments, which isn't a good experience either. I've looked through the pace of
releases since Nomad 0.9.0 and the longest window between releases was 3
months. Set the window for the lock bot to 120 days to give us plenty of
breathing room so it doesn't feel like we're shutting down discussion
prematurely.
* docs: clarify nomad vars vs vault
I think we should make the difference in root key management between
Nomad and Vault clear in the concept docs. I didn't see anywhere else in
the docs we compared it.
I also s/secrets/variables everywhere except the first sentence since
the feature is intended to be more generic than secrets. Right now it's
more of a complement to Consul's kv than to Vault, due to root key handling
and feature set.
* Update website/content/docs/concepts/variables.mdx
Co-authored-by: Tim Gross <tgross@hashicorp.com>
During an upgrade to Nomad 1.4.0, if a server running 1.4.0 becomes the leader
before one of the 1.3.x servers does, the old servers will crash when the new
leader initializes the keyring and writes a raft entry.
Wait until all members are on a version that supports the keyring before
initializing it.
Metrics state is local to the server and needs to use time, which is normally
forbidden in the FSM code. We have a bypass for this rule for
`metrics.MeasureSince` but needed one for `metrics.MeasureSinceWithLabels` as well.
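For context, a rough sketch of why the bypass is needed, assuming the armon/go-metrics import path: the timing helpers capture the start time at the call site, so an FSM apply function that wants labeled latency metrics has to call `time.Now` directly.
```
package fsm // illustrative only

import (
	"time"

	"github.com/armon/go-metrics"
)

// applyExample shows the timing pattern that needs the exception: the start
// time is captured with time.Now at the call site and handed to the labeled
// metrics helper.
func applyExample(namespace string) {
	defer metrics.MeasureSinceWithLabels(
		[]string{"nomad", "fsm", "apply_example"},
		time.Now(),
		[]metrics.Label{{Name: "namespace", Value: namespace}},
	)
	// ... apply the raft log entry ...
}
```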
In #14742 we introduced a cached lookup of the `nobody` user, which is only ever
called on Unixish machines. But the initial caching was being done in an `init`
block, which meant it was being run on Windows as well. This prevents the Nomad
agent from starting on Windows.
An alternative fix here would be to have a separate `init` block for Windows and
Unix, but this potentially masks incorrect behavior if we accidentally added a
call to the `Nobody()` method on Windows later. This way we're forced to handle
the error in the caller.
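One plausible shape for that fix (a sketch under those assumptions, not the exact code): drop the `init` block, cache the lookup lazily, and return the error so each caller handles it.
```
package users // hypothetical package name

import (
	"os/user"
	"sync"
)

var (
	nobodyOnce sync.Once
	nobody     *user.User
	nobodyErr  error
)

// Nobody returns the nobody user, caching the lookup after the first call.
// Nothing runs at package load, so importing this package is safe on Windows.
func Nobody() (user.User, error) {
	nobodyOnce.Do(func() {
		nobody, nobodyErr = user.Lookup("nobody")
	})
	if nobodyErr != nil {
		return user.User{}, nobodyErr
	}
	return *nobody, nil
}
```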
The `hc-install` tool we're using needed a patch for a specific bug, but that's
since been merged. We definitely want to switch to using a standard release from
that project once one is shipped with the CLI, but pinning to HEAD should hold
us over for now.