ACL tokens are granted permissions either by direct policy links
or via ACL role links. Callers should therefore be able to read
policies assigned to the caller token either directly or indirectly
via ACL role links.
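A minimal Go sketch of that resolution, using simplified stand-in types
rather than Nomad's internal structs:

```go
package main

import "fmt"

// ACLRole and ACLToken are simplified stand-ins for illustration; the real
// structs carry more fields (IDs, hashes, modify indexes, and so on).
type ACLRole struct {
	Name     string
	Policies []string
}

type ACLToken struct {
	Policies []string   // direct policy links
	Roles    []*ACLRole // ACL role links
}

// policiesForToken gathers every policy name a token grants, whether linked
// directly or through an ACL role.
func policiesForToken(t *ACLToken) []string {
	seen := map[string]struct{}{}
	var out []string
	add := func(name string) {
		if _, ok := seen[name]; !ok {
			seen[name] = struct{}{}
			out = append(out, name)
		}
	}
	for _, p := range t.Policies {
		add(p)
	}
	for _, r := range t.Roles {
		for _, p := range r.Policies {
			add(p)
		}
	}
	return out
}

func main() {
	tok := &ACLToken{
		Policies: []string{"readonly"},
		Roles:    []*ACLRole{{Name: "ops", Policies: []string{"node-write"}}},
	}
	fmt.Println(policiesForToken(tok)) // [readonly node-write]
}
```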
* refact: upgrade Promise.then to async/await
* naive solution (#14800)
* refact: use id instead of model
* chore: add changelog entry
* refact: add conditional safety around alloc
This change adds ACL role creation and deletion to the event
stream. It is exposed as a single topic with two types; the filter
is primarily the role ID but also includes the role name.
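A hedged sketch of consuming the new topic through the Go API client;
the topic name and filter value are assumptions based on the description
above, so check the event stream docs for the exact strings:

```go
package main

import (
	"context"
	"fmt"

	"github.com/hashicorp/nomad/api"
)

func main() {
	client, err := api.NewClient(api.DefaultConfig())
	if err != nil {
		panic(err)
	}

	// Subscribe to ACL role events only, filtered by role ID (or name).
	topics := map[api.Topic][]string{
		api.Topic("ACLRole"): {"my-role-id"},
	}

	events, err := client.EventStream().Stream(context.Background(), topics, 0, &api.QueryOptions{})
	if err != nil {
		panic(err)
	}

	for evs := range events {
		if evs.Err != nil {
			panic(evs.Err)
		}
		for _, ev := range evs.Events {
			fmt.Printf("%s %s index=%d\n", ev.Topic, ev.Type, ev.Index)
		}
	}
}
```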
While conducting this work it was also discovered that the event
stream has its own ACL resolution logic. This did not account for
ACL tokens which included role links, or tokens with expiry times.
ACL role links are now resolved to their policies and tokens are
checked for expiry correctly.
The client ACL cache was not accounting for tokens which included
ACL role links. This change modifies the behaviour to resolve role
links to policies. It will also now store ACL roles within the
cache for quick lookup. The cache TTL is configurable in the same
manner as policies or tokens.
Another small fix takes into account the ACL token expiry time. This
was not previously checked, which meant tokens with an expiry could
be used past their expiry time, until they were GC'd.
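A rough sketch of the cache shape and expiry check being described
(names and fields are illustrative, not Nomad's actual client code):

```go
package client

import "time"

// ACLRole and cachedACLRole are illustrative stand-ins.
type ACLRole struct {
	ID       string
	Policies []string
}

type cachedACLRole struct {
	role     *ACLRole
	cachedAt time.Time
}

type roleCache struct {
	ttl     time.Duration // configurable, analogous to the policy and token TTLs
	entries map[string]cachedACLRole
}

// get returns a cached role only while it is within its TTL; on a miss or a
// stale entry the caller falls back to resolving the role via the servers.
func (c *roleCache) get(id string) (*ACLRole, bool) {
	e, ok := c.entries[id]
	if !ok || time.Since(e.cachedAt) > c.ttl {
		return nil, false
	}
	return e.role, true
}

// tokenUsable rejects a token whose expiration time has passed, even if the
// server-side garbage collector has not removed it yet.
func tokenUsable(expirationTime *time.Time, now time.Time) bool {
	return expirationTime == nil || now.Before(*expirationTime)
}
```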
Rename `Internals` section to `Concepts` to match core docs structure
and expand on how policies are evaluated.
Also include missing documentation for check grouping and fix examples
to use the new feature.
* Adds searching and filtering for nodes on topology view
* Lintfix and changelog
* Acceptance tests for topology search and filter
* Search terms also apply to class and dc on topo page
* Initialize queryparam values so as to not break history state
Move the Autoscaler agent configuration `source` to the `policy` page
since they are very closely related.
Also update all headers in this section so they follow the proper `h1 >
h2 > h3 > ...` hierarchy.
* consul: register checks along with service on initial registration
This PR updates Nomad's Consul service client to include checks in
an initial service registration, so that the checks associated with
the service are registered "atomically" with the service. Before, we
would only register the checks after the service registration, which
caused problems where the service could be deemed healthy even if one
or more checks were unhealthy - especially problematic when
SuccessBeforePassing is configured.
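A minimal sketch of that shape using the Consul API client directly;
the service and check details below are illustrative:

```go
package main

import "github.com/hashicorp/consul/api"

func main() {
	client, err := api.NewClient(api.DefaultConfig())
	if err != nil {
		panic(err)
	}

	// Register the service and its checks in one call, rather than
	// registering checks in a follow-up request.
	reg := &api.AgentServiceRegistration{
		ID:   "web-1",
		Name: "web",
		Port: 8080,
		Checks: api.AgentServiceChecks{
			{
				Name:                 "web-http",
				HTTP:                 "http://127.0.0.1:8080/health",
				Interval:             "10s",
				Timeout:              "2s",
				SuccessBeforePassing: 3,
			},
		},
	}

	if err := client.Agent().ServiceRegister(reg); err != nil {
		panic(err)
	}
}
```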
Fixes #3935
* cr: followup to fix cause of extra consul logging
* cr: fix another bug
* cr: fixup changelog
My make knowledge is very limited, so if there's a better way to do
this please let me know! This seems to work and lets me cut one-off
builds easily.
This PR updates Nomad's Consul service client to do map comparisons
using maps.Equal instead of reflect.DeepEqual. The bug fix is in how
DeepEqual treats a nil map or slice differently from an empty one,
when they should actually be treated the same.
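A small standalone demonstration of the difference (maps.Equal is in the
standard maps package as of Go 1.21; older code used golang.org/x/exp/maps):

```go
package main

import (
	"fmt"
	"maps"
	"reflect"
)

func main() {
	var nilMap map[string]string    // nil
	emptyMap := map[string]string{} // non-nil, zero entries

	// reflect.DeepEqual distinguishes nil from empty, which can report a
	// spurious difference when comparing service definitions.
	fmt.Println(reflect.DeepEqual(nilMap, emptyMap)) // false

	// maps.Equal only compares keys and values, so nil and empty agree.
	fmt.Println(maps.Equal(nilMap, emptyMap)) // true
}
```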
* One-time tokens are not replicated between regions, so we don't want to enforce
the version check across all of serf, just members in the same region.
* Scheduler: Disconnected client handling is specific to a single region, so we
don't want to enforce the version check across all of serf, just members in
the same region.
* Variables: enforce version check in Apply RPC
* Cleans up a bunch of legacy checks.
This changeset is specific to 1.4.x and the changes for previous versions of
Nomad will be manually backported in a separate PR.
In #14821 we fixed a panic that can happen if a leadership election happens in
the middle of an upgrade. That fix checks that all servers are at the minimum
version before initializing the keyring (which blocks evaluation processing
during the upgrade). But the check we implemented is over the serf membership,
which includes servers in any federated regions, which don't necessarily have
the same upgrade cycle.
Filter the version check by the leader's region.
Also bump up log levels of major keyring operations
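A sketch of the filtering idea; Nomad's real helper differs in name and
signature, and this ignores server status for brevity:

```go
package main

import (
	"fmt"

	version "github.com/hashicorp/go-version"
	"github.com/hashicorp/serf/serf"
)

// regionMeetsMinimumVersion counts only servers whose serf "region" tag
// matches the given region toward the minimum-version check, so federated
// regions on their own upgrade cycle are ignored.
func regionMeetsMinimumVersion(members []serf.Member, region string, min *version.Version) bool {
	for _, m := range members {
		if m.Tags["region"] != region {
			continue
		}
		build, err := version.NewVersion(m.Tags["build"])
		if err != nil || build.LessThan(min) {
			return false
		}
	}
	return true
}

func main() {
	min := version.Must(version.NewVersion("1.4.0"))
	members := []serf.Member{
		{Tags: map[string]string{"region": "us", "build": "1.4.0"}},
		{Tags: map[string]string{"region": "eu", "build": "1.3.5"}}, // other region: ignored
	}
	fmt.Println(regionMeetsMinimumVersion(members, "us", min)) // true
}
```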
TestOverlap keeps failing in the nightly e2e test with unhelpful output like:
```
Failed
=== RUN TestOverlap
overlap_test.go:92: Followup job overlap93ee1d2b blocked. Sleeping for the rest of overlap48c26c39's shutdown_delay (9.2/10s)
overlap_test.go:105: 1500/2000 retries reached for github.com/hashicorp/nomad/e2e/overlap.TestOverlap (err=timed out before an allocation was found for overlap93ee1d2b)
overlap_test.go:105: timeout: timed out before an allocation was found for overlap93ee1d2b
--- FAIL: TestOverlap (38.96s)
```
I have not been able to replicate it in my own e2e cluster, so I added
the EvalDump helper to print detailed eval information like:
```
=== RUN TestOverlap
1/1 Job overlap7b0e90ec Eval c38c9919-a4f0-5baf-45f7-0702383c682a
Type: service
TriggeredBy: job-register
Deployment:
Status: pending ()
NextEval:
PrevEval:
BlockedEval:
-- No placement failures --
QueuedAllocs:
SnapshotIdx: 0
CreateIndex: 96
ModifyIndex: 96
...
```
Hopefully helpful when debugging other tests as well!
* test: simplify overlap job placement logic
Trying to fix #14806
Both the previous approach and this one worked on e2e clusters I
spun up.
* simplify code flow
Nomad runs one logmon process and also one docker_logger process for each
running allocation. A naive look at memory usage shows 10-30 MB of RSS, but a
closer look shows that most of this memory (ex. all but ~2MB for logmon) is
shared (`Shared_Clean` in Linux pmap).
But a heap dump of docker_logger shows that it currently has an extra ~2500 KiB
of heap (anonymously-mapped unshared memory) used for init blocks coming from
the agent code (ex. mostly regexes from go-version, structs, and the Consul
SDK). The packages for running logmon, docker_logger, and executor have an init
block that parses `os.Args` to drop into their own logic, which prevents them
from loading all the rest of the agent code and saves on memory, so this was
unexpected.
It looks like we accidentally reordered the imports in main to undo some of the
work originally done in 404d2d4c98f1df930be1ae9852fe6e6ae8c1517e. This changeset
restores the ordering. A follow-up heap dump shows this saves ~2MB of unshared
RSS per docker_logger process.
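For context, a single-file sketch of the early-exit init pattern those entry
packages rely on (Nomad splits this across dedicated packages, which is why
their position in main's import list matters):

```go
package main

import (
	"fmt"
	"os"
)

// init mimics the entry packages' trick: if the binary was re-invoked as a
// subprocess helper (logmon, docker_logger, executor), run only that helper
// and exit before the rest of the agent's package-level state is initialized.
func init() {
	if len(os.Args) > 1 && os.Args[1] == "logmon" {
		fmt.Fprintln(os.Stderr, "running as the logmon subprocess")
		// ... start the log shipping logic here ...
		os.Exit(0)
	}
}

func main() {
	fmt.Println("running as the full agent")
}
```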
* helpers: lockfree lookup of nobody user on linux and darwin
This PR continues the nobody user lookup saga, by making the nobody
user lookup lock-free on linux and darwin.
By doing the lookup in an init block this originally broke on Windows,
where we must avoid doing the lookup at all. We can get around that
breakage by only doing the lookup on linux/darwin where the nobody
user is going to exist.
Also return the nobody user by value so that a copy is created that
cannot be modified by callers of Nobody().
* helper: move nobody code into unix file
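A sketch of the lock-free shape described above; helper and variable names
are illustrative rather than Nomad's exact ones:

```go
//go:build linux || darwin

package users

import "os/user"

// nobody is resolved exactly once at package init, so later calls never take
// the lock that guarded repeated lookups.
var nobody user.User

func init() {
	if u, err := user.Lookup("nobody"); err == nil {
		nobody = *u
	}
}

// Nobody returns the nobody user by value; handing out a copy keeps the
// cached entry immutable from the caller's perspective.
func Nobody() user.User {
	return nobody
}
```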
This PR adds a jobspec mutator to constrain jobs making use of checks
in the nomad service provider to nomad clients of at least v1.4.0.
Before, in a mixed client version cluster it was possible to submit
an NSD job making use of checks and for that job to land on an older,
incompatible client node.
Closes #14862
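A rough sketch of the mutator's effect, expressed against the public api
types; the real mutator runs server-side against Nomad's internal structs
during job registration, while the attribute and operator strings below are
standard Nomad constraint syntax:

```go
package main

import (
	"fmt"

	"github.com/hashicorp/nomad/api"
)

// usesNomadChecks reports whether any service uses the Nomad provider with
// at least one check defined.
func usesNomadChecks(services []*api.Service) bool {
	for _, svc := range services {
		if svc.Provider == "nomad" && len(svc.Checks) > 0 {
			return true
		}
	}
	return false
}

// constrainNativeChecks appends a client-version constraint to every task
// group that uses NSD checks, at either the group or task level.
func constrainNativeChecks(job *api.Job) {
	for _, tg := range job.TaskGroups {
		needs := usesNomadChecks(tg.Services)
		for _, task := range tg.Tasks {
			needs = needs || usesNomadChecks(task.Services)
		}
		if needs {
			tg.Constraints = append(tg.Constraints,
				api.NewConstraint("${attr.nomad.version}", "semver", ">= 1.4.0"))
		}
	}
}

func main() {
	job := api.NewServiceJob("web", "web", "global", 50)
	constrainNativeChecks(job)
	fmt.Println(len(job.TaskGroups)) // 0: this example job has no groups yet
}
```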
This PR removes the assertion around when the 'task' field of
a check may be set. Starting in Nomad 1.4 we automatically set
the task field on all checks in support of the NSD checks feature.
This is causing validation problems elsewhere, e.g. when a group
service using the Consul provider sets 'task', it now fails
validation that previously passed.
The assertion that 'task' be left unset was only about making sure
job submitters weren't expecting some particular behavior, but in
practice it is causing bugs now that we need the task field for more
than it was originally added for.
We can simply update the docs, noting when a task field set by
job submitters actually has an effect.