Clients periodically fingerprint Vault and Consul to ensure the server has
updated attributes in the client's fingerprint. If the client can't reach
Vault/Consul, the fingerprinter clears the attributes and requires a node
update. Although this seems like correct behavior so that we can detect
intentional removal of Vault/Consul access, it has two serious failure modes:
(1) If a local Consul agent is restarted to pick up configuration changes and the
client happens to fingerprint at that moment, the client will update its
fingerprint, resulting in new evaluations for all of its jobs and for all the
system jobs in the cluster.
(2) If a client loses Vault connectivity, the same thing happens. But the
consequences are much worse in the Vault case because Vault is not run as a
local agent, so Vault connectivity failures are highly correlated across the
entire cluster. A 15-second Vault outage will cause a new `node-update`
evaluation for every system job in the cluster times the number of nodes, plus
one `node-update` evaluation for every non-system job on each node. On large
clusters of thousands of nodes, we've seen this create a large backlog of evaluations.
This changeset updates the fingerprinting behavior to keep the last fingerprint
if Consul or Vault queries fail. This prevents a storm of evaluations at the
cost of requiring a client restart if Consul or Vault is intentionally removed
from the client.
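A minimal sketch of the idea, using simplified stand-in types rather than the
actual client fingerprinter code: on a failed query, the previous attributes are
re-reported instead of being cleared.

```go
package fingerprint

import "log"

// Simplified stand-ins for the real client fingerprint types.
type Response struct {
	Attributes map[string]string
}

type ConsulFingerprint struct {
	lastAttributes map[string]string
	query          func() (map[string]string, error)
}

// Fingerprint keeps the last known attributes when the Consul (or Vault)
// query fails, rather than clearing them and forcing a node update that
// fans out into new evaluations.
func (f *ConsulFingerprint) Fingerprint(resp *Response) error {
	attrs, err := f.query()
	if err != nil {
		log.Printf("failed to query Consul, keeping previous fingerprint: %v", err)
		resp.Attributes = f.lastAttributes
		return nil
	}
	f.lastAttributes = attrs
	resp.Attributes = attrs
	return nil
}
```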
In Nomad 1.2.6 we shipped `eval list`, which accepts a `-json` flag, and
deprecated the usage of `eval status` without an evaluation ID with an upgrade
note that it would be removed in Nomad 1.4.0. This changeset completes that
work.
* scheduler: stopped-yet-running allocs are still running
* scheduler: test new stopped-but-running logic
* test: assert nonoverlapping alloc behavior
Also add a simpler Wait test helper to improve reported line numbers and save a
few lines of code.
* docs: tried my best to describe #10446
it's not concise... feedback welcome
* scheduler: fix test that allowed overlapping allocs
* devices: only free devices when ClientStatus is terminal (see the sketch after this list)
* test: output nicer failure message if err==nil
Co-authored-by: Mahmood Ali <mahmood@hashicorp.com>
Co-authored-by: Michael Schurter <mschurter@hashicorp.com>
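A hedged sketch of the device change, using simplified stand-in structs rather
than the real scheduler code: devices are only released once the allocation is
terminal on the client, so a stopped-yet-running alloc keeps holding them.

```go
package scheduler

// Simplified stand-in for the real allocation struct.
type Alloc struct {
	DesiredStatus string // e.g. "stop"
	ClientStatus  string // e.g. "running", "complete", "failed"
}

// clientTerminal mirrors the idea behind checking ClientStatus rather than
// DesiredStatus: the client has actually finished running the tasks.
func clientTerminal(a *Alloc) bool {
	switch a.ClientStatus {
	case "complete", "failed", "lost":
		return true
	}
	return false
}

// freeDevices releases device claims only for allocations that are client
// terminal; an alloc that is desired-stopped but still running keeps its
// devices, so a replacement cannot be placed on top of them.
func freeDevices(allocs []*Alloc, release func(*Alloc)) {
	for _, a := range allocs {
		if clientTerminal(a) {
			release(a)
		}
	}
}
```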
* Fixing heading order, adding text for links
* Apply suggestions from code review
Co-authored-by: Tim Gross <tgross@hashicorp.com>
* Applying more suggestions from code review
Co-authored-by: Tim Gross <tgross@hashicorp.com>
A Nomad user reported problems with CSI volumes associated with failed
allocations, where the Nomad server did not send a controller unpublish RPC.
The controller unpublish is skipped if other non-terminal allocations on the
same node claim the volume. The check had a bug: the allocation belonging to the
claim being freed was itself incorrectly included in the check. During a normal
allocation stop (a job stop or a new version of the job), the allocation is
already terminal. But allocations that fail are not yet marked terminal at the
point in time when the client sends the unpublish RPC to the server.
For CSI plugins that support controller attach/detach, this means that the
controller will not be able to detach the volume from the allocation's host and
the replacement claim will fail until a GC is run. This changeset fixes the
conditional so that the claim's own allocation is not included, and makes the
logic easier to read. Include a test case covering this path.
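In spirit, the corrected check looks like the sketch below (hypothetical names
and simplified structs; the real unpublish workflow is more involved).

```go
package csi

// Simplified stand-in for the allocations that hold claims on a volume.
type ClaimAlloc struct {
	ID             string
	NodeID         string
	ClientTerminal bool
}

// hasOtherNodeClaims reports whether any *other* non-terminal allocation on
// the same node still claims the volume. The bug was that claimAllocID was
// not excluded, so a failed (not-yet-terminal) alloc could block its own
// controller unpublish until a GC ran.
func hasOtherNodeClaims(claims []*ClaimAlloc, nodeID, claimAllocID string) bool {
	for _, a := range claims {
		if a.ID == claimAllocID {
			continue // never count the claim we are currently freeing
		}
		if a.NodeID == nodeID && !a.ClientTerminal {
			return true
		}
	}
	return false
}
```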
Also includes two minor extra bugfixes:
* Entities we get from the state store should always be copied before
altering. Ensure that we copy the volume in the top-level unpublish workflow
before handing off to the steps (see the sketch after this list).
* The list stub object for volumes in `nomad/structs` did not match the stub
object in `api`. The `api` package also did not include the current
readers/writers fields that are expected by the UI. True up the two objects and
add the previously undocumented fields to the docs.
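The copy rule from the first bullet, as a minimal sketch with a hypothetical
volume type: state-store objects are shared with other readers and must never
be mutated in place.

```go
package state

// Hypothetical, trimmed-down volume shape.
type Volume struct {
	ID      string
	Writers []string
}

// Copy returns a private clone that is safe to mutate.
func (v *Volume) Copy() *Volume {
	nv := *v
	nv.Writers = append([]string(nil), v.Writers...)
	return &nv
}

// unpublish mutates only its own copy, never the object handed back by the
// state store snapshot.
func unpublish(stateVol *Volume) *Volume {
	vol := stateVol.Copy()
	// ... free the claim, update Writers, etc., on vol ...
	return vol
}
```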
When configuring Consul Service Mesh, it's sometimes necessary to
provide dynamic values that are only known to Nomad at runtime. By
interpolating configuration values (in addition to configuration keys),
users are able to pass these dynamic values to Consul from their Nomad
jobs.
These options are mutually exclusive, but since `-hcl2-strict` defaults
to `true`, users had to explicitly set it to `false` when using `-hcl1`.
Also return `255` when a job plan fails validation, as this is the expected
code in this situation.
Nomad is generally compliant with the CSI specification for Container
Orchestrators (CO), except for unimplemented features. However, some storage
vendors have built CSI plugins that are not compliant with the specification or
which expect that they're only deployed on Kubernetes. Nomad cannot vouch for
the compatibility of any particular plugin, so clarify this in the docs.
Co-authored-by: Derek Strickland <1111455+DerekStrickland@users.noreply.github.com>
The ACL command docs are now found within a sub-dir like the
operator command docs. Updates to the ACL token commands to
accommodate token expiry have also been added.
The ACL API docs are now found within a sub-dir like the operator
API docs. The ACL docs now include the ACL roles endpoint as well
as updated ACL token endpoints for token expiration.
The configuration section is also updated to accommodate the new
ACL and server parameters for the new ACL features.
Update the on-disk format for the root key so that it's wrapped with a unique
per-key/per-server key encryption key. This is a bit of security theatre for the
current implementation, but it uses `go-kms-wrapping` as the interface for
wrapping the key. This provides a shim for future support of external KMS such
as cloud provider APIs or Vault transit encryption.
* Removes the JSON serialization extension we had on the `RootKey` struct; this
struct is now only used for key replication and not for disk serialization, so
we don't need this helper.
* Creates a helper for generating cryptographically random slices of bytes that
properly accounts for short reads from the source (see the sketch after this list).
* No observable functional changes outside of the on-disk format, so there are
no test updates.
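The short-read-safe pattern that the helper follows looks roughly like this
(names are illustrative):

```go
package crypter

import (
	"crypto/rand"
	"io"
)

// randomBytes returns n cryptographically random bytes. io.ReadFull keeps
// reading until the buffer is full, so a short read from the underlying
// source can never produce a partially-filled key.
func randomBytes(n int) ([]byte, error) {
	buf := make([]byte, n)
	if _, err := io.ReadFull(rand.Reader, buf); err != nil {
		return nil, err
	}
	return buf, nil
}
```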
* allocrunner: handle lifecycle when all tasks die
When all tasks die the Coordinator must transition to its terminal
state, coordinatorStatePoststop, to unblock poststop tasks. Since this
could happen at any time (for example, a prestart task dies), all states
must be able to transition to this terminal state.
* allocrunner: implement different alloc restarts
Add a new alloc restart mode where all tasks are restarted, even if they
have already exited. Also unifies the alloc restart logic to use the
implementation that restarts tasks concurrently and ignores
ErrTaskNotRunning errors since those are expected when restarting the
allocation.
* allocrunner: allow tasks to run again
Prevent the task runner Run() method from exiting to allow a dead task
to run again. When the task runner is signaled to restart, the function
will jump back to the MAIN loop and run it again.
The task runner determines if a task needs to run again based on two new
task events that were added to differentiate between a request to
restart a specific task, the tasks that are currently running, or all
tasks, including those that have already run.
* api/cli: add support for all tasks alloc restart
Implement the new -all-tasks alloc restart CLI flag and its API
counterpart, AllTasks. The client endpoint calls the appropriate restart
method from the allocrunner depending on the restart parameters used.
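The dispatch is roughly the following sketch; the request and allocrunner
shapes here are hypothetical stand-ins for the real API and client types.

```go
package client

// Hypothetical request shape for an alloc restart.
type AllocRestartRequest struct {
	TaskName string // restart a single task
	AllTasks bool   // restart every task, even ones that already exited
}

// Hypothetical view of the allocrunner restart surface.
type allocRunner interface {
	Restart(taskName string) error // running tasks only
	RestartAll() error             // all tasks, including dead ones
}

// restartAlloc picks the restart behavior based on the request parameters,
// mirroring how the client endpoint dispatches to the allocrunner.
func restartAlloc(ar allocRunner, req *AllocRestartRequest) error {
	if req.AllTasks {
		return ar.RestartAll()
	}
	return ar.Restart(req.TaskName)
}
```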
* test: fix tasklifecycle Coordinator test
* allocrunner: kill taskrunners if all tasks are dead
When all non-poststop tasks are dead we need to kill the taskrunners so
we don't leak their goroutines, which are blocked in the alloc restart
loop. This also ensures the allocrunner exits on its own.
* taskrunner: fix tests that waited on WaitCh
Now that "dead" tasks may run again, the taskrunner Run() method will
not return when the task finishes running, so tests must wait for the
task state to be "dead" instead of using the WaitCh, since it won't be
closed until the taskrunner is killed.
* tests: add tests for all tasks alloc restart
* changelog: add entry for #14127
* taskrunner: fix restore logic.
The first implementation of the task runner restore process relied on
server data (`tr.Alloc().TerminalStatus()`) which may not be available
to the client at the time of restore.
It also took the incorrect code path. When restoring a dead task, the
driver handle always needs to be cleared cleanly using `clearDriverHandle`;
otherwise, after exiting the MAIN loop, the task may be killed by
`tr.handleKill`.
The fix is to store the state of the Run() loop in the task runner local
client state: if the task runner ever exits this loop cleanly (not with
a shutdown) it will never be able to run again. So if the Run() loop
starts with this local state flag set, it must exit early.
This local state flag is also checked on task restart requests. If
the task is "dead" and its Run() loop is not active it will never be
able to run again.
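Roughly, the flag works like the sketch below; the names and the local-state
shape here are hypothetical.

```go
package taskrunner

// Hypothetical slice of the task runner's local client state.
type LocalState struct {
	// RunComplete is set when the Run() loop exits cleanly (not via a
	// shutdown); once set, the task can never run again.
	RunComplete bool
}

type stateDB interface {
	PutLocalState(ls *LocalState) error
	GetLocalState() (*LocalState, error)
}

// markRunComplete persists the flag before Run() returns, so a later
// restore can rely on client-local data instead of server-supplied alloc
// status and can clear the driver handle cleanly.
func markRunComplete(db stateDB) error {
	return db.PutLocalState(&LocalState{RunComplete: true})
}

// shouldSkipMainLoop is consulted on restore and on restart requests: a
// task whose Run() loop already finished must not run again.
func shouldSkipMainLoop(db stateDB) bool {
	ls, err := db.GetLocalState()
	return err == nil && ls != nil && ls.RunComplete
}
```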
* address code review requests
* apply more code review changes
* taskrunner: add different Restart modes
Using the task event to differentiate between the allocrunner restart
methods proved confusing and made it hard for developers to understand how
it all worked.
So instead of relying on the event type, this commit separates the logic
of restarting a taskRunner into two methods:
- `Restart` retains the current behaviour and will only restart the task
if it's currently running.
- `ForceRestart` is the new method where a `dead` task is allowed to
restart if its `Run()` method is still active. Callers will need to
restart the allocRunner taskCoordinator to make sure it will allow the
task to run again.
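The split corresponds roughly to this sketch; the signatures are simplified
and the fields are stand-ins for the real task runner state.

```go
package taskrunner

import "errors"

var errTaskNotRunning = errors.New("task not running")

// Simplified stand-in for the task runner.
type TaskRunner struct {
	running   bool          // the task process is currently running
	runActive bool          // the Run() loop has not exited for good
	restartCh chan struct{} // signals the MAIN loop to run the task again
}

// Restart keeps the old behaviour: only a currently running task restarts.
func (tr *TaskRunner) Restart() error {
	if !tr.running {
		return errTaskNotRunning
	}
	tr.restartCh <- struct{}{}
	return nil
}

// ForceRestart also restarts a dead task, as long as its Run() loop is
// still active; callers must restart the alloc-level task coordinator too
// so it allows the task to run again.
func (tr *TaskRunner) ForceRestart() error {
	if !tr.runActive {
		return errTaskNotRunning
	}
	tr.restartCh <- struct{}{}
	return nil
}
```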
* minor fixes
This PR documents a change made in the Enterprise version of Nomad that addresses the following issue:
When a user tries to filter audit logs, they do so with a stanza that looks like the following:
```hcl
audit {
  enabled = true

  filter "remove deletes" {
    type       = "HTTPEvent"
    endpoints  = ["*"]
    stages     = ["OperationComplete"]
    operations = ["DELETE"]
  }
}
```
When specifying both an "endpoint" and a "stage", events matching both the "endpoint" AND the "stage" will be filtered.
When specifying both an "endpoint" and an "operation", events matching both the "endpoint" AND the "operation" will be filtered.
When specifying both a "stage" and an "operation", events matching either the "stage" OR the "operation" will be filtered.
The "OR" logic between stages and operations is unexpected and doesn't allow customers to get specific about which events they want to filter. For instance, the following use case is impossible to achieve: "I want to filter out all OperationReceived events that have the DELETE verb".
The original design for workload identities and ACLs allows for operators to
extend the automatic capabilities of a workload by using a specially-named
policy. This has proven to be potentially unsafe because of naming collisions, so
instead we'll allow operators to explicitly attach a policy to a workload
identity.
This changeset adds workload identity fields to ACL policy objects and threads
that all the way down to the command line. It also adds a new secondary index to the
ACL policy table on namespace and job so that claim resolution can efficiently
query for related policies.
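The new index is in the spirit of a go-memdb compound index like the one below;
the field names are assumptions, not the actual schema.

```go
package state

import memdb "github.com/hashicorp/go-memdb"

// Sketch of a namespace+job secondary index on the ACL policy table so that
// claim resolution can find the policies attached to a workload without a
// full table scan. Field names are assumptions.
var aclPolicyJobIndex = &memdb.IndexSchema{
	Name:         "job",
	AllowMissing: true, // most policies are not attached to a job
	Unique:       false,
	Indexer: &memdb.CompoundIndex{
		Indexes: []memdb.Indexer{
			&memdb.StringFieldIndex{Field: "JobACLNamespace"},
			&memdb.StringFieldIndex{Field: "JobACLJobID"},
		},
	},
}
```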
When a Nomad agent starts and loads jobs that already existed in the
cluster, the default template uid and gid were being set to 0, since this
is the zero value for int. This caused these jobs to fail in
environments where it was not possible to use 0, such as in Windows
clients.
In order to differentiate between an explicit 0 and a template where
these properties were not set, we need to use a pointer.
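In miniature, the pointer change looks like the sketch below; the field names
follow the idea rather than the full struct, and the -1 sentinel is only
illustrative for "leave ownership alone".

```go
package structs

// Template shows the change in miniature: nil now means "not set", so a job
// restored from disk without Uid/Gid no longer silently becomes uid/gid 0.
type Template struct {
	Uid *int
	Gid *int
}

// uidOrDefault distinguishes an explicit 0 (root, where supported) from an
// unset value; -1 is an illustrative sentinel for "don't change ownership".
func uidOrDefault(t *Template) int {
	if t.Uid == nil {
		return -1
	}
	return *t.Uid
}
```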
This PR updates the checks documentation to mention support for checks
when using the Nomad service provider. Nomad service discovery (NSD) has
some limitations compared to Consul, and the affected configuration options
are now noted as being Consul-only.
* Initialized keyboard service
Neat but funky: dynamic subnav traversal
👻
generalized traverseSubnav method
Shift as special modifier key
Nice little demo panel
Keyboard shortcuts keycard
Some animation styles on keyboard shortcuts
Handle situations where a link is deeply nested from its parent menu item
Keyboard service cleanup
helper-based initializer and teardown for new contextual commands
Keyboard shortcuts modal component added and demo-ghost removed
Removed j and k from subnav traversal
Register and unregister methods for subnav plus new subnavs for volumes and volume
register main nav method
Generalizing the register nav method
12762 table keynav (#12975)
* Experimental feature: shortcut visual hints
* Long way around to a custom modifier for keyboard shortcuts
* dynamic table and list iterative shortcuts
* Progress with regular old tether
* Delogging
* Table Keynav tether fix, server and client navs, and fix to shiftless on modified arrow keys
Go to Optimize keyboard link and storage key changed to g r
parameterized jobs keyboard nav
Dynamic numeric keynav for multiple tables (#13482)
* Multiple tables init
* URL-bind enumerable keyboard commands and add to more taskRow and allocationRows
* Type safety and lint fixes
* Consolidated push to keyCommands
* Default value when removing keyCommands
* Remove the URL-based removal method and perform a recompute on any add
Get tests passing in Keynav: remove math helpers and a few other defensive moves (#13761)
* Remove ember math helpers
* Test fixes for jobparts/body
* Kill an unneeded integration helper test
* delog
* Trying if disabling percy lets this finish
* Okay so its not percy; try parallelism in circle
* Percyless yet again
* Trying a different angle to not have percy
* Upgrade percy to 1.6.1
[ui] Keyboard nav: "u" key to go up a level (#13754)
* U to go up a level
* Mislabelled my conditional
* Custom lint ignore rule
* Custom lint ignore rule, this time with commas
* Since we're getting rid of ember math helpers elsewhere, do the math ourselves here
Replace ArrowLeft etc. with an ascii arrow (#13776)
* Replace ArrowLeft etc. with an ascii arrow
* non-mutative helper cleanup
Keyboard Nav: let users rebind their shortcuts (#13781)
* click-outside and shortcuts enabled/disabled toggle
* Trap focus when modal open
* Enabled/disabled saved to localStorage
* Autofocus edit button on variable index
* Modal overflow styles
* Functional rebind
* Saving rebinds to localStorage for all majors
* Started on defaultCommandBindings
* Modal header style and cancel rebind on escape
* keyboardable keybindings w buttons instead of spans
* recording and defaultvalues
* Enter short-circuits rebind
* Only some commands are rebindable, and dont show dupes
* No unused get import
* More visually distinct header on modal
* Disallowed keys for rebind, showing buffer as you type, and moving dedupe to modal logic
willDestroy hook to prevent tests from doubling/tripling up addEventListener on kb events
remove unused tests
Keyboard Navigation acceptance tests (#13893)
* Acceptance tests for keyboard modal
* a11y audit fix and localStorage clear
* Bind/rebind/localStorage tests
* Keyboard tests for dynamic nav and tables
* Rebinder and assert expectation
* Second percy snapshot showing hints no longer relevant
Fix a weird issue where LinkTos with query props, specifically from the task-groups page, would fail to route or would hit undefined.shouldSuperCede errors
Adds the concept of exclusivity to a keycommand, removing peers that also share its label
Lintfix
Changelog and PR feedback
Changelog and PR feedback
Fix to rebinding in firefox by blurring the now-disabled button on rebind (#14053)
* Secure Variables shortcuts removed
* Variable index route autofocus removed
* Updated changelog entry
* Updated changelog entry
* Keynav docs (#14148)
* Section added to the API Docs UI page
* Added a note about disabling
* Prev and Next order
* Remove dev log and unneeded comments
The List RPCs only checked the ACL for the Prefix argument of the request. Add
an ACL filter to the paginator for the List RPC.
Extend test coverage of ACLs in the List RPC and in the `acl` package, and add a
"deny" capability so that operators can deny specific paths or prefixes below an
allowed path.
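A hedged sketch of what an ACL filter in the paginator looks like, with
hypothetical names for the checker and the list stub:

```go
package variables

// Hypothetical list-stub and ACL-checker shapes.
type VariableMeta struct {
	Namespace string
	Path      string
}

type aclChecker interface {
	// AllowVariableList is expected to honor "deny" capabilities on paths
	// or prefixes below an otherwise allowed path.
	AllowVariableList(namespace, path string) bool
}

// aclFilter returns a per-entry predicate for the paginator, so every row
// in the page is checked against the caller's ACL rather than only the
// Prefix argument of the request.
func aclFilter(acl aclChecker) func(*VariableMeta) bool {
	return func(v *VariableMeta) bool {
		return acl.AllowVariableList(v.Namespace, v.Path)
	}
}
```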