When a user performs a client API call, the Nomad client performs an RPC
that looks up the ACL policies assigned to the caller's ACL token. If the
ACL token includes dangling (deleted) policies, the call would previously
fail with a permission denied error.
This change ensures that error is no longer returned and that the lookup
succeeds in the event of dangling policies.
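A minimal sketch of the new behavior, using hypothetical, simplified types rather than Nomad's real ACL structures: dangling policy names are skipped during resolution instead of turning the whole lookup into a permission denied error.

```go
// Hypothetical, simplified stand-ins for Nomad's ACL types.
type ACLPolicy struct{ Name string }

type policyGetter interface {
	// GetPolicy returns (nil, nil) when the named policy no longer exists.
	GetPolicy(name string) (*ACLPolicy, error)
}

// resolvePolicies looks up each policy attached to a token, skipping
// dangling (deleted) policies instead of failing the whole lookup.
func resolvePolicies(tokenPolicies []string, store policyGetter) ([]*ACLPolicy, error) {
	out := make([]*ACLPolicy, 0, len(tokenPolicies))
	for _, name := range tokenPolicies {
		p, err := store.GetPolicy(name)
		if err != nil {
			return nil, err
		}
		if p == nil {
			continue // dangling policy: ignore rather than return permission denied
		}
		out = append(out, p)
	}
	return out, nil
}
```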
Co-authored-by: James Rasell <jrasell@users.noreply.github.com>
This change fixes a bug within the generic scheduler which meant duplicate
alloc indexes (names) could be submitted to the plan applier and written to
state. The bug originates in the placement calculation's assumption that the
names of allocations being replaced can be blindly copied to their
replacements. This is not correct in all cases, particularly when dealing
with canaries.
The fix updates the alloc name index tracker to include minor duplicate
tracking. This can be used when computing placements to ensure duplicates
are found and a new name is picked before the plan is submitted. The name
index tracking is now passed from the reconciler to the generic scheduler
via the results, so it does not have to be regenerated or tracked in another
data structure.
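A minimal sketch of the duplicate-tracking idea, with hypothetical names rather than the scheduler's actual name index implementation: indexes can be claimed more than once, duplicates are detectable, and a fresh index can be picked for the replacement before the plan is submitted.

```go
// nameIndex is an illustrative tracker for allocation name indexes such as
// the "3" in "example.cache[3]".
type nameIndex struct {
	used map[uint]int // index -> number of allocations currently holding it
}

func newNameIndex() *nameIndex {
	return &nameIndex{used: make(map[uint]int)}
}

// claim records that an index is in use; the same index may be claimed more
// than once, which is exactly the duplicate case the fix detects.
func (n *nameIndex) claim(idx uint) { n.used[idx]++ }

// isDuplicate reports whether more than one allocation holds the index, so
// the replacement must be renamed before the plan is submitted.
func (n *nameIndex) isDuplicate(idx uint) bool { return n.used[idx] > 1 }

// nextFree claims and returns the lowest index not currently in use.
func (n *nameIndex) nextFree() uint {
	for i := uint(0); ; i++ {
		if n.used[i] == 0 {
			n.used[i] = 1
			return i
		}
	}
}
```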
Co-authored-by: James Rasell <jrasell@users.noreply.github.com>
The iowait metric obtained from `/proc/stat` can under some circumstances
decrease. The relevant condition is when an interrupt arrives on a different
core than the one that gets woken up for the IO, and a particular counter in the
kernel for that core gets interrupted. This is documented in the man page for
the `proc(5)` pseudo-filesystem, and considered an unfortunate behavior that
can't be changed for the sake of ABI compatibility.
In Nomad, we get the current "busy" time (everything except for idle) and
compare it to the previous busy time to get the counter increment. If the
iowait counter decreases and the idle counter increases by more than the
increase in the total busy time, we can get a negative total. This
previously caused a panic in our metrics collection (see #15861); that panic
is now prevented by reporting an error message instead.
Fix the bug by putting a zero floor on the values we return from the host CPU
stats calculator.
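A minimal sketch of the zero floor, assuming cumulative counters read from `/proc/stat`; the function names are illustrative, not the actual stats calculator's API.

```go
// delta computes the increase of a cumulative /proc/stat counter, clamping
// to zero when the counter has gone backwards (as iowait occasionally does).
func delta(cur, prev uint64) uint64 {
	if cur < prev {
		return 0
	}
	return cur - prev
}

// busyPercent floors negative or empty intervals at zero so a shrinking
// iowait counter can never yield a negative CPU percentage.
func busyPercent(busyDelta, totalDelta float64) float64 {
	if totalDelta <= 0 || busyDelta <= 0 {
		return 0
	}
	return 100 * busyDelta / totalDelta
}
```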
Backport-of: #18835
* Rename pages to include roles
* Models and adapters
* [ui] Any policy checks in the UI now check for roles' policies as well as token policies (#18346)
* combinedPolicies as a concept
* Classic decorator on role adapter
* We added a new request for roles, so the test based on a specific order of requests got fickle fast
* Mirage roles cluster scaffolded
* Acceptance test for roles and policies on the login page
* Update mirage mock for nodes fetch to account for role policies / empty token.policies
* Roles-derived policies checks
* [ui] Access Control with Roles and Tokens (#18413)
* top level policies routes moved into access control
* A few more routes and name cleanup
* Delog and test fixes to account for new url prefix and document titles
* Overview page
* Tokens and Roles routes
* Tokens helios table
* Add a role
* Hacky role page and deletion
* New policy keyboard shortcut and roles breadcrumb nav
* If you leave New Role but haven't made any changes, remove the newly-created record from the store
* Roles index list and general role route crud
* Roles index actually links to roles now
* Helios button styles for new roles and policies
* Handle when you try to create a new role without having any policies
* Token editing generally
* Create Token functionality
* Can't delete self-token, but management token editing and deleting is fine
* Upgrading helios caused codemirror to explode, shimmed
* Policies table fix
* without bang-element condition, modifier would refire over and over
* Token TTL or Time setting
* time will take you on
* Mirage hooks for create and list roles
* Ensure policy names only use allowed characters in mirage mocks
* Mirage mocked roles and policies in the default cluster
* log and lintfix
* chromedriver to 2.1.2
* unused unit tests removed
* Nice profile dropdown
* With the HDS accordion, rename our internal component scss ref
* design revisions after discussion
* Tooltip on deleted-policy tokens
* Two-step button peripheral isDeleting code removed
* Never to null on token save
* copyright headers added and empty route files removed
* acceptance test fixes for policies endpoint
* Route for updating a token
* Policies testfixes
* Ember on-click-outside modifier upgraded with general ember-modifier upgrade
* Test adjustments to account for new profile header dropdown
* Test adjustments for tokens via policy pages
* Removed an unused route
* Access Control index page tests
* a11y tests
* Tokens index acceptance tests generally
* Lintfix
* Token edit page tests
* Token editing tests
* New token expiration tests
* Roles Index tests
* Role editing policies tests
* A complete set of Access Control Roles tests
* Policies test
* Be more specific about which row to check for expiration time
* Nil check on expirationTime equality
* Management tokens shouldn't show No Roles/Policies; give them their own designation
* Route guard on selftoken, conditional columns, and afterModel at parent to prevent orphaned policies on tokens/roles from stopping a new save
* Policy unloading on delete and other todos plus autofocus conditionally re-enabled
* Invalid policies non-links now a concept for Roles index
* HDS style links to make job.variables.alert links look like links again
* Mirage finding looks weird so making model async in hash even though redundant
* Drop rsvp
* RSVP wasn't the problem, cached lookups were
* remove old todo comments
* de-log
We use capped exponential backoff in several places in the code when handling
failures. The code we've copy-and-pasted all over has a check to see if the
backoff is greater than the limit, but this check happens after the bitshift and
we always increment the number of attempts. This causes an overflow with a
fairly small number of failures (e.g. in one place I tested, it occurs after
only 24 iterations), resulting in a negative backoff which then never recovers. The
backoff becomes a tight loop consuming resources and/or DoS'ing a Nomad RPC
handler or an external API such as Vault. Note this doesn't occur in places
where we cap the number of iterations so the loop breaks (usually to return an
error), so long as the number of iterations is reasonable.
Introduce a helper with a check on the cap before the bitshift to avoid
overflow in all places where this can occur.
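A minimal sketch of such a helper; the name and signature are illustrative, not necessarily those of the helper introduced here. The key point is that the cap is enforced before the bitshift.

```go
package helper // illustrative location only

import "time"

// Backoff returns base << attempt, capped at limit. The cap is checked
// before the bitshift so a large attempt count cannot overflow into a
// negative duration that would defeat the backoff.
func Backoff(base, limit time.Duration, attempt uint64) time.Duration {
	// Shifting any further would exceed the cap (or overflow int64).
	if attempt >= 63 || base > limit>>attempt {
		return limit
	}
	return base << attempt
}
```

A retry loop can then sleep for something like `Backoff(time.Second, time.Minute, attempt)` and keep incrementing the attempt counter indefinitely without the delay ever going negative.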
Fixes: #18199
Co-authored-by: stswidwinski <stan.swidwinski@gmail.com>