Many of the GitHub Actions in the build pipeline run on a truly
ancient version of NodeJS. Upgrade them to more recent versions.
Remove RelEng from codeowners
When spread targets have a percent value of zero it's possible for them to
return -Inf scoring because of a float divide by zero. This is very hard for
operators to debug because the string "-Inf" is returned in the API and that
breaks the presentation of debugging data.
Most scoring iterators are bracketed to -1/+1, but spread iterators are not, so
that they can handle greatly unbalanced scoring. That means we can't simply
return a -1 score, because it might still be greater than the negative scores
set by other spread targets. Instead, track the lowest-seen spread boost and use
that as the spread boost for any cases where we'd divide by zero.
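Roughly, the guard looks like this (a minimal sketch with illustrative names, not the actual iterator code):

```go
// spreadBoost is an illustrative stand-in for the iterator's scoring:
// a zero percent target makes desired zero, and the usual formula
// would divide by zero and return -Inf.
func spreadBoost(used, total int, percent, minSeenBoost float64) float64 {
	desired := float64(total) * percent / 100.0
	if desired == 0 {
		// use the lowest boost seen so far so this target never
		// outranks the negative scores of other spread targets
		return minSeenBoost
	}
	return (desired - float64(used)) / desired
}
```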
Fixes: #8863
* client: ignore restart issued to terminal allocations
This PR fixes a bug where issuing a restart to a terminal allocation
would cause the allocation to run its hooks anyway. This was particularly
apparent with group_service_hook, which would register services but
then never deregister them, leaving the allocation effectively in
a "zombie" state where it is prepped to run tasks but never will.
* e2e: add e2e test for alloc restart zombies
* cl: tweak text
Co-authored-by: Tim Gross <tgross@hashicorp.com>
The `DisableLogCollection` capability was introduced as an experimental
interface for the Docker driver in 0.10.4. The interface has been stable, and
allowing third-party task drivers the same capability would be useful for those
drivers that don't need the additional overhead of logmon.
This PR only makes the capability public. It doesn't yet add it to the
configuration options for the other internal drivers.
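For illustration, a third-party driver could then advertise the capability roughly like so (`MyDriver` is hypothetical, and the exact plugin interface may differ):

```go
// Capabilities sketch for a third-party driver opting out of logmon.
func (d *MyDriver) Capabilities() (*drivers.Capabilities, error) {
	return &drivers.Capabilities{
		SendSignals: true,
		Exec:        true,
		// the driver handles log collection itself, so skip the
		// per-alloc logmon process entirely
		DisableLogCollection: true,
	}, nil
}
```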
Fixes: #14636 #15686
New RPC endpoints introduced during the OIDC and JWT auth work perform unnecessarily many RPC calls when they upsert generated ACL tokens, as pointed out by @tgross.
This PR moves the common logic from the acl.UpsertTokens method into a helper method that sidesteps authentication, metrics, etc.
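The shape of the refactor, as a rough sketch (`upsertTokensInternal` is an assumed name):

```go
// UpsertTokens remains the public RPC endpoint; the shared logic moves
// into a helper the OIDC/JWT login endpoints call directly.
func (a *ACL) UpsertTokens(args *structs.ACLTokenUpsertRequest, reply *structs.ACLTokenUpsertResponse) error {
	// forwarding, authentication, rate metrics, ACL checks ...
	return a.upsertTokensInternal(args, reply)
}

// upsertTokensInternal (assumed name) validates the tokens and applies
// a single Raft transaction, with no auth or metrics of its own.
func (a *ACL) upsertTokensInternal(args *structs.ACLTokenUpsertRequest, reply *structs.ACLTokenUpsertResponse) error {
	// ... shared upsert logic ...
	return nil
}
```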
Implementation of the base work for the new node pools feature. It includes a new `NodePool` struct and its corresponding state store table.
Upon start the state store is populated with two built-in node pools that cannot be modified nor deleted:
* `all` is a node pool that always includes all nodes in the cluster.
* `default` is the node pool where nodes that don't specify a node pool in their configuration are placed.
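A minimal sketch of the struct and the built-in pool names (the field set is an assumption; the real struct likely carries more):

```go
// NodePool groups nodes for scheduling purposes.
type NodePool struct {
	Name        string
	Description string
	Meta        map[string]string

	CreateIndex uint64
	ModifyIndex uint64
}

// Built-in pools created when the state store starts; they can be
// neither modified nor deleted.
const (
	NodePoolAll     = "all"     // always includes every node in the cluster
	NodePoolDefault = "default" // nodes that don't specify a pool
)
```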
Tools like `nomad-nodesim` are unable to provide a minimal implementation of
an allocrunner that would let us test the client communication without having to
lug around the entire allocrunner/taskrunner code base. The allocrunner was
implemented with an interface specifically for this purpose, but circular
imports made it challenging to use in practice.
Move the AllocRunner interface into an inner package and provide a factory
function type. Provide a minimal test that exercises the new function so that
consumers have some idea of what the minimum implementation required is.
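A sketch of the resulting shape (names are assumptions based on the description):

```go
// client/interfaces (sketch): moving the AllocRunner interface into an
// inner package breaks the circular imports, and a factory function
// type lets nomad-nodesim inject a stub implementation.
package interfaces

type AllocRunner interface {
	Run()
	Shutdown()
	// ... remaining lifecycle methods elided ...
}

// AllocRunnerFactory constructs an AllocRunner for one allocation;
// the config type is elided here.
type AllocRunnerFactory func(config any) (AllocRunner, error)
```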
* core: eliminate second index on job_submissions table
This PR refactors the job_submissions state store code to eliminate the
use of a second index formerly used for purging all versions of a given
job. In practice we ended up with duplicate entries on the table. Instead,
use index prefix scanning on the primary index and tidy up any potential
for creating (or removing) duplicates.
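With go-memdb, the prefix scan looks roughly like this (table and index names are assumptions):

```go
import memdb "github.com/hashicorp/go-memdb"

// deleteJobSubmissions sketches the new approach: prefix-scan the
// primary index instead of keeping a second index.
func deleteJobSubmissions(txn *memdb.Txn, namespace, jobID string) error {
	// "id_prefix" matches every entry whose compound primary key
	// starts with (namespace, jobID), i.e. all versions at once
	_, err := txn.DeleteAll("job_submission", "id_prefix", namespace, jobID)
	return err
}
```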
* core: pr comments followup
When client nodes are restarted, all allocations that have been scheduled on the
node have their modify index updated, including terminal allocations. There are
several contributing factors:
* The `allocSync` method that updates the servers isn't gated on first contact
with the servers. This means that if a server updates the desired state while
the client is down, the `allocSync` races with the `Node.ClientGetAlloc`
RPC. This will typically result in the client updating the server with "running"
and then immediately thereafter "complete".
* The `allocSync` method unconditionally sends the `Node.UpdateAlloc` RPC even
if it's possible to assert that the server has definitely seen the client
state. The allocrunner may queue up updates even if we gate sending them. So
then we end up with a race between the allocrunner updating its internal state
to overwrite the previous update and `allocSync` sending the bogus or duplicate
update.
This changeset adds tracking of server-acknowledged state to the
allocrunner. This state gets checked in the `allocSync` before adding the update
to the batch, and updated when `Node.UpdateAlloc` returns successfully. To
implement this we need to be able to equality-check the updates against the last
acknowledged state. We also need to add the last acknowledged state to the
client state DB, otherwise we'd drop unacknowledged updates across restarts.
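The gate in `allocSync` amounts to something like this sketch (the real equality check also covers task states):

```go
// filterAcked is a sketch of the new gate in allocSync: skip updates
// whose client state the server has already acknowledged. The
// acknowledged state is also persisted to the client state DB so the
// gate survives restarts.
func filterAcked(updates []*structs.Allocation, lastAcked func(id string) *structs.Allocation) []*structs.Allocation {
	out := make([]*structs.Allocation, 0, len(updates))
	for _, u := range updates {
		prev := lastAcked(u.ID)
		if prev != nil &&
			prev.ClientStatus == u.ClientStatus &&
			prev.ClientDescription == u.ClientDescription {
			continue // server definitely has this client state
		}
		out = append(out, u)
	}
	return out
}
```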
The client restart test has been expanded to cover a variety of allocation
states, including allocs stopped before shutdown, allocs stopped by the server
while the client is down, and allocs that have been completely GC'd on the
server while the client is down. I've also bench tested scenarios where the task
workload is killed while the client is down, resulting in a failed restore.
Fixes #16381
nomad#15861 describes intermittent panics caused by the go-metrics Prometheus
client. We have not been able to further debug this problem due to the
lack of information when the panic happens.
go-metrics#146 prevents the panic from happening and also logs
additional information that can help us understand the root cause of the
problem.
This commit pins the go-metrics dependency to this branch until we can
better debug the issue.
* Batch jobs, aside from child jobs, get the new status panel
* Clean up the imported jobAllocStatuses
* Note for mirage that batch jobs now have a historical status panel
* Batch job test for complete status
* Parameterized and periodic child jobs get the panel treatment
* Undo parameterized and periodic child test situations
This PR fixes a bug where nodes configured with populated
/etc/ssh/ssh_known_hosts files would be unable to read them during
artifact downloading.
Fixes #17086
* api: set the job submission during job reversion
This PR fixes a bug where the job submission would always be nil when
a job goes through a reversion to a previous version. Basically we need
to detect when this happens, look up the submission of the job version
being reverted to, and set that as the submission of the new job being
created.
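A self-contained sketch of the idea (types and names are illustrative, not the actual state store code):

```go
// JobSubmission stands in for the state store record that holds the
// original jobspec source for a job version.
type JobSubmission struct {
	Source  string
	Version uint64
}

// copySubmissionOnRevert sketches the fix: when a revert creates a new
// job version, inherit the submission of the version reverted to.
func copySubmissionOnRevert(subs map[uint64]*JobSubmission, revertTo, newVersion uint64) {
	if sub := subs[revertTo]; sub != nil {
		copied := *sub
		copied.Version = newVersion
		subs[newVersion] = &copied
	}
}
```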
* e2e: add e2e test for job submissions during reversion
This e2e test ensures a reverted job inherits the job submission
associated with the version of the job being reverted to.
This avoids leaking task resources (e.g. containers,
iptables rules) if allocRunner prerun fails during
restore on client restart.
Now if prerun fails, TaskRunner.MarkFailedKill()
will only emit an event, mark the task as failed,
and cancel the tr's killCtx, so that ar.runTasks()
-> tr.Run() can take care of the actual cleanup.
Removed from (formerly) tr.MarkFailedDead() and
now handled by tr.Run():
* set task state as dead
* save task runner local state
* task stop hooks
Also done in tr.Run() now that it's not skipped:
* handleKill() to kill tasks while respecting
their shutdown delay, and retrying as needed
* also includes task preKill hooks
* clearDriverHandle() to destroy the task
and associated resources
* task exited hooks
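Putting it together, the slimmed-down method looks roughly like this (a sketch assuming the names above; the real body may differ):

```go
// MarkFailedKill (sketch) now only records the failure and unblocks
// the kill context; everything else happens in tr.Run().
func (tr *TaskRunner) MarkFailedKill(reason string) {
	// emit an event and mark the task as failed
	event := structs.NewTaskEvent(structs.TaskSetupFailure).
		SetDisplayMessage(reason).
		SetFailsTask()
	tr.EmitEvent(event)

	// cancel killCtx so ar.runTasks() -> tr.Run() performs the real
	// cleanup: handleKill() with shutdown delay and preKill hooks,
	// clearDriverHandle(), exited/stop hooks, state persistence,
	// and finally marking the task dead
	tr.killCtxCancel()
}
```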
This reverts commit 1721e687c0832bea3d9b7eec5dcd3c4e7a924d71.
The change was expected to solve the sporadic problems we were having
with Backport Assistant, but it ended up creating even more failures.
Nomad 1.5.4 shipped with a logmon bug that we rolled out a fix for in Nomad
1.5.5. Unfortunately we can't yank the release, but we should leave a note in the
upgrade guide telling users to avoid it.
* System jobs get a panel and lost status reinstated
* Leveraging nodes and not worrying about rescheds for system jobs
* Consistency w/ restarted as well
* Text shadow removed and early return where possible
* System jobs added to the Historical Click list
* System alloc and client summary panels removed
* Bones of some new system jobs tests
* [ui, deployments] handle node read permissions for system job panel (#17073)
* Do the next-best thing when we can't read nodes for system jobs
* Whitespace control handlebars expr
* Simplifies system jobs to not attempt to show a desired count, since it is a particularly complex number depending on constraints, number of nodes, etc.
* [ui, deployments] Fix order in which allocations are ascribed to the status chart (#17063)
* Discovery of alloc.isOld
* Correct sorting and better types
* A more honest walk-back that prioritizes running and pending allocs first
* Test scenario for descending-order allocs to show
* isOld mandates that we set a job version for our created job. Could also do this in the factory but maybe side-effecty
* Type simplification
* Fixed up a test that needed system job summary to be updated
* Tests for modifications to the job summary
* Explicitly mark the service jobs in test as not-deploying
* connect: use heuristic to detect sidecar task driver
This PR adds a heuristic to detect whether to use the podman task driver
for the connect sidecar proxy. The podman driver will be selected if there
is at least one task in the task group configured to use podman, and there
are zero tasks in the group configured to use docker. In all other cases
the task driver defaults to docker.
After this change, we should be able to run typical Connect jobspecs
(e.g. `nomad job init [-short] -connect`) on clusters configured with the
podman task driver, without modification to the job files.
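The heuristic itself is small; a sketch (function name is an assumption):

```go
// groupConnectDriver picks podman only when at least one task in the
// group uses it and none use docker; otherwise default to docker.
func groupConnectDriver(taskDrivers []string) string {
	podman, docker := 0, 0
	for _, d := range taskDrivers {
		switch d {
		case "podman":
			podman++
		case "docker":
			docker++
		}
	}
	if podman >= 1 && docker == 0 {
		return "podman"
	}
	return "docker"
}
```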
Closes #17042
* golf: cleanup driver detection logic
The job scale RPC endpoint hard-coded the eval creation to use the
type `service`. This meant scaling events triggered on jobs of
type `batch` would create evaluations with the wrong type. This
does not seem to cause any problems, just confusion when
correlating the two.
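The fix is essentially to use the job's own type when building the eval; a sketch (helper name is illustrative):

```go
// scaleEval sketches the fix in the scale endpoint: the eval type
// comes from the job rather than being hard-coded.
func scaleEval(job *structs.Job, evalID string, now int64) *structs.Evaluation {
	return &structs.Evaluation{
		ID:          evalID,
		Namespace:   job.Namespace,
		Priority:    job.Priority,
		Type:        job.Type, // was: structs.JobTypeService
		TriggeredBy: structs.EvalTriggerScaling,
		JobID:       job.ID,
		CreateTime:  now,
		ModifyTime:  now,
	}
}
```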
When the server restarts for the upgrade, it loads the `structs.Job` from the
Raft snapshot/logs. The jobspec has long since been parsed, so none of the
guards around the default value are in play. The empty field value for `Enabled`
is the zero value, which is false.
This doesn't impact any running allocation because we don't replace running
allocations when either the client or server restart. But as soon as any
allocation gets rescheduled (ex. you drain all your clients during upgrades),
it'll be using the `structs.Job` that the server has, which has `Enabled =
false`, and logs will not be collected.
This changeset fixes the bug by adding a new field `Disabled` which defaults to
false (so that the zero value works), and deprecates the old field.
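A sketch of why the polarity flip works (field placement on `LogConfig` is assumed from the description):

```go
// LogConfig sketch: a structs.Job restored from an old Raft snapshot
// decodes missing booleans as false. With Enabled, that zero value
// silently turns log collection off; with Disabled, it leaves log
// collection on, which is the safe default.
type LogConfig struct {
	MaxFiles      int
	MaxFileSizeMB int

	// Deprecated: the zero value disabled logs for jobs restored
	// from snapshots written before the field existed.
	Enabled bool

	// Disabled defaults to false, preserving log collection.
	Disabled bool
}
```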
Fixes #17076
* docs: move CNI reference plugins installation to CNI overview page
This PR moves the instruction steps for installing the CNI reference plugins
from the Consul Mesh integration page to the general Networking CNI page.
These plugins are required for bridge networking, not just Consul Mesh,
so it makes sense to have them on the general CNI page.
Closes #17038
* docs: fix a link to post install steps
Their release notes are here: https://github.com/golang-jwt/jwt/releases
Seemed wise to upgrade before we do even more with JWTs. For example
this upgrade *would* have mattered if we already implemented common JWT
claims such as expiration. Since we didn't rely on any claim
verification, this upgrade is a noop...
...except for 1 test that called `Claims.Valid()`! Removing that
assertion *seems* scary, but it didn't actually do anything because we
didn't implement any of the standard claims it validated:
https://github.com/golang-jwt/jwt/blob/v4.5.0/map_claims.go#L120-L151
So functionally this major upgrade is a noop.
This special-purpose UI gives commands direct access to the io.Reader
and io.Writers of the base cli.Ui. It can traverse a chain of
ColoredUis to find the base. Currently, it can retrieve writers from a
cli.BasicUi (or a cli.MockUi for testing).
Renames ui.go and ui_test.go to log_ui.go and log_ui_test.go
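A sketch of the traversal (assuming the mitchellh/cli types named above):

```go
import (
	"io"

	"github.com/mitchellh/cli"
)

// baseUi unwraps a chain of ColoredUis to reach the underlying Ui.
func baseUi(ui cli.Ui) cli.Ui {
	for {
		colored, ok := ui.(*cli.ColoredUi)
		if !ok {
			return ui
		}
		ui = colored.Ui
	}
}

// outputWriter returns the base Ui's stdout writer; only BasicUi and
// MockUi expose their writers directly.
func outputWriter(ui cli.Ui) (io.Writer, bool) {
	switch u := baseUi(ui).(type) {
	case *cli.BasicUi:
		return u.Writer, true
	case *cli.MockUi:
		return u.OutputWriter, true
	}
	return nil, false
}
```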
This PR cleans up an old code path for versions of Consul that did not
support reporting the supported versions of Envoy in their API. Those
Consul versions have been out of support for years at this point, and the
fallback version of Envoy hasn't been supported by any version of Consul
for almost as long. Remove this code path that is no longer useful.
This PR modifies references to the envoyproxy/envoy docker image to
explicitly include the docker.io prefix. This does not affect existing
users, but makes things easier for Podman users, who otherwise need to
specify the full name because Podman does not default to docker.io.