* Batch jobs, aside from child jobs, get the new status panel
* Clean up the imported jobAllocStatuses
* Note for mirage that batch jobs now have a historical status panel
* Batch job test for complete status
* Parameterized and periodic child jobs get the panel treatment
* Undo parameterized and periodic child test situations
This PR fixes a bug where nodes configured with populated
/etc/ssh/ssh_known_hosts files would be unable to read them during
artifact downloading.
Fixes #17086
* api: set the job submission during job reversion
This PR fixes a bug where the job submission would always be nil when
a job goes through a reversion to a previous version. We need to detect
when this happens, look up the submission of the job version being
reverted to, and set that as the submission of the new job being
created (a rough sketch of this flow follows below).
* e2e: add e2e test for job submissions during reversion
This e2e test ensures a reverted job inherits the job submission
associated with the version of the job being reverted to.
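The flow can be sketched roughly as follows; the types and the `revert` helper below are illustrative stand-ins, not Nomad's actual structs or RPC code:

```go
// Illustrative only: carry the prior version's submission forward when
// building the reverted job, instead of leaving it nil.
package main

import "fmt"

type JobSubmission struct {
	Source string
	Format string
}

type versionedJob struct {
	Version    uint64
	Submission *JobSubmission
}

// revert builds the new job version from priorVersion and inherits that
// version's submission so it is not lost.
func revert(history map[uint64]versionedJob, priorVersion, nextVersion uint64) versionedJob {
	prior := history[priorVersion]
	return versionedJob{
		Version:    nextVersion,
		Submission: prior.Submission, // previously this was always nil
	}
}

func main() {
	history := map[uint64]versionedJob{
		0: {Version: 0, Submission: &JobSubmission{Source: `job "web" {}`, Format: "hcl2"}},
		1: {Version: 1, Submission: nil},
	}
	reverted := revert(history, 0, 2)
	fmt.Println(reverted.Submission.Format) // "hcl2", inherited from version 0
}
```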
to avoid leaking task resources (e.g. containers,
iptables) if allocRunner prerun fails during
restore on client restart.
now if prerun fails, TaskRunner.MarkFailedKill()
will only emit an event, mark the task as failed,
and cancel the tr's killCtx, so that ar.runTasks()
-> tr.Run() can take care of the actual cleanup
(a rough sketch follows after the lists below).
removed from the (former) tr.MarkFailedDead(),
now handled by tr.Run():
* set task state as dead
* save task runner local state
* task stop hooks
also done in tr.Run() now that it's not skipped:
* handleKill() to kill tasks while respecting
their shutdown delay, and retrying as needed
* also includes task preKill hooks
* clearDriverHandle() to destroy the task
and associated resources
* task exited hooks
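A rough sketch of this division of labor, assuming only MarkFailedKill, Run, killCtx, handleKill, and clearDriverHandle correspond to real names; the struct shape and everything else below is a simplified stand-in for the task runner:

```go
// Illustrative only: MarkFailedKill records intent, Run performs the cleanup.
package main

import (
	"context"
	"fmt"
)

type taskRunner struct {
	killCtx    context.Context
	killCancel context.CancelFunc
	failed     bool
	events     []string
}

// MarkFailedKill only emits an event, marks the task as failed, and cancels
// killCtx; it no longer tears anything down itself.
func (tr *taskRunner) MarkFailedKill(reason string) {
	tr.events = append(tr.events, "Task failed during restore: "+reason)
	tr.failed = true
	tr.killCancel()
}

// Run notices the cancelled killCtx and performs the full kill path:
// preKill hooks, handleKill with shutdown delay, clearDriverHandle,
// exited hooks, then marks the task dead and persists local state.
func (tr *taskRunner) Run() {
	<-tr.killCtx.Done()
	fmt.Println("kill path: hooks, handleKill, clearDriverHandle")
}

func main() {
	ctx, cancel := context.WithCancel(context.Background())
	tr := &taskRunner{killCtx: ctx, killCancel: cancel}
	tr.MarkFailedKill("prerun failed")
	tr.Run()
	fmt.Println(tr.failed, tr.events)
}
```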
This reverts commit 1721e687c0832bea3d9b7eec5dcd3c4e7a924d71.
The change was expected to solve the sporadic problems we were having
with Backport Assistant, but it ended up creating even more failures.
Nomad 1.5.4 shipped with a logmon bug that we rolled out a fix for in Nomad
1.5.5. Unfortunately we can't yank the release but we should leave a note in the
upgrade guide telling users to avoid it.
* System jobs get a panel and lost status reinstated
* Leveraging nodes and not worrying about rescheds for system jobs
* Consistency with restarted as well
* Text shadow removed and early return where possible
* System jobs added to the Historical Click list
* System alloc and client summary panels removed
* Bones of some new system jobs tests
* [ui, deployments] handle node read permissions for system job panel (#17073)
* Do the next-best-thing when we can't read nodes for system jobs
* Whitespace control handlebars expr
* Simplifies system jobs to not attempt to show a desired count, since it is a particularly complex number depending on constraints, number of nodes, etc.
* [ui, deployments] Fix order in which allocations are ascribed to the status chart (#17063)
* Discovery of alloc.isOld
* Correct sorting and better types
* A more honest walk-back that prioritizes running and pending allocs first
* Test scenario for descending-order allocs to show
* isOld mandates that we set a job version for our created job. Could also do this in the factory but maybe side-effecty
* Type simplification
* Fixed up a test that needed system job summary to be updated
* Tests for modifications to the job summary
* Explicitly mark the service jobs in test as not-deploying
* connect: use heuristic to detect sidecar task driver
This PR adds a heuristic to detect whether to use the podman task driver
for the connect sidecar proxy. The podman driver will be selected if there
is at least one task in the task group configured to use podman and there
are zero tasks in the group configured to use docker. In all other cases
the task driver defaults to docker (a rough sketch of the heuristic
follows below).
After this change, we should be able to run typical Connect jobspecs
(e.g. nomad job init [-short] -connect) on clusters configured with the
podman task driver, without modification to the job files.
Closes #17042
* golf: cleanup driver detection logic
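A minimal sketch of the heuristic, using a simplified task-group representation rather than Nomad's actual types:

```go
// Illustrative only: pick the sidecar driver from the drivers already used
// in the task group.
package main

import "fmt"

// sidecarDriver selects podman only when the group has at least one podman
// task and no docker tasks; docker in every other case.
func sidecarDriver(groupTaskDrivers []string) string {
	podman, docker := 0, 0
	for _, d := range groupTaskDrivers {
		switch d {
		case "podman":
			podman++
		case "docker":
			docker++
		}
	}
	if podman > 0 && docker == 0 {
		return "podman"
	}
	return "docker"
}

func main() {
	fmt.Println(sidecarDriver([]string{"podman"}))           // podman
	fmt.Println(sidecarDriver([]string{"podman", "docker"})) // docker
	fmt.Println(sidecarDriver([]string{"exec"}))             // docker
}
```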
The job scale RPC endpoint hard-coded the eval creation to use the
type `service`. This meant scaling events triggered on jobs of type
`batch` would create evaluations with the wrong type, which does not
seem to cause any problems, just confusion when correlating the two.
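Roughly, the fix amounts to deriving the evaluation type from the job itself; the structs below are simplified stand-ins for illustration only, not `structs.Job` or `structs.Evaluation`:

```go
// Illustrative only: build the eval from the job's own type instead of
// hard-coding "service".
package main

import "fmt"

type Job struct {
	ID   string
	Type string // "service", "batch", "system", ...
}

type Evaluation struct {
	JobID       string
	Type        string
	TriggeredBy string
}

func newScaleEval(job *Job) *Evaluation {
	return &Evaluation{
		JobID:       job.ID,
		Type:        job.Type, // previously always "service"
		TriggeredBy: "job-scaling",
	}
}

func main() {
	eval := newScaleEval(&Job{ID: "batch-import", Type: "batch"})
	fmt.Println(eval.Type) // batch
}
```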
When the server restarts for the upgrade, it loads the `structs.Job` from the
Raft snapshot/logs. The jobspec has long since been parsed, so none of the
guards around the default value are in play. The empty field value for `Enabled`
is the zero value, which is false.
This doesn't impact any running allocation because we don't replace running
allocations when either the client or server restart. But as soon as any
allocation gets rescheduled (ex. you drain all your clients during upgrades),
it'll be using the `structs.Job` that the server has, which has `Enabled =
false`, and logs will not be collected.
This changeset fixes the bug by adding a new field `Disabled` which defaults to
false (so that the zero value works), and deprecates the old field.
Fixes #17076
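A small illustration of the zero-value issue and the shape of the fix; `LogConfig` and the compatibility shim below are simplified stand-ins, not Nomad's exact fields or logic:

```go
// Illustrative only: why a bool whose zero value means "off" breaks jobs
// restored from old Raft data, and how a Disabled field avoids that.
package main

import "fmt"

type LogConfig struct {
	// Enabled's zero value is false, so a Job decoded from an old Raft
	// snapshot (which never had the field) silently turns logging off.
	Enabled bool

	// Disabled's zero value is also false, but here false means "keep
	// collecting logs", so old snapshots keep their original behavior.
	Disabled bool
}

// collectLogs sketches the deprecation shim: prefer the new field, but still
// honor an explicitly set legacy Enabled=true.
func collectLogs(c LogConfig) bool {
	if c.Enabled {
		return true
	}
	return !c.Disabled
}

func main() {
	var fromOldSnapshot LogConfig                // all zero values
	fmt.Println(collectLogs(fromOldSnapshot))    // true: logs still collected
	fmt.Println(collectLogs(LogConfig{Disabled: true})) // false: opted out
}
```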
* docs: move CNI reference plugins installation to CNI overview page
This PR moves the instruction steps for installing the CNI reference plugins
from the Consul Mesh integration page to the general Networking CNI page.
These plugins are required for bridge networking, not just Consul Mesh,
so it makes sense to have them on the general CNI page.
Closes #17038
* docs: fix a link to post install steps
Their release notes are here: https://github.com/golang-jwt/jwt/releases
Seemed wise to upgrade before we do even more with JWTs. For example,
this upgrade *would* have mattered if we had already implemented common
JWT claims such as expiration. Since we didn't rely on any claim
verification, this upgrade is a noop...
...except for 1 test that called `Claims.Valid()`! Removing that
assertion *seems* scary, but it didn't actually do anything because we
didn't implement any of the standard claims it validated:
https://github.com/golang-jwt/jwt/blob/v4.5.0/map_claims.go#L120-L151
So functionally this major upgrade is a noop.
This special-purpose UI exposes the io.Reader and io.Writers of the base
cli.Ui to commands that can benefit from direct access to them. It can
traverse a chain of ColoredUis to find the base. Currently, it can
retrieve writers from a cli.BasicUi (or cli.MockUi for testing).
Renames ui.go and ui_test.go to log_ui.go and log_ui_test.go
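A rough sketch of the unwrapping idea, written against github.com/mitchellh/cli; the `baseWriter` helper name is illustrative and error handling is simplified:

```go
// Illustrative only: walk a chain of ColoredUi wrappers down to a BasicUi or
// MockUi and return that UI's output writer.
package main

import (
	"fmt"
	"io"
	"os"

	"github.com/mitchellh/cli"
)

func baseWriter(ui cli.Ui) (io.Writer, error) {
	for {
		switch u := ui.(type) {
		case *cli.ColoredUi:
			ui = u.Ui // unwrap one layer and keep looking
		case *cli.BasicUi:
			return u.Writer, nil
		case *cli.MockUi:
			return u.OutputWriter, nil
		default:
			return nil, fmt.Errorf("unsupported UI type %T", u)
		}
	}
}

func main() {
	ui := &cli.ColoredUi{
		Ui: &cli.BasicUi{Reader: os.Stdin, Writer: os.Stdout, ErrorWriter: os.Stderr},
	}
	w, err := baseWriter(ui)
	if err != nil {
		panic(err)
	}
	fmt.Fprintln(w, "writing directly to the base UI's writer")
}
```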
This PR does some cleanup of an old code path for versions of Consul that
did not support reporting the supported versions of Envoy in their API. Those
Consul versions have been unsupported for years at this point, and the fallback
version of Envoy hasn't been supported by any version of Consul for almost
as long. Remove this code path that is no longer useful.
This PR modifies references to the envoyproxy/envoy docker image to
explicitly include the docker.io prefix. This does not affect existing
users, but makes things easier for Podman users, who otherwise need to
specify the fully-qualified name because Podman does not default to docker.io.
This PR updates the envoy_bootstrap_hook to no longer disable itself if
the task driver in use is not docker. In other words, it now works for
podman and other image-based task drivers. The hook now only checks that:
1. the task is a connect sidecar
2. the task.config block contains an "image" field
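A sketch of the relaxed check, assuming a simplified task shape; the `connect-proxy:` kind prefix and the field names below are stand-ins for illustration:

```go
// Illustrative only: bootstrap whenever the task is a connect sidecar and its
// config has an "image" field, regardless of which image-based driver runs it.
package main

import (
	"fmt"
	"strings"
)

type Task struct {
	Kind   string // e.g. "connect-proxy:web" for a sidecar (assumed shape)
	Config map[string]interface{}
}

func isConnectSidecar(t *Task) bool {
	return strings.HasPrefix(t.Kind, "connect-proxy:")
}

func shouldBootstrap(t *Task) bool {
	if !isConnectSidecar(t) {
		return false
	}
	_, hasImage := t.Config["image"]
	return hasImage
}

func main() {
	sidecar := &Task{
		Kind:   "connect-proxy:web",
		Config: map[string]interface{}{"image": "docker.io/envoyproxy/envoy:v1.25.1"},
	}
	fmt.Println(shouldBootstrap(sidecar)) // true for docker, podman, or any image driver
}
```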
* Status panel shows failed and lost, but probably don't have the condition quite right
* Rescheduled and Replaced cells instead of a general failed/lost one
* Tests moving to acceptance
* Fixed desiredTotal and added acceptance test for restarted
* moved integration test into acceptance test generally
* Now that we represent Lost in the graph, have to mark our unplaced testcase as Unknown
* No need to declare new vars for immediately returned getters
* Literal restart and resched add to the tallies, rather than 'would have but ran out of attempts' like before
* Test fixes now that we've redefined what restarts and reschedules are indicated by
While working on client status update improvements, I encountered problems
getting tests with the mock driver to correctly restore.
Unlike typical drivers the mock driver doesn't have an external source of truth
for whether the task is running (ex. making API calls to `dockerd` or looking
for a running PID), and so in order to make up that information, it re-parses
the original task config. But the taskrunner doesn't call the encoding step for
`RecoverTask`, only `StartTask`, so the task config the mock driver gets is
missing data.
Update the mock driver to stash the "external" state in the task state that
we'll get from the task runner, so that we don't have to try to recover from the
original `TaskConfig` anymore. This should bring the mock driver closer to the
behavior of the other drivers.
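An illustrative sketch of the change in shape; the driver and state types below are simplified stand-ins, not the real Nomad drivers API:

```go
// Illustrative only: StartTask stashes what RecoverTask will need in the
// per-task state, so recovery no longer re-parses the original task config.
package main

import "fmt"

// taskState is the opaque per-task state the task runner persists and hands
// back to the driver on restore.
type taskState struct {
	StartedAt string
	Command   string
}

type mockDriver struct {
	running map[string]taskState
}

func (d *mockDriver) StartTask(id, command string) taskState {
	st := taskState{StartedAt: "now", Command: command}
	d.running[id] = st
	return st
}

// RecoverTask restores purely from the stashed state, mirroring how real
// drivers consult an external source of truth (dockerd, a PID, ...).
func (d *mockDriver) RecoverTask(id string, st taskState) error {
	d.running[id] = st
	return nil
}

func main() {
	d := &mockDriver{running: map[string]taskState{}}
	st := d.StartTask("t1", "sleep 100")

	restarted := &mockDriver{running: map[string]taskState{}}
	if err := restarted.RecoverTask("t1", st); err != nil {
		panic(err)
	}
	fmt.Println(restarted.running["t1"].Command) // "sleep 100"
}
```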
* Sets up a CSS grid for Evaluations sidebar
* Flex seems more sensible for this actually
* Tighten up the header margin
* Percy found a diff; the expand button wasn't showing for view logs sidebar
* [ui] Service job status panel (#16134)
* it begins
* Hacky demo enabled
* Still very hacky but seems deece
* Floor of at least 3 must be shown
* Width from on-high
* Other statuses considered
* More sensible allocTypes listing
* Beginnings of a legend
* Total number of allocs running now maps over job.groups
* Lintfix
* base the number of slots to hold open on actual tallies, which should never exceed totalAllocs
* Versions get yer versions here
* Versions lookin like versions
* Mirage fixup
* Adds Remaining as an alloc chart status and adds historical status option
* Get tests passing again by making job status static for a sec
* Historical status panel click actions moved into their own component class
* job detail tests plz chill
* Testing if percy is fickle
* Hyper-specific on summary distribution bar identifier
* Perhaps the 2nd allocSummary item no longer exists with the more accurate afterCreate data
* UI Test eschewing the page pattern
* Bones of a new acceptance test
* Track width changes explicitly with window-resize
* testlintfix
* Alloc counting tests
* Alloc grouping test
* Alloc grouping with complex resizing
* Refined the list of showable statuses
* PR feedback addressed
* renamed allocation-row to allocation-status-row
* [ui, job status] Make panel status mode a queryParam (#16345)
* queryParam changing
* Test for QP in panel
* Adding @tracked to legacy controller
* Move the job of switching to Historical out to larger context
* integration test mock passed func
* [ui] Service job deployment status panel (#16383)
* A very fast and loose deployment panel
* Removing Unknown status from the panel
* Set up oldAllocs list in constructor, rather than as a getter/tracked var
* Small amount of template cleanup
* Refactored latest-deployment new logic back into panel.js
* Revert now-unused latest-deployment component
* margin bottom when ungrouped also
* Basic integration tests for job deployment status panel
* Updates complete alloc colour to green for new visualizations only (#16618)
* Updates complete alloc colour to green for new visualizations only
* Pale green instead of dark green for viz in general
* [ui] Job Deployment Status: History and Update Props (#16518)
* Deployment history wooooooo
* Styled deployment history
* Update Params
* lintfix
* Types and groups for updateParams
* Live-updating history
* Harden with types, error states, and pending states
* Refactor updateParams to use trigger component
* [ui] Deployment History search (#16608)
* Functioning searchbox
* Some nice animations for history items
* History search test
* Fixing up some old mirage conventions
* some a11y rule override to account for scss keyframes
* Split panel into deploying and steady components
* HandleError passed from job index
* gridified panel elements
* TotalAllocs added to deploying.js
* Width perc to px
* [ui] Splitting deployment allocs by status, health, and canary status (#16766)
* Initial attempt with lots of scratchpad work
* Style mods per UI discussion
* Fix canary overflow bug
* Don't show canary or health for steady/prev-alloc blocks
* Steady state
* Thanks Julie
* Fixes steady-state versions
* Legen, wait for it...
* Test fixes now that we have a minimum block size
* PR prep
* Shimmer effect on pending and unplaced allocs (#16801)
* Shimmer effect on pending and unplaced
* Don't show animation in the legend
* [ui, deployments] Linking allocblocks and legends to allocation / allocations index routes (#16821)
* Conditional link-to component and basic linking to allocations and allocation routes
* Job versions filter added to allocations index page
* Steady state legends link
* Legend links
* Badge count links for versions
* Fix: faded class on steady-state legend items
* version link now won't show completed ones
* Fix a11y violations with link labels
* Combining some template conditional logic
* [ui, deployments] Conversions on long nanosecond update params (#16882)
* Conversions on long nanosecond nums
* Early return in updateParamGroups comp prop
* [ui, deployments] Mirage Actively Deploying Job and Deployment Integration Tests (#16888)
* Start of deployment alloc test scaffolding
* Bit of test cleanup and canary for ungrouped allocs
* Flakey but more robust integrations for deployment panel
* De-flake acceptance tests and add an actively deploying job to mirage
* Jitter-less alloc status distribution removes my bad math
* bugfix caused by summary.desiredTotal non-null
* More interesting mirage active deployment alloc breakdown
* Further tests for previous-allocs row
* Previous alloc legend tests
* Percy snapshots added to integration test
* changelog
Demo (and easily reproduce locally) a CSI setup
with separate controller and node plugins.
This runs NFS in a container backed by a host volume
and CSI controller and node plugins from rocketduck:
gitlab.com/rocketduck/csi-plugin-nfs
Co-authored-by: Florian Apolloner <florian@apolloner.eu>
Co-authored-by: Tim Gross <tgross@hashicorp.com>
* services: un-mark group services as deregistered if restart hook runs
This PR may fix a bug where group services will never be deregistered if the
group undergoes a task restart.
* e2e: add test case for restart and deregister group service
* cl: add cl
* e2e: add wait for service list call
Some Nomad users ship application logs out-of-band via syslog. For these users,
having `logmon` (and `docker_logger`) running is unnecessary overhead. Allow
disabling logmon and pointing the task's stdout/stderr to /dev/null.
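A hedged sketch of the idea only: when log collection is disabled, skip launching logmon and point the task's stdio at the null device. The names below are illustrative, not Nomad's actual logmon wiring:

```go
// Illustrative only: route task stdout/stderr to the null device when log
// collection is disabled, instead of spawning a log collector.
package main

import (
	"fmt"
	"os"
)

type logConfig struct {
	Disabled bool
}

// stdioPaths returns where the task's stdout/stderr should be written.
func stdioPaths(cfg logConfig, allocLogDir, taskName string) (stdout, stderr string) {
	if cfg.Disabled {
		// no logmon / docker_logger process, no log files
		return os.DevNull, os.DevNull
	}
	return fmt.Sprintf("%s/%s.stdout.fifo", allocLogDir, taskName),
		fmt.Sprintf("%s/%s.stderr.fifo", allocLogDir, taskName)
}

func main() {
	out, errPath := stdioPaths(logConfig{Disabled: true}, "/alloc/logs", "web")
	fmt.Println(out, errPath) // /dev/null /dev/null
}
```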
This changeset is the first of several incremental improvements to log
collection short of full-on logging plugins. The next step will likely be to
extend the internal-only task driver configuration so that cluster
administrators can turn off log collection for the entire driver.
---
Fixes: #11175
Co-authored-by: Thomas Weber <towe75@googlemail.com>