When the server restarts for the upgrade, it loads the `structs.Job` from the
Raft snapshot/logs. The jobspec has long since been parsed, so none of the
guards around the default value are in play. The empty `Enabled` field takes
the Go zero value, which is false.
This doesn't impact any running allocation because we don't replace running
allocations when either the client or server restarts. But as soon as any
allocation gets rescheduled (e.g. you drain all your clients during upgrades),
it'll be using the `structs.Job` that the server has, which has `Enabled =
false`, and logs will not be collected.
This changeset fixes the bug by adding a new field `Disabled` which defaults to
false (so that the zero value works), and deprecates the old field.
Fixes #17076
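To make the zero-value behavior concrete, here's a minimal sketch assuming an
illustrative `LogConfig`-style struct (field and method names are for
illustration, not the exact Nomad definitions):

```go
package main

import "fmt"

// LogConfig is an illustrative stand-in for the jobspec logs struct. With
// `Enabled`, a Job decoded from an old Raft snapshot gets the zero value
// (false) and log collection silently turns off. With `Disabled`, the zero
// value (false) means "keep collecting logs", so old state decodes to the
// intended default.
type LogConfig struct {
	// Deprecated: zero value wrongly meant "collection off".
	Enabled bool
	// Disabled defaults to false, so a missing field keeps logs on.
	Disabled bool
}

func (l *LogConfig) collectLogs() bool { return !l.Disabled }

func main() {
	// Simulates a struct restored from an old snapshot where neither
	// field was explicitly set: log collection stays enabled.
	var restored LogConfig
	fmt.Println(restored.collectLogs()) // true
}
```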
* docs: move CNI reference plugins installation to CNI overview page
This PR moves the instruction steps for installing the CNI reference plugins
from the Consul Mesh integration page to the general Networking CNI page.
These plugins are required for bridge networking, not just Consul Mesh,
so it makes sense to have them on the general CNI page.
Closes #17038
* docs: fix a link to post install steps
Their release notes are here: https://github.com/golang-jwt/jwt/releases
Seemed wise to upgrade before we do even more with JWTs. For example, this
upgrade *would* have mattered if we had already implemented common JWT claims
such as expiration. Since we didn't rely on any claim verification, this
upgrade is a no-op...
...except for 1 test that called `Claims.Valid()`! Removing that
assertion *seems* scary, but it didn't actually do anything because we
didn't implement any of the standard claims it validated:
https://github.com/golang-jwt/jwt/blob/v4.5.0/map_claims.go#L120-L151
So functionally this major upgrade is a no-op.
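For context, a small sketch against golang-jwt/jwt/v4 (the API the removed
assertion was written against) showing why `Valid()` proved nothing for claims
without `exp`, `iat`, or `nbf`:

```go
package main

import (
	"fmt"

	jwt "github.com/golang-jwt/jwt/v4"
)

func main() {
	// Claims without exp, iat, or nbf -- nothing for Valid() to verify.
	claims := jwt.MapClaims{"sub": "example-user"}

	// v4's MapClaims.Valid only checks the time-based claims when they
	// are present, so this returns nil and the old assertion was a no-op.
	fmt.Println(claims.Valid()) // <nil>
}
```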
This special-purpose UI is for commands that can benefit from direct
access to the io.Reader and io.Writers of the base cli.Ui. It can
traverse a chain of ColoredUis to find the base. Currently, it can
retrieve writers from a cli.BasicUi (or cli.MockUi for testing).
Renames ui.go and ui_test.go to log_ui.go and log_ui_test.go
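A rough sketch of the traversal idea, using mitchellh/cli's `ColoredUi` and
`BasicUi` types (the helper name below is hypothetical, not the new UI's
actual method):

```go
package logui

import (
	"io"

	"github.com/mitchellh/cli"
)

// baseWriters is a hypothetical helper showing the unwrapping approach:
// walk a chain of ColoredUi wrappers until reaching the base cli.Ui, then
// hand back its raw writers.
func baseWriters(ui cli.Ui) (out, errOut io.Writer, ok bool) {
	for {
		switch u := ui.(type) {
		case *cli.ColoredUi:
			// Unwrap one layer and keep walking the chain.
			ui = u.Ui
		case *cli.BasicUi:
			return u.Writer, u.ErrorWriter, true
		default:
			// A cli.MockUi (for tests) would need its own case here.
			return nil, nil, false
		}
	}
}
```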
This PR does some cleanup of an old code path for versions of Consul that
did not support reporting the supported versions of Envoy in its API. Those
versions of Consul have been unsupported for years at this point, and the
fallback version of Envoy hasn't been supported by any version of Consul for
almost as long. Remove this code path, which is no longer useful.
This PR modifies references to the envoyproxy/envoy docker image to
explicitly include the docker.io prefix. This does not affect existing
users, but makes things easier for Podman users, who otherwise need to
specify the full image name because Podman does not default to docker.io.
This PR updates the envoy_bootstrap_hook to no longer disable itself if
the task driver in use is not docker. In other words, make it work for
Podman and other image-based task drivers. The hook now only checks that
1. the task is a connect sidecar
2. the task.config block contains an "image" field
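A loose sketch of the new gating logic (illustrative names, not the hook's
actual code):

```go
package envoybootstrap

// shouldBootstrapEnvoy illustrates the hook's new check: it no longer cares
// which task driver is in use, only that the task is a Connect sidecar and
// that its config is image-based.
func shouldBootstrapEnvoy(isConnectSidecar bool, taskConfig map[string]interface{}) bool {
	if !isConnectSidecar {
		return false
	}
	// Any driver whose config carries an "image" field (docker, podman, ...)
	// is treated as capable of running the bootstrapped Envoy image.
	_, hasImage := taskConfig["image"]
	return hasImage
}
```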
* Status panel shows failed and lost, but probably don't have the condition quite right
* Rescheduled and Replaced cells instead of a general failed/lost one
* Tests moving to acceptance
* Fixed desiredTotal and added acceptance test for restarted
* moved integration test into acceptance test generally
* Now that we represent Lost in the graph, have to mark our unplaced testcase as Unknown
* No need to declare new vars for immediately returned getters
* Literal restart and resched add to the tallies, rather than 'would have but ran out of attempts' like before
* Testfixes now that we've redefined what restarts and reschedules are indicated by
While working on client status update improvements, I encountered problems
getting tests with the mock driver to correctly restore.
Unlike typical drivers, the mock driver doesn't have an external source of truth
for whether the task is running (e.g. making API calls to `dockerd` or looking
for a running PID), and so in order to make up that information, it re-parses
the original task config. But the taskrunner doesn't call the encoding step for
`RecoverTask`, only `StartTask`, so the task config the mock driver gets is
missing data.
Update the mock driver to stash the "external" state in the task state that
we'll get from the task runner, so that we don't have to try to recover from the
original `TaskConfig` anymore. This should bring the mock driver closer to the
behavior of the other drivers.
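A rough sketch of the approach, using the drivers plugin `TaskHandle`
driver-state helpers (the `mockTaskState` type and its fields are
illustrative):

```go
package mock

import "github.com/hashicorp/nomad/plugins/drivers"

// mockTaskState is an illustrative stand-in for the "external" state the
// mock driver now stashes with the task instead of re-parsing the original
// task config on recovery.
type mockTaskState struct {
	Command   string
	StartedAt int64
}

// On StartTask, persist the state into the handle so the task runner
// stores it as part of the task state.
func persistState(h *drivers.TaskHandle, s *mockTaskState) error {
	return h.SetDriverState(s)
}

// On RecoverTask, read the state back out of the handle rather than
// reconstructing it from the original TaskConfig.
func recoverState(h *drivers.TaskHandle) (*mockTaskState, error) {
	var s mockTaskState
	if err := h.GetDriverState(&s); err != nil {
		return nil, err
	}
	return &s, nil
}
```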
* Sets up a CSS grid for Evaluations sidebar
* Flex seems more sensible for this actually
* Tighten up the header margin
* Percy found a diff; the expand button wasn't showing for view logs sidebar
* [ui] Service job status panel (#16134)
* it begins
* Hacky demo enabled
* Still very hacky but seems deece
* Floor of at least 3 must be shown
* Width from on-high
* Other statuses considered
* More sensible allocTypes listing
* Beginnings of a legend
* Total number of allocs running now maps over job.groups
* Lintfix
* base the number of slots to hold open on actual tallies, which should never exceed totalAllocs
* Versions get yer versions here
* Versions lookin like versions
* Mirage fixup
* Adds Remaining as an alloc chart status and adds historical status option
* Get tests passing again by making job status static for a sec
* Historical status panel click actions moved into their own component class
* job detail tests plz chill
* Testing if percy is fickle
* Hyper-specific on summary distribution bar identifier
* Perhaps the 2nd allocSummary item no longer exists with the more accurate afterCreate data
* UI Test eschewing the page pattern
* Bones of a new acceptance test
* Track width changes explicitly with window-resize
* testlintfix
* Alloc counting tests
* Alloc grouping test
* Alloc grouping with complex resizing
* Refined the list of showable statuses
* PR feedback addressed
* renamed allocation-row to allocation-status-row
* [ui, job status] Make panel status mode a queryParam (#16345)
* queryParam changing
* Test for QP in panel
* Adding @tracked to legacy controller
* Move the job of switching to Historical out to larger context
* integration test mock passed func
* [ui] Service job deployment status panel (#16383)
* A very fast and loose deployment panel
* Removing Unknown status from the panel
* Set up oldAllocs list in constructor, rather than as a getter/tracked var
* Small amount of template cleanup
* Refactored latest-deployment new logic back into panel.js
* Revert now-unused latest-deployment component
* margin bottom when ungrouped also
* Basic integration tests for job deployment status panel
* Updates complete alloc colour to green for new visualizations only (#16618)
* Updates complete alloc colour to green for new visualizations only
* Pale green instead of dark green for viz in general
* [ui] Job Deployment Status: History and Update Props (#16518)
* Deployment history wooooooo
* Styled deployment history
* Update Params
* lintfix
* Types and groups for updateParams
* Live-updating history
* Harden with types, error states, and pending states
* Refactor updateParams to use trigger component
* [ui] Deployment History search (#16608)
* Functioning searchbox
* Some nice animations for history items
* History search test
* Fixing up some old mirage conventions
* some a11y rule override to account for scss keyframes
* Split panel into deploying and steady components
* HandleError passed from job index
* gridified panel elements
* TotalAllocs added to deploying.js
* Width perc to px
* [ui] Splitting deployment allocs by status, health, and canary status (#16766)
* Initial attempt with lots of scratchpad work
* Style mods per UI discussion
* Fix canary overflow bug
* Don't show canary or health for steady/prev-alloc blocks
* Steady state
* Thanks Julie
* Fixes steady-state versions
* Legen, wait for it...
* Test fixes now that we have a minimum block size
* PR prep
* Shimmer effect on pending and unplaced allocs (#16801)
* Shimmer effect on pending and unplaced
* Don't show animation in the legend
* [ui, deployments] Linking allocblocks and legends to allocation / allocations index routes (#16821)
* Conditional link-to component and basic linking to allocations and allocation routes
* Job versions filter added to allocations index page
* Steady state legends link
* Legend links
* Badge count links for versions
* Fix: faded class on steady-state legend items
* version link now won't show completed ones
* Fix a11y violations with link labels
* Combining some template conditional logic
* [ui, deployments] Conversions on long nanosecond update params (#16882)
* Conversions on long nanosecond nums
* Early return in updateParamGroups comp prop
* [ui, deployments] Mirage Actively Deploying Job and Deployment Integration Tests (#16888)
* Start of deployment alloc test scaffolding
* Bit of test cleanup and canary for ungrouped allocs
* Flakey but more robust integrations for deployment panel
* De-flake acceptance tests and add an actively deploying job to mirage
* Jitter-less alloc status distribution removes my bad math
* Fix bug caused by summary.desiredTotal being non-null
* More interesting mirage active deployment alloc breakdown
* Further tests for previous-allocs row
* Previous alloc legend tests
* Percy snapshots added to integration test
* changelog
Demo (and easily reproduce, locally) a CSI setup
with separate controller and node plugins.
This runs NFS in a container backed by a host volume,
with CSI controller and node plugins from rocketduck:
gitlab.com/rocketduck/csi-plugin-nfs
Co-authored-by: Florian Apolloner <florian@apolloner.eu>
Co-authored-by: Tim Gross <tgross@hashicorp.com>
* services: un-mark group services as deregistered if restart hook runs
This PR may fix a bug where group services will never be deregistered if the
group undergoes a task restart.
* e2e: add test case for restart and deregister group service
* cl: add cl
* e2e: add wait for service list call
Some Nomad users ship application logs out-of-band via syslog. For these users,
having `logmon` (and `docker_logger`) running is unnecessary overhead. Allow
disabling logmon and pointing the task's stdout/stderr to /dev/null.
This changeset is the first of several incremental improvements to log
collection short of full-on logging plugins. The next step will likely be to
extend the internal-only task driver configuration so that cluster
administrators can turn off log collection for the entire driver.
---
Fixes: #11175
Co-authored-by: Thomas Weber <towe75@googlemail.com>
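A minimal sketch of the redirection idea (names are illustrative, not the
actual logmon hook code):

```go
package logging

import "os"

// stdioPaths illustrates the choice: when log collection is disabled we
// skip starting logmon entirely and point the task's stdout/stderr at
// /dev/null; otherwise we keep using the FIFO paths logmon listens on.
func stdioPaths(logsDisabled bool, stdoutFifo, stderrFifo string) (stdout, stderr string) {
	if logsDisabled {
		return os.DevNull, os.DevNull
	}
	return stdoutFifo, stderrFifo
}
```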
PR #14492 introduced a new check to return 0 when the `nomad job plan`
command returns a diff of type `None`.
But the `-diff` CLI flag was also being used to control whether the plan
request should return the diff or not, instead of just controlling whether
the diff was printed.
This means that when `-diff=false` is set, the response does not include
any diff information, and so the new check panics.
This commit fixes the problem by always requesting a diff and using the
`-diff` flag only to control output, as is currently documented.
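A simplified sketch of the fixed exit-code logic (the types and names here
are stand-ins, not the command's actual code):

```go
package main

import "fmt"

// jobDiff and planResponse stand in for the API response types.
type jobDiff struct{ Type string }
type planResponse struct{ Diff *jobDiff }

// exitCode sketches the fix: the plan request always asks for a diff, and
// the -diff flag only decides whether it is printed. Previously the flag
// also controlled the request, so -diff=false left resp.Diff empty and the
// "no changes" check panicked.
func exitCode(resp *planResponse, printDiff bool) int {
	if printDiff {
		fmt.Printf("diff type: %s\n", resp.Diff.Type)
	}
	if resp.Diff.Type == "None" {
		return 0 // no changes to apply
	}
	return 1 // allocations would be created or destroyed
}

func main() {
	resp := &planResponse{Diff: &jobDiff{Type: "None"}}
	fmt.Println(exitCode(resp, false)) // 0
}
```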
Remove unneeded service injection. This service is not being used in
this controller and currently only exists in `main`, causing
`release/1.5.x` to break.
* Honor value for distinct_hosts constraint
* Add test for feasibility checking for `false`
---------
Co-authored-by: Michael Schurter <mschurter@hashicorp.com>
* Upgrade from hashicorp/go-msgpack v1.1.5 to v2.1.0
Fixes #16808
* Update hashicorp/net-rpc-msgpackrpc to v2 to match go-msgpack
* deps: use go-msgpack v2.0.0
go-msgpack v2.1.0 includes some code changes that we will need to
investigate further to assess their impact on Nomad, so keeping this
dependency on v2.0.0 for now since it's a no-op.
---------
Co-authored-by: Luiz Aoqui <luiz@hashicorp.com>
This PR eliminates code specific to looking up and caching the uid/gid/user.User
object associated with the nobody user in an init block. This code existed before
adding the generic users cache and was meant to optimize the one search path we
knew would happen often. Now that we have the cache, it seems reasonable to eliminate
this init block and use the cache instead, as for any other user.
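A rough sketch of what the generic cache gives us (illustrative, not Nomad's
actual users package):

```go
package users

import (
	"os/user"
	"sync"
)

// lookupCache illustrates the generic approach: the first Lookup for a
// name -- including "nobody" -- hits os/user, and later lookups are served
// from memory, so no dedicated init-block fast path is needed.
type lookupCache struct {
	mu    sync.Mutex
	users map[string]*user.User
}

func newLookupCache() *lookupCache {
	return &lookupCache{users: make(map[string]*user.User)}
}

func (c *lookupCache) Lookup(name string) (*user.User, error) {
	c.mu.Lock()
	defer c.mu.Unlock()
	if u, ok := c.users[name]; ok {
		return u, nil
	}
	u, err := user.Lookup(name)
	if err != nil {
		return nil, err
	}
	c.users[name] = u
	return u, nil
}
```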
Also fixes a constraint on the podman (and other) drivers, where building without
CGO became problematic on some OSes like Fedora IoT, where the nobody user cannot
be found with the pure-Go standard library.
Fixes github.com/hashicorp/nomad-driver-podman/issues/228
Examples in the documentation frequently include tokens, including Vault tokens,
which end up triggering GitHub's secret scanner. Remove these from consideration
so that we don't get false-positive reports.