While working on client status update improvements, I ran into problems
getting tests that use the mock driver to restore correctly.
Unlike typical drivers, the mock driver has no external source of truth
for whether the task is running (e.g. making API calls to `dockerd` or looking
for a running PID), so to reconstruct that information it re-parses the
original task config. But the taskrunner only calls the encoding step for
`StartTask`, not for `RecoverTask`, so the task config the mock driver gets
back is missing data.
Update the mock driver to stash the "external" state in the task state that
we'll get from the task runner, so that we don't have to try to recover from the
original `TaskConfig` anymore. This should bring the mock driver closer to the
behavior of the other drivers.
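A minimal sketch of that approach, assuming a handle type that carries opaque driver-state bytes (the type and field names here are illustrative, not the actual mock driver code): `StartTask` stashes the state it will need later, and `RecoverTask` decodes it instead of re-parsing the task config.

```go
package main

import (
	"encoding/json"
	"fmt"
)

// mockTaskState is the "external" truth the mock driver previously tried to
// rebuild by re-parsing the original task config. Stashing it in the handle's
// driver state lets RecoverTask restore it directly.
type mockTaskState struct {
	StartedAt int64  `json:"started_at"`
	Command   string `json:"command"`
}

// taskHandle stands in for the opaque handle the taskrunner persists and hands
// back to the driver on recovery (hypothetical, simplified shape).
type taskHandle struct {
	DriverState []byte
}

func startTask(cmd string, now int64) (*taskHandle, error) {
	st := mockTaskState{StartedAt: now, Command: cmd}
	buf, err := json.Marshal(st)
	if err != nil {
		return nil, err
	}
	// Stash the state in the handle instead of relying on the task config.
	return &taskHandle{DriverState: buf}, nil
}

func recoverTask(h *taskHandle) (*mockTaskState, error) {
	// No re-parsing of the original task config: decode what StartTask stashed.
	var st mockTaskState
	if err := json.Unmarshal(h.DriverState, &st); err != nil {
		return nil, err
	}
	return &st, nil
}

func main() {
	h, _ := startTask("sleep 30", 1700000000)
	st, _ := recoverTask(h)
	fmt.Printf("recovered: %+v\n", st)
}
```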
* Sets up a CSS grid for Evaluations sidebar
* Flex seems more sensible for this actually
* Tighten up the header margin
* Percy found a diff; the expand button wasn't showing for the view logs sidebar
* [ui] Service job status panel (#16134)
* it begins
* Hacky demo enabled
* Still very hacky but seems deece
* Floor of at least 3 must be shown
* Width from on-high
* Other statuses considered
* More sensible allocTypes listing
* Beginnings of a legend
* Total number of allocs running now maps over job.groups
* Lintfix
* base the number of slots to hold open on actual tallies, which should never exceed totalAllocs
* Versions get yer versions here
* Versions lookin like versions
* Mirage fixup
* Adds Remaining as an alloc chart status and adds historical status option
* Get tests passing again by making job status static for a sec
* Historical status panel click actions moved into their own component class
* job detail tests plz chill
* Testing if percy is fickle
* Hyper-specific on summary distribution bar identifier
* Perhaps the 2nd allocSummary item no longer exists with the more accurate afterCreate data
* UI Test eschewing the page pattern
* Bones of a new acceptance test
* Track width changes explicitly with window-resize
* testlintfix
* Alloc counting tests
* Alloc grouping test
* Alloc grouping with complex resizing
* Refined the list of showable statuses
* PR feedback addressed
* renamed allocation-row to allocation-status-row
* [ui, job status] Make panel status mode a queryParam (#16345)
* queryParam changing
* Test for QP in panel
* Adding @tracked to legacy controller
* Move the job of switching to Historical out to larger context
* integration test mock passed func
* [ui] Service job deployment status panel (#16383)
* A very fast and loose deployment panel
* Removing Unknown status from the panel
* Set up oldAllocs list in constructor, rather than as a getter/tracked var
* Small amount of template cleanup
* Refactored latest-deployment new logic back into panel.js
* Revert now-unused latest-deployment component
* margin bottom when ungrouped also
* Basic integration tests for job deployment status panel
* Updates complete alloc colour to green for new visualizations only (#16618)
* Updates complete alloc colour to green for new visualizations only
* Pale green instead of dark green for viz in general
* [ui] Job Deployment Status: History and Update Props (#16518)
* Deployment history wooooooo
* Styled deployment history
* Update Params
* lintfix
* Types and groups for updateParams
* Live-updating history
* Harden with types, error states, and pending states
* Refactor updateParams to use trigger component
* [ui] Deployment History search (#16608)
* Functioning searchbox
* Some nice animations for history items
* History search test
* Fixing up some old mirage conventions
* some a11y rule override to account for scss keyframes
* Split panel into deploying and steady components
* HandleError passed from job index
* gridified panel elements
* TotalAllocs added to deploying.js
* Width perc to px
* [ui] Splitting deployment allocs by status, health, and canary status (#16766)
* Initial attempt with lots of scratchpad work
* Style mods per UI discussion
* Fix canary overflow bug
* Don't show canary or health for steady/prev-alloc blocks
* Steady state
* Thanks Julie
* Fixes steady-state versions
* Legen, wait for it...
* Test fixes now that we have a minimum block size
* PR prep
* Shimmer effect on pending and unplaced allocs (#16801)
* Shimmer effect on pending and unplaced
* Don't show animation in the legend
* [ui, deployments] Linking allocblocks and legends to allocation / allocations index routes (#16821)
* Conditional link-to component and basic linking to allocations and allocation routes
* Job versions filter added to allocations index page
* Steady state legends link
* Legend links
* Badge count links for versions
* Fix: faded class on steady-state legend items
* version link now won't show completed ones
* Fix a11y violations with link labels
* Combining some template conditional logic
* [ui, deployments] Conversions on long nanosecond update params (#16882)
* Conversions on long nanosecond nums
* Early return in updateParamGroups comp prop
* [ui, deployments] Mirage Actively Deploying Job and Deployment Integration Tests (#16888)
* Start of deployment alloc test scaffolding
* Bit of test cleanup and canary for ungrouped allocs
* Flakey but more robust integrations for deployment panel
* De-flake acceptance tests and add an actively deploying job to mirage
* Jitter-less alloc status distribution removes my bad math
* bugfix caused by summary.desiredTotal non-null
* More interesting mirage active deployment alloc breakdown
* Further tests for previous-allocs row
* Previous alloc legend tests
* Percy snapshots added to integration test
* changelog
Demo (and easily reproduce locally) a CSI setup
with separate controller and node plugins.
This runs NFS in a container backed by a host volume
and CSI controller and node plugins from rocketduck:
gitlab.com/rocketduck/csi-plugin-nfs
Co-authored-by: Florian Apolloner <florian@apolloner.eu>
Co-authored-by: Tim Gross <tgross@hashicorp.com>
* services: un-mark group services as deregistered if restart hook runs
This PR may fix a bug where group services will never be deregistered if the
group undergoes a task restart (see the sketch after this list).
* e2e: add test case for restart and deregister group service
* cl: add cl
* e2e: add wait for service list call
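A rough sketch of the idea behind the fix, with hypothetical names (not the real hook code): the group service hook tracks whether it has deregistered its services, and the restart path clears that flag so a later stop can deregister again.

```go
package main

import (
	"fmt"
	"sync"
)

// groupServiceHook is a hypothetical, simplified stand-in for the hook that
// registers and deregisters a group's services.
type groupServiceHook struct {
	mu           sync.Mutex
	deregistered bool
}

// PreKill deregisters the group's services and remembers having done so.
func (h *groupServiceHook) PreKill() {
	h.mu.Lock()
	defer h.mu.Unlock()
	// ... deregister services ...
	h.deregistered = true
}

// Restart re-registers services; the fix is to also un-mark the hook as
// deregistered so that a later stop actually deregisters again.
func (h *groupServiceHook) Restart() {
	h.mu.Lock()
	defer h.mu.Unlock()
	// ... re-register services ...
	h.deregistered = false
}

func main() {
	h := &groupServiceHook{}
	h.PreKill()
	h.Restart()
	fmt.Println("deregistered after restart?", h.deregistered) // false: stop will deregister
}
```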
Some Nomad users ship application logs out-of-band via syslog. For these users,
having `logmon` (and `docker_logger`) running is unnecessary overhead. Allow
disabling logmon and pointing the task's stdout/stderr to /dev/null.
This changeset is the first of several incremental improvements to log
collection short of full-on logging plugins. The next step will likely be to
extend the internal-only task driver configuration so that cluster
administrators can turn off log collection for the entire driver.
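A minimal sketch of the mechanism, with hypothetical names (the real config field and plumbing may differ): when log collection is disabled there is no logmon (or docker_logger) to feed, so the task's stdout/stderr destinations become /dev/null.

```go
package main

import (
	"fmt"
	"os"
)

// logsConfig is a hypothetical, simplified view of a task's log settings;
// Disabled stands in for the new "turn off log collection" knob.
type logsConfig struct {
	Disabled bool
}

// stdioPaths returns where the task's stdout/stderr should be pointed. With
// log collection disabled there is no logmon process to feed, so both
// streams go to /dev/null instead of the log FIFOs.
func stdioPaths(cfg logsConfig, fifoStdout, fifoStderr string) (string, string) {
	if cfg.Disabled {
		return os.DevNull, os.DevNull
	}
	return fifoStdout, fifoStderr
}

func main() {
	out, errp := stdioPaths(logsConfig{Disabled: true},
		"/alloc/logs/.stdout.fifo", "/alloc/logs/.stderr.fifo")
	fmt.Println(out, errp)
}
```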
---
Fixes: #11175
Co-authored-by: Thomas Weber <towe75@googlemail.com>
PR #14492 introduced a new check to return 0 when the `nomad job plan`
command returns a diff of type `None`.
But the `-diff` CLI flag was also being used to control whether the plan
request should return the diff at all, instead of just controlling whether the
diff was printed.
This means that when `-diff=false` is set the response does not include
any diff information, and so the new check panics.
This commit fixes the problem by always requesting a diff and using the
`-diff` flag only to control output, as currently documented.
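Roughly, the corrected flow looks like the sketch below (hand-waving the real CLI and API types, which are illustrative here): the request always asks for the diff so the exit-code check has something to inspect, and `-diff` only gates printing.

```go
package main

import "fmt"

// planResult is a stand-in for the plan API response; DiffType mirrors the
// "None"/"Edited"/... classification the exit-code check relies on.
type planResult struct {
	DiffType string
	DiffText string
}

// runPlan sketches the corrected flow: the request always includes the diff,
// and the -diff flag only controls whether it is printed.
func runPlan(printDiff bool, plan func(wantDiff bool) planResult) int {
	res := plan(true) // always request the diff, regardless of -diff

	if printDiff {
		fmt.Println(res.DiffText)
	}

	// The exit-code check can now safely inspect the diff type even when
	// -diff=false was passed.
	if res.DiffType == "None" {
		return 0
	}
	return 1
}

func main() {
	fake := func(bool) planResult { return planResult{DiffType: "None", DiffText: "(no changes)"} }
	fmt.Println("exit code:", runPlan(false, fake))
}
```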
Remove unneeded service injection. This service is not being used in
this controller and currently only exists in `main`, causing
`release/1.5.x` to break.
* Honor value for distinct_hosts constraint
* Add test for feasibility checking for `false`
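A sketch of what honoring the value means, using simplified stand-in types rather than the real scheduler structs: the presence of the constraint no longer enables the check by itself; a `false` target disables it.

```go
package main

import (
	"fmt"
	"strconv"
)

// constraint is a simplified stand-in for a jobspec constraint.
type constraint struct {
	Operand string // e.g. "distinct_hosts"
	RTarget string // the constraint value as written, e.g. "true" or "false"
}

// distinctHostsEnabled returns whether the distinct_hosts check should
// actually be enforced. Previously the mere presence of the constraint turned
// the check on; honoring the value means "false" disables it.
func distinctHostsEnabled(cs []constraint) bool {
	for _, c := range cs {
		if c.Operand != "distinct_hosts" {
			continue
		}
		// An empty target keeps the historical default of "enabled".
		if c.RTarget == "" {
			return true
		}
		enabled, err := strconv.ParseBool(c.RTarget)
		if err != nil {
			return true // unparsable values fall back to enforcing the constraint
		}
		return enabled
	}
	return false
}

func main() {
	fmt.Println(distinctHostsEnabled([]constraint{{Operand: "distinct_hosts", RTarget: "false"}})) // false
	fmt.Println(distinctHostsEnabled([]constraint{{Operand: "distinct_hosts"}}))                   // true
}
```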
---------
Co-authored-by: Michael Schurter <mschurter@hashicorp.com>
* Upgrade from hashicorp/go-msgpack v1.1.5 to v2.1.0
Fixes #16808
* Update hashicorp/net-rpc-msgpackrpc to v2 to match go-msgpack
* deps: use go-msgpack v2.0.0
go-msgpack v2.1.0 includes some code changes that we will need to
investigate further to assess their impact on Nomad, so we're keeping this
dependency on v2.0.0 for now since it's a no-op.
---------
Co-authored-by: Luiz Aoqui <luiz@hashicorp.com>
This PR eliminates code specific to looking up and caching the uid/gid/user.User
object associated with the nobody user in an init block. This code existed before
adding the generic users cache and was meant to optimize the one search path we
knew would happen often. Now that we have the cache, it seems reasonable to
eliminate this init block and use the cache instead, as we do for any other user.
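As a rough illustration of what "use the cache instead" means here (a hypothetical, simplified cache, not Nomad's actual helper), the lookup path is the same for nobody as for any other user:

```go
package main

import (
	"fmt"
	"os/user"
	"sync"
)

// userCache memoizes user lookups so hot paths (like resolving "nobody")
// don't pay for a fresh lookup every time. Hypothetical, simplified version
// of a generic users cache.
type userCache struct {
	mu    sync.Mutex
	users map[string]*user.User
}

func newUserCache() *userCache {
	return &userCache{users: make(map[string]*user.User)}
}

// Lookup returns the cached user if present, otherwise resolves and caches it.
func (c *userCache) Lookup(name string) (*user.User, error) {
	c.mu.Lock()
	defer c.mu.Unlock()
	if u, ok := c.users[name]; ok {
		return u, nil
	}
	u, err := user.Lookup(name)
	if err != nil {
		return nil, err
	}
	c.users[name] = u
	return u, nil
}

func main() {
	cache := newUserCache()
	// "nobody" goes through the same cached path as any other user; no
	// special-case init block required.
	if u, err := cache.Lookup("nobody"); err == nil {
		fmt.Println(u.Uid, u.Gid)
	} else {
		fmt.Println("lookup failed:", err)
	}
}
```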
Also fixes a limitation in the podman (and other) drivers, where building without
CGO became problematic on OSes like Fedora IoT, because the nobody user cannot
be found with the pure-Go standard library.
Fixes github.com/hashicorp/nomad-driver-podman/issues/228
Examples in the documentation frequently include tokens, including Vault tokens
which end up triggering GitHub's secret scanner. Remove these from consideration
so that we don't get false positive reports.
Adds a new configuration to clients to optionally allow them to drain their
workloads on shutdown. The client sends the `Node.UpdateDrain` RPC targeting
itself and then monitors the drain state as seen by the server until the drain
is complete or the deadline expires. If it loses connection with the server, it
will monitor local client status instead to ensure allocations are stopped
before exiting.
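A hedged sketch of that flow, with illustrative function parameters standing in for the real RPCs and state checks: request a self-drain, poll the server's view of the drain, and fall back to local allocation status if the server becomes unreachable.

```go
package main

import (
	"context"
	"errors"
	"fmt"
	"time"
)

// drainOnShutdown sketches the client-side flow: request a self-drain, then
// watch the server's view of the drain; if the server goes away, fall back to
// watching local allocation status until everything has stopped.
// All function parameters here are illustrative stand-ins, not Nomad's API.
func drainOnShutdown(
	ctx context.Context,
	requestSelfDrain func() error,
	serverDrainDone func() (bool, error),
	localAllocsStopped func() bool,
	deadline time.Duration,
) error {
	if err := requestSelfDrain(); err != nil {
		return fmt.Errorf("failed to start drain: %w", err)
	}

	ctx, cancel := context.WithTimeout(ctx, deadline)
	defer cancel()

	ticker := time.NewTicker(time.Second)
	defer ticker.Stop()

	for {
		select {
		case <-ctx.Done():
			return errors.New("drain deadline expired")
		case <-ticker.C:
			done, err := serverDrainDone()
			if err != nil {
				// Lost the server: fall back to local client status so we
				// still wait for allocations to stop before exiting.
				if localAllocsStopped() {
					return nil
				}
				continue
			}
			if done {
				return nil
			}
		}
	}
}

func main() {
	calls := 0
	err := drainOnShutdown(
		context.Background(),
		func() error { return nil },
		func() (bool, error) { calls++; return calls > 2, nil },
		func() bool { return false },
		30*time.Second,
	)
	fmt.Println("drain result:", err)
}
```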
If an allocation is slow to stop because of `kill_timeout` or `shutdown_delay`,
the node drain is marked as complete prematurely, even though drain monitoring
will continue to report allocation migrations. This impacts the UI or API
clients that monitor node draining to shut down nodes.
This changeset updates the behavior to wait until the client status of all
drained allocs is terminal before marking the node as done draining.
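A small sketch of the stricter completion condition (status names as used by Nomad's allocation client status; the helper itself is illustrative):

```go
package main

import "fmt"

// terminalClientStatuses lists the allocation client states that count as
// "actually stopped".
var terminalClientStatuses = map[string]bool{
	"complete": true,
	"failed":   true,
	"lost":     true,
}

// drainComplete reports whether a node can be marked as done draining:
// every drained allocation must have reached a terminal client status,
// not just have had a replacement scheduled elsewhere.
func drainComplete(drainedAllocClientStatuses []string) bool {
	for _, status := range drainedAllocClientStatuses {
		if !terminalClientStatuses[status] {
			return false
		}
	}
	return true
}

func main() {
	fmt.Println(drainComplete([]string{"complete", "running"})) // false: still stopping
	fmt.Println(drainComplete([]string{"complete", "failed"}))  // true
}
```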
The `-deadline` and `-force` flags for the `nomad node drain` command only cause
the draining to ignore the `migrate` block's healthy deadline, max parallel,
etc. These flags don't have anything to do with the `kill_timeout` or
`shutdown_delay` options of the jobspec.
This changeset fixes the skipped E2E tests so that they validate the intended
behavior, and updates the docs for more clarity.
* [no ci] deps: update docker to 23.0.3
This PR brings our docker/docker dependency (which is hosted at github.com/moby/moby)
up to 23.0.3 (forward about 2 years). Refactored our use of docker/libnetwork to
reference the package in its new home, which is docker/docker/libnetwork (it is
no longer an independent repository). Some minor nearby test case cleanup as well.
* add cl
* func: add namespace support for list deployment
* func: add wildcard to namespace filter for deployments
* Update deployment_endpoint.go
* style: use must instead of require or assert
* style: rename paginator to avoid clash with import
* style: add changelog entry
* fix: add missing parameter for upsert jobs
msgpackrpc codec handles are specific to a connection and cannot be shared
between goroutines; this can cause corrupted decoding. Fix the drainer
integration test so that we create separate codecs for the goroutines that the
test helper spins up to simulate client updates.
This changeset also refactors the drainer integration test to bring it up to
current idioms and library usages, make assertions more clear, and reduce
duplication.
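The pattern the fix applies, shown with stand-in types rather than the real msgpackrpc codec: each goroutine builds its own codec (its own connection) instead of sharing one across concurrent calls.

```go
package main

import (
	"fmt"
	"sync"
)

// codec stands in for an RPC client codec tied to a single connection; it is
// not safe to share one across goroutines making concurrent calls.
type codec struct{ id int }

func (c *codec) call(method string) string {
	return fmt.Sprintf("codec %d -> %s", c.id, method)
}

// simulateClients shows the corrected pattern from the test helper: each
// goroutine builds its own codec rather than reusing one shared codec,
// which is what led to corrupted decoding.
func simulateClients(n int, newCodec func() *codec) []string {
	var (
		wg      sync.WaitGroup
		mu      sync.Mutex
		results []string
	)
	for i := 0; i < n; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			c := newCodec() // one codec per goroutine
			mu.Lock()
			results = append(results, c.call("Node.UpdateStatus"))
			mu.Unlock()
		}()
	}
	wg.Wait()
	return results
}

func main() {
	var (
		mu   sync.Mutex
		next int
	)
	newCodec := func() *codec {
		mu.Lock()
		defer mu.Unlock()
		next++
		return &codec{id: next}
	}
	for _, r := range simulateClients(3, newCodec) {
		fmt.Println(r)
	}
}
```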
The first start of a Consul Connect proxy sidecar triggers a run
of the envoy_version hook which modifies the task config image
entry. The modification takes into account a number of factors to
correctly populate this. Importantly, once the hook has run, it
marks itself as done so the taskrunner will not execute it again.
When the client receives a non-destructive update for the
allocation that the proxy sidecar is a member of, it will update
and overwrite the task definition within the taskrunner. In doing
so it overwrites the modification performed by the hook. If the
allocation is restarted, the envoy_version hook will be skipped as
it previously marked itself as done, and therefore the sidecar
config image is incorrect and causes a driver error.
The fix stops the hook from marking itself as done in the
taskrunner's view, so it runs again when the allocation is restarted.
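A hedged sketch of the mechanism involved, with simplified stand-in types (not the actual hook): a prestart hook that reports itself as done is never run again by the taskrunner, so the change is to stop reporting done and let the hook re-run on restart.

```go
package main

import "fmt"

// prestartResponse is a simplified stand-in for a taskrunner prestart hook
// response; Done=true tells the taskrunner never to run the hook again.
type prestartResponse struct {
	Done bool
}

// envoyVersionHook sketches the behavioral change: the hook still rewrites
// the sidecar image on every prestart, but it no longer reports itself as
// done, so a restart (after a non-destructive update overwrote the task
// config) runs it again and restores the correct image.
func envoyVersionHook(image string) (string, prestartResponse) {
	if image == "${meta.connect.sidecar_image}" { // unresolved placeholder (illustrative)
		image = "envoyproxy/envoy:v1.25.1" // illustrative resolved version
	}
	// Before the fix: Done was set to true, which skipped the hook on restart.
	return image, prestartResponse{Done: false}
}

func main() {
	img, resp := envoyVersionHook("${meta.connect.sidecar_image}")
	fmt.Println(img, "done:", resp.Done)
}
```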
* api: enable support for setting original source alongside job
This PR adds support for setting job source material along with
the registration of a job.
This includes a new HTTP endpoint and a new RPC endpoint for
making queries for the original source of a job. The
HTTP endpoint is /v1/job/<id>/submission?version=<version> and
the RPC method is Job.GetJobSubmission.
The job source (submitting it is always optional) is stored in the
job_submission memdb table, separately from the actual job. This way
we do not incur the overhead of reading the large string field during
normal job operations.
The server config now includes job_max_source_size for configuring
the maximum size the job source may be, before the server simply
drops the source material. This should help prevent Bad Things from
happening when huge jobs are submitted. If the value is set to 0,
all job source material will be dropped.
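For illustration, fetching the stored source over the HTTP endpoint described above might look like this (assumes a local agent at the default address; the response decoding is left generic since the exact payload shape isn't spelled out here):

```go
package main

import (
	"fmt"
	"io"
	"net/http"
	"net/url"
)

// fetchJobSubmission retrieves the original source for a job version via the
// /v1/job/<id>/submission endpoint. We return the raw JSON body; adjust
// decoding to the real response shape as needed.
func fetchJobSubmission(agentAddr, jobID string, version int) ([]byte, error) {
	u := fmt.Sprintf("%s/v1/job/%s/submission?version=%d",
		agentAddr, url.PathEscape(jobID), version)

	resp, err := http.Get(u)
	if err != nil {
		return nil, err
	}
	defer resp.Body.Close()

	if resp.StatusCode != http.StatusOK {
		return nil, fmt.Errorf("unexpected status: %s", resp.Status)
	}
	return io.ReadAll(resp.Body)
}

func main() {
	body, err := fetchJobSubmission("http://127.0.0.1:4646", "example", 0)
	if err != nil {
		fmt.Println("error:", err)
		return
	}
	fmt.Println(string(body))
}
```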
* api: avoid writing var content to disk for parsing
* api: move submission validation into RPC layer
* api: return an error if updating a job submission without namespace or job id
* api: be exact about the job index we associate a submission with (modify)
* api: reword api docs scheduling
* api: prune all but the last 6 job submissions
* api: protect against nil job submission in job validation
* api: set max job source size in test server
* api: fixups from pr