This closes #9495. As detailed there, the collection query GET
/v1/volumes?type=csi doesn’t return ReadAllocs and WriteAllocs, so the
# Allocs cell always showed 0 on first load because it was derived from
the lengths of those arrays. This uses the heretofore-ignored
CurrentReaders and CurrentWriters values to calculate the total instead.
The single-resource query GET /v1/volume/csi%2F:id doesn’t return
CurrentReaders and CurrentWriters, so this also ensures that absence
doesn’t override the stored values when visiting an individual item.
Thanks to @apollo13 for reporting this and to @tgross for the API logs
and suggestion.
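For illustration, here is a minimal sketch of the same calculation expressed
against the Nomad Go API client rather than the Ember UI; it assumes the
`CSIVolumes().List` helper and the `CurrentReaders`/`CurrentWriters` fields
on the list stub, and is not the code that was changed:

```go
package main

import (
	"fmt"
	"log"

	"github.com/hashicorp/nomad/api"
)

func main() {
	client, err := api.NewClient(api.DefaultConfig())
	if err != nil {
		log.Fatal(err)
	}

	// The collection endpoint omits ReadAllocs/WriteAllocs, so derive the
	// "# Allocs" count from CurrentReaders and CurrentWriters instead.
	vols, _, err := client.CSIVolumes().List(nil)
	if err != nil {
		log.Fatal(err)
	}
	for _, v := range vols {
		fmt.Printf("%s: %d allocs\n", v.ID, v.CurrentReaders+v.CurrentWriters)
	}
}
```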
This adds:
* a script for building and deploying the Ember UI and Storybook to
Vercel
* configuration for that deployment
* a header link in the UI that links to Storybook when built with
STORYBOOK_LINK=true
It also removes a file used to configure Netlify redirects.
The Netlify setup had two “sites”: nomad-storybook and nomad-ui. I
attempted to replicate that here but ran into some platform limitations
with Vercel: two “projects” cannot share the same root directory without
also sharing the same vercel.json, which is what lets us specify
configuration such as the rewrite needed to handle deep links into the
Ember UI. I
tried having Storybook use /ui/storybook as the root directory (and
adding a symbolically-linked package.json to bypass Vercel’s refusal
to build without it) but that produced broken Storybook deployments.
This instead combines the two projects into one
(nomad-storybook-and-ui), defaults to forwarding / to /ui/, and
adds a header link in the UI for navigating to Storybook.
Rather than have a complex build script in the Vercel configuration UI,
this delegates to a script in the repository.
During testing we discovered that old versions of Nomad and Consul seemed to
prevent Envoy from accepting new connections while the Nomad agent was
being upgraded.
* use full name for events
use evaluation and allocation instead of the short names (see the sketch after this commit message)
* update api event stream package and shortnames
* update docs
* make sync; fix typo
* backwards compat note for 1.0.0-beta event stream api changes
* use api types instead of string
* rm backwards compat note that only changed between prereleases
* remove backwards incompat that only existed in prereleases
* systemd should be downcased
* containerd should be downcased
* spellchecking, adjust list item spacing
* QEMU should be upcased
* spelling, it's->its
* Fewer exclamation points; drive-by list spacing
* Update website/pages/docs/internals/security.mdx
* Namespaces are no longer Enterprise-only.
Co-authored-by: Tim Gross <tgross@hashicorp.com>
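As an aside, a minimal sketch of consuming the event stream through the Go
api package with the full topic names; it assumes the post-rename
`TopicEvaluation`/`TopicAllocation` constants and the `EventStream().Stream`
helper, and is only illustrative:

```go
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/hashicorp/nomad/api"
)

func main() {
	client, err := api.NewClient(api.DefaultConfig())
	if err != nil {
		log.Fatal(err)
	}

	// Subscribe using the full topic names rather than the old short ones.
	topics := map[api.Topic][]string{
		api.TopicEvaluation: {"*"},
		api.TopicAllocation: {"*"},
	}

	eventCh, err := client.EventStream().Stream(context.Background(), topics, 0, nil)
	if err != nil {
		log.Fatal(err)
	}

	for events := range eventCh {
		if events.Err != nil {
			log.Fatal(events.Err)
		}
		for _, e := range events.Events {
			fmt.Printf("topic=%s type=%s key=%s\n", e.Topic, e.Type, e.Key)
		}
	}
}
```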
When the Docker driver kills a task, we send a request via the Docker API for
dockerd to fire the signal. We send that signal and then block for the
`kill_timeout` waiting for the container to exit. But if the Docker API
blocks, we will block indefinitely because we haven't configured the API call
with the same timeout.
This changeset is a minimal intervention to add the timeout to the Docker API
call _only_ when we have the `kill_timeout` set. Future work should examine
whether we should be threading contexts through other `go-dockerclient` API
calls.
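A minimal standalone sketch of the idea using go-dockerclient directly (not
the driver's actual code); the container ID and timeout value are
placeholders:

```go
package main

import (
	"context"
	"log"
	"time"

	docker "github.com/fsouza/go-dockerclient"
)

// killWithTimeout bounds the Docker API calls with the same timeout we
// already use to wait for the container to exit, so a hung dockerd can't
// block us indefinitely.
func killWithTimeout(client *docker.Client, containerID string, killTimeout time.Duration) error {
	ctx, cancel := context.WithTimeout(context.Background(), killTimeout)
	defer cancel()

	err := client.KillContainer(docker.KillContainerOptions{
		ID:      containerID,
		Signal:  docker.SIGTERM,
		Context: ctx,
	})
	if err != nil {
		return err
	}

	// Wait for the container to exit, also bounded by the same timeout.
	_, err = client.WaitContainerWithContext(containerID, ctx)
	return err
}

func main() {
	client, err := docker.NewClientFromEnv()
	if err != nil {
		log.Fatal(err)
	}
	if err := killWithTimeout(client, "my-container", 30*time.Second); err != nil {
		log.Println("kill:", err)
	}
}
```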
Expand `volume` and `volume_mount` sections to describe how to use HCL2
dynamic blocks and interpolation to have finer-grained control over how
allocations get volumes.
* prevent duplicate job events
when a job is updated, the job_version table is also updated with a structs.Job; this caused multiple job events to be emitted because we were switching on the change type and not the table (see the sketch after this list)
* test length
* add table value to tests
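A hypothetical sketch of the dedup idea, keying on the table a change came
from rather than the change type; the table names, event names, and types
below are illustrative stand-ins, not Nomad's event-building code:

```go
package main

import "fmt"

// Change is a stand-in for a state-store change notification.
type Change struct {
	Table string
}

// jobEventsFromChanges keys off the table a change came from, so the
// companion job_version write that accompanies every job update no longer
// produces a second event.
func jobEventsFromChanges(changes []Change) []string {
	var events []string
	for _, c := range changes {
		switch c.Table {
		case "jobs":
			events = append(events, "JobRegistered")
		case "job_version":
			continue // skip: would duplicate the "jobs" event above
		}
	}
	return events
}

func main() {
	changes := []Change{{Table: "jobs"}, {Table: "job_version"}}
	fmt.Println(jobEventsFromChanges(changes)) // [JobRegistered]
}
```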
Previously, every Envoy Connect sidecar would spawn as many worker
threads as logical CPU cores. That is Envoy's default behavior when
`--concurrency` is not explicitly set. Nomad now sets the concurrency
flag to 1, which is sensible for the default cpu = 250 MHz resources
allocated for sidecar proxies. The concurrency value can be configured
in Client configuration by setting `meta.connect.proxy_concurrency`.
Closes #9341
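A hypothetical sketch of the flag selection; only the Envoy --concurrency
flag and the connect.proxy_concurrency meta key come from the change above,
while the helper and the bootstrap path are illustrative:

```go
package main

import "fmt"

// proxyConcurrency picks the Envoy --concurrency value from client meta,
// defaulting to "1" as described above.
func proxyConcurrency(clientMeta map[string]string) string {
	if v, ok := clientMeta["connect.proxy_concurrency"]; ok && v != "" {
		return v
	}
	return "1"
}

func main() {
	meta := map[string]string{"connect.proxy_concurrency": "4"}
	args := []string{
		"--config-path", "envoy_bootstrap.json", // placeholder path
		"--concurrency", proxyConcurrency(meta),
	}
	fmt.Println(args)
}
```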
* debug: add pprof duration CLI argument
* debug: add CSI plugin details
* update help text with ACL requirements
* debug: provide ACL hints upon permission failures
* debug: only write file when pprof retrieve is successful
* debug: add helper function to clean bad characters from dynamic filenames
* debug: ensure files are unable to escape the capture directory (see the sketch below)
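A hypothetical sketch of those last two safeguards; the helper names and the
character whitelist are illustrative, not the actual nomad operator debug
code:

```go
package main

import (
	"fmt"
	"path/filepath"
	"regexp"
	"strings"
)

// invalidChars matches anything outside a conservative whitelist for
// dynamically built filenames (node IDs, plugin names, and so on).
var invalidChars = regexp.MustCompile(`[^a-zA-Z0-9_.-]+`)

// cleanFilename replaces characters that could break or redirect a path.
func cleanFilename(name string) string {
	return invalidChars.ReplaceAllString(name, "_")
}

// escapesCaptureDir reports whether joining name onto captureDir would land
// outside captureDir (for example via "../").
func escapesCaptureDir(captureDir, name string) bool {
	full := filepath.Clean(filepath.Join(captureDir, name))
	return !strings.HasPrefix(full, filepath.Clean(captureDir)+string(filepath.Separator))
}

func main() {
	dir := "/tmp/nomad-debug"
	fmt.Println(cleanFilename("plugin:aws/ebs"))            // plugin_aws_ebs
	fmt.Println(escapesCaptureDir(dir, "../etc/passwd"))    // true
	fmt.Println(escapesCaptureDir(dir, "client/pprof.out")) // false
}
```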
* upsert acl policies
* delete acl policies msgtype
* upsert acl policies msgtype
* delete acl tokens msgtype
* acl bootstrap msgtype
wip unsubscribe on token delete
test that subscriptions are closed after an ACL token has been deleted
Start writing policyupdated test
* update test to use before/after policy
* add SubscribeWithACLCheck to run acl checks on subscribe
* update rpc endpoint to use broker acl check
* Add and use subscriptions.closeSubscriptionFunc
This fixes the issue of not being able to defer unlocking the mutex on
the event broker inside the for loop (see the sketch after this commit message).
handle acl policy updates
* rpc endpoint test for terminating acl change
* add comments
Co-authored-by: Kris Hicks <khicks@hashicorp.com>
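A hypothetical sketch of why a dedicated close function helps: Go's defer
runs at function exit rather than loop-iteration exit, so wrapping the
lock/unlock in its own function lets each call defer the unlock safely. The
broker and field names below are illustrative, not Nomad's event broker API:

```go
package main

import (
	"fmt"
	"sync"
)

type subscription struct{ topic string }

func (s *subscription) unsubscribe() { fmt.Println("closed", s.topic) }

type eventBroker struct {
	mu          sync.Mutex
	subsByToken map[string][]*subscription
}

// closeSubscriptionFunc returns a function that closes every subscription
// held by an ACL token. Because the lock/unlock lives in its own function,
// callers can invoke it inside a for loop without hitting the problem that
// deferred unlocks only run when the enclosing function returns.
func (b *eventBroker) closeSubscriptionFunc(token string) func() {
	return func() {
		b.mu.Lock()
		defer b.mu.Unlock()
		for _, sub := range b.subsByToken[token] {
			sub.unsubscribe()
		}
		delete(b.subsByToken, token)
	}
}

func main() {
	b := &eventBroker{subsByToken: map[string][]*subscription{
		"deleted-token": {{topic: "Job"}, {topic: "Allocation"}},
	}}
	// Invoked, for example, when an ACL token is deleted.
	b.closeSubscriptionFunc("deleted-token")()
}
```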
If Docker auth helpers are used but auth fails or the image isn't found, we
hard fail the task. Users may set `auth_soft_fail` to fall back to the public
Docker Hub on a per-job basis. But users who mix public and private images
have to set `auth_soft_fail=true` for every job that uses a public image if
Docker auth helpers are used.
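For illustration, a hedged sketch of the per-task fallback described above
using go-dockerclient; this is not the driver's code, and the auth-helper
credentials are stubbed:

```go
package main

import (
	"fmt"
	"log"

	docker "github.com/fsouza/go-dockerclient"
)

// pullWithSoftFail tries the credentials returned by a Docker auth helper
// first; if that pull fails and authSoftFail is set, it retries anonymously
// against the public registry instead of hard-failing the task.
func pullWithSoftFail(client *docker.Client, repo, tag string, helperAuth docker.AuthConfiguration, authSoftFail bool) error {
	opts := docker.PullImageOptions{Repository: repo, Tag: tag}

	err := client.PullImage(opts, helperAuth)
	if err == nil {
		return nil
	}
	if !authSoftFail {
		return err // hard fail: the task depends on private auth
	}

	// Soft fail: fall back to an anonymous pull from the public registry.
	return client.PullImage(opts, docker.AuthConfiguration{})
}

func main() {
	client, err := docker.NewClientFromEnv()
	if err != nil {
		log.Fatal(err)
	}
	helperAuth := docker.AuthConfiguration{Username: "user", Password: "secret"}
	if err := pullWithSoftFail(client, "redis", "6", helperAuth, true); err != nil {
		fmt.Println("pull failed:", err)
	}
}
```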