This closes #9495. As detailed there, the collection query GET
/v1/volumes?type=csi doesn’t return ReadAllocs and WriteAllocs, so the
# Allocs cell always showed 0 on first load because it was derived from
the lengths of those arrays. This uses the heretofore-ignored
CurrentReaders and CurrentWriters values to calculate the total instead.
The single-resource query GET /v1/volume/csi%2F:id doesn’t return
CurrentReaders and CurrentWriters, so this also ensures that their
absence doesn’t override the stored values when visiting an individual
item.
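A minimal sketch of the derived count, assuming an Ember Data volume
model with currentReaders and currentWriters attributes (the names here
are illustrative, not the exact Nomad UI code):

```js
import Model, { attr } from '@ember-data/model';

export default class VolumeModel extends Model {
  // populated by GET /v1/volumes?type=csi
  @attr('number') currentReaders;
  @attr('number') currentWriters;

  // derive the "# Allocs" count from these counters instead of the
  // lengths of the ReadAllocs/WriteAllocs arrays, which the collection
  // query omits
  get allocCount() {
    return (this.currentReaders || 0) + (this.currentWriters || 0);
  }
}
```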
Thanks to @apollo13 for reporting this and to @tgross for the API logs
and suggestion.
This adds:
* a script for building and deploying the Ember UI and Storybook to
Vercel
* configuration for that deployment
* a header link in the UI that links to Storybook when built with
STORYBOOK_LINK=true
It also removes a file used to configure Netlify redirects.
The Netlify setup had two “sites”: nomad-storybook and nomad-ui. I
attempted to replicate that here but ran into some platform limitations
with Vercel: two “projects” cannot share the same root directory without
also sharing the same vercel.json that lets us specify configuration
such as the rewrite needed to handle deep linking into the Ember UI. I
tried having Storybook use /ui/storybook as the root directory (and
adding a symbolically-linked package.json to bypass Vercel’s refusal
to build without it) but that produced broken Storybook deployments.
This instead combines the two projects into one
(nomad-storybook-and-ui), defaults to forwarding / to /ui/, and
adds a header link in the UI for navigating to Storybook.
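Roughly, the combined project’s vercel.json contains rewrites along
these lines (a sketch of the shape, not the repository’s actual
configuration):

```json
{
  "rewrites": [
    { "source": "/", "destination": "/ui/" },
    { "source": "/ui/:path*", "destination": "/ui/index.html" }
  ]
}
```

Vercel serves matching static files before applying rewrites, so a rule
like the second one only applies to deep links that don’t correspond to
a built asset.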
Rather than have a complex build script in the Vercel configuration UI,
this delegates to a script in the repository.
This builds on the filtering work to allow the optimize page to show
recommendations for the active namespace versus all namespaces. If
turning off the toggle causes the summary on the active card to be
excluded from the filtered list, the active summary changes, as with the
facets.
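A sketch of how the toggle and the active card could interact, assuming
a controller with summaries, allNamespaces, and activeNamespace
properties (all names illustrative):

```js
import Controller from '@ember/controller';
import { tracked } from '@glimmer/tracking';

export default class OptimizeController extends Controller {
  @tracked allNamespaces = false; // the toggle
  @tracked activeNamespace = 'default';
  @tracked summaries = []; // recommendation summaries from the model
  @tracked _activeSummary = null;

  get filteredSummaries() {
    if (this.allNamespaces) return this.summaries;
    return this.summaries.filter(
      (summary) => summary.jobNamespace === this.activeNamespace
    );
  }

  // if turning off the toggle excludes the previously active card, fall
  // back to the first summary in the filtered list, as with the facets
  get activeSummary() {
    return this.filteredSummaries.includes(this._activeSummary)
      ? this._activeSummary
      : this.filteredSummaries[0];
  }
}
```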
It also includes a fix for this bug:
https://github.com/hashicorp/nomad/pull/9294#pullrequestreview-527748994
Plugin health for controllers should show "Node Only" in the UI only when both
conditions are true: controllers are not required, and no controllers have
registered themselves (0 expected controllers). This accounts for "monolith"
plugins which might register as both controllers and nodes but not necessarily
have `ControllerRequired = true` because they don't implement the Controller
RPC endpoints we need (this requirement was added in #7844).
This changeset includes the following fixes:
* Update the Plugins tab of the UI so that monolith plugins don't show "Node
Only" once they've registered.
* Add the missing "Node Only" logic to the Volumes tab of the UI.
This is mostly copied from the jobs list. One uncertainty is what to do
when changing a facet causes the currently-active card to be excluded
from the filtered list 🤔
Without this, visiting any job detail page on Nomad OSS would crash with
an error like this:
```
Error: Ember Data Request GET
/v1/recommendations?job=ping%F0%9F%A5%B3&namespace=default returned a
404 Payload (text/xml)
```
The problem was twofold.
1. The recommendation ability didn’t include anything about checking
whether the feature was present. This adds a request to
/v1/operator/license on application load to determine which features are
present and store them in the system service. The ability now looks for
'Dynamic Application Sizing' in that feature list, as sketched after
this list.
2. I didn’t check permissions at all in the job-fetching or job detail
templates.
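A minimal sketch of the feature check, assuming an ember-can ability
backed by a system service that fetched /v1/operator/license on load
(property names are illustrative):

```js
import { Ability } from 'ember-can';
import { inject as service } from '@ember/service';

export default class RecommendationAbility extends Ability {
  @service system;

  // system.features holds the features advertised by the license
  // endpoint; an OSS cluster advertises none, so this is false there
  get canAccessSummaries() {
    return (this.system.features || []).includes('Dynamic Application Sizing');
  }
}
```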
This continues iteration on the DAS UI by adding the ability to directly
navigate to a recommendation summary by (namespaced) slug and a copy
button for the direct navigation link.
It includes a change to CopyButton allowing it to take a block that’s
rendered within the button.
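A minimal sketch of the block form (the @clipboardText argument name is
an assumption):

```hbs
{{! CopyButton renders the given block inside the button }}
<CopyButton @clipboardText={{this.directLink}}>
  Copy link to this summary
</CopyButton>
```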
It also changes some instances of multi-relationship traversal to use
in-summary attributes, such as summary.jobNamespace instead of
summary.job.namespace.name.
Before, we'd show a helpful error message when a task wasn't running,
instead of erroring in a generic way. It turns out that when an alloc is
terminal but still reachable, the filesystem is left behind, so we were
hiding it. Now the filesystem is always shown and, in the event that
something errors, it'll either be generic or, more commonly, a 404 for
the allocation.