This doesn’t include Ember Data, as we are still back on 3.12.
Most changes are deprecation updates, linting fixes, and dependency upgrades. It can
be read commit-by-commit, though many of them are mechanical and skimmable.
For the new linting exclusions, I’ve added them to the Tech Debt list.
The decrease in test count is because linting is no longer included in ember test.
There’s a new deprecation warning in the logs that can be fixed by updating Ember
Power Select, but when I tried that, it rendered incorrectly, so I’m ignoring the
warning for now and will address it separately.
This fixes a flaky test, as seen in this failure:
https://app.circleci.com/pipelines/github/hashicorp/nomad/14726/workflows/f4ae0bf2-0699-4d18-b55e-5221aafe393c/jobs/137128
One part of the test involves toggling off all memory recommendations
and then accepting, but it’s not possible to accept when there are
no CPU recommendations to begin with, which can happen because
there’s a 10% chance of not creating a corresponding recommendation
in the task factory. Since two tasks are created for this module, there’s
only a 1% chance that neither has a CPU recommendation, but that means 1% flakiness!
This closes #8744 and #9826.
It necessitated some customisation options for TwoStepButton. One is inlineText, which puts the confirmation text on the same line as the buttons. Also, there was a single-use configuration option named isInfoAction that I removed in favour of passing a set of class configuration options like this:
@classes={{hash
idleButton="is-warning"
confirmationMessage="inherit-color"
cancelButton="is-danger is-important"
confirmButton="is-warning"}}
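For illustration, here’s a sketch of inlineText combined with those classes (the other arguments are assumptions, not necessarily the component’s real API):

    <TwoStepButton
      @inlineText={{true}}
      @classes={{hash
        idleButton="is-warning"
        confirmationMessage="inherit-color"
        cancelButton="is-danger is-important"
        confirmButton="is-warning"}}
      @onConfirm={{this.accept}} />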
This closes #7459.
While emoji don’t actually need escaping, expanding the
expression that enumerates all task name characters that
don’t need escaping to include emoji is prohibitive, since
it’s a discontinuous range. The emoji-regex project has
such an expression and it’s 12kB.
This fixes the regular expression to properly escape an emoji
as a single character instead of as its component bytes.
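As a sketch of the fix (the character class here is illustrative, not the production expression), the u flag makes the regex operate on whole code points, so the replacement callback receives the full emoji rather than each half of its surrogate pair:

    // Without the u flag, each surrogate half is matched separately and
    // encodeURIComponent throws on a lone surrogate; with it, the emoji is
    // escaped as one character.
    function escapeTaskName(taskName) {
      return taskName.replace(/[^a-zA-Z0-9-]/gu, c => encodeURIComponent(c));
    }

    escapeTaskName('ping🥳'); // => 'ping%F0%9F%A5%B3'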
Thanks to @DingoEatingFuzz for the suggestion.
This closes #9966. The URL generator was looking at the query parameters
for the namespace and region, but allocation (and task!)
routes don’t have a namespace query parameter. Since the URL
generator requires the job for all calls, it makes sense to
extract the namespace and region from the job instead.
The region will naturally be appended to URLs via
token.authorizedRequest, but agent members includes all servers across
all regions, so relying on the application-level region isn’t good
enough.
On very small clusters, the node count heuristic is impractical and
leads to confusion. Requiring 10+ sibling allocs in addition means the
lines will be shown more often.
This fixes a couple of bugs:
1. Overreporting resources reserved due to counting terminal allocs.
2. Overreporting unique client placements due to uniquing on object refs
instead of on client ID (see the sketch below).
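A sketch of the second fix (property names are assumptions about the models involved):

    // Before: uniquing on object refs counts the same client twice when two
    // allocations hold distinct references to it.
    const byRef = new Set(allocations.map(alloc => alloc.node)).size;

    // After: uniquing on the client ID.
    const byId = new Set(allocations.map(alloc => alloc.node.id)).size;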
Various page objects had breadcrumbs and breadcrumbFor within them; this
moves those to the existing Layout page object that contains shared page objects.
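Roughly, the shared object now looks like this (the selectors and ember-cli-page-object usage are assumptions):

    import { collection, text } from 'ember-cli-page-object';

    export default {
      breadcrumbs: collection('[data-test-breadcrumb]', { id: text() }),

      breadcrumbFor(id) {
        return this.breadcrumbs.toArray().find(crumb => crumb.id === id);
      },
    };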
Instead of creating recommendations for all the jobs used
across these tests, this creates a specific job with
a higher group count, which reduces the likelihood
of having no recommendations to 0.0001% (0.01³, i.e. three
independent 1% chances of a task having none).
It was incorrect to assume that each task group would always
have recommendations, since there’s a 1% chance that a task
won’t have a recommendation (a 10% chance each for CPU and for memory).
This uses the number of groups with recommendations instead.
This closes #9495. As detailed there, the collection query GET
/v1/volumes?type=csi doesn’t return ReadAllocs and WriteAllocs, so the #
Allocs cell was always showing 0 upon first load because it was derived
from the lengths of those arrays. This uses the heretofore-ignored
CurrentReaders and CurrentWriters values to calculate the total instead.
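The derivation is simple (a sketch; attribute names are assumptions about the model):

    // The list endpoint returns these counters even though it omits the
    // allocation arrays, so the cell can be computed from them directly.
    get allocationCount() {
      return this.currentReaders + this.currentWriters;
    }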
The single-resource query GET /v1/volume/csi%2F:id doesn’t return
CurrentReaders and CurrentWriters, but that absence doesn’t override the
stored values when visiting an individual item.
Thanks to @apollo13 for reporting this and to @tgross for the API logs
and suggestion.
This adds:
* a script for building and deploying the Ember UI and Storybook to
Vercel
* configuration for that deployment
* a header link in the UI to Storybook when built with
STORYBOOK_LINK=true
It also removes a file used to configure Netlify redirects.
The Netlify setup had two “sites”: nomad-storybook and nomad-ui. I
attempted to replicate that here but ran into some platform limitations
with Vercel: two “projects” cannot share the same root directory without
also sharing the same vercel.json that lets us specify configuration
such as the rewrite needed to handle deep linking into the Ember UI. I
tried having Storybook use /ui/storybook as the root directory (and
adding a symbolically-linked package.json to bypass Vercel’s refusal
to build without it) but that produced broken Storybook deployments.
This instead combines the two projects into one
(nomad-storybook-and-ui), defaults to forwarding / to /ui/, and
adds the header link to the UI to navigate to Storybook.
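For reference, the shared configuration is along these lines (a sketch; the exact rules are assumptions):

    {
      "rewrites": [
        { "source": "/", "destination": "/ui/" },
        { "source": "/ui/(.*)", "destination": "/ui/index.html" }
      ]
    }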
Rather than have a complex build script in the Vercel configuration UI,
this delegates to a script in the repository.
This builds on filtering to allow the optimize page to show recommendations
for the active namespace vs all namespaces. If turning off the toggle causes
the summary from the active card to become excluded from the filtered list,
the active summary changes, as with the facets.
It also includes a fix for this bug:
https://github.com/hashicorp/nomad/pull/9294#pullrequestreview-527748994
The API is missing values for `ReadAllocs` and `WriteAllocs` fields, resulting
in allocation claims not being populated in the web UI. These fields mirror
the fields in `nomad/structs.CSIVolume`. Returning a separate list of stubs
for read and write would be ideal, but this can't be done without either
bloating the API response with repeated full `Allocation` data, or causing a
panic in previous versions of the CLI.
The `nomad/structs` fields are persisted with nil values and are populated
during RPC, so we'll do the same in the HTTP API and populate the `ReadAllocs`
and `WriteAllocs` fields with a map of allocation IDs, but with null
values. The web UI will then create its `ReadAllocations` and
`WriteAllocations` fields by mapping from those IDs to the values in
`Allocations`, instead of flattening the map into a list.
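On the web UI side, that mapping might look roughly like this in the serializer (names are assumptions):

    // The keys of the null-valued maps are allocation IDs; resolve each one
    // against the full Allocations list rather than flattening the map.
    const allocationsById = {};
    (hash.Allocations || []).forEach(alloc => {
      allocationsById[alloc.ID] = alloc;
    });

    hash.ReadAllocations = Object.keys(hash.ReadAllocs || {}).map(id => allocationsById[id]);
    hash.WriteAllocations = Object.keys(hash.WriteAllocs || {}).map(id => allocationsById[id]);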
Plugin health for controllers should show "Node Only" in the UI only when both
conditions are true: controllers are not required, and no controllers have
registered themselves (0 expected controllers). This accounts for "monolith"
plugins which might register as both controllers and nodes but not necessarily
have `ControllerRequired = true` because they don't implement the Controller
RPC endpoints we need (this requirement was added in #7844).
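In code, the predicate is roughly (property names are assumptions):

    // "Node Only" requires both conditions: controllers aren't required, and
    // none have registered. A monolith plugin that registers a controller
    // fails the second condition and so no longer shows "Node Only".
    get isNodeOnly() {
      return !this.controllerRequired && this.controllersExpected === 0;
    }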
This changeset includes the following fixes:
* Update the Plugins tab of the UI so that monolith plugins don't show "Node
Only" once they've registered.
* Add the missing "Node Only" logic to the Volumes tab of the UI.
This is mostly copied from the jobs list. One uncertainty
is what to do when changing a facet causes the currently-
active card to be excluded from the filtered list 🤔
Without this, visiting any job detail page on Nomad OSS would crash with
an error like this:
Error: Ember Data Request GET
/v1/recommendations?job=ping%F0%9F%A5%B3&namespace=default returned a
404 Payload (text/xml)
The problem was twofold.
1. The recommendation ability didn’t include anything about checking
whether the feature was present. This adds a request to
/v1/operator/license on application load to determine which features are
present and store them in the system service. The ability now looks for
'Dynamic Application Sizing' in that feature list, as sketched after this
list.
2. I didn’t check permissions at all in the job-fetching or job
detail templates.
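Here’s a sketch of that ability check, assuming ember-can and a features list exposed by the system service (names are assumptions):

    import { Ability } from 'ember-can';
    import { inject as service } from '@ember/service';

    export default class extends Ability {
      @service system;

      get canAccept() {
        return this.system.features.includes('Dynamic Application Sizing');
      }
    }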
This continues iteration on the DAS UI by adding the ability to directly
navigate to a recommendation summary by (namespaced) slug and a copy
button for the direct navigation link.
It includes a change to CopyButton allowing it to take a block that’s
rendered within the button.
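Hypothetical usage of the block form (the argument name is an assumption):

    <CopyButton @clipboardText={{this.directLink}}>
      Copy link
    </CopyButton>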
It also changes some instances of multi-relationship traversal to use
in-summary attributes, such as summary.jobNamespace instead of
summary.job.namespace.name.
This returns an array of all icons. As the comment suggests, it's
because the SVGs file can't be imported in stories since it is generated
as part of the Ember project.
Before, we'd show a helpful error message when a task isn't running
instead of erroring in a generic way. Turns out when an alloc is
terminal but reachable, the filesystem is left behind so we were hiding
it.
Now it is always shown and in the event that something errors, it'll
either be generic, or--more commonly--a 404 of the allocation.
Now all data loading, as well as computation of resource proportions,
happens in the TopoViz component.
Allocation selection state is also managed centrally, using a dedicated
structure indexed by group key (job ID and task group name). This way
allocations don’t need to be scanned at the node level, which is O(n) at
best (assuming no Ember overhead on recomputes).
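A sketch of the group-key indexing (shapes and names are assumptions):

    // Selection is a set of group keys, so testing whether an allocation is
    // active is an O(1) lookup instead of a scan over a node's allocations.
    const groupKey = alloc => `${alloc.jobId}:${alloc.taskGroupName}`;

    const activeTaskGroups = new Set([groupKey(selectedAllocation)]);
    const isActive = alloc => activeTaskGroups.has(groupKey(alloc));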
- Plot all datacenters
- For each datacenter, plot all nodes
- For each node, plot all allocations by memory and cpu
- For empty nodes, highlight the emptiness
- When hovering over allocations, give them visual focus