This fixes a flaky test, as seen in this failure:
https://app.circleci.com/pipelines/github/hashicorp/nomad/14726/workflows/f4ae0bf2-0699-4d18-b55e-5221aafe393c/jobs/137128
One part of the test involves toggling off all memory recommendations
and then accepting, but it’s not possible to accept when there are
no CPU recommendations to begin with, which can happen because
there’s a 10% chance that the task factory won’t create a
corresponding recommendation. Since two tasks are created for this
module, there’s only a 1% chance that neither has a CPU
recommendation, but that means 1% flakiness!
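(For the record: each task independently lacks a CPU recommendation with
probability 0.1, so the chance that both lack one is 0.1 × 0.1 = 0.01, or 1%.)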
This closes #8744 and #9826.
It necessitated some customisation options for TwoStepButton. One is inlineText, which puts the confirmation text on the same line as the buttons. Also, there was a single-use configuration option named isInfoAction that I removed in favour of passing a set of class configuration options like this:
@classes={{hash
  idleButton="is-warning"
  confirmationMessage="inherit-color"
  cancelButton="is-danger is-important"
  confirmButton="is-warning"}}
This closes #7459.
While emoji don’t actually need escaping, expanding the
expression that enumerates all task name characters that
don’t need escaping to include emoji is prohibitive, since
it’s a discontinuous range. The emoji-regex project has
such an expression and it’s 12kB.
This fixes the regular expression to properly escape emoji
as a single character instead of as its component bytes.
Thanks to @DingoEatingFuzz for the suggestion.
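For context, this is the underlying JavaScript behaviour the fix relies on
(shown with an illustrative character class, not necessarily the exact
expression in the UI):

```js
const name = 'ping🥳';

// Without the u flag, a negated character class matches each UTF-16 code unit,
// so the emoji’s two surrogate halves are escaped separately and the character
// gets split apart.
name.replace(/[^a-zA-Z0-9-]/g, c => `\\${c}`);

// With the u flag, the same class matches whole code points, so the emoji is
// escaped as a single character: 'ping\🥳'
name.replace(/[^a-zA-Z0-9-]/gu, c => `\\${c}`);
```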
This closes #9966. It was looking at the query parameters
for the namespace and region, but allocation (and task!)
routes don’t have a namespace query parameter. Since the URL
generator requires the job for all calls, it makes sense to
extract the namespace and region from the job instead.
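A rough sketch of the idea, assuming Ember-style job records and illustrative
names rather than the actual generator code:

```js
// Rough sketch (not the actual Nomad UI code): callers always have the job,
// so read the namespace and region from it instead of from the current
// route’s query params, which allocation and task routes don’t have.
function execQueryParams(job) {
  const namespace = job.get('namespace.name') || 'default';
  const region = job.get('region');

  const params = {};
  if (namespace !== 'default') params.namespace = namespace;
  if (region) params.region = region;
  return params;
}
```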
The region will naturally be appended to URLs via
token.authorizedRequest, but the agent members endpoint includes all servers
across all regions, so relying on the application-level region isn't good
enough.
On very small clusters, the node count heuristic is impractical and
leads to confusion. By additionally requiring 10+ sibling allocs, the
lines will be shown more often.
This fixes a couple of bugs:
1. Overreporting resources reserved due to counting terminal allocs
2. Overreporting unique client placements due to uniquing on object refs
instead of on client ID; a sketch of both fixes follows.
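A minimal sketch of the corrected calculations, with illustrative field names
(isTerminal, reservedCPU, nodeId) rather than the actual model attributes:

```js
// Minimal sketch, not the actual Nomad UI code: skip terminal allocs when
// summing reserved resources, and count unique clients by node ID rather
// than by object reference.
const active = allocations.filter(alloc => !alloc.isTerminal);

const reservedCPU = active.reduce((sum, alloc) => sum + alloc.reservedCPU, 0);
const uniqueClientCount = new Set(active.map(alloc => alloc.nodeId)).size;
```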
Various page objects had breadcrumbs and breadcrumbFor within them; this
moves them to the existing Layout page object that contains shared page objects.
Instead of creating recommendations for all the jobs used
across these tests, this creates a specific job with
a higher group count, which reduces the likelihood
of having no recommendations to 0.0001%.
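(For scale: with the 1%-per-group odds described below, a three-group job would
end up with no recommendations at all only 0.01³ = 10⁻⁶ of the time, which is
0.0001%.)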
It was incorrect to assume that each task group would always
have recommendations, since there’s a 1% chance that a task
won’t have a recommendation (a 10% chance each for CPU and memory).
This uses the number of groups with recommendations instead.
This builds on filtering to allow the optimize page to show recommendations
for the active namespace vs all namespaces. If turning off the toggle causes
the summary from the active card to become excluded from the filtered list,
the active summary changes, as with the facets.
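A minimal sketch of that active-card fallback, with illustrative names
(selectedSummary, filteredSummaries) rather than the actual controller code:

```js
// Minimal sketch, not the actual controller: when the namespace toggle or a
// facet excludes the currently-selected summary, fall back to the first
// summary remaining in the filtered list.
get activeSummary() {
  const filtered = this.filteredSummaries;
  if (this.selectedSummary && filtered.includes(this.selectedSummary)) {
    return this.selectedSummary;
  }
  return filtered[0] || null;
}
```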
It also includes a fix for this bug:
https://github.com/hashicorp/nomad/pull/9294#pullrequestreview-527748994
The API is missing values for `ReadAllocs` and `WriteAllocs` fields, resulting
in allocation claims not being populated in the web UI. These fields mirror
the fields in `nomad/structs.CSIVolume`. Returning a separate list of stubs
for read and write would be ideal, but this can't be done without either
bloating the API response with repeated full `Allocation` data, or causing a
panic in previous versions of the CLI.
The `nomad/structs` fields are persisted with nil values and are populated
during RPC, so we'll do the same in the HTTP API and populate the `ReadAllocs`
and `WriteAllocs` fields with a map of allocation IDs, but with null
values. The web UI will then create its `ReadAllocations` and
`WriteAllocations` fields by mapping from those IDs to the values in
`Allocations`, instead of flattening the map into a list.
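A rough sketch of that client-side mapping, with illustrative names rather than
the exact serializer code:

```js
// Rough sketch, not the exact Nomad UI serializer: ReadAllocs/WriteAllocs
// arrive as { [allocID]: null }, so recover the stubs from the Allocations
// list by ID instead of flattening that list.
function buildAllocationLists(volume) {
  const byId = new Map((volume.Allocations || []).map(alloc => [alloc.ID, alloc]));

  volume.ReadAllocations = Object.keys(volume.ReadAllocs || {}).map(id => byId.get(id));
  volume.WriteAllocations = Object.keys(volume.WriteAllocs || {}).map(id => byId.get(id));

  return volume;
}
```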
Plugin health for controllers should show "Node Only" in the UI only when both
conditions are true: controllers are not required, and no controllers have
registered themselves (0 expected controllers). This accounts for "monolith"
plugins which might register as both controllers and nodes but not necessarily
have `ControllerRequired = true` because they don't implement the Controller
RPC endpoints we need (this requirement was added in #7844).
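In boolean terms (field names illustrative, not the actual component code):

```js
// "Node Only" should appear only when a controller is neither required nor
// registered (0 expected controllers).
const isNodeOnly = !plugin.controllerRequired && plugin.controllersExpected === 0;
```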
This changeset includes the following fixes:
* Update the Plugins tab of the UI so that monolith plugins don't show "Node
Only" once they've registered.
* Add the missing "Node Only" logic to the Volumes tab of the UI.
This is mostly copied from the jobs list. One uncertainty
is what to do when changing a facet causes the currently-
active card to be excluded from the filtered list 🤔
Without this, visiting any job detail page on Nomad OSS would crash with
an error like this:
Error: Ember Data Request GET
/v1/recommendations?job=ping%F0%9F%A5%B3&namespace=default returned a
404 Payload (text/xml)
The problem was twofold.
1. The recommendation ability didn’t include anything about checking
whether the feature was present. This adds a request to
/v1/operator/license on application load to determine which features are
present and store them in the system service. The ability now looks for
'Dynamic Application Sizing' in that feature list, as sketched below.
2. I didn’t check permissions at all in the job-fetching or job
detail templates.
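A rough sketch of the ability check from item 1, assuming ember-can and
illustrative property names rather than the exact Nomad UI code:

```js
// app/abilities/recommendation.js (rough sketch, not the exact Nomad UI code)
import { Ability } from 'ember-can';
import { inject as service } from '@ember/service';

export default class Recommendation extends Ability {
  // Assumed: the system service stores the feature list fetched from
  // /v1/operator/license on application load.
  @service system;

  get canAccept() {
    return (this.system.features || []).includes('Dynamic Application Sizing');
  }
}
```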
This continues iteration on the DAS UI by adding the ability to directly
navigate to a recommendation summary by (namespaced) slug and a copy
button for the direct navigation link.
It includes a change to CopyButton allowing it to take a block that’s
rendered within the button.
It also changes some instances of multi-relationship traversal to use
in-summary attributes, such as summary.jobNamespace instead of
summary.job.namespace.name.