This doesn’t include Ember Data, as we are still back on 3.12.
Most changes are deprecation updates, linting fixes, and dependency bumps. It can
be read commit-by-commit, though many of the commits are mechanical and skimmable.
For the new linting exclusions, I’ve added them to the Tech Debt list.
The decrease in test count is because linting is no longer included in `ember test`.
There’s a new deprecation warning in the logs that can be fixed by updating Ember
Power Select, but when I tried that, the component rendered incorrectly, so I decided to
ignore the warning for now and address it separately.
This fixes a flaky test, as seen in this failure:
https://app.circleci.com/pipelines/github/hashicorp/nomad/14726/workflows/f4ae0bf2-0699-4d18-b55e-5221aafe393c/jobs/137128
One part of the test involves toggling off all memory recommendations
and then accepting, but it’s not possible to accept when there are
no CPU recommendations to begin with, which can happen because
there’s a 10% chance per task of not creating a corresponding recommendation
in the task factory. Since two tasks are created for this module, the
chance of no CPU recommendations at all is only 0.10 × 0.10 = 1%,
but that means 1% flakiness!
This closes #8744 and #9826.
It necessitated some customisation options for TwoStepButton. One is inlineText, which puts the confirmation text on the same line as the buttons. Also, there was a single-use configuration option named isInfoAction that I removed in favour of passing a set of class configuration options like this:
```handlebars
@classes={{hash
  idleButton="is-warning"
  confirmationMessage="inherit-color"
  cancelButton="is-danger is-important"
  confirmButton="is-warning"}}
```
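For context, a full invocation combining both options might look roughly like this (a sketch; the arguments besides @inlineText and @classes are assumptions, not necessarily the component’s actual API):

```handlebars
<TwoStepButton
  @inlineText={{true}}
  @idleText="Stop Job"
  @cancelText="Cancel"
  @confirmText="Yes, Stop Job"
  @classes={{hash
    idleButton="is-warning"
    confirmationMessage="inherit-color"
    cancelButton="is-danger is-important"
    confirmButton="is-warning"}} />
```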
This closes #7459.
While emoji don’t actually need escaping, expanding the
expression that enumerates all task name characters that
don’t need escaping to include emoji is prohibitive, since
it’s a discontinuous range. The emoji-regex project has
such an expression and it’s 12kB.
This fixes the regular expression to properly escape an emoji
as a single character instead of as its component bytes.
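A minimal sketch of the idea, assuming a backslash-escaping helper and a simplified safe-character set:

```javascript
// Without the `u` flag, a character class matches single UTF-16 code
// units, so an emoji (a surrogate pair) gets escaped as two halves.
// With it, each emoji matches as one whole code point.
const UNSAFE_CHARACTERS = /[^a-zA-Z0-9_.-]/gu; // simplified safe set

function escapeTaskName(taskName) {
  return taskName.replace(UNSAFE_CHARACTERS, '\\$&');
}

escapeTaskName('ping🥳'); // => 'ping\🥳', one escape per emoji
```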
Thanks to @DingoEatingFuzz for the suggestion.
This closes #9966. It was looking at the query parameters
for the namespace and region, but allocation (and task!)
routes don’t have a namespace query parameter. Since the URL
generator requires the job for all calls, it makes sense to
extract the namespace and region from the job instead.
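The shape of the change, as a sketch with property names assumed:

```javascript
// Derive query params from the job, which every URL-generator call
// already receives, instead of from the current route's query params
// (which allocation and task routes don't have).
function jobQueryParams(job) {
  return {
    namespace: job.get('namespace.name') || 'default',
    region: job.get('region'),
  };
}
```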
The region will naturally be appended to URLs via
token.authorizedRequest, but the agent members response includes all
servers across all regions, so relying on the application-level region
isn't good enough.
On very small clusters, the node count heuristic is impractical and
leads to confusion. By additionally requiring 10+ sibling allocs, the
lines will be shown more often.
This fixes a couple of bugs, both sketched below:
1. Overreporting resources reserved due to counting terminal allocs
2. Overreporting unique client placements due to uniquing on object refs
instead of on client ID.
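A sketch of both fixes, with property names assumed:

```javascript
// 1. Count only non-terminal allocations toward reserved resources.
const active = allocations.filter((alloc) => !alloc.isTerminal);
const reservedMemory = active.reduce((sum, alloc) => sum + alloc.memory, 0);

// 2. Unique placements on client ID; object references differ across
//    fetches even for the same client, so uniquing on them overcounts.
const placedClients = new Set(active.map((alloc) => alloc.nodeId)).size;
```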
Various page objects had breadcrumbs and breadcrumbFor within them; this
moves those to the existing Layout page object that contains shared page objects.
Instead of creating recommendations for all the jobs used
across these tests, this creates a specific job with
a higher group count, which reduces the likelihood
of having no recommendations to 0.0001%.
It was incorrect to assume that each task group would always
have recommendations, since there’s a 1% chance that a task
won’t have a recommendation (a 10% chance each of no CPU and no memory recommendation).
This uses the number of groups with recommendations instead.
This closes #9495. As detailed there, the collection query GET
/v1/volumes?type=csi doesn’t return ReadAllocs and WriteAllocs, so the #
Allocs cell was always showing 0 upon first load because it was derived
from the lengths of those arrays. This uses the heretofore-ignored
CurrentReaders and CurrentWriters values to calculate the total instead.
The single-resource query GET /v1/volume/csi%2F:id doesn’t return
CurrentReaders and CurrentWriters either, so care is taken that their
absence doesn’t override the stored values when visiting an individual item.
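A sketch of the derived count, with model attribute names assumed:

```javascript
import Model, { attr } from '@ember-data/model';

export default class VolumeModel extends Model {
  @attr('number') currentReaders;
  @attr('number') currentWriters;

  // Derived from the scalar counts the list endpoint does return,
  // rather than from the lengths of ReadAllocs/WriteAllocs, which
  // the list endpoint omits.
  get allocationCount() {
    return this.currentReaders + this.currentWriters;
  }
}
```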
Thanks to @apollo13 for reporting this and to @tgross for the API logs
and suggestion.
This adds:
* a script for building and deploying the Ember UI and Storybook to
Vercel
* configuration for that deployment
* a header link in the UI that links to Storybook when built with
STORYBOOK_LINK=true
It also removes a file used to configure Netlify redirects.
The Netlify setup had two “sites”: nomad-storybook and nomad-ui. I
attempted to replicate that here but ran into some platform limitations
with Vercel: two “projects” cannot share the same root directory without
also sharing the same vercel.json that lets us specify configuration
such as the rewrite needed to handle deep linking into the Ember UI. I
tried having Storybook use /ui/storybook as the root directory (and
adding a symbolically-linked package.json to bypass Vercel’s refusal
to build without it) but that produced broken Storybook deployments.
This instead combines the two projects into one
(nomad-storybook-and-ui), defaults to forwarding / to /ui/, and
adds the header link to the UI to navigate to Storybook.
Rather than have a complex build script in the Vercel configuration UI,
this delegates to a script in the repository.
This builds on filtering to allow the optimize page to show recommendations
for the active namespace vs all namespaces. If turning off the toggle causes
the summary from the active card to become excluded from the filtered list,
the active summary changes, as with the facets.
It also includes a fix for this bug:
https://github.com/hashicorp/nomad/pull/9294#pullrequestreview-527748994
The API is missing values for `ReadAllocs` and `WriteAllocs` fields, resulting
in allocation claims not being populated in the web UI. These fields mirror
the fields in `nomad/structs.CSIVolume`. Returning a separate list of stubs
for read and write would be ideal, but this can't be done without either
bloating the API response with repeated full `Allocation` data, or causing a
panic in previous versions of the CLI.
The `nomad/structs` fields are persisted with nil values and are populated
during RPC, so we'll do the same in the HTTP API and populate the `ReadAllocs`
and `WriteAllocs` fields with a map of allocation IDs, but with null
values. The web UI will then create its `ReadAllocations` and
`WriteAllocations` fields by mapping from those IDs to the values in
`Allocations`, instead of flattening the map into a list.
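A sketch of the UI-side mapping, using the field names above (the surrounding serialiser shape is assumed):

```javascript
function readAllocations(volume) {
  // ReadAllocs arrives as { [allocationID]: null }; resolve each key
  // against the full Allocation stubs rather than flattening the map.
  return Object.keys(volume.ReadAllocs || {}).map((id) =>
    volume.Allocations.find((allocation) => allocation.ID === id)
  );
}
```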
Plugin health for controllers should show "Node Only" in the UI only when both
conditions are true: controllers are not required, and no controllers have
registered themselves (0 expected controllers). This accounts for "monolith"
plugins which might register as both controllers and nodes but not necessarily
have `ControllerRequired = true` because they don't implement the Controller
RPC endpoints we need (this requirement was added in #7844).
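A sketch of the corrected check, with property names assumed:

```javascript
class CsiPlugin {
  // ...other plugin model attributes...

  get isNodeOnly() {
    // Both conditions must hold, so a monolith plugin stops matching
    // as soon as its controller registers, even when
    // ControllerRequired is false.
    return !this.controllerRequired && this.controllersExpected === 0;
  }
}
```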
This changeset includes the following fixes:
* Update the Plugins tab of the UI so that monolith plugins don't show "Node
Only" once they've registered.
* Add the missing "Node Only" logic to the Volumes tab of the UI.
This is mostly copied from the jobs list. One uncertainty
is what to do when changing a facet causes the currently-active
card to be excluded from the filtered list 🤔
Without this, visiting any job detail page on Nomad OSS would crash with
an error like this:
Error: Ember Data Request GET
/v1/recommendations?job=ping%F0%9F%A5%B3&namespace=default returned a
404 Payload (text/xml)
The problem was twofold.
1. The recommendation ability didn’t include anything about checking
whether the feature was present. This adds a request to
/v1/operator/license on application load to determine which features are
present and stores them in the system service. The ability now looks for
'Dynamic Application Sizing' in that feature list (sketched below).
2. I didn’t check permissions at all in the job-fetching or job
detail templates.
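A sketch of the feature gate from item 1, with the ability, service, and property names assumed:

```javascript
import { Ability } from 'ember-can';
import { inject as service } from '@ember/service';

export default class RecommendationAbility extends Ability {
  @service system;

  get canAccept() {
    // system.features would be populated from /v1/operator/license on
    // application load; OSS clusters simply lack the feature.
    return this.system.features.includes('Dynamic Application Sizing');
  }
}
```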
This continues iteration on the DAS UI by adding the ability to directly
navigate to a recommendation summary by (namespaced) slug and a copy
button for the direct navigation link.
It includes a change to CopyButton allowing it to take a block that’s
rendered within the button.
It also changes some instances of multi-relationship traversal to use
in-summary attributes, such as summary.jobNamespace instead of
summary.job.namespace.name.
This returns an array of all icons. As the comment suggests, it's
because the SVGs file can't be imported in stories since it is generated
as part of the Ember project.
Before, we'd show a helpful error message when a task isn't running
instead of erroring in a generic way. Turns out when an alloc is
terminal but reachable, the filesystem is left behind so we were hiding
it.
Now it is always shown and in the event that something errors, it'll
either be generic, or--more commonly--a 404 of the allocation.
Now all data loading, as well as computation of resource proportions,
happens in the TopoViz component.
Allocation selection state is also managed centrally, using a dedicated
structure indexed by group key (job ID and task group name). This way
allocations don’t need to be scanned at the node level, which is O(n) at
best (assuming no Ember overhead on recomputes).
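A sketch of the centrally-managed selection state, with property names assumed:

```javascript
// Index selection by job ID and task group name so membership checks
// are O(1) lookups instead of scans of every node's allocations.
const groupKey = (alloc) => `${alloc.jobId}/${alloc.taskGroupName}`;

function selectionIndex(selectedAllocations) {
  return new Set(selectedAllocations.map(groupKey));
}

// At the node level, an allocation's active state is a single lookup:
const isActive = (alloc, index) => index.has(groupKey(alloc));
```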
- Plot all datacenters
- For each datacenter, plot all nodes
- For each node, plot all allocations by memory and CPU
- For empty nodes, highlight the emptiness
- When hovering over allocations, give them visual focus
The job factory will now accept an array of resourceSpecs that is a shorthand
notation for memory, cpu, disk, and iops requirements.
These specs get passed down to task groups. The task group factory will
split the resource requirements near-evenly (within a variance
threshold) across all expected tasks.
Allocations then construct task-resource objects based on the resources
from the matching task.
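A sketch of the near-even split, with the variance behaviour assumed:

```javascript
function splitEvenly(total, taskCount, variance = 0.05) {
  const even = total / taskCount;
  return Array.from({ length: taskCount }, () => {
    // Each share lands within ±variance of the even split.
    const jitter = (Math.random() * 2 - 1) * variance * even;
    return Math.round(even + jitter);
  });
}

splitEvenly(1024, 3); // e.g. [352, 330, 344]
```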
My suggestion is that this table isn’t sufficiently useful to
keep around with the combinatoric explosion of other lifecycle
phases. The logic was that someone might wonder “why isn’t my
main task starting?” and this table would show that the prestart
tasks hadn’t yet completed. One might wonder the same about
any task that has prerequisites, so should a poststart task have
a table that shows main tasks? And so on.
Since the route hierarchy guarantees that one has already passed
through a template that shows the lifecycle chart before one
can reach the template where this table is displayed, I believe
this table is redundant. It also conveys information in a more
abstract way than the chart, which is dense and, to me, more easily
understood.
This continues #8455 by adding accessibility audits to component integration
tests and fixing associated errors. It adds audits to existing tests rather than
adding separate ones to facilitate auditing the various permutations a
component’s rendering can go through.
It also adds linting to ensure audits happen in component tests. This
necessitated consolidating test files that were scattered.
This extracts some common API-idiosyncrasy-handling patterns from model serialisers into properties that are processed by the application serialiser:
* arrayNullOverrides converts a null property value to an empty array
* mapToArray converts a map to an array of maps, using the original map keys as Name properties on the array maps
* separateNanos splits nanosecond-containing timestamps into millisecond timestamps and separate nanosecond properties
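A sketch of how a model serialiser might declare these; the example property values are assumptions for illustration:

```javascript
import ApplicationSerializer from './application';

export default class DeploymentSerializer extends ApplicationSerializer {
  arrayNullOverrides = ['TaskGroups'];  // null becomes []
  mapToArray = ['PlacedCanaries'];      // { name: {...} } becomes [{ Name: name, ... }]
  separateNanos = ['SubmitTime'];       // becomes SubmitTime (ms) + SubmitTimeNanos
}
```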
Displays all scale events in the form of an annotated line chart. When
annotations are clicked, the timestamp, message, and meta properties for
the event are displayed below the chart.
This makes use of the PR I recently had merged to eslint-plugin-ember-a11y-testing
to add linting that ensures an accessibility audit is called at least once per acceptance
test file. When I have added linting for component tests, it can apply there too.
I added exclusions for the filesystem browser tests, which are covered by behaviors/fs,
and for the search test, which will involve significant overrides to Ember Power Select
default templates.
This introduces ember-a11y-testing to acceptance tests via a helper
wrapper that allows us to globally ignore rules that we can address
separately. It also adds fixes for the aXe rules that were failing.
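A sketch of the wrapper, with the globally-ignored rule names assumed:

```javascript
import a11yAudit from 'ember-a11y-testing/test-support/audit';

export default async function audit(assert) {
  await a11yAudit({
    rules: {
      'color-contrast': { enabled: false }, // to be addressed separately
    },
  });
  assert.ok(true, 'aXe audit passed');
}
```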
The spacing has been broken for job types that use this yield
(parameterised and periodic) since I added the exec button
to this template. This could be further refined to allow a more
logical grouping of elements where buttons and tags are
separate.
Thanks to @notnoop for this UX improvement suggestion.
The allocation’s task group is always known, so it
might as well be preselected in the sidebar when the
exec window opens. Also, if the task group only has
one task, might as well preselect it too.
This closes #8422, another bug facilitated by the difficulty
of automated testing when opening another window. Thanks to
@notnoop for narrowing this down.
This updates the Ember edition setting to Octane, which I removed from #8319
because it required the template-only Glimmer components setting to be turned
on, which this change now does. These template changes accommodate that setting.
This includes fixes for newer template lint rules that came along with
updating that dependency, which was necessary to be able to use
the no-curly-component-invocation rule. It also updates some curly
invocations that I missed in #8075.
This updates to Ember 3.16 but leaves Ember Data at 3.12 so we don’t need
to use the model fragments beta. It can be reviewed on a commit-by-commit
basis: blueprint updates, fixes for test failures, and the removal of
now-deprecated partials.
It’s not a true update to Octane as that would involve turning on template-only
components by default, which breaks various things. We can accomplish that
separately and then add the edition setting to package.json.
Thanks to @cibernox’s isActive clarification in
cibernox/ember-power-select#1374, this replaces the use
of a hacked Power Select API with a deliberate blurring
of the trigger element, which is equivalent to setting
the element to inactive.
The CSS I added in #8249 to make the search properly
centred also made the logo unclickable, as it was hidden
behind the centred element! This makes the logo stay
above the search container.
- Click label to focus input
- Focusing input selects value
- Entering an invalid value reverts selection
- Entering a fractional number floors the value
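A sketch of the input normalization these behaviours imply, with names assumed:

```javascript
function normalizeInput(rawValue, currentValue) {
  const parsed = Math.floor(parseFloat(rawValue)); // fractional numbers floor
  return Number.isNaN(parsed) ? currentValue : parsed; // invalid input reverts
}
```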
This updates the look of the search control, adds a hint about the slash
shortcut, adds highlighting of fuzzy search results, and addresses a few
edge case UX failures. It moves to using a fork of Ember Power Select
to handle an edge case where pressing escape would put the control
in an undesirable active-but-not-open state.
Sometimes a job would be created with a running deployment, which made
the increment button disabled.
While I was finding the root cause, I also changed the waitUntil pattern
to match the StepperInput technique, which is more resilient to code
changes.
Adding keys tells Ember to rerender matching entries instead of
destroying and recreating.
Without this key, every time the allocation collection changes, every
allocation row gets destroyed and recreated.
This happens a lot, since each allocation needs to be reloaded, which
dirties the collection.
Since allocation rows fetch stats on init, each of these many, many
renders results in a stats request.
By using key and rerendering matching records, this all goes away. Since
the rows aren't being destroyed and recreated, the init-time stats
request isn't made excessively.
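A sketch of the change; with a key, Ember matches rows by allocation ID across rerenders instead of tearing them all down (the component invocation is illustrative):

```handlebars
{{#each this.allocations key="id" as |allocation|}}
  <AllocationRow @allocation={{allocation}} />
{{/each}}
```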
This introduces a DataCaches service so recently-updated collections don’t need
to be requeried within a minute, or based on the current route. It only searches
jobs and nodes. There are known bugs that will be addressed in upcoming PRs.
This was a disturbing discovery. Requests in watch loops would recycle
AbortControllers, meaning that once any request was aborted, all requests
forever after were skipped. I noticed it with deployments and job
summary on the job detail page.
I suspect this regression occurred when jQuery was removed. This needs
test coverage still to make sure it doesn't happen again.
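A sketch of the fix, with the watch-loop shape assumed:

```javascript
let watching = true;
let currentController;

async function watch(url) {
  while (watching) {
    // An AbortController's signal stays aborted once abort() has been
    // called, so a recycled controller skips every request after the
    // first abort. Construct a fresh one per iteration instead,
    // keeping a handle so the loop can still be cancelled.
    currentController = new AbortController();
    await fetch(url, { signal: currentController.signal });
    // ...process the blocking-query response, then loop...
  }
}
```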