This closes #10146.
Because of cibernox/ember-power-select#1203, which documents
the current impossibility of attaching test selectors to a
PowerSelect invocation, this uses test selectors on parent
containers instead, occasionally adding wrappers when needed.
I chose to leave the existing test selectors in the hopes that
we can return to using them eventually, but I could easily
remove them if it seems like extra noise now.
Presumably for the same reason, `@class` no longer works, so
this adjusts the scoping of global search CSS to preserve the style
of the search control.
I also included an update to the latest version of
ember-test-selectors, since we were far behind and I had tried
that before finding the issue linked above.
Finally, this replaces ember-cli-uglify with ember-cli-terser to address
production build failures as described at ember-cli/ember-cli#9290.
This doesn’t include Ember Data, as we are still back on 3.12.
Most changes are deprecation updates, linting fixes, and dependency upgrades. It can
be read commit-by-commit, though many of them are mechanical and skimmable.
I’ve added the new linting exclusions to the Tech Debt list.
The decrease in test count is because linting is no longer included in `ember test`.
There’s a new deprecation warning in the logs that can be fixed by updating Ember
Power Select, but when I tried that, the control rendered incorrectly, so I decided to
ignore the warning for now and address it separately.
This fixes a flaky test, as seen in this failure:
https://app.circleci.com/pipelines/github/hashicorp/nomad/14726/workflows/f4ae0bf2-0699-4d18-b55e-5221aafe393c/jobs/137128
One part of the test involves toggling off all memory recommendations
and then accepting, but it’s not possible to accept when there are
no CPU recommendations to begin with, which can happen because
there’s a 10% chance of not creating a corresponding recommendation
in the task factory. Since two tasks are created for this module, the chance
of no CPU recommendation at all is only 0.1 × 0.1 = 1%, but that means 1% flakiness!
This fixes a couple of bugs (roughly sketched below):
1. Overreporting resources reserved due to counting terminal allocs
2. Overreporting unique client placements due to uniquing on object refs
instead of on client ID.
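A rough sketch of both fixes (field names are illustrative, not the actual implementation):

```js
// 1. Only count resources for non-terminal allocations
const TERMINAL_STATUSES = ['complete', 'failed', 'lost'];
const active = allocations.filter(
  (alloc) => !TERMINAL_STATUSES.includes(alloc.clientStatus)
);
const reservedCPU = active.reduce((sum, alloc) => sum + alloc.resources.cpu, 0);

// 2. Unique client placements by client ID, not by object reference
const uniquePlacements = new Set(active.map((alloc) => alloc.nodeID)).size;
```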
Various page objects had `breadcrumbs` and `breadcrumbFor` within them; this
moves those to the existing Layout page object, which contains shared page objects.
Instead of creating recommendations for all the jobs used
across these tests, this creates a specific job with
a higher group count, which reduces the likelihood
of having no recommendations to 0.0001%.
It was incorrect to assume that each task group would always
have recommendations, since there’s a 1% chance that a task
won’t have any: the factory independently skips the CPU and memory
recommendations 10% of the time each, and 0.1 × 0.1 = 1%.
This uses the number of groups with recommendations instead.
This builds on filtering to allow the optimize page to show recommendations
for the active namespace vs all namespaces. If turning off the toggle causes
the summary from the active card to become excluded from the filtered list,
the active summary changes, as with the facets.
It also includes a fix for this bug:
https://github.com/hashicorp/nomad/pull/9294#pullrequestreview-527748994
The API is missing values for `ReadAllocs` and `WriteAllocs` fields, resulting
in allocation claims not being populated in the web UI. These fields mirror
the fields in `nomad/structs.CSIVolume`. Returning a separate list of stubs
for read and write would be ideal, but this can't be done without either
bloating the API response with repeated full `Allocation` data, or causing a
panic in previous versions of the CLI.
The `nomad/structs` fields are persisted with nil values and are populated
during RPC, so we'll do the same in the HTTP API and populate the `ReadAllocs`
and `WriteAllocs` fields with a map of allocation IDs, but with null
values. The web UI will then create its `ReadAllocations` and
`WriteAllocations` fields by mapping from those IDs to the values in
`Allocations`, instead of flattening the map into a list.
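On the UI side, the mapping is roughly this (a hypothetical sketch based on the payload shape described above):

```js
// ReadAllocs arrives as { [allocID]: null }; resolve each ID against the
// full Allocation records in the flattened Allocations list
function readAllocations(volume) {
  const byID = new Map(volume.Allocations.map((alloc) => [alloc.ID, alloc]));
  return Object.keys(volume.ReadAllocs || {}).map((id) => byID.get(id));
}
```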
Plugin health for controllers should show "Node Only" in the UI only when both
conditions are true: controllers are not required, and no controllers have
registered themselves (0 expected controllers). This accounts for "monolith"
plugins which might register as both controllers and nodes but not necessarily
have `ControllerRequired = true` because they don't implement the Controller
RPC endpoints we need (this requirement was added in #7844).
This changeset includes the following fixes:
* Update the Plugins tab of the UI so that monolith plugins don't show "Node
Only" once they've registered.
* Add the missing "Node Only" logic to the Volumes tab of the UI.
This is mostly copied from the jobs list. One uncertainty
is what to do when changing a facet causes the currently-
active card to be excluded from the filtered list 🤔
Without this, visiting any job detail page on Nomad OSS would crash with
an error like this:
Error: Ember Data Request GET
/v1/recommendations?job=ping%F0%9F%A5%B3&namespace=default returned a
404 Payload (text/xml)
The problem was twofold.
1. The recommendation ability didn’t check whether the feature was present
at all. This adds a request to /v1/operator/license on application load to
determine which features are present and store them in the system service.
The ability now looks for 'Dynamic Application Sizing' in that feature list,
as sketched below.
2. I didn’t check permissions at all in the job-fetching or job
detail templates.
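The feature check in the ability looks roughly like this (a hypothetical sketch; names are illustrative):

```js
import { inject as service } from '@ember/service';
import { Ability } from 'ember-can';

export default class RecommendationAbility extends Ability {
  @service system;

  // The system service holds the feature list fetched from
  // /v1/operator/license on application load
  get canAccept() {
    return this.system.features.includes('Dynamic Application Sizing');
  }
}
```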
This continues iteration on the DAS UI by adding the ability to directly
navigate to a recommendation summary by (namespaced) slug and a copy
button for the direct navigation link.
It includes a change to CopyButton allowing it to take a block that’s
rendered within the button.
It also changes some instances of multi-relationship traversal to use
in-summary attributes, such as summary.jobNamespace instead of
summary.job.namespace.name.
Before, we’d show a helpful error message when a task wasn’t running,
instead of erroring in a generic way. It turns out that when an alloc is
terminal but still reachable, the filesystem is left behind, so we were
hiding it.
Now it is always shown, and in the event that something errors, the error
will either be generic or, more commonly, a 404 for the allocation.
My suggestion is that this table isn’t sufficiently useful to
keep around given the combinatorial explosion of other lifecycle
phases. The logic was that someone might wonder “why isn’t my
main task starting?” and this table would show that the prestart
tasks hadn’t yet completed. One might wonder the same about
any task that has prerequisites, so should a poststart task have
a table that shows main tasks? And so on.
Since the route hierarchy guarantees that one has already passed
through a template that shows the lifecycle chart before one
can reach the template where this table is displayed, I believe
this table is redundant. It also conveys information more
abstractly than the chart, which is denser and, to me, more
easily understood.
This continues #8455 by adding accessibility audits to component integration
tests and fixing associated errors. It adds audits to existing tests rather than
adding separate ones to facilitate auditing the various permutations a
component’s rendering can go through.
It also adds linting to ensure audits happen in component tests. This
necessitated consolidating test files that were scattered.
This makes use of the PR I recently had merged to eslint-plugin-ember-a11y-testing
to add linting that ensures an accessibility audit is called at least once per acceptance
test file. When I have added linting for component tests, it can apply there too.
I added exclusions for the filesystem browser tests, which are covered by behaviors/fs,
and for the search test, which will involve significant overrides to Ember Power Select’s
default templates.
This introduces ember-a11y-testing to acceptance tests via a helper
wrapper that allows us to globally ignore rules that we can address
separately. It also adds fixes for the aXe rules that were failing.
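The wrapper is conceptually something like this (a minimal sketch; the ignored-rule list is illustrative):

```js
import a11yAudit from 'ember-a11y-testing/test-support/audit';

// Rules we can't fix yet; ignored globally so individual tests stay clean
const GLOBALLY_IGNORED_RULES = ['color-contrast'];

export default function audit() {
  const rules = {};
  for (const rule of GLOBALLY_IGNORED_RULES) {
    rules[rule] = { enabled: false };
  }
  return a11yAudit({ rules });
}
```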
This updates the look of the search control, adds a hint about the slash
shortcut, adds highlighting of fuzzy search results, and addresses a few
edge case UX failures. It moves to using a fork of Ember Power Select
to handle an edge case where pressing escape would put the control
in an undesirable active-but-not-open state.
This introduces a DataCaches service so recently-updated collections don’t need
to be requeried within a minute, or based on the current route. It only searches
jobs and nodes. There are known bugs that will be addressed in upcoming PRs.
This partially addresses #7799.
Task state filesystems are contained within a subdirectory of their
parent allocation, so almost everything that existed for browsing task
state filesystems was applicable to browsing allocations, just without
the task name prepended to the path. I aimed to push this differential
handling into as few contained places as possible.
The tests also have significant overlap, so this includes an extracted
behavior to run the same tests for allocations and task states.
This updates Xterm.js to 4.6.0, which includes support for reverse-wraparound
mode, so we no longer need to use a vendored dependency, which closes #7461.
The interface for accessing the buffer that’s used for test assertions changed.
With the dependency now accessed conventionally, we can have it load only when
it’s needed by an exec popup window, which closes #7516. That saves us
≈60kb compressed in the dependency bundle!
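The lazy load is roughly a dynamic import from the exec popup’s code path (a hypothetical sketch):

```js
// Importing xterm here, rather than at the top of a module in the main app
// graph, keeps it out of the primary dependency bundle
export default async function openTerminal(element) {
  const { Terminal } = await import('xterm');
  const terminal = new Terminal();
  terminal.open(element);
  return terminal;
}
```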
Adding this `settled` makes the test pass now that Ember Data is using
fetch instead of jQuery. The test was presumably always incorrect but
never flaked.
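For context, `settled` from @ember/test-helpers waits for pending requests, timers, and run loops, so the fix is roughly this (the test name is illustrative):

```js
import { test } from 'qunit';
import { settled } from '@ember/test-helpers';

test('it updates after the request resolves', async function (assert) {
  // …trigger the behaviour under test…
  await settled(); // wait for the in-flight fetch before asserting
  // …assertions that previously raced the request…
});
```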
Changing namespaces can be done anywhere in the app even though many
Nomad resources aren't namespace-sensitive (e.g., clients, plugins).
A user changing namespaces is an intent to reset context, "now I want
to begin a task that relates to Namespace X". Where that task begins
used to always be the Jobs list, since it was the only namespace-sensitive
resource. Now with CSI Volumes, "square 1" is Volumes if the
namespace is changed from a storage page.
This closes #7456. It hides the terminal when the job is dead and
displays an error when trying to open an exec session for a task
that isn’t running. There’s a skipped test for the latter behaviour
that I’ll have to come back for.
This closes #7454. It makes use of the existing watchable tools to
allow the exec popup sidebar to be live-updating. It also adds
alphabetic sorting of task groups and tasks.
Most tests bypass setting the token via the UI, instead choosing
to set it in localStorage directly, because the acceptance tests
for the token UI are sufficient to exercise that part of the UI,
so this speeds up the tests a bit.
This is a minimal implementation that closes #7463. It doesn’t include
true support for moving around within the command to edit using arrow
keys because it gets too complex when managing wrapping at the edge of
the terminal. Instead, arrow keys are ignored. It also ignores ^A and
^E, which are cursor manipulations that pose similar problems to arrow
keys. It does support ^U, which deletes the entire command.
It also allows a command to be pasted, which was previously unsupported.
This is accomplished by migrating from Xterm.js’s onKey handler to
onData, which is recommended here:
https://github.com/xtermjs/xterm.js/issues/2673#issuecomment-574897733
onData is a higher-level handler that issues events with the final
interpreted data instead of the individual key events. That means the
processing in this PR has changed from inspecting DOM key events to
inspecting their ASCII equivalents, which I’ve extracted into a utility
dictionary for use in tests and implementation.
One consequence of ignoring most control characters is that if you paste
a string that includes a control character, that character will be
stripped. It’s somewhat strange for compound sequences like arrow keys;
if you run copy('/bin/b' + '\x1b[D' + 'ash') in a JavaScript console and
paste what’s on the clipboard, you get "/bin/b[Dash". That’s because
the left arrow key, as in that centre portion of the string,
is represented by the escape character and a coded sequence. Stripping
the control character leaves the coded sequence as part of the paste.
That seems like an acceptable compromise vs either ignoring any pasted
string with control characters (confusing UX) or trying to interpret and
strip all such compound control sequences (difficult to be exhaustive).
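A sketch of the onData approach (not the exact implementation; the control-character names mirror the utility dictionary mentioned above):

```js
const BACKSPACE = '\x7f';
const CONTROL_U = '\x15';
const CARRIAGE_RETURN = '\r';

function attachCommandEditing(terminal, onSubmit) {
  let command = '';

  terminal.onData((data) => {
    if (data === CARRIAGE_RETURN) {
      onSubmit(command);
      command = '';
    } else if (data === BACKSPACE) {
      if (command.length > 0) {
        terminal.write('\b \b');
        command = command.slice(0, -1);
      }
    } else if (data === CONTROL_U) {
      // Delete the entire command
      terminal.write('\b \b'.repeat(command.length));
      command = '';
    } else {
      // Keep only characters at or above the space character; this is what
      // strips lone control characters (like the escape that leads an
      // arrow-key sequence) from keystrokes and pasted strings alike
      const printable = [...data].filter((c) => c >= ' ').join('');
      terminal.write(printable);
      command += printable;
    }
  });
}
```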
Closes #7197, #7199
Note: Test coverage is limited to adapter and serializer unit tests. All
acceptance tests have been stubbed and all features have been manually
tested end-to-end.
This represents Phase 1 of #6993 which is the core workflow of CSI in
the UI. It includes a couple new pages for viewing all external volumes
as well as the allocations associated with each. It also updates
existing volume-related views on job and allocation pages to handle both
Host Volumes and CSI Volumes.
This connects Xterm.js to a Nomad exec websocket so people
can interact on clients via live sessions. There are buttons on
job, allocation, task group, and task detail pages that open a
popup that lets them edit their shell command and start a
session.
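The wiring is conceptually this (a hypothetical sketch, assuming frames shaped like Nomad’s exec streaming API with base64-encoded stdin/stdout data):

```js
function connect(terminal, wsUrl) {
  const socket = new WebSocket(wsUrl);

  // Keystrokes from the terminal go to the task as stdin frames
  terminal.onData((data) => {
    socket.send(JSON.stringify({ stdin: { data: btoa(data) } }));
  });

  // Output frames from the task are decoded and written to the terminal
  socket.onmessage = (event) => {
    const frame = JSON.parse(event.data);
    if (frame.stdout && frame.stdout.data) {
      terminal.write(atob(frame.stdout.data));
    }
  };

  return socket;
}
```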
More is to come, as recorded in issues.
This builds on API changes in #6017 and #6021 to conditionally turn off the
“Run Job” button based on the current token’s capabilities, or the capabilities
of the anonymous policy if no token is present.
If you try to visit the job-run route directly, it redirects to the job list.
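The capability check amounts to something like this (a hypothetical sketch; the policy shapes are illustrative, though submit-job is the relevant ACL capability):

```js
// True if any policy grants submit-job in the given namespace; with no
// token present, the same check runs against the anonymous policy
function canRunJob(policies, namespace = 'default') {
  return policies.some((policy) => {
    const namespaces = (policy.rulesJSON || {}).Namespaces || [];
    const match = namespaces.find((ns) => ns.Name === namespace);
    return Boolean(match && (match.Capabilities || []).includes('submit-job'));
  });
}
```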
I unintentionally introduced a flapping test in #6817. The
draining status of the node will be randomly chosen and
that flag takes precedence over eligibility. This forces
the draining flag to be false rather than random so the
test should no longer flap.
See here for an example failure:
https://circleci.com/gh/hashicorp/nomad/26368
There are two changes here, and some caveats/commentary:
1. The “State” table column was actually sorting only by status. The state was not an actual property, just something calculated in each client row, as a product of status, isEligible, and isDraining. This PR adds isDraining as a component of compositeState so it can be used for sorting.
2. The Sortable mixin declares dependent keys that cause the sort to be live-updating, but only if the members of the array change, such as if a new client is added, but not if any of the sortable properties change. This PR adds a SortableFactory function that generates a mixin whose listSorted computed property includes dependent keys for the sortable properties, so the table will live-update if any of the sortable properties change, not just the array members. There’s a warning if you use SortableFactory without dependent keys and via the original Sortable interface, so we can eventually migrate away from it.
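The generated mixin looks conceptually like this (a sketch of the idea, not the exact implementation):

```js
import Mixin from '@ember/object/mixin';
import { computed } from '@ember/object';

export default function sortableFactory(properties = []) {
  // Each sortable property becomes a dependent key, so listSorted
  // recomputes when a property changes, not just when membership does
  const dependentKeys = [
    'listToSort.[]',
    ...properties.map((property) => `listToSort.@each.${property}`),
    'sortProperty',
    'sortDescending',
  ];

  return Mixin.create({
    listSorted: computed(...dependentKeys, function () {
      const sorted = this.get('listToSort').sortBy(this.get('sortProperty'));
      return this.get('sortDescending') ? sorted.reverse() : sorted;
    }),
  });
}
```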
The recurring problem here was that the factories would sometimes
generate more than one task, and it was random whether the proxy
task would be first in the list. This ensures that the proxy task
is always first so the tests pass again.
This fixes a race condition in the pseudo-relationship between a
TaskState and a Task that was causing the Consul Connect proxy tag
to sometimes show on the wrong task. There’s no direct Ember Data-style
relationship between a TaskState and its Task; instead, it’s determined
by searching for a Task with the matching name. The related Task was
sometimes stored before everything was ready and not recalculated when
the name became known. This ensures the relationship is accurate if the
TaskState’s name property changes.
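Conceptually, the fix makes the lookup a computed property keyed on the name (a hypothetical sketch; property paths are illustrative):

```js
import { computed } from '@ember/object';

// Recomputes when the TaskState's name becomes known or changes,
// instead of caching a lookup made before everything was ready
const task = computed('name', 'allocation.taskGroup.tasks.[]', function () {
  const tasks = this.get('allocation.taskGroup.tasks') || [];
  return tasks.findBy('name', this.get('name'));
});
```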
I put this property in the wrong place.
I’ve found how to fix the mock API in the tests, but
they fail only in headless Chrome,
so they’re skipped for now.
When sorting by size, directories are sorted by name, as size
isn’t displayed.
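The comparator is roughly this (a hypothetical sketch, assuming entries shaped like Nomad’s fs API responses with Name, IsDir, and Size):

```js
function compareBySize(a, b) {
  if (a.IsDir && b.IsDir) return a.Name.localeCompare(b.Name); // no size shown
  if (a.IsDir !== b.IsDir) return a.IsDir ? -1 : 1; // group directories first
  return a.Size - b.Size;
}
```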
This includes a change to the positioning of sort arrows for all tables,
moving them closer to the text, because in some cases, the arrows
for right-aligned columns were ambiguously positioned.
This uses ember-page-title to add dynamic page titles throughout the
route hierarchy. When there’s more than one region, the current
region is added before the final entry of “- Nomad”.
The draining, eligibility, and status fields now all show under a combined
state column. Draining takes precedence, then (in)eligibility; if neither of
those is true, the status displays.
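The precedence amounts to something like this (field names are illustrative):

```js
function compositeState(node) {
  if (node.isDraining) return 'draining'; // draining takes precedence
  if (!node.isEligible) return 'ineligible'; // then (in)eligibility
  return node.status; // otherwise the plain status
}
```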
Since one allocation is preempted, the alloc factory creates a new alloc
that isn’t guaranteed to be running. When it was the first alloc row in
the table, the alloc row detail test failed because non-running
allocs don’t have metrics. The fix was to manually update all the alloc
clientStatuses.
This is incredibly tricky with query params, since there is a bundle of
timing issues, lifecycle issues, missing features, and all-around
gotchas.
This solution has no observers and no instances of the system service
being set from the jobs controller.
The upside is no observers, much easier-to-follow logic, and no more
dependent-key chain reactions.