* ui: add parameterized dispatch interface
This commit adds a new interface for dispatching parameterized jobs when
the user has the right permissions. The UI can be accessed by viewing a
parameterized job and clicking the "Dispatch Job" button located in
the "Job Launches" section.
* fix failing lint test
* clean up dispatch and remove meta
This commit cleans up a few things that had typos and
inconsistent naming. In line with this, the custom
`meta` view was removed in favor of using the
included `AttributesTable`.
* ui: encode dispatch job payload and start adding tests
* ui: remove unused test imports
* ui: redesign job dispatch form
* ui: initial acceptance tests for dispatch job
* ui: generate parameterized job children with correct id format
* ui: fix job dispatch breadcrumb link
* ui: refactor job dispatch component into glimmer component and add form validation
* ui: remove unused CSS class
* ui: align job dispatch button
* ui: handle namespace-specific requests on job dispatch
* ui: rename payloadMissing to payloadHasError
* ui: don't re-fetch job spec on dispatch job
* ui: keep overview tab selected on job dispatch page
* ui: fix task and task-group linting
* ui: URL encode job id on dispatch job tests
* ui: fix error when job meta is null
* ui: handle job dispatch from adapter
* ui: add more tests for dispatch job page
* ui: add "job dispatch" capability check
* ui: update job dispatch from code review
Co-authored-by: Luiz Aoqui <luiz@hashicorp.com>
In job versions, if you have an ACL token with a write policy
you should be able to revert a job; however, that was not the
case here. This is because we're using ember-can to check whether
the user can run a job. That permission relies on policiesSupportRunning,
which uses a function called namespaceIncludesCapability. We're going to
need to refactor any cases that use this function.
When the client launches, use a consistent read to fetch its own allocs,
but allow stale reads afterwards, as long as reads don't revert to older
state.
This change addresses an edge case affecting restarting clients. When a
client restarts, it may fetch stale data about its allocs: allocs
that completed prior to the client shutdown may still have a "run"/"running"
desired/client status, causing the client to attempt to run them again.
An alternative approach is to track the indices such that the client
sets MinQueryIndex to the maximum index it has ever seen, or to compare
received allocs against locally restored client state. Garbage
collection complicates this approach (local knowledge is not complete),
and the approach still risks starting "dead" allocations (e.g. an
allocation may have been placed while the client was restarting and
already rescheduled by the time the client started). The approach
here is effective against all kinds of staleness problems with small
overhead.
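As a rough sketch of the read pattern, here is what "consistent first, stale afterwards" looks like using the public `github.com/hashicorp/nomad/api` package; the actual change applies to the client's internal allocation-watching RPC, so the calls below are illustrative, not the code in this commit:

```go
// Sketch: first fetch forbids stale reads, later blocking queries allow
// stale reads but only ever advance the wait index, so the client's view
// of its allocations never moves backwards.
package main

import (
	"fmt"
	"log"
	"time"

	"github.com/hashicorp/nomad/api"
)

func main() {
	client, err := api.NewClient(api.DefaultConfig())
	if err != nil {
		log.Fatal(err)
	}

	// Initial read: consistent (no stale), so a freshly restarted client
	// never acts on data older than what it last knew about.
	allocs, meta, err := client.Allocations().List(&api.QueryOptions{AllowStale: false})
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("initial allocs: %d at index %d\n", len(allocs), meta.LastIndex)

	// Subsequent reads: stale is fine, but the wait index only advances.
	index := meta.LastIndex
	for i := 0; i < 3; i++ {
		q := &api.QueryOptions{
			AllowStale: true,
			WaitIndex:  index,
			WaitTime:   10 * time.Second,
		}
		allocs, meta, err = client.Allocations().List(q)
		if err != nil {
			log.Fatal(err)
		}
		if meta.LastIndex > index {
			index = meta.LastIndex
		}
		fmt.Printf("saw %d allocs at index %d\n", len(allocs), index)
	}
}
```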
Basically the same as #10896, but with the Affinity struct.
Since we use reflect.DeepEqual for job comparison, there is a
risk of false positives for changes due to a job struct with
memoized vs non-memoized strings.
Closes #10897
This PR removes the vendor directory from the Nomad repository.
Contributors will no longer need to deal with our `make sync`
step when working on Nomad, which was surprising when making changes
to the API. Vendoring also caused huge diffs in PRs that nobody looked at.
This PR causes Nomad to no longer memoize the String value of
a Constraint. The private memoized variable may or may not be
initialized at any given time, which means a reflect.DeepEqual
comparison between two jobs (e.g. during Plan) may return incorrect
results.
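As an illustration of the failure mode (a toy stand-in, not Nomad's actual struct; the field names here are simplified), reflect.DeepEqual compares unexported fields, so two logically identical values diverge as soon as one of them populates its memoized string:

```go
// Illustration only: a simplified Constraint-like struct showing why a
// memoized string field breaks reflect.DeepEqual-based job comparison.
package main

import (
	"fmt"
	"reflect"
)

type Constraint struct {
	LTarget string
	RTarget string
	Operand string

	// str caches the rendered form; it may or may not be populated.
	str string
}

func (c *Constraint) String() string {
	if c.str == "" {
		c.str = fmt.Sprintf("%s %s %s", c.LTarget, c.Operand, c.RTarget)
	}
	return c.str
}

func main() {
	a := &Constraint{LTarget: "${attr.kernel.name}", Operand: "=", RTarget: "linux"}
	b := &Constraint{LTarget: "${attr.kernel.name}", Operand: "=", RTarget: "linux"}

	fmt.Println(reflect.DeepEqual(a, b)) // true: neither value has memoized yet

	_ = a.String() // populates a.str only

	// false: DeepEqual sees the unexported str field, so logically identical
	// constraints now compare as different, e.g. making a no-op Plan report changes.
	fmt.Println(reflect.DeepEqual(a, b))
}
```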
Fixes #10836
We wanted the ability to get our namespace from query params.
In order to do this, we're using additional attributes via
ember-can to set a bound property directly from our
Handlebars file. This sets us up better in the event that
the namespace filter changes in the UI, because our Handlebars
file will be aware of the change, whereas our ability may not
update as the namespace filter updates.
The name property had to be added back to the agent schema
in the Agent Factory because the /agent/monitor endpoint in
the config finds agents by their names, and since member is not
a proper entity in our Mirage config we can't just findBy the name
of the member. So although we're following the correct schema,
we're set up to rely on this.
Otherwise the spinner would just end, which felt a bit awkward.
I wanted to see a "✓" to know that everything was ok, and a "!" (maybe something else?) if something went wrong.