* ui: add parameterized dispatch interface
This commit adds a new interface for dispatching parameterized jobs when
the user has the right permissions. The UI can be accessed by viewing a
parameterized job and clicking on the "Dispatch Job" button located in
the "Job Launches" section.
* fix failing lint test
* clean up dispatch and remove meta
This commit cleans up a few things that had typos and
inconsistent naming. In line with this, the custom
`meta` view was removed in favor of using the
included `AttributesTable`.
* ui: encode dispatch job payload and start adding tests
* ui: remove unused test imports
* ui: redesign job dispatch form
* ui: initial acceptance tests for dispatch job
* ui: generate parameterized job children with correct id format
* ui: fix job dispatch breadcrumb link
* ui: refactor job dispatch component into glimmer component and add form validation
* ui: remove unused CSS class
* ui: align job dispatch button
* ui: handle namespace-specific requests on job dispatch
* ui: rename payloadMissing to payloadHasError
* ui: don't re-fetch job spec on dispatch job
* ui: keep overview tab selected on job dispatch page
* ui: fix task and task-group linting
* ui: URL encode job id on dispatch job tests
* ui: fix error when job meta is null
* ui: handle job dispatch from adapter
* ui: add more tests for dispatch job page
* ui: add "job dispatch" capability check
* ui: update job dispatch from code review
Co-authored-by: Luiz Aoqui <luiz@hashicorp.com>
In job versions, an ACL token with a write policy should let you
revert a job; however, that was not the case here. This is because we're
using ember-can to check if the user can run a job. That permission relies
on policiesSupportRunning, which uses a function called
namespaceIncludesCapability. We're going to need to refactor any cases
that use this function.
When the client launches, use a consistent read to fetch its own allocs,
but allow stale reads afterwards as long as reads don't revert to older
state.
This change addresses an edge case affecting restarting clients. When a
client restarts, it may fetch stale data concerning its allocs: allocs
that completed prior to the client shutdown may still have "run"/"running"
desired/client status, causing the client to attempt to re-run them again.
An alternative approach is to track the indices so that the client
sets MinQueryIndex to the maximum index it has ever seen, or to compare
received allocs against locally restored client state. Garbage
collection complicates this approach (local knowledge is not complete),
and the approach still risks starting "dead" allocations (e.g. the
allocation may have been placed while the client was restarting and
already rescheduled by the time the client started). The approach
here is effective against all kinds of staleness problems with small
overhead.
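A minimal sketch of the read-consistency rule described above, using hypothetical stand-ins (`queryOptions`, `fetchAllocs`, `watchAllocs`) rather than the real Nomad client types:

```go
package main

// Sketch only, not the actual Nomad client code: the first alloc fetch
// after a client restart uses a consistent (non-stale) read, and later
// long-poll reads may be stale but are discarded if they would rewind
// the client to an older raft index.

type queryOptions struct {
	AllowStale    bool   // serve the read from a possibly stale follower
	MinQueryIndex uint64 // block until the index advances past this value
}

type allocsResponse struct {
	Index  uint64   // raft index the response was generated at
	Allocs []string // alloc IDs assigned to this node
}

// fetchAllocs is a stand-in for the RPC the client uses to fetch its allocs.
var fetchAllocs func(opts queryOptions) allocsResponse

func watchAllocs(apply func([]string)) {
	var lastIndex uint64
	first := true
	for {
		resp := fetchAllocs(queryOptions{
			AllowStale:    !first, // consistent read only on the first fetch
			MinQueryIndex: lastIndex,
		})
		first = false
		// Never move backwards: drop stale responses older than what
		// the client has already seen.
		if resp.Index < lastIndex {
			continue
		}
		lastIndex = resp.Index
		apply(resp.Allocs)
	}
}

func main() {}
```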
Basically the same as #10896 but with the Affinity struct.
Since we use reflect.DeepEqual for job comparison, there is a
risk of false positives for changes due to a job struct with
memoized vs non-memoized strings.
Closes #10897
This PR removes the vendor directory from the Nomad repository.
Contributors will no longer need to deal with our `make sync`
step when working on Nomad, which was surprising when making changes
to the API. The vendor directory also caused huge diffs in PRs that
nobody looked at.
This PR causes Nomad to no longer memoize the String value of
a Constraint. The private memoized variable may or may not be
initialized at any given time, which means a reflect.DeepEqual
comparison between two jobs (e.g. during Plan) may return incorrect
results.
Fixes #10836
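For illustration, a small self-contained sketch (simplified stand-in types, not the real Nomad structs) of how a memoized String value makes reflect.DeepEqual report a difference between two semantically identical values:

```go
package main

import (
	"fmt"
	"reflect"
)

// constraint is a simplified stand-in with a privately memoized String value.
type constraint struct {
	LTarget, Operand, RTarget string
	str                       string // memoized String() result
}

func (c *constraint) String() string {
	if c.str == "" {
		c.str = fmt.Sprintf("%s %s %s", c.LTarget, c.Operand, c.RTarget)
	}
	return c.str
}

func main() {
	a := &constraint{LTarget: "${attr.kernel.name}", Operand: "=", RTarget: "linux"}
	b := &constraint{LTarget: "${attr.kernel.name}", Operand: "=", RTarget: "linux"}

	// Only one of the two values has its memoized field populated.
	_ = a.String()

	// Semantically identical constraints now compare as different, which
	// is how a Plan can report a change where there is none.
	fmt.Println(reflect.DeepEqual(a, b)) // false
}
```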
We wanted the ability to get our namespace from query params. In
order to do this, we're using additional attributes via
ember-can to set a bound property directly from our
Handlebars file. This sets us up better in the event that
the namespace filter changes in the UI, because our Handlebars
file will be aware of the change, whereas our ability may not
update as the namespace filter updates.
The name property had to be added back to the agent schema
in the Agent Factory because the /agent/monitor endpoint in
the config finds agents by their names, and since member is not
a proper entity in our Mirage config we can't just findBy the
name of the member. So although we're following the correct schema,
we're set up to rely on this.
Otherwise the spinner would just end, which felt a bit awkward.
I wanted to see a "✓" to know that everything was ok, and a "!" (maybe something else?) if something went wrong.
This PR fixes a bug where the underlying Envoy process of a Connect gateway
would consume a full core of CPU when there is more than one sidecar or gateway
in a group. The utilization was caused by Consul injecting an envoy_ready_listener
on 127.0.0.1:8443, which only one of the Envoy processes could bind to.
The others would spin in a hot loop trying to bind the listener.
As a workaround, we now specify -address during the Envoy bootstrap config
step, which is how Consul maps this ready listener. Because the
envoy_admin_listener already exists, and because we need to continue supporting
gateways running in host networking mode, where we want to use the same port
value coming from the service.port field, we now bind the admin listener to
the 127.0.0.2 loopback interface, and the ready listener takes 127.0.0.1.
This shouldn't make a difference in the 99.999% use case where Envoy is
run in its official Docker container. Advanced users can reference
${NOMAD_ENVOY_ADMIN_ADDR_<service>} (as they ought to) if needed,
as well as the new variable ${NOMAD_ENVOY_READY_ADDR_<service>} for the
envoy_ready_listener.
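For illustration, a rough sketch (an assumption, not Nomad's actual bootstrap hook) of how the `consul connect envoy` bootstrap arguments described above could be assembled, with the admin and ready listeners split across the two loopback addresses:

```go
package main

import (
	"fmt"
	"os/exec"
)

// bootstrapArgs assembles a hypothetical `consul connect envoy` invocation
// for a gateway: the admin listener is pinned to 127.0.0.2 so that the
// envoy_ready_listener, mapped via -address, can keep 127.0.0.1 with the
// same port taken from the service.port field.
func bootstrapArgs(service string, port int) []string {
	return []string{
		"connect", "envoy",
		"-gateway=ingress",
		"-bootstrap",
		"-service", service,
		// envoy_admin_listener moves to the secondary loopback address
		// (exposed to tasks as ${NOMAD_ENVOY_ADMIN_ADDR_<service>})...
		"-admin-bind", fmt.Sprintf("127.0.0.2:%d", port),
		// ...so the envoy_ready_listener can bind the primary one
		// (${NOMAD_ENVOY_READY_ADDR_<service>}) without every Envoy in
		// the group fighting over the same socket.
		"-address", fmt.Sprintf("127.0.0.1:%d", port),
	}
}

func main() {
	cmd := exec.Command("consul", bootstrapArgs("my-ingress-gateway", 8443)...)
	fmt.Println(cmd.String())
}
```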