Service discovery or service mesh systems consuming the Nomad event stream or API need to know the CNI-assigned IP for the allocation. This data is returned by the underlying Nomad API but isn't mapped in the response struct.
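A minimal sketch of what such a consumer might do once the field is mapped, using the Go API client; the `NetworkStatus` and `Address` field names are assumptions about how the response struct exposes the data, not a confirmed API.

```go
// Sketch only: reads an allocation and prints its CNI-assigned address,
// assuming the fix exposes it as Allocation.NetworkStatus.Address.
package main

import (
	"fmt"
	"log"

	"github.com/hashicorp/nomad/api"
)

func main() {
	client, err := api.NewClient(api.DefaultConfig())
	if err != nil {
		log.Fatal(err)
	}

	// In practice the allocation ID comes from the event stream or a list call.
	allocID := "REPLACE-WITH-ALLOC-ID"

	alloc, _, err := client.Allocations().Info(allocID, nil)
	if err != nil {
		log.Fatal(err)
	}
	if alloc.NetworkStatus != nil {
		fmt.Println("CNI-assigned address:", alloc.NetworkStatus.Address)
	}
}
```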
* Text and code wrapping as a localStorage var
* task-log uses wrapping and a keyboard shortcut
* Word wrap keyboard labels
* Wrapper as a toggle, not a button
* Changelog, and fixed an extra trailing space on log lines
* Moves toggle to inside
* Acceptance tests for word wrap and toggle click
The requirements for client-to-server and client-to-client topologies are not
well-documented in the production install requirements docs. Document that
clients make connections to servers (and not the other way around), and that
clients don't need to communicate with each other (with some exceptions).
Fixes: #17631
Update the revision used by the Docker action. This should always reflect the commit that's being built, as it may differ from the default `github.sha` that the workflow was invoked at.
Goes with https://github.com/hashicorp/actions-docker-build/pull/59, and should not be merged until that PR is merged and a new version of the action is cut.
* drivers/docker: refactor use of clients in docker driver
This PR refactors how we manage the two underlying clients used by the
docker driver for communicating with the docker daemon. We keep two clients:
one with a hard-coded timeout that applies to all operations no matter
what, intended for short-lived / async calls to docker; the other
has no timeout, and it is the caller's responsibility to set a context
that ensures the call eventually terminates.
The use of these two clients has been confusing, and mistakes were made
in a number of places where calls used the wrong client.
This PR makes it so that a user must explicitly call a function to get
the client that makes sense for that use case.
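A hypothetical sketch of the resulting pattern; the type and function names are illustrative, not the driver's actual API:

```go
// Sketch of the explicit two-client pattern described above.
package docker

import docker "github.com/fsouza/go-dockerclient"

type driverClients struct {
	// short has a hard-coded timeout applied to every operation, intended
	// for short-lived / async calls to the docker daemon.
	short *docker.Client

	// long has no timeout; callers must bound each call with their own
	// context so it eventually terminates.
	long *docker.Client
}

// shortTimeoutClient forces the caller to state that a built-in timeout
// is what they want.
func (c *driverClients) shortTimeoutClient() *docker.Client { return c.short }

// noTimeoutClient forces the caller to state that they will manage the
// lifetime of the call themselves.
func (c *driverClients) noTimeoutClient() *docker.Client { return c.long }
```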
Fixes #17023
* cr: followup items
Given a deployment that has a `progress_deadline`, if a task group runs
out of reschedule attempts, allow it to fail at that point instead of
waiting until the `progress_deadline` is reached.
Fixes: #17260
* CSS alignment and spacing for job status panel
* Only fade the count, not the legend icon, when count is 0
* Unrounded version corners
* changelog
* CSS has to remove border radius only when count is present
* Seed stabilization for services test
* Try consolidating the testfixes from before
* Total test isolation and bonus logs
* Drop the isolation but keep the logs
* Remove bonus logging
This PR refactors some old PID isolation tests to make use of the e2e/v3
packages; they should be quite a bit easier to read. Also adds 'alloc exec'
capability to the jobs3 package.
In #17354 we prioritized client updates to reduce client-to-server
traffic. When the client has no previously-acknowledged update, we assume that
the update is of typical priority; although we don't know that for sure, in
practice an allocation will never become healthy quickly enough for the first
update we send to be the one saying the alloc is healthy.
But that doesn't account for allocations that quickly fail in an unrecoverable
way because of allocrunner hook failures, and it'd be nice to be able to send
those failure states to the server more quickly. This changeset does so and adds
some extra comments on the reasoning behind priority.
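A hypothetical sketch of the decision described above; the names are illustrative and do not reflect the actual client code:

```go
// Sketch: first-ever updates stay at typical priority unless the
// allocation has already failed unrecoverably.
package allocsync

type updatePriority int

const (
	priorityTypical updatePriority = iota
	priorityUrgent
)

type allocUpdate struct {
	hasAcknowledgedUpdate bool // has the server acked any prior update?
	terminalFailure       bool // e.g. an allocrunner hook failed and the alloc cannot recover
}

// priorityFor keeps the first update at typical priority, except when the
// allocation has already failed unrecoverably, in which case the failure
// state is worth sending to the server right away.
func priorityFor(u allocUpdate) updatePriority {
	if !u.hasAcknowledgedUpdate && u.terminalFailure {
		return priorityUrgent
	}
	return priorityTypical
}
```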
If you use `nomad node drain -force`, the drain deadline is set to -1ns. If you
have not prevented system and CSI node plugin allocations from being drained
with `-ignore-system`, they will be immediately drained as well. This is
typically not safe for CSI node plugins.
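A sketch of the safer alternative using the Go API client: drain with an explicit deadline and leave system (and therefore CSI node plugin) allocations in place, roughly equivalent to `nomad node drain -enable -deadline 10m -ignore-system <node>`. Assumes a reachable agent and a valid node ID.

```go
// Sketch: bounded drain that keeps system jobs (including CSI node
// plugins) running instead of forcing everything off immediately.
package main

import (
	"log"
	"time"

	"github.com/hashicorp/nomad/api"
)

func main() {
	client, err := api.NewClient(api.DefaultConfig())
	if err != nil {
		log.Fatal(err)
	}

	spec := &api.DrainSpec{
		Deadline:         10 * time.Minute, // bounded deadline instead of -1ns
		IgnoreSystemJobs: true,             // keep system jobs, including CSI node plugins
	}

	nodeID := "REPLACE-WITH-NODE-ID"
	if _, err := client.Nodes().UpdateDrain(nodeID, spec, false, nil); err != nil {
		log.Fatal(err)
	}
}
```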
Also fix some broken links.
Fixes: #17696
Unlike nodes, jobs are allowed to be registered in the node pool `all`,
in which case all nodes are used for evaluating placements. When listing
jobs for the `all` node pool, only those that are explicitly in this node
pool should be returned.
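A hypothetical sketch of the listing rule; the type and names are trimmed-down stand-ins, not the real structs:

```go
// Sketch: listing by node pool matches the job's own field and does not
// treat "all" as a wildcard.
package nodepool

type Job struct {
	Name     string
	NodePool string
}

// jobsInPool returns only jobs whose node pool matches exactly. Even when
// pool is "all", only jobs explicitly registered in "all" come back.
func jobsInPool(jobs []*Job, pool string) []*Job {
	var out []*Job
	for _, j := range jobs {
		if j.NodePool == pool {
			out = append(out, j)
		}
	}
	return out
}
```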
Our changelog has become large enough that GitHub's rendering is very slow,
resulting in error pages ("angry unicorns"). Split out the older unsupported
versions of Nomad into their own file so that we only need to render the most
recent versions, while keeping the older versions relatively searchable by
having them in a single file.
This complements the `env` parameter, so that the operator can author
tasks that don't share their Vault token with the workload when using
`image` filesystem isolation. As a result, more powerful tokens can be used
in a job definition, allowing it to use template stanzas to issue all kinds of
secrets (database secrets, Vault tokens with very specific policies, etc.),
without sharing that issuing power with the task itself.
This is accomplished by creating a directory called `private` within
the task's working directory, which shares many properties of
the `secrets` directory (tmpfs where possible, not accessible by
`nomad alloc fs` or Nomad's web UI), but isn't mounted into/bound to the
container.
If the `disable_file` parameter is set to `false` (its default), the Vault token
is also written to the `NOMAD_SECRETS_DIR`, so the default behavior is
backwards compatible. Even if the operator never changes the default,
they will still benefit from the improved behavior of Nomad never reading
the token back in from that (potentially altered) location.
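A small sketch of how a task would read the token file from the secrets directory; with `disable_file = true` the file is never written, so this read fails and the workload never sees the token. The `vault_token` filename is the conventional location inside `NOMAD_SECRETS_DIR`.

```go
// Sketch: a workload trying to read the Vault token file from the
// secrets directory; absent when disable_file = true.
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

func main() {
	tokenPath := filepath.Join(os.Getenv("NOMAD_SECRETS_DIR"), "vault_token")

	token, err := os.ReadFile(tokenPath)
	if err != nil {
		// With disable_file = true (or outside of Nomad) there is no file.
		fmt.Println("no vault token file:", err)
		return
	}
	fmt.Printf("read a %d-byte vault token\n", len(token))
}
```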
* e2e: create a v3/ set of packages for creating Nomad e2e tests
This PR creates an experimental set of packages under `e2e/v3/` for crafting
Nomad e2e tests. Unlike previous generations, this is an attempt at providing
a way to create tests in a declarative (ish) pattern, with a focus on being
easy to use, easy to clean up, and easy to debug.
@shoenig is just trying this out to see how it goes.
Lots of features need to be implemented.
Many more docs need to be written.
Breaking changes are to be expected.
There are known and unknown bugs.
No warranty.
Quick run of `example` with verbose logging.
```shell
➜ NOMAD_E2E_VERBOSE=1 go test -v
=== RUN TestExample
=== RUN TestExample/testSleep
util3.go:25: register (service) job: "sleep-809"
util3.go:25: checking eval: 9f0ae04d-7259-9333-3763-44d0592d03a1, status: pending
util3.go:25: checking eval: 9f0ae04d-7259-9333-3763-44d0592d03a1, status: complete
util3.go:25: checking deployment: a85ad2f8-269c-6620-d390-8eac7a9c397d, status: running
util3.go:25: checking deployment: a85ad2f8-269c-6620-d390-8eac7a9c397d, status: running
util3.go:25: checking deployment: a85ad2f8-269c-6620-d390-8eac7a9c397d, status: running
util3.go:25: checking deployment: a85ad2f8-269c-6620-d390-8eac7a9c397d, status: running
util3.go:25: checking deployment: a85ad2f8-269c-6620-d390-8eac7a9c397d, status: successful
util3.go:25: deployment a85ad2f8-269c-6620-d390-8eac7a9c397d was a success
util3.go:25: deregister job "sleep-809"
util3.go:25: system gc
=== RUN TestExample/testNamespace
util3.go:25: apply namespace "example-291"
util3.go:25: register (service) job: "sleep-967"
util3.go:25: checking eval: a2a2303a-adf1-2621-042e-a9654292e569, status: pending
util3.go:25: checking eval: a2a2303a-adf1-2621-042e-a9654292e569, status: complete
util3.go:25: checking deployment: 3395e9a8-3ffc-8990-d5b8-cc0ce311f302, status: running
util3.go:25: checking deployment: 3395e9a8-3ffc-8990-d5b8-cc0ce311f302, status: running
util3.go:25: checking deployment: 3395e9a8-3ffc-8990-d5b8-cc0ce311f302, status: running
util3.go:25: checking deployment: 3395e9a8-3ffc-8990-d5b8-cc0ce311f302, status: successful
util3.go:25: deployment 3395e9a8-3ffc-8990-d5b8-cc0ce311f302 was a success
util3.go:25: deregister job "sleep-967"
util3.go:25: system gc
util3.go:25: cleanup namespace "example-291"
=== RUN TestExample/testEnv
util3.go:25: register (batch) job: "env-582"
util3.go:25: checking eval: 600f3bce-ea17-6d13-9d20-9d9eb2a784f7, status: pending
util3.go:25: checking eval: 600f3bce-ea17-6d13-9d20-9d9eb2a784f7, status: complete
util3.go:25: deregister job "env-582"
util3.go:25: system gc
--- PASS: TestExample (10.08s)
--- PASS: TestExample/testSleep (5.02s)
--- PASS: TestExample/testNamespace (4.02s)
--- PASS: TestExample/testEnv (1.03s)
PASS
ok github.com/hashicorp/nomad/e2e/example 10.079s
```
* cluster3: use filter for kernel.name instead of filtering manually
Unfortunately, due to the split-build nature of the Ember app and Storybook,
it isn't possible to import Mirage in the Storybook context to control
scenarios via a knob :(