There are two changes here, and some caveats/commentary:
1. The “State” table column was actually sorting only by status. The state was not an actual property, just something calculated in each client row as a product of status, isEligible, and isDraining. This PR adds isDraining as a component of compositeState so it can be used for sorting.
2. The Sortable mixin declares dependent keys that cause the sort to be live-updating, but only if the members of the array change (such as when a new client is added), not if any of the sortable properties change. This PR adds a SortableFactory function that generates a mixin whose listSorted computed property includes dependent keys for the sortable properties, so the table will live-update if any of the sortable properties change, not just the array members. There’s a warning if you use SortableFactory without dependent keys (i.e., via the original Sortable interface), so we can eventually migrate away from that usage.
We currently log an error if preemption is unable to find a suitable set of
allocations to preempt. This commit changes that to debug level since not finding
preemptable allocations is not an error condition.
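A minimal sketch of the kind of change involved, assuming an hclog-style logger; the package, function, and message below are illustrative, not the actual scheduler code:

```go
package preemption

import hclog "github.com/hashicorp/go-hclog"

// reportNoPreemption is an illustrative sketch of the log-level change:
// failing to find preemptable allocations is reported at debug level
// rather than as an error.
func reportNoPreemption(logger hclog.Logger, jobID string) {
	// Previously this would have been logger.Error(...).
	logger.Debug("no suitable allocations found to preempt", "job", jobID)
}
```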
Refactor the metrics end-to-end tests so they can be run with our e2e
test framework. The suite runs fabio/prometheus and a collection of jobs
that will cause metrics to be measured. We then query Prometheus to
ensure we're publishing those allocation metrics, as well as some
metrics from the clients.
Includes adding a placeholder for running the same tests on Windows.
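As a rough sketch of what the "query Prometheus" step could look like, using the Prometheus Go client (assuming a recent client_golang); the address and metric name are placeholders for illustration, not the actual test values:

```go
package metrics

import (
	"context"
	"fmt"
	"time"

	"github.com/prometheus/client_golang/api"
	v1 "github.com/prometheus/client_golang/api/prometheus/v1"
	"github.com/prometheus/common/model"
)

// queryPrometheus is an illustrative sketch, not the actual test code: it
// asks a Prometheus instance whether a given Nomad metric is being
// published and returns an error if no samples come back.
func queryPrometheus(addr, metric string) error {
	client, err := api.NewClient(api.Config{Address: addr})
	if err != nil {
		return err
	}

	ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
	defer cancel()

	result, warnings, err := v1.NewAPI(client).Query(ctx, metric, time.Now())
	if err != nil {
		return err
	}
	if len(warnings) > 0 {
		fmt.Println("prometheus warnings:", warnings)
	}

	vector, ok := result.(model.Vector)
	if !ok || vector.Len() == 0 {
		return fmt.Errorf("no samples found for %q", metric)
	}
	return nil
}
```

It could be invoked as, say, `queryPrometheus("http://localhost:9090", "nomad_client_allocs_memory_usage")`; the metric name here is a guess for illustration only.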
Stop joining the libcontainer executor process to the newly created task
container cgroup. This ensures that the cgroups are fully destroyed on
shutdown and makes the executor consistent with other plugin processes.
Previously, the executor process was added to the container cgroup so
that its resource usage would get aggregated along with the user
processes in our metric aggregation.
However, adding the executor process to the container cgroup introduces
several complications without much benefit:
First, it complicates cleanup. We must ensure that the executor is
removed from the container cgroup on shutdown, and we had a bug where we
missed removing it from the systemd cgroup: on launch the executor uses
`containerState.CgroupPaths`, which includes systemd, but on cleanup it
uses `cgroups.GetAllSubsystems`, which doesn't (a sketch below
illustrates this mismatch).
Second, it may have adverse side effects. When a user process is CPU
bound or uses too much memory, the executor should keep functioning
without risk of being killed by the OOM killer or throttled.
Third, it is inconsistent with other drivers and plugins. The Logmon and
DockerLogger processes aren't in the task cgroups, and neither are
containerd processes, even though containerd is equivalent to the
executor in responsibility.
Fourth, in my experience, when the executor process moves cgroups while
it's running, the cgroup accounting is odd. The cgroup
`memory.usage_in_bytes` doesn't seem to capture the full memory usage of
the executor process and becomes a red herring when investigating memory
issues.
For all the reasons above, I opted to have the executor remain in the
nomad agent cgroup; we can revisit this when we have a better story for
plugin process cgroup management.
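To illustrate the cleanup mismatch from the first point, here is a hedged sketch (not the actual executor code) contrasting the two views of the cgroup hierarchy; the paths below are made up for illustration:

```go
package main

import (
	"fmt"

	"github.com/opencontainers/runc/libcontainer/cgroups"
)

// Illustrative sketch only: the paths map stands in for the
// containerState.CgroupPaths the executor joined on launch.
func main() {
	// On systemd hosts, CgroupPaths contains a "systemd" entry, e.g.:
	paths := map[string]string{
		"memory":  "/sys/fs/cgroup/memory/nomad/task-abc",
		"cpu":     "/sys/fs/cgroup/cpu/nomad/task-abc",
		"systemd": "/sys/fs/cgroup/systemd/nomad/task-abc",
	}

	// Cleanup iterated cgroups.GetAllSubsystems(), which reads
	// /proc/cgroups and never lists "systemd", so that entry was missed.
	subsystems, err := cgroups.GetAllSubsystems()
	if err != nil {
		panic(err)
	}
	seen := make(map[string]bool, len(subsystems))
	for _, s := range subsystems {
		seen[s] = true
	}
	for name := range paths {
		if !seen[name] {
			fmt.Printf("cgroup %q joined on launch but skipped on cleanup\n", name)
		}
	}
}
```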
It has been decided we're going to live in a many-core world. Let's
take advantage of that and parallelize these state store tests, which
all run in memory and are largely CPU bound.
An unscientific benchmark demonstrating the improvement:

    [mp state (master)] $ go test
    PASS
    ok      github.com/hashicorp/nomad/nomad/state  5.162s

    [mp state (f-parallelize-state-store-tests)] $ go test
    PASS
    ok      github.com/hashicorp/nomad/nomad/state  1.527s
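The change itself is mechanical; a minimal sketch of the pattern (the test name is illustrative):

```go
package state

import "testing"

// Each state store test opts into parallel execution. Since these tests
// run against in-memory state and are largely CPU bound, this lets
// `go test` spread them across the available cores.
func TestStateStore_Example(t *testing.T) {
	t.Parallel()

	// ... set up an in-memory state store and exercise it ...
}
```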
> Sentinel-embedded applications can choose to whitelist or blacklist
> certain standard imports. Please reference the documentation for the
> Sentinel-enabled application you're using to determine if all standard
> imports are available.
The `ALLOC_INDEX` isn't guaranteed to be unique, and this has caused
some user confusion. The servers make a best-effort attempt to keep
this value unique from 0 to count-1, but when a task group has canaries,
indexes get reused because multiple job versions are running at the same
time. If users need a unique number to interpolate into their
application, they can get one by combining the job version and the
alloc index.
Co-Authored-By: Michael Schurter <mschurter@hashicorp.com>
Currently `nomad monitor -node-id` will panic when the node-id does not
match any nodes, as there is no bounds check for an empty result. Here
we return an error to the user when no nodes are found.
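A minimal sketch of the kind of guard involved, assuming the command resolves nodes by prefix through the api package; the function name and messages are illustrative, not the actual command code:

```go
package command

import (
	"fmt"

	"github.com/hashicorp/nomad/api"
)

// lookupNode is an illustrative sketch showing the empty-result check
// that replaces the panic when the given node ID prefix matches nothing.
func lookupNode(client *api.Client, nodeID string) (*api.NodeListStub, error) {
	nodes, _, err := client.Nodes().PrefixList(nodeID)
	if err != nil {
		return nil, fmt.Errorf("error querying node: %w", err)
	}
	if len(nodes) == 0 {
		// Previously the command indexed into this empty slice and panicked.
		return nil, fmt.Errorf("no node(s) with prefix or ID %q found", nodeID)
	}
	return nodes[0], nil
}
```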
Copy the updated version of freeport (sdk/freeport) and tweak it for use
in Nomad tests. This means staying below port 10000 to avoid conflicts
with the lib/freeport that is still transitively used by the old version
of consul that we vendor. Also provide implementations for finding
ephemeral ports on macOS and Windows.
Ports acquired through freeport are supposed to be returned to freeport,
which this change now also introduces. Many tests are modified to include
calls to a cleanup function for Server objects.
This should help quite a bit with some flaky tests, but not all of them.
Our port problems will not go away completely until we upgrade our vendor
version of consul. With Go modules, we'll probably do a 'replace' to swap
out other copies of freeport with the one now in 'nomad/helper/freeport'.
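A hedged sketch of the take-and-return pattern this introduces, assuming the copy in 'nomad/helper/freeport' keeps the MustTake/Return interface of the Consul SDK version it came from; the test itself is made up for illustration:

```go
package example_test

import (
	"testing"

	"github.com/hashicorp/nomad/helper/freeport"
)

// Illustrative only: ports are acquired from freeport and handed back
// when the test finishes, which is the new expectation this change adds.
func TestSomethingThatNeedsPorts(t *testing.T) {
	ports := freeport.MustTake(2) // reserve two free ports below 10000
	defer freeport.Return(ports)  // return them so other tests can reuse them

	// ... start a server on ports[0] and ports[1] and exercise it ...
}
```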
Operators commonly have docker logs aggregated using various tools and
don't need nomad to manage their docker logs. Worse, Nomad uses a
somewhat heavy docker api call to collect them and it seems to cause
problems when a client runs hundreds of log collections.
Here we add a knob to disable log collection completely in Nomad. In
this implementation, when log collection is disabled we avoid running
logmon and docker_logger for the docker tasks.
The downside is that once disabled, the `nomad logs ...` commands and
API no longer return logs, and operators must correlate alloc IDs with
their aggregated log info.
This is meant as a stopgap measure. Ideally, we'd follow up with at
least two changes:
First, we should optimize behavior where we can so that operators don't
need to disable docker log collection, potentially by reverting to the
pre-0.9 syslog aggregation in Linux environments, though with different
trade-offs.
Second, when/if logs are disabled, the nomad logs endpoints should look
up the docker logs API on demand. This ensures that the cost of log
collection is paid sparingly.
There is an undocumented way of mapping a dynamically allocated port to the container. This applies to bridge networking (necessary for Consul Connect enabled services) when you want to expose the service *directly*. It is needed when you use upstream Connect services but also need to expose the service by normal means. Going by the current documentation, you would have to use static ports to do so. The feature was introduced in #6189 but left undocumented.
You'd think since golangci-lint embeds misspell we could use that,
but it fails to run if it finds no Go source files, which is the
case in our website/ directory that we want to check.
gometalinter has been deprecated, with golangci-lint as its spiritual
and recommended successor. Here we switch to using it with an equivalent
configuration, albeit with newer versions of some linters.
To maintain compatibility with existing settings, we have a couple of
things disabled here, specifically:
- tests
  We have a lot of unused code in our tests that chokes deadcode.
  We should attempt to clean this up soon so that we can lint our
  test code.
- govet.check-shadowing = false
  This breaks on redefining `err`, which we do all over the Nomad
  codebase.
When trying to run this example, Nomad v0.10.2 raises the following error:
`Error getting job struct: Error parsing job file from example-ipv6.hcl: error parsing: At 33:22: Unknown token: 27:16 IDENT db`
Adding quotes around the port map `db` fixes the problem and the job works as expected.