This would happen because a "no connection" error is raised after the second
request fails, on the assumption that the second request went to a server node.
However, if a user clicks stderr fast enough, the first and second requests both
go to the client node. This changes the logic to check whether the request is to
the server before deeming log streaming a total failure.
Typically a failover means that the client can't be reached. However, if
the client does eventually respond after the timeout period, the log will
stream indefinitely. This fixes that using an API that wasn't broadly
available when this code was first written.
The `CSIVolumeClaim` fields were added after 0.11.1, so claims made
before then may be missing those values. Repair them when we read the
volume out of the state store.
The `NodeID` field was added after 0.11.0, so we need to ensure it's
been populated during upgrades from 0.11.0.
Fixes #7681
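A minimal sketch of the repair-on-read idea covering both fixes, using hypothetical, trimmed-down types; the real state-store structs carry more fields and claim modes:

```go
package state

// Hypothetical, trimmed-down versions of the volume types.
type CSIVolumeClaimMode int

const (
	CSIVolumeClaimRead CSIVolumeClaimMode = iota
	CSIVolumeClaimWrite
)

type CSIVolumeClaim struct {
	AllocationID string
	NodeID       string // added after 0.11.0
	Mode         CSIVolumeClaimMode
}

type Allocation struct{ NodeID string }

type CSIVolume struct {
	WriteAllocs map[string]*Allocation
	WriteClaims map[string]*CSIVolumeClaim
}

// normalize repairs records written by older servers as the volume is
// read out of the state store.
func (v *CSIVolume) normalize() {
	for allocID, claim := range v.WriteClaims {
		if claim == nil {
			// Claim predates the CSIVolumeClaim fields: rebuild it.
			claim = &CSIVolumeClaim{AllocationID: allocID, Mode: CSIVolumeClaimWrite}
			v.WriteClaims[allocID] = claim
		}
		if claim.NodeID == "" {
			// NodeID may be empty after an upgrade from 0.11.0;
			// recover it from the allocation, if we still have it.
			if alloc := v.WriteAllocs[allocID]; alloc != nil {
				claim.NodeID = alloc.NodeID
			}
		}
	}
}
```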
The current behavior of the CPU fingerprinter in AWS is that it
reads the **current** speed from `/proc/cpuinfo` (`CPU MHz` field).
This is because the max CPU frequency is not available by reading
anything on the EC2 instance itself. Normally on Linux one would
look at e.g. `/sys/devices/system/cpu/cpuN/cpufreq/cpuinfo_max_freq`
or perhaps parse the values from the `CPU max MHz` field in
`/proc/cpuinfo`, but those values are not available.
Furthermore, no metadata about the CPU is made available in the
EC2 metadata service.
https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/instancedata-data-categories.html
Since `go-psutil` cannot determine the max CPU speed, it defaults to
the current CPU speed, which could be essentially any number between
0 and the true max. This is particularly bad on large, powerful
reserved instances, which often idle at ~800 MHz while Nomad does
its fingerprinting (typically IO-bound); Nomad then records that idle
speed as the max, resulting in a severe loss of available resources.
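For illustration, a minimal sketch of where the bad number comes from: on Linux, gopsutil's `cpu.Info` surfaces the current `CPU MHz` value from `/proc/cpuinfo`, so an idle machine reports its idle frequency:

```go
package main

import (
	"fmt"

	"github.com/shirou/gopsutil/cpu"
)

func main() {
	infos, err := cpu.Info()
	if err != nil {
		panic(err)
	}
	for _, info := range infos {
		// On an idle EC2 instance this prints the *current*
		// frequency (often ~800 MHz), not the advertised max.
		fmt.Printf("%s: %.0f MHz\n", info.ModelName, info.Mhz)
	}
}
```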
Since the CPU specification is unavailable programmatically (at least
without sudo), use a best-effort lookup table, sketched below. This
table was generated by going through every instance type in the AWS
documentation and copy-pasting the numbers.
https://aws.amazon.com/ec2/instance-types/
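A minimal sketch of the lookup approach; the map values below are illustrative stand-ins, not the authoritative numbers from the AWS page:

```go
// instanceTypeMHz maps an EC2 instance type (as reported by the
// metadata service) to its advertised clock speed in MHz.
var instanceTypeMHz = map[string]float64{
	"m5.large":   3100, // illustrative values only
	"c5.2xlarge": 3400,
	"r5.xlarge":  3100,
}

// lookupCPUMHz prefers the table, falling back to the measured
// current speed for instance types that haven't been added yet.
func lookupCPUMHz(instanceType string, measuredMHz float64) float64 {
	if mhz, ok := instanceTypeMHz[instanceType]; ok {
		return mhz
	}
	return measuredMHz
}
```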
This approach is obviously not ideal, as future instance types will
need to be added as AWS introduces them. However, using the table
should still be an improvement over the status quo, since right now
Nomad miscalculates available CPU resources on all instance types.
Use v1.1.5 of go-msgpack/codec/codecgen, so go-msgpack codecgen matches
the library version.
We branched off earlier to pick up f51b518921, but apparently that's
not needed, as we can customize the codec package path via the `-c`
argument.
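For reference, a hypothetical `go:generate` line showing how the `-c` flag points codecgen at the forked codec package (file names here are illustrative):

```go
//go:generate codecgen -c github.com/hashicorp/go-msgpack/codec -o structs.generated.go structs.go
```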
Examples for HTTP-based task-group service health checks are
covered by the `countdash` demo, but gRPC checks currently
have no runnable examples.
This PR adds a trivial gRPC-enabled application that provides
a service implementing the standard gRPC health-check interface.
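A minimal sketch of such an application, using the stock health service shipped with grpc-go (the port and serving-status details are illustrative):

```go
package main

import (
	"log"
	"net"

	"google.golang.org/grpc"
	"google.golang.org/grpc/health"
	healthpb "google.golang.org/grpc/health/grpc_health_v1"
)

func main() {
	lis, err := net.Listen("tcp", ":50051")
	if err != nil {
		log.Fatal(err)
	}
	srv := grpc.NewServer()
	// health.NewServer implements the standard grpc.health.v1.Health
	// service that gRPC health checks probe.
	hs := health.NewServer()
	hs.SetServingStatus("", healthpb.HealthCheckResponse_SERVING)
	healthpb.RegisterHealthServer(srv, hs)
	log.Fatal(srv.Serve(lis))
}
```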
Running `make dev` runs `hclfmt`, but this isn't checked as part of
CI. That makes it possible to merge unformatted HCL and Nomad
jobspecs that later leave dirty git staging areas when developers
pull master.
This changeset adds HCL linting to the `make check` target.
Adds a `CSIVolumeClaim` type to track current and past claims on a
volume. This allows a client RPC to fail during node or controller
detachment without having to keep the allocation around after the
first garbage collection eval.
This changeset lays groundwork for moving the actual detachment RPCs
into a volume watching loop outside the GC eval.
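A rough sketch of how such claims might be tracked through detachment, with hypothetical state names; the real type and states may differ:

```go
// CSIVolumeClaimState records how far detachment has progressed, so
// a failed RPC can be retried by a later pass instead of blocking
// garbage collection of the allocation.
type CSIVolumeClaimState int

const (
	CSIVolumeClaimStateTaken              CSIVolumeClaimState = iota
	CSIVolumeClaimStateNodeDetached       // node unpublish succeeded
	CSIVolumeClaimStateControllerDetached // controller unpublish succeeded
	CSIVolumeClaimStateReadyToFree        // claim can be dropped
)
```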
Failed requests due to API client errors are to be marked as DEBUG.
The Error log level should be reserved for signaling problems with the
cluster that are actionable for Nomad system operators. Logs due to
misbehaving API clients don't represent a system-level problem and seem
spurious to Nomad maintainers at best. These log messages can also be
an attack vector for denial-of-service attacks, by filling server disk
space with spurious log messages.
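A minimal sketch of the intended split, assuming an hclog-style logger and a hypothetical `isClientError` classifier:

```go
package agent

import (
	hclog "github.com/hashicorp/go-hclog"
)

// isClientError is a hypothetical classifier: true when the failure
// was caused by the caller (validation, permissions) rather than by
// the cluster itself.
func isClientError(err error) bool {
	return false // placeholder; e.g. match permission-denied errors
}

// logRPCError logs at a level matching who caused the failure.
func logRPCError(logger hclog.Logger, method string, err error) {
	if isClientError(err) {
		// Misbehaving clients aren't actionable for operators, and
		// logging them at ERROR lets them fill up server disks.
		logger.Debug("request failed", "method", method, "error", err)
		return
	}
	logger.Error("request failed", "method", method, "error", err)
}
```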