This PR
- upgrades the serf library
- has the test start the join process using the un-joined server first
- disables schedulers on the servers
- uses the WaitForLeader and wantPeers helpers
It's not clear which of these changes, if any, actually reduces the flakiness of this test.
* chore: bump to latest docs-page
* fix: bump to react-consent-manager patch
* chore: bump to consent-manager with events dep
* chore: bump to stable consent-manager release
In PR #12108 we added missing fields to the plugin response, but we
didn't include the manual serialization steps that are required until
issue #10470 is resolved.
The behaviors of CSI plugins are governed by their capabilities as
defined by the CSI specification. When debugging plugin issues, it's
useful to know which behaviors are expected so they can be matched
against RPC calls made to the plugin allocations.
Expose the plugin capabilities, as named in the CSI spec, in the
`nomad plugin status -verbose` output.
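As a rough illustration (hypothetical types, not Nomad's actual structs), the verbose output can map capability flags to the names the CSI spec uses:

```go
// Hypothetical sketch: render a plugin's capability flags under the
// names the CSI spec uses, so the verbose status output can be
// matched against RPC calls made to the plugin allocations.
type controllerCaps struct {
	SupportsCreateDelete bool
	SupportsAttachDetach bool
	SupportsListVolumes  bool
}

func capabilityNames(c controllerCaps) []string {
	var names []string
	if c.SupportsCreateDelete {
		names = append(names, "CREATE_DELETE_VOLUME")
	}
	if c.SupportsAttachDetach {
		names = append(names, "PUBLISH_UNPUBLISH_VOLUME")
	}
	if c.SupportsListVolumes {
		names = append(names, "LIST_VOLUMES")
	}
	return names
}
```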
When the alloc runner claims a volume, an allocation for a previous
version of the job may still have the volume claimed because it's
still shutting down. In this case we'll receive an error from the
server. Retry on that error until the claim succeeds or until a very
long timeout expires, to give operators a chance to recover broken
plugins.
Make the alloc runner hook tolerant of temporary RPC failures.
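A minimal sketch of that retry behavior, with hypothetical names and an assumed timeout value (the real hook goes through Nomad's RPC plumbing and its own retry utilities):

```go
import (
	"context"
	"fmt"
	"time"
)

// claimWithRetry retries a volume claim until it succeeds or a very
// long timeout expires. Early failures are expected while a previous
// allocation still holds the claim during shutdown, and the long
// deadline gives operators time to recover a broken plugin.
func claimWithRetry(ctx context.Context, claim func() error) error {
	ctx, cancel := context.WithTimeout(ctx, 24*time.Hour) // assumed deadline
	defer cancel()

	interval := time.Second
	for {
		err := claim()
		if err == nil {
			return nil
		}
		select {
		case <-ctx.Done():
			return fmt.Errorf("timed out claiming volume: %w", err)
		case <-time.After(interval):
		}
		if interval < time.Minute {
			interval *= 2 // back off, capped at one minute
		}
	}
}
```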
* Remove redundant schedulable check in `FreeWriteClaims`. If a volume
has been created but not yet claimed, its capabilities will be checked
in `WriteSchedulable` at both scheduling time and claim time. We don't
need to also check them in the `FreeWriteClaims` method.
* Enforce maximum volume claims for writers.
When the scheduler checks feasibility for CSI volumes, the check is
fairly loose: earlier versions of the same job are not counted as
active claims. This allows the scheduler to place new allocations
for the new version of a job, under the assumption that we'll replace
the existing allocations and their volume claims.
But when the alloc runner claims the volume, we need to enforce the
active claims even if they're for allocations of an earlier version of
the job. Otherwise we'll try to mount a volume that's currently being
unmounted, and this will cause replacement allocations to frequently
fail.
* Enforce the single-node reader check for read-only volumes. When the
alloc runner makes a claim for a read-only volume, we previously
checked only that the volume was potentially schedulable, not that it
actually had free read claims; now free read claims are enforced at
claim time as well (see the sketch after this list).
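A minimal sketch of the claim-time enforcement described in the last two items, using hypothetical field names rather than Nomad's actual volume structs:

```go
import "errors"

// Hypothetical sketch: at claim time, active claims count even when
// they belong to allocations from an earlier version of the same
// job, unlike the scheduler's looser feasibility check.
type volume struct {
	SingleWriter bool // access mode allows at most one writer
	SingleReader bool // access mode allows at most one reader
	WriteClaims  int  // active write claims across all job versions
	ReadClaims   int  // active read claims across all job versions
}

func checkClaim(v *volume, write bool) error {
	if write && v.SingleWriter && v.WriteClaims > 0 {
		return errors.New("volume max write claims reached")
	}
	if !write && v.SingleReader && v.ReadClaims > 0 {
		return errors.New("volume max read claims reached")
	}
	return nil
}
```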
If a plugin job fails before successfully fingerprinting the plugins,
the plugin will not exist when we try to delete the job. Tolerate
missing plugins.
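A sketch of what tolerating the missing plugin can look like (hypothetical names):

```go
type plugin struct{ ID string }

func (p *plugin) delete() error { return nil } // stub for the sketch

// cleanupPlugin tolerates a missing plugin: a plugin job that failed
// before fingerprinting never registered a plugin, so finding none
// during job deletion is not an error.
func cleanupPlugin(lookup func(id string) *plugin, pluginID string) error {
	p := lookup(pluginID)
	if p == nil {
		return nil // never fingerprinted; nothing to clean up
	}
	return p.delete()
}
```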
The dynamic plugin registry assumes that plugins are singletons, which
matches the behavior of other Nomad plugins. But because dynamic
plugins like CSI are implemented by allocations, we need to handle the
possibility of multiple allocations for a given plugin type + ID, as
well as behaviors around interleaved allocation starts and stops.
Update the data structure for the dynamic registry so that more recent
allocations take over as the instance manager singleton, but we still
preserve the previous running allocations so that restores work
without racing.
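A sketch of that data structure, with hypothetical names (the real registry lives in Nomad's client dynamic plugin code):

```go
import "sync"

type pluginKey struct {
	Type string // e.g. "csi-controller"
	ID   string
}

type pluginInstance struct {
	AllocID string
	// ... connection details, socket path, etc.
}

// registry tolerates multiple live allocations per plugin type+ID.
// The most recently registered allocation serves as the instance
// manager, while older allocations are preserved so restores don't
// race with interleaved allocation starts and stops.
type registry struct {
	mu      sync.Mutex
	plugins map[pluginKey][]*pluginInstance // newest last
}

func newRegistry() *registry {
	return &registry{plugins: make(map[pluginKey][]*pluginInstance)}
}

func (r *registry) register(k pluginKey, inst *pluginInstance) {
	r.mu.Lock()
	defer r.mu.Unlock()
	r.plugins[k] = append(r.plugins[k], inst)
}

func (r *registry) deregister(k pluginKey, allocID string) {
	r.mu.Lock()
	defer r.mu.Unlock()
	insts := r.plugins[k]
	for i, inst := range insts {
		if inst.AllocID == allocID {
			r.plugins[k] = append(insts[:i], insts[i+1:]...)
			break
		}
	}
}

// current returns the instance manager: the most recent allocation.
func (r *registry) current(k pluginKey) (*pluginInstance, bool) {
	r.mu.Lock()
	defer r.mu.Unlock()
	insts := r.plugins[k]
	if len(insts) == 0 {
		return nil, false
	}
	return insts[len(insts)-1], true
}
```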
Multiple allocations can run on a client for the same plugin, even if
only during updates. Provide each plugin task with a unique path for
its control socket so that the tasks don't interfere with each other.
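A minimal sketch of deriving such a path (the directory layout shown is an assumption, not Nomad's actual scheme):

```go
import "path/filepath"

// pluginSocketPath returns a control socket path unique to the
// allocation, so two allocations running the same plugin never
// share a socket. Hypothetical layout:
//   <baseDir>/csi/<pluginID>/<allocID>/csi.sock
func pluginSocketPath(baseDir, pluginID, allocID string) string {
	return filepath.Join(baseDir, "csi", pluginID, allocID, "csi.sock")
}
```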
Detection of the full set of plugin capabilities was added in Nomad
1.1 for the volume creation workflow, but these capabilities were not
added to the API response for plugins.