The test inserts an alloc into the server state but expects the client to
start the alloc runner for it almost immediately.
Here, we add a retry loop to check that the client eventually starts all
expected alloc runners.
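Roughly, the retry uses Nomad's `testutil.WaitForResult` helper; this is a sketch only, where the `getAllocRunners` accessor and the `expected` set stand in for the test's actual values:
```go
func TestClient_AllocRunners(t *testing.T) {
	// ... server/client setup elided; alloc inserted into server state ...
	testutil.WaitForResult(func() (bool, error) {
		runners := client.getAllocRunners() // hypothetical accessor for this sketch
		if len(runners) != len(expected) {
			return false, fmt.Errorf("expected %d alloc runners, found %d",
				len(expected), len(runners))
		}
		return true, nil
	}, func(err error) {
		t.Fatalf("alloc runners did not start: %v", err)
	})
}
```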
Pick up the panic fix from f1bd0923b8
Some CircleCI builds fail somewhat mysteriously, e.g. https://circleci.com/gh/hashicorp/nomad/52676, due to this bug. The test JSON output contains the following:
```
{"Time":"2020-03-28T12:03:08.602544522Z","Action":"output","Package":"github.com/hashicorp/nomad/client/allocrunner","Test":"TestGroupServiceHook_Update08Alloc","Output":"panic: send on closed channel\n"}
{"Time":"2020-03-28T12:03:08.602576075Z","Action":"output","Package":"github.com/hashicorp/nomad/client/allocrunner","Test":"TestGroupServiceHook_Update08Alloc","Output":"\n"}
{"Time":"2020-03-28T12:03:08.602584429Z","Action":"output","Package":"github.com/hashicorp/nomad/client/allocrunner","Test":"TestGroupServiceHook_Update08Alloc","Output":"goroutine 403 [running]:\n"}
{"Time":"2020-03-28T12:03:08.602590561Z","Action":"output","Package":"github.com/hashicorp/nomad/client/allocrunner","Test":"TestGroupServiceHook_Update08Alloc","Output":"github.com/hashicorp/nomad/vendor/github.com/stretchr/testify/assert.Eventually.func1(0xc000a78000, 0xc0009cf160)\n"}
{"Time":"2020-03-28T12:03:08.602598464Z","Action":"output","Package":"github.com/hashicorp/nomad/client/allocrunner","Test":"TestGroupServiceHook_Update08Alloc","Output":"\t/home/circleci/go/src/github.com/hashicorp/nomad/vendor/github.com/stretchr/testify/assert/assertions.go:1494 +0x47\n"}
{"Time":"2020-03-28T12:03:08.602604952Z","Action":"output","Package":"github.com/hashicorp/nomad/client/allocrunner","Test":"TestGroupServiceHook_Update08Alloc","Output":"created by github.com/hashicorp/nomad/vendor/github.com/stretchr/testify/assert.Eventually\n"}
```
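For context, here is a minimal sketch (not testify's actual code) of how a polling helper like `assert.Eventually` can hit this panic: a condition goroutine may still be blocked sending its result when the caller returns and closes the channel.
```go
package main

import "time"

// eventually polls cond until it returns true or the timeout elapses.
// The hazard: if the timeout fires while a spawned goroutine is still
// blocked on its send, the deferred close makes that send panic with
// "send on closed channel". Buffering the channel (or never closing it)
// avoids the race.
func eventually(cond func() bool, timeout, tick time.Duration) bool {
	ch := make(chan bool) // unbuffered: the send below can block
	defer close(ch)       // racy close: a late sender will panic
	ticker := time.NewTicker(tick)
	defer ticker.Stop()
	timer := time.After(timeout)
	for {
		select {
		case <-timer:
			return false // a goroutine may still be trying to send
		case <-ticker.C:
			go func() { ch <- cond() }()
		case ok := <-ch:
			if ok {
				return true
			}
		}
	}
}
```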
* added method to retrieve all scaling policies for use in snapshotting, plus test
* better testing for ScalingPoliciesByNamespace
* added scaling policy snapshot persist and restore (and test of restore); the persist pass is sketched below
Manually tested snapshot restore.
resolves #7539
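A rough illustration of the persist pass, following the pattern Nomad's FSM uses for other tables; the function name, iterator, and snapshot-type constant here are assumptions, not necessarily the actual code:
```go
// persistScalingPolicies walks the scaling policy table and writes each
// policy to the snapshot sink, tagged with its record type.
func (s *nomadSnapshot) persistScalingPolicies(sink raft.SnapshotSink,
	encoder *codec.Encoder) error {
	ws := memdb.NewWatchSet()
	policies, err := s.snap.ScalingPolicies(ws) // the new all-policies iterator
	if err != nil {
		return err
	}
	for raw := policies.Next(); raw != nil; raw = policies.Next() {
		policy := raw.(*structs.ScalingPolicy)
		sink.Write([]byte{byte(ScalingPolicySnapshot)})
		if err := encoder.Encode(policy); err != nil {
			return err
		}
	}
	return nil
}
```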
Looks like the latest `github.com/docker/docker/registry.ResolveAuthConfig` expects
`github.com/docker/docker/api/types.AuthConfig` rather than
`github.com/docker/cli/cli/config/types.AuthConfig`. The two types are
identical but live in different packages.
Here, we embed `registry.ResolveAuthConfig` from the upstream repo, but with
the signature we need.
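A sketch of the embedded helper, assuming it mirrors the upstream lookup logic but takes the CLI package's type; the hostname normalization shown is simplified:
```go
package docker

import (
	"strings"

	"github.com/docker/cli/cli/config/types"
)

// resolveAuthConfig mirrors upstream registry.ResolveAuthConfig, but
// accepts and returns the CLI package's AuthConfig type.
func resolveAuthConfig(authConfigs map[string]types.AuthConfig, repository string) types.AuthConfig {
	// Exact key match wins.
	if c, ok := authConfigs[repository]; ok {
		return c
	}
	// Otherwise match on the normalized registry hostname, so
	// "https://registry.example.com/v1/" and "registry.example.com"
	// resolve to the same credentials.
	for registry, c := range authConfigs {
		if repository == convertToHostname(registry) {
			return c
		}
	}
	return types.AuthConfig{}
}

// convertToHostname strips the scheme and path from a registry URL.
func convertToHostname(url string) string {
	stripped := strings.TrimPrefix(strings.TrimPrefix(url, "https://"), "http://")
	return strings.SplitN(stripped, "/", 2)[0]
}
```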
Links ending with / and links without a terminal / are handled the same
according to the Netlify redirect documentation.
(cherry picked from commit 91171c6a1dd74e3396a66e35c27c6ec858c5977f)
* nomad/structs/csi: delete just one plugin type from a node
* nomad/structs/csi: add DeleteAlloc
* nomad/state/state_store: add deleteJobFromPlugin
* nomad/state/state_store: use DeleteAlloc not DeleteNodeType
* move CreateTestCSIPlugin to state to avoid an import cycle
* nomad/state/state_store_test: delete a plugin by deleting its jobs
* nomad/*_test: move CreateTestCSIPlugin to state
* nomad/state/state_store: update one plugin per transaction (sketched below)
* command/plugin_status_test: move CreateTestCSIPlugin
* nomad: csi: handle nil in CSIPlugin methods, for clarity
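To illustrate the one-plugin-per-transaction item above, a sketch under assumed names (`Copy`, `DeleteJob`, the table names) of how a job's removal is reflected on a plugin within a single state-store transaction:
```go
// deleteJobFromPlugin copies the plugin, removes the job's contribution,
// and writes the updated plugin back inside the caller's transaction, so
// each plugin update commits (or aborts) as a unit.
func (s *StateStore) deleteJobFromPlugin(index uint64, txn *memdb.Txn,
	plug *structs.CSIPlugin, job *structs.Job) error {
	plug = plug.Copy() // never mutate objects owned by memdb in place
	plug.DeleteJob(job)
	plug.ModifyIndex = index
	if err := txn.Insert("csi_plugins", plug); err != nil {
		return fmt.Errorf("csi_plugins update error: %v", err)
	}
	return txn.Insert("index", &IndexEntry{"csi_plugins", index})
}
```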
When a node has no host volumes, the API response will
have a null value for the HostVolumes attribute, which
in turn becomes a null value instead of an empty array
in the store. This protects against that, ensuring
HostVolumes is always an array.
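The actual fix is in the UI's data layer; as a language-neutral illustration of the same defensive pattern (all names here are invented), normalize at the boundary so a null decodes to an empty collection:
```go
// HostVolume is a stand-in type for this sketch.
type HostVolume struct{ Name, Path string }

// normalizeHostVolumes treats a null/missing HostVolumes field (which
// decodes to nil) as an empty slice, so consumers can always iterate
// without nil checks.
func normalizeHostVolumes(vols []HostVolume) []HostVolume {
	if vols == nil {
		return []HostVolume{}
	}
	return vols
}
```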
If a volume is not in use and its external ID is unchanged, it should not be an error to register it again.
* nomad/state/state_store: make volume registration idempotent
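A sketch of the idempotency check, with names assumed rather than taken from the actual change:
```go
// upsertVolume allows re-registering an existing volume as long as it is
// not in use and the registration would not change its external ID.
func (s *StateStore) upsertVolume(index uint64, txn *memdb.Txn, v *structs.CSIVolume) error {
	raw, err := txn.First("csi_volumes", "id", v.Namespace, v.ID)
	if err != nil {
		return fmt.Errorf("volume existence check error: %v", err)
	}
	if raw != nil {
		old := raw.(*structs.CSIVolume)
		switch {
		case old.InUse(): // assumed helper: does the volume have active claims?
			return fmt.Errorf("volume exists and is in use: %s", v.ID)
		case old.ExternalID != v.ExternalID:
			return fmt.Errorf("volume exists and can't be updated: %s", v.ID)
		default:
			v.CreateIndex = old.CreateIndex // idempotent re-register
		}
	}
	v.ModifyIndex = index
	return txn.Insert("csi_volumes", v)
}
```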
Fix a regression where we accidentally started treating non-AWS
environments as AWS environments, resulting in bad networking settings.
Two factors are at play:
First, in [1], we accidentally switched the definitive AWS check from
`ami-id` to `instance-id`. This meant that Nomad started
treating more environments as AWS; e.g. Hetzner implements `instance-id`
but not `ami-id`.
Second, some of these environments return empty values instead of
errors! Hetzner returns an empty 200 response for `local-ipv4`, resulting
in a bad networking configuration.
This change fixes the situation by restoring the check to `ami-id` and
ensuring that we only set the network configuration when the IP address is
non-empty. We also trim whitespace from metadata responses, to be more defensive.
[1] https://github.com/hashicorp/nomad/pull/6779
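A minimal sketch of the hardened checks, with the helper names assumed:
```go
package fingerprint

import "strings"

// isAWS sketches the restored detection logic: only the ami-id key
// distinguishes EC2, since providers like Hetzner also serve instance-id.
// Responses are trimmed so whitespace-only bodies count as empty.
func isAWS(lookup func(key string) (string, error)) bool {
	amiID, err := lookup("ami-id")
	return err == nil && strings.TrimSpace(amiID) != ""
}

// networkAddr returns an advertise address only when the metadata service
// produced a non-empty local-ipv4; an empty 200 response (as Hetzner
// returns) yields no address rather than a bad one.
func networkAddr(lookup func(key string) (string, error)) (string, bool) {
	ip, err := lookup("local-ipv4")
	ip = strings.TrimSpace(ip)
	if err != nil || ip == "" {
		return "", false
	}
	return ip, true
}
```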