It has been decided we're going to live in a many-core world. Let's
take advantage of that and parallelize these state store tests, which
all run in memory and are largely CPU bound.
An unscientific benchmark demonstrating the improvement:
```
[mp state (master)] $ go test
PASS
ok github.com/hashicorp/nomad/nomad/state 5.162s

[mp state (f-parallelize-state-store-tests)] $ go test
PASS
ok github.com/hashicorp/nomad/nomad/state 1.527s
```
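For context, a minimal sketch of the pattern (the test names and bodies below are hypothetical stand-ins, not the real state store tests): Go runs every test that calls `t.Parallel()` concurrently with the other parallel tests in the package, which is what lets in-memory, CPU-bound tests spread across cores.

```go
package state

import "testing"

// Calling t.Parallel() marks a test as safe to run concurrently with the
// other parallel tests in the package.
func TestStateStore_HypotheticalA(t *testing.T) {
	t.Parallel()
	if got := 2 + 2; got != 4 {
		t.Fatalf("got %d", got)
	}
}

func TestStateStore_HypotheticalB(t *testing.T) {
	t.Parallel()
	if got := len("nomad"); got != 5 {
		t.Fatalf("got %d", got)
	}
}
```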
If an ACL request is unauthenticated, we should honor the anonymous token.
This PR makes a few changes:
* `GetPolicy` endpoints may return the policy if the anonymous policy allows
  it, or return a permission denied error otherwise.
* `ListPolicies` returns an empty policy list, or one containing the anonymous
  policy if it exists.
Without this PR, we return an incomprehensible error.
Before:
```
$ curl http://localhost:4646/v1/acl/policy/doesntexist; echo
acl token lookup failed: index error: UUID must be 36 characters
$ curl http://localhost:4646/v1/acl/policies; echo
acl token lookup failed: index error: UUID must be 36 characters
```
After:
```
$ curl http://localhost:4646/v1/acl/policy/doesntexist; echo
Permission denied
$ curl http://localhost:4646/v1/acl/policies; echo
[]
```
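As a rough illustration of the fallback idea only (the types and helper below are hypothetical, not Nomad's actual ACL resolver): when the request carries no secret ID, resolve to the anonymous token instead of attempting a lookup that fails with a UUID error.

```go
package main

import "fmt"

// ACLToken is a pared-down, hypothetical stand-in for Nomad's ACL token type.
type ACLToken struct {
	Name     string
	Policies []string
}

// anonymousToken mirrors the idea of a built-in anonymous token with its own
// policy set; the real definition lives elsewhere in Nomad.
var anonymousToken = &ACLToken{Name: "Anonymous Token", Policies: []string{"anonymous"}}

// resolveToken falls back to the anonymous token when the request carries no
// secret ID, rather than failing the token lookup outright.
func resolveToken(secretID string, lookup func(string) (*ACLToken, error)) (*ACLToken, error) {
	if secretID == "" {
		return anonymousToken, nil
	}
	return lookup(secretID)
}

func main() {
	token, _ := resolveToken("", nil) // unauthenticated request
	fmt.Println(token.Name)           // "Anonymous Token"
}
```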
Rename SnapshotAfter to SnapshotMinIndex. The old name was not
technically accurate. SnapshotAtOrAfter is more accurate, but wordy and
still lacks context about what precisely it is at or after (the index).
SnapshotMinIndex was chosen as it describes the action (snapshot), a
constraint (minimum), and the object of the constraint (index).
Fix a case where `node.StatusUpdatedAt` was manipulated directly in
memory.
This ensures that `StatusUpdatedAt` is set in the raft layer, and that
the field is also updated when node drain/eligibility is updated.
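A rough sketch of the idea, using hypothetical pared-down types rather than Nomad's real FSM: the timestamp travels in the raft request and is applied to a copy of the node inside the state store update, so every server records the same value instead of mutating the shared struct in memory.

```go
package main

import (
	"fmt"
	"time"
)

// Hypothetical stand-ins for Nomad's types.
type Node struct {
	ID                    string
	SchedulingEligibility string
	StatusUpdatedAt       int64
}

type NodeUpdateEligibilityRequest struct {
	NodeID      string
	Eligibility string
	UpdatedAt   int64 // chosen once, replicated through raft
}

// applyEligibilityUpdate mimics applying the request in the state store:
// work on a copy of the stored node and take the timestamp from the request
// rather than calling time.Now() on each server.
func applyEligibilityUpdate(existing *Node, req *NodeUpdateEligibilityRequest) *Node {
	updated := *existing // copy-on-write; never touch the stored node directly
	updated.SchedulingEligibility = req.Eligibility
	updated.StatusUpdatedAt = req.UpdatedAt
	return &updated
}

func main() {
	node := &Node{ID: "node-1", SchedulingEligibility: "eligible"}
	req := &NodeUpdateEligibilityRequest{
		NodeID:      node.ID,
		Eligibility: "ineligible",
		UpdatedAt:   time.Now().Unix(),
	}
	fmt.Println(applyEligibilityUpdate(node, req).StatusUpdatedAt)
}
```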
This fixes a bug in the state store during plan apply. When
denormalizing preempted allocations, it incorrectly set the preemptor's
job during the update, which eventually caused a panic downstream in the
client. Added a test assertion that failed before and passes after this fix.
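As an illustration of the shape of the fix (hypothetical types, not the real denormalization code): when filling in stored allocation fields during plan apply, only allocations that actually belong to the updated job should have their `Job` pointer replaced; preempted allocations from other jobs keep the job they were created with.

```go
package main

import "fmt"

// Hypothetical, pared-down stand-ins for Nomad's structs.
type Job struct{ ID string }

type Allocation struct {
	ID    string
	JobID string
	Job   *Job
}

// denormalize attaches the updated job only to its own allocations; preempted
// allocations from other jobs retain their original job.
func denormalize(allocs []*Allocation, updatedJob *Job) []*Allocation {
	out := make([]*Allocation, 0, len(allocs))
	for _, existing := range allocs {
		alloc := *existing // copy; never mutate the stored allocation
		if alloc.JobID == updatedJob.ID {
			alloc.Job = updatedJob
		}
		out = append(out, &alloc)
	}
	return out
}

func main() {
	webJob := &Job{ID: "web"}
	batchJob := &Job{ID: "batch"}
	allocs := []*Allocation{
		{ID: "a1", JobID: "web", Job: webJob},
		{ID: "a2", JobID: "batch", Job: batchJob}, // preempted alloc from another job
	}
	for _, a := range denormalize(allocs, &Job{ID: "web"}) {
		fmt.Println(a.ID, a.Job.ID)
	}
}
```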
Fixes https://github.com/hashicorp/nomad/issues/4299
Upon investigating this case further, we determined the issue to be a race between applying the `JobBatchDeregisterRequest` FSM operation and processing job-deregister evals.
Processing job-deregister evals should wait until the FSM log message finishes applying, by using the snapshot index. However, with `JobBatchDeregister`, applying any single job deregistration in the batch incremented the snapshot index prematurely, triggering processing of job-deregister evals too early. When a Nomad server received an eval for a job in the batch that had yet to be deleted, it could accidentally re-run the job depending on the state of its allocations.
This change ensures that we deregister all of the jobs and insert all evals in a single transaction, thus blocking processing of the related evals until deregistration completes.
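A minimal sketch of the single-transaction shape using go-memdb (which backs Nomad's state store); the schema, tables, and types below are illustrative only, not Nomad's real ones:

```go
package main

import (
	"fmt"

	memdb "github.com/hashicorp/go-memdb"
)

type Job struct{ ID string }
type Eval struct{ ID, JobID string }

func main() {
	schema := &memdb.DBSchema{
		Tables: map[string]*memdb.TableSchema{
			"jobs": {
				Name: "jobs",
				Indexes: map[string]*memdb.IndexSchema{
					"id": {Name: "id", Unique: true, Indexer: &memdb.StringFieldIndex{Field: "ID"}},
				},
			},
			"evals": {
				Name: "evals",
				Indexes: map[string]*memdb.IndexSchema{
					"id": {Name: "id", Unique: true, Indexer: &memdb.StringFieldIndex{Field: "ID"}},
				},
			},
		},
	}
	db, err := memdb.NewMemDB(schema)
	if err != nil {
		panic(err)
	}

	// Seed a couple of batch jobs.
	seed := db.Txn(true)
	seed.Insert("jobs", &Job{ID: "batch-1"})
	seed.Insert("jobs", &Job{ID: "batch-2"})
	seed.Commit()

	// The fix, in miniature: delete every job in the batch and insert every
	// deregister eval inside one write transaction, so readers never observe
	// a partially applied batch.
	txn := db.Txn(true)
	defer txn.Abort() // no-op once Commit has run
	for _, jobID := range []string{"batch-1", "batch-2"} {
		if raw, err := txn.First("jobs", "id", jobID); err == nil && raw != nil {
			txn.Delete("jobs", raw)
		}
	}
	for _, eval := range []*Eval{{ID: "eval-1", JobID: "batch-1"}, {ID: "eval-2", JobID: "batch-2"}} {
		txn.Insert("evals", eval)
	}
	txn.Commit()

	fmt.Println("batch deregister applied atomically")
}
```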