This log line should be rare since:
1. Most tokens should be revoked synchronously, not via this async
batched method. Async revocation only takes place when Vault
connectivity is lost and after leader election, so that no revocations
are missed.
2. There should rarely be more than one batch (1,000 tokens) to revoke,
since the above conditions should be brief and infrequent.
3. The interval is 5 minutes, so this log line is emitted at *most*
once every 5 minutes (a rough sketch of the loop follows this list).
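The shape of that loop, as described above, is roughly the following. This is a minimal sketch rather than Nomad's actual implementation; the names `revocationLoop`, `pendingAccessors`, and the `revoke` callback are all hypothetical.

```go
package vault

import (
	"context"
	"sync"
	"time"
)

const (
	revocationInterval = 5 * time.Minute // at most one pass (and one log line) per tick
	maxBatchSize       = 1000            // at most one batch of 1,000 tokens per pass
)

// pendingAccessors is a hypothetical queue of Vault accessors awaiting
// asynchronous revocation (e.g. accumulated while Vault was unreachable).
type pendingAccessors struct {
	mu    sync.Mutex
	queue []string
}

// take removes and returns up to n accessors from the queue.
func (p *pendingAccessors) take(n int) []string {
	p.mu.Lock()
	defer p.mu.Unlock()
	if n > len(p.queue) {
		n = len(p.queue)
	}
	batch := append([]string(nil), p.queue[:n]...)
	p.queue = p.queue[n:]
	return batch
}

// requeue puts a failed batch back so it is retried on the next tick.
func (p *pendingAccessors) requeue(batch []string) {
	p.mu.Lock()
	defer p.mu.Unlock()
	p.queue = append(batch, p.queue...)
}

// revocationLoop drains the pending queue on a fixed interval, one batch
// per tick, which is why the log line discussed here appears at most once
// every five minutes.
func revocationLoop(ctx context.Context, pending *pendingAccessors,
	revoke func(context.Context, []string) error) {

	ticker := time.NewTicker(revocationInterval)
	defer ticker.Stop()

	for {
		select {
		case <-ctx.Done():
			return
		case <-ticker.C:
			batch := pending.take(maxBatchSize)
			if len(batch) == 0 {
				continue
			}
			if err := revoke(ctx, batch); err != nil {
				pending.requeue(batch)
			}
		}
	}
}
```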
What makes this log line rare is also what makes it interesting: due to
a bug prior to Nomad 0.11.2, some tokens may never be revoked, so Nomad
retries revoking them on every leader election. This caused a massive
buildup of old tokens that would never be properly revoked and purged.
Nomad 0.11.3 mostly fixed this, but still had a bug in purging revoked
tokens via Raft (fixed in #8553).
The `nomad.vault.distributed_tokens_revoked` metric is only ticked upon
successful revocation and purging, making any bugs or slowness in the
process difficult to detect.
Logging before the potentially slow revocation+purge operation is
performed gives users a much better indication of what activity is going
on should the process fail to make it to the metric.
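As a sketch of the ordering this change introduces, the revoke+purge step might look like the following. The function and parameter names are hypothetical; the metric key is the one named above, and `metrics.IncrCounter` (from go-metrics) is used purely for illustration.

```go
package vault

import (
	"context"
	"log"

	metrics "github.com/armon/go-metrics"
)

// revokeAndPurge is a hypothetical sketch of the slow path: revoke the
// accessors in Vault, then purge the bookkeeping entries via Raft.
func revokeAndPurge(ctx context.Context, accessors []string,
	revoke, purge func(context.Context, []string) error) error {

	// Log *before* the potentially slow work so operators can tell that a
	// revocation pass started even if it later stalls or fails.
	log.Printf("revoking %d Vault accessors", len(accessors))

	if err := revoke(ctx, accessors); err != nil {
		return err
	}
	if err := purge(ctx, accessors); err != nil {
		return err
	}

	// The metric is ticked only after both revocation and purging succeed,
	// so it can never surface a pass that started but did not finish.
	metrics.IncrCounter([]string{"nomad", "vault", "distributed_tokens_revoked"},
		float32(len(accessors)))
	return nil
}
```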
Displays all scale events in the form of an annotated line chart. When
annotations are clicked, the timestamp, message, and meta properties for
the event are displayed below the chart.
This makes use of the PR I recently had merged to `eslint-plugin-ember-a11y-testing`
to add linting that ensures an accessibility audit is called at least once per acceptance
test file. Once I have added linting for component tests, it can apply there too.
I added exclusions for the filesystem browser tests, which are covered by `behaviors/fs`,
and for the search test, which will involve significant overrides to Ember Power Select's
default templates.
Fixes https://github.com/hashicorp/nomad/issues/8544
This PR fixes a bug where `nomad job plan ...` always reports no changes if the submitted job contains scaling policies.
The issue has three contributing factors:
1. The plan endpoint doesn't populate the required scaling policy ID, unlike the job register endpoint.
2. The plan endpoint suppresses errors on job insertion; the insertion fails here because the scaling policy is missing its required ID.
3. The scheduler reports that no update is necessary when the relevant job isn't in the store (because the insertion failed).

This PR fixes the first two factors (a sketch of the shape of the fix follows). Changing the scheduler to be stricter might make sense, but could violate some idempotency invariant or make the scheduler more brittle.
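For illustration, a minimal, self-contained sketch of the first two fixes might look like this. The types and function names are invented for the example and are not Nomad's actual API; the point is only that scaling policy IDs get assigned before the plan-time insertion (mirroring the register endpoint) and that insertion errors are propagated instead of suppressed.

```go
package nomad

import (
	"crypto/rand"
	"encoding/hex"
	"fmt"
)

// ScalingPolicy and Job are stand-ins for the real structures.
type ScalingPolicy struct{ ID string }

type Job struct{ ScalingPolicies []*ScalingPolicy }

// ensureScalingPolicyIDs addresses factor 1: the register endpoint assigns
// policy IDs, so the plan endpoint must do the same before inserting the
// job into its snapshot.
func ensureScalingPolicyIDs(job *Job) error {
	for _, p := range job.ScalingPolicies {
		if p.ID != "" {
			continue
		}
		buf := make([]byte, 16)
		if _, err := rand.Read(buf); err != nil {
			return err
		}
		p.ID = hex.EncodeToString(buf)
	}
	return nil
}

// planJob addresses factor 2: propagate the insertion error instead of
// suppressing it, so a failed insert cannot silently turn into a
// "no changes" plan result.
func planJob(job *Job, insert func(*Job) error) error {
	if err := ensureScalingPolicyIDs(job); err != nil {
		return err
	}
	if err := insert(job); err != nil {
		return fmt.Errorf("plan: failed to insert job: %w", err)
	}
	return nil
}
```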