Sometimes a job would be created with a running deployment, which made
the increment button disabled.
While tracking down the root cause, I also changed the waitUntil pattern
to match the StepperInput technique, which is more resilient to code
changes.
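
For context, the more resilient pattern polls for the condition the test actually cares about rather than waiting a fixed amount of time. A minimal sketch using `waitUntil` from `@ember/test-helpers` is below; the component invocation, argument, and `data-test-*` selector are placeholders, not the real test code.

```js
import { module, test } from 'qunit';
import { setupRenderingTest } from 'ember-qunit';
import { find, render, waitUntil } from '@ember/test-helpers';
import { hbs } from 'ember-cli-htmlbars';

module('Integration | Component | stepper-input', function (hooks) {
  setupRenderingTest(hooks);

  test('the increment button becomes enabled once the deployment settles', async function (assert) {
    await render(hbs`<StepperInput @value={{1}} />`);

    // Poll for the state the test actually cares about instead of sleeping
    // for a fixed interval, so the test survives timing changes in the code.
    await waitUntil(
      () => {
        const button = find('[data-test-stepper-increment]');
        return button && !button.disabled;
      },
      { timeout: 5000, timeoutMessage: 'increment button never became enabled' }
    );

    assert.ok(true, 'increment button is enabled');
  });
});
```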
- Changed the boilerplate intro copy to match the messaging in the approved 0.12 announcement copy launching next Monday
- Added a Virtual Talks section with YouTube links and year timestamps for each of this year's talks
- Updated the ordering of the Who Uses Nomad section to align with the Nomad GitHub README
- Added new customer talks, such as Cloudflare, with year timestamps for each
- Removed the outdated Community Tools and Integrations section
Adding keys tells Ember to rerender matching entries instead of
destroying and recreating them.
Without this key, every time the allocation collection changes, every
allocation row gets destroyed and recreated.
This happens a lot, since each allocation needs to be reloaded, which
dirties the collection.
Since allocation rows fetch stats on init, each of these many
renders results in a stats request.
By using key and rerendering matching records, this all goes away: since
the rows aren't being destroyed and recreated, the init stats request
isn't made over and over.
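
As a minimal sketch, the fix amounts to giving `{{#each}}` a stable key; the property name and row component below are placeholders, not the actual template:

```handlebars
{{!-- Without key="id", a change to the collection tears down and rebuilds
      every row; with it, Ember reuses the rows whose ids still match. --}}
{{#each this.sortedAllocations key="id" as |allocation|}}
  <AllocationRow @allocation={{allocation}} />
{{/each}}
```

Because rows are reused, their init-time stats request only fires when a genuinely new allocation shows up.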
This introduces a DataCaches service so recently updated collections don't need
to be requeried if they were fetched within the last minute, or based on the
current route. For now, it only searches jobs and nodes. There are known bugs
that will be addressed in upcoming PRs.
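
A minimal sketch of the caching idea, assuming the service wraps the Ember Data store and only tracks how recently each collection was fetched; the route-based logic mentioned above is omitted, and the names here are illustrative rather than the actual implementation:

```js
import Service, { inject as service } from '@ember/service';

// Collections fetched within the last minute are considered fresh.
const MAX_AGE_MS = 60 * 1000;

export default class DataCachesService extends Service {
  @service store;

  // model name -> epoch ms of the last findAll for that collection
  lastFetched = new Map();

  async fetch(modelName) {
    const fetchedAt = this.lastFetched.get(modelName);

    if (fetchedAt && Date.now() - fetchedAt < MAX_AGE_MS) {
      // Fresh enough: serve what's already in the store instead of requerying.
      return this.store.peekAll(modelName);
    }

    const records = await this.store.findAll(modelName);
    this.lastFetched.set(modelName, Date.now());
    return records;
  }
}
```

Search would then go through something like `this.dataCaches.fetch('job')` and `this.dataCaches.fetch('node')` instead of hitting the adapter directly on every query.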
In #8209 we fixed the `max_parallel` stanza for multiregion by introducing the
`IsMultiregionStarter` check, but didn't apply it to the earlier place it's
required. The result is that deployments start but don't place allocations.
If `max_parallel` is not set, all regions should begin in a `running` state
rather than a `pending` state. Otherwise, only the first region is set to
`running`, and the remaining regions are set to `running` once it enters
`blocked`. That behavior is technically correct in that we have at most
`max_parallel` regions running, but it's definitely not what a user expects.
In multiregion deployments when ACLs are enabled, the deploymentwatcher needs
an appropriately scoped ACL token with the same `submit-job` rights as the
user who submitted the job. The token will already be replicated, so store the
accessor ID so that it can be retrieved by the leader.