ember-cli-storybook and storybook itself have progressed to the point
where the DIY configs aren't necessary. It's all swept under the
`framework: '@storybook/ember'` config in main.js. Yay!
It's unfortunate having to point to a hash for ember-cli-storybook, but
there hasn't been a release since the environment PR merged. At least
this is better than pointing at a fork?
Stories that used named blocks wouldn't render the named blocks.
Evidently this was due to using a customized template renderer that
became incompatible when Ember was upgraded.
* Fix DevicesSets being removed when cpusets are reloaded with cgroup v2
This meant that if any allocation was created or removed, all
active DevicesSets were removed from all cgroups of all tasks.
This was most noticeable with "exec" and "raw_exec", as it meant
they no longer had access to /dev files.
* e2e: add test for verifying cgroups do not interfere with access to devices
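For context on the first item above, here is a minimal sketch of the general in-place update pattern, not Nomad's actual cpuset manager (the helper name and path layout are assumptions): rewriting only the `cpuset.cpus` interface file leaves other state attached to the same cgroup, such as device-access rules, intact, whereas removing and recreating the cgroup directory drops it.

```go
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// updateCpuset rewrites only the cpuset.cpus interface file of an existing
// cgroup v2 directory. Recreating the directory instead would drop any other
// state attached to that cgroup, such as device-access rules.
// Hypothetical helper for illustration only.
func updateCpuset(cgroupDir, cpus string) error {
	f := filepath.Join(cgroupDir, "cpuset.cpus")
	if err := os.WriteFile(f, []byte(cpus), 0o644); err != nil {
		return fmt.Errorf("failed to update %s: %w", f, err)
	}
	return nil
}
```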
---------
Co-authored-by: Seth Hoenig <shoenig@duck.com>
This seems to finish in about 20 minutes, or run for 6+ hours until hitting
a default timeout. Set a 30-minute timeout so we aren't wasting time
and runners.
This changeset adds the node pool as a label anywhere we're already emitting
labels with additional client information, such as node class or ID.
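As a sketch of the pattern, using the go-metrics labels that Nomad's telemetry is built on; the metric key and label names here are illustrative, not the exact keys Nomad emits:

```go
package main

import (
	metrics "github.com/armon/go-metrics"
)

// emitClientGauge emits a client gauge with the identifying labels already in
// use (node ID, node class) plus the new node_pool label.
// The metric key and label names are illustrative.
func emitClientGauge(nodeID, nodeClass, nodePool string, val float32) {
	labels := []metrics.Label{
		{Name: "node_id", Value: nodeID},
		{Name: "node_class", Value: nodeClass},
		{Name: "node_pool", Value: nodePool}, // newly added label
	}
	metrics.SetGaugeWithLabels([]string{"client", "allocated", "cpu"}, val, labels)
}
```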
This changeset includes some fixes to documentation discovered while working on
node pools, but we didn't want to include in the node pool PRs so they can get
backported easily:
* namespace apply/delete commands are forwarded to the authoritative region
* deleting a namespace requires there are no non-terminal jobs in any of the
federated regions
* fixed a typo in the name of the `nomad.client.allocated.disk` metric
* Tooltip on individual allocs in the panel
* Isolate allocation cells to their own component
* Tipsy trigger
* Aria label for failed-or-lost tooltips
* Buildfix
* Try adding percy exec back to exam run
Provide a no-op implementation of the drivers.DriverNetworkManager
interface to be used by systems that don't support network isolation and to
prevent panics where a network manager is expected.
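A minimal sketch of the no-op pattern, using a simplified stand-in for the interface (the real drivers.DriverNetworkManager method signatures take more arguments):

```go
package main

// NetworkIsolationSpec stands in for Nomad's network isolation spec type.
type NetworkIsolationSpec struct{}

// DriverNetworkManager is a simplified stand-in for the real interface.
type DriverNetworkManager interface {
	CreateNetwork(allocID string) (*NetworkIsolationSpec, bool, error)
	DestroyNetwork(allocID string, spec *NetworkIsolationSpec) error
}

// noopNetworkManager satisfies the interface on platforms without network
// isolation, so callers that expect a manager never dereference a nil value.
type noopNetworkManager struct{}

func (noopNetworkManager) CreateNetwork(string) (*NetworkIsolationSpec, bool, error) {
	return nil, false, nil
}

func (noopNetworkManager) DestroyNetwork(string, *NetworkIsolationSpec) error {
	return nil
}
```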
When registering a node with a new node pool in a non-authoritative
region we can't create the node pool because this new pool will not be
replicated to other regions.
This commit modifies the node registration logic to only allow automatic
node pool creation in the authoritative region.
In non-authoritative regions, the client is registered, but the node
pool is not created. The client is kept in the `initializing` status until
its node pool is created in the authoritative region and replicated to
the client's region.
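A rough sketch of that gate; the names and statuses here are illustrative, not the actual Node.Register code:

```go
package main

type Node struct {
	NodePool string
	Status   string
}

// createNodePool writes the pool to state so it replicates outward (elided).
func createNodePool(name string) {}

// registerNode sketches the gate: only the authoritative region creates a
// missing node pool; elsewhere the client is registered but held in the
// initializing status until the pool replicates in.
func registerNode(isAuthoritative, poolExists bool, node *Node) {
	switch {
	case poolExists:
		node.Status = "ready"
	case isAuthoritative:
		createNodePool(node.NodePool) // the new pool replicates to other regions
		node.Status = "ready"
	default:
		// Creating the pool here would leave it unreplicated, so wait for it
		// to arrive from the authoritative region instead.
		node.Status = "initializing"
	}
}
```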
We don't want to delete node pools that have nodes or non-terminal jobs. Add a
check in the `DeleteNodePools` RPC that verifies this both locally and in
federated regions, similar to how we check that it's safe to delete namespaces.
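A sketch of the shape of that check; the helper names are illustrative, and the state lookups and cross-region RPCs are elided:

```go
package main

import "fmt"

// The lookups below are elided; in practice they query local state or make
// cross-region RPCs.
func hasNodes(pool string) bool { return false }
func hasNonTerminalJobs(pool string) bool { return false }
func remoteHasNonTerminalJobs(region, pool string) bool { return false }

// checkPoolDeletable returns an error if the node pool is still in use in the
// local region or in any federated region.
func checkPoolDeletable(pool string, federatedRegions []string) error {
	if hasNodes(pool) || hasNonTerminalJobs(pool) {
		return fmt.Errorf("node pool %q is in use in the local region", pool)
	}
	for _, region := range federatedRegions {
		if remoteHasNonTerminalJobs(region, pool) {
			return fmt.Errorf("node pool %q has non-terminal jobs in region %q", pool, region)
		}
	}
	return nil
}
```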
Upserts and deletes of node pools are forwarded to the authoritative region,
just like we do for namespaces, quotas, ACL policies, etc. Replicate node pools
from the authoritative region.
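A sketch of the forwarding side; the types and the forwarding helper are stand-ins for Nomad's actual RPC plumbing:

```go
package main

type Config struct {
	Region              string
	AuthoritativeRegion string
}

type Server struct{ config Config }

type UpsertNodePoolsRequest struct{ Region string }
type GenericResponse struct{}

// forwardRegion stands in for the RPC forwarding helper (elided).
func (s *Server) forwardRegion(region, method string, args, reply any) error { return nil }

// UpsertNodePools sketches the pattern used for globally replicated objects:
// writes always land in the authoritative region first, and other regions pick
// them up through replication.
func (s *Server) UpsertNodePools(args *UpsertNodePoolsRequest, reply *GenericResponse) error {
	if s.config.Region != s.config.AuthoritativeRegion {
		args.Region = s.config.AuthoritativeRegion
		return s.forwardRegion(s.config.AuthoritativeRegion, "NodePool.UpsertNodePools", args, reply)
	}
	// apply the write locally via Raft (elided)
	return nil
}
```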
Whenever we write a Raft log entry for node pools, we need to first make sure
that all servers can safely apply the log without panicking. Gate upsert and
delete RPCs on all servers being upgraded to the minimum version.
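A sketch of the gate on the write path; the helper and the minimum version constant are assumptions for illustration:

```go
package main

import (
	"fmt"

	version "github.com/hashicorp/go-version"
)

// serversMeetMinimumVersion stands in for the helper that checks every server
// member's advertised build against a minimum version (elided here).
func serversMeetMinimumVersion(min *version.Version) bool { return true }

// checkNodePoolWriteAllowed gates node pool upsert and delete RPCs so the Raft
// log entry is only written once every server in the region can apply it.
func checkNodePoolWriteAllowed() error {
	minNodePoolVersion := version.Must(version.NewVersion("1.6.0")) // illustrative
	if !serversMeetMinimumVersion(minNodePoolVersion) {
		return fmt.Errorf("all servers must be running version %s or later to write node pools", minNodePoolVersion)
	}
	return nil
}
```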
If the authoritative region has been upgraded to a version of Nomad that has new
replicated objects (such as ACL Auth Methods, ACL Binding Rules, etc.), the
non-authoritative regions will start replicating those objects as soon as their
leader is upgraded. If a server in the non-authoritative region is upgraded and
then becomes the leader before all the other servers in the region have been
upgraded, then it will attempt to write a Raft log entry that the followers
don't understand. The followers will then panic.
Add the same minimum version checks that we do for RPC writes to the leader's
replication loop.
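A sketch of where that check lands in the replication loop; the names, polling interval, and version are illustrative:

```go
package main

import (
	"time"

	version "github.com/hashicorp/go-version"
)

var minNodePoolVersion = version.Must(version.NewVersion("1.6.0")) // illustrative

// serversMeetMinimumVersion is the same stand-in helper used on the RPC path.
func serversMeetMinimumVersion(min *version.Version) bool { return true }

// replicateNodePools sketches the leader's replication loop with the added
// guard: replication of the new table is deferred until every server in the
// local region is upgraded, so followers never see Raft entries they cannot
// apply.
func replicateNodePools(stopCh <-chan struct{}) {
	for {
		select {
		case <-stopCh:
			return
		case <-time.After(30 * time.Second):
		}
		if !serversMeetMinimumVersion(minNodePoolVersion) {
			continue // wait for the rest of the region to finish upgrading
		}
		// diff against the authoritative region and apply changes via Raft (elided)
	}
}
```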
This PR fixes a bug where the docker network pause container would not be
stopped and removed in the case where a node is restarted, the alloc is
moved to another node, and the node comes back up. See the issue below for
full repro conditions.
Basically in the DestroyNetwork PostRun hook we would depend on the
NetworkIsolationSpec field not being nil - which is only the case
if the Client stays alive all the way from network creation to network
teardown. If the node is rebooted we lose that state and previously
would not be able to find the pause container to remove. Now, we manually
find the pause container by scanning the existing containers and looking for
the associated allocID.
Fixes #17299
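A sketch of the scan, assuming the upstream Docker Go SDK (these option types moved under the container package in newer SDK releases) and assuming the pause container's name embeds the alloc ID; neither detail is taken from the actual driver code:

```go
package main

import (
	"context"
	"strings"

	"github.com/docker/docker/api/types"
	"github.com/docker/docker/client"
)

// removePauseContainer finds the pause (infra) container for an allocation by
// scanning all containers for the alloc ID in the container name, instead of
// relying on in-memory NetworkIsolationSpec state that is lost on reboot.
func removePauseContainer(ctx context.Context, allocID string) error {
	cli, err := client.NewClientWithOpts(client.FromEnv)
	if err != nil {
		return err
	}
	containers, err := cli.ContainerList(ctx, types.ContainerListOptions{All: true})
	if err != nil {
		return err
	}
	for _, c := range containers {
		for _, name := range c.Names {
			if strings.Contains(name, allocID) {
				return cli.ContainerRemove(ctx, c.ID, types.ContainerRemoveOptions{Force: true})
			}
		}
	}
	return nil // nothing left to clean up
}
```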
* Exam to parallelize tests
* Logging to try to solve test flakiness
* Logging in another failure
* Hardening for one test and snapshot for another
* Explicitly set the first one as the servicedAlloc instead of randomly picking
* A wild CircleCI test failure appears
* de-log
During shutdown of a client with drain_on_shutdown there is a race between
the Client tearing down the cgroup and the task's cpuset manager cleaning up
the cgroup. During the path traversal, skip anything we cannot read, which
avoids dereferencing the nil DirEntry we currently hit.
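A sketch of the traversal pattern with the standard library walker; the function name and what gets collected are illustrative, not the cpuset manager's actual code:

```go
package main

import (
	"io/fs"
	"path/filepath"
)

// collectCgroupDirs walks the cgroup tree and skips anything it cannot read:
// during shutdown, entries can disappear between listing and statting, and the
// callback can then be invoked with a nil fs.DirEntry and a non-nil error.
func collectCgroupDirs(root string) ([]string, error) {
	var dirs []string
	err := filepath.WalkDir(root, func(path string, d fs.DirEntry, err error) error {
		if err != nil {
			// Entry vanished or is unreadable; skip it rather than
			// dereferencing a possibly nil DirEntry below.
			return nil
		}
		if d.IsDir() {
			dirs = append(dirs, path)
		}
		return nil
	})
	return dirs, err
}
```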
Go released a security update to fix build-time code injection and execution via
CGO. This doesn't impact already-released versions of Nomad, just the build
toolchain, so we won't be releasing a Nomad security update to go with it.
Implement scheduler support for node pool:
* When a scheduler is invoked, we get a set of the ready nodes in the DCs that
are allowed for that job. Extend that filter to also include the node pool (see
the sketch after this list).
* Ensure that changes to a job's node pool are picked up as destructive
allocation updates.
* Add `NodesInPool` as a metric to all reporting done by the scheduler.
* Add the node-in-pool filter to the `Node.Register` RPC so that we don't
generate spurious evals for nodes in the wrong pool.
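A sketch of the extended ready-node filter; the types and helper name are illustrative, not the scheduler's actual code:

```go
package main

type Node struct {
	Datacenter string
	NodePool   string
}

type Job struct {
	Datacenters []string
	NodePool    string
}

// readyNodesForJob keeps only nodes that are in one of the job's datacenters
// and, with this change, also in the job's node pool.
func readyNodesForJob(nodes []*Node, job *Job) []*Node {
	var out []*Node
	for _, n := range nodes {
		if n.NodePool != job.NodePool {
			continue // wrong pool: filtered out before datacenter matching
		}
		for _, dc := range job.Datacenters {
			if n.Datacenter == dc {
				out = append(out, n)
				break
			}
		}
	}
	return out
}
```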
Implements the HTTP API associated with the `NodePool.ListJobs` RPC, including
the `api` package for the public API and documentation.
Update the `NodePool.ListJobs` RPC to fix the missing handling of the special
"all" pool.