This is intended to be used like `yarn exam:parallel -- more --options`.
This way, a split and partition can be provided by CI without CI also
needing to deal with Percy details.
If the dynamic port range for a node is set so that the min is equal to the max,
there's only one port available and this passes config validation. But the
scheduler panics when it tries to pick a random port. Only add the randomness
when there's more than one to pick from.
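Roughly, the guard looks like the following sketch (function and variable
names are illustrative, not the scheduler's actual code):

```go
package main

import (
	"fmt"
	"math/rand"
)

// pickDynamicPort picks a port from the inclusive range [min, max].
// A minimal sketch of the guard described above.
func pickDynamicPort(min, max int) int {
	if min == max {
		// Only one port is available, so there is nothing to randomize;
		// asking for a random offset over an empty span would panic.
		return min
	}
	return min + rand.Intn(max-min+1)
}

func main() {
	fmt.Println(pickDynamicPort(20000, 20000)) // single-port range: no panic
	fmt.Println(pickDynamicPort(20000, 32000)) // random port in range
}
```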
Adds a test for this behavior and adjusts the commentary on a couple of
the existing tests that made it seem like this case was already covered
if you didn't look too closely.
Fixes: #17585
The Windows Docker install script stopped working. After trying various
things to fix the script, I opted instead for a base image that comes
with Docker already installed.
Error output during build was:
Installing Docker.
WARNING: Cannot find path 'C:\Users\Administrator\AppData\Local\Temp\DockerMsftProvider\DockerDefault_DockerSearchIndex.json' because it does not exist.
WARNING: Cannot bind argument to parameter 'downloadURL' because it is an empty string.
WARNING: The property 'AbsoluteUri' cannot be found on this object. Verify that the property exists.
WARNING: The property 'RequestMessage' cannot be found on this object. Verify that the property exists.
Failed to install Docker.
Install-Package : No match was found for the specified search criteria and package name 'docker'.
Some of the paths ignored by `test-core.yaml` need to be checked by
`make check`. The `checks.yaml` workflow runs on these paths and can also
be used as a reusable workflow.
* jobspec: rename node pool `scheduler_configuration`
In HCL specifications we usually call configuration blocks `config`
instead of `configuration`.
* np: add memory oversubscription config
* np: make scheduler config ENT
Node v18 uses a newer version of OpenSSL than webpack 4 is compatible
with. This is the quickest fix.
The ideal fix would be to upgrade webpack to v5, but the state of Ember,
Storybook, and JS dependency management in general makes this not an
option.
If the `nomad` CLI is used to access a cluster running a version that
does not include node pools, the command will panic on a `nil`
dereference when trying to resolve the job's node pool.
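A minimal sketch of the guard, assuming a trimmed-down `NodePool` type
(not the actual CLI code): a server that predates node pools returns no
pool at all, so check before dereferencing.

```go
package main

import "fmt"

// NodePool is a stand-in for the API type; only the field used in this
// sketch is included.
type NodePool struct {
	Name string
}

// poolName resolves a display name without assuming the server returned
// a pool. Older servers have no node pool support and return nothing.
func poolName(pool *NodePool) string {
	if pool == nil {
		// Pre-node-pool server: nothing to resolve, and no panic.
		return ""
	}
	return pool.Name
}

func main() {
	fmt.Println(poolName(nil)) // safe against an older cluster
	fmt.Println(poolName(&NodePool{Name: "default"}))
}
```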
Add structs and fields to support the Nomad Pools Governance Enterprise
feature of controlling node pool access via namespaces.
Nomad Enterprise allows users to specify a default node pool to be used
by jobs that don't specify one. In order to accomplish this, it's
necessary to distinguish between a job that explicitly uses the
`default` node pool and one that did not specify any.
If the `default` node pool is set during job canonicalization it's
impossible to make this distinction, so this commit allows a job to have
an empty node pool value during registration and sets it to `default` in
the admission controller mutator.
In order to guarantee state consistency, the state store validates that
the job's node pool is set and exists before inserting the job.
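A minimal sketch of the mutator behavior under these assumptions (the
`Job` type and function name are illustrative, not the actual admission
controller code):

```go
package main

import "fmt"

// Job is a trimmed-down stand-in for the job struct; only the field
// relevant to this sketch is included.
type Job struct {
	NodePool string
}

// defaultNodePoolMutator only fills in jobs that specified nothing.
// Because canonicalization no longer sets the pool, an empty value here
// still means "the user didn't choose", while an explicit "default" is
// distinguishable and left untouched.
func defaultNodePoolMutator(job *Job) {
	if job.NodePool == "" {
		job.NodePool = "default"
	}
}

func main() {
	implicit := &Job{}
	defaultNodePoolMutator(implicit)
	fmt.Println(implicit.NodePool) // "default", filled in by the mutator

	explicit := &Job{NodePool: "gpu"}
	defaultNodePoolMutator(explicit)
	fmt.Println(explicit.NodePool) // "gpu", left as-is
}
```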
Although most of the time jobs will be assigned to a single node pool, users may
want to set the node pool to "all" and then constrain the job to a subset of node
pools. Add support for setting a constraint like `${node.pool}`.
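As a toy illustration of the idea (the matching logic is illustrative,
not the scheduler's feasibility checker): `${node.pool}` interpolates to
the candidate node's pool name, which a constraint can then filter, for
example with a regular expression.

```go
package main

import (
	"fmt"
	"regexp"
)

// nodeEligible sketches how a constraint over "${node.pool}" narrows a
// job running in the "all" pool down to a subset of pools: the attribute
// resolves to the candidate node's pool name before matching.
func nodeEligible(nodePool string, constraint *regexp.Regexp) bool {
	return constraint.MatchString(nodePool)
}

func main() {
	gpuPools := regexp.MustCompile(`^gpu-`)
	fmt.Println(nodeEligible("gpu-west", gpuPools)) // true
	fmt.Println(nodeEligible("default", gpuPools))  // false
}
```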
Use the new style of e2e test for the podman suite ... which is all of
one test case that was being skipped. Turn the case back on, and we will
add more tests in the near future.
* e2e: cleanup podman installation in jammy image
The original steps were copied over from the bionic image and do a lot
of hoop jumping we no longer need.
For the moment, just hard-code installing v0.4.2 of the driver, but I
may follow up and modify hc-install to support installing @latest like
go itself.
* use releases for hc-install
ember-cli-storybook and storybook itself have progressed to the point
where the DIY configs aren't necessary. It's all swept under the
`framework: '@storybook/ember'` config in main.js. Yay!
It's unfortunate having to point to a hash for ember-cli-storybook, but
there hasn't been a release since the environment PR merged. At least
this is better than pointing at a fork?
Stories that used named blocks wouldn't render the named blocks.
Evidently this was due to using a customized template renderer that
became incompatible when Ember was upgraded.
* Fix DevicesSets being removed when cpusets are reloaded with cgroup v2
This meant that if any allocation was created or removed, all
active DevicesSets were removed from all cgroups of all tasks.
This was most noticeable with "exec" and "raw_exec", as it meant
they no longer had access to /dev files.
* e2e: add test for verifying cgroups do not interfere with access to devices
---------
Co-authored-by: Seth Hoenig <shoenig@duck.com>
This seems to finish in about 20 minutes, or else run for 6+ hours until
hitting the default timeout. Set the timeout to 30 minutes so we aren't
wasting time and runners.
This changeset adds the node pool as a label anywhere we're already emitting
labels with additional information about the client, such as node class or ID.
This changeset includes some fixes to documentation discovered while working on
node pools that we didn't want to include in the node pool PRs so they can be
backported easily:
* namespace apply/delete commands are forwarded to the authoritative region
* deleting a namespace requires that there are no non-terminal jobs in any of
the federated regions
* fixed a typo in the name of the `nomad.client.allocated.disk` metric
* Tooltip on individual allocs in the panel
* Isolate allocation cells to their own component
* Tipsy trigger
* Aria label for failed-or-lost tooltips
* Buildfix
* Try adding percy exec back to exam run
Provide a no-op implementation of the `drivers.DriverNetworkManager`
interface to be used by systems that don't support network isolation,
preventing panics where a network manager is expected.
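A minimal sketch of the shape of such a no-op (the interface here is
trimmed down and illustrative; the real `drivers.DriverNetworkManager`
has different signatures):

```go
package main

// NetworkIsolationSpec is a stand-in for the driver plugin type.
type NetworkIsolationSpec struct{}

// NetworkManager is a trimmed-down, illustrative version of the
// network manager interface.
type NetworkManager interface {
	CreateNetwork(allocID string) (*NetworkIsolationSpec, error)
	DestroyNetwork(allocID string, spec *NetworkIsolationSpec) error
}

// noopNetworkManager satisfies the interface without doing anything, so
// platforms without network isolation get a valid manager instead of nil.
type noopNetworkManager struct{}

func (noopNetworkManager) CreateNetwork(allocID string) (*NetworkIsolationSpec, error) {
	return nil, nil // no isolation to set up
}

func (noopNetworkManager) DestroyNetwork(allocID string, spec *NetworkIsolationSpec) error {
	return nil // nothing was created, so nothing to tear down
}

func main() {
	var nm NetworkManager = noopNetworkManager{}
	_, _ = nm.CreateNetwork("alloc-1")
	_ = nm.DestroyNetwork("alloc-1", nil)
}
```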
When registering a node with a new node pool in a non-authoritative
region we can't create the node pool because this new pool will not be
replicated to other regions.
This commit modifies the node registration logic to only allow automatic
node pool creation in the authoritative region.
In non-authoritative regions, the client is registered, but the node
pool is not created. The client is kept in the `initializing` status
until its node pool is created in the authoritative region and
replicated to the client's region.
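A minimal sketch of the flow under these assumptions (types and status
strings are illustrative, not the actual server code):

```go
package main

import "fmt"

// Node is a trimmed-down stand-in for the server-side node struct.
type Node struct {
	NodePool string
	Status   string
}

// registerNode sketches the decision described above: only the
// authoritative region may create a missing pool; elsewhere the node
// waits in "initializing" until replication delivers the pool.
func registerNode(node *Node, poolExists, authoritativeRegion bool) {
	switch {
	case poolExists:
		node.Status = "ready"
	case authoritativeRegion:
		// Safe to create here: this region is the source of truth
		// that other regions replicate from.
		createNodePool(node.NodePool)
		node.Status = "ready"
	default:
		// Creating the pool here would never replicate outward, so
		// park the node until the pool arrives via replication.
		node.Status = "initializing"
	}
}

func createNodePool(name string) { fmt.Println("created pool", name) }

func main() {
	n := &Node{NodePool: "edge"}
	registerNode(n, false, false)
	fmt.Println(n.Status) // initializing until "edge" is replicated
}
```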