Commit Graph

Phil Renaud 9a5d67d475
[ui] Keyboard shortcuts to switch regions (#17169)
* Regions keynav

* Don't show if you only have a single region (global by default)
2023-05-12 11:46:00 -04:00
Tim Gross 9ed75e1f72
client: de-duplicate alloc updates and gate during restore (#17074)
When client nodes are restarted, all allocations that have been scheduled on the
node have their modify index updated, including terminal allocations. There are
several contributing factors:

* The `allocSync` method that updates the servers isn't gated on first contact
  with the servers. This means that if a server updates the desired state while
  the client is down, the `allocSync` races with the `Node.ClientGetAlloc`
  RPC. This will typically result in the client updating the server with "running"
  and then immediately thereafter "complete".

* The `allocSync` method unconditionally sends the `Node.UpdateAlloc` RPC even
  if it's possible to assert that the server has definitely seen the client
  state. The allocrunner may queue up updates even if we gate sending them. So
  then we end up with a race between the allocrunner updating its internal state
  to overwrite the previous update and `allocSync` sending the bogus or duplicate
  update.

This changeset adds tracking of server-acknowledged state to the
allocrunner. This state gets checked in the `allocSync` before adding the update
to the batch, and updated when `Node.UpdateAlloc` returns successfully. To
implement this we need to be able to equality-check the updates against the last
acknowledged state. We also need to add the last acknowledged state to the
client state DB, otherwise we'd drop unacknowledged updates across restarts.
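
A minimal sketch of this acknowledged-state gating, with hypothetical types and names (the real allocrunner and `allocSync` are far more involved):

```go
package main

import "fmt"

// AllocState stands in for the fields of an alloc update the server cares
// about; the real struct is much larger (hypothetical type).
type AllocState struct {
	ID           string
	ClientStatus string
}

// Runner tracks the last state the server has acknowledged for one alloc.
type Runner struct {
	current AllocState
	acked   *AllocState // nil until Node.UpdateAlloc succeeds once
}

// PendingUpdate returns the update to send, or nil when the server has
// already acknowledged an identical state; this is the de-duplication
// step that gates what allocSync adds to its batch.
func (r *Runner) PendingUpdate() *AllocState {
	if r.acked != nil && *r.acked == r.current {
		return nil
	}
	u := r.current
	return &u
}

// Acknowledge records a successful Node.UpdateAlloc; per the commit, this
// state is also persisted to the client state DB so that unacknowledged
// updates survive client restarts.
func (r *Runner) Acknowledge(s AllocState) { r.acked = &s }

func main() {
	r := &Runner{current: AllocState{ID: "a1", ClientStatus: "running"}}
	if u := r.PendingUpdate(); u != nil {
		fmt.Println("send:", u.ClientStatus) // first update is sent
		r.Acknowledge(*u)
	}
	if r.PendingUpdate() == nil {
		fmt.Println("duplicate suppressed") // identical state is gated
	}
}
```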

The client restart test has been expanded to cover a variety of allocation
states, including allocs stopped before shutdown, allocs stopped by the server
while the client is down, and allocs that have been completely GC'd on the
server while the client is down. I've also bench tested scenarios where the task
workload is killed while the client is down, resulting in a failed restore.

Fixes #16381
2023-05-11 09:05:24 -04:00
Daniel Bennett a7ed6f5c53
full task cleanup when alloc prerun hook fails (#17104)
to avoid leaking task resources (e.g. containers,
iptables rules) if the allocRunner prerun hook fails
during restore on client restart.

now if prerun fails, TaskRunner.MarkFailedKill()
will only emit an event, mark the task as failed,
and cancel the tr's killCtx, so then ar.runTasks()
-> tr.Run() can take care of the actual cleanup.

removed from (formerly) tr.MarkFailedDead(),
now handled by tr.Run():
 * set task state as dead
 * save task runner local state
 * task stop hooks

also done in tr.Run() now that it's not skipped:
 * handleKill() to kill tasks while respecting
   their shutdown delay, and retrying as needed
   * also includes task preKill hooks
 * clearDriverHandle() to destroy the task
   and associated resources
 * task exited hooks
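
A rough sketch of that division of labor, with stand-in types rather than the real TaskRunner:

```go
package main

import (
	"context"
	"fmt"
)

// taskRunner is a hypothetical stand-in; the real TaskRunner carries far
// more state.
type taskRunner struct {
	failed   bool
	events   []string
	killCtx  context.Context
	killTask context.CancelFunc
}

// MarkFailedKill now only emits an event, marks the task failed, and
// cancels killCtx; all actual cleanup is deferred to Run().
func (tr *taskRunner) MarkFailedKill(reason string) {
	tr.events = append(tr.events, "Task failed: "+reason)
	tr.failed = true
	tr.killTask()
}

// Run performs the cleanup it previously skipped: the kill path with its
// shutdown delay and preKill hooks, destroying the driver handle, and the
// stop/exited hooks.
func (tr *taskRunner) Run() {
	<-tr.killCtx.Done() // the canceled killCtx routes us into the kill path
	fmt.Println("handleKill: shutdown delay honored, preKill hooks run")
	fmt.Println("clearDriverHandle: task and resources destroyed")
	fmt.Println("stop/exited hooks; state=dead, failed =", tr.failed)
}

func main() {
	ctx, cancel := context.WithCancel(context.Background())
	tr := &taskRunner{killCtx: ctx, killTask: cancel}
	tr.MarkFailedKill("alloc prerun hook failed")
	tr.Run()
}
```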
2023-05-08 13:17:10 -05:00
stswidwinski 9c1c2cb5d2
Correct the status description and modify time of canceled evals. (#17071)
Fix for #17070. Corrected the status description and modify time of evals that are canceled because another eval completed in the meantime.
2023-05-08 08:50:36 -04:00
Seth Hoenig fff2eec625
connect: use heuristic to detect sidecar task driver (#17065)
* connect: use heuristic to detect sidecar task driver

This PR adds a heuristic to detect whether to use the podman task driver
for the connect sidecar proxy. The podman driver will be selected if there
is at least one task in the task group configured to use podman, and there
are zero tasks in the group configured to use docker. In all other cases
the task driver defaults to docker.
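
The heuristic is simple enough to sketch directly; `sidecarDriver` below is a hypothetical helper, not the actual hook code:

```go
package main

import "fmt"

// sidecarDriver picks the task driver for the connect sidecar proxy using
// the heuristic described above: podman if at least one task in the group
// uses podman and none use docker; docker in all other cases.
func sidecarDriver(taskDrivers []string) string {
	podman, docker := 0, 0
	for _, d := range taskDrivers {
		switch d {
		case "podman":
			podman++
		case "docker":
			docker++
		}
	}
	if podman >= 1 && docker == 0 {
		return "podman"
	}
	return "docker"
}

func main() {
	fmt.Println(sidecarDriver([]string{"podman"}))           // podman
	fmt.Println(sidecarDriver([]string{"podman", "docker"})) // docker
	fmt.Println(sidecarDriver([]string{"exec"}))             // docker
}
```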

After this change, we should be able to run typical Connect jobspecs
(e.g. nomad job init [-short] -connect) on Clusters configured with the
podman task driver, without modification to the job files.

Closes #17042

* golf: cleanup driver detection logic
2023-05-05 10:19:30 -05:00
James Rasell 6ec4a69f47
scale: fixed a bug where evals could be created with wrong type. (#17092)
The job scale RPC endpoint hard-coded the eval creation to use the
type of service. This meant scaling events triggered on jobs of
type batch would create evaluations with the wrong type, which
does not seem to cause any problems, just confusion when
correlating the two.
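
The shape of the fix, sketched with hypothetical Go types:

```go
package main

import "fmt"

type Job struct{ Type string } // "service", "batch", "system", ...

type Evaluation struct{ Type string }

// newScaleEval shows the shape of the fix: derive the eval type from the
// job being scaled rather than hard-coding "service". Hypothetical types.
func newScaleEval(job *Job) *Evaluation {
	return &Evaluation{Type: job.Type} // was: Type: "service"
}

func main() {
	fmt.Println(newScaleEval(&Job{Type: "batch"}).Type) // batch
}
```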
2023-05-05 14:46:10 +01:00
Tim Gross 17bd930ca9
logs: fix missing allocation logs after update to Nomad 1.5.4 (#17087)
When the server restarts for the upgrade, it loads the `structs.Job` from the
Raft snapshot/logs. The jobspec has long since been parsed, so none of the
guards around the default value are in play. The empty field value for `Enabled`
is the zero value, which is false.

This doesn't impact any running allocation because we don't replace running
allocations when either the client or server restart. But as soon as any
allocation gets rescheduled (ex. you drain all your clients during upgrades),
it'll be using the `structs.Job` that the server has, which has `Enabled =
false`, and logs will not be collected.

This changeset fixes the bug by adding a new field `Disabled` which defaults to
false (so that the zero value works), and deprecates the old field.
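
A small sketch of the zero-value pitfall and why inverting the field avoids it (hypothetical struct, not the actual jobspec type; JSON stands in for the real snapshot encoding purely for illustration):

```go
package main

import (
	"encoding/json"
	"fmt"
)

// LogConfig illustrates the zero-value problem: a bool field named Enabled
// decodes to false when absent, so jobs loaded from old snapshots silently
// turn the feature off. Inverting the field so the zero value means "on"
// (Disabled=false) sidesteps the issue.
type LogConfig struct {
	Enabled  bool `json:"Enabled"`  // deprecated: zero value wrongly disables
	Disabled bool `json:"Disabled"` // zero value keeps logs enabled
}

func main() {
	var c LogConfig
	_ = json.Unmarshal([]byte(`{}`), &c) // fields missing from old snapshot
	fmt.Println("Enabled:", c.Enabled)   // false -> logs wrongly off
	fmt.Println("Disabled:", c.Disabled) // false -> logs stay on
}
```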

Fixes #17076
2023-05-04 16:01:18 -04:00
Charlie Voiselle 8f6fa14e9e
[deps] Update consul-template to v0.31.0 (#16908)
* Update consul-template to v0.31.0
* Add changelog
2023-05-03 09:15:17 -04:00
Michael Schurter f8f9e91b8a
build: upgrade from go 1.20.3 to 1.20.4 (#17056)
Includes CVE fixes that do *not* impact Nomad:

https://groups.google.com/g/golang-announce/c/MEb0UyuSMsU
2023-05-02 13:09:11 -07:00
Seth Hoenig e9fec4ebc8
connect: remove unusable path for fallback envoy image names (#17044)
This PR does some cleanup of an old code path for versions of Consul that
did not support reporting the supported versions of Envoy in its API. Those
Consul versions have been unsupported for years at this point, and the fallback
version of Envoy hasn't been supported by any version of Consul for almost
as long. Remove this code path, as it is no longer useful.
2023-05-02 09:48:44 -05:00
Seth Hoenig e8d53ea30b
connect: use explicit docker.io prefix in default envoy image names (#17045)
This PR modifies references to the envoyproxy/envoy docker image to
explicitly include the docker.io prefix. This does not affect existing
users, but makes things easier for Podman users, who otherwise need to
specify the full name because Podman does not default to docker.io
2023-05-02 09:27:48 -05:00
Seth Hoenig 86f6a38867
connect: do not restrict auto envoy version to docker task driver (#17041)
This PR updates the envoy_bootstrap_hook to no longer disable itself if
the task driver in use is not docker. In other words, make it work for
podman and other image based task drivers. The hook now only checks that

1. the task is a connect sidecar
2. the task.config block contains an "image" field
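
A sketch of those two checks with stand-in types; the `connect-proxy:` kind prefix matches Nomad's convention, everything else here is hypothetical:

```go
package main

import (
	"fmt"
	"strings"
)

// Task is a hypothetical stand-in for the pieces the hook inspects.
type Task struct {
	Kind   string                 // e.g. "connect-proxy:web"
	Config map[string]interface{} // task driver config block
}

// shouldBootstrap reflects the two checks: the task is a connect sidecar
// and its config carries an "image" field; no docker-only requirement.
func shouldBootstrap(t Task) bool {
	isSidecar := strings.HasPrefix(t.Kind, "connect-proxy:")
	_, hasImage := t.Config["image"]
	return isSidecar && hasImage
}

func main() {
	t := Task{
		Kind:   "connect-proxy:web",
		Config: map[string]interface{}{"image": "docker.io/envoyproxy/envoy:v1.25"},
	}
	fmt.Println(shouldBootstrap(t)) // true, under podman as well as docker
}
```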
2023-05-01 15:07:35 -05:00
Phil Renaud 5ca59aef56
Move the token JWT console log out of an iterator (#17010) 2023-04-28 13:46:10 -04:00
Seth Hoenig 5744b2cd4f
docs: add more notes about artifact breaking changes in 1.5.0 (#17005)
* changelog: note artifact breaking changes for 1.5.0

* docs: add note about environment variables to artifact job spec docs

* Update website/content/docs/job-specification/artifact.mdx

Co-authored-by: Luiz Aoqui <luiz@hashicorp.com>

---------

Co-authored-by: Luiz Aoqui <luiz@hashicorp.com>
2023-04-27 11:41:18 -05:00
Michael Schurter d3b0bbc088
deps: update go-bexpr from 0.1.11 to 0.1.12 (#16991)
Pulls in https://github.com/hashicorp/go-bexpr/pull/38

Fixes #16758
2023-04-27 09:01:42 -07:00
Phil Renaud 7f7f764c5a
[ui] Fixed: Evaluations sidebar/response not scrollable (#16960)
* Sets up a CSS grid for Evaluations sidebar

* Flex seems more sensible for this actually

* Tighten up the header margin

* Percy found a diff; the expand button wasn't showing for view logs sidebar
2023-04-27 09:49:18 -04:00
James Rasell 4d2c1403c2
scale: do not allow scaling of jobs with type system. (#16969) 2023-04-25 15:47:44 +01:00
Phil Renaud 7dbebe9a93
[ui, feature] Job Page Redesign (#16932)
* [ui] Service job status panel (#16134)

* it begins

* Hacky demo enabled

* Still very hacky but seems deece

* Floor of at least 3 must be shown

* Width from on-high

* Other statuses considered

* More sensible allocTypes listing

* Beginnings of a legend

* Total number of allocs running now maps over job.groups

* Lintfix

* base the number of slots to hold open on actual tallies, which should never exceed totalAllocs

* Versions get yer versions here

* Versions lookin like versions

* Mirage fixup

* Adds Remaining as an alloc chart status and adds historical status option

* Get tests passing again by making job status static for a sec

* Historical status panel click actions moved into their own component class

* job detail tests plz chill

* Testing if percy is fickle

* Hyper-specific on summary distribution bar identifier

* Perhaps the 2nd allocSummary item no longer exists with the more accurate afterCreate data

* UI Test eschewing the page pattern

* Bones of a new acceptance test

* Track width changes explicitly with window-resize

* testlintfix

* Alloc counting tests

* Alloc grouping test

* Alloc grouping with complex resizing

* Refined the list of showable statuses

* PR feedback addressed

* renamed allocation-row to allocation-status-row

* [ui, job status] Make panel status mode a queryParam (#16345)

* queryParam changing

* Test for QP in panel

* Adding @tracked to legacy controller

* Move the job of switching to Historical out to larger context

* integration test mock passed func

* [ui] Service job deployment status panel (#16383)

* A very fast and loose deployment panel

* Removing Unknown status from the panel

* Set up oldAllocs list in constructor, rather than as a getter/tracked var

* Small amount of template cleanup

* Refactored latest-deployment new logic back into panel.js

* Revert now-unused latest-deployment component

* margin bottom when ungrouped also

* Basic integration tests for job deployment status panel

* Updates complete alloc colour to green for new visualizations only (#16618)

* Updates complete alloc colour to green for new visualizations only

* Pale green instead of dark green for viz in general

* [ui] Job Deployment Status: History and Update Props (#16518)

* Deployment history wooooooo

* Styled deployment history

* Update Params

* lintfix

* Types and groups for updateParams

* Live-updating history

* Harden with types, error states, and pending states

* Refactor updateParams to use trigger component

* [ui] Deployment History search (#16608)

* Functioning searchbox

* Some nice animations for history items

* History search test

* Fixing up some old mirage conventions

* some a11y rule override to account for scss keyframes

* Split panel into deploying and steady components

* HandleError passed from job index

* gridified panel elements

* TotalAllocs added to deploying.js

* Width perc to px

* [ui] Splitting deployment allocs by status, health, and canary status (#16766)

* Initial attempt with lots of scratchpad work

* Style mods per UI discussion

* Fix canary overflow bug

* Don't show canary or health for steady/prev-alloc blocks

* Steady state

* Thanks Julie

* Fixes steady-state versions

* Legen, wait for it...

* Test fixes now that we have a minimum block size

* PR prep

* Shimmer effect on pending and unplaced allocs (#16801)

* Shimmer effect on pending and unplaced

* Don't show animation in the legend

* [ui, deployments] Linking allocblocks and legends to allocation / allocations index routes (#16821)

* Conditional link-to component and basic linking to allocations and allocation routes

* Job versions filter added to allocations index page

* Steady state legends link

* Legend links

* Badge count links for versions

* Fix: faded class on steady-state legend items

* version link now wont show completed ones

* Fix a11y violations with link labels

* Combining some template conditional logic

* [ui, deployments] Conversions on long nanosecond update params (#16882)

* Conversions on long nanosecond nums

* Early return in updateParamGroups comp prop

* [ui, deployments] Mirage Actively Deploying Job and Deployment Integration Tests (#16888)

* Start of deployment alloc test scaffolding

* Bit of test cleanup and canary for ungrouped allocs

* Flakey but more robust integrations for deployment panel

* De-flake acceptance tests and add an actively deploying job to mirage

* Jitter-less alloc status distribution removes my bad math

* bugfix caused by summary.desiredTotal non-null

* More interesting mirage active deployment alloc breakdown

* Further tests for previous-allocs row

* Previous alloc legend tests

* Percy snapshots added to integration test

* changelog
2023-04-24 22:45:39 -04:00
Seth Hoenig 753c17c9de
services: un-mark group services as deregistered if restart hook runs (#16905)
* services: un-mark group services as deregistered if restart hook runs

This PR may fix a bug where group services will never be deregistered if the
group undergoes a task restart.

* e2e: add test case for restart and deregister group service

* cl: add cl

* e2e: add wait for service list call
2023-04-24 14:24:51 -05:00
Tim Gross 72cbe53f19
logs: allow disabling log collection in jobspec (#16962)
Some Nomad users ship application logs out-of-band via syslog. For these users
having `logmon` (and `docker_logger`) running is unnecessary overhead. Allow
disabling the logmon and pointing the task's stdout/stderr to /dev/null.
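
A sketch of the general idea with a hypothetical `run` helper, not the real logmon wiring:

```go
package main

import (
	"fmt"
	"os"
	"os/exec"
)

// run illustrates disabling log collection: rather than wiring the task's
// stdio through a log shipper, point it at /dev/null. Hypothetical helper.
func run(disableLogs bool) error {
	cmd := exec.Command("echo", "hello") // any task workload
	if disableLogs {
		devnull, err := os.OpenFile(os.DevNull, os.O_WRONLY, 0)
		if err != nil {
			return err
		}
		defer devnull.Close()
		cmd.Stdout, cmd.Stderr = devnull, devnull
	} else {
		cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr // stand-in for logmon FIFOs
	}
	return cmd.Run()
}

func main() {
	fmt.Println(run(true)) // <nil>; the task output is discarded
}
```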

This changeset is the first of several incremental improvements to log
collection short of full-on logging plugins. The next step will likely be to
extend the internal-only task driver configuration so that cluster
administrators can turn off log collection for the entire driver.

---

Fixes: #11175

Co-authored-by: Thomas Weber <towe75@googlemail.com>
2023-04-24 10:00:27 -04:00
valodzka 379497a484
fix host port handling for ipv6 (#16723) 2023-04-20 19:53:20 -07:00
Etienne Bruines 1e3531b978
cni: fix plugin fingerprinting versions (#16776)
CNI plugins v1.2.0 and above output a second line, containing supported protocol versions.
2023-04-20 18:44:39 -07:00
Luiz Aoqui a1ba068e1f
cli: fix panic on job plan when -diff=false (#16944)
PR #14492 introduced a new check to return 0 when the `nomad job plan`
command returns a diff of type `None`.

But the `-diff` CLI flag was also being used to control whether the plan
request should return the diff or not, instead of just controlling whether
the diff was printed.

This means that when `-diff=false` is set the response does not include
any diff information, and so the new check panics.

This commit fixes the problem by always requesting a diff and using the
`-diff` only for controlling output, as it's currently documented.
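
The fix is easy to sketch; all names here are hypothetical stand-ins for the CLI internals:

```go
package main

import "fmt"

type PlanOptions struct{ Diff bool }

type PlanResult struct{ DiffType string } // "None", "Edited", ...

func plan(opts PlanOptions) PlanResult {
	if !opts.Diff {
		return PlanResult{} // no diff info in the response
	}
	return PlanResult{DiffType: "None"}
}

// exitCode sketches the fix: always request the diff so the type check has
// data to work with, and use the -diff flag only to gate printing.
func exitCode(printDiff bool) int {
	res := plan(PlanOptions{Diff: true}) // was: Diff: printDiff
	if printDiff {
		fmt.Println("diff type:", res.DiffType)
	}
	if res.DiffType == "None" {
		return 0
	}
	return 1
}

func main() { fmt.Println(exitCode(false)) } // 0, no panic
```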
2023-04-20 17:33:29 -07:00
astudentofblake 42c4c8d5ea
fix: added landlock access to /usr/libexec for getter (#16900) 2023-04-20 11:16:04 -05:00
claire labry d2beea3435
changelog: add changelog update for vendor label for linux packaging (#16071) 2023-04-19 08:14:14 -07:00
Luiz Aoqui fb588fcbb8
allocrunner: prevent panic on network manager (#16921) 2023-04-18 13:39:13 -07:00
Charlie Voiselle 9e8f2a937c
[scheduler] Honor `false` for distinct hosts constraint (#16907)
* Honor value for distinct_hosts constraint
* Add test for feasibility checking for `false`
---------
Co-authored-by: Michael Schurter <mschurter@hashicorp.com>
2023-04-17 17:43:56 -04:00
Tim Gross 04e049caed
license: show Terminated field in `license get` command (#16892) 2023-04-17 09:01:43 -04:00
Tim Gross 62548616d4
client: allow `drain_on_shutdown` configuration (#16827)
Adds a new configuration to clients to optionally allow them to drain their
workloads on shutdown. The client sends the `Node.UpdateDrain` RPC targeting
itself and then monitors the drain state as seen by the server until the drain
is complete or the deadline expires. If it loses connection with the server, it
will monitor local client status instead to ensure allocations are stopped
before exiting.
2023-04-14 15:35:32 -04:00
Tim Gross 5a9abdc469
drain: use client status to determine drain is complete (#14348)
If an allocation is slow to stop because of `kill_timeout` or `shutdown_delay`,
the node drain is marked as complete prematurely, even though drain monitoring
will continue to report allocation migrations. This impacts the UI or API
clients that monitor node draining to shut down nodes.

This changeset updates the behavior to wait until the client status of all
drained allocs is terminal before marking the node as done draining.
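
A sketch of the new completion check, with stand-in types:

```go
package main

import "fmt"

// Alloc is a stand-in with the two statuses a drained alloc carries.
type Alloc struct {
	DesiredStatus string // what the server wants ("stop")
	ClientStatus  string // what the client reports ("running", "complete", ...)
}

func terminal(status string) bool {
	return status == "complete" || status == "failed" || status == "lost"
}

// drainComplete reflects the fix: the node is done draining only when the
// client status of every drained alloc is terminal, not merely when the
// server has marked them for stopping.
func drainComplete(allocs []Alloc) bool {
	for _, a := range allocs {
		if !terminal(a.ClientStatus) {
			return false
		}
	}
	return true
}

func main() {
	allocs := []Alloc{{DesiredStatus: "stop", ClientStatus: "running"}}
	fmt.Println(drainComplete(allocs)) // false: still in shutdown_delay
	allocs[0].ClientStatus = "complete"
	fmt.Println(drainComplete(allocs)) // true
}
```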
2023-04-13 08:55:28 -04:00
Seth Hoenig ec1a8ae12a
deps: update docker to 23.0.3 (#16862)
* [no ci] deps: update docker to 23.0.3

This PR brings our docker/docker dependency (which is hosted at github.com/moby/moby)
up to 23.0.3 (forward about 2 years). Refactored our use of docker/libnetwork to
reference the package in its new home, which is docker/docker/libnetwork (it is
no longer an independent repository). Some minor nearby test case cleanup as well.

* add cl
2023-04-12 14:13:36 -05:00
Juana De La Cuesta 8302085384
Deployment Status Command Does Not Respect -namespace Wildcard (#16792)
* func: add namespace support for list deployment

* func: add wildcard to namespace filter for deployments

* Update deployment_endpoint.go

* style: use must instead of require or assert

* style: rename paginator to avoid clash with import

* style: add changelog entry

* fix: add missing parameter for upsert jobs
2023-04-12 11:02:14 +02:00
James Rasell bc01d47071
consul/connect: fixed a bug where restarting proxy tasks failed. (#16815)
The first start of a Consul Connect proxy sidecar triggers a run
of the envoy_version hook which modifies the task config image
entry. The modification takes into account a number of factors to
correctly populate this. Importantly, once the hook has run, it
marks itself as done so the taskrunner will not execute it again.

When the client receives a non-destructive update for the
allocation which the proxy sidecar is a member of, it will update
and overwrite the task definition within the taskrunner. In doing
so it overwrites the modification performed by the hook. If the
allocation is restarted, the envoy_version hook will be skipped as
it previously marked itself as done, and therefore the sidecar
config image is incorrect and causes a driver error.

The fix stops the hook from marking itself as done in the view of
the taskrunner, so that it runs again when the task is restarted.
2023-04-11 15:56:03 +01:00
Seth Hoenig ba728f8f97
api: enable support for setting original job source (#16763)
* api: enable support for setting original source alongside job

This PR adds support for setting job source material along with
the registration of a job.

This includes a new HTTP endpoint and a new RPC endpoint for
making queries for the original source of a job. The
HTTP endpoint is /v1/job/<id>/submission?version=<version> and
the RPC method is Job.GetJobSubmission.
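
A sketch of querying that endpoint from Go; the URL shape comes from the commit message, while the surrounding client code is illustrative:

```go
package main

import (
	"fmt"
	"io"
	"net/http"
)

// fetchSubmission queries the submission endpoint described above for the
// original source of a job version. Not the official API client.
func fetchSubmission(addr, jobID string, version int) (string, error) {
	url := fmt.Sprintf("%s/v1/job/%s/submission?version=%d", addr, jobID, version)
	resp, err := http.Get(url)
	if err != nil {
		return "", err
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusOK {
		return "", fmt.Errorf("unexpected status: %s", resp.Status)
	}
	body, err := io.ReadAll(resp.Body)
	return string(body), err
}

func main() {
	src, err := fetchSubmission("http://127.0.0.1:4646", "example", 0)
	if err != nil {
		fmt.Println("error:", err)
		return
	}
	fmt.Println(src)
}
```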

The job source (submitting it is always optional) is stored in
the job_submission memdb table, separately from the actual job.
This way we do not incur the overhead of reading the large string
field throughout normal job operations.

The server config now includes job_max_source_size for configuring
the maximum size the job source may be, before the server simply
drops the source material. This should help prevent Bad Things from
happening when huge jobs are submitted. If the value is set to 0,
all job source material will be dropped.

* api: avoid writing var content to disk for parsing

* api: move submission validation into RPC layer

* api: return an error if updating a job submission without namespace or job id

* api: be exact about the job index we associate a submission with (modify)

* api: reword api docs scheduling

* api: prune all but the last 6 job submissions

* api: protect against nil job submission in job validation

* api: set max job source size in test server

* api: fixups from pr
2023-04-11 08:45:08 -05:00
Daniel Bennett fa33ee567a
gracefully recover tasks that use csi node plugins (#16809)
a new WaitForPlugin() is called during csiHook.Prerun,
so that on startup, clients can recover running
tasks that use CSI volumes, instead of those tasks
being terminated and rescheduled because they need a
node plugin that is "not found" *yet*, purely because
the plugin task has not itself been recovered yet.
2023-04-10 17:15:33 -05:00
Tim Gross 1335543731
ephemeral disk: `migrate` should imply `sticky` (#16826)
The `ephemeral_disk` block's `migrate` field allows for best-effort migration of
the ephemeral disk data to new nodes. The documentation says the `migrate` field
is only respected if `sticky=true`, but in fact if client ACLs are not set the
data is migrated even if `sticky=false`.

The existing behavior when client ACLs are disabled has existed since the early
implementation, so "fixing" that case now would silently break backwards
compatibility. Additionally, having `migrate` not imply `sticky` seems
nonsensical: it suggests that if we place on a new node we migrate the data but
if we place on the same node, we throw the data away!

Update so that `migrate=true` implies `sticky=true` as follows:

* The failure mode when client ACLs are enabled comes from the server not passing
  along a migration token. Update the server so that it provides a migration
  token whenever `migrate=true`, not just when `sticky=true`.
* Update the scheduler so that `migrate` implies `sticky`.
* Update the client so that we check for `migrate || sticky` where appropriate.
* Refactor the E2E tests to move them off the old framework and make the intention
  of the test more clear.
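
A sketch of the first three rules above, using a hypothetical `EphemeralDisk` stand-in:

```go
package main

import "fmt"

// EphemeralDisk is a hypothetical stand-in for the jobspec block.
type EphemeralDisk struct {
	Sticky  bool
	Migrate bool
}

// Canonicalize applies the scheduler-side rule: migrate=true implies
// sticky=true, so data is kept on same-node placements too.
func (d *EphemeralDisk) Canonicalize() {
	if d.Migrate {
		d.Sticky = true
	}
}

// needsMigrationToken mirrors the server-side change: issue a token
// whenever migrate is set, not only when sticky is also set.
func needsMigrationToken(d EphemeralDisk) bool {
	return d.Migrate // was: d.Migrate && d.Sticky
}

// keepData mirrors the client-side check: `migrate || sticky`.
func keepData(d EphemeralDisk) bool { return d.Migrate || d.Sticky }

func main() {
	d := EphemeralDisk{Migrate: true}
	d.Canonicalize()
	fmt.Println(d.Sticky, needsMigrationToken(d), keepData(d)) // true true true
}
```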
2023-04-07 16:33:45 -04:00
Michael Schurter a8b379f962
docker: default device.container_path to host_path (#16811)
* docker: default device.container_path to host_path

Matches docker cli behavior.

Fixes #16754
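
The defaulting logic, sketched with a hypothetical helper:

```go
package main

import "fmt"

// device mirrors the relevant fields of the docker driver's device block.
type device struct {
	HostPath      string
	ContainerPath string
}

// defaultContainerPath applies the fix: when container_path is omitted,
// fall back to host_path, matching the docker CLI. Hypothetical helper.
func defaultContainerPath(d device) device {
	if d.ContainerPath == "" {
		d.ContainerPath = d.HostPath
	}
	return d
}

func main() {
	d := defaultContainerPath(device{HostPath: "/dev/fuse"})
	fmt.Println(d.ContainerPath) // /dev/fuse
}
```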
2023-04-06 14:44:33 -07:00
Tim Gross 6f2b9266bc
Merge pull request #16794 from hashicorp/post-1.5.3-release
Post 1.5.3 release
2023-04-05 13:02:37 -04:00
the-nando f541f2e59b
Do not set attributes when spawning the getter child (#16791)
* Do not set attributes when spawning the getter child

* Cleanup

* Cleanup

---------

Co-authored-by: the-nando <the-nando@invalid.local>
2023-04-05 11:47:51 -05:00
Tim Gross 66a01bb35a upgrade go to 1.20.3 2023-04-05 12:18:19 -04:00
Tim Gross 8278f23042 acl: fix ACL bypass for anon requests that pass thru client HTTP
Requests without an ACL token that pass thru the client's HTTP API are treated
as though they come from the client itself. This allows bypass of ACLs on RPC
requests where ACL permissions are checked (like `Job.Register`). Invalid tokens
are correctly rejected.

Fix the bypass by only setting a client ID on the identity if we have a valid node secret.
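
A sketch of that gating, with hypothetical types standing in for the server's identity resolution:

```go
package main

import "fmt"

// identity is a stand-in for the request identity the server builds for
// RPCs forwarded through a client's HTTP API.
type identity struct {
	ClientID string // non-empty means "treat as the client itself"
}

// resolveIdentity sketches the fix: only grant client identity when the
// request carries a valid node secret; everything else stays anonymous
// and is subject to normal ACL checks.
func resolveIdentity(nodeSecret string, validSecrets map[string]string) identity {
	if nodeID, ok := validSecrets[nodeSecret]; ok {
		return identity{ClientID: nodeID}
	}
	return identity{} // anonymous: no ACL bypass
}

func main() {
	valid := map[string]string{"s3cret": "node-1"}
	fmt.Println(resolveIdentity("s3cret", valid).ClientID) // node-1
	fmt.Println(resolveIdentity("", valid).ClientID == "") // true
}
```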

Note that this changeset will break rate metrics for RPCs sent by clients
without a client secret such as `Node.GetClientAllocs`; these requests will be
recorded as anonymous.

Future work should:
* Ensure the node secret is sent with all client-driven RPCs except
  `Node.Register` which is TOFU.
* Create a new `acl.ACL` object from client requests so that we
  can enforce ACLs for all endpoints in a uniform way that's less error-prone.
2023-04-05 12:17:51 -04:00
Juana De La Cuesta 9b4871fece
Prevent kill_timeout greater than progress_deadline (#16761)
* func: add validation for kill timeout smaller than progress deadline

* style: add changelog

* style: typo in changelog

* style: remove refactored test

* Update .changelog/16761.txt

Co-authored-by: James Rasell <jrasell@users.noreply.github.com>

* Update nomad/structs/structs.go

Co-authored-by: James Rasell <jrasell@users.noreply.github.com>

---------

Co-authored-by: James Rasell <jrasell@users.noreply.github.com>
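
A sketch of the validation described in the title, using hypothetical names:

```go
package main

import (
	"fmt"
	"time"
)

// validateKillTimeout sketches the new check: a task's kill_timeout must
// not exceed the update block's progress_deadline, otherwise a slow kill
// could outlast the deadline and fail the deployment. Hypothetical helper.
func validateKillTimeout(killTimeout, progressDeadline time.Duration) error {
	if progressDeadline > 0 && killTimeout > progressDeadline {
		return fmt.Errorf("kill_timeout (%s) longer than progress_deadline (%s)",
			killTimeout, progressDeadline)
	}
	return nil
}

func main() {
	fmt.Println(validateKillTimeout(15*time.Minute, 10*time.Minute)) // error
	fmt.Println(validateKillTimeout(30*time.Second, 10*time.Minute)) // <nil>
}
```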
2023-04-04 18:17:10 +02:00
James Rasell cb6ba80f0f
cli: stream both stdout and stderr when following an alloc. (#16556)
This update changes the behaviour when following logs from an
allocation, so that both stdout and stderr files are streamed when
the operator supplies the follow flag. The previous behaviour is
retained for all other flag combinations and situations.

Co-authored-by: Luiz Aoqui <luiz@hashicorp.com>
2023-04-04 10:42:27 +01:00
Georgy Buranov ca80546ef7
take maximum processor MHz (#16740)
* take maximum processor MHz

* remove break

* cl: add cl for 16740

---------

Co-authored-by: Seth Hoenig <shoenig@duck.com>
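
The fix amounts to scanning every reported core frequency instead of stopping at the first (hence the removed `break`); a hypothetical sketch:

```go
package main

import "fmt"

// maxMhz sketches the fingerprinting change: on machines that report a
// different frequency per core, take the maximum rather than returning
// after the first entry.
func maxMhz(coreMhz []float64) float64 {
	var max float64
	for _, mhz := range coreMhz {
		if mhz > max {
			max = mhz
		}
	}
	return max
}

func main() {
	fmt.Println(maxMhz([]float64{1200, 3504, 2424})) // 3504
}
```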
2023-03-31 11:25:32 -05:00
Horacio Monsalvo 20372b1721
connect: add meta on ConsulSidecarService (#16705)
Co-authored-by: Sol-Stiep <sol.stiep@southworks.com>
2023-03-30 16:09:28 -04:00
Piotr Kazmierczak acfc266c30 acl: JWT changelog entry and typo fix 2023-03-30 09:40:11 +02:00
Tim Gross 76284a09a0
docker: move pause container recovery to after `SetConfig` (#16713)
When we added recovery of pause containers in #16352 we called the recovery
function from the plugin factory function. But in our plugin setup protocol, a
plugin isn't ready for use until we call `SetConfig`. This meant that
recovering pause containers was always done with the default
config. Setting up the Docker client only happens once, so setting the wrong
config in the recovery function also means that all other Docker API calls will
use the default config.

Move the `recoveryPauseContainers` call into `SetConfig`. Fix the error
handling so that we return any error but also don't log when the context is
canceled, which happens twice during normal startup as we fingerprint the
driver.
2023-03-29 16:20:37 -04:00
dependabot[bot] afa9608475
build(deps): bump github.com/opencontainers/runc from 1.1.4 to 1.1.5 (#16712)
* build(deps): bump github.com/opencontainers/runc from 1.1.4 to 1.1.5

Bumps [github.com/opencontainers/runc](https://github.com/opencontainers/runc) from 1.1.4 to 1.1.5.
- [Release notes](https://github.com/opencontainers/runc/releases)
- [Changelog](https://github.com/opencontainers/runc/blob/v1.1.5/CHANGELOG.md)
- [Commits](https://github.com/opencontainers/runc/compare/v1.1.4...v1.1.5)

---
updated-dependencies:
- dependency-name: github.com/opencontainers/runc
  dependency-type: direct:production
...

Signed-off-by: dependabot[bot] <support@github.com>

* changelog entry

---------

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: Tim Gross <tgross@hashicorp.com>
2023-03-29 15:05:05 -04:00
Elvis Pranskevichus 11a9bb6ce7
drivers/exec: Fix handling of capabilities for unprivileged tasks (#16643)
Currently, the `exec` driver is only setting the Bounding set, which is
not sufficient to actually enable the requisite capabilities for the
task process.  In order for the capabilities to survive `execve`
performed by libcontainer, the `Permitted`, `Inheritable`, and `Ambient`
sets must also be set.

Per CAPABILITIES (7):

> Ambient: This is a set of capabilities that are preserved across an
> execve(2) of a program that is not privileged.  The ambient capability
> set obeys the invariant that no capability can ever be ambient if it
> is not both permitted and inheritable.
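
A sketch of the change using a local stand-in struct rather than the real libcontainer `configs.Capabilities`:

```go
package main

import "fmt"

// capabilities mirrors the capability sets a container config carries;
// local stand-in struct, not the real libcontainer type.
type capabilities struct {
	Bounding    []string
	Permitted   []string
	Inheritable []string
	Ambient     []string
}

// forTask applies the fix: populate all the sets needed for capabilities
// to survive execve of an unprivileged process, not just Bounding.
func forTask(caps []string) capabilities {
	return capabilities{
		Bounding:    caps, // previously the only set populated
		Permitted:   caps,
		Inheritable: caps,
		Ambient:     caps, // ambient requires permitted + inheritable
	}
}

func main() {
	c := forTask([]string{"CAP_NET_BIND_SERVICE"})
	fmt.Printf("%+v\n", c)
}
```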
2023-03-28 12:12:55 -04:00
Seth Hoenig 87f4b71df0
client/fingerprint: correctly fingerprint E/P cores of Apple Silicon chips (#16672)
* client/fingerprint: correctly fingerprint E/P cores of Apple Silicon chips

This PR adds detection of asymmetric core types (Power & Efficiency) (P/E)
when running on M1/M2 Apple Silicon CPUs. This functionality is provided
by shoenig/go-m1cpu which makes use of the Apple IOKit framework to read
undocumented registers containing CPU performance data. Currently working
on getting that functionality merged upstream into gopsutil, but gopsutil
would still not support detecting P vs E cores like this PR does.

Also refactors the CPUFingerprinter code to handle the mixed core
types, now setting power vs efficiency cpu attributes.

For now the scheduler is still unaware of mixed core types - on Apple
platforms tasks cannot reserve cores anyway so it doesn't matter, but
at least now the total CPU shares available will be correct.

Future work should include adding support for detecting P/E cores on
the latest and upcoming Intel chips, where computation of total cpu shares
is currently incorrect. For that, we should also include updating the
scheduler to be core-type aware, so that tasks of resources.cores on Linux
platforms can be assigned the correct number of CPU shares for the core
type(s) they have been assigned.

node attributes before

cpu.arch                  = arm64
cpu.modelname             = Apple M2 Pro
cpu.numcores              = 12
cpu.reservablecores       = 0
cpu.totalcompute          = 1000

node attributes after

cpu.arch                  = arm64
cpu.frequency.efficiency  = 2424
cpu.frequency.power       = 3504
cpu.modelname             = Apple M2 Pro
cpu.numcores.efficiency   = 4
cpu.numcores.power        = 8
cpu.reservablecores       = 0
cpu.totalcompute          = 37728
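
The new `cpu.totalcompute` follows directly from the per-core-type numbers above; a small sketch reproducing the arithmetic:

```go
package main

import "fmt"

// totalCompute reproduces the arithmetic behind the "after" attributes:
// each core contributes its own frequency, so P and E cores are summed
// separately instead of multiplying one MHz value by the core count.
func totalCompute(perfCores, perfMhz, effCores, effMhz int) int {
	return perfCores*perfMhz + effCores*effMhz
}

func main() {
	// M2 Pro from the commit message: 8 P-cores @ 3504 MHz, 4 E-cores @ 2424 MHz.
	fmt.Println(totalCompute(8, 3504, 4, 2424)) // 37728
}
```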

* fingerprint/cpu: follow up cr items
2023-03-28 08:27:58 -05:00