Commit graph

4334 commits

Author SHA1 Message Date
James Rasell 96f7c84e4e
variable: fixup metadata copy comment and remove unrequired type. (#17234) 2023-05-18 13:49:41 +01:00
Piotr Kazmierczak fe272c3686
refactor acl.UpsertTokens to avoid unnecessary RPC calls. (#17194)
The new RPC endpoints introduced for OIDC and JWT auth perform unnecessarily many RPC calls when they upsert generated ACL tokens, as pointed out by @tgross.

This PR moves the common logic out of the acl.UpsertTokens method into a helper method that sidesteps authentication, metrics, etc.
2023-05-16 09:31:51 +02:00
Luiz Aoqui 389212bfda
node pool: initial base work (#17163)
Implementation of the base work for the new node pools feature. It includes a new `NodePool` struct and its corresponding state store table.

Upon start, the state store is populated with two built-in node pools that cannot be modified or deleted:

  * `all` is a node pool that always includes all nodes in the cluster.
  * `default` is the node pool where nodes that don't specify a node pool in their configuration are placed.
2023-05-15 10:49:08 -04:00
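
As a rough illustration of the shape of the feature described above (the struct and field names here are assumptions for the sketch, not the actual Nomad definition), the built-in pools might be seeded like this:

```go
package main

import "fmt"

// NodePool is a hypothetical sketch of the struct introduced for node pools;
// the real field set in Nomad may differ.
type NodePool struct {
	Name        string
	Description string
	// BuiltIn marks pools that cannot be modified or deleted.
	BuiltIn bool
}

// builtInNodePools sketches the two pools the state store is seeded with.
func builtInNodePools() []*NodePool {
	return []*NodePool{
		{Name: "all", Description: "always includes all nodes in the cluster", BuiltIn: true},
		{Name: "default", Description: "pool for nodes that do not specify one", BuiltIn: true},
	}
}

func main() {
	for _, p := range builtInNodePools() {
		fmt.Printf("%s: %s\n", p.Name, p.Description)
	}
}
```
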
Seth Hoenig 81e36b3650
core: eliminate second index on job_submissions table (#17146)
* core: eliminate second index on job_submissions table

This PR refactors the job_submissions state store code to eliminate the
use of a second index formerly used for purging all versions of a given
job. In practice we ended up with duplicate entries on the table. Instead,
use index prefix scanning on the primary index and tidy up any potential
for creating (or removing) duplicates.

* core: pr comments followup
2023-05-11 09:51:08 -05:00
Tim Gross 9ed75e1f72
client: de-duplicate alloc updates and gate during restore (#17074)
When client nodes are restarted, all allocations that have been scheduled on the
node have their modify index updated, including terminal allocations. There are
several contributing factors:

* The `allocSync` method that updates the servers isn't gated on first contact
  with the servers. This means that if a server updates the desired state while
  the client is down, the `allocSync` races with the `Node.ClientGetAlloc`
  RPC. This will typically result in the client updating the server with "running"
  and then immediately thereafter "complete".

* The `allocSync` method unconditionally sends the `Node.UpdateAlloc` RPC even
  if it's possible to assert that the server has definitely seen the client
  state. The allocrunner may queue-up updates even if we gate sending them. So
  then we end up with a race between the allocrunner updating its internal state
  to overwrite the previous update and `allocSync` sending the bogus or duplicate
  update.

This changeset adds tracking of server-acknowledged state to the
allocrunner. This state gets checked in the `allocSync` before adding the update
to the batch, and updated when `Node.UpdateAlloc` returns successfully. To
implement this we need to be able to equality-check the updates against the last
acknowledged state. We also need to add the last acknowledged state to the
client state DB, otherwise we'd drop unacknowledged updates across restarts.

The client restart test has been expanded to cover a variety of allocation
states, including allocs stopped before shutdown, allocs stopped by the server
while the client is down, and allocs that have been completely GC'd on the
server while the client is down. I've also bench tested scenarios where the task
workload is killed while the client is down, resulting in a failed restore.

Fixes #16381
2023-05-11 09:05:24 -04:00
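
A minimal sketch of the acknowledgement gate described above, using hypothetical stand-in types rather than the real allocrunner and `allocSync` interfaces:

```go
package main

import "fmt"

// Alloc is a stand-in for a client allocation update.
type Alloc struct {
	ID           string
	ClientStatus string
}

// ackTracker remembers the last state the server has acknowledged per alloc.
type ackTracker struct {
	acked map[string]Alloc
}

// shouldSend reports whether an update differs from the last acknowledged state.
func (t *ackTracker) shouldSend(u Alloc) bool {
	last, ok := t.acked[u.ID]
	return !ok || last != u
}

// acknowledge records a successful Node.UpdateAlloc response for the update.
func (t *ackTracker) acknowledge(u Alloc) { t.acked[u.ID] = u }

func main() {
	t := &ackTracker{acked: map[string]Alloc{}}
	u := Alloc{ID: "a1", ClientStatus: "complete"}
	fmt.Println(t.shouldSend(u)) // true: never acknowledged
	t.acknowledge(u)
	fmt.Println(t.shouldSend(u)) // false: duplicate of acknowledged state
}
```
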
Seth Hoenig 74714272cc
api: set the job submission during job reversion (#17097)
* api: set the job submission during job reversion

This PR fixes a bug where the job submission would always be nil when
a job goes through a reversion to a previous version. Basically we need
to detect when this happens, lookup the submission of the job version
being reverted to, and set that as the submission of the new job being
created.

* e2e: add e2e test for job submissions during reversion

This e2e test ensures a reverted job inherits the job submission
associated with the version of the job being reverted to.
2023-05-08 14:18:34 -05:00
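
In rough terms, the revert path now carries the submission forward as sketched below; the function, map, and version numbers are hypothetical and only illustrate the lookup-and-attach step:

```go
package main

import "fmt"

type JobSubmission struct {
	Source string
}

// submissions maps job version -> original source, standing in for the
// job_submission state store table.
var submissions = map[uint64]*JobSubmission{
	3: {Source: `job "web" { ... }`},
}

// revertTo sketches looking up the submission of the version being reverted
// to and attaching it to the newly created job version.
func revertTo(target uint64) (newVersion uint64, sub *JobSubmission) {
	sub = submissions[target] // may be nil if no source was submitted
	newVersion = 7            // hypothetical next version number
	if sub != nil {
		submissions[newVersion] = sub
	}
	return newVersion, sub
}

func main() {
	v, sub := revertTo(3)
	fmt.Println(v, sub != nil) // 7 true: the new version inherits the source
}
```
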
Daniel Bennett a7ed6f5c53
full task cleanup when alloc prerun hook fails (#17104)
to avoid leaking task resources (e.g. containers,
iptables) if allocRunner prerun fails during
restore on client restart.

now if prerun fails, TaskRunner.MarkFailedKill()
will only emit an event, mark the task as failed,
and cancel the tr's killCtx, so then ar.runTasks()
-> tr.Run() can take care of the actual cleanup.

removed from (formerly) tr.MarkFailedDead(),
now handled by tr.Run():
 * set task state as dead
 * save task runner local state
 * task stop hooks

also done in tr.Run() now that it's not skipped:
 * handleKill() to kill tasks while respecting
   their shutdown delay, and retrying as needed
   * also includes task preKill hooks
 * clearDriverHandle() to destroy the task
   and associated resources
 * task exited hooks
2023-05-08 13:17:10 -05:00
stswidwinski 9c1c2cb5d2
Correct the status description and modify time of canceled evals. (#17071)
Fix for #17070. Corrected the status description and modify time of evals which are canceled due to another eval having completed in the meantime.
2023-05-08 08:50:36 -04:00
Seth Hoenig fff2eec625
connect: use heuristic to detect sidecar task driver (#17065)
* connect: use heuristic to detect sidecar task driver

This PR adds a heuristic to detect whether to use the podman task driver
for the connect sidecar proxy. The podman driver will be selected if there
is at least one task in the task group configured to use podman, and there
are zero tasks in the group configured to use docker. In all other cases
the task driver defaults to docker.

After this change, we should be able to run typical Connect jobspecs
(e.g. nomad job init [-short] -connect) on Clusters configured with the
podman task driver, without modification to the job files.

Closes #17042

* golf: cleanup driver detection logic
2023-05-05 10:19:30 -05:00
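
The heuristic itself is small; a sketch with a simplified task type (not the real jobspec structs) looks like this:

```go
package main

import "fmt"

// Task is a minimal stand-in for a task group member.
type Task struct {
	Name   string
	Driver string
}

// sidecarDriver sketches the heuristic: pick podman only when the group has at
// least one podman task and no docker tasks; otherwise default to docker.
func sidecarDriver(tasks []Task) string {
	podman, docker := 0, 0
	for _, t := range tasks {
		switch t.Driver {
		case "podman":
			podman++
		case "docker":
			docker++
		}
	}
	if podman > 0 && docker == 0 {
		return "podman"
	}
	return "docker"
}

func main() {
	fmt.Println(sidecarDriver([]Task{{Name: "app", Driver: "podman"}})) // podman
	fmt.Println(sidecarDriver([]Task{{Name: "app", Driver: "docker"}})) // docker
}
```
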
James Rasell 6ec4a69f47
scale: fixed a bug where evals could be created with wrong type. (#17092)
The job scale RPC endpoint hard-coded the eval creation to use the
`service` job type. This meant scaling events triggered on jobs of
type batch would create evaluations with the wrong type, which
does not seem to cause any problems, just confusion when
correlating the two.
2023-05-05 14:46:10 +01:00
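
The fix amounts to deriving the evaluation type from the job being scaled; a sketch with simplified stand-in structs:

```go
package main

import "fmt"

type Job struct {
	ID   string
	Type string // "service", "batch", ...
}

type Evaluation struct {
	JobID string
	Type  string
}

// evalForScale sketches the fix: take the evaluation type from the job
// instead of hard-coding "service".
func evalForScale(job *Job) *Evaluation {
	return &Evaluation{JobID: job.ID, Type: job.Type}
}

func main() {
	e := evalForScale(&Job{ID: "nightly-import", Type: "batch"})
	fmt.Println(e.Type) // batch, matching the job type
}
```
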
Tim Gross 17bd930ca9
logs: fix missing allocation logs after update to Nomad 1.5.4 (#17087)
When the server restarts for the upgrade, it loads the `structs.Job` from the
Raft snapshot/logs. The jobspec has long since been parsed, so none of the
guards around the default value are in play. The empty field value for `Enabled`
is the zero value, which is false.

This doesn't impact any running allocation because we don't replace running
allocations when either the client or server restart. But as soon as any
allocation gets rescheduled (ex. you drain all your clients during upgrades),
it'll be using the `structs.Job` that the server has, which has `Enabled =
false`, and logs will not be collected.

This changeset fixes the bug by adding a new field `Disabled` which defaults to
false (so that the zero value works), and deprecates the old field.

Fixes #17076
2023-05-04 16:01:18 -04:00
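
The zero-value problem is easy to reproduce in miniature. The field names below mirror the description above, but the surrounding types are hypothetical:

```go
package main

import "fmt"

// Old shape: the zero value of Enabled is false, so a Job decoded from an
// older Raft snapshot silently reads as "logging disabled".
type LogConfigV1 struct {
	Enabled bool
}

// New shape: Disabled defaults to false, so the zero value means
// "collect logs", which is the safe default.
type LogConfigV2 struct {
	Disabled bool
}

func main() {
	var old LogConfigV1 // as if decoded from an old snapshot with no field set
	var cur LogConfigV2
	fmt.Println("v1 collects logs:", old.Enabled)   // false: the bug
	fmt.Println("v2 collects logs:", !cur.Disabled) // true: zero value is safe
}
```
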
Michael Schurter 3b3b02b741
dep: update from jwt/v4 to jwt/v5 (#17062)
Their release notes are here: https://github.com/golang-jwt/jwt/releases

Seemed wise to upgrade before we do even more with JWTs. For example
this upgrade *would* have mattered if we already implemented common JWT
claims such as expiration. Since we didn't rely on any claim
verification, this upgrade is a noop...

...except for 1 test that called `Claims.Valid()`! Removing that
assertion *seems* scary, but it didn't actually do anything because we
didn't implement any of the standard claims it validated:

https://github.com/golang-jwt/jwt/blob/v4.5.0/map_claims.go#L120-L151

So functionally this major upgrade is a noop.
2023-05-03 11:17:38 -07:00
Luiz Aoqui 7b5a8f1fb0
Revert "hashicorp/go-msgpack v2 (#16810)" (#17047)
This reverts commit 8a98520d56eed3848096734487d8bd3eb9162a65.
2023-05-01 17:18:34 -04:00
Michael Schurter d3b0bbc088
deps: update go-bexpr from 0.1.11 to 0.1.12 (#16991)
Pulls in https://github.com/hashicorp/go-bexpr/pull/38

Fixes #16758
2023-04-27 09:01:42 -07:00
James Rasell ac98c2ed40
vars: ensure struct receiver names are consistent. (#16995) 2023-04-27 13:51:11 +01:00
James Rasell 4d2c1403c2
scale: do not allow scaling of jobs with type system. (#16969) 2023-04-25 15:47:44 +01:00
Tim Gross 72cbe53f19
logs: allow disabling log collection in jobspec (#16962)
Some Nomad users ship application logs out-of-band via syslog. For these users
having `logmon` (and `docker_logger`) running is unnecessary overhead. Allow
disabling the logmon and pointing the task's stdout/stderr to /dev/null.

This changeset is the first of several incremental improvements to log
collection short of full-on logging plugins. The next step will likely be to
extend the internal-only task driver configuration so that cluster
administrators can turn off log collection for the entire driver.

---

Fixes: #11175

Co-authored-by: Thomas Weber <towe75@googlemail.com>
2023-04-24 10:00:27 -04:00
valodzka 379497a484
fix host port handling for ipv6 (#16723) 2023-04-20 19:53:20 -07:00
James Rasell 367cfa6d93
rpc: use "+" concatination in hot path RPC rate limit metrics. (#16923) 2023-04-18 13:41:34 +01:00
Ian Fijolek 619f49afcf
hashicorp/go-msgpack v2 (#16810)
* Upgrade from hashicorp/go-msgpack v1.1.5 to v2.1.0

Fixes #16808

* Update hashicorp/net-rpc-msgpackrpc to v2 to match go-msgpack

* deps: use go-msgpack v2.0.0

go-msgpack v2.1.0 includes some code changes that we will need to
investigate further to assess their impact on Nomad, so keeping this
dependency on v2.0.0 for now since it's a no-op.

---------

Co-authored-by: Luiz Aoqui <luiz@hashicorp.com>
2023-04-17 17:02:05 -04:00
Tim Gross 62548616d4
client: allow drain_on_shutdown configuration (#16827)
Adds a new configuration to clients to optionally allow them to drain their
workloads on shutdown. The client sends the `Node.UpdateDrain` RPC targeting
itself and then monitors the drain state as seen by the server until the drain
is complete or the deadline expires. If it loses connection with the server, it
will monitor local client status instead to ensure allocations are stopped
before exiting.
2023-04-14 15:35:32 -04:00
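
A minimal sketch of the shutdown flow described above, assuming hypothetical callbacks for issuing the self-drain and checking its state (not the real client APIs):

```go
package main

import (
	"errors"
	"fmt"
	"time"
)

// drainStatus is a hypothetical callback that reports whether the server
// considers this node's drain complete.
type drainStatus func() (done bool, err error)

// drainOnShutdown sketches the flow: request a self-drain, then poll the drain
// state until it completes or the deadline expires.
func drainOnShutdown(requestDrain func() error, status drainStatus, deadline time.Duration) error {
	if err := requestDrain(); err != nil { // Node.UpdateDrain targeting ourselves
		return err
	}
	timeout := time.After(deadline)
	tick := time.NewTicker(time.Second)
	defer tick.Stop()
	for {
		select {
		case <-timeout:
			return errors.New("drain deadline exceeded")
		case <-tick.C:
			done, err := status()
			if err != nil {
				// lost the server: a real client would fall back to
				// watching local alloc status instead
				continue
			}
			if done {
				return nil
			}
		}
	}
}

func main() {
	err := drainOnShutdown(
		func() error { return nil },
		func() (bool, error) { return true, nil },
		30*time.Second)
	fmt.Println("drain complete:", err == nil)
}
```
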
Tim Gross 5a9abdc469
drain: use client status to determine drain is complete (#14348)
If an allocation is slow to stop because of `kill_timeout` or `shutdown_delay`,
the node drain is marked as complete prematurely, even though drain monitoring
will continue to report allocation migrations. This impacts the UI or API
clients that monitor node draining to shut down nodes.

This changeset updates the behavior to wait until the client status of all
drained allocs are terminal before marking the node as done draining.
2023-04-13 08:55:28 -04:00
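
The core of the new check is that the *client* status, not the desired status, decides completion; a sketch with simplified structs:

```go
package main

import "fmt"

// Alloc carries the two status fields relevant to drain completion.
type Alloc struct {
	DesiredStatus string // what the server wants, e.g. "stop"
	ClientStatus  string // what the client reports, e.g. "running", "complete"
}

func terminal(status string) bool {
	return status == "complete" || status == "failed" || status == "lost"
}

// drainDone sketches the updated behavior: the node is only done draining once
// every drained alloc's client status is terminal.
func drainDone(allocs []Alloc) bool {
	for _, a := range allocs {
		if !terminal(a.ClientStatus) {
			return false
		}
	}
	return true
}

func main() {
	allocs := []Alloc{{DesiredStatus: "stop", ClientStatus: "running"}}
	fmt.Println(drainDone(allocs)) // false: still stopping (kill_timeout/shutdown_delay)
	allocs[0].ClientStatus = "complete"
	fmt.Println(drainDone(allocs)) // true
}
```
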
James Rasell b7a41fe48d
core: ensure all Server receiver names are consistent. (#16859) 2023-04-12 14:03:07 +01:00
Juana De La Cuesta 8302085384
Deployment Status Command Does Not Respect -namespace Wildcard (#16792)
* func: add namespace support for list deployment

* func: add wildcard to namespace filter for deployments

* Update deployment_endpoint.go

* style: use must instead of require or asseert

* style: rename paginator to avoid clash with import

* style: add changelog entry

* fix: add missing parameter for upsert jobs
2023-04-12 11:02:14 +02:00
Tim Gross a9a350cfdb
drainer: fix codec race condition in integration test (#16845)
msgpackrpc codec handles are specific to a connection and cannot be shared
between goroutines; this can cause corrupted decoding. Fix the drainer
integration test so that we create separate codecs for the goroutines that the
test helper spins up to simulate client updates.

This changeset also refactors the drainer integration test to bring it up to
current idioms and library usages, make assertions more clear, and reduce
duplication.
2023-04-11 14:31:13 -04:00
Seth Hoenig ba728f8f97
api: enable support for setting original job source (#16763)
* api: enable support for setting original source alongside job

This PR adds support for setting job source material along with
the registration of a job.

This includes a new HTTP endpoint and a new RPC endpoint for
making queries for the original source of a job. The
HTTP endpoint is /v1/job/<id>/submission?version=<version> and
the RPC method is Job.GetJobSubmission.

The job source (if submitted, and doing so is always optional) is
stored in the job_submission memdb table, separately from the
actual job. This way we do not incur overhead of reading the large
string field throughout normal job operations.

The server config now includes job_max_source_size for configuring
the maximum size the job source may be, before the server simply
drops the source material. This should help prevent Bad Things from
happening when huge jobs are submitted. If the value is set to 0,
all job source material will be dropped.

* api: avoid writing var content to disk for parsing

* api: move submission validation into RPC layer

* api: return an error if updating a job submission without namespace or job id

* api: be exact about the job index we associate a submission with (modify)

* api: reword api docs scheduling

* api: prune all but the last 6 job submissions

* api: protect against nil job submission in job validation

* api: set max job source size in test server

* api: fixups from pr
2023-04-11 08:45:08 -05:00
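
The read side of the new HTTP endpoint can be exercised with a plain GET. The path and query parameter below come from the commit message; the address and job name are placeholders:

```go
package main

import (
	"fmt"
	"io"
	"net/http"
)

func main() {
	// GET /v1/job/<id>/submission?version=<version> returns the original
	// source material, if any was stored at registration time.
	url := "http://127.0.0.1:4646/v1/job/example/submission?version=0"
	resp, err := http.Get(url)
	if err != nil {
		fmt.Println("request failed:", err)
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Println(resp.Status)
	fmt.Println(string(body))
}
```
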
Piotr Kazmierczak dea8b1a093
acl: bump JWT auth gate to 1.5.4 (#16838) 2023-04-11 10:07:45 +02:00
hashicorp-copywrite[bot] 005636afa0 [COMPLIANCE] Add Copyright and License Headers 2023-04-10 15:36:59 +00:00
Tim Gross 1335543731
ephemeral disk: migrate should imply sticky (#16826)
The `ephemeral_disk` block's `migrate` field allows for best-effort migration of
the ephemeral disk data to new nodes. The documentation says the `migrate` field
is only respected if `sticky=true`, but in fact if client ACLs are not set the
data is migrated even if `sticky=false`.

The existing behavior when client ACLs are disabled has existed since the early
implementation, so "fixing" that case now would silently break backwards
compatibility. Additionally, having `migrate` not imply `sticky` seems
nonsensical: it suggests that if we place on a new node we migrate the data but
if we place on the same node, we throw the data away!

Update so that `migrate=true` implies `sticky=true` as follows:

* The failure mode when client ACLs are enabled comes from the server not passing
  along a migration token. Update the server so that it provides a migration
  token whenever `migrate=true`, not just when `sticky=true`.
* Update the scheduler so that `migrate` implies `sticky`.
* Update the client so that we check for `migrate || sticky` where appropriate.
* Refactor the E2E tests to move them off the old framework and make the intention
  of the test more clear.
2023-04-07 16:33:45 -04:00
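
The "migrate implies sticky" rule is a one-line normalization; a sketch with a simplified struct (the method name here is an assumption):

```go
package main

import "fmt"

type EphemeralDisk struct {
	Sticky  bool
	Migrate bool
}

// Canonicalize sketches the new rule: migrate implies sticky, so data is kept
// whether the alloc is replaced on the same node or migrated to a new one.
func (d *EphemeralDisk) Canonicalize() {
	if d.Migrate {
		d.Sticky = true
	}
}

func main() {
	d := &EphemeralDisk{Migrate: true}
	d.Canonicalize()
	fmt.Println(d.Sticky) // true: scheduler and client can check Sticky alone
}
```
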
hc-github-team-nomad-core 3578078caf Prepare for next release 2023-04-05 12:31:42 -04:00
hc-github-team-nomad-core b64ee2726d Generate files for 1.5.3 release 2023-04-05 12:31:30 -04:00
Tim Gross 8278f23042 acl: fix ACL bypass for anon requests that pass thru client HTTP
Requests without an ACL token that pass thru the client's HTTP API are treated
as though they come from the client itself. This allows bypass of ACLs on RPC
requests where ACL permissions are checked (like `Job.Register`). Invalid tokens
are correctly rejected.

Fix the bypass by only setting a client ID on the identity if we have a valid node secret.

Note that this changeset will break rate metrics for RPCs sent by clients
without a client secret such as `Node.GetClientAllocs`; these requests will be
recorded as anonymous.

Future work should:
* Ensure the node secret is sent with all client-driven RPCs except
  `Node.Register` which is TOFU.
* Create a new `acl.ACL` object from client requests so that we
  can enforce ACLs for all endpoints in a uniform way that's less error-prone.
2023-04-05 12:17:51 -04:00
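
A minimal sketch of the fix's shape, with hypothetical types standing in for the server's request identity handling: a request only gets a client identity when it presents a node secret the server recognizes.

```go
package main

import "fmt"

// identity is a stand-in for the per-request identity the server builds.
type identity struct {
	ClientID string // set only for requests proven to come from a client node
}

// resolveIdentity sketches the fix: an empty or unknown node secret leaves the
// request anonymous instead of granting it the client's implicit permissions.
func resolveIdentity(nodeSecret string, validSecrets map[string]string) identity {
	if nodeID, ok := validSecrets[nodeSecret]; ok && nodeSecret != "" {
		return identity{ClientID: nodeID}
	}
	return identity{} // anonymous: ACLs are enforced normally
}

func main() {
	secrets := map[string]string{"s3cr3t": "node-1"}
	fmt.Println(resolveIdentity("s3cr3t", secrets).ClientID) // node-1
	fmt.Println(resolveIdentity("", secrets).ClientID == "") // true: anonymous
}
```
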
Juana De La Cuesta 9b4871fece
Prevent kill_timeout greater than progress_deadline (#16761)
* func: add validation for kill timeout smaller than progress deadline

* style: add changelog

* style: typo in changelog

* style: remove refactored test

* Update .changelog/16761.txt

Co-authored-by: James Rasell <jrasell@users.noreply.github.com>

* Update nomad/structs/structs.go

Co-authored-by: James Rasell <jrasell@users.noreply.github.com>

---------

Co-authored-by: James Rasell <jrasell@users.noreply.github.com>
2023-04-04 18:17:10 +02:00
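
The validation itself reduces to a duration comparison; a sketch (the function name and error text are assumptions):

```go
package main

import (
	"fmt"
	"time"
)

// validateKillTimeout sketches the added check: reject a jobspec whose
// kill_timeout exceeds the update block's progress_deadline.
func validateKillTimeout(killTimeout, progressDeadline time.Duration) error {
	if progressDeadline > 0 && killTimeout > progressDeadline {
		return fmt.Errorf("kill_timeout (%s) longer than progress_deadline (%s)",
			killTimeout, progressDeadline)
	}
	return nil
}

func main() {
	fmt.Println(validateKillTimeout(15*time.Minute, 10*time.Minute)) // error
	fmt.Println(validateKillTimeout(30*time.Second, 10*time.Minute)) // <nil>
}
```
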
Juana De La Cuesta 1fc13b83d8
style: update documentation (#16729) 2023-03-31 16:38:16 +02:00
Daniel Bennett c42950e342
ent: move all license info into LicenseConfig{} (#16738)
and add new TestConfigForServer() to get a
valid nomad.Config to use in tests

Co-authored-by: Luiz Aoqui <luiz@hashicorp.com>
2023-03-30 16:15:05 -05:00
Horacio Monsalvo 20372b1721
connect: add meta on ConsulSidecarService (#16705)
Co-authored-by: Sol-Stiep <sol.stiep@southworks.com>
2023-03-30 16:09:28 -04:00
Piotr Kazmierczak 1a5eba24a6 acl: set minACLJWTAuthMethodVersion to 1.5.3 and adjust code comment 2023-03-30 15:30:42 +02:00
Piotr Kazmierczak d98c8f6759 acl: rebased on main and changed the gate to 1.5.3-dev 2023-03-30 09:40:12 +02:00
Piotr Kazmierczak 16b6bd9ff2 acl: fix canonicalization of JWT auth method mock (#16531) 2023-03-30 09:39:56 +02:00
Piotr Kazmierczak 2b353902a1 acl: HTTP endpoints for JWT auth (#16519) 2023-03-30 09:39:56 +02:00
Piotr Kazmierczak e48c48e89b acl: RPC endpoints for JWT auth (#15918) 2023-03-30 09:39:56 +02:00
Piotr Kazmierczak a9230fb0b7 acl: JWT auth method 2023-03-30 09:39:56 +02:00
Juana De La Cuesta 320884b8ee
Multiple instances of a periodic job are run simultaneously, when prohibit_overlap is true (#16583)
* Multiple instances of a periodic job are run simultaneously, when prohibit_overlap is true
Fixes #11052
When restoring periodic dispatcher, all periodic jobs are forced without checking for previous children.

* style: refactor force run function

* fix: remove defer and inline unlock for speed optimization

* Update nomad/leader.go

Co-authored-by: James Rasell <jrasell@users.noreply.github.com>

* Update nomad/leader_test.go

Co-authored-by: James Rasell <jrasell@users.noreply.github.com>

* Update nomad/leader_test.go

Co-authored-by: James Rasell <jrasell@users.noreply.github.com>

* Update nomad/leader_test.go

Co-authored-by: James Rasell <jrasell@users.noreply.github.com>

* Update nomad/leader_test.go

Co-authored-by: James Rasell <jrasell@users.noreply.github.com>

* Update nomad/leader_test.go

Co-authored-by: James Rasell <jrasell@users.noreply.github.com>

* Update nomad/leader_test.go

Co-authored-by: James Rasell <jrasell@users.noreply.github.com>

* Update nomad/leader_test.go

Co-authored-by: James Rasell <jrasell@users.noreply.github.com>

* style: refactor tests to use must

* Update nomad/leader_test.go

Co-authored-by: James Rasell <jrasell@users.noreply.github.com>

* Update nomad/leader_test.go

Co-authored-by: James Rasell <jrasell@users.noreply.github.com>

* Update nomad/leader_test.go

Co-authored-by: James Rasell <jrasell@users.noreply.github.com>

* Update nomad/leader_test.go

Co-authored-by: James Rasell <jrasell@users.noreply.github.com>

* Update nomad/leader_test.go

Co-authored-by: James Rasell <jrasell@users.noreply.github.com>

* fix: move back from defer to calling unlock before returning.

createEval can't be called while the lock is held

* style: refactor test to use must

* added new entry to changelog and update comments

---------

Co-authored-by: James Rasell <jrasell@hashicorp.com>
Co-authored-by: James Rasell <jrasell@users.noreply.github.com>
2023-03-27 17:25:05 +02:00
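
In outline, the fix means the restore path asks whether a dispatched child is still running before forcing a run; a sketch with hypothetical types and helpers:

```go
package main

import "fmt"

// PeriodicJob is a minimal stand-in for a periodic job tracked by the dispatcher.
type PeriodicJob struct {
	ID              string
	ProhibitOverlap bool
}

// forceOnRestore sketches the fix: when the periodic dispatcher is restored,
// only force a run if the job allows overlap or has no child still running.
func forceOnRestore(job PeriodicJob, runningChildren func(jobID string) int) bool {
	if job.ProhibitOverlap && runningChildren(job.ID) > 0 {
		return false
	}
	return true
}

func main() {
	children := func(string) int { return 1 } // one dispatched child still running
	job := PeriodicJob{ID: "backup", ProhibitOverlap: true}
	fmt.Println(forceOnRestore(job, children)) // false: avoid overlapping runs
}
```
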
Juana De La Cuesta 21b675244e
style: rename ForceRun to ForceEval, for clarity (#16617) 2023-03-27 15:38:48 +02:00
Tim Gross 977c88dcea
drainer: test refactoring to clarify behavior around delete/down nodes (#16612)
This changeset refactors the tests of the draining node watcher so that we don't
mock the node watcher's `Remove` and `Update` methods for its own tests. Instead
we'll mock the node watcher's dependencies (the job watcher and deadline
notifier) and now unit tests can cover the real code. This allows us to remove a
bunch of TODOs in `watch_nodes.go` around testing and clarify some important
behaviors:

* Nodes that are down or disconnected will still be watched until the scheduler
  decides what to do with their allocations. This will drive the job watcher but
  not the node watcher, and that lets the node watcher gracefully handle cases
  where a heartbeat fails but the node heartbeats again before its allocs can be
  evicted.

* Stop watching nodes that have been deleted. The blocking query for nodes set
  the maximum index to the highest index of a node it found, rather than the
  index of the nodes table. This misses updates to the index from deleting
nodes. This was done as a performance optimization to avoid excessive
unblocking, but because the query is over all nodes anyway there's no
  optimization to be had here. Remove the optimization so we can detect deleted
  nodes without having to wait for an update to an unrelated node.
2023-03-23 14:07:09 -04:00
Michael Schurter f8884d8b52
client/metadata: fix crasher caused by AllowStale = false (#16549)
Fixes #16517

Given a 3 Server cluster with at least 1 Client connected to Follower 1:

If a NodeMeta.{Apply,Read} request for the Client is received by
Follower 1 with `AllowStale = false`, the Follower will forward the
request to the Leader.

The Leader, not being connected to the target Client, will forward the
RPC to Follower 1.

Follower 1, seeing AllowStale=false, will forward the request to the
Leader.

The Leader, not being connected to... well hopefully you get the
picture: an infinite loop occurs.
2023-03-20 16:32:32 -07:00
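
The failure mode above is a classic forwarding ping-pong. This sketch only illustrates how a single "already forwarded" guard breaks such a loop; it is not the actual Nomad fix, and all of the types here are hypothetical:

```go
package main

import "fmt"

type server struct {
	name       string
	isLeader   bool
	hasClient  bool    // is the target client connected to this server?
	leader     *server // nil on the leader itself
	followerFn func() *server
}

// route handles a client-targeted RPC: a server that has the client handles it;
// otherwise it forwards at most once instead of bouncing the request forever.
func route(s *server, forwarded bool) (string, error) {
	if s.hasClient {
		return s.name, nil
	}
	if forwarded {
		return "", fmt.Errorf("no server connected to the target client")
	}
	next := s.leader
	if s.isLeader {
		next = s.followerFn()
	}
	return route(next, true)
}

func main() {
	leader := &server{name: "leader", isLeader: true}
	follower := &server{name: "follower-1", hasClient: true, leader: leader}
	leader.followerFn = func() *server { return follower }
	got, err := route(leader, false)
	fmt.Println(got, err) // follower-1 <nil>: one hop, no ping-pong
}
```
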
Seth Hoenig d6dcc53c0a
tls enforcement flaky tests (#16543)
* tests: add WaitForLeaders helpers using must/wait timings

* tests: start servers for mtls tests together

Fixes #16253 (hopefully)
2023-03-17 14:11:13 -05:00
Piotr Kazmierczak 14927e93bc
acl: fix canonicalization of OIDC auth method mock (#16534) 2023-03-17 15:37:54 +01:00
Seth Hoenig ed7177de76
scheduler: annotate tasksUpdated with reason and purge DeepEquals (#16421)
* scheduler: annotate tasksUpdated with reason and purge DeepEquals

* cr: move opaque into helper

* cr: swap affinity/spread hashing for slice equal

* contributing: update checklist-jobspec with notes about struct methods

* cr: add more cases to wait config equal method

* cr: use reflect when comparing envoy config blocks

* cl: add cl
2023-03-14 09:46:00 -05:00
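
A toy version of the "annotate with reason" idea: instead of a bare boolean from a deep-equality check, return which field forced the update. The struct and fields below are stand-ins, not the scheduler's real comparison set:

```go
package main

import (
	"fmt"
	"slices"
)

// Task is a minimal stand-in for the fields a scheduler might compare.
type Task struct {
	Driver string
	Args   []string
}

// tasksUpdated reports whether the task changed and, if so, why.
func tasksUpdated(prev, next Task) (updated bool, reason string) {
	switch {
	case prev.Driver != next.Driver:
		return true, "task driver changed"
	case !slices.Equal(prev.Args, next.Args):
		return true, "task args changed"
	default:
		return false, ""
	}
}

func main() {
	prev := Task{Driver: "docker", Args: []string{"-v"}}
	next := Task{Driver: "docker", Args: []string{"-vv"}}
	fmt.Println(tasksUpdated(prev, next)) // true task args changed
}
```
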
Luiz Aoqui adf147cb36
acl: update job eval requirement to submit-job (#16463)
The job evaluate endpoint creates a new evaluation for the job, which is
a write operation. This change modifies the necessary capability from
`read-job` to `submit-job` to better reflect this.
2023-03-13 17:13:54 -04:00