Commit graph

4384 commits

Author SHA1 Message Date
Tim Gross 9dfb51579c
scheduler: refactor system util tests (#16416)
The tests for the system allocs reconciling code path (`diffSystemAllocs`)
include many impossible test environments, such as passing allocs for the wrong
node into the function. This makes the test assertions nonsensical when walking
yourself through the correct behavior.

I've pulled this changeset out of PR #16097 so that we can merge these
improvements and revisit the right approach to fix the problem in #16097 with
less urgency now that the PFNR bug fix has been merged. This changeset breaks up
a couple of tests, expands test coverage, and makes test assertions more
clear. It also corrects one bit of production code that behaves fine in
production because of canonicalization, but forces us to remember to set values
in tests to compensate.
2023-03-13 11:59:31 -04:00
Seth Hoenig 630bd8eb68
scheduler: add simple benchmark for tasksUpdated (#16422)
In preparation for some refactoring to tasksUpdated, add a benchmark to the
old code so it's easy to compare with the changes, making sure nothing goes
off the rails for performance.
2023-03-13 10:44:14 -05:00
hc-github-team-nomad-core 2d1a4d90e9 Prepare for next release 2023-03-13 11:13:27 -04:00
hc-github-team-nomad-core 35167e692a Generate files for 1.5.1 release 2023-03-13 11:13:27 -04:00
Tim Gross 1cf28996e7 acl: prevent privilege escalation via workload identity
ACL policies can be associated with a job so that the job's Workload Identity
can have expanded access to other policy objects, including other
variables. Policies set on the variables the job automatically has access to
were ignored, but this includes policies with `deny` capabilities.

Additionally, when resolving claims for a workload identity without any attached
policies, the `ResolveClaims` method returned a `nil` ACL object, which is
treated similarly to a management token. While this was safe in Nomad 1.4.x,
when the workload identity token was exposed to the task via the `identity`
block, this allows a user with `submit-job` capabilities to escalate their
privileges.

We originally implemented automatic workload access to Variables as a separate
code path in the Variables RPC endpoint so that we don't have to generate
on-the-fly policies that blow up the ACL policy cache. This is fairly brittle,
and the behavior around wildcard paths in policies differs from the rest of our
ACL policies, which is hard to reason about.

Add an `ACLClaim` parameter to the `AllowVariableOperation` method so that we
can push all this logic into the `acl` package and the behavior can be
consistent. This will allow a `deny` policy to override automatic access (and
probably speed up checks of non-automatic variable access).
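A minimal sketch of the resulting check, using illustrative type and helper
names rather than Nomad's actual API: the workload claim is passed into the ACL
check so an attached `deny` policy always wins, and the implicit workload access
only applies when no explicit policy matches.

```go
package acl

import "strings"

// Illustrative stand-ins only, not Nomad's real types.
type VariableDecision int

const (
	VariableNoDecision VariableDecision = iota
	VariableAllow
	VariableDeny
)

type WorkloadClaim struct {
	Namespace, JobID string
}

type ACL struct {
	// explicit per-path decisions resolved from the policies attached to the claim
	variablePolicies map[string]VariableDecision
}

func (a *ACL) AllowVariableOperation(ns, path string, claim *WorkloadClaim) bool {
	switch a.variablePolicies[ns+"/"+path] {
	case VariableDeny:
		return false // an attached deny policy overrides automatic access
	case VariableAllow:
		return true
	}
	// No explicit policy: fall back to the implicit access a workload has to
	// variables under its own nomad/jobs/<job-id> prefix.
	return claim != nil && claim.Namespace == ns &&
		strings.HasPrefix(path, "nomad/jobs/"+claim.JobID)
}
```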
2023-03-13 11:13:27 -04:00
Luiz Aoqui 7305a374e3
allocrunner: fix health check monitoring for Consul services (#16402)
Services must be interpolated to replace runtime variables before they
can be compared against the values returned by Consul.
2023-03-10 14:43:31 -05:00
Tim Gross 99d46e5a49
scheduling: prevent self-collision in dynamic port network offerings (#16401)
When the scheduler tries to find a placement for a new allocation, it iterates
over a subset of nodes. For each node, we populate a `NetworkIndex` bitmap with
the ports of all existing allocations and any other allocations already proposed
as part of this same evaluation via its `SetAllocs` method. Then we make an
"ask" of the `NetworkIndex` in `AssignPorts` for any ports we need and receive
an "offer" in return. The offer will include both static ports and any dynamic
port assignments.

The `AssignPorts` method was written to support group networks, and it shares
code that selects dynamic ports with the original `AssignTaskNetwork`
code. `AssignTaskNetwork` can request multiple ports from the bitmap at a
time. But `AssignPorts` requests them one at a time and does not account for
possible collisions, and doesn't return an error in that case.

What happens next varies:

1. If the scheduler doesn't place the allocation on that node, the port
   conflict is thrown away and there's no problem.
2. If the node is picked and this is the only allocation (or last allocation),
   the plan applier will reject the plan when it calls `SetAllocs`, as we'd expect.
3. If the node is picked and there are additional allocations in the same eval
   that iterate over the same node, their call to `SetAllocs` will detect the
   impossible state and the node will be rejected. This can have the puzzling
   behavior where a second task group for the job without any networking at all
   can hit a port collision error!

It looks like this bug has existed since we implemented group networks, but
there are several factors that add up to making the issue rare for many users
yet frustratingly frequent for others:

* You're more likely to hit this bug the more tightly packed your range for
  dynamic ports is. With 12000 ports in the range by default, many clusters can
  avoid this for a long time.
* You're more likely to hit case (3) for jobs with lots of allocations or if a
  scheduler has to iterate over a large number of nodes, such as with system jobs,
  jobs with `spread` blocks, or (sometimes) jobs using `unique` constraints.

For unlucky combinations of these factors, it's possible that case (3) happens
repeatedly, preventing scheduling of a given job until a client state
change (ex. restarting the agent so all its allocations are rescheduled
elsewhere) re-opens the range of dynamic ports available.

This changeset:

* Fixes the bug by accounting for collisions in dynamic port selection in
  `AssignPorts` (see the sketch after this list).
* Adds test coverage for `AssignPorts`, expands coverage of this case for the
  deprecated `AssignTaskNetwork`, and tightens the dynamic port range in a
  scheduler test for spread scheduling to more easily detect this kind of problem
  in the future.
* Adds a `String()` method to `Bitmap` so that any future "screaming" log lines
  have a human-readable list of used ports.
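A simplified sketch of the collision-aware selection described in the first
bullet above; the range constants and map-based bookkeeping are illustrative,
not the real `NetworkIndex`/`Bitmap` code.

```go
package ports

import "math/rand"

const (
	minDynamicPort = 20000
	maxDynamicPort = 32000
)

// pickDynamicPorts selects n free ports, treating both ports already used on
// the node and ports offered earlier in this same ask as unavailable, so a
// single offer can never collide with itself.
func pickDynamicPorts(used map[int]bool, n int) ([]int, bool) {
	offered := make(map[int]bool, n)
	out := make([]int, 0, n)
	for len(out) < n {
		found := false
		for attempt := 0; attempt < 100; attempt++ {
			p := minDynamicPort + rand.Intn(maxDynamicPort-minDynamicPort+1)
			if used[p] || offered[p] {
				continue // skip node collisions and self-collisions
			}
			offered[p] = true
			out = append(out, p)
			found = true
			break
		}
		if !found {
			return nil, false // range exhausted; caller should reject this node
		}
	}
	return out, true
}
```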
2023-03-09 10:09:54 -05:00
Lance Haig e89c3d3b36
Update ioutil library references to os and io respectively for e2e helper nomad (#16332)
No user-facing changes, so I assume no changelog is required.
2023-03-08 09:39:03 -06:00
Tim Gross a2ceab3d8c
scheduler: correctly detect inplace update with wildcard datacenters (#16362)
Wildcard datacenters introduced a bug where a job with any wildcard datacenters
will always be treated as a destructive update when we check whether a
datacenter has been removed from the jobspec.

Includes updating the helper so that callers don't have to loop over the job's
datacenters.
2023-03-07 10:05:59 -05:00
Luiz Aoqui 2a1a790820
client: don't emit task shutdown delay event if not waiting (#16281) 2023-03-03 18:22:06 -05:00
Tim Gross 3c0eaba9db
remove backcompat support for non-atomic job registration (#16305)
In Nomad 0.12.1 we introduced atomic job registration/deregistration, where the
new eval was written in the same raft entry. Backwards-compatibility checks were
supposed to have been removed in Nomad 1.1.0, but we missed that. This has long
been safe to remove.
2023-03-03 15:52:22 -05:00
Tim Gross 8747059b86
service: fix regression in task access to list/read endpoint (#16316)
When native service discovery was added, we used the node secret as the auth
token. Once Workload Identity was added in Nomad 1.4.x we needed to use the
claim token for `template` blocks, and so we allowed valid claims to bypass the
ACL policy check to preserve the existing behavior. (Invalid claims are still
rejected, so this didn't widen any security boundary.)

In reworking authentication for 1.5.0, we unintentionally removed this
bypass. For WIs without a policy attached to their job, everything works as
expected because the resulting `acl.ACL` is nil. But once a policy is attached
to the job the `acl.ACL` is no longer nil and this causes permissions errors.

Fix the regression by adding back the bypass for valid claims. In future work,
we should strongly consider turning the implicit policies into real
`ACLPolicy` objects (even if not stored in state) so that we don't have these
kinds of brittle exceptions to the auth code.
2023-03-03 11:41:19 -05:00
Tim Gross 0e1b554299
handle FSM.Apply errors in raftApply (#16287)
The signature of the `raftApply` function requires that the caller unwrap the
first returned value (the response from `FSM.Apply`) to see if it's an
error. This puts the burden on the caller to remember to check two different
places for errors, and we've done so inconsistently.

Update `raftApply` to do the unwrapping for us and return any `FSM.Apply` error
as the error value. Similar work was done in Consul in
https://github.com/hashicorp/consul/pull/9991. This eliminates some boilerplate
and surfaces a few minor bugs in the process:

* job deregistrations of already-GC'd jobs were still emitting evals
* reconcile job summaries did not return scheduler errors
* node updates did not report errors associated with inconsistent service
  discovery or CSI plugin states

Note that although _most_ of the `FSM.Apply` functions return only errors (which
makes it tempting to remove the first return value entirely), there are few that
return `bool` for some reason and Variables relies on the response value for
proper CAS checking.
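A condensed sketch of the unwrapping described above, written against the
`hashicorp/raft` `ApplyFuture` interface rather than Nomad's actual helper:

```go
package nomad

import "github.com/hashicorp/raft"

// unwrapApply surfaces both raft-level errors and any error returned by
// FSM.Apply via the future's response, so callers check a single place.
func unwrapApply(future raft.ApplyFuture) (interface{}, uint64, error) {
	if err := future.Error(); err != nil {
		return nil, 0, err // raft-level failure (not leader, timeout, ...)
	}
	resp := future.Response()
	if respErr, ok := resp.(error); ok && respErr != nil {
		// FSM.Apply signalled a failure through its return value
		return nil, future.Index(), respErr
	}
	return resp, future.Index(), nil
}
```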
2023-03-02 13:51:09 -05:00
Michael Schurter bd7b60712e
Accept Workload Identities for Client RPCs (#16254)
This change resolves policies for workload identities when calling Client RPCs. Previously only ACL tokens could be used for Client RPCs.

Since the same cache is used for both bearer tokens (ACL and Workload ID), the token cache size was doubled.

---------

Co-authored-by: James Rasell <jrasell@users.noreply.github.com>
2023-02-27 10:17:47 -08:00
Daniel Bennett 39e3a1ac3e
build/cli: Add BuildDate (#16216)
* build: add BuildDate to version info

will be used in enterprise to compare to license expiration time

* cli: multi-line version output, add BuildDate

before:
$ nomad version
Nomad v1.4.3 (coolfakecommithashomgoshsuchacoolonewoww)

after:
$ nomad version
Nomad v1.5.0-dev
BuildDate 2023-02-17T19:29:26Z
Revision coolfakecommithashomgoshsuchacoolonewoww

compare consul:
$ consul version
Consul v1.14.4
Revision dae670fe
Build Date 2023-01-26T15:47:10Z
Protocol 2 spoken by default, blah blah blah...

and vault:
$ vault version
Vault v1.12.3 (209b3dd99fe8ca320340d08c70cff5f620261f9b), built 2023-02-02T09:07:27Z

* docs: update version command output
2023-02-27 11:27:40 -06:00
Tim Gross 4c9688271a
CSI: fix potential state store corruptions (#16256)
The `CSIVolume` struct has references to allocations that are "denormalized"; we
don't store them on the `CSIVolume` struct but hydrate them on read. Tests
detecting potential state store corruptions found two locations where we're not
copying the volume before denormalizing:

* When garbage collecting CSI volume claims.
* When checking if it's safe to force-deregister the volume.

There are no known user-visible problems associated with these bugs but both
have the potential of mutating volume claims outside of a FSM transaction. This
changeset also cleans up state mutations in some CSI tests so as to avoid having
working tests cover up potential future bugs.
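A small illustration of the rule behind the fix, using stand-in types rather
than Nomad's real structs: denormalize onto a copy, never onto the object held
in the state store.

```go
package state

// Illustrative types only. The point is that read paths hydrate allocation
// pointers onto a copy of the volume, so they can never mutate the stored
// object outside an FSM transaction.
type Allocation struct{ ID string }

type CSIVolume struct {
	ID         string
	ReadAllocs map[string]*Allocation // hydrated ("denormalized") on read
}

func (v *CSIVolume) Copy() *CSIVolume {
	nv := *v
	nv.ReadAllocs = make(map[string]*Allocation, len(v.ReadAllocs))
	for id := range v.ReadAllocs {
		nv.ReadAllocs[id] = nil // references are re-hydrated below
	}
	return &nv
}

func denormalize(stored *CSIVolume, lookupAlloc func(id string) *Allocation) *CSIVolume {
	vol := stored.Copy() // work on a copy, never the stored struct
	for id := range vol.ReadAllocs {
		vol.ReadAllocs[id] = lookupAlloc(id)
	}
	return vol
}
```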
2023-02-27 08:47:08 -05:00
James Rasell 8295d0e516
acl: add validation to binding rule selector on upsert. (#16210)
* acl: add validation to binding rule selector on upsert.

* docs: add more information on binding rule selector escaping.
2023-02-17 15:38:55 +01:00
Alessio Perugini 4e9ec24b22
Allow configurable range of Job priorities (#16084) 2023-02-17 09:23:13 -05:00
Michael Schurter 671d9f64ec
Minor post-1.5-beta1 API, code, and docs cleanups (#16193)
* api: return error on parse failure

* docs: clarify anonymous policy with task api
2023-02-16 10:32:21 -08:00
Tim Gross 27cc6a2ff9
fix test flake for RPC TLS enforcement test (#16199)
The RPC TLS enforcement test was frequently failing with broken connections. The
most likely cause was that the tests started to run before the server had
started its RPC server. Wait until it self-elects to ensure that the RPC server
is up. This seems to have corrected the error; I ran this 3 times without a
failure (even accounting for `gotestsum` retries).

Also, fix a minor test bug that didn't impact the test but showed an incorrect
usage of `Status.Ping`.
2023-02-16 11:50:40 -05:00
Will Nicholson 4dc83757a6
eventstream: Handle missing policy documents in event streams (#15495)
Fixes https://github.com/hashicorp/nomad/issues/15493

Co-authored-by: Tim Gross <tgross@hashicorp.com>
2023-02-14 11:27:39 -05:00
Seth Hoenig 165791dd89
artifact: protect against unbounded artifact decompression (1.5.0) (#16151)
* artifact: protect against unbounded artifact decompression

Starting with 1.5.0, set default values for artifact decompression limits.

artifact.decompression_size_limit (default "100GB") - the maximum amount of
data that will be decompressed before triggering an error and cancelling
the operation

artifact.decompression_file_count_limit (default 4096) - the maximum number
of files that will be decompressed before triggering an error and
cancelling the operation.

* artifact: assert limits cannot be nil in validation
2023-02-14 09:28:39 -06:00
Charlie Voiselle d93ba0cf32
Add warnings to var put for non-alphanumeric keys. (#15933)
* Warn when Items key isn't directly accessible

Go template requires that map keys are alphanumeric for direct access
using the dotted reference syntax. This warns users when they create
keys that run afoul of this requirement.

- cli: use regex to detect invalid identifiers in var keys (sketched below)
- test: fix slash in escape test case
- api: share warning formatting function between API and CLI
- ui: warn if var key has characters other than _, letter, or number
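A rough sketch of the CLI-side check referenced above; the regex and warning
text are illustrative, not the exact ones shipped.

```go
package main

import (
	"fmt"
	"regexp"
)

// Go templates only allow dotted access like {{ .items.my_key }} when the key
// is a valid identifier, so warn when a key would force index-style access.
var validTemplateKey = regexp.MustCompile(`^[a-zA-Z_][a-zA-Z0-9_]*$`)

func warnOnAwkwardKeys(items map[string]string) []string {
	var warnings []string
	for k := range items {
		if !validTemplateKey.MatchString(k) {
			warnings = append(warnings, fmt.Sprintf(
				"variable key %q can only be read in templates via the index function, not dotted access", k))
		}
	}
	return warnings
}
```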

---------
Co-authored-by: Charlie Voiselle <464492+angrycub@users.noreply.github.com>
Co-authored-by: Luiz Aoqui <luiz@hashicorp.com>
2023-02-13 16:14:59 -05:00
Seth Hoenig df3e8f82da
deps: update go-set, go-landlock (#16146)
A breaking change in go-set (the String() signature) means we need to update
both of these dependencies together and also fix one thing in structs.go.
2023-02-13 08:26:30 -06:00
Charlie Voiselle 65ce3ec8de
[core] Do not start the plugin loader on non-clients (#16111)
The plugin loader loads task and device driver plugins which are not
used on server nodes.
2023-02-10 15:33:16 -05:00
Tim Gross 65c7e149d3
eval broker: use write lock when reaping cancelable evals (#16112)
The eval broker's `Cancelable` method used by the cancelable eval reaper mutates
the slice of cancelable evals by removing a batch at a time from the slice. But
this method unsafely uses a read lock despite this mutation. Under normal
workloads this is likely to be safe but when the eval broker is under the heavy
load this feature is intended to fix, we're likely to have a race
condition. Switch this to a write lock, like the other locks that mutate the
eval broker state.

This changeset also adjusts the timeout to allow poorly-sized Actions runners
more time to schedule the appropriate goroutines. The test has also been updated
to use `shoenig/test/wait` so we can have sensible reporting of the results
rather than just a timeout error when things go wrong.
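The essence of the locking change, reduced to a toy broker type rather than the
real eval broker:

```go
package broker

import "sync"

type Eval struct{ ID string }

type evalBroker struct {
	l          sync.RWMutex
	cancelable []*Eval
}

// Cancelable removes and returns a batch of cancelable evals. Even though it
// "reads" the list, it mutates broker state, so it must take the write lock.
func (b *evalBroker) Cancelable(batchSize int) []*Eval {
	b.l.Lock() // previously a read lock, which allowed racy concurrent mutation
	defer b.l.Unlock()

	if batchSize > len(b.cancelable) {
		batchSize = len(b.cancelable)
	}
	batch := b.cancelable[:batchSize]
	b.cancelable = b.cancelable[batchSize:]
	return batch
}
```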
2023-02-10 10:40:41 -05:00
Tim Gross c2bd829fe2
tests: don't mutate global structs in core scheduler tests (#16120)
Some of the core scheduler tests need the maximum batch size for writes to be
smaller than the usual `structs.MaxUUIDsPerWriteRequest`. But they do so by
unsafely modifying the global struct, which creates test flakes in other tests.

Modify the functions under test to take a batch size parameter. Production code
will pass the global while the tests can inject smaller values. Turn the
`structs.MaxUUIDsPerWriteRequest` into a constant, and add a semgrep rule for
avoiding this kind of thing in the future.
2023-02-10 09:26:00 -05:00
Tim Gross 69a2040e82
acl: never return auth errors for ACL.Bootstrap RPC (#16108)
In #15901 we introduced pre-forwarding authentication for RPCs so that we can
grab the identity for rate metrics. The `ACL.Bootstrap` RPC is an
unauthenticated endpoint, so any error message from authentication is not
particularly useful. This would be harmless, but if you try to bootstrap with
your `NOMAD_TOKEN` already set (perhaps because you were talking to another
cluster previously from the same shell session), you'll get an authentication
error instead of just having the token be ignored. This is a regression from the
existing behavior, so have this endpoint ignore auth errors the same way we do
for every other unauthenticated endpoint (e.g. `Status.Peers`).
2023-02-09 10:02:56 -05:00
Seth Hoenig 0e7bf87ee1
deps: upgrade to hashicorp/golang-lru/v2 (#16085) 2023-02-08 15:20:33 -06:00
James Rasell 53e0f424e9
rpc: bump SSO feature gate version. (#16080) 2023-02-07 15:45:07 -08:00
Michael Schurter 35d65c7c7e
Dynamic Node Metadata (#15844)
Fixes #14617
Dynamic Node Metadata allows Nomad users, and their jobs, to update Node metadata through an API. Currently Node metadata is only reloaded when a Client agent is restarted.

Includes new UI for editing metadata as well.

---------

Co-authored-by: Phil Renaud <phil.renaud@hashicorp.com>
2023-02-07 14:42:25 -08:00
Seth Hoenig 590ae08752
main: remove deprecated uses of rand.Seed (#16074)
* main: remove deprecated uses of rand.Seed

go1.20 deprecates rand.Seed, and seeds the rand package
automatically. Remove cases where we seed the random package,
and cleanup the one case where we intentionally create a
known random source.

* cl: update cl

* mod: update go mod
2023-02-07 09:19:38 -06:00
Michael Schurter 0a496c845e
Task API via Unix Domain Socket (#15864)
This change introduces the Task API: a portable way for tasks to access Nomad's HTTP API. This particular implementation uses a Unix Domain Socket and, unlike the agent's HTTP API, always requires authentication even if ACLs are disabled.

This PR contains the core feature and tests but followup work is required for the following TODO items:

- Docs - might do in a followup since dynamic node metadata / task api / workload id all need to interlink
- Unit tests for auth middleware
- Caching for auth middleware
- Rate limiting on negative lookups for auth middleware

---------

Co-authored-by: Seth Hoenig <shoenig@duck.com>
2023-02-06 11:31:22 -08:00
Phil Renaud d3c351d2d2
Label for the Web UI (#16006)
* Demoable state

* Demo mirage color

* Label as a block with foreground and background colours

* Test mock updates

* Go test updated

* Documentation update for label support
2023-02-02 16:29:04 -05:00
Tim Gross 19a2c065f4
System and sysbatch jobs always have zero index (#16030)
Service jobs should have unique allocation Names, derived from the
Job.ID. System jobs do not have unique allocation Names because the index is
intended to indicate the instance out of a desired count size. Because system
jobs do not have an explicit count but the results are based on the targeted
nodes, the index is less informative and this was intentionally omitted from the
original design.

Update docs to make it clear that NOMAD_ALLOC_INDEX is always zero for 
system/sysbatch jobs

Validate that `volume.per_alloc` is incompatible with system/sysbatch jobs.
System and sysbatch jobs always have a `NOMAD_ALLOC_INDEX` of 0. So
interpolation via `per_alloc` will not work as soon as there's more than one
allocation placed. Validate against this on job submission.
2023-02-02 16:18:01 -05:00
Daniel Bennett 335f0a5371
docs: how to troubleshoot consul connect envoy (#15908)
* largely a doc-ification of this commit message:
  d47678074bf8ae9ff2da3c91d0729bf03aee8446
  this doesn't spell out all the possible failure modes,
  but should be a good starting point for folks.

* connect: add doc link to envoy bootstrap error

* add Unwrap() to RecoverableError
  mainly for easier testing
2023-02-02 14:20:26 -06:00
Charlie Voiselle cc6f4719f1
Add option to expose workload token to task (#15755)
Add `identity` jobspec block to expose workload identity tokens to tasks.

---------

Co-authored-by: Anders <mail@anars.dk>
Co-authored-by: Tim Gross <tgross@hashicorp.com>
Co-authored-by: Michael Schurter <mschurter@hashicorp.com>
2023-02-02 10:59:14 -08:00
Phil Renaud 3db9f11c37
[feat] Nomad Job Templates (#15746)
* Extend variables under the nomad path prefix to allow for job-templates (#15570)

* Extend variables under the nomad path prefix to allow for job-templates

* Add job-templates to error message hinting

* RadioCard component for Job Templates (#15582)

* chore: add

* test: component API

* ui: component template

* refact: remove bc naming collision

* styles: remove SASS var causing conflicts

* Disallow specific variable at nomad/job-templates (#15681)

* Disallows variables at exactly nomad/job-templates

* idiomatic refactor

* Expanding nomad job init to accept a template flag (#15571)

* Adding a string flag for templates on job init

* data-down actions-up version of a custom template editor within variable

* Dont force grid on job template editor

* list-templates flag started

* Correctly slice from end of path name

* Pre-review cleanup

* Variable form acceptance test for job template editing

* Some review cleanup

* List Job templates test

* Example from template test

* Using must.assertions instead of require etc

* ui: add choose template button (#15596)

* ui: add new routes

* chore: update file directory

* ui: add choose template button

* test: button and page navigation

* refact: update var name

* ui: use `Button` component from `HDS` (#15607)

* ui: integrate  buttons

* refact: remove  helper

* ui: remove icons on non-tertiary buttons

* refact: update normalize method for key/value pairs (#15612)

* `revert`: `onCancel` for `JobDefinition`

The `onCancel` method isn't included in the component API for `JobEditor` and the primary cancel behavior exists outside of the component. With the exception of the `JobDefinition` page where we include this button in the top right of the component instead of next to the `Plan` button.

* style: increase button size

* style: keep lime green

* ui: select template (#15613)

* ui: deprecate unused component

* ui: deprecate tests

* ui: jobs.run.templates.index

* ui: update logic to handle templates

* refact: revert key/value changes

* style: padding for cards + buttons

* temp: fixtures for mirage testing

* Revert "refact: revert key/value changes"

This reverts commit 124e95d12140be38fc921f7e15243034092c4063.

* ui: guard template for unsaved job

* ui: handle reading template variable

* Revert "refact: update normalize method for key/value pairs (#15612)"

This reverts commit 6f5ffc9b610702aee7c47fbff742cc81f819ab74.

* revert: remove test fixtures

* revert: prettier problems

* refact: test doesnt need filter expression

* styling: button sizes and responsive cards

* refact: remove route guarding

* ui: update variable adapter

* refact: remove model editing behavior

* refact: model should query variables to populate editor

* ui: clear qp on exit

* refact: cleanup deprecated API

* refact: query all namespaces

* refact: deprecate action

* ui: rely on  collection

* refact: patch deprecate transition API

* refact: patch test to expect namespace qp

* styling: padding, conditionals

* ui: flashMessage on 404

* test: update for o(n+1) query

* ui: create new job template (#15744)

* refact: remove unused code

* refact: add type safety

* test: select template flow

* test: add data-test attrs

* chore: remove dead code

* test: create new job flow

* ui: add create button

* ui: create job template

* refact: no need for wildcard

* refact:  record instead of delete

* styling: spacing

* ui: add error handling and form validation to job create template (#15767)

* ui: handle server side errors

* ui: show error to prevent duplicate

* refact: conditional namespace

* ui: save as template flow (#15787)

* bug:  patches failing tests associated with `pretender` (#15812)

* refact: update assertion

* refact: test set-up

* ui: job templates manager view (#15815)

* ui: manager list view

* test: edit flow

* refact: deprecate column-helper

* ui: template edit and delete flow (#15823)

* ui: manager list view

* refact: update title

* refact: update permissions

* ui: template edit page

* bug: typo

* refact: update toast messages

* bug:  clear selections on exit (#15827)

* bug:  clear controllers on exit

* test: mirage config changes (#15828)

* refact: deprecate column-helper

* style: update z-index for HDS

* Revert "style: update z-index for HDS"

This reverts commit d3d87ceab6d083f7164941587448607838944fc1.

* refact: update delete button

* refact: edit redirect

* refact: patch reactivity issues

* styling: fixed width

* refact: override defaults

* styling: edit text causing overflow

* styling:  add inline text

Co-authored-by: Phil Renaud <phil.renaud@hashicorp.com>

* bug: edit `text` to `template`

Co-authored-by: Phil Renaud <phil.renaud@hashicorp.com>

Co-authored-by: Phil Renaud <phil.renaud@hashicorp.com>

* test:  delete flow job templates (#15896)

* refact: edit names

* bug:  set correct ref to store

* chore: trim whitespace:

* test: delete flow

* bug: reactively update view (#15904)

* Initialized default jobs (#15856)

* Initialized default jobs

* More jobs scaffolded

* Better commenting on a couple example job specs

* Adapter doing the work

* fall back to epic config

* Label format helper and custom serialization logic

* Test updates to account for a never-empty state

* Test suite uses settled and maintain RecordArray in adapter return

* Updates to hello-world and variables example jobspecs

* Parameterized job gets optional payload output

* Formatting changes for param and service discovery job templates

* Multi-group service discovery job

* Basic test for default templates (#15965)

* Basic test for default templates

* Percy snapshot for manage page

* Some late-breaking design changes

* Some copy edits to the header paragraphs for job templates (#15967)

* Added some init options for job templates (#15994)

* Async method for populating default job templates from the variable adapter

---------

Co-authored-by: Jai <41024828+ChaiWithJai@users.noreply.github.com>
2023-02-02 10:37:40 -05:00
jmwilkinson 37834dffda
Allow wildcard datacenters to be specified in job file (#11170)
Also allows for default value of `datacenters = ["*"]`
2023-02-02 09:57:45 -05:00
Seth Hoenig ca7ead191e
consul: restore consul token when reverting a job (#15996)
* consul: reset consul token on job during registration of a reversion

* e2e: add test for reverting a job with a consul service

* cl: fixup cl entry
2023-02-01 14:02:45 -06:00
James Rasell 9e8325d63c
acl: fix a bug in token creation when parsing expiration TTLs. (#15999)
The ACL token decoding was not correctly handling time duration
syntax such as "1h" which forced people to use the nanosecond
representation via the HTTP API.

The change adds an unmarshal function which allows this syntax, along with
other styles, to be used correctly.
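A sketch of that kind of unmarshal function, accepting either a duration string
such as "1h" or the legacy nanosecond integer; the type and field names are
illustrative.

```go
package api

import (
	"encoding/json"
	"time"
)

type expirationTTL struct {
	time.Duration
}

func (e *expirationTTL) UnmarshalJSON(data []byte) error {
	var raw interface{}
	if err := json.Unmarshal(data, &raw); err != nil {
		return err
	}
	switch v := raw.(type) {
	case string:
		d, err := time.ParseDuration(v) // "1h", "30m", ...
		if err != nil {
			return err
		}
		e.Duration = d
	case float64:
		e.Duration = time.Duration(int64(v)) // legacy nanosecond form
	}
	return nil
}
```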
2023-02-01 17:43:41 +01:00
James Rasell 67acfd9f6b
acl: return 400 not 404 code when creating an invalid policy. (#16000) 2023-02-01 17:40:15 +01:00
Mike Nomitch 80848b202e
Increases max variable size to 64KiB from 16KiB (#15983) 2023-01-31 13:32:36 -05:00
stswidwinski 16eefbbf4d
GC: ensure no leakage of evaluations for batch jobs. (#15097)
Prior to 2409f72 the code compared the modification index of a job to itself. Afterwards, the code compared the creation index of the job to itself. In either case there should never be a case of re-parenting of allocs causing the evaluation to trivially always result in false, which leads to unreclaimable memory.

Prior to this change allocations and evaluations for batch jobs were never garbage collected until the batch job was explicitly stopped. The new `batch_eval_gc_threshold` server configuration controls how often they are collected. The default threshold is `24h`.
2023-01-31 13:32:14 -05:00
Jorge Marey d1c9aad762
Rename fields on proxyConfig (#15541)
* Change api Fields for expose and paths

* Add changelog entry

* changelog: add deprecation notes about connect fields

* api: minor style tweaks

---------

Co-authored-by: Seth Hoenig <shoenig@duck.com>
2023-01-30 09:31:16 -06:00
Piotr Kazmierczak 14b53df3b6
renamed stanza to block for consistency with other projects (#15941) 2023-01-30 15:48:43 +01:00
Seth Hoenig 074b76e3bf
consul: check for acceptable service identity on consul tokens (#15928)
When registering a job with a service and 'consul.allow_unauthenticated=false',
we scanned the given Consul token for an acceptable policy or role with an
acceptable policy, but did not scan for an acceptable service identity (which
is backed by an acceptable virtual policy). This PR updates our consul token
validation to also accept a matching service identity when registering a service
into Consul.

Fixes #15902
2023-01-27 18:15:51 -06:00
Tim Gross 881a4cfaff
metrics: Add remaining server RPC rate metrics (#15901) 2023-01-27 08:29:53 -05:00
Tim Gross ce3eef8037
metrics: Add rate metrics to Client CSI endpoints (#15905)
Also tightens up authentication for these endpoints by enforcing the server
certificate name is valid. We protect these endpoints currently by mTLS and
can't use an auth token because these endpoints are (uniquely) called by the
leader and followers for a given node won't have the leader's ephemeral ACL
token. Add a certificate name check that requests come from a server and not a
client, because no client should ever send these RPCs directly.
2023-01-26 16:40:58 -05:00
Tim Gross bed8716e44
metrics: Add metrics to unauthenticated endpoints (#15899) 2023-01-26 15:05:51 -05:00
Tim Gross 5e75ea9fb3
metrics: Add RPC rate metrics to endpoints that validate TLS names (#15900) 2023-01-26 15:04:25 -05:00
Yorick Gersie 2a5c423ae0
Allow per_alloc to be used with host volumes (#15780)
Disallowing per_alloc for host volumes in some cases makes the life of a Nomad user much harder.
When we rely on the NOMAD_ALLOC_INDEX for any configuration that needs to be re-used across
restarts we need to make sure allocation placement is consistent. With CSI volumes we can
use the `per_alloc` feature but for some reason this is explicitly disabled for host volumes.

Ensure host volumes understand the concept of per_alloc
2023-01-26 09:14:47 -05:00
Piotr Kazmierczak f4d6efe69f
acl: make auth method default across all types (#15869) 2023-01-26 14:17:11 +01:00
James Rasell 5d33891910
sso: allow binding rules to create management ACL tokens. (#15860)
* sso: allow binding rules to create management ACL tokens.

* docs: update binding rule docs to detail management type addition.
2023-01-26 09:57:44 +01:00
Tim Gross 6677a103c2
metrics: measure rate of RPC requests that serve API (#15876)
This changeset configures the RPC rate metrics that were added in #15515 to all
the RPCs that support authenticated HTTP API requests. These endpoints already
configured with pre-forwarding authentication in #15870, and a handful of others
were done already as part of the proof-of-concept work. So this changeset is
entirely copy-and-pasting one method call into a whole mess of handlers.

Upcoming PRs will wire up pre-forwarding auth and rate metrics for the remaining
set of RPCs that have no API consumers or aren't authenticated, in smaller
chunks that can be more thoughtfully reviewed.
2023-01-25 16:37:24 -05:00
Luiz Aoqui 3479e2231f
core: enforce strict steps for clients reconnect (#15808)
When a Nomad client that is running an allocation with
`max_client_disconnect` set misses a heartbeat the Nomad server will
update its status to `disconnected`.

Upon reconnecting, the client will make three main RPC calls:

- `Node.UpdateStatus` is used to set the client status to `ready`.
- `Node.UpdateAlloc` is used to update the client-side information about
  allocations, such as their `ClientStatus`, task states etc.
- `Node.Register` is used to upsert the entire node information,
  including its status.

These calls are made concurrently and are also running in parallel with
the scheduler. Depending on the order they run the scheduler may end up
with incomplete data when reconciling allocations.

For example, a client disconnects and its replacement allocation cannot
be placed anywhere else, so there's a pending eval waiting for
resources.

When this client comes back the order of events may be:

1. Client calls `Node.UpdateStatus` and is now `ready`.
2. Scheduler reconciles allocations and places the replacement alloc to
   the client. The client is now assigned two allocations: the original
   alloc that is still `unknown` and the replacement that is `pending`.
3. Client calls `Node.UpdateAlloc` and updates the original alloc to
   `running`.
4. Scheduler notices too many allocs and stops the replacement.

This creates unnecessary placements or, in a different order of events,
may leave the job without any allocations running until the whole state
is updated and reconciled.

To avoid problems like this, clients must update _all_ of their relevant
information before they can be considered `ready` and available for
scheduling.

To achieve this goal the RPC endpoints mentioned above have been
modified to enforce strict steps for nodes reconnecting:

- `Node.Register` does not set the client status anymore.
- `Node.UpdateStatus` sets the reconnecting client to the `initializing`
  status until it successfully calls `Node.UpdateAlloc`.

These changes are done server-side to avoid the need of additional
coordination between clients and servers. Clients are kept oblivious of
these changes and will keep making these calls as they normally would.

The verification of whether allocations have been updated is done by
storing and comparing the Raft index of the last time the client missed
a heartbeat and the last time it updated its allocations.
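A sketch of that comparison, with illustrative field names rather than
necessarily Nomad's exact schema:

```go
package state

type Node struct {
	// raft index recorded when the node missed its heartbeat
	LastMissedHeartbeatIndex uint64
	// raft index of the node's most recent allocation update
	LastAllocUpdateIndex uint64
}

// nextStatusForReconnectingNode holds a reconnecting node in "initializing"
// until its allocation updates have landed after the missed heartbeat.
func nextStatusForReconnectingNode(n *Node) string {
	if n.LastAllocUpdateIndex >= n.LastMissedHeartbeatIndex {
		return "ready"
	}
	return "initializing"
}
```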
2023-01-25 15:53:59 -05:00
Tim Gross f3f64af821
WI: allow workloads to use RPCs associated with HTTP API (#15870)
This changeset allows Workload Identities to authenticate to all the RPCs that
support HTTP API endpoints, for use with PR #15864.

* Extends the work done for pre-forwarding authentication to all RPCs that
  support an HTTP API endpoint.
* Consolidates the auth helpers used by the CSI, Service Registration, and Node
  endpoints that are currently used to support both tokens and client secrets.

Intentionally excluded from this changeset:
* The Variables endpoint still has custom handling because of the implicit
  policies. Ideally we'll figure out an efficient way to resolve those into real
  policies and then we can get rid of that custom handling.
* The RPCs that don't currently support auth tokens (i.e. those that don't
  support HTTP endpoints) have not been updated with the new pre-forwarding auth.
  We'll be doing this under a separate PR to support RPC rate metrics.
2023-01-25 14:33:06 -05:00
Tim Gross cf9e5f3327
acl: Fix panic when bogus token is passed (#15863)
If a consumer of the new `Authenticate` method gets passed a bogus token that's
a correctly-shaped UUID, it will correctly get an identity without a ACL
token. But most consumers will then panic when they consume this nil `ACLToken`
for authorization.

Because no API client should ever send a bogus auth token, update the
`Authenticate` method to create the identity with remote IP (for metrics
tracking) but also return an `ErrPermissionDenied`.
2023-01-25 10:03:17 -05:00
Tim Gross 055434cca9
add metric for count of RPC requests (#15515)
Implement a metric for RPC requests with labels on the identity, so that
administrators can monitor the source of requests within the cluster. This
changeset demonstrates the change with the new `ACL.WhoAmI` RPC, and we'll wire
up the remaining RPCs once we've threaded the new pre-forwarding authentication
through the all.

Note that metrics are measured after we forward but before we return any
authentication error. This ensures that we only emit metrics on the server that
actually serves the request. We'll perform rate limiting at the same place.

Includes telemetry configuration to omit identity labels.
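A sketch of emitting such a metric with the `armon/go-metrics` API; the metric
key and label names here are placeholders, not the exact names Nomad emits.

```go
package nomad

import metrics "github.com/armon/go-metrics"

// emitRPCRateMetric records one RPC request, labeled by operation and by the
// caller's identity; the identity label can be omitted via telemetry config.
func emitRPCRateMetric(op, identity string) {
	metrics.IncrCounterWithLabels(
		[]string{"rpc", "rate"}, 1,
		[]metrics.Label{
			{Name: "op", Value: op},
			{Name: "identity", Value: identity},
		})
}
```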
2023-01-24 11:54:20 -05:00
Tim Gross 2030d62920
implement pre-forwarding auth on select RPCs (#15513)
In #15417 we added a new `Authenticate` method to the server that returns an
`AuthenticatedIdentity` struct. This changeset implements this method for a
small number of RPC endpoints that together represent all the various ways in
which RPCs are sent, so that we can validate that we're happy with this
approach.
2023-01-24 10:52:07 -05:00
Michael Schurter ace5faf948
core: backoff considerably when worker is behind raft (#15523)
Upon dequeuing an evaluation workers snapshot their state store at the
eval's wait index or later. This ensures we process an eval at a point
in time after it was created or updated. Processing an eval on an old
snapshot could cause any number of problems such as:

1. Since job registration atomically updates an eval and job in a single
   raft entry, scheduling against indexes before that may not have the
   eval's job or may have an older version.
2. The older the scheduler's snapshot, the higher the likelihood
   something has changed in the cluster state which will cause the plan
   applier to reject the scheduler's plan. This could waste work or
   even cause evals to be failed needlessly.

However, the workers run in parallel with a new server pulling the
cluster state from a peer. During this time, which may be many minutes
long, the state store is likely far behind the minimum index required
to process evaluations.

This PR addresses this by adding an additional long backoff period after
an eval is nacked. If the scheduler's indexes catch up within the
additional backoff, it will unblock early to dequeue the next eval.

When the server shuts down we'll get a `context.Canceled` error from the state
store method. We need to bubble this error up so that other callers can detect
it. Handle this case separately when waiting after dequeue so that we can warn
on shutdown instead of throwing an ambiguous error message with just the text
"canceled."

While there may be more precise ways to block scheduling until the
server catches up, this approach adds little risk and covers additional
cases where a server may be temporarily behind due to a spike in load or
a saturated network.

For testing, we make the `raftSyncLimit` into a parameter on the worker's `run` method 
so that we can run backoff tests without waiting 30+ seconds. We haven't followed through
and made all the worker globals into worker parameters, because there isn't much
use outside of testing, but we can consider that in the future.
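A sketch of the waiting behavior, with an illustrative polling interval: block
until the state store reaches the eval's wait index, return cleanly on
shutdown, and give up after the sync limit so the eval can be nacked and
retried later.

```go
package scheduler

import (
	"context"
	"errors"
	"time"
)

func waitForIndex(ctx context.Context, latestIndex func() uint64, waitIndex uint64, limit time.Duration) error {
	deadline := time.Now().Add(limit)
	for time.Now().Before(deadline) {
		if latestIndex() >= waitIndex {
			return nil // caught up; safe to schedule against this snapshot
		}
		select {
		case <-ctx.Done():
			return ctx.Err() // server shutting down; warn instead of erroring
		case <-time.After(50 * time.Millisecond):
		}
	}
	return errors.New("timed out waiting for state store to catch up")
}
```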

Co-authored-by: Tim Gross <tgross@hashicorp.com>
2023-01-24 08:56:35 -05:00
Tim Gross a51149736d
Rename nomad.broker.total_blocked metric (#15835)
This changeset fixes a long-standing point of confusion in metrics emitted by
the eval broker. The eval broker has a queue of "blocked" evals that are waiting
for an in-flight ("unacked") eval of the same job to be completed. But this
"blocked" state is not the same as the `blocked` status that we write to raft
and expose in the Nomad API to end users. There's a second metric
`nomad.blocked_eval.total_blocked` that refers to evaluations in that
state. This has caused ongoing confusion in major customer incidents and even in
our own documentation! (Fixed in this PR.)

There's little functional change in this PR aside from the name of the metric
emitted, but there's a bit of refactoring to clean up the names in `eval_broker.go`
so that there aren't name collisions and multiple names for the same
state. Changes included are:
* Everything that was previously called "pending" referred to entities that were
  associated with the "ready" metric. These are all now called "ready" to match
  the metric.
* Everything named "blocked" in `eval_broker.go` is now named "pending", except
  for a couple of comments that actually refer to blocked RPCs.
* Added a note to the upgrade guide docs for 1.5.0.
* Fixed the scheduling performance metrics docs because the description for
  `nomad.broker.total_blocked` was actually the description for
  `nomad.blocked_eval.total_blocked`.
2023-01-20 14:23:56 -05:00
Charlie Voiselle 5ea1d8a970
Add raft snapshot configuration options (#15522)
* Add config elements
* Wire in snapshot configuration to raft
* Add hot reload of raft config
* Add documentation for new raft settings
* Add changelog
2023-01-20 14:21:51 -05:00
Seth Hoenig d2d8ebbeba
consul: correctly interpret missing consul checks as unhealthy (#15822)
* consul: correctly understand missing consul checks as unhealthy

This PR fixes a bug where Nomad assumed any registered Checks would exist
in the service registration coming back from Consul. In some cases,
Consul may be slow in processing the check registration, and the response
object would not contain checks. Nomad would then scan the empty response
looking for Checks with failing health status, finding none, and then
marking a task/alloc as healthy.

In reality, we must always use Nomad's view of what checks should exist as
the source of truth, and compare that with the response Consul gives us,
making sure they match, before scanning the Consul response for failing
check statuses.
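A reduced sketch of that comparison: Nomad's own registration determines which
checks must exist, and a check missing from Consul's response is treated as
unhealthy rather than skipped.

```go
package health

// allChecksHealthy compares the checks Nomad expects against the statuses
// Consul actually returned; a missing check counts as unhealthy.
func allChecksHealthy(expected []string, consulStatus map[string]string) bool {
	for _, name := range expected {
		status, ok := consulStatus[name]
		if !ok {
			return false // Consul hasn't registered the check yet: not healthy
		}
		if status != "passing" {
			return false
		}
	}
	return true
}
```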

Fixes #15536

* consul: minor CR refactor using maps not sets

* consul: observe transition from healthy to unhealthy checks

* consul: spell healthy correctly
2023-01-19 14:01:12 -06:00
James Rasell fad9b40e53
Merge branch 'main' into sso/gh-13120-oidc-login 2023-01-18 10:05:31 +00:00
Phil Renaud 98c5259f3e
[sso] OIDC Updates for the UI (#15804)
* Updated UI to handle OIDC method changes

* Remove redundant store unload call
2023-01-17 17:01:47 -05:00
James Rasell abe8e1cf29
updates based on code review from @tgross. 2023-01-17 08:45:17 +00:00
James Rasell d29d3412d8
rpc: add OIDC login related endpoints.
This adds new OIDC RPC endpoints. These two RPCs
handle generating the OIDC provider URL and then completing the
login by exchanging the provider token with an internal Nomad
token.

The RPC endpoints both do double forwarding. The initial forward
is to ensure we are talking to the regional leader; the second
then takes into account whether the auth method generates local or
global tokens. If it creates global tokens, we must then forward
onto the federated regional leader.
2023-01-13 13:14:29 +00:00
Seth Hoenig fe7795ce16
consul/connect: support for proxy upstreams opaque config (#15761)
This PR adds support for configuring `proxy.upstreams[].config` for
Consul Connect upstreams. This is an opaque config value to Nomad -
the data is passed directly to Consul and is unknown to Nomad.
2023-01-12 08:20:54 -06:00
Anthony Davis 1c32471805
Fix rejoin_after_leave behavior (#15552) 2023-01-11 16:39:24 -05:00
Daniel Bennett 7d1059b5ae
connect: ingress gateway validation for http hosts and wildcards (#15749)
* connect: fix non-"tcp" ingress gateway validation

changes apply to http, http2, and grpc:
* if "hosts" is excluded, consul will use its default domain
  e.g. <service-name>.ingress.dc1.consul
* can't set hosts with "*" service name
* test http2 and grpc too
2023-01-11 11:52:32 -06:00
Seth Hoenig 719eee8112
consul: add client configuration for grpc_ca_file (#15701)
* [no ci] first pass at plumbing grpc_ca_file

* consul: add support for grpc_ca_file for tls grpc connections in consul 1.14+

This PR adds client config to Nomad for specifying consul.grpc_ca_file

These changes combined with https://github.com/hashicorp/consul/pull/15913 should
finally enable Nomad users to upgrade to Consul 1.14+ and use tls grpc connections.

* consul: add cl entry for grpc_ca_file

* docs: mention grpc_tls changes due to Consul 1.14
2023-01-11 09:34:28 -06:00
Seth Hoenig 83450c8762
vault: configure user agent on Nomad vault clients (#15745)
* vault: configure user agent on Nomad vault clients

This PR attempts to set the User-Agent header on each Vault API client
created by Nomad. Still need to figure out a way to set User-Agent on the
Vault client created internally by consul-template.

* vault: fixup find-and-replace gone awry
2023-01-10 10:39:45 -06:00
Piotr Kazmierczak be36a1924f
acl: binding rules evaluation (#15697)
Binder provides an interface for binding claims and ACL roles/policies of Nomad.
2023-01-10 16:08:08 +01:00
Tim Gross 32f6ce1c54
Authenticate method improvements (#15734)
This changeset covers a sidebar discussion that @schmichael and I had around the
design for pre-forwarding auth. This includes some changes extracted out of
#15513 to make it easier to review both and leave a clean history.

* Remove fast path for NodeID. Previously-connected clients will have a NodeID
  set on the context, and because this is a large portion of the RPCs sent we
  fast-pathed it at the top of the `Authenticate` method. But the context is
  shared for all yamux streams over the same yamux session (and TCP
  connection). This lets an authenticated HTTP request to a client use the
  NodeID for authentication, which is a privilege escalation. Remove the fast
  path and annotate it so that we don't break it again.

* Add context to decisions around AuthenticatedIdentity. The `Authenticate`
  method taken on its own looks like it wants to return an `acl.ACL` that folds
  over all the various identity types (creating an ephemeral ACL on the fly if
  neccessary). But keeping these fields idependent allows RPC handlers to
  differentiate between internal and external origins so we most likely want to
  avoid this. Leave some docstrings as a warning as to why this is built the way
  it is.

* Mutate the request rather than returning. When reviewing #15513 we decided
  that forcing the request handler to call `SetIdentity` was repetitive and
  error prone. Instead, the `Authenticate` method mutates the request by setting
  its `AuthenticatedIdentity`, as sketched below.
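A self-contained sketch of the "mutate the request" shape, with illustrative
types rather than Nomad's actual structs:

```go
package auth

import "errors"

// Illustrative types only. The real AuthenticatedIdentity keeps ACL token,
// client ID, TLS name, and remote IP as independent fields so handlers can
// distinguish internal from external callers.
type AuthenticatedIdentity struct {
	TokenID  string
	RemoteIP string
}

type Request struct {
	AuthToken string
	Identity  *AuthenticatedIdentity
}

var ErrPermissionDenied = errors.New("permission denied")

// Authenticate records the resolved identity on the request itself so RPC
// handlers that run after forwarding can't forget to set it.
func Authenticate(req *Request, remoteIP string, lookupToken func(secret string) (string, bool)) error {
	id := &AuthenticatedIdentity{RemoteIP: remoteIP}
	req.Identity = id // always recorded, even on failure, for rate metrics
	if req.AuthToken == "" {
		return nil // unauthenticated; authorization decides what's allowed
	}
	tokenID, ok := lookupToken(req.AuthToken)
	if !ok {
		return ErrPermissionDenied
	}
	id.TokenID = tokenID
	return nil
}
```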
2023-01-10 09:46:38 -05:00
Seth Hoenig 84cb5fb03d
deps: update shoenig/test to v0.6.0 (#15715)
Adds support for custom cmp.Options; need to fix one minor thing
causing api breakage.
2023-01-09 09:37:08 -06:00
Seth Hoenig 7214e21402
ci: swap freeport for portal in packages (#15661) 2023-01-03 11:25:20 -06:00
Seth Hoenig 266ca25a81
cleanup: remove usage of consul/sdk/testutil/retry (#15609)
This PR removes usages of `consul/sdk/testutil/retry`, as part of the
ongoing effort to remove use of any non-API module from Consul.

There is one remaining usage in the helper/freeport package, but that
will get removed as part of #15589
2023-01-02 08:06:20 -06:00
Piotr Kazmierczak 3fd9fbeece
fix failing UpsertBindingRules unit test (#15604)
UpsertBindingRules RPC changed in eacecb8,
validation happens after the ID check now, because we don't want validation to
fail for update payloads which may contain incomplete objects.
2022-12-21 15:19:09 +01:00
Piotr Kazmierczak b500dfa969
bugfix: unit test for GetACLBindingRules (#15583)
The unit test for the GetACLBindingRules state store method would fail because
we expected a specific order of returned items.
2022-12-20 15:06:09 +01:00
Piotr Kazmierczak 20a01a0bba
acl: modify update endpoints behavior (#15580)
API and RPC endpoints for ACLAuthMethods and ACLBindingRules should allow users
to send incomplete objects in order to, e.g., update single fields. This PR
provides "merging" functionality for these endpoints.
2022-12-20 11:22:19 +01:00
James Rasell b8aa53d09f
core: add ACL binding rule to replication system. (#15555)
ACL binding rule create and deletes are always forwarded to the
authoritative region. In order to make these available in
federated regions, the leaders in these regions need to replicate
from the authoritative.
2022-12-16 09:08:00 +01:00
James Rasell 95c9ffa505
ACL: add ACL binding rule RPC and HTTP API handlers. (#15529)
This change adds the RPC ACL binding rule handlers. These handlers
are responsible for the creation, updating, reading, and deletion
of binding rules.

The write handlers are feature gated so that they can only be used
when all federated servers are running the required version.

The HTTP API handlers and API SDK have also been added where
required. This allows the endpoints to be called from the API by users
and clients.
2022-12-15 09:18:55 +01:00
James Rasell 13f207ea78
events: add ACL binding rules to core events stream topics. (#15544) 2022-12-14 14:49:49 +01:00
James Rasell 3c941c6bc3
acl: add binding rule object state schema and functionality. (#15511)
This change adds a new table that will store ACL binding rule
objects. The two indexes allow fast lookups by their ID, or by
which auth method they are linked to. Snapshot persist and
restore functionality ensures this table can be saved and
restored from snapshots.

In order to write and delete the object to state, new Raft messages
have been added.

All RPC request and response structs, along with object functions
such as diff and canonicalize have been included within this work
as it is nicely separated from the other areas of work.
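A sketch of what a two-index go-memdb table for binding rules can look like;
the table and field names are illustrative, not Nomad's exact schema.

```go
package state

import memdb "github.com/hashicorp/go-memdb"

func aclBindingRuleTableSchema() *memdb.TableSchema {
	return &memdb.TableSchema{
		Name: "acl_binding_rules",
		Indexes: map[string]*memdb.IndexSchema{
			"id": {
				Name:         "id",
				AllowMissing: false,
				Unique:       true,
				Indexer:      &memdb.StringFieldIndex{Field: "ID"},
			},
			// secondary index for fast lookup of all rules tied to an auth method
			"auth_method": {
				Name:         "auth_method",
				AllowMissing: false,
				Unique:       false,
				Indexer:      &memdb.StringFieldIndex{Field: "AuthMethod"},
			},
		},
	}
}
```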
2022-12-14 08:48:18 +01:00
Seth Hoenig be3f89b5f9
artifact: enable inheriting environment variables from client (#15514)
* artifact: enable inheriting environment variables from client

This PR adds client configuration for specifying environment variables that
should be inherited by the artifact sandbox process from the Nomad Client agent.

Most users should not need to set these values but the configuration is provided
to ensure backwards compatibility. Configuration of go-getter should ideally be
done through the artifact block in a jobspec task.

e.g.

```hcl
client {
  artifact {
    set_environment_variables = "TMPDIR,GIT_SSH_OPTS"
  }
}
```

Closes #15498

* website: update set_environment_variables text to mention PATH
2022-12-09 15:46:07 -06:00
Piotr Kazmierczak db98e26375
bugfix: acl sso auth methods test failures (#15512)
This PR fixes unit test failures introduced in f4e89e2
2022-12-09 18:47:32 +01:00
Piotr Kazmierczak 08f50f7dbf
acl: make sure there is only one default Auth Method per type (#15504)
This PR adds a check that makes sure we don't insert a duplicate default ACL auth method for a given type.
2022-12-09 14:46:54 +01:00
Seth Hoenig 825c5cc65e
artifact: add client toggle to disable filesystem isolation (#15503)
This PR adds the client config option for turning off filesystem isolation,
applicable on Linux systems where filesystem isolation is possible and
enabled by default.

```hcl
client{
  artifact {
    disable_filesystem_isolation = <bool:false>
  }
}
```

Closes #15496
2022-12-08 12:29:23 -06:00
Piotr Kazmierczak 1cb45630f0
acl: canonicalize ACL Auth Method object (#15492) 2022-12-08 14:05:46 +01:00
Piotr Kazmierczak 10f80f7d9a
bugfix: make sure streaming endpoints are only registered once (#15484)
Streaming RPCs should only be registered once, not on every RPC call, because they set keys in the StreamingRpcRegistry.registry map. This PR fixes it by checking whether endpoints are already registered before calling the .register() method. Fixes #15474
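One way to express that guard, simplified to a standalone helper; the actual
fix checks whether the endpoints are already registered, which has the same
effect, and the handler names here are examples.

```go
package agent

import "sync"

// registerStreamsOnce makes streaming-handler registration idempotent so that
// repeated RPC setup cannot overwrite entries in the streaming registry.
var registerStreamsOnce sync.Once

func maybeRegisterStreamingHandlers(register func(name string, handler func())) {
	registerStreamsOnce.Do(func() {
		register("FileSystem.Logs", func() { /* stream task logs */ })
		register("Agent.Monitor", func() { /* stream agent logs */ })
	})
}
```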

Co-authored-by: Tim Gross <tgross@hashicorp.com>
2022-12-07 17:01:45 +01:00
Tim Gross e0fddee386
Pre forwarding authentication (#15417)
Upcoming work to instrument the rate of RPC requests by consumer (and eventually
rate limit) requires that we authenticate an RPC request before forwarding. Add a
new top-level `Authenticate` method to the server and have it return an
`AuthenticatedIdentity` struct. RPC handlers will use the relevant fields of
this identity for performing authorization.

This changeset includes:
* The main implementation of `Authenticate`
* Provide a new RPC `ACL.WhoAmI` for debugging authentication. This endpoint
  returns the same `AuthenticatedIdentity` that will be used by RPC handlers. At
  some point we might want to give this an equivalent HTTP endpoint but I didn't
  want to add that to our public API until some of the other Workload Identity
  work is solidified, especially if we don't need it yet.
* A full coverage test of the `Authenticate` method. This sets up two server
  nodes with mTLS and ACLs, some tokens, and some allocations with workload
  identities.
* Wire up an example of using `Authenticate` in the `Namespace.Upsert` RPC and
  see how authorization happens after forwarding.
* A new semgrep rule for `Authenticate`, which we'll need to update once we're
  ready to wire up more RPC endpoints with authorization steps.
2022-12-06 14:44:03 -05:00
Piotr Kazmierczak d3aac1fe08
acl: sso auth method snapshot restore test (#15482) 2022-12-06 15:15:29 +01:00
Piotr Kazmierczak 777173e8da
acl: added type to ACL Auth Method stub (#15480) 2022-12-06 14:47:05 +01:00
Tim Gross dfed1ba5bc
remove most static RPC handlers (#15451)
Nomad server components that aren't in the `nomad` package like the deployment
watcher and volume watcher need to make RPC calls but can't import the Server
struct to do so because it creates a circular reference. These components have a
"shim" object that gets populated to pass a "static" handler that has no RPC
context.

Most RPC handlers are never used in this way, but during server setup we were
constructing a set of static handlers for most RPC endpoints anyways. This is
slightly wasteful but also confusing to developers who end up being encouraged
to just copy what was being done for previous RPCs.

This changeset includes the following refactorings:
* Remove the static handlers field on the server
* Instead construct just the specific static handlers we need to pass into the
  deployment watcher and volume watcher.
* Remove the unnecessary static handler from heartbeater
* Update various tests to avoid needing the static endpoints and have them use a
  endpoint constructed on the spot.

Follow-up work will examine whether we can remove the RPCs from deployment
watcher and volume watcher entirely, falling back to raft applies like node
drainer does currently.
2022-12-02 10:12:05 -05:00
Tim Gross 33f32d526e
fix enterprise endpoint registration (#15446)
In #15430 we refactored the RPC endpoint configuration to make adding the RPC
context easier. But when implementing the change on the Enterprise side, I
discovered that the registration of enterprise endpoints was being done
incorrectly -- this doesn't show up on OSS because the registration is always a
no-op here.
2022-12-01 10:44:33 -05:00
Tim Gross f61f801e77
provide RPCContext to all RPC handlers (#15430)
Upcoming work to instrument the rate of RPC requests by consumer (and eventually
rate limit) requires that we thread the `RPCContext` through all RPC
handlers so that we can access the underlying connection. This changeset adds
the context to everywhere we intend to initially support it and intentionally
excludes streaming RPCs and client RPCs.

To improve the ergonomics of adding the context everywhere its needed and to
clarify the requirements of dynamic vs static handlers, I've also done a good
bit of refactoring here:

* canonicalized the RPC handler fields so they're as close to identical as
  possible without introducing unused fields (i.e. I didn't add loggers if the
  handler doesn't use them already).
* canonicalized the imports in the handler files.
* added a `NewExampleEndpoint` function for each handler that ensures we're
  constructing the handlers with the required arguments.
* reordered the registration in server.go to match the order of the files (to
  make it easier to see if we've missed one), and added a bunch of commentary
  there as to what the difference between static and dynamic handlers is.
2022-12-01 10:05:15 -05:00
Piotr Kazmierczak 0eccd3286c
acl: sso auth methods RPC/API/CLI should return created or updated objects (#15410)
Currently, CRUD code that operates on SSO auth methods does not return the created or updated object upon creation/update. This is bad UX and inconsistent behavior compared to other ACL objects like roles, policies or tokens.

This PR fixes it.

Relates to #13120
2022-11-29 07:36:36 +01:00
James Rasell 726d419da1
acl: replicate auth-methods from federated cluster leaders. (#15366) 2022-11-28 09:20:24 +01:00
Luiz Aoqui 8f91be26ab
scheduler: create placements for non-register MRD (#15325)
* scheduler: create placements for non-register MRD

For multiregion jobs, the scheduler does not create placements on
registration because the deployment must wait for the other regions.
One of these regions will then trigger the deployment to run.

Currently, this is done in the scheduler by considering any eval for a
multiregion job as "paused" since it's expected that another region will
eventually unpause it.

This becomes a problem where evals not triggered by a job registration
happen, such as on a node update. These types of regional changes do not
have other regions waiting to progress the deployment, and so they were
never resulting in placements.

The fix is to create a deployment at job registration time. This
additional piece of state allows the scheduler to differentiate between
a multiregion change, where there are other regions engaged in the
deployment so no placements are required, from a regional change, where
the scheduler does need to create placements.

This deployment starts in the new "initializing" status to signal to the
scheduler that it needs to compute the initial deployment state. The
multiregion deployment will wait until this deployment state is
persisted and its status is set to "pending". Without this state
transition it's possible to hit a race condition where the plan applier
and the deployment watcher may step on each other and overwrite their
changes.

* changelog: add entry for #15325
2022-11-25 12:45:34 -05:00
James Rasell 32dfa431f3
sso: add ACL auth-method HTTP API CRUD endpoints (#15338)
* core: remove custom auth-method TTLS and use ACL token TTLS.

* agent: add ACL auth-method HTTP endpoints for CRUD actions.

* api: add ACL auth-method client.
2022-11-23 09:38:02 +01:00
Piotr Kazmierczak bb66b5e770
acl: sso auth method RPC endpoints (#15221)
This PR implements RPC endpoints for SSO auth methods.

This PR is part of the SSO work captured under ☂️ ticket #13120.
2022-11-21 10:15:39 +01:00
Piotr Kazmierczak d02241cad5
acl: sso auth method event stream (#15280)
This PR implements SSO auth method support in the event stream.

This PR is part of the SSO work captured under ☂️ ticket #13120.
2022-11-21 10:06:05 +01:00
Tim Gross 05a46e6648
make eval cancelation really async with Eval.Ack (#15298)
Ensure we never block in the `Eval.Ack` RPC.
2022-11-18 08:38:17 -05:00
Tim Gross b74a868aae
make eval cancelation async with Eval.Ack (#15294)
In #14621 we added an eval cancelation reaper goroutine with a channel that
allowed us to wake it up. But we forgot to actually send on this channel from
`Eval.Ack` and are still committing the cancelations synchronously. Fix this by
sending on the buffered channel to wake up the reaper instead.
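A minimal sketch of the non-blocking wake-up pattern described here, with hypothetical names (`broker`, `signalReap`, `reapCancelableCh`) standing in for the real eval broker internals:

```go
package main

import (
	"fmt"
	"time"
)

type broker struct {
	// reapCancelableCh is buffered with capacity 1 so pending wake-ups
	// coalesce; the name is illustrative, not Nomad's.
	reapCancelableCh chan struct{}
}

// signalReap wakes the reaper without ever blocking the caller.
func (b *broker) signalReap() {
	select {
	case b.reapCancelableCh <- struct{}{}:
	default:
		// a wake-up is already queued; the reaper will run regardless
	}
}

func main() {
	b := &broker{reapCancelableCh: make(chan struct{}, 1)}

	// Reaper goroutine: drains the cancelable set on signal or on a periodic sweep.
	done := make(chan struct{})
	go func() {
		defer close(done)
		ticker := time.NewTicker(200 * time.Millisecond)
		defer ticker.Stop()
		for i := 0; i < 3; i++ {
			select {
			case <-b.reapCancelableCh:
				fmt.Println("reaping canceled evals after ack signal")
			case <-ticker.C:
				fmt.Println("periodic sweep of cancelable evals")
			}
		}
	}()

	b.signalReap() // called from the ack path
	<-done
}
```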
2022-11-17 16:40:41 -05:00
Tim Gross d0f9e887f7
autopilot: include only servers from the same region (#15290)
When we migrated to the updated autopilot library in Nomad 1.4.0, the interface
for finding servers changed. Previously autopilot would get the serf members and
call `IsServer` on each of them, leaving it up to the implementor to filter out
clients (and in Nomad's case, other regions). But in the "new" autopilot
library, the equivalent interface is `KnownServers` for which we did not filter
by region. This causes spurious attempts at cross-region stats fetching,
which results in TLS errors and a lot of log noise.

Filter the member set by region to fix the regression.
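An illustrative sketch of the fix's shape, using a stand-in `member` type rather than the real serf types; Nomad servers carry a `region` serf tag, and anything outside the local region is dropped before autopilot sees it:

```go
package main

import "fmt"

// member is a minimal stand-in for a serf member.
type member struct {
	Name string
	Tags map[string]string
}

// regionServers keeps only the members that belong to the given region,
// so federated peers never show up as "known servers" for autopilot.
func regionServers(members []member, region string) []member {
	var out []member
	for _, m := range members {
		if m.Tags["region"] != region {
			continue
		}
		out = append(out, m)
	}
	return out
}

func main() {
	members := []member{
		{Name: "srv-a.global", Tags: map[string]string{"region": "global"}},
		{Name: "srv-b.east", Tags: map[string]string{"region": "east"}},
	}
	fmt.Println(len(regionServers(members, "global"))) // 1
}
```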
2022-11-17 12:09:36 -05:00
Tim Gross 510eb435dc
remove deprecated AllocUpdateRequestType raft entry (#15285)
After Deployments were added in Nomad 0.6.0, the `AllocUpdateRequestType` raft
log entry was no longer in use. Mark this as deprecated, remove the associated
dead code, and remove references to the metrics it emits from the docs. We'll
leave the entry itself just in case we encounter old raft logs that we need to
be able to safely load.
2022-11-17 12:08:04 -05:00
Tim Gross dd3a07302e
keyring: update handle to state inside replication loop (#15227)
* keyring: update handle to state inside replication loop

When keyring replication starts, we take a handle to the state store. But
whenever a snapshot is restored, this handle is invalidated and no longer points
to a state store that is receiving new keys. This leaks a bunch of memory too!

In addition to operator-initiated restores, when fresh servers are added to
existing clusters with large-enough state, the keyring replication can get
started quickly enough that it's running before the snapshot from the existing
clusters have been restored.

Fix this by updating the handle to the state store on each pass.
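A hedged sketch of the pattern with stand-in types: the replication loop asks the server for its current state store on every pass instead of caching the handle it saw at startup.

```go
package main

import (
	"context"
	"fmt"
	"sync"
	"time"
)

// stateStore and server are illustrative stand-ins for the real types.
type stateStore struct{ generation int }

type server struct {
	mu      sync.Mutex
	current *stateStore
}

// State returns the server's current state store; a snapshot restore swaps
// the store out wholesale, so callers must not cache the returned handle.
func (s *server) State() *stateStore {
	s.mu.Lock()
	defer s.mu.Unlock()
	return s.current
}

func (s *server) restoreSnapshot() {
	s.mu.Lock()
	defer s.mu.Unlock()
	s.current = &stateStore{generation: s.current.generation + 1}
}

// replicate refreshes its state-store handle on every pass instead of
// holding on to the handle it had when the loop started.
func replicate(ctx context.Context, srv *server, passes int) {
	for i := 0; i < passes; i++ {
		select {
		case <-ctx.Done():
			return
		case <-time.After(50 * time.Millisecond):
			store := srv.State()
			fmt.Println("replication pass against store generation", store.generation)
		}
	}
}

func main() {
	srv := &server{current: &stateStore{generation: 1}}
	go func() {
		time.Sleep(75 * time.Millisecond)
		srv.restoreSnapshot() // simulate an operator-initiated restore
	}()
	replicate(context.Background(), srv, 3)
}
```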
2022-11-17 08:40:12 -05:00
Tim Gross 6415fb4284
eval broker: shed all but one blocked eval per job after ack (#14621)
When an evaluation is acknowledged by a scheduler, the resulting plan is
guaranteed to cover up to the `waitIndex` set by the worker based on the most
recent evaluation for that job in the state store. At that point, we no longer
need to retain blocked evaluations in the broker that are older than that index.

Move all but the highest priority / highest `ModifyIndex` blocked eval into a
canceled set. When the `Eval.Ack` RPC returns from the eval broker it will
signal a reap of a batch of cancelable evals to write to raft. This paces the
cancelations by how frequently the schedulers are acknowledging evals;
this should reduce the risk of cancelations overwhelming raft relative to
scheduler progress. In order to avoid straggling batches when the cluster is
quiet, we also include a periodic sweep through the cancelable list.
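A rough sketch of the shed-all-but-one idea using illustrative types (`Eval` and `blockedTracker` are not Nomad's): keep only the newest blocked eval per job and push the rest onto a cancelable set for batched reaping.

```go
package main

import "fmt"

type Eval struct {
	ID          string
	JobID       string
	ModifyIndex uint64
}

type blockedTracker struct {
	blocked    map[string]*Eval // newest blocked eval per job
	cancelable []*Eval          // superseded evals awaiting a batched cancelation write
}

// block records a blocked eval, shedding any older eval for the same job
// into the cancelable set.
func (t *blockedTracker) block(e *Eval) {
	if cur, ok := t.blocked[e.JobID]; ok {
		if e.ModifyIndex <= cur.ModifyIndex {
			t.cancelable = append(t.cancelable, e)
			return
		}
		t.cancelable = append(t.cancelable, cur)
	}
	t.blocked[e.JobID] = e
}

func main() {
	t := &blockedTracker{blocked: map[string]*Eval{}}
	t.block(&Eval{ID: "older", JobID: "web", ModifyIndex: 10})
	t.block(&Eval{ID: "newer", JobID: "web", ModifyIndex: 12})
	fmt.Println(len(t.cancelable), t.blocked["web"].ID) // 1 newer
}
```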
2022-11-16 16:10:11 -05:00
Tim Gross e8c83c3ecc
test: ensure leader is still valid in reelection test (#15267)
The `TestLeader_Reelection` test waits for a leader to be elected and then makes
some other assertions. But it implicitly assumes that there's no failure of
leadership before shutting down the leader, which can lead to a panic in the
tests. Assert there's still a leader before the shutdown.
2022-11-16 11:04:02 -05:00
Tim Gross 37134a4a37
eval delete: move batching of deletes into RPC handler and state (#15117)
During unusual outage recovery scenarios on large clusters, a backlog of
millions of evaluations can appear. In these cases, the `eval delete` command can
put excessive load on the cluster by listing large sets of evals to extract the
IDs and then sending large batches of IDs. Although the command's batch size
was carefully tuned, we still need to JSON-deserialize, re-serialize to
MessagePack, send the log entries through raft, and apply them in the FSM.

To improve performance of this recovery case, move the batching process into the
RPC handler and the state store. The design here is a little weird, so let's
look at the failed options first:

* A naive solution here would be to just send the filter as the raft request and
  let the FSM apply delete the whole set in a single operation. Benchmarking with
  1M evals on a 3 node cluster demonstrated this can block the FSM apply for
  several minutes, which puts the cluster at risk if there's a leadership
  failover (the barrier write can't be made while this apply is in-flight).

* A less naive but still bad solution would be to have the RPC handler filter
  and paginate, and then hand a list of IDs to the existing raft log
  entry. Benchmarks showed this blocked the FSM apply for 20-30s at a time and
  took roughly an hour to complete.

Instead, we're filtering and paginating in the RPC handler to find a page token,
and then passing both the filter and page token in the raft log. The FSM apply
recreates the paginator using the filter and page token to get roughly the same
page of evaluations, which it then deletes. The pagination process is fairly
cheap (only about 5% of the total FSM apply time), so counter-intuitively this
rework ends up being much faster. A benchmark of 1M evaluations showed this
blocked the FSM apply for 20-30ms at a time (typical for normal operations) and
completes in less than 4 minutes.

Note that, as with the existing design, this delete is not consistent: a new
evaluation inserted "behind" the cursor of the pagination will fail to be
deleted.
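A simplified sketch of the filter-plus-page-token approach with hypothetical types; the real FSM rebuilds its paginator from the raft entry, but the core loop looks roughly like this:

```go
package main

import (
	"fmt"
	"sort"
	"strings"
)

// deleteRequest stands in for what the raft entry carries: a filter, a
// resume token, and a page size (names are illustrative).
type deleteRequest struct {
	Filter    string // e.g. a status the evals must match
	PageToken string // eval ID to resume from
	PerPage   int
}

// applyDeletePage deletes at most one page of matching evals and returns
// the cursor the next batch should resume from.
func applyDeletePage(evals map[string]string, req deleteRequest) (deleted int, next string) {
	ids := make([]string, 0, len(evals))
	for id := range evals {
		ids = append(ids, id)
	}
	sort.Strings(ids) // the paginator walks IDs in a stable order

	for _, id := range ids {
		if id < req.PageToken {
			continue // skip IDs before the cursor
		}
		if !strings.Contains(evals[id], req.Filter) {
			continue
		}
		if deleted == req.PerPage {
			return deleted, id // hand the next cursor back for the following batch
		}
		delete(evals, id)
		deleted++
	}
	return deleted, "" // done
}

func main() {
	evals := map[string]string{"e1": "complete", "e2": "complete", "e3": "pending"}
	n, next := applyDeletePage(evals, deleteRequest{Filter: "complete", PerPage: 1})
	fmt.Println(n, next) // 1 e2
}
```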
2022-11-14 14:08:13 -05:00
Piotr Kazmierczak 4851f9e68a
acl: sso auth method schema and store functions (#15191)
This PR implements ACLAuthMethod type, acl_auth_methods table schema and crud state store methods. It also updates nomadSnapshot.Persist and nomadSnapshot.Restore methods in order for them to work with the new table, and adds two new Raft messages: ACLAuthMethodsUpsertRequestType and ACLAuthMethodsDeleteRequestType

This PR is part of the SSO work captured under ☂️ ticket #13120.
2022-11-10 19:42:41 +01:00
Drew Gonzales aac9404ee5
server: add git revision to serf tags (#9159) 2022-11-07 10:34:33 -05:00
Tim Gross 9e1c0b46d8
API for Eval.Count (#15147)
Add a new `Eval.Count` RPC and associated HTTP API endpoints. This API is
designed to support interactive use in the `nomad eval delete` command to get a
count of evals expected to be deleted before doing so.

The state store operations to do this sort of thing are somewhat expensive, but
it's cheaper than serializing a big list of evals to JSON. Note that although it
seems like this could be done as an extra parameter and response field on
`Eval.List`, having it as its own endpoint avoids having to change the response
body shape and lets us avoid handling the legacy filter params supported by
`Eval.List`.
2022-11-07 08:53:19 -05:00
Luiz Aoqui e4c8b59919
Update alloc after reconnect and enforce client heartbeat order (#15068)
* scheduler: allow updates after alloc reconnects

When an allocation reconnects to a cluster the scheduler needs to run
special logic to handle the reconnection, check if a replacement was
created, and stop one of them.

If the allocation kept running while the node was disconnected, it will
be reconnected with `ClientStatus: running` and the node will have
`Status: ready`. This combination is the same as the normal steady state
of an allocation, where everything is running as expected.

In order to differentiate between the two states (an allocation that is
reconnecting and one that is just running) the scheduler needs an extra
piece of state.

The current implementation uses the presence of a
`TaskClientReconnected` task event to detect when the allocation has
reconnected and thus must go through the reconnection process. But this
event remains even after the allocation is reconnected, causing all
future evals to consider the allocation as still reconnecting.

This commit changes the reconnect logic to use an `AllocState` to
register when the allocation was reconnected. This provides the
following benefits:

  - Only a limited number of task states are kept, and they are used for
    many other events. It's possible that, upon reconnecting, several
    actions are triggered that could cause the `TaskClientReconnected`
    event to be dropped.
  - Task events are set by clients and so their timestamps are subject
    to time skew from servers. This prevents using time to determine if
    an allocation reconnected after a disconnect event.
  - Disconnect events are already stored as `AllocState` and so storing
    reconnects there as well makes it the only source of information
    required.

With the new logic, the reconnection logic is only triggered if the
last `AllocState` is a disconnect event, meaning that the allocation has
not been reconnected yet. After the reconnection is handled, the new
`ClientStatus` is stored in `AllocState`, allowing future evals to skip
the reconnection logic.

* scheduler: prevent spurious placement on reconnect

When a client reconnects it makes two independent RPC calls:

  - `Node.UpdateStatus` to heartbeat and set its status as `ready`.
  - `Node.UpdateAlloc` to update the status of its allocations.

These two calls can happen in any order, and if the allocations are
updated before a heartbeat the state looks the same as a node
being disconnected: the node status will still be `disconnected` while
the allocation `ClientStatus` is set to `running`.

The current implementation did not handle this order of events properly,
and the scheduler would create an unnecessary placement since it
considered the allocation was being disconnected. This extra allocation
would then be quickly stopped by the heartbeat eval.

This commit adds a new code path to handle this order of events. If the
node is `disconnected` and the allocation `ClientStatus` is `running`
the scheduler will check if the allocation is actually reconnecting
using its `AllocState` events.

* rpc: only allow alloc updates from `ready` nodes

Clients interact with servers using three main RPC methods:

  - `Node.GetAllocs` reads allocation data from the server and writes it
    to the client.
  - `Node.UpdateAlloc` reads allocations from the client and writes
    them to the server.
  - `Node.UpdateStatus` writes the client status to the server and is
    used as the heartbeat mechanism.

These three methods are called periodically by the clients, independently
from each other, meaning that no assumptions can be made about their
ordering.

This can generate scenarios that are hard to reason about and to code
for. For example, when a client misses too many heartbeats it will be
considered `down` or `disconnected` and the allocations it was running
are set to `lost` or `unknown`.

When connectivity is restored to the rest of the cluster, the natural
mental model is to think that the client will heartbeat first and then
update its allocations status into the servers.

But since there's no inherent order in these calls the reverse is just as
possible: the client updates the alloc status and then heartbeats. This
results in a state where allocs are, for example, `running` while the
client is still `disconnected`.

This commit adds a new verification to the `Node.UpdateAlloc` method to
reject updates from nodes that are not `ready`, forcing clients to
heartbeat first (a minimal sketch of this gate appears at the end of this
message). Since this check is done server-side there is no need to
coordinate operations client-side: clients can continue sending these
requests independently, and the alloc update will succeed after the
heartbeat is done.

* changelog: add entry for #15068

* code review

* client: skip terminal allocations on reconnect

When the client reconnects with the server it synchronizes the state of
its allocations by sending data using the `Node.UpdateAlloc` RPC and
fetching data using the `Node.GetClientAllocs` RPC.

If the data fetch happens before the data write, `unknown` allocations
will still be in this state and would trigger the
`allocRunner.Reconnect` flow.

But when the server `DesiredStatus` for the allocation is `stop` the
client should not reconnect the allocation.

* apply more code review changes

* scheduler: persist changes to reconnected allocs

Reconnected allocs have a new AllocState entry that must be persisted by
the plan applier.

* rpc: read node ID from allocs in UpdateAlloc

The AllocUpdateRequest struct is used in three disjoint use cases:

1. Stripped allocs from clients Node.UpdateAlloc RPC using the Allocs,
   and WriteRequest fields
2. Raft log message using the Allocs, Evals, and WriteRequest fields
3. Plan updates using the AllocsStopped, AllocsUpdated, and Job fields

Adding a new field that would only be used in one of these cases (1) made
things more confusing and error-prone. While in theory an
AllocUpdateRequest could send allocations from different nodes, in
practice this never actually happens since only clients call this method
with their own allocations.

* scheduler: remove logic to handle exceptional case

This condition could only be hit if, somehow, the allocation status was
set to "running" while the client was "unknown". This was addressed by
enforcing an order in "Node.UpdateStatus" and "Node.UpdateAlloc" RPC
calls, so this scenario is not expected to happen.

Adding unnecessary code to the scheduler makes it harder to read and
reason about it.

* more code review

* remove another unused test
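As referenced in the `Node.UpdateAlloc` section above, a minimal sketch of the server-side readiness gate, using illustrative types rather than Nomad's actual structs:

```go
package main

import (
	"errors"
	"fmt"
)

// node and allocUpdateRequest are stand-ins for the RPC handler's view.
type node struct {
	ID     string
	Status string // "initializing", "ready", "down", "disconnected", ...
}

type allocUpdateRequest struct {
	NodeID string
}

var errNodeNotReady = errors.New("node is not ready; heartbeat first")

// updateAlloc rejects alloc updates from nodes that have not heartbeated
// back to "ready", enforcing the heartbeat-before-alloc-update order
// server-side.
func updateAlloc(nodes map[string]*node, req allocUpdateRequest) error {
	n, ok := nodes[req.NodeID]
	if !ok {
		return fmt.Errorf("unknown node %q", req.NodeID)
	}
	if n.Status != "ready" {
		return errNodeNotReady
	}
	// ... proceed with writing the alloc updates ...
	return nil
}

func main() {
	nodes := map[string]*node{"n1": {ID: "n1", Status: "disconnected"}}
	fmt.Println(updateAlloc(nodes, allocUpdateRequest{NodeID: "n1"})) // rejected

	nodes["n1"].Status = "ready" // heartbeat arrives
	fmt.Println(updateAlloc(nodes, allocUpdateRequest{NodeID: "n1"})) // <nil>
}
```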
2022-11-04 16:25:11 -04:00
Charlie Voiselle 79c4478f5b
template: error on missing key (#15141)
* Support error_on_missing_value for templates
* Update docs for template stanza
2022-11-04 13:23:01 -04:00
Charlie Voiselle 83e43e01c1
Add missing timer reset (#15134) 2022-11-03 18:57:57 -04:00
Phil Renaud ffb4c63af7
[ui] Adds meta to job list stub and displays a pack logo on the jobs index (#14833)
* Adds meta to job list stub and displays a pack logo on the jobs index

* Changelog

* Modifying struct for optional meta param

* Explicitly ask for meta anytime I look up a job from index or job page

* Test case for the endpoint

* adding meta field to API struct and omitting it from response if empty

* passthru method added to api/jobs.list

* Meta param listed in docs for jobs list

* Update api/jobs.go

Co-authored-by: Tim Gross <tgross@hashicorp.com>
2022-11-02 16:58:24 -04:00
Tim Gross 4d7a4171cd
volumewatcher: prevent panic on nil volume (#15101)
If a GC claim is written and then the volume is deleted before the `volumewatcher`
enters its run loop, we panic on the nil-pointer access. Simply doing a
nil-check at the top of the loop reveals a race condition around shutting down
the loop just as a new update is coming in.

Have the parent `volumewatcher` send an initial update on the channel before
returning, so that we're still holding the lock. Update the watcher's `Stop`
method to set the running state, which lets us avoid having a second context and
makes stopping synchronous. This reduces the cases we have to handle in the run
loop.

Updated the tests now that we'll safely return from the goroutine and stop the
runner in a larger set of cases. Ran the tests with the `-race` detection flag
and fixed up any problems found here as well.
2022-11-01 16:53:10 -04:00
Tim Gross 38542f256e
variables: limit rekey eval to half the nack timeout (#15102)
In order to limit how much the rekey job can monopolize a scheduler worker, we
limit how long it can run to 1min before stopping work and emitting a new
eval. But this exactly matches the default nack timeout, so it'll fail the eval
rather than getting a chance to emit a new one.

Set the timeout for the rekey eval to half the configured nack timeout.
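A small sketch of the deadline calculation, assuming an illustrative nack timeout value; the rekey loop stops at half that timeout and emits a follow-up eval:

```go
package main

import (
	"fmt"
	"time"
)

// rekeyDeadline bounds a single rekey eval to half the configured nack
// timeout, leaving room to emit a follow-up eval before the broker nacks.
func rekeyDeadline(nackTimeout time.Duration) time.Time {
	return time.Now().Add(nackTimeout / 2)
}

func main() {
	nackTimeout := 60 * time.Second // illustrative value, not necessarily Nomad's default
	deadline := rekeyDeadline(nackTimeout)

	for processed := 0; processed < 10000; processed++ {
		if time.Now().After(deadline) {
			fmt.Println("pausing rekey and emitting a follow-up eval after", processed, "variables")
			return
		}
		// ... re-encrypt one variable with the active key ...
	}
	fmt.Println("rekey finished within the deadline")
}
```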
2022-11-01 16:50:50 -04:00
Tim Gross 903b5baaa4
keyring: safely handle missing keys and restore GC (#15092)
When replication of a single key fails, the replication loop breaks early and
therefore keys that fall later in the sorting order will never get
replicated. This is particularly a problem for clusters impacted by the bug that
caused #14981 and that were later upgraded; the keys that were never replicated
can now never be replicated, and so we need to handle them safely.

Included in the replication fix:
* Refactor the replication loop so that each key is replicated in a function call
  that returns an error, to make the workflow clearer and reduce nesting. Log
  the error and continue.
* Improve stability of keyring replication tests. We no longer block leadership
  on initializing the keyring, so there's a race condition in the keyring tests
  where we can test for the existence of the root key before the keyring has
  been initialized. Change this to an "eventually" test.

But these fixes aren't enough to fix #14981 because they'll end up seeing an
error once a second complaining about the missing key, so we also need to fix
keyring GC so the keys can be removed from the state store. Now we'll store the
key ID used to sign a workload identity in the Allocation, and we'll index the
Allocation table on that so we can track whether any live Allocation was signed
with a particular key ID.
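A sketch of the refactored replication loop shape, with `replicateKey` as an illustrative stand-in for the per-key helper; errors are logged and the loop continues rather than breaking early:

```go
package main

import (
	"errors"
	"fmt"
	"log"
)

type keyMeta struct{ ID string }

// replicateKey stands in for the per-key helper: fetch the key from the
// leader or a peer and install it locally, returning any error.
func replicateKey(k keyMeta) error {
	if k.ID == "k-missing" {
		return errors.New("key not found on leader or any peer")
	}
	return nil
}

// replicationPass logs failures and keeps going, so one missing key no
// longer prevents keys later in the sort order from replicating.
func replicationPass(keys []keyMeta) {
	for _, k := range keys {
		if err := replicateKey(k); err != nil {
			log.Printf("failed to replicate key %q: %v", k.ID, err)
			continue
		}
		fmt.Println("replicated", k.ID)
	}
}

func main() {
	replicationPass([]keyMeta{{ID: "k-missing"}, {ID: "k-2"}, {ID: "k-3"}})
}
```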
2022-11-01 15:00:50 -04:00
Tim Gross dab9388c75
refactor eval delete safety check (#15070)
The `Eval.Delete` endpoint has a helper that takes a list of jobs and allocs and
determines whether the eval associated with those is safe to delete (based on
their state). Filtering improvements to the `Eval.Delete` endpoint are going to
need this check to run in the state store itself for consistency.

Refactor to push this check down into the state store to keep the eventual diff
for that work reasonable.
2022-10-28 09:10:33 -04:00
Tim Gross 9c37a234e7
test: refactor EvalEndpoint_Delete (#15065)
While working on filtering improvements to the `Eval.Delete` endpoint I noticed
that this test was going to need to expand significantly and needed some
refactoring to make that work nicely. In order to reduce the size of the
eventual diff, I've pulled this refactoring out into its own changeset.
2022-10-27 15:29:22 -04:00
hc-github-team-nomad-core 38b1c8a22a Prepare for next release 2022-10-27 13:08:05 -04:00
hc-github-team-nomad-core fbef8881cd Generate files for 1.4.2 release 2022-10-27 13:08:05 -04:00
Tim Gross 9d906d4632 variables: fix filter on List RPC
The List RPC correctly authorized against the prefix argument. But when
filtering results underneath the prefix, it only checked authorization for
standard ACL tokens and not Workload Identity. This results in WI tokens being
able to read List results (metadata only: variable paths and timestamps) for
variables under the `nomad/` prefix that belong to other jobs in the same
namespace.

Fixes the filtering and splits the `handleMixedAuthEndpoint` function into
separate authentication and authorization steps so that we don't need to
re-verify the claim token on each filtered object.

Also includes:
* update semgrep rule for mixed auth endpoints
* variables: List returns empty set when all results are filtered
2022-10-27 13:08:05 -04:00
James Rasell da5069bded event stream: ensure token expiry is correctly checked for subs.
This change ensures that a token's expiry is checked before every
event is sent to the caller. Previously, a token could still be
used to listen for events after it had expired, as long as the
subscription was made while it was unexpired. This would last until
the token was garbage collected from state.

The check occurs within the RPC as there is currently no state
update when a token expires.
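An illustrative sketch of re-checking token expiry before each event send, with stand-in `token` and `event` types rather than the real event stream structures:

```go
package main

import (
	"fmt"
	"time"
)

type token struct {
	ExpirationTime *time.Time // nil means the token never expires
}

func (t *token) expired(now time.Time) bool {
	return t.ExpirationTime != nil && now.After(*t.ExpirationTime)
}

type event struct{ Payload string }

// stream sends events until the subscriber's token expires; the expiry is
// re-checked before every send rather than only at subscription time.
func stream(tok *token, events <-chan event, send func(event)) error {
	for ev := range events {
		if tok.expired(time.Now()) {
			return fmt.Errorf("ACL token expired, closing subscription")
		}
		send(ev)
	}
	return nil
}

func main() {
	exp := time.Now().Add(50 * time.Millisecond)
	tok := &token{ExpirationTime: &exp}

	events := make(chan event, 2)
	events <- event{Payload: "first"}
	close(events)

	err := stream(tok, events, func(ev event) { fmt.Println("sent", ev.Payload) })
	fmt.Println("stream ended:", err)
}
```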
2022-10-27 13:08:05 -04:00
Tim Gross aca95c0bc6
keyring: remove root key GC (#15034) 2022-10-25 17:06:18 -04:00
Tim Gross c45d9a9ea8
keyring: refactor to hold locks for less time (#15026)
Follow-up from https://github.com/hashicorp/nomad/pull/14987/files#r1003611644

We don't need to hold the lock when querying the state store, so move the
read-lock to the interior of the `activeKeySet` function.
2022-10-24 16:23:44 -04:00
Tim Gross b9922631bd
keyring: fix missing GC config, don't rotate on manual GC (#15009)
The configuration knobs for root keyring garbage collection are present in the
consumer and present in the user-facing config, but we missed the spot where we
copy from one to the other. Fix this so that users can set their own thresholds.

The root key is automatically rotated every ~30d, but the function that does
both rotation and key GC was wired up such that `nomad system gc` caused an
unexpected key rotation. Split this into two functions so that `nomad system gc`
cleans up old keys without forcing a rotation, which will be done periodically
or by the `nomad operator root keyring rotate` command.
2022-10-24 08:43:42 -04:00
Tim Gross 3a811ac5e7
keyring: fixes for keyring replication on cluster join (#14987)
* keyring: don't unblock early if rate limit burst exceeded

The rate limiter returns an error and unblocks early if its burst limit is
exceeded (unless the burst limit is Inf). Ensure we're not unblocking early,
otherwise we'll only slow down the cases where we're already pausing to make
external RPC requests.

* keyring: set MinQueryIndex on stale queries

When keyring replication makes a stale query to non-leader peers to find a key
the leader doesn't have, we need to make sure the peer we're querying has had a
chance to catch up to the most current index for that key. Otherwise it's
possible for newly-added servers to query another newly-added server and get a
non-error nil response for that key ID.

Ensure that we're setting the correct reply index in the blocking query.

Note that the "not found" case does not return an error, just an empty key. So
as a belt-and-suspenders measure, update the handling of empty responses so that we
don't break the loop early if we hit a server that doesn't have the key.

* test for adding new servers to keyring

* leader: initialize keyring after we have consistent reads

Wait until we're sure the FSM is current before we try to initialize the
keyring.

Also, if a key is rotated immediately following a leader election, plans that
are in-flight may get signed before the new leader has the key. Allow for a
short timeout-and-retry to avoid rejecting plans
2022-10-21 12:33:16 -04:00
James Rasell 206fb04dc1
acl: allow tokens to read policies linked via roles to the token. (#14982)
ACL tokens are granted permissions either by direct policy links
or via ACL role links. Callers should therefore be able to read
policies directly assigned to the caller token or indirectly by
ACL role links.
2022-10-21 09:05:17 +02:00
James Rasell 215b4e7e36
acl: add ACL roles to event stream topic and resolve policies. (#14923)
This change adds ACL role creation and deletion to the event
stream. It is exposed as a single topic with two types; the filter
is primarily the role ID but also includes the role name.

While conducting this work it was also discovered that the event
stream has its own ACL resolution logic. This did not account for
ACL tokens which included role links, or tokens with expiry times.
ACL role links are now resolved to their policies and tokens are
checked for expiry correctly.
2022-10-20 09:43:35 +02:00
James Rasell 8e25048f3d
acl: gate ACL role write and delete RPC usage on v1.4.0 or greater. (#14908) 2022-10-18 16:46:11 +02:00
James Rasell 9923f9e6f3
nnsd: gate registration write & delete RPC use on v1.3.0 or greater. (#14924) 2022-10-18 15:30:28 +02:00
Tim Gross 3c78980b78
make version checks specific to region (1.4.x) (#14912)
* One-time tokens are not replicated between regions, so we don't want to enforce
  the version check across all of serf, just members in the same region.
* Scheduler: disconnected client handling is specific to a single region, so we
  don't want to enforce the version check across all of serf, just members in
  the same region.
* Variables: enforce version check in Apply RPC
* Cleans up a bunch of legacy checks.

This changeset is specific to 1.4.x and the changes for previous versions of
Nomad will be manually backported in a separate PR.
2022-10-17 16:23:51 -04:00
Tim Gross c721ce618e
keyring: filter by region before checking version (#14901)
In #14821 we fixed a panic that can happen if a leadership election happens in
the middle of an upgrade. That fix checks that all servers are at the minimum
version before initializing the keyring (which blocks evaluation processing
during the upgrade). But the check we implemented is over the serf membership,
which includes servers in any federated regions, which don't necessarily have
the same upgrade cycle.

Filter the version check by the leader's region.

Also bump up log levels of major keyring operations
2022-10-17 13:21:16 -04:00
Seth Hoenig 1593963cd1
servicedisco: implicit constraint for nomad v1.4 when using nsd checks (#14868)
This PR adds a jobspec mutator to constrain jobs making use of checks
in the nomad service provider to nomad clients of at least v1.4.0.

Before, in a mixed client version cluster it was possible to submit
an NSD job making use of checks and for that job to land on an older,
incompatible client node.

Closes #14862
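A simplified sketch of such a mutator, with flattened illustrative types (real services live under task groups); the constraint fields mirror Nomad's `LTarget`/`Operand`/`RTarget` naming, but the helper itself is hypothetical:

```go
package main

import "fmt"

type Check struct{ Name string }

type Service struct {
	Provider string
	Checks   []Check
}

type Constraint struct {
	LTarget string
	Operand string
	RTarget string
}

type Job struct {
	Services    []Service
	Constraints []Constraint
}

// implicitNSDChecksConstraint adds a client-version constraint when any
// service uses the nomad provider with checks defined.
func implicitNSDChecksConstraint(j *Job) {
	for _, s := range j.Services {
		if s.Provider == "nomad" && len(s.Checks) > 0 {
			j.Constraints = append(j.Constraints, Constraint{
				LTarget: "${attr.nomad.version}",
				Operand: "semver",
				RTarget: ">= 1.4.0",
			})
			return
		}
	}
}

func main() {
	job := &Job{Services: []Service{{Provider: "nomad", Checks: []Check{{Name: "up"}}}}}
	implicitNSDChecksConstraint(job)
	fmt.Printf("%+v\n", job.Constraints)
}
```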
2022-10-11 08:21:42 -05:00
Seth Hoenig 69ced2a2bd
services: remove assertion on 'task' field being set (#14864)
This PR removes the assertion around when the 'task' field of
a check may be set. Starting in Nomad 1.4 we automatically set
the task field on all checks in support of the NSD checks feature.

This is causing validation problems elsewhere, e.g. when a group
service using the Consul provider sets 'task' it will fail
validation that worked previously.

The assertion that 'task' be left unset was only about making sure
job submitters weren't expecting some behavior, but in practice it
is causing bugs now that we need the task field for more than it
was originally added for.

We can simply update the docs, noting when the task field set by
job submitters actually has value.
2022-10-10 13:02:33 -05:00
Seth Hoenig 5e38a0e82c
cleanup: rename Equals to Equal for consistency (#14759) 2022-10-10 09:28:46 -05:00
Hemanth Krishna e516fc266f
enhancement: UpdateTask when Task is waiting for ShutdownDelay (#14775)
Signed-off-by: Hemanth Krishna <hkpdev008@gmail.com>
2022-10-06 16:33:28 -04:00
Giovani Avelar a625de2062
Allow specification of a custom job name/prefix for parameterized jobs (#14631) 2022-10-06 16:21:40 -04:00
Tim Gross 80ec5e1346
fix panic from keyring raft entries being written during upgrade (#14821)
During an upgrade to Nomad 1.4.0, if a server running 1.4.0 becomes the leader
before one of the 1.3.x servers, the old server will crash because the keyring
is initialized and writes a raft entry that the 1.3.x servers cannot apply.

Wait until all members are on a version that supports the keyring before
initializing it.
2022-10-06 12:47:02 -04:00
Michael Schurter 0df5c7d5ae
test: fix flaky test (#14713)
Need to wait for Stop evals to be processed before you can expect
subsequent RPCs to see the alloc's DesiredStatus=stop.
2022-09-27 10:36:16 -07:00
Tim Gross 87681fca68
CSI: ensure initial unpublish state is checkpointed (#14675)
A test flake revealed a bug in the CSI unpublish workflow, where an unpublish
that comes from a client that's successfully done the node-unpublish step will
not have the claim checkpointed if the controller-unpublish step fails. This
will result in a delay in releasing the volume claim until the next GC.

This changeset also ensures we're using a new snapshot after each write to raft,
and fixes two timing issues in test where either the volume watcher can
unpublish before the unpublish RPC is sent or we don't wait long enough in
resource-restricted environements like GHA.
2022-09-27 08:43:45 -04:00
Seth Hoenig 87ec5fdee5
deps: update set and test (#14680)
This PR updates go-set and shoenig/test, which introduced some breaking
API changes.
2022-09-26 08:28:03 -05:00
Seth Hoenig ae5b800085
cleanup: rearrange mocks package (#14660)
This PR splits up the nomad/mock package into more files. Specific features
that have a lot of mocks get their own file (e.g. acl, variables, csi, connect, etc.).
Above that, functions that return jobs/allocs/nodes are in the job/alloc/node file. And
lastly other mocks/helpers are in mock.go
2022-09-22 13:49:58 -05:00
Derek Strickland 6874997f91
scheduler: Fix bug where the scheduler would treat multiregion jobs as paused for job types that don't use deployments (#14659)
* scheduler: Fix bug where the scheduler would treat multiregion jobs as paused for job types that don't use deployments

Co-authored-by: Tim Gross <tgross@hashicorp.com>
2022-09-22 14:31:27 -04:00
Florian Apolloner f66d61e17f
consul: Removed unused ConsulUsage.Kinds. (#11303) 2022-09-22 10:07:14 -05:00
Jorge Marey 584ddfe859
Add Namespace, Job and Group to envoy stats (#14311) 2022-09-22 10:38:21 -04:00