Upcoming work to instrument the rate of RPC requests by consumer (and eventually
rate limit) requires that we thread the `RPCContext` through all RPC
handlers so that we can access the underlying connection. This changeset adds
the context everywhere we initially intend to support it, intentionally excluding
streaming RPCs and client RPCs.
To improve the ergonomics of adding the context everywhere it's needed and to
clarify the requirements of dynamic vs. static handlers, I've also done a good
bit of refactoring here:
* canonicalized the RPC handler fields so they're as close to identical as
possible without introducing unused fields (i.e. I didn't add loggers if the
handler doesn't use them already).
* canonicalized the imports in the handler files.
* added a `NewExampleEndpoint` function for each handler that ensures we're
constructing the handlers with the required arguments (a rough sketch of the
pattern follows this list).
* reordered the registration in server.go to match the order of the files (to
make it easier to see if we've missed one), and added a bunch of commentary
there as to what the difference between static and dynamic handlers is.
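For illustration, here is a minimal sketch of the constructor pattern; the `Example`, `Server`, and `RPCContext` types below are simplified stand-ins rather than the real handler code:

```go
package example

// RPCContext is a stand-in for the per-connection context being threaded
// through the handlers; in practice it carries the underlying connection so
// requests can be attributed (and eventually rate limited) by consumer.
type RPCContext struct{}

// Server is a placeholder for the Nomad server the handler delegates to.
type Server struct{}

// Example is a static RPC handler with the canonical field set: the server
// and the per-connection context (a logger only where a handler already
// uses one).
type Example struct {
	srv *Server
	ctx *RPCContext
}

// NewExampleEndpoint constructs the handler with its required arguments so
// a handler can't be registered without the context it needs.
func NewExampleEndpoint(srv *Server, ctx *RPCContext) *Example {
	return &Example{srv: srv, ctx: ctx}
}
```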
This PR is a continuation of #14917, where we missed the ipv6 cases.
Consul auto-inserts tagged_addresses for keys
- lan_ipv4
- wan_ipv4
- lan_ipv6
- wan_ipv6
even though the service registration coming from Nomad does not contain such
elements. When computing the differential between the services Nomad expects to
be registered and the services actually registered in Consul, we must first
purge these automatically inserted tagged_addresses if they do not exist in
the Nomad view of the Consul service.
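As a rough sketch (not the exact Nomad code), the pruning step looks something like the following, using the Consul API's `ServiceAddress` type; the function name is illustrative:

```go
package example

import "github.com/hashicorp/consul/api"

// pruneAutoTaggedAddresses drops the tagged_addresses entries that Consul
// injects on its own (lan_ipv4, wan_ipv4, lan_ipv6, wan_ipv6) unless Nomad's
// view of the service also contains them, so they never show up as a
// spurious difference.
func pruneAutoTaggedAddresses(consulTagged, nomadTagged map[string]api.ServiceAddress) map[string]api.ServiceAddress {
	auto := []string{"lan_ipv4", "wan_ipv4", "lan_ipv6", "wan_ipv6"}

	pruned := make(map[string]api.ServiceAddress, len(consulTagged))
	for k, v := range consulTagged {
		pruned[k] = v
	}

	for _, key := range auto {
		if _, ok := nomadTagged[key]; !ok {
			delete(pruned, key)
		}
	}
	return pruned
}
```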
Currently, the CRUD code that operates on SSO auth methods does not return the created or updated object upon creation/update. This is bad UX and inconsistent behavior compared to other ACL objects like roles, policies, or tokens.
This PR fixes it.
Relates to #13120
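A hedged sketch of the shape of the change described above, with simplified stand-in types rather than the real request/response structs:

```go
package example

// ACLAuthMethod is a simplified stand-in for the SSO auth method object.
type ACLAuthMethod struct {
	Name string
	Type string
}

// ACLAuthMethodUpsertRequest carries the methods to create or update.
type ACLAuthMethodUpsertRequest struct {
	AuthMethods []*ACLAuthMethod
}

// ACLAuthMethodUpsertResponse now echoes the stored objects back to the
// caller, matching the behavior of roles, policies, and tokens.
type ACLAuthMethodUpsertResponse struct {
	AuthMethods []*ACLAuthMethod
}
```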
This PR adds a secondary path for cleaning up iptables created for an allocation
when the normal CNI library fails to do so. This typically happens when the state
of the pause container is unexpected - e.g. deleted out of band from Nomad. Before,
the iptables rules would be leaked which could lead to unexpected nat routing
behavior later on (in addition to leaked resources). With this change, we scan
for the rules created on behalf of the allocation being GC'd and delete them.
Fixes #6385
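A simplified sketch of the fallback path, using github.com/coreos/go-iptables; the chain-matching rule here is an assumption for illustration, not the exact heuristic Nomad uses:

```go
package example

import (
	"strings"

	"github.com/coreos/go-iptables/iptables"
)

// cleanupAllocIptables removes nat chains left behind for an allocation when
// the CNI library could not clean them up itself (e.g. the pause container
// was deleted out of band).
func cleanupAllocIptables(allocID string) error {
	ipt, err := iptables.New()
	if err != nil {
		return err
	}

	chains, err := ipt.ListChains("nat")
	if err != nil {
		return err
	}

	for _, chain := range chains {
		// Assumption for illustration: chains created on behalf of the alloc
		// are identifiable from the alloc ID.
		if !strings.Contains(chain, allocID) {
			continue
		}
		// Drop the jump from POSTROUTING, then flush and delete the chain.
		_ = ipt.Delete("nat", "POSTROUTING", "-j", chain)
		if err := ipt.ClearChain("nat", chain); err != nil {
			return err
		}
		if err := ipt.DeleteChain("nat", chain); err != nil {
			return err
		}
	}
	return nil
}
```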
* Top nav auth dropdown (#15055)
* Basic dropdown styles
* Some cleanup
* delog
* Default nomad hover state styles
* Component separation-of-concerns and acceptance tests for auth dropdown
* lintfix
* [ui, sso] Handle token expiry 500s (#15073)
* Handle error states generally
* Dont direct, just redirect
* no longer need explicit error on controller
* Redirect on token-doesnt-exist
* Forgot to import our time lib
* Linting on _blank
* Redirect tests
* changelog
* [ui, sso] warn user about pending token expiry (#15091)
* Handle error states generally
* Dont direct, just redirect
* no longer need explicit error on controller
* Linting on _blank
* Custom notification actions and shift the template to within an else block
* Lintfix
* Make the closeAction optional
* changelog
* Add a mirage token that will always expire in 11 minutes
* Test for token expiry with ember concurrency waiters
* concurrency handling for earlier test, and button redirect test
* [ui] if ACLs are disabled, remove the Sign In link from the top of the UI (#15114)
* Remove top nav link if ACLs disabled
* Change to an enabled-by-default model since you get no agent config when ACLs are disabled but you lack a token
* PR feedback addressed; down with double negative conditionals
* lintfix
* ember getter instead of ?.prop
* [SSO] Auth Methods and Mock OIDC Flow (#15155)
* Big ol first pass at a redirect sign in flow
* dont recursively add queryparams on redirect
* Passing state and code qps
* In which I go off the deep end and embed a faux provider page in the nomad ui
* Buggy but self-contained flow
* Flow auto-delay added and a little more polish to resetting token
* secret passing turned to accessor passing
* Handle SSO Failure
* General cleanup and test fix
* Lintfix
* SSO flow acceptance tests
* Percy snapshots added
* Explicitly note the OIDC test route is mirage only
* Handling failure case for complete-auth
* Lintfix
* Tokens page styles (#15273)
* styling and moving columns around
* autofocus and enter press handling
* Styles refined
* Split up manager and regular tests
* Standardizing to a binary status state
* Serialize auth-methods response to use "name" as primary key (#15380)
* Serializer for unique-by-name
* Use @classic because of class extension
* scheduler: create placements for non-register MRD
For multiregion jobs, the scheduler does not create placements on
registration because the deployment must wait for the other regions.
One of these regions will then trigger the deployment to run.
Currently, this is done in the scheduler by considering any eval for a
multiregion job as "paused" since it's expected that another region will
eventually unpause it.
This becomes a problem when evals that are not triggered by a job registration
occur, such as on a node update. These kinds of regional changes do not
have other regions waiting to progress the deployment, and so they never
resulted in placements.
The fix is to create a deployment at job registration time. This
additional piece of state allows the scheduler to differentiate a
multiregion change, where other regions are engaged in the deployment and
no placements are required, from a regional change, where the scheduler
does need to create placements (a rough sketch of this distinction follows
below).
This deployment starts in the new "initializing" status to signal to the
scheduler that it needs to compute the initial deployment state. The
multiregion deployment will wait until this deployment state is
persisted and its status is set to "pending". Without this state
transition it's possible to hit a race condition where the plan applier
and the deployment watcher step on each other and overwrite each other's
changes.
* changelog: add entry for #15325
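A loose sketch of the distinction this extra state enables; the types, statuses, and function below are simplified stand-ins, not the scheduler's actual structures:

```go
package example

const (
	// DeploymentStatusInitializing is the new status a deployment starts in
	// so the scheduler can compute its initial state.
	DeploymentStatusInitializing = "initializing"
	// DeploymentStatusPending is the status the deployment moves to once that
	// state is persisted, while it waits on the other regions.
	DeploymentStatusPending = "pending"
)

// Deployment is a simplified stand-in carrying only what the sketch needs.
type Deployment struct {
	Status        string
	IsMultiregion bool
}

// multiregionRegistration sketches the check: placements are skipped only
// when the change is the initial multiregion registration, signalled by a
// deployment that exists and is still waiting on the other regions. Other
// evals (e.g. node updates) have no such deployment and proceed to create
// placements as usual.
func multiregionRegistration(d *Deployment) bool {
	return d != nil && d.IsMultiregion && d.Status == DeploymentStatusPending
}
```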
When the scheduler checks feasibility of each node, it creates a "stack" which
carries attributes of the job and task group it needs to check feasibility
for. The `system` and `sysbatch` scheduler use a different stack than `service`
and `batch` jobs. This stack was missing the call to set the job ID and
namespace for the CSI check. This prevents CSI volumes from being scheduled for
system jobs whenever the volume is in a non-default namespace.
Set the job ID and namespace to match the generic scheduler.
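A minimal sketch of the fix with simplified types; the real code lives in the scheduler's system stack, and the names here are stand-ins:

```go
package example

// CSIVolumeChecker is a stand-in for the feasibility checker that resolves
// CSI volumes; it needs the job's namespace and ID to look volumes up.
type CSIVolumeChecker struct {
	namespace string
	jobID     string
}

func (c *CSIVolumeChecker) SetNamespace(ns string) { c.namespace = ns }
func (c *CSIVolumeChecker) SetJobID(id string)     { c.jobID = id }

// Job carries only the fields the sketch needs.
type Job struct {
	ID        string
	Namespace string
}

// SystemStack is a stand-in for the stack used by system and sysbatch jobs.
type SystemStack struct {
	taskGroupCSIVolumes *CSIVolumeChecker
}

// SetJob mirrors the generic scheduler: the CSI checker receives the job's
// namespace and ID so volumes in non-default namespaces resolve correctly.
func (s *SystemStack) SetJob(job *Job) {
	s.taskGroupCSIVolumes.SetNamespace(job.Namespace)
	s.taskGroupCSIVolumes.SetJobID(job.ID)
}
```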
This PR modifies the disconnect helper job to run as root, which is necessary
for manipulating iptables as it does. It also re-organizes the final test logic
to wait for the client to re-connect before looking for the replacement (3rd)
allocation, in case that client was needed to run the alloc (also giving the
scheduler more time to do its thing).
It skips the other 3 tests, which fail and I cannot yet figure out what is going on.
Adds @hashicorp/nomad-eng to the codeowners list for the build and release
workflow files, so that we can fix problems that arise without being
bottlenecked on another team.
The `ubuntu-latest` runner has been migrated to Ubuntu 22.04, which doesn't have
all the same multilib packages as 20.04. Although we'll probably want to migrate
eventually, we should ship Nomad 1.4.3 with the same toolchain as we did
previously so that we're not introducing new issues.
* e2e: fixup oversubscription test case for jammy
jammy uses cgroups v2, so we need to look up the max memory limit from the
unified hierarchy format (a rough sketch follows below).
* e2e: set constraint to require cgroups v2 on oversub docker test
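A rough sketch of the cgroups v2 lookup, assuming the cgroup path is already known; the function name is illustrative:

```go
package example

import (
	"os"
	"strconv"
	"strings"
)

// readCgroupV2MemoryMax returns the memory limit in bytes for a cgroup under
// the unified hierarchy, where the limit lives in a single "memory.max" file
// (rather than v1's memory.limit_in_bytes). It returns 0 if the cgroup is
// unlimited ("max").
func readCgroupV2MemoryMax(cgroupPath string) (uint64, error) {
	raw, err := os.ReadFile(cgroupPath + "/memory.max")
	if err != nil {
		return 0, err
	}
	value := strings.TrimSpace(string(raw))
	if value == "max" {
		return 0, nil
	}
	return strconv.ParseUint(value, 10, 64)
}
```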
* client: accommodate Consul 1.14.0 gRPC and agent self changes.
Consul 1.14.0 changed the way in which gRPC listeners are
configured, particularly when using TLS. Prior to the change, a
single listener was responsible for handling plain-text and
encrypted gRPC requests. In 1.14.0 and beyond, separate listeners
will be used for each, defaulting to 8502 and 8503 for plain-text
and TLS respectively.
The change means that Nomad’s Consul Connect integration would not
work when integrated with Consul clusters using TLS and running
1.14.0 or greater.
The Nomad Consul fingerprinter identifies the gRPC port Consul has
exposed using the "DebugConfig.GRPCPort" value from Consul's
"/v1/agent/self" endpoint. In Consul 1.14.0 and greater, this only
represents the plain-text gRPC port, which is likely to be disabled
in clusters running TLS. To fix this issue, Nomad now takes the
Consul version and configured scheme into account and optionally uses
the "DebugConfig.GRPCTLSPort" value from Consul's agent self response
instead (a rough sketch of this selection follows the commit list below).
The "consul_grpc_socket" allocrunner hook has also been updated so
that the fingerprinted gRPC port attribute is passed in. This
provides a better fallback method when the operator does not
configure the "consul.grpc_address" option.
* docs: modify Consul Connect entries to detail 1.14.0 changes.
* changelog: add entry for #15309
* fixup: tidy tests and clean version match from review feedback.
* fixup: use strings tolower func.
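As referenced above, here is a rough sketch of the port selection with simplified inputs; the real fingerprinter reads these values from the agent self payload, and the version comparison here uses github.com/hashicorp/go-version:

```go
package example

import version "github.com/hashicorp/go-version"

// grpcTLSCutoff is the first Consul release that splits plain-text and TLS
// gRPC onto separate listeners (8502 and 8503 by default).
var grpcTLSCutoff = version.Must(version.NewVersion("1.14.0"))

// selectGRPCPort picks between the "DebugConfig.GRPCPort" and
// "DebugConfig.GRPCTLSPort" values from Consul's /v1/agent/self response:
// on 1.14.0+ with a TLS scheme, the TLS port is the one Nomad should use.
func selectGRPCPort(consulVersion *version.Version, scheme string, grpcPort, grpcTLSPort int) int {
	if scheme == "https" && consulVersion.Core().GreaterThanOrEqual(grpcTLSCutoff) {
		return grpcTLSPort
	}
	return grpcPort
}
```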
This PR adds trace logging around the differential done between a Nomad service
registration and its corresponding Consul service registration, in an effort
to shed light on why a service registration request is being made.
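A small sketch of the sort of trace logging this adds, using github.com/hashicorp/go-hclog; the function and field names are illustrative:

```go
package example

import "github.com/hashicorp/go-hclog"

// logSyncReason emits a trace entry for each difference found between the
// Nomad view of a service and what is currently registered in Consul, so
// operators can see why a (re-)registration request is being made.
func logSyncReason(logger hclog.Logger, serviceID string, inSync bool, reason string) {
	logger.Trace("consul service sync check",
		"service_id", serviceID,
		"in_sync", inSync,
		"reason", reason,
	)
}
```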