Add support for using existing Vault auto-auth configurations as the
provider configuration when using Vault's CA provider with AliCloud.
AliCloud requires two extra fields to enable STS (its preferred
auth setup). Our vault-plugin-auth-alicloud package contains a helper
method to generate them, since producing them requires an HTTP call to
a faked endpoint proxy that returns the URL and headers, base64-encoded.
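As a hedged illustration, the resulting login call looks roughly like the sketch below. The field names follow Vault's AliCloud auth API; the helper that generates the base64 values lives in vault-plugin-auth-alicloud and is elided here.

```go
package vaultauth

import vaultapi "github.com/hashicorp/vault/api"

// aliCloudLogin sends the two extra STS fields to Vault's AliCloud auth
// method. b64URL and b64Headers are assumed to be the base64-encoded
// GetCallerIdentity URL and signed request headers produced by the
// vault-plugin-auth-alicloud helper.
func aliCloudLogin(client *vaultapi.Client, role, b64URL, b64Headers string) (*vaultapi.Secret, error) {
	return client.Logical().Write("auth/alicloud/login", map[string]interface{}{
		"role":                     role,
		"identity_request_url":     b64URL,
		"identity_request_headers": b64Headers,
	})
}
```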
* Update the consul-k8s CLI docs for the new `proxy log` subcommand
* Updated consul-k8s docs from PR feedback
* Added proxy log command to release notes
* Add some basic ui improvements for api-gateway services
* Add changelog entry
* Use ternary for null check
* Update gateway doc links
* Rename changelog entry for new PR
* Fix test
Receiving an "acl not found" error from an RPC in the agent cache and the
streaming/event components will cause any request loops to cease under the
assumption that they will never work again if the token was destroyed. This
prevents log spam (#14144, #9738).
Unfortunately, due to things like:
- authz requests going to stale servers that may not have witnessed the token
creation yet
- authz requests in a secondary datacenter happening before the tokens get
replicated to that datacenter
- authz requests from a primary TO a secondary datacenter happening before the
tokens get replicated to that datacenter
The caller will get an "acl not found" *before* the token exists, rather than
just after. The machinery added above in the linked PRs will kick in and
prevent the request loop from looping around again once the tokens actually
exist.
For `consul-dataplane` usages, where xDS is served by the Consul servers
rather than the clients, this is ultimately not a problem because in that
scenario the `agent/proxycfg` machinery is on-demand and launched by a new xDS
stream needing data for a specific service in the catalog. If the watching
goroutines are terminated, that ripples down and terminates the xDS stream,
which CDP will eventually re-establish, restarting everything.
For Consul client usages, the `agent/proxycfg` machinery is launched ahead of
time, at service registration (called "local" in some of the proxycfg
machinery), so when the xDS stream comes in the data is already ready to go.
If the watching goroutines terminate, the xDS stream should terminate too, but
there's no mechanism to re-spawn the watching goroutines. If the xDS stream
reconnects it will see no `ConfigSnapshot` and will not get one again until
the client agent is restarted, or the service is re-registered with something
changed in it.
This PR fixes a few things in the machinery:
- there was an inadvertent deadlock in fetching a snapshot from the proxycfg
  machinery via xDS, such that when the watching goroutine terminated the
  snapshots would never be fetched. This caused some of the xDS machinery to
  pause indefinitely and never finish the teardown properly.
- Every 30s we now attempt to re-insert all locally registered services into
  the proxycfg machinery (sketched after this list).
- When services are re-inserted into the proxycfg machinery, we special-case
  "dead" ones and replace them unconditionally rather than conditionally.
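A minimal sketch of that re-sync loop, with hypothetical names (`Manager`, `localServices`, and `reinsert` are illustrative, not the actual proxycfg API):

```go
package proxysync

import (
	"context"
	"time"
)

// Manager is a stand-in for the agent's proxycfg manager; localServices
// and reinsert are illustrative hooks, not the real API.
type Manager struct {
	localServices func() []string
	reinsert      func(svc string)
}

// syncLoop re-inserts all locally registered services every 30s so that
// any whose watching goroutines have died get replaced.
func (m *Manager) syncLoop(ctx context.Context) {
	ticker := time.NewTicker(30 * time.Second)
	defer ticker.Stop()
	for {
		select {
		case <-ctx.Done():
			return
		case <-ticker.C:
			for _, svc := range m.localServices() {
				// "Dead" entries are replaced unconditionally; live ones
				// only when the registration actually changed.
				m.reinsert(svc)
			}
		}
	}
}
```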
Updated the Params field to re-frame it as supporting arguments specific to
the supported Vault Agent auto-auth methods, with links to each method's
"#configuration" section.
Included a callout about the limits on supported parameters.
Adds support for the approle auth method. It only handles using the approle
role/secret to authenticate, and it doesn't support the agent's extra
management configuration options (wrap and delete-after-read), as they are not
required for the auth itself (i.e. they are Vault Agent concerns).
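A minimal sketch of the login this performs, using Vault's Go client (the surrounding auto-auth wiring is omitted):

```go
package vaultauth

import vaultapi "github.com/hashicorp/vault/api"

// approleLogin authenticates with only the role/secret pair; response
// wrapping and delete-after-read are deliberately not handled, as noted
// above.
func approleLogin(client *vaultapi.Client, roleID, secretID string) (*vaultapi.Secret, error) {
	return client.Logical().Write("auth/approle/login", map[string]interface{}{
		"role_id":   roleID,
		"secret_id": secretID,
	})
}
```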
- When an Envoy version is outside the supported range, we now return the Envoy version in use as `major.minor.x`, to indicate that it is at most the minor version that is incompatible.
- When an Envoy version is in the list of unsupported Envoy versions, we return the Envoy version in the error message as `major.minor.patch`, since now the exact version matters.
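Illustratively (hypothetical helper names, not the actual xDS code), the two formats differ like so:

```go
package xdscheck

import "fmt"

// rangeErr: outside the supported range, only major.minor matters, so
// the patch component is masked with "x".
func rangeErr(major, minor int) error {
	return fmt.Errorf("Envoy version %d.%d.x is not within the supported range", major, minor)
}

// exactErr: for an explicitly unsupported release the exact patch
// matters, so report it in full.
func exactErr(major, minor, patch int) error {
	return fmt.Errorf("Envoy version %d.%d.%d is not supported", major, minor, patch)
}
```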
* Fix issue where terminating gateway service resolvers weren't properly cleaned up
* Add integration test for cleaning up resolvers
* Add changelog entry
* Use state test and drop integration test
* Leverage ServiceResolver ConnectTimeout for route timeouts to make TerminatingGateway upstream timeouts configurable
* Regenerate golden files
* Add RequestTimeout field
* Add changelog entry
Adds support for reading a JWT token from a file. It simply reads the file and
sends the JWT along to the Vault login.
It also supports a legacy mode where the JWT string is passed directly, in
which case the path is optional.
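A sketch of that resolution logic, under assumed names (`resolveJWT` and its parameters are illustrative):

```go
package vaultauth

import (
	"errors"
	"fmt"
	"os"
	"strings"
)

// resolveJWT prefers the token file when a path is configured and falls
// back to the legacy inline token otherwise.
func resolveJWT(path, legacyToken string) (string, error) {
	if path == "" {
		if legacyToken == "" {
			return "", errors.New("no JWT file path or token configured")
		}
		return legacyToken, nil // legacy mode: JWT passed directly
	}
	raw, err := os.ReadFile(path)
	if err != nil {
		return "", fmt.Errorf("reading JWT file: %w", err)
	}
	return strings.TrimSpace(string(raw)), nil
}
```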
Does the required dance with the local HTTP endpoint to get the data needed
for the JWT-based auth setup in Azure. Keeps support for a 'legacy' mode where
all login data is passed in via the auth method's parameters.
Also refactored the check for hardcoded /login fields.
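For illustration, the "dance" amounts to calling the Azure Instance Metadata Service (IMDS) on the VM; a hedged sketch follows (the function name and resource parameter are assumptions, not the exact code here):

```go
package vaultauth

import (
	"context"
	"encoding/json"
	"net/http"
)

// fetchAzureJWT asks the local IMDS endpoint for an access token for the
// given resource. The Metadata header is required by IMDS.
func fetchAzureJWT(ctx context.Context, resource string) (string, error) {
	req, err := http.NewRequestWithContext(ctx, http.MethodGet,
		"http://169.254.169.254/metadata/identity/oauth2/token", nil)
	if err != nil {
		return "", err
	}
	q := req.URL.Query()
	q.Set("api-version", "2018-02-01")
	q.Set("resource", resource)
	req.URL.RawQuery = q.Encode()
	req.Header.Set("Metadata", "true")

	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		return "", err
	}
	defer resp.Body.Close()

	var out struct {
		AccessToken string `json:"access_token"`
	}
	if err := json.NewDecoder(resp.Body).Decode(&out); err != nil {
		return "", err
	}
	return out.AccessToken, nil
}
```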
* converted main services page to services overview page
* set up services usage dirs
* added Define Services usage page
* converted health checks everything page to Define Health Checks usage page
* added Register Services and Nodes usage page
* converted Query with DNS to Discover Services and Nodes Overview page
* added Configure DNS Behavior usage page
* added Enable Static DNS Lookups usage page
* added the Enable Dynamic DNS Queries usage page
* added the Configuration dir and overview page - may not need the overview, though
* fixed the nav from previous commit
* added the Services Configuration Reference page
* added Health Checks Configuration Reference page
* updated service defaults configuration entry to new configuration ref format
* fixed some bad links found by checker
* more bad links found by checker
* another bad link found by checker
* fixed cross-links between new topics
* updated links to the new services pages
* fixed bad links in scale file
* tweaks to titles and phrasing
* fixed typo in checks.mdx
* started updating the conf ref to latest template
* update SD conf ref to match latest CT standard
* Apply suggestions from code review
Co-authored-by: Eddie Rowe <74205376+eddie-rowe@users.noreply.github.com>
* remove previous version of the checks page
* fixed cross-links
* Apply suggestions from code review
Co-authored-by: Eddie Rowe <74205376+eddie-rowe@users.noreply.github.com>
---------
Co-authored-by: Eddie Rowe <74205376+eddie-rowe@users.noreply.github.com>
Fixes a regression from #15631 in the output of `consul version` from:

    Consul v1.16.0-dev
    +ent
    Revision 56b86acbe5+CHANGES

to:

    Consul v1.16.0-dev+ent
    Revision 56b86acbe5+CHANGES
* test(gateways): add API Gateway HTTPRoute ParentRef change test
* test(gateways): add checkRouteError helper
* test(gateways): remove EOF check (in CI this sometimes shows up as 'connection reset by peer' instead)
* Update test/integration/consul-container/test/gateways/http_route_test.go
Fixes a regression in #16044.
The `consul acl token read -self` CLI command should not require `-accessor-id`, because the persona invoking it would typically not already know the accessor ID of their own token.
Registering gRPC balancers is thread-unsafe because they are stored in a
global map variable that is accessed without holding a lock. Therefore,
it's expected that balancers are registered _once_ at the beginning of
your program (e.g. in a package `init` function) and certainly not after
you've started dialing connections, etc.
> NOTE: this function must only be called during initialization time
> (i.e. in an init() function), and is not thread-safe.
While this is fine for us in production, it's challenging for tests that
spin up multiple agents in-memory. We currently register a balancer per-
agent which holds agent-specific state that cannot safely be shared.
This commit introduces our own registry that _is_ thread-safe, and
implements the Builder interface such that we can call gRPC's `Register`
method once, on start-up. It uses the same pattern as our resolver
registry where we use the dial target's host (aka "authority"), which is
unique per-agent, to determine which builder to use.
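A hedged sketch of the pattern (the type and method names are illustrative; only `balancer.Register` and the `Builder` interface come from gRPC, and the authority lookup via `Target.URL.Host` assumes a recent grpc-go):

```go
package balancerx

import (
	"sync"

	"google.golang.org/grpc/balancer"
)

// registry is registered with gRPC exactly once; per-agent builders are
// then added under a lock and selected by the dial target's authority
// (host), which is unique per agent.
type registry struct {
	name string

	mu       sync.RWMutex
	builders map[string]balancer.Builder // keyed by authority
}

func (r *registry) Name() string { return r.name }

// Build delegates to the builder registered for this target's authority.
// It assumes a builder was added for that authority before dialing.
func (r *registry) Build(cc balancer.ClientConn, opts balancer.BuildOptions) balancer.Balancer {
	r.mu.RLock()
	defer r.mu.RUnlock()
	return r.builders[opts.Target.URL.Host].Build(cc, opts)
}

// AddBuilder is safe to call at any time, unlike balancer.Register.
func (r *registry) AddBuilder(authority string, b balancer.Builder) {
	r.mu.Lock()
	defer r.mu.Unlock()
	r.builders[authority] = b
}

// Registry is the single instance handed to gRPC.
var Registry = &registry{
	name:     "consul-internal", // illustrative balancer name
	builders: make(map[string]balancer.Builder),
}

func init() {
	// Called once during initialization, as gRPC requires.
	balancer.Register(Registry)
}
```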