* endpoints xds cluster configuration
* clusters xds native generation
* resources test fix
* fix reversion in resources_test
* Update agent/proxycfg/api_gateway.go
Co-authored-by: John Maguire <john.maguire@hashicorp.com>
* gofmt
* Modify getReadyUpstreams to filter upstreams by listener (#17410)
Each listener would previously have all upstreams from any route that bound to the listener. This is problematic when a route bound to one listener also binds to other listeners and so includes upstreams for multiple listeners. The list for a given listener would then wind up including upstreams for other listeners.
* Update agent/proxycfg/api_gateway.go
Co-authored-by: Nathan Coleman <nathan.coleman@hashicorp.com>
* Restore import blocking
* Undo removal of unrelated code
---------
Co-authored-by: John Maguire <john.maguire@hashicorp.com>
Co-authored-by: Nathan Coleman <nathan.coleman@hashicorp.com>
This change enables workflows where you are reapplying a resource that should have an owner ref to publish modifications to the resources data without performing a read to figure out the current owner resource incarnations UID.
Basically we want workflows similar to `kubectl apply` or `consul config write` to be able to work seamlessly even for owned resources.
In these cases the users intention is to have the resource owned by the “current” incarnation of the owner resource.
* endpoints xds cluster configuration
* resources test fix
* fix reversion in resources_test
* Update agent/proxycfg/api_gateway.go
Co-authored-by: John Maguire <john.maguire@hashicorp.com>
* gofmt
* Modify getReadyUpstreams to filter upstreams by listener (#17410)
Each listener would previously have all upstreams from any route that bound to the listener. This is problematic when a route bound to one listener also binds to other listeners and so includes upstreams for multiple listeners. The list for a given listener would then wind up including upstreams for other listeners.
* Update agent/proxycfg/api_gateway.go
Co-authored-by: Nathan Coleman <nathan.coleman@hashicorp.com>
* Restore import blocking
* Skip to next route if route has no upstreams
* cleanup
* change set from bool to empty struct
---------
Co-authored-by: John Maguire <john.maguire@hashicorp.com>
Co-authored-by: Nathan Coleman <nathan.coleman@hashicorp.com>
* agent: configure server lastseen timestamp
Signed-off-by: Dan Bond <danbond@protonmail.com>
* use correct config
Signed-off-by: Dan Bond <danbond@protonmail.com>
* add comments
Signed-off-by: Dan Bond <danbond@protonmail.com>
* use default age in test golden data
Signed-off-by: Dan Bond <danbond@protonmail.com>
* add changelog
Signed-off-by: Dan Bond <danbond@protonmail.com>
* fix runtime test
Signed-off-by: Dan Bond <danbond@protonmail.com>
* agent: add server_metadata
Signed-off-by: Dan Bond <danbond@protonmail.com>
* update comments
Signed-off-by: Dan Bond <danbond@protonmail.com>
* correctly check if metadata file does not exist
Signed-off-by: Dan Bond <danbond@protonmail.com>
* follow instructions for adding new config
Signed-off-by: Dan Bond <danbond@protonmail.com>
* add comments
Signed-off-by: Dan Bond <danbond@protonmail.com>
* update comments
Signed-off-by: Dan Bond <danbond@protonmail.com>
* Update agent/agent.go
Co-authored-by: Dan Upton <daniel@floppy.co>
* agent/config: add validation for duration with min
Signed-off-by: Dan Bond <danbond@protonmail.com>
* docs: add new server_rejoin_age_max config definition
Signed-off-by: Dan Bond <danbond@protonmail.com>
* agent: add unit test for checking server last seen
Signed-off-by: Dan Bond <danbond@protonmail.com>
* agent: log continually for 60s before erroring
Signed-off-by: Dan Bond <danbond@protonmail.com>
* pr comments
Signed-off-by: Dan Bond <danbond@protonmail.com>
* remove unneeded todo
* agent: fix error message
Signed-off-by: Dan Bond <danbond@protonmail.com>
---------
Signed-off-by: Dan Bond <danbond@protonmail.com>
Co-authored-by: Dan Upton <daniel@floppy.co>
* Add v1/internal/service-virtual-ip for manually setting service VIPs
* Attach service virtual IP info to compiled discovery chain
* Separate auto-assigned and manual VIPs in response
The grpc resolver implementation is fed from changes to the
router.Router. Within the router there is a map of various areas storing
the addressing information for servers in those areas. All map entries
are of the WAN variety except a single special entry for the LAN.
Addressing information in the LAN "area" are local addresses intended
for use when making a client-to-server or server-to-server request.
The client agent correctly updates this LAN area when receiving lan serf
events, so by extension the grpc resolver works fine in that scenario.
The server agent only initially populates a single entry in the LAN area
(for itself) on startup, and then never mutates that area map again.
For normal RPCs a different structure is used for LAN routing.
Additionally when selecting a server to contact in the local datacenter
it will randomly select addresses from either the LAN or WAN addressed
entries in the map.
Unfortunately this means that the grpc resolver stack as it exists on
server agents is either broken or only accidentally functions by having
servers dial each other over the WAN-accessible address. If the operator
disables the serf wan port completely likely this incidental functioning
would break.
This PR enforces that local requests for servers (both for stale reads
or leader forwarded requests) exclusively use the LAN "area" information
and also fixes it so that servers keep that area up to date in the
router.
A test for the grpc resolver logic was added, as well as a higher level
full-stack test to ensure the externally perceived bug does not return.
* snapshot: some improvments to the snapshot process
Co-authored-by: trujillo-adam <47586768+trujillo-adam@users.noreply.github.com>
Co-authored-by: Chris S. Kim <ckim@hashicorp.com>
UNIX domain socket paths are limited to 104-108 characters, depending on
the OS. This limit was quite easy to exceed when testing the feature on
Kubernetes, due to how proxy IDs encode the Pod ID eg:
metrics-collector-59467bcb9b-fkkzl-hcp-metrics-collector-sidecar-proxy
To ensure we stay under that character limit this commit makes a
couple changes:
- Use a b64 encoded SHA1 hash of the namespace + proxy ID to create a
short and deterministic socket file name.
- Add validation to proxy registrations and proxy-defaults to enforce a
limit on the socket directory length.
Fix multiple issues related to proxycfg health queries.
1. The datacenter was not being provided to a proxycfg query, which resulted in
bypassing agentless query optimizations and using the normal API instead.
2. The health rpc endpoint would return a zero index when insufficient ACLs were
detected. This would result in the agent cache performing an infinite loop of
queries in rapid succession without backoff.
Fix issue with peer stream node cleanup.
This commit encompasses a few problems that are closely related due to their
proximity in the code.
1. The peerstream utilizes node IDs in several locations to determine which
nodes / services / checks should be cleaned up or created. While VM deployments
with agents will likely always have a node ID, agentless uses synthetic nodes
and does not populate the field. This means that for consul-k8s deployments, all
services were likely bundled together into the same synthetic node in some code
paths (but not all), resulting in strange behavior. The Node.Node field should
be used instead as a unique identifier, as it should always be populated.
2. The peerstream cleanup process for unused nodes uses an incorrect query for
node deregistration. This query is NOT namespace aware and results in the node
(and corresponding services) being deregistered prematurely whenever it has zero
default-namespace services and 1+ non-default-namespace services registered on
it. This issue is tricky to find due to the incorrect logic mentioned in #1,
combined with the fact that the affected services must be co-located on the same
node as the currently deregistering service for this to be encountered.
3. The stream tracker did not understand differences between services in
different namespaces and could therefore report incorrect numbers. It was
updated to utilize the full service name to avoid conflicts and return proper
results.
When using vault as a CA and generating the local signing cert, try to
enable the PKI endpoint's auto-tidy feature with it set to tidy expired
issuers.
This adds filtering for service-defaults: consul config list -filter 'MutualTLSMode == "permissive"'.
It adds CLI warnings when the CLI writes a config entry and sees that either service-defaults or proxy-defaults contains MutualTLSMode=permissive, or sees that the mesh config entry contains AllowEnablingPermissiveMutualTLSMode=true.
Partitioned downstreams with peered upstreams could not properly merge central config info (i.e. proxy-defaults and service-defaults things like mesh gateway modes) if the upstream had an empty DestinationPartition field in Enterprise.
Due to data flow, if this setup is done using Consul client agents the field is never empty and thus does not experience the bug.
When a service is registered directly to the catalog as is the case for consul-dataplane use this field may be empty and and the internal machinery of the merging function doesn't handle this well.
This PR ensures the internal machinery of that function is referentially self-consistent.
* Persist HCP management token from server config
We want to move away from injecting an initial management token into
Consul clusters linked to HCP. The reasoning is that by using a separate
class of token we can have more flexibility in terms of allowing HCP's
token to co-exist with the user's management token.
Down the line we can also more easily adjust the permissions attached to
HCP's token to limit it's scope.
With these changes, the cloud management token is like the initial
management token in that iit has the same global management policy and
if it is created it effectively bootstraps the ACL system.
* Update SDK and mock HCP server
The HCP management token will now be sent in a special field rather than
as Consul's "initial management" token configuration.
This commit also updates the mock HCP server to more accurately reflect
the behavior of the CCM backend.
* Refactor HCP bootstrapping logic and add tests
We want to allow users to link Consul clusters that already exist to
HCP. Existing clusters need care when bootstrapped by HCP, since we do
not want to do things like change ACL/TLS settings for a running
cluster.
Additional changes:
* Deconstruct MaybeBootstrap so that it can be tested. The HCP Go SDK
requires HTTPS to fetch a token from the Auth URL, even if the backend
server is mocked. By pulling the hcp.Client creation out we can modify
its TLS configuration in tests while keeping the secure behavior in
production code.
* Add light validation for data received/loaded.
* Sanitize initial_management token from received config, since HCP will
only ever use the CloudConfig.MangementToken.
* Add changelog entry
* Move status condition for invalid certifcate to reference the listener
that is using the certificate
* Fix where we set the condition status for listeners and certificate
refs, added tests
* Add changelog
* Add MaxEjectionPercent to config entry
* Add BaseEjectionTime to config entry
* Add MaxEjectionPercent and BaseEjectionTime to protobufs
* Add MaxEjectionPercent and BaseEjectionTime to api
* Fix integration test breakage
* Verify MaxEjectionPercent and BaseEjectionTime in integration test upstream confings
* Website docs for MaxEjectionPercent and BaseEjection time
* Add `make docs` to browse docs at http://localhost:3000
* Changelog entry
* so that is the difference between consul-docker and dev-docker
* blah
* update proto funcs
* update proto
---------
Co-authored-by: Maliz <maliheh.monshizadeh@hashicorp.com>
* Fix straggler from renaming Register->RegisterTypes
* somehow a lint failure got through previously
* Fix lint-consul-retry errors
* adding in fix for success jobs getting skipped. (#17132)
* Temporarily disable inmem backend conformance test to get green pipeline
* Another test needs disabling
---------
Co-authored-by: John Murret <john.murret@hashicorp.com>