Although the metric is defined, no code ever sets its value. The code in
question is genuinely asymmetric: there are three types of object for which
registration can be tracked, but only two for which deregistration can be
tracked.
Having this type live in the agent/consul package makes it difficult to
put anything that relies on token resolution (e.g. the new gRPC services)
in separate packages without introducing import cycles.
For example, if package foo imports agent/consul for the ACLResolveResult
type, then agent/consul cannot import foo to register its service.
We've previously worked around this by wrapping the ACLResolver to
"downgrade" its return type to an acl.Authorizer. Aside from the added
complexity, this also loses the resolved identity information.
In the future, we may want to move the whole ACLResolver into the
acl/resolver package. For now, putting the result type there at least
fixes the immediate import cycle issues.
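As a rough sketch (not the actual code), the shared result type only needs to live in a leaf package that agent/consul can import without anything importing agent/consul back; the identity field shown here is purely illustrative:

```go
// Hypothetical leaf package: nothing here imports agent/consul, so both
// agent/consul and the new gRPC service packages can depend on it without
// forming a cycle.
package resolver

import "github.com/hashicorp/consul/acl"

// Result bundles the authorizer with the identity resolved from the token,
// avoiding the lossy "downgrade" to a bare acl.Authorizer.
type Result struct {
	acl.Authorizer

	// Identity is whatever identity the token resolved to; modeled here as
	// a minimal interface purely for illustration.
	Identity interface{ ID() string }
}
```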
This is only configured in xDS when a service with an L7 protocol is
exported.
They also load any relevant trust bundles for the peered services to
eventually use for L7 SPIFFE validation during mTLS termination.
Adds the merge-central-config query parameter to the /catalog/node-services/:node-name API,
so that the service definitions in the response are merged with central defaults (proxy-defaults/service-defaults).
Also updates the consul connect envoy command to use this option when
retrieving the proxy service details, so that the bootstrap configuration is rendered correctly.
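For illustration, assuming a local agent on the default HTTP address, the new parameter can be passed like this (the node name is a placeholder):

```go
package main

import (
	"fmt"
	"io"
	"net/http"
)

func main() {
	// "node1" is a placeholder node name; the flag-style query parameter asks
	// the agent to merge proxy-defaults/service-defaults into each returned
	// service definition.
	url := "http://127.0.0.1:8500/v1/catalog/node-services/node1?merge-central-config"

	resp, err := http.Get(url)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	body, _ := io.ReadAll(resp.Body)
	fmt.Println(string(body))
}
```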
When our peer deletes the peering, it is locally marked as terminated.
This termination should kick off deleting all imported data, but should
not delete the peering object itself.
Keeping peerings marked as terminated acts as a signal that the action
took place.
Once a peering is marked for deletion, a new leader routine will now
clean up all imported resources and then the peering itself.
A lot of the logic was adapted from the namespace/partition deferred
deletions, but with a handful of simplifications:
- The rate limiting is not configurable.
- Deleting imported nodes/services/checks is done by deleting nodes with
the Txn API. The services and checks are deleted as a side-effect.
- There is no "round rate limiter" like with namespaces and partitions.
This is because peerings are purely local, and deleting a peering in
the datacenter does not depend on deleting data from other DCs like
with WAN-federated namespaces. All rate limiting is handled by the
Raft rate limiter.
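A minimal sketch of that cleanup pass, with hypothetical function parameters standing in for the state store lookups and Raft-applied Txn operations in the real code:

```go
package peering

import "context"

// cleanupDeletedPeering is a sketch of the cleanup pass, not the actual
// routine. The function parameters are illustrative stand-ins; throttling is
// left to the Raft rate limiter rather than anything configurable here.
func cleanupDeletedPeering(
	ctx context.Context,
	listImportedNodes func() ([]string, error),
	deleteNodeTxn func(node string) error,
	deletePeering func() error,
) error {
	nodes, err := listImportedNodes()
	if err != nil {
		return err
	}
	for _, node := range nodes {
		if ctx.Err() != nil {
			return ctx.Err()
		}
		// Deleting a node through the Txn API removes its services and
		// checks as a side effect.
		if err := deleteNodeTxn(node); err != nil {
			return err
		}
	}
	// The peering record itself is deleted only after the imported data.
	return deletePeering()
}
```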
1. Fix a bug where the peering leader routine would not track all active
peerings in the "stored" reconciliation map. This could lead to
tearing down streams where the token was generated, since the
ConnectedStreams() method used for reconciliation returns all streams
and not just the ones initiated by this leader routine.
2. Fix a race where stream contexts were being canceled before
termination messages were being processed by a peer.
Previously the leader routine would tear down streams by canceling
their context right after the termination message was sent. This
context cancelation could be propagated to the server side faster
than the termination message. Now the dialing peer uses CloseSend()
to signal that no more messages will be sent. Eventually the server
peer will read an EOF after receiving and processing the preceding
termination message.
Using CloseSend() alone is not enough to address the issue, since it
doesn't wait for the server peer to finish processing messages.
Because of this, the dialing peer now also reads from the stream
until an error signals that there are no more messages. Receiving an
EOF from our peer indicates that they
processed the termination message and have no additional work to do.
Given that the stream is being closed, all the messages received by
Recv are discarded. We only check for errors to avoid importing new
data.
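A sketch of the dialing side's shutdown sequence under these assumptions; the stream interface and termination payload below are simplified placeholders for the generated gRPC types:

```go
package peering

import (
	"errors"
	"io"
)

// terminationStream is a stand-in for the generated gRPC client stream; only
// the methods used in this sketch are listed.
type terminationStream interface {
	Send(msg any) error
	Recv() (any, error)
	CloseSend() error
}

// endStream sketches the dialing side's shutdown: send the termination
// message, signal that nothing more will be sent, then drain the stream
// until the server closes it. Anything received while draining is discarded
// so no new data is imported.
func endStream(stream terminationStream, termination any) error {
	if err := stream.Send(termination); err != nil {
		return err
	}
	// CloseSend tells the server we are done sending, but does not wait for
	// it to finish processing what was already sent.
	if err := stream.CloseSend(); err != nil {
		return err
	}
	for {
		if _, err := stream.Recv(); err != nil {
			if errors.Is(err, io.EOF) {
				// EOF means the server processed the termination message and
				// has nothing more to do.
				return nil
			}
			return err
		}
	}
}
```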
When deleting a peering we do not want to delete the peering and all
imported data in a single operation, since deleting a large amount of
data at once could overload Consul.
Instead we defer deletion of peerings so that:
1. When a peering deletion request is received via gRPC the peering is
marked for deletion by setting the DeletedAt field.
2. A leader routine will monitor for peerings that are marked for
deletion and kick off a throttled deletion of all imported resources
before deleting the peering itself.
This commit mostly addresses point #1 by modifying the peering service
to mark peerings for deletion. Another key change is to add a
PeeringListDeleted state store function which can return all peerings
marked for deletion. This function is what will be watched by the
deferred deletion leader routine.
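A minimal sketch of the marking step; the record and store shown here are simplified stand-ins, and only PeeringListDeleted is a name taken from the actual change:

```go
package peering

import "time"

// peering is a trimmed-down stand-in for the real peering record.
type peering struct {
	Name string
	// A non-zero DeletedAt marks the peering for deletion; the record itself
	// is not removed yet.
	DeletedAt time.Time
}

// store is illustrative; the real change adds a PeeringListDeleted state
// store function that the leader routine watches to find records in this
// marked-for-deletion state.
type store interface {
	PeeringWrite(p *peering) error
}

// markForDeletion sketches point 1: the gRPC delete handler only stamps
// DeletedAt and writes the record back.
func markForDeletion(s store, p *peering) error {
	p.DeletedAt = time.Now()
	return s.PeeringWrite(p)
}
```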
Previously, imported data would never be deleted. As
nodes/services/checks were registered and deregistered, resources
deleted from the exporting cluster would accumulate in the importing
cluster.
This commit makes updates to replication so that whenever an update is
received for a service name we reconcile what was present in the catalog
against what was received.
This handleUpdateService method can handle both updates and deletions.
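The reconciliation itself boils down to a set difference; a simplified sketch, with identifiers and catalog access as placeholders:

```go
package peering

// reconcile sketches the idea behind handleUpdateService: anything currently
// in the local catalog for a service name that is absent from the incoming
// snapshot must have been deleted on the exporting side, so it is
// deregistered locally. The string keys and deregister callback are
// simplified placeholders.
func reconcile(stored, received map[string]struct{}, deregister func(id string) error) error {
	for id := range stored {
		if _, ok := received[id]; !ok {
			if err := deregister(id); err != nil {
				return err
			}
		}
	}
	return nil
}
```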
When converting from Consul intentions to xDS RBAC rules, services imported from other peers must encode additional data such as the partition (from the remote cluster) and the trust domain.
This PR updates the PeeringTrustBundle to hold the sending side's local partition as ExportedPartition. It also updates RBAC code to encode SpiffeIDs of imported services with the ExportedPartition and TrustDomain.
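As a sketch, and assuming Consul's usual service SPIFFE URI layout, the imported service's ID can be rebuilt from the trust bundle fields along these lines (the helper name is hypothetical):

```go
package rbac

import "fmt"

// spiffeIDForImportedService is a hypothetical helper showing how the RBAC
// code can rebuild an imported service's SPIFFE ID: the trust domain and the
// exporting side's local partition (ExportedPartition) come from the
// PeeringTrustBundle rather than from local configuration.
func spiffeIDForImportedService(trustDomain, exportedPartition, namespace, datacenter, service string) string {
	return fmt.Sprintf("spiffe://%s/ap/%s/ns/%s/dc/%s/svc/%s",
		trustDomain, exportedPartition, namespace, datacenter, service)
}
```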
Mesh gateways can use hostnames in their tagged addresses (#7999). This is useful
if you were to expose a mesh gateway using a cloud networking load balancer appliance
that gives you a DNS name but no reliable static IPs.
Envoy cannot accept hostnames via EDS and those must be configured using CDS.
There was already logic for this when configuring gateways elsewhere in the code, but
given the illusions in play for peering, the downstream of a peered service wasn't aware
that it should be doing the same.
Also:
- ensure that we always try to use wan-like addresses to cross peer boundaries.
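The core decision is small; a sketch of the check (the helper name is hypothetical):

```go
package xds

import "net"

// useDNSCluster sketches the decision for peered upstreams reached through a
// mesh gateway: Envoy cannot receive hostnames via EDS, so when the
// gateway's wan-like tagged address is not an IP the cluster has to be
// defined through CDS with DNS-based discovery instead of an EDS load
// assignment.
func useDNSCluster(address string) bool {
	return net.ParseIP(address) == nil
}
```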
Require use of mesh gateways in order for service mesh data plane
traffic to flow between peers.
This also adds plumbing for Envoy integration tests involving peers, and
one starter peering test.
* when enterprise meta is a wildcard, assume it's a service intention
* fix partition and namespace
* move kind outside the loops
* get the kind check outside the loop and add a comment
Co-authored-by: github-team-consul-core <github-team-consul-core@hashicorp.com>
* update gateway-services table with endpoints
* fix failing test
* remove unneeded config in test
* rename "endpoint" to "destination"
* more endpoint renaming to destination in tests
* update isDestination based on service-defaults config entry creation
* use a 3-state kind to be able to set the kind to unknown (when neither a service nor a destination exists)
* set unknown state to empty to avoid modifying a lot of tests
* fix logic to set the kind correctly on CRUD
* fix failing tests
* add missing tests and fix service delete
* fix failing test
* Apply suggestions from code review
Co-authored-by: Dan Stough <dan.stough@hashicorp.com>
* fix a bug with kind and add relevant test
* fix compile error
* fix failing tests
* add kind to clone
* fix failing tests
* fix failing tests in catalog endpoint
* fix service dump test
* Apply suggestions from code review
Co-authored-by: Dan Stough <dan.stough@hashicorp.com>
* remove duplicate tests
* first draft of destinations intention in connect proxy
* remove ServiceDestinationList
* fix failing tests
* fix agent/consul failing tests
* change to filter intentions in the state store instead of adding a field.
* fix failing tests
* fix comment
* fix comments
* store service kind destination and add relevant tests
* changes based on review
* filter on destinations when querying source match
* change state store API to get an IntentionTarget parameter
* add intentions tests
* add destination upstream endpoint
* fix failing test
* fix failing test and a bug with wildcard intentions
* fix failing test
* Apply suggestions from code review
Co-authored-by: alex <8968914+acpana@users.noreply.github.com>
* add missing test and clarify doc
* fix style
* gofmt intention.go
* fix merge introduced issue
Co-authored-by: Dan Stough <dan.stough@hashicorp.com>
Co-authored-by: alex <8968914+acpana@users.noreply.github.com>
Co-authored-by: github-team-consul-core <github-team-consul-core@hashicorp.com>
* Apply suggestions from code review
Co-authored-by: alex <8968914+acpana@users.noreply.github.com>
* fix style
* Apply suggestions from code review
Co-authored-by: Dan Stough <dan.stough@hashicorp.com>
* rename destinationType to targetType.
Co-authored-by: Dan Stough <dan.stough@hashicorp.com>
Co-authored-by: alex <8968914+acpana@users.noreply.github.com>
Co-authored-by: github-team-consul-core <github-team-consul-core@hashicorp.com>
Mesh gateways will now enable TCP connections with SNI names that include peering information so that those connections may be proxied.
Note: this does not change the callers to use these mesh gateways.
This is the OSS portion of enterprise PR 1994
Rather than directly interrogating the agent-local state for HTTP
checks using the `HTTPCheckFetcher` interface, we now rely on the
config snapshot containing the checks.
This reduces the number of changes required to support server xDS
sessions.
It's not clear why the fetching approach was introduced in
931d167ebb2300839b218d08871f22323c60175d.
Envoy's SPIFFE certificate validation extension allows us to
validate against different root certificates depending on the trust
domain of the dialing proxy.
If there are any trust bundles from peers in the config snapshot then we
use the SPIFFE validator as the validation context, rather than the
usual TrustedCA.
The injected validation config includes the local root certificates as
well.
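A sketch of what that wiring can look like with go-control-plane types; the helper below is illustrative rather than Consul's actual function:

```go
package xds

import (
	core "github.com/envoyproxy/go-control-plane/envoy/config/core/v3"
	tls "github.com/envoyproxy/go-control-plane/envoy/extensions/transport_sockets/tls/v3"
	"google.golang.org/protobuf/types/known/anypb"
)

// makeSpiffeValidationContext is an illustrative helper, not Consul's actual
// one: build one trust-domain entry per bundle (local roots included) and
// plug the SPIFFE validator in as the custom validator, replacing the usual
// TrustedCA. rootsByTrustDomain maps trust domain name to PEM-encoded roots.
func makeSpiffeValidationContext(rootsByTrustDomain map[string]string) (*tls.CertificateValidationContext, error) {
	cfg := &tls.SPIFFECertValidatorConfig{}
	for td, pem := range rootsByTrustDomain {
		cfg.TrustDomains = append(cfg.TrustDomains, &tls.SPIFFECertValidatorConfig_TrustDomain{
			Name: td,
			TrustBundle: &core.DataSource{
				Specifier: &core.DataSource_InlineString{InlineString: pem},
			},
		})
	}
	typed, err := anypb.New(cfg)
	if err != nil {
		return nil, err
	}
	return &tls.CertificateValidationContext{
		CustomValidatorConfig: &core.TypedExtensionConfig{
			Name:        "envoy.tls.cert_validator.spiffe",
			TypedConfig: typed,
		},
	}, nil
}
```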
There are a handful of changes in this commit:
* When querying trust bundles for a service we need to be able to
specify the namespace of the service.
* The endpoint needs to track the index because the cache watches use
it.
* Extracted the bulk of the endpoint's logic to a state store function
so that index tracking could be tested more easily.
* Removed check for service existence, deferring that sort of work to ACL authz
* Added the cache type
Given that the exported-services config entry can use wildcards, the
precedence for wildcards is handled as with intentions. The most exact
match is the match that applies for any given service. We do not take
the union of all that apply.
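A simplified sketch of that precedence rule, with placeholder types:

```go
package configentry

// exportRule is a simplified placeholder for one service entry in an
// exported-services config entry.
type exportRule struct {
	ServiceName string // "*" for the wildcard entry
	Consumers   []string
}

// pickExportedServiceRule sketches the precedence described above: the most
// exact match is the only one that applies, so an entry naming the service
// wins over the wildcard, and matches are never unioned.
func pickExportedServiceRule(rules []exportRule, service string) *exportRule {
	var wildcard *exportRule
	for i := range rules {
		switch rules[i].ServiceName {
		case service:
			return &rules[i]
		case "*":
			wildcard = &rules[i]
		}
	}
	return wildcard // nil when nothing exports this service
}
```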
Another update was made to reflect that only one
exported-services config entry applies to any given service in a
partition. This is a pre-existing constraint that gets enforced by
the Normalize() method on that config entry type.
For mTLS to work between two proxies in peered clusters with different root CAs,
proxies need to configure their outbound listener to use different root certificates
for validation.
Up until peering was introduced, proxies would only ever use one set of root certificates
to validate all mesh traffic, both inbound and outbound. Now an upstream proxy
may have a leaf certificate signed by a CA that's different from the dialing proxy's.
This PR makes changes to proxycfg and xds so that the upstream TLS validation
uses different root certificates depending on which cluster is being dialed.
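The selection itself is straightforward; a sketch with illustrative types and names:

```go
package proxycfg

// rootsForUpstream sketches the selection this change introduces: when the
// upstream being dialed lives in a peered cluster, its leaf certificate is
// validated against that peer's trust bundle; otherwise the local roots used
// for in-cluster mesh traffic still apply.
func rootsForUpstream(peerName, localRootsPEM string, peerBundlePEM map[string]string) string {
	if peerName != "" {
		if pem, ok := peerBundlePEM[peerName]; ok {
			return pem
		}
	}
	return localRootsPEM
}
```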
This is the OSS portion of enterprise PRs 1904, 1905, 1906, 1907, 1949,
and 1971.
It replaces the proxycfg manager's direct dependency on the agent cache
with interfaces that will be implemented differently when serving xDS
sessions from a Consul server.
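A sketch of the kind of interface involved, not the actual one:

```go
package proxycfg

import "context"

// DataSource sketches the sort of interface that replaces the manager's
// direct dependency on the agent cache: anything that can deliver updates
// for a request onto a channel can back proxycfg, whether that is the agent
// cache today or server-local watches when xDS sessions are served by a
// Consul server. The exact shape is illustrative.
type DataSource[Req, Update any] interface {
	Notify(ctx context.Context, req Req, correlationID string, ch chan<- Update) error
}
```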