Merge branch 'eculver/envoy-1.19.1' into eculver/remove-envoy-1.15

commit 64f94b10ce

@@ -0,0 +1,3 @@
```release-note:improvement
connect: Add low-level feature to allow an Ingress to retrieve TLS certificates from SDS.
```

@@ -0,0 +1,3 @@
```release-note:bug
ui: Hide the create button for policies/roles/namespaces if the user's token has no write permissions to those areas
```

@@ -0,0 +1,4 @@
```release-note:bug
ui: Ignore reported permissions for the KV area, meaning KV is always enabled
for both read and write access if the HTTP API allows it.
```

@@ -0,0 +1,3 @@
```release-note:bug
ui: Gracefully recover from non-existent DC errors
```

@@ -0,0 +1,3 @@
```release-note:improvement
telemetry: Add new metrics for the count of KV entries in the Consul store.
```

@@ -0,0 +1,3 @@
```release-note:bug
ui: **(Enterprise Only)** Fix saving intentions with namespaced source/destination
```

@@ -0,0 +1,6 @@
```release-note:bug
grpc: strip local ACL tokens from RPCs during forwarding if crossing datacenters
```
```release-note:feature
partitions: allow for partition queries to be forwarded
```

@@ -0,0 +1,3 @@
```release-note:improvement
audit-logging: **(Enterprise Only)** Audit logs will now include select HTTP headers in each log's payload. Those headers are: `Forwarded`, `Via`, `X-Forwarded-For`, `X-Forwarded-Host` and `X-Forwarded-Proto`.
```

@@ -0,0 +1,3 @@
```release-note:bug
connect: Fix upstream listener escape hatch for prepared queries
```

@@ -0,0 +1,4 @@
```release-note:improvement
ui: Add initial support for partitions to intentions
```

@@ -0,0 +1,4 @@
```release-note:improvement
ui: Removed the informational panel from the namespace selector menu when editing
namespaces
```

@@ -0,0 +1,3 @@
```release-note:bug
ui: Don't show a CRD warning for read-only intentions
```

CHANGELOG.md (107 changed lines)
@@ -1,5 +1,80 @@
## UNRELEASED

## 1.11.0-alpha (September 16, 2021)

SECURITY:

* rpc: authorize raft requests [CVE-2021-37219](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-37219) [[GH-10925](https://github.com/hashicorp/consul/issues/10925)]

FEATURES:

* config: add agent config flag for enterprise clients to indicate they wish to join a particular partition [[GH-10572](https://github.com/hashicorp/consul/issues/10572)]
* connect: include optional partition prefixes in SPIFFE identifiers [[GH-10507](https://github.com/hashicorp/consul/issues/10507)]
* partitions: **(Enterprise only)** Adds admin partitions, a new feature to enhance Consul's multitenancy capabilities.
* ui: Add UI support to use Vault as an external source for a service [[GH-10769](https://github.com/hashicorp/consul/issues/10769)]
* ui: Adds a copy button to each composite row in the tokens list page, if Secret ID returns an actual ID [[GH-10735](https://github.com/hashicorp/consul/issues/10735)]

IMPROVEMENTS:

* acl: the replication routine now reports the last error message. [[GH-10612](https://github.com/hashicorp/consul/issues/10612)]
* api: Enable setting query options on agent health and maintenance endpoints. [[GH-10691](https://github.com/hashicorp/consul/issues/10691)]
* checks: add failures_before_warning setting for interval checks. [[GH-10969](https://github.com/hashicorp/consul/issues/10969)]
* config: **(Enterprise Only)** Allow specifying permission mode for audit logs. [[GH-10732](https://github.com/hashicorp/consul/issues/10732)]
* config: add `dns_config.recursor_strategy` flag to control the order in which DNS recursors are queried [[GH-10611](https://github.com/hashicorp/consul/issues/10611)]
* connect/ca: cease including the common name field in generated x509 non-CA certificates [[GH-10424](https://github.com/hashicorp/consul/issues/10424)]
* connect: Support manipulating HTTP headers in the mesh. [[GH-10613](https://github.com/hashicorp/consul/issues/10613)]
* connect: update supported envoy versions to 1.18.4, 1.17.4, 1.16.5 [[GH-10961](https://github.com/hashicorp/consul/issues/10961)]
* debug: Add a new /v1/agent/metrics/stream API endpoint for streaming of metrics [[GH-10399](https://github.com/hashicorp/consul/issues/10399)]
* debug: rename cluster capture target to members, to be more consistent with the terms used by the API. [[GH-10804](https://github.com/hashicorp/consul/issues/10804)]
* structs: prohibit config entries from referencing more than one partition at a time [[GH-10478](https://github.com/hashicorp/consul/issues/10478)]
* telemetry: add a new `agent.tls.cert.expiry` metric for tracking when the Agent TLS certificate expires. [[GH-10768](https://github.com/hashicorp/consul/issues/10768)]
* telemetry: add a new `mesh.active-root-ca.expiry` metric for tracking when the root certificate expires. [[GH-9924](https://github.com/hashicorp/consul/issues/9924)]

DEPRECATIONS:

* config: the `ports.grpc` and `addresses.grpc` configuration settings have been renamed to `ports.xds` and `addresses.xds` to better match their function. [[GH-10588](https://github.com/hashicorp/consul/issues/10588)]

BUG FIXES:

* api: Fix default values used for optional fields in autopilot configuration update (POST to `/v1/operator/autopilot/configuration`) [[GH-10558](https://github.com/hashicorp/consul/issues/10558)] [[GH-10559](https://github.com/hashicorp/consul/issues/10559)]
* api: Revert early out errors from license APIs to allow v1.10+ clients to
  manage licenses on older servers [[GH-10952](https://github.com/hashicorp/consul/issues/10952)]
* check root and intermediate CA expiry before using them to sign a leaf certificate. [[GH-10500](https://github.com/hashicorp/consul/issues/10500)]
* connect/ca: ensure edits to the key type/bits for the connect builtin CA will regenerate the roots [[GH-10330](https://github.com/hashicorp/consul/issues/10330)]
* connect/ca: require new vault mount points when updating the key type/bits for the vault connect CA provider [[GH-10331](https://github.com/hashicorp/consul/issues/10331)]
* dns: return an empty answer when asked for an addr DNS record with a type other than A and AAAA. [[GH-10401](https://github.com/hashicorp/consul/issues/10401)]
* tls: consider presented intermediates during server connection tls handshake. [[GH-10964](https://github.com/hashicorp/consul/issues/10964)]
* use the MaxQueryTime instead of RPCHoldTimeout for blocking RPC queries
  [[GH-8978](https://github.com/hashicorp/consul/pull/8978)]. [[GH-10299](https://github.com/hashicorp/consul/issues/10299)]

## 1.10.3 (September 27, 2021)

FEATURES:

* sso/oidc: **(Enterprise only)** Add support for providing acr_values in OIDC auth flow [[GH-11026](https://github.com/hashicorp/consul/issues/11026)]

IMPROVEMENTS:

* audit-logging: **(Enterprise Only)** Audit logs will now include select HTTP headers in each log's payload. Those headers are: `Forwarded`, `Via`, `X-Forwarded-For`, `X-Forwarded-Host` and `X-Forwarded-Proto`. [[GH-11107](https://github.com/hashicorp/consul/issues/11107)]
* connect: update supported envoy versions to 1.18.4, 1.17.4, 1.16.5 [[GH-10961](https://github.com/hashicorp/consul/issues/10961)]
* telemetry: Add new metrics for the count of KV entries in the Consul store. [[GH-11090](https://github.com/hashicorp/consul/issues/11090)]

BUG FIXES:

* api: Revert early out errors from license APIs to allow v1.10+ clients to
  manage licenses on older servers [[GH-10952](https://github.com/hashicorp/consul/issues/10952)]
* connect: Fix upstream listener escape hatch for prepared queries [[GH-11109](https://github.com/hashicorp/consul/issues/11109)]
* grpc: strip local ACL tokens from RPCs during forwarding if crossing datacenters [[GH-11099](https://github.com/hashicorp/consul/issues/11099)]
* tls: consider presented intermediates during server connection tls handshake. [[GH-10964](https://github.com/hashicorp/consul/issues/10964)]
* ui: **(Enterprise Only)** Fix saving intentions with namespaced source/destination [[GH-11095](https://github.com/hashicorp/consul/issues/11095)]
* ui: Don't show a CRD warning for read-only intentions [[GH-11149](https://github.com/hashicorp/consul/issues/11149)]
* ui: Ensure routing-config page blocking queries are cleaned up correctly [[GH-10915](https://github.com/hashicorp/consul/issues/10915)]
* ui: Ignore reported permissions for the KV area, meaning KV is always enabled
  for both read and write access if the HTTP API allows it. [[GH-10916](https://github.com/hashicorp/consul/issues/10916)]
* ui: Hide the create button for policies/roles/namespaces if the user's token has no write permissions to those areas [[GH-10914](https://github.com/hashicorp/consul/issues/10914)]
* xds: ensure the active streams counters are 64-bit aligned on 32-bit systems [[GH-11085](https://github.com/hashicorp/consul/issues/11085)]
* xds: fixed a bug where Envoy sidecars could enter a state where they failed to receive xds updates from Consul [[GH-10987](https://github.com/hashicorp/consul/issues/10987)]

## 1.10.2 (August 27, 2021)

KNOWN ISSUES:
@@ -202,6 +277,23 @@ NOTES:

* legal: **(Enterprise only)** Enterprise binary downloads will now include a copy of the EULA and Terms of Evaluation in the zip archive

## 1.9.10 (September 27, 2021)

FEATURES:

* sso/oidc: **(Enterprise only)** Add support for providing acr_values in OIDC auth flow [[GH-11026](https://github.com/hashicorp/consul/issues/11026)]

IMPROVEMENTS:

* audit-logging: **(Enterprise Only)** Audit logs will now include select HTTP headers in each log's payload. Those headers are: `Forwarded`, `Via`, `X-Forwarded-For`, `X-Forwarded-Host` and `X-Forwarded-Proto`. [[GH-11107](https://github.com/hashicorp/consul/issues/11107)]
* connect: update supported envoy versions to 1.16.5 [[GH-10961](https://github.com/hashicorp/consul/issues/10961)]
* telemetry: Add new metrics for the count of KV entries in the Consul store. [[GH-11090](https://github.com/hashicorp/consul/issues/11090)]

BUG FIXES:

* tls: consider presented intermediates during server connection tls handshake. [[GH-10964](https://github.com/hashicorp/consul/issues/10964)]
* ui: **(Enterprise Only)** Fix saving intentions with namespaced source/destination [[GH-11095](https://github.com/hashicorp/consul/issues/11095)]

## 1.9.9 (August 27, 2021)

KNOWN ISSUES:
@@ -543,6 +635,21 @@ BUG FIXES:
* telemetry: fixed a bug that caused logs to be flooded with `[WARN] agent.router: Non-server in server-only area` [[GH-8685](https://github.com/hashicorp/consul/issues/8685)]
* ui: show correct datacenter for gateways [[GH-8704](https://github.com/hashicorp/consul/issues/8704)]

## 1.8.16 (September 27, 2021)

FEATURES:

* sso/oidc: **(Enterprise only)** Add support for providing acr_values in OIDC auth flow [[GH-11026](https://github.com/hashicorp/consul/issues/11026)]

IMPROVEMENTS:

* audit-logging: **(Enterprise Only)** Audit logs will now include select HTTP headers in each log's payload. Those headers are: `Forwarded`, `Via`, `X-Forwarded-For`, `X-Forwarded-Host` and `X-Forwarded-Proto`. [[GH-11107](https://github.com/hashicorp/consul/issues/11107)]

BUG FIXES:

* tls: consider presented intermediates during server connection tls handshake. [[GH-10964](https://github.com/hashicorp/consul/issues/10964)]
* ui: **(Enterprise Only)** Fixes a visual issue where namespaces would "double up" in the Source/Destination select menu when creating/editing intentions [[GH-11102](https://github.com/hashicorp/consul/issues/11102)]

## 1.8.15 (August 27, 2021)

KNOWN ISSUES:

@@ -2864,44 +2864,6 @@ func (a *Agent) AdvertiseAddrLAN() string {
	return a.config.AdvertiseAddrLAN.String()
}

// resolveProxyCheckAddress returns the best address to use for a TCP check of
// the proxy's public listener. It expects the input to already have default
// values populated by applyProxyConfigDefaults. It may return an empty string
// indicating that the TCP check should not be created at all.
//
// By default this uses the proxy's bind address, which in turn defaults to the
// agent's bind address. If the proxy bind address ends up being 0.0.0.0 we have
// to assume the agent can dial it over loopback, which is usually true.
//
// In some topologies, such as the proxy being in a different container, the IP
// the agent used to dial the proxy over a local bridge might not be the same as
// the container's publicly routable IP address, so we also allow a manual
// override of the check address in config via "tcp_check_address".
//
// Finally, the TCP check can be disabled by another manual override,
// "disable_tcp_check", in cases where the agent will never be able to dial the
// proxy directly for some reason.
func (a *Agent) resolveProxyCheckAddress(proxyCfg map[string]interface{}) string {
	// If the user disabled the check, return an empty string.
	if disable, ok := proxyCfg["disable_tcp_check"].(bool); ok && disable {
		return ""
	}

	// If the user specified a custom address, use that.
	if chkAddr, ok := proxyCfg["tcp_check_address"].(string); ok && chkAddr != "" {
		return chkAddr
	}

	// If we have a bind address and it's dialable, use that.
	if bindAddr, ok := proxyCfg["bind_address"].(string); ok &&
		bindAddr != "" && bindAddr != "0.0.0.0" && bindAddr != "[::]" {
		return bindAddr
	}

	// Default to localhost.
	return "127.0.0.1"
}
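
To make the fallback order above concrete, here is a small, self-contained sketch that mirrors the same precedence rules; the helper name and sample values are illustrative only and not part of this change:

```go
package main

import "fmt"

// resolveCheckAddr mirrors the precedence described above: an explicit disable
// flag wins, then an explicit override address, then a dialable bind address,
// and finally loopback. This is a sketch, not the agent's actual method.
func resolveCheckAddr(cfg map[string]interface{}) string {
	if disable, ok := cfg["disable_tcp_check"].(bool); ok && disable {
		return ""
	}
	if chk, ok := cfg["tcp_check_address"].(string); ok && chk != "" {
		return chk
	}
	if bind, ok := cfg["bind_address"].(string); ok && bind != "" && bind != "0.0.0.0" && bind != "[::]" {
		return bind
	}
	return "127.0.0.1"
}

func main() {
	fmt.Println(resolveCheckAddr(map[string]interface{}{"bind_address": "0.0.0.0"}))        // 127.0.0.1
	fmt.Println(resolveCheckAddr(map[string]interface{}{"tcp_check_address": "10.0.0.12"})) // 10.0.0.12
	fmt.Println(resolveCheckAddr(map[string]interface{}{"disable_tcp_check": true}))        // empty string: check disabled
}
```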

func (a *Agent) cancelCheckMonitors(checkID structs.CheckID) {
	// Stop any monitors
	delete(a.checkReapAfter, checkID)

@@ -477,14 +477,6 @@ func (s *Server) replicateACLType(ctx context.Context, logger hclog.Logger, tr a
	return remoteIndex, false, nil
}

// IsACLReplicationEnabled returns true if ACL replication is enabled.
// DEPRECATED (ACL-Legacy-Compat) - with new ACLs at least policy replication is required
func (s *Server) IsACLReplicationEnabled() bool {
	authDC := s.config.PrimaryDatacenter
	return len(authDC) > 0 && (authDC != s.config.Datacenter) &&
		s.config.ACLTokenReplication
}

func (s *Server) updateACLReplicationStatusError(errorMsg string) {
	s.aclReplicationStatusLock.Lock()
	defer s.aclReplicationStatusLock.Unlock()

@@ -499,8 +491,6 @@ func (s *Server) updateACLReplicationStatusIndex(replicationType structs.ACLRepl

	s.aclReplicationStatus.LastSuccess = time.Now().Round(time.Second).UTC()
	switch replicationType {
	case structs.ACLReplicateLegacy:
		s.aclReplicationStatus.ReplicatedIndex = index
	case structs.ACLReplicateTokens:
		s.aclReplicationStatus.ReplicatedTokenIndex = index
	case structs.ACLReplicatePolicies:
@ -1,276 +0,0 @@
|
|||
package consul
|
||||
|
||||
import (
|
||||
"context"
|
||||
"fmt"
|
||||
"sort"
|
||||
"time"
|
||||
|
||||
metrics "github.com/armon/go-metrics"
|
||||
"github.com/hashicorp/go-hclog"
|
||||
|
||||
"github.com/hashicorp/consul/agent/structs"
|
||||
)
|
||||
|
||||
// aclIterator simplifies the algorithm below by providing a basic iterator that
|
||||
// moves through a list of ACLs and returns nil when it's exhausted. It also has
|
||||
// methods for pre-sorting the ACLs being iterated over by ID, which should
|
||||
// already be true, but since this is crucial for correctness and we are taking
|
||||
// input from other servers, we sort to make sure.
|
||||
type aclIterator struct {
|
||||
acls structs.ACLs
|
||||
|
||||
// index is the current position of the iterator.
|
||||
index int
|
||||
}
|
||||
|
||||
// newACLIterator returns a new ACL iterator.
|
||||
func newACLIterator(acls structs.ACLs) *aclIterator {
|
||||
return &aclIterator{acls: acls}
|
||||
}
|
||||
|
||||
// See sort.Interface.
|
||||
func (a *aclIterator) Len() int {
|
||||
return len(a.acls)
|
||||
}
|
||||
|
||||
// See sort.Interface.
|
||||
func (a *aclIterator) Swap(i, j int) {
|
||||
a.acls[i], a.acls[j] = a.acls[j], a.acls[i]
|
||||
}
|
||||
|
||||
// See sort.Interface.
|
||||
func (a *aclIterator) Less(i, j int) bool {
|
||||
return a.acls[i].ID < a.acls[j].ID
|
||||
}
|
||||
|
||||
// Front returns the item at index position, or nil if the list is exhausted.
|
||||
func (a *aclIterator) Front() *structs.ACL {
|
||||
if a.index < len(a.acls) {
|
||||
return a.acls[a.index]
|
||||
}
|
||||
return nil
|
||||
}
|
||||
|
||||
// Next advances the iterator to the next index.
|
||||
func (a *aclIterator) Next() {
|
||||
a.index++
|
||||
}
|
||||
|
||||
// reconcileACLs takes the local and remote ACL state, and produces a list of
|
||||
// changes required in order to bring the local ACLs into sync with the remote
|
||||
// ACLs. You can supply lastRemoteIndex as a hint that replication has succeeded
|
||||
// up to that remote index and it will make this process more efficient by only
|
||||
// comparing ACL entries modified after that index. Setting this to 0 will force
|
||||
// a full compare of all existing ACLs.
|
||||
func reconcileLegacyACLs(local, remote structs.ACLs, lastRemoteIndex uint64) structs.ACLRequests {
|
||||
// Since sorting the lists is crucial for correctness, we are depending
|
||||
// on data coming from other servers potentially running a different,
|
||||
// version of Consul, and sorted-ness is kind of a subtle property of
|
||||
// the state store indexing, it's prudent to make sure things are sorted
|
||||
// before we begin.
|
||||
localIter, remoteIter := newACLIterator(local), newACLIterator(remote)
|
||||
sort.Sort(localIter)
|
||||
sort.Sort(remoteIter)
|
||||
|
||||
// Run through both lists and reconcile them.
|
||||
var changes structs.ACLRequests
|
||||
for localIter.Front() != nil || remoteIter.Front() != nil {
|
||||
// If the local list is exhausted, then process this as a remote
|
||||
// add. We know from the loop condition that there's something
|
||||
// in the remote list.
|
||||
if localIter.Front() == nil {
|
||||
changes = append(changes, &structs.ACLRequest{
|
||||
Op: structs.ACLSet,
|
||||
ACL: *(remoteIter.Front()),
|
||||
})
|
||||
remoteIter.Next()
|
||||
continue
|
||||
}
|
||||
|
||||
// If the remote list is exhausted, then process this as a local
|
||||
// delete. We know from the loop condition that there's something
|
||||
// in the local list.
|
||||
if remoteIter.Front() == nil {
|
||||
changes = append(changes, &structs.ACLRequest{
|
||||
Op: structs.ACLDelete,
|
||||
ACL: *(localIter.Front()),
|
||||
})
|
||||
localIter.Next()
|
||||
continue
|
||||
}
|
||||
|
||||
// At this point we know there's something at the front of each
|
||||
// list we need to resolve.
|
||||
|
||||
// If the remote list has something local doesn't, we add it.
|
||||
if localIter.Front().ID > remoteIter.Front().ID {
|
||||
changes = append(changes, &structs.ACLRequest{
|
||||
Op: structs.ACLSet,
|
||||
ACL: *(remoteIter.Front()),
|
||||
})
|
||||
remoteIter.Next()
|
||||
continue
|
||||
}
|
||||
|
||||
// If local has something remote doesn't, we delete it.
|
||||
if localIter.Front().ID < remoteIter.Front().ID {
|
||||
changes = append(changes, &structs.ACLRequest{
|
||||
Op: structs.ACLDelete,
|
||||
ACL: *(localIter.Front()),
|
||||
})
|
||||
localIter.Next()
|
||||
continue
|
||||
}
|
||||
|
||||
// Local and remote have an ACL with the same ID, so we might
|
||||
// need to compare them.
|
||||
l, r := localIter.Front(), remoteIter.Front()
|
||||
if r.RaftIndex.ModifyIndex > lastRemoteIndex && !r.IsSame(l) {
|
||||
changes = append(changes, &structs.ACLRequest{
|
||||
Op: structs.ACLSet,
|
||||
ACL: *r,
|
||||
})
|
||||
}
|
||||
localIter.Next()
|
||||
remoteIter.Next()
|
||||
}
|
||||
return changes
|
||||
}
|
||||
|
||||
// FetchLocalACLs returns the ACLs in the local state store.
|
||||
func (s *Server) fetchLocalLegacyACLs() (structs.ACLs, error) {
|
||||
_, local, err := s.fsm.State().ACLTokenList(nil, false, true, "", "", "", nil, nil)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
|
||||
now := time.Now()
|
||||
|
||||
var acls structs.ACLs
|
||||
for _, token := range local {
|
||||
if token.IsExpired(now) {
|
||||
continue
|
||||
}
|
||||
if acl, err := token.Convert(); err == nil && acl != nil {
|
||||
acls = append(acls, acl)
|
||||
}
|
||||
}
|
||||
|
||||
return acls, nil
|
||||
}
|
||||
|
||||
// FetchRemoteACLs is used to get the remote set of ACLs from the ACL
|
||||
// datacenter. The lastIndex parameter is a hint about which remote index we
|
||||
// have replicated to, so this is expected to block until something changes.
|
||||
func (s *Server) fetchRemoteLegacyACLs(lastRemoteIndex uint64) (*structs.IndexedACLs, error) {
|
||||
defer metrics.MeasureSince([]string{"leader", "fetchRemoteACLs"}, time.Now())
|
||||
|
||||
args := structs.DCSpecificRequest{
|
||||
Datacenter: s.config.PrimaryDatacenter,
|
||||
QueryOptions: structs.QueryOptions{
|
||||
Token: s.tokens.ReplicationToken(),
|
||||
MinQueryIndex: lastRemoteIndex,
|
||||
AllowStale: true,
|
||||
},
|
||||
}
|
||||
var remote structs.IndexedACLs
|
||||
if err := s.RPC("ACL.List", &args, &remote); err != nil {
|
||||
return nil, err
|
||||
}
|
||||
return &remote, nil
|
||||
}
|
||||
|
||||
// UpdateLocalACLs is given a list of changes to apply in order to bring the
|
||||
// local ACLs in-line with the remote ACLs from the ACL datacenter.
|
||||
func (s *Server) updateLocalLegacyACLs(changes structs.ACLRequests, ctx context.Context) (bool, error) {
|
||||
defer metrics.MeasureSince([]string{"leader", "updateLocalACLs"}, time.Now())
|
||||
|
||||
minTimePerOp := time.Second / time.Duration(s.config.ACLReplicationApplyLimit)
|
||||
for _, change := range changes {
|
||||
// Note that we are using the single ACL interface here and not
|
||||
// performing all this inside a single transaction. This is OK
|
||||
// for two reasons. First, there's nothing else other than this
|
||||
// replication routine that alters the local ACLs, so there's
|
||||
// nothing to contend with locally. Second, if an apply fails
|
||||
// in the middle (most likely due to losing leadership), the
|
||||
// next replication pass will clean up and check everything
|
||||
// again.
|
||||
var reply string
|
||||
start := time.Now()
|
||||
if err := aclApplyInternal(s, change, &reply); err != nil {
|
||||
return false, err
|
||||
}
|
||||
|
||||
// Do a smooth rate limit to wait out the min time allowed for
|
||||
// each op. If this op took longer than the min, then the sleep
|
||||
// time will be negative and we will just move on.
|
||||
elapsed := time.Since(start)
|
||||
select {
|
||||
case <-ctx.Done():
|
||||
return true, nil
|
||||
case <-time.After(minTimePerOp - elapsed):
|
||||
// do nothing
|
||||
}
|
||||
}
|
||||
return false, nil
|
||||
}
|
||||
|
||||
// replicateACLs is a runs one pass of the algorithm for replicating ACLs from
|
||||
// a remote ACL datacenter to local state. If there's any error, this will return
|
||||
// 0 for the lastRemoteIndex, which will cause us to immediately do a full sync
|
||||
// next time.
|
||||
func (s *Server) replicateLegacyACLs(ctx context.Context, logger hclog.Logger, lastRemoteIndex uint64) (uint64, bool, error) {
|
||||
remote, err := s.fetchRemoteLegacyACLs(lastRemoteIndex)
|
||||
if err != nil {
|
||||
return 0, false, fmt.Errorf("failed to retrieve remote ACLs: %v", err)
|
||||
}
|
||||
|
||||
// Need to check if we should be stopping. This will be common as the fetching process is a blocking
|
||||
// RPC which could have been hanging around for a long time and during that time leadership could
|
||||
// have been lost.
|
||||
select {
|
||||
case <-ctx.Done():
|
||||
return 0, true, nil
|
||||
default:
|
||||
// do nothing
|
||||
}
|
||||
|
||||
// Measure everything after the remote query, which can block for long
|
||||
// periods of time. This metric is a good measure of how expensive the
|
||||
// replication process is.
|
||||
defer metrics.MeasureSince([]string{"leader", "replicateACLs"}, time.Now())
|
||||
|
||||
local, err := s.fetchLocalLegacyACLs()
|
||||
if err != nil {
|
||||
return 0, false, fmt.Errorf("failed to retrieve local ACLs: %v", err)
|
||||
}
|
||||
|
||||
// If the remote index ever goes backwards, it's a good indication that
|
||||
// the remote side was rebuilt and we should do a full sync since we
|
||||
// can't make any assumptions about what's going on.
|
||||
if remote.QueryMeta.Index < lastRemoteIndex {
|
||||
logger.Warn(
|
||||
"Legacy ACL replication remote index moved backwards, forcing a full ACL sync",
|
||||
"from", lastRemoteIndex,
|
||||
"to", remote.QueryMeta.Index,
|
||||
)
|
||||
lastRemoteIndex = 0
|
||||
}
|
||||
|
||||
// Calculate the changes required to bring the state into sync and then
|
||||
// apply them.
|
||||
changes := reconcileLegacyACLs(local, remote.ACLs, lastRemoteIndex)
|
||||
exit, err := s.updateLocalLegacyACLs(changes, ctx)
|
||||
if exit {
|
||||
return 0, true, nil
|
||||
}
|
||||
|
||||
if err != nil {
|
||||
return 0, false, fmt.Errorf("failed to sync ACL changes: %v", err)
|
||||
}
|
||||
|
||||
// Return the index we got back from the remote side, since we've synced
|
||||
// up with the remote state as of that index.
|
||||
return remote.QueryMeta.Index, false, nil
|
||||
}
|
|
@ -1,491 +0,0 @@
|
|||
package consul
|
||||
|
||||
import (
|
||||
"bytes"
|
||||
"context"
|
||||
"fmt"
|
||||
"os"
|
||||
"reflect"
|
||||
"sort"
|
||||
"strconv"
|
||||
"strings"
|
||||
"testing"
|
||||
"time"
|
||||
|
||||
"github.com/hashicorp/consul/agent/structs"
|
||||
tokenStore "github.com/hashicorp/consul/agent/token"
|
||||
"github.com/hashicorp/consul/sdk/testutil/retry"
|
||||
"github.com/hashicorp/consul/testrpc"
|
||||
)
|
||||
|
||||
func TestACLReplication_Sorter(t *testing.T) {
|
||||
t.Parallel()
|
||||
acls := structs.ACLs{
|
||||
&structs.ACL{ID: "a"},
|
||||
&structs.ACL{ID: "b"},
|
||||
&structs.ACL{ID: "c"},
|
||||
}
|
||||
|
||||
sorter := &aclIterator{acls, 0}
|
||||
if len := sorter.Len(); len != 3 {
|
||||
t.Fatalf("bad: %d", len)
|
||||
}
|
||||
if !sorter.Less(0, 1) {
|
||||
t.Fatalf("should be less")
|
||||
}
|
||||
if sorter.Less(1, 0) {
|
||||
t.Fatalf("should not be less")
|
||||
}
|
||||
if !sort.IsSorted(sorter) {
|
||||
t.Fatalf("should be sorted")
|
||||
}
|
||||
|
||||
expected := structs.ACLs{
|
||||
&structs.ACL{ID: "b"},
|
||||
&structs.ACL{ID: "a"},
|
||||
&structs.ACL{ID: "c"},
|
||||
}
|
||||
sorter.Swap(0, 1)
|
||||
if !reflect.DeepEqual(acls, expected) {
|
||||
t.Fatalf("bad: %v", acls)
|
||||
}
|
||||
if sort.IsSorted(sorter) {
|
||||
t.Fatalf("should not be sorted")
|
||||
}
|
||||
sort.Sort(sorter)
|
||||
if !sort.IsSorted(sorter) {
|
||||
t.Fatalf("should be sorted")
|
||||
}
|
||||
}
|
||||
|
||||
func TestACLReplication_Iterator(t *testing.T) {
|
||||
t.Parallel()
|
||||
acls := structs.ACLs{}
|
||||
|
||||
iter := newACLIterator(acls)
|
||||
if front := iter.Front(); front != nil {
|
||||
t.Fatalf("bad: %v", front)
|
||||
}
|
||||
iter.Next()
|
||||
if front := iter.Front(); front != nil {
|
||||
t.Fatalf("bad: %v", front)
|
||||
}
|
||||
|
||||
acls = structs.ACLs{
|
||||
&structs.ACL{ID: "a"},
|
||||
&structs.ACL{ID: "b"},
|
||||
&structs.ACL{ID: "c"},
|
||||
}
|
||||
iter = newACLIterator(acls)
|
||||
if front := iter.Front(); front != acls[0] {
|
||||
t.Fatalf("bad: %v", front)
|
||||
}
|
||||
iter.Next()
|
||||
if front := iter.Front(); front != acls[1] {
|
||||
t.Fatalf("bad: %v", front)
|
||||
}
|
||||
iter.Next()
|
||||
if front := iter.Front(); front != acls[2] {
|
||||
t.Fatalf("bad: %v", front)
|
||||
}
|
||||
iter.Next()
|
||||
if front := iter.Front(); front != nil {
|
||||
t.Fatalf("bad: %v", front)
|
||||
}
|
||||
}
|
||||
|
||||
func TestACLReplication_reconcileACLs(t *testing.T) {
|
||||
t.Parallel()
|
||||
parseACLs := func(raw string) structs.ACLs {
|
||||
var acls structs.ACLs
|
||||
for _, key := range strings.Split(raw, "|") {
|
||||
if len(key) == 0 {
|
||||
continue
|
||||
}
|
||||
|
||||
tuple := strings.Split(key, ":")
|
||||
index, err := strconv.Atoi(tuple[1])
|
||||
if err != nil {
|
||||
t.Fatalf("err: %v", err)
|
||||
}
|
||||
acl := &structs.ACL{
|
||||
ID: tuple[0],
|
||||
Rules: tuple[2],
|
||||
RaftIndex: structs.RaftIndex{
|
||||
ModifyIndex: uint64(index),
|
||||
},
|
||||
}
|
||||
acls = append(acls, acl)
|
||||
}
|
||||
return acls
|
||||
}
|
||||
|
||||
parseChanges := func(changes structs.ACLRequests) string {
|
||||
var ret string
|
||||
for i, change := range changes {
|
||||
if i > 0 {
|
||||
ret += "|"
|
||||
}
|
||||
ret += fmt.Sprintf("%s:%s:%s", change.Op, change.ACL.ID, change.ACL.Rules)
|
||||
}
|
||||
return ret
|
||||
}
|
||||
|
||||
tests := []struct {
|
||||
local string
|
||||
remote string
|
||||
lastRemoteIndex uint64
|
||||
expected string
|
||||
}{
|
||||
// Everything empty.
|
||||
{
|
||||
local: "",
|
||||
remote: "",
|
||||
lastRemoteIndex: 0,
|
||||
expected: "",
|
||||
},
|
||||
// First time with empty local.
|
||||
{
|
||||
local: "",
|
||||
remote: "bbb:3:X|ccc:9:X|ddd:2:X|eee:11:X",
|
||||
lastRemoteIndex: 0,
|
||||
expected: "set:bbb:X|set:ccc:X|set:ddd:X|set:eee:X",
|
||||
},
|
||||
// Remote not sorted.
|
||||
{
|
||||
local: "",
|
||||
remote: "ddd:2:X|bbb:3:X|ccc:9:X|eee:11:X",
|
||||
lastRemoteIndex: 0,
|
||||
expected: "set:bbb:X|set:ccc:X|set:ddd:X|set:eee:X",
|
||||
},
|
||||
// Neither side sorted.
|
||||
{
|
||||
local: "ddd:2:X|bbb:3:X|ccc:9:X|eee:11:X",
|
||||
remote: "ccc:9:X|bbb:3:X|ddd:2:X|eee:11:X",
|
||||
lastRemoteIndex: 0,
|
||||
expected: "",
|
||||
},
|
||||
// Fully replicated, nothing to do.
|
||||
{
|
||||
local: "bbb:3:X|ccc:9:X|ddd:2:X|eee:11:X",
|
||||
remote: "bbb:3:X|ccc:9:X|ddd:2:X|eee:11:X",
|
||||
lastRemoteIndex: 0,
|
||||
expected: "",
|
||||
},
|
||||
// Change an ACL.
|
||||
{
|
||||
local: "bbb:3:X|ccc:9:X|ddd:2:X|eee:11:X",
|
||||
remote: "bbb:3:X|ccc:33:Y|ddd:2:X|eee:11:X",
|
||||
lastRemoteIndex: 0,
|
||||
expected: "set:ccc:Y",
|
||||
},
|
||||
// Change an ACL, but mask the change by the last replicated
|
||||
// index. This isn't how things work normally, but it proves
|
||||
// we are skipping the full compare based on the index.
|
||||
{
|
||||
local: "bbb:3:X|ccc:9:X|ddd:2:X|eee:11:X",
|
||||
remote: "bbb:3:X|ccc:33:Y|ddd:2:X|eee:11:X",
|
||||
lastRemoteIndex: 33,
|
||||
expected: "",
|
||||
},
|
||||
// Empty everything out.
|
||||
{
|
||||
local: "bbb:3:X|ccc:9:X|ddd:2:X|eee:11:X",
|
||||
remote: "",
|
||||
lastRemoteIndex: 0,
|
||||
expected: "delete:bbb:X|delete:ccc:X|delete:ddd:X|delete:eee:X",
|
||||
},
|
||||
// Adds on the ends and in the middle.
|
||||
{
|
||||
local: "bbb:3:X|ccc:9:X|ddd:2:X|eee:11:X",
|
||||
remote: "aaa:99:X|bbb:3:X|ccc:9:X|ccx:101:X|ddd:2:X|eee:11:X|fff:102:X",
|
||||
lastRemoteIndex: 0,
|
||||
expected: "set:aaa:X|set:ccx:X|set:fff:X",
|
||||
},
|
||||
// Deletes on the ends and in the middle.
|
||||
{
|
||||
local: "bbb:3:X|ccc:9:X|ddd:2:X|eee:11:X",
|
||||
remote: "ccc:9:X",
|
||||
lastRemoteIndex: 0,
|
||||
expected: "delete:bbb:X|delete:ddd:X|delete:eee:X",
|
||||
},
|
||||
// Everything.
|
||||
{
|
||||
local: "bbb:3:X|ccc:9:X|ddd:2:X|eee:11:X",
|
||||
remote: "aaa:99:X|bbb:3:X|ccx:101:X|ddd:103:Y|eee:11:X|fff:102:X",
|
||||
lastRemoteIndex: 11,
|
||||
expected: "set:aaa:X|delete:ccc:X|set:ccx:X|set:ddd:Y|set:fff:X",
|
||||
},
|
||||
}
|
||||
for i, test := range tests {
|
||||
local, remote := parseACLs(test.local), parseACLs(test.remote)
|
||||
changes := reconcileLegacyACLs(local, remote, test.lastRemoteIndex)
|
||||
if actual := parseChanges(changes); actual != test.expected {
|
||||
t.Errorf("test case %d failed: %s", i, actual)
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
func TestACLReplication_updateLocalACLs_RateLimit(t *testing.T) {
|
||||
if testing.Short() {
|
||||
t.Skip("too slow for testing.Short")
|
||||
}
|
||||
|
||||
t.Parallel()
|
||||
dir1, s1 := testServerWithConfig(t, func(c *Config) {
|
||||
c.Datacenter = "dc2"
|
||||
c.PrimaryDatacenter = "dc1"
|
||||
c.ACLsEnabled = true
|
||||
c.ACLReplicationApplyLimit = 1
|
||||
})
|
||||
s1.tokens.UpdateReplicationToken("secret", tokenStore.TokenSourceConfig)
|
||||
defer os.RemoveAll(dir1)
|
||||
defer s1.Shutdown()
|
||||
testrpc.WaitForLeader(t, s1.RPC, "dc2")
|
||||
|
||||
changes := structs.ACLRequests{
|
||||
&structs.ACLRequest{
|
||||
Op: structs.ACLSet,
|
||||
ACL: structs.ACL{
|
||||
ID: "secret",
|
||||
Type: "client",
|
||||
},
|
||||
},
|
||||
}
|
||||
|
||||
// Should be throttled to 1 Hz.
|
||||
start := time.Now()
|
||||
if _, err := s1.updateLocalLegacyACLs(changes, context.Background()); err != nil {
|
||||
t.Fatalf("err: %v", err)
|
||||
}
|
||||
if dur := time.Since(start); dur < time.Second {
|
||||
t.Fatalf("too slow: %9.6f", dur.Seconds())
|
||||
}
|
||||
|
||||
changes = append(changes,
|
||||
&structs.ACLRequest{
|
||||
Op: structs.ACLSet,
|
||||
ACL: structs.ACL{
|
||||
ID: "secret",
|
||||
Type: "client",
|
||||
},
|
||||
})
|
||||
|
||||
// Should be throttled to 1 Hz.
|
||||
start = time.Now()
|
||||
if _, err := s1.updateLocalLegacyACLs(changes, context.Background()); err != nil {
|
||||
t.Fatalf("err: %v", err)
|
||||
}
|
||||
if dur := time.Since(start); dur < 2*time.Second {
|
||||
t.Fatalf("too fast: %9.6f", dur.Seconds())
|
||||
}
|
||||
}
|
||||
|
||||
func TestACLReplication_IsACLReplicationEnabled(t *testing.T) {
|
||||
if testing.Short() {
|
||||
t.Skip("too slow for testing.Short")
|
||||
}
|
||||
|
||||
t.Parallel()
|
||||
// ACLs not enabled.
|
||||
dir1, s1 := testServerWithConfig(t, func(c *Config) {
|
||||
c.PrimaryDatacenter = ""
|
||||
c.ACLsEnabled = false
|
||||
})
|
||||
defer os.RemoveAll(dir1)
|
||||
defer s1.Shutdown()
|
||||
if s1.IsACLReplicationEnabled() {
|
||||
t.Fatalf("should not be enabled")
|
||||
}
|
||||
|
||||
// ACLs enabled but not replication.
|
||||
dir2, s2 := testServerWithConfig(t, func(c *Config) {
|
||||
c.Datacenter = "dc2"
|
||||
c.PrimaryDatacenter = "dc1"
|
||||
c.ACLsEnabled = true
|
||||
})
|
||||
defer os.RemoveAll(dir2)
|
||||
defer s2.Shutdown()
|
||||
testrpc.WaitForLeader(t, s1.RPC, "dc1")
|
||||
testrpc.WaitForLeader(t, s2.RPC, "dc2")
|
||||
|
||||
if s2.IsACLReplicationEnabled() {
|
||||
t.Fatalf("should not be enabled")
|
||||
}
|
||||
|
||||
// ACLs enabled with replication.
|
||||
dir3, s3 := testServerWithConfig(t, func(c *Config) {
|
||||
c.Datacenter = "dc2"
|
||||
c.PrimaryDatacenter = "dc1"
|
||||
c.ACLsEnabled = true
|
||||
c.ACLTokenReplication = true
|
||||
})
|
||||
defer os.RemoveAll(dir3)
|
||||
defer s3.Shutdown()
|
||||
testrpc.WaitForLeader(t, s3.RPC, "dc2")
|
||||
if !s3.IsACLReplicationEnabled() {
|
||||
t.Fatalf("should be enabled")
|
||||
}
|
||||
|
||||
// ACLs enabled with replication, but inside the ACL datacenter
|
||||
// so replication should be disabled.
|
||||
dir4, s4 := testServerWithConfig(t, func(c *Config) {
|
||||
c.Datacenter = "dc1"
|
||||
c.PrimaryDatacenter = "dc1"
|
||||
c.ACLsEnabled = true
|
||||
c.ACLTokenReplication = true
|
||||
})
|
||||
defer os.RemoveAll(dir4)
|
||||
defer s4.Shutdown()
|
||||
testrpc.WaitForLeader(t, s4.RPC, "dc1")
|
||||
if s4.IsACLReplicationEnabled() {
|
||||
t.Fatalf("should not be enabled")
|
||||
}
|
||||
}
|
||||
|
||||
// Note that this test is testing that legacy token data is replicated, NOT
|
||||
// directly testing the legacy acl replication goroutine code.
|
||||
//
|
||||
// Actually testing legacy replication is difficult to do without old binaries.
|
||||
func TestACLReplication_LegacyTokens(t *testing.T) {
|
||||
if testing.Short() {
|
||||
t.Skip("too slow for testing.Short")
|
||||
}
|
||||
|
||||
t.Parallel()
|
||||
dir1, s1 := testServerWithConfig(t, func(c *Config) {
|
||||
c.PrimaryDatacenter = "dc1"
|
||||
c.ACLsEnabled = true
|
||||
c.ACLMasterToken = "root"
|
||||
})
|
||||
defer os.RemoveAll(dir1)
|
||||
defer s1.Shutdown()
|
||||
testrpc.WaitForLeader(t, s1.RPC, "dc1")
|
||||
client := rpcClient(t, s1)
|
||||
defer client.Close()
|
||||
|
||||
dir2, s2 := testServerWithConfig(t, func(c *Config) {
|
||||
c.Datacenter = "dc2"
|
||||
c.PrimaryDatacenter = "dc1"
|
||||
c.ACLsEnabled = true
|
||||
c.ACLTokenReplication = true
|
||||
c.ACLReplicationRate = 100
|
||||
c.ACLReplicationBurst = 100
|
||||
c.ACLReplicationApplyLimit = 1000000
|
||||
})
|
||||
s2.tokens.UpdateReplicationToken("root", tokenStore.TokenSourceConfig)
|
||||
testrpc.WaitForLeader(t, s2.RPC, "dc2")
|
||||
defer os.RemoveAll(dir2)
|
||||
defer s2.Shutdown()
|
||||
|
||||
// Try to join.
|
||||
joinWAN(t, s2, s1)
|
||||
testrpc.WaitForLeader(t, s1.RPC, "dc1")
|
||||
testrpc.WaitForLeader(t, s1.RPC, "dc2")
|
||||
|
||||
// Wait for legacy acls to be disabled so we are clear that
|
||||
// legacy replication isn't meddling.
|
||||
waitForNewACLs(t, s1)
|
||||
waitForNewACLs(t, s2)
|
||||
waitForNewACLReplication(t, s2, structs.ACLReplicateTokens, 1, 1, 0)
|
||||
|
||||
// Create a bunch of new tokens.
|
||||
var id string
|
||||
for i := 0; i < 50; i++ {
|
||||
arg := structs.ACLRequest{
|
||||
Datacenter: "dc1",
|
||||
Op: structs.ACLSet,
|
||||
ACL: structs.ACL{
|
||||
Name: "User token",
|
||||
Type: structs.ACLTokenTypeClient,
|
||||
Rules: testACLPolicy,
|
||||
},
|
||||
WriteRequest: structs.WriteRequest{Token: "root"},
|
||||
}
|
||||
if err := s1.RPC("ACL.Apply", &arg, &id); err != nil {
|
||||
t.Fatalf("err: %v", err)
|
||||
}
|
||||
}
|
||||
|
||||
checkSame := func() error {
|
||||
index, remote, err := s1.fsm.State().ACLTokenList(nil, true, true, "", "", "", nil, nil)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
_, local, err := s2.fsm.State().ACLTokenList(nil, true, true, "", "", "", nil, nil)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
|
||||
if got, want := len(remote), len(local); got != want {
|
||||
return fmt.Errorf("got %d remote ACLs want %d", got, want)
|
||||
}
|
||||
for i, token := range remote {
|
||||
if !bytes.Equal(token.Hash, local[i].Hash) {
|
||||
return fmt.Errorf("ACLs differ")
|
||||
}
|
||||
}
|
||||
|
||||
var status structs.ACLReplicationStatus
|
||||
s2.aclReplicationStatusLock.RLock()
|
||||
status = s2.aclReplicationStatus
|
||||
s2.aclReplicationStatusLock.RUnlock()
|
||||
if !status.Enabled || !status.Running ||
|
||||
status.ReplicatedTokenIndex != index ||
|
||||
status.SourceDatacenter != "dc1" {
|
||||
return fmt.Errorf("ACL replication status differs")
|
||||
}
|
||||
|
||||
return nil
|
||||
}
|
||||
// Wait for the replica to converge.
|
||||
retry.Run(t, func(r *retry.R) {
|
||||
if err := checkSame(); err != nil {
|
||||
r.Fatal(err)
|
||||
}
|
||||
})
|
||||
|
||||
// Create more new tokens.
|
||||
for i := 0; i < 50; i++ {
|
||||
arg := structs.ACLRequest{
|
||||
Datacenter: "dc1",
|
||||
Op: structs.ACLSet,
|
||||
ACL: structs.ACL{
|
||||
Name: "User token",
|
||||
Type: structs.ACLTokenTypeClient,
|
||||
Rules: testACLPolicy,
|
||||
},
|
||||
WriteRequest: structs.WriteRequest{Token: "root"},
|
||||
}
|
||||
var dontCare string
|
||||
if err := s1.RPC("ACL.Apply", &arg, &dontCare); err != nil {
|
||||
t.Fatalf("err: %v", err)
|
||||
}
|
||||
}
|
||||
// Wait for the replica to converge.
|
||||
retry.Run(t, func(r *retry.R) {
|
||||
if err := checkSame(); err != nil {
|
||||
r.Fatal(err)
|
||||
}
|
||||
})
|
||||
|
||||
// Delete a token.
|
||||
arg := structs.ACLRequest{
|
||||
Datacenter: "dc1",
|
||||
Op: structs.ACLDelete,
|
||||
ACL: structs.ACL{
|
||||
ID: id,
|
||||
},
|
||||
WriteRequest: structs.WriteRequest{Token: "root"},
|
||||
}
|
||||
var dontCare string
|
||||
if err := s1.RPC("ACL.Apply", &arg, &dontCare); err != nil {
|
||||
t.Fatalf("err: %v", err)
|
||||
}
|
||||
// Wait for the replica to converge.
|
||||
retry.Run(t, func(r *retry.R) {
|
||||
if err := checkSame(); err != nil {
|
||||
r.Fatal(err)
|
||||
}
|
||||
})
|
||||
}
|

@@ -58,7 +58,7 @@ func (d *AutopilotDelegate) NotifyState(state *autopilot.State) {
func (d *AutopilotDelegate) RemoveFailedServer(srv *autopilot.Server) {
	go func() {
		if err := d.server.RemoveFailedNode(srv.Name, false); err != nil {
			d.server.logger.Error("failedto remove server", "name", srv.Name, "id", srv.ID, "error", err)
			d.server.logger.Error("failed to remove server", "name", srv.Name, "id", srv.ID, "error", err)
		}
	}()
}

@@ -75,7 +75,7 @@ func TestIntentionApply_new(t *testing.T) {
	//nolint:staticcheck
	ixn.Intention.UpdatePrecedence()
	// Partition fields will be normalized on Intention.Get
	ixn.Intention.NormalizePartitionFields()
	ixn.Intention.FillPartitionAndNamespace(nil, true)
	require.Equal(t, ixn.Intention, actual)
}

@@ -269,7 +269,7 @@ func TestIntentionApply_updateGood(t *testing.T) {
		//nolint:staticcheck
		ixn.Intention.UpdatePrecedence()
		// Partition fields will be normalized on Intention.Get
		ixn.Intention.NormalizePartitionFields()
		ixn.Intention.FillPartitionAndNamespace(nil, true)
		require.Equal(t, ixn.Intention, actual)
	}
}
|
@ -674,15 +674,6 @@ func (s *Server) initializeACLs(ctx context.Context, upgrade bool) error {
|
|||
// launch the upgrade go routine to generate accessors for everything
|
||||
s.startACLUpgrade(ctx)
|
||||
} else {
|
||||
if s.UseLegacyACLs() && !upgrade {
|
||||
if s.IsACLReplicationEnabled() {
|
||||
s.startLegacyACLReplication(ctx)
|
||||
}
|
||||
// return early as we don't want to start new ACL replication
|
||||
// or ACL token reaping as these are new ACL features.
|
||||
return nil
|
||||
}
|
||||
|
||||
if upgrade {
|
||||
s.stopACLReplication()
|
||||
}
|
||||
|
@ -783,67 +774,6 @@ func (s *Server) stopACLUpgrade() {
|
|||
s.leaderRoutineManager.Stop(aclUpgradeRoutineName)
|
||||
}
|
||||
|
||||
// This function is only intended to be run as a managed go routine, it will block until
|
||||
// the context passed in indicates that it should exit.
|
||||
func (s *Server) runLegacyACLReplication(ctx context.Context) error {
|
||||
var lastRemoteIndex uint64
|
||||
legacyACLLogger := s.aclReplicationLogger(logging.Legacy)
|
||||
limiter := rate.NewLimiter(rate.Limit(s.config.ACLReplicationRate), s.config.ACLReplicationBurst)
|
||||
|
||||
for {
|
||||
if err := limiter.Wait(ctx); err != nil {
|
||||
return err
|
||||
}
|
||||
|
||||
if s.tokens.ReplicationToken() == "" {
|
||||
continue
|
||||
}
|
||||
|
||||
index, exit, err := s.replicateLegacyACLs(ctx, legacyACLLogger, lastRemoteIndex)
|
||||
if exit {
|
||||
return nil
|
||||
}
|
||||
|
||||
if err != nil {
|
||||
metrics.SetGauge([]string{"leader", "replication", "acl-legacy", "status"},
|
||||
0,
|
||||
)
|
||||
lastRemoteIndex = 0
|
||||
s.updateACLReplicationStatusError(err.Error())
|
||||
legacyACLLogger.Warn("Legacy ACL replication error (will retry if still leader)", "error", err)
|
||||
} else {
|
||||
metrics.SetGauge([]string{"leader", "replication", "acl-legacy", "status"},
|
||||
1,
|
||||
)
|
||||
metrics.SetGauge([]string{"leader", "replication", "acl-legacy", "index"},
|
||||
float32(index),
|
||||
)
|
||||
lastRemoteIndex = index
|
||||
s.updateACLReplicationStatusIndex(structs.ACLReplicateLegacy, index)
|
||||
legacyACLLogger.Debug("Legacy ACL replication completed through remote index", "index", index)
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
func (s *Server) startLegacyACLReplication(ctx context.Context) {
|
||||
if s.InACLDatacenter() {
|
||||
return
|
||||
}
|
||||
|
||||
// unlike some other leader routines this initializes some extra state
|
||||
// and therefore we want to prevent re-initialization if things are already
|
||||
// running
|
||||
if s.leaderRoutineManager.IsRunning(legacyACLReplicationRoutineName) {
|
||||
return
|
||||
}
|
||||
|
||||
s.initReplicationStatus()
|
||||
|
||||
s.leaderRoutineManager.Start(ctx, legacyACLReplicationRoutineName, s.runLegacyACLReplication)
|
||||
s.logger.Info("started legacy ACL replication")
|
||||
s.updateACLReplicationStatusRunning(structs.ACLReplicateLegacy)
|
||||
}
|
||||
|
||||
func (s *Server) startACLReplication(ctx context.Context) {
|
||||
if s.InACLDatacenter() {
|
||||
return
|
||||
|
@ -966,7 +896,6 @@ func (s *Server) aclReplicationLogger(singularNoun string) hclog.Logger {
|
|||
|
||||
func (s *Server) stopACLReplication() {
|
||||
// these will be no-ops when not started
|
||||
s.leaderRoutineManager.Stop(legacyACLReplicationRoutineName)
|
||||
s.leaderRoutineManager.Stop(aclPolicyReplicationRoutineName)
|
||||
s.leaderRoutineManager.Stop(aclRoleReplicationRoutineName)
|
||||
s.leaderRoutineManager.Stop(aclTokenReplicationRoutineName)
|
||||
|
|
|
@ -1542,30 +1542,6 @@ func TestLeader_ConfigEntryBootstrap_Fail(t *testing.T) {
|
|||
}
|
||||
}
|
||||
|
||||
func TestLeader_ACLLegacyReplication(t *testing.T) {
|
||||
if testing.Short() {
|
||||
t.Skip("too slow for testing.Short")
|
||||
}
|
||||
|
||||
t.Parallel()
|
||||
|
||||
// This test relies on configuring a secondary DC with no route to the primary DC
|
||||
// Having no route will cause the ACL mode checking of the primary to "fail". In this
|
||||
// scenario legacy ACL replication should be enabled without also running new ACL
|
||||
// replication routines.
|
||||
cb := func(c *Config) {
|
||||
c.Datacenter = "dc2"
|
||||
c.ACLTokenReplication = true
|
||||
}
|
||||
_, srv, _ := testACLServerWithConfig(t, cb, true)
|
||||
waitForLeaderEstablishment(t, srv)
|
||||
|
||||
require.True(t, srv.leaderRoutineManager.IsRunning(legacyACLReplicationRoutineName))
|
||||
require.False(t, srv.leaderRoutineManager.IsRunning(aclPolicyReplicationRoutineName))
|
||||
require.False(t, srv.leaderRoutineManager.IsRunning(aclRoleReplicationRoutineName))
|
||||
require.False(t, srv.leaderRoutineManager.IsRunning(aclTokenReplicationRoutineName))
|
||||
}
|
||||
|
||||
func TestDatacenterSupportsFederationStates(t *testing.T) {
|
||||
if testing.Short() {
|
||||
t.Skip("too slow for testing.Short")
|
||||
|
|

@@ -22,6 +22,7 @@ import (
	msgpackrpc "github.com/hashicorp/net-rpc-msgpackrpc"
	"github.com/hashicorp/raft"
	"github.com/hashicorp/yamux"
	"google.golang.org/grpc"

	"github.com/hashicorp/consul/acl"
	"github.com/hashicorp/consul/agent/consul/state"

@@ -556,13 +557,87 @@ func canRetry(info structs.RPCInfo, err error, start time.Time, config *Config)
	return info != nil && info.IsRead() && lib.IsErrEOF(err)
}

// ForwardRPC is used to forward an RPC request to a remote DC or to the local leader
// Returns a bool of if forwarding was performed, as well as any error
// ForwardRPC is used to potentially forward an RPC request to a remote DC or
// to the local leader depending upon the request.
//
// Returns a bool of if forwarding was performed, as well as any error. If
// false is returned (with no error) it is assumed that the current server
// should handle the request.
func (s *Server) ForwardRPC(method string, info structs.RPCInfo, reply interface{}) (bool, error) {
	firstCheck := time.Now()
	forwardToDC := func(dc string) error {
		return s.forwardDC(method, dc, info, reply)
	}
	forwardToLeader := func(leader *metadata.Server) error {
		return s.connPool.RPC(s.config.Datacenter, leader.ShortName, leader.Addr,
			method, info, reply)
	}
	return s.forwardRPC(info, forwardToDC, forwardToLeader)
}
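
For context, RPC endpoint handlers in this codebase typically call ForwardRPC first and only serve the request locally when it reports that nothing was forwarded. A minimal sketch of that calling pattern; the endpoint name and fields are illustrative, not taken from this diff:

```go
// Illustrative only: how an endpoint consumes ForwardRPC's (handled, err) result.
func (c *Catalog) Register(args *structs.RegisterRequest, reply *struct{}) error {
	// If the request was forwarded to another DC or to the leader, we are done;
	// err carries whatever the remote side returned.
	if done, err := c.srv.ForwardRPC("Catalog.Register", args, reply); done {
		return err
	}
	// Otherwise this server is responsible for serving the request locally.
	// ... local handling elided ...
	return nil
}
```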

// ForwardGRPC is used to potentially forward an RPC request to a remote DC or
// to the local leader depending upon the request.
//
// Returns a bool of if forwarding was performed, as well as any error. If
// false is returned (with no error) it is assumed that the current server
// should handle the request.
func (s *Server) ForwardGRPC(connPool GRPCClientConner, info structs.RPCInfo, f func(*grpc.ClientConn) error) (handled bool, err error) {
	forwardToDC := func(dc string) error {
		conn, err := connPool.ClientConn(dc)
		if err != nil {
			return err
		}
		return f(conn)
	}
	forwardToLeader := func(leader *metadata.Server) error {
		conn, err := connPool.ClientConnLeader()
		if err != nil {
			return err
		}
		return f(conn)
	}
	return s.forwardRPC(info, forwardToDC, forwardToLeader)
}

// forwardRPC is used to potentially forward an RPC request to a remote DC or
// to the local leader depending upon the request.
//
// If info.RequestDatacenter() does not match the local datacenter, then the
// request will be forwarded to the DC using forwardToDC.
//
// Stale read requests will be handled locally if the current node has an
// initialized raft database, otherwise requests will be forwarded to the local
// leader using forwardToLeader.
//
// Returns a bool of if forwarding was performed, as well as any error. If
// false is returned (with no error) it is assumed that the current server
// should handle the request.
func (s *Server) forwardRPC(
	info structs.RPCInfo,
	forwardToDC func(dc string) error,
	forwardToLeader func(leader *metadata.Server) error,
) (handled bool, err error) {
	// Forward the request to the requested datacenter.
	if handled, err := s.forwardRequestToOtherDatacenter(info, forwardToDC); handled || err != nil {
		return handled, err
	}

	// See if we should let this server handle the read request without
	// shipping the request to the leader.
	if s.canServeReadRequest(info) {
		return false, nil
	}

	return s.forwardRequestToLeader(info, forwardToLeader)
}
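
The decision order documented above (requested remote DC first, then a locally served stale read, then the local leader) can be illustrated with a small self-contained sketch; the types and names below are simplified stand-ins, not Consul's real ones:

```go
package main

import "fmt"

// fakeInfo is a toy stand-in for an RPC request's routing metadata.
type fakeInfo struct {
	dc         string
	read       bool
	allowStale bool
}

// forward applies the same three-step decision described in the doc comment.
func forward(localDC string, raftReady bool, info fakeInfo,
	toDC func(string) error, toLeader func() error) (bool, error) {
	// 1. A request addressed to another datacenter is always forwarded there.
	if info.dc != "" && info.dc != localDC {
		return true, toDC(info.dc)
	}
	// 2. Stale reads are served locally once the raft store is initialized.
	if info.read && info.allowStale && raftReady {
		return false, nil
	}
	// 3. Everything else goes to the local leader.
	return true, toLeader()
}

func main() {
	toDC := func(dc string) error { fmt.Println("forwarded to", dc); return nil }
	toLeader := func() error { fmt.Println("forwarded to leader"); return nil }

	forward("dc1", true, fakeInfo{dc: "dc2"}, toDC, toLeader) // forwarded to dc2
	handled, _ := forward("dc1", true, fakeInfo{read: true, allowStale: true}, toDC, toLeader)
	fmt.Println("handled locally:", !handled) // handled locally: true
	forward("dc1", true, fakeInfo{}, toDC, toLeader) // forwarded to leader
}
```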

// forwardRequestToOtherDatacenter is an implementation detail of forwardRPC.
// See the comment for forwardRPC for more details.
func (s *Server) forwardRequestToOtherDatacenter(info structs.RPCInfo, forwardToDC func(dc string) error) (handled bool, err error) {
	// Handle DC forwarding
	dc := info.RequestDatacenter()
	if dc == "" {
		dc = s.config.Datacenter
	}
	if dc != s.config.Datacenter {
		// Local tokens only work within the current datacenter. Check to see
		// if we are attempting to forward one to a remote datacenter and strip

@@ -581,15 +656,23 @@ func (s *Server) ForwardRPC(method string, info structs.RPCInfo, reply interface
		}
	}

		err := s.forwardDC(method, dc, info, reply)
		return true, err
		return true, forwardToDC(dc)
	}

	return false, nil
}

// canServeReadRequest determines if the request is a stale read request and
// the current node can safely process that request.
func (s *Server) canServeReadRequest(info structs.RPCInfo) bool {
	// Check if we can allow a stale read, ensure our local DB is initialized
	if info.IsRead() && info.AllowStaleRead() && !s.raft.LastContact().IsZero() {
		return false, nil
	}
	return info.IsRead() && info.AllowStaleRead() && !s.raft.LastContact().IsZero()
}

// forwardRequestToLeader is an implementation detail of forwardRPC.
// See the comment for forwardRPC for more details.
func (s *Server) forwardRequestToLeader(info structs.RPCInfo, forwardToLeader func(leader *metadata.Server) error) (handled bool, err error) {
	firstCheck := time.Now()
CHECK_LEADER:
	// Fail fast if we are in the process of leaving
	select {

@@ -608,8 +691,7 @@ CHECK_LEADER:

	// Handle the case of a known leader
	if leader != nil {
		rpcErr = s.connPool.RPC(s.config.Datacenter, leader.ShortName, leader.Addr,
			method, info, reply)
		rpcErr = forwardToLeader(leader)
		if rpcErr == nil {
			return true, nil
		}
|
@ -3,6 +3,7 @@ package consul
|
|||
import (
|
||||
"bufio"
|
||||
"bytes"
|
||||
"context"
|
||||
"crypto/x509"
|
||||
"encoding/binary"
|
||||
"errors"
|
||||
|
@ -25,15 +26,17 @@ import (
|
|||
"github.com/hashicorp/raft"
|
||||
"github.com/stretchr/testify/assert"
|
||||
"github.com/stretchr/testify/require"
|
||||
|
||||
"github.com/hashicorp/consul/agent/connect"
|
||||
"google.golang.org/grpc"
|
||||
|
||||
"github.com/hashicorp/consul/acl"
|
||||
"github.com/hashicorp/consul/agent/connect"
|
||||
"github.com/hashicorp/consul/agent/consul/state"
|
||||
agent_grpc "github.com/hashicorp/consul/agent/grpc"
|
||||
"github.com/hashicorp/consul/agent/pool"
|
||||
"github.com/hashicorp/consul/agent/structs"
|
||||
tokenStore "github.com/hashicorp/consul/agent/token"
|
||||
"github.com/hashicorp/consul/api"
|
||||
"github.com/hashicorp/consul/proto/pbsubscribe"
|
||||
"github.com/hashicorp/consul/sdk/testutil"
|
||||
"github.com/hashicorp/consul/sdk/testutil/retry"
|
||||
"github.com/hashicorp/consul/testrpc"
|
||||
|
@ -964,6 +967,201 @@ func TestRPC_LocalTokenStrippedOnForward(t *testing.T) {
|
|||
require.Equal(t, localToken2.SecretID, arg.WriteRequest.Token, "token should not be stripped")
|
||||
}
|
||||
|
||||
func TestRPC_LocalTokenStrippedOnForward_GRPC(t *testing.T) {
|
||||
if testing.Short() {
|
||||
t.Skip("too slow for testing.Short")
|
||||
}
|
||||
|
||||
t.Parallel()
|
||||
dir1, s1 := testServerWithConfig(t, func(c *Config) {
|
||||
c.PrimaryDatacenter = "dc1"
|
||||
c.ACLsEnabled = true
|
||||
c.ACLResolverSettings.ACLDefaultPolicy = "deny"
|
||||
c.ACLMasterToken = "root"
|
||||
c.RPCConfig.EnableStreaming = true
|
||||
})
|
||||
s1.tokens.UpdateAgentToken("root", tokenStore.TokenSourceConfig)
|
||||
defer os.RemoveAll(dir1)
|
||||
defer s1.Shutdown()
|
||||
testrpc.WaitForLeader(t, s1.RPC, "dc1")
|
||||
codec := rpcClient(t, s1)
|
||||
defer codec.Close()
|
||||
|
||||
dir2, s2 := testServerWithConfig(t, func(c *Config) {
|
||||
c.Datacenter = "dc2"
|
||||
c.PrimaryDatacenter = "dc1"
|
||||
c.ACLsEnabled = true
|
||||
c.ACLResolverSettings.ACLDefaultPolicy = "deny"
|
||||
c.ACLTokenReplication = true
|
||||
c.ACLReplicationRate = 100
|
||||
c.ACLReplicationBurst = 100
|
||||
c.ACLReplicationApplyLimit = 1000000
|
||||
c.RPCConfig.EnableStreaming = true
|
||||
})
|
||||
s2.tokens.UpdateReplicationToken("root", tokenStore.TokenSourceConfig)
|
||||
s2.tokens.UpdateAgentToken("root", tokenStore.TokenSourceConfig)
|
||||
testrpc.WaitForLeader(t, s2.RPC, "dc2")
|
||||
defer os.RemoveAll(dir2)
|
||||
defer s2.Shutdown()
|
||||
codec2 := rpcClient(t, s2)
|
||||
defer codec2.Close()
|
||||
|
||||
// Try to join.
|
||||
joinWAN(t, s2, s1)
|
||||
testrpc.WaitForLeader(t, s1.RPC, "dc1")
|
||||
testrpc.WaitForLeader(t, s1.RPC, "dc2")
|
||||
|
||||
// Wait for legacy acls to be disabled so we are clear that
|
||||
// legacy replication isn't meddling.
|
||||
waitForNewACLs(t, s1)
|
||||
waitForNewACLs(t, s2)
|
||||
waitForNewACLReplication(t, s2, structs.ACLReplicateTokens, 1, 1, 0)
|
||||
|
||||
// create simple service policy
|
||||
policy, err := upsertTestPolicyWithRules(codec, "root", "dc1", `
|
||||
node_prefix "" { policy = "read" }
|
||||
service_prefix "" { policy = "read" }
|
||||
`)
|
||||
require.NoError(t, err)
|
||||
|
||||
// Wait for it to replicate
|
||||
retry.Run(t, func(r *retry.R) {
|
||||
_, p, err := s2.fsm.State().ACLPolicyGetByID(nil, policy.ID, &structs.EnterpriseMeta{})
|
||||
require.Nil(r, err)
|
||||
require.NotNil(r, p)
|
||||
})
|
||||
|
||||
// create local token that only works in DC2
|
||||
localToken2, err := upsertTestToken(codec, "root", "dc2", func(token *structs.ACLToken) {
|
||||
token.Local = true
|
||||
token.Policies = []structs.ACLTokenPolicyLink{
|
||||
{ID: policy.ID},
|
||||
}
|
||||
})
|
||||
require.NoError(t, err)
|
||||
|
||||
runStep(t, "Register a dummy node with a service", func(t *testing.T) {
|
||||
req := &structs.RegisterRequest{
|
||||
Node: "node1",
|
||||
Address: "3.4.5.6",
|
||||
Datacenter: "dc1",
|
||||
Service: &structs.NodeService{
|
||||
ID: "redis1",
|
||||
Service: "redis",
|
||||
Address: "3.4.5.6",
|
||||
Port: 8080,
|
||||
},
|
||||
WriteRequest: structs.WriteRequest{Token: "root"},
|
||||
}
|
||||
var out struct{}
|
||||
require.NoError(t, s1.RPC("Catalog.Register", &req, &out))
|
||||
})
|
||||
|
||||
var conn *grpc.ClientConn
|
||||
{
|
||||
client, builder := newClientWithGRPCResolver(t, func(c *Config) {
|
||||
c.Datacenter = "dc2"
|
||||
c.PrimaryDatacenter = "dc1"
|
||||
c.RPCConfig.EnableStreaming = true
|
||||
})
|
||||
joinLAN(t, client, s2)
|
||||
testrpc.WaitForTestAgent(t, client.RPC, "dc2", testrpc.WithToken("root"))
|
||||
|
||||
pool := agent_grpc.NewClientConnPool(agent_grpc.ClientConnPoolConfig{
|
||||
Servers: builder,
|
||||
DialingFromServer: false,
|
||||
DialingFromDatacenter: "dc2",
|
||||
})
|
||||
|
||||
conn, err = pool.ClientConn("dc2")
|
||||
require.NoError(t, err)
|
||||
}
|
||||
|
||||
// Try to use it locally (it should work)
|
||||
runStep(t, "token used locally should work", func(t *testing.T) {
|
||||
arg := &pbsubscribe.SubscribeRequest{
|
||||
Topic: pbsubscribe.Topic_ServiceHealth,
|
||||
Key: "redis",
|
||||
Token: localToken2.SecretID,
|
||||
Datacenter: "dc2",
|
||||
}
|
||||
event, err := getFirstSubscribeEventOrError(conn, arg)
|
||||
require.NoError(t, err)
|
||||
require.NotNil(t, event)
|
||||
|
||||
// make sure that token restore defer works
|
||||
require.Equal(t, localToken2.SecretID, arg.Token, "token should not be stripped")
|
||||
})
|
||||
|
||||
runStep(t, "token used remotely should not work", func(t *testing.T) {
|
||||
arg := &pbsubscribe.SubscribeRequest{
|
||||
Topic: pbsubscribe.Topic_ServiceHealth,
|
||||
Key: "redis",
|
||||
Token: localToken2.SecretID,
|
||||
Datacenter: "dc1",
|
||||
}
|
||||
|
||||
event, err := getFirstSubscribeEventOrError(conn, arg)
|
||||
|
||||
// NOTE: the subscription endpoint is a filtering style instead of a
|
||||
// hard-fail style so when the token isn't present 100% of the data is
|
||||
// filtered out leading to a stream with an empty snapshot.
|
||||
require.NoError(t, err)
|
||||
require.IsType(t, &pbsubscribe.Event_EndOfSnapshot{}, event.Payload)
|
||||
require.True(t, event.Payload.(*pbsubscribe.Event_EndOfSnapshot).EndOfSnapshot)
|
||||
})
|
||||
|
||||
runStep(t, "update anonymous token to read services", func(t *testing.T) {
|
||||
tokenUpsertReq := structs.ACLTokenSetRequest{
|
||||
Datacenter: "dc1",
|
||||
ACLToken: structs.ACLToken{
|
||||
AccessorID: structs.ACLTokenAnonymousID,
|
||||
Policies: []structs.ACLTokenPolicyLink{
|
||||
{ID: policy.ID},
|
||||
},
|
||||
},
|
||||
WriteRequest: structs.WriteRequest{Token: "root"},
|
||||
}
|
||||
token := structs.ACLToken{}
|
||||
err = msgpackrpc.CallWithCodec(codec, "ACL.TokenSet", &tokenUpsertReq, &token)
|
||||
require.NoError(t, err)
|
||||
require.NotEmpty(t, token.SecretID)
|
||||
})
|
||||
|
||||
runStep(t, "token used remotely should fallback on anonymous token now", func(t *testing.T) {
|
||||
arg := &pbsubscribe.SubscribeRequest{
|
||||
Topic: pbsubscribe.Topic_ServiceHealth,
|
||||
Key: "redis",
|
||||
Token: localToken2.SecretID,
|
||||
Datacenter: "dc1",
|
||||
}
|
||||
|
||||
event, err := getFirstSubscribeEventOrError(conn, arg)
|
||||
require.NoError(t, err)
|
||||
require.NotNil(t, event)
|
||||
|
||||
// So now that we can read data, we should get a snapshot with just instances of the "redis" service.
|
||||
require.NoError(t, err)
|
||||
|
||||
require.IsType(t, &pbsubscribe.Event_ServiceHealth{}, event.Payload)
|
||||
esh := event.Payload.(*pbsubscribe.Event_ServiceHealth)
|
||||
|
||||
require.Equal(t, pbsubscribe.CatalogOp_Register, esh.ServiceHealth.Op)
|
||||
csn := esh.ServiceHealth.CheckServiceNode
|
||||
|
||||
require.NotNil(t, csn)
|
||||
require.NotNil(t, csn.Node)
|
||||
require.Equal(t, "node1", csn.Node.Node)
|
||||
require.Equal(t, "3.4.5.6", csn.Node.Address)
|
||||
require.NotNil(t, csn.Service)
|
||||
require.Equal(t, "redis1", csn.Service.ID)
|
||||
require.Equal(t, "redis", csn.Service.Service)
|
||||
|
||||
// make sure that token restore defer works
|
||||
require.Equal(t, localToken2.SecretID, arg.Token, "token should not be stripped")
|
||||
})
|
||||
}
|
||||
|
||||
func TestCanRetry(t *testing.T) {
|
||||
type testCase struct {
|
||||
name string
|
||||
|
@ -1362,3 +1560,23 @@ func isConnectionClosedError(err error) bool {
|
|||
return false
|
||||
}
|
||||
}
|
||||
|
||||
func getFirstSubscribeEventOrError(conn *grpc.ClientConn, req *pbsubscribe.SubscribeRequest) (*pbsubscribe.Event, error) {
|
||||
streamClient := pbsubscribe.NewStateChangeSubscriptionClient(conn)
|
||||
ctx, cancel := context.WithCancel(context.Background())
|
||||
defer cancel()
|
||||
|
||||
handle, err := streamClient.Subscribe(ctx, req)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
|
||||
event, err := handle.Recv()
|
||||
if err == io.EOF {
|
||||
return nil, nil
|
||||
}
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
return event, nil
|
||||
}
|
||||
|
|
|
@ -95,7 +95,6 @@ const (
|
|||
)
|
||||
|
||||
const (
|
||||
legacyACLReplicationRoutineName = "legacy ACL replication"
|
||||
aclPolicyReplicationRoutineName = "ACL policy replication"
|
||||
aclRoleReplicationRoutineName = "ACL role replication"
|
||||
aclTokenReplicationRoutineName = "ACL token replication"
|
||||
|
|
|
@ -61,7 +61,7 @@ func (s *Restore) ACLBindingRule(rule *structs.ACLBindingRule) error {
|
|||
|
||||
// ACLAuthMethods is used when saving a snapshot
|
||||
func (s *Snapshot) ACLAuthMethods() (memdb.ResultIterator, error) {
|
||||
iter, err := s.tx.Get("acl-auth-methods", "id")
|
||||
iter, err := s.tx.Get(tableACLAuthMethods, indexID)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
|
@ -649,7 +649,7 @@ func aclTokenGetTxn(tx ReadTxn, ws memdb.WatchSet, value, index string, entMeta
|
|||
return nil, nil
|
||||
}
|
||||
|
||||
// ACLTokenList is used to list out all of the ACLs in the state store.
|
||||
// ACLTokenList returns a list of ACL Tokens that match the policy, role, and method.
|
||||
func (s *Store) ACLTokenList(ws memdb.WatchSet, local, global bool, policy, role, methodName string, methodMeta, entMeta *structs.EnterpriseMeta) (uint64, structs.ACLTokens, error) {
|
||||
tx := s.db.Txn(false)
|
||||
defer tx.Abort()
|
||||
|
@ -1741,3 +1741,51 @@ func intFromBool(cond bool) byte {
|
|||
}
|
||||
return 0
|
||||
}
|
||||
|
||||
func aclPolicyInsert(tx WriteTxn, policy *structs.ACLPolicy) error {
|
||||
if err := tx.Insert(tableACLPolicies, policy); err != nil {
|
||||
return fmt.Errorf("failed inserting acl policy: %v", err)
|
||||
}
|
||||
return updateTableIndexEntries(tx, tableACLPolicies, policy.ModifyIndex, &policy.EnterpriseMeta)
|
||||
}
|
||||
|
||||
func aclRoleInsert(tx WriteTxn, role *structs.ACLRole) error {
|
||||
// insert the role into memdb
|
||||
if err := tx.Insert(tableACLRoles, role); err != nil {
|
||||
return fmt.Errorf("failed inserting acl role: %v", err)
|
||||
}
|
||||
|
||||
// update acl-roles index
|
||||
return updateTableIndexEntries(tx, tableACLRoles, role.ModifyIndex, &role.EnterpriseMeta)
|
||||
}
|
||||
|
||||
func aclTokenInsert(tx WriteTxn, token *structs.ACLToken) error {
|
||||
// insert the token into memdb
|
||||
if err := tx.Insert(tableACLTokens, token); err != nil {
|
||||
return fmt.Errorf("failed inserting acl token: %v", err)
|
||||
}
|
||||
// update the overall acl-tokens index
|
||||
return updateTableIndexEntries(tx, tableACLTokens, token.ModifyIndex, token.EnterpriseMetadata())
|
||||
}
|
||||
|
||||
func aclAuthMethodInsert(tx WriteTxn, method *structs.ACLAuthMethod) error {
|
||||
// insert the auth method into memdb
|
||||
if err := tx.Insert(tableACLAuthMethods, method); err != nil {
|
||||
return fmt.Errorf("failed inserting acl role: %v", err)
|
||||
}
|
||||
|
||||
// update acl-auth-methods index
|
||||
return updateTableIndexEntries(tx, tableACLAuthMethods, method.ModifyIndex, &method.EnterpriseMeta)
|
||||
}
|
||||
|
||||
func aclBindingRuleInsert(tx WriteTxn, rule *structs.ACLBindingRule) error {
|
||||
rule.EnterpriseMeta.Normalize()
|
||||
|
||||
// insert the binding rule into memdb
|
||||
if err := tx.Insert(tableACLBindingRules, rule); err != nil {
|
||||
return fmt.Errorf("failed inserting acl role: %v", err)
|
||||
}
|
||||
|
||||
// update acl-binding-rules index
|
||||
return updateTableIndexEntries(tx, tableACLBindingRules, rule.ModifyIndex, &rule.EnterpriseMeta)
|
||||
}
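Each of the insert helpers above ends by bumping the table's entry in the index table via updateTableIndexEntries. The payoff is on the read side, where maxIndexTxn hands blocking queries the index they report back to clients. A minimal read-side sketch, assuming a hypothetical Widget type and tableWidgets table living alongside the code above in the state package:

```go
// Sketch only: Widget and tableWidgets are hypothetical stand-ins for any of
// the ACL tables whose insert helpers appear above.
func widgetList(tx ReadTxn, ws memdb.WatchSet) (uint64, []*Widget, error) {
	// the table index maintained by updateTableIndexEntries on every insert
	idx := maxIndexTxn(tx, tableWidgets)

	iter, err := tx.Get(tableWidgets, indexID)
	if err != nil {
		return 0, nil, fmt.Errorf("failed widget lookup: %v", err)
	}
	// register the iterator so blocking queries wake up when the table changes
	ws.Add(iter.WatchCh())

	var out []*Widget
	for raw := iter.Next(); raw != nil; raw = iter.Next() {
		out = append(out, raw.(*Widget))
	}
	return idx, out, nil
}
```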
|
||||
|
|
|
@ -11,15 +11,10 @@ import (
|
|||
"github.com/hashicorp/consul/agent/structs"
|
||||
)
|
||||
|
||||
func aclPolicyInsert(tx WriteTxn, policy *structs.ACLPolicy) error {
|
||||
if err := tx.Insert(tableACLPolicies, policy); err != nil {
|
||||
return fmt.Errorf("failed inserting acl policy: %v", err)
|
||||
func updateTableIndexEntries(tx WriteTxn, tableName string, modifyIndex uint64, _ *structs.EnterpriseMeta) error {
|
||||
if err := indexUpdateMaxTxn(tx, modifyIndex, tableName); err != nil {
|
||||
return fmt.Errorf("failed updating %s index: %v", tableName, err)
|
||||
}
|
||||
|
||||
if err := indexUpdateMaxTxn(tx, policy.ModifyIndex, tableACLPolicies); err != nil {
|
||||
return fmt.Errorf("failed updating acl policies index: %v", err)
|
||||
}
|
||||
|
||||
return nil
|
||||
}
|
||||
|
||||
|
@ -56,20 +51,6 @@ func (s *Store) ACLPolicyUpsertValidateEnterprise(*structs.ACLPolicy, *structs.A
|
|||
///// ACL Token Functions /////
|
||||
///////////////////////////////////////////////////////////////////////////////
|
||||
|
||||
func aclTokenInsert(tx WriteTxn, token *structs.ACLToken) error {
|
||||
// insert the token into memdb
|
||||
if err := tx.Insert(tableACLTokens, token); err != nil {
|
||||
return fmt.Errorf("failed inserting acl token: %v", err)
|
||||
}
|
||||
|
||||
// update the overall acl-tokens index
|
||||
if err := indexUpdateMaxTxn(tx, token.ModifyIndex, tableACLTokens); err != nil {
|
||||
return fmt.Errorf("failed updating acl tokens index: %v", err)
|
||||
}
|
||||
|
||||
return nil
|
||||
}
|
||||
|
||||
func aclTokenGetFromIndex(tx ReadTxn, id string, index string, entMeta *structs.EnterpriseMeta) (<-chan struct{}, interface{}, error) {
|
||||
return tx.FirstWatch(tableACLTokens, index, id)
|
||||
}
|
||||
|
@ -119,19 +100,6 @@ func (s *Store) ACLTokenUpsertValidateEnterprise(token *structs.ACLToken, existi
|
|||
///// ACL Role Functions /////
|
||||
///////////////////////////////////////////////////////////////////////////////
|
||||
|
||||
func aclRoleInsert(tx WriteTxn, role *structs.ACLRole) error {
|
||||
// insert the role into memdb
|
||||
if err := tx.Insert(tableACLRoles, role); err != nil {
|
||||
return fmt.Errorf("failed inserting acl role: %v", err)
|
||||
}
|
||||
|
||||
// update the overall acl-roles index
|
||||
if err := indexUpdateMaxTxn(tx, role.ModifyIndex, tableACLRoles); err != nil {
|
||||
return fmt.Errorf("failed updating acl roles index: %v", err)
|
||||
}
|
||||
return nil
|
||||
}
|
||||
|
||||
func aclRoleGetByID(tx ReadTxn, id string, _ *structs.EnterpriseMeta) (<-chan struct{}, interface{}, error) {
|
||||
return tx.FirstWatch(tableACLRoles, indexID, id)
|
||||
}
|
||||
|
@ -165,20 +133,6 @@ func (s *Store) ACLRoleUpsertValidateEnterprise(role *structs.ACLRole, existing
|
|||
///// ACL Binding Rule Functions /////
|
||||
///////////////////////////////////////////////////////////////////////////////
|
||||
|
||||
func aclBindingRuleInsert(tx WriteTxn, rule *structs.ACLBindingRule) error {
|
||||
// insert the role into memdb
|
||||
if err := tx.Insert(tableACLBindingRules, rule); err != nil {
|
||||
return fmt.Errorf("failed inserting acl role: %v", err)
|
||||
}
|
||||
|
||||
// update the overall acl-binding-rules index
|
||||
if err := indexUpdateMaxTxn(tx, rule.ModifyIndex, tableACLBindingRules); err != nil {
|
||||
return fmt.Errorf("failed updating acl binding-rules index: %v", err)
|
||||
}
|
||||
|
||||
return nil
|
||||
}
|
||||
|
||||
func aclBindingRuleGetByID(tx ReadTxn, id string, _ *structs.EnterpriseMeta) (<-chan struct{}, interface{}, error) {
|
||||
return tx.FirstWatch(tableACLBindingRules, indexID, id)
|
||||
}
|
||||
|
@ -220,43 +174,29 @@ func (s *Store) ACLBindingRuleUpsertValidateEnterprise(rule *structs.ACLBindingR
|
|||
///// ACL Auth Method Functions /////
|
||||
///////////////////////////////////////////////////////////////////////////////
|
||||
|
||||
func aclAuthMethodInsert(tx WriteTxn, method *structs.ACLAuthMethod) error {
|
||||
// insert the role into memdb
|
||||
if err := tx.Insert("acl-auth-methods", method); err != nil {
|
||||
return fmt.Errorf("failed inserting acl role: %v", err)
|
||||
}
|
||||
|
||||
// update the overall acl-auth-methods index
|
||||
if err := indexUpdateMaxTxn(tx, method.ModifyIndex, "acl-auth-methods"); err != nil {
|
||||
return fmt.Errorf("failed updating acl auth methods index: %v", err)
|
||||
}
|
||||
|
||||
return nil
|
||||
}
|
||||
|
||||
func aclAuthMethodGetByName(tx ReadTxn, method string, _ *structs.EnterpriseMeta) (<-chan struct{}, interface{}, error) {
|
||||
return tx.FirstWatch("acl-auth-methods", "id", method)
|
||||
return tx.FirstWatch(tableACLAuthMethods, indexID, Query{Value: method})
|
||||
}
|
||||
|
||||
func aclAuthMethodList(tx ReadTxn, entMeta *structs.EnterpriseMeta) (memdb.ResultIterator, error) {
|
||||
return tx.Get("acl-auth-methods", "id")
|
||||
return tx.Get(tableACLAuthMethods, indexID)
|
||||
}
|
||||
|
||||
func aclAuthMethodDeleteWithMethod(tx WriteTxn, method *structs.ACLAuthMethod, idx uint64) error {
|
||||
// remove the method
|
||||
if err := tx.Delete("acl-auth-methods", method); err != nil {
|
||||
if err := tx.Delete(tableACLAuthMethods, method); err != nil {
|
||||
return fmt.Errorf("failed deleting acl auth method: %v", err)
|
||||
}
|
||||
|
||||
// update the overall acl-auth-methods index
|
||||
if err := indexUpdateMaxTxn(tx, idx, "acl-auth-methods"); err != nil {
|
||||
if err := indexUpdateMaxTxn(tx, idx, tableACLAuthMethods); err != nil {
|
||||
return fmt.Errorf("failed updating acl auth methods index: %v", err)
|
||||
}
|
||||
return nil
|
||||
}
|
||||
|
||||
func aclAuthMethodMaxIndex(tx ReadTxn, _ *structs.ACLAuthMethod, entMeta *structs.EnterpriseMeta) uint64 {
|
||||
return maxIndexTxn(tx, "acl-auth-methods")
|
||||
return maxIndexTxn(tx, tableACLAuthMethods)
|
||||
}
|
||||
|
||||
func aclAuthMethodUpsertValidateEnterprise(_ ReadTxn, method *structs.ACLAuthMethod, existing *structs.ACLAuthMethod) error {
|
||||
|
|
|
@ -172,3 +172,23 @@ func testIndexerTableACLBindingRules() map[string]indexerTestCase {
|
|||
},
|
||||
}
|
||||
}
|
||||
|
||||
func testIndexerTableACLAuthMethods() map[string]indexerTestCase {
|
||||
obj := &structs.ACLAuthMethod{
|
||||
Name: "ThEAuthMethod",
|
||||
EnterpriseMeta: structs.EnterpriseMeta{},
|
||||
}
|
||||
encodedName := []byte{0x74, 0x68, 0x65, 0x61, 0x75, 0x74, 0x68, 0x6d, 0x65, 0x74, 0x68, 0x6f, 0x64, 0x0}
|
||||
return map[string]indexerTestCase{
|
||||
indexID: {
|
||||
read: indexValue{
|
||||
source: obj.Name,
|
||||
expected: encodedName,
|
||||
},
|
||||
write: indexValue{
|
||||
source: obj,
|
||||
expected: encodedName,
|
||||
},
|
||||
},
|
||||
}
|
||||
}
|
||||
|
|
|
@ -314,23 +314,6 @@ func indexAuthMethodFromACLBindingRule(raw interface{}) ([]byte, error) {
|
|||
return b.Bytes(), nil
|
||||
}
|
||||
|
||||
func authMethodsTableSchema() *memdb.TableSchema {
|
||||
return &memdb.TableSchema{
|
||||
Name: tableACLAuthMethods,
|
||||
Indexes: map[string]*memdb.IndexSchema{
|
||||
indexID: {
|
||||
Name: indexID,
|
||||
AllowMissing: false,
|
||||
Unique: true,
|
||||
Indexer: &memdb.StringFieldIndex{
|
||||
Field: "Name",
|
||||
Lowercase: true,
|
||||
},
|
||||
},
|
||||
},
|
||||
}
|
||||
}
|
||||
|
||||
func indexFromUUIDString(raw interface{}) ([]byte, error) {
|
||||
index, ok := raw.(string)
|
||||
if !ok {
|
||||
|
@ -499,3 +482,35 @@ func indexExpiresFromACLToken(raw interface{}, local bool) ([]byte, error) {
|
|||
b.Time(*p.ExpirationTime)
|
||||
return b.Bytes(), nil
|
||||
}
|
||||
|
||||
func authMethodsTableSchema() *memdb.TableSchema {
|
||||
return &memdb.TableSchema{
|
||||
Name: tableACLAuthMethods,
|
||||
Indexes: map[string]*memdb.IndexSchema{
|
||||
indexID: {
|
||||
Name: indexID,
|
||||
AllowMissing: false,
|
||||
Unique: true,
|
||||
Indexer: indexerSingle{
|
||||
readIndex: indexFromQuery,
|
||||
writeIndex: indexNameFromACLAuthMethod,
|
||||
},
|
||||
},
|
||||
},
|
||||
}
|
||||
}
|
||||
|
||||
func indexNameFromACLAuthMethod(raw interface{}) ([]byte, error) {
|
||||
p, ok := raw.(*structs.ACLAuthMethod)
|
||||
if !ok {
|
||||
return nil, fmt.Errorf("unexpected type %T for structs.ACLAuthMethod index", raw)
|
||||
}
|
||||
|
||||
if p.Name == "" {
|
||||
return nil, errMissingValueForIndex
|
||||
}
|
||||
|
||||
var b indexBuilder
|
||||
b.String(strings.ToLower(p.Name))
|
||||
return b.Bytes(), nil
|
||||
}
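The schema change pairs this write-side indexer with a read-side one: aclAuthMethodGetByName now passes Query{Value: method}, so the indexerSingle's readIndex has to turn a Query into the same lower-cased bytes that indexNameFromACLAuthMethod produces. indexFromQuery itself is not part of this hunk; a sketch of what it presumably looks like:

```go
// Presumed shape of the read-side indexer (not shown in this hunk): it must
// lower-case the Query value so lookups line up with the write-side bytes.
func indexFromQuery(arg interface{}) ([]byte, error) {
	q, ok := arg.(Query)
	if !ok {
		return nil, fmt.Errorf("unexpected type %T for Query index", arg)
	}

	var b indexBuilder
	b.String(strings.ToLower(q.Value))
	return b.Bytes(), nil
}
```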
|
||||
|
|
|
@ -4149,7 +4149,7 @@ func TestStateStore_ACLAuthMethods_Snapshot_Restore(t *testing.T) {
|
|||
require.NoError(t, err)
|
||||
require.Equal(t, uint64(2), idx)
|
||||
require.ElementsMatch(t, methods, res)
|
||||
require.Equal(t, uint64(2), s.maxIndex("acl-auth-methods"))
|
||||
require.Equal(t, uint64(2), s.maxIndex(tableACLAuthMethods))
|
||||
}()
|
||||
}
|
||||
|
||||
|
|
|
@ -156,7 +156,7 @@ func TestStore_IntentionSetGet_basic(t *testing.T) {
|
|||
//nolint:staticcheck
|
||||
expected.SetHash()
|
||||
|
||||
expected.NormalizePartitionFields()
|
||||
expected.FillPartitionAndNamespace(nil, true)
|
||||
}
|
||||
require.True(t, watchFired(ws), "watch fired")
|
||||
|
||||
|
@ -1098,7 +1098,7 @@ func TestStore_IntentionsList(t *testing.T) {
|
|||
UpdatedAt: testTimeA,
|
||||
}
|
||||
if !legacy {
|
||||
ret.NormalizePartitionFields()
|
||||
ret.FillPartitionAndNamespace(nil, true)
|
||||
}
|
||||
return ret
|
||||
}
|
||||
|
|
|
@ -2,6 +2,7 @@ package state
|
|||
|
||||
import (
|
||||
"fmt"
|
||||
"strings"
|
||||
|
||||
"github.com/hashicorp/go-memdb"
|
||||
)
|
||||
|
@ -75,11 +76,37 @@ func indexTableSchema() *memdb.TableSchema {
|
|||
Name: indexID,
|
||||
AllowMissing: false,
|
||||
Unique: true,
|
||||
Indexer: &memdb.StringFieldIndex{
|
||||
Field: "Key",
|
||||
Lowercase: true,
|
||||
Indexer: indexerSingle{
|
||||
readIndex: indexFromString,
|
||||
writeIndex: indexNameFromIndexEntry,
|
||||
},
|
||||
},
|
||||
},
|
||||
}
|
||||
}
|
||||
|
||||
func indexNameFromIndexEntry(raw interface{}) ([]byte, error) {
|
||||
p, ok := raw.(*IndexEntry)
|
||||
if !ok {
|
||||
return nil, fmt.Errorf("unexpected type %T for IndexEntry index", raw)
|
||||
}
|
||||
|
||||
if p.Key == "" {
|
||||
return nil, errMissingValueForIndex
|
||||
}
|
||||
|
||||
var b indexBuilder
|
||||
b.String(strings.ToLower(p.Key))
|
||||
return b.Bytes(), nil
|
||||
}
|
||||
|
||||
func indexFromString(raw interface{}) ([]byte, error) {
|
||||
q, ok := raw.(string)
|
||||
if !ok {
|
||||
return nil, fmt.Errorf("unexpected type %T for string prefix query", raw)
|
||||
}
|
||||
|
||||
var b indexBuilder
|
||||
b.String(strings.ToLower(q))
|
||||
return b.Bytes(), nil
|
||||
}
|
||||
|
|
|
@ -295,17 +295,20 @@ func indexUpdateMaxTxn(tx WriteTxn, idx uint64, table string) error {
|
|||
return fmt.Errorf("failed to retrieve existing index: %s", err)
|
||||
}
|
||||
|
||||
// Always take the first update, otherwise do the > check.
|
||||
if ti == nil {
|
||||
if err := tx.Insert(tableIndex, &IndexEntry{table, idx}); err != nil {
|
||||
return fmt.Errorf("failed updating index %s", err)
|
||||
// if this is an update check the idx
|
||||
if ti != nil {
|
||||
cur, ok := ti.(*IndexEntry)
|
||||
if !ok {
|
||||
return fmt.Errorf("failed updating index %T need to be `*IndexEntry`", ti)
|
||||
}
|
||||
// Stored index is newer, don't insert the index
|
||||
if idx <= cur.Value {
|
||||
return nil
|
||||
}
|
||||
return nil
|
||||
}
|
||||
if cur, ok := ti.(*IndexEntry); ok && idx > cur.Value {
|
||||
if err := tx.Insert(tableIndex, &IndexEntry{table, idx}); err != nil {
|
||||
return fmt.Errorf("failed updating index %s", err)
|
||||
}
|
||||
|
||||
if err := tx.Insert(tableIndex, &IndexEntry{table, idx}); err != nil {
|
||||
return fmt.Errorf("failed updating index %s", err)
|
||||
}
|
||||
return nil
|
||||
}
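This hunk interleaves the old early-return-on-nil flow with the new check-then-upsert flow, so the before/after is hard to read in place. A reconstruction of what the updated helper plausibly looks like end to end (an assumption, not a verbatim copy of the commit):

```go
// Plausible final shape of indexUpdateMaxTxn after this change: if the table
// already has an IndexEntry, only move it forward; otherwise insert one. The
// initial tx.First lookup sits just above the visible hunk.
func indexUpdateMaxTxn(tx WriteTxn, idx uint64, table string) error {
	ti, err := tx.First(tableIndex, indexID, table)
	if err != nil {
		return fmt.Errorf("failed to retrieve existing index: %s", err)
	}

	// if this is an update, check the stored index first
	if ti != nil {
		cur, ok := ti.(*IndexEntry)
		if !ok {
			return fmt.Errorf("failed updating index %T need to be `*IndexEntry`", ti)
		}
		// stored index is newer, don't insert the index
		if idx <= cur.Value {
			return nil
		}
	}

	if err := tx.Insert(tableIndex, &IndexEntry{table, idx}); err != nil {
		return fmt.Errorf("failed updating index %s", err)
	}
	return nil
}
```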
|
||||
|
|
|
@ -10,6 +10,7 @@ import (
|
|||
|
||||
const (
|
||||
serviceNamesUsageTable = "service-names"
|
||||
kvUsageTable = "kv-entries"
|
||||
|
||||
tableUsage = "usage"
|
||||
)
|
||||
|
@ -54,6 +55,11 @@ type NodeUsage struct {
|
|||
EnterpriseNodeUsage
|
||||
}
|
||||
|
||||
type KVUsage struct {
|
||||
KVCount int
|
||||
EnterpriseKVUsage
|
||||
}
|
||||
|
||||
type uniqueServiceState int
|
||||
|
||||
const (
|
||||
|
@ -95,6 +101,9 @@ func updateUsage(tx WriteTxn, changes Changes) error {
|
|||
} else {
|
||||
serviceNameChanges[svc.CompoundServiceName()] += delta
|
||||
}
|
||||
case "kvs":
|
||||
usageDeltas[change.Table] += delta
|
||||
addEnterpriseKVUsage(usageDeltas, change)
|
||||
}
|
||||
}
|
||||
|
||||
|
@ -269,6 +278,26 @@ func (s *Store) ServiceUsage() (uint64, ServiceUsage, error) {
|
|||
return serviceInstances.Index, results, nil
|
||||
}
|
||||
|
||||
func (s *Store) KVUsage() (uint64, KVUsage, error) {
|
||||
tx := s.db.ReadTxn()
|
||||
defer tx.Abort()
|
||||
|
||||
kvs, err := firstUsageEntry(tx, "kvs")
|
||||
if err != nil {
|
||||
return 0, KVUsage{}, fmt.Errorf("failed kvs lookup: %s", err)
|
||||
}
|
||||
|
||||
usage := KVUsage{
|
||||
KVCount: kvs.Count,
|
||||
}
|
||||
results, err := compileEnterpriseKVUsage(tx, usage)
|
||||
if err != nil {
|
||||
return 0, KVUsage{}, fmt.Errorf("failed kvs lookup: %s", err)
|
||||
}
|
||||
|
||||
return kvs.Index, results, nil
|
||||
}
|
||||
|
||||
func firstUsageEntry(tx ReadTxn, id string) (*UsageEntry, error) {
|
||||
usage, err := tx.First(tableUsage, indexID, id)
|
||||
if err != nil {
|
||||
|
|
|
@ -10,6 +10,7 @@ import (
|
|||
|
||||
type EnterpriseServiceUsage struct{}
|
||||
type EnterpriseNodeUsage struct{}
|
||||
type EnterpriseKVUsage struct{}
|
||||
|
||||
func addEnterpriseNodeUsage(map[string]int, memdb.Change) {}
|
||||
|
||||
|
@ -17,6 +18,8 @@ func addEnterpriseServiceInstanceUsage(map[string]int, memdb.Change) {}
|
|||
|
||||
func addEnterpriseServiceUsage(map[string]int, map[structs.ServiceName]uniqueServiceState) {}
|
||||
|
||||
func addEnterpriseKVUsage(map[string]int, memdb.Change) {}
|
||||
|
||||
func compileEnterpriseServiceUsage(tx ReadTxn, usage ServiceUsage) (ServiceUsage, error) {
|
||||
return usage, nil
|
||||
}
|
||||
|
@ -24,3 +27,7 @@ func compileEnterpriseServiceUsage(tx ReadTxn, usage ServiceUsage) (ServiceUsage
|
|||
func compileEnterpriseNodeUsage(tx ReadTxn, usage NodeUsage) (NodeUsage, error) {
|
||||
return usage, nil
|
||||
}
|
||||
|
||||
func compileEnterpriseKVUsage(tx ReadTxn, usage KVUsage) (KVUsage, error) {
|
||||
return usage, nil
|
||||
}
|
||||
|
|
|
@ -45,6 +45,44 @@ func TestStateStore_Usage_NodeUsage_Delete(t *testing.T) {
|
|||
require.Equal(t, usage.Nodes, 1)
|
||||
}
|
||||
|
||||
func TestStateStore_Usage_KVUsage(t *testing.T) {
|
||||
s := testStateStore(t)
|
||||
|
||||
// No keys have been registered, and thus no usage entry exists
|
||||
idx, usage, err := s.KVUsage()
|
||||
require.NoError(t, err)
|
||||
require.Equal(t, idx, uint64(0))
|
||||
require.Equal(t, usage.KVCount, 0)
|
||||
|
||||
testSetKey(t, s, 0, "key-1", "0", nil)
|
||||
testSetKey(t, s, 1, "key-2", "0", nil)
|
||||
testSetKey(t, s, 2, "key-2", "1", nil)
|
||||
|
||||
idx, usage, err = s.KVUsage()
|
||||
require.NoError(t, err)
|
||||
require.Equal(t, idx, uint64(2))
|
||||
require.Equal(t, usage.KVCount, 2)
|
||||
}
|
||||
|
||||
func TestStateStore_Usage_KVUsage_Delete(t *testing.T) {
|
||||
s := testStateStore(t)
|
||||
|
||||
testSetKey(t, s, 0, "key-1", "0", nil)
|
||||
testSetKey(t, s, 1, "key-2", "0", nil)
|
||||
testSetKey(t, s, 2, "key-2", "1", nil)
|
||||
|
||||
idx, usage, err := s.KVUsage()
|
||||
require.NoError(t, err)
|
||||
require.Equal(t, idx, uint64(2))
|
||||
require.Equal(t, usage.KVCount, 2)
|
||||
|
||||
require.NoError(t, s.KVSDelete(3, "key-2", nil))
|
||||
idx, usage, err = s.KVUsage()
|
||||
require.NoError(t, err)
|
||||
require.Equal(t, idx, uint64(3))
|
||||
require.Equal(t, usage.KVCount, 1)
|
||||
}
|
||||
|
||||
func TestStateStore_Usage_ServiceUsageEmpty(t *testing.T) {
|
||||
s := testStateStore(t)
|
||||
|
||||
|
|
|
@ -26,21 +26,8 @@ func (s subscribeBackend) ResolveTokenAndDefaultMeta(
|
|||
|
||||
var _ subscribe.Backend = (*subscribeBackend)(nil)
|
||||
|
||||
// Forward requests to a remote datacenter by calling f if the target dc does not
|
||||
// match the config. Does nothing but return handled=false if dc is not specified,
|
||||
// or if it matches the Datacenter in config.
|
||||
//
|
||||
// TODO: extract this so that it can be used with other grpc services.
|
||||
// TODO: rename to ForwardToDC
|
||||
func (s subscribeBackend) Forward(dc string, f func(*grpc.ClientConn) error) (handled bool, err error) {
|
||||
if dc == "" || dc == s.srv.config.Datacenter {
|
||||
return false, nil
|
||||
}
|
||||
conn, err := s.connPool.ClientConn(dc)
|
||||
if err != nil {
|
||||
return false, err
|
||||
}
|
||||
return true, f(conn)
|
||||
func (s subscribeBackend) Forward(info structs.RPCInfo, f func(*grpc.ClientConn) error) (handled bool, err error) {
|
||||
return s.srv.ForwardGRPC(s.connPool, info, f)
|
||||
}
|
||||
|
||||
func (s subscribeBackend) Subscribe(req *stream.SubscribeRequest) (*stream.Subscription, error) {
|
||||
|
|
|
@ -362,17 +362,19 @@ func TestSubscribeBackend_IntegrationWithServer_DeliversAllMessages(t *testing.T
|
|||
}
|
||||
|
||||
func newClientWithGRPCResolver(t *testing.T, ops ...func(*Config)) (*Client, *resolver.ServerResolverBuilder) {
|
||||
builder := resolver.NewServerResolverBuilder(newTestResolverConfig(t, "client"))
|
||||
resolver.Register(builder)
|
||||
t.Cleanup(func() {
|
||||
resolver.Deregister(builder.Authority())
|
||||
})
|
||||
|
||||
_, config := testClientConfig(t)
|
||||
for _, op := range ops {
|
||||
op(config)
|
||||
}
|
||||
|
||||
builder := resolver.NewServerResolverBuilder(newTestResolverConfig(t,
|
||||
"client."+config.Datacenter+"."+string(config.NodeID)))
|
||||
|
||||
resolver.Register(builder)
|
||||
t.Cleanup(func() {
|
||||
resolver.Deregister(builder.Authority())
|
||||
})
|
||||
|
||||
deps := newDefaultDeps(t, config)
|
||||
deps.Router = router.NewRouter(
|
||||
deps.Logger,
|
||||
|
|
|
@ -36,6 +36,10 @@ var Gauges = []prometheus.GaugeDefinition{
|
|||
Name: []string{"consul", "members", "servers"},
|
||||
Help: "Measures the current number of server agents registered with Consul. It is only emitted by Consul servers. Added in v1.9.6.",
|
||||
},
|
||||
{
|
||||
Name: []string{"consul", "kv", "entries"},
|
||||
Help: "Measures the current number of server agents registered with Consul. It is only emitted by Consul servers. Added in v1.10.3.",
|
||||
},
|
||||
}
|
||||
|
||||
type getMembersFunc func() []serf.Member
|
||||
|
@ -145,6 +149,7 @@ func (u *UsageMetricsReporter) Run(ctx context.Context) {
|
|||
}
|
||||
|
||||
func (u *UsageMetricsReporter) runOnce() {
|
||||
u.logger.Trace("Starting usage run")
|
||||
state := u.stateProvider.State()
|
||||
|
||||
_, nodeUsage, err := state.NodeUsage()
|
||||
|
@ -163,6 +168,14 @@ func (u *UsageMetricsReporter) runOnce() {
|
|||
|
||||
members := u.memberUsage()
|
||||
u.emitMemberUsage(members)
|
||||
|
||||
_, kvUsage, err := state.KVUsage()
|
||||
if err != nil {
|
||||
u.logger.Warn("failed to retrieve kv entry usage from state store", "error", err)
|
||||
}
|
||||
|
||||
u.emitKVUsage(kvUsage)
|
||||
|
||||
}
|
||||
|
||||
func (u *UsageMetricsReporter) memberUsage() []serf.Member {
|
||||
|
|
|
@ -58,3 +58,11 @@ func (u *UsageMetricsReporter) emitServiceUsage(serviceUsage state.ServiceUsage)
|
|||
u.metricLabels,
|
||||
)
|
||||
}
|
||||
|
||||
func (u *UsageMetricsReporter) emitKVUsage(kvUsage state.KVUsage) {
|
||||
metrics.SetGaugeWithLabels(
|
||||
[]string{"consul", "state", "kv_entries"},
|
||||
float32(kvUsage.KVCount),
|
||||
u.metricLabels,
|
||||
)
|
||||
}
|
||||
|
|
|
@ -57,6 +57,11 @@ func TestUsageReporter_emitNodeUsage_OSS(t *testing.T) {
|
|||
Value: 0,
|
||||
Labels: []metrics.Label{{Name: "datacenter", Value: "dc1"}},
|
||||
},
|
||||
"consul.usage.test.consul.state.kv_entries;datacenter=dc1": {
|
||||
Name: "consul.usage.test.consul.state.kv_entries",
|
||||
Value: 0,
|
||||
Labels: []metrics.Label{{Name: "datacenter", Value: "dc1"}},
|
||||
},
|
||||
},
|
||||
getMembersFunc: func() []serf.Member { return []serf.Member{} },
|
||||
},
|
||||
|
@ -114,6 +119,11 @@ func TestUsageReporter_emitNodeUsage_OSS(t *testing.T) {
|
|||
Value: 0,
|
||||
Labels: []metrics.Label{{Name: "datacenter", Value: "dc1"}},
|
||||
},
|
||||
"consul.usage.test.consul.state.kv_entries;datacenter=dc1": {
|
||||
Name: "consul.usage.test.consul.state.kv_entries",
|
||||
Value: 0,
|
||||
Labels: []metrics.Label{{Name: "datacenter", Value: "dc1"}},
|
||||
},
|
||||
},
|
||||
},
|
||||
}
|
||||
|
@ -199,6 +209,11 @@ func TestUsageReporter_emitServiceUsage_OSS(t *testing.T) {
|
|||
{Name: "datacenter", Value: "dc1"},
|
||||
},
|
||||
},
|
||||
"consul.usage.test.consul.state.kv_entries;datacenter=dc1": {
|
||||
Name: "consul.usage.test.consul.state.kv_entries",
|
||||
Value: 0,
|
||||
Labels: []metrics.Label{{Name: "datacenter", Value: "dc1"}},
|
||||
},
|
||||
},
|
||||
getMembersFunc: func() []serf.Member { return []serf.Member{} },
|
||||
},
|
||||
|
@ -276,6 +291,11 @@ func TestUsageReporter_emitServiceUsage_OSS(t *testing.T) {
|
|||
{Name: "datacenter", Value: "dc1"},
|
||||
},
|
||||
},
|
||||
"consul.usage.test.consul.state.kv_entries;datacenter=dc1": {
|
||||
Name: "consul.usage.test.consul.state.kv_entries",
|
||||
Value: 0,
|
||||
Labels: []metrics.Label{{Name: "datacenter", Value: "dc1"}},
|
||||
},
|
||||
},
|
||||
},
|
||||
}
|
||||
|
@ -314,3 +334,156 @@ func TestUsageReporter_emitServiceUsage_OSS(t *testing.T) {
|
|||
})
|
||||
}
|
||||
}
|
||||
|
||||
func TestUsageReporter_emitKVUsage_OSS(t *testing.T) {
|
||||
type testCase struct {
|
||||
modifyStateStore func(t *testing.T, s *state.Store)
|
||||
getMembersFunc getMembersFunc
|
||||
expectedGauges map[string]metrics.GaugeValue
|
||||
}
|
||||
cases := map[string]testCase{
|
||||
"empty-state": {
|
||||
expectedGauges: map[string]metrics.GaugeValue{
|
||||
// --- node ---
|
||||
"consul.usage.test.consul.state.nodes;datacenter=dc1": {
|
||||
Name: "consul.usage.test.consul.state.nodes",
|
||||
Value: 0,
|
||||
Labels: []metrics.Label{{Name: "datacenter", Value: "dc1"}},
|
||||
},
|
||||
// --- member ---
|
||||
"consul.usage.test.consul.members.clients;datacenter=dc1": {
|
||||
Name: "consul.usage.test.consul.members.clients",
|
||||
Value: 0,
|
||||
Labels: []metrics.Label{{Name: "datacenter", Value: "dc1"}},
|
||||
},
|
||||
"consul.usage.test.consul.members.servers;datacenter=dc1": {
|
||||
Name: "consul.usage.test.consul.members.servers",
|
||||
Value: 0,
|
||||
Labels: []metrics.Label{{Name: "datacenter", Value: "dc1"}},
|
||||
},
|
||||
// --- service ---
|
||||
"consul.usage.test.consul.state.services;datacenter=dc1": {
|
||||
Name: "consul.usage.test.consul.state.services",
|
||||
Value: 0,
|
||||
Labels: []metrics.Label{{Name: "datacenter", Value: "dc1"}},
|
||||
},
|
||||
"consul.usage.test.consul.state.service_instances;datacenter=dc1": {
|
||||
Name: "consul.usage.test.consul.state.service_instances",
|
||||
Value: 0,
|
||||
Labels: []metrics.Label{{Name: "datacenter", Value: "dc1"}},
|
||||
},
|
||||
"consul.usage.test.consul.state.kv_entries;datacenter=dc1": {
|
||||
Name: "consul.usage.test.consul.state.kv_entries",
|
||||
Value: 0,
|
||||
Labels: []metrics.Label{{Name: "datacenter", Value: "dc1"}},
|
||||
},
|
||||
},
|
||||
getMembersFunc: func() []serf.Member { return []serf.Member{} },
|
||||
},
|
||||
"nodes": {
|
||||
modifyStateStore: func(t *testing.T, s *state.Store) {
|
||||
require.NoError(t, s.EnsureNode(1, &structs.Node{Node: "foo", Address: "127.0.0.1"}))
|
||||
require.NoError(t, s.EnsureNode(2, &structs.Node{Node: "bar", Address: "127.0.0.2"}))
|
||||
require.NoError(t, s.EnsureNode(3, &structs.Node{Node: "baz", Address: "127.0.0.2"}))
|
||||
|
||||
require.NoError(t, s.KVSSet(4, &structs.DirEntry{Key: "a", Value: []byte{1}}))
|
||||
require.NoError(t, s.KVSSet(5, &structs.DirEntry{Key: "b", Value: []byte{1}}))
|
||||
require.NoError(t, s.KVSSet(6, &structs.DirEntry{Key: "c", Value: []byte{1}}))
|
||||
require.NoError(t, s.KVSSet(7, &structs.DirEntry{Key: "d", Value: []byte{1}}))
|
||||
require.NoError(t, s.KVSDelete(8, "d", &structs.EnterpriseMeta{}))
|
||||
require.NoError(t, s.KVSDelete(9, "c", &structs.EnterpriseMeta{}))
|
||||
require.NoError(t, s.KVSSet(10, &structs.DirEntry{Key: "e", Value: []byte{1}}))
|
||||
require.NoError(t, s.KVSSet(11, &structs.DirEntry{Key: "f", Value: []byte{1}}))
|
||||
},
|
||||
getMembersFunc: func() []serf.Member {
|
||||
return []serf.Member{
|
||||
{
|
||||
Name: "foo",
|
||||
Tags: map[string]string{"role": "consul"},
|
||||
Status: serf.StatusAlive,
|
||||
},
|
||||
{
|
||||
Name: "bar",
|
||||
Tags: map[string]string{"role": "consul"},
|
||||
Status: serf.StatusAlive,
|
||||
},
|
||||
{
|
||||
Name: "baz",
|
||||
Tags: map[string]string{"role": "node"},
|
||||
Status: serf.StatusAlive,
|
||||
},
|
||||
}
|
||||
},
|
||||
expectedGauges: map[string]metrics.GaugeValue{
|
||||
// --- node ---
|
||||
"consul.usage.test.consul.state.nodes;datacenter=dc1": {
|
||||
Name: "consul.usage.test.consul.state.nodes",
|
||||
Value: 3,
|
||||
Labels: []metrics.Label{{Name: "datacenter", Value: "dc1"}},
|
||||
},
|
||||
// --- member ---
|
||||
"consul.usage.test.consul.members.servers;datacenter=dc1": {
|
||||
Name: "consul.usage.test.consul.members.servers",
|
||||
Value: 2,
|
||||
Labels: []metrics.Label{{Name: "datacenter", Value: "dc1"}},
|
||||
},
|
||||
"consul.usage.test.consul.members.clients;datacenter=dc1": {
|
||||
Name: "consul.usage.test.consul.members.clients",
|
||||
Value: 1,
|
||||
Labels: []metrics.Label{{Name: "datacenter", Value: "dc1"}},
|
||||
},
|
||||
// --- service ---
|
||||
"consul.usage.test.consul.state.services;datacenter=dc1": {
|
||||
Name: "consul.usage.test.consul.state.services",
|
||||
Value: 0,
|
||||
Labels: []metrics.Label{{Name: "datacenter", Value: "dc1"}},
|
||||
},
|
||||
"consul.usage.test.consul.state.service_instances;datacenter=dc1": {
|
||||
Name: "consul.usage.test.consul.state.service_instances",
|
||||
Value: 0,
|
||||
Labels: []metrics.Label{{Name: "datacenter", Value: "dc1"}},
|
||||
},
|
||||
"consul.usage.test.consul.state.kv_entries;datacenter=dc1": {
|
||||
Name: "consul.usage.test.consul.state.kv_entries",
|
||||
Value: 4,
|
||||
Labels: []metrics.Label{{Name: "datacenter", Value: "dc1"}},
|
||||
},
|
||||
},
|
||||
},
|
||||
}
|
||||
|
||||
for name, tcase := range cases {
|
||||
t.Run(name, func(t *testing.T) {
|
||||
// Only have a single interval for the test
|
||||
sink := metrics.NewInmemSink(1*time.Minute, 1*time.Minute)
|
||||
cfg := metrics.DefaultConfig("consul.usage.test")
|
||||
cfg.EnableHostname = false
|
||||
metrics.NewGlobal(cfg, sink)
|
||||
|
||||
mockStateProvider := &mockStateProvider{}
|
||||
s, err := newStateStore()
|
||||
require.NoError(t, err)
|
||||
if tcase.modifyStateStore != nil {
|
||||
tcase.modifyStateStore(t, s)
|
||||
}
|
||||
mockStateProvider.On("State").Return(s)
|
||||
|
||||
reporter, err := NewUsageMetricsReporter(
|
||||
new(Config).
|
||||
WithStateProvider(mockStateProvider).
|
||||
WithLogger(testutil.Logger(t)).
|
||||
WithDatacenter("dc1").
|
||||
WithGetMembersFunc(tcase.getMembersFunc),
|
||||
)
|
||||
require.NoError(t, err)
|
||||
|
||||
reporter.runOnce()
|
||||
|
||||
intervals := sink.Data()
|
||||
require.Len(t, intervals, 1)
|
||||
intv := intervals[0]
|
||||
|
||||
assertEqualGaugeMaps(t, tcase.expectedGauges, intv.Gauges)
|
||||
})
|
||||
}
|
||||
}
|
||||
|
|
|
@ -0,0 +1,86 @@
|
|||
package agent
|
||||
|
||||
import (
|
||||
"io"
|
||||
|
||||
"github.com/hashicorp/consul/acl"
|
||||
"github.com/hashicorp/consul/agent/consul"
|
||||
"github.com/hashicorp/consul/agent/structs"
|
||||
"github.com/hashicorp/consul/lib"
|
||||
"github.com/hashicorp/serf/serf"
|
||||
"github.com/stretchr/testify/mock"
|
||||
)
|
||||
|
||||
type delegateMock struct {
|
||||
mock.Mock
|
||||
}
|
||||
|
||||
func (m *delegateMock) GetLANCoordinate() (lib.CoordinateSet, error) {
|
||||
ret := m.Called()
|
||||
return ret.Get(0).(lib.CoordinateSet), ret.Error(1)
|
||||
}
|
||||
|
||||
func (m *delegateMock) Leave() error {
|
||||
return m.Called().Error(0)
|
||||
}
|
||||
|
||||
func (m *delegateMock) LANMembers() []serf.Member {
|
||||
return m.Called().Get(0).([]serf.Member)
|
||||
}
|
||||
|
||||
func (m *delegateMock) LANMembersAllSegments() ([]serf.Member, error) {
|
||||
ret := m.Called()
|
||||
return ret.Get(0).([]serf.Member), ret.Error(1)
|
||||
}
|
||||
|
||||
func (m *delegateMock) LANSegmentMembers(segment string) ([]serf.Member, error) {
|
||||
ret := m.Called()
|
||||
return ret.Get(0).([]serf.Member), ret.Error(1)
|
||||
}
|
||||
|
||||
func (m *delegateMock) LocalMember() serf.Member {
|
||||
return m.Called().Get(0).(serf.Member)
|
||||
}
|
||||
|
||||
func (m *delegateMock) JoinLAN(addrs []string) (n int, err error) {
|
||||
ret := m.Called(addrs)
|
||||
return ret.Int(0), ret.Error(1)
|
||||
}
|
||||
|
||||
func (m *delegateMock) RemoveFailedNode(node string, prune bool) error {
|
||||
return m.Called(node, prune).Error(0)
|
||||
}
|
||||
|
||||
func (m *delegateMock) ResolveTokenToIdentity(token string) (structs.ACLIdentity, error) {
|
||||
ret := m.Called(token)
|
||||
return ret.Get(0).(structs.ACLIdentity), ret.Error(1)
|
||||
}
|
||||
|
||||
func (m *delegateMock) ResolveTokenAndDefaultMeta(token string, entMeta *structs.EnterpriseMeta, authzContext *acl.AuthorizerContext) (acl.Authorizer, error) {
|
||||
ret := m.Called(token, entMeta, authzContext)
|
||||
return ret.Get(0).(acl.Authorizer), ret.Error(1)
|
||||
}
|
||||
|
||||
func (m *delegateMock) RPC(method string, args interface{}, reply interface{}) error {
|
||||
return m.Called(method, args, reply).Error(0)
|
||||
}
|
||||
|
||||
func (m *delegateMock) UseLegacyACLs() bool {
|
||||
return m.Called().Bool(0)
|
||||
}
|
||||
|
||||
func (m *delegateMock) SnapshotRPC(args *structs.SnapshotRequest, in io.Reader, out io.Writer, replyFn structs.SnapshotReplyFn) error {
|
||||
return m.Called(args, in, out, replyFn).Error(0)
|
||||
}
|
||||
|
||||
func (m *delegateMock) Shutdown() error {
|
||||
return m.Called().Error(0)
|
||||
}
|
||||
|
||||
func (m *delegateMock) Stats() map[string]map[string]string {
|
||||
return m.Called().Get(0).(map[string]map[string]string)
|
||||
}
|
||||
|
||||
func (m *delegateMock) ReloadConfig(config consul.ReloadableConfig) error {
|
||||
return m.Called(config).Error(0)
|
||||
}
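For context, this is a standard testify mock: each method records the call with m.Called and returns whatever the test programmed. An illustrative usage, assuming the usual testing and require imports:

```go
// Illustrative only: how a mock like delegateMock is typically exercised.
func TestDelegateMockUsage(t *testing.T) {
	d := &delegateMock{}
	// program the expectations the code under test will hit
	d.On("LANMembers").Return([]serf.Member{{Name: "node1", Status: serf.StatusAlive}})
	d.On("Shutdown").Return(nil)

	// stand-in for the code under test calling through the delegate
	require.Len(t, d.LANMembers(), 1)
	require.NoError(t, d.Shutdown())

	// fail the test if any programmed expectation was never called
	d.AssertExpectations(t)
}
```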
|
|
@ -80,13 +80,13 @@ func (s *handlerIngressGateway) handleUpdate(ctx context.Context, u cache.Update
|
|||
return fmt.Errorf("invalid type for config entry: %T", resp.Entry)
|
||||
}
|
||||
|
||||
snap.IngressGateway.TLSEnabled = gatewayConf.TLS.Enabled
|
||||
snap.IngressGateway.TLSSet = true
|
||||
snap.IngressGateway.GatewayConfigLoaded = true
|
||||
snap.IngressGateway.TLSConfig = gatewayConf.TLS
|
||||
|
||||
// Load each listener's config from the config entry so we don't have to
|
||||
// pass listener config through "upstreams" types as that grows.
|
||||
for _, l := range gatewayConf.Listeners {
|
||||
key := IngressListenerKey{Protocol: l.Protocol, Port: l.Port}
|
||||
key := IngressListenerKeyFromListener(l)
|
||||
snap.IngressGateway.Listeners[key] = l
|
||||
}
|
||||
|
||||
|
@ -123,7 +123,7 @@ func (s *handlerIngressGateway) handleUpdate(ctx context.Context, u cache.Update
|
|||
|
||||
hosts = append(hosts, service.Hosts...)
|
||||
|
||||
id := IngressListenerKey{Protocol: service.Protocol, Port: service.Port}
|
||||
id := IngressListenerKeyFromGWService(*service)
|
||||
upstreamsMap[id] = append(upstreamsMap[id], u)
|
||||
}
|
||||
|
||||
|
@ -169,7 +169,9 @@ func makeUpstream(g *structs.GatewayService) structs.Upstream {
|
|||
}
|
||||
|
||||
func (s *handlerIngressGateway) watchIngressLeafCert(ctx context.Context, snap *ConfigSnapshot) error {
|
||||
if !snap.IngressGateway.TLSSet || !snap.IngressGateway.HostsSet {
|
||||
// Note that we DON'T test for TLS.Enabled because we need a leaf cert for the
|
||||
// gateway even without TLS to use as a client cert.
|
||||
if !snap.IngressGateway.GatewayConfigLoaded || !snap.IngressGateway.HostsSet {
|
||||
return nil
|
||||
}
|
||||
|
||||
|
@ -197,7 +199,7 @@ func (s *handlerIngressGateway) watchIngressLeafCert(ctx context.Context, snap *
|
|||
func (s *handlerIngressGateway) generateIngressDNSSANs(snap *ConfigSnapshot) []string {
|
||||
// Update our leaf cert watch with wildcard entries for our DNS domains as well as any
|
||||
// configured custom hostnames from the service.
|
||||
if !snap.IngressGateway.TLSEnabled {
|
||||
if !snap.IngressGateway.TLSConfig.Enabled {
|
||||
return nil
|
||||
}
|
||||
|
||||
|
|
|
@ -3,11 +3,11 @@ package proxycfg
|
|||
import (
|
||||
"context"
|
||||
"fmt"
|
||||
"github.com/hashicorp/consul/agent/connect"
|
||||
"sort"
|
||||
|
||||
"github.com/mitchellh/copystructure"
|
||||
|
||||
"github.com/hashicorp/consul/agent/connect"
|
||||
"github.com/hashicorp/consul/agent/structs"
|
||||
)
|
||||
|
||||
|
@ -305,9 +305,13 @@ func (c *configSnapshotMeshGateway) IsEmpty() bool {
|
|||
type configSnapshotIngressGateway struct {
|
||||
ConfigSnapshotUpstreams
|
||||
|
||||
// TLSEnabled is whether this gateway's listeners should have TLS configured.
|
||||
TLSEnabled bool
|
||||
TLSSet bool
|
||||
// TLSConfig is the gateway-level TLS configuration. Listener/service level
|
||||
// config is preserved in the Listeners map below.
|
||||
TLSConfig structs.GatewayTLSConfig
|
||||
|
||||
// GatewayConfigLoaded is used to determine if we have received the initial
|
||||
// ingress-gateway config entry yet.
|
||||
GatewayConfigLoaded bool
|
||||
|
||||
// Hosts is the list of extra host entries to add to our leaf cert's DNS SANs.
|
||||
Hosts []string
|
||||
|
@ -346,6 +350,14 @@ func (k *IngressListenerKey) RouteName() string {
|
|||
return fmt.Sprintf("%d", k.Port)
|
||||
}
|
||||
|
||||
func IngressListenerKeyFromGWService(s structs.GatewayService) IngressListenerKey {
|
||||
return IngressListenerKey{Protocol: s.Protocol, Port: s.Port}
|
||||
}
|
||||
|
||||
func IngressListenerKeyFromListener(l structs.IngressListener) IngressListenerKey {
|
||||
return IngressListenerKey{Protocol: l.Protocol, Port: l.Port}
|
||||
}
|
||||
|
||||
// ConfigSnapshot captures all the resulting config needed for a proxy instance.
|
||||
// It is meant to be point-in-time coherent and is used to deliver the current
|
||||
// config state to observers who need it to be pushed in (e.g. XDS server).
|
||||
|
@ -403,7 +415,7 @@ func (s *ConfigSnapshot) Valid() bool {
|
|||
case structs.ServiceKindIngressGateway:
|
||||
return s.Roots != nil &&
|
||||
s.IngressGateway.Leaf != nil &&
|
||||
s.IngressGateway.TLSSet &&
|
||||
s.IngressGateway.GatewayConfigLoaded &&
|
||||
s.IngressGateway.HostsSet
|
||||
default:
|
||||
return false
|
||||
|
|
|
@ -942,8 +942,8 @@ func TestState_WatchesAndUpdates(t *testing.T) {
|
|||
},
|
||||
verifySnapshot: func(t testing.TB, snap *ConfigSnapshot) {
|
||||
require.False(t, snap.Valid(), "gateway without hosts set is not valid")
|
||||
require.True(t, snap.IngressGateway.TLSSet)
|
||||
require.False(t, snap.IngressGateway.TLSEnabled)
|
||||
require.True(t, snap.IngressGateway.GatewayConfigLoaded)
|
||||
require.False(t, snap.IngressGateway.TLSConfig.Enabled)
|
||||
},
|
||||
},
|
||||
{
|
||||
|
@ -1111,8 +1111,8 @@ func TestState_WatchesAndUpdates(t *testing.T) {
|
|||
},
|
||||
verifySnapshot: func(t testing.TB, snap *ConfigSnapshot) {
|
||||
require.True(t, snap.Valid())
|
||||
require.True(t, snap.IngressGateway.TLSSet)
|
||||
require.True(t, snap.IngressGateway.TLSEnabled)
|
||||
require.True(t, snap.IngressGateway.GatewayConfigLoaded)
|
||||
require.True(t, snap.IngressGateway.TLSConfig.Enabled)
|
||||
require.True(t, snap.IngressGateway.HostsSet)
|
||||
require.Len(t, snap.IngressGateway.Hosts, 1)
|
||||
require.Len(t, snap.IngressGateway.Upstreams, 1)
|
||||
|
|
|
@ -1622,7 +1622,16 @@ func TestConfigSnapshotIngress(t testing.T) *ConfigSnapshot {
|
|||
|
||||
func TestConfigSnapshotIngressWithTLSListener(t testing.T) *ConfigSnapshot {
|
||||
snap := testConfigSnapshotIngressGateway(t, true, "tcp", "default")
|
||||
snap.IngressGateway.TLSEnabled = true
|
||||
snap.IngressGateway.TLSConfig.Enabled = true
|
||||
return snap
|
||||
}
|
||||
|
||||
func TestConfigSnapshotIngressWithGatewaySDS(t testing.T) *ConfigSnapshot {
|
||||
snap := testConfigSnapshotIngressGateway(t, true, "tcp", "default")
|
||||
snap.IngressGateway.TLSConfig.SDS = &structs.GatewayTLSSDSConfig{
|
||||
ClusterName: "sds-cluster",
|
||||
CertResource: "cert-resource",
|
||||
}
|
||||
return snap
|
||||
}
|
||||
|
||||
|
|
|
@ -37,13 +37,13 @@ var _ pbsubscribe.StateChangeSubscriptionServer = (*Server)(nil)
|
|||
|
||||
type Backend interface {
|
||||
ResolveTokenAndDefaultMeta(token string, entMeta *structs.EnterpriseMeta, authzContext *acl.AuthorizerContext) (acl.Authorizer, error)
|
||||
Forward(dc string, f func(*grpc.ClientConn) error) (handled bool, err error)
|
||||
Forward(info structs.RPCInfo, f func(*grpc.ClientConn) error) (handled bool, err error)
|
||||
Subscribe(req *stream.SubscribeRequest) (*stream.Subscription, error)
|
||||
}
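With the Backend interface now taking a structs.RPCInfo instead of a bare datacenter string, any request type that implements RPCInfo (the generated pbsubscribe.SubscribeRequest among them) can be routed through the server's shared gRPC forwarding. An illustrative calling pattern, not taken from the commit:

```go
// Illustrative only: the handled/err convention callers of the new Forward
// signature follow. If the request targets another datacenter, the callback
// runs against a gRPC connection to that DC and handled is true; otherwise
// the caller serves the request locally.
func serveOrForward(backend Backend, req *pbsubscribe.SubscribeRequest, serveLocal func() error, remote func(*grpc.ClientConn) error) error {
	handled, err := backend.Forward(req, remote)
	if handled || err != nil {
		return err
	}
	return serveLocal()
}
```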
|
||||
|
||||
func (h *Server) Subscribe(req *pbsubscribe.SubscribeRequest, serverStream pbsubscribe.StateChangeSubscription_SubscribeServer) error {
|
||||
logger := newLoggerForRequest(h.Logger, req)
|
||||
handled, err := h.Backend.Forward(req.Datacenter, forwardToDC(req, serverStream, logger))
|
||||
handled, err := h.Backend.Forward(req, forwardToDC(req, serverStream, logger))
|
||||
if handled || err != nil {
|
||||
return err
|
||||
}
|
||||
|
|
|
@ -292,7 +292,7 @@ func (b testBackend) ResolveTokenAndDefaultMeta(
|
|||
return b.authorizer(token, entMeta), nil
|
||||
}
|
||||
|
||||
func (b testBackend) Forward(_ string, fn func(*gogrpc.ClientConn) error) (handled bool, err error) {
|
||||
func (b testBackend) Forward(_ structs.RPCInfo, fn func(*gogrpc.ClientConn) error) (handled bool, err error) {
|
||||
if b.forwardConn != nil {
|
||||
return true, fn(b.forwardConn)
|
||||
}
|
||||
|
|
|
@ -1242,7 +1242,6 @@ func (m *ACLAuthMethod) UnmarshalJSON(data []byte) (err error) {
|
|||
type ACLReplicationType string
|
||||
|
||||
const (
|
||||
ACLReplicateLegacy ACLReplicationType = "legacy"
|
||||
ACLReplicatePolicies ACLReplicationType = "policies"
|
||||
ACLReplicateRoles ACLReplicationType = "roles"
|
||||
ACLReplicateTokens ACLReplicationType = "tokens"
|
||||
|
@ -1250,8 +1249,6 @@ const (
|
|||
|
||||
func (t ACLReplicationType) SingularNoun() string {
|
||||
switch t {
|
||||
case ACLReplicateLegacy:
|
||||
return "legacy"
|
||||
case ACLReplicatePolicies:
|
||||
return "policy"
|
||||
case ACLReplicateRoles:
|
||||
|
|
|
@ -90,6 +90,7 @@ func (a *ACL) Convert() *ACLToken {
|
|||
}
|
||||
|
||||
// Convert attempts to convert an ACLToken into a legacy, compat-style ACL.
|
||||
// TODO(ACL-Legacy-Compat): remove
|
||||
func (tok *ACLToken) Convert() (*ACL, error) {
|
||||
if tok.Type == "" {
|
||||
return nil, fmt.Errorf("Cannot convert ACLToken into compat token")
|
||||
|
@ -105,21 +106,6 @@ func (tok *ACLToken) Convert() (*ACL, error) {
|
|||
return compat, nil
|
||||
}
|
||||
|
||||
// IsSame checks if one ACL is the same as another, without looking
|
||||
// at the Raft information (that's why we didn't call it IsEqual). This is
|
||||
// useful for seeing if an update would be idempotent for all the functional
|
||||
// parts of the structure.
|
||||
func (a *ACL) IsSame(other *ACL) bool {
|
||||
if a.ID != other.ID ||
|
||||
a.Name != other.Name ||
|
||||
a.Type != other.Type ||
|
||||
a.Rules != other.Rules {
|
||||
return false
|
||||
}
|
||||
|
||||
return true
|
||||
}
|
||||
|
||||
// ACLRequest is used to create, update or delete an ACL
|
||||
type ACLRequest struct {
|
||||
Datacenter string
|
||||
|
|
|
@ -6,53 +6,6 @@ import (
|
|||
"github.com/stretchr/testify/require"
|
||||
)
|
||||
|
||||
func TestStructs_ACL_IsSame(t *testing.T) {
|
||||
acl := &ACL{
|
||||
ID: "guid",
|
||||
Name: "An ACL for testing",
|
||||
Type: "client",
|
||||
Rules: "service \"\" { policy = \"read\" }",
|
||||
}
|
||||
if !acl.IsSame(acl) {
|
||||
t.Fatalf("should be equal to itself")
|
||||
}
|
||||
|
||||
other := &ACL{
|
||||
ID: "guid",
|
||||
Name: "An ACL for testing",
|
||||
Type: "client",
|
||||
Rules: "service \"\" { policy = \"read\" }",
|
||||
RaftIndex: RaftIndex{
|
||||
CreateIndex: 1,
|
||||
ModifyIndex: 2,
|
||||
},
|
||||
}
|
||||
if !acl.IsSame(other) || !other.IsSame(acl) {
|
||||
t.Fatalf("should not care about Raft fields")
|
||||
}
|
||||
|
||||
check := func(twiddle, restore func()) {
|
||||
if !acl.IsSame(other) || !other.IsSame(acl) {
|
||||
t.Fatalf("should be the same")
|
||||
}
|
||||
|
||||
twiddle()
|
||||
if acl.IsSame(other) || other.IsSame(acl) {
|
||||
t.Fatalf("should not be the same")
|
||||
}
|
||||
|
||||
restore()
|
||||
if !acl.IsSame(other) || !other.IsSame(acl) {
|
||||
t.Fatalf("should be the same")
|
||||
}
|
||||
}
|
||||
|
||||
check(func() { other.ID = "nope" }, func() { other.ID = "guid" })
|
||||
check(func() { other.Name = "nope" }, func() { other.Name = "An ACL for testing" })
|
||||
check(func() { other.Type = "management" }, func() { other.Type = "client" })
|
||||
check(func() { other.Rules = "" }, func() { other.Rules = "service \"\" { policy = \"read\" }" })
|
||||
}
|
||||
|
||||
func TestStructs_ACL_Convert(t *testing.T) {
|
||||
|
||||
acl := &ACL{
|
||||
|
|
|
@ -21,7 +21,9 @@ type IngressGatewayConfigEntry struct {
|
|||
// service. This should match the name provided in the service definition.
|
||||
Name string
|
||||
|
||||
// TLS holds the TLS configuration for this gateway.
|
||||
// TLS holds the TLS configuration for this gateway. It would be nicer if it
|
||||
// were a pointer so it could be omitempty when read back in JSON but that
|
||||
// would be a breaking API change now as we currently always return it.
|
||||
TLS GatewayTLSConfig
|
||||
|
||||
// Listeners declares what ports the ingress gateway should listen on, and
|
||||
|
@ -43,6 +45,9 @@ type IngressListener struct {
|
|||
// current supported values are: (tcp | http | http2 | grpc).
|
||||
Protocol string
|
||||
|
||||
// TLS config for this listener.
|
||||
TLS *GatewayTLSConfig `json:",omitempty"`
|
||||
|
||||
// Services declares the set of services to which the listener forwards
|
||||
// traffic.
|
||||
//
|
||||
|
@ -75,6 +80,11 @@ type IngressService struct {
|
|||
// using a "tcp" listener.
|
||||
Hosts []string
|
||||
|
||||
// TLS configuration overrides for this service. At least one entry must exist
|
||||
// in Hosts for this to be set, and the Listener must also have a default Cert loaded
|
||||
// from SDS.
|
||||
TLS *GatewayServiceTLSConfig `json:",omitempty"`
|
||||
|
||||
// Allow HTTP header manipulation to be configured.
|
||||
RequestHeaders *HTTPHeaderModifiers `json:",omitempty" alias:"request_headers"`
|
||||
ResponseHeaders *HTTPHeaderModifiers `json:",omitempty" alias:"response_headers"`
|
||||
|
@ -84,8 +94,24 @@ type IngressService struct {
|
|||
}
|
||||
|
||||
type GatewayTLSConfig struct {
|
||||
// Indicates that TLS should be enabled for this gateway service
|
||||
// Indicates that TLS should be enabled for this gateway or listener
|
||||
Enabled bool
|
||||
|
||||
// SDS allows configuring TLS certificate from an SDS service.
|
||||
SDS *GatewayTLSSDSConfig `json:",omitempty"`
|
||||
}
|
||||
|
||||
type GatewayServiceTLSConfig struct {
|
||||
// Note no Enabled field here since it doesn't make sense to disable TLS on
|
||||
// one host on a TLS-configured listener.
|
||||
|
||||
// SDS allows configuring TLS certificate from an SDS service.
|
||||
SDS *GatewayTLSSDSConfig `json:",omitempty"`
|
||||
}
|
||||
|
||||
type GatewayTLSSDSConfig struct {
|
||||
ClusterName string `json:",omitempty" alias:"cluster_name"`
|
||||
CertResource string `json:",omitempty" alias:"cert_resource"`
|
||||
}
|
||||
|
||||
func (e *IngressGatewayConfigEntry) GetKind() string {
|
||||
|
@ -134,6 +160,77 @@ func (e *IngressGatewayConfigEntry) Normalize() error {
|
|||
return nil
|
||||
}
|
||||
|
||||
// validateServiceSDS validates the SDS config for a specific service on a
|
||||
// specific listener. It checks inherited config properties from listener and
|
||||
// gateway level and ensures they are valid all the way down. If called on
|
||||
// several services some of these checks will be duplicated but that isn't a big
|
||||
// deal and it's significantly easier to reason about and read if this is in one
|
||||
// place rather than threaded through the multi-level loop in Validate with
|
||||
// other checks.
|
||||
func (e *IngressGatewayConfigEntry) validateServiceSDS(lis IngressListener, svc IngressService) error {
|
||||
// First work out if there is valid gateway-level SDS config
|
||||
gwSDSClusterSet := false
|
||||
gwSDSCertSet := false
|
||||
if e.TLS.SDS != nil {
|
||||
// Gateway level SDS config must set ClusterName if it specifies a default
|
||||
// certificate. Just a clustername is OK though if certs are specified
|
||||
// per-listener.
|
||||
if e.TLS.SDS.ClusterName == "" && e.TLS.SDS.CertResource != "" {
|
||||
return fmt.Errorf("TLS.SDS.ClusterName is required if CertResource is set")
|
||||
}
|
||||
// Note we rely on the fact that ClusterName must be non-empty if any SDS
|
||||
// properties are defined at this level (as validated above) in validation
|
||||
// below that uses this variable. If that changes we will need to change the
|
||||
// code below too.
|
||||
gwSDSClusterSet = (e.TLS.SDS.ClusterName != "")
|
||||
gwSDSCertSet = (e.TLS.SDS.CertResource != "")
|
||||
}
|
||||
|
||||
// Validate listener-level SDS config.
|
||||
lisSDSCertSet := false
|
||||
lisSDSClusterSet := false
|
||||
if lis.TLS != nil && lis.TLS.SDS != nil {
|
||||
lisSDSCertSet = (lis.TLS.SDS.CertResource != "")
|
||||
lisSDSClusterSet = (lis.TLS.SDS.ClusterName != "")
|
||||
}
|
||||
|
||||
// If SDS was setup at gw level but without a default CertResource, the
|
||||
// listener MUST set a CertResource.
|
||||
if gwSDSClusterSet && !gwSDSCertSet && !lisSDSCertSet {
|
||||
return fmt.Errorf("TLS.SDS.CertResource is required if ClusterName is set for gateway (listener on port %d)", lis.Port)
|
||||
}
|
||||
|
||||
// If listener set a cluster name then it requires a cert resource too.
|
||||
if lisSDSClusterSet && !lisSDSCertSet {
|
||||
return fmt.Errorf("TLS.SDS.CertResource is required if ClusterName is set for listener (listener on port %d)", lis.Port)
|
||||
}
|
||||
|
||||
// If a listener-level cert is given, we need a cluster from at least one
|
||||
// level.
|
||||
if lisSDSCertSet && !lisSDSClusterSet && !gwSDSClusterSet {
|
||||
return fmt.Errorf("TLS.SDS.ClusterName is required if CertResource is set (listener on port %d)", lis.Port)
|
||||
}
|
||||
|
||||
// Validate service-level SDS config
|
||||
svcSDSSet := (svc.TLS != nil && svc.TLS.SDS != nil && svc.TLS.SDS.CertResource != "")
|
||||
|
||||
// Service SDS is only supported with Host names because we need to bind
|
||||
// specific service certs to one or more SNI hostnames.
|
||||
if svcSDSSet && len(svc.Hosts) < 1 {
|
||||
sid := NewServiceID(svc.Name, &svc.EnterpriseMeta)
|
||||
return fmt.Errorf("A service specifying TLS.SDS.CertResource must have at least one item in Hosts (service %q on listener on port %d)",
|
||||
sid.String(), lis.Port)
|
||||
}
|
||||
// If this service specified a certificate, there must be an SDS cluster set
|
||||
// at one of the three levels.
|
||||
if svcSDSSet && svc.TLS.SDS.ClusterName == "" && !lisSDSClusterSet && !gwSDSClusterSet {
|
||||
sid := NewServiceID(svc.Name, &svc.EnterpriseMeta)
|
||||
return fmt.Errorf("TLS.SDS.ClusterName is required if CertResource is set (service %q on listener on port %d)",
|
||||
sid.String(), lis.Port)
|
||||
}
|
||||
return nil
|
||||
}
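To make the inheritance rules above concrete, here is an illustrative config (not from the commit's tests) that these checks reject: the gateway level names an SDS cluster without a default certificate, and the listener supplies neither, so validation fails with the "CertResource is required if ClusterName is set for gateway" error.

```go
// Illustrative only: rejected because gateway-level SDS sets ClusterName but
// no CertResource, and the listener on port 8443 does not provide one either.
entry := &IngressGatewayConfigEntry{
	Kind: "ingress-gateway",
	Name: "ingress-web",
	TLS: GatewayTLSConfig{
		SDS: &GatewayTLSSDSConfig{ClusterName: "sds-cluster"},
	},
	Listeners: []IngressListener{{
		Port:     8443,
		Protocol: "tcp",
		Services: []IngressService{{Name: "db"}},
	}},
}
err := entry.Validate() // TLS.SDS.CertResource is required if ClusterName is set for gateway (listener on port 8443)
```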
|
||||
|
||||
func (e *IngressGatewayConfigEntry) Validate() error {
|
||||
if err := validateConfigEntryMeta(e.Meta); err != nil {
|
||||
return err
|
||||
|
@ -204,6 +301,11 @@ func (e *IngressGatewayConfigEntry) Validate() error {
|
|||
}
|
||||
serviceNames[sid] = struct{}{}
|
||||
|
||||
// Validate SDS configuration for this service
|
||||
if err := e.validateServiceSDS(listener, s); err != nil {
|
||||
return err
|
||||
}
|
||||
|
||||
for _, h := range s.Hosts {
|
||||
if declaredHosts[h] {
|
||||
return fmt.Errorf("Hosts must be unique within a specific listener (listener on port %d)", listener.Port)
|
||||
|
|
|
@ -437,6 +437,7 @@ func TestIngressGatewayConfigEntry(t *testing.T) {
|
|||
},
|
||||
},
|
||||
},
|
||||
expectUnchanged: true,
|
||||
},
|
||||
"request header manip not allowed for non-http protocol": {
|
||||
entry: &IngressGatewayConfigEntry{
|
||||
|
@ -503,6 +504,377 @@ func TestIngressGatewayConfigEntry(t *testing.T) {
|
|||
// differs between Ent and OSS default/default/web vs web
|
||||
validateErr: "cannot be added multiple times (listener on port 1111)",
|
||||
},
|
||||
"TLS.SDS kitchen sink": {
|
||||
entry: &IngressGatewayConfigEntry{
|
||||
Kind: "ingress-gateway",
|
||||
Name: "ingress-web",
|
||||
TLS: GatewayTLSConfig{
|
||||
SDS: &GatewayTLSSDSConfig{
|
||||
ClusterName: "secret-service1",
|
||||
CertResource: "some-ns/ingress-default",
|
||||
},
|
||||
},
|
||||
Listeners: []IngressListener{
|
||||
{
|
||||
Port: 1111,
|
||||
Protocol: "http",
|
||||
TLS: &GatewayTLSConfig{
|
||||
SDS: &GatewayTLSSDSConfig{
|
||||
ClusterName: "secret-service2",
|
||||
CertResource: "some-ns/ingress-1111",
|
||||
},
|
||||
},
|
||||
Services: []IngressService{
|
||||
{
|
||||
Name: "web",
|
||||
Hosts: []string{"*"},
|
||||
TLS: &GatewayServiceTLSConfig{
|
||||
SDS: &GatewayTLSSDSConfig{
|
||||
ClusterName: "secret-service3",
|
||||
CertResource: "some-ns/web",
|
||||
},
|
||||
},
|
||||
},
|
||||
},
|
||||
},
|
||||
},
|
||||
},
|
||||
},
|
||||
"TLS.SDS gateway-level": {
|
||||
entry: &IngressGatewayConfigEntry{
|
||||
Kind: "ingress-gateway",
|
||||
Name: "ingress-web",
|
||||
TLS: GatewayTLSConfig{
|
||||
SDS: &GatewayTLSSDSConfig{
|
||||
ClusterName: "secret-service1",
|
||||
CertResource: "some-ns/ingress-default",
|
||||
},
|
||||
},
|
||||
Listeners: []IngressListener{
|
||||
{
|
||||
Port: 1111,
|
||||
Protocol: "tcp",
|
||||
Services: []IngressService{
|
||||
{
|
||||
Name: "db",
|
||||
},
|
||||
},
|
||||
},
|
||||
},
|
||||
},
|
||||
expectUnchanged: true,
|
||||
},
|
||||
"TLS.SDS listener-level": {
|
||||
entry: &IngressGatewayConfigEntry{
|
||||
Kind: "ingress-gateway",
|
||||
Name: "ingress-web",
|
||||
Listeners: []IngressListener{
|
||||
{
|
||||
Port: 1111,
|
||||
Protocol: "tcp",
|
||||
TLS: &GatewayTLSConfig{
|
||||
SDS: &GatewayTLSSDSConfig{
|
||||
ClusterName: "secret-service1",
|
||||
CertResource: "some-ns/db1",
|
||||
},
|
||||
},
|
||||
Services: []IngressService{
|
||||
{
|
||||
Name: "db1",
|
||||
},
|
||||
},
|
||||
},
|
||||
{
|
||||
Port: 2222,
|
||||
Protocol: "tcp",
|
||||
TLS: &GatewayTLSConfig{
|
||||
SDS: &GatewayTLSSDSConfig{
|
||||
ClusterName: "secret-service2",
|
||||
CertResource: "some-ns/db2",
|
||||
},
|
||||
},
|
||||
Services: []IngressService{
|
||||
{
|
||||
Name: "db2",
|
||||
},
|
||||
},
|
||||
},
|
||||
},
|
||||
},
|
||||
expectUnchanged: true,
|
||||
},
|
||||
"TLS.SDS gateway-level cluster only": {
|
||||
entry: &IngressGatewayConfigEntry{
|
||||
Kind: "ingress-gateway",
|
||||
Name: "ingress-web",
|
||||
TLS: GatewayTLSConfig{
|
||||
SDS: &GatewayTLSSDSConfig{
|
||||
ClusterName: "secret-service",
|
||||
},
|
||||
},
|
||||
Listeners: []IngressListener{
|
||||
{
|
||||
Port: 1111,
|
||||
Protocol: "tcp",
|
||||
TLS: &GatewayTLSConfig{
|
||||
SDS: &GatewayTLSSDSConfig{
|
||||
CertResource: "some-ns/db1",
|
||||
},
|
||||
},
|
||||
Services: []IngressService{
|
||||
{
|
||||
Name: "db1",
|
||||
},
|
||||
},
|
||||
},
|
||||
{
|
||||
Port: 2222,
|
||||
Protocol: "tcp",
|
||||
TLS: &GatewayTLSConfig{
|
||||
SDS: &GatewayTLSSDSConfig{
|
||||
CertResource: "some-ns/db2",
|
||||
},
|
||||
},
|
||||
Services: []IngressService{
|
||||
{
|
||||
Name: "db2",
|
||||
},
|
||||
},
|
||||
},
|
||||
},
|
||||
},
|
||||
expectUnchanged: true,
|
||||
},
|
||||
"TLS.SDS mixed TLS and non-TLS listeners": {
|
||||
entry: &IngressGatewayConfigEntry{
|
||||
Kind: "ingress-gateway",
|
||||
Name: "ingress-web",
|
||||
// No Gateway level TLS.Enabled or SDS config
|
||||
Listeners: []IngressListener{
|
||||
{
|
||||
Port: 1111,
|
||||
Protocol: "tcp",
|
||||
TLS: &GatewayTLSConfig{
|
||||
SDS: &GatewayTLSSDSConfig{
|
||||
ClusterName: "sds-cluster",
|
||||
CertResource: "some-ns/db1",
|
||||
},
|
||||
},
|
||||
Services: []IngressService{
|
||||
{
|
||||
Name: "db1",
|
||||
},
|
||||
},
|
||||
},
|
||||
{
|
||||
Port: 2222,
|
||||
Protocol: "tcp",
|
||||
// No TLS config
|
||||
Services: []IngressService{
|
||||
{
|
||||
Name: "db2",
|
||||
},
|
||||
},
|
||||
},
|
||||
},
|
||||
},
|
||||
expectUnchanged: true,
|
||||
},
|
||||
"TLS.SDS only service-level mixed": {
|
||||
entry: &IngressGatewayConfigEntry{
|
||||
Kind: "ingress-gateway",
|
||||
Name: "ingress-web",
|
||||
// No Gateway level TLS.Enabled or SDS config
|
||||
Listeners: []IngressListener{
|
||||
{
|
||||
Port: 1111,
|
||||
Protocol: "http",
|
||||
// No TLS config
|
||||
Services: []IngressService{
|
||||
{
|
||||
Name: "web",
|
||||
Hosts: []string{"www.example.com"},
|
||||
TLS: &GatewayServiceTLSConfig{
|
||||
SDS: &GatewayTLSSDSConfig{
|
||||
ClusterName: "sds-cluster",
|
||||
CertResource: "web-cert",
|
||||
},
|
||||
},
|
||||
},
|
||||
{
|
||||
Name: "api",
|
||||
Hosts: []string{"api.example.com"},
|
||||
TLS: &GatewayServiceTLSConfig{
|
||||
SDS: &GatewayTLSSDSConfig{
|
||||
ClusterName: "sds-cluster",
|
||||
CertResource: "api-cert",
|
||||
},
|
||||
},
|
||||
},
|
||||
},
|
||||
},
|
||||
{
|
||||
Port: 2222,
|
||||
Protocol: "http",
|
||||
// No TLS config
|
||||
Services: []IngressService{
|
||||
{
|
||||
Name: "db2",
|
||||
},
|
||||
},
|
||||
},
|
||||
},
|
||||
},
|
||||
expectUnchanged: true,
|
||||
},
|
||||
"TLS.SDS requires cluster if gateway-level cert specified": {
|
||||
entry: &IngressGatewayConfigEntry{
|
||||
Kind: "ingress-gateway",
|
||||
Name: "ingress-web",
|
||||
TLS: GatewayTLSConfig{
|
||||
SDS: &GatewayTLSSDSConfig{
|
||||
CertResource: "foo",
|
||||
},
|
||||
},
|
||||
Listeners: []IngressListener{
|
||||
{
|
||||
Port: 1111,
|
||||
Protocol: "tcp",
|
||||
Services: []IngressService{
|
||||
{
|
||||
Name: "db",
|
||||
},
|
||||
},
|
||||
},
|
||||
},
|
||||
},
|
||||
validateErr: "TLS.SDS.ClusterName is required if CertResource is set",
|
||||
},
|
||||
"TLS.SDS listener requires cluster if there is no gateway-level one": {
|
||||
entry: &IngressGatewayConfigEntry{
|
||||
Kind: "ingress-gateway",
|
||||
Name: "ingress-web",
|
||||
|
||||
Listeners: []IngressListener{
|
||||
{
|
||||
Port: 1111,
|
||||
Protocol: "tcp",
|
||||
TLS: &GatewayTLSConfig{
|
||||
SDS: &GatewayTLSSDSConfig{
|
||||
CertResource: "foo",
|
||||
},
|
||||
},
|
||||
Services: []IngressService{
|
||||
{
|
||||
Name: "db",
|
||||
},
|
||||
},
|
||||
},
|
||||
},
|
||||
},
|
||||
validateErr: "TLS.SDS.ClusterName is required if CertResource is set",
|
||||
},
|
||||
"TLS.SDS listener requires a cert resource if gw ClusterName set": {
|
||||
entry: &IngressGatewayConfigEntry{
|
||||
Kind: "ingress-gateway",
|
||||
Name: "ingress-web",
|
||||
TLS: GatewayTLSConfig{
|
||||
SDS: &GatewayTLSSDSConfig{
|
||||
ClusterName: "foo",
|
||||
},
|
||||
},
|
||||
Listeners: []IngressListener{
|
||||
{
|
||||
Port: 1111,
|
||||
Protocol: "tcp",
|
||||
Services: []IngressService{
|
||||
{
|
||||
Name: "db",
|
||||
},
|
||||
},
|
||||
},
|
||||
},
|
||||
},
|
||||
validateErr: "TLS.SDS.CertResource is required if ClusterName is set for gateway (listener on port 1111)",
|
||||
},
|
||||
"TLS.SDS listener requires a cert resource if listener ClusterName set": {
|
||||
entry: &IngressGatewayConfigEntry{
|
||||
Kind: "ingress-gateway",
|
||||
Name: "ingress-web",
|
||||
|
||||
Listeners: []IngressListener{
|
||||
{
|
||||
Port: 1111,
|
||||
Protocol: "tcp",
|
||||
TLS: &GatewayTLSConfig{
|
||||
SDS: &GatewayTLSSDSConfig{
|
||||
ClusterName: "foo",
|
||||
},
|
||||
},
|
||||
Services: []IngressService{
|
||||
{
|
||||
Name: "db",
|
||||
},
|
||||
},
|
||||
},
|
||||
},
|
||||
},
|
||||
validateErr: "TLS.SDS.CertResource is required if ClusterName is set for listener (listener on port 1111)",
|
||||
},
|
||||
"TLS.SDS at service level is not supported without Hosts set": {
|
||||
entry: &IngressGatewayConfigEntry{
|
||||
Kind: "ingress-gateway",
|
||||
Name: "ingress-web",
|
||||
|
||||
Listeners: []IngressListener{
|
||||
{
|
||||
Port: 1111,
|
||||
Protocol: "http",
|
||||
Services: []IngressService{
|
||||
{
|
||||
Name: "*",
|
||||
TLS: &GatewayServiceTLSConfig{
|
||||
SDS: &GatewayTLSSDSConfig{
|
||||
CertResource: "foo",
|
||||
ClusterName: "sds-cluster",
|
||||
},
|
||||
},
|
||||
},
|
||||
},
|
||||
},
|
||||
},
|
||||
},
|
||||
// Note we don't assert the last part `(service \"*\" on listener on port 1111)`
|
||||
// since the service name is normalized differently on OSS and Ent
|
||||
validateErr: "A service specifying TLS.SDS.CertResource must have at least one item in Hosts",
|
||||
},
|
||||
"TLS.SDS at service level needs a cluster from somewhere": {
|
||||
entry: &IngressGatewayConfigEntry{
|
||||
Kind: "ingress-gateway",
|
||||
Name: "ingress-web",
|
||||
|
||||
Listeners: []IngressListener{
|
||||
{
|
||||
Port: 1111,
|
||||
Protocol: "http",
|
||||
Services: []IngressService{
|
||||
{
|
||||
Name: "foo",
|
||||
Hosts: []string{"foo.example.com"},
|
||||
TLS: &GatewayServiceTLSConfig{
|
||||
SDS: &GatewayTLSSDSConfig{
|
||||
CertResource: "foo",
|
||||
},
|
||||
},
|
||||
},
|
||||
},
|
||||
},
|
||||
},
|
||||
},
|
||||
// Note we don't assert the last part `(service \"foo\" on listener on port 1111)`
|
||||
// since the service name is normalized differently on OSS and Ent
|
||||
validateErr: "TLS.SDS.ClusterName is required if CertResource is set",
|
||||
},
|
||||
}
|
||||
|
||||
testConfigEntryNormalizeAndValidate(t, cases)
|
||||
|
|
|
@ -6,8 +6,10 @@ import (
|
|||
"testing"
|
||||
"time"
|
||||
|
||||
"github.com/google/go-cmp/cmp"
|
||||
"github.com/hashicorp/go-msgpack/codec"
|
||||
"github.com/hashicorp/hcl"
|
||||
"github.com/mitchellh/copystructure"
|
||||
"github.com/stretchr/testify/assert"
|
||||
"github.com/stretchr/testify/require"
|
||||
|
||||
|
@ -2519,8 +2521,9 @@ type configEntryTestcase struct {
|
|||
normalizeErr string
|
||||
validateErr string
|
||||
|
||||
// Only one of either expected or check can be set.
|
||||
expected ConfigEntry
|
||||
// Only one of expected, expectUnchanged or check can be set.
|
||||
expected ConfigEntry
|
||||
expectUnchanged bool
|
||||
// check is called between normalize and validate
|
||||
check func(t *testing.T, entry ConfigEntry)
|
||||
}
|
||||
|
@ -2531,21 +2534,50 @@ func testConfigEntryNormalizeAndValidate(t *testing.T, cases map[string]configEn
|
|||
for name, tc := range cases {
|
||||
tc := tc
|
||||
t.Run(name, func(t *testing.T) {
|
||||
err := tc.entry.Normalize()
|
||||
beforeNormalize, err := copystructure.Copy(tc.entry)
|
||||
require.NoError(t, err)
|
||||
|
||||
err = tc.entry.Normalize()
|
||||
if tc.normalizeErr != "" {
|
||||
testutil.RequireErrorContains(t, err, tc.normalizeErr)
|
||||
return
|
||||
}
|
||||
require.NoError(t, err)
|
||||
|
||||
if tc.expected != nil && tc.check != nil {
|
||||
t.Fatal("cannot set both 'expected' and 'check' test case fields")
|
||||
checkMethods := 0
|
||||
if tc.expected != nil {
|
||||
checkMethods++
|
||||
}
|
||||
if tc.expectUnchanged {
|
||||
checkMethods++
|
||||
}
|
||||
if tc.check != nil {
|
||||
checkMethods++
|
||||
}
|
||||
|
||||
if checkMethods > 1 {
|
||||
t.Fatal("cannot set more than one of 'expected', 'expectUnchanged' and 'check' test case fields")
|
||||
}
|
||||
|
||||
if tc.expected != nil {
|
||||
require.Equal(t, tc.expected, tc.entry)
|
||||
}
|
||||
|
||||
if tc.expectUnchanged {
|
||||
// EnterpriseMeta.Normalize behaves differently in Ent and OSS, which
// causes an exact comparison to fail. It's still useful to assert that
// nothing else changes during Normalize though, so we ignore
// EnterpriseMeta defaults.
|
||||
opts := cmp.Options{
|
||||
cmp.Comparer(func(a, b EnterpriseMeta) bool {
|
||||
return a.IsSame(&b)
|
||||
}),
|
||||
}
|
||||
if diff := cmp.Diff(beforeNormalize, tc.entry, opts); diff != "" {
|
||||
t.Fatalf("expect unchanged after Normalize, got diff:\n%s", diff)
|
||||
}
|
||||
}
|
||||
|
||||
if tc.check != nil {
|
||||
tc.check(t, tc.entry)
|
||||
}
|
||||
|
|
|
@ -74,8 +74,3 @@ func (ixn *Intention) FillPartitionAndNamespace(entMeta *EnterpriseMeta, fillDef
|
|||
ixn.SourcePartition = ""
|
||||
ixn.DestinationPartition = ""
|
||||
}
|
||||
|
||||
func (ixn *Intention) NormalizePartitionFields() {
|
||||
ixn.SourcePartition = ""
|
||||
ixn.DestinationPartition = ""
|
||||
}
|
||||
|
|
|
@ -79,6 +79,10 @@ func (m *EnterpriseMeta) NamespaceOrEmpty() string {
|
|||
return ""
|
||||
}
|
||||
|
||||
func (m *EnterpriseMeta) InDefaultNamespace() bool {
|
||||
return true
|
||||
}
|
||||
|
||||
func (m *EnterpriseMeta) PartitionOrDefault() string {
|
||||
return "default"
|
||||
}
|
||||
|
@ -91,6 +95,10 @@ func (m *EnterpriseMeta) PartitionOrEmpty() string {
|
|||
return ""
|
||||
}
|
||||
|
||||
func (m *EnterpriseMeta) InDefaultPartition() bool {
|
||||
return true
|
||||
}
|
||||
|
||||
// ReplicationEnterpriseMeta stub
|
||||
func ReplicationEnterpriseMeta() *EnterpriseMeta {
|
||||
return &emptyEnterpriseMeta
|
||||
|
|
|
@ -15,6 +15,6 @@ func TestIntention(t testing.T) *Intention {
|
|||
SourceType: IntentionSourceConsul,
|
||||
Meta: map[string]string{},
|
||||
}
|
||||
ixn.NormalizePartitionFields()
|
||||
ixn.FillPartitionAndNamespace(nil, true)
|
||||
return ixn
|
||||
}
|
||||
|
|
|
@ -146,7 +146,7 @@ func (b backend) ResolveTokenAndDefaultMeta(string, *structs.EnterpriseMeta, *ac
|
|||
return acl.AllowAll(), nil
|
||||
}
|
||||
|
||||
func (b backend) Forward(string, func(*grpc.ClientConn) error) (handled bool, err error) {
|
||||
func (b backend) Forward(structs.RPCInfo, func(*grpc.ClientConn) error) (handled bool, err error) {
|
||||
return false, nil
|
||||
}
|
||||
|
||||
|
|
|
@ -252,6 +252,20 @@ func (s *ResourceGenerator) listenersFromSnapshotConnectProxy(cfgSnap *proxycfg.
|
|||
// default config if there is an error so it's safe to continue.
|
||||
s.Logger.Warn("failed to parse", "upstream", u.Identifier(), "error", err)
|
||||
}
|
||||
|
||||
// If escape hatch is present, create a listener from it and move on to the next
|
||||
if cfg.EnvoyListenerJSON != "" {
|
||||
upstreamListener, err := makeListenerFromUserConfig(cfg.EnvoyListenerJSON)
|
||||
if err != nil {
|
||||
s.Logger.Error("failed to parse envoy_listener_json",
|
||||
"upstream", u.Identifier(),
|
||||
"error", err)
|
||||
continue
|
||||
}
|
||||
resources = append(resources, upstreamListener)
|
||||
continue
|
||||
}
|
||||
|
||||
upstreamListener := makeListener(id, u, envoy_core_v3.TrafficDirection_OUTBOUND)
|
||||
|
||||
filterChain, err := s.makeUpstreamFilterChainForDiscoveryChain(
|
||||
|
@ -497,75 +511,47 @@ func (s *ResourceGenerator) listenersFromSnapshotGateway(cfgSnap *proxycfg.Confi
|
|||
return resources, err
|
||||
}
|
||||
|
||||
func (s *ResourceGenerator) makeIngressGatewayListeners(address string, cfgSnap *proxycfg.ConfigSnapshot) ([]proto.Message, error) {
|
||||
var resources []proto.Message
|
||||
func resolveListenerSDSConfig(cfgSnap *proxycfg.ConfigSnapshot, listenerKey proxycfg.IngressListenerKey) (*structs.GatewayTLSSDSConfig, error) {
|
||||
var mergedCfg structs.GatewayTLSSDSConfig
|
||||
|
||||
for listenerKey, upstreams := range cfgSnap.IngressGateway.Upstreams {
|
||||
var tlsContext *envoy_tls_v3.DownstreamTlsContext
|
||||
if cfgSnap.IngressGateway.TLSEnabled {
|
||||
tlsContext = &envoy_tls_v3.DownstreamTlsContext{
|
||||
CommonTlsContext: makeCommonTLSContextFromLeaf(cfgSnap, cfgSnap.Leaf()),
|
||||
RequireClientCertificate: &wrappers.BoolValue{Value: false},
|
||||
}
|
||||
gwSDS := cfgSnap.IngressGateway.TLSConfig.SDS
|
||||
if gwSDS != nil {
|
||||
mergedCfg.ClusterName = gwSDS.ClusterName
|
||||
mergedCfg.CertResource = gwSDS.CertResource
|
||||
}
|
||||
|
||||
listenerCfg, ok := cfgSnap.IngressGateway.Listeners[listenerKey]
|
||||
if !ok {
|
||||
return nil, fmt.Errorf("no listener config found for listener on port %d", listenerKey.Port)
|
||||
}
|
||||
|
||||
if listenerCfg.TLS != nil && listenerCfg.TLS.SDS != nil {
|
||||
if listenerCfg.TLS.SDS.ClusterName != "" {
|
||||
mergedCfg.ClusterName = listenerCfg.TLS.SDS.ClusterName
|
||||
}
|
||||
|
||||
if listenerKey.Protocol == "tcp" {
|
||||
// We rely on the invariant of upstreams slice always having at least 1
|
||||
// member, because this key/value pair is created only when a
|
||||
// GatewayService is returned in the RPC
|
||||
u := upstreams[0]
|
||||
id := u.Identifier()
|
||||
|
||||
chain := cfgSnap.IngressGateway.DiscoveryChain[id]
|
||||
|
||||
var upstreamListener proto.Message
|
||||
upstreamListener, err := s.makeUpstreamListenerForDiscoveryChain(
|
||||
&u,
|
||||
address,
|
||||
chain,
|
||||
cfgSnap,
|
||||
tlsContext,
|
||||
)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
resources = append(resources, upstreamListener)
|
||||
} else {
|
||||
// If multiple upstreams share this port, make a special listener for the protocol.
|
||||
listener := makePortListener(listenerKey.Protocol, address, listenerKey.Port, envoy_core_v3.TrafficDirection_OUTBOUND)
|
||||
opts := listenerFilterOpts{
|
||||
useRDS: true,
|
||||
protocol: listenerKey.Protocol,
|
||||
filterName: listenerKey.RouteName(),
|
||||
routeName: listenerKey.RouteName(),
|
||||
cluster: "",
|
||||
statPrefix: "ingress_upstream_",
|
||||
routePath: "",
|
||||
httpAuthzFilter: nil,
|
||||
}
|
||||
filter, err := makeListenerFilter(opts)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
|
||||
transportSocket, err := makeDownstreamTLSTransportSocket(tlsContext)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
|
||||
listener.FilterChains = []*envoy_listener_v3.FilterChain{
|
||||
{
|
||||
Filters: []*envoy_listener_v3.Filter{
|
||||
filter,
|
||||
},
|
||||
TransportSocket: transportSocket,
|
||||
},
|
||||
}
|
||||
resources = append(resources, listener)
|
||||
if listenerCfg.TLS.SDS.CertResource != "" {
|
||||
mergedCfg.CertResource = listenerCfg.TLS.SDS.CertResource
|
||||
}
|
||||
}
|
||||
|
||||
return resources, nil
|
||||
// Validate: the merged config should either have both fields empty or both
// set. Other cases shouldn't be possible as we validate them at input, but be
// robust to bugs later.
|
||||
switch {
|
||||
case mergedCfg.ClusterName == "" && mergedCfg.CertResource == "":
|
||||
return nil, nil
|
||||
|
||||
case mergedCfg.ClusterName != "" && mergedCfg.CertResource != "":
|
||||
return &mergedCfg, nil
|
||||
|
||||
case mergedCfg.ClusterName == "" && mergedCfg.CertResource != "":
|
||||
return nil, fmt.Errorf("missing SDS cluster name for listener on port %d", listenerKey.Port)
|
||||
|
||||
case mergedCfg.ClusterName != "" && mergedCfg.CertResource == "":
|
||||
return nil, fmt.Errorf("missing SDS cert resource for listener on port %d", listenerKey.Port)
|
||||
}
|
||||
|
||||
return &mergedCfg, nil
|
||||
}
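To illustrate the precedence implemented above (a sketch only; the table-driven TestResolveListenerSDSConfig later in this diff is the authoritative check): listener-level values win field by field, the gateway-level config fills the gaps, and a half-specified merge is rejected.

```go
// Hypothetical inputs for one listener key:
//   gateway SDS : {ClusterName: "sds-cluster", CertResource: "wildcard-cert"}
//   listener SDS: {CertResource: "web-cert"}
// Expected merge from resolveListenerSDSConfig:
//   {ClusterName: "sds-cluster", CertResource: "web-cert"}
// A cert with no cluster at either level (or vice versa) hits the error
// branches in the switch above instead.
sdsCfg, err := resolveListenerSDSConfig(cfgSnap, listenerKey)
if err != nil {
	return nil, err
}
_ = sdsCfg // used by the caller to build the listener's DownstreamTlsContext
```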
|
||||
|
||||
// makeListener returns a listener with name and bind details set. Filters must
|
||||
|
@ -1570,9 +1556,9 @@ func makeTLSInspectorListenerFilter() (*envoy_listener_v3.ListenerFilter, error)
|
|||
return &envoy_listener_v3.ListenerFilter{Name: "envoy.filters.listener.tls_inspector"}, nil
|
||||
}
|
||||
|
||||
func makeSNIFilterChainMatch(sniMatch string) *envoy_listener_v3.FilterChainMatch {
|
||||
func makeSNIFilterChainMatch(sniMatches ...string) *envoy_listener_v3.FilterChainMatch {
|
||||
return &envoy_listener_v3.FilterChainMatch{
|
||||
ServerNames: []string{sniMatch},
|
||||
ServerNames: sniMatches,
|
||||
}
|
||||
}
|
||||
|
||||
|
@ -1749,7 +1735,6 @@ func makeCommonTLSContextFromLeaf(cfgSnap *proxycfg.ConfigSnapshot, leaf *struct
|
|||
return nil
|
||||
}
|
||||
|
||||
// TODO(banks): verify this actually works with Envoy (docs are not clear).
|
||||
rootPEMS := ""
|
||||
for _, root := range cfgSnap.Roots.Roots {
|
||||
rootPEMS += ca.EnsureTrailingNewline(root.RootCert)
|
||||
|
|
|
@ -0,0 +1,243 @@
|
|||
package xds
|
||||
|
||||
import (
|
||||
"fmt"
|
||||
|
||||
envoy_core_v3 "github.com/envoyproxy/go-control-plane/envoy/config/core/v3"
|
||||
envoy_listener_v3 "github.com/envoyproxy/go-control-plane/envoy/config/listener/v3"
|
||||
envoy_tls_v3 "github.com/envoyproxy/go-control-plane/envoy/extensions/transport_sockets/tls/v3"
|
||||
"github.com/golang/protobuf/proto"
|
||||
"github.com/golang/protobuf/ptypes/duration"
|
||||
"github.com/golang/protobuf/ptypes/wrappers"
|
||||
"github.com/hashicorp/consul/agent/proxycfg"
|
||||
"github.com/hashicorp/consul/agent/structs"
|
||||
)
|
||||
|
||||
func (s *ResourceGenerator) makeIngressGatewayListeners(address string, cfgSnap *proxycfg.ConfigSnapshot) ([]proto.Message, error) {
|
||||
var resources []proto.Message
|
||||
|
||||
for listenerKey, upstreams := range cfgSnap.IngressGateway.Upstreams {
|
||||
var tlsContext *envoy_tls_v3.DownstreamTlsContext
|
||||
|
||||
sdsCfg, err := resolveListenerSDSConfig(cfgSnap, listenerKey)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
|
||||
if sdsCfg != nil {
|
||||
// Set up listener TLS from SDS
|
||||
tlsContext = &envoy_tls_v3.DownstreamTlsContext{
|
||||
CommonTlsContext: makeCommonTLSContextFromSDS(*sdsCfg),
|
||||
RequireClientCertificate: &wrappers.BoolValue{Value: false},
|
||||
}
|
||||
} else if cfgSnap.IngressGateway.TLSConfig.Enabled {
|
||||
tlsContext = &envoy_tls_v3.DownstreamTlsContext{
|
||||
CommonTlsContext: makeCommonTLSContextFromLeaf(cfgSnap, cfgSnap.Leaf()),
|
||||
RequireClientCertificate: &wrappers.BoolValue{Value: false},
|
||||
}
|
||||
}
|
||||
|
||||
if listenerKey.Protocol == "tcp" {
|
||||
// We rely on the invariant of upstreams slice always having at least 1
|
||||
// member, because this key/value pair is created only when a
|
||||
// GatewayService is returned in the RPC
|
||||
u := upstreams[0]
|
||||
id := u.Identifier()
|
||||
|
||||
chain := cfgSnap.IngressGateway.DiscoveryChain[id]
|
||||
|
||||
var upstreamListener proto.Message
|
||||
upstreamListener, err := s.makeUpstreamListenerForDiscoveryChain(
|
||||
&u,
|
||||
address,
|
||||
chain,
|
||||
cfgSnap,
|
||||
tlsContext,
|
||||
)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
resources = append(resources, upstreamListener)
|
||||
} else {
|
||||
// If multiple upstreams share this port, make a special listener for the protocol.
|
||||
listener := makePortListener(listenerKey.Protocol, address, listenerKey.Port, envoy_core_v3.TrafficDirection_OUTBOUND)
|
||||
opts := listenerFilterOpts{
|
||||
useRDS: true,
|
||||
protocol: listenerKey.Protocol,
|
||||
filterName: listenerKey.RouteName(),
|
||||
routeName: listenerKey.RouteName(),
|
||||
cluster: "",
|
||||
statPrefix: "ingress_upstream_",
|
||||
routePath: "",
|
||||
httpAuthzFilter: nil,
|
||||
}
|
||||
|
||||
// Generate any filter chains needed for services with custom TLS certs
|
||||
// via SDS.
|
||||
sniFilterChains, err := makeSDSOverrideFilterChains(cfgSnap, listenerKey, opts)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
|
||||
// If there are any sni filter chains, we need a TLS inspector filter!
|
||||
if len(sniFilterChains) > 0 {
|
||||
tlsInspector, err := makeTLSInspectorListenerFilter()
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
listener.ListenerFilters = []*envoy_listener_v3.ListenerFilter{tlsInspector}
|
||||
}
|
||||
|
||||
listener.FilterChains = sniFilterChains
|
||||
|
||||
// See if there are other services that didn't have specific SNI-matching
|
||||
// filter chains. If so add a default filterchain to serve them.
|
||||
if len(sniFilterChains) < len(upstreams) {
|
||||
defaultFilter, err := makeListenerFilter(opts)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
|
||||
transportSocket, err := makeDownstreamTLSTransportSocket(tlsContext)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
listener.FilterChains = append(listener.FilterChains,
|
||||
&envoy_listener_v3.FilterChain{
|
||||
Filters: []*envoy_listener_v3.Filter{
|
||||
defaultFilter,
|
||||
},
|
||||
TransportSocket: transportSocket,
|
||||
})
|
||||
}
|
||||
|
||||
resources = append(resources, listener)
|
||||
}
|
||||
}
|
||||
|
||||
return resources, nil
|
||||
}
|
||||
|
||||
func routeNameForUpstream(l structs.IngressListener, s structs.IngressService) string {
|
||||
key := proxycfg.IngressListenerKeyFromListener(l)
|
||||
|
||||
// If the upstream service doesn't have any TLS overrides then it can just use
|
||||
// the combined filterchain with all the merged routes.
|
||||
if !ingressServiceHasSDSOverrides(s) {
|
||||
return key.RouteName()
|
||||
}
|
||||
|
||||
// Return a specific route for this service as it needs a custom FilterChain
// to serve its custom cert, so we should attach its routes to a separate Route
// too. We need this to be consistent between OSS and Enterprise to avoid xDS
// config golden files conflicting in tests, so we can't use ServiceID.String(),
// which normalizes to include all identifiers in Enterprise.
|
||||
sn := s.ToServiceName()
|
||||
svcIdentifier := sn.Name
|
||||
if !sn.InDefaultPartition() || !sn.InDefaultNamespace() {
|
||||
// Non-default partition/namespace, use a full identifier
|
||||
svcIdentifier = sn.String()
|
||||
}
|
||||
return fmt.Sprintf("%s_%s", key.RouteName(), svcIdentifier)
|
||||
}
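Concretely, a hedged sketch (inside this xds package; it assumes, as the golden files suggest, that IngressListenerKey.RouteName() is derived from the listener port): a service without an SDS override shares the listener's combined route, while one with a cert override gets its own route name.

```go
// Illustrative values only.
lis := structs.IngressListener{Port: 8080, Protocol: "http"}
plain := structs.IngressService{Name: "api"}
custom := structs.IngressService{
	Name:  "web",
	Hosts: []string{"www.example.com"},
	TLS: &structs.GatewayServiceTLSConfig{
		SDS: &structs.GatewayTLSSDSConfig{ClusterName: "sds-cluster", CertResource: "www-cert"},
	},
}
fmt.Println(routeNameForUpstream(lis, plain))  // e.g. "8080": shared route
fmt.Println(routeNameForUpstream(lis, custom)) // e.g. "8080_web": own route and filter chain
```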
|
||||
|
||||
func ingressServiceHasSDSOverrides(s structs.IngressService) bool {
|
||||
return s.TLS != nil &&
|
||||
s.TLS.SDS != nil &&
|
||||
s.TLS.SDS.CertResource != ""
|
||||
}
|
||||
|
||||
// makeSDSOverrideFilterChains generates the extra filter chains needed for
// ingress services that specify custom TLS certs via SDS overrides, since each
// such service needs its own filter chain and routes. The result may be empty;
// the default catch-all chain and route are expected to serve all the other
// services that share the default TLS config.
|
||||
func makeSDSOverrideFilterChains(cfgSnap *proxycfg.ConfigSnapshot,
|
||||
listenerKey proxycfg.IngressListenerKey,
|
||||
filterOpts listenerFilterOpts) ([]*envoy_listener_v3.FilterChain, error) {
|
||||
|
||||
listenerCfg, ok := cfgSnap.IngressGateway.Listeners[listenerKey]
|
||||
if !ok {
|
||||
return nil, fmt.Errorf("no listener config found for listener on port %d", listenerKey.Port)
|
||||
}
|
||||
|
||||
var chains []*envoy_listener_v3.FilterChain
|
||||
|
||||
for _, svc := range listenerCfg.Services {
|
||||
if !ingressServiceHasSDSOverrides(svc) {
|
||||
continue
|
||||
}
|
||||
|
||||
if len(svc.Hosts) < 1 {
|
||||
// Shouldn't be possible with validation but be careful
|
||||
return nil, fmt.Errorf("no hosts specified with SDS certificate (service %q on listener on port %d)",
|
||||
svc.ToServiceName().ToServiceID().String(), listenerKey.Port)
|
||||
}
|
||||
|
||||
// Service has a certificate resource override. Return a new filter chain
|
||||
// with the right TLS cert and a filter that will load only the routes for
|
||||
// this service.
|
||||
routeName := routeNameForUpstream(listenerCfg, svc)
|
||||
filterOpts.filterName = routeName
|
||||
filterOpts.routeName = routeName
|
||||
filter, err := makeListenerFilter(filterOpts)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
|
||||
tlsContext := &envoy_tls_v3.DownstreamTlsContext{
|
||||
CommonTlsContext: makeCommonTLSContextFromSDS(*svc.TLS.SDS),
|
||||
RequireClientCertificate: &wrappers.BoolValue{Value: false},
|
||||
}
|
||||
|
||||
transportSocket, err := makeDownstreamTLSTransportSocket(tlsContext)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
|
||||
chain := &envoy_listener_v3.FilterChain{
|
||||
// Only match traffic for this service's hosts.
|
||||
FilterChainMatch: makeSNIFilterChainMatch(svc.Hosts...),
|
||||
Filters: []*envoy_listener_v3.Filter{
|
||||
filter,
|
||||
},
|
||||
TransportSocket: transportSocket,
|
||||
}
|
||||
|
||||
chains = append(chains, chain)
|
||||
}
|
||||
|
||||
return chains, nil
|
||||
}
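Roughly, each chain produced above looks like the following (an abridged sketch; field values are illustrative), so Envoy's TLS inspector can steer a matching SNI to the service's own cert and route while everything else falls through to the default chain the caller appends.

```go
// One override chain for a service with Hosts = ["s1.example.com"].
chain := &envoy_listener_v3.FilterChain{
	// Only traffic whose SNI matches the service's Hosts lands here.
	FilterChainMatch: &envoy_listener_v3.FilterChainMatch{
		ServerNames: []string{"s1.example.com"},
	},
	// Filters: an HTTP connection manager pointed at this service's own route
	// (see routeNameForUpstream above). TransportSocket: TLS backed by the
	// service's SDS cert. Both elided here for brevity.
}
_ = chain
```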
|
||||
|
||||
func makeCommonTLSContextFromSDS(sdsCfg structs.GatewayTLSSDSConfig) *envoy_tls_v3.CommonTlsContext {
|
||||
return &envoy_tls_v3.CommonTlsContext{
|
||||
TlsParams: &envoy_tls_v3.TlsParameters{},
|
||||
TlsCertificateSdsSecretConfigs: []*envoy_tls_v3.SdsSecretConfig{
|
||||
{
|
||||
Name: sdsCfg.CertResource,
|
||||
SdsConfig: &envoy_core_v3.ConfigSource{
|
||||
ConfigSourceSpecifier: &envoy_core_v3.ConfigSource_ApiConfigSource{
|
||||
ApiConfigSource: &envoy_core_v3.ApiConfigSource{
|
||||
ApiType: envoy_core_v3.ApiConfigSource_GRPC,
|
||||
TransportApiVersion: envoy_core_v3.ApiVersion_V3,
|
||||
// Note ClusterNames can't be set here - that's only for the REST API
// type; for gRPC we need a full GrpcServices config instead.
|
||||
GrpcServices: []*envoy_core_v3.GrpcService{
|
||||
{
|
||||
TargetSpecifier: &envoy_core_v3.GrpcService_EnvoyGrpc_{
|
||||
EnvoyGrpc: &envoy_core_v3.GrpcService_EnvoyGrpc{
|
||||
ClusterName: sdsCfg.ClusterName,
|
||||
},
|
||||
},
|
||||
Timeout: &duration.Duration{Seconds: 5},
|
||||
},
|
||||
},
|
||||
},
|
||||
},
|
||||
ResourceApiVersion: envoy_core_v3.ApiVersion_V3,
|
||||
},
|
||||
},
|
||||
},
|
||||
}
|
||||
}
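As a quick check on the shape this helper returns (a sketch inside this package; the getters are the generated go-control-plane accessors already imported above):

```go
ctx := makeCommonTLSContextFromSDS(structs.GatewayTLSSDSConfig{
	ClusterName:  "sds-cluster",
	CertResource: "web-cert",
})
sds := ctx.TlsCertificateSdsSecretConfigs[0]
// The secret name Envoy requests over SDS, and the xDS cluster it dials to
// fetch it. The cluster itself is expected to be defined in the Envoy
// bootstrap; Consul only references it by name.
fmt.Println(sds.Name) // "web-cert"
fmt.Println(sds.SdsConfig.GetApiConfigSource().GetGrpcServices()[0].GetEnvoyGrpc().ClusterName) // "sds-cluster"
```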
|
|
@ -159,20 +159,30 @@ func TestListenersFromSnapshot(t *testing.T) {
|
|||
name: "custom-upstream",
|
||||
create: proxycfg.TestConfigSnapshot,
|
||||
setup: func(snap *proxycfg.ConfigSnapshot) {
|
||||
snap.Proxy.Upstreams[0].Config["envoy_listener_json"] =
|
||||
customListenerJSON(t, customListenerJSONOptions{
|
||||
Name: "custom-upstream",
|
||||
})
|
||||
for i := range snap.Proxy.Upstreams {
|
||||
if snap.Proxy.Upstreams[i].Config == nil {
|
||||
snap.Proxy.Upstreams[i].Config = map[string]interface{}{}
|
||||
}
|
||||
snap.Proxy.Upstreams[i].Config["envoy_listener_json"] =
|
||||
customListenerJSON(t, customListenerJSONOptions{
|
||||
Name: snap.Proxy.Upstreams[i].Identifier() + ":custom-upstream",
|
||||
})
|
||||
}
|
||||
},
|
||||
},
|
||||
{
|
||||
name: "custom-upstream-ignored-with-disco-chain",
|
||||
create: proxycfg.TestConfigSnapshotDiscoveryChainWithFailover,
|
||||
setup: func(snap *proxycfg.ConfigSnapshot) {
|
||||
snap.Proxy.Upstreams[0].Config["envoy_listener_json"] =
|
||||
customListenerJSON(t, customListenerJSONOptions{
|
||||
Name: "custom-upstream",
|
||||
})
|
||||
for i := range snap.Proxy.Upstreams {
|
||||
if snap.Proxy.Upstreams[i].Config == nil {
|
||||
snap.Proxy.Upstreams[i].Config = map[string]interface{}{}
|
||||
}
|
||||
snap.Proxy.Upstreams[i].Config["envoy_listener_json"] =
|
||||
customListenerJSON(t, customListenerJSONOptions{
|
||||
Name: snap.Proxy.Upstreams[i].Identifier() + ":custom-upstream",
|
||||
})
|
||||
}
|
||||
},
|
||||
},
|
||||
{
|
||||
|
@ -475,6 +485,10 @@ func TestListenersFromSnapshot(t *testing.T) {
|
|||
},
|
||||
},
|
||||
}
|
||||
snap.IngressGateway.Listeners = map[proxycfg.IngressListenerKey]structs.IngressListener{
|
||||
{Protocol: "http", Port: 8080}: {},
|
||||
{Protocol: "http", Port: 443}: {},
|
||||
}
|
||||
},
|
||||
},
|
||||
{
|
||||
|
@ -489,6 +503,251 @@ func TestListenersFromSnapshot(t *testing.T) {
|
|||
create: proxycfg.TestConfigSnapshotIngressWithTLSListener,
|
||||
setup: nil,
|
||||
},
|
||||
{
|
||||
name: "ingress-with-sds-listener-gw-level",
|
||||
create: proxycfg.TestConfigSnapshotIngressWithGatewaySDS,
|
||||
setup: nil,
|
||||
},
|
||||
{
|
||||
name: "ingress-with-sds-listener-listener-level",
|
||||
create: proxycfg.TestConfigSnapshotIngressWithGatewaySDS,
|
||||
setup: func(snap *proxycfg.ConfigSnapshot) {
|
||||
snap.IngressGateway.Upstreams = map[proxycfg.IngressListenerKey]structs.Upstreams{
|
||||
{Protocol: "tcp", Port: 8080}: {
|
||||
{
|
||||
DestinationName: "foo",
|
||||
LocalBindPort: 8080,
|
||||
},
|
||||
},
|
||||
}
|
||||
snap.IngressGateway.Listeners = map[proxycfg.IngressListenerKey]structs.IngressListener{
|
||||
{Protocol: "tcp", Port: 8080}: {
|
||||
Port: 8080,
|
||||
TLS: &structs.GatewayTLSConfig{
|
||||
SDS: &structs.GatewayTLSSDSConfig{
|
||||
// Override the cert, fall back to the cluster at gw level. We
|
||||
// don't test every possible valid combination here since we
|
||||
// already did that in TestResolveListenerSDSConfig. This is
|
||||
// just an extra check to make sure that data is plumbed through
|
||||
// correctly.
|
||||
CertResource: "listener-cert",
|
||||
},
|
||||
},
|
||||
},
|
||||
}
|
||||
},
|
||||
},
|
||||
{
|
||||
name: "ingress-with-sds-listener-gw-level-http",
|
||||
create: proxycfg.TestConfigSnapshotIngressWithGatewaySDS,
|
||||
setup: func(snap *proxycfg.ConfigSnapshot) {
|
||||
snap.IngressGateway.Upstreams = map[proxycfg.IngressListenerKey]structs.Upstreams{
|
||||
{Protocol: "http", Port: 8080}: {
|
||||
{
|
||||
DestinationName: "foo",
|
||||
LocalBindPort: 8080,
|
||||
},
|
||||
},
|
||||
}
|
||||
snap.IngressGateway.Listeners = map[proxycfg.IngressListenerKey]structs.IngressListener{
|
||||
{Protocol: "http", Port: 8080}: {
|
||||
Port: 8080,
|
||||
TLS: &structs.GatewayTLSConfig{
|
||||
SDS: &structs.GatewayTLSSDSConfig{
|
||||
// Override the cert, fall back to the cluster at gw level. We
|
||||
// don't test every possible valid combination here since we
|
||||
// already did that in TestResolveListenerSDSConfig. This is
|
||||
// just an extra check to make sure that data is plumbed through
|
||||
// correctly.
|
||||
CertResource: "listener-cert",
|
||||
},
|
||||
},
|
||||
},
|
||||
}
|
||||
},
|
||||
},
|
||||
{
|
||||
name: "ingress-with-sds-listener-gw-level-mixed-tls",
|
||||
create: proxycfg.TestConfigSnapshotIngressWithGatewaySDS,
|
||||
setup: func(snap *proxycfg.ConfigSnapshot) {
|
||||
// Disable GW-level defaults so we can mix TLS and non-TLS listeners
|
||||
snap.IngressGateway.TLSConfig.SDS = nil
|
||||
|
||||
// Setup two TCP listeners, one with and one without SDS config
|
||||
snap.IngressGateway.Upstreams = map[proxycfg.IngressListenerKey]structs.Upstreams{
|
||||
{Protocol: "tcp", Port: 8080}: {
|
||||
{
|
||||
DestinationName: "secure",
|
||||
LocalBindPort: 8080,
|
||||
},
|
||||
},
|
||||
{Protocol: "tcp", Port: 9090}: {
|
||||
{
|
||||
DestinationName: "insecure",
|
||||
LocalBindPort: 9090,
|
||||
},
|
||||
},
|
||||
}
|
||||
snap.IngressGateway.Listeners = map[proxycfg.IngressListenerKey]structs.IngressListener{
|
||||
{Protocol: "tcp", Port: 8080}: {
|
||||
Port: 8080,
|
||||
TLS: &structs.GatewayTLSConfig{
|
||||
SDS: &structs.GatewayTLSSDSConfig{
|
||||
ClusterName: "listener-sds-cluster",
|
||||
CertResource: "listener-cert",
|
||||
},
|
||||
},
|
||||
},
|
||||
{Protocol: "tcp", Port: 9090}: {
|
||||
Port: 9090,
|
||||
TLS: nil,
|
||||
},
|
||||
}
|
||||
},
|
||||
},
|
||||
{
|
||||
name: "ingress-with-sds-service-level",
|
||||
create: proxycfg.TestConfigSnapshotIngressWithGatewaySDS,
|
||||
setup: func(snap *proxycfg.ConfigSnapshot) {
|
||||
// Disable GW-level defaults so we can test only service-level
|
||||
snap.IngressGateway.TLSConfig.SDS = nil
|
||||
|
||||
// Set up http listeners, one with multiple services with SDS
|
||||
snap.IngressGateway.Upstreams = map[proxycfg.IngressListenerKey]structs.Upstreams{
|
||||
{Protocol: "http", Port: 8080}: {
|
||||
{
|
||||
DestinationName: "s1",
|
||||
LocalBindPort: 8080,
|
||||
},
|
||||
{
|
||||
DestinationName: "s2",
|
||||
LocalBindPort: 8080,
|
||||
},
|
||||
},
|
||||
}
|
||||
snap.IngressGateway.Listeners = map[proxycfg.IngressListenerKey]structs.IngressListener{
|
||||
{Protocol: "http", Port: 8080}: {
|
||||
Port: 8080,
|
||||
Services: []structs.IngressService{
|
||||
{
|
||||
Name: "s1",
|
||||
Hosts: []string{"s1.example.com"},
|
||||
TLS: &structs.GatewayServiceTLSConfig{
|
||||
SDS: &structs.GatewayTLSSDSConfig{
|
||||
ClusterName: "sds-cluster-1",
|
||||
CertResource: "s1.example.com-cert",
|
||||
},
|
||||
},
|
||||
},
|
||||
{
|
||||
Name: "s2",
|
||||
Hosts: []string{"s2.example.com"},
|
||||
TLS: &structs.GatewayServiceTLSConfig{
|
||||
SDS: &structs.GatewayTLSSDSConfig{
|
||||
ClusterName: "sds-cluster-2",
|
||||
CertResource: "s2.example.com-cert",
|
||||
},
|
||||
},
|
||||
},
|
||||
},
|
||||
TLS: nil, // no listener-level SDS config
|
||||
},
|
||||
}
|
||||
},
|
||||
},
|
||||
{
|
||||
name: "ingress-with-sds-listener+service-level",
|
||||
create: proxycfg.TestConfigSnapshotIngressWithGatewaySDS,
|
||||
setup: func(snap *proxycfg.ConfigSnapshot) {
|
||||
// Disable GW-level defaults so we can test only service-level
|
||||
snap.IngressGateway.TLSConfig.SDS = nil
|
||||
|
||||
// Set up http listeners, one with multiple services with SDS
|
||||
snap.IngressGateway.Upstreams = map[proxycfg.IngressListenerKey]structs.Upstreams{
|
||||
{Protocol: "http", Port: 8080}: {
|
||||
{
|
||||
DestinationName: "s1",
|
||||
LocalBindPort: 8080,
|
||||
},
|
||||
{
|
||||
DestinationName: "s2",
|
||||
LocalBindPort: 8080,
|
||||
},
|
||||
},
|
||||
}
|
||||
snap.IngressGateway.Listeners = map[proxycfg.IngressListenerKey]structs.IngressListener{
|
||||
{Protocol: "http", Port: 8080}: {
|
||||
Port: 8080,
|
||||
Services: []structs.IngressService{
|
||||
{
|
||||
Name: "s1",
|
||||
Hosts: []string{"s1.example.com"},
|
||||
TLS: &structs.GatewayServiceTLSConfig{
|
||||
SDS: &structs.GatewayTLSSDSConfig{
|
||||
ClusterName: "sds-cluster-1",
|
||||
CertResource: "s1.example.com-cert",
|
||||
},
|
||||
},
|
||||
},
|
||||
{
|
||||
Name: "s2",
|
||||
// s2 uses the default listener cert
|
||||
},
|
||||
},
|
||||
TLS: &structs.GatewayTLSConfig{
|
||||
SDS: &structs.GatewayTLSSDSConfig{
|
||||
ClusterName: "sds-cluster-2",
|
||||
CertResource: "*.example.com-cert",
|
||||
},
|
||||
},
|
||||
},
|
||||
}
|
||||
},
|
||||
},
|
||||
{
|
||||
name: "ingress-with-sds-service-level-mixed-no-tls",
|
||||
create: proxycfg.TestConfigSnapshotIngressWithGatewaySDS,
|
||||
setup: func(snap *proxycfg.ConfigSnapshot) {
|
||||
// Disable GW-level defaults so we can test only service-level
|
||||
snap.IngressGateway.TLSConfig.SDS = nil
|
||||
|
||||
// Set up http listeners, one with multiple services with SDS
|
||||
snap.IngressGateway.Upstreams = map[proxycfg.IngressListenerKey]structs.Upstreams{
|
||||
{Protocol: "http", Port: 8080}: {
|
||||
{
|
||||
DestinationName: "s1",
|
||||
LocalBindPort: 8080,
|
||||
},
|
||||
{
|
||||
DestinationName: "s2",
|
||||
LocalBindPort: 8080,
|
||||
},
|
||||
},
|
||||
}
|
||||
snap.IngressGateway.Listeners = map[proxycfg.IngressListenerKey]structs.IngressListener{
|
||||
{Protocol: "http", Port: 8080}: {
|
||||
Port: 8080,
|
||||
Services: []structs.IngressService{
|
||||
{
|
||||
Name: "s1",
|
||||
Hosts: []string{"s1.example.com"},
|
||||
TLS: &structs.GatewayServiceTLSConfig{
|
||||
SDS: &structs.GatewayTLSSDSConfig{
|
||||
ClusterName: "sds-cluster-1",
|
||||
CertResource: "s1.example.com-cert",
|
||||
},
|
||||
},
|
||||
},
|
||||
{
|
||||
Name: "s2",
|
||||
// s2 has no SDS config so should be non-TLS
|
||||
},
|
||||
},
|
||||
TLS: nil, // No listener level TLS setup either
|
||||
},
|
||||
}
|
||||
},
|
||||
},
|
||||
{
|
||||
name: "transparent-proxy",
|
||||
create: proxycfg.TestConfigSnapshot,
|
||||
|
@ -696,6 +955,17 @@ func TestListenersFromSnapshot(t *testing.T) {
|
|||
|
||||
gName += ".v2compat"
|
||||
|
||||
// It's easy to miss a new type that encodes a version from just
// looking at the golden files, so let's make it an error here. If
// there are ever false positives we can maybe include an allow list
// here, as it seems safer to assume something was missed than to
// assume we'll notice the golden file being wrong. Note the first
// one matches both resourceApiVersion and transportApiVersion. I
// left it as a suffix in case there are other field names that
// follow that convention now or in the future.
|
||||
require.NotContains(t, gotJSON, `ApiVersion": "V3"`)
|
||||
require.NotContains(t, gotJSON, `type.googleapis.com/envoy.api.v3`)
|
||||
|
||||
require.JSONEq(t, goldenEnvoy(t, filepath.Join("listeners", gName), envoyVersion, latestEnvoyVersion_v2, gotJSON), gotJSON)
|
||||
})
|
||||
})
|
||||
|
@ -834,3 +1104,158 @@ var _ ConfigFetcher = (configFetcherFunc)(nil)
|
|||
func (f configFetcherFunc) AdvertiseAddrLAN() string {
|
||||
return f()
|
||||
}
|
||||
|
||||
func TestResolveListenerSDSConfig(t *testing.T) {
|
||||
type testCase struct {
|
||||
name string
|
||||
gwSDS *structs.GatewayTLSSDSConfig
|
||||
lisSDS *structs.GatewayTLSSDSConfig
|
||||
want *structs.GatewayTLSSDSConfig
|
||||
wantErr string
|
||||
}
|
||||
|
||||
run := func(tc testCase) {
|
||||
// fake a snapshot with just the data we care about
|
||||
snap := proxycfg.TestConfigSnapshotIngressWithGatewaySDS(t)
|
||||
// Override TLS configs
|
||||
snap.IngressGateway.TLSConfig.SDS = tc.gwSDS
|
||||
var key proxycfg.IngressListenerKey
|
||||
for k, lisCfg := range snap.IngressGateway.Listeners {
|
||||
if tc.lisSDS == nil {
|
||||
lisCfg.TLS = nil
|
||||
} else {
|
||||
lisCfg.TLS = &structs.GatewayTLSConfig{
|
||||
SDS: tc.lisSDS,
|
||||
}
|
||||
}
|
||||
// Override listener cfg in map
|
||||
snap.IngressGateway.Listeners[k] = lisCfg
|
||||
// Save the last key; it doesn't matter which one, as we set the same listener
// config for all.
|
||||
key = k
|
||||
}
|
||||
|
||||
got, err := resolveListenerSDSConfig(snap, key)
|
||||
if tc.wantErr != "" {
|
||||
require.Error(t, err)
|
||||
require.Contains(t, err.Error(), tc.wantErr)
|
||||
} else {
|
||||
require.NoError(t, err)
|
||||
require.Equal(t, tc.want, got)
|
||||
}
|
||||
}
|
||||
|
||||
cases := []testCase{
|
||||
{
|
||||
name: "no SDS config",
|
||||
gwSDS: nil,
|
||||
lisSDS: nil,
|
||||
want: nil,
|
||||
},
|
||||
{
|
||||
name: "all cluster-level SDS config",
|
||||
gwSDS: &structs.GatewayTLSSDSConfig{
|
||||
ClusterName: "cluster",
|
||||
CertResource: "cert",
|
||||
},
|
||||
lisSDS: nil,
|
||||
want: &structs.GatewayTLSSDSConfig{
|
||||
ClusterName: "cluster",
|
||||
CertResource: "cert",
|
||||
},
|
||||
},
|
||||
{
|
||||
name: "all listener-level SDS config",
|
||||
gwSDS: nil,
|
||||
lisSDS: &structs.GatewayTLSSDSConfig{
|
||||
ClusterName: "cluster",
|
||||
CertResource: "cert",
|
||||
},
|
||||
want: &structs.GatewayTLSSDSConfig{
|
||||
ClusterName: "cluster",
|
||||
CertResource: "cert",
|
||||
},
|
||||
},
|
||||
{
|
||||
name: "mixed level SDS config",
|
||||
gwSDS: &structs.GatewayTLSSDSConfig{
|
||||
ClusterName: "cluster",
|
||||
},
|
||||
lisSDS: &structs.GatewayTLSSDSConfig{
|
||||
CertResource: "cert",
|
||||
},
|
||||
want: &structs.GatewayTLSSDSConfig{
|
||||
ClusterName: "cluster",
|
||||
CertResource: "cert",
|
||||
},
|
||||
},
|
||||
{
|
||||
name: "override cert",
|
||||
gwSDS: &structs.GatewayTLSSDSConfig{
|
||||
ClusterName: "cluster",
|
||||
CertResource: "gw-cert",
|
||||
},
|
||||
lisSDS: &structs.GatewayTLSSDSConfig{
|
||||
CertResource: "lis-cert",
|
||||
},
|
||||
want: &structs.GatewayTLSSDSConfig{
|
||||
ClusterName: "cluster",
|
||||
CertResource: "lis-cert",
|
||||
},
|
||||
},
|
||||
{
|
||||
name: "override both",
|
||||
gwSDS: &structs.GatewayTLSSDSConfig{
|
||||
ClusterName: "gw-cluster",
|
||||
CertResource: "gw-cert",
|
||||
},
|
||||
lisSDS: &structs.GatewayTLSSDSConfig{
|
||||
ClusterName: "lis-cluster",
|
||||
CertResource: "lis-cert",
|
||||
},
|
||||
want: &structs.GatewayTLSSDSConfig{
|
||||
ClusterName: "lis-cluster",
|
||||
CertResource: "lis-cert",
|
||||
},
|
||||
},
|
||||
{
|
||||
name: "missing cluster listener",
|
||||
gwSDS: nil,
|
||||
lisSDS: &structs.GatewayTLSSDSConfig{
|
||||
CertResource: "lis-cert",
|
||||
},
|
||||
wantErr: "missing SDS cluster name",
|
||||
},
|
||||
{
|
||||
name: "missing cert listener",
|
||||
gwSDS: nil,
|
||||
lisSDS: &structs.GatewayTLSSDSConfig{
|
||||
ClusterName: "cluster",
|
||||
},
|
||||
wantErr: "missing SDS cert resource",
|
||||
},
|
||||
{
|
||||
name: "missing cluster gw",
|
||||
gwSDS: &structs.GatewayTLSSDSConfig{
|
||||
CertResource: "lis-cert",
|
||||
},
|
||||
lisSDS: nil,
|
||||
wantErr: "missing SDS cluster name",
|
||||
},
|
||||
{
|
||||
name: "missing cert gw",
|
||||
gwSDS: &structs.GatewayTLSSDSConfig{
|
||||
ClusterName: "cluster",
|
||||
},
|
||||
lisSDS: nil,
|
||||
wantErr: "missing SDS cert resource",
|
||||
},
|
||||
}
|
||||
|
||||
for _, tc := range cases {
|
||||
t.Run(tc.name, func(t *testing.T) {
|
||||
run(tc)
|
||||
})
|
||||
}
|
||||
|
||||
}
|
||||
|
|
|
@ -177,13 +177,17 @@ func (s *ResourceGenerator) routesForIngressGateway(
|
|||
continue
|
||||
}
|
||||
|
||||
upstreamRoute := &envoy_route_v3.RouteConfiguration{
|
||||
// Depending on their TLS config, upstreams are either attached to the
|
||||
// default route or have their own routes. We'll add any upstreams that
|
||||
// don't have custom filter chains and routes to this.
|
||||
defaultRoute := &envoy_route_v3.RouteConfiguration{
|
||||
Name: listenerKey.RouteName(),
|
||||
// ValidateClusters defaults to true when defined statically and false
|
||||
// when done via RDS. Re-set the reasonable value of true to prevent
|
||||
// null-routing traffic.
|
||||
ValidateClusters: makeBoolValue(true),
|
||||
}
|
||||
|
||||
for _, u := range upstreams {
|
||||
upstreamID := u.Identifier()
|
||||
chain := chains[upstreamID]
|
||||
|
@ -197,45 +201,42 @@ func (s *ResourceGenerator) routesForIngressGateway(
|
|||
return nil, err
|
||||
}
|
||||
|
||||
// See if we need to configure any special settings on this route config
|
||||
if lCfg, ok := listeners[listenerKey]; ok {
|
||||
if is := findIngressServiceMatchingUpstream(lCfg, u); is != nil {
|
||||
// Set up any header manipulation we need
|
||||
if is.RequestHeaders != nil {
|
||||
virtualHost.RequestHeadersToAdd = append(
|
||||
virtualHost.RequestHeadersToAdd,
|
||||
makeHeadersValueOptions(is.RequestHeaders.Add, true)...,
|
||||
)
|
||||
virtualHost.RequestHeadersToAdd = append(
|
||||
virtualHost.RequestHeadersToAdd,
|
||||
makeHeadersValueOptions(is.RequestHeaders.Set, false)...,
|
||||
)
|
||||
virtualHost.RequestHeadersToRemove = append(
|
||||
virtualHost.RequestHeadersToRemove,
|
||||
is.RequestHeaders.Remove...,
|
||||
)
|
||||
}
|
||||
if is.ResponseHeaders != nil {
|
||||
virtualHost.ResponseHeadersToAdd = append(
|
||||
virtualHost.ResponseHeadersToAdd,
|
||||
makeHeadersValueOptions(is.ResponseHeaders.Add, true)...,
|
||||
)
|
||||
virtualHost.ResponseHeadersToAdd = append(
|
||||
virtualHost.ResponseHeadersToAdd,
|
||||
makeHeadersValueOptions(is.ResponseHeaders.Set, false)...,
|
||||
)
|
||||
virtualHost.ResponseHeadersToRemove = append(
|
||||
virtualHost.ResponseHeadersToRemove,
|
||||
is.ResponseHeaders.Remove...,
|
||||
)
|
||||
}
|
||||
}
|
||||
// Lookup listener and service config details from ingress gateway
|
||||
// definition.
|
||||
lCfg, ok := listeners[listenerKey]
|
||||
if !ok {
|
||||
return nil, fmt.Errorf("missing ingress listener config (listener on port %d)", listenerKey.Port)
|
||||
}
|
||||
svc := findIngressServiceMatchingUpstream(lCfg, u)
|
||||
if svc == nil {
|
||||
return nil, fmt.Errorf("missing service in listener config (service %q listener on port %d)",
|
||||
u.DestinationID(), listenerKey.Port)
|
||||
}
|
||||
|
||||
upstreamRoute.VirtualHosts = append(upstreamRoute.VirtualHosts, virtualHost)
|
||||
if err := injectHeaderManipToVirtualHost(svc, virtualHost); err != nil {
|
||||
return nil, err
|
||||
}
|
||||
|
||||
// See if this upstream has its own route/filter chain
|
||||
svcRouteName := routeNameForUpstream(lCfg, *svc)
|
||||
|
||||
// If the routeName is the same as the default one, merge the virtual host
|
||||
// to the default route
|
||||
if svcRouteName == defaultRoute.Name {
|
||||
defaultRoute.VirtualHosts = append(defaultRoute.VirtualHosts, virtualHost)
|
||||
} else {
|
||||
svcRoute := &envoy_route_v3.RouteConfiguration{
|
||||
Name: svcRouteName,
|
||||
ValidateClusters: makeBoolValue(true),
|
||||
VirtualHosts: []*envoy_route_v3.VirtualHost{virtualHost},
|
||||
}
|
||||
result = append(result, svcRoute)
|
||||
}
|
||||
}
|
||||
|
||||
result = append(result, upstreamRoute)
|
||||
if len(defaultRoute.VirtualHosts) > 0 {
|
||||
result = append(result, defaultRoute)
|
||||
}
|
||||
}
|
||||
|
||||
return result, nil
|
||||
|
@ -262,13 +263,23 @@ func findIngressServiceMatchingUpstream(l structs.IngressListener, u structs.Ups
|
|||
// wasn't checked as it didn't matter. Assume there is only one now
|
||||
// though!
|
||||
wantSID := u.DestinationID()
|
||||
var foundSameNSWildcard *structs.IngressService
|
||||
for _, s := range l.Services {
|
||||
sid := structs.NewServiceID(s.Name, &s.EnterpriseMeta)
|
||||
if wantSID.Matches(sid) {
|
||||
return &s
|
||||
}
|
||||
if s.Name == structs.WildcardSpecifier &&
|
||||
s.NamespaceOrDefault() == wantSID.NamespaceOrDefault() &&
|
||||
s.PartitionOrDefault() == wantSID.PartitionOrDefault() {
|
||||
// Make a copy so we don't take a reference to the loop variable
|
||||
found := s
|
||||
foundSameNSWildcard = &found
|
||||
}
|
||||
}
|
||||
return nil
|
||||
// Didn't find an exact match. Return the wildcard from the same namespace and
// partition if we found one.
|
||||
return foundSameNSWildcard
|
||||
}
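A hedged example of the fallback above (names are illustrative; assumes OSS default enterprise meta): an upstream resolves to its exact IngressService entry when one exists, otherwise to a `*` entry in the same namespace and partition.

```go
lis := structs.IngressListener{
	Port:     8080,
	Protocol: "http",
	Services: []structs.IngressService{
		{Name: "web"},
		{Name: "*"},
	},
}
web := structs.Upstream{DestinationName: "web"}
db := structs.Upstream{DestinationName: "db"}
fmt.Println(findIngressServiceMatchingUpstream(lis, web).Name) // "web": exact match wins
fmt.Println(findIngressServiceMatchingUpstream(lis, db).Name)  // "*": wildcard fallback
```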
|
||||
|
||||
func generateUpstreamIngressDomains(listenerKey proxycfg.IngressListenerKey, u structs.Upstream) []string {
|
||||
|
@ -753,6 +764,38 @@ func injectHeaderManipToRoute(dest *structs.ServiceRouteDestination, r *envoy_ro
|
|||
return nil
|
||||
}
|
||||
|
||||
func injectHeaderManipToVirtualHost(dest *structs.IngressService, vh *envoy_route_v3.VirtualHost) error {
|
||||
if !dest.RequestHeaders.IsZero() {
|
||||
vh.RequestHeadersToAdd = append(
|
||||
vh.RequestHeadersToAdd,
|
||||
makeHeadersValueOptions(dest.RequestHeaders.Add, true)...,
|
||||
)
|
||||
vh.RequestHeadersToAdd = append(
|
||||
vh.RequestHeadersToAdd,
|
||||
makeHeadersValueOptions(dest.RequestHeaders.Set, false)...,
|
||||
)
|
||||
vh.RequestHeadersToRemove = append(
|
||||
vh.RequestHeadersToRemove,
|
||||
dest.RequestHeaders.Remove...,
|
||||
)
|
||||
}
|
||||
if !dest.ResponseHeaders.IsZero() {
|
||||
vh.ResponseHeadersToAdd = append(
|
||||
vh.ResponseHeadersToAdd,
|
||||
makeHeadersValueOptions(dest.ResponseHeaders.Add, true)...,
|
||||
)
|
||||
vh.ResponseHeadersToAdd = append(
|
||||
vh.ResponseHeadersToAdd,
|
||||
makeHeadersValueOptions(dest.ResponseHeaders.Set, false)...,
|
||||
)
|
||||
vh.ResponseHeadersToRemove = append(
|
||||
vh.ResponseHeadersToRemove,
|
||||
dest.ResponseHeaders.Remove...,
|
||||
)
|
||||
}
|
||||
return nil
|
||||
}
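For example (a sketch inside this package; header names and values are made up), an ingress service defined like this ends up with its header manipulation applied on the virtual host rather than on individual routes:

```go
svc := &structs.IngressService{
	Name: "web",
	RequestHeaders: &structs.HTTPHeaderModifiers{
		Add:    map[string]string{"x-gateway": "ingress-web"},
		Set:    map[string]string{"x-forwarded-proto": "https"},
		Remove: []string{"x-debug"},
	},
}
vh := &envoy_route_v3.VirtualHost{}
if err := injectHeaderManipToVirtualHost(svc, vh); err != nil {
	panic(err)
}
// vh.RequestHeadersToAdd now holds both the "add" (append) and "set"
// (overwrite) entries, and vh.RequestHeadersToRemove contains "x-debug".
```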
|
||||
|
||||
func injectHeaderManipToWeightedCluster(split *structs.ServiceSplit, c *envoy_route_v3.WeightedCluster_ClusterWeight) error {
|
||||
if !split.RequestHeaders.IsZero() {
|
||||
c.RequestHeadersToAdd = append(
|
||||
|
|
|
@ -155,6 +155,30 @@ func TestRoutesFromSnapshot(t *testing.T) {
|
|||
},
|
||||
},
|
||||
}
|
||||
snap.IngressGateway.Listeners = map[proxycfg.IngressListenerKey]structs.IngressListener{
|
||||
{Protocol: "http", Port: 8080}: {
|
||||
Port: 8080,
|
||||
Services: []structs.IngressService{
|
||||
{
|
||||
Name: "foo",
|
||||
},
|
||||
{
|
||||
Name: "bar",
|
||||
},
|
||||
},
|
||||
},
|
||||
{Protocol: "http", Port: 443}: {
|
||||
Port: 443,
|
||||
Services: []structs.IngressService{
|
||||
{
|
||||
Name: "baz",
|
||||
},
|
||||
{
|
||||
Name: "qux",
|
||||
},
|
||||
},
|
||||
},
|
||||
}
|
||||
|
||||
// We do not add baz/qux here so that we test the chain.IsDefault() case
|
||||
entries := []structs.ConfigEntry{
|
||||
|
@ -216,6 +240,45 @@ func TestRoutesFromSnapshot(t *testing.T) {
|
|||
snap.IngressGateway.Listeners[k] = l
|
||||
},
|
||||
},
|
||||
{
|
||||
name: "ingress-with-sds-listener-level",
|
||||
create: proxycfg.TestConfigSnapshotIngressWithRouter,
|
||||
setup: setupIngressWithTwoHTTPServices(t, ingressSDSOpts{
|
||||
// Listener-level SDS means all services share the default route.
|
||||
listenerSDS: true,
|
||||
}),
|
||||
},
|
||||
{
|
||||
name: "ingress-with-sds-listener-level-wildcard",
|
||||
create: proxycfg.TestConfigSnapshotIngressWithRouter,
|
||||
setup: setupIngressWithTwoHTTPServices(t, ingressSDSOpts{
|
||||
// Listener-level SDS means all services share the default route.
|
||||
listenerSDS: true,
|
||||
wildcard: true,
|
||||
}),
|
||||
},
|
||||
{
|
||||
name: "ingress-with-sds-service-level",
|
||||
create: proxycfg.TestConfigSnapshotIngressWithRouter,
|
||||
setup: setupIngressWithTwoHTTPServices(t, ingressSDSOpts{
|
||||
listenerSDS: false,
|
||||
// Services should get separate routes and no default since they all
|
||||
// have custom certs.
|
||||
webSDS: true,
|
||||
fooSDS: true,
|
||||
}),
|
||||
},
|
||||
{
|
||||
name: "ingress-with-sds-service-level-mixed-tls",
|
||||
create: proxycfg.TestConfigSnapshotIngressWithRouter,
|
||||
setup: setupIngressWithTwoHTTPServices(t, ingressSDSOpts{
|
||||
listenerSDS: false,
|
||||
// Web needs a separate route as it has custom filter chain but foo
|
||||
// should use default route for listener.
|
||||
webSDS: true,
|
||||
fooSDS: false,
|
||||
}),
|
||||
},
|
||||
{
|
||||
name: "terminating-gateway-lb-config",
|
||||
create: proxycfg.TestConfigSnapshotTerminatingGateway,
|
||||
|
@ -324,6 +387,17 @@ func TestRoutesFromSnapshot(t *testing.T) {
|
|||
|
||||
gName += ".v2compat"
|
||||
|
||||
// It's easy to miss a new type that encodes a version from just
// looking at the golden files, so let's make it an error here. If
// there are ever false positives we can maybe include an allow list
// here, as it seems safer to assume something was missed than to
// assume we'll notice the golden file being wrong. Note the first
// one matches both resourceApiVersion and transportApiVersion. I
// left it as a suffix in case there are other field names that
// follow that convention now or in the future.
|
||||
require.NotContains(t, gotJSON, `ApiVersion": "V3"`)
|
||||
require.NotContains(t, gotJSON, `type.googleapis.com/envoy.api.v3`)
|
||||
|
||||
require.JSONEq(t, goldenEnvoy(t, filepath.Join("routes", gName), envoyVersion, latestEnvoyVersion_v2, gotJSON), gotJSON)
|
||||
})
|
||||
})
|
||||
|
@ -585,3 +659,155 @@ func TestEnvoyLBConfig_InjectToRouteAction(t *testing.T) {
|
|||
})
|
||||
}
|
||||
}
|
||||
|
||||
type ingressSDSOpts struct {
|
||||
listenerSDS, webSDS, fooSDS, wildcard bool
|
||||
entMetas map[string]*structs.EnterpriseMeta
|
||||
}
|
||||
|
||||
// setupIngressWithTwoHTTPServices can be used with
|
||||
// proxycfg.TestConfigSnapshotIngressWithRouter to generate a setup func for an
|
||||
// ingress listener with multiple HTTP services and varying SDS configurations
|
||||
// since those affect how we generate routes.
|
||||
func setupIngressWithTwoHTTPServices(t *testing.T, o ingressSDSOpts) func(snap *proxycfg.ConfigSnapshot) {
|
||||
return func(snap *proxycfg.ConfigSnapshot) {
|
||||
|
||||
snap.IngressGateway.TLSConfig.SDS = nil
|
||||
|
||||
webUpstream := structs.Upstream{
|
||||
DestinationName: "web",
|
||||
// We use empty, not default, here because of the way upstream identifiers
// vary between OSS and Enterprise, currently causing test conflicts. In
// real life `proxycfg` always sets ingress upstream namespaces to
// `NamespaceOrDefault`, which shouldn't matter because we should be
// consistent within a single binary; it's just inconvenient if OSS and
// Enterprise tests generate different output.
|
||||
DestinationNamespace: o.entMetas["web"].NamespaceOrEmpty(),
|
||||
DestinationPartition: o.entMetas["web"].PartitionOrEmpty(),
|
||||
LocalBindPort: 9191,
|
||||
IngressHosts: []string{
|
||||
"www.example.com",
|
||||
},
|
||||
}
|
||||
fooUpstream := structs.Upstream{
|
||||
DestinationName: "foo",
|
||||
DestinationNamespace: o.entMetas["foo"].NamespaceOrEmpty(),
|
||||
DestinationPartition: o.entMetas["foo"].PartitionOrEmpty(),
|
||||
LocalBindPort: 9191,
|
||||
IngressHosts: []string{
|
||||
"foo.example.com",
|
||||
},
|
||||
}
|
||||
|
||||
// Setup additional HTTP service on same listener with default router
|
||||
snap.IngressGateway.Upstreams = map[proxycfg.IngressListenerKey]structs.Upstreams{
|
||||
{Protocol: "http", Port: 9191}: {webUpstream, fooUpstream},
|
||||
}
|
||||
il := structs.IngressListener{
|
||||
Port: 9191,
|
||||
Services: []structs.IngressService{
|
||||
{
|
||||
Name: "web",
|
||||
Hosts: []string{"www.example.com"},
|
||||
},
|
||||
{
|
||||
Name: "foo",
|
||||
Hosts: []string{"foo.example.com"},
|
||||
},
|
||||
},
|
||||
}
|
||||
for i, svc := range il.Services {
|
||||
if em, ok := o.entMetas[svc.Name]; ok && em != nil {
|
||||
il.Services[i].EnterpriseMeta = *em
|
||||
}
|
||||
}
|
||||
|
||||
// Now set the appropriate SDS configs
|
||||
if o.listenerSDS {
|
||||
il.TLS = &structs.GatewayTLSConfig{
|
||||
SDS: &structs.GatewayTLSSDSConfig{
|
||||
ClusterName: "listener-cluster",
|
||||
CertResource: "listener-cert",
|
||||
},
|
||||
}
|
||||
}
|
||||
if o.webSDS {
|
||||
il.Services[0].TLS = &structs.GatewayServiceTLSConfig{
|
||||
SDS: &structs.GatewayTLSSDSConfig{
|
||||
ClusterName: "web-cluster",
|
||||
CertResource: "www-cert",
|
||||
},
|
||||
}
|
||||
}
|
||||
if o.fooSDS {
|
||||
il.Services[1].TLS = &structs.GatewayServiceTLSConfig{
|
||||
SDS: &structs.GatewayTLSSDSConfig{
|
||||
ClusterName: "foo-cluster",
|
||||
CertResource: "foo-cert",
|
||||
},
|
||||
}
|
||||
}
|
||||
|
||||
if o.wildcard {
|
||||
// undo all that and set just a single wildcard config with no TLS to test
|
||||
// the lookup path where we have to compare an actual resolved upstream to
|
||||
// a wildcard config.
|
||||
il.Services = []structs.IngressService{
|
||||
{
|
||||
Name: "*",
|
||||
},
|
||||
}
|
||||
// We also don't support user-specified hosts with wildcard so remove
|
||||
// those from the upstreams.
|
||||
ups := snap.IngressGateway.Upstreams[proxycfg.IngressListenerKey{Protocol: "http", Port: 9191}]
|
||||
for i := range ups {
|
||||
ups[i].IngressHosts = nil
|
||||
}
|
||||
snap.IngressGateway.Upstreams[proxycfg.IngressListenerKey{Protocol: "http", Port: 9191}] = ups
|
||||
}
|
||||
|
||||
snap.IngressGateway.Listeners[proxycfg.IngressListenerKey{Protocol: "http", Port: 9191}] = il
|
||||
|
||||
entries := []structs.ConfigEntry{
|
||||
&structs.ProxyConfigEntry{
|
||||
Kind: structs.ProxyDefaults,
|
||||
Name: structs.ProxyConfigGlobal,
|
||||
Config: map[string]interface{}{
|
||||
"protocol": "http",
|
||||
},
|
||||
},
|
||||
&structs.ServiceResolverConfigEntry{
|
||||
Kind: structs.ServiceResolver,
|
||||
Name: "web",
|
||||
ConnectTimeout: 22 * time.Second,
|
||||
},
|
||||
&structs.ServiceResolverConfigEntry{
|
||||
Kind: structs.ServiceResolver,
|
||||
Name: "foo",
|
||||
ConnectTimeout: 22 * time.Second,
|
||||
},
|
||||
}
|
||||
for i, e := range entries {
|
||||
switch v := e.(type) {
|
||||
// Add other Service types here if we ever need them above
|
||||
case *structs.ServiceResolverConfigEntry:
|
||||
if em, ok := o.entMetas[v.Name]; ok && em != nil {
|
||||
v.EnterpriseMeta = *em
|
||||
entries[i] = v
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
webChain := discoverychain.TestCompileConfigEntries(t, "web",
|
||||
o.entMetas["web"].NamespaceOrDefault(),
|
||||
o.entMetas["web"].PartitionOrDefault(), "dc1",
|
||||
connect.TestClusterID+".consul", "dc1", nil, entries...)
|
||||
fooChain := discoverychain.TestCompileConfigEntries(t, "foo",
|
||||
o.entMetas["foo"].NamespaceOrDefault(),
|
||||
o.entMetas["web"].PartitionOrDefault(), "dc1",
|
||||
connect.TestClusterID+".consul", "dc1", nil, entries...)
|
||||
|
||||
snap.IngressGateway.DiscoveryChain[webUpstream.Identifier()] = webChain
|
||||
snap.IngressGateway.DiscoveryChain[fooUpstream.Identifier()] = fooChain
|
||||
}
|
||||
}
|
||||
|
|
|
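For readability, here is a rough sketch of how a table-driven case in the routes test wires this helper up, following the `name`/`create` pattern and the `webSDS`/`fooSDS` closure visible at the top of this diff. The `setup` field name and the case name are assumptions for illustration; the actual test table definition is not part of this excerpt.

```go
// Hypothetical table entry (field names other than name/create are assumed):
{
	name:   "ingress-with-sds-service-level",
	create: proxycfg.TestConfigSnapshotIngressWithRouter,
	setup: setupIngressWithTwoHTTPServices(t, ingressSDSOpts{
		// Give web its own cert so it gets a dedicated filter chain, while
		// foo falls back to the listener-level route.
		webSDS: true,
		fooSDS: false,
	}),
},
```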
@ -3,7 +3,7 @@
"resources": [
{
"@type": "type.googleapis.com/envoy.config.listener.v3.Listener",
"name": "custom-upstream",
"name": "db:custom-upstream",
"address": {
"socketAddress": {
"address": "11.11.11.11",
@ -27,11 +27,11 @@
},
{
"@type": "type.googleapis.com/envoy.config.listener.v3.Listener",
"name": "prepared_query:geo-cache:127.10.10.10:8181",
"name": "prepared_query:geo-cache:custom-upstream",
"address": {
"socketAddress": {
"address": "127.10.10.10",
"portValue": 8181
"address": "11.11.11.11",
"portValue": 11111
}
},
"filterChains": [
@ -41,14 +41,13 @@
"name": "envoy.filters.network.tcp_proxy",
"typedConfig": {
"@type": "type.googleapis.com/envoy.extensions.filters.network.tcp_proxy.v3.TcpProxy",
"statPrefix": "upstream.prepared_query_geo-cache",
"cluster": "geo-cache.default.dc1.query.11111111-2222-3333-4444-555555555555.consul"
"statPrefix": "foo-stats",
"cluster": "random-cluster"
}
}
]
}
],
"trafficDirection": "OUTBOUND"
]
},
{
"@type": "type.googleapis.com/envoy.config.listener.v3.Listener",
@ -3,7 +3,7 @@
"resources": [
{
"@type": "type.googleapis.com/envoy.api.v2.Listener",
"name": "custom-upstream",
"name": "db:custom-upstream",
"address": {
"socketAddress": {
"address": "11.11.11.11",
@ -27,11 +27,11 @@
},
{
"@type": "type.googleapis.com/envoy.api.v2.Listener",
"name": "prepared_query:geo-cache:127.10.10.10:8181",
"name": "prepared_query:geo-cache:custom-upstream",
"address": {
"socketAddress": {
"address": "127.10.10.10",
"portValue": 8181
"address": "11.11.11.11",
"portValue": 11111
}
},
"filterChains": [
@ -41,14 +41,13 @@
"name": "envoy.filters.network.tcp_proxy",
"typedConfig": {
"@type": "type.googleapis.com/envoy.config.filter.network.tcp_proxy.v2.TcpProxy",
"statPrefix": "upstream.prepared_query_geo-cache",
"cluster": "geo-cache.default.dc1.query.11111111-2222-3333-4444-555555555555.consul"
"statPrefix": "foo-stats",
"cluster": "random-cluster"
}
}
]
}
],
"trafficDirection": "OUTBOUND"
]
},
{
"@type": "type.googleapis.com/envoy.api.v2.Listener",
154
agent/xds/testdata/listeners/ingress-with-sds-listener+service-level.envoy-1-18-x.golden
vendored
Normal file
|
@ -0,0 +1,154 @@
|
|||
{
|
||||
"versionInfo": "00000001",
|
||||
"resources": [
|
||||
{
|
||||
"@type": "type.googleapis.com/envoy.config.listener.v3.Listener",
|
||||
"name": "http:1.2.3.4:8080",
|
||||
"address": {
|
||||
"socketAddress": {
|
||||
"address": "1.2.3.4",
|
||||
"portValue": 8080
|
||||
}
|
||||
},
|
||||
"filterChains": [
|
||||
{
|
||||
"filterChainMatch": {
|
||||
"serverNames": [
|
||||
"s1.example.com"
|
||||
]
|
||||
},
|
||||
"filters": [
|
||||
{
|
||||
"name": "envoy.filters.network.http_connection_manager",
|
||||
"typedConfig": {
|
||||
"@type": "type.googleapis.com/envoy.extensions.filters.network.http_connection_manager.v3.HttpConnectionManager",
|
||||
"statPrefix": "ingress_upstream_8080_s1",
|
||||
"rds": {
|
||||
"configSource": {
|
||||
"ads": {
|
||||
|
||||
},
|
||||
"resourceApiVersion": "V3"
|
||||
},
|
||||
"routeConfigName": "8080_s1"
|
||||
},
|
||||
"httpFilters": [
|
||||
{
|
||||
"name": "envoy.filters.http.router"
|
||||
}
|
||||
],
|
||||
"tracing": {
|
||||
"randomSampling": {
|
||||
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
],
|
||||
"transportSocket": {
|
||||
"name": "tls",
|
||||
"typedConfig": {
|
||||
"@type": "type.googleapis.com/envoy.extensions.transport_sockets.tls.v3.DownstreamTlsContext",
|
||||
"commonTlsContext": {
|
||||
"tlsParams": {
|
||||
|
||||
},
|
||||
"tlsCertificateSdsSecretConfigs": [
|
||||
{
|
||||
"name": "s1.example.com-cert",
|
||||
"sdsConfig": {
|
||||
"apiConfigSource": {
|
||||
"apiType": "GRPC",
|
||||
"transportApiVersion": "V3",
|
||||
"grpcServices": [
|
||||
{
|
||||
"envoyGrpc": {
|
||||
"clusterName": "sds-cluster-1"
|
||||
},
|
||||
"timeout": "5s"
|
||||
}
|
||||
]
|
||||
},
|
||||
"resourceApiVersion": "V3"
|
||||
}
|
||||
}
|
||||
]
|
||||
},
|
||||
"requireClientCertificate": false
|
||||
}
|
||||
}
|
||||
},
|
||||
{
|
||||
"filters": [
|
||||
{
|
||||
"name": "envoy.filters.network.http_connection_manager",
|
||||
"typedConfig": {
|
||||
"@type": "type.googleapis.com/envoy.extensions.filters.network.http_connection_manager.v3.HttpConnectionManager",
|
||||
"statPrefix": "ingress_upstream_8080",
|
||||
"rds": {
|
||||
"configSource": {
|
||||
"ads": {
|
||||
|
||||
},
|
||||
"resourceApiVersion": "V3"
|
||||
},
|
||||
"routeConfigName": "8080"
|
||||
},
|
||||
"httpFilters": [
|
||||
{
|
||||
"name": "envoy.filters.http.router"
|
||||
}
|
||||
],
|
||||
"tracing": {
|
||||
"randomSampling": {
|
||||
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
],
|
||||
"transportSocket": {
|
||||
"name": "tls",
|
||||
"typedConfig": {
|
||||
"@type": "type.googleapis.com/envoy.extensions.transport_sockets.tls.v3.DownstreamTlsContext",
|
||||
"commonTlsContext": {
|
||||
"tlsParams": {
|
||||
|
||||
},
|
||||
"tlsCertificateSdsSecretConfigs": [
|
||||
{
|
||||
"name": "*.example.com-cert",
|
||||
"sdsConfig": {
|
||||
"apiConfigSource": {
|
||||
"apiType": "GRPC",
|
||||
"transportApiVersion": "V3",
|
||||
"grpcServices": [
|
||||
{
|
||||
"envoyGrpc": {
|
||||
"clusterName": "sds-cluster-2"
|
||||
},
|
||||
"timeout": "5s"
|
||||
}
|
||||
]
|
||||
},
|
||||
"resourceApiVersion": "V3"
|
||||
}
|
||||
}
|
||||
]
|
||||
},
|
||||
"requireClientCertificate": false
|
||||
}
|
||||
}
|
||||
}
|
||||
],
|
||||
"listenerFilters": [
|
||||
{
|
||||
"name": "envoy.filters.listener.tls_inspector"
|
||||
}
|
||||
],
|
||||
"trafficDirection": "OUTBOUND"
|
||||
}
|
||||
],
|
||||
"typeUrl": "type.googleapis.com/envoy.config.listener.v3.Listener",
|
||||
"nonce": "00000001"
|
||||
}
|
154
agent/xds/testdata/listeners/ingress-with-sds-listener+service-level.v2compat.envoy-1-16-x.golden
vendored
Normal file
|
@ -0,0 +1,154 @@
|
|||
{
|
||||
"versionInfo": "00000001",
|
||||
"resources": [
|
||||
{
|
||||
"@type": "type.googleapis.com/envoy.api.v2.Listener",
|
||||
"name": "http:1.2.3.4:8080",
|
||||
"address": {
|
||||
"socketAddress": {
|
||||
"address": "1.2.3.4",
|
||||
"portValue": 8080
|
||||
}
|
||||
},
|
||||
"filterChains": [
|
||||
{
|
||||
"filterChainMatch": {
|
||||
"serverNames": [
|
||||
"s1.example.com"
|
||||
]
|
||||
},
|
||||
"filters": [
|
||||
{
|
||||
"name": "envoy.filters.network.http_connection_manager",
|
||||
"typedConfig": {
|
||||
"@type": "type.googleapis.com/envoy.config.filter.network.http_connection_manager.v2.HttpConnectionManager",
|
||||
"statPrefix": "ingress_upstream_8080_s1",
|
||||
"rds": {
|
||||
"configSource": {
|
||||
"ads": {
|
||||
|
||||
},
|
||||
"resourceApiVersion": "V2"
|
||||
},
|
||||
"routeConfigName": "8080_s1"
|
||||
},
|
||||
"httpFilters": [
|
||||
{
|
||||
"name": "envoy.filters.http.router"
|
||||
}
|
||||
],
|
||||
"tracing": {
|
||||
"randomSampling": {
|
||||
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
],
|
||||
"transportSocket": {
|
||||
"name": "tls",
|
||||
"typedConfig": {
|
||||
"@type": "type.googleapis.com/envoy.api.v2.auth.DownstreamTlsContext",
|
||||
"commonTlsContext": {
|
||||
"tlsParams": {
|
||||
|
||||
},
|
||||
"tlsCertificateSdsSecretConfigs": [
|
||||
{
|
||||
"name": "s1.example.com-cert",
|
||||
"sdsConfig": {
|
||||
"apiConfigSource": {
|
||||
"apiType": "GRPC",
|
||||
"transportApiVersion": "V2",
|
||||
"grpcServices": [
|
||||
{
|
||||
"envoyGrpc": {
|
||||
"clusterName": "sds-cluster-1"
|
||||
},
|
||||
"timeout": "5s"
|
||||
}
|
||||
]
|
||||
},
|
||||
"resourceApiVersion": "V2"
|
||||
}
|
||||
}
|
||||
]
|
||||
},
|
||||
"requireClientCertificate": false
|
||||
}
|
||||
}
|
||||
},
|
||||
{
|
||||
"filters": [
|
||||
{
|
||||
"name": "envoy.filters.network.http_connection_manager",
|
||||
"typedConfig": {
|
||||
"@type": "type.googleapis.com/envoy.config.filter.network.http_connection_manager.v2.HttpConnectionManager",
|
||||
"statPrefix": "ingress_upstream_8080",
|
||||
"rds": {
|
||||
"configSource": {
|
||||
"ads": {
|
||||
|
||||
},
|
||||
"resourceApiVersion": "V2"
|
||||
},
|
||||
"routeConfigName": "8080"
|
||||
},
|
||||
"httpFilters": [
|
||||
{
|
||||
"name": "envoy.filters.http.router"
|
||||
}
|
||||
],
|
||||
"tracing": {
|
||||
"randomSampling": {
|
||||
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
],
|
||||
"transportSocket": {
|
||||
"name": "tls",
|
||||
"typedConfig": {
|
||||
"@type": "type.googleapis.com/envoy.api.v2.auth.DownstreamTlsContext",
|
||||
"commonTlsContext": {
|
||||
"tlsParams": {
|
||||
|
||||
},
|
||||
"tlsCertificateSdsSecretConfigs": [
|
||||
{
|
||||
"name": "*.example.com-cert",
|
||||
"sdsConfig": {
|
||||
"apiConfigSource": {
|
||||
"apiType": "GRPC",
|
||||
"transportApiVersion": "V2",
|
||||
"grpcServices": [
|
||||
{
|
||||
"envoyGrpc": {
|
||||
"clusterName": "sds-cluster-2"
|
||||
},
|
||||
"timeout": "5s"
|
||||
}
|
||||
]
|
||||
},
|
||||
"resourceApiVersion": "V2"
|
||||
}
|
||||
}
|
||||
]
|
||||
},
|
||||
"requireClientCertificate": false
|
||||
}
|
||||
}
|
||||
}
|
||||
],
|
||||
"listenerFilters": [
|
||||
{
|
||||
"name": "envoy.filters.listener.tls_inspector"
|
||||
}
|
||||
],
|
||||
"trafficDirection": "OUTBOUND"
|
||||
}
|
||||
],
|
||||
"typeUrl": "type.googleapis.com/envoy.api.v2.Listener",
|
||||
"nonce": "00000001"
|
||||
}
|
82
agent/xds/testdata/listeners/ingress-with-sds-listener-gw-level-http.envoy-1-18-x.golden
vendored
Normal file
|
@ -0,0 +1,82 @@
|
|||
{
|
||||
"versionInfo": "00000001",
|
||||
"resources": [
|
||||
{
|
||||
"@type": "type.googleapis.com/envoy.config.listener.v3.Listener",
|
||||
"name": "http:1.2.3.4:8080",
|
||||
"address": {
|
||||
"socketAddress": {
|
||||
"address": "1.2.3.4",
|
||||
"portValue": 8080
|
||||
}
|
||||
},
|
||||
"filterChains": [
|
||||
{
|
||||
"filters": [
|
||||
{
|
||||
"name": "envoy.filters.network.http_connection_manager",
|
||||
"typedConfig": {
|
||||
"@type": "type.googleapis.com/envoy.extensions.filters.network.http_connection_manager.v3.HttpConnectionManager",
|
||||
"statPrefix": "ingress_upstream_8080",
|
||||
"rds": {
|
||||
"configSource": {
|
||||
"ads": {
|
||||
|
||||
},
|
||||
"resourceApiVersion": "V3"
|
||||
},
|
||||
"routeConfigName": "8080"
|
||||
},
|
||||
"httpFilters": [
|
||||
{
|
||||
"name": "envoy.filters.http.router"
|
||||
}
|
||||
],
|
||||
"tracing": {
|
||||
"randomSampling": {
|
||||
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
],
|
||||
"transportSocket": {
|
||||
"name": "tls",
|
||||
"typedConfig": {
|
||||
"@type": "type.googleapis.com/envoy.extensions.transport_sockets.tls.v3.DownstreamTlsContext",
|
||||
"commonTlsContext": {
|
||||
"tlsParams": {
|
||||
|
||||
},
|
||||
"tlsCertificateSdsSecretConfigs": [
|
||||
{
|
||||
"name": "listener-cert",
|
||||
"sdsConfig": {
|
||||
"apiConfigSource": {
|
||||
"apiType": "GRPC",
|
||||
"transportApiVersion": "V3",
|
||||
"grpcServices": [
|
||||
{
|
||||
"envoyGrpc": {
|
||||
"clusterName": "sds-cluster"
|
||||
},
|
||||
"timeout": "5s"
|
||||
}
|
||||
]
|
||||
},
|
||||
"resourceApiVersion": "V3"
|
||||
}
|
||||
}
|
||||
]
|
||||
},
|
||||
"requireClientCertificate": false
|
||||
}
|
||||
}
|
||||
}
|
||||
],
|
||||
"trafficDirection": "OUTBOUND"
|
||||
}
|
||||
],
|
||||
"typeUrl": "type.googleapis.com/envoy.config.listener.v3.Listener",
|
||||
"nonce": "00000001"
|
||||
}
|
|
@ -0,0 +1,82 @@
|
|||
{
|
||||
"versionInfo": "00000001",
|
||||
"resources": [
|
||||
{
|
||||
"@type": "type.googleapis.com/envoy.api.v2.Listener",
|
||||
"name": "http:1.2.3.4:8080",
|
||||
"address": {
|
||||
"socketAddress": {
|
||||
"address": "1.2.3.4",
|
||||
"portValue": 8080
|
||||
}
|
||||
},
|
||||
"filterChains": [
|
||||
{
|
||||
"filters": [
|
||||
{
|
||||
"name": "envoy.filters.network.http_connection_manager",
|
||||
"typedConfig": {
|
||||
"@type": "type.googleapis.com/envoy.config.filter.network.http_connection_manager.v2.HttpConnectionManager",
|
||||
"statPrefix": "ingress_upstream_8080",
|
||||
"rds": {
|
||||
"configSource": {
|
||||
"ads": {
|
||||
|
||||
},
|
||||
"resourceApiVersion": "V2"
|
||||
},
|
||||
"routeConfigName": "8080"
|
||||
},
|
||||
"httpFilters": [
|
||||
{
|
||||
"name": "envoy.filters.http.router"
|
||||
}
|
||||
],
|
||||
"tracing": {
|
||||
"randomSampling": {
|
||||
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
],
|
||||
"transportSocket": {
|
||||
"name": "tls",
|
||||
"typedConfig": {
|
||||
"@type": "type.googleapis.com/envoy.api.v2.auth.DownstreamTlsContext",
|
||||
"commonTlsContext": {
|
||||
"tlsParams": {
|
||||
|
||||
},
|
||||
"tlsCertificateSdsSecretConfigs": [
|
||||
{
|
||||
"name": "listener-cert",
|
||||
"sdsConfig": {
|
||||
"apiConfigSource": {
|
||||
"apiType": "GRPC",
|
||||
"transportApiVersion": "V2",
|
||||
"grpcServices": [
|
||||
{
|
||||
"envoyGrpc": {
|
||||
"clusterName": "sds-cluster"
|
||||
},
|
||||
"timeout": "5s"
|
||||
}
|
||||
]
|
||||
},
|
||||
"resourceApiVersion": "V2"
|
||||
}
|
||||
}
|
||||
]
|
||||
},
|
||||
"requireClientCertificate": false
|
||||
}
|
||||
}
|
||||
}
|
||||
],
|
||||
"trafficDirection": "OUTBOUND"
|
||||
}
|
||||
],
|
||||
"typeUrl": "type.googleapis.com/envoy.api.v2.Listener",
|
||||
"nonce": "00000001"
|
||||
}
|
89
agent/xds/testdata/listeners/ingress-with-sds-listener-gw-level-mixed-tls.envoy-1-18-x.golden
vendored
Normal file
|
@ -0,0 +1,89 @@
|
|||
{
|
||||
"versionInfo": "00000001",
|
||||
"resources": [
|
||||
{
|
||||
"@type": "type.googleapis.com/envoy.config.listener.v3.Listener",
|
||||
"name": "insecure:1.2.3.4:9090",
|
||||
"address": {
|
||||
"socketAddress": {
|
||||
"address": "1.2.3.4",
|
||||
"portValue": 9090
|
||||
}
|
||||
},
|
||||
"filterChains": [
|
||||
{
|
||||
"filters": [
|
||||
{
|
||||
"name": "envoy.filters.network.tcp_proxy",
|
||||
"typedConfig": {
|
||||
"@type": "type.googleapis.com/envoy.extensions.filters.network.tcp_proxy.v3.TcpProxy",
|
||||
"statPrefix": "upstream.insecure.default.default.dc1",
|
||||
"cluster": "insecure.default.dc1.internal.11111111-2222-3333-4444-555555555555.consul"
|
||||
}
|
||||
}
|
||||
]
|
||||
}
|
||||
],
|
||||
"trafficDirection": "OUTBOUND"
|
||||
},
|
||||
{
|
||||
"@type": "type.googleapis.com/envoy.config.listener.v3.Listener",
|
||||
"name": "secure:1.2.3.4:8080",
|
||||
"address": {
|
||||
"socketAddress": {
|
||||
"address": "1.2.3.4",
|
||||
"portValue": 8080
|
||||
}
|
||||
},
|
||||
"filterChains": [
|
||||
{
|
||||
"filters": [
|
||||
{
|
||||
"name": "envoy.filters.network.tcp_proxy",
|
||||
"typedConfig": {
|
||||
"@type": "type.googleapis.com/envoy.extensions.filters.network.tcp_proxy.v3.TcpProxy",
|
||||
"statPrefix": "upstream.secure.default.default.dc1",
|
||||
"cluster": "secure.default.dc1.internal.11111111-2222-3333-4444-555555555555.consul"
|
||||
}
|
||||
}
|
||||
],
|
||||
"transportSocket": {
|
||||
"name": "tls",
|
||||
"typedConfig": {
|
||||
"@type": "type.googleapis.com/envoy.extensions.transport_sockets.tls.v3.DownstreamTlsContext",
|
||||
"commonTlsContext": {
|
||||
"tlsParams": {
|
||||
|
||||
},
|
||||
"tlsCertificateSdsSecretConfigs": [
|
||||
{
|
||||
"name": "listener-cert",
|
||||
"sdsConfig": {
|
||||
"apiConfigSource": {
|
||||
"apiType": "GRPC",
|
||||
"transportApiVersion": "V3",
|
||||
"grpcServices": [
|
||||
{
|
||||
"envoyGrpc": {
|
||||
"clusterName": "listener-sds-cluster"
|
||||
},
|
||||
"timeout": "5s"
|
||||
}
|
||||
]
|
||||
},
|
||||
"resourceApiVersion": "V3"
|
||||
}
|
||||
}
|
||||
]
|
||||
},
|
||||
"requireClientCertificate": false
|
||||
}
|
||||
}
|
||||
}
|
||||
],
|
||||
"trafficDirection": "OUTBOUND"
|
||||
}
|
||||
],
|
||||
"typeUrl": "type.googleapis.com/envoy.config.listener.v3.Listener",
|
||||
"nonce": "00000001"
|
||||
}
|
|
@ -0,0 +1,89 @@
|
|||
{
|
||||
"versionInfo": "00000001",
|
||||
"resources": [
|
||||
{
|
||||
"@type": "type.googleapis.com/envoy.api.v2.Listener",
|
||||
"name": "insecure:1.2.3.4:9090",
|
||||
"address": {
|
||||
"socketAddress": {
|
||||
"address": "1.2.3.4",
|
||||
"portValue": 9090
|
||||
}
|
||||
},
|
||||
"filterChains": [
|
||||
{
|
||||
"filters": [
|
||||
{
|
||||
"name": "envoy.filters.network.tcp_proxy",
|
||||
"typedConfig": {
|
||||
"@type": "type.googleapis.com/envoy.config.filter.network.tcp_proxy.v2.TcpProxy",
|
||||
"statPrefix": "upstream.insecure.default.default.dc1",
|
||||
"cluster": "insecure.default.dc1.internal.11111111-2222-3333-4444-555555555555.consul"
|
||||
}
|
||||
}
|
||||
]
|
||||
}
|
||||
],
|
||||
"trafficDirection": "OUTBOUND"
|
||||
},
|
||||
{
|
||||
"@type": "type.googleapis.com/envoy.api.v2.Listener",
|
||||
"name": "secure:1.2.3.4:8080",
|
||||
"address": {
|
||||
"socketAddress": {
|
||||
"address": "1.2.3.4",
|
||||
"portValue": 8080
|
||||
}
|
||||
},
|
||||
"filterChains": [
|
||||
{
|
||||
"filters": [
|
||||
{
|
||||
"name": "envoy.filters.network.tcp_proxy",
|
||||
"typedConfig": {
|
||||
"@type": "type.googleapis.com/envoy.config.filter.network.tcp_proxy.v2.TcpProxy",
|
||||
"statPrefix": "upstream.secure.default.default.dc1",
|
||||
"cluster": "secure.default.dc1.internal.11111111-2222-3333-4444-555555555555.consul"
|
||||
}
|
||||
}
|
||||
],
|
||||
"transportSocket": {
|
||||
"name": "tls",
|
||||
"typedConfig": {
|
||||
"@type": "type.googleapis.com/envoy.api.v2.auth.DownstreamTlsContext",
|
||||
"commonTlsContext": {
|
||||
"tlsParams": {
|
||||
|
||||
},
|
||||
"tlsCertificateSdsSecretConfigs": [
|
||||
{
|
||||
"name": "listener-cert",
|
||||
"sdsConfig": {
|
||||
"apiConfigSource": {
|
||||
"apiType": "GRPC",
|
||||
"transportApiVersion": "V2",
|
||||
"grpcServices": [
|
||||
{
|
||||
"envoyGrpc": {
|
||||
"clusterName": "listener-sds-cluster"
|
||||
},
|
||||
"timeout": "5s"
|
||||
}
|
||||
]
|
||||
},
|
||||
"resourceApiVersion": "V2"
|
||||
}
|
||||
}
|
||||
]
|
||||
},
|
||||
"requireClientCertificate": false
|
||||
}
|
||||
}
|
||||
}
|
||||
],
|
||||
"trafficDirection": "OUTBOUND"
|
||||
}
|
||||
],
|
||||
"typeUrl": "type.googleapis.com/envoy.api.v2.Listener",
|
||||
"nonce": "00000001"
|
||||
}
|
64
agent/xds/testdata/listeners/ingress-with-sds-listener-gw-level.envoy-1-18-x.golden
vendored
Normal file
|
@ -0,0 +1,64 @@
|
|||
{
|
||||
"versionInfo": "00000001",
|
||||
"resources": [
|
||||
{
|
||||
"@type": "type.googleapis.com/envoy.config.listener.v3.Listener",
|
||||
"name": "db:1.2.3.4:9191",
|
||||
"address": {
|
||||
"socketAddress": {
|
||||
"address": "1.2.3.4",
|
||||
"portValue": 9191
|
||||
}
|
||||
},
|
||||
"filterChains": [
|
||||
{
|
||||
"filters": [
|
||||
{
|
||||
"name": "envoy.filters.network.tcp_proxy",
|
||||
"typedConfig": {
|
||||
"@type": "type.googleapis.com/envoy.extensions.filters.network.tcp_proxy.v3.TcpProxy",
|
||||
"statPrefix": "upstream.db.default.default.dc1",
|
||||
"cluster": "db.default.dc1.internal.11111111-2222-3333-4444-555555555555.consul"
|
||||
}
|
||||
}
|
||||
],
|
||||
"transportSocket": {
|
||||
"name": "tls",
|
||||
"typedConfig": {
|
||||
"@type": "type.googleapis.com/envoy.extensions.transport_sockets.tls.v3.DownstreamTlsContext",
|
||||
"commonTlsContext": {
|
||||
"tlsParams": {
|
||||
|
||||
},
|
||||
"tlsCertificateSdsSecretConfigs": [
|
||||
{
|
||||
"name": "cert-resource",
|
||||
"sdsConfig": {
|
||||
"apiConfigSource": {
|
||||
"apiType": "GRPC",
|
||||
"transportApiVersion": "V3",
|
||||
"grpcServices": [
|
||||
{
|
||||
"envoyGrpc": {
|
||||
"clusterName": "sds-cluster"
|
||||
},
|
||||
"timeout": "5s"
|
||||
}
|
||||
]
|
||||
},
|
||||
"resourceApiVersion": "V3"
|
||||
}
|
||||
}
|
||||
]
|
||||
},
|
||||
"requireClientCertificate": false
|
||||
}
|
||||
}
|
||||
}
|
||||
],
|
||||
"trafficDirection": "OUTBOUND"
|
||||
}
|
||||
],
|
||||
"typeUrl": "type.googleapis.com/envoy.config.listener.v3.Listener",
|
||||
"nonce": "00000001"
|
||||
}
|
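The gateway-level goldens above correspond to an ingress listener whose TLS uses a single listener-wide SDS certificate. A minimal sketch of that shape, assuming the same `structs.GatewayTLSConfig`/`structs.GatewayTLSSDSConfig` fields used by the test helper earlier in this diff; the literal values simply mirror the `sds-cluster`/`cert-resource` names in the golden, and the snapshot plumbing around it is omitted:

```go
// Sketch only: listener-wide SDS config for an ingress listener. The exact
// test setup that produced this particular golden is not shown in this excerpt.
il := structs.IngressListener{
	Port:     9191,
	Services: []structs.IngressService{{Name: "db"}},
	TLS: &structs.GatewayTLSConfig{
		SDS: &structs.GatewayTLSSDSConfig{
			ClusterName:  "sds-cluster",   // Envoy cluster that serves the SDS API
			CertResource: "cert-resource", // certificate resource requested over SDS
		},
	},
}
```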
64
agent/xds/testdata/listeners/ingress-with-sds-listener-gw-level.v2compat.envoy-1-16-x.golden
vendored
Normal file
|
@ -0,0 +1,64 @@
|
|||
{
|
||||
"versionInfo": "00000001",
|
||||
"resources": [
|
||||
{
|
||||
"@type": "type.googleapis.com/envoy.api.v2.Listener",
|
||||
"name": "db:1.2.3.4:9191",
|
||||
"address": {
|
||||
"socketAddress": {
|
||||
"address": "1.2.3.4",
|
||||
"portValue": 9191
|
||||
}
|
||||
},
|
||||
"filterChains": [
|
||||
{
|
||||
"filters": [
|
||||
{
|
||||
"name": "envoy.filters.network.tcp_proxy",
|
||||
"typedConfig": {
|
||||
"@type": "type.googleapis.com/envoy.config.filter.network.tcp_proxy.v2.TcpProxy",
|
||||
"statPrefix": "upstream.db.default.default.dc1",
|
||||
"cluster": "db.default.dc1.internal.11111111-2222-3333-4444-555555555555.consul"
|
||||
}
|
||||
}
|
||||
],
|
||||
"transportSocket": {
|
||||
"name": "tls",
|
||||
"typedConfig": {
|
||||
"@type": "type.googleapis.com/envoy.api.v2.auth.DownstreamTlsContext",
|
||||
"commonTlsContext": {
|
||||
"tlsParams": {
|
||||
|
||||
},
|
||||
"tlsCertificateSdsSecretConfigs": [
|
||||
{
|
||||
"name": "cert-resource",
|
||||
"sdsConfig": {
|
||||
"apiConfigSource": {
|
||||
"apiType": "GRPC",
|
||||
"transportApiVersion": "V2",
|
||||
"grpcServices": [
|
||||
{
|
||||
"envoyGrpc": {
|
||||
"clusterName": "sds-cluster"
|
||||
},
|
||||
"timeout": "5s"
|
||||
}
|
||||
]
|
||||
},
|
||||
"resourceApiVersion": "V2"
|
||||
}
|
||||
}
|
||||
]
|
||||
},
|
||||
"requireClientCertificate": false
|
||||
}
|
||||
}
|
||||
}
|
||||
],
|
||||
"trafficDirection": "OUTBOUND"
|
||||
}
|
||||
],
|
||||
"typeUrl": "type.googleapis.com/envoy.api.v2.Listener",
|
||||
"nonce": "00000001"
|
||||
}
|
142
agent/xds/testdata/listeners/ingress-with-sds-listener-listener+service-level.envoy-1-18-x.golden
vendored
Normal file
|
@ -0,0 +1,142 @@
|
|||
{
|
||||
"versionInfo": "00000001",
|
||||
"resources": [
|
||||
{
|
||||
"@type": "type.googleapis.com/envoy.config.listener.v3.Listener",
|
||||
"name": "http:1.2.3.4:8080",
|
||||
"address": {
|
||||
"socketAddress": {
|
||||
"address": "1.2.3.4",
|
||||
"portValue": 8080
|
||||
}
|
||||
},
|
||||
"filterChains": [
|
||||
{
|
||||
"filterChainMatch": {
|
||||
"serverNames": [
|
||||
"s1.example.com"
|
||||
]
|
||||
},
|
||||
"filters": [
|
||||
{
|
||||
"name": "envoy.filters.network.http_connection_manager",
|
||||
"typedConfig": {
|
||||
"@type": "type.googleapis.com/envoy.extensions.filters.network.http_connection_manager.v3.HttpConnectionManager",
|
||||
"statPrefix": "ingress_upstream_8080_s1",
|
||||
"rds": {
|
||||
"configSource": {
|
||||
"ads": {
|
||||
|
||||
},
|
||||
"resourceApiVersion": "V3"
|
||||
},
|
||||
"routeConfigName": "8080_s1"
|
||||
},
|
||||
"httpFilters": [
|
||||
{
|
||||
"name": "envoy.filters.http.router"
|
||||
}
|
||||
],
|
||||
"tracing": {
|
||||
"randomSampling": {
|
||||
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
],
|
||||
"transportSocket": {
|
||||
"name": "tls",
|
||||
"typedConfig": {
|
||||
"@type": "type.googleapis.com/envoy.extensions.transport_sockets.tls.v3.DownstreamTlsContext",
|
||||
"commonTlsContext": {
|
||||
"tlsParams": {
|
||||
|
||||
},
|
||||
"tlsCertificateSdsSecretConfigs": [
|
||||
{
|
||||
"name": "s1.example.com-cert",
|
||||
"sdsConfig": {
|
||||
"apiConfigSource": {
|
||||
"apiType": "GRPC",
|
||||
"transportApiVersion": "V3",
|
||||
"clusterNames": [
|
||||
"sds-cluster-1"
|
||||
]
|
||||
}
|
||||
}
|
||||
}
|
||||
]
|
||||
},
|
||||
"requireClientCertificate": false
|
||||
}
|
||||
}
|
||||
},
|
||||
{
|
||||
"filters": [
|
||||
{
|
||||
"name": "envoy.filters.network.http_connection_manager",
|
||||
"typedConfig": {
|
||||
"@type": "type.googleapis.com/envoy.extensions.filters.network.http_connection_manager.v3.HttpConnectionManager",
|
||||
"statPrefix": "ingress_upstream_8080",
|
||||
"rds": {
|
||||
"configSource": {
|
||||
"ads": {
|
||||
|
||||
},
|
||||
"resourceApiVersion": "V3"
|
||||
},
|
||||
"routeConfigName": "8080"
|
||||
},
|
||||
"httpFilters": [
|
||||
{
|
||||
"name": "envoy.filters.http.router"
|
||||
}
|
||||
],
|
||||
"tracing": {
|
||||
"randomSampling": {
|
||||
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
],
|
||||
"transportSocket": {
|
||||
"name": "tls",
|
||||
"typedConfig": {
|
||||
"@type": "type.googleapis.com/envoy.extensions.transport_sockets.tls.v3.DownstreamTlsContext",
|
||||
"commonTlsContext": {
|
||||
"tlsParams": {
|
||||
|
||||
},
|
||||
"tlsCertificateSdsSecretConfigs": [
|
||||
{
|
||||
"name": "*.example.com-cert",
|
||||
"sdsConfig": {
|
||||
"apiConfigSource": {
|
||||
"apiType": "GRPC",
|
||||
"transportApiVersion": "V3",
|
||||
"clusterNames": [
|
||||
"sds-cluster-2"
|
||||
]
|
||||
}
|
||||
}
|
||||
}
|
||||
]
|
||||
},
|
||||
"requireClientCertificate": false
|
||||
}
|
||||
}
|
||||
}
|
||||
],
|
||||
"listenerFilters": [
|
||||
{
|
||||
"name": "envoy.filters.listener.tls_inspector"
|
||||
}
|
||||
],
|
||||
"trafficDirection": "OUTBOUND"
|
||||
}
|
||||
],
|
||||
"typeUrl": "type.googleapis.com/envoy.config.listener.v3.Listener",
|
||||
"nonce": "00000001"
|
||||
}
|
|
@ -0,0 +1,144 @@
|
|||
{
|
||||
"versionInfo": "00000001",
|
||||
"resources": [
|
||||
{
|
||||
"@type": "type.googleapis.com/envoy.api.v2.Listener",
|
||||
"name": "http:1.2.3.4:8080",
|
||||
"address": {
|
||||
"socketAddress": {
|
||||
"address": "1.2.3.4",
|
||||
"portValue": 8080
|
||||
}
|
||||
},
|
||||
"filterChains": [
|
||||
{
|
||||
"filterChainMatch": {
|
||||
"serverNames": [
|
||||
"s1.example.com"
|
||||
]
|
||||
},
|
||||
"filters": [
|
||||
{
|
||||
"name": "envoy.filters.network.http_connection_manager",
|
||||
"typedConfig": {
|
||||
"@type": "type.googleapis.com/envoy.config.filter.network.http_connection_manager.v2.HttpConnectionManager",
|
||||
"statPrefix": "ingress_upstream_8080_s1",
|
||||
"rds": {
|
||||
"configSource": {
|
||||
"ads": {
|
||||
|
||||
},
|
||||
"resourceApiVersion": "V2"
|
||||
},
|
||||
"routeConfigName": "8080_s1"
|
||||
},
|
||||
"httpFilters": [
|
||||
{
|
||||
"name": "envoy.filters.http.router"
|
||||
}
|
||||
],
|
||||
"tracing": {
|
||||
"randomSampling": {
|
||||
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
],
|
||||
"transportSocket": {
|
||||
"name": "tls",
|
||||
"typedConfig": {
|
||||
"@type": "type.googleapis.com/envoy.api.v2.auth.DownstreamTlsContext",
|
||||
"commonTlsContext": {
|
||||
"tlsParams": {
|
||||
|
||||
},
|
||||
"tlsCertificateSdsSecretConfigs": [
|
||||
{
|
||||
"name": "s1.example.com-cert",
|
||||
"sdsConfig": {
|
||||
"apiConfigSource": {
|
||||
"apiType": "GRPC",
|
||||
"transportApiVersion": "V2",
|
||||
"clusterNames": [
|
||||
"sds-cluster-1"
|
||||
]
|
||||
},
|
||||
"resourceApiVersion": "V2"
|
||||
}
|
||||
}
|
||||
]
|
||||
},
|
||||
"requireClientCertificate": false
|
||||
}
|
||||
}
|
||||
},
|
||||
{
|
||||
"filters": [
|
||||
{
|
||||
"name": "envoy.filters.network.http_connection_manager",
|
||||
"typedConfig": {
|
||||
"@type": "type.googleapis.com/envoy.config.filter.network.http_connection_manager.v2.HttpConnectionManager",
|
||||
"statPrefix": "ingress_upstream_8080",
|
||||
"rds": {
|
||||
"configSource": {
|
||||
"ads": {
|
||||
|
||||
},
|
||||
"resourceApiVersion": "V2"
|
||||
},
|
||||
"routeConfigName": "8080"
|
||||
},
|
||||
"httpFilters": [
|
||||
{
|
||||
"name": "envoy.filters.http.router"
|
||||
}
|
||||
],
|
||||
"tracing": {
|
||||
"randomSampling": {
|
||||
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
],
|
||||
"transportSocket": {
|
||||
"name": "tls",
|
||||
"typedConfig": {
|
||||
"@type": "type.googleapis.com/envoy.api.v2.auth.DownstreamTlsContext",
|
||||
"commonTlsContext": {
|
||||
"tlsParams": {
|
||||
|
||||
},
|
||||
"tlsCertificateSdsSecretConfigs": [
|
||||
{
|
||||
"name": "*.example.com-cert",
|
||||
"sdsConfig": {
|
||||
"apiConfigSource": {
|
||||
"apiType": "GRPC",
|
||||
"transportApiVersion": "V2",
|
||||
"clusterNames": [
|
||||
"sds-cluster-2"
|
||||
]
|
||||
},
|
||||
"resourceApiVersion": "V2"
|
||||
}
|
||||
}
|
||||
]
|
||||
},
|
||||
"requireClientCertificate": false
|
||||
}
|
||||
}
|
||||
}
|
||||
],
|
||||
"listenerFilters": [
|
||||
{
|
||||
"name": "envoy.filters.listener.tls_inspector"
|
||||
}
|
||||
],
|
||||
"trafficDirection": "OUTBOUND"
|
||||
}
|
||||
],
|
||||
"typeUrl": "type.googleapis.com/envoy.api.v2.Listener",
|
||||
"nonce": "00000001"
|
||||
}
|
64
agent/xds/testdata/listeners/ingress-with-sds-listener-listener-level.envoy-1-18-x.golden
vendored
Normal file
|
@ -0,0 +1,64 @@
|
|||
{
|
||||
"versionInfo": "00000001",
|
||||
"resources": [
|
||||
{
|
||||
"@type": "type.googleapis.com/envoy.config.listener.v3.Listener",
|
||||
"name": "foo:1.2.3.4:8080",
|
||||
"address": {
|
||||
"socketAddress": {
|
||||
"address": "1.2.3.4",
|
||||
"portValue": 8080
|
||||
}
|
||||
},
|
||||
"filterChains": [
|
||||
{
|
||||
"filters": [
|
||||
{
|
||||
"name": "envoy.filters.network.tcp_proxy",
|
||||
"typedConfig": {
|
||||
"@type": "type.googleapis.com/envoy.extensions.filters.network.tcp_proxy.v3.TcpProxy",
|
||||
"statPrefix": "upstream.foo.default.default.dc1",
|
||||
"cluster": "foo.default.dc1.internal.11111111-2222-3333-4444-555555555555.consul"
|
||||
}
|
||||
}
|
||||
],
|
||||
"transportSocket": {
|
||||
"name": "tls",
|
||||
"typedConfig": {
|
||||
"@type": "type.googleapis.com/envoy.extensions.transport_sockets.tls.v3.DownstreamTlsContext",
|
||||
"commonTlsContext": {
|
||||
"tlsParams": {
|
||||
|
||||
},
|
||||
"tlsCertificateSdsSecretConfigs": [
|
||||
{
|
||||
"name": "listener-cert",
|
||||
"sdsConfig": {
|
||||
"apiConfigSource": {
|
||||
"apiType": "GRPC",
|
||||
"transportApiVersion": "V3",
|
||||
"grpcServices": [
|
||||
{
|
||||
"envoyGrpc": {
|
||||
"clusterName": "sds-cluster"
|
||||
},
|
||||
"timeout": "5s"
|
||||
}
|
||||
]
|
||||
},
|
||||
"resourceApiVersion": "V3"
|
||||
}
|
||||
}
|
||||
]
|
||||
},
|
||||
"requireClientCertificate": false
|
||||
}
|
||||
}
|
||||
}
|
||||
],
|
||||
"trafficDirection": "OUTBOUND"
|
||||
}
|
||||
],
|
||||
"typeUrl": "type.googleapis.com/envoy.config.listener.v3.Listener",
|
||||
"nonce": "00000001"
|
||||
}
|
|
@ -0,0 +1,64 @@
|
|||
{
|
||||
"versionInfo": "00000001",
|
||||
"resources": [
|
||||
{
|
||||
"@type": "type.googleapis.com/envoy.api.v2.Listener",
|
||||
"name": "foo:1.2.3.4:8080",
|
||||
"address": {
|
||||
"socketAddress": {
|
||||
"address": "1.2.3.4",
|
||||
"portValue": 8080
|
||||
}
|
||||
},
|
||||
"filterChains": [
|
||||
{
|
||||
"filters": [
|
||||
{
|
||||
"name": "envoy.filters.network.tcp_proxy",
|
||||
"typedConfig": {
|
||||
"@type": "type.googleapis.com/envoy.config.filter.network.tcp_proxy.v2.TcpProxy",
|
||||
"statPrefix": "upstream.foo.default.default.dc1",
|
||||
"cluster": "foo.default.dc1.internal.11111111-2222-3333-4444-555555555555.consul"
|
||||
}
|
||||
}
|
||||
],
|
||||
"transportSocket": {
|
||||
"name": "tls",
|
||||
"typedConfig": {
|
||||
"@type": "type.googleapis.com/envoy.api.v2.auth.DownstreamTlsContext",
|
||||
"commonTlsContext": {
|
||||
"tlsParams": {
|
||||
|
||||
},
|
||||
"tlsCertificateSdsSecretConfigs": [
|
||||
{
|
||||
"name": "listener-cert",
|
||||
"sdsConfig": {
|
||||
"apiConfigSource": {
|
||||
"apiType": "GRPC",
|
||||
"transportApiVersion": "V2",
|
||||
"grpcServices": [
|
||||
{
|
||||
"envoyGrpc": {
|
||||
"clusterName": "sds-cluster"
|
||||
},
|
||||
"timeout": "5s"
|
||||
}
|
||||
]
|
||||
},
|
||||
"resourceApiVersion": "V2"
|
||||
}
|
||||
}
|
||||
]
|
||||
},
|
||||
"requireClientCertificate": false
|
||||
}
|
||||
}
|
||||
}
|
||||
],
|
||||
"trafficDirection": "OUTBOUND"
|
||||
}
|
||||
],
|
||||
"typeUrl": "type.googleapis.com/envoy.api.v2.Listener",
|
||||
"nonce": "00000001"
|
||||
}
|
147
agent/xds/testdata/listeners/ingress-with-sds-listener-service-level.envoy-1-18-x.golden
vendored
Normal file
|
@ -0,0 +1,147 @@
|
|||
{
|
||||
"versionInfo": "00000001",
|
||||
"resources": [
|
||||
{
|
||||
"@type": "type.googleapis.com/envoy.config.listener.v3.Listener",
|
||||
"name": "http:1.2.3.4:8080",
|
||||
"address": {
|
||||
"socketAddress": {
|
||||
"address": "1.2.3.4",
|
||||
"portValue": 8080
|
||||
}
|
||||
},
|
||||
"filterChains": [
|
||||
{
|
||||
"filterChainMatch": {
|
||||
"serverNames": [
|
||||
"s1.example.com"
|
||||
]
|
||||
},
|
||||
"filters": [
|
||||
{
|
||||
"name": "envoy.filters.network.http_connection_manager",
|
||||
"typedConfig": {
|
||||
"@type": "type.googleapis.com/envoy.extensions.filters.network.http_connection_manager.v3.HttpConnectionManager",
|
||||
"statPrefix": "ingress_upstream_8080_s1",
|
||||
"rds": {
|
||||
"configSource": {
|
||||
"ads": {
|
||||
|
||||
},
|
||||
"resourceApiVersion": "V3"
|
||||
},
|
||||
"routeConfigName": "8080_s1"
|
||||
},
|
||||
"httpFilters": [
|
||||
{
|
||||
"name": "envoy.filters.http.router"
|
||||
}
|
||||
],
|
||||
"tracing": {
|
||||
"randomSampling": {
|
||||
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
],
|
||||
"transportSocket": {
|
||||
"name": "tls",
|
||||
"typedConfig": {
|
||||
"@type": "type.googleapis.com/envoy.extensions.transport_sockets.tls.v3.DownstreamTlsContext",
|
||||
"commonTlsContext": {
|
||||
"tlsParams": {
|
||||
|
||||
},
|
||||
"tlsCertificateSdsSecretConfigs": [
|
||||
{
|
||||
"name": "s1.example.com-cert",
|
||||
"sdsConfig": {
|
||||
"apiConfigSource": {
|
||||
"apiType": "GRPC",
|
||||
"transportApiVersion": "V3",
|
||||
"clusterNames": [
|
||||
"sds-cluster-1"
|
||||
]
|
||||
}
|
||||
}
|
||||
}
|
||||
]
|
||||
},
|
||||
"requireClientCertificate": false
|
||||
}
|
||||
}
|
||||
},
|
||||
{
|
||||
"filterChainMatch": {
|
||||
"serverNames": [
|
||||
"s2.example.com"
|
||||
]
|
||||
},
|
||||
"filters": [
|
||||
{
|
||||
"name": "envoy.filters.network.http_connection_manager",
|
||||
"typedConfig": {
|
||||
"@type": "type.googleapis.com/envoy.extensions.filters.network.http_connection_manager.v3.HttpConnectionManager",
|
||||
"statPrefix": "ingress_upstream_8080_s2",
|
||||
"rds": {
|
||||
"configSource": {
|
||||
"ads": {
|
||||
|
||||
},
|
||||
"resourceApiVersion": "V3"
|
||||
},
|
||||
"routeConfigName": "8080_s2"
|
||||
},
|
||||
"httpFilters": [
|
||||
{
|
||||
"name": "envoy.filters.http.router"
|
||||
}
|
||||
],
|
||||
"tracing": {
|
||||
"randomSampling": {
|
||||
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
],
|
||||
"transportSocket": {
|
||||
"name": "tls",
|
||||
"typedConfig": {
|
||||
"@type": "type.googleapis.com/envoy.extensions.transport_sockets.tls.v3.DownstreamTlsContext",
|
||||
"commonTlsContext": {
|
||||
"tlsParams": {
|
||||
|
||||
},
|
||||
"tlsCertificateSdsSecretConfigs": [
|
||||
{
|
||||
"name": "s2.example.com-cert",
|
||||
"sdsConfig": {
|
||||
"apiConfigSource": {
|
||||
"apiType": "GRPC",
|
||||
"transportApiVersion": "V3",
|
||||
"clusterNames": [
|
||||
"sds-cluster-2"
|
||||
]
|
||||
}
|
||||
}
|
||||
}
|
||||
]
|
||||
},
|
||||
"requireClientCertificate": false
|
||||
}
|
||||
}
|
||||
}
|
||||
],
|
||||
"listenerFilters": [
|
||||
{
|
||||
"name": "envoy.filters.listener.tls_inspector"
|
||||
}
|
||||
],
|
||||
"trafficDirection": "OUTBOUND"
|
||||
}
|
||||
],
|
||||
"typeUrl": "type.googleapis.com/envoy.config.listener.v3.Listener",
|
||||
"nonce": "00000001"
|
||||
}
|
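In the service-level variants, each ingress service carries its own SDS certificate, which is what produces one filter chain per SNI (`filterChainMatch.serverNames`) with its own `tlsCertificateSdsSecretConfigs` entry. A hedged sketch of that configuration shape, reusing the `structs.GatewayServiceTLSConfig` field shown in the helper earlier in this diff; service names and cert/cluster values here just mirror this golden:

```go
// Sketch only: per-service SDS certs on one ingress listener; each service
// ends up with its own SNI-matched filter chain and SDS secret reference.
il := structs.IngressListener{
	Port: 8080,
	Services: []structs.IngressService{
		{
			Name:  "s1",
			Hosts: []string{"s1.example.com"},
			TLS: &structs.GatewayServiceTLSConfig{
				SDS: &structs.GatewayTLSSDSConfig{
					ClusterName:  "sds-cluster-1",
					CertResource: "s1.example.com-cert",
				},
			},
		},
		{
			Name:  "s2",
			Hosts: []string{"s2.example.com"},
			TLS: &structs.GatewayServiceTLSConfig{
				SDS: &structs.GatewayTLSSDSConfig{
					ClusterName:  "sds-cluster-2",
					CertResource: "s2.example.com-cert",
				},
			},
		},
	},
}
```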
149
agent/xds/testdata/listeners/ingress-with-sds-listener-service-level.v2compat.envoy-1-16-x.golden
vendored
Normal file
|
@ -0,0 +1,149 @@
|
|||
{
|
||||
"versionInfo": "00000001",
|
||||
"resources": [
|
||||
{
|
||||
"@type": "type.googleapis.com/envoy.api.v2.Listener",
|
||||
"name": "http:1.2.3.4:8080",
|
||||
"address": {
|
||||
"socketAddress": {
|
||||
"address": "1.2.3.4",
|
||||
"portValue": 8080
|
||||
}
|
||||
},
|
||||
"filterChains": [
|
||||
{
|
||||
"filterChainMatch": {
|
||||
"serverNames": [
|
||||
"s1.example.com"
|
||||
]
|
||||
},
|
||||
"filters": [
|
||||
{
|
||||
"name": "envoy.filters.network.http_connection_manager",
|
||||
"typedConfig": {
|
||||
"@type": "type.googleapis.com/envoy.config.filter.network.http_connection_manager.v2.HttpConnectionManager",
|
||||
"statPrefix": "ingress_upstream_8080_s1",
|
||||
"rds": {
|
||||
"configSource": {
|
||||
"ads": {
|
||||
|
||||
},
|
||||
"resourceApiVersion": "V2"
|
||||
},
|
||||
"routeConfigName": "8080_s1"
|
||||
},
|
||||
"httpFilters": [
|
||||
{
|
||||
"name": "envoy.filters.http.router"
|
||||
}
|
||||
],
|
||||
"tracing": {
|
||||
"randomSampling": {
|
||||
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
],
|
||||
"transportSocket": {
|
||||
"name": "tls",
|
||||
"typedConfig": {
|
||||
"@type": "type.googleapis.com/envoy.api.v2.auth.DownstreamTlsContext",
|
||||
"commonTlsContext": {
|
||||
"tlsParams": {
|
||||
|
||||
},
|
||||
"tlsCertificateSdsSecretConfigs": [
|
||||
{
|
||||
"name": "s1.example.com-cert",
|
||||
"sdsConfig": {
|
||||
"apiConfigSource": {
|
||||
"apiType": "GRPC",
|
||||
"transportApiVersion": "V2",
|
||||
"clusterNames": [
|
||||
"sds-cluster-1"
|
||||
]
|
||||
},
|
||||
"resourceApiVersion": "V2"
|
||||
}
|
||||
}
|
||||
]
|
||||
},
|
||||
"requireClientCertificate": false
|
||||
}
|
||||
}
|
||||
},
|
||||
{
|
||||
"filterChainMatch": {
|
||||
"serverNames": [
|
||||
"s2.example.com"
|
||||
]
|
||||
},
|
||||
"filters": [
|
||||
{
|
||||
"name": "envoy.filters.network.http_connection_manager",
|
||||
"typedConfig": {
|
||||
"@type": "type.googleapis.com/envoy.config.filter.network.http_connection_manager.v2.HttpConnectionManager",
|
||||
"statPrefix": "ingress_upstream_8080_s2",
|
||||
"rds": {
|
||||
"configSource": {
|
||||
"ads": {
|
||||
|
||||
},
|
||||
"resourceApiVersion": "V2"
|
||||
},
|
||||
"routeConfigName": "8080_s2"
|
||||
},
|
||||
"httpFilters": [
|
||||
{
|
||||
"name": "envoy.filters.http.router"
|
||||
}
|
||||
],
|
||||
"tracing": {
|
||||
"randomSampling": {
|
||||
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
],
|
||||
"transportSocket": {
|
||||
"name": "tls",
|
||||
"typedConfig": {
|
||||
"@type": "type.googleapis.com/envoy.api.v2.auth.DownstreamTlsContext",
|
||||
"commonTlsContext": {
|
||||
"tlsParams": {
|
||||
|
||||
},
|
||||
"tlsCertificateSdsSecretConfigs": [
|
||||
{
|
||||
"name": "s2.example.com-cert",
|
||||
"sdsConfig": {
|
||||
"apiConfigSource": {
|
||||
"apiType": "GRPC",
|
||||
"transportApiVersion": "V2",
|
||||
"clusterNames": [
|
||||
"sds-cluster-2"
|
||||
]
|
||||
},
|
||||
"resourceApiVersion": "V2"
|
||||
}
|
||||
}
|
||||
]
|
||||
},
|
||||
"requireClientCertificate": false
|
||||
}
|
||||
}
|
||||
}
|
||||
],
|
||||
"listenerFilters": [
|
||||
{
|
||||
"name": "envoy.filters.listener.tls_inspector"
|
||||
}
|
||||
],
|
||||
"trafficDirection": "OUTBOUND"
|
||||
}
|
||||
],
|
||||
"typeUrl": "type.googleapis.com/envoy.api.v2.Listener",
|
||||
"nonce": "00000001"
|
||||
}
|
|
@ -0,0 +1,58 @@
|
|||
{
|
||||
"versionInfo": "00000001",
|
||||
"resources": [
|
||||
{
|
||||
"@type": "type.googleapis.com/envoy.config.listener.v3.Listener",
|
||||
"name": "db:1.2.3.4:9191",
|
||||
"address": {
|
||||
"socketAddress": {
|
||||
"address": "1.2.3.4",
|
||||
"portValue": 9191
|
||||
}
|
||||
},
|
||||
"filterChains": [
|
||||
{
|
||||
"filters": [
|
||||
{
|
||||
"name": "envoy.filters.network.tcp_proxy",
|
||||
"typedConfig": {
|
||||
"@type": "type.googleapis.com/envoy.extensions.filters.network.tcp_proxy.v3.TcpProxy",
|
||||
"statPrefix": "upstream.db.default.dc1",
|
||||
"cluster": "db.default.dc1.internal.11111111-2222-3333-4444-555555555555.consul"
|
||||
}
|
||||
}
|
||||
],
|
||||
"transportSocket": {
|
||||
"name": "tls",
|
||||
"typedConfig": {
|
||||
"@type": "type.googleapis.com/envoy.extensions.transport_sockets.tls.v3.DownstreamTlsContext",
|
||||
"commonTlsContext": {
|
||||
"tlsParams": {
|
||||
|
||||
},
|
||||
"tlsCertificateSdsSecretConfigs": [
|
||||
{
|
||||
"name": "cert-resource",
|
||||
"sdsConfig": {
|
||||
"apiConfigSource": {
|
||||
"apiType": "GRPC",
|
||||
"transportApiVersion": "V3",
|
||||
"clusterNames": [
|
||||
"sds-cluster"
|
||||
]
|
||||
}
|
||||
}
|
||||
}
|
||||
]
|
||||
},
|
||||
"requireClientCertificate": false
|
||||
}
|
||||
}
|
||||
}
|
||||
],
|
||||
"trafficDirection": "OUTBOUND"
|
||||
}
|
||||
],
|
||||
"typeUrl": "type.googleapis.com/envoy.config.listener.v3.Listener",
|
||||
"nonce": "00000001"
|
||||
}
|
58
agent/xds/testdata/listeners/ingress-with-sds-listener.v2compat.envoy-1-16-x.golden
vendored
Normal file
|
@ -0,0 +1,58 @@
|
|||
{
|
||||
"versionInfo": "00000001",
|
||||
"resources": [
|
||||
{
|
||||
"@type": "type.googleapis.com/envoy.api.v2.Listener",
|
||||
"name": "db:1.2.3.4:9191",
|
||||
"address": {
|
||||
"socketAddress": {
|
||||
"address": "1.2.3.4",
|
||||
"portValue": 9191
|
||||
}
|
||||
},
|
||||
"filterChains": [
|
||||
{
|
||||
"filters": [
|
||||
{
|
||||
"name": "envoy.filters.network.tcp_proxy",
|
||||
"typedConfig": {
|
||||
"@type": "type.googleapis.com/envoy.config.filter.network.tcp_proxy.v2.TcpProxy",
|
||||
"statPrefix": "upstream.db.default.dc1",
|
||||
"cluster": "db.default.dc1.internal.11111111-2222-3333-4444-555555555555.consul"
|
||||
}
|
||||
}
|
||||
],
|
||||
"transportSocket": {
|
||||
"name": "tls",
|
||||
"typedConfig": {
|
||||
"@type": "type.googleapis.com/envoy.api.v2.auth.DownstreamTlsContext",
|
||||
"commonTlsContext": {
|
||||
"tlsParams": {
|
||||
|
||||
},
|
||||
"tlsCertificateSdsSecretConfigs": [
|
||||
{
|
||||
"name": "cert-resource",
|
||||
"sdsConfig": {
|
||||
"apiConfigSource": {
|
||||
"apiType": "GRPC",
|
||||
"transportApiVersion": "V3",
|
||||
"clusterNames": [
|
||||
"sds-cluster"
|
||||
]
|
||||
}
|
||||
}
|
||||
}
|
||||
]
|
||||
},
|
||||
"requireClientCertificate": false
|
||||
}
|
||||
}
|
||||
}
|
||||
],
|
||||
"trafficDirection": "OUTBOUND"
|
||||
}
|
||||
],
|
||||
"typeUrl": "type.googleapis.com/envoy.api.v2.Listener",
|
||||
"nonce": "00000001"
|
||||
}
|
122
agent/xds/testdata/listeners/ingress-with-sds-service-level-mixed-no-tls.envoy-1-18-x.golden
vendored
Normal file
|
@ -0,0 +1,122 @@
|
|||
{
|
||||
"versionInfo": "00000001",
|
||||
"resources": [
|
||||
{
|
||||
"@type": "type.googleapis.com/envoy.config.listener.v3.Listener",
|
||||
"name": "http:1.2.3.4:8080",
|
||||
"address": {
|
||||
"socketAddress": {
|
||||
"address": "1.2.3.4",
|
||||
"portValue": 8080
|
||||
}
|
||||
},
|
||||
"filterChains": [
|
||||
{
|
||||
"filterChainMatch": {
|
||||
"serverNames": [
|
||||
"s1.example.com"
|
||||
]
|
||||
},
|
||||
"filters": [
|
||||
{
|
||||
"name": "envoy.filters.network.http_connection_manager",
|
||||
"typedConfig": {
|
||||
"@type": "type.googleapis.com/envoy.extensions.filters.network.http_connection_manager.v3.HttpConnectionManager",
|
||||
"statPrefix": "ingress_upstream_8080_s1",
|
||||
"rds": {
|
||||
"configSource": {
|
||||
"ads": {
|
||||
|
||||
},
|
||||
"resourceApiVersion": "V3"
|
||||
},
|
||||
"routeConfigName": "8080_s1"
|
||||
},
|
||||
"httpFilters": [
|
||||
{
|
||||
"name": "envoy.filters.http.router"
|
||||
}
|
||||
],
|
||||
"tracing": {
|
||||
"randomSampling": {
|
||||
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
],
|
||||
"transportSocket": {
|
||||
"name": "tls",
|
||||
"typedConfig": {
|
||||
"@type": "type.googleapis.com/envoy.extensions.transport_sockets.tls.v3.DownstreamTlsContext",
|
||||
"commonTlsContext": {
|
||||
"tlsParams": {
|
||||
|
||||
},
|
||||
"tlsCertificateSdsSecretConfigs": [
|
||||
{
|
||||
"name": "s1.example.com-cert",
|
||||
"sdsConfig": {
|
||||
"apiConfigSource": {
|
||||
"apiType": "GRPC",
|
||||
"transportApiVersion": "V3",
|
||||
"grpcServices": [
|
||||
{
|
||||
"envoyGrpc": {
|
||||
"clusterName": "sds-cluster-1"
|
||||
},
|
||||
"timeout": "5s"
|
||||
}
|
||||
]
|
||||
},
|
||||
"resourceApiVersion": "V3"
|
||||
}
|
||||
}
|
||||
]
|
||||
},
|
||||
"requireClientCertificate": false
|
||||
}
|
||||
}
|
||||
},
|
||||
{
|
||||
"filters": [
|
||||
{
|
||||
"name": "envoy.filters.network.http_connection_manager",
|
||||
"typedConfig": {
|
||||
"@type": "type.googleapis.com/envoy.extensions.filters.network.http_connection_manager.v3.HttpConnectionManager",
|
||||
"statPrefix": "ingress_upstream_8080",
|
||||
"rds": {
|
||||
"configSource": {
|
||||
"ads": {
|
||||
|
||||
},
|
||||
"resourceApiVersion": "V3"
|
||||
},
|
||||
"routeConfigName": "8080"
|
||||
},
|
||||
"httpFilters": [
|
||||
{
|
||||
"name": "envoy.filters.http.router"
|
||||
}
|
||||
],
|
||||
"tracing": {
|
||||
"randomSampling": {
|
||||
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
]
|
||||
}
|
||||
],
|
||||
"listenerFilters": [
|
||||
{
|
||||
"name": "envoy.filters.listener.tls_inspector"
|
||||
}
|
||||
],
|
||||
"trafficDirection": "OUTBOUND"
|
||||
}
|
||||
],
|
||||
"typeUrl": "type.googleapis.com/envoy.config.listener.v3.Listener",
|
||||
"nonce": "00000001"
|
||||
}
|
|
@ -0,0 +1,122 @@
|
|||
{
|
||||
"versionInfo": "00000001",
|
||||
"resources": [
|
||||
{
|
||||
"@type": "type.googleapis.com/envoy.api.v2.Listener",
|
||||
"name": "http:1.2.3.4:8080",
|
||||
"address": {
|
||||
"socketAddress": {
|
||||
"address": "1.2.3.4",
|
||||
"portValue": 8080
|
||||
}
|
||||
},
|
||||
"filterChains": [
|
||||
{
|
||||
"filterChainMatch": {
|
||||
"serverNames": [
|
||||
"s1.example.com"
|
||||
]
|
||||
},
|
||||
"filters": [
|
||||
{
|
||||
"name": "envoy.filters.network.http_connection_manager",
|
||||
"typedConfig": {
|
||||
"@type": "type.googleapis.com/envoy.config.filter.network.http_connection_manager.v2.HttpConnectionManager",
|
||||
"statPrefix": "ingress_upstream_8080_s1",
|
||||
"rds": {
|
||||
"configSource": {
|
||||
"ads": {
|
||||
|
||||
},
|
||||
"resourceApiVersion": "V2"
|
||||
},
|
||||
"routeConfigName": "8080_s1"
|
||||
},
|
||||
"httpFilters": [
|
||||
{
|
||||
"name": "envoy.filters.http.router"
|
||||
}
|
||||
],
|
||||
"tracing": {
|
||||
"randomSampling": {
|
||||
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
],
|
||||
"transportSocket": {
|
||||
"name": "tls",
|
||||
"typedConfig": {
|
||||
"@type": "type.googleapis.com/envoy.api.v2.auth.DownstreamTlsContext",
|
||||
"commonTlsContext": {
|
||||
"tlsParams": {
|
||||
|
||||
},
|
||||
"tlsCertificateSdsSecretConfigs": [
|
||||
{
|
||||
"name": "s1.example.com-cert",
|
||||
"sdsConfig": {
|
||||
"apiConfigSource": {
|
||||
"apiType": "GRPC",
|
||||
"transportApiVersion": "V2",
|
||||
"grpcServices": [
|
||||
{
|
||||
"envoyGrpc": {
|
||||
"clusterName": "sds-cluster-1"
|
||||
},
|
||||
"timeout": "5s"
|
||||
}
|
||||
]
|
||||
},
|
||||
"resourceApiVersion": "V2"
|
||||
}
|
||||
}
|
||||
]
|
||||
},
|
||||
"requireClientCertificate": false
|
||||
}
|
||||
}
|
||||
},
|
||||
{
|
||||
"filters": [
|
||||
{
|
||||
"name": "envoy.filters.network.http_connection_manager",
|
||||
"typedConfig": {
|
||||
"@type": "type.googleapis.com/envoy.config.filter.network.http_connection_manager.v2.HttpConnectionManager",
|
||||
"statPrefix": "ingress_upstream_8080",
|
||||
"rds": {
|
||||
"configSource": {
|
||||
"ads": {
|
||||
|
||||
},
|
||||
"resourceApiVersion": "V2"
|
||||
},
|
||||
"routeConfigName": "8080"
|
||||
},
|
||||
"httpFilters": [
|
||||
{
|
||||
"name": "envoy.filters.http.router"
|
||||
}
|
||||
],
|
||||
"tracing": {
|
||||
"randomSampling": {
|
||||
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
]
|
||||
}
|
||||
],
|
||||
"listenerFilters": [
|
||||
{
|
||||
"name": "envoy.filters.listener.tls_inspector"
|
||||
}
|
||||
],
|
||||
"trafficDirection": "OUTBOUND"
|
||||
}
|
||||
],
|
||||
"typeUrl": "type.googleapis.com/envoy.api.v2.Listener",
|
||||
"nonce": "00000001"
|
||||
}
|
159
agent/xds/testdata/listeners/ingress-with-sds-service-level.envoy-1-18-x.golden
vendored
Normal file
|
@ -0,0 +1,159 @@
|
|||
{
|
||||
"versionInfo": "00000001",
|
||||
"resources": [
|
||||
{
|
||||
"@type": "type.googleapis.com/envoy.config.listener.v3.Listener",
|
||||
"name": "http:1.2.3.4:8080",
|
||||
"address": {
|
||||
"socketAddress": {
|
||||
"address": "1.2.3.4",
|
||||
"portValue": 8080
|
||||
}
|
||||
},
|
||||
"filterChains": [
|
||||
{
|
||||
"filterChainMatch": {
|
||||
"serverNames": [
|
||||
"s1.example.com"
|
||||
]
|
||||
},
|
||||
"filters": [
|
||||
{
|
||||
"name": "envoy.filters.network.http_connection_manager",
|
||||
"typedConfig": {
|
||||
"@type": "type.googleapis.com/envoy.extensions.filters.network.http_connection_manager.v3.HttpConnectionManager",
|
||||
"statPrefix": "ingress_upstream_8080_s1",
|
||||
"rds": {
|
||||
"configSource": {
|
||||
"ads": {
|
||||
|
||||
},
|
||||
"resourceApiVersion": "V3"
|
||||
},
|
||||
"routeConfigName": "8080_s1"
|
||||
},
|
||||
"httpFilters": [
|
||||
{
|
||||
"name": "envoy.filters.http.router"
|
||||
}
|
||||
],
|
||||
"tracing": {
|
||||
"randomSampling": {
|
||||
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
],
|
||||
"transportSocket": {
|
||||
"name": "tls",
|
||||
"typedConfig": {
|
||||
"@type": "type.googleapis.com/envoy.extensions.transport_sockets.tls.v3.DownstreamTlsContext",
|
||||
"commonTlsContext": {
|
||||
"tlsParams": {
|
||||
|
||||
},
|
||||
"tlsCertificateSdsSecretConfigs": [
|
||||
{
|
||||
"name": "s1.example.com-cert",
|
||||
"sdsConfig": {
|
||||
"apiConfigSource": {
|
||||
"apiType": "GRPC",
|
||||
"transportApiVersion": "V3",
|
||||
"grpcServices": [
|
||||
{
|
||||
"envoyGrpc": {
|
||||
"clusterName": "sds-cluster-1"
|
||||
},
|
||||
"timeout": "5s"
|
||||
}
|
||||
]
|
||||
},
|
||||
"resourceApiVersion": "V3"
|
||||
}
|
||||
}
|
||||
]
|
||||
},
|
||||
"requireClientCertificate": false
|
||||
}
|
||||
}
|
||||
},
|
||||
{
|
||||
"filterChainMatch": {
|
||||
"serverNames": [
|
||||
"s2.example.com"
|
||||
]
|
||||
},
|
||||
"filters": [
|
||||
{
|
||||
"name": "envoy.filters.network.http_connection_manager",
|
||||
"typedConfig": {
|
||||
"@type": "type.googleapis.com/envoy.extensions.filters.network.http_connection_manager.v3.HttpConnectionManager",
|
||||
"statPrefix": "ingress_upstream_8080_s2",
|
||||
"rds": {
|
||||
"configSource": {
|
||||
"ads": {
|
||||
|
||||
},
|
||||
"resourceApiVersion": "V3"
|
||||
},
|
||||
"routeConfigName": "8080_s2"
|
||||
},
|
||||
"httpFilters": [
|
||||
{
|
||||
"name": "envoy.filters.http.router"
|
||||
}
|
||||
],
|
||||
"tracing": {
|
||||
"randomSampling": {
|
||||
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
],
|
||||
"transportSocket": {
|
||||
"name": "tls",
|
||||
"typedConfig": {
|
||||
"@type": "type.googleapis.com/envoy.extensions.transport_sockets.tls.v3.DownstreamTlsContext",
|
||||
"commonTlsContext": {
|
||||
"tlsParams": {
|
||||
|
||||
},
|
||||
"tlsCertificateSdsSecretConfigs": [
|
||||
{
|
||||
"name": "s2.example.com-cert",
|
||||
"sdsConfig": {
|
||||
"apiConfigSource": {
|
||||
"apiType": "GRPC",
|
||||
"transportApiVersion": "V3",
|
||||
"grpcServices": [
|
||||
{
|
||||
"envoyGrpc": {
|
||||
"clusterName": "sds-cluster-2"
|
||||
},
|
||||
"timeout": "5s"
|
||||
}
|
||||
]
|
||||
},
|
||||
"resourceApiVersion": "V3"
|
||||
}
|
||||
}
|
||||
]
|
||||
},
|
||||
"requireClientCertificate": false
|
||||
}
|
||||
}
|
||||
}
|
||||
],
|
||||
"listenerFilters": [
|
||||
{
|
||||
"name": "envoy.filters.listener.tls_inspector"
|
||||
}
|
||||
],
|
||||
"trafficDirection": "OUTBOUND"
|
||||
}
|
||||
],
|
||||
"typeUrl": "type.googleapis.com/envoy.config.listener.v3.Listener",
|
||||
"nonce": "00000001"
|
||||
}
|
159
agent/xds/testdata/listeners/ingress-with-sds-service-level.v2compat.envoy-1-16-x.golden
vendored
Normal file
|
@ -0,0 +1,159 @@
|
|||
{
|
||||
"versionInfo": "00000001",
|
||||
"resources": [
|
||||
{
|
||||
"@type": "type.googleapis.com/envoy.api.v2.Listener",
|
||||
"name": "http:1.2.3.4:8080",
|
||||
"address": {
|
||||
"socketAddress": {
|
||||
"address": "1.2.3.4",
|
||||
"portValue": 8080
|
||||
}
|
||||
},
|
||||
"filterChains": [
|
||||
{
|
||||
"filterChainMatch": {
|
||||
"serverNames": [
|
||||
"s1.example.com"
|
||||
]
|
||||
},
|
||||
"filters": [
|
||||
{
|
||||
"name": "envoy.filters.network.http_connection_manager",
|
||||
"typedConfig": {
|
||||
"@type": "type.googleapis.com/envoy.config.filter.network.http_connection_manager.v2.HttpConnectionManager",
|
||||
"statPrefix": "ingress_upstream_8080_s1",
|
||||
"rds": {
|
||||
"configSource": {
|
||||
"ads": {
|
||||
|
||||
},
|
||||
"resourceApiVersion": "V2"
|
||||
},
|
||||
"routeConfigName": "8080_s1"
|
||||
},
|
||||
"httpFilters": [
|
||||
{
|
||||
"name": "envoy.filters.http.router"
|
||||
}
|
||||
],
|
||||
"tracing": {
|
||||
"randomSampling": {
|
||||
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
],
|
||||
"transportSocket": {
|
||||
"name": "tls",
|
||||
"typedConfig": {
|
||||
"@type": "type.googleapis.com/envoy.api.v2.auth.DownstreamTlsContext",
|
||||
"commonTlsContext": {
|
||||
"tlsParams": {
|
||||
|
||||
},
|
||||
"tlsCertificateSdsSecretConfigs": [
|
||||
{
|
||||
"name": "s1.example.com-cert",
|
||||
"sdsConfig": {
|
||||
"apiConfigSource": {
|
||||
"apiType": "GRPC",
|
||||
"transportApiVersion": "V2",
|
||||
"grpcServices": [
|
||||
{
|
||||
"envoyGrpc": {
|
||||
"clusterName": "sds-cluster-1"
|
||||
},
|
||||
"timeout": "5s"
|
||||
}
|
||||
]
|
||||
},
|
||||
"resourceApiVersion": "V2"
|
||||
}
|
||||
}
|
||||
]
|
||||
},
|
||||
"requireClientCertificate": false
|
||||
}
|
||||
}
|
||||
},
|
||||
{
|
||||
"filterChainMatch": {
|
||||
"serverNames": [
|
||||
"s2.example.com"
|
||||
]
|
||||
},
|
||||
"filters": [
|
||||
{
|
||||
"name": "envoy.filters.network.http_connection_manager",
|
||||
"typedConfig": {
|
||||
"@type": "type.googleapis.com/envoy.config.filter.network.http_connection_manager.v2.HttpConnectionManager",
|
||||
"statPrefix": "ingress_upstream_8080_s2",
|
||||
"rds": {
|
||||
"configSource": {
|
||||
"ads": {
|
||||
|
||||
},
|
||||
"resourceApiVersion": "V2"
|
||||
},
|
||||
"routeConfigName": "8080_s2"
|
||||
},
|
||||
"httpFilters": [
|
||||
{
|
||||
"name": "envoy.filters.http.router"
|
||||
}
|
||||
],
|
||||
"tracing": {
|
||||
"randomSampling": {
|
||||
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
],
|
||||
"transportSocket": {
|
||||
"name": "tls",
|
||||
"typedConfig": {
|
||||
"@type": "type.googleapis.com/envoy.api.v2.auth.DownstreamTlsContext",
|
||||
"commonTlsContext": {
|
||||
"tlsParams": {
|
||||
|
||||
},
|
||||
"tlsCertificateSdsSecretConfigs": [
|
||||
{
|
||||
"name": "s2.example.com-cert",
|
||||
"sdsConfig": {
|
||||
"apiConfigSource": {
|
||||
"apiType": "GRPC",
|
||||
"transportApiVersion": "V2",
|
||||
"grpcServices": [
|
||||
{
|
||||
"envoyGrpc": {
|
||||
"clusterName": "sds-cluster-2"
|
||||
},
|
||||
"timeout": "5s"
|
||||
}
|
||||
]
|
||||
},
|
||||
"resourceApiVersion": "V2"
|
||||
}
|
||||
}
|
||||
]
|
||||
},
|
||||
"requireClientCertificate": false
|
||||
}
|
||||
}
|
||||
}
|
||||
],
|
||||
"listenerFilters": [
|
||||
{
|
||||
"name": "envoy.filters.listener.tls_inspector"
|
||||
}
|
||||
],
|
||||
"trafficDirection": "OUTBOUND"
|
||||
}
|
||||
],
|
||||
"typeUrl": "type.googleapis.com/envoy.api.v2.Listener",
|
||||
"nonce": "00000001"
|
||||
}
|
48 agent/xds/testdata/routes/ingress-with-sds-listener-level-wildcard.envoy-1-18-x.golden vendored Normal file
@@ -0,0 +1,48 @@
{
  "versionInfo": "00000001",
  "resources": [
    {
      "@type": "type.googleapis.com/envoy.config.route.v3.RouteConfiguration",
      "name": "9191",
      "virtualHosts": [
        {
          "name": "web",
          "domains": [
            "web.ingress.*",
            "web.ingress.*:9191"
          ],
          "routes": [
            {
              "match": {
                "prefix": "/"
              },
              "route": {
                "cluster": "web.default.dc1.internal.11111111-2222-3333-4444-555555555555.consul"
              }
            }
          ]
        },
        {
          "name": "foo",
          "domains": [
            "foo.ingress.*",
            "foo.ingress.*:9191"
          ],
          "routes": [
            {
              "match": {
                "prefix": "/"
              },
              "route": {
                "cluster": "foo.default.dc1.internal.11111111-2222-3333-4444-555555555555.consul"
              }
            }
          ]
        }
      ],
      "validateClusters": true
    }
  ],
  "typeUrl": "type.googleapis.com/envoy.config.route.v3.RouteConfiguration",
  "nonce": "00000001"
}
48 agent/xds/testdata/routes/ingress-with-sds-listener-level-wildcard.v2compat.envoy-1-16-x.golden vendored Normal file
@@ -0,0 +1,48 @@
{
  "versionInfo": "00000001",
  "resources": [
    {
      "@type": "type.googleapis.com/envoy.api.v2.RouteConfiguration",
      "name": "9191",
      "virtualHosts": [
        {
          "name": "web",
          "domains": [
            "web.ingress.*",
            "web.ingress.*:9191"
          ],
          "routes": [
            {
              "match": {
                "prefix": "/"
              },
              "route": {
                "cluster": "web.default.dc1.internal.11111111-2222-3333-4444-555555555555.consul"
              }
            }
          ]
        },
        {
          "name": "foo",
          "domains": [
            "foo.ingress.*",
            "foo.ingress.*:9191"
          ],
          "routes": [
            {
              "match": {
                "prefix": "/"
              },
              "route": {
                "cluster": "foo.default.dc1.internal.11111111-2222-3333-4444-555555555555.consul"
              }
            }
          ]
        }
      ],
      "validateClusters": true
    }
  ],
  "typeUrl": "type.googleapis.com/envoy.api.v2.RouteConfiguration",
  "nonce": "00000001"
}
48 agent/xds/testdata/routes/ingress-with-sds-listener-level.envoy-1-18-x.golden vendored Normal file
@@ -0,0 +1,48 @@
{
  "versionInfo": "00000001",
  "resources": [
    {
      "@type": "type.googleapis.com/envoy.config.route.v3.RouteConfiguration",
      "name": "9191",
      "virtualHosts": [
        {
          "name": "web",
          "domains": [
            "www.example.com",
            "www.example.com:9191"
          ],
          "routes": [
            {
              "match": {
                "prefix": "/"
              },
              "route": {
                "cluster": "web.default.dc1.internal.11111111-2222-3333-4444-555555555555.consul"
              }
            }
          ]
        },
        {
          "name": "foo",
          "domains": [
            "foo.example.com",
            "foo.example.com:9191"
          ],
          "routes": [
            {
              "match": {
                "prefix": "/"
              },
              "route": {
                "cluster": "foo.default.dc1.internal.11111111-2222-3333-4444-555555555555.consul"
              }
            }
          ]
        }
      ],
      "validateClusters": true
    }
  ],
  "typeUrl": "type.googleapis.com/envoy.config.route.v3.RouteConfiguration",
  "nonce": "00000001"
}
48 agent/xds/testdata/routes/ingress-with-sds-listener-level.v2compat.envoy-1-16-x.golden vendored Normal file
@@ -0,0 +1,48 @@
{
  "versionInfo": "00000001",
  "resources": [
    {
      "@type": "type.googleapis.com/envoy.api.v2.RouteConfiguration",
      "name": "9191",
      "virtualHosts": [
        {
          "name": "web",
          "domains": [
            "www.example.com",
            "www.example.com:9191"
          ],
          "routes": [
            {
              "match": {
                "prefix": "/"
              },
              "route": {
                "cluster": "web.default.dc1.internal.11111111-2222-3333-4444-555555555555.consul"
              }
            }
          ]
        },
        {
          "name": "foo",
          "domains": [
            "foo.example.com",
            "foo.example.com:9191"
          ],
          "routes": [
            {
              "match": {
                "prefix": "/"
              },
              "route": {
                "cluster": "foo.default.dc1.internal.11111111-2222-3333-4444-555555555555.consul"
              }
            }
          ]
        }
      ],
      "validateClusters": true
    }
  ],
  "typeUrl": "type.googleapis.com/envoy.api.v2.RouteConfiguration",
  "nonce": "00000001"
}
55 agent/xds/testdata/routes/ingress-with-sds-service-level-mixed-tls.envoy-1-18-x.golden vendored Normal file
@@ -0,0 +1,55 @@
{
  "versionInfo": "00000001",
  "resources": [
    {
      "@type": "type.googleapis.com/envoy.config.route.v3.RouteConfiguration",
      "name": "9191",
      "virtualHosts": [
        {
          "name": "foo",
          "domains": [
            "foo.example.com",
            "foo.example.com:9191"
          ],
          "routes": [
            {
              "match": {
                "prefix": "/"
              },
              "route": {
                "cluster": "foo.default.dc1.internal.11111111-2222-3333-4444-555555555555.consul"
              }
            }
          ]
        }
      ],
      "validateClusters": true
    },
    {
      "@type": "type.googleapis.com/envoy.config.route.v3.RouteConfiguration",
      "name": "9191_web",
      "virtualHosts": [
        {
          "name": "web",
          "domains": [
            "www.example.com",
            "www.example.com:9191"
          ],
          "routes": [
            {
              "match": {
                "prefix": "/"
              },
              "route": {
                "cluster": "web.default.dc1.internal.11111111-2222-3333-4444-555555555555.consul"
              }
            }
          ]
        }
      ],
      "validateClusters": true
    }
  ],
  "typeUrl": "type.googleapis.com/envoy.config.route.v3.RouteConfiguration",
  "nonce": "00000001"
}
55 agent/xds/testdata/routes/ingress-with-sds-service-level-mixed-tls.v2compat.envoy-1-16-x.golden vendored Normal file
@@ -0,0 +1,55 @@
{
  "versionInfo": "00000001",
  "resources": [
    {
      "@type": "type.googleapis.com/envoy.api.v2.RouteConfiguration",
      "name": "9191",
      "virtualHosts": [
        {
          "name": "foo",
          "domains": [
            "foo.example.com",
            "foo.example.com:9191"
          ],
          "routes": [
            {
              "match": {
                "prefix": "/"
              },
              "route": {
                "cluster": "foo.default.dc1.internal.11111111-2222-3333-4444-555555555555.consul"
              }
            }
          ]
        }
      ],
      "validateClusters": true
    },
    {
      "@type": "type.googleapis.com/envoy.api.v2.RouteConfiguration",
      "name": "9191_web",
      "virtualHosts": [
        {
          "name": "web",
          "domains": [
            "www.example.com",
            "www.example.com:9191"
          ],
          "routes": [
            {
              "match": {
                "prefix": "/"
              },
              "route": {
                "cluster": "web.default.dc1.internal.11111111-2222-3333-4444-555555555555.consul"
              }
            }
          ]
        }
      ],
      "validateClusters": true
    }
  ],
  "typeUrl": "type.googleapis.com/envoy.api.v2.RouteConfiguration",
  "nonce": "00000001"
}
55 agent/xds/testdata/routes/ingress-with-sds-service-level.envoy-1-18-x.golden vendored Normal file
@@ -0,0 +1,55 @@
{
  "versionInfo": "00000001",
  "resources": [
    {
      "@type": "type.googleapis.com/envoy.config.route.v3.RouteConfiguration",
      "name": "9191_foo",
      "virtualHosts": [
        {
          "name": "foo",
          "domains": [
            "foo.example.com",
            "foo.example.com:9191"
          ],
          "routes": [
            {
              "match": {
                "prefix": "/"
              },
              "route": {
                "cluster": "foo.default.dc1.internal.11111111-2222-3333-4444-555555555555.consul"
              }
            }
          ]
        }
      ],
      "validateClusters": true
    },
    {
      "@type": "type.googleapis.com/envoy.config.route.v3.RouteConfiguration",
      "name": "9191_web",
      "virtualHosts": [
        {
          "name": "web",
          "domains": [
            "www.example.com",
            "www.example.com:9191"
          ],
          "routes": [
            {
              "match": {
                "prefix": "/"
              },
              "route": {
                "cluster": "web.default.dc1.internal.11111111-2222-3333-4444-555555555555.consul"
              }
            }
          ]
        }
      ],
      "validateClusters": true
    }
  ],
  "typeUrl": "type.googleapis.com/envoy.config.route.v3.RouteConfiguration",
  "nonce": "00000001"
}
55 agent/xds/testdata/routes/ingress-with-sds-service-level.v2compat.envoy-1-16-x.golden vendored Normal file
@@ -0,0 +1,55 @@
{
  "versionInfo": "00000001",
  "resources": [
    {
      "@type": "type.googleapis.com/envoy.api.v2.RouteConfiguration",
      "name": "9191_foo",
      "virtualHosts": [
        {
          "name": "foo",
          "domains": [
            "foo.example.com",
            "foo.example.com:9191"
          ],
          "routes": [
            {
              "match": {
                "prefix": "/"
              },
              "route": {
                "cluster": "foo.default.dc1.internal.11111111-2222-3333-4444-555555555555.consul"
              }
            }
          ]
        }
      ],
      "validateClusters": true
    },
    {
      "@type": "type.googleapis.com/envoy.api.v2.RouteConfiguration",
      "name": "9191_web",
      "virtualHosts": [
        {
          "name": "web",
          "domains": [
            "www.example.com",
            "www.example.com:9191"
          ],
          "routes": [
            {
              "match": {
                "prefix": "/"
              },
              "route": {
                "cluster": "web.default.dc1.internal.11111111-2222-3333-4444-555555555555.consul"
              }
            }
          ]
        }
      ],
      "validateClusters": true
    }
  ],
  "typeUrl": "type.googleapis.com/envoy.api.v2.RouteConfiguration",
  "nonce": "00000001"
}
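For orientation, the golden files above correspond to an ingress gateway whose services carry their own SDS certificate configuration. Below is a minimal sketch of the kind of config entry that could produce service-level SDS filter chains like these, written against the `api` types added later in this diff; the gateway name, service name, host, and SDS cluster/cert names are illustrative, not taken from a real deployment.

```go
package main

import (
	"log"

	"github.com/hashicorp/consul/api"
)

func main() {
	client, err := api.NewClient(api.DefaultConfig())
	if err != nil {
		log.Fatal(err)
	}

	// An ingress-gateway config entry whose service references a certificate
	// served by an SDS cluster, roughly matching the shape of the fixtures
	// that drive the golden files above.
	entry := &api.IngressGatewayConfigEntry{
		Kind: api.IngressGateway,
		Name: "ingress-gateway",
		Listeners: []api.IngressListener{
			{
				Port:     8080,
				Protocol: "http",
				Services: []api.IngressService{
					{
						Name:  "s1",
						Hosts: []string{"s1.example.com"},
						TLS: &api.GatewayServiceTLSConfig{
							SDS: &api.GatewayTLSSDSConfig{
								ClusterName:  "sds-cluster-1",
								CertResource: "s1.example.com-cert",
							},
						},
					},
				},
			},
		},
	}

	if _, _, err := client.ConfigEntries().Set(entry, nil); err != nil {
		log.Fatal(err)
	}
}
```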
@@ -440,7 +440,14 @@ func convertTypedConfigsToV2(pb proto.Message) error {
		return nil
	case *envoy_core_v2.ConfigSource:
		if x.ConfigSourceSpecifier != nil {
			if _, ok := x.ConfigSourceSpecifier.(*envoy_core_v2.ConfigSource_Ads); !ok {
			switch spec := x.ConfigSourceSpecifier.(type) {
			case *envoy_core_v2.ConfigSource_Ads:
				// Nothing else to do
				break
			case *envoy_core_v2.ConfigSource_ApiConfigSource:
				spec.ApiConfigSource.TransportApiVersion = envoy_core_v2.ApiVersion_V2
				break
			default:
				return fmt.Errorf("%T: ConfigSourceSpecifier type %T not handled", x, x.ConfigSourceSpecifier)
			}
		}
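The switch above pins API-type config sources (such as the SDS `apiConfigSource` blocks in the goldens) to the v2 transport when downgrading xDS resources for older Envoys. Here is a standalone sketch of the same downgrade pattern, not Consul's actual helper; the go-control-plane import path is an assumption, since the diff only shows the `envoy_core_v2` alias.

```go
package main

import (
	"fmt"

	// Assumed import path for the Envoy v2 core protos.
	envoy_core_v2 "github.com/envoyproxy/go-control-plane/envoy/api/v2/core"
)

// downgradeConfigSource mirrors the switch in the hunk above: ADS sources are
// left alone, API config sources get their transport pinned to the v2 API,
// and anything else is rejected.
func downgradeConfigSource(cs *envoy_core_v2.ConfigSource) error {
	if cs.ConfigSourceSpecifier == nil {
		return nil
	}
	switch spec := cs.ConfigSourceSpecifier.(type) {
	case *envoy_core_v2.ConfigSource_Ads:
		// Nothing else to do.
	case *envoy_core_v2.ConfigSource_ApiConfigSource:
		spec.ApiConfigSource.TransportApiVersion = envoy_core_v2.ApiVersion_V2
	default:
		return fmt.Errorf("ConfigSourceSpecifier type %T not handled", cs.ConfigSourceSpecifier)
	}
	return nil
}

func main() {
	cs := &envoy_core_v2.ConfigSource{
		ConfigSourceSpecifier: &envoy_core_v2.ConfigSource_ApiConfigSource{
			ApiConfigSource: &envoy_core_v2.ApiConfigSource{ApiType: envoy_core_v2.ApiConfigSource_GRPC},
		},
	}
	if err := downgradeConfigSource(cs); err != nil {
		panic(err)
	}
	fmt.Println(cs.GetApiConfigSource().TransportApiVersion) // prints V2
}
```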
@@ -491,8 +498,30 @@ func convertTypedConfigsToV2(pb proto.Message) error {
	case *envoy_http_rbac_v2.RBAC:
		return nil
	case *envoy_tls_v2.UpstreamTlsContext:
		if x.CommonTlsContext != nil {
			if err := convertTypedConfigsToV2(x.CommonTlsContext); err != nil {
				return fmt.Errorf("%T: %w", x, err)
			}
		}
		return nil
	case *envoy_tls_v2.DownstreamTlsContext:
		if x.CommonTlsContext != nil {
			if err := convertTypedConfigsToV2(x.CommonTlsContext); err != nil {
				return fmt.Errorf("%T: %w", x, err)
			}
		}
		return nil
	case *envoy_tls_v2.CommonTlsContext:
		for _, sds := range x.TlsCertificateSdsSecretConfigs {
			if err := convertTypedConfigsToV2(sds); err != nil {
				return fmt.Errorf("%T: %w", x, err)
			}
		}
		return nil
	case *envoy_tls_v2.SdsSecretConfig:
		if err := convertTypedConfigsToV2(x.SdsConfig); err != nil {
			return fmt.Errorf("%T: %w", x, err)
		}
		return nil
	case *envoy_grpc_stats_v2.FilterConfig:
		return nil
@@ -40,6 +40,19 @@ type IngressGatewayConfigEntry struct {
type GatewayTLSConfig struct {
	// Indicates that TLS should be enabled for this gateway service.
	Enabled bool

	// SDS allows configuring TLS certificate from an SDS service.
	SDS *GatewayTLSSDSConfig `json:",omitempty"`
}

type GatewayServiceTLSConfig struct {
	// SDS allows configuring TLS certificate from an SDS service.
	SDS *GatewayTLSSDSConfig `json:",omitempty"`
}

type GatewayTLSSDSConfig struct {
	ClusterName  string `json:",omitempty" alias:"cluster_name"`
	CertResource string `json:",omitempty" alias:"cert_resource"`
}

// IngressListener manages the configuration for a listener on a specific port.
@@ -59,6 +72,9 @@ type IngressListener struct {
	// For "tcp" protocol listeners, only a single service is allowed.
	// For "http" listeners, multiple services can be declared.
	Services []IngressService

	// TLS allows specifying some TLS configuration per listener.
	TLS *GatewayTLSConfig `json:",omitempty"`
}

// IngressService manages configuration for services that are exposed to
@@ -93,6 +109,9 @@ type IngressService struct {
	// Namespacing is a Consul Enterprise feature.
	Namespace string `json:",omitempty"`

	// TLS allows specifying some TLS configuration per listener.
	TLS *GatewayServiceTLSConfig `json:",omitempty"`

	// Allow HTTP header manipulation to be configured.
	RequestHeaders  *HTTPHeaderModifiers `json:",omitempty" alias:"request_headers"`
	ResponseHeaders *HTTPHeaderModifiers `json:",omitempty" alias:"response_headers"`
@@ -86,8 +86,26 @@ func TestAPI_ConfigEntries_IngressGateway(t *testing.T) {
					ResponseHeaders: &HTTPHeaderModifiers{
						Remove: []string{"x-foo"},
					},
					TLS: &GatewayServiceTLSConfig{
						SDS: &GatewayTLSSDSConfig{
							ClusterName:  "foo",
							CertResource: "bar",
						},
					},
				},
			},
			TLS: &GatewayTLSConfig{
				SDS: &GatewayTLSSDSConfig{
					ClusterName:  "baz",
					CertResource: "qux",
				},
			},
		},
	}
	ingress1.TLS = GatewayTLSConfig{
		SDS: &GatewayTLSSDSConfig{
			ClusterName:  "qux",
			CertResource: "bug",
		},
	}

@@ -71,23 +71,26 @@ func GetPolicyIDFromPartial(client *api.Client, partialID string) (string, error
	return policyID, nil
}

func GetPolicyIDByName(client *api.Client, name string) (string, error) {
func GetPolicyByName(client *api.Client, name string) (*api.ACLPolicy, error) {
	if name == "" {
		return "", fmt.Errorf("No name specified")
		return nil, fmt.Errorf("No name specified")
	}

	policies, _, err := client.ACL().PolicyList(nil)
	policy, _, err := client.ACL().PolicyReadByName(name, nil)
	if err != nil {
		return nil, fmt.Errorf("Failed to find policy with name %s: %w", name, err)
	}

	return policy, nil
}

func GetPolicyIDByName(client *api.Client, name string) (string, error) {
	policy, err := GetPolicyByName(client, name)
	if err != nil {
		return "", err
	}

	for _, policy := range policies {
		if policy.Name == name {
			return policy.ID, nil
		}
	}

	return "", fmt.Errorf("No such policy with name %s", name)
	return policy.ID, nil
}

func GetRulesFromLegacyToken(client *api.Client, tokenID string, isSecret bool) (string, error) {
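A rough usage sketch of the reworked helpers: `GetPolicyByName` now reads the policy by name in a single round trip instead of listing and scanning every policy, and `GetPolicyIDByName` stays as a thin wrapper over it. The client setup and policy name below are illustrative.

```go
package main

import (
	"fmt"
	"log"

	"github.com/hashicorp/consul/api"
	"github.com/hashicorp/consul/command/acl"
)

func main() {
	client, err := api.NewClient(api.DefaultConfig())
	if err != nil {
		log.Fatal(err)
	}

	// Single round trip: the policy is fetched by name directly.
	policy, err := acl.GetPolicyByName(client, "test-policy")
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(policy.ID, policy.Name)

	// The older entry point remains for callers that only need the ID.
	id, err := acl.GetPolicyIDByName(client, "test-policy")
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(id)
}
```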
@@ -5,6 +5,7 @@ import (
	"fmt"
	"strings"

	"github.com/hashicorp/consul/api"
	"github.com/hashicorp/consul/command/acl"
	"github.com/hashicorp/consul/command/acl/policy"
	"github.com/hashicorp/consul/command/flags"
@@ -67,19 +68,26 @@ func (c *cmd) Run(args []string) int {
	}

	var policyID string
	var pol *api.ACLPolicy
	if c.policyID != "" {
		policyID, err = acl.GetPolicyIDFromPartial(client, c.policyID)
		if err != nil {
			c.UI.Error(fmt.Sprintf("Error determining policy ID: %v", err))
			return 1
		}
		pol, _, err = client.ACL().PolicyRead(policyID, nil)
	} else {
		policyID, err = acl.GetPolicyIDByName(client, c.policyName)
	}
	if err != nil {
		c.UI.Error(fmt.Sprintf("Error determining policy ID: %v", err))
		return 1
		pol, err = acl.GetPolicyByName(client, c.policyName)
	}

	p, _, err := client.ACL().PolicyRead(policyID, nil)
	if err != nil {
		c.UI.Error(fmt.Sprintf("Error reading policy %q: %v", policyID, err))
		var errArg string
		if c.policyID != "" {
			errArg = fmt.Sprintf("id:%s", policyID)
		} else {
			errArg = fmt.Sprintf("name:%s", c.policyName)
		}
		c.UI.Error(fmt.Sprintf("Error reading policy %q: %v", errArg, err))
		return 1
	}

@@ -88,7 +96,7 @@ func (c *cmd) Run(args []string) int {
		c.UI.Error(err.Error())
		return 1
	}
	out, err := formatter.FormatPolicy(p)
	out, err := formatter.FormatPolicy(pol)
	if err != nil {
		c.UI.Error(err.Error())
		return 1
@@ -53,6 +53,7 @@ func TestPolicyReadCommand(t *testing.T) {
	)
	assert.NoError(err)

	// Test querying by id field
	args := []string{
		"-http-addr=" + a.HTTPAddr(),
		"-token=root",
@@ -66,6 +67,22 @@ func TestPolicyReadCommand(t *testing.T) {
	output := ui.OutputWriter.String()
	assert.Contains(output, fmt.Sprintf("test-policy"))
	assert.Contains(output, policy.ID)

	// Test querying by name field
	argsName := []string{
		"-http-addr=" + a.HTTPAddr(),
		"-token=root",
		"-name=test-policy",
	}

	cmd = New(ui)
	code = cmd.Run(argsName)
	assert.Equal(code, 0)
	assert.Empty(ui.ErrorWriter.String())

	output = ui.OutputWriter.String()
	assert.Contains(output, fmt.Sprintf("test-policy"))
	assert.Contains(output, policy.ID)
}

func TestPolicyReadCommand_JSON(t *testing.T) {
@@ -112,27 +112,61 @@ func (q *QueryMeta) GetBackend() structs.QueryBackend {
}

// WriteRequest only applies to writes, always false
//
// IsRead implements structs.RPCInfo
func (w WriteRequest) IsRead() bool {
	return false
}

// SetTokenSecret implements structs.RPCInfo
func (w WriteRequest) TokenSecret() string {
	return w.Token
}

// SetTokenSecret implements structs.RPCInfo
func (w *WriteRequest) SetTokenSecret(s string) {
	w.Token = s
}

// AllowStaleRead returns whether a stale read should be allowed
//
// AllowStaleRead implements structs.RPCInfo
func (w WriteRequest) AllowStaleRead() bool {
	return false
}

// HasTimedOut implements structs.RPCInfo
func (w WriteRequest) HasTimedOut(start time.Time, rpcHoldTimeout, _, _ time.Duration) bool {
	return time.Since(start) > rpcHoldTimeout
}

// IsRead implements structs.RPCInfo
func (r *ReadRequest) IsRead() bool {
	return true
}

// AllowStaleRead implements structs.RPCInfo
func (r *ReadRequest) AllowStaleRead() bool {
	// TODO(partitions): plumb this?
	return false
}

// TokenSecret implements structs.RPCInfo
func (r *ReadRequest) TokenSecret() string {
	return r.Token
}

// SetTokenSecret implements structs.RPCInfo
func (r *ReadRequest) SetTokenSecret(token string) {
	r.Token = token
}

// HasTimedOut implements structs.RPCInfo
func (r *ReadRequest) HasTimedOut(start time.Time, rpcHoldTimeout, maxQueryTime, defaultQueryTime time.Duration) bool {
	return time.Since(start) > rpcHoldTimeout
}

// RequestDatacenter implements structs.RPCInfo
func (td TargetDatacenter) RequestDatacenter() string {
	return td.Datacenter
}
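These methods exist so generic RPC plumbing can treat reads and writes uniformly through `structs.RPCInfo`. Below is a rough sketch of the kind of retry loop that leans on `HasTimedOut`, with a local interface declared only for the sketch since the full `RPCInfo` definition is not part of this diff; the helper, its names, and the error value are illustrative.

```go
package main

import (
	"errors"
	"fmt"
	"time"
)

// rpcInfo mirrors a subset of the methods shown above; the real interface
// lives in Consul's structs package and may differ.
type rpcInfo interface {
	IsRead() bool
	HasTimedOut(start time.Time, rpcHoldTimeout, maxQueryTime, defaultQueryTime time.Duration) bool
}

var errRetry = errors.New("temporarily unavailable")

// forwardWithRetry retries a failing call until the request's own
// HasTimedOut method says to give up.
func forwardWithRetry(info rpcInfo, call func() error, rpcHoldTimeout time.Duration) error {
	start := time.Now()
	for {
		err := call()
		if err == nil || !errors.Is(err, errRetry) {
			return err
		}
		if info.HasTimedOut(start, rpcHoldTimeout, 0, 0) {
			return fmt.Errorf("rpc timed out after %s: %w", time.Since(start), err)
		}
		time.Sleep(50 * time.Millisecond)
	}
}

// writeReq is a stand-in for a write request: never a read, times out after
// the hold timeout.
type writeReq struct{}

func (writeReq) IsRead() bool { return false }
func (writeReq) HasTimedOut(start time.Time, hold, _, _ time.Duration) bool {
	return time.Since(start) > hold
}

func main() {
	attempts := 0
	err := forwardWithRetry(writeReq{}, func() error {
		attempts++
		if attempts < 3 {
			return errRetry
		}
		return nil
	}, time.Second)
	fmt.Println(attempts, err)
}
```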