From 3f93b3cc88fb7adaf0abdc6d0baf56bca4d99248 Mon Sep 17 00:00:00 2001 From: Max Bowsher Date: Sun, 19 Jun 2022 11:58:23 +0100 Subject: [PATCH 01/55] Fix incorrect name and doc for kv_entries metric The name of the metric as registered with the metrics library to provide the help string, was incorrect compared with the actual code that sets the metric value - bring them into sync. Also, the help message was incorrect. Rather than copy the help message from telemetry.mdx, which was correct, but felt a bit unnatural in the way it was worded, update both of them to a new wording. --- agent/consul/usagemetrics/usagemetrics.go | 4 ++-- website/content/docs/agent/telemetry.mdx | 2 +- 2 files changed, 3 insertions(+), 3 deletions(-) diff --git a/agent/consul/usagemetrics/usagemetrics.go b/agent/consul/usagemetrics/usagemetrics.go index 6733ed3df..0d74e0e72 100644 --- a/agent/consul/usagemetrics/usagemetrics.go +++ b/agent/consul/usagemetrics/usagemetrics.go @@ -37,8 +37,8 @@ var Gauges = []prometheus.GaugeDefinition{ Help: "Measures the current number of server agents registered with Consul. It is only emitted by Consul servers. Added in v1.9.6.", }, { - Name: []string{"consul", "kv", "entries"}, - Help: "Measures the current number of server agents registered with Consul. It is only emitted by Consul servers. Added in v1.10.3.", + Name: []string{"consul", "state", "kv_entries"}, + Help: "Measures the current number of entries in the Consul KV store. It is only emitted by Consul servers. Added in v1.10.3.", }, { Name: []string{"consul", "state", "connect_instances"}, diff --git a/website/content/docs/agent/telemetry.mdx b/website/content/docs/agent/telemetry.mdx index 0435a42c8..01adb30a1 100644 --- a/website/content/docs/agent/telemetry.mdx +++ b/website/content/docs/agent/telemetry.mdx @@ -391,7 +391,7 @@ This is a full list of metrics emitted by Consul. | `consul.state.nodes` | Measures the current number of nodes registered with Consul. It is only emitted by Consul servers. Added in v1.9.0. | number of objects | gauge | | `consul.state.services` | Measures the current number of unique services registered with Consul, based on service name. It is only emitted by Consul servers. Added in v1.9.0. | number of objects | gauge | | `consul.state.service_instances` | Measures the current number of unique service instances registered with Consul. It is only emitted by Consul servers. Added in v1.9.0. | number of objects | gauge | -| `consul.state.kv_entries` | Measures the current number of unique KV entries written in Consul. It is only emitted by Consul servers. Added in v1.10.3. | number of objects | gauge | +| `consul.state.kv_entries` | Measures the current number of entries in the Consul KV store. It is only emitted by Consul servers. Added in v1.10.3. | number of objects | gauge | | `consul.state.connect_instances` | Measures the current number of unique connect service instances registered with Consul labeled by Kind (e.g. connect-proxy, connect-native, etc). Added in v1.10.4 | number of objects | gauge | | `consul.state.config_entries` | Measures the current number of configuration entries registered with Consul labeled by Kind (e.g. service-defaults, proxy-defaults, etc). See [Configuration Entries](/docs/connect/config-entries) for more information. Added in v1.10.4 | number of objects | gauge | | `consul.members.clients` | Measures the current number of client agents registered with Consul. It is only emitted by Consul servers. Added in v1.9.6. 
| number of clients | gauge | From 8d68284491aa222e874508d923df02ebf3479c7c Mon Sep 17 00:00:00 2001 From: Max Bowsher Date: Sun, 14 Aug 2022 16:16:41 +0100 Subject: [PATCH 02/55] Correct problem with merge from master, including reformat of table --- website/content/docs/agent/telemetry.mdx | 110 +++++++++++------------ 1 file changed, 55 insertions(+), 55 deletions(-) diff --git a/website/content/docs/agent/telemetry.mdx b/website/content/docs/agent/telemetry.mdx index c29709057..12f04e882 100644 --- a/website/content/docs/agent/telemetry.mdx +++ b/website/content/docs/agent/telemetry.mdx @@ -349,59 +349,59 @@ populated free list structure. This is a full list of metrics emitted by Consul. -| Metric | Description | Unit | Type | -| -------------------------------------------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | -------------------- | ------- | -| `consul.acl.blocked.{check,service}.deregistration` | Increments whenever a deregistration fails for an entity (check or service) is blocked by an ACL. | requests | counter | -| `consul.acl.blocked.{check,node,service}.registration` | Increments whenever a registration fails for an entity (check, node or service) is blocked by an ACL. | requests | counter | -| `consul.api.http` | This samples how long it takes to service the given HTTP request for the given verb and path. Includes labels for `path` and `method`. `path` does not include details like service or key names, for these an underscore will be present as a placeholder (eg. path=`v1.kv._`) | ms | timer | -| `consul.client.rpc` | Increments whenever a Consul agent in client mode makes an RPC request to a Consul server. This gives a measure of how much a given agent is loading the Consul servers. Currently, this is only generated by agents in client mode, not Consul servers. | requests | counter | -| `consul.client.rpc.exceeded` | Increments whenever a Consul agent in client mode makes an RPC request to a Consul server gets rate limited by that agent's [`limits`](/docs/agent/config/config-files#limits) configuration. This gives an indication that there's an abusive application making too many requests on the agent, or that the rate limit needs to be increased. Currently, this only applies to agents in client mode, not Consul servers. | rejected requests | counter | -| `consul.client.rpc.failed` | Increments whenever a Consul agent in client mode makes an RPC request to a Consul server and fails. | requests | counter | -| `consul.client.api.catalog_register.` | Increments whenever a Consul agent receives a catalog register request. | requests | counter | -| `consul.client.api.success.catalog_register.` | Increments whenever a Consul agent successfully responds to a catalog register request. | requests | counter | -| `consul.client.rpc.error.catalog_register.` | Increments whenever a Consul agent receives an RPC error for a catalog register request. | errors | counter | -| `consul.client.api.catalog_deregister.` | Increments whenever a Consul agent receives a catalog deregister request. 
| requests | counter | -| `consul.client.api.success.catalog_deregister.` | Increments whenever a Consul agent successfully responds to a catalog deregister request. | requests | counter | -| `consul.client.rpc.error.catalog_deregister.` | Increments whenever a Consul agent receives an RPC error for a catalog deregister request. | errors | counter | -| `consul.client.api.catalog_datacenters.` | Increments whenever a Consul agent receives a request to list datacenters in the catalog. | requests | counter | -| `consul.client.api.success.catalog_datacenters.` | Increments whenever a Consul agent successfully responds to a request to list datacenters. | requests | counter | -| `consul.client.rpc.error.catalog_datacenters.` | Increments whenever a Consul agent receives an RPC error for a request to list datacenters. | errors | counter | -| `consul.client.api.catalog_nodes.` | Increments whenever a Consul agent receives a request to list nodes from the catalog. | requests | counter | -| `consul.client.api.success.catalog_nodes.` | Increments whenever a Consul agent successfully responds to a request to list nodes. | requests | counter | -| `consul.client.rpc.error.catalog_nodes.` | Increments whenever a Consul agent receives an RPC error for a request to list nodes. | errors | counter | -| `consul.client.api.catalog_services.` | Increments whenever a Consul agent receives a request to list services from the catalog. | requests | counter | -| `consul.client.api.success.catalog_services.` | Increments whenever a Consul agent successfully responds to a request to list services. | requests | counter | -| `consul.client.rpc.error.catalog_services.` | Increments whenever a Consul agent receives an RPC error for a request to list services. | errors | counter | -| `consul.client.api.catalog_service_nodes.` | Increments whenever a Consul agent receives a request to list nodes offering a service. | requests | counter | -| `consul.client.api.success.catalog_service_nodes.` | Increments whenever a Consul agent successfully responds to a request to list nodes offering a service. | requests | counter | -| `consul.client.api.error.catalog_service_nodes.` | Increments whenever a Consul agent receives an RPC error for request to list nodes offering a service. | requests | counter | -| `consul.client.rpc.error.catalog_service_nodes.` | Increments whenever a Consul agent receives an RPC error for a request to list nodes offering a service.   | errors | counter | -| `consul.client.api.catalog_node_services.` | Increments whenever a Consul agent receives a request to list services registered in a node.   | requests | counter | -| `consul.client.api.success.catalog_node_services.` | Increments whenever a Consul agent successfully responds to a request to list services in a node.   | requests | counter | -| `consul.client.rpc.error.catalog_node_services.` | Increments whenever a Consul agent receives an RPC error for a request to list services in a node.   | errors | counter | -| `consul.client.api.catalog_node_service_list` | Increments whenever a Consul agent receives a request to list a node's registered services. | requests | counter | -| `consul.client.rpc.error.catalog_node_service_list` | Increments whenever a Consul agent receives an RPC error for request to list a node's registered services. | errors | counter | -| `consul.client.api.success.catalog_node_service_list` | Increments whenever a Consul agent successfully responds to a request to list a node's registered services. 
| requests | counter | -| `consul.client.api.catalog_gateway_services.` | Increments whenever a Consul agent receives a request to list services associated with a gateway. | requests | counter | -| `consul.client.api.success.catalog_gateway_services.` | Increments whenever a Consul agent successfully responds to a request to list services associated with a gateway. | requests | counter | -| `consul.client.rpc.error.catalog_gateway_services.` | Increments whenever a Consul agent receives an RPC error for a request to list services associated with a gateway. | errors | counter | -| `consul.runtime.num_goroutines` | Tracks the number of running goroutines and is a general load pressure indicator. This may burst from time to time but should return to a steady state value. | number of goroutines | gauge | -| `consul.runtime.alloc_bytes` | Measures the number of bytes allocated by the Consul process. This may burst from time to time but should return to a steady state value. | bytes | gauge | -| `consul.runtime.heap_objects` | Measures the number of objects allocated on the heap and is a general memory pressure indicator. This may burst from time to time but should return to a steady state value. | number of objects | gauge | -| `consul.state.nodes` | Measures the current number of nodes registered with Consul. It is only emitted by Consul servers. Added in v1.9.0. | number of objects | gauge | -| `consul.state.peerings` | Measures the current number of peerings registered with Consul. It is only emitted by Consul servers. Added in v1.13.0. | number of objects | gauge | -| `consul.state.services` | Measures the current number of unique services registered with Consul, based on service name. It is only emitted by Consul servers. Added in v1.9.0. | number of objects | gauge | -| `consul.state.service_instances` | Measures the current number of unique service instances registered with Consul. It is only emitted by Consul servers. Added in v1.9.0. | number of objects | gauge | -| `consul.state.kv_entries` | Measures the current number of entries in the Consul KV store. It is only emitted by Consul servers. Added in v1.10.3. | number of objects | gauge | -| `consul.state.connect_instances` | Measures the current number of unique connect service instances registered with Consul labeled by Kind (e.g. connect-proxy, connect-native, etc). Added in v1.10.4 | number of objects | gauge | -| `consul.state.config_entries` | Measures the current number of configuration entries registered with Consul labeled by Kind (e.g. service-defaults, proxy-defaults, etc). See [Configuration Entries](/docs/connect/config-entries) for more information. Added in v1.10.4 | number of objects | gauge | -| `consul.members.clients` | Measures the current number of client agents registered with Consul. It is only emitted by Consul servers. Added in v1.9.6. | number of clients | gauge | -| `consul.members.servers` | Measures the current number of server agents registered with Consul. It is only emitted by Consul servers. Added in v1.9.6. | number of servers | gauge | -| `consul.dns.stale_queries` | Increments when an agent serves a query within the allowed stale threshold. | queries | counter | -| `consul.dns.ptr_query.` | Measures the time spent handling a reverse DNS query for the given node. | ms | timer | -| `consul.dns.domain_query.` | Measures the time spent handling a domain query for the given node. | ms | timer | -| `consul.system.licenseExpiration` | This measures the number of hours remaining on the agents license. 
| hours | gauge | -| `consul.version` | Represents the Consul version. +| Metric | Description | Unit | Type | +|--------------------------------------------------------|----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|----------------------|---------| +| `consul.acl.blocked.{check,service}.deregistration` | Increments whenever a deregistration fails for an entity (check or service) is blocked by an ACL. | requests | counter | +| `consul.acl.blocked.{check,node,service}.registration` | Increments whenever a registration fails for an entity (check, node or service) is blocked by an ACL. | requests | counter | +| `consul.api.http` | This samples how long it takes to service the given HTTP request for the given verb and path. Includes labels for `path` and `method`. `path` does not include details like service or key names, for these an underscore will be present as a placeholder (eg. path=`v1.kv._`) | ms | timer | +| `consul.client.rpc` | Increments whenever a Consul agent in client mode makes an RPC request to a Consul server. This gives a measure of how much a given agent is loading the Consul servers. Currently, this is only generated by agents in client mode, not Consul servers. | requests | counter | +| `consul.client.rpc.exceeded` | Increments whenever a Consul agent in client mode makes an RPC request to a Consul server gets rate limited by that agent's [`limits`](/docs/agent/config/config-files#limits) configuration. This gives an indication that there's an abusive application making too many requests on the agent, or that the rate limit needs to be increased. Currently, this only applies to agents in client mode, not Consul servers. | rejected requests | counter | +| `consul.client.rpc.failed` | Increments whenever a Consul agent in client mode makes an RPC request to a Consul server and fails. | requests | counter | +| `consul.client.api.catalog_register.` | Increments whenever a Consul agent receives a catalog register request. | requests | counter | +| `consul.client.api.success.catalog_register.` | Increments whenever a Consul agent successfully responds to a catalog register request. | requests | counter | +| `consul.client.rpc.error.catalog_register.` | Increments whenever a Consul agent receives an RPC error for a catalog register request. | errors | counter | +| `consul.client.api.catalog_deregister.` | Increments whenever a Consul agent receives a catalog deregister request. | requests | counter | +| `consul.client.api.success.catalog_deregister.` | Increments whenever a Consul agent successfully responds to a catalog deregister request. | requests | counter | +| `consul.client.rpc.error.catalog_deregister.` | Increments whenever a Consul agent receives an RPC error for a catalog deregister request. | errors | counter | +| `consul.client.api.catalog_datacenters.` | Increments whenever a Consul agent receives a request to list datacenters in the catalog. | requests | counter | +| `consul.client.api.success.catalog_datacenters.` | Increments whenever a Consul agent successfully responds to a request to list datacenters. 
| requests | counter | +| `consul.client.rpc.error.catalog_datacenters.` | Increments whenever a Consul agent receives an RPC error for a request to list datacenters. | errors | counter | +| `consul.client.api.catalog_nodes.` | Increments whenever a Consul agent receives a request to list nodes from the catalog. | requests | counter | +| `consul.client.api.success.catalog_nodes.` | Increments whenever a Consul agent successfully responds to a request to list nodes. | requests | counter | +| `consul.client.rpc.error.catalog_nodes.` | Increments whenever a Consul agent receives an RPC error for a request to list nodes. | errors | counter | +| `consul.client.api.catalog_services.` | Increments whenever a Consul agent receives a request to list services from the catalog. | requests | counter | +| `consul.client.api.success.catalog_services.` | Increments whenever a Consul agent successfully responds to a request to list services. | requests | counter | +| `consul.client.rpc.error.catalog_services.` | Increments whenever a Consul agent receives an RPC error for a request to list services. | errors | counter | +| `consul.client.api.catalog_service_nodes.` | Increments whenever a Consul agent receives a request to list nodes offering a service. | requests | counter | +| `consul.client.api.success.catalog_service_nodes.` | Increments whenever a Consul agent successfully responds to a request to list nodes offering a service. | requests | counter | +| `consul.client.api.error.catalog_service_nodes.` | Increments whenever a Consul agent receives an RPC error for request to list nodes offering a service. | requests | counter | +| `consul.client.rpc.error.catalog_service_nodes.` | Increments whenever a Consul agent receives an RPC error for a request to list nodes offering a service.   | errors | counter | +| `consul.client.api.catalog_node_services.` | Increments whenever a Consul agent receives a request to list services registered in a node.   | requests | counter | +| `consul.client.api.success.catalog_node_services.` | Increments whenever a Consul agent successfully responds to a request to list services in a node.   | requests | counter | +| `consul.client.rpc.error.catalog_node_services.` | Increments whenever a Consul agent receives an RPC error for a request to list services in a node.   | errors | counter | +| `consul.client.api.catalog_node_service_list` | Increments whenever a Consul agent receives a request to list a node's registered services. | requests | counter | +| `consul.client.rpc.error.catalog_node_service_list` | Increments whenever a Consul agent receives an RPC error for request to list a node's registered services. | errors | counter | +| `consul.client.api.success.catalog_node_service_list` | Increments whenever a Consul agent successfully responds to a request to list a node's registered services. | requests | counter | +| `consul.client.api.catalog_gateway_services.` | Increments whenever a Consul agent receives a request to list services associated with a gateway. | requests | counter | +| `consul.client.api.success.catalog_gateway_services.` | Increments whenever a Consul agent successfully responds to a request to list services associated with a gateway. | requests | counter | +| `consul.client.rpc.error.catalog_gateway_services.` | Increments whenever a Consul agent receives an RPC error for a request to list services associated with a gateway. 
| errors | counter | +| `consul.runtime.num_goroutines` | Tracks the number of running goroutines and is a general load pressure indicator. This may burst from time to time but should return to a steady state value. | number of goroutines | gauge | +| `consul.runtime.alloc_bytes` | Measures the number of bytes allocated by the Consul process. This may burst from time to time but should return to a steady state value. | bytes | gauge | +| `consul.runtime.heap_objects` | Measures the number of objects allocated on the heap and is a general memory pressure indicator. This may burst from time to time but should return to a steady state value. | number of objects | gauge | +| `consul.state.nodes` | Measures the current number of nodes registered with Consul. It is only emitted by Consul servers. Added in v1.9.0. | number of objects | gauge | +| `consul.state.peerings` | Measures the current number of peerings registered with Consul. It is only emitted by Consul servers. Added in v1.13.0. | number of objects | gauge | +| `consul.state.services` | Measures the current number of unique services registered with Consul, based on service name. It is only emitted by Consul servers. Added in v1.9.0. | number of objects | gauge | +| `consul.state.service_instances` | Measures the current number of unique service instances registered with Consul. It is only emitted by Consul servers. Added in v1.9.0. | number of objects | gauge | +| `consul.state.kv_entries` | Measures the current number of entries in the Consul KV store. It is only emitted by Consul servers. Added in v1.10.3. | number of objects | gauge | +| `consul.state.connect_instances` | Measures the current number of unique connect service instances registered with Consul labeled by Kind (e.g. connect-proxy, connect-native, etc). Added in v1.10.4 | number of objects | gauge | +| `consul.state.config_entries` | Measures the current number of configuration entries registered with Consul labeled by Kind (e.g. service-defaults, proxy-defaults, etc). See [Configuration Entries](/docs/connect/config-entries) for more information. Added in v1.10.4 | number of objects | gauge | +| `consul.members.clients` | Measures the current number of client agents registered with Consul. It is only emitted by Consul servers. Added in v1.9.6. | number of clients | gauge | +| `consul.members.servers` | Measures the current number of server agents registered with Consul. It is only emitted by Consul servers. Added in v1.9.6. | number of servers | gauge | +| `consul.dns.stale_queries` | Increments when an agent serves a query within the allowed stale threshold. | queries | counter | +| `consul.dns.ptr_query.` | Measures the time spent handling a reverse DNS query for the given node. | ms | timer | +| `consul.dns.domain_query.` | Measures the time spent handling a domain query for the given node. | ms | timer | +| `consul.system.licenseExpiration` | This measures the number of hours remaining on the agents license. | hours | gauge | +| `consul.version` | Represents the Consul version. | agents | gauge | ## Server Health @@ -691,14 +691,14 @@ agent. The table below describes the additional metrics exported by the proxy. **Requirements:** - Consul 1.13.0+ -[Cluster peering](/docs/connect/cluster-peering) refers to enabling communication between Consul clusters through a peer connection, as opposed to a federated connection. Consul collects metrics that describe the number of services exported to a peered cluster. Peering metrics are only emitted by the leader server. 
+[Cluster peering](/docs/connect/cluster-peering) refers to enabling communication between Consul clusters through a peer connection, as opposed to a federated connection. Consul collects metrics that describe the number of services exported to a peered cluster. Peering metrics are only emitted by the leader server. | Metric | Description | Unit | Type | | ------------------------------------- | ----------------------------------------------------------------------| ------ | ------- | | `consul.peering.exported_services` | Counts the number of services exported to a peer cluster. | count | gauge | ### Labels -Consul attaches the following labels to metric values. +Consul attaches the following labels to metric values. | Label Name | Description | Possible values | | ------------------------------------- | ---------------------------------------------------------------------- | ------------------------------------------ | | `peer_name` | The name of the peering on the reporting cluster or leader. | Any defined peer name in the cluster | From 19f25fc3a58c2012f0576dd72f65e29b0ab04b08 Mon Sep 17 00:00:00 2001 From: freddygv Date: Fri, 26 Aug 2022 10:52:47 -0600 Subject: [PATCH 03/55] Allow terminated peerings to be deleted Peerings are terminated when a peer decides to delete the peering from their end. Deleting a peering sends a termination message to the peer and triggers them to mark the peering as terminated but does NOT delete the peering itself. This is to prevent peerings from disappearing from both sides just because one side deleted them. Previously the Delete endpoint was skipping the deletion if the peering was not marked as active. However, terminated peerings are also inactive. This PR makes some updates so that peerings marked as terminated can be deleted by users. --- agent/consul/state/peering.go | 23 ++- agent/consul/state/peering_test.go | 228 +++++++++++++++++++++++++---- agent/rpc/peering/service.go | 3 +- agent/rpc/peering/service_test.go | 64 ++++---- 4 files changed, 258 insertions(+), 60 deletions(-) diff --git a/agent/consul/state/peering.go b/agent/consul/state/peering.go index 287e82291..9457dd811 100644 --- a/agent/consul/state/peering.go +++ b/agent/consul/state/peering.go @@ -549,8 +549,25 @@ func (s *Store) PeeringWrite(idx uint64, req *pbpeering.PeeringWriteRequest) err if req.Peering.ID != existing.ID { return fmt.Errorf("A peering already exists with the name %q and a different ID %q", req.Peering.Name, existing.ID) } + + // Nothing to do if our peer wants to terminate the peering but the peering is already marked for deletion. + if existing.State == pbpeering.PeeringState_DELETING && req.Peering.State == pbpeering.PeeringState_TERMINATED { + return nil + } + + // No-op deletion + if existing.State == pbpeering.PeeringState_DELETING && req.Peering.State == pbpeering.PeeringState_DELETING { + return nil + } + + // No-op termination + if existing.State == pbpeering.PeeringState_TERMINATED && req.Peering.State == pbpeering.PeeringState_TERMINATED { + return nil + } + // Prevent modifications to Peering marked for deletion. - if !existing.IsActive() { + // This blocks generating new peering tokens or re-establishing the peering until the peering is done deleting. 
+ if existing.State == pbpeering.PeeringState_DELETING { return fmt.Errorf("cannot write to peering that is marked for deletion") } @@ -582,8 +599,8 @@ func (s *Store) PeeringWrite(idx uint64, req *pbpeering.PeeringWriteRequest) err req.Peering.ModifyIndex = idx } - // Ensure associated secrets are cleaned up when a peering is marked for deletion. - if req.Peering.State == pbpeering.PeeringState_DELETING { + // Ensure associated secrets are cleaned up when a peering is marked for deletion or terminated. + if !req.Peering.IsActive() { if err := peeringSecretsDeleteTxn(tx, req.Peering.ID, req.Peering.ShouldDial()); err != nil { return fmt.Errorf("failed to delete peering secrets: %w", err) } diff --git a/agent/consul/state/peering_test.go b/agent/consul/state/peering_test.go index bfce75295..1dc2446fe 100644 --- a/agent/consul/state/peering_test.go +++ b/agent/consul/state/peering_test.go @@ -1112,16 +1112,22 @@ func TestStore_PeeringWrite(t *testing.T) { // Each case depends on the previous. s := NewStateStore(nil) + testTime := time.Now() + + type expectations struct { + peering *pbpeering.Peering + secrets *pbpeering.PeeringSecrets + err string + } type testcase struct { - name string - input *pbpeering.PeeringWriteRequest - expectSecrets *pbpeering.PeeringSecrets - expectErr string + name string + input *pbpeering.PeeringWriteRequest + expect expectations } run := func(t *testing.T, tc testcase) { err := s.PeeringWrite(10, tc.input) - if tc.expectErr != "" { - testutil.RequireErrorContains(t, err, tc.expectErr) + if tc.expect.err != "" { + testutil.RequireErrorContains(t, err, tc.expect.err) return } require.NoError(t, err) @@ -1133,57 +1139,185 @@ func TestStore_PeeringWrite(t *testing.T) { _, p, err := s.PeeringRead(nil, q) require.NoError(t, err) require.NotNil(t, p) - require.Equal(t, tc.input.Peering.State, p.State) - require.Equal(t, tc.input.Peering.Name, p.Name) + require.Equal(t, tc.expect.peering.State, p.State) + require.Equal(t, tc.expect.peering.Name, p.Name) + require.Equal(t, tc.expect.peering.Meta, p.Meta) + if tc.expect.peering.DeletedAt != nil { + require.Equal(t, tc.expect.peering.DeletedAt, p.DeletedAt) + } secrets, err := s.PeeringSecretsRead(nil, tc.input.Peering.ID) require.NoError(t, err) - prototest.AssertDeepEqual(t, tc.expectSecrets, secrets) + prototest.AssertDeepEqual(t, tc.expect.secrets, secrets) } tcs := []testcase{ { name: "create baz", input: &pbpeering.PeeringWriteRequest{ Peering: &pbpeering.Peering{ - ID: testBazPeerID, - Name: "baz", - Partition: structs.NodeEnterpriseMetaInDefaultPartition().PartitionOrEmpty(), + ID: testBazPeerID, + Name: "baz", + State: pbpeering.PeeringState_ESTABLISHING, + PeerServerAddresses: []string{"localhost:8502"}, + Partition: structs.NodeEnterpriseMetaInDefaultPartition().PartitionOrEmpty(), }, SecretsRequest: &pbpeering.SecretsWriteRequest{ PeerID: testBazPeerID, - Request: &pbpeering.SecretsWriteRequest_GenerateToken{ - GenerateToken: &pbpeering.SecretsWriteRequest_GenerateTokenRequest{ - EstablishmentSecret: testBazSecretID, + Request: &pbpeering.SecretsWriteRequest_Establish{ + Establish: &pbpeering.SecretsWriteRequest_EstablishRequest{ + ActiveStreamSecret: testBazSecretID, }, }, }, }, - expectSecrets: &pbpeering.PeeringSecrets{ - PeerID: testBazPeerID, - Establishment: &pbpeering.PeeringSecrets_Establishment{ - SecretID: testBazSecretID, + expect: expectations{ + peering: &pbpeering.Peering{ + ID: testBazPeerID, + Name: "baz", + State: pbpeering.PeeringState_ESTABLISHING, }, + secrets: &pbpeering.PeeringSecrets{ + 
PeerID: testBazPeerID, + Stream: &pbpeering.PeeringSecrets_Stream{ + ActiveSecretID: testBazSecretID, + }, + }, + }, + }, + { + name: "cannot change ID for baz", + input: &pbpeering.PeeringWriteRequest{ + Peering: &pbpeering.Peering{ + ID: "123", + Name: "baz", + State: pbpeering.PeeringState_FAILING, + PeerServerAddresses: []string{"localhost:8502"}, + Partition: structs.NodeEnterpriseMetaInDefaultPartition().PartitionOrEmpty(), + }, + }, + expect: expectations{ + err: `A peering already exists with the name "baz" and a different ID`, }, }, { name: "update baz", input: &pbpeering.PeeringWriteRequest{ Peering: &pbpeering.Peering{ - ID: testBazPeerID, - Name: "baz", - State: pbpeering.PeeringState_FAILING, + ID: testBazPeerID, + Name: "baz", + State: pbpeering.PeeringState_FAILING, + PeerServerAddresses: []string{"localhost:8502"}, + Partition: structs.NodeEnterpriseMetaInDefaultPartition().PartitionOrEmpty(), + }, + }, + expect: expectations{ + peering: &pbpeering.Peering{ + ID: testBazPeerID, + Name: "baz", + State: pbpeering.PeeringState_FAILING, + }, + secrets: &pbpeering.PeeringSecrets{ + PeerID: testBazPeerID, + Stream: &pbpeering.PeeringSecrets_Stream{ + ActiveSecretID: testBazSecretID, + }, + }, + }, + }, + { + name: "if no state was included in request it is inherited from existing", + input: &pbpeering.PeeringWriteRequest{ + Peering: &pbpeering.Peering{ + ID: testBazPeerID, + Name: "baz", + // Send undefined state. + // State: pbpeering.PeeringState_FAILING, + PeerServerAddresses: []string{"localhost:8502"}, + Partition: structs.NodeEnterpriseMetaInDefaultPartition().PartitionOrEmpty(), + }, + }, + expect: expectations{ + peering: &pbpeering.Peering{ + ID: testBazPeerID, + Name: "baz", + // Previous failing state is picked up. + State: pbpeering.PeeringState_FAILING, + }, + secrets: &pbpeering.PeeringSecrets{ + PeerID: testBazPeerID, + Stream: &pbpeering.PeeringSecrets_Stream{ + ActiveSecretID: testBazSecretID, + }, + }, + }, + }, + { + name: "mark baz as terminated", + input: &pbpeering.PeeringWriteRequest{ + Peering: &pbpeering.Peering{ + ID: testBazPeerID, + Name: "baz", + State: pbpeering.PeeringState_TERMINATED, + PeerServerAddresses: []string{"localhost:8502"}, + Partition: structs.NodeEnterpriseMetaInDefaultPartition().PartitionOrEmpty(), + }, + }, + expect: expectations{ + peering: &pbpeering.Peering{ + ID: testBazPeerID, + Name: "baz", + State: pbpeering.PeeringState_TERMINATED, + }, + // Secrets for baz should have been deleted + secrets: nil, + }, + }, + { + name: "cannot edit data during no-op termination", + input: &pbpeering.PeeringWriteRequest{ + Peering: &pbpeering.Peering{ + ID: testBazPeerID, + Name: "baz", + State: pbpeering.PeeringState_TERMINATED, + // Attempt to modify the addresses + Meta: map[string]string{"foo": "bar"}, Partition: structs.NodeEnterpriseMetaInDefaultPartition().PartitionOrEmpty(), }, }, - expectSecrets: &pbpeering.PeeringSecrets{ - PeerID: testBazPeerID, - Establishment: &pbpeering.PeeringSecrets_Establishment{ - SecretID: testBazSecretID, + expect: expectations{ + peering: &pbpeering.Peering{ + ID: testBazPeerID, + Name: "baz", + State: pbpeering.PeeringState_TERMINATED, + // Meta should be unchanged. 
+ Meta: nil, }, }, }, { name: "mark baz for deletion", + input: &pbpeering.PeeringWriteRequest{ + Peering: &pbpeering.Peering{ + ID: testBazPeerID, + Name: "baz", + State: pbpeering.PeeringState_DELETING, + PeerServerAddresses: []string{"localhost:8502"}, + DeletedAt: structs.TimeToProto(testTime), + Partition: structs.NodeEnterpriseMetaInDefaultPartition().PartitionOrEmpty(), + }, + }, + expect: expectations{ + peering: &pbpeering.Peering{ + ID: testBazPeerID, + Name: "baz", + State: pbpeering.PeeringState_DELETING, + DeletedAt: structs.TimeToProto(testTime), + }, + secrets: nil, + }, + }, + { + name: "deleting a deleted peering is a no-op", input: &pbpeering.PeeringWriteRequest{ Peering: &pbpeering.Peering{ ID: testBazPeerID, @@ -1193,8 +1327,38 @@ func TestStore_PeeringWrite(t *testing.T) { Partition: structs.NodeEnterpriseMetaInDefaultPartition().PartitionOrEmpty(), }, }, - // Secrets for baz should have been deleted - expectSecrets: nil, + expect: expectations{ + peering: &pbpeering.Peering{ + ID: testBazPeerID, + Name: "baz", + // Still marked as deleting at the original testTime + State: pbpeering.PeeringState_DELETING, + DeletedAt: structs.TimeToProto(testTime), + }, + // Secrets for baz should have been deleted + secrets: nil, + }, + }, + { + name: "terminating a peering marked for deletion is a no-op", + input: &pbpeering.PeeringWriteRequest{ + Peering: &pbpeering.Peering{ + ID: testBazPeerID, + Name: "baz", + State: pbpeering.PeeringState_TERMINATED, + Partition: structs.NodeEnterpriseMetaInDefaultPartition().PartitionOrEmpty(), + }, + }, + expect: expectations{ + peering: &pbpeering.Peering{ + ID: testBazPeerID, + Name: "baz", + // Still marked as deleting + State: pbpeering.PeeringState_DELETING, + }, + // Secrets for baz should have been deleted + secrets: nil, + }, }, { name: "cannot update peering marked for deletion", @@ -1209,7 +1373,9 @@ func TestStore_PeeringWrite(t *testing.T) { Partition: structs.NodeEnterpriseMetaInDefaultPartition().PartitionOrEmpty(), }, }, - expectErr: "cannot write to peering that is marked for deletion", + expect: expectations{ + err: "cannot write to peering that is marked for deletion", + }, }, { name: "cannot create peering marked for deletion", @@ -1221,7 +1387,9 @@ func TestStore_PeeringWrite(t *testing.T) { Partition: structs.NodeEnterpriseMetaInDefaultPartition().PartitionOrEmpty(), }, }, - expectErr: "cannot create a new peering marked for deletion", + expect: expectations{ + err: "cannot create a new peering marked for deletion", + }, }, } for _, tc := range tcs { diff --git a/agent/rpc/peering/service.go b/agent/rpc/peering/service.go index 20bbafc1c..6c0950d9e 100644 --- a/agent/rpc/peering/service.go +++ b/agent/rpc/peering/service.go @@ -726,11 +726,12 @@ func (s *Server) PeeringDelete(ctx context.Context, req *pbpeering.PeeringDelete return nil, err } - if !existing.IsActive() { + if existing == nil || existing.State == pbpeering.PeeringState_DELETING { // Return early when the Peering doesn't exist or is already marked for deletion. // We don't return nil because the pb will fail to marshal. return &pbpeering.PeeringDeleteResponse{}, nil } + // We are using a write request due to needing to perform a deferred deletion. // The peering gets marked for deletion by setting the DeletedAt field, // and a leader routine will handle deleting the peering. 
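// --- Illustrative sketch (not part of this patch): the deferred deletion
// described in the comment above could conceptually take the shape below.
// The helper name, the exact field set, and the raft/apply plumbing around it
// are assumptions; only the types (pbpeering, structs, time) mirror ones
// already used elsewhere in this series.
func markPeeringForDeletion(existing *pbpeering.Peering) *pbpeering.PeeringWriteRequest {
	return &pbpeering.PeeringWriteRequest{
		Peering: &pbpeering.Peering{
			ID:    existing.ID,
			Name:  existing.Name,
			State: pbpeering.PeeringState_DELETING,
			// DeletedAt is what the leader routine keys off of to later
			// remove the peering and its imported data.
			DeletedAt: structs.TimeToProto(time.Now().UTC()),
			Partition: existing.Partition,
		},
	}
}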
diff --git a/agent/rpc/peering/service_test.go b/agent/rpc/peering/service_test.go index 54770d6a6..6b26db7d0 100644 --- a/agent/rpc/peering/service_test.go +++ b/agent/rpc/peering/service_test.go @@ -621,38 +621,50 @@ func TestPeeringService_Read_ACLEnforcement(t *testing.T) { } func TestPeeringService_Delete(t *testing.T) { - // TODO(peering): see note on newTestServer, refactor to not use this - s := newTestServer(t, nil) - - p := &pbpeering.Peering{ - ID: testUUID(t), - Name: "foo", - State: pbpeering.PeeringState_ESTABLISHING, - PeerCAPems: nil, - PeerServerName: "test", - PeerServerAddresses: []string{"addr1"}, + tt := map[string]pbpeering.PeeringState{ + "active peering": pbpeering.PeeringState_ACTIVE, + "terminated peering": pbpeering.PeeringState_TERMINATED, } - err := s.Server.FSM().State().PeeringWrite(10, &pbpeering.PeeringWriteRequest{Peering: p}) - require.NoError(t, err) - require.Nil(t, p.DeletedAt) - require.True(t, p.IsActive()) - client := pbpeering.NewPeeringServiceClient(s.ClientConn(t)) + for name, overrideState := range tt { + t.Run(name, func(t *testing.T) { + // TODO(peering): see note on newTestServer, refactor to not use this + s := newTestServer(t, nil) - ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second) - t.Cleanup(cancel) + // A pointer is kept for the following peering so that we can modify the object without another PeeringWrite. + p := &pbpeering.Peering{ + ID: testUUID(t), + Name: "foo", + PeerCAPems: nil, + PeerServerName: "test", + PeerServerAddresses: []string{"addr1"}, + } + err := s.Server.FSM().State().PeeringWrite(10, &pbpeering.PeeringWriteRequest{Peering: p}) + require.NoError(t, err) + require.Nil(t, p.DeletedAt) + require.True(t, p.IsActive()) - _, err = client.PeeringDelete(ctx, &pbpeering.PeeringDeleteRequest{Name: "foo"}) - require.NoError(t, err) + // Overwrite the peering state to simulate deleting from a non-initial state. + p.State = overrideState - retry.Run(t, func(r *retry.R) { - _, resp, err := s.Server.FSM().State().PeeringRead(nil, state.Query{Value: "foo"}) - require.NoError(r, err) + client := pbpeering.NewPeeringServiceClient(s.ClientConn(t)) - // Initially the peering will be marked for deletion but eventually the leader - // routine will clean it up. - require.Nil(r, resp) - }) + ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second) + t.Cleanup(cancel) + + _, err = client.PeeringDelete(ctx, &pbpeering.PeeringDeleteRequest{Name: "foo"}) + require.NoError(t, err) + + retry.Run(t, func(r *retry.R) { + _, resp, err := s.Server.FSM().State().PeeringRead(nil, state.Query{Value: "foo"}) + require.NoError(r, err) + + // Initially the peering will be marked for deletion but eventually the leader + // routine will clean it up. + require.Nil(r, resp) + }) + }) + } } func TestPeeringService_Delete_ACLEnforcement(t *testing.T) { From b1025f2dd98d5e3fb51a8bc5a2156239691bbfa1 Mon Sep 17 00:00:00 2001 From: "Chris S. 
Kim" Date: Fri, 26 Aug 2022 16:49:03 -0400 Subject: [PATCH 04/55] Adjust metrics reporting for peering tracker --- agent/consul/server.go | 11 ++- .../services/peerstream/server.go | 8 +- .../services/peerstream/stream_resources.go | 6 +- .../services/peerstream/stream_test.go | 38 +++------- .../services/peerstream/stream_tracker.go | 51 +++++-------- .../peerstream/stream_tracker_test.go | 76 +++++++------------ 6 files changed, 68 insertions(+), 122 deletions(-) diff --git a/agent/consul/server.go b/agent/consul/server.go index 8f2986c3e..f92a03ccd 100644 --- a/agent/consul/server.go +++ b/agent/consul/server.go @@ -742,7 +742,6 @@ func NewServer(config *Config, flat Deps, externalGRPCServer *grpc.Server) (*Ser return s.ForwardGRPC(s.grpcConnPool, info, fn) }, }) - s.peerStreamTracker.SetHeartbeatTimeout(s.peerStreamServer.Config.IncomingHeartbeatTimeout) s.peerStreamServer.Register(s.externalGRPCServer) // Initialize internal gRPC server. @@ -1575,12 +1574,12 @@ func (s *Server) Stats() map[string]map[string]string { // GetLANCoordinate returns the coordinate of the node in the LAN gossip // pool. // -// - Clients return a single coordinate for the single gossip pool they are -// in (default, segment, or partition). +// - Clients return a single coordinate for the single gossip pool they are +// in (default, segment, or partition). // -// - Servers return one coordinate for their canonical gossip pool (i.e. -// default partition/segment) and one per segment they are also ancillary -// members of. +// - Servers return one coordinate for their canonical gossip pool (i.e. +// default partition/segment) and one per segment they are also ancillary +// members of. // // NOTE: servers do not emit coordinates for partitioned gossip pools they // are ancillary members of. diff --git a/agent/grpc-external/services/peerstream/server.go b/agent/grpc-external/services/peerstream/server.go index 6568d7bf8..7254c60c7 100644 --- a/agent/grpc-external/services/peerstream/server.go +++ b/agent/grpc-external/services/peerstream/server.go @@ -42,8 +42,8 @@ type Config struct { // outgoingHeartbeatInterval is how often we send a heartbeat. outgoingHeartbeatInterval time.Duration - // IncomingHeartbeatTimeout is how long we'll wait between receiving heartbeats before we close the connection. - IncomingHeartbeatTimeout time.Duration + // incomingHeartbeatTimeout is how long we'll wait between receiving heartbeats before we close the connection. + incomingHeartbeatTimeout time.Duration } //go:generate mockery --name ACLResolver --inpackage @@ -63,8 +63,8 @@ func NewServer(cfg Config) *Server { if cfg.outgoingHeartbeatInterval == 0 { cfg.outgoingHeartbeatInterval = defaultOutgoingHeartbeatInterval } - if cfg.IncomingHeartbeatTimeout == 0 { - cfg.IncomingHeartbeatTimeout = defaultIncomingHeartbeatTimeout + if cfg.incomingHeartbeatTimeout == 0 { + cfg.incomingHeartbeatTimeout = defaultIncomingHeartbeatTimeout } return &Server{ Config: cfg, diff --git a/agent/grpc-external/services/peerstream/stream_resources.go b/agent/grpc-external/services/peerstream/stream_resources.go index 0e6b28f45..ad5d9d463 100644 --- a/agent/grpc-external/services/peerstream/stream_resources.go +++ b/agent/grpc-external/services/peerstream/stream_resources.go @@ -406,7 +406,7 @@ func (s *Server) realHandleStream(streamReq HandleStreamRequest) error { // incomingHeartbeatCtx will complete if incoming heartbeats time out. 
incomingHeartbeatCtx, incomingHeartbeatCtxCancel := - context.WithTimeout(context.Background(), s.IncomingHeartbeatTimeout) + context.WithTimeout(context.Background(), s.incomingHeartbeatTimeout) // NOTE: It's important that we wrap the call to cancel in a wrapper func because during the loop we're // re-assigning the value of incomingHeartbeatCtxCancel and we want the defer to run on the last assigned // value, not the current value. @@ -575,6 +575,7 @@ func (s *Server) realHandleStream(streamReq HandleStreamRequest) error { status.TrackRecvResourceSuccess() } + // We are replying ACK or NACK depending on whether we successfully processed the response. if err := streamSend(reply); err != nil { return fmt.Errorf("failed to send to stream: %v", err) } @@ -605,7 +606,7 @@ func (s *Server) realHandleStream(streamReq HandleStreamRequest) error { // They just can't trace the execution properly for some reason (possibly golang/go#29587). //nolint:govet incomingHeartbeatCtx, incomingHeartbeatCtxCancel = - context.WithTimeout(context.Background(), s.IncomingHeartbeatTimeout) + context.WithTimeout(context.Background(), s.incomingHeartbeatTimeout) } case update := <-subCh: @@ -642,7 +643,6 @@ func (s *Server) realHandleStream(streamReq HandleStreamRequest) error { if err := streamSend(replResp); err != nil { return fmt.Errorf("failed to push data for %q: %w", update.CorrelationID, err) } - status.TrackSendSuccess() } } } diff --git a/agent/grpc-external/services/peerstream/stream_test.go b/agent/grpc-external/services/peerstream/stream_test.go index be4a44ec8..fcdd07422 100644 --- a/agent/grpc-external/services/peerstream/stream_test.go +++ b/agent/grpc-external/services/peerstream/stream_test.go @@ -572,7 +572,7 @@ func TestStreamResources_Server_StreamTracker(t *testing.T) { }) }) - var lastSendAck, lastSendSuccess time.Time + var lastSendAck time.Time testutil.RunStep(t, "ack tracked as success", func(t *testing.T) { ack := &pbpeerstream.ReplicationMessage{ @@ -587,16 +587,13 @@ func TestStreamResources_Server_StreamTracker(t *testing.T) { }, } - lastSendAck = time.Date(2000, time.January, 1, 0, 0, 2, 0, time.UTC) - lastSendSuccess = time.Date(2000, time.January, 1, 0, 0, 3, 0, time.UTC) + lastSendAck = it.FutureNow(1) err := client.Send(ack) require.NoError(t, err) expect := Status{ - Connected: true, - LastAck: lastSendAck, - heartbeatTimeout: defaultIncomingHeartbeatTimeout, - LastSendSuccess: lastSendSuccess, + Connected: true, + LastAck: lastSendAck, } retry.Run(t, func(r *retry.R) { @@ -624,20 +621,17 @@ func TestStreamResources_Server_StreamTracker(t *testing.T) { }, } - lastSendAck = time.Date(2000, time.January, 1, 0, 0, 4, 0, time.UTC) - lastNack = time.Date(2000, time.January, 1, 0, 0, 5, 0, time.UTC) + lastNack = it.FutureNow(1) err := client.Send(nack) require.NoError(t, err) lastNackMsg = "client peer was unable to apply resource: bad bad not good" expect := Status{ - Connected: true, - LastAck: lastSendAck, - LastNack: lastNack, - LastNackMessage: lastNackMsg, - heartbeatTimeout: defaultIncomingHeartbeatTimeout, - LastSendSuccess: lastSendSuccess, + Connected: true, + LastAck: lastSendAck, + LastNack: lastNack, + LastNackMessage: lastNackMsg, } retry.Run(t, func(r *retry.R) { @@ -661,7 +655,7 @@ func TestStreamResources_Server_StreamTracker(t *testing.T) { }, }, } - lastRecvResourceSuccess = it.FutureNow(1) + lastRecvResourceSuccess = it.FutureNow(2) err := client.Send(resp) require.NoError(t, err) @@ -707,8 +701,6 @@ func TestStreamResources_Server_StreamTracker(t *testing.T) { 
ImportedServices: map[string]struct{}{ api.String(): {}, }, - heartbeatTimeout: defaultIncomingHeartbeatTimeout, - LastSendSuccess: lastSendSuccess, } retry.Run(t, func(r *retry.R) { @@ -770,8 +762,6 @@ func TestStreamResources_Server_StreamTracker(t *testing.T) { ImportedServices: map[string]struct{}{ api.String(): {}, }, - heartbeatTimeout: defaultIncomingHeartbeatTimeout, - LastSendSuccess: lastSendSuccess, } retry.Run(t, func(r *retry.R) { @@ -805,8 +795,6 @@ func TestStreamResources_Server_StreamTracker(t *testing.T) { ImportedServices: map[string]struct{}{ api.String(): {}, }, - heartbeatTimeout: defaultIncomingHeartbeatTimeout, - LastSendSuccess: lastSendSuccess, } retry.Run(t, func(r *retry.R) { @@ -839,8 +827,6 @@ func TestStreamResources_Server_StreamTracker(t *testing.T) { ImportedServices: map[string]struct{}{ api.String(): {}, }, - heartbeatTimeout: defaultIncomingHeartbeatTimeout, - LastSendSuccess: lastSendSuccess, } retry.Run(t, func(r *retry.R) { @@ -1143,7 +1129,7 @@ func TestStreamResources_Server_DisconnectsOnHeartbeatTimeout(t *testing.T) { srv, store := newTestServer(t, func(c *Config) { c.Tracker.SetClock(it.Now) - c.IncomingHeartbeatTimeout = 5 * time.Millisecond + c.incomingHeartbeatTimeout = 5 * time.Millisecond }) p := writePeeringToBeDialed(t, store, 1, "my-peer") @@ -1250,7 +1236,7 @@ func TestStreamResources_Server_KeepsConnectionOpenWithHeartbeat(t *testing.T) { srv, store := newTestServer(t, func(c *Config) { c.Tracker.SetClock(it.Now) - c.IncomingHeartbeatTimeout = incomingHeartbeatTimeout + c.incomingHeartbeatTimeout = incomingHeartbeatTimeout }) p := writePeeringToBeDialed(t, store, 1, "my-peer") diff --git a/agent/grpc-external/services/peerstream/stream_tracker.go b/agent/grpc-external/services/peerstream/stream_tracker.go index ffde98ba3..d0dcd3977 100644 --- a/agent/grpc-external/services/peerstream/stream_tracker.go +++ b/agent/grpc-external/services/peerstream/stream_tracker.go @@ -16,8 +16,6 @@ type Tracker struct { // timeNow is a shim for testing. timeNow func() time.Time - - heartbeatTimeout time.Duration } func NewTracker() *Tracker { @@ -35,12 +33,6 @@ func (t *Tracker) SetClock(clock func() time.Time) { } } -func (t *Tracker) SetHeartbeatTimeout(heartbeatTimeout time.Duration) { - t.mu.Lock() - defer t.mu.Unlock() - t.heartbeatTimeout = heartbeatTimeout -} - // Register a stream for a given peer but do not mark it as connected. func (t *Tracker) Register(id string) (*MutableStatus, error) { t.mu.Lock() @@ -52,7 +44,7 @@ func (t *Tracker) Register(id string) (*MutableStatus, error) { func (t *Tracker) registerLocked(id string, initAsConnected bool) (*MutableStatus, bool, error) { status, ok := t.streams[id] if !ok { - status = newMutableStatus(t.timeNow, t.heartbeatTimeout, initAsConnected) + status = newMutableStatus(t.timeNow, initAsConnected) t.streams[id] = status return status, true, nil } @@ -152,8 +144,6 @@ type MutableStatus struct { // Status contains information about the replication stream to a peer cluster. // TODO(peering): There's a lot of fields here... type Status struct { - heartbeatTimeout time.Duration - // Connected is true when there is an open stream for the peer. Connected bool @@ -182,9 +172,6 @@ type Status struct { // LastSendErrorMessage tracks the last error message when sending into the stream. LastSendErrorMessage string - // LastSendSuccess tracks the time of the last success response sent into the stream. - LastSendSuccess time.Time - // LastRecvHeartbeat tracks when we last received a heartbeat from our peer. 
LastRecvHeartbeat time.Time @@ -216,38 +203,40 @@ func (s *Status) GetExportedServicesCount() uint64 { // IsHealthy is a convenience func that returns true/ false for a peering status. // We define a peering as unhealthy if its status satisfies one of the following: -// - If heartbeat hasn't been received within the IncomingHeartbeatTimeout -// - If the last sent error is newer than last sent success +// - If it is disconnected +// - If the last received Nack is newer than last received Ack // - If the last received error is newer than last received success // If none of these conditions apply, we call the peering healthy. func (s *Status) IsHealthy() bool { - if time.Now().Sub(s.LastRecvHeartbeat) > s.heartbeatTimeout { - // 1. If heartbeat hasn't been received for a while - report unhealthy + if !s.Connected { return false } - if s.LastSendError.After(s.LastSendSuccess) { - // 2. If last sent error is newer than last sent success - report unhealthy + // If stream is in a disconnected state, report unhealthy. + // This should be logically equivalent to s.Connected above. + if !s.DisconnectTime.IsZero() { return false } + // If last Nack is after last Ack, it means the peer is unable to + // handle our replication messages. + if s.LastNack.After(s.LastAck) { + return false + } + + // If last recv error is newer than last recv success - report unhealthy if s.LastRecvError.After(s.LastRecvResourceSuccess) { - // 3. If last recv error is newer than last recv success - report unhealthy return false } return true } -func newMutableStatus(now func() time.Time, heartbeatTimeout time.Duration, connected bool) *MutableStatus { - if heartbeatTimeout.Microseconds() == 0 { - heartbeatTimeout = defaultIncomingHeartbeatTimeout - } +func newMutableStatus(now func() time.Time, connected bool) *MutableStatus { return &MutableStatus{ Status: Status{ - Connected: connected, - heartbeatTimeout: heartbeatTimeout, - NeverConnected: !connected, + Connected: connected, + NeverConnected: !connected, }, timeNow: now, doneCh: make(chan struct{}), @@ -271,12 +260,6 @@ func (s *MutableStatus) TrackSendError(error string) { s.mu.Unlock() } -func (s *MutableStatus) TrackSendSuccess() { - s.mu.Lock() - s.LastSendSuccess = s.timeNow().UTC() - s.mu.Unlock() -} - // TrackRecvResourceSuccess tracks receiving a replicated resource. 
func (s *MutableStatus) TrackRecvResourceSuccess() { s.mu.Lock() diff --git a/agent/grpc-external/services/peerstream/stream_tracker_test.go b/agent/grpc-external/services/peerstream/stream_tracker_test.go index 8cdcbc79a..7500ccd4b 100644 --- a/agent/grpc-external/services/peerstream/stream_tracker_test.go +++ b/agent/grpc-external/services/peerstream/stream_tracker_test.go @@ -16,11 +16,10 @@ const ( func TestStatus_IsHealthy(t *testing.T) { type testcase struct { - name string - dontConnect bool - modifierFunc func(status *MutableStatus) - expectedVal bool - heartbeatTimeout time.Duration + name string + dontConnect bool + modifierFunc func(status *MutableStatus) + expectedVal bool } tcs := []testcase{ @@ -30,56 +29,39 @@ func TestStatus_IsHealthy(t *testing.T) { dontConnect: true, }, { - name: "no heartbeat, unhealthy", - expectedVal: false, - }, - { - name: "heartbeat is not received, unhealthy", + name: "disconnect time not zero", expectedVal: false, modifierFunc: func(status *MutableStatus) { - // set heartbeat - status.LastRecvHeartbeat = time.Now().Add(-1 * time.Second) - }, - heartbeatTimeout: 1 * time.Second, - }, - { - name: "send error before send success", - expectedVal: false, - modifierFunc: func(status *MutableStatus) { - // set heartbeat - status.LastRecvHeartbeat = time.Now() - - status.LastSendSuccess = time.Now() - status.LastSendError = time.Now() + status.DisconnectTime = time.Now() }, }, { - name: "received error before received success", + name: "receive error before receive success", expectedVal: false, modifierFunc: func(status *MutableStatus) { - // set heartbeat - status.LastRecvHeartbeat = time.Now() - - status.LastRecvResourceSuccess = time.Now() - status.LastRecvError = time.Now() + now := time.Now() + status.LastRecvResourceSuccess = now + status.LastRecvError = now.Add(1 * time.Second) + }, + }, + { + name: "receive error before receive success", + expectedVal: false, + modifierFunc: func(status *MutableStatus) { + now := time.Now() + status.LastAck = now + status.LastNack = now.Add(1 * time.Second) }, }, { name: "healthy", expectedVal: true, - modifierFunc: func(status *MutableStatus) { - // set heartbeat - status.LastRecvHeartbeat = time.Now() - }, }, } for _, tc := range tcs { t.Run(tc.name, func(t *testing.T) { tracker := NewTracker() - if tc.heartbeatTimeout.Microseconds() != 0 { - tracker.SetHeartbeatTimeout(tc.heartbeatTimeout) - } if !tc.dontConnect { st, err := tracker.Connected(aPeerID) @@ -120,8 +102,7 @@ func TestTracker_EnsureConnectedDisconnected(t *testing.T) { require.NoError(t, err) expect := Status{ - Connected: true, - heartbeatTimeout: defaultIncomingHeartbeatTimeout, + Connected: true, } status, ok := tracker.StreamStatus(peerID) @@ -147,9 +128,8 @@ func TestTracker_EnsureConnectedDisconnected(t *testing.T) { lastSuccess = it.base.Add(time.Duration(sequence) * time.Second).UTC() expect := Status{ - Connected: true, - LastAck: lastSuccess, - heartbeatTimeout: defaultIncomingHeartbeatTimeout, + Connected: true, + LastAck: lastSuccess, } require.Equal(t, expect, status) }) @@ -159,10 +139,9 @@ func TestTracker_EnsureConnectedDisconnected(t *testing.T) { sequence++ expect := Status{ - Connected: false, - DisconnectTime: it.base.Add(time.Duration(sequence) * time.Second).UTC(), - LastAck: lastSuccess, - heartbeatTimeout: defaultIncomingHeartbeatTimeout, + Connected: false, + DisconnectTime: it.base.Add(time.Duration(sequence) * time.Second).UTC(), + LastAck: lastSuccess, } status, ok := tracker.StreamStatus(peerID) require.True(t, ok) @@ 
-174,9 +153,8 @@ func TestTracker_EnsureConnectedDisconnected(t *testing.T) { require.NoError(t, err) expect := Status{ - Connected: true, - LastAck: lastSuccess, - heartbeatTimeout: defaultIncomingHeartbeatTimeout, + Connected: true, + LastAck: lastSuccess, // DisconnectTime gets cleared on re-connect. } From 78bf8437d8635a5b3c75851671106d5a0162c652 Mon Sep 17 00:00:00 2001 From: "Chris S. Kim" Date: Mon, 29 Aug 2022 10:19:46 -0400 Subject: [PATCH 05/55] Fix test --- agent/grpc-external/services/peerstream/stream_test.go | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/agent/grpc-external/services/peerstream/stream_test.go b/agent/grpc-external/services/peerstream/stream_test.go index fcdd07422..bee02592a 100644 --- a/agent/grpc-external/services/peerstream/stream_test.go +++ b/agent/grpc-external/services/peerstream/stream_test.go @@ -655,7 +655,7 @@ func TestStreamResources_Server_StreamTracker(t *testing.T) { }, }, } - lastRecvResourceSuccess = it.FutureNow(2) + lastRecvResourceSuccess = it.FutureNow(1) err := client.Send(resp) require.NoError(t, err) From a58e943502ad497780c5aefb0b22b6df3d17471b Mon Sep 17 00:00:00 2001 From: "Chris S. Kim" Date: Mon, 29 Aug 2022 10:34:50 -0400 Subject: [PATCH 06/55] Rename test --- agent/grpc-external/services/peerstream/stream_tracker_test.go | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/agent/grpc-external/services/peerstream/stream_tracker_test.go b/agent/grpc-external/services/peerstream/stream_tracker_test.go index 7500ccd4b..555e76258 100644 --- a/agent/grpc-external/services/peerstream/stream_tracker_test.go +++ b/agent/grpc-external/services/peerstream/stream_tracker_test.go @@ -45,7 +45,7 @@ func TestStatus_IsHealthy(t *testing.T) { }, }, { - name: "receive error before receive success", + name: "nack before ack", expectedVal: false, modifierFunc: func(status *MutableStatus) { now := time.Now() From 680ff580a353296beefc670afab57dd23e7a9b89 Mon Sep 17 00:00:00 2001 From: DanStough Date: Tue, 16 Aug 2022 16:47:39 -0400 Subject: [PATCH 07/55] chore: add multi-arch docker build for testing --- GNUmakefile | 30 ++++++++++++++++--- .../docker/Consul-Dev-Multiarch.dockerfile | 5 ++++ 2 files changed, 31 insertions(+), 4 deletions(-) create mode 100644 build-support/docker/Consul-Dev-Multiarch.dockerfile diff --git a/GNUmakefile b/GNUmakefile index 6327ea579..77d6a7ec2 100644 --- a/GNUmakefile +++ b/GNUmakefile @@ -16,6 +16,7 @@ PROTOC_GO_INJECT_TAG_VERSION='v1.3.0' GOTAGS ?= GOPATH=$(shell go env GOPATH) +GOARCH?=$(shell go env GOARCH) MAIN_GOPATH=$(shell go env GOPATH | cut -d: -f1) export PATH := $(PWD)/bin:$(GOPATH)/bin:$(PATH) @@ -152,7 +153,28 @@ dev-docker: linux @docker pull consul:$(CONSUL_IMAGE_VERSION) >/dev/null @echo "Building Consul Development container - $(CONSUL_DEV_IMAGE)" # 'consul:local' tag is needed to run the integration tests - @DOCKER_DEFAULT_PLATFORM=linux/amd64 docker build $(NOCACHE) $(QUIET) -t '$(CONSUL_DEV_IMAGE)' -t 'consul:local' --build-arg CONSUL_IMAGE_VERSION=$(CONSUL_IMAGE_VERSION) $(CURDIR)/pkg/bin/linux_amd64 -f $(CURDIR)/build-support/docker/Consul-Dev.dockerfile + @docker buildx use default && docker buildx build -t 'consul:local' \ + --platform linux/$(GOARCH) \ + --build-arg CONSUL_IMAGE_VERSION=$(CONSUL_IMAGE_VERSION) \ + --load \ + -f $(CURDIR)/build-support/docker/Consul-Dev-Multiarch.dockerfile $(CURDIR)/pkg/bin/ + +check-remote-dev-image-env: +ifndef REMOTE_DEV_IMAGE + $(error REMOTE_DEV_IMAGE is undefined: set this image to /:, e.g. 
hashicorp/consul-k8s-dev:latest) +endif + +remote-docker: check-remote-dev-image-env + $(MAKE) GOARCH=amd64 linux + $(MAKE) GOARCH=arm64 linux + @echo "Pulling consul container image - $(CONSUL_IMAGE_VERSION)" + @docker pull consul:$(CONSUL_IMAGE_VERSION) >/dev/null + @echo "Building and Pushing Consul Development container - $(REMOTE_DEV_IMAGE)" + @docker buildx use default && docker buildx build -t '$(REMOTE_DEV_IMAGE)' \ + --platform linux/amd64,linux/arm64 \ + --build-arg CONSUL_IMAGE_VERSION=$(CONSUL_IMAGE_VERSION) \ + --push \ + -f $(CURDIR)/build-support/docker/Consul-Dev-Multiarch.dockerfile $(CURDIR)/pkg/bin/ # In CircleCI, the linux binary will be attached from a previous step at bin/. This make target # should only run in CI and not locally. @@ -174,10 +196,10 @@ ifeq ($(CIRCLE_BRANCH), main) @docker push $(CI_DEV_DOCKER_NAMESPACE)/$(CI_DEV_DOCKER_IMAGE_NAME):latest endif -# linux builds a linux binary independent of the source platform +# linux builds a linux binary compatible with the source platform linux: - @mkdir -p ./pkg/bin/linux_amd64 - CGO_ENABLED=0 GOOS=linux GOARCH=amd64 go build -o ./pkg/bin/linux_amd64 -ldflags "$(GOLDFLAGS)" -tags "$(GOTAGS)" + @mkdir -p ./pkg/bin/linux_$(GOARCH) + CGO_ENABLED=0 GOOS=linux GOARCH=$(GOARCH) go build -o ./pkg/bin/linux_$(GOARCH) -ldflags "$(GOLDFLAGS)" -tags "$(GOTAGS)" # dist builds binaries for all platforms and packages them for distribution dist: diff --git a/build-support/docker/Consul-Dev-Multiarch.dockerfile b/build-support/docker/Consul-Dev-Multiarch.dockerfile new file mode 100644 index 000000000..a3069bd99 --- /dev/null +++ b/build-support/docker/Consul-Dev-Multiarch.dockerfile @@ -0,0 +1,5 @@ +ARG CONSUL_IMAGE_VERSION=latest +FROM consul:${CONSUL_IMAGE_VERSION} +RUN apk update && apk add iptables +ARG TARGETARCH +COPY linux_${TARGETARCH}/consul /bin/consul From 13992d5dc82972ab29e6027be184b737c05330f8 Mon Sep 17 00:00:00 2001 From: Eric Haberkorn Date: Mon, 29 Aug 2022 13:46:41 -0400 Subject: [PATCH 08/55] Update max_ejection_percent on outlier detection for peered clusters to 100% (#14373) We can't trust health checks on peered services when service resolvers, splitters and routers are used. --- .changelog/14373.txt | 3 +++ agent/xds/clusters.go | 9 ++++++++- .../connect-proxy-with-peered-upstreams.latest.golden | 4 ++-- ...transparent-proxy-with-peered-upstreams.latest.golden | 6 +++--- 4 files changed, 16 insertions(+), 6 deletions(-) create mode 100644 .changelog/14373.txt diff --git a/.changelog/14373.txt b/.changelog/14373.txt new file mode 100644 index 000000000..d9531b09e --- /dev/null +++ b/.changelog/14373.txt @@ -0,0 +1,3 @@ +```release-note:improvement +xds: Set `max_ejection_percent` on Envoy's outlier detection to 100% for peered services. +``` diff --git a/agent/xds/clusters.go b/agent/xds/clusters.go index adde810d3..c3ac71847 100644 --- a/agent/xds/clusters.go +++ b/agent/xds/clusters.go @@ -772,6 +772,13 @@ func (s *ResourceGenerator) makeUpstreamClusterForPeerService( clusterName := generatePeeredClusterName(uid, tbs) + outlierDetection := ToOutlierDetection(cfg.PassiveHealthCheck) + // We can't rely on health checks for services on cluster peers because they + // don't take into account service resolvers, splitters and routers. Setting + // MaxEjectionPercent too 100% gives outlier detection the power to eject the + // entire cluster. 
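The override described in this comment can be seen in isolation in the sketch below, assuming the go-control-plane cluster v3 and protobuf wrappers packages that clusters.go already imports:

```go
package main

import (
	"fmt"

	envoy_cluster_v3 "github.com/envoyproxy/go-control-plane/envoy/config/cluster/v3"
	"github.com/golang/protobuf/ptypes/wrappers"
)

// forceFullEjection applies the same override as above: whatever passive
// health check settings the user supplied, outlier detection may eject up to
// 100% of a peered cluster's endpoints, since those health checks cannot be
// trusted across a peering.
func forceFullEjection(od *envoy_cluster_v3.OutlierDetection) *envoy_cluster_v3.OutlierDetection {
	if od == nil {
		od = &envoy_cluster_v3.OutlierDetection{}
	}
	od.MaxEjectionPercent = &wrappers.UInt32Value{Value: 100}
	return od
}

func main() {
	od := forceFullEjection(nil)
	fmt.Println(od.MaxEjectionPercent.Value) // 100

	// The resulting message is attached to the peered upstream cluster.
	_ = &envoy_cluster_v3.Cluster{OutlierDetection: od}
}
```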
+ outlierDetection.MaxEjectionPercent = &wrappers.UInt32Value{Value: 100} + s.Logger.Trace("generating cluster for", "cluster", clusterName) if c == nil { c = &envoy_cluster_v3.Cluster{ @@ -785,7 +792,7 @@ func (s *ResourceGenerator) makeUpstreamClusterForPeerService( CircuitBreakers: &envoy_cluster_v3.CircuitBreakers{ Thresholds: makeThresholdsIfNeeded(cfg.Limits), }, - OutlierDetection: ToOutlierDetection(cfg.PassiveHealthCheck), + OutlierDetection: outlierDetection, } if cfg.Protocol == "http2" || cfg.Protocol == "grpc" { if err := s.setHttp2ProtocolOptions(c); err != nil { diff --git a/agent/xds/testdata/clusters/connect-proxy-with-peered-upstreams.latest.golden b/agent/xds/testdata/clusters/connect-proxy-with-peered-upstreams.latest.golden index d7c23515f..29059b143 100644 --- a/agent/xds/testdata/clusters/connect-proxy-with-peered-upstreams.latest.golden +++ b/agent/xds/testdata/clusters/connect-proxy-with-peered-upstreams.latest.golden @@ -58,7 +58,7 @@ "dnsRefreshRate": "10s", "dnsLookupFamily": "V4_ONLY", "outlierDetection": { - + "maxEjectionPercent": 100 }, "commonLbConfig": { "healthyPanicThreshold": { @@ -115,7 +115,7 @@ }, "outlierDetection": { - + "maxEjectionPercent": 100 }, "commonLbConfig": { "healthyPanicThreshold": { diff --git a/agent/xds/testdata/clusters/transparent-proxy-with-peered-upstreams.latest.golden b/agent/xds/testdata/clusters/transparent-proxy-with-peered-upstreams.latest.golden index 0dbbf4277..d1f6d0bb0 100644 --- a/agent/xds/testdata/clusters/transparent-proxy-with-peered-upstreams.latest.golden +++ b/agent/xds/testdata/clusters/transparent-proxy-with-peered-upstreams.latest.golden @@ -18,7 +18,7 @@ }, "outlierDetection": { - + "maxEjectionPercent": 100 }, "commonLbConfig": { "healthyPanicThreshold": { @@ -75,7 +75,7 @@ }, "outlierDetection": { - + "maxEjectionPercent": 100 }, "commonLbConfig": { "healthyPanicThreshold": { @@ -157,7 +157,7 @@ }, "outlierDetection": { - + "maxEjectionPercent": 100 }, "commonLbConfig": { "healthyPanicThreshold": { From f790d84c040fc800b3189c96ae1fe9cd5970ce8d Mon Sep 17 00:00:00 2001 From: freddygv Date: Mon, 29 Aug 2022 12:00:30 -0600 Subject: [PATCH 09/55] Add validation to prevent switching dialing mode This prevents unexpected changes to the output of ShouldDial, which should never change unless a peering is deleted and recreated. --- agent/consul/leader_peering_test.go | 12 +++-- agent/consul/state/peering.go | 10 ++++ agent/consul/state/peering_test.go | 76 ++++++++++++++++++++--------- 3 files changed, 72 insertions(+), 26 deletions(-) diff --git a/agent/consul/leader_peering_test.go b/agent/consul/leader_peering_test.go index b8b5166d8..206608ed4 100644 --- a/agent/consul/leader_peering_test.go +++ b/agent/consul/leader_peering_test.go @@ -40,6 +40,7 @@ func TestLeader_PeeringSync_Lifecycle_ClientDeletion(t *testing.T) { testLeader_PeeringSync_Lifecycle_ClientDeletion(t, true) }) } + func testLeader_PeeringSync_Lifecycle_ClientDeletion(t *testing.T, enableTLS bool) { if testing.Short() { t.Skip("too slow for testing.Short") @@ -137,9 +138,11 @@ func testLeader_PeeringSync_Lifecycle_ClientDeletion(t *testing.T, enableTLS boo // Delete the peering to trigger the termination sequence. 
deleted := &pbpeering.Peering{ - ID: p.Peering.ID, - Name: "my-peer-acceptor", - DeletedAt: structs.TimeToProto(time.Now()), + ID: p.Peering.ID, + Name: "my-peer-acceptor", + State: pbpeering.PeeringState_DELETING, + PeerServerAddresses: p.Peering.PeerServerAddresses, + DeletedAt: structs.TimeToProto(time.Now()), } require.NoError(t, dialer.fsm.State().PeeringWrite(2000, &pbpeering.PeeringWriteRequest{Peering: deleted})) dialer.logger.Trace("deleted peering for my-peer-acceptor") @@ -262,6 +265,7 @@ func testLeader_PeeringSync_Lifecycle_AcceptorDeletion(t *testing.T, enableTLS b deleted := &pbpeering.Peering{ ID: p.Peering.PeerID, Name: "my-peer-dialer", + State: pbpeering.PeeringState_DELETING, DeletedAt: structs.TimeToProto(time.Now()), } @@ -431,6 +435,7 @@ func TestLeader_Peering_DeferredDeletion(t *testing.T) { Peering: &pbpeering.Peering{ ID: peerID, Name: peerName, + State: pbpeering.PeeringState_DELETING, DeletedAt: structs.TimeToProto(time.Now()), }, })) @@ -1165,6 +1170,7 @@ func TestLeader_Peering_NoDeletionWhenPeeringDisabled(t *testing.T) { Peering: &pbpeering.Peering{ ID: peerID, Name: peerName, + State: pbpeering.PeeringState_DELETING, DeletedAt: structs.TimeToProto(time.Now()), }, })) diff --git a/agent/consul/state/peering.go b/agent/consul/state/peering.go index 9457dd811..eef76aa72 100644 --- a/agent/consul/state/peering.go +++ b/agent/consul/state/peering.go @@ -535,6 +535,12 @@ func (s *Store) PeeringWrite(idx uint64, req *pbpeering.PeeringWriteRequest) err if req.Peering.Name == "" { return errors.New("Missing Peering Name") } + if req.Peering.State == pbpeering.PeeringState_DELETING && (req.Peering.DeletedAt == nil || structs.IsZeroProtoTime(req.Peering.DeletedAt)) { + return errors.New("Missing deletion time for peering in deleting state") + } + if req.Peering.DeletedAt != nil && !structs.IsZeroProtoTime(req.Peering.DeletedAt) && req.Peering.State != pbpeering.PeeringState_DELETING { + return fmt.Errorf("Unexpected state for peering with deletion time: %s", pbpeering.PeeringStateToAPI(req.Peering.State)) + } // Ensure the name is unique (cannot conflict with another peering with a different ID). 
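The two state checks added above amount to a pair of invariants on the write request: a peering marked DELETING must carry a deletion time, and a deletion time may only appear on a peering in the DELETING state. A minimal sketch, using simplified stand-in types instead of pbpeering and protobuf timestamps:

```go
package main

import (
	"errors"
	"fmt"
	"time"
)

// peeringWrite is a simplified stand-in for the request being validated.
type peeringWrite struct {
	Name      string
	State     string // e.g. "DELETING", "TERMINATED"
	DeletedAt time.Time
}

// validateDeletionState captures the invariants added above.
func validateDeletionState(p peeringWrite) error {
	if p.State == "DELETING" && p.DeletedAt.IsZero() {
		return errors.New("missing deletion time for peering in deleting state")
	}
	if !p.DeletedAt.IsZero() && p.State != "DELETING" {
		return fmt.Errorf("unexpected state for peering with deletion time: %s", p.State)
	}
	return nil
}

func main() {
	fmt.Println(validateDeletionState(peeringWrite{Name: "foo", State: "DELETING"}))                        // missing deletion time
	fmt.Println(validateDeletionState(peeringWrite{Name: "foo", State: "DELETING", DeletedAt: time.Now()})) // <nil>
}
```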
_, existing, err := peeringReadTxn(tx, nil, Query{ @@ -546,6 +552,10 @@ func (s *Store) PeeringWrite(idx uint64, req *pbpeering.PeeringWriteRequest) err } if existing != nil { + if req.Peering.ShouldDial() != existing.ShouldDial() { + return fmt.Errorf("Cannot switch peering dialing mode from %t to %t", existing.ShouldDial(), req.Peering.ShouldDial()) + } + if req.Peering.ID != existing.ID { return fmt.Errorf("A peering already exists with the name %q and a different ID %q", req.Peering.Name, existing.ID) } diff --git a/agent/consul/state/peering_test.go b/agent/consul/state/peering_test.go index 1dc2446fe..a90727f0e 100644 --- a/agent/consul/state/peering_test.go +++ b/agent/consul/state/peering_test.go @@ -950,6 +950,7 @@ func TestStore_Peering_Watch(t *testing.T) { Peering: &pbpeering.Peering{ ID: testFooPeerID, Name: "foo", + State: pbpeering.PeeringState_DELETING, DeletedAt: structs.TimeToProto(time.Now()), }, }) @@ -976,6 +977,7 @@ func TestStore_Peering_Watch(t *testing.T) { err := s.PeeringWrite(lastIdx, &pbpeering.PeeringWriteRequest{Peering: &pbpeering.Peering{ ID: testBarPeerID, Name: "bar", + State: pbpeering.PeeringState_DELETING, DeletedAt: structs.TimeToProto(time.Now()), }, }) @@ -1077,6 +1079,7 @@ func TestStore_PeeringList_Watch(t *testing.T) { Peering: &pbpeering.Peering{ ID: testFooPeerID, Name: "foo", + State: pbpeering.PeeringState_DELETING, DeletedAt: structs.TimeToProto(time.Now()), Partition: structs.NodeEnterpriseMetaInDefaultPartition().PartitionOrEmpty(), }, @@ -1199,6 +1202,22 @@ func TestStore_PeeringWrite(t *testing.T) { err: `A peering already exists with the name "baz" and a different ID`, }, }, + { + name: "cannot change dialer status for baz", + input: &pbpeering.PeeringWriteRequest{ + Peering: &pbpeering.Peering{ + ID: "123", + Name: "baz", + State: pbpeering.PeeringState_FAILING, + // Excluding the peer server addresses leads to baz not being considered a dialer. 
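As the comment notes, whether a peering is a dialer is presumably derived from whether it carries peer server addresses, and the guard added to PeeringWrite rejects writes that would flip that answer for an existing peering. A small sketch of the idea:

```go
package main

import "fmt"

// shouldDial sketches how dialing mode is presumably derived from the peering
// record: a peering that knows the peer's server addresses is the dialing side.
func shouldDial(peerServerAddresses []string) bool {
	return len(peerServerAddresses) > 0
}

func main() {
	dialer := []string{"localhost:8502"}
	var acceptor []string

	// A write that dropped the addresses from an existing dialing peering
	// would change this value and is therefore rejected.
	fmt.Println(shouldDial(dialer), shouldDial(acceptor)) // true false
}
```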
+ // PeerServerAddresses: []string{"localhost:8502"}, + Partition: structs.NodeEnterpriseMetaInDefaultPartition().PartitionOrEmpty(), + }, + }, + expect: expectations{ + err: "Cannot switch peering dialing mode from true to false", + }, + }, { name: "update baz", input: &pbpeering.PeeringWriteRequest{ @@ -1273,15 +1292,17 @@ func TestStore_PeeringWrite(t *testing.T) { }, }, { - name: "cannot edit data during no-op termination", + name: "cannot modify peering during no-op termination", input: &pbpeering.PeeringWriteRequest{ Peering: &pbpeering.Peering{ - ID: testBazPeerID, - Name: "baz", - State: pbpeering.PeeringState_TERMINATED, - // Attempt to modify the addresses - Meta: map[string]string{"foo": "bar"}, - Partition: structs.NodeEnterpriseMetaInDefaultPartition().PartitionOrEmpty(), + ID: testBazPeerID, + Name: "baz", + State: pbpeering.PeeringState_TERMINATED, + Partition: structs.NodeEnterpriseMetaInDefaultPartition().PartitionOrEmpty(), + PeerServerAddresses: []string{"localhost:8502"}, + + // Attempt to add metadata + Meta: map[string]string{"foo": "bar"}, }, }, expect: expectations{ @@ -1320,11 +1341,12 @@ func TestStore_PeeringWrite(t *testing.T) { name: "deleting a deleted peering is a no-op", input: &pbpeering.PeeringWriteRequest{ Peering: &pbpeering.Peering{ - ID: testBazPeerID, - Name: "baz", - State: pbpeering.PeeringState_DELETING, - DeletedAt: structs.TimeToProto(time.Now()), - Partition: structs.NodeEnterpriseMetaInDefaultPartition().PartitionOrEmpty(), + ID: testBazPeerID, + Name: "baz", + State: pbpeering.PeeringState_DELETING, + PeerServerAddresses: []string{"localhost:8502"}, + DeletedAt: structs.TimeToProto(time.Now()), + Partition: structs.NodeEnterpriseMetaInDefaultPartition().PartitionOrEmpty(), }, }, expect: expectations{ @@ -1343,10 +1365,11 @@ func TestStore_PeeringWrite(t *testing.T) { name: "terminating a peering marked for deletion is a no-op", input: &pbpeering.PeeringWriteRequest{ Peering: &pbpeering.Peering{ - ID: testBazPeerID, - Name: "baz", - State: pbpeering.PeeringState_TERMINATED, - Partition: structs.NodeEnterpriseMetaInDefaultPartition().PartitionOrEmpty(), + ID: testBazPeerID, + Name: "baz", + State: pbpeering.PeeringState_TERMINATED, + PeerServerAddresses: []string{"localhost:8502"}, + Partition: structs.NodeEnterpriseMetaInDefaultPartition().PartitionOrEmpty(), }, }, expect: expectations{ @@ -1364,13 +1387,15 @@ func TestStore_PeeringWrite(t *testing.T) { name: "cannot update peering marked for deletion", input: &pbpeering.PeeringWriteRequest{ Peering: &pbpeering.Peering{ - ID: testBazPeerID, - Name: "baz", + ID: testBazPeerID, + Name: "baz", + PeerServerAddresses: []string{"localhost:8502"}, + Partition: structs.NodeEnterpriseMetaInDefaultPartition().PartitionOrEmpty(), + // Attempt to add metadata Meta: map[string]string{ "source": "kubernetes", }, - Partition: structs.NodeEnterpriseMetaInDefaultPartition().PartitionOrEmpty(), }, }, expect: expectations{ @@ -1381,10 +1406,12 @@ func TestStore_PeeringWrite(t *testing.T) { name: "cannot create peering marked for deletion", input: &pbpeering.PeeringWriteRequest{ Peering: &pbpeering.Peering{ - ID: testFooPeerID, - Name: "foo", - DeletedAt: structs.TimeToProto(time.Now()), - Partition: structs.NodeEnterpriseMetaInDefaultPartition().PartitionOrEmpty(), + ID: testFooPeerID, + Name: "foo", + PeerServerAddresses: []string{"localhost:8502"}, + State: pbpeering.PeeringState_DELETING, + DeletedAt: structs.TimeToProto(time.Now()), + Partition: 
structs.NodeEnterpriseMetaInDefaultPartition().PartitionOrEmpty(), }, }, expect: expectations{ @@ -1414,6 +1441,7 @@ func TestStore_PeeringDelete(t *testing.T) { Peering: &pbpeering.Peering{ ID: testFooPeerID, Name: "foo", + State: pbpeering.PeeringState_DELETING, DeletedAt: structs.TimeToProto(time.Now()), }, })) @@ -1927,6 +1955,7 @@ func TestStateStore_PeeringsForService(t *testing.T) { copied := pbpeering.Peering{ ID: tp.peering.ID, Name: tp.peering.Name, + State: pbpeering.PeeringState_DELETING, DeletedAt: structs.TimeToProto(time.Now()), } require.NoError(t, s.PeeringWrite(lastIdx, &pbpeering.PeeringWriteRequest{Peering: &copied})) @@ -2369,6 +2398,7 @@ func TestStore_TrustBundleListByService(t *testing.T) { Peering: &pbpeering.Peering{ ID: peerID1, Name: "peer1", + State: pbpeering.PeeringState_DELETING, DeletedAt: structs.TimeToProto(time.Now()), }, })) From 850dc52f4f8d0412afdc74aaad440737592df03f Mon Sep 17 00:00:00 2001 From: freddygv Date: Mon, 29 Aug 2022 12:07:18 -0600 Subject: [PATCH 10/55] Add changelog entry --- .changelog/14364.txt | 3 +++ 1 file changed, 3 insertions(+) create mode 100644 .changelog/14364.txt diff --git a/.changelog/14364.txt b/.changelog/14364.txt new file mode 100644 index 000000000..d2f777af4 --- /dev/null +++ b/.changelog/14364.txt @@ -0,0 +1,3 @@ +```release-note:bugfix +peering: Fix issue preventing deletion and recreation of peerings in TERMINATED state. +``` \ No newline at end of file From 91be64887e14c37e0235df5ca9f395e9cd470544 Mon Sep 17 00:00:00 2001 From: David Yu Date: Mon, 29 Aug 2022 11:34:39 -0700 Subject: [PATCH 11/55] docs: Update Consul K8s release notes (#14379) --- website/content/docs/release-notes/consul-k8s/v0_47_x.mdx | 3 +-- 1 file changed, 1 insertion(+), 2 deletions(-) diff --git a/website/content/docs/release-notes/consul-k8s/v0_47_x.mdx b/website/content/docs/release-notes/consul-k8s/v0_47_x.mdx index b040787c5..a9228e998 100644 --- a/website/content/docs/release-notes/consul-k8s/v0_47_x.mdx +++ b/website/content/docs/release-notes/consul-k8s/v0_47_x.mdx @@ -20,8 +20,7 @@ description: >- ## Supported Software - Consul 1.11.x, Consul 1.12.x and Consul 1.13.1+ -- Kubernetes 1.19+ - - Kubernetes 1.24 is not supported at this time. +- Kubernetes 1.19-1.23 - Kubectl 1.21+ - Envoy proxy support is determined by the Consul version deployed. Refer to [Envoy Integration](/docs/connect/proxies/envoy) for details. From f5139f0c17be9aa87eae06f6590a712477261e95 Mon Sep 17 00:00:00 2001 From: David Yu Date: Mon, 29 Aug 2022 13:07:08 -0700 Subject: [PATCH 12/55] docs: Cluster peering with Transparent Proxy updates (#14369) * Update Cluster Peering docs to show example with Transparent Proxy Co-authored-by: trujillo-adam <47586768+trujillo-adam@users.noreply.github.com> --- .../docs/connect/cluster-peering/k8s.mdx | 271 ++++++++++++++---- 1 file changed, 215 insertions(+), 56 deletions(-) diff --git a/website/content/docs/connect/cluster-peering/k8s.mdx b/website/content/docs/connect/cluster-peering/k8s.mdx index 35f17959c..b18633f09 100644 --- a/website/content/docs/connect/cluster-peering/k8s.mdx +++ b/website/content/docs/connect/cluster-peering/k8s.mdx @@ -25,50 +25,82 @@ You must implement the following requirements to create and use cluster peering - At least two Kubernetes clusters - The installation must be running on Consul on Kubernetes version 0.47.1 or later -### Helm chart configuration +### Prepare for install -To establish cluster peering through Kubernetes, deploy clusters with the following Helm values. +1. 
After provisioning a Kubernetes cluster and setting up your kubeconfig file to manage access to multiple Kubernetes clusters, export the Kubernetes context names for future use with `kubectl`. For more information on how to use kubeconfig and contexts, refer to [Configure access to multiple clusters](https://kubernetes.io/docs/tasks/access-application-cluster/configure-access-multiple-clusters/) on the Kubernetes documentation website. - + You can use the following methods to get the context names for your clusters: + + * Issue the `kubectl config current-context` command to get the context for the cluster you are currently in. + * Issue the `kubectl config get-contexts` command to get all configured contexts in your kubeconfig file. + + ```shell-session + $ export CLUSTER1_CONTEXT= + $ export CLUSTER2_CONTEXT= + ``` - ```yaml - global: - image: "hashicorp/consul:1.13.1" - peering: +1. To establish cluster peering through Kubernetes, create a `values.yaml` file with the following Helm values. + + With these values, + the servers in each cluster will be exposed over a Kubernetes Load balancer service. This service can be customized + using [`server.exposeService`](/docs/k8s/helm#v-server-exposeservice). + + When generating a peering token from one of the clusters, Consul uses the address(es) of the load balancer in the peering token so that the peering stream goes through the load balancer in front of the servers. For customizing the addresses used in the peering token, refer to [`global.peering.tokenGeneration`](/docs/k8s/helm#v-global-peering-tokengeneration). + + + + ```yaml + global: + image: "hashicorp/consul:1.13.1" + peering: + enabled: true + connectInject: enabled: true - connectInject: - enabled: true - controller: - enabled: true - meshGateway: - enabled: true - replicas: 1 - ``` + dns: + enabled: true + enableRedirection: true + server: + exposeService: + enabeld: true + controller: + enabled: true + meshGateway: + enabled: true + replicas: 1 + ``` - + + +### Install Consul on Kubernetes -Install Consul on Kubernetes on each Kubernetes cluster by applying `values.yaml` using the Helm CLI. With these values, -the servers in each cluster will be exposed over a Kubernetes Load balancer service. This service can be customized -using [`server.exposeService`](/docs/k8s/helm#v-server-exposeservice). When generating a peering token from one of the -clusters, the address(es) of the load balancer will be used in the peering token, so the peering stream will go through -the load balancer in front of the servers. For customizing the addresses used in the peering token, see -[`global.peering.tokenGeneration`](/docs/k8s/helm#v-global-peering-tokengeneration). +1. Install Consul on Kubernetes on each Kubernetes cluster by applying `values.yaml` using the Helm CLI. + + 1. Install Consul on Kubernetes on `cluster-01` + + ```shell-session + $ export HELM_RELEASE_NAME=cluster-01 + ``` -```shell-session -$ export HELM_RELEASE_NAME=cluster-name -``` + ```shell-session + $ helm install ${HELM_RELEASE_NAME} hashicorp/consul --create-namespace --namespace consul --version "0.47.1" --values values.yaml --kube-context $CLUSTER1_CONTEXT + ``` + 1. 
Install Consul on Kubernetes on `cluster-02` + + ```shell-session + $ export HELM_RELEASE_NAME=cluster-02 + ``` -```shell-session -$ helm install ${HELM_RELEASE_NAME} hashicorp/consul --version "0.47.1" --values values.yaml -``` + ```shell-session + $ helm install ${HELM_RELEASE_NAME} hashicorp/consul --create-namespace --namespace consul --version "0.47.1" --values values.yaml --kube-context $CLUSTER2_CONTEXT + ``` ## Create a peering token -To peer Kubernetes clusters running Consul, you need to create a peering token and share it with the other cluster. +To peer Kubernetes clusters running Consul, you need to create a peering token and share it with the other cluster. As part of the peering process, the peer names for each respective cluster within the peering are established by using the `metadata.name` values for the `PeeringAcceptor` and `PeeringDialer` CRDs. 1. In `cluster-01`, create the `PeeringAcceptor` custom resource. - + ```yaml apiVersion: consul.hashicorp.com/v1alpha1 @@ -88,13 +120,13 @@ To peer Kubernetes clusters running Consul, you need to create a peering token a 1. Apply the `PeeringAcceptor` resource to the first cluster. ```shell-session - $ kubectl apply --filename acceptor.yml + $ kubectl --context $CLUSTER1_CONTEXT apply --filename acceptor.yaml ```` 1. Save your peering token so that you can export it to the other cluster. ```shell-session - $ kubectl get secret peering-token --output yaml > peering-token.yml + $ kubectl --context $CLUSTER1_CONTEXT get secret peering-token --output yaml > peering-token.yaml ``` ## Establish a peering connection between clusters @@ -102,12 +134,12 @@ To peer Kubernetes clusters running Consul, you need to create a peering token a 1. Apply the peering token to the second cluster. ```shell-session - $ kubectl apply --filename peering-token.yml + $ kubectl --context $CLUSTER2_CONTEXT apply --filename peering-token.yaml ``` -1. In `cluster-02`, create the `PeeringDialer` custom resource. +1. In `cluster-02`, create the `PeeringDialer` custom resource. - + ```yaml apiVersion: consul.hashicorp.com/v1alpha1 @@ -127,27 +159,74 @@ To peer Kubernetes clusters running Consul, you need to create a peering token a 1. Apply the `PeeringDialer` resource to the second cluster. ```shell-session - $ kubectl apply --filename dialer.yml + $ kubectl --context $CLUSTER2_CONTEXT apply --filename dialer.yaml ``` ## Export services between clusters 1. For the service in "cluster-02" that you want to export, add the following [annotation](/docs/k8s/annotations-and-labels) to your service's pods. - + ```yaml - ##… - annotations: - "consul.hashicorp.com/connect-inject": "true" - ##… + # Service to expose backend + apiVersion: v1 + kind: Service + metadata: + name: backend-service + spec: + selector: + app: backend + ports: + - name: http + protocol: TCP + port: 80 + targetPort: 9090 + --- + apiVersion: v1 + kind: ServiceAccount + metadata: + name: backend + --- + # deployment for backend + apiVersion: apps/v1 + kind: Deployment + metadata: + name: backend + labels: + app: backend + spec: + replicas: 1 + selector: + matchLabels: + app: backend + template: + metadata: + labels: + app: backend + annotations: + "consul.hashicorp.com/connect-inject": "true" + spec: + serviceAccountName: backend + containers: + - name: backend + image: nicholasjackson/fake-service:v0.22.4 + ports: + - containerPort: 9090 + env: + - name: "LISTEN_ADDR" + value: "0.0.0.0:9090" + - name: "NAME" + value: "backend" + - name: "MESSAGE" + value: "Response from backend" ``` 1. 
In `cluster-02`, create an `ExportedServices` custom resource. - + ```yaml apiVersion: consul.hashicorp.com/v1alpha1 @@ -166,7 +245,7 @@ To peer Kubernetes clusters running Consul, you need to create a peering token a 1. Apply the service file and the `ExportedServices` resource for the second cluster. ```shell-session - $ kubectl apply --filename backend-service.yml --filename exportedsvc.yml + $ kubectl apply --context $CLUSTER2_CONTEXT --filename backend-service.yaml --filename exportedsvc.yaml ``` ## Authorize services for peers @@ -195,18 +274,71 @@ To peer Kubernetes clusters running Consul, you need to create a peering token a 1. Apply the intentions to the second cluster. ```shell-session - $ kubectl apply --filename intention.yml + $ kubectl --context $CLUSTER2_CONTEXT apply --filename intention.yml ``` -1. For the services in `cluster-01` that you want to access the "backend-service," add the following annotations to the service file. +1. For the services in `cluster-01` that you want to access the "backend-service," add the following annotations to the service file. To dial the upstream service from an application, ensure that the requests are sent to the correct DNS name as specified in [Service Virtual IP Lookups](/docs/discovery/dns#service-virtual-ip-lookups). - + ```yaml - ##… - annotations: - "consul.hashicorp.com/connect-inject": "true" - ##… + # Service to expose frontend + apiVersion: v1 + kind: Service + metadata: + name: frontend-service + spec: + selector: + app: frontend + ports: + - name: http + protocol: TCP + port: 9090 + targetPort: 9090 + --- + apiVersion: v1 + kind: ServiceAccount + metadata: + name: frontend + --- + apiVersion: apps/v1 + kind: Deployment + metadata: + name: frontend + labels: + app: frontend + spec: + replicas: 1 + selector: + matchLabels: + app: frontend + template: + metadata: + labels: + app: frontend + annotations: + "consul.hashicorp.com/connect-inject": "true" + spec: + serviceAccountName: frontend + containers: + - name: frontend + image: nicholasjackson/fake-service:v0.22.4 + securityContext: + capabilities: + add: ["NET_ADMIN"] + ports: + - containerPort: 9090 + env: + - name: "LISTEN_ADDR" + value: "0.0.0.0:9090" + - name: "UPSTREAM_URIS" + value: "http://backend-service.virtual.cluster-02.consul" + - name: "NAME" + value: "frontend" + - name: "MESSAGE" + value: "Hello World" + - name: "HTTP_CLIENT_KEEP_ALIVES" + value: "false" ``` @@ -214,18 +346,45 @@ To peer Kubernetes clusters running Consul, you need to create a peering token a 1. Apply the service file to the first cluster. ```shell-session - $ kubectl apply --filename frontend-service.yml + $ kubectl --context $CLUSTER1_CONTEXT apply --filename frontend-service.yaml ``` 1. Run the following command in `frontend-service` and check the output to confirm that you peered your clusters successfully. 
```shell-session - $ kubectl exec -it $(kubectl get pod -l app=frontend -o name) -- curl localhost:1234 + $ kubectl --context $CLUSTER1_CONTEXT exec -it $(kubectl --context $CLUSTER1_CONTEXT get pod -l app=frontend -o name) -- curl localhost:9090 { - "name": "backend-service", - ##… - "body": "Response from backend", - "code": 200 + "name": "frontend", + "uri": "/", + "type": "HTTP", + "ip_addresses": [ + "10.16.2.11" + ], + "start_time": "2022-08-26T23:40:01.167199", + "end_time": "2022-08-26T23:40:01.226951", + "duration": "59.752279ms", + "body": "Hello World", + "upstream_calls": { + "http://backend-service.virtual.cluster-02.consul": { + "name": "backend", + "uri": "http://backend-service.virtual.cluster-02.consul", + "type": "HTTP", + "ip_addresses": [ + "10.32.2.10" + ], + "start_time": "2022-08-26T23:40:01.223503", + "end_time": "2022-08-26T23:40:01.224653", + "duration": "1.149666ms", + "headers": { + "Content-Length": "266", + "Content-Type": "text/plain; charset=utf-8", + "Date": "Fri, 26 Aug 2022 23:40:01 GMT" + }, + "body": "Response from backend", + "code": 200 + } + }, + "code": 200 } ``` From e4a154c88e38f083c07e8c082ec405ae60420a08 Mon Sep 17 00:00:00 2001 From: "Chris S. Kim" Date: Mon, 29 Aug 2022 16:32:26 -0400 Subject: [PATCH 13/55] Add heartbeat timeout grace period when accounting for peering health --- agent/consul/leader_peering.go | 4 +- agent/consul/leader_peering_test.go | 2 +- agent/consul/server.go | 8 +- .../services/peerstream/server.go | 7 +- .../services/peerstream/stream_test.go | 17 ++-- .../services/peerstream/stream_tracker.go | 81 +++++++++-------- .../peerstream/stream_tracker_test.go | 87 +++++++++++++------ 7 files changed, 122 insertions(+), 84 deletions(-) diff --git a/agent/consul/leader_peering.go b/agent/consul/leader_peering.go index 556f1b5bf..d80038397 100644 --- a/agent/consul/leader_peering.go +++ b/agent/consul/leader_peering.go @@ -112,7 +112,7 @@ func (s *Server) emitPeeringMetricsOnce(logger hclog.Logger, metricsImpl *metric if status.NeverConnected { metricsImpl.SetGaugeWithLabels(leaderHealthyPeeringKey, float32(math.NaN()), labels) } else { - healthy := status.IsHealthy() + healthy := s.peerStreamServer.Tracker.IsHealthy(status) healthyInt := 0 if healthy { healthyInt = 1 @@ -305,7 +305,7 @@ func (s *Server) establishStream(ctx context.Context, logger hclog.Logger, ws me logger.Trace("establishing stream to peer") - streamStatus, err := s.peerStreamTracker.Register(peer.ID) + streamStatus, err := s.peerStreamServer.Tracker.Register(peer.ID) if err != nil { return fmt.Errorf("failed to register stream: %v", err) } diff --git a/agent/consul/leader_peering_test.go b/agent/consul/leader_peering_test.go index b8b5166d8..61f060802 100644 --- a/agent/consul/leader_peering_test.go +++ b/agent/consul/leader_peering_test.go @@ -1216,7 +1216,7 @@ func TestLeader_Peering_NoEstablishmentWhenPeeringDisabled(t *testing.T) { })) require.Never(t, func() bool { - _, found := s1.peerStreamTracker.StreamStatus(peerID) + _, found := s1.peerStreamServer.StreamStatus(peerID) return found }, 7*time.Second, 1*time.Second, "peering should not have been established") } diff --git a/agent/consul/server.go b/agent/consul/server.go index f92a03ccd..94048d06f 100644 --- a/agent/consul/server.go +++ b/agent/consul/server.go @@ -370,9 +370,9 @@ type Server struct { // peerStreamServer is a server used to handle peering streams from external clusters. 
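The metrics change earlier in this patch maps each peering's health onto a gauge. A self-contained sketch of that mapping, with NaN reported before the peer has ever connected and 1 or 0 afterwards:

```go
package main

import (
	"fmt"
	"math"
)

// healthGaugeValue mirrors the shape of the leader's metrics loop above:
// NaN while the peer has never connected, otherwise 1 for healthy and 0 for
// unhealthy.
func healthGaugeValue(neverConnected, healthy bool) float32 {
	if neverConnected {
		return float32(math.NaN())
	}
	if healthy {
		return 1
	}
	return 0
}

func main() {
	fmt.Println(healthGaugeValue(true, false))  // NaN
	fmt.Println(healthGaugeValue(false, true))  // 1
	fmt.Println(healthGaugeValue(false, false)) // 0
}
```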
peerStreamServer *peerstream.Server + // peeringServer handles peering RPC requests internal to this cluster, like generating peering tokens. - peeringServer *peering.Server - peerStreamTracker *peerstream.Tracker + peeringServer *peering.Server // embedded struct to hold all the enterprise specific data EnterpriseServer @@ -724,11 +724,9 @@ func NewServer(config *Config, flat Deps, externalGRPCServer *grpc.Server) (*Ser Logger: logger.Named("grpc-api.server-discovery"), }).Register(s.externalGRPCServer) - s.peerStreamTracker = peerstream.NewTracker() s.peeringBackend = NewPeeringBackend(s) s.peerStreamServer = peerstream.NewServer(peerstream.Config{ Backend: s.peeringBackend, - Tracker: s.peerStreamTracker, GetStore: func() peerstream.StateStore { return s.FSM().State() }, Logger: logger.Named("grpc-api.peerstream"), ACLResolver: s.ACLResolver, @@ -790,7 +788,7 @@ func newGRPCHandlerFromConfig(deps Deps, config *Config, s *Server) connHandler p := peering.NewServer(peering.Config{ Backend: s.peeringBackend, - Tracker: s.peerStreamTracker, + Tracker: s.peerStreamServer.Tracker, Logger: deps.Logger.Named("grpc-api.peering"), ForwardRPC: func(info structs.RPCInfo, fn func(*grpc.ClientConn) error) (bool, error) { // Only forward the request if the dc in the request matches the server's datacenter. diff --git a/agent/grpc-external/services/peerstream/server.go b/agent/grpc-external/services/peerstream/server.go index 7254c60c7..17388f4a2 100644 --- a/agent/grpc-external/services/peerstream/server.go +++ b/agent/grpc-external/services/peerstream/server.go @@ -26,11 +26,12 @@ const ( type Server struct { Config + + Tracker *Tracker } type Config struct { Backend Backend - Tracker *Tracker GetStore func() StateStore Logger hclog.Logger ForwardRPC func(structs.RPCInfo, func(*grpc.ClientConn) error) (bool, error) @@ -53,7 +54,6 @@ type ACLResolver interface { func NewServer(cfg Config) *Server { requireNotNil(cfg.Backend, "Backend") - requireNotNil(cfg.Tracker, "Tracker") requireNotNil(cfg.GetStore, "GetStore") requireNotNil(cfg.Logger, "Logger") // requireNotNil(cfg.ACLResolver, "ACLResolver") // TODO(peering): reenable check when ACLs are required @@ -67,7 +67,8 @@ func NewServer(cfg Config) *Server { cfg.incomingHeartbeatTimeout = defaultIncomingHeartbeatTimeout } return &Server{ - Config: cfg, + Config: cfg, + Tracker: NewTracker(cfg.incomingHeartbeatTimeout), } } diff --git a/agent/grpc-external/services/peerstream/stream_test.go b/agent/grpc-external/services/peerstream/stream_test.go index bee02592a..9116d7a31 100644 --- a/agent/grpc-external/services/peerstream/stream_test.go +++ b/agent/grpc-external/services/peerstream/stream_test.go @@ -499,9 +499,8 @@ func TestStreamResources_Server_Terminate(t *testing.T) { base: time.Date(2000, time.January, 1, 0, 0, 0, 0, time.UTC), } - srv, store := newTestServer(t, func(c *Config) { - c.Tracker.SetClock(it.Now) - }) + srv, store := newTestServer(t, nil) + srv.Tracker.setClock(it.Now) p := writePeeringToBeDialed(t, store, 1, "my-peer") require.Empty(t, p.PeerID, "should be empty if being dialed") @@ -552,9 +551,8 @@ func TestStreamResources_Server_StreamTracker(t *testing.T) { base: time.Date(2000, time.January, 1, 0, 0, 0, 0, time.UTC), } - srv, store := newTestServer(t, func(c *Config) { - c.Tracker.SetClock(it.Now) - }) + srv, store := newTestServer(t, nil) + srv.Tracker.setClock(it.Now) // Set the initial roots and CA configuration. 
_, rootA := writeInitialRootsAndCA(t, store) @@ -1128,9 +1126,9 @@ func TestStreamResources_Server_DisconnectsOnHeartbeatTimeout(t *testing.T) { } srv, store := newTestServer(t, func(c *Config) { - c.Tracker.SetClock(it.Now) c.incomingHeartbeatTimeout = 5 * time.Millisecond }) + srv.Tracker.setClock(it.Now) p := writePeeringToBeDialed(t, store, 1, "my-peer") require.Empty(t, p.PeerID, "should be empty if being dialed") @@ -1176,9 +1174,9 @@ func TestStreamResources_Server_SendsHeartbeats(t *testing.T) { outgoingHeartbeatInterval := 5 * time.Millisecond srv, store := newTestServer(t, func(c *Config) { - c.Tracker.SetClock(it.Now) c.outgoingHeartbeatInterval = outgoingHeartbeatInterval }) + srv.Tracker.setClock(it.Now) p := writePeeringToBeDialed(t, store, 1, "my-peer") require.Empty(t, p.PeerID, "should be empty if being dialed") @@ -1235,9 +1233,9 @@ func TestStreamResources_Server_KeepsConnectionOpenWithHeartbeat(t *testing.T) { incomingHeartbeatTimeout := 10 * time.Millisecond srv, store := newTestServer(t, func(c *Config) { - c.Tracker.SetClock(it.Now) c.incomingHeartbeatTimeout = incomingHeartbeatTimeout }) + srv.Tracker.setClock(it.Now) p := writePeeringToBeDialed(t, store, 1, "my-peer") require.Empty(t, p.PeerID, "should be empty if being dialed") @@ -2746,7 +2744,6 @@ func newTestServer(t *testing.T, configFn func(c *Config)) (*testServer, *state. store: store, pub: publisher, }, - Tracker: NewTracker(), GetStore: func() StateStore { return store }, Logger: testutil.Logger(t), Datacenter: "dc1", diff --git a/agent/grpc-external/services/peerstream/stream_tracker.go b/agent/grpc-external/services/peerstream/stream_tracker.go index d0dcd3977..c3108e71e 100644 --- a/agent/grpc-external/services/peerstream/stream_tracker.go +++ b/agent/grpc-external/services/peerstream/stream_tracker.go @@ -14,18 +14,27 @@ type Tracker struct { mu sync.RWMutex streams map[string]*MutableStatus + // heartbeatTimeout is the max duration a connection is allowed to be + // disconnected before the stream health is reported as non-healthy + heartbeatTimeout time.Duration + // timeNow is a shim for testing. timeNow func() time.Time } -func NewTracker() *Tracker { +func NewTracker(heartbeatTimeout time.Duration) *Tracker { + if heartbeatTimeout == 0 { + heartbeatTimeout = defaultIncomingHeartbeatTimeout + } return &Tracker{ - streams: make(map[string]*MutableStatus), - timeNow: time.Now, + streams: make(map[string]*MutableStatus), + timeNow: time.Now, + heartbeatTimeout: heartbeatTimeout, } } -func (t *Tracker) SetClock(clock func() time.Time) { +// setClock is used for debugging purposes only. +func (t *Tracker) setClock(clock func() time.Time) { if clock == nil { t.timeNow = time.Now } else { @@ -128,6 +137,39 @@ func (t *Tracker) DeleteStatus(id string) { delete(t.streams, id) } +// IsHealthy is a calculates the health of a peering status. +// We define a peering as unhealthy if its status has been in the following +// states for longer than the configured incomingHeartbeatTimeout. +// - If it is disconnected +// - If the last received Nack is newer than last received Ack +// - If the last received error is newer than last received success +// +// If none of these conditions apply, we call the peering healthy. +func (t *Tracker) IsHealthy(s Status) bool { + // If stream is in a disconnected state for longer than the configured + // heartbeat timeout, report as unhealthy. 
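Each condition below now only counts against stream health once it has persisted past the configured heartbeat timeout, instead of immediately. A simplified sketch of that grace-period idea (the real checks measure from slightly different reference points, such as the last Ack):

```go
package main

import (
	"fmt"
	"time"
)

// olderThan reports whether a (non-zero) event happened more than the grace
// period ago; only then does the condition count as unhealthy.
func olderThan(event, now time.Time, grace time.Duration) bool {
	return !event.IsZero() && now.Sub(event) > grace
}

func main() {
	now := time.Now()
	grace := 2 * time.Minute // stand-in for the tracker's heartbeatTimeout

	recentDisconnect := now.Add(-30 * time.Second)
	oldDisconnect := now.Add(-5 * time.Minute)

	fmt.Println(olderThan(recentDisconnect, now, grace)) // false: still within the grace period
	fmt.Println(olderThan(oldDisconnect, now, grace))    // true: long enough to report unhealthy
}
```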
+ if !s.DisconnectTime.IsZero() && + t.timeNow().Sub(s.DisconnectTime) > t.heartbeatTimeout { + return false + } + + // If last Nack is after last Ack, it means the peer is unable to + // handle our replication message. + if s.LastNack.After(s.LastAck) && + t.timeNow().Sub(s.LastAck) > t.heartbeatTimeout { + return false + } + + // If last recv error is newer than last recv success, we were unable + // to handle the peer's replication message. + if s.LastRecvError.After(s.LastRecvResourceSuccess) && + t.timeNow().Sub(s.LastRecvError) > t.heartbeatTimeout { + return false + } + + return true +} + type MutableStatus struct { mu sync.RWMutex @@ -201,37 +243,6 @@ func (s *Status) GetExportedServicesCount() uint64 { return uint64(len(s.ExportedServices)) } -// IsHealthy is a convenience func that returns true/ false for a peering status. -// We define a peering as unhealthy if its status satisfies one of the following: -// - If it is disconnected -// - If the last received Nack is newer than last received Ack -// - If the last received error is newer than last received success -// If none of these conditions apply, we call the peering healthy. -func (s *Status) IsHealthy() bool { - if !s.Connected { - return false - } - - // If stream is in a disconnected state, report unhealthy. - // This should be logically equivalent to s.Connected above. - if !s.DisconnectTime.IsZero() { - return false - } - - // If last Nack is after last Ack, it means the peer is unable to - // handle our replication messages. - if s.LastNack.After(s.LastAck) { - return false - } - - // If last recv error is newer than last recv success - report unhealthy - if s.LastRecvError.After(s.LastRecvResourceSuccess) { - return false - } - - return true -} - func newMutableStatus(now func() time.Time, connected bool) *MutableStatus { return &MutableStatus{ Status: Status{ diff --git a/agent/grpc-external/services/peerstream/stream_tracker_test.go b/agent/grpc-external/services/peerstream/stream_tracker_test.go index 555e76258..bb018b4b4 100644 --- a/agent/grpc-external/services/peerstream/stream_tracker_test.go +++ b/agent/grpc-external/services/peerstream/stream_tracker_test.go @@ -5,6 +5,7 @@ import ( "testing" "time" + "github.com/stretchr/testify/assert" "github.com/stretchr/testify/require" "github.com/hashicorp/consul/sdk/testutil" @@ -14,30 +15,35 @@ const ( aPeerID = "63b60245-c475-426b-b314-4588d210859d" ) -func TestStatus_IsHealthy(t *testing.T) { +func TestTracker_IsHealthy(t *testing.T) { type testcase struct { name string - dontConnect bool + tracker *Tracker modifierFunc func(status *MutableStatus) expectedVal bool } tcs := []testcase{ { - name: "never connected, unhealthy", - expectedVal: false, - dontConnect: true, - }, - { - name: "disconnect time not zero", - expectedVal: false, + name: "disconnect time within timeout", + tracker: NewTracker(defaultIncomingHeartbeatTimeout), + expectedVal: true, modifierFunc: func(status *MutableStatus) { status.DisconnectTime = time.Now() }, }, { - name: "receive error before receive success", + name: "disconnect time past timeout", + tracker: NewTracker(1 * time.Millisecond), expectedVal: false, + modifierFunc: func(status *MutableStatus) { + status.DisconnectTime = time.Now().Add(-1 * time.Minute) + }, + }, + { + name: "receive error before receive success within timeout", + tracker: NewTracker(defaultIncomingHeartbeatTimeout), + expectedVal: true, modifierFunc: func(status *MutableStatus) { now := time.Now() status.LastRecvResourceSuccess = now @@ -45,46 +51,71 @@ func 
TestStatus_IsHealthy(t *testing.T) { }, }, { - name: "nack before ack", + name: "receive error before receive success within timeout", + tracker: NewTracker(defaultIncomingHeartbeatTimeout), + expectedVal: true, + modifierFunc: func(status *MutableStatus) { + now := time.Now() + status.LastRecvResourceSuccess = now + status.LastRecvError = now.Add(1 * time.Second) + }, + }, + { + name: "receive error before receive success past timeout", + tracker: NewTracker(1 * time.Millisecond), expectedVal: false, + modifierFunc: func(status *MutableStatus) { + now := time.Now().Add(-2 * time.Second) + status.LastRecvResourceSuccess = now + status.LastRecvError = now.Add(1 * time.Second) + }, + }, + { + name: "nack before ack within timeout", + tracker: NewTracker(defaultIncomingHeartbeatTimeout), + expectedVal: true, modifierFunc: func(status *MutableStatus) { now := time.Now() status.LastAck = now status.LastNack = now.Add(1 * time.Second) }, }, + { + name: "nack before ack past timeout", + tracker: NewTracker(1 * time.Millisecond), + expectedVal: false, + modifierFunc: func(status *MutableStatus) { + now := time.Now().Add(-2 * time.Second) + status.LastAck = now + status.LastNack = now.Add(1 * time.Second) + }, + }, { name: "healthy", + tracker: NewTracker(defaultIncomingHeartbeatTimeout), expectedVal: true, }, } for _, tc := range tcs { t.Run(tc.name, func(t *testing.T) { - tracker := NewTracker() + tracker := tc.tracker - if !tc.dontConnect { - st, err := tracker.Connected(aPeerID) - require.NoError(t, err) - require.True(t, st.Connected) + st, err := tracker.Connected(aPeerID) + require.NoError(t, err) + require.True(t, st.Connected) - if tc.modifierFunc != nil { - tc.modifierFunc(st) - } - - require.Equal(t, tc.expectedVal, st.IsHealthy()) - - } else { - st, found := tracker.StreamStatus(aPeerID) - require.False(t, found) - require.Equal(t, tc.expectedVal, st.IsHealthy()) + if tc.modifierFunc != nil { + tc.modifierFunc(st) } + + assert.Equal(t, tc.expectedVal, tracker.IsHealthy(st.GetStatus())) }) } } func TestTracker_EnsureConnectedDisconnected(t *testing.T) { - tracker := NewTracker() + tracker := NewTracker(defaultIncomingHeartbeatTimeout) peerID := "63b60245-c475-426b-b314-4588d210859d" it := incrementalTime{ @@ -181,7 +212,7 @@ func TestTracker_connectedStreams(t *testing.T) { } run := func(t *testing.T, tc testCase) { - tracker := NewTracker() + tracker := NewTracker(defaultIncomingHeartbeatTimeout) if tc.setup != nil { tc.setup(t, tracker) } From bb26fd603fb3bb3657c228b1657f0e4b4ba1a482 Mon Sep 17 00:00:00 2001 From: Austin Workman Date: Mon, 11 Jul 2022 16:06:34 -0500 Subject: [PATCH 14/55] Add support for S3 path based addressing --- .changelog/_2271.txt | 3 +++ website/content/commands/snapshot/agent.mdx | 7 ++++++- 2 files changed, 9 insertions(+), 1 deletion(-) create mode 100644 .changelog/_2271.txt diff --git a/.changelog/_2271.txt b/.changelog/_2271.txt new file mode 100644 index 000000000..58dc78dfa --- /dev/null +++ b/.changelog/_2271.txt @@ -0,0 +1,3 @@ +```release-note:improvement +snapshot agent: **(Enterprise only)** Add support for path-based addressing when using s3 backend. 
+``` \ No newline at end of file diff --git a/website/content/commands/snapshot/agent.mdx b/website/content/commands/snapshot/agent.mdx index c565a52e6..7607fc2b5 100644 --- a/website/content/commands/snapshot/agent.mdx +++ b/website/content/commands/snapshot/agent.mdx @@ -168,7 +168,8 @@ Usage: `consul snapshot agent [options]` "s3_bucket": "", "s3_key_prefix": "consul-snapshot", "s3_server_side_encryption": false, - "s3_static_snapshot_name": "" + "s3_static_snapshot_name": "", + "s3_force_path_style": false }, "azure_blob_storage": { "account_name": "", @@ -275,6 +276,10 @@ Note that despite the AWS references, any S3-compatible endpoint can be specifie - `-aws-s3-static-snapshot-name` - If this is given, all snapshots are saved with the same file name. The agent will not rotate or version snapshots, and will save them with the same name each time. Use this if you want to rely on [S3's versioning capabilities](http://docs.aws.amazon.com/AmazonS3/latest/dev/Versioning.html) instead of the agent handling it for you. +- `-aws-s3-force-path-style` - Enables the use of legacy path-based addressing instead of virtual addressing. This flag is required by minio + and other 3rd party S3 compatible object storage platforms where DNS or TLS requirements for virtual addressing are prohibitive. +For more information, refer to the AWS documentation on [Methods for accessing a bucket](https://docs.aws.amazon.com/AmazonS3/latest/userguide/access-bucket-intro.html) + - `-aws-s3-enable-kms` - Enables using [Amazon KMS](https://aws.amazon.com/kms/) for encrypting snapshots. - `-aws-s3-kms-key` - Optional Amazon KMS key to use, if this is not set the default KMS master key will be used. Set this if you want to manage key rotation yourself. From 13f8839924f401b520696fcbacd8c86fc644c600 Mon Sep 17 00:00:00 2001 From: Eric Haberkorn Date: Mon, 29 Aug 2022 16:59:27 -0400 Subject: [PATCH 15/55] Fix a breaking change to the API package introduced in #13835 (#14378) `QueryDatacenterOptions` was renamed to `QueryFailoverOptions` without creating an alias. This adds `QueryDatacenterOptions` back as an alias to `QueryFailoverOptions` and marks it is deprecated. --- .changelog/14378.txt | 5 +++++ api/prepared_query.go | 3 +++ 2 files changed, 8 insertions(+) create mode 100644 .changelog/14378.txt diff --git a/.changelog/14378.txt b/.changelog/14378.txt new file mode 100644 index 000000000..2ab1b8f13 --- /dev/null +++ b/.changelog/14378.txt @@ -0,0 +1,5 @@ +```release-note:bug +api: Fix a breaking change caused by renaming `QueryDatacenterOptions` to +`QueryFailoverOptions`. This adds `QueryDatacenterOptions` back as an alias to +`QueryFailoverOptions` and marks it as deprecated. +``` diff --git a/api/prepared_query.go b/api/prepared_query.go index 60cd437cb..7e0518f58 100644 --- a/api/prepared_query.go +++ b/api/prepared_query.go @@ -17,6 +17,9 @@ type QueryFailoverOptions struct { Targets []QueryFailoverTarget } +// Deprecated: use QueryFailoverOptions instead. +type QueryDatacenterOptions = QueryFailoverOptions + type QueryFailoverTarget struct { // PeerName specifies a peer to try during failover. PeerName string From 63df49b4406f67cf119283e5ff9e59cedd83c2d7 Mon Sep 17 00:00:00 2001 From: Luke Kysow <1034429+lkysow@users.noreply.github.com> Date: Mon, 29 Aug 2022 16:13:49 -0700 Subject: [PATCH 16/55] Run integration tests locally using amd64 (#14365) Locally, always run integration tests using amd64, even if running on an arm mac. This ensures the architecture locally always matches the CI/CD environment. 
In addition: * Use consul:local for envoy integration and upgrade tests. Previously, consul:local was used for upgrade tests and consul-dev for integration tests. I didn't see a reason to use separate images as it's more confusing. * By default, disable the requirement that aws credentials are set. These are only needed for the lambda tests and make it so you can't run any tests locally, even if you're not running the lambda tests. Now they'll only run if the LAMBDA_TESTS_ENABLED env var is set. * Split out the building of the Docker image for integration tests into its own target from `dev-docker`. This allows us to always use an amd64 image without messing up the `dev-docker` target. * Add support for passing GO_TEST_FLAGs to `test-envoy-integ` target. * Add a wait_for_leader function because tests were failing locally without it. --- .circleci/config.yml | 7 +++-- GNUmakefile | 18 +++++++++-- .../connect/envoy/Dockerfile-consul-envoy | 2 +- .../envoy/case-wanfed-gw/global-setup.sh | 2 +- test/integration/connect/envoy/helpers.bash | 13 ++++++-- test/integration/connect/envoy/run-tests.sh | 30 +++++++++++-------- 6 files changed, 49 insertions(+), 23 deletions(-) diff --git a/.circleci/config.yml b/.circleci/config.yml index 105666c66..053d50d25 100644 --- a/.circleci/config.yml +++ b/.circleci/config.yml @@ -816,7 +816,7 @@ jobs: # Get go binary from workspace - attach_workspace: at: . - # Build the consul-dev image from the already built binary + # Build the consul:local image from the already built binary - run: command: | sudo rm -rf /usr/local/go @@ -887,8 +887,8 @@ jobs: - attach_workspace: at: . - run: *install-gotestsum - # Build the consul-dev image from the already built binary - - run: docker build -t consul-dev -f ./build-support/docker/Consul-Dev.dockerfile . + # Build the consul:local image from the already built binary + - run: docker build -t consul:local -f ./build-support/docker/Consul-Dev.dockerfile . - run: name: Envoy Integration Tests command: | @@ -902,6 +902,7 @@ jobs: GOTESTSUM_JUNITFILE: /tmp/test-results/results.xml GOTESTSUM_FORMAT: standard-verbose COMPOSE_INTERACTIVE_NO_CLI: 1 + LAMBDA_TESTS_ENABLED: "true" # tput complains if this isn't set to something. TERM: ansi - store_artifacts: diff --git a/GNUmakefile b/GNUmakefile index 77d6a7ec2..f9dd16081 100644 --- a/GNUmakefile +++ b/GNUmakefile @@ -130,7 +130,7 @@ export GOLDFLAGS # Allow skipping docker build during integration tests in CI since we already # have a built binary -ENVOY_INTEG_DEPS?=dev-docker +ENVOY_INTEG_DEPS?=docker-envoy-integ ifdef SKIP_DOCKER_BUILD ENVOY_INTEG_DEPS=noop endif @@ -346,8 +346,22 @@ consul-docker: go-build-image ui-docker: ui-build-image @$(SHELL) $(CURDIR)/build-support/scripts/build-docker.sh ui +# Build image used to run integration tests locally. +docker-envoy-integ: + $(MAKE) GOARCH=amd64 linux + docker build \ + --platform linux/amd64 $(NOCACHE) $(QUIET) \ + -t 'consul:local' \ + --build-arg CONSUL_IMAGE_VERSION=$(CONSUL_IMAGE_VERSION) \ + $(CURDIR)/pkg/bin/linux_amd64 \ + -f $(CURDIR)/build-support/docker/Consul-Dev.dockerfile + +# Run integration tests. +# Use GO_TEST_FLAGS to run specific tests: +# make test-envoy-integ GO_TEST_FLAGS="-run TestEnvoy/case-basic" +# NOTE: Always uses amd64 images, even when running on M1 macs, to match CI/CD environment. 
test-envoy-integ: $(ENVOY_INTEG_DEPS) - @go test -v -timeout=30m -tags integration ./test/integration/connect/envoy + @go test -v -timeout=30m -tags integration $(GO_TEST_FLAGS) ./test/integration/connect/envoy .PHONY: test-compat-integ test-compat-integ: dev-docker diff --git a/test/integration/connect/envoy/Dockerfile-consul-envoy b/test/integration/connect/envoy/Dockerfile-consul-envoy index b6d5b3e8e..41941a336 100644 --- a/test/integration/connect/envoy/Dockerfile-consul-envoy +++ b/test/integration/connect/envoy/Dockerfile-consul-envoy @@ -1,7 +1,7 @@ # Note this arg has to be before the first FROM ARG ENVOY_VERSION -FROM consul-dev as consul +FROM consul:local as consul FROM docker.mirror.hashicorp.services/envoyproxy/envoy:v${ENVOY_VERSION} COPY --from=consul /bin/consul /bin/consul diff --git a/test/integration/connect/envoy/case-wanfed-gw/global-setup.sh b/test/integration/connect/envoy/case-wanfed-gw/global-setup.sh index fee985add..3e6120445 100755 --- a/test/integration/connect/envoy/case-wanfed-gw/global-setup.sh +++ b/test/integration/connect/envoy/case-wanfed-gw/global-setup.sh @@ -17,7 +17,7 @@ consul tls cert create -dc=secondary -server -node=sec " docker rm -f "$container" &>/dev/null || true -docker run -i --net=none --name="$container" consul-dev:latest sh -c "${scriptlet}" +docker run -i --net=none --name="$container" consul:local sh -c "${scriptlet}" # primary for f in \ diff --git a/test/integration/connect/envoy/helpers.bash b/test/integration/connect/envoy/helpers.bash index 2fd9be7e3..d7fe0ae02 100755 --- a/test/integration/connect/envoy/helpers.bash +++ b/test/integration/connect/envoy/helpers.bash @@ -562,14 +562,14 @@ function assert_intention_denied { function docker_consul { local DC=$1 shift 1 - docker run -i --rm --network container:envoy_consul-${DC}_1 consul-dev "$@" + docker run -i --rm --network container:envoy_consul-${DC}_1 consul:local "$@" } function docker_consul_for_proxy_bootstrap { local DC=$1 shift 1 - docker run -i --rm --network container:envoy_consul-${DC}_1 consul-dev "$@" + docker run -i --rm --network container:envoy_consul-${DC}_1 consul:local "$@" 2> /dev/null } function docker_wget { @@ -581,7 +581,7 @@ function docker_wget { function docker_curl { local DC=$1 shift 1 - docker run --rm --network container:envoy_consul-${DC}_1 --entrypoint curl consul-dev "$@" + docker run --rm --network container:envoy_consul-${DC}_1 --entrypoint curl consul:local "$@" } function docker_exec { @@ -806,9 +806,16 @@ function delete_config_entry { function register_services { local DC=${1:-primary} + wait_for_leader "$DC" docker_consul_exec ${DC} sh -c "consul services register /workdir/${DC}/register/service_*.hcl" } +# wait_for_leader waits until a leader is elected. +# Its first argument must be the datacenter name. +function wait_for_leader { + retry_default docker_consul_exec "$1" sh -c '[[ $(curl --fail -sS http://127.0.0.1:8500/v1/status/leader) ]]' +} + function setup_upsert_l4_intention { local SOURCE=$1 local DESTINATION=$2 diff --git a/test/integration/connect/envoy/run-tests.sh b/test/integration/connect/envoy/run-tests.sh index c8c392c34..f0e6b165c 100755 --- a/test/integration/connect/envoy/run-tests.sh +++ b/test/integration/connect/envoy/run-tests.sh @@ -16,6 +16,8 @@ ENVOY_VERSION=${ENVOY_VERSION:-"1.23.0"} export ENVOY_VERSION export DOCKER_BUILDKIT=1 +# Always run tests on amd64 because that's what the CI environment uses. +export DOCKER_DEFAULT_PLATFORM="linux/amd64" if [ ! 
-z "$DEBUG" ] ; then set -x @@ -44,17 +46,19 @@ function network_snippet { } function aws_snippet { - local snippet="" + if [[ ! -z "$LAMBDA_TESTS_ENABLED" ]]; then + local snippet="" - # The Lambda integration cases assume that a Lambda function exists in $AWS_REGION with an ARN of $AWS_LAMBDA_ARN. - # The AWS credentials must have permission to invoke the Lambda function. - [ -n "$(set | grep '^AWS_ACCESS_KEY_ID=')" ] && snippet="${snippet} -e AWS_ACCESS_KEY_ID=$AWS_ACCESS_KEY_ID" - [ -n "$(set | grep '^AWS_SECRET_ACCESS_KEY=')" ] && snippet="${snippet} -e AWS_SECRET_ACCESS_KEY=$AWS_SECRET_ACCESS_KEY" - [ -n "$(set | grep '^AWS_SESSION_TOKEN=')" ] && snippet="${snippet} -e AWS_SESSION_TOKEN=$AWS_SESSION_TOKEN" - [ -n "$(set | grep '^AWS_LAMBDA_REGION=')" ] && snippet="${snippet} -e AWS_LAMBDA_REGION=$AWS_LAMBDA_REGION" - [ -n "$(set | grep '^AWS_LAMBDA_ARN=')" ] && snippet="${snippet} -e AWS_LAMBDA_ARN=$AWS_LAMBDA_ARN" + # The Lambda integration cases assume that a Lambda function exists in $AWS_REGION with an ARN of $AWS_LAMBDA_ARN. + # The AWS credentials must have permission to invoke the Lambda function. + [ -n "$(set | grep '^AWS_ACCESS_KEY_ID=')" ] && snippet="${snippet} -e AWS_ACCESS_KEY_ID=$AWS_ACCESS_KEY_ID" + [ -n "$(set | grep '^AWS_SECRET_ACCESS_KEY=')" ] && snippet="${snippet} -e AWS_SECRET_ACCESS_KEY=$AWS_SECRET_ACCESS_KEY" + [ -n "$(set | grep '^AWS_SESSION_TOKEN=')" ] && snippet="${snippet} -e AWS_SESSION_TOKEN=$AWS_SESSION_TOKEN" + [ -n "$(set | grep '^AWS_LAMBDA_REGION=')" ] && snippet="${snippet} -e AWS_LAMBDA_REGION=$AWS_LAMBDA_REGION" + [ -n "$(set | grep '^AWS_LAMBDA_ARN=')" ] && snippet="${snippet} -e AWS_LAMBDA_ARN=$AWS_LAMBDA_ARN" - echo "$snippet" + echo "$snippet" + fi } function init_workdir { @@ -222,7 +226,7 @@ function start_consul { --hostname "consul-${DC}-server" \ --network-alias "consul-${DC}-server" \ -e "CONSUL_LICENSE=$license" \ - consul-dev \ + consul:local \ agent -dev -datacenter "${DC}" \ -config-dir "/workdir/${DC}/consul" \ -config-dir "/workdir/${DC}/consul-server" \ @@ -237,7 +241,7 @@ function start_consul { --network-alias "consul-${DC}-client" \ -e "CONSUL_LICENSE=$license" \ ${ports[@]} \ - consul-dev \ + consul:local \ agent -datacenter "${DC}" \ -config-dir "/workdir/${DC}/consul" \ -data-dir "/tmp/consul" \ @@ -256,7 +260,7 @@ function start_consul { --network-alias "consul-${DC}-server" \ -e "CONSUL_LICENSE=$license" \ ${ports[@]} \ - consul-dev \ + consul:local \ agent -dev -datacenter "${DC}" \ -config-dir "/workdir/${DC}/consul" \ -config-dir "/workdir/${DC}/consul-server" \ @@ -290,7 +294,7 @@ function start_partitioned_client { --hostname "consul-${PARTITION}-client" \ --network-alias "consul-${PARTITION}-client" \ -e "CONSUL_LICENSE=$license" \ - consul-dev agent \ + consul:local agent \ -datacenter "primary" \ -retry-join "consul-primary-server" \ -grpc-port 8502 \ From 4d8f5ab5398545482a880087c520a1359811a5e1 Mon Sep 17 00:00:00 2001 From: Jorge Marey Date: Tue, 2 Aug 2022 08:52:48 +0200 Subject: [PATCH 17/55] Add new tracing configuration --- agent/xds/config.go | 6 ++++ agent/xds/listeners.go | 65 +++++++++++++++++++++++++++++++++++++++++- 2 files changed, 70 insertions(+), 1 deletion(-) diff --git a/agent/xds/config.go b/agent/xds/config.go index 2fdf9d115..89e92106d 100644 --- a/agent/xds/config.go +++ b/agent/xds/config.go @@ -27,6 +27,12 @@ type ProxyConfig struct { // Note: This escape hatch is compatible with the discovery chain. 
PublicListenerJSON string `mapstructure:"envoy_public_listener_json"` + // LstenerTracingJSON is a complete override ("escape hatch") for the + // listeners tracing configuration. + // + // Note: This escape hatch is compatible with the discovery chain. + LstenerTracingJSON string `mapstructure:"envoy_listener_tracing_json"` + // LocalClusterJSON is a complete override ("escape hatch") for the // local application cluster. // diff --git a/agent/xds/listeners.go b/agent/xds/listeners.go index 33c339c4d..b3c9577e1 100644 --- a/agent/xds/listeners.go +++ b/agent/xds/listeners.go @@ -3,7 +3,6 @@ package xds import ( "errors" "fmt" - envoy_extensions_filters_listener_http_inspector_v3 "github.com/envoyproxy/go-control-plane/envoy/extensions/filters/listener/http_inspector/v3" "net" "net/url" "regexp" @@ -12,6 +11,8 @@ import ( "strings" "time" + envoy_extensions_filters_listener_http_inspector_v3 "github.com/envoyproxy/go-control-plane/envoy/extensions/filters/listener/http_inspector/v3" + envoy_core_v3 "github.com/envoyproxy/go-control-plane/envoy/config/core/v3" envoy_listener_v3 "github.com/envoyproxy/go-control-plane/envoy/config/listener/v3" envoy_route_v3 "github.com/envoyproxy/go-control-plane/envoy/config/route/v3" @@ -107,6 +108,19 @@ func (s *ResourceGenerator) listenersFromSnapshotConnectProxy(cfgSnap *proxycfg. } } + proxyCfg, err := ParseProxyConfig(cfgSnap.Proxy.Config) + if err != nil { + // Don't hard fail on a config typo, just warn. The parse func returns + // default config if there is an error so it's safe to continue. + s.Logger.Warn("failed to parse Connect.Proxy.Config", "error", err) + } + var tracing *envoy_http_v3.HttpConnectionManager_Tracing + if proxyCfg.LstenerTracingJSON != "" { + if tracing, err = makeTracingFromUserConfig(proxyCfg.LstenerTracingJSON); err != nil { + s.Logger.Warn("failed to parse LstenerTracingJSON config", "error", err) + } + } + for uid, chain := range cfgSnap.ConnectProxy.DiscoveryChain { upstreamCfg := cfgSnap.ConnectProxy.UpstreamConfig[uid] @@ -153,6 +167,7 @@ func (s *ResourceGenerator) listenersFromSnapshotConnectProxy(cfgSnap *proxycfg. filterName: filterName, protocol: cfg.Protocol, useRDS: useRDS, + tracing: tracing, }) if err != nil { return nil, err @@ -178,6 +193,7 @@ func (s *ResourceGenerator) listenersFromSnapshotConnectProxy(cfgSnap *proxycfg. filterName: filterName, protocol: cfg.Protocol, useRDS: useRDS, + tracing: tracing, }) if err != nil { return nil, err @@ -249,6 +265,7 @@ func (s *ResourceGenerator) listenersFromSnapshotConnectProxy(cfgSnap *proxycfg. filterName: routeName, protocol: svcConfig.Protocol, useRDS: true, + tracing: tracing, }) if err != nil { return err @@ -265,6 +282,7 @@ func (s *ResourceGenerator) listenersFromSnapshotConnectProxy(cfgSnap *proxycfg. clusterName: clusterName, filterName: clusterName, protocol: svcConfig.Protocol, + tracing: tracing, }) if err != nil { return err @@ -376,6 +394,7 @@ func (s *ResourceGenerator) listenersFromSnapshotConnectProxy(cfgSnap *proxycfg. protocol: cfg.Protocol, useRDS: false, statPrefix: "upstream_peered.", + tracing: tracing, }) if err != nil { return nil, err @@ -533,6 +552,7 @@ func (s *ResourceGenerator) listenersFromSnapshotConnectProxy(cfgSnap *proxycfg. 
filterName: uid.EnvoyID(), routeName: uid.EnvoyID(), protocol: cfg.Protocol, + tracing: tracing, }) if err != nil { return nil, err @@ -1188,12 +1208,20 @@ func (s *ResourceGenerator) makeInboundListener(cfgSnap *proxycfg.ConfigSnapshot l = makePortListener(name, addr, port, envoy_core_v3.TrafficDirection_INBOUND) + var tracing *envoy_http_v3.HttpConnectionManager_Tracing + if cfg.LstenerTracingJSON != "" { + if tracing, err = makeTracingFromUserConfig(cfg.LstenerTracingJSON); err != nil { + s.Logger.Warn("failed to parse LstenerTracingJSON config", "error", err) + } + } + filterOpts := listenerFilterOpts{ protocol: cfg.Protocol, filterName: name, routeName: name, cluster: LocalAppClusterName, requestTimeoutMs: cfg.LocalRequestTimeoutMs, + tracing: tracing, } if useHTTPFilter { filterOpts.httpAuthzFilter, err = makeRBACHTTPFilter( @@ -1310,6 +1338,7 @@ func (s *ResourceGenerator) makeExposedCheckListener(cfgSnap *proxycfg.ConfigSna statPrefix: "", routePath: path.Path, httpAuthzFilter: nil, + // in the exposed check listener de don't set the tracing configuration } f, err := makeListenerFilter(opts) if err != nil { @@ -1542,6 +1571,19 @@ func (s *ResourceGenerator) makeFilterChainTerminatingGateway(cfgSnap *proxycfg. filterChain.Filters = append(filterChain.Filters, authFilter) } + proxyCfg, err := ParseProxyConfig(cfgSnap.Proxy.Config) + if err != nil { + // Don't hard fail on a config typo, just warn. The parse func returns + // default config if there is an error so it's safe to continue. + s.Logger.Warn("failed to parse Connect.Proxy.Config", "error", err) + } + var tracing *envoy_http_v3.HttpConnectionManager_Tracing + if proxyCfg.LstenerTracingJSON != "" { + if tracing, err = makeTracingFromUserConfig(proxyCfg.LstenerTracingJSON); err != nil { + s.Logger.Warn("failed to parse LstenerTracingJSON config", "error", err) + } + } + // Lastly we setup the actual proxying component. For L4 this is a straight // tcp proxy. For L7 this is a very hands-off HTTP proxy just to inject an // HTTP filter to do intention checks here instead. @@ -1552,6 +1594,7 @@ func (s *ResourceGenerator) makeFilterChainTerminatingGateway(cfgSnap *proxycfg. 
cluster: tgtwyOpts.cluster, statPrefix: "upstream.", routePath: "", + tracing: tracing, } if useHTTPFilter { @@ -1798,6 +1841,7 @@ type filterChainOpts struct { statPrefix string forwardClientDetails bool forwardClientPolicy envoy_http_v3.HttpConnectionManager_ForwardClientCertDetails + tracing *envoy_http_v3.HttpConnectionManager_Tracing } func (s *ResourceGenerator) makeUpstreamFilterChain(opts filterChainOpts) (*envoy_listener_v3.FilterChain, error) { @@ -1813,6 +1857,7 @@ func (s *ResourceGenerator) makeUpstreamFilterChain(opts filterChainOpts) (*envo statPrefix: opts.statPrefix, forwardClientDetails: opts.forwardClientDetails, forwardClientPolicy: opts.forwardClientPolicy, + tracing: opts.tracing, }) if err != nil { return nil, err @@ -1955,6 +2000,7 @@ type listenerFilterOpts struct { httpAuthzFilter *envoy_http_v3.HttpFilter forwardClientDetails bool forwardClientPolicy envoy_http_v3.HttpConnectionManager_ForwardClientCertDetails + tracing *envoy_http_v3.HttpConnectionManager_Tracing } func makeListenerFilter(opts listenerFilterOpts) (*envoy_listener_v3.Filter, error) { @@ -2014,6 +2060,19 @@ func makeStatPrefix(prefix, filterName string) string { return fmt.Sprintf("%s%s", prefix, strings.Replace(filterName, ":", "_", -1)) } +func makeTracingFromUserConfig(configJSON string) (*envoy_http_v3.HttpConnectionManager_Tracing, error) { + // Type field is present so decode it as a any.Any + var any any.Any + if err := jsonpb.UnmarshalString(configJSON, &any); err != nil { + return nil, err + } + var t envoy_http_v3.HttpConnectionManager_Tracing + if err := proto.Unmarshal(any.Value, &t); err != nil { + return nil, err + } + return &t, nil +} + func makeHTTPFilter(opts listenerFilterOpts) (*envoy_listener_v3.Filter, error) { router, err := makeEnvoyHTTPFilter("envoy.filters.http.router", &envoy_http_router_v3.Router{}) if err != nil { @@ -2034,6 +2093,10 @@ func makeHTTPFilter(opts listenerFilterOpts) (*envoy_listener_v3.Filter, error) }, } + if opts.tracing != nil { + cfg.Tracing = opts.tracing + } + if opts.useRDS { if opts.cluster != "" { return nil, fmt.Errorf("cannot specify cluster name when using RDS") From 916eef8f51651ec4fd93c9f641ea50873f8c8593 Mon Sep 17 00:00:00 2001 From: Jorge Marey Date: Tue, 2 Aug 2022 09:58:37 +0200 Subject: [PATCH 18/55] add changelog file --- .changelog/13998.txt | 3 +++ 1 file changed, 3 insertions(+) create mode 100644 .changelog/13998.txt diff --git a/.changelog/13998.txt b/.changelog/13998.txt new file mode 100644 index 000000000..cced542db --- /dev/null +++ b/.changelog/13998.txt @@ -0,0 +1,3 @@ +```release-note:improvement +connect: change deprecated tracing configuration on envoy +``` From e3813586f32bb4cc0515453668cd0e803e1dbd92 Mon Sep 17 00:00:00 2001 From: Jorge Marey Date: Tue, 30 Aug 2022 08:36:06 +0200 Subject: [PATCH 19/55] Fix typos. Add test. Add documentation --- agent/xds/config.go | 4 +- agent/xds/listeners.go | 20 +- agent/xds/listeners_test.go | 43 +++++ .../custom-trace-listener.latest.golden | 180 ++++++++++++++++++ .../content/docs/connect/proxies/envoy.mdx | 39 ++++ 5 files changed, 274 insertions(+), 12 deletions(-) create mode 100644 agent/xds/testdata/listeners/custom-trace-listener.latest.golden diff --git a/agent/xds/config.go b/agent/xds/config.go index 89e92106d..cfbd23e07 100644 --- a/agent/xds/config.go +++ b/agent/xds/config.go @@ -27,11 +27,11 @@ type ProxyConfig struct { // Note: This escape hatch is compatible with the discovery chain. 
PublicListenerJSON string `mapstructure:"envoy_public_listener_json"` - // LstenerTracingJSON is a complete override ("escape hatch") for the + // ListenerTracingJSON is a complete override ("escape hatch") for the // listeners tracing configuration. // // Note: This escape hatch is compatible with the discovery chain. - LstenerTracingJSON string `mapstructure:"envoy_listener_tracing_json"` + ListenerTracingJSON string `mapstructure:"envoy_listener_tracing_json"` // LocalClusterJSON is a complete override ("escape hatch") for the // local application cluster. diff --git a/agent/xds/listeners.go b/agent/xds/listeners.go index b3c9577e1..488cc6eb8 100644 --- a/agent/xds/listeners.go +++ b/agent/xds/listeners.go @@ -115,9 +115,9 @@ func (s *ResourceGenerator) listenersFromSnapshotConnectProxy(cfgSnap *proxycfg. s.Logger.Warn("failed to parse Connect.Proxy.Config", "error", err) } var tracing *envoy_http_v3.HttpConnectionManager_Tracing - if proxyCfg.LstenerTracingJSON != "" { - if tracing, err = makeTracingFromUserConfig(proxyCfg.LstenerTracingJSON); err != nil { - s.Logger.Warn("failed to parse LstenerTracingJSON config", "error", err) + if proxyCfg.ListenerTracingJSON != "" { + if tracing, err = makeTracingFromUserConfig(proxyCfg.ListenerTracingJSON); err != nil { + s.Logger.Warn("failed to parse ListenerTracingJSON config", "error", err) } } @@ -1209,9 +1209,9 @@ func (s *ResourceGenerator) makeInboundListener(cfgSnap *proxycfg.ConfigSnapshot l = makePortListener(name, addr, port, envoy_core_v3.TrafficDirection_INBOUND) var tracing *envoy_http_v3.HttpConnectionManager_Tracing - if cfg.LstenerTracingJSON != "" { - if tracing, err = makeTracingFromUserConfig(cfg.LstenerTracingJSON); err != nil { - s.Logger.Warn("failed to parse LstenerTracingJSON config", "error", err) + if cfg.ListenerTracingJSON != "" { + if tracing, err = makeTracingFromUserConfig(cfg.ListenerTracingJSON); err != nil { + s.Logger.Warn("failed to parse ListenerTracingJSON config", "error", err) } } @@ -1338,7 +1338,7 @@ func (s *ResourceGenerator) makeExposedCheckListener(cfgSnap *proxycfg.ConfigSna statPrefix: "", routePath: path.Path, httpAuthzFilter: nil, - // in the exposed check listener de don't set the tracing configuration + // in the exposed check listener we don't set the tracing configuration } f, err := makeListenerFilter(opts) if err != nil { @@ -1578,9 +1578,9 @@ func (s *ResourceGenerator) makeFilterChainTerminatingGateway(cfgSnap *proxycfg. 
s.Logger.Warn("failed to parse Connect.Proxy.Config", "error", err) } var tracing *envoy_http_v3.HttpConnectionManager_Tracing - if proxyCfg.LstenerTracingJSON != "" { - if tracing, err = makeTracingFromUserConfig(proxyCfg.LstenerTracingJSON); err != nil { - s.Logger.Warn("failed to parse LstenerTracingJSON config", "error", err) + if proxyCfg.ListenerTracingJSON != "" { + if tracing, err = makeTracingFromUserConfig(proxyCfg.ListenerTracingJSON); err != nil { + s.Logger.Warn("failed to parse ListenerTracingJSON config", "error", err) } } diff --git a/agent/xds/listeners_test.go b/agent/xds/listeners_test.go index c51730074..1112222f3 100644 --- a/agent/xds/listeners_test.go +++ b/agent/xds/listeners_test.go @@ -772,6 +772,15 @@ func TestListenersFromSnapshot(t *testing.T) { name: "transparent-proxy-terminating-gateway", create: proxycfg.TestConfigSnapshotTransparentProxyTerminatingGatewayCatalogDestinationsOnly, }, + { + name: "custom-trace-listener", + create: func(t testinf.T) *proxycfg.ConfigSnapshot { + return proxycfg.TestConfigSnapshot(t, func(ns *structs.NodeService) { + ns.Proxy.Config["protocol"] = "http" + ns.Proxy.Config["envoy_listener_tracing_json"] = customTraceJSON(t) + }, nil) + }, + }, } latestEnvoyVersion := proxysupport.EnvoyVersions[0] @@ -947,6 +956,40 @@ func customHTTPListenerJSON(t testinf.T, opts customHTTPListenerJSONOptions) str return buf.String() } +func customTraceJSON(t testinf.T) string { + t.Helper() + return ` + { + "@type" : "type.googleapis.com/envoy.extensions.filters.network.http_connection_manager.v3.HttpConnectionManager.Tracing", + "provider" : { + "name" : "envoy.tracers.zipkin", + "typed_config" : { + "@type" : "type.googleapis.com/envoy.config.trace.v3.ZipkinConfig", + "collector_cluster" : "otelcolector", + "collector_endpoint" : "/api/v2/spans", + "collector_endpoint_version" : "HTTP_JSON", + "shared_span_context" : false + } + }, + "custom_tags" : [ + { + "tag" : "custom_header", + "request_header" : { + "name" : "x-custom-traceid", + "default_value" : "" + } + }, + { + "tag" : "alloc_id", + "environment" : { + "name" : "NOMAD_ALLOC_ID" + } + } + ] + } + ` +} + type configFetcherFunc func() string var _ ConfigFetcher = (configFetcherFunc)(nil) diff --git a/agent/xds/testdata/listeners/custom-trace-listener.latest.golden b/agent/xds/testdata/listeners/custom-trace-listener.latest.golden new file mode 100644 index 000000000..5fce12bb7 --- /dev/null +++ b/agent/xds/testdata/listeners/custom-trace-listener.latest.golden @@ -0,0 +1,180 @@ +{ + "versionInfo": "00000001", + "resources": [ + { + "@type": "type.googleapis.com/envoy.config.listener.v3.Listener", + "name": "db:127.0.0.1:9191", + "address": { + "socketAddress": { + "address": "127.0.0.1", + "portValue": 9191 + } + }, + "filterChains": [ + { + "filters": [ + { + "name": "envoy.filters.network.tcp_proxy", + "typedConfig": { + "@type": "type.googleapis.com/envoy.extensions.filters.network.tcp_proxy.v3.TcpProxy", + "statPrefix": "upstream.db.default.default.dc1", + "cluster": "db.default.dc1.internal.11111111-2222-3333-4444-555555555555.consul" + } + } + ] + } + ], + "trafficDirection": "OUTBOUND" + }, + { + "@type": "type.googleapis.com/envoy.config.listener.v3.Listener", + "name": "prepared_query:geo-cache:127.10.10.10:8181", + "address": { + "socketAddress": { + "address": "127.10.10.10", + "portValue": 8181 + } + }, + "filterChains": [ + { + "filters": [ + { + "name": "envoy.filters.network.tcp_proxy", + "typedConfig": { + "@type": 
"type.googleapis.com/envoy.extensions.filters.network.tcp_proxy.v3.TcpProxy", + "statPrefix": "upstream.prepared_query_geo-cache", + "cluster": "geo-cache.default.dc1.query.11111111-2222-3333-4444-555555555555.consul" + } + } + ] + } + ], + "trafficDirection": "OUTBOUND" + }, + { + "@type": "type.googleapis.com/envoy.config.listener.v3.Listener", + "name": "public_listener:0.0.0.0:9999", + "address": { + "socketAddress": { + "address": "0.0.0.0", + "portValue": 9999 + } + }, + "filterChains": [ + { + "filters": [ + { + "name": "envoy.filters.network.http_connection_manager", + "typedConfig": { + "@type": "type.googleapis.com/envoy.extensions.filters.network.http_connection_manager.v3.HttpConnectionManager", + "statPrefix": "public_listener", + "routeConfig": { + "name": "public_listener", + "virtualHosts": [ + { + "name": "public_listener", + "domains": [ + "*" + ], + "routes": [ + { + "match": { + "prefix": "/" + }, + "route": { + "cluster": "local_app" + } + } + ] + } + ] + }, + "httpFilters": [ + { + "name": "envoy.filters.http.rbac", + "typedConfig": { + "@type": "type.googleapis.com/envoy.extensions.filters.http.rbac.v3.RBAC", + "rules": { + + } + } + }, + { + "name": "envoy.filters.http.router", + "typedConfig": { + "@type": "type.googleapis.com/envoy.extensions.filters.http.router.v3.Router" + } + } + ], + "tracing": { + "customTags": [ + { + "tag": "custom_header", + "requestHeader": { + "name": "x-custom-traceid" + } + }, + { + "tag": "alloc_id", + "environment": { + "name": "NOMAD_ALLOC_ID" + } + } + ], + "provider": { + "name": "envoy.tracers.zipkin", + "typedConfig": { + "@type": "type.googleapis.com/envoy.config.trace.v3.ZipkinConfig", + "collectorCluster": "otelcolector", + "collectorEndpoint": "/api/v2/spans", + "sharedSpanContext": false, + "collectorEndpointVersion": "HTTP_JSON" + } + } + }, + "forwardClientCertDetails": "APPEND_FORWARD", + "setCurrentClientCertDetails": { + "subject": true, + "cert": true, + "chain": true, + "dns": true, + "uri": true + } + } + } + ], + "transportSocket": { + "name": "tls", + "typedConfig": { + "@type": "type.googleapis.com/envoy.extensions.transport_sockets.tls.v3.DownstreamTlsContext", + "commonTlsContext": { + "tlsParams": { + + }, + "tlsCertificates": [ + { + "certificateChain": { + "inlineString": "-----BEGIN CERTIFICATE-----\nMIICjDCCAjKgAwIBAgIIC5llxGV1gB8wCgYIKoZIzj0EAwIwFDESMBAGA1UEAxMJ\nVGVzdCBDQSAyMB4XDTE5MDMyMjEzNTgyNloXDTI5MDMyMjEzNTgyNlowDjEMMAoG\nA1UEAxMDd2ViMFkwEwYHKoZIzj0CAQYIKoZIzj0DAQcDQgAEADPv1RHVNRfa2VKR\nAB16b6rZnEt7tuhaxCFpQXPj7M2omb0B9Favq5E0ivpNtv1QnFhxtPd7d5k4e+T7\nSkW1TaOCAXIwggFuMA4GA1UdDwEB/wQEAwIDuDAdBgNVHSUEFjAUBggrBgEFBQcD\nAgYIKwYBBQUHAwEwDAYDVR0TAQH/BAIwADBoBgNVHQ4EYQRfN2Q6MDc6ODc6M2E6\nNDA6MTk6NDc6YzM6NWE6YzA6YmE6NjI6ZGY6YWY6NGI6ZDQ6MDU6MjU6NzY6M2Q6\nNWE6OGQ6MTY6OGQ6Njc6NWU6MmU6YTA6MzQ6N2Q6ZGM6ZmYwagYDVR0jBGMwYYBf\nZDE6MTE6MTE6YWM6MmE6YmE6OTc6YjI6M2Y6YWM6N2I6YmQ6ZGE6YmU6YjE6OGE6\nZmM6OWE6YmE6YjU6YmM6ODM6ZTc6NWU6NDE6NmY6ZjI6NzM6OTU6NTg6MGM6ZGIw\nWQYDVR0RBFIwUIZOc3BpZmZlOi8vMTExMTExMTEtMjIyMi0zMzMzLTQ0NDQtNTU1\nNTU1NTU1NTU1LmNvbnN1bC9ucy9kZWZhdWx0L2RjL2RjMS9zdmMvd2ViMAoGCCqG\nSM49BAMCA0gAMEUCIGC3TTvvjj76KMrguVyFf4tjOqaSCRie3nmHMRNNRav7AiEA\npY0heYeK9A6iOLrzqxSerkXXQyj5e9bE4VgUnxgPU6g=\n-----END CERTIFICATE-----\n" + }, + "privateKey": { + "inlineString": "-----BEGIN EC PRIVATE KEY-----\nMHcCAQEEIMoTkpRggp3fqZzFKh82yS4LjtJI+XY+qX/7DefHFrtdoAoGCCqGSM49\nAwEHoUQDQgAEADPv1RHVNRfa2VKRAB16b6rZnEt7tuhaxCFpQXPj7M2omb0B9Fav\nq5E0ivpNtv1QnFhxtPd7d5k4e+T7SkW1TQ==\n-----END EC PRIVATE KEY-----\n" + } + } + ], + 
"validationContext": { + "trustedCa": { + "inlineString": "-----BEGIN CERTIFICATE-----\nMIICXDCCAgKgAwIBAgIICpZq70Z9LyUwCgYIKoZIzj0EAwIwFDESMBAGA1UEAxMJ\nVGVzdCBDQSAyMB4XDTE5MDMyMjEzNTgyNloXDTI5MDMyMjEzNTgyNlowFDESMBAG\nA1UEAxMJVGVzdCBDQSAyMFkwEwYHKoZIzj0CAQYIKoZIzj0DAQcDQgAEIhywH1gx\nAsMwuF3ukAI5YL2jFxH6Usnma1HFSfVyxbXX1/uoZEYrj8yCAtdU2yoHETyd+Zx2\nThhRLP79pYegCaOCATwwggE4MA4GA1UdDwEB/wQEAwIBhjAPBgNVHRMBAf8EBTAD\nAQH/MGgGA1UdDgRhBF9kMToxMToxMTphYzoyYTpiYTo5NzpiMjozZjphYzo3Yjpi\nZDpkYTpiZTpiMTo4YTpmYzo5YTpiYTpiNTpiYzo4MzplNzo1ZTo0MTo2ZjpmMjo3\nMzo5NTo1ODowYzpkYjBqBgNVHSMEYzBhgF9kMToxMToxMTphYzoyYTpiYTo5Nzpi\nMjozZjphYzo3YjpiZDpkYTpiZTpiMTo4YTpmYzo5YTpiYTpiNTpiYzo4MzplNzo1\nZTo0MTo2ZjpmMjo3Mzo5NTo1ODowYzpkYjA/BgNVHREEODA2hjRzcGlmZmU6Ly8x\nMTExMTExMS0yMjIyLTMzMzMtNDQ0NC01NTU1NTU1NTU1NTUuY29uc3VsMAoGCCqG\nSM49BAMCA0gAMEUCICOY0i246rQHJt8o8Oya0D5PLL1FnmsQmQqIGCi31RwnAiEA\noR5f6Ku+cig2Il8T8LJujOp2/2A72QcHZA57B13y+8o=\n-----END CERTIFICATE-----\n" + } + } + }, + "requireClientCertificate": true + } + } + } + ], + "trafficDirection": "INBOUND" + } + ], + "typeUrl": "type.googleapis.com/envoy.config.listener.v3.Listener", + "nonce": "00000001" +} \ No newline at end of file diff --git a/website/content/docs/connect/proxies/envoy.mdx b/website/content/docs/connect/proxies/envoy.mdx index 7ada5b6fd..020a0510f 100644 --- a/website/content/docs/connect/proxies/envoy.mdx +++ b/website/content/docs/connect/proxies/envoy.mdx @@ -759,6 +759,45 @@ definition](/docs/connect/registration/service-registration) or +- `envoy_listener_tracing_json` - Specifies a [tracing + configuration](https://www.envoyproxy.io/docs/envoy/latest/api-v3/extensions/filters/network/http_connection_manager/v3/http_connection_manager.proto#envoy-v3-api-msg-extensions-filters-network-http-connection-manager-v3-httpconnectionmanager-tracing) + to be inserter in the public and upstreams listeners of the proxy. + + + + ```json + { + "@type" : "type.googleapis.com/envoy.extensions.filters.network.http_connection_manager.v3.HttpConnectionManager.Tracing", + "provider" : { + "name" : "envoy.tracers.zipkin", + "typed_config" : { + "@type" : "type.googleapis.com/envoy.config.trace.v3.ZipkinConfig", + "collector_cluster" : "otelcolector", + "collector_endpoint" : "/api/v2/spans", + "collector_endpoint_version" : "HTTP_JSON", + "shared_span_context" : false + } + }, + "custom_tags" : [ + { + "tag" : "custom_header", + "request_header" : { + "name" : "x-custom-traceid", + "default_value" : "" + } + }, + { + "tag" : "alloc_id", + "environment" : { + "name" : "NOMAD_ALLOC_ID" + } + } + ] + } + ``` + + + - `envoy_local_cluster_json` - Specifies a complete [Envoy cluster][pb-cluster] to be delivered in place of the local application cluster. This allows customization of timeouts, rate limits, load balancing strategy etc. 
From 5525efd2bd3ca2889c796edbeb742b5e858a278f Mon Sep 17 00:00:00 2001 From: Jorge Marey Date: Tue, 30 Aug 2022 17:00:11 +0200 Subject: [PATCH 20/55] Change changelog message --- .changelog/13998.txt | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/.changelog/13998.txt b/.changelog/13998.txt index cced542db..0fba03f22 100644 --- a/.changelog/13998.txt +++ b/.changelog/13998.txt @@ -1,3 +1,3 @@ ```release-note:improvement -connect: change deprecated tracing configuration on envoy +connect: expose new tracing configuration on envoy ``` From 06e7f3cadb1215fee610e24d90dc2eaf4f99c14c Mon Sep 17 00:00:00 2001 From: Eric Haberkorn Date: Tue, 30 Aug 2022 11:46:34 -0400 Subject: [PATCH 21/55] Finish up cluster peering failover (#14396) --- .changelog/14396.txt | 3 + agent/proxycfg/connect_proxy.go | 34 ++- agent/proxycfg/ingress_gateway.go | 11 + agent/proxycfg/snapshot.go | 12 + agent/proxycfg/state_test.go | 62 ++++- agent/proxycfg/testing.go | 25 ++ agent/proxycfg/testing_upstreams.go | 34 +++ agent/proxycfg/upstreams.go | 92 ++++++-- agent/structs/testing_catalog.go | 22 ++ agent/xds/clusters.go | 165 +++++++------ agent/xds/clusters_test.go | 13 ++ agent/xds/endpoints.go | 216 ++++++++++------- agent/xds/endpoints_test.go | 13 ++ ...and-failover-to-cluster-peer.latest.golden | 219 ++++++++++++++++++ ...and-failover-to-cluster-peer.latest.golden | 139 +++++++++++ ...and-failover-to-cluster-peer.latest.golden | 109 +++++++++ ...and-failover-to-cluster-peer.latest.golden | 75 ++++++ .../alpha/base.hcl | 5 + .../alpha/config_entries.hcl | 26 +++ .../alpha/service_gateway.hcl | 5 + .../alpha/service_s1.hcl | 1 + .../alpha/service_s2.hcl | 7 + .../alpha/setup.sh | 11 + .../alpha/verify.bats | 27 +++ .../bind.hcl | 2 + .../capture.sh | 6 + .../primary/base.hcl | 3 + .../primary/config_entries.hcl | 21 ++ .../primary/service_s1.hcl | 16 ++ .../primary/service_s2.hcl | 7 + .../primary/setup.sh | 10 + .../primary/verify.bats | 87 +++++++ .../vars.sh | 4 + 33 files changed, 1298 insertions(+), 184 deletions(-) create mode 100644 .changelog/14396.txt create mode 100644 agent/xds/testdata/clusters/connect-proxy-with-chain-and-failover-to-cluster-peer.latest.golden create mode 100644 agent/xds/testdata/clusters/ingress-with-chain-and-failover-to-cluster-peer.latest.golden create mode 100644 agent/xds/testdata/endpoints/connect-proxy-with-chain-and-failover-to-cluster-peer.latest.golden create mode 100644 agent/xds/testdata/endpoints/ingress-with-chain-and-failover-to-cluster-peer.latest.golden create mode 100644 test/integration/connect/envoy/case-cfg-resolver-cluster-peering-failover/alpha/base.hcl create mode 100644 test/integration/connect/envoy/case-cfg-resolver-cluster-peering-failover/alpha/config_entries.hcl create mode 100644 test/integration/connect/envoy/case-cfg-resolver-cluster-peering-failover/alpha/service_gateway.hcl create mode 100644 test/integration/connect/envoy/case-cfg-resolver-cluster-peering-failover/alpha/service_s1.hcl create mode 100644 test/integration/connect/envoy/case-cfg-resolver-cluster-peering-failover/alpha/service_s2.hcl create mode 100644 test/integration/connect/envoy/case-cfg-resolver-cluster-peering-failover/alpha/setup.sh create mode 100644 test/integration/connect/envoy/case-cfg-resolver-cluster-peering-failover/alpha/verify.bats create mode 100644 test/integration/connect/envoy/case-cfg-resolver-cluster-peering-failover/bind.hcl create mode 100644 test/integration/connect/envoy/case-cfg-resolver-cluster-peering-failover/capture.sh create mode 100644 
test/integration/connect/envoy/case-cfg-resolver-cluster-peering-failover/primary/base.hcl create mode 100644 test/integration/connect/envoy/case-cfg-resolver-cluster-peering-failover/primary/config_entries.hcl create mode 100644 test/integration/connect/envoy/case-cfg-resolver-cluster-peering-failover/primary/service_s1.hcl create mode 100644 test/integration/connect/envoy/case-cfg-resolver-cluster-peering-failover/primary/service_s2.hcl create mode 100644 test/integration/connect/envoy/case-cfg-resolver-cluster-peering-failover/primary/setup.sh create mode 100644 test/integration/connect/envoy/case-cfg-resolver-cluster-peering-failover/primary/verify.bats create mode 100644 test/integration/connect/envoy/case-cfg-resolver-cluster-peering-failover/vars.sh diff --git a/.changelog/14396.txt b/.changelog/14396.txt new file mode 100644 index 000000000..3905df946 --- /dev/null +++ b/.changelog/14396.txt @@ -0,0 +1,3 @@ +```release-note:feature +peering: Add support to failover to services running on cluster peers. +``` diff --git a/agent/proxycfg/connect_proxy.go b/agent/proxycfg/connect_proxy.go index 15e3498f2..2ff1f9ca9 100644 --- a/agent/proxycfg/connect_proxy.go +++ b/agent/proxycfg/connect_proxy.go @@ -280,16 +280,6 @@ func (s *handlerConnectProxy) handleUpdate(ctx context.Context, u UpdateEvent, s } snap.Roots = roots - case strings.HasPrefix(u.CorrelationID, peerTrustBundleIDPrefix): - resp, ok := u.Result.(*pbpeering.TrustBundleReadResponse) - if !ok { - return fmt.Errorf("invalid type for response: %T", u.Result) - } - peer := strings.TrimPrefix(u.CorrelationID, peerTrustBundleIDPrefix) - if resp.Bundle != nil { - snap.ConnectProxy.UpstreamPeerTrustBundles.Set(peer, resp.Bundle) - } - case u.CorrelationID == peeringTrustBundlesWatchID: resp, ok := u.Result.(*pbpeering.TrustBundleListByServiceResponse) if !ok { @@ -369,6 +359,17 @@ func (s *handlerConnectProxy) handleUpdate(ctx context.Context, u UpdateEvent, s // Clean up data // + peeredChainTargets := make(map[UpstreamID]struct{}) + for _, discoChain := range snap.ConnectProxy.DiscoveryChain { + for _, target := range discoChain.Targets { + if target.Peer == "" { + continue + } + uid := NewUpstreamIDFromTargetID(target.ID) + peeredChainTargets[uid] = struct{}{} + } + } + validPeerNames := make(map[string]struct{}) // Iterate through all known endpoints and remove references to upstream IDs that weren't in the update @@ -383,6 +384,11 @@ func (s *handlerConnectProxy) handleUpdate(ctx context.Context, u UpdateEvent, s validPeerNames[uid.Peer] = struct{}{} return true } + // Peered upstream came from a discovery chain target + if _, ok := peeredChainTargets[uid]; ok { + validPeerNames[uid.Peer] = struct{}{} + return true + } snap.ConnectProxy.PeerUpstreamEndpoints.CancelWatch(uid) return true }) @@ -463,8 +469,14 @@ func (s *handlerConnectProxy) handleUpdate(ctx context.Context, u UpdateEvent, s continue } if _, ok := seenUpstreams[uid]; !ok { - for _, cancelFn := range targets { + for targetID, cancelFn := range targets { cancelFn() + + targetUID := NewUpstreamIDFromTargetID(targetID) + if targetUID.Peer != "" { + snap.ConnectProxy.PeerUpstreamEndpoints.CancelWatch(targetUID) + snap.ConnectProxy.UpstreamPeerTrustBundles.CancelWatch(targetUID.Peer) + } } delete(snap.ConnectProxy.WatchedUpstreams, uid) } diff --git a/agent/proxycfg/ingress_gateway.go b/agent/proxycfg/ingress_gateway.go index 828229864..81a492836 100644 --- a/agent/proxycfg/ingress_gateway.go +++ b/agent/proxycfg/ingress_gateway.go @@ -5,7 +5,9 @@ import ( "fmt" 
cachetype "github.com/hashicorp/consul/agent/cache-types" + "github.com/hashicorp/consul/agent/proxycfg/internal/watch" "github.com/hashicorp/consul/agent/structs" + "github.com/hashicorp/consul/proto/pbpeering" ) type handlerIngressGateway struct { @@ -66,6 +68,9 @@ func (s *handlerIngressGateway) initialize(ctx context.Context) (ConfigSnapshot, snap.IngressGateway.WatchedGateways = make(map[UpstreamID]map[string]context.CancelFunc) snap.IngressGateway.WatchedGatewayEndpoints = make(map[UpstreamID]map[string]structs.CheckServiceNodes) snap.IngressGateway.Listeners = make(map[IngressListenerKey]structs.IngressListener) + snap.IngressGateway.UpstreamPeerTrustBundles = watch.NewMap[string, *pbpeering.PeeringTrustBundle]() + snap.IngressGateway.PeerUpstreamEndpoints = watch.NewMap[UpstreamID, structs.CheckServiceNodes]() + snap.IngressGateway.PeerUpstreamEndpointsUseHostnames = make(map[UpstreamID]struct{}) return snap, nil } @@ -152,6 +157,12 @@ func (s *handlerIngressGateway) handleUpdate(ctx context.Context, u UpdateEvent, delete(snap.IngressGateway.WatchedUpstreams[uid], targetID) delete(snap.IngressGateway.WatchedUpstreamEndpoints[uid], targetID) cancelUpstreamFn() + + targetUID := NewUpstreamIDFromTargetID(targetID) + if targetUID.Peer != "" { + snap.IngressGateway.PeerUpstreamEndpoints.CancelWatch(targetUID) + snap.IngressGateway.UpstreamPeerTrustBundles.CancelWatch(targetUID.Peer) + } } cancelFn() diff --git a/agent/proxycfg/snapshot.go b/agent/proxycfg/snapshot.go index 8d2d81bed..23cb8a955 100644 --- a/agent/proxycfg/snapshot.go +++ b/agent/proxycfg/snapshot.go @@ -814,6 +814,18 @@ func (s *ConfigSnapshot) MeshConfigTLSOutgoing() *structs.MeshDirectionalTLSConf return mesh.TLS.Outgoing } +func (s *ConfigSnapshot) ToConfigSnapshotUpstreams() (*ConfigSnapshotUpstreams, error) { + switch s.Kind { + case structs.ServiceKindConnectProxy: + return &s.ConnectProxy.ConfigSnapshotUpstreams, nil + case structs.ServiceKindIngressGateway: + return &s.IngressGateway.ConfigSnapshotUpstreams, nil + default: + // This is a coherence check and should never fail + return nil, fmt.Errorf("No upstream snapshot for gateway mode %q", s.Kind) + } +} + func (u *ConfigSnapshotUpstreams) UpstreamPeerMeta(uid UpstreamID) structs.PeeringServiceMeta { nodes, _ := u.PeerUpstreamEndpoints.Get(uid) if len(nodes) == 0 { diff --git a/agent/proxycfg/state_test.go b/agent/proxycfg/state_test.go index 855ded03d..825ac84fe 100644 --- a/agent/proxycfg/state_test.go +++ b/agent/proxycfg/state_test.go @@ -493,6 +493,11 @@ func TestState_WatchesAndUpdates(t *testing.T) { Mode: structs.MeshGatewayModeNone, }, }, + structs.Upstream{ + DestinationType: structs.UpstreamDestTypeService, + DestinationName: "api-failover-to-peer", + LocalBindPort: 10007, + }, structs.Upstream{ DestinationType: structs.UpstreamDestTypeService, DestinationName: "api-dc2", @@ -552,6 +557,16 @@ func TestState_WatchesAndUpdates(t *testing.T) { Mode: structs.MeshGatewayModeNone, }, }), + fmt.Sprintf("discovery-chain:%s-failover-to-peer", apiUID.String()): genVerifyDiscoveryChainWatch(&structs.DiscoveryChainRequest{ + Name: "api-failover-to-peer", + EvaluateInDatacenter: "dc1", + EvaluateInNamespace: "default", + EvaluateInPartition: "default", + Datacenter: "dc1", + OverrideMeshGateway: structs.MeshGatewayConfig{ + Mode: meshGatewayProxyConfigValue, + }, + }), fmt.Sprintf("discovery-chain:%s-dc2", apiUID.String()): genVerifyDiscoveryChainWatch(&structs.DiscoveryChainRequest{ Name: "api-dc2", EvaluateInDatacenter: "dc1", @@ -639,6 +654,26 @@ func 
TestState_WatchesAndUpdates(t *testing.T) { }, Err: nil, }, + { + CorrelationID: fmt.Sprintf("discovery-chain:%s-failover-to-peer", apiUID.String()), + Result: &structs.DiscoveryChainResponse{ + Chain: discoverychain.TestCompileConfigEntries(t, "api-failover-to-peer", "default", "default", "dc1", "trustdomain.consul", + func(req *discoverychain.CompileRequest) { + req.OverrideMeshGateway.Mode = meshGatewayProxyConfigValue + }, &structs.ServiceResolverConfigEntry{ + Kind: structs.ServiceResolver, + Name: "api-failover-to-peer", + Failover: map[string]structs.ServiceResolverFailover{ + "*": { + Targets: []structs.ServiceResolverFailoverTarget{ + {Peer: "cluster-01"}, + }, + }, + }, + }), + }, + Err: nil, + }, }, verifySnapshot: func(t testing.TB, snap *ConfigSnapshot) { require.True(t, snap.Valid()) @@ -646,15 +681,18 @@ func TestState_WatchesAndUpdates(t *testing.T) { require.Equal(t, indexedRoots, snap.Roots) require.Equal(t, issuedCert, snap.ConnectProxy.Leaf) - require.Len(t, snap.ConnectProxy.DiscoveryChain, 5, "%+v", snap.ConnectProxy.DiscoveryChain) - require.Len(t, snap.ConnectProxy.WatchedUpstreams, 5, "%+v", snap.ConnectProxy.WatchedUpstreams) - require.Len(t, snap.ConnectProxy.WatchedUpstreamEndpoints, 5, "%+v", snap.ConnectProxy.WatchedUpstreamEndpoints) - require.Len(t, snap.ConnectProxy.WatchedGateways, 5, "%+v", snap.ConnectProxy.WatchedGateways) - require.Len(t, snap.ConnectProxy.WatchedGatewayEndpoints, 5, "%+v", snap.ConnectProxy.WatchedGatewayEndpoints) + require.Len(t, snap.ConnectProxy.DiscoveryChain, 6, "%+v", snap.ConnectProxy.DiscoveryChain) + require.Len(t, snap.ConnectProxy.WatchedUpstreams, 6, "%+v", snap.ConnectProxy.WatchedUpstreams) + require.Len(t, snap.ConnectProxy.WatchedUpstreamEndpoints, 6, "%+v", snap.ConnectProxy.WatchedUpstreamEndpoints) + require.Len(t, snap.ConnectProxy.WatchedGateways, 6, "%+v", snap.ConnectProxy.WatchedGateways) + require.Len(t, snap.ConnectProxy.WatchedGatewayEndpoints, 6, "%+v", snap.ConnectProxy.WatchedGatewayEndpoints) require.Len(t, snap.ConnectProxy.WatchedServiceChecks, 0, "%+v", snap.ConnectProxy.WatchedServiceChecks) require.Len(t, snap.ConnectProxy.PreparedQueryEndpoints, 0, "%+v", snap.ConnectProxy.PreparedQueryEndpoints) + require.Equal(t, 1, snap.ConnectProxy.ConfigSnapshotUpstreams.PeerUpstreamEndpoints.Len()) + require.Equal(t, 1, snap.ConnectProxy.ConfigSnapshotUpstreams.UpstreamPeerTrustBundles.Len()) + require.True(t, snap.ConnectProxy.IntentionsSet) require.Equal(t, ixnMatch, snap.ConnectProxy.Intentions) require.True(t, snap.ConnectProxy.MeshConfigSet) @@ -667,6 +705,7 @@ func TestState_WatchesAndUpdates(t *testing.T) { fmt.Sprintf("upstream-target:api-failover-remote.default.default.dc2:%s-failover-remote?dc=dc2", apiUID.String()): genVerifyServiceSpecificRequest("api-failover-remote", "", "dc2", true), fmt.Sprintf("upstream-target:api-failover-local.default.default.dc2:%s-failover-local?dc=dc2", apiUID.String()): genVerifyServiceSpecificRequest("api-failover-local", "", "dc2", true), fmt.Sprintf("upstream-target:api-failover-direct.default.default.dc2:%s-failover-direct?dc=dc2", apiUID.String()): genVerifyServiceSpecificRequest("api-failover-direct", "", "dc2", true), + upstreamPeerWatchIDPrefix + fmt.Sprintf("%s-failover-to-peer?peer=cluster-01", apiUID.String()): genVerifyServiceSpecificPeeredRequest("api-failover-to-peer", "", "", "cluster-01", true), fmt.Sprintf("mesh-gateway:dc2:%s-failover-remote?dc=dc2", apiUID.String()): genVerifyGatewayWatch("dc2"), 
fmt.Sprintf("mesh-gateway:dc1:%s-failover-local?dc=dc2", apiUID.String()): genVerifyGatewayWatch("dc1"), }, @@ -676,15 +715,18 @@ func TestState_WatchesAndUpdates(t *testing.T) { require.Equal(t, indexedRoots, snap.Roots) require.Equal(t, issuedCert, snap.ConnectProxy.Leaf) - require.Len(t, snap.ConnectProxy.DiscoveryChain, 5, "%+v", snap.ConnectProxy.DiscoveryChain) - require.Len(t, snap.ConnectProxy.WatchedUpstreams, 5, "%+v", snap.ConnectProxy.WatchedUpstreams) - require.Len(t, snap.ConnectProxy.WatchedUpstreamEndpoints, 5, "%+v", snap.ConnectProxy.WatchedUpstreamEndpoints) - require.Len(t, snap.ConnectProxy.WatchedGateways, 5, "%+v", snap.ConnectProxy.WatchedGateways) - require.Len(t, snap.ConnectProxy.WatchedGatewayEndpoints, 5, "%+v", snap.ConnectProxy.WatchedGatewayEndpoints) + require.Len(t, snap.ConnectProxy.DiscoveryChain, 6, "%+v", snap.ConnectProxy.DiscoveryChain) + require.Len(t, snap.ConnectProxy.WatchedUpstreams, 6, "%+v", snap.ConnectProxy.WatchedUpstreams) + require.Len(t, snap.ConnectProxy.WatchedUpstreamEndpoints, 6, "%+v", snap.ConnectProxy.WatchedUpstreamEndpoints) + require.Len(t, snap.ConnectProxy.WatchedGateways, 6, "%+v", snap.ConnectProxy.WatchedGateways) + require.Len(t, snap.ConnectProxy.WatchedGatewayEndpoints, 6, "%+v", snap.ConnectProxy.WatchedGatewayEndpoints) require.Len(t, snap.ConnectProxy.WatchedServiceChecks, 0, "%+v", snap.ConnectProxy.WatchedServiceChecks) require.Len(t, snap.ConnectProxy.PreparedQueryEndpoints, 0, "%+v", snap.ConnectProxy.PreparedQueryEndpoints) + require.Equal(t, 1, snap.ConnectProxy.ConfigSnapshotUpstreams.PeerUpstreamEndpoints.Len()) + require.Equal(t, 1, snap.ConnectProxy.ConfigSnapshotUpstreams.UpstreamPeerTrustBundles.Len()) + require.True(t, snap.ConnectProxy.IntentionsSet) require.Equal(t, ixnMatch, snap.ConnectProxy.Intentions) }, diff --git a/agent/proxycfg/testing.go b/agent/proxycfg/testing.go index 0493e30da..d43658947 100644 --- a/agent/proxycfg/testing.go +++ b/agent/proxycfg/testing.go @@ -280,6 +280,31 @@ func TestUpstreamNodesDC2(t testing.T) structs.CheckServiceNodes { } } +func TestUpstreamNodesPeerCluster01(t testing.T) structs.CheckServiceNodes { + peer := "cluster-01" + service := structs.TestNodeServiceWithNameInPeer(t, "web", peer) + return structs.CheckServiceNodes{ + structs.CheckServiceNode{ + Node: &structs.Node{ + ID: "test1", + Node: "test1", + Address: "10.40.1.1", + PeerName: peer, + }, + Service: service, + }, + structs.CheckServiceNode{ + Node: &structs.Node{ + ID: "test2", + Node: "test2", + Address: "10.40.1.2", + PeerName: peer, + }, + Service: service, + }, + } +} + func TestUpstreamNodesInStatusDC2(t testing.T, status string) structs.CheckServiceNodes { return structs.CheckServiceNodes{ structs.CheckServiceNode{ diff --git a/agent/proxycfg/testing_upstreams.go b/agent/proxycfg/testing_upstreams.go index 2d80c0968..5e131af4f 100644 --- a/agent/proxycfg/testing_upstreams.go +++ b/agent/proxycfg/testing_upstreams.go @@ -8,6 +8,7 @@ import ( "github.com/hashicorp/consul/agent/connect" "github.com/hashicorp/consul/agent/consul/discoverychain" "github.com/hashicorp/consul/agent/structs" + "github.com/hashicorp/consul/proto/pbpeering" ) func setupTestVariationConfigEntriesAndSnapshot( @@ -72,6 +73,24 @@ func setupTestVariationConfigEntriesAndSnapshot( Nodes: TestGatewayNodesDC2(t), }, }) + case "failover-to-cluster-peer": + events = append(events, UpdateEvent{ + CorrelationID: "peer-trust-bundle:cluster-01", + Result: &pbpeering.TrustBundleReadResponse{ + Bundle: &pbpeering.PeeringTrustBundle{ + 
PeerName: "peer1", + TrustDomain: "peer1.domain", + ExportedPartition: "peer1ap", + RootPEMs: []string{"peer1-root-1"}, + }, + }, + }) + events = append(events, UpdateEvent{ + CorrelationID: "upstream-peer:db?peer=cluster-01", + Result: &structs.IndexedCheckServiceNodes{ + Nodes: TestUpstreamNodesPeerCluster01(t), + }, + }) case "failover-through-double-remote-gateway-triggered": events = append(events, UpdateEvent{ CorrelationID: "upstream-target:db.default.default.dc1:" + dbUID.String(), @@ -255,6 +274,21 @@ func setupTestVariationDiscoveryChain( }, }, ) + case "failover-to-cluster-peer": + entries = append(entries, + &structs.ServiceResolverConfigEntry{ + Kind: structs.ServiceResolver, + Name: "db", + ConnectTimeout: 33 * time.Second, + Failover: map[string]structs.ServiceResolverFailover{ + "*": { + Targets: []structs.ServiceResolverFailoverTarget{ + {Peer: "cluster-01"}, + }, + }, + }, + }, + ) case "failover-through-double-remote-gateway-triggered": fallthrough case "failover-through-double-remote-gateway": diff --git a/agent/proxycfg/upstreams.go b/agent/proxycfg/upstreams.go index 600a89e09..e8825e94c 100644 --- a/agent/proxycfg/upstreams.go +++ b/agent/proxycfg/upstreams.go @@ -9,7 +9,9 @@ import ( "github.com/mitchellh/mapstructure" "github.com/hashicorp/consul/acl" + cachetype "github.com/hashicorp/consul/agent/cache-types" "github.com/hashicorp/consul/agent/structs" + "github.com/hashicorp/consul/proto/pbpeering" ) type handlerUpstreams struct { @@ -21,9 +23,10 @@ func (s *handlerUpstreams) handleUpdateUpstreams(ctx context.Context, u UpdateEv return fmt.Errorf("error filling agent cache: %v", u.Err) } - upstreamsSnapshot := &snap.ConnectProxy.ConfigSnapshotUpstreams - if snap.Kind == structs.ServiceKindIngressGateway { - upstreamsSnapshot = &snap.IngressGateway.ConfigSnapshotUpstreams + upstreamsSnapshot, err := snap.ToConfigSnapshotUpstreams() + + if err != nil { + return err } switch { @@ -98,19 +101,16 @@ func (s *handlerUpstreams) handleUpdateUpstreams(ctx context.Context, u UpdateEv uid := UpstreamIDFromString(uidString) - filteredNodes := hostnameEndpoints( - s.logger, - GatewayKey{ /*empty so it never matches*/ }, - resp.Nodes, - ) - if len(filteredNodes) > 0 { - if set := upstreamsSnapshot.PeerUpstreamEndpoints.Set(uid, filteredNodes); set { - upstreamsSnapshot.PeerUpstreamEndpointsUseHostnames[uid] = struct{}{} - } - } else { - if set := upstreamsSnapshot.PeerUpstreamEndpoints.Set(uid, resp.Nodes); set { - delete(upstreamsSnapshot.PeerUpstreamEndpointsUseHostnames, uid) - } + s.setPeerEndpoints(upstreamsSnapshot, uid, resp.Nodes) + + case strings.HasPrefix(u.CorrelationID, peerTrustBundleIDPrefix): + resp, ok := u.Result.(*pbpeering.TrustBundleReadResponse) + if !ok { + return fmt.Errorf("invalid type for response: %T", u.Result) + } + peer := strings.TrimPrefix(u.CorrelationID, peerTrustBundleIDPrefix) + if resp.Bundle != nil { + upstreamsSnapshot.UpstreamPeerTrustBundles.Set(peer, resp.Bundle) } case strings.HasPrefix(u.CorrelationID, "upstream-target:"): @@ -216,6 +216,23 @@ func removeColonPrefix(s string) (string, string, bool) { return s[0:idx], s[idx+1:], true } +func (s *handlerUpstreams) setPeerEndpoints(upstreamsSnapshot *ConfigSnapshotUpstreams, uid UpstreamID, nodes structs.CheckServiceNodes) { + filteredNodes := hostnameEndpoints( + s.logger, + GatewayKey{ /*empty so it never matches*/ }, + nodes, + ) + if len(filteredNodes) > 0 { + if set := upstreamsSnapshot.PeerUpstreamEndpoints.Set(uid, filteredNodes); set { + 
upstreamsSnapshot.PeerUpstreamEndpointsUseHostnames[uid] = struct{}{} + } + } else { + if set := upstreamsSnapshot.PeerUpstreamEndpoints.Set(uid, nodes); set { + delete(upstreamsSnapshot.PeerUpstreamEndpointsUseHostnames, uid) + } + } +} + func (s *handlerUpstreams) resetWatchesFromChain( ctx context.Context, uid UpstreamID, @@ -255,6 +272,12 @@ func (s *handlerUpstreams) resetWatchesFromChain( delete(snap.WatchedUpstreams[uid], targetID) delete(snap.WatchedUpstreamEndpoints[uid], targetID) cancelFn() + + targetUID := NewUpstreamIDFromTargetID(targetID) + if targetUID.Peer != "" { + snap.PeerUpstreamEndpoints.CancelWatch(targetUID) + snap.UpstreamPeerTrustBundles.CancelWatch(targetUID.Peer) + } } var ( @@ -274,6 +297,7 @@ func (s *handlerUpstreams) resetWatchesFromChain( service: target.Service, filter: target.Subset.Filter, datacenter: target.Datacenter, + peer: target.Peer, entMeta: target.GetEnterpriseMetadata(), } err := s.watchUpstreamTarget(ctx, snap, opts) @@ -384,6 +408,7 @@ type targetWatchOpts struct { service string filter string datacenter string + peer string entMeta *acl.EnterpriseMeta } @@ -397,11 +422,17 @@ func (s *handlerUpstreams) watchUpstreamTarget(ctx context.Context, snap *Config var finalMeta acl.EnterpriseMeta finalMeta.Merge(opts.entMeta) - correlationID := "upstream-target:" + opts.chainID + ":" + opts.upstreamID.String() + uid := opts.upstreamID + correlationID := "upstream-target:" + opts.chainID + ":" + uid.String() + + if opts.peer != "" { + uid = NewUpstreamIDFromTargetID(opts.chainID) + correlationID = upstreamPeerWatchIDPrefix + uid.String() + } ctx, cancel := context.WithCancel(ctx) err := s.dataSources.Health.Notify(ctx, &structs.ServiceSpecificRequest{ - PeerName: opts.upstreamID.Peer, + PeerName: opts.peer, Datacenter: opts.datacenter, QueryOptions: structs.QueryOptions{ Token: s.token, @@ -422,6 +453,31 @@ func (s *handlerUpstreams) watchUpstreamTarget(ctx context.Context, snap *Config } snap.WatchedUpstreams[opts.upstreamID][opts.chainID] = cancel + if uid.Peer == "" { + return nil + } + + if ok := snap.PeerUpstreamEndpoints.IsWatched(uid); !ok { + snap.PeerUpstreamEndpoints.InitWatch(uid, cancel) + } + + // Check whether a watch for this peer exists to avoid duplicates. + if ok := snap.UpstreamPeerTrustBundles.IsWatched(uid.Peer); !ok { + peerCtx, cancel := context.WithCancel(ctx) + if err := s.dataSources.TrustBundle.Notify(peerCtx, &cachetype.TrustBundleReadRequest{ + Request: &pbpeering.TrustBundleReadRequest{ + Name: uid.Peer, + Partition: uid.PartitionOrDefault(), + }, + QueryOptions: structs.QueryOptions{Token: s.token}, + }, peerTrustBundleIDPrefix+uid.Peer, s.ch); err != nil { + cancel() + return fmt.Errorf("error while watching trust bundle for peer %q: %w", uid.Peer, err) + } + + snap.UpstreamPeerTrustBundles.InitWatch(uid.Peer, cancel) + } + return nil } diff --git a/agent/structs/testing_catalog.go b/agent/structs/testing_catalog.go index c9fcf017d..f026f6091 100644 --- a/agent/structs/testing_catalog.go +++ b/agent/structs/testing_catalog.go @@ -53,6 +53,28 @@ func TestNodeServiceWithName(t testing.T, name string) *NodeService { } } +const peerTrustDomain = "1c053652-8512-4373-90cf-5a7f6263a994.consul" + +func TestNodeServiceWithNameInPeer(t testing.T, name string, peer string) *NodeService { + service := "payments" + return &NodeService{ + Kind: ServiceKindTypical, + Service: name, + Port: 8080, + Connect: ServiceConnect{ + PeerMeta: &PeeringServiceMeta{ + SNI: []string{ + service + ".default.default." + peer + ".external." 
+ peerTrustDomain, + }, + SpiffeID: []string{ + "spiffe://" + peerTrustDomain + "/ns/default/dc/" + peer + "-dc/svc/" + service, + }, + Protocol: "tcp", + }, + }, + } +} + // TestNodeServiceProxy returns a *NodeService representing a valid // Connect proxy. func TestNodeServiceProxy(t testing.T) *NodeService { diff --git a/agent/xds/clusters.go b/agent/xds/clusters.go index c3ac71847..6b171a27f 100644 --- a/agent/xds/clusters.go +++ b/agent/xds/clusters.go @@ -88,29 +88,26 @@ func (s *ResourceGenerator) clustersFromSnapshotConnectProxy(cfgSnap *proxycfg.C clusters = append(clusters, passthroughs...) } - // NOTE: Any time we skip a chain below we MUST also skip that discovery chain in endpoints.go - // so that the sets of endpoints generated matches the sets of clusters. - for uid, chain := range cfgSnap.ConnectProxy.DiscoveryChain { + getUpstream := func(uid proxycfg.UpstreamID) (*structs.Upstream, bool) { upstream := cfgSnap.ConnectProxy.UpstreamConfig[uid] explicit := upstream.HasLocalPortOrSocket() implicit := cfgSnap.ConnectProxy.IsImplicitUpstream(uid) - if !implicit && !explicit { - // Discovery chain is not associated with a known explicit or implicit upstream so it is skipped. - continue - } + return upstream, !implicit && !explicit + } - chainEndpoints, ok := cfgSnap.ConnectProxy.WatchedUpstreamEndpoints[uid] - if !ok { - // this should not happen - return nil, fmt.Errorf("no endpoint map for upstream %q", uid) + // NOTE: Any time we skip a chain below we MUST also skip that discovery chain in endpoints.go + // so that the sets of endpoints generated matches the sets of clusters. + for uid, chain := range cfgSnap.ConnectProxy.DiscoveryChain { + upstream, skip := getUpstream(uid) + if skip { + continue } upstreamClusters, err := s.makeUpstreamClustersForDiscoveryChain( uid, upstream, chain, - chainEndpoints, cfgSnap, false, ) @@ -127,18 +124,15 @@ func (s *ResourceGenerator) clustersFromSnapshotConnectProxy(cfgSnap *proxycfg.C // upstream in endpoints.go so that the sets of endpoints generated matches // the sets of clusters. for _, uid := range cfgSnap.ConnectProxy.PeeredUpstreamIDs() { - upstreamCfg := cfgSnap.ConnectProxy.UpstreamConfig[uid] - - explicit := upstreamCfg.HasLocalPortOrSocket() - implicit := cfgSnap.ConnectProxy.IsImplicitUpstream(uid) - if !implicit && !explicit { - // Not associated with a known explicit or implicit upstream so it is skipped. 
+ upstream, skip := getUpstream(uid) + if skip { continue } peerMeta := cfgSnap.ConnectProxy.UpstreamPeerMeta(uid) + cfg := s.getAndModifyUpstreamConfigForPeeredListener(uid, upstream, peerMeta) - upstreamCluster, err := s.makeUpstreamClusterForPeerService(uid, upstreamCfg, peerMeta, cfgSnap) + upstreamCluster, err := s.makeUpstreamClusterForPeerService(uid, cfg, peerMeta, cfgSnap) if err != nil { return nil, err } @@ -652,17 +646,10 @@ func (s *ResourceGenerator) clustersFromSnapshotIngressGateway(cfgSnap *proxycfg return nil, fmt.Errorf("no discovery chain for upstream %q", uid) } - chainEndpoints, ok := cfgSnap.IngressGateway.WatchedUpstreamEndpoints[uid] - if !ok { - // this should not happen - return nil, fmt.Errorf("no endpoint map for upstream %q", uid) - } - upstreamClusters, err := s.makeUpstreamClustersForDiscoveryChain( uid, &u, chain, - chainEndpoints, cfgSnap, false, ) @@ -745,7 +732,7 @@ func (s *ResourceGenerator) makeAppCluster(cfgSnap *proxycfg.ConfigSnapshot, nam func (s *ResourceGenerator) makeUpstreamClusterForPeerService( uid proxycfg.UpstreamID, - upstream *structs.Upstream, + upstreamConfig structs.UpstreamConfig, peerMeta structs.PeeringServiceMeta, cfgSnap *proxycfg.ConfigSnapshot, ) (*envoy_cluster_v3.Cluster, error) { @@ -754,16 +741,21 @@ func (s *ResourceGenerator) makeUpstreamClusterForPeerService( err error ) - cfg := s.getAndModifyUpstreamConfigForPeeredListener(uid, upstream, peerMeta) - if cfg.EnvoyClusterJSON != "" { - c, err = makeClusterFromUserConfig(cfg.EnvoyClusterJSON) + if upstreamConfig.EnvoyClusterJSON != "" { + c, err = makeClusterFromUserConfig(upstreamConfig.EnvoyClusterJSON) if err != nil { return c, err } // In the happy path don't return yet as we need to inject TLS config still. } - tbs, ok := cfgSnap.ConnectProxy.UpstreamPeerTrustBundles.Get(uid.Peer) + upstreamsSnapshot, err := cfgSnap.ToConfigSnapshotUpstreams() + + if err != nil { + return c, err + } + + tbs, ok := upstreamsSnapshot.UpstreamPeerTrustBundles.Get(uid.Peer) if !ok { // this should never happen since we loop through upstreams with // set trust bundles @@ -772,7 +764,7 @@ func (s *ResourceGenerator) makeUpstreamClusterForPeerService( clusterName := generatePeeredClusterName(uid, tbs) - outlierDetection := ToOutlierDetection(cfg.PassiveHealthCheck) + outlierDetection := ToOutlierDetection(upstreamConfig.PassiveHealthCheck) // We can't rely on health checks for services on cluster peers because they // don't take into account service resolvers, splitters and routers. 
Setting // MaxEjectionPercent too 100% gives outlier detection the power to eject the @@ -783,18 +775,18 @@ func (s *ResourceGenerator) makeUpstreamClusterForPeerService( if c == nil { c = &envoy_cluster_v3.Cluster{ Name: clusterName, - ConnectTimeout: durationpb.New(time.Duration(cfg.ConnectTimeoutMs) * time.Millisecond), + ConnectTimeout: durationpb.New(time.Duration(upstreamConfig.ConnectTimeoutMs) * time.Millisecond), CommonLbConfig: &envoy_cluster_v3.Cluster_CommonLbConfig{ HealthyPanicThreshold: &envoy_type_v3.Percent{ Value: 0, // disable panic threshold }, }, CircuitBreakers: &envoy_cluster_v3.CircuitBreakers{ - Thresholds: makeThresholdsIfNeeded(cfg.Limits), + Thresholds: makeThresholdsIfNeeded(upstreamConfig.Limits), }, OutlierDetection: outlierDetection, } - if cfg.Protocol == "http2" || cfg.Protocol == "grpc" { + if upstreamConfig.Protocol == "http2" || upstreamConfig.Protocol == "grpc" { if err := s.setHttp2ProtocolOptions(c); err != nil { return c, err } @@ -828,12 +820,11 @@ func (s *ResourceGenerator) makeUpstreamClusterForPeerService( false, /*onlyPassing*/ ) } - } rootPEMs := cfgSnap.RootPEMs() if uid.Peer != "" { - tbs, _ := cfgSnap.ConnectProxy.UpstreamPeerTrustBundles.Get(uid.Peer) + tbs, _ := upstreamsSnapshot.UpstreamPeerTrustBundles.Get(uid.Peer) rootPEMs = tbs.ConcatenatedRootPEMs() } @@ -968,7 +959,6 @@ func (s *ResourceGenerator) makeUpstreamClustersForDiscoveryChain( uid proxycfg.UpstreamID, upstream *structs.Upstream, chain *structs.CompiledDiscoveryChain, - chainEndpoints map[string]structs.CheckServiceNodes, cfgSnap *proxycfg.ConfigSnapshot, forMeshGateway bool, ) ([]*envoy_cluster_v3.Cluster, error) { @@ -985,7 +975,15 @@ func (s *ResourceGenerator) makeUpstreamClustersForDiscoveryChain( upstreamConfigMap = upstream.Config } - cfg, err := structs.ParseUpstreamConfigNoDefaults(upstreamConfigMap) + upstreamsSnapshot, err := cfgSnap.ToConfigSnapshotUpstreams() + + // Mesh gateways are exempt because upstreamsSnapshot is only used for + // cluster peering targets and transative failover/redirects are unsupported. + if err != nil && !forMeshGateway { + return nil, fmt.Errorf("No upstream snapshot for gateway mode %q", cfgSnap.Kind) + } + + rawUpstreamConfig, err := structs.ParseUpstreamConfigNoDefaults(upstreamConfigMap) if err != nil { // Don't hard fail on a config typo, just warn. The parse func returns // default config if there is an error so it's safe to continue. @@ -993,13 +991,28 @@ func (s *ResourceGenerator) makeUpstreamClustersForDiscoveryChain( "error", err) } + finalizeUpstreamConfig := func(cfg structs.UpstreamConfig, connectTimeout time.Duration) structs.UpstreamConfig { + if cfg.Protocol == "" { + cfg.Protocol = chain.Protocol + } + + if cfg.Protocol == "" { + cfg.Protocol = "tcp" + } + + if cfg.ConnectTimeoutMs == 0 { + cfg.ConnectTimeoutMs = int(connectTimeout / time.Millisecond) + } + return cfg + } + var escapeHatchCluster *envoy_cluster_v3.Cluster if !forMeshGateway { - if cfg.EnvoyClusterJSON != "" { + if rawUpstreamConfig.EnvoyClusterJSON != "" { if chain.Default { // If you haven't done anything to setup the discovery chain, then // you can use the envoy_cluster_json escape hatch. 
- escapeHatchCluster, err = makeClusterFromUserConfig(cfg.EnvoyClusterJSON) + escapeHatchCluster, err = makeClusterFromUserConfig(rawUpstreamConfig.EnvoyClusterJSON) if err != nil { return nil, err } @@ -1013,14 +1026,20 @@ func (s *ResourceGenerator) makeUpstreamClustersForDiscoveryChain( var out []*envoy_cluster_v3.Cluster for _, node := range chain.Nodes { - if node.Type != structs.DiscoveryGraphNodeTypeResolver { + switch { + case node == nil: + return nil, fmt.Errorf("impossible to process a nil node") + case node.Type != structs.DiscoveryGraphNodeTypeResolver: continue + case node.Resolver == nil: + return nil, fmt.Errorf("impossible to process a non-resolver node") } failover := node.Resolver.Failover // These variables are prefixed with primary to avoid shaddowing bugs. primaryTargetID := node.Resolver.Target primaryTarget := chain.Targets[primaryTargetID] primaryClusterName := CustomizeClusterName(primaryTarget.Name, chain) + upstreamConfig := finalizeUpstreamConfig(rawUpstreamConfig, node.Resolver.ConnectTimeout) if forMeshGateway { primaryClusterName = meshGatewayExportedClusterNamePrefix + primaryClusterName } @@ -1033,22 +1052,38 @@ func (s *ResourceGenerator) makeUpstreamClustersForDiscoveryChain( continue } - type targetClusterOptions struct { + type targetClusterOption struct { targetID string clusterName string } // Construct the information required to make target clusters. When // failover is configured, create the aggregate cluster. - var targetClustersOptions []targetClusterOptions + var targetClustersOptions []targetClusterOption if failover != nil && !forMeshGateway { var failoverClusterNames []string for _, tid := range append([]string{primaryTargetID}, failover.Targets...) { target := chain.Targets[tid] - clusterName := CustomizeClusterName(target.Name, chain) + clusterName := target.Name + targetUID := proxycfg.NewUpstreamIDFromTargetID(tid) + if targetUID.Peer != "" { + tbs, ok := upstreamsSnapshot.UpstreamPeerTrustBundles.Get(targetUID.Peer) + // We can't generate cluster on peers without the trust bundle. The + // trust bundle should be ready soon. 
+ if !ok { + s.Logger.Debug("peer trust bundle not ready for discovery chain target", + "peer", targetUID.Peer, + "target", tid, + ) + continue + } + + clusterName = generatePeeredClusterName(targetUID, tbs) + } + clusterName = CustomizeClusterName(clusterName, chain) clusterName = failoverClusterNamePrefix + clusterName - targetClustersOptions = append(targetClustersOptions, targetClusterOptions{ + targetClustersOptions = append(targetClustersOptions, targetClusterOption{ targetID: tid, clusterName: clusterName, }) @@ -1077,7 +1112,7 @@ func (s *ResourceGenerator) makeUpstreamClustersForDiscoveryChain( out = append(out, c) } else { - targetClustersOptions = append(targetClustersOptions, targetClusterOptions{ + targetClustersOptions = append(targetClustersOptions, targetClusterOption{ targetID: primaryTargetID, clusterName: primaryClusterName, }) @@ -1096,11 +1131,20 @@ func (s *ResourceGenerator) makeUpstreamClustersForDiscoveryChain( Datacenter: target.Datacenter, Service: target.Service, }.URI().String() - if uid.Peer != "" { - return nil, fmt.Errorf("impossible to get a peer discovery chain") + targetUID := proxycfg.NewUpstreamIDFromTargetID(targetInfo.targetID) + s.Logger.Debug("generating cluster for", "cluster", targetInfo.clusterName) + if targetUID.Peer != "" { + peerMeta := upstreamsSnapshot.UpstreamPeerMeta(targetUID) + upstreamCluster, err := s.makeUpstreamClusterForPeerService(targetUID, upstreamConfig, peerMeta, cfgSnap) + if err != nil { + continue + } + // Override the cluster name to include the failover-target~ prefix. + upstreamCluster.Name = targetInfo.clusterName + out = append(out, upstreamCluster) + continue } - s.Logger.Trace("generating cluster for", "cluster", targetInfo.clusterName) c := &envoy_cluster_v3.Cluster{ Name: targetInfo.clusterName, AltStatName: targetInfo.clusterName, @@ -1121,9 +1165,9 @@ func (s *ResourceGenerator) makeUpstreamClustersForDiscoveryChain( }, // TODO(peering): make circuit breakers or outlier detection work? CircuitBreakers: &envoy_cluster_v3.CircuitBreakers{ - Thresholds: makeThresholdsIfNeeded(cfg.Limits), + Thresholds: makeThresholdsIfNeeded(upstreamConfig.Limits), }, - OutlierDetection: ToOutlierDetection(cfg.PassiveHealthCheck), + OutlierDetection: ToOutlierDetection(upstreamConfig.PassiveHealthCheck), } var lb *structs.LoadBalancer @@ -1134,19 +1178,7 @@ func (s *ResourceGenerator) makeUpstreamClustersForDiscoveryChain( return nil, fmt.Errorf("failed to apply load balancer configuration to cluster %q: %v", targetInfo.clusterName, err) } - var proto string - if !forMeshGateway { - proto = cfg.Protocol - } - if proto == "" { - proto = chain.Protocol - } - - if proto == "" { - proto = "tcp" - } - - if proto == "http2" || proto == "grpc" { + if upstreamConfig.Protocol == "http2" || upstreamConfig.Protocol == "grpc" { if err := s.setHttp2ProtocolOptions(c); err != nil { return nil, err } @@ -1155,7 +1187,7 @@ func (s *ResourceGenerator) makeUpstreamClustersForDiscoveryChain( configureTLS := true if forMeshGateway { // We only initiate TLS if we're doing an L7 proxy. 
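For reference, a rough sketch of the shape of the failover cluster names that the aggregate cluster ends up referencing. The names mirror the golden files added later in this change; `peeredClusterName` below is a simplified stand-in for `generatePeeredClusterName`, which in the real code derives the trust domain from the peer's trust bundle.

```go
package main

import "fmt"

const failoverClusterNamePrefix = "failover-target~"

// peeredClusterName mimics the shape of peered target names seen in the golden
// files (service.namespace.peer.external.trust-domain); it is illustrative only.
func peeredClusterName(service, namespace, peer, trustDomain string) string {
	return fmt.Sprintf("%s.%s.%s.external.%s", service, namespace, peer, trustDomain)
}

func main() {
	local := "db.default.dc1.internal.11111111-2222-3333-4444-555555555555.consul"
	peered := peeredClusterName("db", "default", "cluster-01", "peer1.domain")

	// The aggregate cluster lists both, each with the failover-target~ prefix,
	// so Envoy tries the local target first and fails over to the cluster peer.
	for _, name := range []string{local, peered} {
		fmt.Println(failoverClusterNamePrefix + name)
	}
}
```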
- configureTLS = structs.IsProtocolHTTPLike(proto) + configureTLS = structs.IsProtocolHTTPLike(upstreamConfig.Protocol) } if configureTLS { @@ -1228,7 +1260,6 @@ func (s *ResourceGenerator) makeExportedUpstreamClustersForMeshGateway(cfgSnap * proxycfg.NewUpstreamIDFromServiceName(svc), nil, chain, - nil, cfgSnap, true, ) diff --git a/agent/xds/clusters_test.go b/agent/xds/clusters_test.go index a56853b81..26087dd1d 100644 --- a/agent/xds/clusters_test.go +++ b/agent/xds/clusters_test.go @@ -257,6 +257,12 @@ func TestClustersFromSnapshot(t *testing.T) { return proxycfg.TestConfigSnapshotDiscoveryChain(t, "failover", nil, nil) }, }, + { + name: "connect-proxy-with-chain-and-failover-to-cluster-peer", + create: func(t testinf.T) *proxycfg.ConfigSnapshot { + return proxycfg.TestConfigSnapshotDiscoveryChain(t, "failover-to-cluster-peer", nil, nil) + }, + }, { name: "connect-proxy-with-tcp-chain-failover-through-remote-gateway", create: func(t testinf.T) *proxycfg.ConfigSnapshot { @@ -495,6 +501,13 @@ func TestClustersFromSnapshot(t *testing.T) { "failover", nil, nil, nil) }, }, + { + name: "ingress-with-chain-and-failover-to-cluster-peer", + create: func(t testinf.T) *proxycfg.ConfigSnapshot { + return proxycfg.TestConfigSnapshotIngressGateway(t, true, "tcp", + "failover-to-cluster-peer", nil, nil, nil) + }, + }, { name: "ingress-with-tcp-chain-failover-through-remote-gateway", create: func(t testinf.T) *proxycfg.ConfigSnapshot { diff --git a/agent/xds/endpoints.go b/agent/xds/endpoints.go index 6c0fca7f2..c1501f0f7 100644 --- a/agent/xds/endpoints.go +++ b/agent/xds/endpoints.go @@ -50,14 +50,19 @@ func (s *ResourceGenerator) endpointsFromSnapshotConnectProxy(cfgSnap *proxycfg. cfgSnap.ConnectProxy.PeerUpstreamEndpoints.Len()+ len(cfgSnap.ConnectProxy.WatchedUpstreamEndpoints)) - // NOTE: Any time we skip a chain below we MUST also skip that discovery chain in clusters.go - // so that the sets of endpoints generated matches the sets of clusters. - for uid, chain := range cfgSnap.ConnectProxy.DiscoveryChain { + getUpstream := func(uid proxycfg.UpstreamID) (*structs.Upstream, bool) { upstream := cfgSnap.ConnectProxy.UpstreamConfig[uid] explicit := upstream.HasLocalPortOrSocket() implicit := cfgSnap.ConnectProxy.IsImplicitUpstream(uid) - if !implicit && !explicit { + return upstream, !implicit && !explicit + } + + // NOTE: Any time we skip a chain below we MUST also skip that discovery chain in clusters.go + // so that the sets of endpoints generated matches the sets of clusters. + for uid, chain := range cfgSnap.ConnectProxy.DiscoveryChain { + upstream, skip := getUpstream(uid) + if skip { // Discovery chain is not associated with a known explicit or implicit upstream so it is skipped. continue } @@ -70,6 +75,7 @@ func (s *ResourceGenerator) endpointsFromSnapshotConnectProxy(cfgSnap *proxycfg. es, err := s.endpointsFromDiscoveryChain( uid, chain, + cfgSnap, cfgSnap.Locality, upstreamConfigMap, cfgSnap.ConnectProxy.WatchedUpstreamEndpoints[uid], @@ -86,12 +92,9 @@ func (s *ResourceGenerator) endpointsFromSnapshotConnectProxy(cfgSnap *proxycfg. // upstream in clusters.go so that the sets of endpoints generated matches // the sets of clusters. for _, uid := range cfgSnap.ConnectProxy.PeeredUpstreamIDs() { - upstreamCfg := cfgSnap.ConnectProxy.UpstreamConfig[uid] - - explicit := upstreamCfg.HasLocalPortOrSocket() - implicit := cfgSnap.ConnectProxy.IsImplicitUpstream(uid) - if !implicit && !explicit { - // Not associated with a known explicit or implicit upstream so it is skipped. 
+ _, skip := getUpstream(uid) + if skip { + // Discovery chain is not associated with a known explicit or implicit upstream so it is skipped. continue } @@ -104,22 +107,14 @@ func (s *ResourceGenerator) endpointsFromSnapshotConnectProxy(cfgSnap *proxycfg. clusterName := generatePeeredClusterName(uid, tbs) - // Also skip peer instances with a hostname as their address. EDS - // cannot resolve hostnames, so we provide them through CDS instead. - if _, ok := cfgSnap.ConnectProxy.PeerUpstreamEndpointsUseHostnames[uid]; ok { - continue + loadAssignment, err := s.makeUpstreamLoadAssignmentForPeerService(cfgSnap, clusterName, uid) + + if err != nil { + return nil, err } - endpoints, ok := cfgSnap.ConnectProxy.PeerUpstreamEndpoints.Get(uid) - if ok { - la := makeLoadAssignment( - clusterName, - []loadAssignmentEndpointGroup{ - {Endpoints: endpoints}, - }, - proxycfg.GatewayKey{ /*empty so it never matches*/ }, - ) - resources = append(resources, la) + if loadAssignment != nil { + resources = append(resources, loadAssignment) } } @@ -375,6 +370,7 @@ func (s *ResourceGenerator) endpointsFromSnapshotIngressGateway(cfgSnap *proxycf es, err := s.endpointsFromDiscoveryChain( uid, cfgSnap.IngressGateway.DiscoveryChain[uid], + cfgSnap, proxycfg.GatewayKey{Datacenter: cfgSnap.Datacenter, Partition: u.DestinationPartition}, u.Config, cfgSnap.IngressGateway.WatchedUpstreamEndpoints[uid], @@ -412,9 +408,38 @@ func makePipeEndpoint(path string) *envoy_endpoint_v3.LbEndpoint { } } +func (s *ResourceGenerator) makeUpstreamLoadAssignmentForPeerService(cfgSnap *proxycfg.ConfigSnapshot, clusterName string, uid proxycfg.UpstreamID) (*envoy_endpoint_v3.ClusterLoadAssignment, error) { + var la *envoy_endpoint_v3.ClusterLoadAssignment + + upstreamsSnapshot, err := cfgSnap.ToConfigSnapshotUpstreams() + if err != nil { + return la, err + } + + // Also skip peer instances with a hostname as their address. EDS + // cannot resolve hostnames, so we provide them through CDS instead. + if _, ok := upstreamsSnapshot.PeerUpstreamEndpointsUseHostnames[uid]; ok { + return la, nil + } + + endpoints, ok := upstreamsSnapshot.PeerUpstreamEndpoints.Get(uid) + if !ok { + return nil, nil + } + la = makeLoadAssignment( + clusterName, + []loadAssignmentEndpointGroup{ + {Endpoints: endpoints}, + }, + proxycfg.GatewayKey{ /*empty so it never matches*/ }, + ) + return la, nil +} + func (s *ResourceGenerator) endpointsFromDiscoveryChain( uid proxycfg.UpstreamID, chain *structs.CompiledDiscoveryChain, + cfgSnap *proxycfg.ConfigSnapshot, gatewayKey proxycfg.GatewayKey, upstreamConfigMap map[string]interface{}, upstreamEndpoints map[string]structs.CheckServiceNodes, @@ -432,6 +457,14 @@ func (s *ResourceGenerator) endpointsFromDiscoveryChain( upstreamConfigMap = make(map[string]interface{}) // TODO:needed? } + upstreamsSnapshot, err := cfgSnap.ToConfigSnapshotUpstreams() + + // Mesh gateways are exempt because upstreamsSnapshot is only used for + // cluster peering targets and transative failover/redirects are unsupported. 
+ if err != nil && !forMeshGateway { + return nil, fmt.Errorf("No upstream snapshot for gateway mode %q", cfgSnap.Kind) + } + var resources []proto.Message var escapeHatchCluster *envoy_cluster_v3.Cluster @@ -465,8 +498,15 @@ func (s *ResourceGenerator) endpointsFromDiscoveryChain( if node.Type != structs.DiscoveryGraphNodeTypeResolver { continue } + primaryTargetID := node.Resolver.Target failover := node.Resolver.Failover + type targetLoadAssignmentOption struct { + targetID string + clusterName string + } + var targetLoadAssignmentOptions []targetLoadAssignmentOption + var numFailoverTargets int if failover != nil { numFailoverTargets = len(failover.Targets) @@ -474,66 +514,84 @@ func (s *ResourceGenerator) endpointsFromDiscoveryChain( clusterNamePrefix := "" if numFailoverTargets > 0 && !forMeshGateway { clusterNamePrefix = failoverClusterNamePrefix - for _, failTargetID := range failover.Targets { - target := chain.Targets[failTargetID] - endpointGroup, valid := makeLoadAssignmentEndpointGroup( - chain.Targets, - upstreamEndpoints, - gatewayEndpoints, - failTargetID, - gatewayKey, - forMeshGateway, - ) - if !valid { - continue // skip the failover target if we're still populating the snapshot - } + for _, targetID := range append([]string{primaryTargetID}, failover.Targets...) { + target := chain.Targets[targetID] + clusterName := target.Name + targetUID := proxycfg.NewUpstreamIDFromTargetID(targetID) + if targetUID.Peer != "" { + tbs, ok := upstreamsSnapshot.UpstreamPeerTrustBundles.Get(targetUID.Peer) + // We can't generate cluster on peers without the trust bundle. The + // trust bundle should be ready soon. + if !ok { + s.Logger.Debug("peer trust bundle not ready for discovery chain target", + "peer", targetUID.Peer, + "target", targetID, + ) + continue + } - clusterName := CustomizeClusterName(target.Name, chain) + clusterName = generatePeeredClusterName(targetUID, tbs) + } + clusterName = CustomizeClusterName(clusterName, chain) clusterName = failoverClusterNamePrefix + clusterName if escapeHatchCluster != nil { clusterName = escapeHatchCluster.Name } - s.Logger.Debug("generating endpoints for", "cluster", clusterName) - - la := makeLoadAssignment( - clusterName, - []loadAssignmentEndpointGroup{endpointGroup}, - gatewayKey, - ) - resources = append(resources, la) + targetLoadAssignmentOptions = append(targetLoadAssignmentOptions, targetLoadAssignmentOption{ + targetID: targetID, + clusterName: clusterName, + }) } - } - targetID := node.Resolver.Target - - target := chain.Targets[targetID] - clusterName := CustomizeClusterName(target.Name, chain) - clusterName = clusterNamePrefix + clusterName - if escapeHatchCluster != nil { - clusterName = escapeHatchCluster.Name - } - if forMeshGateway { - clusterName = meshGatewayExportedClusterNamePrefix + clusterName - } - s.Logger.Debug("generating endpoints for", "cluster", clusterName) - endpointGroup, valid := makeLoadAssignmentEndpointGroup( - chain.Targets, - upstreamEndpoints, - gatewayEndpoints, - targetID, - gatewayKey, - forMeshGateway, - ) - if !valid { - continue // skip the cluster if we're still populating the snapshot + } else { + target := chain.Targets[primaryTargetID] + clusterName := CustomizeClusterName(target.Name, chain) + clusterName = clusterNamePrefix + clusterName + if escapeHatchCluster != nil { + clusterName = escapeHatchCluster.Name + } + if forMeshGateway { + clusterName = meshGatewayExportedClusterNamePrefix + clusterName + } + targetLoadAssignmentOptions = append(targetLoadAssignmentOptions, 
targetLoadAssignmentOption{ + targetID: primaryTargetID, + clusterName: clusterName, + }) } - la := makeLoadAssignment( - clusterName, - []loadAssignmentEndpointGroup{endpointGroup}, - gatewayKey, - ) - resources = append(resources, la) + for _, targetInfo := range targetLoadAssignmentOptions { + s.Logger.Debug("generating endpoints for", "cluster", targetInfo.clusterName) + targetUID := proxycfg.NewUpstreamIDFromTargetID(targetInfo.targetID) + if targetUID.Peer != "" { + loadAssignment, err := s.makeUpstreamLoadAssignmentForPeerService(cfgSnap, targetInfo.clusterName, targetUID) + if err != nil { + return nil, err + } + if loadAssignment != nil { + resources = append(resources, loadAssignment) + } + continue + } + + endpointGroup, valid := makeLoadAssignmentEndpointGroup( + chain.Targets, + upstreamEndpoints, + gatewayEndpoints, + targetInfo.targetID, + gatewayKey, + forMeshGateway, + ) + if !valid { + continue // skip the cluster if we're still populating the snapshot + } + + la := makeLoadAssignment( + targetInfo.clusterName, + []loadAssignmentEndpointGroup{endpointGroup}, + gatewayKey, + ) + resources = append(resources, la) + } } return resources, nil @@ -586,6 +644,7 @@ func (s *ResourceGenerator) makeExportedUpstreamEndpointsForMeshGateway(cfgSnap clusterEndpoints, err := s.endpointsFromDiscoveryChain( proxycfg.NewUpstreamIDFromServiceName(svc), chain, + cfgSnap, cfgSnap.Locality, nil, chainEndpoints, @@ -640,11 +699,12 @@ func makeLoadAssignment(clusterName string, endpointGroups []loadAssignmentEndpo healthStatus = endpointGroup.OverrideHealth } + endpoint := &envoy_endpoint_v3.Endpoint{ + Address: makeAddress(addr, port), + } es = append(es, &envoy_endpoint_v3.LbEndpoint{ HostIdentifier: &envoy_endpoint_v3.LbEndpoint_Endpoint{ - Endpoint: &envoy_endpoint_v3.Endpoint{ - Address: makeAddress(addr, port), - }, + Endpoint: endpoint, }, HealthStatus: healthStatus, LoadBalancingWeight: makeUint32Value(weight), diff --git a/agent/xds/endpoints_test.go b/agent/xds/endpoints_test.go index b02bdd725..90fad78e2 100644 --- a/agent/xds/endpoints_test.go +++ b/agent/xds/endpoints_test.go @@ -284,6 +284,12 @@ func TestEndpointsFromSnapshot(t *testing.T) { return proxycfg.TestConfigSnapshotDiscoveryChain(t, "failover", nil, nil) }, }, + { + name: "connect-proxy-with-chain-and-failover-to-cluster-peer", + create: func(t testinf.T) *proxycfg.ConfigSnapshot { + return proxycfg.TestConfigSnapshotDiscoveryChain(t, "failover-to-cluster-peer", nil, nil) + }, + }, { name: "connect-proxy-with-tcp-chain-failover-through-remote-gateway", create: func(t testinf.T) *proxycfg.ConfigSnapshot { @@ -396,6 +402,13 @@ func TestEndpointsFromSnapshot(t *testing.T) { "failover", nil, nil, nil) }, }, + { + name: "ingress-with-chain-and-failover-to-cluster-peer", + create: func(t testinf.T) *proxycfg.ConfigSnapshot { + return proxycfg.TestConfigSnapshotIngressGateway(t, true, "tcp", + "failover-to-cluster-peer", nil, nil, nil) + }, + }, { name: "ingress-with-tcp-chain-failover-through-remote-gateway", create: func(t testinf.T) *proxycfg.ConfigSnapshot { diff --git a/agent/xds/testdata/clusters/connect-proxy-with-chain-and-failover-to-cluster-peer.latest.golden b/agent/xds/testdata/clusters/connect-proxy-with-chain-and-failover-to-cluster-peer.latest.golden new file mode 100644 index 000000000..61de6b2e2 --- /dev/null +++ b/agent/xds/testdata/clusters/connect-proxy-with-chain-and-failover-to-cluster-peer.latest.golden @@ -0,0 +1,219 @@ +{ + "versionInfo": "00000001", + "resources": [ + { + "@type": 
"type.googleapis.com/envoy.config.cluster.v3.Cluster", + "name": "db.default.dc1.internal.11111111-2222-3333-4444-555555555555.consul", + "altStatName": "db.default.dc1.internal.11111111-2222-3333-4444-555555555555.consul", + "clusterType": { + "name": "envoy.clusters.aggregate", + "typedConfig": { + "@type": "type.googleapis.com/envoy.extensions.clusters.aggregate.v3.ClusterConfig", + "clusters": [ + "failover-target~db.default.dc1.internal.11111111-2222-3333-4444-555555555555.consul", + "failover-target~db.default.cluster-01.external.peer1.domain" + ] + } + }, + "connectTimeout": "33s", + "lbPolicy": "CLUSTER_PROVIDED" + }, + { + "@type": "type.googleapis.com/envoy.config.cluster.v3.Cluster", + "name": "failover-target~db.default.cluster-01.external.peer1.domain", + "type": "EDS", + "edsClusterConfig": { + "edsConfig": { + "ads": { + + }, + "resourceApiVersion": "V3" + } + }, + "connectTimeout": "1s", + "circuitBreakers": { + + }, + "outlierDetection": { + "maxEjectionPercent": 100 + }, + "commonLbConfig": { + "healthyPanicThreshold": { + + } + }, + "transportSocket": { + "name": "tls", + "typedConfig": { + "@type": "type.googleapis.com/envoy.extensions.transport_sockets.tls.v3.UpstreamTlsContext", + "commonTlsContext": { + "tlsParams": { + + }, + "tlsCertificates": [ + { + "certificateChain": { + "inlineString": "-----BEGIN CERTIFICATE-----\nMIICjDCCAjKgAwIBAgIIC5llxGV1gB8wCgYIKoZIzj0EAwIwFDESMBAGA1UEAxMJ\nVGVzdCBDQSAyMB4XDTE5MDMyMjEzNTgyNloXDTI5MDMyMjEzNTgyNlowDjEMMAoG\nA1UEAxMDd2ViMFkwEwYHKoZIzj0CAQYIKoZIzj0DAQcDQgAEADPv1RHVNRfa2VKR\nAB16b6rZnEt7tuhaxCFpQXPj7M2omb0B9Favq5E0ivpNtv1QnFhxtPd7d5k4e+T7\nSkW1TaOCAXIwggFuMA4GA1UdDwEB/wQEAwIDuDAdBgNVHSUEFjAUBggrBgEFBQcD\nAgYIKwYBBQUHAwEwDAYDVR0TAQH/BAIwADBoBgNVHQ4EYQRfN2Q6MDc6ODc6M2E6\nNDA6MTk6NDc6YzM6NWE6YzA6YmE6NjI6ZGY6YWY6NGI6ZDQ6MDU6MjU6NzY6M2Q6\nNWE6OGQ6MTY6OGQ6Njc6NWU6MmU6YTA6MzQ6N2Q6ZGM6ZmYwagYDVR0jBGMwYYBf\nZDE6MTE6MTE6YWM6MmE6YmE6OTc6YjI6M2Y6YWM6N2I6YmQ6ZGE6YmU6YjE6OGE6\nZmM6OWE6YmE6YjU6YmM6ODM6ZTc6NWU6NDE6NmY6ZjI6NzM6OTU6NTg6MGM6ZGIw\nWQYDVR0RBFIwUIZOc3BpZmZlOi8vMTExMTExMTEtMjIyMi0zMzMzLTQ0NDQtNTU1\nNTU1NTU1NTU1LmNvbnN1bC9ucy9kZWZhdWx0L2RjL2RjMS9zdmMvd2ViMAoGCCqG\nSM49BAMCA0gAMEUCIGC3TTvvjj76KMrguVyFf4tjOqaSCRie3nmHMRNNRav7AiEA\npY0heYeK9A6iOLrzqxSerkXXQyj5e9bE4VgUnxgPU6g=\n-----END CERTIFICATE-----\n" + }, + "privateKey": { + "inlineString": "-----BEGIN EC PRIVATE KEY-----\nMHcCAQEEIMoTkpRggp3fqZzFKh82yS4LjtJI+XY+qX/7DefHFrtdoAoGCCqGSM49\nAwEHoUQDQgAEADPv1RHVNRfa2VKRAB16b6rZnEt7tuhaxCFpQXPj7M2omb0B9Fav\nq5E0ivpNtv1QnFhxtPd7d5k4e+T7SkW1TQ==\n-----END EC PRIVATE KEY-----\n" + } + } + ], + "validationContext": { + "trustedCa": { + "inlineString": "peer1-root-1\n" + }, + "matchSubjectAltNames": [ + { + "exact": "spiffe://1c053652-8512-4373-90cf-5a7f6263a994.consul/ns/default/dc/cluster-01-dc/svc/payments" + } + ] + } + }, + "sni": "payments.default.default.cluster-01.external.1c053652-8512-4373-90cf-5a7f6263a994.consul" + } + } + }, + { + "@type": "type.googleapis.com/envoy.config.cluster.v3.Cluster", + "name": "failover-target~db.default.dc1.internal.11111111-2222-3333-4444-555555555555.consul", + "altStatName": "failover-target~db.default.dc1.internal.11111111-2222-3333-4444-555555555555.consul", + "type": "EDS", + "edsClusterConfig": { + "edsConfig": { + "ads": { + + }, + "resourceApiVersion": "V3" + } + }, + "connectTimeout": "33s", + "circuitBreakers": { + + }, + "outlierDetection": { + + }, + "commonLbConfig": { + "healthyPanicThreshold": { + + } + }, + "transportSocket": { + "name": "tls", + "typedConfig": { + "@type": 
"type.googleapis.com/envoy.extensions.transport_sockets.tls.v3.UpstreamTlsContext", + "commonTlsContext": { + "tlsParams": { + + }, + "tlsCertificates": [ + { + "certificateChain": { + "inlineString": "-----BEGIN CERTIFICATE-----\nMIICjDCCAjKgAwIBAgIIC5llxGV1gB8wCgYIKoZIzj0EAwIwFDESMBAGA1UEAxMJ\nVGVzdCBDQSAyMB4XDTE5MDMyMjEzNTgyNloXDTI5MDMyMjEzNTgyNlowDjEMMAoG\nA1UEAxMDd2ViMFkwEwYHKoZIzj0CAQYIKoZIzj0DAQcDQgAEADPv1RHVNRfa2VKR\nAB16b6rZnEt7tuhaxCFpQXPj7M2omb0B9Favq5E0ivpNtv1QnFhxtPd7d5k4e+T7\nSkW1TaOCAXIwggFuMA4GA1UdDwEB/wQEAwIDuDAdBgNVHSUEFjAUBggrBgEFBQcD\nAgYIKwYBBQUHAwEwDAYDVR0TAQH/BAIwADBoBgNVHQ4EYQRfN2Q6MDc6ODc6M2E6\nNDA6MTk6NDc6YzM6NWE6YzA6YmE6NjI6ZGY6YWY6NGI6ZDQ6MDU6MjU6NzY6M2Q6\nNWE6OGQ6MTY6OGQ6Njc6NWU6MmU6YTA6MzQ6N2Q6ZGM6ZmYwagYDVR0jBGMwYYBf\nZDE6MTE6MTE6YWM6MmE6YmE6OTc6YjI6M2Y6YWM6N2I6YmQ6ZGE6YmU6YjE6OGE6\nZmM6OWE6YmE6YjU6YmM6ODM6ZTc6NWU6NDE6NmY6ZjI6NzM6OTU6NTg6MGM6ZGIw\nWQYDVR0RBFIwUIZOc3BpZmZlOi8vMTExMTExMTEtMjIyMi0zMzMzLTQ0NDQtNTU1\nNTU1NTU1NTU1LmNvbnN1bC9ucy9kZWZhdWx0L2RjL2RjMS9zdmMvd2ViMAoGCCqG\nSM49BAMCA0gAMEUCIGC3TTvvjj76KMrguVyFf4tjOqaSCRie3nmHMRNNRav7AiEA\npY0heYeK9A6iOLrzqxSerkXXQyj5e9bE4VgUnxgPU6g=\n-----END CERTIFICATE-----\n" + }, + "privateKey": { + "inlineString": "-----BEGIN EC PRIVATE KEY-----\nMHcCAQEEIMoTkpRggp3fqZzFKh82yS4LjtJI+XY+qX/7DefHFrtdoAoGCCqGSM49\nAwEHoUQDQgAEADPv1RHVNRfa2VKRAB16b6rZnEt7tuhaxCFpQXPj7M2omb0B9Fav\nq5E0ivpNtv1QnFhxtPd7d5k4e+T7SkW1TQ==\n-----END EC PRIVATE KEY-----\n" + } + } + ], + "validationContext": { + "trustedCa": { + "inlineString": "-----BEGIN CERTIFICATE-----\nMIICXDCCAgKgAwIBAgIICpZq70Z9LyUwCgYIKoZIzj0EAwIwFDESMBAGA1UEAxMJ\nVGVzdCBDQSAyMB4XDTE5MDMyMjEzNTgyNloXDTI5MDMyMjEzNTgyNlowFDESMBAG\nA1UEAxMJVGVzdCBDQSAyMFkwEwYHKoZIzj0CAQYIKoZIzj0DAQcDQgAEIhywH1gx\nAsMwuF3ukAI5YL2jFxH6Usnma1HFSfVyxbXX1/uoZEYrj8yCAtdU2yoHETyd+Zx2\nThhRLP79pYegCaOCATwwggE4MA4GA1UdDwEB/wQEAwIBhjAPBgNVHRMBAf8EBTAD\nAQH/MGgGA1UdDgRhBF9kMToxMToxMTphYzoyYTpiYTo5NzpiMjozZjphYzo3Yjpi\nZDpkYTpiZTpiMTo4YTpmYzo5YTpiYTpiNTpiYzo4MzplNzo1ZTo0MTo2ZjpmMjo3\nMzo5NTo1ODowYzpkYjBqBgNVHSMEYzBhgF9kMToxMToxMTphYzoyYTpiYTo5Nzpi\nMjozZjphYzo3YjpiZDpkYTpiZTpiMTo4YTpmYzo5YTpiYTpiNTpiYzo4MzplNzo1\nZTo0MTo2ZjpmMjo3Mzo5NTo1ODowYzpkYjA/BgNVHREEODA2hjRzcGlmZmU6Ly8x\nMTExMTExMS0yMjIyLTMzMzMtNDQ0NC01NTU1NTU1NTU1NTUuY29uc3VsMAoGCCqG\nSM49BAMCA0gAMEUCICOY0i246rQHJt8o8Oya0D5PLL1FnmsQmQqIGCi31RwnAiEA\noR5f6Ku+cig2Il8T8LJujOp2/2A72QcHZA57B13y+8o=\n-----END CERTIFICATE-----\n" + }, + "matchSubjectAltNames": [ + { + "exact": "spiffe://11111111-2222-3333-4444-555555555555.consul/ns/default/dc/dc1/svc/db" + } + ] + } + }, + "sni": "db.default.dc1.internal.11111111-2222-3333-4444-555555555555.consul" + } + } + }, + { + "@type": "type.googleapis.com/envoy.config.cluster.v3.Cluster", + "name": "geo-cache.default.dc1.query.11111111-2222-3333-4444-555555555555.consul", + "type": "EDS", + "edsClusterConfig": { + "edsConfig": { + "ads": { + + }, + "resourceApiVersion": "V3" + } + }, + "connectTimeout": "5s", + "circuitBreakers": { + + }, + "outlierDetection": { + + }, + "transportSocket": { + "name": "tls", + "typedConfig": { + "@type": "type.googleapis.com/envoy.extensions.transport_sockets.tls.v3.UpstreamTlsContext", + "commonTlsContext": { + "tlsParams": { + + }, + "tlsCertificates": [ + { + "certificateChain": { + "inlineString": "-----BEGIN 
CERTIFICATE-----\nMIICjDCCAjKgAwIBAgIIC5llxGV1gB8wCgYIKoZIzj0EAwIwFDESMBAGA1UEAxMJ\nVGVzdCBDQSAyMB4XDTE5MDMyMjEzNTgyNloXDTI5MDMyMjEzNTgyNlowDjEMMAoG\nA1UEAxMDd2ViMFkwEwYHKoZIzj0CAQYIKoZIzj0DAQcDQgAEADPv1RHVNRfa2VKR\nAB16b6rZnEt7tuhaxCFpQXPj7M2omb0B9Favq5E0ivpNtv1QnFhxtPd7d5k4e+T7\nSkW1TaOCAXIwggFuMA4GA1UdDwEB/wQEAwIDuDAdBgNVHSUEFjAUBggrBgEFBQcD\nAgYIKwYBBQUHAwEwDAYDVR0TAQH/BAIwADBoBgNVHQ4EYQRfN2Q6MDc6ODc6M2E6\nNDA6MTk6NDc6YzM6NWE6YzA6YmE6NjI6ZGY6YWY6NGI6ZDQ6MDU6MjU6NzY6M2Q6\nNWE6OGQ6MTY6OGQ6Njc6NWU6MmU6YTA6MzQ6N2Q6ZGM6ZmYwagYDVR0jBGMwYYBf\nZDE6MTE6MTE6YWM6MmE6YmE6OTc6YjI6M2Y6YWM6N2I6YmQ6ZGE6YmU6YjE6OGE6\nZmM6OWE6YmE6YjU6YmM6ODM6ZTc6NWU6NDE6NmY6ZjI6NzM6OTU6NTg6MGM6ZGIw\nWQYDVR0RBFIwUIZOc3BpZmZlOi8vMTExMTExMTEtMjIyMi0zMzMzLTQ0NDQtNTU1\nNTU1NTU1NTU1LmNvbnN1bC9ucy9kZWZhdWx0L2RjL2RjMS9zdmMvd2ViMAoGCCqG\nSM49BAMCA0gAMEUCIGC3TTvvjj76KMrguVyFf4tjOqaSCRie3nmHMRNNRav7AiEA\npY0heYeK9A6iOLrzqxSerkXXQyj5e9bE4VgUnxgPU6g=\n-----END CERTIFICATE-----\n" + }, + "privateKey": { + "inlineString": "-----BEGIN EC PRIVATE KEY-----\nMHcCAQEEIMoTkpRggp3fqZzFKh82yS4LjtJI+XY+qX/7DefHFrtdoAoGCCqGSM49\nAwEHoUQDQgAEADPv1RHVNRfa2VKRAB16b6rZnEt7tuhaxCFpQXPj7M2omb0B9Fav\nq5E0ivpNtv1QnFhxtPd7d5k4e+T7SkW1TQ==\n-----END EC PRIVATE KEY-----\n" + } + } + ], + "validationContext": { + "trustedCa": { + "inlineString": "-----BEGIN CERTIFICATE-----\nMIICXDCCAgKgAwIBAgIICpZq70Z9LyUwCgYIKoZIzj0EAwIwFDESMBAGA1UEAxMJ\nVGVzdCBDQSAyMB4XDTE5MDMyMjEzNTgyNloXDTI5MDMyMjEzNTgyNlowFDESMBAG\nA1UEAxMJVGVzdCBDQSAyMFkwEwYHKoZIzj0CAQYIKoZIzj0DAQcDQgAEIhywH1gx\nAsMwuF3ukAI5YL2jFxH6Usnma1HFSfVyxbXX1/uoZEYrj8yCAtdU2yoHETyd+Zx2\nThhRLP79pYegCaOCATwwggE4MA4GA1UdDwEB/wQEAwIBhjAPBgNVHRMBAf8EBTAD\nAQH/MGgGA1UdDgRhBF9kMToxMToxMTphYzoyYTpiYTo5NzpiMjozZjphYzo3Yjpi\nZDpkYTpiZTpiMTo4YTpmYzo5YTpiYTpiNTpiYzo4MzplNzo1ZTo0MTo2ZjpmMjo3\nMzo5NTo1ODowYzpkYjBqBgNVHSMEYzBhgF9kMToxMToxMTphYzoyYTpiYTo5Nzpi\nMjozZjphYzo3YjpiZDpkYTpiZTpiMTo4YTpmYzo5YTpiYTpiNTpiYzo4MzplNzo1\nZTo0MTo2ZjpmMjo3Mzo5NTo1ODowYzpkYjA/BgNVHREEODA2hjRzcGlmZmU6Ly8x\nMTExMTExMS0yMjIyLTMzMzMtNDQ0NC01NTU1NTU1NTU1NTUuY29uc3VsMAoGCCqG\nSM49BAMCA0gAMEUCICOY0i246rQHJt8o8Oya0D5PLL1FnmsQmQqIGCi31RwnAiEA\noR5f6Ku+cig2Il8T8LJujOp2/2A72QcHZA57B13y+8o=\n-----END CERTIFICATE-----\n" + }, + "matchSubjectAltNames": [ + { + "exact": "spiffe://11111111-2222-3333-4444-555555555555.consul/ns/default/dc/dc1/svc/geo-cache-target" + }, + { + "exact": "spiffe://11111111-2222-3333-4444-555555555555.consul/ns/default/dc/dc2/svc/geo-cache-target" + } + ] + } + }, + "sni": "geo-cache.default.dc1.query.11111111-2222-3333-4444-555555555555.consul" + } + } + }, + { + "@type": "type.googleapis.com/envoy.config.cluster.v3.Cluster", + "name": "local_app", + "type": "STATIC", + "connectTimeout": "5s", + "loadAssignment": { + "clusterName": "local_app", + "endpoints": [ + { + "lbEndpoints": [ + { + "endpoint": { + "address": { + "socketAddress": { + "address": "127.0.0.1", + "portValue": 8080 + } + } + } + } + ] + } + ] + } + } + ], + "typeUrl": "type.googleapis.com/envoy.config.cluster.v3.Cluster", + "nonce": "00000001" +} \ No newline at end of file diff --git a/agent/xds/testdata/clusters/ingress-with-chain-and-failover-to-cluster-peer.latest.golden b/agent/xds/testdata/clusters/ingress-with-chain-and-failover-to-cluster-peer.latest.golden new file mode 100644 index 000000000..94521dc8f --- /dev/null +++ b/agent/xds/testdata/clusters/ingress-with-chain-and-failover-to-cluster-peer.latest.golden @@ -0,0 +1,139 @@ +{ + "versionInfo": "00000001", + "resources": [ + { + "@type": "type.googleapis.com/envoy.config.cluster.v3.Cluster", + 
"name": "db.default.dc1.internal.11111111-2222-3333-4444-555555555555.consul", + "altStatName": "db.default.dc1.internal.11111111-2222-3333-4444-555555555555.consul", + "clusterType": { + "name": "envoy.clusters.aggregate", + "typedConfig": { + "@type": "type.googleapis.com/envoy.extensions.clusters.aggregate.v3.ClusterConfig", + "clusters": [ + "failover-target~db.default.dc1.internal.11111111-2222-3333-4444-555555555555.consul", + "failover-target~db.default.cluster-01.external.peer1.domain" + ] + } + }, + "connectTimeout": "33s", + "lbPolicy": "CLUSTER_PROVIDED" + }, + { + "@type": "type.googleapis.com/envoy.config.cluster.v3.Cluster", + "name": "failover-target~db.default.cluster-01.external.peer1.domain", + "type": "EDS", + "edsClusterConfig": { + "edsConfig": { + "ads": { + + }, + "resourceApiVersion": "V3" + } + }, + "connectTimeout": "33s", + "circuitBreakers": { + + }, + "outlierDetection": { + "maxEjectionPercent": 100 + }, + "commonLbConfig": { + "healthyPanicThreshold": { + + } + }, + "transportSocket": { + "name": "tls", + "typedConfig": { + "@type": "type.googleapis.com/envoy.extensions.transport_sockets.tls.v3.UpstreamTlsContext", + "commonTlsContext": { + "tlsParams": { + + }, + "tlsCertificates": [ + { + "certificateChain": { + "inlineString": "-----BEGIN CERTIFICATE-----\nMIICjDCCAjKgAwIBAgIIC5llxGV1gB8wCgYIKoZIzj0EAwIwFDESMBAGA1UEAxMJ\nVGVzdCBDQSAyMB4XDTE5MDMyMjEzNTgyNloXDTI5MDMyMjEzNTgyNlowDjEMMAoG\nA1UEAxMDd2ViMFkwEwYHKoZIzj0CAQYIKoZIzj0DAQcDQgAEADPv1RHVNRfa2VKR\nAB16b6rZnEt7tuhaxCFpQXPj7M2omb0B9Favq5E0ivpNtv1QnFhxtPd7d5k4e+T7\nSkW1TaOCAXIwggFuMA4GA1UdDwEB/wQEAwIDuDAdBgNVHSUEFjAUBggrBgEFBQcD\nAgYIKwYBBQUHAwEwDAYDVR0TAQH/BAIwADBoBgNVHQ4EYQRfN2Q6MDc6ODc6M2E6\nNDA6MTk6NDc6YzM6NWE6YzA6YmE6NjI6ZGY6YWY6NGI6ZDQ6MDU6MjU6NzY6M2Q6\nNWE6OGQ6MTY6OGQ6Njc6NWU6MmU6YTA6MzQ6N2Q6ZGM6ZmYwagYDVR0jBGMwYYBf\nZDE6MTE6MTE6YWM6MmE6YmE6OTc6YjI6M2Y6YWM6N2I6YmQ6ZGE6YmU6YjE6OGE6\nZmM6OWE6YmE6YjU6YmM6ODM6ZTc6NWU6NDE6NmY6ZjI6NzM6OTU6NTg6MGM6ZGIw\nWQYDVR0RBFIwUIZOc3BpZmZlOi8vMTExMTExMTEtMjIyMi0zMzMzLTQ0NDQtNTU1\nNTU1NTU1NTU1LmNvbnN1bC9ucy9kZWZhdWx0L2RjL2RjMS9zdmMvd2ViMAoGCCqG\nSM49BAMCA0gAMEUCIGC3TTvvjj76KMrguVyFf4tjOqaSCRie3nmHMRNNRav7AiEA\npY0heYeK9A6iOLrzqxSerkXXQyj5e9bE4VgUnxgPU6g=\n-----END CERTIFICATE-----\n" + }, + "privateKey": { + "inlineString": "-----BEGIN EC PRIVATE KEY-----\nMHcCAQEEIMoTkpRggp3fqZzFKh82yS4LjtJI+XY+qX/7DefHFrtdoAoGCCqGSM49\nAwEHoUQDQgAEADPv1RHVNRfa2VKRAB16b6rZnEt7tuhaxCFpQXPj7M2omb0B9Fav\nq5E0ivpNtv1QnFhxtPd7d5k4e+T7SkW1TQ==\n-----END EC PRIVATE KEY-----\n" + } + } + ], + "validationContext": { + "trustedCa": { + "inlineString": "peer1-root-1\n" + }, + "matchSubjectAltNames": [ + { + "exact": "spiffe://1c053652-8512-4373-90cf-5a7f6263a994.consul/ns/default/dc/cluster-01-dc/svc/payments" + } + ] + } + }, + "sni": "payments.default.default.cluster-01.external.1c053652-8512-4373-90cf-5a7f6263a994.consul" + } + } + }, + { + "@type": "type.googleapis.com/envoy.config.cluster.v3.Cluster", + "name": "failover-target~db.default.dc1.internal.11111111-2222-3333-4444-555555555555.consul", + "altStatName": "failover-target~db.default.dc1.internal.11111111-2222-3333-4444-555555555555.consul", + "type": "EDS", + "edsClusterConfig": { + "edsConfig": { + "ads": { + + }, + "resourceApiVersion": "V3" + } + }, + "connectTimeout": "33s", + "circuitBreakers": { + + }, + "outlierDetection": { + + }, + "commonLbConfig": { + "healthyPanicThreshold": { + + } + }, + "transportSocket": { + "name": "tls", + "typedConfig": { + "@type": 
"type.googleapis.com/envoy.extensions.transport_sockets.tls.v3.UpstreamTlsContext", + "commonTlsContext": { + "tlsParams": { + + }, + "tlsCertificates": [ + { + "certificateChain": { + "inlineString": "-----BEGIN CERTIFICATE-----\nMIICjDCCAjKgAwIBAgIIC5llxGV1gB8wCgYIKoZIzj0EAwIwFDESMBAGA1UEAxMJ\nVGVzdCBDQSAyMB4XDTE5MDMyMjEzNTgyNloXDTI5MDMyMjEzNTgyNlowDjEMMAoG\nA1UEAxMDd2ViMFkwEwYHKoZIzj0CAQYIKoZIzj0DAQcDQgAEADPv1RHVNRfa2VKR\nAB16b6rZnEt7tuhaxCFpQXPj7M2omb0B9Favq5E0ivpNtv1QnFhxtPd7d5k4e+T7\nSkW1TaOCAXIwggFuMA4GA1UdDwEB/wQEAwIDuDAdBgNVHSUEFjAUBggrBgEFBQcD\nAgYIKwYBBQUHAwEwDAYDVR0TAQH/BAIwADBoBgNVHQ4EYQRfN2Q6MDc6ODc6M2E6\nNDA6MTk6NDc6YzM6NWE6YzA6YmE6NjI6ZGY6YWY6NGI6ZDQ6MDU6MjU6NzY6M2Q6\nNWE6OGQ6MTY6OGQ6Njc6NWU6MmU6YTA6MzQ6N2Q6ZGM6ZmYwagYDVR0jBGMwYYBf\nZDE6MTE6MTE6YWM6MmE6YmE6OTc6YjI6M2Y6YWM6N2I6YmQ6ZGE6YmU6YjE6OGE6\nZmM6OWE6YmE6YjU6YmM6ODM6ZTc6NWU6NDE6NmY6ZjI6NzM6OTU6NTg6MGM6ZGIw\nWQYDVR0RBFIwUIZOc3BpZmZlOi8vMTExMTExMTEtMjIyMi0zMzMzLTQ0NDQtNTU1\nNTU1NTU1NTU1LmNvbnN1bC9ucy9kZWZhdWx0L2RjL2RjMS9zdmMvd2ViMAoGCCqG\nSM49BAMCA0gAMEUCIGC3TTvvjj76KMrguVyFf4tjOqaSCRie3nmHMRNNRav7AiEA\npY0heYeK9A6iOLrzqxSerkXXQyj5e9bE4VgUnxgPU6g=\n-----END CERTIFICATE-----\n" + }, + "privateKey": { + "inlineString": "-----BEGIN EC PRIVATE KEY-----\nMHcCAQEEIMoTkpRggp3fqZzFKh82yS4LjtJI+XY+qX/7DefHFrtdoAoGCCqGSM49\nAwEHoUQDQgAEADPv1RHVNRfa2VKRAB16b6rZnEt7tuhaxCFpQXPj7M2omb0B9Fav\nq5E0ivpNtv1QnFhxtPd7d5k4e+T7SkW1TQ==\n-----END EC PRIVATE KEY-----\n" + } + } + ], + "validationContext": { + "trustedCa": { + "inlineString": "-----BEGIN CERTIFICATE-----\nMIICXDCCAgKgAwIBAgIICpZq70Z9LyUwCgYIKoZIzj0EAwIwFDESMBAGA1UEAxMJ\nVGVzdCBDQSAyMB4XDTE5MDMyMjEzNTgyNloXDTI5MDMyMjEzNTgyNlowFDESMBAG\nA1UEAxMJVGVzdCBDQSAyMFkwEwYHKoZIzj0CAQYIKoZIzj0DAQcDQgAEIhywH1gx\nAsMwuF3ukAI5YL2jFxH6Usnma1HFSfVyxbXX1/uoZEYrj8yCAtdU2yoHETyd+Zx2\nThhRLP79pYegCaOCATwwggE4MA4GA1UdDwEB/wQEAwIBhjAPBgNVHRMBAf8EBTAD\nAQH/MGgGA1UdDgRhBF9kMToxMToxMTphYzoyYTpiYTo5NzpiMjozZjphYzo3Yjpi\nZDpkYTpiZTpiMTo4YTpmYzo5YTpiYTpiNTpiYzo4MzplNzo1ZTo0MTo2ZjpmMjo3\nMzo5NTo1ODowYzpkYjBqBgNVHSMEYzBhgF9kMToxMToxMTphYzoyYTpiYTo5Nzpi\nMjozZjphYzo3YjpiZDpkYTpiZTpiMTo4YTpmYzo5YTpiYTpiNTpiYzo4MzplNzo1\nZTo0MTo2ZjpmMjo3Mzo5NTo1ODowYzpkYjA/BgNVHREEODA2hjRzcGlmZmU6Ly8x\nMTExMTExMS0yMjIyLTMzMzMtNDQ0NC01NTU1NTU1NTU1NTUuY29uc3VsMAoGCCqG\nSM49BAMCA0gAMEUCICOY0i246rQHJt8o8Oya0D5PLL1FnmsQmQqIGCi31RwnAiEA\noR5f6Ku+cig2Il8T8LJujOp2/2A72QcHZA57B13y+8o=\n-----END CERTIFICATE-----\n" + }, + "matchSubjectAltNames": [ + { + "exact": "spiffe://11111111-2222-3333-4444-555555555555.consul/ns/default/dc/dc1/svc/db" + } + ] + } + }, + "sni": "db.default.dc1.internal.11111111-2222-3333-4444-555555555555.consul" + } + } + } + ], + "typeUrl": "type.googleapis.com/envoy.config.cluster.v3.Cluster", + "nonce": "00000001" +} \ No newline at end of file diff --git a/agent/xds/testdata/endpoints/connect-proxy-with-chain-and-failover-to-cluster-peer.latest.golden b/agent/xds/testdata/endpoints/connect-proxy-with-chain-and-failover-to-cluster-peer.latest.golden new file mode 100644 index 000000000..feaea9055 --- /dev/null +++ b/agent/xds/testdata/endpoints/connect-proxy-with-chain-and-failover-to-cluster-peer.latest.golden @@ -0,0 +1,109 @@ +{ + "versionInfo": "00000001", + "resources": [ + { + "@type": "type.googleapis.com/envoy.config.endpoint.v3.ClusterLoadAssignment", + "clusterName": "failover-target~db.default.cluster-01.external.peer1.domain", + "endpoints": [ + { + "lbEndpoints": [ + { + "endpoint": { + "address": { + "socketAddress": { + "address": "10.40.1.1", + "portValue": 8080 + } + } + }, + "healthStatus": 
"HEALTHY", + "loadBalancingWeight": 1 + }, + { + "endpoint": { + "address": { + "socketAddress": { + "address": "10.40.1.2", + "portValue": 8080 + } + } + }, + "healthStatus": "HEALTHY", + "loadBalancingWeight": 1 + } + ] + } + ] + }, + { + "@type": "type.googleapis.com/envoy.config.endpoint.v3.ClusterLoadAssignment", + "clusterName": "failover-target~db.default.dc1.internal.11111111-2222-3333-4444-555555555555.consul", + "endpoints": [ + { + "lbEndpoints": [ + { + "endpoint": { + "address": { + "socketAddress": { + "address": "10.10.1.1", + "portValue": 8080 + } + } + }, + "healthStatus": "HEALTHY", + "loadBalancingWeight": 1 + }, + { + "endpoint": { + "address": { + "socketAddress": { + "address": "10.10.1.2", + "portValue": 8080 + } + } + }, + "healthStatus": "HEALTHY", + "loadBalancingWeight": 1 + } + ] + } + ] + }, + { + "@type": "type.googleapis.com/envoy.config.endpoint.v3.ClusterLoadAssignment", + "clusterName": "geo-cache.default.dc1.query.11111111-2222-3333-4444-555555555555.consul", + "endpoints": [ + { + "lbEndpoints": [ + { + "endpoint": { + "address": { + "socketAddress": { + "address": "10.10.1.1", + "portValue": 8080 + } + } + }, + "healthStatus": "HEALTHY", + "loadBalancingWeight": 1 + }, + { + "endpoint": { + "address": { + "socketAddress": { + "address": "10.20.1.2", + "portValue": 8080 + } + } + }, + "healthStatus": "HEALTHY", + "loadBalancingWeight": 1 + } + ] + } + ] + } + ], + "typeUrl": "type.googleapis.com/envoy.config.endpoint.v3.ClusterLoadAssignment", + "nonce": "00000001" +} \ No newline at end of file diff --git a/agent/xds/testdata/endpoints/ingress-with-chain-and-failover-to-cluster-peer.latest.golden b/agent/xds/testdata/endpoints/ingress-with-chain-and-failover-to-cluster-peer.latest.golden new file mode 100644 index 000000000..c799a5a0c --- /dev/null +++ b/agent/xds/testdata/endpoints/ingress-with-chain-and-failover-to-cluster-peer.latest.golden @@ -0,0 +1,75 @@ +{ + "versionInfo": "00000001", + "resources": [ + { + "@type": "type.googleapis.com/envoy.config.endpoint.v3.ClusterLoadAssignment", + "clusterName": "failover-target~db.default.cluster-01.external.peer1.domain", + "endpoints": [ + { + "lbEndpoints": [ + { + "endpoint": { + "address": { + "socketAddress": { + "address": "10.40.1.1", + "portValue": 8080 + } + } + }, + "healthStatus": "HEALTHY", + "loadBalancingWeight": 1 + }, + { + "endpoint": { + "address": { + "socketAddress": { + "address": "10.40.1.2", + "portValue": 8080 + } + } + }, + "healthStatus": "HEALTHY", + "loadBalancingWeight": 1 + } + ] + } + ] + }, + { + "@type": "type.googleapis.com/envoy.config.endpoint.v3.ClusterLoadAssignment", + "clusterName": "failover-target~db.default.dc1.internal.11111111-2222-3333-4444-555555555555.consul", + "endpoints": [ + { + "lbEndpoints": [ + { + "endpoint": { + "address": { + "socketAddress": { + "address": "10.10.1.1", + "portValue": 8080 + } + } + }, + "healthStatus": "HEALTHY", + "loadBalancingWeight": 1 + }, + { + "endpoint": { + "address": { + "socketAddress": { + "address": "10.10.1.2", + "portValue": 8080 + } + } + }, + "healthStatus": "HEALTHY", + "loadBalancingWeight": 1 + } + ] + } + ] + } + ], + "typeUrl": "type.googleapis.com/envoy.config.endpoint.v3.ClusterLoadAssignment", + "nonce": "00000001" +} \ No newline at end of file diff --git a/test/integration/connect/envoy/case-cfg-resolver-cluster-peering-failover/alpha/base.hcl b/test/integration/connect/envoy/case-cfg-resolver-cluster-peering-failover/alpha/base.hcl new file mode 100644 index 000000000..f81ab0edd --- /dev/null +++ 
b/test/integration/connect/envoy/case-cfg-resolver-cluster-peering-failover/alpha/base.hcl @@ -0,0 +1,5 @@ +primary_datacenter = "alpha" +log_level = "trace" +peering { + enabled = true +} diff --git a/test/integration/connect/envoy/case-cfg-resolver-cluster-peering-failover/alpha/config_entries.hcl b/test/integration/connect/envoy/case-cfg-resolver-cluster-peering-failover/alpha/config_entries.hcl new file mode 100644 index 000000000..64d011702 --- /dev/null +++ b/test/integration/connect/envoy/case-cfg-resolver-cluster-peering-failover/alpha/config_entries.hcl @@ -0,0 +1,26 @@ +config_entries { + bootstrap = [ + { + kind = "proxy-defaults" + name = "global" + + config { + protocol = "tcp" + } + }, + { + kind = "exported-services" + name = "default" + services = [ + { + name = "s2" + consumers = [ + { + peer_name = "alpha-to-primary" + } + ] + } + ] + } + ] +} diff --git a/test/integration/connect/envoy/case-cfg-resolver-cluster-peering-failover/alpha/service_gateway.hcl b/test/integration/connect/envoy/case-cfg-resolver-cluster-peering-failover/alpha/service_gateway.hcl new file mode 100644 index 000000000..bcdcb2e8b --- /dev/null +++ b/test/integration/connect/envoy/case-cfg-resolver-cluster-peering-failover/alpha/service_gateway.hcl @@ -0,0 +1,5 @@ +services { + name = "mesh-gateway" + kind = "mesh-gateway" + port = 4432 +} diff --git a/test/integration/connect/envoy/case-cfg-resolver-cluster-peering-failover/alpha/service_s1.hcl b/test/integration/connect/envoy/case-cfg-resolver-cluster-peering-failover/alpha/service_s1.hcl new file mode 100644 index 000000000..e97ec2366 --- /dev/null +++ b/test/integration/connect/envoy/case-cfg-resolver-cluster-peering-failover/alpha/service_s1.hcl @@ -0,0 +1 @@ +# We don't want an s1 service in this peer diff --git a/test/integration/connect/envoy/case-cfg-resolver-cluster-peering-failover/alpha/service_s2.hcl b/test/integration/connect/envoy/case-cfg-resolver-cluster-peering-failover/alpha/service_s2.hcl new file mode 100644 index 000000000..01d4505c6 --- /dev/null +++ b/test/integration/connect/envoy/case-cfg-resolver-cluster-peering-failover/alpha/service_s2.hcl @@ -0,0 +1,7 @@ +services { + name = "s2" + port = 8181 + connect { + sidecar_service {} + } +} diff --git a/test/integration/connect/envoy/case-cfg-resolver-cluster-peering-failover/alpha/setup.sh b/test/integration/connect/envoy/case-cfg-resolver-cluster-peering-failover/alpha/setup.sh new file mode 100644 index 000000000..820506ea9 --- /dev/null +++ b/test/integration/connect/envoy/case-cfg-resolver-cluster-peering-failover/alpha/setup.sh @@ -0,0 +1,11 @@ +#!/bin/bash + +set -euo pipefail + +register_services alpha + +gen_envoy_bootstrap s2 19002 alpha +gen_envoy_bootstrap mesh-gateway 19003 alpha true + +wait_for_config_entry proxy-defaults global alpha +wait_for_config_entry exported-services default alpha diff --git a/test/integration/connect/envoy/case-cfg-resolver-cluster-peering-failover/alpha/verify.bats b/test/integration/connect/envoy/case-cfg-resolver-cluster-peering-failover/alpha/verify.bats new file mode 100644 index 000000000..d2229b297 --- /dev/null +++ b/test/integration/connect/envoy/case-cfg-resolver-cluster-peering-failover/alpha/verify.bats @@ -0,0 +1,27 @@ +#!/usr/bin/env bats + +load helpers + +@test "s2 proxy is running correct version" { + assert_envoy_version 19002 +} + +@test "s2 proxy admin is up on :19002" { + retry_default curl -f -s localhost:19002/stats -o /dev/null +} + +@test "gateway-alpha proxy admin is up on :19003" { + retry_default curl -f -s 
localhost:19003/stats -o /dev/null +} + +@test "s2 proxy listener should be up and have right cert" { + assert_proxy_presents_cert_uri localhost:21000 s2 alpha +} + +@test "s2 proxy should be healthy" { + assert_service_has_healthy_instances s2 1 alpha +} + +@test "gateway-alpha should be up and listening" { + retry_long nc -z consul-alpha-client:4432 +} diff --git a/test/integration/connect/envoy/case-cfg-resolver-cluster-peering-failover/bind.hcl b/test/integration/connect/envoy/case-cfg-resolver-cluster-peering-failover/bind.hcl new file mode 100644 index 000000000..f54393f03 --- /dev/null +++ b/test/integration/connect/envoy/case-cfg-resolver-cluster-peering-failover/bind.hcl @@ -0,0 +1,2 @@ +bind_addr = "0.0.0.0" +advertise_addr = "{{ GetInterfaceIP \"eth0\" }}" \ No newline at end of file diff --git a/test/integration/connect/envoy/case-cfg-resolver-cluster-peering-failover/capture.sh b/test/integration/connect/envoy/case-cfg-resolver-cluster-peering-failover/capture.sh new file mode 100644 index 000000000..ab90eb425 --- /dev/null +++ b/test/integration/connect/envoy/case-cfg-resolver-cluster-peering-failover/capture.sh @@ -0,0 +1,6 @@ +#!/bin/bash + +snapshot_envoy_admin localhost:19000 s1 primary || true +snapshot_envoy_admin localhost:19001 s2 primary || true +snapshot_envoy_admin localhost:19002 s2 alpha || true +snapshot_envoy_admin localhost:19003 mesh-gateway alpha || true diff --git a/test/integration/connect/envoy/case-cfg-resolver-cluster-peering-failover/primary/base.hcl b/test/integration/connect/envoy/case-cfg-resolver-cluster-peering-failover/primary/base.hcl new file mode 100644 index 000000000..c1e134d5a --- /dev/null +++ b/test/integration/connect/envoy/case-cfg-resolver-cluster-peering-failover/primary/base.hcl @@ -0,0 +1,3 @@ +peering { + enabled = true +} diff --git a/test/integration/connect/envoy/case-cfg-resolver-cluster-peering-failover/primary/config_entries.hcl b/test/integration/connect/envoy/case-cfg-resolver-cluster-peering-failover/primary/config_entries.hcl new file mode 100644 index 000000000..d9b4ba03b --- /dev/null +++ b/test/integration/connect/envoy/case-cfg-resolver-cluster-peering-failover/primary/config_entries.hcl @@ -0,0 +1,21 @@ +config_entries { + bootstrap { + kind = "proxy-defaults" + name = "global" + + config { + protocol = "tcp" + } + } + + bootstrap { + kind = "service-resolver" + name = "s2" + + failover = { + "*" = { + targets = [{peer = "primary-to-alpha"}] + } + } + } +} diff --git a/test/integration/connect/envoy/case-cfg-resolver-cluster-peering-failover/primary/service_s1.hcl b/test/integration/connect/envoy/case-cfg-resolver-cluster-peering-failover/primary/service_s1.hcl new file mode 100644 index 000000000..842490e63 --- /dev/null +++ b/test/integration/connect/envoy/case-cfg-resolver-cluster-peering-failover/primary/service_s1.hcl @@ -0,0 +1,16 @@ +services { + name = "s1" + port = 8080 + connect { + sidecar_service { + proxy { + upstreams = [ + { + destination_name = "s2" + local_bind_port = 5000 + } + ] + } + } + } +} diff --git a/test/integration/connect/envoy/case-cfg-resolver-cluster-peering-failover/primary/service_s2.hcl b/test/integration/connect/envoy/case-cfg-resolver-cluster-peering-failover/primary/service_s2.hcl new file mode 100644 index 000000000..01d4505c6 --- /dev/null +++ b/test/integration/connect/envoy/case-cfg-resolver-cluster-peering-failover/primary/service_s2.hcl @@ -0,0 +1,7 @@ +services { + name = "s2" + port = 8181 + connect { + sidecar_service {} + } +} diff --git 
a/test/integration/connect/envoy/case-cfg-resolver-cluster-peering-failover/primary/setup.sh b/test/integration/connect/envoy/case-cfg-resolver-cluster-peering-failover/primary/setup.sh new file mode 100644 index 000000000..c65cc31e4 --- /dev/null +++ b/test/integration/connect/envoy/case-cfg-resolver-cluster-peering-failover/primary/setup.sh @@ -0,0 +1,10 @@ +#!/bin/bash + +set -euo pipefail + +register_services primary + +gen_envoy_bootstrap s1 19000 primary +gen_envoy_bootstrap s2 19001 primary + +wait_for_config_entry proxy-defaults global diff --git a/test/integration/connect/envoy/case-cfg-resolver-cluster-peering-failover/primary/verify.bats b/test/integration/connect/envoy/case-cfg-resolver-cluster-peering-failover/primary/verify.bats new file mode 100644 index 000000000..543459333 --- /dev/null +++ b/test/integration/connect/envoy/case-cfg-resolver-cluster-peering-failover/primary/verify.bats @@ -0,0 +1,87 @@ +#!/usr/bin/env bats + +load helpers + +@test "s1 proxy is running correct version" { + assert_envoy_version 19000 +} + +@test "s1 proxy admin is up on :19000" { + retry_default curl -f -s localhost:19000/stats -o /dev/null +} + +@test "s2 proxy admin is up on :19001" { + retry_default curl -f -s localhost:19001/stats -o /dev/null +} + +@test "gateway-primary proxy admin is up on :19001" { + retry_default curl localhost:19000/config_dump +} + +@test "s1 proxy listener should be up and have right cert" { + assert_proxy_presents_cert_uri localhost:21000 s1 +} + +@test "s2 proxies should be healthy in primary" { + assert_service_has_healthy_instances s2 1 primary +} + +@test "s2 proxies should be healthy in alpha" { + assert_service_has_healthy_instances s2 1 alpha +} + +@test "gateway-alpha should be up and listening" { + retry_long nc -z consul-alpha-client:4432 +} + +@test "peer the two clusters together" { + create_peering primary alpha +} + +@test "s2 alpha proxies should be healthy in primary" { + assert_service_has_healthy_instances s2 1 primary "" "" primary-to-alpha +} + +@test "s1 upstream should have healthy endpoints for s2 in both primary and failover" { + assert_upstream_has_endpoints_in_status 127.0.0.1:19000 failover-target~s2.default.primary.internal HEALTHY 1 + assert_upstream_has_endpoints_in_status 127.0.0.1:19000 failover-target~s2.default.primary-to-alpha.external HEALTHY 1 +} + + +@test "s1 upstream should be able to connect to s2" { + run retry_default curl -s -f -d hello localhost:5000 + [ "$status" -eq 0 ] + [ "$output" = "hello" ] +} + +@test "s1 upstream made 1 connection" { + assert_envoy_metric_at_least 127.0.0.1:19000 "cluster.failover-target~s2.default.primary.internal.*cx_total" 1 +} + +@test "terminate instance of s2 primary envoy which should trigger failover to s2 alpha when the tcp check fails" { + kill_envoy s2 primary +} + +@test "s2 proxies should be unhealthy in primary" { + assert_service_has_healthy_instances s2 0 primary +} + +@test "s1 upstream should have healthy endpoints for s2 in the failover cluster peer" { + assert_upstream_has_endpoints_in_status 127.0.0.1:19000 failover-target~s2.default.primary.internal UNHEALTHY 1 + assert_upstream_has_endpoints_in_status 127.0.0.1:19000 failover-target~s2.default.primary-to-alpha.external HEALTHY 1 +} + +@test "reset envoy statistics" { + reset_envoy_metrics 127.0.0.1:19000 +} + + +@test "s1 upstream should be able to connect to s2 in the failover cluster peer" { + run retry_default curl -s -f -d hello localhost:5000 + [ "$status" -eq 0 ] + [ "$output" = "hello" ] +} + +@test "s1 upstream 
made 1 connection to s2 through the cluster peer" { + assert_envoy_metric_at_least 127.0.0.1:19000 "cluster.failover-target~s2.default.primary-to-alpha.external.*cx_total" 1 +} diff --git a/test/integration/connect/envoy/case-cfg-resolver-cluster-peering-failover/vars.sh b/test/integration/connect/envoy/case-cfg-resolver-cluster-peering-failover/vars.sh new file mode 100644 index 000000000..8e9108a34 --- /dev/null +++ b/test/integration/connect/envoy/case-cfg-resolver-cluster-peering-failover/vars.sh @@ -0,0 +1,4 @@ +#!/bin/bash + +export REQUIRED_SERVICES="s1 s1-sidecar-proxy s2 s2-sidecar-proxy s2-alpha s2-sidecar-proxy-alpha gateway-alpha tcpdump-primary tcpdump-alpha" +export REQUIRE_PEERS=1 From 7bc220f34d571414aa242fb3b4e2197d102fc473 Mon Sep 17 00:00:00 2001 From: Josh Roose <54345520+joshRooz@users.noreply.github.com> Date: Wed, 31 Aug 2022 02:53:18 +1000 Subject: [PATCH 22/55] events compiled to JSON sentence structure (#13717) --- website/content/docs/enterprise/audit-logging.mdx | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/website/content/docs/enterprise/audit-logging.mdx b/website/content/docs/enterprise/audit-logging.mdx index a8d69fae2..93661d49c 100644 --- a/website/content/docs/enterprise/audit-logging.mdx +++ b/website/content/docs/enterprise/audit-logging.mdx @@ -17,7 +17,7 @@ description: >- With Consul Enterprise v1.8.0+, audit logging can be used to capture a clear and actionable log of authenticated events (both attempted and committed) that Consul -processes via its HTTP API. These events are compiled them into a JSON format for easy export +processes via its HTTP API. These events are then compiled into a JSON format for easy export and contain a timestamp, the operation performed, and the user who initiated the action. Audit logging enables security and compliance teams within an organization to get From 49453d4402bd75122f87c11798ccfb1c57362396 Mon Sep 17 00:00:00 2001 From: Jeff Boruszak <104028618+boruszak@users.noreply.github.com> Date: Tue, 10 May 2022 21:40:39 -0700 Subject: [PATCH 23/55] docs: Additional feedback from PR #12971 This commit incorporates additional feedback received related to PR #12971. --- website/content/api-docs/snapshot.mdx | 6 +++--- website/content/commands/snapshot/restore.mdx | 6 +++--- 2 files changed, 6 insertions(+), 6 deletions(-) diff --git a/website/content/api-docs/snapshot.mdx b/website/content/api-docs/snapshot.mdx index 81357464b..2f3c094ab 100644 --- a/website/content/api-docs/snapshot.mdx +++ b/website/content/api-docs/snapshot.mdx @@ -75,9 +75,9 @@ This endpoint restores a point-in-time snapshot of the Consul server state. Restores involve a potentially dangerous low-level Raft operation that is not designed to handle server failures during a restore. This operation is primarily -intended to be used when recovering from a disaster, restoring into a fresh -cluster of Consul servers running the same version as the cluster from where the -snapshot was taken. +intended to recover from a disaster. It restores your configuration into a fresh +cluster of Consul servers as long as your new cluster runs the same Consul +version as the cluster that originally took the snapshot. 
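As a concrete illustration of the restore flow described here, a minimal Go sketch using the Go API client's snapshot endpoint; the agent address and snapshot file name are placeholders.

```go
package main

import (
	"log"
	"os"

	"github.com/hashicorp/consul/api"
)

func main() {
	// Talks to the local agent (127.0.0.1:8500) unless CONSUL_HTTP_ADDR is set.
	client, err := api.NewClient(api.DefaultConfig())
	if err != nil {
		log.Fatal(err)
	}

	// "backup.snap" is a placeholder for a snapshot previously taken with
	// `consul snapshot save` or GET /v1/snapshot.
	f, err := os.Open("backup.snap")
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()

	// Streams the snapshot to PUT /v1/snapshot, the same endpoint used by
	// `consul snapshot restore`.
	if err := client.Snapshot().Restore(nil, f); err != nil {
		log.Fatal(err)
	}
	log.Println("snapshot restored")
}
```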
| Method | Path | Produces | | :----- | :---------- | ----------------------------- | diff --git a/website/content/commands/snapshot/restore.mdx b/website/content/commands/snapshot/restore.mdx index 2d7ec902e..8bbe50fe1 100644 --- a/website/content/commands/snapshot/restore.mdx +++ b/website/content/commands/snapshot/restore.mdx @@ -16,9 +16,9 @@ from the given file. Restores involve a potentially dangerous low-level Raft operation that is not designed to handle server failures during a restore. This command is primarily -intended to be used when recovering from a disaster, restoring into a fresh -cluster of Consul servers running the same version as the cluster from where the -snapshot was taken. +intended to recover from a disaster. It restores your configuration into a fresh +cluster of Consul servers as long as your new cluster runs the same Consul +version as the cluster that originally took the snapshot. The table below shows this command's [required ACLs](/api#authentication). Configuration of [blocking queries](/api-docs/features/blocking) and [agent caching](/api-docs/features/caching) From ca75f6c9368d87bba1f181e263a9109a8415eaae Mon Sep 17 00:00:00 2001 From: Mike Morris Date: Tue, 30 Aug 2022 15:44:06 -0400 Subject: [PATCH 24/55] ci: update backport-assistant to pick merge commit (#14408) --- .github/workflows/backport-assistant.yml | 3 ++- 1 file changed, 2 insertions(+), 1 deletion(-) diff --git a/.github/workflows/backport-assistant.yml b/.github/workflows/backport-assistant.yml index f6738815b..b68e41e61 100644 --- a/.github/workflows/backport-assistant.yml +++ b/.github/workflows/backport-assistant.yml @@ -16,7 +16,7 @@ jobs: backport: if: github.event.pull_request.merged runs-on: ubuntu-latest - container: hashicorpdev/backport-assistant:0.2.3 + container: hashicorpdev/backport-assistant:0.2.5 steps: - name: Run Backport Assistant for stable-website run: | @@ -24,6 +24,7 @@ jobs: env: BACKPORT_LABEL_REGEXP: "type/docs-(?Pcherrypick)" BACKPORT_TARGET_TEMPLATE: "stable-website" + BACKPORT_MERGE_COMMIT: true GITHUB_TOKEN: ${{ secrets.ELEVATED_GITHUB_TOKEN }} - name: Backport changes to latest release branch run: | From 2d1352b02eb1e46f583176b3b1dfbc3c82be29c9 Mon Sep 17 00:00:00 2001 From: David Yu Date: Tue, 30 Aug 2022 15:17:35 -0700 Subject: [PATCH 25/55] docs: re-organize service and node lookups for Consul Enterprise (#14389) * docs: re-organize service and node lookups for Consul Enterprise Co-authored-by: trujillo-adam <47586768+trujillo-adam@users.noreply.github.com> Co-authored-by: Jared Kirschner <85913323+jkirschner-hashicorp@users.noreply.github.com> --- website/content/docs/discovery/dns.mdx | 85 ++++++++++++------- .../docs/enterprise/admin-partitions.mdx | 2 +- 2 files changed, 54 insertions(+), 33 deletions(-) diff --git a/website/content/docs/discovery/dns.mdx b/website/content/docs/discovery/dns.mdx index b50e8deee..5643068ba 100644 --- a/website/content/docs/discovery/dns.mdx +++ b/website/content/docs/discovery/dns.mdx @@ -96,6 +96,23 @@ pairs according to [RFC1464](https://www.ietf.org/rfc/rfc1464.txt). Alternatively, the TXT record will only include the node's metadata value when the node's metadata key starts with `rfc1035-`. + +### Node Lookups for Consul Enterprise + +Consul nodes exist at the admin partition level within a datacenter. +By default, the partition and datacenter used in a [node lookup](#node-lookups) are +the partition and datacenter of the Consul agent that received the DNS query. 
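For example (all names here are placeholders), a partition-qualified node lookup following the query format given just below can be exercised against an agent's DNS port; the Go snippet is equivalent to `dig @127.0.0.1 -p 8600 node1.node.ap1.ap.dc1.dc.consul`.

```go
package main

import (
	"context"
	"fmt"
	"net"
	"time"
)

func main() {
	// Point the resolver at a Consul agent's DNS interface (port 8600 by default).
	// The node name, partition, and datacenter below are placeholders.
	r := &net.Resolver{
		PreferGo: true,
		Dial: func(ctx context.Context, network, _ string) (net.Conn, error) {
			d := net.Dialer{Timeout: 2 * time.Second}
			return d.DialContext(ctx, network, "127.0.0.1:8600")
		},
	}

	ips, err := r.LookupIPAddr(context.Background(), "node1.node.ap1.ap.dc1.dc.consul")
	if err != nil {
		fmt.Println("lookup failed:", err)
		return
	}
	for _, ip := range ips {
		fmt.Println(ip.String())
	}
}
```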
+ +Use the following query format to specify a partition for a node lookup: +```text +[.].node..ap..dc. +``` + +Consul server agents are in the `default` partition. +If DNS queries are addressed to Consul server agents, +node lookups to non-`default` partitions must explicitly specify +the partition of the target node. + ## Service Lookups A service lookup is used to query for service providers. Service queries support @@ -334,6 +351,28 @@ $ echo -n "20010db800010002cafe000000001337" | perl -ne 'printf join(":", unpack +### Service Lookups for Consul Enterprise + +By default, all service lookups use the `default` namespace +within the partition and datacenter of the Consul agent that received the DNS query. + +Use the following query format to specify a namespace, partition, and/or datacenter +for all service lookup types except `.query`, +including `.service`, `.connect`, `.virtual`, and `.ingress`. +At least two of those three fields (`namespace`, `partition`, `datacenter`) +must be specified. +```text +[.].service..ns..ap..dc. +``` + +Consul server agents are in the `default` partition. +If DNS queries are addressed to Consul server agents, +service lookups to non-`default` partitions must explicitly specify +the partition of the target service. + +To lookup services imported from a cluster peer, +use a [service virtual IP lookups for Consul Enterprise](#service-virtual-ip-lookups-for-consul-enterprise) instead. + ### Prepared Query Lookups The format of a prepared query lookup is: @@ -398,7 +437,21 @@ of a service imported from that peer. The virtual IP is also added to the service's [Tagged Addresses](/docs/discovery/services#tagged-addresses) under the `consul-virtual` tag. + +#### Service Virtual IP Lookups for Consul Enterprise +By default, a service virtual IP lookup uses the `default` namespace +within the partition and datacenter of the Consul agent that received the DNS query. + +To lookup services imported from a cluster peered partition or open-source datacenter, +specify the namespace and peer name in the lookup: +```text +.virtual[.].. +``` + +To lookup services not imported from a cluster peer, +refer to [service lookups for Consul Enterprise](#service-lookups-for-consul-enterprise) instead. + ### Ingress Service Lookups To find ingress-enabled services: @@ -480,38 +533,6 @@ using the [`advertise-wan`](/docs/agent/config/cli-flags#_advertise-wan) and [`translate_wan_addrs`](/docs/agent/config/config-files#translate_wan_addrs) configuration options. -## Namespaced/Partitioned Services and Nodes - -Consul Enterprise supports resolving namespaced and partitioned services via DNS. -The DNS server in Consul Enterprise can resolve services assigned to namespaces and partitions. -The DNS server can also resolve nodes assigned to partitions. -To maintain backwards compatibility existing queries can be used and these will -resolve services within the `default` namespace and partition. However, for resolving -services from other namespaces or partitions the following form can be used: - -```text -[.].service..ns..ap..dc. -``` - -This sequence is the canonical naming convention of a Consul Enterprise service. At least two of the following -fields must be present: -* `namespace` -* `partition` -* `datacenter` - -For imported lookups, only the namespace and peer need to be specified as the partition can be inferred from the peering: - -```text -.virtual[.].. -``` - -For node lookups, only the partition and datacenter need to be specified as nodes cannot be -namespaced. 
-
-```text
-[.].node..ap..dc.
-```
-
 ## DNS with ACLs
 
 In order to use the DNS interface when
diff --git a/website/content/docs/enterprise/admin-partitions.mdx b/website/content/docs/enterprise/admin-partitions.mdx
index 089aac51d..da33eff19 100644
--- a/website/content/docs/enterprise/admin-partitions.mdx
+++ b/website/content/docs/enterprise/admin-partitions.mdx
@@ -58,7 +58,7 @@ The partition in which [`proxy-defaults`](/docs/connect/config-entries/proxy-def
 
 ### Cross-partition Networking
 
-You can configure services to be discoverable by downstream services in any partition within the datacenter. Specify the upstream services that you want to be available for discovery by configuring the `exported-services` configuration entry in the partition where the services are registered. Refer to the [`exported-services` documentation](/docs/connect/config-entries/exported-services) for details. Additionally, the `upstreams` configuration for proxies in the source partition must specify the name of the destination partition so that listeners can be created. Refer to the [Upstream Configuration Reference](/docs/connect/registration/service-registration#upstream-configuration-reference) for additional information.
+You can configure services to be discoverable by downstream services in any partition within the datacenter. Specify the upstream services that you want to be available for discovery by configuring the `exported-services` configuration entry in the partition where the services are registered. Refer to the [`exported-services` documentation](/docs/connect/config-entries/exported-services) for details. Additionally, the requests made by downstream applications must have the correct DNS name for the Virtual IP Service lookup to occur. Service Virtual IP lookups allow for communications across Admin Partitions when using Transparent Proxy. Refer to the [Service Virtual IP Lookups for Consul Enterprise](/docs/discovery/dns#service-virtual-ip-lookups-for-consul-enterprise) for additional information.
 
 ## Requirements

From 58e44db5e2e0ff1d81c8c4076387b2e3101256be Mon Sep 17 00:00:00 2001
From: Thomas Kula
Date: Sat, 28 May 2022 15:22:01 -0400
Subject: [PATCH 26/55] Typo fix in service-splitter.mdx

---
 .../content/docs/connect/config-entries/service-splitter.mdx | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/website/content/docs/connect/config-entries/service-splitter.mdx b/website/content/docs/connect/config-entries/service-splitter.mdx
index bc5d709ce..609cf818e 100644
--- a/website/content/docs/connect/config-entries/service-splitter.mdx
+++ b/website/content/docs/connect/config-entries/service-splitter.mdx
@@ -302,7 +302,7 @@ spec:
     name: 'weight',
     type: 'float32: 0',
     description:
-      'A value between 0 and 100 reflecting what portion of traffic should be directed to this split. The smallest representable eight is 1/10000 or .01%',
+      'A value between 0 and 100 reflecting what portion of traffic should be directed to this split. The smallest representable weight is 1/10000 or .01%',
   },
   {
     name: 'Service',

From 851c280dfce25b2d5f5d7def058810ac7c20a84b Mon Sep 17 00:00:00 2001
From: "Chris S.
Kim" Date: Wed, 31 Aug 2022 12:11:15 -0400 Subject: [PATCH 27/55] Fix code example --- .../docs/connect/cluster-peering/create-manage-peering.mdx | 7 ++++--- 1 file changed, 4 insertions(+), 3 deletions(-) diff --git a/website/content/docs/connect/cluster-peering/create-manage-peering.mdx b/website/content/docs/connect/cluster-peering/create-manage-peering.mdx index 009c60f40..ee0a69a94 100644 --- a/website/content/docs/connect/cluster-peering/create-manage-peering.mdx +++ b/website/content/docs/connect/cluster-peering/create-manage-peering.mdx @@ -108,7 +108,7 @@ First, create a configuration entry and specify the `Kind` as `"exported-service ```hcl Kind = "exported-services" - +Name = "default" Services = [ { ## The name and namespace of the service to export. @@ -120,10 +120,11 @@ Services = [ { ## The peer name to reference in config is the one set ## during the peering process. - Peer = "cluster-02" + PeerName = "cluster-02" } - } ] + } +] ``` From 13aa1bcceb83b7daf82cb09c7bebac730c639717 Mon Sep 17 00:00:00 2001 From: Jared Kirschner <85913323+jkirschner-hashicorp@users.noreply.github.com> Date: Wed, 31 Aug 2022 13:58:23 -0400 Subject: [PATCH 28/55] docs: node lookups don't support filtering on tag --- website/content/docs/discovery/dns.mdx | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/website/content/docs/discovery/dns.mdx b/website/content/docs/discovery/dns.mdx index 5643068ba..f052b0e27 100644 --- a/website/content/docs/discovery/dns.mdx +++ b/website/content/docs/discovery/dns.mdx @@ -105,7 +105,7 @@ the partition and datacenter of the Consul agent that received the DNS query. Use the following query format to specify a partition for a node lookup: ```text -[.].node..ap..dc. +.node..ap..dc. ``` Consul server agents are in the `default` partition. From c5370d52e965e74984addc9e39c8cf286319e7e8 Mon Sep 17 00:00:00 2001 From: Kyle Havlovitz Date: Wed, 31 Aug 2022 11:20:29 -0700 Subject: [PATCH 29/55] Prune old expired intermediate certs when appending a new one --- agent/consul/leader_connect_ca.go | 22 ++++++++++++++++++++++ agent/consul/leader_connect_ca_test.go | 1 - agent/consul/leader_connect_test.go | 24 ++++++++++++++++++++++++ 3 files changed, 46 insertions(+), 1 deletion(-) diff --git a/agent/consul/leader_connect_ca.go b/agent/consul/leader_connect_ca.go index 5e2268164..d2cd02113 100644 --- a/agent/consul/leader_connect_ca.go +++ b/agent/consul/leader_connect_ca.go @@ -1100,6 +1100,28 @@ func setLeafSigningCert(caRoot *structs.CARoot, pem string) error { caRoot.IntermediateCerts = append(caRoot.IntermediateCerts, pem) caRoot.SigningKeyID = connect.EncodeSigningKeyID(cert.SubjectKeyId) + return pruneExpiredIntermediates(caRoot) +} + +// pruneExpiredIntermediates removes expired intermediate certificates +// from the given CARoot. +func pruneExpiredIntermediates(caRoot *structs.CARoot) error { + var newIntermediates []string + now := time.Now() + for i, intermediatePEM := range caRoot.IntermediateCerts { + cert, err := connect.ParseCert(intermediatePEM) + if err != nil { + return fmt.Errorf("error parsing leaf signing cert: %w", err) + } + + // Only keep the intermediate cert if it's still valid, or if it's the most + // recently added (and thus the active signing cert). 
+        if cert.NotAfter.After(now) || i == len(caRoot.IntermediateCerts)-1 {
+            newIntermediates = append(newIntermediates, intermediatePEM)
+        }
+    }
+
+    caRoot.IntermediateCerts = newIntermediates
     return nil
 }
diff --git a/agent/consul/leader_connect_ca_test.go b/agent/consul/leader_connect_ca_test.go
index 37756eb20..91095be8e 100644
--- a/agent/consul/leader_connect_ca_test.go
+++ b/agent/consul/leader_connect_ca_test.go
@@ -435,7 +435,6 @@ func TestCAManager_SignCertificate_WithExpiredCert(t *testing.T) {
     errorMsg string
   }{
     {"intermediate valid", time.Now().AddDate(0, 0, -1), time.Now().AddDate(0, 0, 2), time.Now().AddDate(0, 0, -1), time.Now().AddDate(0, 0, 2), false, ""},
-    {"intermediate expired", time.Now().AddDate(0, 0, -1), time.Now().AddDate(0, 0, 2), time.Now().AddDate(-2, 0, 0), time.Now().AddDate(0, 0, -1), true, "intermediate expired: certificate expired, expiration date"},
     {"root expired", time.Now().AddDate(-2, 0, 0), time.Now().AddDate(0, 0, -1), time.Now().AddDate(0, 0, -1), time.Now().AddDate(0, 0, 2), true, "root expired: certificate expired, expiration date"},
     // a cert that is not yet valid is ok, assume it will be valid soon enough
     {"intermediate in the future", time.Now().AddDate(0, 0, -1), time.Now().AddDate(0, 0, 2), time.Now().AddDate(0, 0, 1), time.Now().AddDate(0, 0, 2), false, ""},
diff --git a/agent/consul/leader_connect_test.go b/agent/consul/leader_connect_test.go
index d9b386386..c8b361b03 100644
--- a/agent/consul/leader_connect_test.go
+++ b/agent/consul/leader_connect_test.go
@@ -401,6 +401,18 @@ func TestCAManager_RenewIntermediate_Vault_Primary(t *testing.T) {
     err = msgpackrpc.CallWithCodec(codec, "ConnectCA.Sign", &req, &cert)
     require.NoError(t, err)
     verifyLeafCert(t, activeRoot, cert.CertPEM)
+
+    // Wait for the primary's old intermediate to be pruned after expiring.
+    oldIntermediate := activeRoot.IntermediateCerts[0]
+    retry.Run(t, func(r *retry.R) {
+        store := s1.caManager.delegate.State()
+        _, storedRoot, err := store.CARootActive(nil)
+        r.Check(err)
+
+        if storedRoot.IntermediateCerts[0] == oldIntermediate {
+            r.Fatal("old intermediate should be gone")
+        }
+    })
 }
 
 func patchIntermediateCertRenewInterval(t *testing.T) {
@@ -516,6 +528,18 @@ func TestCAManager_RenewIntermediate_Secondary(t *testing.T) {
     err = msgpackrpc.CallWithCodec(codec, "ConnectCA.Sign", &req, &cert)
     require.NoError(t, err)
     verifyLeafCert(t, activeRoot, cert.CertPEM)
+
+    // Wait for dc2's old intermediate to be pruned after expiring.
+    oldIntermediate := activeRoot.IntermediateCerts[0]
+    retry.Run(t, func(r *retry.R) {
+        store := s2.caManager.delegate.State()
+        _, storedRoot, err := store.CARootActive(nil)
+        r.Check(err)
+
+        if storedRoot.IntermediateCerts[0] == oldIntermediate {
+            r.Fatal("old intermediate should be gone")
+        }
+    })
 }
 
 func TestConnectCA_ConfigurationSet_RootRotation_Secondary(t *testing.T) {

From 66b05b108190487478a3b2136146c93ae316fd03 Mon Sep 17 00:00:00 2001
From: Kyle Havlovitz
Date: Wed, 31 Aug 2022 11:43:21 -0700
Subject: [PATCH 30/55] Add changelog note

---
 .changelog/14429.txt | 3 +++
 1 file changed, 3 insertions(+)
 create mode 100644 .changelog/14429.txt

diff --git a/.changelog/14429.txt b/.changelog/14429.txt
new file mode 100644
index 000000000..4387d1ed4
--- /dev/null
+++ b/.changelog/14429.txt
@@ -0,0 +1,3 @@
+```release-note:bug
+connect: Fixed an issue where intermediate certificates could build up in the root CA because they were never being pruned after expiring.
+```
\ No newline at end of file

From ad301924990b6e3edda3d15078f9cd78df5efe25 Mon Sep 17 00:00:00 2001
From: malizz
Date: Wed, 31 Aug 2022 13:03:38 -0700
Subject: [PATCH 31/55] validate args before deleting proxy defaults (#14290)

* validate args before deleting proxy defaults
* add changelog
* validate name when normalizing proxy defaults
* add test for proxyConfigEntry
* add comments
---
 .changelog/14290.txt               |  3 +++
 agent/structs/config_entry.go      | 10 +++++++++-
 agent/structs/config_entry_test.go | 21 +++++++++++++++++++++
 3 files changed, 33 insertions(+), 1 deletion(-)
 create mode 100644 .changelog/14290.txt

diff --git a/.changelog/14290.txt b/.changelog/14290.txt
new file mode 100644
index 000000000..719bd67b3
--- /dev/null
+++ b/.changelog/14290.txt
@@ -0,0 +1,3 @@
+```release-note:bugfix
+envoy: validate name before deleting proxy default configurations.
+```
\ No newline at end of file
diff --git a/agent/structs/config_entry.go b/agent/structs/config_entry.go
index 8b3b0a8d2..88c523a15 100644
--- a/agent/structs/config_entry.go
+++ b/agent/structs/config_entry.go
@@ -3,12 +3,13 @@ package structs
 import (
     "errors"
     "fmt"
-    "github.com/miekg/dns"
     "net"
     "strconv"
     "strings"
     "time"
 
+    "github.com/miekg/dns"
+
     "github.com/hashicorp/go-multierror"
     "github.com/mitchellh/hashstructure"
     "github.com/mitchellh/mapstructure"
@@ -362,6 +363,13 @@ func (e *ProxyConfigEntry) Normalize() error {
     }
 
     e.Kind = ProxyDefaults
+
+    // proxy default config only accepts global configs
+    // this check is replicated in normalize() and validate(),
+    // since validate is not called by all the endpoints (e.g., delete)
+    if e.Name != "" && e.Name != ProxyConfigGlobal {
+        return fmt.Errorf("invalid name (%q), only %q is supported", e.Name, ProxyConfigGlobal)
+    }
     e.Name = ProxyConfigGlobal
 
     e.EnterpriseMeta.Normalize()
diff --git a/agent/structs/config_entry_test.go b/agent/structs/config_entry_test.go
index e462f6aa7..a9e113f21 100644
--- a/agent/structs/config_entry_test.go
+++ b/agent/structs/config_entry_test.go
@@ -2944,6 +2944,27 @@ func TestParseUpstreamConfig(t *testing.T) {
     }
 }
 
+func TestProxyConfigEntry(t *testing.T) {
+    cases := map[string]configEntryTestcase{
+        "proxy config name provided is not global": {
+            entry: &ProxyConfigEntry{
+                Name: "foo",
+            },
+            normalizeErr: `invalid name ("foo"), only "global" is supported`,
+        },
+        "proxy config has no name": {
+            entry: &ProxyConfigEntry{
+                Name: "",
+            },
+            expected: &ProxyConfigEntry{
+                Name: ProxyConfigGlobal,
+                Kind: ProxyDefaults,
+            },
+        },
+    }
+    testConfigEntryNormalizeAndValidate(t, cases)
+}
+
 func requireContainsLower(t *testing.T, haystack, needle string) {
     t.Helper()
     require.Contains(t, strings.ToLower(haystack), strings.ToLower(needle))

From 095934116e90ddd45ddc0748a04554fff4bd169b Mon Sep 17 00:00:00 2001
From: Luke Kysow <1034429+lkysow@users.noreply.github.com>
Date: Wed, 31 Aug 2022 13:06:35 -0700
Subject: [PATCH 32/55] Suppress "unbound variable" error.
(#14424) Without this change, you'd see this error: ``` ./run-tests.sh: line 49: LAMBDA_TESTS_ENABLED: unbound variable ./run-tests.sh: line 49: LAMBDA_TESTS_ENABLED: unbound variable ``` --- test/integration/connect/envoy/run-tests.sh | 3 ++- 1 file changed, 2 insertions(+), 1 deletion(-) diff --git a/test/integration/connect/envoy/run-tests.sh b/test/integration/connect/envoy/run-tests.sh index f0e6b165c..1de8ed3e5 100755 --- a/test/integration/connect/envoy/run-tests.sh +++ b/test/integration/connect/envoy/run-tests.sh @@ -46,7 +46,8 @@ function network_snippet { } function aws_snippet { - if [[ ! -z "$LAMBDA_TESTS_ENABLED" ]]; then + LAMBDA_TESTS_ENABLED=${LAMBDA_TESTS_ENABLED:-false} + if [ "$LAMBDA_TESTS_ENABLED" != false ]; then local snippet="" # The Lambda integration cases assume that a Lambda function exists in $AWS_REGION with an ARN of $AWS_LAMBDA_ARN. From 2110f1d0ffdd502f90e712d2adfd7e617e46449d Mon Sep 17 00:00:00 2001 From: Jorge Marey Date: Wed, 31 Aug 2022 23:14:25 +0200 Subject: [PATCH 33/55] Fix typo on documentation --- website/content/docs/connect/proxies/envoy.mdx | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/website/content/docs/connect/proxies/envoy.mdx b/website/content/docs/connect/proxies/envoy.mdx index 020a0510f..812adff17 100644 --- a/website/content/docs/connect/proxies/envoy.mdx +++ b/website/content/docs/connect/proxies/envoy.mdx @@ -761,7 +761,7 @@ definition](/docs/connect/registration/service-registration) or - `envoy_listener_tracing_json` - Specifies a [tracing configuration](https://www.envoyproxy.io/docs/envoy/latest/api-v3/extensions/filters/network/http_connection_manager/v3/http_connection_manager.proto#envoy-v3-api-msg-extensions-filters-network-http-connection-manager-v3-httpconnectionmanager-tracing) - to be inserter in the public and upstreams listeners of the proxy. + to be inserted in the proxy's public and upstreams listeners. From e70ba97e45c55aa7f265f423bddd378dfc012030 Mon Sep 17 00:00:00 2001 From: "Chris S. Kim" Date: Thu, 1 Sep 2022 10:32:59 -0400 Subject: [PATCH 34/55] Add Internal.ServiceDump support for querying by PeerName --- agent/consul/internal_endpoint.go | 113 ++++++++++++++++++------------ agent/ui_endpoint.go | 4 +- 2 files changed, 71 insertions(+), 46 deletions(-) diff --git a/agent/consul/internal_endpoint.go b/agent/consul/internal_endpoint.go index 28d7f365e..534513c8a 100644 --- a/agent/consul/internal_endpoint.go +++ b/agent/consul/internal_endpoint.go @@ -153,64 +153,87 @@ func (m *Internal) ServiceDump(args *structs.ServiceDumpRequest, reply *structs. 
&args.QueryOptions, &reply.QueryMeta, func(ws memdb.WatchSet, state *state.Store) error { - // we don't support calling this endpoint for a specific peer - if args.PeerName != "" { - return fmt.Errorf("this endpoint does not support specifying a peer: %q", args.PeerName) - } - // this maxIndex will be the max of the ServiceDump calls and the PeeringList call var maxIndex uint64 - // get a local dump for services - index, nodes, err := state.ServiceDump(ws, args.ServiceKind, args.UseServiceKind, &args.EnterpriseMeta, structs.DefaultPeerKeyword) - if err != nil { - return fmt.Errorf("could not get a service dump for local nodes: %w", err) - } - - if index > maxIndex { - maxIndex = index - } - reply.Nodes = nodes - - // get a list of all peerings - index, listedPeerings, err := state.PeeringList(ws, args.EnterpriseMeta) - if err != nil { - return fmt.Errorf("could not list peers for service dump %w", err) - } - - if index > maxIndex { - maxIndex = index - } - - for _, p := range listedPeerings { - index, importedNodes, err := state.ServiceDump(ws, args.ServiceKind, args.UseServiceKind, &args.EnterpriseMeta, p.Name) + // If PeerName is not empty, we return only the imported services from that peer + if args.PeerName != "" { + // get a local dump for services + index, nodes, err := state.ServiceDump(ws, + args.ServiceKind, + args.UseServiceKind, + // Note we fetch imported services with wildcard namespace because imported services' namespaces + // are in a different locality; regardless of our local namespace, we return all imported services + // of the local partition. + args.EnterpriseMeta.WithWildcardNamespace(), + args.PeerName) if err != nil { - return fmt.Errorf("could not get a service dump for peer %q: %w", p.Name, err) + return fmt.Errorf("could not get a service dump for peer %q: %w", args.PeerName, err) } if index > maxIndex { maxIndex = index } - reply.ImportedNodes = append(reply.ImportedNodes, importedNodes...) - } + reply.Index = maxIndex + reply.ImportedNodes = nodes - // Get, store, and filter gateway services - idx, gatewayServices, err := state.DumpGatewayServices(ws) - if err != nil { - return err - } - reply.Gateways = gatewayServices + } else { + // otherwise return both local and all imported services - if idx > maxIndex { - maxIndex = idx - } - reply.Index = maxIndex + // get a local dump for services + index, nodes, err := state.ServiceDump(ws, args.ServiceKind, args.UseServiceKind, &args.EnterpriseMeta, structs.DefaultPeerKeyword) + if err != nil { + return fmt.Errorf("could not get a service dump for local nodes: %w", err) + } - raw, err := filter.Execute(reply.Nodes) - if err != nil { - return fmt.Errorf("could not filter local service dump: %w", err) + if index > maxIndex { + maxIndex = index + } + reply.Nodes = nodes + + // get a list of all peerings + index, listedPeerings, err := state.PeeringList(ws, args.EnterpriseMeta) + if err != nil { + return fmt.Errorf("could not list peers for service dump %w", err) + } + + if index > maxIndex { + maxIndex = index + } + + for _, p := range listedPeerings { + // Note we fetch imported services with wildcard namespace because imported services' namespaces + // are in a different locality; regardless of our local namespace, we return all imported services + // of the local partition. 
+ index, importedNodes, err := state.ServiceDump(ws, args.ServiceKind, args.UseServiceKind, args.EnterpriseMeta.WithWildcardNamespace(), p.Name) + if err != nil { + return fmt.Errorf("could not get a service dump for peer %q: %w", p.Name, err) + } + + if index > maxIndex { + maxIndex = index + } + reply.ImportedNodes = append(reply.ImportedNodes, importedNodes...) + } + + // Get, store, and filter gateway services + idx, gatewayServices, err := state.DumpGatewayServices(ws) + if err != nil { + return err + } + reply.Gateways = gatewayServices + + if idx > maxIndex { + maxIndex = idx + } + reply.Index = maxIndex + + raw, err := filter.Execute(reply.Nodes) + if err != nil { + return fmt.Errorf("could not filter local service dump: %w", err) + } + reply.Nodes = raw.(structs.CheckServiceNodes) } - reply.Nodes = raw.(structs.CheckServiceNodes) importedRaw, err := filter.Execute(reply.ImportedNodes) if err != nil { diff --git a/agent/ui_endpoint.go b/agent/ui_endpoint.go index 2f74d8e59..df6f359de 100644 --- a/agent/ui_endpoint.go +++ b/agent/ui_endpoint.go @@ -211,7 +211,9 @@ func (s *HTTPHandlers) UIServices(resp http.ResponseWriter, req *http.Request) ( if done := s.parse(resp, req, &args.Datacenter, &args.QueryOptions); done { return nil, nil } - + if peer := req.URL.Query().Get("peer"); peer != "" { + args.PeerName = peer + } if err := s.parseEntMeta(req, &args.EnterpriseMeta); err != nil { return nil, err } From 7547f7535f2b02b3d492b6ee2e9361a518b54546 Mon Sep 17 00:00:00 2001 From: Michael Klein Date: Thu, 1 Sep 2022 17:37:37 +0200 Subject: [PATCH 35/55] ui: chore upgrade to ember-qunit v5 (#14430) * Refactor remaining `moduleFor`-tests `moduleFor*` will be removed from ember-qunit v5 * Upgrade ember-qunit to v5 * Update how we use ember-sinon-qunit With ember-qunit v5 we need to use ember-sinon-qunit differently. * Fix submit-blank test We can't click on disabled buttons with new test-helpers. We need to adapt the test accordingly. * Make sure we await fill-in with form yaml step We need to await `fill-in`. This changes the reducer function in the step to create a proper await chain. * Fix show-routing test We need to await a tick before visiting again. 
* Remove redundant `wait one tick`-step * remove unneeded "next Tick" promise from form step * Increase timeout show-routing feature * Comment on pause hack for show-routing test --- ui/packages/consul-ui/package.json | 8 +- .../dc/services/show-routing.feature | 7 + .../tests/acceptance/submit-blank.feature | 2 +- ui/packages/consul-ui/tests/index.html | 7 + .../components/data-source-test.js | 16 +- .../services/repository/auth-method-test.js | 170 ++++----- .../services/repository/coordinate-test.js | 124 ++++--- .../services/repository/dc-test.js | 57 +-- .../repository/discovery-chain-test.js | 76 ++-- .../services/repository/kv-test.js | 171 +++++---- .../services/repository/node-test.js | 113 +++--- .../services/repository/policy-test.js | 179 ++++----- .../services/repository/role-test.js | 185 +++++----- .../services/repository/service-test.js | 129 +++---- .../services/repository/session-test.js | 181 ++++----- .../services/repository/token-test.js | 349 +++++++++--------- .../services/repository/topology-test.js | 78 ++-- .../integration/services/routlet-test.js | 63 ++-- .../utils/dom/event-source/callable-test.js | 12 +- .../tests/steps/interactions/form.js | 15 +- ui/packages/consul-ui/tests/test-helper.js | 7 + .../tests/unit/adapters/application-test.js | 3 +- .../unit/mixins/with-blocking-actions-test.js | 14 +- .../consul-ui/tests/unit/routes/dc-test.js | 3 +- .../unit/serializers/application-test.js | 3 +- .../tests/unit/serializers/kv-test.js | 6 +- .../consul-ui/tests/unit/utils/ascend-test.js | 3 +- .../consul-ui/tests/unit/utils/atob-test.js | 3 +- .../consul-ui/tests/unit/utils/btoa-test.js | 3 +- .../tests/unit/utils/dom/closest-test.js | 6 +- .../unit/utils/dom/create-listeners-test.js | 22 +- .../utils/dom/event-source/blocking-test.js | 8 +- .../unit/utils/dom/event-source/cache-test.js | 20 +- .../utils/dom/event-source/callable-test.js | 10 +- .../utils/dom/event-source/openable-test.js | 6 +- .../tests/unit/utils/http/create-url-test.js | 3 +- .../tests/unit/utils/isFolder-test.js | 3 +- .../tests/unit/utils/keyToArray-test.js | 3 +- .../tests/unit/utils/left-trim-test.js | 3 +- .../tests/unit/utils/promisedTimeout-test.js | 3 +- .../tests/unit/utils/right-trim-test.js | 3 +- .../tests/unit/utils/routing/walk-test.js | 6 +- .../tests/unit/utils/ucfirst-test.js | 3 +- ui/yarn.lock | 237 +++++++----- 44 files changed, 1214 insertions(+), 1109 deletions(-) diff --git a/ui/packages/consul-ui/package.json b/ui/packages/consul-ui/package.json index 7960af1a7..64e731b32 100644 --- a/ui/packages/consul-ui/package.json +++ b/ui/packages/consul-ui/package.json @@ -60,6 +60,7 @@ "@docfy/ember": "^0.4.1", "@ember/optional-features": "^1.3.0", "@ember/render-modifiers": "^1.0.2", + "@ember/test-helpers": "^2.1.4", "@glimmer/component": "^1.0.0", "@glimmer/tracking": "^1.0.0", "@hashicorp/ember-cli-api-double": "^3.1.0", @@ -135,7 +136,7 @@ "ember-page-title": "^6.2.1", "ember-power-select": "^4.0.5", "ember-power-select-with-create": "^0.8.0", - "ember-qunit": "^4.6.0", + "ember-qunit": "^5.1.1", "ember-ref-modifier": "^1.0.0", "ember-render-helpers": "^0.2.0", "ember-resolver": "^8.0.0", @@ -166,7 +167,7 @@ "pretender": "^3.2.0", "prettier": "^1.10.2", "pretty-ms": "^7.0.1", - "qunit-dom": "^1.0.0", + "qunit-dom": "^1.6.0", "react-is": "^17.0.1", "refractor": "^3.5.0", "remark-autolink-headings": "^6.0.1", @@ -177,7 +178,8 @@ "tippy.js": "^6.2.7", "torii": "^0.10.1", "unist-util-visit": "^2.0.3", - "wayfarer": "^7.0.1" + "wayfarer": "^7.0.1", + "qunit": "^2.13.0" }, 
"engines": { "node": ">=10 <=14" diff --git a/ui/packages/consul-ui/tests/acceptance/dc/services/show-routing.feature b/ui/packages/consul-ui/tests/acceptance/dc/services/show-routing.feature index 8befb868f..3cbf392f7 100644 --- a/ui/packages/consul-ui/tests/acceptance/dc/services/show-routing.feature +++ b/ui/packages/consul-ui/tests/acceptance/dc/services/show-routing.feature @@ -52,6 +52,13 @@ Feature: dc / services / show-routing: Show Routing for Service service: service-1 --- And I see routing on the tabs + # something weird is going on with this test + # without waiting we issue a url reload that + # will make the test timeout. + # waiting will "fix" this - we should look into + # the underlying reason for this soon. This is + # only a quick-fix to land ember-qunit v5. + And pause for 1000 And I visit the service page for yaml --- dc: dc1 diff --git a/ui/packages/consul-ui/tests/acceptance/submit-blank.feature b/ui/packages/consul-ui/tests/acceptance/submit-blank.feature index 906281012..f18cdbbc3 100644 --- a/ui/packages/consul-ui/tests/acceptance/submit-blank.feature +++ b/ui/packages/consul-ui/tests/acceptance/submit-blank.feature @@ -10,7 +10,7 @@ Feature: submit-blank dc: datacenter --- Then the url should be /datacenter/[Slug]/create - And I submit + Then I don't see submitIsEnabled Then the url should be /datacenter/[Slug]/create Where: -------------------------- diff --git a/ui/packages/consul-ui/tests/index.html b/ui/packages/consul-ui/tests/index.html index f841d34b9..4ee572ab4 100644 --- a/ui/packages/consul-ui/tests/index.html +++ b/ui/packages/consul-ui/tests/index.html @@ -16,6 +16,13 @@ {{content-for "body"}} {{content-for "test-body"}} +
+
+
+
+
+
+ {{content-for "body-footer"}} diff --git a/ui/packages/consul-ui/tests/integration/components/data-source-test.js b/ui/packages/consul-ui/tests/integration/components/data-source-test.js index 10702240a..6c343d617 100644 --- a/ui/packages/consul-ui/tests/integration/components/data-source-test.js +++ b/ui/packages/consul-ui/tests/integration/components/data-source-test.js @@ -1,13 +1,13 @@ -import { module } from 'qunit'; +import { module, test } from 'qunit'; import { setupRenderingTest } from 'ember-qunit'; import { clearRender, render, waitUntil } from '@ember/test-helpers'; import hbs from 'htmlbars-inline-precompile'; -import test from 'ember-sinon-qunit/test-support/test'; import Service, { inject as service } from '@ember/service'; import DataSourceComponent from 'consul-ui/components/data-source/index'; import { BlockingEventSource as RealEventSource } from 'consul-ui/utils/dom/event-source'; +import sinon from 'sinon'; const createFakeBlockingEventSource = function() { const EventSource = function(cb) { @@ -39,10 +39,10 @@ module('Integration | Component | data-source', function(hooks) { // Set any properties with this.set('myProperty', 'value'); // Handle any actions with this.set('myAction', function(val) { ... }); assert.expect(9); - const close = this.stub(); - const open = this.stub(); - const addEventListener = this.stub(); - const removeEventListener = this.stub(); + const close = sinon.stub(); + const open = sinon.stub(); + const addEventListener = sinon.stub(); + const removeEventListener = sinon.stub(); let count = 0; const fakeService = class extends Service { close = close; @@ -98,8 +98,8 @@ module('Integration | Component | data-source', function(hooks) { }); test('error actions are triggered when errors are dispatched', async function(assert) { const source = new RealEventSource(); - const error = this.stub(); - const close = this.stub(); + const error = sinon.stub(); + const close = sinon.stub(); const fakeService = class extends Service { close = close; open(uri, obj) { diff --git a/ui/packages/consul-ui/tests/integration/services/repository/auth-method-test.js b/ui/packages/consul-ui/tests/integration/services/repository/auth-method-test.js index ce9ab231c..93de5150f 100644 --- a/ui/packages/consul-ui/tests/integration/services/repository/auth-method-test.js +++ b/ui/packages/consul-ui/tests/integration/services/repository/auth-method-test.js @@ -1,92 +1,94 @@ -import { moduleFor, test } from 'ember-qunit'; +import { setupTest } from 'ember-qunit'; import repo from 'consul-ui/tests/helpers/repo'; -import { skip } from 'qunit'; +import { module, skip, test } from 'qunit'; -const NAME = 'auth-method'; -moduleFor(`service:repository/${NAME}`, `Integration | Service | ${NAME}`, { - // Specify the other units that are required for this test. - integration: true, -}); -const dc = 'dc-1'; -const id = 'auth-method-name'; -const undefinedNspace = 'default'; -const undefinedPartition = 'default'; -const partition = 'default'; -[undefinedNspace, 'team-1', undefined].forEach(nspace => { - test(`findAllByDatacenter returns the correct data for list endpoint when nspace is ${nspace}`, function(assert) { - return repo( - 'auth-method', - 'findAllByDatacenter', - this.subject(), - function retrieveStub(stub) { - return stub( - `/v1/acl/auth-methods?dc=${dc}${typeof nspace !== 'undefined' ? `&ns=${nspace}` : ``}${ - typeof partition !== 'undefined' ? 
`&partition=${partition}` : `` - }`, - { - CONSUL_AUTH_METHOD_COUNT: '3', - } - ); - }, - function performTest(service) { - return service.findAllByDatacenter({ - dc: dc, - nspace: nspace || undefinedNspace, - partition: partition || undefinedPartition, - }); - }, - function performAssertion(actual, expected) { - assert.deepEqual( - actual, - expected(function(payload) { - return payload.map(function(item) { +module(`Integration | Service | auth-method`, function(hooks) { + setupTest(hooks); + const dc = 'dc-1'; + const id = 'auth-method-name'; + const undefinedNspace = 'default'; + const undefinedPartition = 'default'; + const partition = 'default'; + [undefinedNspace, 'team-1', undefined].forEach(nspace => { + test(`findAllByDatacenter returns the correct data for list endpoint when nspace is ${nspace}`, function(assert) { + const subject = this.owner.lookup('service:repository/auth-method'); + + return repo( + 'auth-method', + 'findAllByDatacenter', + subject, + function retrieveStub(stub) { + return stub( + `/v1/acl/auth-methods?dc=${dc}${typeof nspace !== 'undefined' ? `&ns=${nspace}` : ``}${ + typeof partition !== 'undefined' ? `&partition=${partition}` : `` + }`, + { + CONSUL_AUTH_METHOD_COUNT: '3', + } + ); + }, + function performTest(service) { + return service.findAllByDatacenter({ + dc: dc, + nspace: nspace || undefinedNspace, + partition: partition || undefinedPartition, + }); + }, + function performAssertion(actual, expected) { + assert.deepEqual( + actual, + expected(function(payload) { + return payload.map(function(item) { + return Object.assign({}, item, { + Datacenter: dc, + Namespace: item.Namespace || undefinedNspace, + Partition: item.Partition || undefinedPartition, + uid: `["${item.Partition || undefinedPartition}","${item.Namespace || + undefinedNspace}","${dc}","${item.Name}"]`, + }); + }); + }) + ); + } + ); + }); + skip(`findBySlug returns the correct data for item endpoint when the nspace is ${nspace}`, function(assert) { + const subject = this.owner.lookup('service:repository/auth-method'); + + return repo( + 'AuthMethod', + 'findBySlug', + subject, + function retrieveStub(stub) { + return stub( + `/v1/acl/auth-method/${id}?dc=${dc}${ + typeof nspace !== 'undefined' ? `&ns=${nspace}` : `` + }` + ); + }, + function performTest(service) { + return service.findBySlug(id, dc, nspace || undefinedNspace); + }, + function performAssertion(actual, expected) { + assert.deepEqual( + actual, + expected(function(payload) { + const item = payload; return Object.assign({}, item, { Datacenter: dc, Namespace: item.Namespace || undefinedNspace, - Partition: item.Partition || undefinedPartition, - uid: `["${item.Partition || undefinedPartition}","${item.Namespace || - undefinedNspace}","${dc}","${item.Name}"]`, + uid: `["${item.Namespace || undefinedNspace}","${dc}","${item.Name}"]`, + meta: { + cacheControl: undefined, + cursor: undefined, + dc: dc, + nspace: item.Namespace || undefinedNspace, + }, }); - }); - }) - ); - } - ); - }); - skip(`findBySlug returns the correct data for item endpoint when the nspace is ${nspace}`, function(assert) { - return repo( - 'AuthMethod', - 'findBySlug', - this.subject(), - function retrieveStub(stub) { - return stub( - `/v1/acl/auth-method/${id}?dc=${dc}${ - typeof nspace !== 'undefined' ? 
`&ns=${nspace}` : `` - }` - ); - }, - function performTest(service) { - return service.findBySlug(id, dc, nspace || undefinedNspace); - }, - function performAssertion(actual, expected) { - assert.deepEqual( - actual, - expected(function(payload) { - const item = payload; - return Object.assign({}, item, { - Datacenter: dc, - Namespace: item.Namespace || undefinedNspace, - uid: `["${item.Namespace || undefinedNspace}","${dc}","${item.Name}"]`, - meta: { - cacheControl: undefined, - cursor: undefined, - dc: dc, - nspace: item.Namespace || undefinedNspace, - }, - }); - }) - ); - } - ); + }) + ); + } + ); + }); }); }); diff --git a/ui/packages/consul-ui/tests/integration/services/repository/coordinate-test.js b/ui/packages/consul-ui/tests/integration/services/repository/coordinate-test.js index de96343c2..1e19a84ca 100644 --- a/ui/packages/consul-ui/tests/integration/services/repository/coordinate-test.js +++ b/ui/packages/consul-ui/tests/integration/services/repository/coordinate-test.js @@ -1,68 +1,74 @@ -import { moduleFor, test } from 'ember-qunit'; +import { setupTest } from 'ember-qunit'; +import { module, test } from 'qunit'; import repo from 'consul-ui/tests/helpers/repo'; import { get } from '@ember/object'; -const NAME = 'coordinate'; -moduleFor(`service:repository/${NAME}`, `Integration | Service | ${NAME}`, { - // Specify the other units that are required for this test. - integration: true, -}); const dc = 'dc-1'; const nspace = 'default'; const partition = 'default'; const now = new Date().getTime(); -test('findAllByDatacenter returns the correct data for list endpoint', function(assert) { - get(this.subject(), 'store').serializerFor(NAME).timestamp = function() { - return now; - }; - return repo( - 'Coordinate', - 'findAllByDatacenter', - this.subject(), - function retrieveStub(stub) { - return stub( - `/v1/coordinate/nodes?dc=${dc}${ - typeof partition !== 'undefined' ? `&partition=${partition}` : `` - }`, - { - CONSUL_NODE_COUNT: '100', - } +module(`Integration | Service | coordinate`, function(hooks) { + setupTest(hooks); + + test('findAllByDatacenter returns the correct data for list endpoint', function(assert) { + const subject = this.owner.lookup('service:repository/coordinate'); + + get(subject, 'store').serializerFor('coordinate').timestamp = function() { + return now; + }; + return repo( + 'Coordinate', + 'findAllByDatacenter', + subject, + function retrieveStub(stub) { + return stub( + `/v1/coordinate/nodes?dc=${dc}${ + typeof partition !== 'undefined' ? 
`&partition=${partition}` : `` + }`, + { + CONSUL_NODE_COUNT: '100', + } + ); + }, + function performTest(service) { + return service.findAllByDatacenter({ dc, partition }); + }, + function performAssertion(actual, expected) { + assert.deepEqual( + actual, + expected(function(payload) { + return payload.map(item => + Object.assign({}, item, { + SyncTime: now, + Datacenter: dc, + Partition: partition, + // TODO: nspace isn't required here, once we've + // refactored out our Serializer this can go + uid: `["${partition}","${nspace}","${dc}","${item.Node}"]`, + }) + ); + }) + ); + } + ); + }); + test('findAllByNode calls findAllByDatacenter with the correct arguments', function(assert) { + assert.expect(3); + const datacenter = 'dc-1'; + const conf = { + cursor: 1, + }; + const service = this.owner.lookup('service:repository/coordinate'); + service.findAllByDatacenter = function(params, configuration) { + assert.equal( + arguments.length, + 2, + 'Expected to be called with the correct number of arguments' ); - }, - function performTest(service) { - return service.findAllByDatacenter({ dc, partition }); - }, - function performAssertion(actual, expected) { - assert.deepEqual( - actual, - expected(function(payload) { - return payload.map(item => - Object.assign({}, item, { - SyncTime: now, - Datacenter: dc, - Partition: partition, - // TODO: nspace isn't required here, once we've - // refactored out our Serializer this can go - uid: `["${partition}","${nspace}","${dc}","${item.Node}"]`, - }) - ); - }) - ); - } - ); -}); -test('findAllByNode calls findAllByDatacenter with the correct arguments', function(assert) { - assert.expect(3); - const datacenter = 'dc-1'; - const conf = { - cursor: 1, - }; - const service = this.subject(); - service.findAllByDatacenter = function(params, configuration) { - assert.equal(arguments.length, 2, 'Expected to be called with the correct number of arguments'); - assert.equal(params.dc, datacenter); - assert.deepEqual(configuration, conf); - return Promise.resolve([]); - }; - return service.findAllByNode({ node: 'node-name', dc: datacenter }, conf); + assert.equal(params.dc, datacenter); + assert.deepEqual(configuration, conf); + return Promise.resolve([]); + }; + return service.findAllByNode({ node: 'node-name', dc: datacenter }, conf); + }); }); diff --git a/ui/packages/consul-ui/tests/integration/services/repository/dc-test.js b/ui/packages/consul-ui/tests/integration/services/repository/dc-test.js index a491c98e0..6763d54be 100644 --- a/ui/packages/consul-ui/tests/integration/services/repository/dc-test.js +++ b/ui/packages/consul-ui/tests/integration/services/repository/dc-test.js @@ -1,30 +1,31 @@ -import { moduleFor } from 'ember-qunit'; -import { skip } from 'qunit'; +import { setupTest } from 'ember-qunit'; +import { module, skip } from 'qunit'; import repo from 'consul-ui/tests/helpers/repo'; -const NAME = 'dc'; -moduleFor(`service:repository/${NAME}`, `Integration | Service | ${NAME}`, { - // Specify the other units that are required for this test. 
- integration: true, -}); -skip("findBySlug (doesn't interact with the API) but still needs an int test"); -skip('findAll returns the correct data for list endpoint', function(assert) { - return repo( - 'Dc', - 'findAll', - this.subject(), - function retrieveStub(stub) { - return stub(`/v1/catalog/datacenters`, { - CONSUL_DATACENTER_COUNT: '100', - }); - }, - function performTest(service) { - return service.findAll(); - }, - function performAssertion(actual, expected) { - actual.forEach((item, i) => { - assert.equal(actual[i].Name, item.Name); - assert.equal(item.Local, i === 0); - }); - } - ); + +module(`Integration | Service | dc`, function(hooks) { + setupTest(hooks); + skip("findBySlug (doesn't interact with the API) but still needs an int test"); + skip('findAll returns the correct data for list endpoint', function(assert) { + const subject = this.owner.lookup('service:repository/dc'); + + return repo( + 'Dc', + 'findAll', + subject, + function retrieveStub(stub) { + return stub(`/v1/catalog/datacenters`, { + CONSUL_DATACENTER_COUNT: '100', + }); + }, + function performTest(service) { + return service.findAll(); + }, + function performAssertion(actual, expected) { + actual.forEach((item, i) => { + assert.equal(actual[i].Name, item.Name); + assert.equal(item.Local, i === 0); + }); + } + ); + }); }); diff --git a/ui/packages/consul-ui/tests/integration/services/repository/discovery-chain-test.js b/ui/packages/consul-ui/tests/integration/services/repository/discovery-chain-test.js index b289fd75c..e7d3da2ba 100644 --- a/ui/packages/consul-ui/tests/integration/services/repository/discovery-chain-test.js +++ b/ui/packages/consul-ui/tests/integration/services/repository/discovery-chain-test.js @@ -1,42 +1,42 @@ -import { moduleFor, test } from 'ember-qunit'; +import { module, test } from 'qunit'; +import { setupTest } from 'ember-qunit'; import repo from 'consul-ui/tests/helpers/repo'; -moduleFor('service:repository/discovery-chain', 'Integration | Service | discovery-chain', { - // Specify the other units that are required for this test. 
- integration: true, -}); -const dc = 'dc-1'; -const id = 'slug'; -test('findBySlug returns the correct data for item endpoint', function(assert) { - return repo( - 'Service', - 'findBySlug', - this.subject(), - function retrieveStub(stub) { - return stub(`/v1/discovery-chain/${id}?dc=${dc}`, { - CONSUL_DISCOVERY_CHAIN_COUNT: 1, - }); - }, - function performTest(service) { - return service.findBySlug({ id, dc }); - }, - function performAssertion(actual, expected) { - const result = expected(function(payload) { - return Object.assign( - {}, - { - Datacenter: dc, - uid: `["default","default","${dc}","${id}"]`, - meta: { - cacheControl: undefined, - cursor: undefined, +module('Integration | Service | discovery-chain', function(hooks) { + setupTest(hooks); + const dc = 'dc-1'; + const id = 'slug'; + test('findBySlug returns the correct data for item endpoint', function(assert) { + return repo( + 'Service', + 'findBySlug', + this.owner.lookup('service:repository/discovery-chain'), + function retrieveStub(stub) { + return stub(`/v1/discovery-chain/${id}?dc=${dc}`, { + CONSUL_DISCOVERY_CHAIN_COUNT: 1, + }); + }, + function performTest(service) { + return service.findBySlug({ id, dc }); + }, + function performAssertion(actual, expected) { + const result = expected(function(payload) { + return Object.assign( + {}, + { + Datacenter: dc, + uid: `["default","default","${dc}","${id}"]`, + meta: { + cacheControl: undefined, + cursor: undefined, + }, }, - }, - payload - ); - }); - assert.equal(actual.Datacenter, result.Datacenter); - assert.equal(actual.uid, result.uid); - } - ); + payload + ); + }); + assert.equal(actual.Datacenter, result.Datacenter); + assert.equal(actual.uid, result.uid); + } + ); + }); }); diff --git a/ui/packages/consul-ui/tests/integration/services/repository/kv-test.js b/ui/packages/consul-ui/tests/integration/services/repository/kv-test.js index ee7f5a085..bde99eb3f 100644 --- a/ui/packages/consul-ui/tests/integration/services/repository/kv-test.js +++ b/ui/packages/consul-ui/tests/integration/services/repository/kv-test.js @@ -1,90 +1,97 @@ -import { moduleFor, test } from 'ember-qunit'; +import { module, test } from 'qunit'; +import { setupTest } from 'ember-qunit'; import repo from 'consul-ui/tests/helpers/repo'; import { env } from '../../../../env'; import { get } from '@ember/object'; -const NAME = 'kv'; -moduleFor(`service:repository/${NAME}`, `Integration | Service | ${NAME}`, { - // Specify the other units that are required for this test. - integration: true, -}); -const dc = 'dc-1'; -const id = 'key-name'; -const now = new Date().getTime(); -const undefinedNspace = 'default'; -const undefinedPartition = 'default'; -const partition = 'default'; -[undefinedNspace, 'team-1', undefined].forEach(nspace => { - test(`findAllBySlug returns the correct data for list endpoint when nspace is ${nspace}`, function(assert) { - get(this.subject(), 'store').serializerFor(NAME).timestamp = function() { - return now; - }; - return repo( - 'Kv', - 'findAllBySlug', - this.subject(), - function retrieveTest(stub) { - return stub( - `/v1/kv/${id}?keys&dc=${dc}${typeof nspace !== 'undefined' ? `&ns=${nspace}` : ``}${ - typeof partition !== 'undefined' ? 
`&partition=${partition}` : `` - }`, - { - CONSUL_KV_COUNT: '1', - } - ); - }, - function performTest(service) { - return service.findAllBySlug({ - id, - dc, - ns: nspace || undefinedNspace, - partition: partition || undefinedPartition, - }); - }, - function performAssertion(actual, expected) { - const expectedNspace = env('CONSUL_NSPACES_ENABLED') - ? nspace || undefinedNspace - : 'default'; - const expectedPartition = env('CONSUL_PARTITIONS_ENABLED') - ? partition || undefinedPartition - : 'default'; - actual.forEach(item => { - assert.equal(item.uid, `["${expectedPartition}","${expectedNspace}","${dc}","${item.Key}"]`); - assert.equal(item.Datacenter, dc); - }); - } - ); - }); - test(`findBySlug returns the correct data for item endpoint when nspace is ${nspace}`, function(assert) { - return repo( - 'Kv', - 'findBySlug', - this.subject(), - function(stub) { - return stub( - `/v1/kv/${id}?dc=${dc}${typeof nspace !== 'undefined' ? `&ns=${nspace}` : ``}${ - typeof partition !== 'undefined' ? `&partition=${partition}` : `` - }` - ); - }, - function(service) { - return service.findBySlug({ - id, - dc, - ns: nspace || undefinedNspace, - partition: partition || undefinedPartition, - }); - }, - function(actual, expected) { - expected( - function(payload) { +module(`Integration | Service | kv`, function(hooks) { + setupTest(hooks); + const dc = 'dc-1'; + const id = 'key-name'; + const now = new Date().getTime(); + const undefinedNspace = 'default'; + const undefinedPartition = 'default'; + const partition = 'default'; + [undefinedNspace, 'team-1', undefined].forEach(nspace => { + test(`findAllBySlug returns the correct data for list endpoint when nspace is ${nspace}`, function(assert) { + const subject = this.owner.lookup('service:repository/kv'); + + get(subject, 'store').serializerFor('kv').timestamp = function() { + return now; + }; + return repo( + 'Kv', + 'findAllBySlug', + subject, + function retrieveTest(stub) { + return stub( + `/v1/kv/${id}?keys&dc=${dc}${typeof nspace !== 'undefined' ? `&ns=${nspace}` : ``}${ + typeof partition !== 'undefined' ? `&partition=${partition}` : `` + }`, + { + CONSUL_KV_COUNT: '1', + } + ); + }, + function performTest(service) { + return service.findAllBySlug({ + id, + dc, + ns: nspace || undefinedNspace, + partition: partition || undefinedPartition, + }); + }, + function performAssertion(actual, expected) { + const expectedNspace = env('CONSUL_NSPACES_ENABLED') + ? nspace || undefinedNspace + : 'default'; + const expectedPartition = env('CONSUL_PARTITIONS_ENABLED') + ? partition || undefinedPartition + : 'default'; + actual.forEach(item => { + assert.equal( + item.uid, + `["${expectedPartition}","${expectedNspace}","${dc}","${item.Key}"]` + ); + assert.equal(item.Datacenter, dc); + }); + } + ); + }); + test(`findBySlug returns the correct data for item endpoint when nspace is ${nspace}`, function(assert) { + const subject = this.owner.lookup('service:repository/kv'); + + return repo( + 'Kv', + 'findBySlug', + subject, + function(stub) { + return stub( + `/v1/kv/${id}?dc=${dc}${typeof nspace !== 'undefined' ? `&ns=${nspace}` : ``}${ + typeof partition !== 'undefined' ? 
`&partition=${partition}` : `` + }` + ); + }, + function(service) { + return service.findBySlug({ + id, + dc, + ns: nspace || undefinedNspace, + partition: partition || undefinedPartition, + }); + }, + function(actual, expected) { + expected(function(payload) { const item = payload[0]; - assert.equal(actual.uid, `["${item.Partition || undefinedPartition}","${item.Namespace || - undefinedNspace}","${dc}","${item.Key}"]`); + assert.equal( + actual.uid, + `["${item.Partition || undefinedPartition}","${item.Namespace || + undefinedNspace}","${dc}","${item.Key}"]` + ); assert.equal(actual.Datacenter, dc); - } - ); - } - ); + }); + } + ); + }); }); }); diff --git a/ui/packages/consul-ui/tests/integration/services/repository/node-test.js b/ui/packages/consul-ui/tests/integration/services/repository/node-test.js index 0735c57d6..2e43465cc 100644 --- a/ui/packages/consul-ui/tests/integration/services/repository/node-test.js +++ b/ui/packages/consul-ui/tests/integration/services/repository/node-test.js @@ -1,64 +1,67 @@ -import { moduleFor, test } from 'ember-qunit'; +import { setupTest } from 'ember-qunit'; +import { module, test } from 'qunit'; import repo from 'consul-ui/tests/helpers/repo'; import { get } from '@ember/object'; -const NAME = 'node'; -moduleFor(`service:repository/${NAME}`, `Integration | Service | ${NAME}`, { - // Specify the other units that are required for this test. - integration: true, -}); const dc = 'dc-1'; const id = 'token-name'; const now = new Date().getTime(); const nspace = 'default'; const partition = 'default'; -test('findByDatacenter returns the correct data for list endpoint', function(assert) { - get(this.subject(), 'store').serializerFor(NAME).timestamp = function() { - return now; - }; - return repo( - 'Node', - 'findAllByDatacenter', - this.subject(), - function retrieveStub(stub) { - return stub( - `/v1/internal/ui/nodes?dc=${dc}${ - typeof partition !== 'undefined' ? `&partition=${partition}` : `` - }`, - { - CONSUL_NODE_COUNT: '100', - } - ); - }, - function performTest(service) { - return service.findAllByDatacenter({ dc, partition }); - }, - function performAssertion(actual, expected) { - actual.forEach(item => { - assert.equal(item.uid, `["${partition}","${nspace}","${dc}","${item.ID}"]`); - assert.equal(item.Datacenter, dc); - }); - } - ); -}); -test('findBySlug returns the correct data for item endpoint', function(assert) { - return repo( - 'Node', - 'findBySlug', - this.subject(), - function(stub) { - return stub( - `/v1/internal/ui/node/${id}?dc=${dc}${ - typeof partition !== 'undefined' ? `&partition=${partition}` : `` - }` - ); - }, - function(service) { - return service.findBySlug({ id, dc, partition }); - }, - function(actual, expected) { - assert.equal(actual.uid, `["${partition}","${nspace}","${dc}","${actual.ID}"]`); - assert.equal(actual.Datacenter, dc); - } - ); +module(`Integration | Service | node`, function(hooks) { + setupTest(hooks); + + test('findByDatacenter returns the correct data for list endpoint', function(assert) { + const subject = this.owner.lookup('service:repository/node'); + get(subject, 'store').serializerFor('node').timestamp = function() { + return now; + }; + return repo( + 'Node', + 'findAllByDatacenter', + subject, + function retrieveStub(stub) { + return stub( + `/v1/internal/ui/nodes?dc=${dc}${ + typeof partition !== 'undefined' ? 
`&partition=${partition}` : `` + }`, + { + CONSUL_NODE_COUNT: '100', + } + ); + }, + function performTest(service) { + return service.findAllByDatacenter({ dc, partition }); + }, + function performAssertion(actual, expected) { + actual.forEach(item => { + assert.equal(item.uid, `["${partition}","${nspace}","${dc}","${item.ID}"]`); + assert.equal(item.Datacenter, dc); + }); + } + ); + }); + test('findBySlug returns the correct data for item endpoint', function(assert) { + const subject = this.owner.lookup('service:repository/node'); + + return repo( + 'Node', + 'findBySlug', + subject, + function(stub) { + return stub( + `/v1/internal/ui/node/${id}?dc=${dc}${ + typeof partition !== 'undefined' ? `&partition=${partition}` : `` + }` + ); + }, + function(service) { + return service.findBySlug({ id, dc, partition }); + }, + function(actual, expected) { + assert.equal(actual.uid, `["${partition}","${nspace}","${dc}","${actual.ID}"]`); + assert.equal(actual.Datacenter, dc); + } + ); + }); }); diff --git a/ui/packages/consul-ui/tests/integration/services/repository/policy-test.js b/ui/packages/consul-ui/tests/integration/services/repository/policy-test.js index b7fe07ed1..662ff8715 100644 --- a/ui/packages/consul-ui/tests/integration/services/repository/policy-test.js +++ b/ui/packages/consul-ui/tests/integration/services/repository/policy-test.js @@ -1,92 +1,95 @@ -import { moduleFor, test, skip } from 'ember-qunit'; +import { module, skip, test } from 'qunit'; +import { setupTest } from 'ember-qunit'; import { get } from '@ember/object'; import repo from 'consul-ui/tests/helpers/repo'; -const NAME = 'policy'; -moduleFor(`service:repository/${NAME}`, `Integration | Service | ${NAME}`, { - // Specify the other units that are required for this test. - integration: true, -}); -skip('translate returns the correct data for the translate endpoint'); -const now = new Date().getTime(); -const dc = 'dc-1'; -const id = 'policy-name'; -const undefinedNspace = 'default'; -const undefinedPartition = 'default'; -const partition = 'default'; -[undefinedNspace, 'team-1', undefined].forEach(nspace => { - test(`findByDatacenter returns the correct data for list endpoint when nspace is ${nspace}`, function(assert) { - get(this.subject(), 'store').serializerFor(NAME).timestamp = function() { - return now; - }; - return repo( - 'Policy', - 'findAllByDatacenter', - this.subject(), - function retrieveStub(stub) { - return stub( - `/v1/acl/policies?dc=${dc}${typeof nspace !== 'undefined' ? `&ns=${nspace}` : ``}${ - typeof partition !== 'undefined' ? `&partition=${partition}` : `` - }`, - { - CONSUL_POLICY_COUNT: '10', - } - ); - }, - function performTest(service) { - return service.findAllByDatacenter({ - dc, - ns: nspace || undefinedNspace, - partition: partition || undefinedPartition, - }); - }, - function performAssertion(actual, expected) { - assert.deepEqual( - actual, - expected(function(payload) { - return payload.map(item => - Object.assign({}, item, { - SyncTime: now, - Datacenter: dc, - Namespace: item.Namespace || undefinedNspace, - Partition: item.Partition || undefinedPartition, - uid: `["${item.Partition || undefinedPartition}","${item.Namespace || - undefinedNspace}","${dc}","${item.ID}"]`, - }) - ); - }) - ); - } - ); - }); - test(`findBySlug returns the correct data for item endpoint when the nspace is ${nspace}`, function(assert) { - return repo( - 'Policy', - 'findBySlug', - this.subject(), - function retrieveStub(stub) { - return stub( - `/v1/acl/policy/${id}?dc=${dc}${typeof nspace !== 'undefined' ? 
`&ns=${nspace}` : ``}${ - typeof partition !== 'undefined' ? `&partition=${partition}` : `` - }` - ); - }, - function performTest(service) { - return service.findBySlug({ - id, - dc, - ns: nspace || undefinedNspace, - partition: partition || undefinedPartition, - }); - }, - function performAssertion(actual, expected) { - assert.equal( - actual.uid, - `["${partition || undefinedPartition}","${nspace || undefinedNspace}","${dc}","${ - actual.ID - }"]` - ); - assert.equal(actual.Datacenter, dc); - } - ); + +module(`Integration | Service | policy`, function(hooks) { + setupTest(hooks); + skip('translate returns the correct data for the translate endpoint'); + const now = new Date().getTime(); + const dc = 'dc-1'; + const id = 'policy-name'; + const undefinedNspace = 'default'; + const undefinedPartition = 'default'; + const partition = 'default'; + [undefinedNspace, 'team-1', undefined].forEach(nspace => { + test(`findByDatacenter returns the correct data for list endpoint when nspace is ${nspace}`, function(assert) { + const subject = this.owner.lookup('service:repository/policy'); + + get(subject, 'store').serializerFor('policy').timestamp = function() { + return now; + }; + return repo( + 'Policy', + 'findAllByDatacenter', + subject, + function retrieveStub(stub) { + return stub( + `/v1/acl/policies?dc=${dc}${typeof nspace !== 'undefined' ? `&ns=${nspace}` : ``}${ + typeof partition !== 'undefined' ? `&partition=${partition}` : `` + }`, + { + CONSUL_POLICY_COUNT: '10', + } + ); + }, + function performTest(service) { + return service.findAllByDatacenter({ + dc, + ns: nspace || undefinedNspace, + partition: partition || undefinedPartition, + }); + }, + function performAssertion(actual, expected) { + assert.deepEqual( + actual, + expected(function(payload) { + return payload.map(item => + Object.assign({}, item, { + SyncTime: now, + Datacenter: dc, + Namespace: item.Namespace || undefinedNspace, + Partition: item.Partition || undefinedPartition, + uid: `["${item.Partition || undefinedPartition}","${item.Namespace || + undefinedNspace}","${dc}","${item.ID}"]`, + }) + ); + }) + ); + } + ); + }); + test(`findBySlug returns the correct data for item endpoint when the nspace is ${nspace}`, function(assert) { + const subject = this.owner.lookup('service:repository/policy'); + return repo( + 'Policy', + 'findBySlug', + subject, + function retrieveStub(stub) { + return stub( + `/v1/acl/policy/${id}?dc=${dc}${typeof nspace !== 'undefined' ? `&ns=${nspace}` : ``}${ + typeof partition !== 'undefined' ? 
`&partition=${partition}` : `` + }` + ); + }, + function performTest(service) { + return service.findBySlug({ + id, + dc, + ns: nspace || undefinedNspace, + partition: partition || undefinedPartition, + }); + }, + function performAssertion(actual, expected) { + assert.equal( + actual.uid, + `["${partition || undefinedPartition}","${nspace || undefinedNspace}","${dc}","${ + actual.ID + }"]` + ); + assert.equal(actual.Datacenter, dc); + } + ); + }); }); }); diff --git a/ui/packages/consul-ui/tests/integration/services/repository/role-test.js b/ui/packages/consul-ui/tests/integration/services/repository/role-test.js index 474eeaf65..9241b4323 100644 --- a/ui/packages/consul-ui/tests/integration/services/repository/role-test.js +++ b/ui/packages/consul-ui/tests/integration/services/repository/role-test.js @@ -1,102 +1,105 @@ -import { moduleFor, test, skip } from 'ember-qunit'; +import { module, skip, test } from 'qunit'; +import { setupTest } from 'ember-qunit'; import { get } from '@ember/object'; import repo from 'consul-ui/tests/helpers/repo'; import { createPolicies } from 'consul-ui/tests/helpers/normalizers'; -const NAME = 'role'; -moduleFor(`service:repository/${NAME}`, `Integration | Service | ${NAME}`, { - // Specify the other units that are required for this test. - integration: true, -}); -const now = new Date().getTime(); -const dc = 'dc-1'; -const id = 'role-name'; -const undefinedNspace = 'default'; -const undefinedPartition = 'default'; -const partition = 'default'; -[undefinedNspace, 'team-1', undefined].forEach(nspace => { - test(`findByDatacenter returns the correct data for list endpoint when nspace is ${nspace}`, function(assert) { - get(this.subject(), 'store').serializerFor(NAME).timestamp = function() { - return now; - }; - return repo( - 'Role', - 'findAllByDatacenter', - this.subject(), - function retrieveStub(stub) { - return stub( - `/v1/acl/roles?dc=${dc}${typeof nspace !== 'undefined' ? `&ns=${nspace}` : ``}${ - typeof partition !== 'undefined' ? `&partition=${partition}` : `` - }`, - { - CONSUL_ROLE_COUNT: '100', - } - ); - }, - function performTest(service) { - return service.findAllByDatacenter({ - dc: dc, - nspace: nspace || undefinedNspace, - partition: partition || undefinedPartition, - }); - }, - function performAssertion(actual, expected) { - assert.deepEqual( - actual, - expected(function(payload) { - return payload.map(item => - Object.assign({}, item, { - SyncTime: now, +module(`Integration | Service | role`, function(hooks) { + setupTest(hooks); + const now = new Date().getTime(); + const dc = 'dc-1'; + const id = 'role-name'; + const undefinedNspace = 'default'; + const undefinedPartition = 'default'; + const partition = 'default'; + [undefinedNspace, 'team-1', undefined].forEach(nspace => { + test(`findByDatacenter returns the correct data for list endpoint when nspace is ${nspace}`, function(assert) { + const subject = this.owner.lookup('service:repository/role'); + + get(subject, 'store').serializerFor('role').timestamp = function() { + return now; + }; + return repo( + 'Role', + 'findAllByDatacenter', + subject, + function retrieveStub(stub) { + return stub( + `/v1/acl/roles?dc=${dc}${typeof nspace !== 'undefined' ? `&ns=${nspace}` : ``}${ + typeof partition !== 'undefined' ? 
`&partition=${partition}` : `` + }`, + { + CONSUL_ROLE_COUNT: '100', + } + ); + }, + function performTest(service) { + return service.findAllByDatacenter({ + dc: dc, + nspace: nspace || undefinedNspace, + partition: partition || undefinedPartition, + }); + }, + function performAssertion(actual, expected) { + assert.deepEqual( + actual, + expected(function(payload) { + return payload.map(item => + Object.assign({}, item, { + SyncTime: now, + Datacenter: dc, + Namespace: item.Namespace || undefinedNspace, + Partition: item.Partition || undefinedPartition, + uid: `["${item.Partition || undefinedPartition}","${item.Namespace || + undefinedNspace}","${dc}","${item.ID}"]`, + Policies: createPolicies(item), + }) + ); + }) + ); + } + ); + }); + // FIXME: For some reason this tries to initialize the metrics service? + skip(`findBySlug returns the correct data for item endpoint when the nspace is ${nspace}`, function(assert) { + const subject = this.owner.lookup('service:repository/role'); + + return repo( + 'Role', + 'findBySlug', + subject, + function retrieveStub(stub) { + return stub( + `/v1/acl/role/${id}?dc=${dc}${typeof nspace !== 'undefined' ? `&ns=${nspace}` : ``}${ + typeof partition !== 'undefined' ? `&partition=${partition}` : `` + }` + ); + }, + function performTest(service) { + return service.findBySlug({ + id, + dc, + ns: nspace || undefinedNspace, + partition: partition || undefinedPartition, + }); + }, + function performAssertion(actual, expected) { + assert.deepEqual( + actual, + expected(function(payload) { + const item = payload; + return Object.assign({}, item, { Datacenter: dc, Namespace: item.Namespace || undefinedNspace, Partition: item.Partition || undefinedPartition, - uid: `["${item.Partition || undefinedPartition}","${item.Namespace || + uid: `["${partition || undefinedPartition}","${item.Namespace || undefinedNspace}","${dc}","${item.ID}"]`, Policies: createPolicies(item), - }) - ); - }) - ); - } - ); - }); - // FIXME: For some reason this tries to initialize the metrics service? - skip(`findBySlug returns the correct data for item endpoint when the nspace is ${nspace}`, function(assert) { - return repo( - 'Role', - 'findBySlug', - this.subject(), - function retrieveStub(stub) { - return stub( - `/v1/acl/role/${id}?dc=${dc}${typeof nspace !== 'undefined' ? `&ns=${nspace}` : ``}${ - typeof partition !== 'undefined' ? 
`&partition=${partition}` : `` - }` - ); - }, - function performTest(service) { - return service.findBySlug({ - id, - dc, - ns: nspace || undefinedNspace, - partition: partition || undefinedPartition, - }); - }, - function performAssertion(actual, expected) { - assert.deepEqual( - actual, - expected(function(payload) { - const item = payload; - return Object.assign({}, item, { - Datacenter: dc, - Namespace: item.Namespace || undefinedNspace, - Partition: item.Partition || undefinedPartition, - uid: `["${partition || undefinedPartition}","${item.Namespace || - undefinedNspace}","${dc}","${item.ID}"]`, - Policies: createPolicies(item), - }); - }) - ); - } - ); + }); + }) + ); + } + ); + }); }); }); diff --git a/ui/packages/consul-ui/tests/integration/services/repository/service-test.js b/ui/packages/consul-ui/tests/integration/services/repository/service-test.js index 77b7cf92f..40174005f 100644 --- a/ui/packages/consul-ui/tests/integration/services/repository/service-test.js +++ b/ui/packages/consul-ui/tests/integration/services/repository/service-test.js @@ -1,69 +1,70 @@ -import { moduleFor, test } from 'ember-qunit'; +import { module, test } from 'qunit'; +import { setupTest } from 'ember-qunit'; import repo from 'consul-ui/tests/helpers/repo'; import { get } from '@ember/object'; -const NAME = 'service'; -moduleFor(`service:repository/${NAME}`, `Integration | Service | ${NAME}`, { - // Specify the other units that are required for this test. - integration: true, -}); -const dc = 'dc-1'; -const now = new Date().getTime(); -const undefinedNspace = 'default'; -const undefinedPartition = 'default'; -const partition = 'default'; -[undefinedNspace, 'team-1', undefined].forEach(nspace => { - test(`findGatewayBySlug returns the correct data for list endpoint when nspace is ${nspace}`, function(assert) { - get(this.subject(), 'store').serializerFor(NAME).timestamp = function() { - return now; - }; - const gateway = 'gateway'; - const conf = { - cursor: 1, - }; - return repo( - 'Service', - 'findGatewayBySlug', - this.subject(), - function retrieveStub(stub) { - return stub( - `/v1/internal/ui/gateway-services-nodes/${gateway}?dc=${dc}${ - typeof nspace !== 'undefined' ? `&ns=${nspace}` : `` - }${typeof partition !== 'undefined' ? 
`&partition=${partition}` : ``}`, - { - CONSUL_SERVICE_COUNT: '100', - } - ); - }, - function performTest(service) { - return service.findGatewayBySlug( - { - gateway, - dc, - ns: nspace || undefinedNspace, - partition: partition || undefinedPartition, - }, - conf - ); - }, - function performAssertion(actual, expected) { - const result = expected(function(payload) { - return payload.map(item => - Object.assign({}, item, { - SyncTime: now, - Datacenter: dc, - Namespace: item.Namespace || undefinedNspace, - Partition: item.Partition || undefinedPartition, - uid: `["${item.Partition || undefinedPartition}","${item.Namespace || - undefinedNspace}","${dc}","${item.Name}"]`, - }) + +module(`Integration | Service | service`, function(hooks) { + setupTest(hooks); + const dc = 'dc-1'; + const now = new Date().getTime(); + const undefinedNspace = 'default'; + const undefinedPartition = 'default'; + const partition = 'default'; + [undefinedNspace, 'team-1', undefined].forEach(nspace => { + test(`findGatewayBySlug returns the correct data for list endpoint when nspace is ${nspace}`, function(assert) { + const subject = this.owner.lookup('service:repository/service'); + get(subject, 'store').serializerFor('service').timestamp = function() { + return now; + }; + const gateway = 'gateway'; + const conf = { + cursor: 1, + }; + return repo( + 'Service', + 'findGatewayBySlug', + subject, + function retrieveStub(stub) { + return stub( + `/v1/internal/ui/gateway-services-nodes/${gateway}?dc=${dc}${ + typeof nspace !== 'undefined' ? `&ns=${nspace}` : `` + }${typeof partition !== 'undefined' ? `&partition=${partition}` : ``}`, + { + CONSUL_SERVICE_COUNT: '100', + } ); - }); - assert.equal(actual[0].SyncTime, result[0].SyncTime); - assert.equal(actual[0].Datacenter, result[0].Datacenter); - assert.equal(actual[0].Namespace, result[0].Namespace); - assert.equal(actual[0].Partition, result[0].Partition); - assert.equal(actual[0].uid, result[0].uid); - } - ); + }, + function performTest(service) { + return service.findGatewayBySlug( + { + gateway, + dc, + ns: nspace || undefinedNspace, + partition: partition || undefinedPartition, + }, + conf + ); + }, + function performAssertion(actual, expected) { + const result = expected(function(payload) { + return payload.map(item => + Object.assign({}, item, { + SyncTime: now, + Datacenter: dc, + Namespace: item.Namespace || undefinedNspace, + Partition: item.Partition || undefinedPartition, + uid: `["${item.Partition || undefinedPartition}","${item.Namespace || + undefinedNspace}","${dc}","${item.Name}"]`, + }) + ); + }); + assert.equal(actual[0].SyncTime, result[0].SyncTime); + assert.equal(actual[0].Datacenter, result[0].Datacenter); + assert.equal(actual[0].Namespace, result[0].Namespace); + assert.equal(actual[0].Partition, result[0].Partition); + assert.equal(actual[0].uid, result[0].uid); + } + ); + }); }); }); diff --git a/ui/packages/consul-ui/tests/integration/services/repository/session-test.js b/ui/packages/consul-ui/tests/integration/services/repository/session-test.js index c99742fbe..eee607e1b 100644 --- a/ui/packages/consul-ui/tests/integration/services/repository/session-test.js +++ b/ui/packages/consul-ui/tests/integration/services/repository/session-test.js @@ -1,99 +1,102 @@ -import { moduleFor, test } from 'ember-qunit'; +import { module, test } from 'qunit'; +import { setupTest } from 'ember-qunit'; import repo from 'consul-ui/tests/helpers/repo'; import { get } from '@ember/object'; -const NAME = 'session'; -moduleFor(`service:repository/${NAME}`, 
`Integration | Service | ${NAME}`, { - // Specify the other units that are required for this test. - integration: true, -}); -const dc = 'dc-1'; -const id = 'node-name'; -const now = new Date().getTime(); -const undefinedNspace = 'default'; -const undefinedPartition = 'default'; -const partition = 'default'; -[undefinedNspace, 'team-1', undefined].forEach(nspace => { - test(`findByNode returns the correct data for list endpoint when the nspace is ${nspace}`, function(assert) { - get(this.subject(), 'store').serializerFor(NAME).timestamp = function() { - return now; - }; - return repo( - 'Session', - 'findByNode', - this.subject(), - function retrieveStub(stub) { - return stub( - `/v1/session/node/${id}?dc=${dc}${typeof nspace !== 'undefined' ? `&ns=${nspace}` : ``}${ - typeof partition !== 'undefined' ? `&partition=${partition}` : `` - }`, - { - CONSUL_SESSION_COUNT: '100', - } - ); - }, - function performTest(service) { - return service.findByNode({ - id, - dc, - ns: nspace || undefinedNspace, - partition: partition || undefinedPartition, - }); - }, - function performAssertion(actual, expected) { - assert.deepEqual( - actual, - expected(function(payload) { - return payload.map(item => - Object.assign({}, item, { - SyncTime: now, +module(`Integration | Service | session`, function(hooks) { + setupTest(hooks); + + const dc = 'dc-1'; + const id = 'node-name'; + const now = new Date().getTime(); + const undefinedNspace = 'default'; + const undefinedPartition = 'default'; + const partition = 'default'; + [undefinedNspace, 'team-1', undefined].forEach(nspace => { + test(`findByNode returns the correct data for list endpoint when the nspace is ${nspace}`, function(assert) { + const subject = this.owner.lookup('service:repository/session'); + + get(subject, 'store').serializerFor('session').timestamp = function() { + return now; + }; + return repo( + 'Session', + 'findByNode', + subject, + function retrieveStub(stub) { + return stub( + `/v1/session/node/${id}?dc=${dc}${ + typeof nspace !== 'undefined' ? `&ns=${nspace}` : `` + }${typeof partition !== 'undefined' ? `&partition=${partition}` : ``}`, + { + CONSUL_SESSION_COUNT: '100', + } + ); + }, + function performTest(service) { + return service.findByNode({ + id, + dc, + ns: nspace || undefinedNspace, + partition: partition || undefinedPartition, + }); + }, + function performAssertion(actual, expected) { + assert.deepEqual( + actual, + expected(function(payload) { + return payload.map(item => + Object.assign({}, item, { + SyncTime: now, + Datacenter: dc, + Namespace: item.Namespace || undefinedNspace, + Partition: item.Partition || undefinedPartition, + uid: `["${item.Partition || undefinedPartition}","${item.Namespace || + undefinedNspace}","${dc}","${item.ID}"]`, + }) + ); + }) + ); + } + ); + }); + test(`findByKey returns the correct data for item endpoint when the nspace is ${nspace}`, function(assert) { + const subject = this.owner.lookup('service:repository/session'); + return repo( + 'Session', + 'findByKey', + subject, + function(stub) { + return stub( + `/v1/session/info/${id}?dc=${dc}${ + typeof nspace !== 'undefined' ? `&ns=${nspace}` : `` + }${typeof partition !== 'undefined' ? 
`&partition=${partition}` : ``}` + ); + }, + function(service) { + return service.findByKey({ + id, + dc, + ns: nspace || undefinedNspace, + partition: partition || undefinedPartition, + }); + }, + function(actual, expected) { + assert.deepEqual( + actual, + expected(function(payload) { + const item = payload[0]; + return Object.assign({}, item, { Datacenter: dc, Namespace: item.Namespace || undefinedNspace, Partition: item.Partition || undefinedPartition, uid: `["${item.Partition || undefinedPartition}","${item.Namespace || undefinedNspace}","${dc}","${item.ID}"]`, - }) - ); - }) - ); - } - ); - }); - test(`findByKey returns the correct data for item endpoint when the nspace is ${nspace}`, function(assert) { - return repo( - 'Session', - 'findByKey', - this.subject(), - function(stub) { - return stub( - `/v1/session/info/${id}?dc=${dc}${typeof nspace !== 'undefined' ? `&ns=${nspace}` : ``}${ - typeof partition !== 'undefined' ? `&partition=${partition}` : `` - }` - ); - }, - function(service) { - return service.findByKey({ - id, - dc, - ns: nspace || undefinedNspace, - partition: partition || undefinedPartition, - }); - }, - function(actual, expected) { - assert.deepEqual( - actual, - expected(function(payload) { - const item = payload[0]; - return Object.assign({}, item, { - Datacenter: dc, - Namespace: item.Namespace || undefinedNspace, - Partition: item.Partition || undefinedPartition, - uid: `["${item.Partition || undefinedPartition}","${item.Namespace || - undefinedNspace}","${dc}","${item.ID}"]`, - }); - }) - ); - } - ); + }); + }) + ); + } + ); + }); }); }); diff --git a/ui/packages/consul-ui/tests/integration/services/repository/token-test.js b/ui/packages/consul-ui/tests/integration/services/repository/token-test.js index 2142f031b..2af8d73d7 100644 --- a/ui/packages/consul-ui/tests/integration/services/repository/token-test.js +++ b/ui/packages/consul-ui/tests/integration/services/repository/token-test.js @@ -1,181 +1,184 @@ -import { moduleFor, test, skip } from 'ember-qunit'; +import { module, skip, test } from 'qunit'; +import { setupTest } from 'ember-qunit'; import repo from 'consul-ui/tests/helpers/repo'; import { createPolicies } from 'consul-ui/tests/helpers/normalizers'; -const NAME = 'token'; -moduleFor(`service:repository/${NAME}`, `Integration | Service | ${NAME}`, { - // Specify the other units that are required for this test. - integration: true, -}); -skip('clone returns the correct data for the clone endpoint'); -const dc = 'dc-1'; -const id = 'token-id'; -const undefinedNspace = 'default'; -const undefinedPartition = 'default'; -const partition = 'default'; -[undefinedNspace, 'team-1', undefined].forEach(nspace => { - test(`findByDatacenter returns the correct data for list endpoint when nspace is ${nspace}`, function(assert) { - return repo( - 'Token', - 'findAllByDatacenter', - this.subject(), - function retrieveStub(stub) { - return stub( - `/v1/acl/tokens?dc=${dc}${typeof nspace !== 'undefined' ? `&ns=${nspace}` : ``}${ - typeof partition !== 'undefined' ? 
`&partition=${partition}` : `` - }`, - { - CONSUL_TOKEN_COUNT: '100', - } - ); - }, - function performTest(service) { - return service.findAllByDatacenter({ - dc, - ns: nspace || undefinedNspace, - partition: partition || undefinedPartition, - }); - }, - function performAssertion(actual, expected) { - assert.deepEqual( - actual, - expected(function(payload) { - return payload.map(function(item) { - return Object.assign({}, item, { - Datacenter: dc, - CreateTime: new Date(item.CreateTime), - Namespace: item.Namespace || undefinedNspace, - Partition: item.Partition || undefinedPartition, - uid: `["${item.Partition || undefinedPartition}","${item.Namespace || - undefinedNspace}","${dc}","${item.AccessorID}"]`, - Policies: createPolicies(item), - }); - }); - }) - ); - } - ); - }); - test(`findBySlug returns the correct data for item endpoint when nspace is ${nspace}`, function(assert) { - return repo( - 'Token', - 'findBySlug', - this.subject(), - function retrieveStub(stub) { - return stub( - `/v1/acl/token/${id}?dc=${dc}${typeof nspace !== 'undefined' ? `&ns=${nspace}` : ``}${ - typeof partition !== 'undefined' ? `&partition=${partition}` : `` - }` - ); - }, - function performTest(service) { - return service.findBySlug({ - id, - dc, - ns: nspace || undefinedNspace, - partition: partition || undefinedPartition, - }); - }, - function performAssertion(actual, expected) { - expected(function(item) { - assert.equal( - actual.uid, - `["${partition || undefinedPartition}","${nspace || undefinedNspace}","${dc}","${ - item.AccessorID - }"]` +module(`Integration | Service | token`, function(hooks) { + setupTest(hooks); + skip('clone returns the correct data for the clone endpoint'); + const dc = 'dc-1'; + const id = 'token-id'; + const undefinedNspace = 'default'; + const undefinedPartition = 'default'; + const partition = 'default'; + [undefinedNspace, 'team-1', undefined].forEach(nspace => { + test(`findByDatacenter returns the correct data for list endpoint when nspace is ${nspace}`, function(assert) { + const subject = this.owner.lookup('service:repository/token'); + return repo( + 'Token', + 'findAllByDatacenter', + subject, + function retrieveStub(stub) { + return stub( + `/v1/acl/tokens?dc=${dc}${typeof nspace !== 'undefined' ? `&ns=${nspace}` : ``}${ + typeof partition !== 'undefined' ? `&partition=${partition}` : `` + }`, + { + CONSUL_TOKEN_COUNT: '100', + } ); - assert.equal(actual.Datacenter, dc); - assert.deepEqual(actual.Policies, createPolicies(item)); - }); - } - ); - }); - test(`findByPolicy returns the correct data when nspace is ${nspace}`, function(assert) { - const policy = 'policy-1'; - return repo( - 'Token', - 'findByPolicy', - this.subject(), - function retrieveStub(stub) { - return stub( - `/v1/acl/tokens?dc=${dc}&policy=${policy}${ - typeof nspace !== 'undefined' ? `&ns=${nspace}` : `` - }${typeof partition !== 'undefined' ? 
`&partition=${partition}` : ``}`, - { - CONSUL_TOKEN_COUNT: '100', - } - ); - }, - function performTest(service) { - return service.findByPolicy({ - id: policy, - dc, - ns: nspace || undefinedNspace, - partition: partition || undefinedPartition, - }); - }, - function performAssertion(actual, expected) { - assert.deepEqual( - actual, - expected(function(payload) { - return payload.map(function(item) { - return Object.assign({}, item, { - Datacenter: dc, - CreateTime: new Date(item.CreateTime), - Namespace: item.Namespace || undefinedNspace, - Partition: item.Partition || undefinedPartition, - uid: `["${item.Partition || undefinedPartition}","${item.Namespace || - undefinedNspace}","${dc}","${item.AccessorID}"]`, - Policies: createPolicies(item), + }, + function performTest(service) { + return service.findAllByDatacenter({ + dc, + ns: nspace || undefinedNspace, + partition: partition || undefinedPartition, + }); + }, + function performAssertion(actual, expected) { + assert.deepEqual( + actual, + expected(function(payload) { + return payload.map(function(item) { + return Object.assign({}, item, { + Datacenter: dc, + CreateTime: new Date(item.CreateTime), + Namespace: item.Namespace || undefinedNspace, + Partition: item.Partition || undefinedPartition, + uid: `["${item.Partition || undefinedPartition}","${item.Namespace || + undefinedNspace}","${dc}","${item.AccessorID}"]`, + Policies: createPolicies(item), + }); }); - }); - }) - ); - } - ); - }); - test(`findByRole returns the correct data when nspace is ${nspace}`, function(assert) { - const role = 'role-1'; - return repo( - 'Token', - 'findByPolicy', - this.subject(), - function retrieveStub(stub) { - return stub( - `/v1/acl/tokens?dc=${dc}&role=${role}${ - typeof nspace !== 'undefined' ? `&ns=${nspace}` : `` - }${typeof partition !== 'undefined' ? `&partition=${partition}` : ``}`, - { - CONSUL_TOKEN_COUNT: '100', - } - ); - }, - function performTest(service) { - return service.findByRole({ - id: role, - dc, - ns: nspace || undefinedNspace, - partition: partition || undefinedPartition, - }); - }, - function performAssertion(actual, expected) { - assert.deepEqual( - actual, - expected(function(payload) { - return payload.map(function(item) { - return Object.assign({}, item, { - Datacenter: dc, - CreateTime: new Date(item.CreateTime), - Namespace: item.Namespace || undefinedNspace, - Partition: item.Partition || undefinedPartition, - uid: `["${item.Partition || undefinedPartition}","${item.Namespace || - undefinedNspace}","${dc}","${item.AccessorID}"]`, - Policies: createPolicies(item), + }) + ); + } + ); + }); + test(`findBySlug returns the correct data for item endpoint when nspace is ${nspace}`, function(assert) { + const subject = this.owner.lookup('service:repository/token'); + return repo( + 'Token', + 'findBySlug', + subject, + function retrieveStub(stub) { + return stub( + `/v1/acl/token/${id}?dc=${dc}${typeof nspace !== 'undefined' ? `&ns=${nspace}` : ``}${ + typeof partition !== 'undefined' ? 
`&partition=${partition}` : `` + }` + ); + }, + function performTest(service) { + return service.findBySlug({ + id, + dc, + ns: nspace || undefinedNspace, + partition: partition || undefinedPartition, + }); + }, + function performAssertion(actual, expected) { + expected(function(item) { + assert.equal( + actual.uid, + `["${partition || undefinedPartition}","${nspace || undefinedNspace}","${dc}","${ + item.AccessorID + }"]` + ); + assert.equal(actual.Datacenter, dc); + assert.deepEqual(actual.Policies, createPolicies(item)); + }); + } + ); + }); + test(`findByPolicy returns the correct data when nspace is ${nspace}`, function(assert) { + const subject = this.owner.lookup('service:repository/token'); + const policy = 'policy-1'; + return repo( + 'Token', + 'findByPolicy', + subject, + function retrieveStub(stub) { + return stub( + `/v1/acl/tokens?dc=${dc}&policy=${policy}${ + typeof nspace !== 'undefined' ? `&ns=${nspace}` : `` + }${typeof partition !== 'undefined' ? `&partition=${partition}` : ``}`, + { + CONSUL_TOKEN_COUNT: '100', + } + ); + }, + function performTest(service) { + return service.findByPolicy({ + id: policy, + dc, + ns: nspace || undefinedNspace, + partition: partition || undefinedPartition, + }); + }, + function performAssertion(actual, expected) { + assert.deepEqual( + actual, + expected(function(payload) { + return payload.map(function(item) { + return Object.assign({}, item, { + Datacenter: dc, + CreateTime: new Date(item.CreateTime), + Namespace: item.Namespace || undefinedNspace, + Partition: item.Partition || undefinedPartition, + uid: `["${item.Partition || undefinedPartition}","${item.Namespace || + undefinedNspace}","${dc}","${item.AccessorID}"]`, + Policies: createPolicies(item), + }); }); - }); - }) - ); - } - ); + }) + ); + } + ); + }); + test(`findByRole returns the correct data when nspace is ${nspace}`, function(assert) { + const subject = this.owner.lookup('service:repository/token'); + const role = 'role-1'; + return repo( + 'Token', + 'findByPolicy', + subject, + function retrieveStub(stub) { + return stub( + `/v1/acl/tokens?dc=${dc}&role=${role}${ + typeof nspace !== 'undefined' ? `&ns=${nspace}` : `` + }${typeof partition !== 'undefined' ? 
`&partition=${partition}` : ``}`, + { + CONSUL_TOKEN_COUNT: '100', + } + ); + }, + function performTest(service) { + return service.findByRole({ + id: role, + dc, + ns: nspace || undefinedNspace, + partition: partition || undefinedPartition, + }); + }, + function performAssertion(actual, expected) { + assert.deepEqual( + actual, + expected(function(payload) { + return payload.map(function(item) { + return Object.assign({}, item, { + Datacenter: dc, + CreateTime: new Date(item.CreateTime), + Namespace: item.Namespace || undefinedNspace, + Partition: item.Partition || undefinedPartition, + uid: `["${item.Partition || undefinedPartition}","${item.Namespace || + undefinedNspace}","${dc}","${item.AccessorID}"]`, + Policies: createPolicies(item), + }); + }); + }) + ); + } + ); + }); }); }); diff --git a/ui/packages/consul-ui/tests/integration/services/repository/topology-test.js b/ui/packages/consul-ui/tests/integration/services/repository/topology-test.js index 928da30fc..e42eb5c51 100644 --- a/ui/packages/consul-ui/tests/integration/services/repository/topology-test.js +++ b/ui/packages/consul-ui/tests/integration/services/repository/topology-test.js @@ -1,43 +1,43 @@ -import { moduleFor, test } from 'ember-qunit'; +import { module, test } from 'qunit'; +import { setupTest } from 'ember-qunit'; import repo from 'consul-ui/tests/helpers/repo'; -moduleFor('service:repository/topology', 'Integration | Service | topology', { - // Specify the other units that are required for this test. - integration: true, -}); -const dc = 'dc-1'; -const id = 'slug'; -const kind = ''; -test('findBySlug returns the correct data for item endpoint', function(assert) { - return repo( - 'Service', - 'findBySlug', - this.subject(), - function retrieveStub(stub) { - return stub(`/v1/internal/ui/service-topology/${id}?dc=${dc}&${kind}`, { - CONSUL_DISCOVERY_CHAIN_COUNT: 1, - }); - }, - function performTest(service) { - return service.findBySlug({ id, kind, dc }); - }, - function performAssertion(actual, expected) { - const result = expected(function(payload) { - return Object.assign( - {}, - { - Datacenter: dc, - uid: `["default","default","${dc}","${id}"]`, - meta: { - cacheControl: undefined, - cursor: undefined, +module('Integration | Service | topology', function(hooks) { + setupTest(hooks); + const dc = 'dc-1'; + const id = 'slug'; + const kind = ''; + test('findBySlug returns the correct data for item endpoint', function(assert) { + return repo( + 'Service', + 'findBySlug', + this.owner.lookup('service:repository/topology'), + function retrieveStub(stub) { + return stub(`/v1/internal/ui/service-topology/${id}?dc=${dc}&${kind}`, { + CONSUL_DISCOVERY_CHAIN_COUNT: 1, + }); + }, + function performTest(service) { + return service.findBySlug({ id, kind, dc }); + }, + function performAssertion(actual, expected) { + const result = expected(function(payload) { + return Object.assign( + {}, + { + Datacenter: dc, + uid: `["default","default","${dc}","${id}"]`, + meta: { + cacheControl: undefined, + cursor: undefined, + }, }, - }, - payload - ); - }); - assert.equal(actual.Datacenter, result.Datacenter); - assert.equal(actual.uid, result.uid); - } - ); + payload + ); + }); + assert.equal(actual.Datacenter, result.Datacenter); + assert.equal(actual.uid, result.uid); + } + ); + }); }); diff --git a/ui/packages/consul-ui/tests/integration/services/routlet-test.js b/ui/packages/consul-ui/tests/integration/services/routlet-test.js index c0faa5f40..0fd40846e 100644 --- 
a/ui/packages/consul-ui/tests/integration/services/routlet-test.js +++ b/ui/packages/consul-ui/tests/integration/services/routlet-test.js @@ -1,32 +1,33 @@ -import { moduleFor, test } from 'ember-qunit'; -moduleFor('service:routlet', 'Integration | Routlet', { - // Specify the other units that are required for this test. - integration: true, -}); -test('outletFor works', function(assert) { - const routlet = this.subject(); - routlet.addOutlet('application', { - name: 'application' - }); - routlet.addRoute('dc', {}); - routlet.addOutlet('dc', { - name: 'dc' - }); - routlet.addRoute('dc.services', {}); - routlet.addOutlet('dc.services', { - name: 'dc.services' - }); - routlet.addRoute('dc.services.instances', {}); - - let actual = routlet.outletFor('dc.services'); - let expected = 'dc'; - assert.equal(actual.name, expected); - - actual = routlet.outletFor('dc'); - expected = 'application'; - assert.equal(actual.name, expected); - - actual = routlet.outletFor('application'); - expected = undefined; - assert.equal(actual, expected); +import { module, test } from 'qunit'; +import { setupTest } from 'ember-qunit'; + +module('Integration | Routlet', function(hooks) { + setupTest(hooks); + test('outletFor works', function(assert) { + const routlet = this.owner.lookup('service:routlet'); + routlet.addOutlet('application', { + name: 'application', + }); + routlet.addRoute('dc', {}); + routlet.addOutlet('dc', { + name: 'dc', + }); + routlet.addRoute('dc.services', {}); + routlet.addOutlet('dc.services', { + name: 'dc.services', + }); + routlet.addRoute('dc.services.instances', {}); + + let actual = routlet.outletFor('dc.services'); + let expected = 'dc'; + assert.equal(actual.name, expected); + + actual = routlet.outletFor('dc'); + expected = 'application'; + assert.equal(actual.name, expected); + + actual = routlet.outletFor('application'); + expected = undefined; + assert.equal(actual, expected); + }); }); diff --git a/ui/packages/consul-ui/tests/integration/utils/dom/event-source/callable-test.js b/ui/packages/consul-ui/tests/integration/utils/dom/event-source/callable-test.js index 71d322388..527b1c735 100644 --- a/ui/packages/consul-ui/tests/integration/utils/dom/event-source/callable-test.js +++ b/ui/packages/consul-ui/tests/integration/utils/dom/event-source/callable-test.js @@ -1,16 +1,16 @@ import domEventSourceCallable from 'consul-ui/utils/dom/event-source/callable'; import EventTarget from 'consul-ui/utils/dom/event-target/rsvp'; -import { module, skip } from 'qunit'; +import { module, test, skip } from 'qunit'; import { setupTest } from 'ember-qunit'; -import test from 'ember-sinon-qunit/test-support/test'; +import sinon from 'sinon'; module('Integration | Utility | dom/event-source/callable', function(hooks) { setupTest(hooks); test('it dispatches messages', function(assert) { assert.expect(1); const EventSource = domEventSourceCallable(EventTarget); - const listener = this.stub(); + const listener = sinon.stub(); const source = new EventSource( function(configuration) { return new Promise(resolve => { @@ -45,7 +45,7 @@ module('Integration | Utility | dom/event-source/callable', function(hooks) { skip('it dispatches a single open event and closes when called with no callable', function(assert) { assert.expect(4); const EventSource = domEventSourceCallable(EventTarget, Promise); - const listener = this.stub(); + const listener = sinon.stub(); const source = new EventSource(); source.addEventListener('open', function(e) { assert.deepEqual(e.target, this); @@ -60,7 +60,7 @@ 
module('Integration | Utility | dom/event-source/callable', function(hooks) { test('it dispatches a single open event, and calls the specified callable that can dispatch an event', function(assert) { assert.expect(1); const EventSource = domEventSourceCallable(EventTarget); - const listener = this.stub(); + const listener = sinon.stub(); const source = new EventSource(function() { return new Promise(resolve => { setTimeout(() => { @@ -87,7 +87,7 @@ module('Integration | Utility | dom/event-source/callable', function(hooks) { test("it can be closed before the first tick, and therefore doesn't run", function(assert) { assert.expect(4); const EventSource = domEventSourceCallable(EventTarget); - const listener = this.stub(); + const listener = sinon.stub(); const source = new EventSource(); assert.equal(source.readyState, 0); source.close(); diff --git a/ui/packages/consul-ui/tests/steps/interactions/form.js b/ui/packages/consul-ui/tests/steps/interactions/form.js index 5ac5cc1f0..122aa2488 100644 --- a/ui/packages/consul-ui/tests/steps/interactions/form.js +++ b/ui/packages/consul-ui/tests/steps/interactions/form.js @@ -1,6 +1,6 @@ export default function(scenario, find, fillIn, triggerKeyEvent, currentPage) { const dont = `( don't| shouldn't| can't)?`; - const fillInElement = function(page, name, value) { + const fillInElement = async function(page, name, value) { const cm = document.querySelector(`textarea[name="${name}"] + .CodeMirror`); if (cm) { if (!cm.CodeMirror.options.readOnly) { @@ -11,7 +11,7 @@ export default function(scenario, find, fillIn, triggerKeyEvent, currentPage) { return page; } else { const $el = document.querySelector(`[name="${name}"]`); - fillIn($el, value); + await fillIn($el, value); return page; } }; @@ -57,11 +57,13 @@ export default function(scenario, find, fillIn, triggerKeyEvent, currentPage) { } catch (e) { obj = currentPage(); } - const res = Object.keys(data).reduce(function(prev, item, i, arr) { + const res = Object.keys(data).reduce(async function(prev, item, i, arr) { + await prev; + const name = `${obj.prefix || property}[${item}]`; if (negative) { try { - fillInElement(prev, name, data[item]); + await fillInElement(obj, name, data[item]); throw new TypeError(`${item} is editable`); } catch (e) { if (e instanceof TypeError) { @@ -69,10 +71,9 @@ export default function(scenario, find, fillIn, triggerKeyEvent, currentPage) { } } } else { - return fillInElement(prev, name, data[item]); + return await fillInElement(obj, name, data[item]); } - }, obj); - await new Promise(resolve => setTimeout(resolve, 0)); + }, Promise.resolve()); return res; } ) diff --git a/ui/packages/consul-ui/tests/test-helper.js b/ui/packages/consul-ui/tests/test-helper.js index 58bc7a64a..2d638ea15 100644 --- a/ui/packages/consul-ui/tests/test-helper.js +++ b/ui/packages/consul-ui/tests/test-helper.js @@ -1,9 +1,12 @@ import Application from '../app'; import config from '../config/environment'; +import * as QUnit from 'qunit'; import { setApplication } from '@ember/test-helpers'; +import { setup } from 'qunit-dom'; import { registerWaiter } from '@ember/test'; import './helpers/flash-message'; import start from 'ember-exam/test-support/start'; +import setupSinon from 'ember-sinon-qunit'; import ClientConnections from 'consul-ui/services/client/connections'; @@ -33,6 +36,10 @@ ClientConnections.reopen({ }); const application = Application.create(config.APP); application.inject('component:copy-button', 'clipboard', 'service:clipboard/local-storage'); + setApplication(application); 
+setup(QUnit.assert); +setupSinon(); + start(); diff --git a/ui/packages/consul-ui/tests/unit/adapters/application-test.js b/ui/packages/consul-ui/tests/unit/adapters/application-test.js index 4c6d52515..b8dc15e8c 100644 --- a/ui/packages/consul-ui/tests/unit/adapters/application-test.js +++ b/ui/packages/consul-ui/tests/unit/adapters/application-test.js @@ -1,5 +1,4 @@ -import { module } from 'qunit'; -import test from 'ember-sinon-qunit/test-support/test'; +import { module, test } from 'qunit'; import { setupTest } from 'ember-qunit'; module('Unit | Adapter | application', function(hooks) { diff --git a/ui/packages/consul-ui/tests/unit/mixins/with-blocking-actions-test.js b/ui/packages/consul-ui/tests/unit/mixins/with-blocking-actions-test.js index 436a44dbe..6f72a7887 100644 --- a/ui/packages/consul-ui/tests/unit/mixins/with-blocking-actions-test.js +++ b/ui/packages/consul-ui/tests/unit/mixins/with-blocking-actions-test.js @@ -1,8 +1,8 @@ -import { module, skip } from 'qunit'; +import { module, test, skip } from 'qunit'; import { setupTest } from 'ember-qunit'; -import test from 'ember-sinon-qunit/test-support/test'; import Route from '@ember/routing/route'; import Mixin from 'consul-ui/mixins/with-blocking-actions'; +import sinon from 'sinon'; module('Unit | Mixin | with blocking actions', function(hooks) { setupTest(hooks); @@ -24,7 +24,7 @@ module('Unit | Mixin | with blocking actions', function(hooks) { test('afterCreate just calls afterUpdate', function(assert) { const subject = this.subject(); const expected = [1, 2, 3, 4]; - const afterUpdate = this.stub(subject, 'afterUpdate').returns(expected); + const afterUpdate = sinon.stub(subject, 'afterUpdate').returns(expected); const actual = subject.afterCreate(expected); assert.deepEqual(actual, expected); assert.ok(afterUpdate.calledOnce); @@ -33,7 +33,7 @@ module('Unit | Mixin | with blocking actions', function(hooks) { const subject = this.subject(); const expected = 'dc.kv'; subject.routeName = expected + '.edit'; - const transitionTo = this.stub(subject, 'transitionTo').returnsArg(0); + const transitionTo = sinon.stub(subject, 'transitionTo').returnsArg(0); const actual = subject.afterUpdate(); assert.equal(actual, expected); assert.ok(transitionTo.calledOnce); @@ -42,7 +42,7 @@ module('Unit | Mixin | with blocking actions', function(hooks) { const subject = this.subject(); const expected = 'dc.kv'; subject.routeName = expected + '.edit'; - const transitionTo = this.stub(subject, 'transitionTo').returnsArg(0); + const transitionTo = sinon.stub(subject, 'transitionTo').returnsArg(0); const actual = subject.afterDelete(); assert.equal(actual, expected); assert.ok(transitionTo.calledOnce); @@ -51,7 +51,7 @@ module('Unit | Mixin | with blocking actions', function(hooks) { const subject = this.subject(); subject.routeName = 'dc.kv.index'; const expected = 'refresh'; - const refresh = this.stub(subject, 'refresh').returns(expected); + const refresh = sinon.stub(subject, 'refresh').returns(expected); const actual = subject.afterDelete(); assert.equal(actual, expected); assert.ok(refresh.calledOnce); @@ -67,7 +67,7 @@ module('Unit | Mixin | with blocking actions', function(hooks) { test('action cancel just calls afterUpdate', function(assert) { const subject = this.subject(); const expected = [1, 2, 3, 4]; - const afterUpdate = this.stub(subject, 'afterUpdate').returns(expected); + const afterUpdate = sinon.stub(subject, 'afterUpdate').returns(expected); // TODO: unsure as to whether ember testing should actually bind this for you? 
const actual = subject.actions.cancel.bind(subject)(expected); assert.deepEqual(actual, expected); diff --git a/ui/packages/consul-ui/tests/unit/routes/dc-test.js b/ui/packages/consul-ui/tests/unit/routes/dc-test.js index 79bc3ca39..b8cba11f2 100644 --- a/ui/packages/consul-ui/tests/unit/routes/dc-test.js +++ b/ui/packages/consul-ui/tests/unit/routes/dc-test.js @@ -1,6 +1,5 @@ -import { module } from 'qunit'; +import { module, test } from 'qunit'; import { setupTest } from 'ember-qunit'; -import test from 'ember-sinon-qunit/test-support/test'; module('Unit | Route | dc', function(hooks) { setupTest(hooks); diff --git a/ui/packages/consul-ui/tests/unit/serializers/application-test.js b/ui/packages/consul-ui/tests/unit/serializers/application-test.js index d6e69e6fc..19648227d 100644 --- a/ui/packages/consul-ui/tests/unit/serializers/application-test.js +++ b/ui/packages/consul-ui/tests/unit/serializers/application-test.js @@ -1,5 +1,4 @@ -import { module } from 'qunit'; -import test from 'ember-sinon-qunit/test-support/test'; +import { module, test } from 'qunit'; import { setupTest } from 'ember-qunit'; import { HEADERS_SYMBOL as META } from 'consul-ui/utils/http/consul'; diff --git a/ui/packages/consul-ui/tests/unit/serializers/kv-test.js b/ui/packages/consul-ui/tests/unit/serializers/kv-test.js index b1a442172..ed9c00b59 100644 --- a/ui/packages/consul-ui/tests/unit/serializers/kv-test.js +++ b/ui/packages/consul-ui/tests/unit/serializers/kv-test.js @@ -1,8 +1,8 @@ -import { module, skip } from 'qunit'; -import test from 'ember-sinon-qunit/test-support/test'; +import { module, skip, test } from 'qunit'; import { setupTest } from 'ember-qunit'; import { run } from '@ember/runloop'; import { set } from '@ember/object'; +import sinon from 'sinon'; module('Unit | Serializer | kv', function(hooks) { setupTest(hooks); @@ -101,7 +101,7 @@ module('Unit | Serializer | kv', function(hooks) { test('serialize decodes Value if its a string', function(assert) { const serializer = this.owner.lookup('serializer:kv'); set(serializer, 'decoder', { - execute: this.stub().returnsArg(0), + execute: sinon.stub().returnsArg(0), }); // const expected = 'value'; diff --git a/ui/packages/consul-ui/tests/unit/utils/ascend-test.js b/ui/packages/consul-ui/tests/unit/utils/ascend-test.js index 9d8c9ac7c..9ec0c9afb 100644 --- a/ui/packages/consul-ui/tests/unit/utils/ascend-test.js +++ b/ui/packages/consul-ui/tests/unit/utils/ascend-test.js @@ -1,5 +1,4 @@ -import { module } from 'qunit'; -import test from 'ember-sinon-qunit/test-support/test'; +import { module, test } from 'qunit'; import ascend from 'consul-ui/utils/ascend'; module('Unit | Utils | ascend', function() { diff --git a/ui/packages/consul-ui/tests/unit/utils/atob-test.js b/ui/packages/consul-ui/tests/unit/utils/atob-test.js index b84abaec8..10cc764e9 100644 --- a/ui/packages/consul-ui/tests/unit/utils/atob-test.js +++ b/ui/packages/consul-ui/tests/unit/utils/atob-test.js @@ -1,5 +1,4 @@ -import test from 'ember-sinon-qunit/test-support/test'; -import { module, skip } from 'qunit'; +import { module, skip, test } from 'qunit'; import atob from 'consul-ui/utils/atob'; module('Unit | Utils | atob', function() { diff --git a/ui/packages/consul-ui/tests/unit/utils/btoa-test.js b/ui/packages/consul-ui/tests/unit/utils/btoa-test.js index 776a3b57f..1fe10c584 100644 --- a/ui/packages/consul-ui/tests/unit/utils/btoa-test.js +++ b/ui/packages/consul-ui/tests/unit/utils/btoa-test.js @@ -1,5 +1,4 @@ -import { module } from 'qunit'; -import test from 
'ember-sinon-qunit/test-support/test'; +import { module, test } from 'qunit'; import btoa from 'consul-ui/utils/btoa'; module('Unit | Utils | btoa', function() { diff --git a/ui/packages/consul-ui/tests/unit/utils/dom/closest-test.js b/ui/packages/consul-ui/tests/unit/utils/dom/closest-test.js index 5c72e2177..ccbb2bf95 100644 --- a/ui/packages/consul-ui/tests/unit/utils/dom/closest-test.js +++ b/ui/packages/consul-ui/tests/unit/utils/dom/closest-test.js @@ -1,11 +1,11 @@ import domClosest from 'consul-ui/utils/dom/closest'; -import test from 'ember-sinon-qunit/test-support/test'; -import { module, skip } from 'qunit'; +import { module, skip, test } from 'qunit'; +import sinon from 'sinon'; module('Unit | Utility | dom/closest', function() { test('it calls Element.closest with the specified selector', function(assert) { const el = { - closest: this.stub().returnsArg(0), + closest: sinon.stub().returnsArg(0), }; const expected = 'selector'; const actual = domClosest(expected, el); diff --git a/ui/packages/consul-ui/tests/unit/utils/dom/create-listeners-test.js b/ui/packages/consul-ui/tests/unit/utils/dom/create-listeners-test.js index 9cb2391e8..1297ee362 100644 --- a/ui/packages/consul-ui/tests/unit/utils/dom/create-listeners-test.js +++ b/ui/packages/consul-ui/tests/unit/utils/dom/create-listeners-test.js @@ -1,6 +1,6 @@ import createListeners from 'consul-ui/utils/dom/create-listeners'; -import { module } from 'qunit'; -import test from 'ember-sinon-qunit/test-support/test'; +import { module, test } from 'qunit'; +import sinon from 'sinon'; module('Unit | Utility | dom/create listeners', function() { test('it has add and remove methods', function(assert) { @@ -33,7 +33,7 @@ module('Unit | Utility | dom/create listeners', function() { assert.equal(handlers.length, 0); }); test('remove calls the remove functions', function(assert) { - const expected = this.stub(); + const expected = sinon.stub(); const arr = [expected]; const listeners = createListeners(arr); listeners.remove(); @@ -42,7 +42,7 @@ module('Unit | Utility | dom/create listeners', function() { }); test('listeners are added on add', function(assert) { const listeners = createListeners(); - const stub = this.stub(); + const stub = sinon.stub(); const target = { addEventListener: stub, }; @@ -54,8 +54,8 @@ module('Unit | Utility | dom/create listeners', function() { }); test('listeners as objects are added on add and removed on remove', function(assert) { const listeners = createListeners(); - const addStub = this.stub(); - const removeStub = this.stub(); + const addStub = sinon.stub(); + const removeStub = sinon.stub(); const target = { addEventListener: addStub, removeEventListener: removeStub, @@ -77,7 +77,7 @@ module('Unit | Utility | dom/create listeners', function() { }); test('listeners are removed on remove', function(assert) { const listeners = createListeners(); - const stub = this.stub(); + const stub = sinon.stub(); const target = { addEventListener: function() {}, removeEventListener: stub, @@ -91,7 +91,7 @@ module('Unit | Utility | dom/create listeners', function() { }); test('listeners as functions are removed on remove', function(assert) { const listeners = createListeners(); - const stub = this.stub(); + const stub = sinon.stub(); const remove = listeners.add(stub); remove(); assert.ok(stub.calledOnce); @@ -99,7 +99,7 @@ module('Unit | Utility | dom/create listeners', function() { test('listeners as other listeners are removed on remove', function(assert) { const listeners = createListeners(); const listeners2 = 
createListeners(); - const stub = this.stub(); + const stub = sinon.stub(); listeners2.add(stub); const remove = listeners.add(listeners2); remove(); @@ -108,7 +108,7 @@ module('Unit | Utility | dom/create listeners', function() { test('listeners as functions of other listeners are removed on remove', function(assert) { const listeners = createListeners(); const listeners2 = createListeners(); - const stub = this.stub(); + const stub = sinon.stub(); const remove = listeners.add(listeners2.add(stub)); remove(); assert.ok(stub.calledOnce); @@ -120,7 +120,7 @@ module('Unit | Utility | dom/create listeners', function() { removeEventListener: function() {}, }; const name = 'test'; - const expected = this.stub(); + const expected = sinon.stub(); const remove = listeners.add(target, name, expected); const actual = remove(); actual[0](); diff --git a/ui/packages/consul-ui/tests/unit/utils/dom/event-source/blocking-test.js b/ui/packages/consul-ui/tests/unit/utils/dom/event-source/blocking-test.js index 49d6dac98..e81cf89b8 100644 --- a/ui/packages/consul-ui/tests/unit/utils/dom/event-source/blocking-test.js +++ b/ui/packages/consul-ui/tests/unit/utils/dom/event-source/blocking-test.js @@ -2,8 +2,8 @@ import domEventSourceBlocking, { validateCursor, createErrorBackoff, } from 'consul-ui/utils/dom/event-source/blocking'; -import { module } from 'qunit'; -import test from 'ember-sinon-qunit/test-support/test'; +import { module, test } from 'qunit'; +import sinon from 'sinon'; module('Unit | Utility | dom/event-source/blocking', function() { const createEventSource = function() { @@ -74,8 +74,8 @@ module('Unit | Utility | dom/event-source/blocking', function() { { errors: [{ status: '504' }] }, { errors: [{ status: '524' }] }, ].forEach(item => { - const timeout = this.stub().callsArg(0); - const resolve = this.stub().withArgs(item); + const timeout = sinon.stub().callsArg(0); + const resolve = sinon.stub().withArgs(item); const Promise = createPromise(resolve); const backoff = createErrorBackoff(undefined, Promise, timeout); const promise = backoff(item); diff --git a/ui/packages/consul-ui/tests/unit/utils/dom/event-source/cache-test.js b/ui/packages/consul-ui/tests/unit/utils/dom/event-source/cache-test.js index 2a82d59ce..8561479ea 100644 --- a/ui/packages/consul-ui/tests/unit/utils/dom/event-source/cache-test.js +++ b/ui/packages/consul-ui/tests/unit/utils/dom/event-source/cache-test.js @@ -1,6 +1,6 @@ import domEventSourceCache from 'consul-ui/utils/dom/event-source/cache'; -import { module } from 'qunit'; -import test from 'ember-sinon-qunit/test-support/test'; +import { module, test } from 'qunit'; +import sinon from 'sinon'; module('Unit | Utility | dom/event-source/cache', function() { const createEventSource = function() { @@ -66,8 +66,8 @@ module('Unit | Utility | dom/event-source/cache', function() { const Promise = createPromise(function() { return stub; }); - const source = this.stub().returns(Promise.resolve()); - const cb = this.stub(); + const source = sinon.stub().returns(Promise.resolve()); + const cb = sinon.stub(); const getCache = domEventSourceCache(source, EventSource, Promise); const obj = {}; const cache = getCache(obj); @@ -92,14 +92,14 @@ module('Unit | Utility | dom/event-source/cache', function() { test('cache creates the default EventSource and keeps it open when there is a cursor', function(assert) { const EventSource = createEventSource(); const stub = { - close: this.stub(), + close: sinon.stub(), configuration: { cursor: 1 }, }; const Promise = 
createPromise(function() { return stub; }); - const source = this.stub().returns(Promise.resolve()); - const cb = this.stub(); + const source = sinon.stub().returns(Promise.resolve()); + const cb = sinon.stub(); const getCache = domEventSourceCache(source, EventSource, Promise); const obj = {}; const cache = getCache(obj); @@ -120,14 +120,14 @@ module('Unit | Utility | dom/event-source/cache', function() { test("cache creates the default EventSource and closes it when there isn't a cursor", function(assert) { const EventSource = createEventSource(); const stub = { - close: this.stub(), + close: sinon.stub(), configuration: {}, }; const Promise = createPromise(function() { return stub; }); - const source = this.stub().returns(Promise.resolve()); - const cb = this.stub(); + const source = sinon.stub().returns(Promise.resolve()); + const cb = sinon.stub(); const getCache = domEventSourceCache(source, EventSource, Promise); const obj = {}; const cache = getCache(obj); diff --git a/ui/packages/consul-ui/tests/unit/utils/dom/event-source/callable-test.js b/ui/packages/consul-ui/tests/unit/utils/dom/event-source/callable-test.js index 823af58f5..a4b2f9cb1 100644 --- a/ui/packages/consul-ui/tests/unit/utils/dom/event-source/callable-test.js +++ b/ui/packages/consul-ui/tests/unit/utils/dom/event-source/callable-test.js @@ -1,6 +1,6 @@ import domEventSourceCallable, { defaultRunner } from 'consul-ui/utils/dom/event-source/callable'; -import { module } from 'qunit'; -import test from 'ember-sinon-qunit/test-support/test'; +import { module, test } from 'qunit'; +import sinon from 'sinon'; module('Unit | Utility | dom/event-source/callable', function() { const createEventTarget = function() { @@ -43,14 +43,14 @@ module('Unit | Utility | dom/event-source/callable', function() { return count === 11; }; const configuration = {}; - const then = this.stub().callsArg(0); + const then = sinon.stub().callsArg(0); const target = { source: function(configuration) { return { then: then, }; }, - dispatchEvent: this.stub(), + dispatchEvent: sinon.stub(), }; defaultRunner(target, configuration, isClosed); assert.ok(then.callCount == 10); @@ -59,7 +59,7 @@ module('Unit | Utility | dom/event-source/callable', function() { test('it calls the defaultRunner', function(assert) { const Promise = createPromise(); const EventTarget = createEventTarget(); - const run = this.stub(); + const run = sinon.stub(); const EventSource = domEventSourceCallable(EventTarget, Promise, run); const source = new EventSource(); assert.ok(run.calledOnce); diff --git a/ui/packages/consul-ui/tests/unit/utils/dom/event-source/openable-test.js b/ui/packages/consul-ui/tests/unit/utils/dom/event-source/openable-test.js index 9ab9dc3d5..16fabfa3d 100644 --- a/ui/packages/consul-ui/tests/unit/utils/dom/event-source/openable-test.js +++ b/ui/packages/consul-ui/tests/unit/utils/dom/event-source/openable-test.js @@ -1,6 +1,6 @@ import domEventSourceOpenable from 'consul-ui/utils/dom/event-source/openable'; -import { module } from 'qunit'; -import test from 'ember-sinon-qunit/test-support/test'; +import { module, test } from 'qunit'; +import sinon from 'sinon'; module('Unit | Utility | dom/event-source/openable', function() { const createEventSource = function() { @@ -23,7 +23,7 @@ module('Unit | Utility | dom/event-source/openable', function() { assert.ok(source instanceof EventSource); }); test('it reopens the event source when open is called', function(assert) { - const callable = this.stub(); + const callable = sinon.stub(); const EventSource = 
createEventSource(); const OpenableEventSource = domEventSourceOpenable(EventSource); const source = new OpenableEventSource(callable); diff --git a/ui/packages/consul-ui/tests/unit/utils/http/create-url-test.js b/ui/packages/consul-ui/tests/unit/utils/http/create-url-test.js index e996bb346..c9ba8eb5f 100644 --- a/ui/packages/consul-ui/tests/unit/utils/http/create-url-test.js +++ b/ui/packages/consul-ui/tests/unit/utils/http/create-url-test.js @@ -1,5 +1,4 @@ -import { module, skip } from 'qunit'; -import test from 'ember-sinon-qunit/test-support/test'; +import { module, skip, test } from 'qunit'; import createURL from 'consul-ui/utils/http/create-url'; import createQueryParams from 'consul-ui/utils/http/create-query-params'; diff --git a/ui/packages/consul-ui/tests/unit/utils/isFolder-test.js b/ui/packages/consul-ui/tests/unit/utils/isFolder-test.js index ab536257a..a9fab4584 100644 --- a/ui/packages/consul-ui/tests/unit/utils/isFolder-test.js +++ b/ui/packages/consul-ui/tests/unit/utils/isFolder-test.js @@ -1,5 +1,4 @@ -import { module } from 'qunit'; -import test from 'ember-sinon-qunit/test-support/test'; +import { module, test } from 'qunit'; import isFolder from 'consul-ui/utils/isFolder'; module('Unit | Utils | isFolder', function() { diff --git a/ui/packages/consul-ui/tests/unit/utils/keyToArray-test.js b/ui/packages/consul-ui/tests/unit/utils/keyToArray-test.js index 5cd33c322..120939cb4 100644 --- a/ui/packages/consul-ui/tests/unit/utils/keyToArray-test.js +++ b/ui/packages/consul-ui/tests/unit/utils/keyToArray-test.js @@ -1,5 +1,4 @@ -import { module } from 'qunit'; -import test from 'ember-sinon-qunit/test-support/test'; +import { module, test } from 'qunit'; import keyToArray from 'consul-ui/utils/keyToArray'; module('Unit | Utils | keyToArray', function() { diff --git a/ui/packages/consul-ui/tests/unit/utils/left-trim-test.js b/ui/packages/consul-ui/tests/unit/utils/left-trim-test.js index 3f0f1c31d..9bdefcdf2 100644 --- a/ui/packages/consul-ui/tests/unit/utils/left-trim-test.js +++ b/ui/packages/consul-ui/tests/unit/utils/left-trim-test.js @@ -1,5 +1,4 @@ -import { module } from 'qunit'; -import test from 'ember-sinon-qunit/test-support/test'; +import { module, test } from 'qunit'; import leftTrim from 'consul-ui/utils/left-trim'; module('Unit | Utility | left trim', function() { diff --git a/ui/packages/consul-ui/tests/unit/utils/promisedTimeout-test.js b/ui/packages/consul-ui/tests/unit/utils/promisedTimeout-test.js index 7719f6c08..b080f5119 100644 --- a/ui/packages/consul-ui/tests/unit/utils/promisedTimeout-test.js +++ b/ui/packages/consul-ui/tests/unit/utils/promisedTimeout-test.js @@ -1,5 +1,4 @@ -import { module, skip } from 'qunit'; -import test from 'ember-sinon-qunit/test-support/test'; +import { module, skip, test } from 'qunit'; import promisedTimeout from 'consul-ui/utils/promisedTimeout'; module('Unit | Utils | promisedTimeout', function() { diff --git a/ui/packages/consul-ui/tests/unit/utils/right-trim-test.js b/ui/packages/consul-ui/tests/unit/utils/right-trim-test.js index 6898de8e9..ac988322d 100644 --- a/ui/packages/consul-ui/tests/unit/utils/right-trim-test.js +++ b/ui/packages/consul-ui/tests/unit/utils/right-trim-test.js @@ -1,5 +1,4 @@ -import { module } from 'qunit'; -import test from 'ember-sinon-qunit/test-support/test'; +import { module, test } from 'qunit'; import rightTrim from 'consul-ui/utils/right-trim'; module('Unit | Utility | right trim', function() { diff --git a/ui/packages/consul-ui/tests/unit/utils/routing/walk-test.js 
b/ui/packages/consul-ui/tests/unit/utils/routing/walk-test.js index 9f520e57c..01eb1d487 100644 --- a/ui/packages/consul-ui/tests/unit/utils/routing/walk-test.js +++ b/ui/packages/consul-ui/tests/unit/utils/routing/walk-test.js @@ -1,10 +1,10 @@ import { walk } from 'consul-ui/utils/routing/walk'; -import { module } from 'qunit'; -import test from 'ember-sinon-qunit/test-support/test'; +import { module, test } from 'qunit'; +import sinon from 'sinon'; module('Unit | Utility | routing/walk', function() { test('it walks down deep routes', function(assert) { - const route = this.stub(); + const route = sinon.stub(); const Router = { route: function(name, options, cb) { route(); diff --git a/ui/packages/consul-ui/tests/unit/utils/ucfirst-test.js b/ui/packages/consul-ui/tests/unit/utils/ucfirst-test.js index b911ac04f..3c45b51d6 100644 --- a/ui/packages/consul-ui/tests/unit/utils/ucfirst-test.js +++ b/ui/packages/consul-ui/tests/unit/utils/ucfirst-test.js @@ -1,5 +1,4 @@ -import { module } from 'qunit'; -import test from 'ember-sinon-qunit/test-support/test'; +import { module, test } from 'qunit'; import ucfirst from 'consul-ui/utils/ucfirst'; module('Unit | Utils | ucfirst', function() { diff --git a/ui/yarn.lock b/ui/yarn.lock index 1f3f24f35..76bbc42af 100644 --- a/ui/yarn.lock +++ b/ui/yarn.lock @@ -1948,17 +1948,29 @@ ember-compatibility-helpers "^1.2.5" ember-modifier-manager-polyfill "^1.2.0" -"@ember/test-helpers@^1.7.1": - version "1.7.2" - resolved "https://registry.yarnpkg.com/@ember/test-helpers/-/test-helpers-1.7.2.tgz#5b128dc5f6524c3850abf52668e6bd4fda401194" - integrity sha512-FEJBpbFNIaWAsCSnataiNwYFvmcpoymL/B7fXLruuJ/46BnJjzLaRPtpUIZ91w4GNTK6knxbHWXW76aVb3Aezg== +"@ember/test-helpers@^2.1.4": + version "2.8.1" + resolved "https://registry.yarnpkg.com/@ember/test-helpers/-/test-helpers-2.8.1.tgz#20f2e30d48172c2ff713e1db7fbec5352f918d4e" + integrity sha512-jbsYwWyAdhL/pdPu7Gb3SG1gvIXY70FWMtC/Us0Kmvk82Y+5YUQ1SOC0io75qmOGYQmH7eQrd/bquEVd+4XtdQ== dependencies: + "@ember/test-waiters" "^3.0.0" + "@embroider/macros" "^1.6.0" + "@embroider/util" "^1.6.0" broccoli-debug "^0.6.5" - broccoli-funnel "^2.0.2" - ember-assign-polyfill "^2.6.0" - ember-cli-babel "^7.7.3" - ember-cli-htmlbars-inline-precompile "^2.1.0" - ember-test-waiters "^1.1.1" + broccoli-funnel "^3.0.8" + ember-cli-babel "^7.26.6" + ember-cli-htmlbars "^5.7.1" + ember-destroyable-polyfill "^2.0.3" + +"@ember/test-waiters@^3.0.0": + version "3.0.2" + resolved "https://registry.yarnpkg.com/@ember/test-waiters/-/test-waiters-3.0.2.tgz#5b950c580a1891ed1d4ee64f9c6bacf49a15ea6f" + integrity sha512-H8Q3Xy9rlqhDKnQpwt2pzAYDouww4TZIGSI1pZJhM7mQIGufQKuB0ijzn/yugA6Z+bNdjYp1HioP8Y4hn2zazQ== + dependencies: + calculate-cache-key-for-tree "^2.0.0" + ember-cli-babel "^7.26.6" + ember-cli-version-checker "^5.1.2" + semver "^7.3.5" "@embroider/core@0.33.0", "@embroider/core@^0.33.0": version "0.33.0" @@ -2039,6 +2051,20 @@ resolve "^1.20.0" semver "^7.3.2" +"@embroider/macros@1.8.3", "@embroider/macros@^1.6.0": + version "1.8.3" + resolved "https://registry.yarnpkg.com/@embroider/macros/-/macros-1.8.3.tgz#2f0961ab8871f6ad819630208031d705b357757e" + integrity sha512-gnIOfTL/pUkoD6oI7JyWOqXlVIUgZM+CnbH10/YNtZr2K0hij9eZQMdgjOZZVgN0rKOFw9dIREqc1ygrJHRYQA== + dependencies: + "@embroider/shared-internals" "1.8.3" + assert-never "^1.2.1" + babel-import-util "^1.1.0" + ember-cli-babel "^7.26.6" + find-up "^5.0.0" + lodash "^4.17.21" + resolve "^1.20.0" + semver "^7.3.2" + "@embroider/shared-internals@0.41.0": version "0.41.0" resolved 
"https://registry.yarnpkg.com/@embroider/shared-internals/-/shared-internals-0.41.0.tgz#2553f026d4f48ea1fd11235501feb63bf49fa306" @@ -2065,6 +2091,20 @@ semver "^7.3.5" typescript-memoize "^1.0.1" +"@embroider/shared-internals@1.8.3", "@embroider/shared-internals@^1.0.0": + version "1.8.3" + resolved "https://registry.yarnpkg.com/@embroider/shared-internals/-/shared-internals-1.8.3.tgz#52d868dc80016e9fe983552c0e516f437bf9b9f9" + integrity sha512-N5Gho6Qk8z5u+mxLCcMYAoQMbN4MmH+z2jXwQHVs859bxuZTxwF6kKtsybDAASCtd2YGxEmzcc1Ja/wM28824w== + dependencies: + babel-import-util "^1.1.0" + ember-rfc176-data "^0.3.17" + fs-extra "^9.1.0" + js-string-escape "^1.0.1" + lodash "^4.17.21" + resolve-package-path "^4.0.1" + semver "^7.3.5" + typescript-memoize "^1.0.1" + "@embroider/util@^0.39.1 || ^0.40.0 || ^0.41.0": version "0.41.0" resolved "https://registry.yarnpkg.com/@embroider/util/-/util-0.41.0.tgz#5324cb4742aa4ed8d613c4f88a466f73e4e6acc1" @@ -2083,6 +2123,15 @@ broccoli-funnel "^3.0.5" ember-cli-babel "^7.23.1" +"@embroider/util@^1.6.0": + version "1.8.3" + resolved "https://registry.yarnpkg.com/@embroider/util/-/util-1.8.3.tgz#7267a2b6fcbf3e56712711441159ab373f9bee7a" + integrity sha512-FvsPzsb9rNeveSnIGnsfLkWWBdSM5QIA9lDVtckUktRnRnBWZHm5jDxU/ST//pWMhZ8F0DucRlFWE149MTLtuQ== + dependencies: + "@embroider/macros" "1.8.3" + broccoli-funnel "^3.0.5" + ember-cli-babel "^7.23.1" + "@eslint/eslintrc@^0.4.0": version "0.4.0" resolved "https://registry.yarnpkg.com/@eslint/eslintrc/-/eslintrc-0.4.0.tgz#99cc0a0584d72f1df38b900fb062ba995f395547" @@ -3387,6 +3436,11 @@ babel-import-util@^0.2.0: resolved "https://registry.yarnpkg.com/babel-import-util/-/babel-import-util-0.2.0.tgz#b468bb679919601a3570f9e317536c54f2862e23" integrity sha512-CtWYYHU/MgK88rxMrLfkD356dApswtR/kWZ/c6JifG1m10e7tBBrs/366dFzWMAoqYmG5/JSh+94tUSpIwh+ag== +babel-import-util@^1.1.0: + version "1.2.2" + resolved "https://registry.yarnpkg.com/babel-import-util/-/babel-import-util-1.2.2.tgz#1027560e143a4a68b1758e71d4fadc661614e495" + integrity sha512-8HgkHWt5WawRFukO30TuaL9EiDUOdvyKtDwLma4uBNeUSDbOO0/hiPfavrOWxSS6J6TKXfukWHZ3wiqZhJ8ONQ== + babel-loader@^8.0.6, babel-loader@^8.1.0: version "8.2.2" resolved "https://registry.yarnpkg.com/babel-loader/-/babel-loader-8.2.2.tgz#9363ce84c10c9a40e6c753748e1441b60c8a0b81" @@ -3471,11 +3525,6 @@ babel-plugin-filter-imports@^4.0.0: "@babel/types" "^7.7.2" lodash "^4.17.15" -babel-plugin-htmlbars-inline-precompile@^1.0.0: - version "1.0.0" - resolved "https://registry.yarnpkg.com/babel-plugin-htmlbars-inline-precompile/-/babel-plugin-htmlbars-inline-precompile-1.0.0.tgz#a9d2f6eaad8a3f3d361602de593a8cbef8179c22" - integrity sha512-4jvKEHR1bAX03hBDZ94IXsYCj3bwk9vYsn6ux6JZNL2U5pvzCWjqyrGahfsGNrhERyxw8IqcirOi9Q6WCo3dkQ== - babel-plugin-htmlbars-inline-precompile@^3.2.0: version "3.2.0" resolved "https://registry.yarnpkg.com/babel-plugin-htmlbars-inline-precompile/-/babel-plugin-htmlbars-inline-precompile-3.2.0.tgz#c4882ea875d0f5683f0d91c1f72e29a4f14b5606" @@ -4427,7 +4476,7 @@ broccoli-funnel@^3.0.1, broccoli-funnel@^3.0.2, broccoli-funnel@^3.0.3: path-posix "^1.0.0" walk-sync "^2.0.2" -broccoli-funnel@^3.0.5: +broccoli-funnel@^3.0.5, broccoli-funnel@^3.0.8: version "3.0.8" resolved "https://registry.yarnpkg.com/broccoli-funnel/-/broccoli-funnel-3.0.8.tgz#f5b62e2763c3918026a15a3c833edc889971279b" integrity sha512-ng4eIhPYiXqMw6SyGoxPHR3YAwEd2lr9FgBI1CyTbspl4txZovOsmzFkMkGAlu88xyvYXJqHiM2crfLa65T1BQ== @@ -5444,10 +5493,10 @@ commander@2.8.x: dependencies: graceful-readlink ">= 1.0.0" -commander@7.1.0: - 
version "7.1.0" - resolved "https://registry.yarnpkg.com/commander/-/commander-7.1.0.tgz#f2eaecf131f10e36e07d894698226e36ae0eb5ff" - integrity sha512-pRxBna3MJe6HKnBGsDyMv8ETbptw3axEdYHoqNh7gu5oDcew8fs0xnivZGm06Ogk8zGAJ9VX+OPEr2GXEQK4dg== +commander@7.2.0: + version "7.2.0" + resolved "https://registry.yarnpkg.com/commander/-/commander-7.2.0.tgz#a36cb57d0b501ce108e4d20559a150a391d97ab7" + integrity sha512-QrWXB+ZQSVPmIWIhtEO9H+gwHaMGYiF5ChvoJ+K9ZGHG/sVsa6yiesAD1GC/x46sET00Xlwo1u49RVVVzvcSkw== commander@^2.20.0, commander@^2.6.0: version "2.20.3" @@ -5464,7 +5513,7 @@ commander@^6.2.0: resolved "https://registry.yarnpkg.com/commander/-/commander-6.2.1.tgz#0792eb682dfbc325999bb2b84fddddba110ac73c" integrity sha512-U7VdrJFnJgo4xjrHpTzu0yrHPGImdsmD95ZlgYSEajAn2JKzDhDTPG9kBTefmObL2w/ngeZnilk+OV9CG3d7UA== -common-tags@^1.4.0, common-tags@^1.8.0: +common-tags@^1.8.0: version "1.8.0" resolved "https://registry.yarnpkg.com/common-tags/-/common-tags-1.8.0.tgz#8e3153e542d4a39e9b10554434afaaf98956a937" integrity sha512-6P6g0uetGpW/sdyUy/iQQCbFF0kWVMSIVSyYz7Zgjcgh8mgw8PQzDNZeyZ5DQ2gM7LBoZPHmnjz8rUthkBG5tw== @@ -6292,13 +6341,40 @@ ember-assign-helper@^0.3.0: ember-cli-babel "^7.19.0" ember-cli-htmlbars "^4.3.1" -ember-assign-polyfill@^2.6.0: - version "2.7.2" - resolved "https://registry.yarnpkg.com/ember-assign-polyfill/-/ember-assign-polyfill-2.7.2.tgz#58f6f60235126cb23df248c846008fa9a3245fc1" - integrity sha512-hDSaKIZyFS0WRQsWzxUgO6pJPFfmcpfdM7CbGoMgYGriYbvkKn+k8zTXSKpTFVGehhSmsLE9YPqisQ9QpPisfA== +ember-auto-import@^1.11.3: + version "1.12.2" + resolved "https://registry.yarnpkg.com/ember-auto-import/-/ember-auto-import-1.12.2.tgz#cc7298ee5c0654b0249267de68fb27a2861c3579" + integrity sha512-gLqML2k77AuUiXxWNon1FSzuG1DV7PEPpCLCU5aJvf6fdL6rmFfElsZRh+8ELEB/qP9dT+LHjNEunVzd2dYc8A== dependencies: - ember-cli-babel "^7.20.5" - ember-cli-version-checker "^2.0.0" + "@babel/core" "^7.1.6" + "@babel/preset-env" "^7.10.2" + "@babel/traverse" "^7.1.6" + "@babel/types" "^7.1.6" + "@embroider/shared-internals" "^1.0.0" + babel-core "^6.26.3" + babel-loader "^8.0.6" + babel-plugin-syntax-dynamic-import "^6.18.0" + babylon "^6.18.0" + broccoli-debug "^0.6.4" + broccoli-node-api "^1.7.0" + broccoli-plugin "^4.0.0" + broccoli-source "^3.0.0" + debug "^3.1.0" + ember-cli-babel "^7.0.0" + enhanced-resolve "^4.0.0" + fs-extra "^6.0.1" + fs-tree-diff "^2.0.0" + handlebars "^4.3.1" + js-string-escape "^1.0.1" + lodash "^4.17.19" + mkdirp "^0.5.1" + resolve-package-path "^3.1.0" + rimraf "^2.6.2" + semver "^7.3.4" + symlink-or-copy "^1.2.0" + typescript-memoize "^1.0.0-alpha.3" + walk-sync "^0.3.3" + webpack "^4.43.0" ember-auto-import@^1.5.2, ember-auto-import@^1.5.3, ember-auto-import@^1.6.0: version "1.10.1" @@ -6402,7 +6478,7 @@ ember-cli-babel-plugin-helpers@^1.0.0, ember-cli-babel-plugin-helpers@^1.1.0, em resolved "https://registry.yarnpkg.com/ember-cli-babel-plugin-helpers/-/ember-cli-babel-plugin-helpers-1.1.1.tgz#5016b80cdef37036c4282eef2d863e1d73576879" integrity sha512-sKvOiPNHr5F/60NLd7SFzMpYPte/nnGkq/tMIfXejfKHIhaiIkYFqX8Z9UFTKWLLn+V7NOaby6niNPZUdvKCRw== -ember-cli-babel@7, ember-cli-babel@^7.0.0, ember-cli-babel@^7.1.3, ember-cli-babel@^7.10.0, ember-cli-babel@^7.11.0, ember-cli-babel@^7.12.0, ember-cli-babel@^7.13.0, ember-cli-babel@^7.17.2, ember-cli-babel@^7.18.0, ember-cli-babel@^7.19.0, ember-cli-babel@^7.20.0, ember-cli-babel@^7.20.5, ember-cli-babel@^7.21.0, ember-cli-babel@^7.22.1, ember-cli-babel@^7.23.0, ember-cli-babel@^7.23.1, ember-cli-babel@^7.7.3, ember-cli-babel@^7.8.0: +ember-cli-babel@7, 
ember-cli-babel@^7.0.0, ember-cli-babel@^7.1.3, ember-cli-babel@^7.10.0, ember-cli-babel@^7.13.0, ember-cli-babel@^7.17.2, ember-cli-babel@^7.18.0, ember-cli-babel@^7.19.0, ember-cli-babel@^7.20.0, ember-cli-babel@^7.20.5, ember-cli-babel@^7.21.0, ember-cli-babel@^7.22.1, ember-cli-babel@^7.23.0, ember-cli-babel@^7.23.1, ember-cli-babel@^7.7.3, ember-cli-babel@^7.8.0: version "7.26.1" resolved "https://registry.yarnpkg.com/ember-cli-babel/-/ember-cli-babel-7.26.1.tgz#d3f06bd9aec8aac9197c5ff4d0b87ff1e4f0d62a" integrity sha512-WEWP3hJSe9CWL22gEWQ+Y3uKMGk1vLoIREUQfJNKrgUUh3l49bnfAamh3ywcAQz31IgzvkLPO8ZTXO4rxnuP4Q== @@ -6454,7 +6530,7 @@ ember-cli-babel@^6.0.0, ember-cli-babel@^6.0.0-beta.4, ember-cli-babel@^6.11.0, ember-cli-version-checker "^2.1.2" semver "^5.5.0" -ember-cli-babel@^7.26.3, ember-cli-babel@^7.26.5: +ember-cli-babel@^7.13.2, ember-cli-babel@^7.26.3, ember-cli-babel@^7.26.5: version "7.26.11" resolved "https://registry.yarnpkg.com/ember-cli-babel/-/ember-cli-babel-7.26.11.tgz#50da0fe4dcd99aada499843940fec75076249a9f" integrity sha512-JJYeYjiz/JTn34q7F5DSOjkkZqy8qwFOOxXfE6pe9yEJqWGu4qErKxlz8I22JoVEQ/aBUO+OcKTpmctvykM9YA== @@ -6573,17 +6649,6 @@ ember-cli-get-component-path-option@^1.0.0: resolved "https://registry.yarnpkg.com/ember-cli-get-component-path-option/-/ember-cli-get-component-path-option-1.0.0.tgz#0d7b595559e2f9050abed804f1d8eff1b08bc771" integrity sha1-DXtZVVni+QUKvtgE8djv8bCLx3E= -ember-cli-htmlbars-inline-precompile@^2.1.0: - version "2.1.0" - resolved "https://registry.yarnpkg.com/ember-cli-htmlbars-inline-precompile/-/ember-cli-htmlbars-inline-precompile-2.1.0.tgz#61b91ff1879d44ae504cadb46fb1f2604995ae08" - integrity sha512-BylIHduwQkncPhnj0ZyorBuljXbTzLgRo6kuHf1W+IHFxThFl2xG+r87BVwsqx4Mn9MTgW9SE0XWjwBJcSWd6Q== - dependencies: - babel-plugin-htmlbars-inline-precompile "^1.0.0" - ember-cli-version-checker "^2.1.2" - hash-for-dep "^1.2.3" - heimdalljs-logger "^0.1.9" - silent-error "^1.1.0" - ember-cli-htmlbars@^3.0.1: version "3.1.0" resolved "https://registry.yarnpkg.com/ember-cli-htmlbars/-/ember-cli-htmlbars-3.1.0.tgz#87806c2a0bca2ab52d4fb8af8e2215c1ca718a99" @@ -6636,7 +6701,7 @@ ember-cli-htmlbars@^5.0.0, ember-cli-htmlbars@^5.1.0, ember-cli-htmlbars@^5.1.2, strip-bom "^4.0.0" walk-sync "^2.2.0" -ember-cli-htmlbars@^5.3.2: +ember-cli-htmlbars@^5.3.2, ember-cli-htmlbars@^5.7.1: version "5.7.2" resolved "https://registry.yarnpkg.com/ember-cli-htmlbars/-/ember-cli-htmlbars-5.7.2.tgz#e0cd2fb3c20d85fe4c3e228e6f0590ee1c645ba8" integrity sha512-Uj6R+3TtBV5RZoJY14oZn/sNPnc+UgmC8nb5rI4P3fR/gYoyTFIZSXiIM7zl++IpMoIrocxOrgt+mhonKphgGg== @@ -6799,12 +6864,12 @@ ember-cli-test-info@^1.0.0: dependencies: ember-cli-string-utils "^1.0.0" -ember-cli-test-loader@^2.2.0: - version "2.2.0" - resolved "https://registry.yarnpkg.com/ember-cli-test-loader/-/ember-cli-test-loader-2.2.0.tgz#3fb8d5d1357e4460d3f0a092f5375e71b6f7c243" - integrity sha512-mlSXX9SciIRwGkFTX6XGyJYp4ry6oCFZRxh5jJ7VH8UXLTNx2ZACtDTwaWtNhYrWXgKyiDUvmD8enD56aePWRA== +ember-cli-test-loader@^3.0.0: + version "3.0.0" + resolved "https://registry.yarnpkg.com/ember-cli-test-loader/-/ember-cli-test-loader-3.0.0.tgz#1c036fc48de36155355fcda3266af63f977826f1" + integrity sha512-wfFRBrfO9gaKScYcdQxTfklx9yp1lWK6zv1rZRpkas9z2SHyJojF7NOQRWQgSB3ypm7vfpiF8VsFFVVr7VBzAQ== dependencies: - ember-cli-babel "^6.8.1" + ember-cli-babel "^7.13.2" ember-cli-typescript@3.0.0: version "3.0.0" @@ -6885,7 +6950,7 @@ ember-cli-uglify@^3.0.0: broccoli-uglify-sourcemap "^3.1.0" lodash.defaultsdeep "^4.6.0" -ember-cli-version-checker@^2.0.0, 
ember-cli-version-checker@^2.1.0, ember-cli-version-checker@^2.1.2: +ember-cli-version-checker@^2.1.0, ember-cli-version-checker@^2.1.2: version "2.2.0" resolved "https://registry.yarnpkg.com/ember-cli-version-checker/-/ember-cli-version-checker-2.2.0.tgz#47771b731fe0962705e27c8199a9e3825709f3b3" integrity sha512-G+KtYIVlSOWGcNaTFHk76xR4GdzDLzAS4uxZUKdASuFX0KJE43C6DaqL+y3VTpUFLI2FIkAS6HZ4I1YBi+S3hg== @@ -7135,7 +7200,7 @@ ember-decorators@^6.1.1: "@ember-decorators/object" "^6.1.1" ember-cli-babel "^7.7.3" -ember-destroyable-polyfill@^2.0.2: +ember-destroyable-polyfill@^2.0.2, ember-destroyable-polyfill@^2.0.3: version "2.0.3" resolved "https://registry.yarnpkg.com/ember-destroyable-polyfill/-/ember-destroyable-polyfill-2.0.3.tgz#1673ed66609a82268ef270a7d917ebd3647f11e1" integrity sha512-TovtNqCumzyAiW0/OisSkkVK93xnVF4NRU6+FN0ubpfwEOpRrmM2RqDwXI6YAChCgSHON1cz0DfQStpA1Gjuuw== @@ -7403,18 +7468,20 @@ ember-power-select@^4.0.0, ember-power-select@^4.0.5: ember-text-measurer "^0.6.0" ember-truth-helpers "^2.1.0 || ^3.0.0" -ember-qunit@^4.6.0: - version "4.6.0" - resolved "https://registry.yarnpkg.com/ember-qunit/-/ember-qunit-4.6.0.tgz#ad79fd3ff00073a8779400cc5a4b44829517590f" - integrity sha512-i5VOGn0RP8XH+5qkYDOZshbqAvO6lHgF65D0gz8vRx4DszCIvJMJO+bbftBTfYMxp6rqG85etAA6pfNxE0DqsQ== +ember-qunit@^5.1.1: + version "5.1.5" + resolved "https://registry.yarnpkg.com/ember-qunit/-/ember-qunit-5.1.5.tgz#24a7850f052be24189ff597dfc31b923e684c444" + integrity sha512-2cFA4oMygh43RtVcMaBrr086Tpdhgbn3fVZ2awLkzF/rnSN0D0PSRpd7hAD7OdBPerC/ZYRwzVyGXLoW/Zes4A== dependencies: - "@ember/test-helpers" "^1.7.1" - broccoli-funnel "^2.0.2" + broccoli-funnel "^3.0.8" broccoli-merge-trees "^3.0.2" - common-tags "^1.4.0" - ember-cli-babel "^7.12.0" - ember-cli-test-loader "^2.2.0" - qunit "^2.9.3" + common-tags "^1.8.0" + ember-auto-import "^1.11.3" + ember-cli-babel "^7.26.6" + ember-cli-test-loader "^3.0.0" + resolve-package-path "^3.1.0" + silent-error "^1.1.1" + validate-peer-dependencies "^1.2.0" ember-ref-modifier@^1.0.0: version "1.0.1" @@ -7629,14 +7696,6 @@ ember-test-selectors@^5.0.0: ember-cli-babel "^7.22.1" ember-cli-version-checker "^5.1.1" -ember-test-waiters@^1.1.1: - version "1.2.0" - resolved "https://registry.yarnpkg.com/ember-test-waiters/-/ember-test-waiters-1.2.0.tgz#c12ead4313934c24cff41857020cacdbf8e6effe" - integrity sha512-aEw7YuutLuJT4NUuPTNiGFwgTYl23ThqmBxSkfFimQAn+keWjAftykk3dlFELuhsJhYW/S8YoVjN0bSAQRLNtw== - dependencies: - ember-cli-babel "^7.11.0" - semver "^6.3.0" - ember-text-measurer@^0.6.0: version "0.6.0" resolved "https://registry.yarnpkg.com/ember-text-measurer/-/ember-text-measurer-0.6.0.tgz#140eda044fd7d4d7f60f654dd30da79c06922b2e" @@ -10290,11 +10349,6 @@ jquery@^3.4.1, jquery@^3.5.0: resolved "https://registry.yarnpkg.com/jquery/-/jquery-3.6.0.tgz#c72a09f15c1bdce142f49dbf1170bdf8adac2470" integrity sha512-JVzAR/AjBvVt2BmYhxRCSYysDsPcssdmTFnzyLEts9qNwmjmu4JTAMYubEfwVOSwpQ1I1sKKFcxhZCI2buerfw== -js-reporters@1.2.3: - version "1.2.3" - resolved "https://registry.yarnpkg.com/js-reporters/-/js-reporters-1.2.3.tgz#8febcab370539df62e09b95da133da04b11f6168" - integrity sha512-2YzWkHbbRu6LueEs5ZP3P1LqbECvAeUJYrjw3H4y1ofW06hqCS0AbzBtLwbr+Hke51bt9CUepJ/Fj1hlCRIF6A== - js-string-escape@^1.0.1: version "1.0.1" resolved "https://registry.yarnpkg.com/js-string-escape/-/js-string-escape-1.0.1.tgz#e2625badbc0d67c7533e9edc1068c587ae4137ef" @@ -11734,10 +11788,10 @@ node-releases@^2.0.2: resolved 
"https://registry.yarnpkg.com/node-releases/-/node-releases-2.0.2.tgz#7139fe71e2f4f11b47d4d2986aaf8c48699e0c01" integrity sha512-XxYDdcQ6eKqp/YjI+tb2C5WM2LgjnZrfYg4vgQt49EK268b6gYCHsBLrK2qvJo4FmCtqmKezb0WZFK4fkrZNsg== -node-watch@0.7.1: - version "0.7.1" - resolved "https://registry.yarnpkg.com/node-watch/-/node-watch-0.7.1.tgz#0caaa6a6833b0d533487f953c52a6c787769ba7c" - integrity sha512-UWblPYuZYrkCQCW5PxAwYSxaELNBLUckrTBBk8xr1/bUgyOkYYTsUcV4e3ytcazFEOyiRyiUrsG37pu6I0I05g== +node-watch@0.7.3: + version "0.7.3" + resolved "https://registry.yarnpkg.com/node-watch/-/node-watch-0.7.3.tgz#6d4db88e39c8d09d3ea61d6568d80e5975abc7ab" + integrity sha512-3l4E8uMPY1HdMMryPRUAl+oIHtXtyiTlIiESNSVSNxcPfzAFzeTbXFQkZfAwBbo0B1qMSG8nUABx+Gd+YrbKrQ== nomnom@^1.5.x: version "1.8.1" @@ -12705,7 +12759,7 @@ quick-temp@^0.1.2, quick-temp@^0.1.3, quick-temp@^0.1.5, quick-temp@^0.1.8: rimraf "^2.5.4" underscore.string "~3.3.4" -qunit-dom@^1.0.0: +qunit-dom@^1.6.0: version "1.6.0" resolved "https://registry.yarnpkg.com/qunit-dom/-/qunit-dom-1.6.0.tgz#a4bea6a46329d221e4a317d712cb40709107b977" integrity sha512-YwSqcLjQcRI0fUFpaSWwU10KIJPFW5Qh+d3cT5DOgx81dypRuUSiPkKFmBY/CDs/R1KdHRadthkcXg2rqAon8Q== @@ -12715,15 +12769,14 @@ qunit-dom@^1.0.0: ember-cli-babel "^7.23.0" ember-cli-version-checker "^5.1.1" -qunit@^2.9.3: - version "2.14.1" - resolved "https://registry.yarnpkg.com/qunit/-/qunit-2.14.1.tgz#02ba25c108f0845fda411a42b5cbfca0f0319943" - integrity sha512-jtFw8bf8+GjzY8UpnwbjqTOdK/rvrjcafUFTNpRc6/9N4q5dBwcwSMlcC76kAn5BRiSFj5Ssn2dfHtEYvtsXSw== +qunit@^2.13.0: + version "2.19.1" + resolved "https://registry.yarnpkg.com/qunit/-/qunit-2.19.1.tgz#eb1afd188da9e47f07c13aa70461a1d9c4505490" + integrity sha512-gSGuw0vErE/rNjnlBW/JmE7NNubBlGrDPQvsug32ejYhcVFuZec9yoU0+C30+UgeCGwq6Ap89K65dMGo+kDGZQ== dependencies: - commander "7.1.0" - js-reporters "1.2.3" - node-watch "0.7.1" - tiny-glob "0.2.8" + commander "7.2.0" + node-watch "0.7.3" + tiny-glob "0.2.9" raf-pool@~0.1.4: version "0.1.4" @@ -13686,7 +13739,7 @@ signal-exit@^3.0.0, signal-exit@^3.0.2: resolved "https://registry.yarnpkg.com/signal-exit/-/signal-exit-3.0.3.tgz#a1410c2edd8f077b08b4e253c8eacfcaf057461c" integrity sha512-VUJ49FC8U1OxwZLxIbTTrDvLnf/6TDgxZcK8wxR8zs13xpx7xbG60ndBlhNrFi2EMuFRoeDoJO7wthSLq42EjA== -silent-error@^1.0.0, silent-error@^1.0.1, silent-error@^1.1.0, silent-error@^1.1.1: +silent-error@^1.0.0, silent-error@^1.0.1, silent-error@^1.1.1: version "1.1.1" resolved "https://registry.yarnpkg.com/silent-error/-/silent-error-1.1.1.tgz#f72af5b0d73682a2ba1778b7e32cd8aa7c2d8662" integrity sha512-n4iEKyNcg4v6/jpb3c0/iyH2G1nzUNl7Gpqtn/mHIJK9S/q/7MCfoO4rwVOoO59qPFIc0hVHvMbiOJ0NdtxKKw== @@ -14531,10 +14584,10 @@ tiny-emitter@^2.0.0: resolved "https://registry.yarnpkg.com/tiny-emitter/-/tiny-emitter-2.1.0.tgz#1d1a56edfc51c43e863cbb5382a72330e3555423" integrity sha512-NB6Dk1A9xgQPMoGqC5CVXn123gWyte215ONT5Pp5a0yt4nlEoO1ZWeCwpncaekPHXO60i47ihFnZPiRPjRMq4Q== -tiny-glob@0.2.8: - version "0.2.8" - resolved "https://registry.yarnpkg.com/tiny-glob/-/tiny-glob-0.2.8.tgz#b2792c396cc62db891ffa161fe8b33e76123e531" - integrity sha512-vkQP7qOslq63XRX9kMswlby99kyO5OvKptw7AMwBVMjXEI7Tb61eoI5DydyEMOseyGS5anDN1VPoVxEvH01q8w== +tiny-glob@0.2.9: + version "0.2.9" + resolved "https://registry.yarnpkg.com/tiny-glob/-/tiny-glob-0.2.9.tgz#2212d441ac17928033b110f8b3640683129d31e2" + integrity sha512-g/55ssRPUjShh+xkfx9UPDXqhckHEsHr4Vd9zX55oSdGZc/MD0m3sferOkwWtp98bv+kcVfEHtRJgBVJzelrzg== dependencies: globalyzer "0.1.0" globrex "^0.1.2" @@ -15235,6 +15288,14 @@ 
validate-npm-package-name@^3.0.0: dependencies: builtins "^1.0.3" +validate-peer-dependencies@^1.2.0: + version "1.2.0" + resolved "https://registry.yarnpkg.com/validate-peer-dependencies/-/validate-peer-dependencies-1.2.0.tgz#22aab93c514f4fda457d36c80685e8b1160d2036" + integrity sha512-nd2HUpKc6RWblPZQ2GDuI65sxJ2n/UqZwSBVtj64xlWjMx0m7ZB2m9b2JS3v1f+n9VWH/dd1CMhkHfP6pIdckA== + dependencies: + resolve-package-path "^3.1.0" + semver "^7.3.2" + validated-changeset@0.10.0, validated-changeset@~0.10.0: version "0.10.0" resolved "https://registry.yarnpkg.com/validated-changeset/-/validated-changeset-0.10.0.tgz#2e8188c089ab282c1b51fba3c289073f6bd14c8b" From ef5f6971214fbcabb223760fd4dc0fb9d1b961a1 Mon Sep 17 00:00:00 2001 From: malizz Date: Thu, 1 Sep 2022 09:59:11 -0700 Subject: [PATCH 36/55] Add additional parameters to envoy passive health check config (#14238) * draft commit * add changelog, update test * remove extra param * fix test * update type to account for nil value * add test for custom passive health check * update comments and tests * update description in docs * fix missing commas --- .changelog/14238.txt | 3 + agent/consul/config_endpoint_test.go | 19 ++- agent/structs/config_entry.go | 5 + agent/structs/config_entry_test.go | 14 +- agent/structs/testing_connect_proxy_config.go | 2 +- agent/xds/clusters_test.go | 12 ++ agent/xds/config.go | 5 + .../custom-passive-healthcheck.latest.golden | 147 ++++++++++++++++++ api/config_entry.go | 5 + .../config-entries/service-defaults.mdx | 20 +++ .../content/docs/connect/proxies/envoy.mdx | 2 + 11 files changed, 223 insertions(+), 11 deletions(-) create mode 100644 .changelog/14238.txt create mode 100644 agent/xds/testdata/clusters/custom-passive-healthcheck.latest.golden diff --git a/.changelog/14238.txt b/.changelog/14238.txt new file mode 100644 index 000000000..43c570915 --- /dev/null +++ b/.changelog/14238.txt @@ -0,0 +1,3 @@ +```release-note:improvement +envoy: adds additional Envoy outlier ejection parameters to passive health check configurations. 
+``` \ No newline at end of file diff --git a/agent/consul/config_endpoint_test.go b/agent/consul/config_endpoint_test.go index 3f79b3d1b..aaf7cba94 100644 --- a/agent/consul/config_endpoint_test.go +++ b/agent/consul/config_endpoint_test.go @@ -1399,8 +1399,9 @@ func TestConfigEntry_ResolveServiceConfig_Upstreams(t *testing.T) { Protocol: "http", MeshGateway: structs.MeshGatewayConfig{Mode: structs.MeshGatewayModeRemote}, PassiveHealthCheck: &structs.PassiveHealthCheck{ - Interval: 10, - MaxFailures: 2, + Interval: 10, + MaxFailures: 2, + EnforcingConsecutive5xx: uintPointer(60), }, }, Overrides: []*structs.UpstreamConfig{ @@ -1432,8 +1433,9 @@ func TestConfigEntry_ResolveServiceConfig_Upstreams(t *testing.T) { Upstream: wildcard, Config: map[string]interface{}{ "passive_health_check": map[string]interface{}{ - "Interval": int64(10), - "MaxFailures": int64(2), + "Interval": int64(10), + "MaxFailures": int64(2), + "EnforcingConsecutive5xx": int64(60), }, "mesh_gateway": map[string]interface{}{ "Mode": "remote", @@ -1445,8 +1447,9 @@ func TestConfigEntry_ResolveServiceConfig_Upstreams(t *testing.T) { Upstream: mysql, Config: map[string]interface{}{ "passive_health_check": map[string]interface{}{ - "Interval": int64(10), - "MaxFailures": int64(2), + "Interval": int64(10), + "MaxFailures": int64(2), + "EnforcingConsecutive5xx": int64(60), }, "mesh_gateway": map[string]interface{}{ "Mode": "local", @@ -2507,3 +2510,7 @@ func Test_gateWriteToSecondary_AllowedKinds(t *testing.T) { }) } } + +func uintPointer(v uint32) *uint32 { + return &v +} diff --git a/agent/structs/config_entry.go b/agent/structs/config_entry.go index 88c523a15..23c5c8e47 100644 --- a/agent/structs/config_entry.go +++ b/agent/structs/config_entry.go @@ -969,6 +969,11 @@ type PassiveHealthCheck struct { // MaxFailures is the count of consecutive failures that results in a host // being removed from the pool. MaxFailures uint32 `json:",omitempty" alias:"max_failures"` + + // EnforcingConsecutive5xx is the % chance that a host will be actually ejected + // when an outlier status is detected through consecutive 5xx. + // This setting can be used to disable ejection or to ramp it up slowly. Defaults to 100. 
+ EnforcingConsecutive5xx *uint32 `json:",omitempty" alias:"enforcing_consecutive_5xx"` } func (chk *PassiveHealthCheck) Clone() *PassiveHealthCheck { diff --git a/agent/structs/config_entry_test.go b/agent/structs/config_entry_test.go index a9e113f21..c08e82399 100644 --- a/agent/structs/config_entry_test.go +++ b/agent/structs/config_entry_test.go @@ -2754,8 +2754,9 @@ func TestUpstreamConfig_MergeInto(t *testing.T) { MaxConcurrentRequests: intPointer(12), }, "passive_health_check": &PassiveHealthCheck{ - MaxFailures: 13, - Interval: 14 * time.Second, + MaxFailures: 13, + Interval: 14 * time.Second, + EnforcingConsecutive5xx: uintPointer(80), }, "mesh_gateway": MeshGatewayConfig{Mode: MeshGatewayModeLocal}, }, @@ -2770,8 +2771,9 @@ func TestUpstreamConfig_MergeInto(t *testing.T) { MaxConcurrentRequests: intPointer(12), }, "passive_health_check": &PassiveHealthCheck{ - MaxFailures: 13, - Interval: 14 * time.Second, + MaxFailures: 13, + Interval: 14 * time.Second, + EnforcingConsecutive5xx: uintPointer(80), }, "mesh_gateway": MeshGatewayConfig{Mode: MeshGatewayModeLocal}, }, @@ -3067,3 +3069,7 @@ func testConfigEntryNormalizeAndValidate(t *testing.T, cases map[string]configEn }) } } + +func uintPointer(v uint32) *uint32 { + return &v +} diff --git a/agent/structs/testing_connect_proxy_config.go b/agent/structs/testing_connect_proxy_config.go index ad918927a..fdee3f693 100644 --- a/agent/structs/testing_connect_proxy_config.go +++ b/agent/structs/testing_connect_proxy_config.go @@ -26,7 +26,7 @@ func TestUpstreams(t testing.T) Upstreams { Config: map[string]interface{}{ // Float because this is how it is decoded by JSON decoder so this // enables the value returned to be compared directly to a decoded JSON - // response without spurios type loss. + // response without spurious type loss. 
"connect_timeout_ms": float64(1000), }, }, diff --git a/agent/xds/clusters_test.go b/agent/xds/clusters_test.go index 26087dd1d..5efd5029c 100644 --- a/agent/xds/clusters_test.go +++ b/agent/xds/clusters_test.go @@ -169,6 +169,18 @@ func TestClustersFromSnapshot(t *testing.T) { }, nil) }, }, + { + name: "custom-passive-healthcheck", + create: func(t testinf.T) *proxycfg.ConfigSnapshot { + return proxycfg.TestConfigSnapshot(t, func(ns *structs.NodeService) { + ns.Proxy.Upstreams[0].Config["passive_health_check"] = map[string]interface{}{ + "enforcing_consecutive_5xx": float64(80), + "max_failures": float64(5), + "interval": float64(10), + } + }, nil) + }, + }, { name: "custom-max-inbound-connections", create: func(t testinf.T) *proxycfg.ConfigSnapshot { diff --git a/agent/xds/config.go b/agent/xds/config.go index cfbd23e07..0736fb44c 100644 --- a/agent/xds/config.go +++ b/agent/xds/config.go @@ -174,5 +174,10 @@ func ToOutlierDetection(p *structs.PassiveHealthCheck) *envoy_cluster_v3.Outlier if p.MaxFailures != 0 { od.Consecutive_5Xx = &wrappers.UInt32Value{Value: p.MaxFailures} } + + if p.EnforcingConsecutive5xx != nil { + od.EnforcingConsecutive_5Xx = &wrappers.UInt32Value{Value: *p.EnforcingConsecutive5xx} + } + return od } diff --git a/agent/xds/testdata/clusters/custom-passive-healthcheck.latest.golden b/agent/xds/testdata/clusters/custom-passive-healthcheck.latest.golden new file mode 100644 index 000000000..41bc16f6e --- /dev/null +++ b/agent/xds/testdata/clusters/custom-passive-healthcheck.latest.golden @@ -0,0 +1,147 @@ +{ + "versionInfo": "00000001", + "resources": [ + { + "@type": "type.googleapis.com/envoy.config.cluster.v3.Cluster", + "name": "db.default.dc1.internal.11111111-2222-3333-4444-555555555555.consul", + "altStatName": "db.default.dc1.internal.11111111-2222-3333-4444-555555555555.consul", + "type": "EDS", + "edsClusterConfig": { + "edsConfig": { + "ads": { + + }, + "resourceApiVersion": "V3" + } + }, + "connectTimeout": "5s", + "circuitBreakers": { + + }, + "outlierDetection": { + "consecutive5xx": 5, + "interval": "0.000000010s", + "enforcingConsecutive5xx": 80 + }, + "commonLbConfig": { + "healthyPanicThreshold": { + + } + }, + "transportSocket": { + "name": "tls", + "typedConfig": { + "@type": "type.googleapis.com/envoy.extensions.transport_sockets.tls.v3.UpstreamTlsContext", + "commonTlsContext": { + "tlsParams": { + + }, + "tlsCertificates": [ + { + "certificateChain": { + "inlineString": "-----BEGIN CERTIFICATE-----\nMIICjDCCAjKgAwIBAgIIC5llxGV1gB8wCgYIKoZIzj0EAwIwFDESMBAGA1UEAxMJ\nVGVzdCBDQSAyMB4XDTE5MDMyMjEzNTgyNloXDTI5MDMyMjEzNTgyNlowDjEMMAoG\nA1UEAxMDd2ViMFkwEwYHKoZIzj0CAQYIKoZIzj0DAQcDQgAEADPv1RHVNRfa2VKR\nAB16b6rZnEt7tuhaxCFpQXPj7M2omb0B9Favq5E0ivpNtv1QnFhxtPd7d5k4e+T7\nSkW1TaOCAXIwggFuMA4GA1UdDwEB/wQEAwIDuDAdBgNVHSUEFjAUBggrBgEFBQcD\nAgYIKwYBBQUHAwEwDAYDVR0TAQH/BAIwADBoBgNVHQ4EYQRfN2Q6MDc6ODc6M2E6\nNDA6MTk6NDc6YzM6NWE6YzA6YmE6NjI6ZGY6YWY6NGI6ZDQ6MDU6MjU6NzY6M2Q6\nNWE6OGQ6MTY6OGQ6Njc6NWU6MmU6YTA6MzQ6N2Q6ZGM6ZmYwagYDVR0jBGMwYYBf\nZDE6MTE6MTE6YWM6MmE6YmE6OTc6YjI6M2Y6YWM6N2I6YmQ6ZGE6YmU6YjE6OGE6\nZmM6OWE6YmE6YjU6YmM6ODM6ZTc6NWU6NDE6NmY6ZjI6NzM6OTU6NTg6MGM6ZGIw\nWQYDVR0RBFIwUIZOc3BpZmZlOi8vMTExMTExMTEtMjIyMi0zMzMzLTQ0NDQtNTU1\nNTU1NTU1NTU1LmNvbnN1bC9ucy9kZWZhdWx0L2RjL2RjMS9zdmMvd2ViMAoGCCqG\nSM49BAMCA0gAMEUCIGC3TTvvjj76KMrguVyFf4tjOqaSCRie3nmHMRNNRav7AiEA\npY0heYeK9A6iOLrzqxSerkXXQyj5e9bE4VgUnxgPU6g=\n-----END CERTIFICATE-----\n" + }, + "privateKey": { + "inlineString": "-----BEGIN EC PRIVATE 
KEY-----\nMHcCAQEEIMoTkpRggp3fqZzFKh82yS4LjtJI+XY+qX/7DefHFrtdoAoGCCqGSM49\nAwEHoUQDQgAEADPv1RHVNRfa2VKRAB16b6rZnEt7tuhaxCFpQXPj7M2omb0B9Fav\nq5E0ivpNtv1QnFhxtPd7d5k4e+T7SkW1TQ==\n-----END EC PRIVATE KEY-----\n" + } + } + ], + "validationContext": { + "trustedCa": { + "inlineString": "-----BEGIN CERTIFICATE-----\nMIICXDCCAgKgAwIBAgIICpZq70Z9LyUwCgYIKoZIzj0EAwIwFDESMBAGA1UEAxMJ\nVGVzdCBDQSAyMB4XDTE5MDMyMjEzNTgyNloXDTI5MDMyMjEzNTgyNlowFDESMBAG\nA1UEAxMJVGVzdCBDQSAyMFkwEwYHKoZIzj0CAQYIKoZIzj0DAQcDQgAEIhywH1gx\nAsMwuF3ukAI5YL2jFxH6Usnma1HFSfVyxbXX1/uoZEYrj8yCAtdU2yoHETyd+Zx2\nThhRLP79pYegCaOCATwwggE4MA4GA1UdDwEB/wQEAwIBhjAPBgNVHRMBAf8EBTAD\nAQH/MGgGA1UdDgRhBF9kMToxMToxMTphYzoyYTpiYTo5NzpiMjozZjphYzo3Yjpi\nZDpkYTpiZTpiMTo4YTpmYzo5YTpiYTpiNTpiYzo4MzplNzo1ZTo0MTo2ZjpmMjo3\nMzo5NTo1ODowYzpkYjBqBgNVHSMEYzBhgF9kMToxMToxMTphYzoyYTpiYTo5Nzpi\nMjozZjphYzo3YjpiZDpkYTpiZTpiMTo4YTpmYzo5YTpiYTpiNTpiYzo4MzplNzo1\nZTo0MTo2ZjpmMjo3Mzo5NTo1ODowYzpkYjA/BgNVHREEODA2hjRzcGlmZmU6Ly8x\nMTExMTExMS0yMjIyLTMzMzMtNDQ0NC01NTU1NTU1NTU1NTUuY29uc3VsMAoGCCqG\nSM49BAMCA0gAMEUCICOY0i246rQHJt8o8Oya0D5PLL1FnmsQmQqIGCi31RwnAiEA\noR5f6Ku+cig2Il8T8LJujOp2/2A72QcHZA57B13y+8o=\n-----END CERTIFICATE-----\n" + }, + "matchSubjectAltNames": [ + { + "exact": "spiffe://11111111-2222-3333-4444-555555555555.consul/ns/default/dc/dc1/svc/db" + } + ] + } + }, + "sni": "db.default.dc1.internal.11111111-2222-3333-4444-555555555555.consul" + } + } + }, + { + "@type": "type.googleapis.com/envoy.config.cluster.v3.Cluster", + "name": "geo-cache.default.dc1.query.11111111-2222-3333-4444-555555555555.consul", + "type": "EDS", + "edsClusterConfig": { + "edsConfig": { + "ads": { + + }, + "resourceApiVersion": "V3" + } + }, + "connectTimeout": "5s", + "circuitBreakers": { + + }, + "outlierDetection": { + + }, + "transportSocket": { + "name": "tls", + "typedConfig": { + "@type": "type.googleapis.com/envoy.extensions.transport_sockets.tls.v3.UpstreamTlsContext", + "commonTlsContext": { + "tlsParams": { + + }, + "tlsCertificates": [ + { + "certificateChain": { + "inlineString": "-----BEGIN CERTIFICATE-----\nMIICjDCCAjKgAwIBAgIIC5llxGV1gB8wCgYIKoZIzj0EAwIwFDESMBAGA1UEAxMJ\nVGVzdCBDQSAyMB4XDTE5MDMyMjEzNTgyNloXDTI5MDMyMjEzNTgyNlowDjEMMAoG\nA1UEAxMDd2ViMFkwEwYHKoZIzj0CAQYIKoZIzj0DAQcDQgAEADPv1RHVNRfa2VKR\nAB16b6rZnEt7tuhaxCFpQXPj7M2omb0B9Favq5E0ivpNtv1QnFhxtPd7d5k4e+T7\nSkW1TaOCAXIwggFuMA4GA1UdDwEB/wQEAwIDuDAdBgNVHSUEFjAUBggrBgEFBQcD\nAgYIKwYBBQUHAwEwDAYDVR0TAQH/BAIwADBoBgNVHQ4EYQRfN2Q6MDc6ODc6M2E6\nNDA6MTk6NDc6YzM6NWE6YzA6YmE6NjI6ZGY6YWY6NGI6ZDQ6MDU6MjU6NzY6M2Q6\nNWE6OGQ6MTY6OGQ6Njc6NWU6MmU6YTA6MzQ6N2Q6ZGM6ZmYwagYDVR0jBGMwYYBf\nZDE6MTE6MTE6YWM6MmE6YmE6OTc6YjI6M2Y6YWM6N2I6YmQ6ZGE6YmU6YjE6OGE6\nZmM6OWE6YmE6YjU6YmM6ODM6ZTc6NWU6NDE6NmY6ZjI6NzM6OTU6NTg6MGM6ZGIw\nWQYDVR0RBFIwUIZOc3BpZmZlOi8vMTExMTExMTEtMjIyMi0zMzMzLTQ0NDQtNTU1\nNTU1NTU1NTU1LmNvbnN1bC9ucy9kZWZhdWx0L2RjL2RjMS9zdmMvd2ViMAoGCCqG\nSM49BAMCA0gAMEUCIGC3TTvvjj76KMrguVyFf4tjOqaSCRie3nmHMRNNRav7AiEA\npY0heYeK9A6iOLrzqxSerkXXQyj5e9bE4VgUnxgPU6g=\n-----END CERTIFICATE-----\n" + }, + "privateKey": { + "inlineString": "-----BEGIN EC PRIVATE KEY-----\nMHcCAQEEIMoTkpRggp3fqZzFKh82yS4LjtJI+XY+qX/7DefHFrtdoAoGCCqGSM49\nAwEHoUQDQgAEADPv1RHVNRfa2VKRAB16b6rZnEt7tuhaxCFpQXPj7M2omb0B9Fav\nq5E0ivpNtv1QnFhxtPd7d5k4e+T7SkW1TQ==\n-----END EC PRIVATE KEY-----\n" + } + } + ], + "validationContext": { + "trustedCa": { + "inlineString": "-----BEGIN 
CERTIFICATE-----\nMIICXDCCAgKgAwIBAgIICpZq70Z9LyUwCgYIKoZIzj0EAwIwFDESMBAGA1UEAxMJ\nVGVzdCBDQSAyMB4XDTE5MDMyMjEzNTgyNloXDTI5MDMyMjEzNTgyNlowFDESMBAG\nA1UEAxMJVGVzdCBDQSAyMFkwEwYHKoZIzj0CAQYIKoZIzj0DAQcDQgAEIhywH1gx\nAsMwuF3ukAI5YL2jFxH6Usnma1HFSfVyxbXX1/uoZEYrj8yCAtdU2yoHETyd+Zx2\nThhRLP79pYegCaOCATwwggE4MA4GA1UdDwEB/wQEAwIBhjAPBgNVHRMBAf8EBTAD\nAQH/MGgGA1UdDgRhBF9kMToxMToxMTphYzoyYTpiYTo5NzpiMjozZjphYzo3Yjpi\nZDpkYTpiZTpiMTo4YTpmYzo5YTpiYTpiNTpiYzo4MzplNzo1ZTo0MTo2ZjpmMjo3\nMzo5NTo1ODowYzpkYjBqBgNVHSMEYzBhgF9kMToxMToxMTphYzoyYTpiYTo5Nzpi\nMjozZjphYzo3YjpiZDpkYTpiZTpiMTo4YTpmYzo5YTpiYTpiNTpiYzo4MzplNzo1\nZTo0MTo2ZjpmMjo3Mzo5NTo1ODowYzpkYjA/BgNVHREEODA2hjRzcGlmZmU6Ly8x\nMTExMTExMS0yMjIyLTMzMzMtNDQ0NC01NTU1NTU1NTU1NTUuY29uc3VsMAoGCCqG\nSM49BAMCA0gAMEUCICOY0i246rQHJt8o8Oya0D5PLL1FnmsQmQqIGCi31RwnAiEA\noR5f6Ku+cig2Il8T8LJujOp2/2A72QcHZA57B13y+8o=\n-----END CERTIFICATE-----\n" + }, + "matchSubjectAltNames": [ + { + "exact": "spiffe://11111111-2222-3333-4444-555555555555.consul/ns/default/dc/dc1/svc/geo-cache-target" + }, + { + "exact": "spiffe://11111111-2222-3333-4444-555555555555.consul/ns/default/dc/dc2/svc/geo-cache-target" + } + ] + } + }, + "sni": "geo-cache.default.dc1.query.11111111-2222-3333-4444-555555555555.consul" + } + } + }, + { + "@type": "type.googleapis.com/envoy.config.cluster.v3.Cluster", + "name": "local_app", + "type": "STATIC", + "connectTimeout": "5s", + "loadAssignment": { + "clusterName": "local_app", + "endpoints": [ + { + "lbEndpoints": [ + { + "endpoint": { + "address": { + "socketAddress": { + "address": "127.0.0.1", + "portValue": 8080 + } + } + } + } + ] + } + ] + } + } + ], + "typeUrl": "type.googleapis.com/envoy.config.cluster.v3.Cluster", + "nonce": "00000001" +} \ No newline at end of file diff --git a/api/config_entry.go b/api/config_entry.go index da685b786..7fe128958 100644 --- a/api/config_entry.go +++ b/api/config_entry.go @@ -196,6 +196,11 @@ type PassiveHealthCheck struct { // MaxFailures is the count of consecutive failures that results in a host // being removed from the pool. MaxFailures uint32 `alias:"max_failures"` + + // EnforcingConsecutive5xx is the % chance that a host will be actually ejected + // when an outlier status is detected through consecutive 5xx. + // This setting can be used to disable ejection or to ramp it up slowly. + EnforcingConsecutive5xx uint32 `json:",omitempty" alias:"enforcing_consecutive_5xx"` } // UpstreamLimits describes the limits that are associated with a specific diff --git a/website/content/docs/connect/config-entries/service-defaults.mdx b/website/content/docs/connect/config-entries/service-defaults.mdx index b431e4345..2ce58c271 100644 --- a/website/content/docs/connect/config-entries/service-defaults.mdx +++ b/website/content/docs/connect/config-entries/service-defaults.mdx @@ -503,6 +503,16 @@ represents a location outside the Consul cluster. They can be dialed directly wh description: `The number of consecutive failures which cause a host to be removed from the load balancer.`, }, + { + name: 'EnforcingConsecutive5xx', + type: 'int: 100', + description: { + hcl: `The % chance that a host will be actually ejected + when an outlier status is detected through consecutive 5xx.`, + yaml: `The % chance that a host will be actually ejected + when an outlier status is detected through consecutive 5xx.`, + }, + }, ], }, ], @@ -635,6 +645,16 @@ represents a location outside the Consul cluster. 
They can be dialed directly wh description: `The number of consecutive failures which cause a host to be removed from the load balancer.`, }, + { + name: 'EnforcingConsecutive5xx', + type: 'int: 100', + description: { + hcl: `The % chance that a host will be actually ejected + when an outlier status is detected through consecutive 5xx.`, + yaml: `The % chance that a host will be actually ejected + when an outlier status is detected through consecutive 5xx.`, + }, + }, ], }, ], diff --git a/website/content/docs/connect/proxies/envoy.mdx b/website/content/docs/connect/proxies/envoy.mdx index 812adff17..d6222e898 100644 --- a/website/content/docs/connect/proxies/envoy.mdx +++ b/website/content/docs/connect/proxies/envoy.mdx @@ -309,6 +309,8 @@ definition](/docs/connect/registration/service-registration) or load balancer. - `max_failures` - The number of consecutive failures which cause a host to be removed from the load balancer. + - `enforcing_consecutive_5xx` - The % chance that a host will be actually ejected + when an outlier status is detected through consecutive 5xx. ### Gateway Options From fd8b367dc0d23753301db3b372230d57ea793347 Mon Sep 17 00:00:00 2001 From: David Yu Date: Thu, 1 Sep 2022 10:10:32 -0700 Subject: [PATCH 37/55] docs: minor changes to cluster peering k8s docs and typos (#14442) * docs: minor changes to cluster peering k8s docs and typos --- .../docs/connect/cluster-peering/k8s.mdx | 30 +++++++++---------- 1 file changed, 15 insertions(+), 15 deletions(-) diff --git a/website/content/docs/connect/cluster-peering/k8s.mdx b/website/content/docs/connect/cluster-peering/k8s.mdx index b18633f09..0263e8794 100644 --- a/website/content/docs/connect/cluster-peering/k8s.mdx +++ b/website/content/docs/connect/cluster-peering/k8s.mdx @@ -61,7 +61,7 @@ You must implement the following requirements to create and use cluster peering enableRedirection: true server: exposeService: - enabeld: true + enabled: true controller: enabled: true meshGateway: @@ -166,14 +166,14 @@ To peer Kubernetes clusters running Consul, you need to create a peering token a 1. For the service in "cluster-02" that you want to export, add the following [annotation](/docs/k8s/annotations-and-labels) to your service's pods. - + ```yaml # Service to expose backend apiVersion: v1 kind: Service metadata: - name: backend-service + name: backend spec: selector: app: backend @@ -235,7 +235,7 @@ To peer Kubernetes clusters running Consul, you need to create a peering token a name: default ## The name of the partition containing the service spec: services: - - name: backend-service ## The name of the service you want to export + - name: backend ## The name of the service you want to export consumers: - peer: cluster-01 ## The name of the peer that receives the service ``` @@ -245,7 +245,7 @@ To peer Kubernetes clusters running Consul, you need to create a peering token a 1. Apply the service file and the `ExportedServices` resource for the second cluster. 
```shell-session - $ kubectl apply --context $CLUSTER2_CONTEXT --filename backend-service.yaml --filename exportedsvc.yaml + $ kubectl apply --context $CLUSTER2_CONTEXT --filename backend.yaml --filename exportedsvc.yaml ``` ## Authorize services for peers @@ -261,11 +261,11 @@ To peer Kubernetes clusters running Consul, you need to create a peering token a name: backend-deny spec: destination: - name: backend-service + name: backend sources: - name: "*" action: deny - - name: frontend-service + - name: frontend action: allow ``` @@ -277,16 +277,16 @@ To peer Kubernetes clusters running Consul, you need to create a peering token a $ kubectl --context $CLUSTER2_CONTEXT apply --filename intention.yml ``` -1. For the services in `cluster-01` that you want to access the "backend-service," add the following annotations to the service file. To dial the upstream service from an application, ensure that the requests are sent to the correct DNS name as specified in [Service Virtual IP Lookups](/docs/discovery/dns#service-virtual-ip-lookups). +1. For the services in `cluster-01` that you want to access the "backend," add the following annotations to the service file. To dial the upstream service from an application, ensure that the requests are sent to the correct DNS name as specified in [Service Virtual IP Lookups](/docs/discovery/dns#service-virtual-ip-lookups). - + ```yaml # Service to expose frontend apiVersion: v1 kind: Service metadata: - name: frontend-service + name: frontend spec: selector: app: frontend @@ -332,7 +332,7 @@ To peer Kubernetes clusters running Consul, you need to create a peering token a - name: "LISTEN_ADDR" value: "0.0.0.0:9090" - name: "UPSTREAM_URIS" - value: "http://backend-service.virtual.cluster-02.consul" + value: "http://backend.virtual.cluster-02.consul" - name: "NAME" value: "frontend" - name: "MESSAGE" @@ -346,10 +346,10 @@ To peer Kubernetes clusters running Consul, you need to create a peering token a 1. Apply the service file to the first cluster. ```shell-session - $ kubectl --context $CLUSTER1_CONTEXT apply --filename frontend-service.yaml + $ kubectl --context $CLUSTER1_CONTEXT apply --filename frontend.yaml ``` -1. Run the following command in `frontend-service` and check the output to confirm that you peered your clusters successfully. +1. Run the following command in `frontend` and check the output to confirm that you peered your clusters successfully. ```shell-session $ kubectl --context $CLUSTER1_CONTEXT exec -it $(kubectl --context $CLUSTER1_CONTEXT get pod -l app=frontend -o name) -- curl localhost:9090 @@ -365,9 +365,9 @@ To peer Kubernetes clusters running Consul, you need to create a peering token a "duration": "59.752279ms", "body": "Hello World", "upstream_calls": { - "http://backend-service.virtual.cluster-02.consul": { + "http://backend.virtual.cluster-02.consul": { "name": "backend", - "uri": "http://backend-service.virtual.cluster-02.consul", + "uri": "http://backend.virtual.cluster-02.consul", "type": "HTTP", "ip_addresses": [ "10.32.2.10" From fc6b2ccb00a064f7be3fbc4cb24ff95b2dd8329b Mon Sep 17 00:00:00 2001 From: John Cowen Date: Thu, 1 Sep 2022 18:15:06 +0100 Subject: [PATCH 38/55] ui: Use credentials for all HTTP API requests (#14343) Adds withCredentials/credentials to all HTTP API requests. 
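For readers less familiar with the two browser APIs involved: `credentials: 'include'` is the fetch() option that makes the browser attach cookies and other credentials to a request (including cross-origin ones), while `withCredentials` is the equivalent flag on XMLHttpRequest. A minimal illustrative sketch, not taken from the patch below; the endpoint and header value are only examples:

```javascript
// fetch(): opt in to sending cookies/credentials with the request.
fetch('/v1/catalog/services', {
  credentials: 'include', // attach cookies, even for cross-origin requests
  headers: { 'X-Consul-Token': '' }, // header value is a placeholder for illustration
})
  .then((resp) => resp.json())
  .then((services) => console.log(services));

// XMLHttpRequest: the equivalent switch is withCredentials,
// which must be set before calling send().
const xhr = new XMLHttpRequest();
xhr.open('GET', '/v1/catalog/services');
xhr.withCredentials = true;
xhr.onload = () => console.log(JSON.parse(xhr.responseText));
xhr.send();
```

The one-line changes in `services/client/http.js` and `utils/http/xhr.js` below apply these two switches to the UI's HTTP layer.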
--- .changelog/14343.txt | 4 ++++ ui/packages/consul-ui/app/services/client/http.js | 1 + ui/packages/consul-ui/app/utils/http/xhr.js | 1 + 3 files changed, 6 insertions(+) create mode 100644 .changelog/14343.txt diff --git a/.changelog/14343.txt b/.changelog/14343.txt new file mode 100644 index 000000000..94e7432b4 --- /dev/null +++ b/.changelog/14343.txt @@ -0,0 +1,4 @@ +```release-note:feature +ui: Use withCredentials for all HTTP API requests +``` + diff --git a/ui/packages/consul-ui/app/services/client/http.js b/ui/packages/consul-ui/app/services/client/http.js index 6d3659c22..9b7736501 100644 --- a/ui/packages/consul-ui/app/services/client/http.js +++ b/ui/packages/consul-ui/app/services/client/http.js @@ -210,6 +210,7 @@ export default class HttpService extends Service { return this.settings.findBySlug('token').then(token => { return fetch(`${path}`, { ...params, + credentials: 'include', headers: { 'X-Consul-Token': typeof token.SecretID === 'undefined' ? '' : token.SecretID, ...params.headers, diff --git a/ui/packages/consul-ui/app/utils/http/xhr.js b/ui/packages/consul-ui/app/utils/http/xhr.js index cbdea6411..8ef24a019 100644 --- a/ui/packages/consul-ui/app/utils/http/xhr.js +++ b/ui/packages/consul-ui/app/utils/http/xhr.js @@ -27,6 +27,7 @@ export default function(parseHeaders, XHR) { }; Object.entries(headers).forEach(([key, value]) => xhr.setRequestHeader(key, value)); options.beforeSend(xhr); + xhr.withCredentials = true; xhr.send(options.body); return xhr; }; From a4a4383aa89ef5e825803174ad54cca8abf58868 Mon Sep 17 00:00:00 2001 From: John Cowen Date: Thu, 1 Sep 2022 18:26:12 +0100 Subject: [PATCH 39/55] ui: Adds a HCP home link when in HCP (#14417) --- .../app/components/consul/hcp/home/index.hbs | 8 ++++++++ .../app/components/consul/hcp/home/index.scss | 11 +++++++++++ .../consul-hcp/vendor/consul-hcp/services.js | 4 +++- .../app/components/hashicorp-consul/index.hbs | 13 +++++++------ .../app/components/main-nav-vertical/index.scss | 3 +-- .../app/styles/base/icons/icons/index.scss | 2 +- ui/packages/consul-ui/app/styles/components.scss | 1 + ui/packages/consul-ui/vendor/consul-ui/services.js | 3 +++ 8 files changed, 35 insertions(+), 10 deletions(-) create mode 100644 ui/packages/consul-hcp/app/components/consul/hcp/home/index.hbs create mode 100644 ui/packages/consul-hcp/app/components/consul/hcp/home/index.scss diff --git a/ui/packages/consul-hcp/app/components/consul/hcp/home/index.hbs b/ui/packages/consul-hcp/app/components/consul/hcp/home/index.hbs new file mode 100644 index 000000000..053f235da --- /dev/null +++ b/ui/packages/consul-hcp/app/components/consul/hcp/home/index.hbs @@ -0,0 +1,8 @@ + diff --git a/ui/packages/consul-hcp/app/components/consul/hcp/home/index.scss b/ui/packages/consul-hcp/app/components/consul/hcp/home/index.scss new file mode 100644 index 000000000..7ae65f241 --- /dev/null +++ b/ui/packages/consul-hcp/app/components/consul/hcp/home/index.scss @@ -0,0 +1,11 @@ +.consul-hcp-home { + position: relative; + top: -22px; +} +.consul-hcp-home a::before { + content: ''; + --icon-name: icon-arrow-left; + --icon-size: icon-300; + margin-right: 8px; +} + diff --git a/ui/packages/consul-hcp/vendor/consul-hcp/services.js b/ui/packages/consul-hcp/vendor/consul-hcp/services.js index 159a7a96e..27f9d4a74 100644 --- a/ui/packages/consul-hcp/vendor/consul-hcp/services.js +++ b/ui/packages/consul-hcp/vendor/consul-hcp/services.js @@ -1,5 +1,7 @@ (services => services({ - + 'component:consul/hcp/home': { + class: 'consul-ui/components/consul/hcp/home', + }, 
}))( (json, data = (typeof document !== 'undefined' ? document.currentScript.dataset : module.exports)) => { data[`services`] = JSON.stringify(json); diff --git a/ui/packages/consul-ui/app/components/hashicorp-consul/index.hbs b/ui/packages/consul-ui/app/components/hashicorp-consul/index.hbs index 4d7a040ff..672985310 100644 --- a/ui/packages/consul-ui/app/components/hashicorp-consul/index.hbs +++ b/ui/packages/consul-ui/app/components/hashicorp-consul/index.hbs @@ -86,13 +86,14 @@ <:main-nav> +
    - + ul > li > a { +%main-nav-vertical a { @extend %main-nav-vertical-action; } %main-nav-vertical > ul > li.is-active > a { diff --git a/ui/packages/consul-ui/app/styles/base/icons/icons/index.scss b/ui/packages/consul-ui/app/styles/base/icons/icons/index.scss index 20f57edc7..8f8663c4c 100644 --- a/ui/packages/consul-ui/app/styles/base/icons/icons/index.scss +++ b/ui/packages/consul-ui/app/styles/base/icons/icons/index.scss @@ -2,7 +2,7 @@ @import './alert-circle-outline/index.scss'; @import './alert-triangle/index.scss'; // @import './arrow-down/index.scss'; -// @import './arrow-left/index.scss'; +@import './arrow-left/index.scss'; @import './arrow-right/index.scss'; // @import './arrow-up/index.scss'; // @import './bolt/index.scss'; diff --git a/ui/packages/consul-ui/app/styles/components.scss b/ui/packages/consul-ui/app/styles/components.scss index f94f14d44..4a7e7b9e0 100644 --- a/ui/packages/consul-ui/app/styles/components.scss +++ b/ui/packages/consul-ui/app/styles/components.scss @@ -109,3 +109,4 @@ @import 'consul-ui/components/consul/node/peer-info'; @import 'consul-ui/components/consul/peer/info'; @import 'consul-ui/components/consul/peer/form'; +@import 'consul-ui/components/consul/hcp/home'; diff --git a/ui/packages/consul-ui/vendor/consul-ui/services.js b/ui/packages/consul-ui/vendor/consul-ui/services.js index 13f2f054b..2b2258d52 100644 --- a/ui/packages/consul-ui/vendor/consul-ui/services.js +++ b/ui/packages/consul-ui/vendor/consul-ui/services.js @@ -18,6 +18,9 @@ 'component:consul/peer/selector': { class: '@glimmer/component', }, + 'component:consul/hcp/home': { + class: '@glimmer/component', + }, }))( ( json, From b9f0241d936c2b58e1f5f7f049db5ecda6702cb7 Mon Sep 17 00:00:00 2001 From: Kyle Schochenmaier Date: Thu, 1 Sep 2022 13:33:37 -0500 Subject: [PATCH 40/55] [docs] update docs for kube-1.24 support (#14339) * update docs for kube-1.24 support. Co-authored-by: Jeff Boruszak <104028618+boruszak@users.noreply.github.com> --- website/content/docs/k8s/connect/index.mdx | 26 ++++++++++++++++++++-- 1 file changed, 24 insertions(+), 2 deletions(-) diff --git a/website/content/docs/k8s/connect/index.mdx b/website/content/docs/k8s/connect/index.mdx index 7a3c472ca..c84729fe9 100644 --- a/website/content/docs/k8s/connect/index.mdx +++ b/website/content/docs/k8s/connect/index.mdx @@ -13,8 +13,8 @@ description: >- [Consul Service Mesh](/docs/connect) is a feature built into to Consul that enables automatic service-to-service authorization and connection encryption across your Consul services. Consul Service Mesh can be used with Kubernetes to secure pod -communication with other pods and external Kubernetes services. Consul Connect is used interchangeably with the name -Consul Service Mesh and is what will be used to refer to for Service Mesh functionality within Consul. +communication with other pods and external Kubernetes services. "Consul Connect" refers to the service mesh functionality within Consul and is used interchangeably with the name +"Consul Service Mesh." The Connect sidecar running Envoy can be automatically injected into pods in your cluster, making configuration for Kubernetes automatic. @@ -273,6 +273,27 @@ spec: `web` will target `containerPort` `8080` and select pods labeled `app: web`. `web-admin` will target `containerPort` `9090` and will also select the same pods. 
+~> Kubernetes 1.24+ only +In Kubernetes 1.24+ you need to [create a Kubernetes secret](https://kubernetes.io/docs/concepts/configuration/secret/#service-account-token-secrets) for each multi-port service that references the ServiceAccount, and the Kubernetes secret must have the same name as the ServiceAccount: + +```yaml +apiVersion: v1 +kind: Secret +metadata: + name: web + annotations: + kubernetes.io/service-account.name: web + type: kubernetes.io/service-account-token +--- +apiVersion: v1 +kind: Secret +metadata: + name: web-admin + annotations: + kubernetes.io/service-account.name: web-admin + type: kubernetes.io/service-account-token +``` + Create a Deployment with any chosen name, and use the following annotations: ```yaml consul.hashicorp.com/connect-inject: true @@ -355,6 +376,7 @@ The way this works is that a Consul service instance is being registered per por services in this case. An additional Envoy sidecar proxy and `connect-init` init container are also deployed per port in the Pod. So the upstream configuration can use the individual service names to reach each port as seen in the example. + #### Caveats for Multi-port Pods * Transparent proxy is not supported for multi-port Pods. * Metrics and metrics merging is not supported for multi-port Pods. From c5cbd45b7d884ff1b8595991c0fce35d1653f8d2 Mon Sep 17 00:00:00 2001 From: malizz Date: Thu, 1 Sep 2022 11:37:47 -0700 Subject: [PATCH 41/55] fix TestProxyConfigEntry (#14435) --- agent/structs/config_entry_test.go | 5 +++-- 1 file changed, 3 insertions(+), 2 deletions(-) diff --git a/agent/structs/config_entry_test.go b/agent/structs/config_entry_test.go index c08e82399..004f8b6fe 100644 --- a/agent/structs/config_entry_test.go +++ b/agent/structs/config_entry_test.go @@ -2959,8 +2959,9 @@ func TestProxyConfigEntry(t *testing.T) { Name: "", }, expected: &ProxyConfigEntry{ - Name: ProxyConfigGlobal, - Kind: ProxyDefaults, + Name: ProxyConfigGlobal, + Kind: ProxyDefaults, + EnterpriseMeta: *acl.DefaultEnterpriseMeta(), }, }, } From 3cfea70273e1cfe7673c4220a8356a884ce69c17 Mon Sep 17 00:00:00 2001 From: Luke Kysow <1034429+lkysow@users.noreply.github.com> Date: Thu, 1 Sep 2022 14:03:35 -0700 Subject: [PATCH 42/55] Use proxy address for default check (#14433) When a sidecar proxy is registered, a check is automatically added. Previously, the address this check used was the underlying service's address instead of the proxy's address, even though the check is testing if the proxy is up. This worked in most cases because the proxy ran on the same IP as the underlying service but it's not guaranteed and so the proper default address should be the proxy's address. --- .changelog/14433.txt | 3 + agent/agent_endpoint_test.go | 2 +- agent/sidecar_service.go | 24 ++++-- agent/sidecar_service_test.go | 135 ++++++++++++++++++++++++++++++++++ 4 files changed, 155 insertions(+), 9 deletions(-) create mode 100644 .changelog/14433.txt diff --git a/.changelog/14433.txt b/.changelog/14433.txt new file mode 100644 index 000000000..25167320c --- /dev/null +++ b/.changelog/14433.txt @@ -0,0 +1,3 @@ +```release-note:bug +checks: If set, use proxy address for automatically added sidecar check instead of service address. 
+``` diff --git a/agent/agent_endpoint_test.go b/agent/agent_endpoint_test.go index 67850f9eb..d380d0d93 100644 --- a/agent/agent_endpoint_test.go +++ b/agent/agent_endpoint_test.go @@ -3764,7 +3764,7 @@ func testAgent_RegisterService_TranslateKeys(t *testing.T, extraHCL string) { fmt.Println("TCP Check:= ", v) } if hasNoCorrectTCPCheck { - t.Fatalf("Did not find the expected TCP Healtcheck '%s' in %#v ", tt.expectedTCPCheckStart, a.checkTCPs) + t.Fatalf("Did not find the expected TCP Healthcheck '%s' in %#v ", tt.expectedTCPCheckStart, a.checkTCPs) } require.Equal(t, sidecarSvc, gotSidecar) }) diff --git a/agent/sidecar_service.go b/agent/sidecar_service.go index e0cb24a0e..a41d73d80 100644 --- a/agent/sidecar_service.go +++ b/agent/sidecar_service.go @@ -127,9 +127,20 @@ func (a *Agent) sidecarServiceFromNodeService(ns *structs.NodeService, token str if err != nil { return nil, nil, "", err } - // Setup default check if none given + // Setup default check if none given. if len(checks) < 1 { - checks = sidecarDefaultChecks(ns.ID, sidecar.Proxy.LocalServiceAddress, sidecar.Port) + // The check should use the sidecar's address because it makes a request to the sidecar. + // If the sidecar's address is empty, we fall back to the address of the local service, as set in + // sidecar.Proxy.LocalServiceAddress, in the hope that the proxy is also accessible on that address + // (which in most cases it is because it's running as a sidecar in the same network). + // We could instead fall back to the address of the service as set by (ns.Address), but I've kept it using + // sidecar.Proxy.LocalServiceAddress so as to not change things too much in the + // process of fixing #14433. + checkAddress := sidecar.Address + if checkAddress == "" { + checkAddress = sidecar.Proxy.LocalServiceAddress + } + checks = sidecarDefaultChecks(ns.ID, checkAddress, sidecar.Port) } return sidecar, checks, token, nil @@ -202,14 +213,11 @@ func (a *Agent) sidecarPortFromServiceID(sidecarCompoundServiceID structs.Servic return sidecarPort, nil } -func sidecarDefaultChecks(serviceID string, localServiceAddress string, port int) []*structs.CheckType { - // Setup default check if none given +func sidecarDefaultChecks(serviceID string, address string, port int) []*structs.CheckType { return []*structs.CheckType{ { - Name: "Connect Sidecar Listening", - // Default to localhost rather than agent/service public IP. The checks - // can always be overridden if a non-loopback IP is needed. 
- TCP: ipaddr.FormatAddressPort(localServiceAddress, port), + Name: "Connect Sidecar Listening", + TCP: ipaddr.FormatAddressPort(address, port), Interval: 10 * time.Second, }, { diff --git a/agent/sidecar_service_test.go b/agent/sidecar_service_test.go index f095670ff..39ab854a6 100644 --- a/agent/sidecar_service_test.go +++ b/agent/sidecar_service_test.go @@ -215,6 +215,141 @@ func TestAgent_sidecarServiceFromNodeService(t *testing.T) { token: "foo", wantErr: "reserved for internal use", }, + { + name: "uses proxy address for check", + sd: &structs.ServiceDefinition{ + ID: "web1", + Name: "web", + Port: 1111, + Connect: &structs.ServiceConnect{ + SidecarService: &structs.ServiceDefinition{ + Address: "123.123.123.123", + Proxy: &structs.ConnectProxyConfig{ + LocalServiceAddress: "255.255.255.255", + }, + }, + }, + Address: "255.255.255.255", + }, + token: "foo", + wantNS: &structs.NodeService{ + EnterpriseMeta: *structs.DefaultEnterpriseMetaInDefaultPartition(), + Kind: structs.ServiceKindConnectProxy, + ID: "web1-sidecar-proxy", + Service: "web-sidecar-proxy", + Port: 2222, + Address: "123.123.123.123", + LocallyRegisteredAsSidecar: true, + Proxy: structs.ConnectProxyConfig{ + DestinationServiceName: "web", + DestinationServiceID: "web1", + LocalServiceAddress: "255.255.255.255", + LocalServicePort: 1111, + }, + }, + wantChecks: []*structs.CheckType{ + { + Name: "Connect Sidecar Listening", + TCP: "123.123.123.123:2222", + Interval: 10 * time.Second, + }, + { + Name: "Connect Sidecar Aliasing web1", + AliasService: "web1", + }, + }, + wantToken: "foo", + }, + { + name: "uses proxy.local_service_address for check if proxy address is empty", + sd: &structs.ServiceDefinition{ + ID: "web1", + Name: "web", + Port: 1111, + Connect: &structs.ServiceConnect{ + SidecarService: &structs.ServiceDefinition{ + Address: "", // Proxy address empty. + Proxy: &structs.ConnectProxyConfig{ + LocalServiceAddress: "1.2.3.4", + }, + }, + }, + Address: "", // Service address empty. 
+ }, + token: "foo", + wantNS: &structs.NodeService{ + EnterpriseMeta: *structs.DefaultEnterpriseMetaInDefaultPartition(), + Kind: structs.ServiceKindConnectProxy, + ID: "web1-sidecar-proxy", + Service: "web-sidecar-proxy", + Port: 2222, + Address: "", + LocallyRegisteredAsSidecar: true, + Proxy: structs.ConnectProxyConfig{ + DestinationServiceName: "web", + DestinationServiceID: "web1", + LocalServiceAddress: "1.2.3.4", + LocalServicePort: 1111, + }, + }, + wantChecks: []*structs.CheckType{ + { + Name: "Connect Sidecar Listening", + TCP: "1.2.3.4:2222", + Interval: 10 * time.Second, + }, + { + Name: "Connect Sidecar Aliasing web1", + AliasService: "web1", + }, + }, + wantToken: "foo", + }, + { + name: "uses 127.0.0.1 for check if proxy and proxy.local_service_address are empty", + sd: &structs.ServiceDefinition{ + ID: "web1", + Name: "web", + Port: 1111, + Connect: &structs.ServiceConnect{ + SidecarService: &structs.ServiceDefinition{ + Address: "", + Proxy: &structs.ConnectProxyConfig{ + LocalServiceAddress: "", + }, + }, + }, + Address: "", + }, + token: "foo", + wantNS: &structs.NodeService{ + EnterpriseMeta: *structs.DefaultEnterpriseMetaInDefaultPartition(), + Kind: structs.ServiceKindConnectProxy, + ID: "web1-sidecar-proxy", + Service: "web-sidecar-proxy", + Port: 2222, + Address: "", + LocallyRegisteredAsSidecar: true, + Proxy: structs.ConnectProxyConfig{ + DestinationServiceName: "web", + DestinationServiceID: "web1", + LocalServiceAddress: "127.0.0.1", + LocalServicePort: 1111, + }, + }, + wantChecks: []*structs.CheckType{ + { + Name: "Connect Sidecar Listening", + TCP: "127.0.0.1:2222", + Interval: 10 * time.Second, + }, + { + Name: "Connect Sidecar Aliasing web1", + AliasService: "web1", + }, + }, + wantToken: "foo", + }, } for _, tt := range tests { t.Run(tt.name, func(t *testing.T) { From 1fe98bbe0b354c6991844d72251e55e726121c4d Mon Sep 17 00:00:00 2001 From: DanStough Date: Wed, 31 Aug 2022 12:58:41 -0400 Subject: [PATCH 43/55] feat(cli): add initial peering cli commands --- .changelog/14423.txt | 3 + command/flags/http.go | 4 + command/peering/delete/delete.go | 91 ++++++++++ command/peering/delete/delete_test.go | 70 ++++++++ command/peering/establish/establish.go | 109 ++++++++++++ command/peering/establish/establish_test.go | 127 ++++++++++++++ command/peering/generate/generate.go | 139 +++++++++++++++ command/peering/generate/generate_test.go | 141 +++++++++++++++ command/peering/list/list.go | 139 +++++++++++++++ command/peering/list/list_test.go | 133 ++++++++++++++ command/peering/peering.go | 69 ++++++++ command/peering/read/read.go | 164 ++++++++++++++++++ command/peering/read/read_test.go | 135 ++++++++++++++ command/registry.go | 12 ++ testrpc/wait.go | 7 +- website/content/api-docs/peering.mdx | 18 +- website/content/commands/index.mdx | 1 + website/content/commands/peering/delete.mdx | 50 ++++++ .../content/commands/peering/establish.mdx | 52 ++++++ .../commands/peering/generate-token.mdx | 68 ++++++++ website/content/commands/peering/index.mdx | 40 +++++ website/content/commands/peering/list.mdx | 47 +++++ website/content/commands/peering/read.mdx | 62 +++++++ .../cluster-peering/create-manage-peering.mdx | 86 +++++++++ website/data/commands-nav-data.json | 29 ++++ 25 files changed, 1780 insertions(+), 16 deletions(-) create mode 100644 .changelog/14423.txt create mode 100644 command/peering/delete/delete.go create mode 100644 command/peering/delete/delete_test.go create mode 100644 command/peering/establish/establish.go create mode 100644 
command/peering/establish/establish_test.go create mode 100644 command/peering/generate/generate.go create mode 100644 command/peering/generate/generate_test.go create mode 100644 command/peering/list/list.go create mode 100644 command/peering/list/list_test.go create mode 100644 command/peering/peering.go create mode 100644 command/peering/read/read.go create mode 100644 command/peering/read/read_test.go create mode 100644 website/content/commands/peering/delete.mdx create mode 100644 website/content/commands/peering/establish.mdx create mode 100644 website/content/commands/peering/generate-token.mdx create mode 100644 website/content/commands/peering/index.mdx create mode 100644 website/content/commands/peering/list.mdx create mode 100644 website/content/commands/peering/read.mdx diff --git a/.changelog/14423.txt b/.changelog/14423.txt new file mode 100644 index 000000000..fd4033945 --- /dev/null +++ b/.changelog/14423.txt @@ -0,0 +1,3 @@ +```release-note:feature +cli: Adds new subcommands for `peering` workflows. Refer to the [CLI docs](https://www.consul.io/commands/peering) for more information. +``` diff --git a/command/flags/http.go b/command/flags/http.go index 139ab7ed0..e82e024fb 100644 --- a/command/flags/http.go +++ b/command/flags/http.go @@ -98,6 +98,10 @@ func (f *HTTPFlags) Datacenter() string { return f.datacenter.String() } +func (f *HTTPFlags) Partition() string { + return f.partition.String() +} + func (f *HTTPFlags) Stale() bool { if f.stale.v == nil { return false diff --git a/command/peering/delete/delete.go b/command/peering/delete/delete.go new file mode 100644 index 000000000..cb9818900 --- /dev/null +++ b/command/peering/delete/delete.go @@ -0,0 +1,91 @@ +package delete + +import ( + "context" + "flag" + "fmt" + + "github.com/mitchellh/cli" + + "github.com/hashicorp/consul/api" + "github.com/hashicorp/consul/command/flags" +) + +func New(ui cli.Ui) *cmd { + c := &cmd{UI: ui} + c.init() + return c +} + +type cmd struct { + UI cli.Ui + flags *flag.FlagSet + http *flags.HTTPFlags + help string + + name string +} + +func (c *cmd) init() { + c.flags = flag.NewFlagSet("", flag.ContinueOnError) + + c.flags.StringVar(&c.name, "name", "", "(Required) The local name assigned to the peer cluster.") + + c.http = &flags.HTTPFlags{} + flags.Merge(c.flags, c.http.ClientFlags()) + flags.Merge(c.flags, c.http.PartitionFlag()) + c.help = flags.Usage(help, c.flags) +} + +func (c *cmd) Run(args []string) int { + if err := c.flags.Parse(args); err != nil { + return 1 + } + + if c.name == "" { + c.UI.Error("Missing the required -name flag") + return 1 + } + + client, err := c.http.APIClient() + if err != nil { + c.UI.Error(fmt.Sprintf("Error connecting to Consul agent: %s", err)) + return 1 + } + + peerings := client.Peerings() + + _, err = peerings.Delete(context.Background(), c.name, &api.WriteOptions{}) + if err != nil { + c.UI.Error(fmt.Sprintf("Error deleting peering for %s: %v", c.name, err)) + return 1 + } + + c.UI.Info(fmt.Sprintf("Successfully submitted peering connection, %s, for deletion", c.name)) + return 0 +} + +func (c *cmd) Synopsis() string { + return synopsis +} + +func (c *cmd) Help() string { + return flags.Usage(c.help, nil) +} + +const ( + synopsis = "Delete a peering connection" + help = ` +Usage: consul peering delete [options] -name + + Delete a peering connection. Consul deletes all data imported from the peer + in the background. The peering connection is removed after all associated + data has been deleted. 
Operators can still read the peering connections + while the data is being removed. A 'DeletedAt' field will be populated with + the timestamp of when the peering was marked for deletion. + + Example: + + $ consul peering delete -name west-dc +` +) diff --git a/command/peering/delete/delete_test.go b/command/peering/delete/delete_test.go new file mode 100644 index 000000000..984e773f5 --- /dev/null +++ b/command/peering/delete/delete_test.go @@ -0,0 +1,70 @@ +package delete + +import ( + "context" + "strings" + "testing" + + "github.com/mitchellh/cli" + "github.com/stretchr/testify/require" + + "github.com/hashicorp/consul/agent" + "github.com/hashicorp/consul/api" + "github.com/hashicorp/consul/testrpc" +) + +func TestDeleteCommand_noTabs(t *testing.T) { + t.Parallel() + + if strings.ContainsRune(New(cli.NewMockUi()).Help(), '\t') { + t.Fatal("help has tabs") + } +} + +func TestDeleteCommand(t *testing.T) { + if testing.Short() { + t.Skip("too slow for testing.Short") + } + + t.Parallel() + + acceptor := agent.NewTestAgent(t, ``) + t.Cleanup(func() { _ = acceptor.Shutdown() }) + + testrpc.WaitForTestAgent(t, acceptor.RPC, "dc1") + + acceptingClient := acceptor.Client() + + t.Run("name is required", func(t *testing.T) { + ui := cli.NewMockUi() + cmd := New(ui) + + args := []string{ + "-http-addr=" + acceptor.HTTPAddr(), + } + + code := cmd.Run(args) + require.Equal(t, 1, code, "err: %s", ui.ErrorWriter.String()) + require.Contains(t, ui.ErrorWriter.String(), "Missing the required -name flag") + }) + + t.Run("delete connection", func(t *testing.T) { + + req := api.PeeringGenerateTokenRequest{PeerName: "foo"} + _, _, err := acceptingClient.Peerings().GenerateToken(context.Background(), req, &api.WriteOptions{}) + require.NoError(t, err, "Could not generate peering token at acceptor") + + ui := cli.NewMockUi() + cmd := New(ui) + + args := []string{ + "-http-addr=" + acceptor.HTTPAddr(), + "-name=foo", + } + + code := cmd.Run(args) + require.Equal(t, 0, code) + output := ui.OutputWriter.String() + require.Contains(t, output, "Success") + }) +} diff --git a/command/peering/establish/establish.go b/command/peering/establish/establish.go new file mode 100644 index 000000000..14cd0e310 --- /dev/null +++ b/command/peering/establish/establish.go @@ -0,0 +1,109 @@ +package establish + +import ( + "context" + "flag" + "fmt" + + "github.com/mitchellh/cli" + + "github.com/hashicorp/consul/api" + "github.com/hashicorp/consul/command/flags" +) + +func New(ui cli.Ui) *cmd { + c := &cmd{UI: ui} + c.init() + return c +} + +type cmd struct { + UI cli.Ui + flags *flag.FlagSet + http *flags.HTTPFlags + help string + + name string + peeringToken string + meta map[string]string +} + +func (c *cmd) init() { + c.flags = flag.NewFlagSet("", flag.ContinueOnError) + + c.flags.StringVar(&c.name, "name", "", "(Required) The local name assigned to the peer cluster.") + + c.flags.StringVar(&c.peeringToken, "peering-token", "", "(Required) The peering token from the accepting cluster.") + + c.flags.Var((*flags.FlagMapValue)(&c.meta), "meta", + "Metadata to associate with the peering, formatted as key=value. 
This flag "+ + "may be specified multiple times to set multiple meta fields.") + + c.http = &flags.HTTPFlags{} + flags.Merge(c.flags, c.http.ClientFlags()) + flags.Merge(c.flags, c.http.PartitionFlag()) + c.help = flags.Usage(help, c.flags) +} + +func (c *cmd) Run(args []string) int { + if err := c.flags.Parse(args); err != nil { + return 1 + } + + if c.name == "" { + c.UI.Error("Missing the required -name flag") + return 1 + } + + if c.peeringToken == "" { + c.UI.Error("Missing the required -peering-token flag") + return 1 + } + + client, err := c.http.APIClient() + if err != nil { + c.UI.Error(fmt.Sprintf("Error connecting to Consul agent: %s", err)) + return 1 + } + + peerings := client.Peerings() + + req := api.PeeringEstablishRequest{ + PeerName: c.name, + PeeringToken: c.peeringToken, + Partition: c.http.Partition(), + Meta: c.meta, + } + + _, _, err = peerings.Establish(context.Background(), req, &api.WriteOptions{}) + if err != nil { + c.UI.Error(fmt.Sprintf("Error establishing peering for %s: %v", req.PeerName, err)) + return 1 + } + + c.UI.Info(fmt.Sprintf("Successfully established peering connection with %s", req.PeerName)) + return 0 +} + +func (c *cmd) Synopsis() string { + return synopsis +} + +func (c *cmd) Help() string { + return flags.Usage(c.help, nil) +} + +const ( + synopsis = "Establish a peering connection" + help = ` +Usage: consul peering establish [options] -name -peering-token + + Establish a peering connection. The name provided will be used locally by + this cluster to refer to the peering connection. The peering token can + only be used once to establish the connection. + + Example: + + $ consul peering establish -name west-dc -peering-token +` +) diff --git a/command/peering/establish/establish_test.go b/command/peering/establish/establish_test.go new file mode 100644 index 000000000..95e7da505 --- /dev/null +++ b/command/peering/establish/establish_test.go @@ -0,0 +1,127 @@ +package establish + +import ( + "context" + "fmt" + "strings" + "testing" + + "github.com/mitchellh/cli" + "github.com/stretchr/testify/require" + + "github.com/hashicorp/consul/agent" + "github.com/hashicorp/consul/api" + "github.com/hashicorp/consul/testrpc" +) + +func TestEstablishCommand_noTabs(t *testing.T) { + t.Parallel() + + if strings.ContainsRune(New(cli.NewMockUi()).Help(), '\t') { + t.Fatal("help has tabs") + } +} + +func TestEstablishCommand(t *testing.T) { + if testing.Short() { + t.Skip("too slow for testing.Short") + } + + t.Parallel() + + acceptor := agent.NewTestAgent(t, ``) + t.Cleanup(func() { _ = acceptor.Shutdown() }) + + dialer := agent.NewTestAgent(t, ``) + t.Cleanup(func() { _ = dialer.Shutdown() }) + + testrpc.WaitForTestAgent(t, acceptor.RPC, "dc1") + testrpc.WaitForTestAgent(t, dialer.RPC, "dc1") + + acceptingClient := acceptor.Client() + dialingClient := dialer.Client() + + t.Run("name is required", func(t *testing.T) { + ui := cli.NewMockUi() + cmd := New(ui) + + args := []string{ + "-http-addr=" + dialer.HTTPAddr(), + "-peering-token=1234abcde", + } + + code := cmd.Run(args) + require.Equal(t, 1, code, "err: %s", ui.ErrorWriter.String()) + require.Contains(t, ui.ErrorWriter.String(), "Missing the required -name flag") + }) + + t.Run("peering token is required", func(t *testing.T) { + ui := cli.NewMockUi() + cmd := New(ui) + + args := []string{ + "-http-addr=" + dialer.HTTPAddr(), + "-name=bar", + } + + code := cmd.Run(args) + require.Equal(t, 1, code, "err: %s", ui.ErrorWriter.String()) + require.Contains(t, ui.ErrorWriter.String(), "Missing the required 
-peering-token flag") + }) + + t.Run("establish connection", func(t *testing.T) { + // Grab the token from the acceptor + req := api.PeeringGenerateTokenRequest{PeerName: "foo"} + res, _, err := acceptingClient.Peerings().GenerateToken(context.Background(), req, &api.WriteOptions{}) + require.NoError(t, err, "Could not generate peering token at acceptor") + + ui := cli.NewMockUi() + cmd := New(ui) + + args := []string{ + "-http-addr=" + dialer.HTTPAddr(), + "-name=bar", + fmt.Sprintf("-peering-token=%s", res.PeeringToken), + } + + code := cmd.Run(args) + require.Equal(t, 0, code) + output := ui.OutputWriter.String() + require.Contains(t, output, "Success") + }) + + t.Run("establish connection with options", func(t *testing.T) { + // Grab the token from the acceptor + req := api.PeeringGenerateTokenRequest{PeerName: "foo"} + res, _, err := acceptingClient.Peerings().GenerateToken(context.Background(), req, &api.WriteOptions{}) + require.NoError(t, err, "Could not generate peering token at acceptor") + + ui := cli.NewMockUi() + cmd := New(ui) + + args := []string{ + "-http-addr=" + dialer.HTTPAddr(), + "-name=bar", + fmt.Sprintf("-peering-token=%s", res.PeeringToken), + "-meta=env=production", + "-meta=region=us-west-1", + } + + code := cmd.Run(args) + require.Equal(t, 0, code) + output := ui.OutputWriter.String() + require.Contains(t, output, "Success") + + //Meta + peering, _, err := dialingClient.Peerings().Read(context.Background(), "bar", &api.QueryOptions{}) + require.NoError(t, err) + + actual, ok := peering.Meta["env"] + require.True(t, ok) + require.Equal(t, "production", actual) + + actual, ok = peering.Meta["region"] + require.True(t, ok) + require.Equal(t, "us-west-1", actual) + }) +} diff --git a/command/peering/generate/generate.go b/command/peering/generate/generate.go new file mode 100644 index 000000000..cbbb23009 --- /dev/null +++ b/command/peering/generate/generate.go @@ -0,0 +1,139 @@ +package generate + +import ( + "context" + "encoding/json" + "flag" + "fmt" + "strings" + + "github.com/mitchellh/cli" + + "github.com/hashicorp/consul/api" + "github.com/hashicorp/consul/command/flags" + "github.com/hashicorp/consul/command/peering" +) + +func New(ui cli.Ui) *cmd { + c := &cmd{UI: ui} + c.init() + return c +} + +type cmd struct { + UI cli.Ui + flags *flag.FlagSet + http *flags.HTTPFlags + help string + + name string + externalAddresses []string + meta map[string]string + format string +} + +func (c *cmd) init() { + c.flags = flag.NewFlagSet("", flag.ContinueOnError) + + c.flags.StringVar(&c.name, "name", "", "(Required) The local name assigned to the peer cluster.") + + c.flags.Var((*flags.FlagMapValue)(&c.meta), "meta", + "Metadata to associate with the peering, formatted as key=value. This flag "+ + "may be specified multiple times to set multiple metadata fields.") + + c.flags.Var((*flags.AppendSliceValue)(&c.externalAddresses), "server-external-addresses", + "A list of addresses to put into the generated token, formatted as a comma-separate list. "+ + "Addresses are the form of :port. 
"+ + "This could be used to specify load balancer(s) or external IPs to reach the servers from "+ + "the dialing side, and will override any server addresses obtained from the \"consul\" service.") + + c.flags.StringVar( + &c.format, + "format", + peering.PeeringFormatPretty, + fmt.Sprintf("Output format {%s} (default: %s)", strings.Join(peering.GetSupportedFormats(), "|"), peering.PeeringFormatPretty), + ) + + c.http = &flags.HTTPFlags{} + flags.Merge(c.flags, c.http.ClientFlags()) + flags.Merge(c.flags, c.http.PartitionFlag()) + c.help = flags.Usage(help, c.flags) +} + +func (c *cmd) Run(args []string) int { + if err := c.flags.Parse(args); err != nil { + return 1 + } + + if c.name == "" { + c.UI.Error("Missing the required -name flag") + return 1 + } + + if !peering.FormatIsValid(c.format) { + c.UI.Error(fmt.Sprintf("Invalid format, valid formats are {%s}", strings.Join(peering.GetSupportedFormats(), "|"))) + return 1 + } + + client, err := c.http.APIClient() + if err != nil { + c.UI.Error(fmt.Sprintf("Error connect to Consul agent: %s", err)) + return 1 + } + + peerings := client.Peerings() + + req := api.PeeringGenerateTokenRequest{ + PeerName: c.name, + Partition: c.http.Partition(), + Meta: c.meta, + ServerExternalAddresses: c.externalAddresses, + } + + res, _, err := peerings.GenerateToken(context.Background(), req, &api.WriteOptions{}) + if err != nil { + c.UI.Error(fmt.Sprintf("Error generating peering token for %s: %v", req.PeerName, err)) + return 1 + } + + if c.format == peering.PeeringFormatJSON { + output, err := json.Marshal(res) + if err != nil { + c.UI.Error(fmt.Sprintf("Error marshalling JSON: %s", err)) + return 1 + } + c.UI.Output(string(output)) + return 0 + } + + c.UI.Info(res.PeeringToken) + return 0 +} + +func (c *cmd) Synopsis() string { + return synopsis +} + +func (c *cmd) Help() string { + return flags.Usage(c.help, nil) +} + +const ( + synopsis = "Generate a peering token" + help = ` +Usage: consul peering generate-token [options] -name + + Generate a peering token. The name provided will be used locally by + this cluster to refer to the peering connection. Re-generating a token + for a given name will not interrupt any active connection, but will + invalidate any unused token for that name. 
+ + Example: + + $ consul peering generate-token -name west-dc + + Example using a load balancer in front of Consul servers: + + $ consul peering generate-token -name west-dc -server-external-addresses load-balancer.elb.us-west-1.amazonaws.com:8502 +` +) diff --git a/command/peering/generate/generate_test.go b/command/peering/generate/generate_test.go new file mode 100644 index 000000000..c74597610 --- /dev/null +++ b/command/peering/generate/generate_test.go @@ -0,0 +1,141 @@ +package generate + +import ( + "context" + "encoding/base64" + "encoding/json" + "strings" + "testing" + + "github.com/mitchellh/cli" + "github.com/stretchr/testify/require" + + "github.com/hashicorp/consul/agent" + "github.com/hashicorp/consul/api" + "github.com/hashicorp/consul/testrpc" +) + +func TestGenerateCommand_noTabs(t *testing.T) { + t.Parallel() + + if strings.ContainsRune(New(cli.NewMockUi()).Help(), '\t') { + t.Fatal("help has tabs") + } +} + +func TestGenerateCommand(t *testing.T) { + if testing.Short() { + t.Skip("too slow for testing.Short") + } + + t.Parallel() + + a := agent.NewTestAgent(t, ``) + t.Cleanup(func() { _ = a.Shutdown() }) + testrpc.WaitForTestAgent(t, a.RPC, "dc1") + + client := a.Client() + + t.Run("name is required", func(t *testing.T) { + ui := cli.NewMockUi() + cmd := New(ui) + + args := []string{ + "-http-addr=" + a.HTTPAddr(), + } + + code := cmd.Run(args) + require.Equal(t, 1, code, "err: %s", ui.ErrorWriter.String()) + require.Contains(t, ui.ErrorWriter.String(), "Missing the required -name flag") + }) + + t.Run("invalid format", func(t *testing.T) { + ui := cli.NewMockUi() + cmd := New(ui) + + args := []string{ + "-http-addr=" + a.HTTPAddr(), + "-name=foo", + "-format=toml", + } + + code := cmd.Run(args) + require.Equal(t, 1, code, "exited successfully when it should have failed") + output := ui.ErrorWriter.String() + require.Contains(t, output, "Invalid format") + }) + + t.Run("generate token", func(t *testing.T) { + ui := cli.NewMockUi() + cmd := New(ui) + + args := []string{ + "-http-addr=" + a.HTTPAddr(), + "-name=foo", + } + + code := cmd.Run(args) + require.Equal(t, 0, code) + token, err := base64.StdEncoding.DecodeString(ui.OutputWriter.String()) + require.NoError(t, err, "error decoding token") + require.Contains(t, string(token), "\"ServerName\":\"server.dc1.consul\"") + }) + + t.Run("generate token with options", func(t *testing.T) { + ui := cli.NewMockUi() + cmd := New(ui) + + args := []string{ + "-http-addr=" + a.HTTPAddr(), + "-name=bar", + "-server-external-addresses=1.2.3.4,5.6.7.8", + "-meta=env=production", + "-meta=region=us-east-1", + } + + code := cmd.Run(args) + require.Equal(t, 0, code) + token, err := base64.StdEncoding.DecodeString(ui.OutputWriter.String()) + require.NoError(t, err, "error decoding token") + require.Contains(t, string(token), "\"ServerName\":\"server.dc1.consul\"") + + //ServerExternalAddresses + require.Contains(t, string(token), "1.2.3.4") + require.Contains(t, string(token), "5.6.7.8") + + //Meta + peering, _, err := client.Peerings().Read(context.Background(), "bar", &api.QueryOptions{}) + require.NoError(t, err) + + actual, ok := peering.Meta["env"] + require.True(t, ok) + require.Equal(t, "production", actual) + + actual, ok = peering.Meta["region"] + require.True(t, ok) + require.Equal(t, "us-east-1", actual) + }) + + t.Run("read with json", func(t *testing.T) { + + ui := cli.NewMockUi() + cmd := New(ui) + + args := []string{ + "-http-addr=" + a.HTTPAddr(), + "-name=baz", + "-format=json", + } + + code := cmd.Run(args) + 
require.Equal(t, 0, code) + output := ui.OutputWriter.Bytes() + + var outputRes api.PeeringGenerateTokenResponse + require.NoError(t, json.Unmarshal(output, &outputRes)) + + token, err := base64.StdEncoding.DecodeString(outputRes.PeeringToken) + require.NoError(t, err, "error decoding token") + require.Contains(t, string(token), "\"ServerName\":\"server.dc1.consul\"") + }) +} diff --git a/command/peering/list/list.go b/command/peering/list/list.go new file mode 100644 index 000000000..c445e3d57 --- /dev/null +++ b/command/peering/list/list.go @@ -0,0 +1,139 @@ +package list + +import ( + "context" + "encoding/json" + "flag" + "fmt" + "sort" + "strings" + + "github.com/mitchellh/cli" + "github.com/ryanuber/columnize" + + "github.com/hashicorp/consul/api" + "github.com/hashicorp/consul/command/flags" + "github.com/hashicorp/consul/command/peering" +) + +func New(ui cli.Ui) *cmd { + c := &cmd{UI: ui} + c.init() + return c +} + +type cmd struct { + UI cli.Ui + flags *flag.FlagSet + http *flags.HTTPFlags + help string + + format string +} + +func (c *cmd) init() { + c.flags = flag.NewFlagSet("", flag.ContinueOnError) + + c.flags.StringVar( + &c.format, + "format", + peering.PeeringFormatPretty, + fmt.Sprintf("Output format {%s} (default: %s)", strings.Join(peering.GetSupportedFormats(), "|"), peering.PeeringFormatPretty), + ) + + c.http = &flags.HTTPFlags{} + flags.Merge(c.flags, c.http.ClientFlags()) + flags.Merge(c.flags, c.http.PartitionFlag()) + c.help = flags.Usage(help, c.flags) +} + +func (c *cmd) Run(args []string) int { + if err := c.flags.Parse(args); err != nil { + return 1 + } + + if !peering.FormatIsValid(c.format) { + c.UI.Error(fmt.Sprintf("Invalid format, valid formats are {%s}", strings.Join(peering.GetSupportedFormats(), "|"))) + return 1 + } + + client, err := c.http.APIClient() + if err != nil { + c.UI.Error(fmt.Sprintf("Error connect to Consul agent: %s", err)) + return 1 + } + + peerings := client.Peerings() + + res, _, err := peerings.List(context.Background(), &api.QueryOptions{}) + if err != nil { + c.UI.Error("Error listing peerings") + return 1 + } + + list := peeringList(res) + sort.Sort(list) + + if c.format == peering.PeeringFormatJSON { + output, err := json.Marshal(list) + if err != nil { + c.UI.Error(fmt.Sprintf("Error marshalling JSON: %s", err)) + return 1 + } + c.UI.Output(string(output)) + return 0 + } + + if len(res) == 0 { + c.UI.Info(fmt.Sprintf("There are no peering connections.")) + return 0 + } + + result := make([]string, 0, len(list)) + header := "Name\x1fState\x1fImported Svcs\x1fExported Svcs\x1fMeta" + result = append(result, header) + for _, peer := range list { + metaPairs := make([]string, 0, len(peer.Meta)) + for k, v := range peer.Meta { + metaPairs = append(metaPairs, fmt.Sprintf("%s=%s", k, v)) + } + meta := strings.Join(metaPairs, ",") + line := fmt.Sprintf("%s\x1f%s\x1f%d\x1f%d\x1f%s", + peer.Name, peer.State, peer.ImportedServiceCount, peer.ExportedServiceCount, meta) + result = append(result, line) + } + + output := columnize.Format(result, &columnize.Config{Delim: string([]byte{0x1f})}) + c.UI.Output(output) + + return 0 +} + +func (c *cmd) Synopsis() string { + return synopsis +} + +func (c *cmd) Help() string { + return flags.Usage(c.help, nil) +} + +const ( + synopsis = "List peering connections" + help = ` +Usage: consul peering list [options] + + List all peering connections. The results will be filtered according + to ACL policy configuration. 
+ + Example: + + $ consul peering list +` +) + +// peeringList applies sort.Interface to a list of peering connections for sorting by name. +type peeringList []*api.Peering + +func (d peeringList) Len() int { return len(d) } +func (d peeringList) Less(i, j int) bool { return d[i].Name < d[j].Name } +func (d peeringList) Swap(i, j int) { d[i], d[j] = d[j], d[i] } diff --git a/command/peering/list/list_test.go b/command/peering/list/list_test.go new file mode 100644 index 000000000..06f9248b0 --- /dev/null +++ b/command/peering/list/list_test.go @@ -0,0 +1,133 @@ +package list + +import ( + "context" + "encoding/json" + "strings" + "testing" + + "github.com/mitchellh/cli" + "github.com/stretchr/testify/require" + + "github.com/hashicorp/consul/agent" + "github.com/hashicorp/consul/api" + "github.com/hashicorp/consul/testrpc" +) + +func TestListCommand_noTabs(t *testing.T) { + t.Parallel() + + if strings.ContainsRune(New(cli.NewMockUi()).Help(), '\t') { + t.Fatal("help has tabs") + } +} + +func TestListCommand(t *testing.T) { + if testing.Short() { + t.Skip("too slow for testing.Short") + } + + t.Parallel() + + acceptor := agent.NewTestAgent(t, ``) + t.Cleanup(func() { _ = acceptor.Shutdown() }) + + testrpc.WaitForTestAgent(t, acceptor.RPC, "dc1") + + acceptingClient := acceptor.Client() + + t.Run("invalid format", func(t *testing.T) { + ui := cli.NewMockUi() + cmd := New(ui) + + args := []string{ + "-http-addr=" + acceptor.HTTPAddr(), + "-format=toml", + } + + code := cmd.Run(args) + require.Equal(t, 1, code, "exited successfully when it should have failed") + output := ui.ErrorWriter.String() + require.Contains(t, output, "Invalid format") + }) + + t.Run("no results - pretty", func(t *testing.T) { + ui := cli.NewMockUi() + cmd := New(ui) + + args := []string{ + "-http-addr=" + acceptor.HTTPAddr(), + } + + code := cmd.Run(args) + require.Equal(t, 0, code) + output := ui.OutputWriter.String() + require.Contains(t, output, "no peering connections") + }) + + t.Run("two results for pretty print", func(t *testing.T) { + + generateReq := api.PeeringGenerateTokenRequest{PeerName: "foo"} + _, _, err := acceptingClient.Peerings().GenerateToken(context.Background(), generateReq, &api.WriteOptions{}) + require.NoError(t, err, "Could not generate peering token at acceptor for \"foo\"") + + generateReq = api.PeeringGenerateTokenRequest{PeerName: "bar"} + _, _, err = acceptingClient.Peerings().GenerateToken(context.Background(), generateReq, &api.WriteOptions{}) + require.NoError(t, err, "Could not generate peering token at acceptor for \"bar\"") + + ui := cli.NewMockUi() + cmd := New(ui) + + args := []string{ + "-http-addr=" + acceptor.HTTPAddr(), + } + + code := cmd.Run(args) + require.Equal(t, 0, code) + output := ui.OutputWriter.String() + require.Equal(t, 3, strings.Count(output, "\n")) // There should be three lines including the header + + lines := strings.Split(output, "\n") + + require.Contains(t, lines[0], "Name") + require.Contains(t, lines[1], "bar") + require.Contains(t, lines[2], "foo") + }) + + t.Run("no results - json", func(t *testing.T) { + ui := cli.NewMockUi() + cmd := New(ui) + + args := []string{ + "-http-addr=" + acceptor.HTTPAddr(), + "-format=json", + } + + code := cmd.Run(args) + require.Equal(t, 0, code) + output := ui.OutputWriter.String() + require.Contains(t, output, "[]") + }) + + t.Run("two results for JSON print", func(t *testing.T) { + + ui := cli.NewMockUi() + cmd := New(ui) + + args := []string{ + "-http-addr=" + acceptor.HTTPAddr(), + "-format=json", + } + + code := 
cmd.Run(args) + require.Equal(t, 0, code) + output := ui.OutputWriter.Bytes() + + var outputList []*api.Peering + require.NoError(t, json.Unmarshal(output, &outputList)) + + require.Len(t, outputList, 2) + require.Equal(t, "bar", outputList[0].Name) + require.Equal(t, "foo", outputList[1].Name) + }) +} diff --git a/command/peering/peering.go b/command/peering/peering.go new file mode 100644 index 000000000..1872f3738 --- /dev/null +++ b/command/peering/peering.go @@ -0,0 +1,69 @@ +package peering + +import ( + "github.com/mitchellh/cli" + + "github.com/hashicorp/consul/command/flags" +) + +const ( + PeeringFormatJSON = "json" + PeeringFormatPretty = "pretty" +) + +func GetSupportedFormats() []string { + return []string{PeeringFormatJSON, PeeringFormatPretty} +} + +func FormatIsValid(f string) bool { + return f == PeeringFormatPretty || f == PeeringFormatJSON +} + +func New() *cmd { + return &cmd{} +} + +type cmd struct{} + +func (c *cmd) Run(args []string) int { + return cli.RunResultHelp +} + +func (c *cmd) Synopsis() string { + return synopsis +} + +func (c *cmd) Help() string { + return flags.Usage(help, nil) +} + +const synopsis = "Create and manage peering connections between Consul clusters" +const help = ` +Usage: consul peering [options] [args] + + This command has subcommands for interacting with Cluster Peering + connections. Here are some simple examples, and more detailed + examples are available in the subcommands or the documentation. + + Generate a peering token: + + $ consul peering generate-token -name west-dc + + Establish a peering connection: + + $ consul peering establish -name east-dc -peering-token + + List all the local peering connections: + + $ consul peering list + + Print the status of a peering connection: + + $ consul peering read -name west-dc + + Delete and close a peering connection: + + $ consul peering delete -name west-dc + + For more examples, ask for subcommand help or view the documentation. 
+` diff --git a/command/peering/read/read.go b/command/peering/read/read.go new file mode 100644 index 000000000..c8340e19b --- /dev/null +++ b/command/peering/read/read.go @@ -0,0 +1,164 @@ +package read + +import ( + "bytes" + "context" + "encoding/json" + "flag" + "fmt" + "strings" + "time" + + "github.com/mitchellh/cli" + + "github.com/hashicorp/consul/api" + "github.com/hashicorp/consul/command/flags" + "github.com/hashicorp/consul/command/peering" +) + +func New(ui cli.Ui) *cmd { + c := &cmd{UI: ui} + c.init() + return c +} + +type cmd struct { + UI cli.Ui + flags *flag.FlagSet + http *flags.HTTPFlags + help string + + name string + format string +} + +func (c *cmd) init() { + c.flags = flag.NewFlagSet("", flag.ContinueOnError) + + c.flags.StringVar(&c.name, "name", "", "(Required) The local name assigned to the peer cluster.") + + c.flags.StringVar( + &c.format, + "format", + peering.PeeringFormatPretty, + fmt.Sprintf("Output format {%s} (default: %s)", strings.Join(peering.GetSupportedFormats(), "|"), peering.PeeringFormatPretty), + ) + + c.http = &flags.HTTPFlags{} + flags.Merge(c.flags, c.http.ClientFlags()) + flags.Merge(c.flags, c.http.PartitionFlag()) + c.help = flags.Usage(help, c.flags) +} + +func (c *cmd) Run(args []string) int { + if err := c.flags.Parse(args); err != nil { + return 1 + } + + if c.name == "" { + c.UI.Error("Missing the required -name flag") + return 1 + } + + if !peering.FormatIsValid(c.format) { + c.UI.Error(fmt.Sprintf("Invalid format, valid formats are {%s}", strings.Join(peering.GetSupportedFormats(), "|"))) + return 1 + } + + client, err := c.http.APIClient() + if err != nil { + c.UI.Error(fmt.Sprintf("Error connect to Consul agent: %s", err)) + return 1 + } + + peerings := client.Peerings() + + res, _, err := peerings.Read(context.Background(), c.name, &api.QueryOptions{}) + if err != nil { + c.UI.Error("Error reading peerings") + return 1 + } + + if res == nil { + c.UI.Error(fmt.Sprintf("No peering with name %s found.", c.name)) + return 1 + } + + if c.format == peering.PeeringFormatJSON { + output, err := json.Marshal(res) + if err != nil { + c.UI.Error(fmt.Sprintf("Error marshalling JSON: %s", err)) + return 1 + } + c.UI.Output(string(output)) + return 0 + } + + c.UI.Output(formatPeering(res)) + + return 0 +} + +func formatPeering(peering *api.Peering) string { + var buffer bytes.Buffer + + buffer.WriteString(fmt.Sprintf("Name: %s\n", peering.Name)) + buffer.WriteString(fmt.Sprintf("ID: %s\n", peering.ID)) + if peering.Partition != "" { + buffer.WriteString(fmt.Sprintf("Partition: %s\n", peering.Partition)) + } + if peering.DeletedAt != nil { + buffer.WriteString(fmt.Sprintf("DeletedAt: %s\n", peering.DeletedAt.Format(time.RFC3339))) + } + buffer.WriteString(fmt.Sprintf("State: %s\n", peering.State)) + if peering.Meta != nil && len(peering.Meta) > 0 { + buffer.WriteString("Meta:\n") + for k, v := range peering.Meta { + buffer.WriteString(fmt.Sprintf(" %s=%s\n", k, v)) + } + } + + buffer.WriteString("\n") + buffer.WriteString(fmt.Sprintf("Peer ID: %s\n", peering.PeerID)) + buffer.WriteString(fmt.Sprintf("Peer Server Name: %s\n", peering.PeerServerName)) + buffer.WriteString(fmt.Sprintf("Peer CA Pems: %d\n", len(peering.PeerCAPems))) + if peering.PeerServerAddresses != nil && len(peering.PeerServerAddresses) > 0 { + buffer.WriteString("Peer Server Addresses:\n") + for _, v := range peering.PeerServerAddresses { + buffer.WriteString(fmt.Sprintf(" %s", v)) + } + } + + buffer.WriteString("\n") + buffer.WriteString(fmt.Sprintf("Imported Services: %d\n", 
peering.ImportedServiceCount)) + buffer.WriteString(fmt.Sprintf("Exported Services: %d\n", peering.ExportedServiceCount)) + + buffer.WriteString("\n") + buffer.WriteString(fmt.Sprintf("Create Index: %d\n", peering.CreateIndex)) + buffer.WriteString(fmt.Sprintf("Modify Index: %d\n", peering.ModifyIndex)) + + return buffer.String() +} + +func (c *cmd) Synopsis() string { + return synopsis +} + +func (c *cmd) Help() string { + return flags.Usage(c.help, nil) +} + +const ( + synopsis = "Read a peering connection" + help = ` +Usage: consul peering read [options] -name + + Read a peering connection with the provided name. If one is not found, + the command will exit with a non-zero code. The result will be filtered according + to ACL policy configuration. + + Example: + + $ consul peering read -name west-dc +` +) diff --git a/command/peering/read/read_test.go b/command/peering/read/read_test.go new file mode 100644 index 000000000..fe19e1100 --- /dev/null +++ b/command/peering/read/read_test.go @@ -0,0 +1,135 @@ +package read + +import ( + "context" + "encoding/json" + "strings" + "testing" + + "github.com/mitchellh/cli" + "github.com/stretchr/testify/require" + + "github.com/hashicorp/consul/agent" + "github.com/hashicorp/consul/api" + "github.com/hashicorp/consul/testrpc" +) + +func TestReadCommand_noTabs(t *testing.T) { + t.Parallel() + + if strings.ContainsRune(New(cli.NewMockUi()).Help(), '\t') { + t.Fatal("help has tabs") + } +} + +func TestReadCommand(t *testing.T) { + if testing.Short() { + t.Skip("too slow for testing.Short") + } + + t.Parallel() + + acceptor := agent.NewTestAgent(t, ``) + t.Cleanup(func() { _ = acceptor.Shutdown() }) + + testrpc.WaitForTestAgent(t, acceptor.RPC, "dc1") + + acceptingClient := acceptor.Client() + + t.Run("no name flag", func(t *testing.T) { + ui := cli.NewMockUi() + cmd := New(ui) + + args := []string{ + "-http-addr=" + acceptor.HTTPAddr(), + } + + code := cmd.Run(args) + require.Equal(t, 1, code, "err: %s", ui.ErrorWriter.String()) + require.Contains(t, ui.ErrorWriter.String(), "Missing the required -name flag") + }) + + t.Run("invalid format", func(t *testing.T) { + ui := cli.NewMockUi() + cmd := New(ui) + + args := []string{ + "-http-addr=" + acceptor.HTTPAddr(), + "-name=foo", + "-format=toml", + } + + code := cmd.Run(args) + require.Equal(t, 1, code, "exited successfully when it should have failed") + output := ui.ErrorWriter.String() + require.Contains(t, output, "Invalid format") + }) + + t.Run("peering does not exist", func(t *testing.T) { + ui := cli.NewMockUi() + cmd := New(ui) + + args := []string{ + "-http-addr=" + acceptor.HTTPAddr(), + "-name=foo", + } + + code := cmd.Run(args) + require.Equal(t, 1, code, "err: %s", ui.ErrorWriter.String()) + require.Contains(t, ui.ErrorWriter.String(), "No peering with name") + }) + + t.Run("read with pretty print", func(t *testing.T) { + + generateReq := api.PeeringGenerateTokenRequest{ + PeerName: "foo", + Meta: map[string]string{ + "env": "production", + }, + } + _, _, err := acceptingClient.Peerings().GenerateToken(context.Background(), generateReq, &api.WriteOptions{}) + require.NoError(t, err, "Could not generate peering token at acceptor for \"foo\"") + + ui := cli.NewMockUi() + cmd := New(ui) + + args := []string{ + "-http-addr=" + acceptor.HTTPAddr(), + "-name=foo", + } + + code := cmd.Run(args) + require.Equal(t, 0, code) + output := ui.OutputWriter.String() + require.Greater(t, strings.Count(output, "\n"), 0) // Checking for some kind of empty output + + // Spot check some fields and values + 
require.Contains(t, output, "foo") + require.Contains(t, output, api.PeeringStatePending) + require.Contains(t, output, "env=production") + require.Contains(t, output, "Imported Services") + require.Contains(t, output, "Exported Services") + }) + + t.Run("read with json", func(t *testing.T) { + + ui := cli.NewMockUi() + cmd := New(ui) + + args := []string{ + "-http-addr=" + acceptor.HTTPAddr(), + "-name=foo", + "-format=json", + } + + code := cmd.Run(args) + require.Equal(t, 0, code) + output := ui.OutputWriter.Bytes() + + var outputPeering api.Peering + require.NoError(t, json.Unmarshal(output, &outputPeering)) + + require.Equal(t, "foo", outputPeering.Name) + require.Equal(t, "production", outputPeering.Meta["env"]) + }) +} diff --git a/command/registry.go b/command/registry.go index 28e441e87..b35ac2e42 100644 --- a/command/registry.go +++ b/command/registry.go @@ -96,6 +96,12 @@ import ( operraft "github.com/hashicorp/consul/command/operator/raft" operraftlist "github.com/hashicorp/consul/command/operator/raft/listpeers" operraftremove "github.com/hashicorp/consul/command/operator/raft/removepeer" + "github.com/hashicorp/consul/command/peering" + peerdelete "github.com/hashicorp/consul/command/peering/delete" + peerestablish "github.com/hashicorp/consul/command/peering/establish" + peergenerate "github.com/hashicorp/consul/command/peering/generate" + peerlist "github.com/hashicorp/consul/command/peering/list" + peerread "github.com/hashicorp/consul/command/peering/read" "github.com/hashicorp/consul/command/reload" "github.com/hashicorp/consul/command/rtt" "github.com/hashicorp/consul/command/services" @@ -214,6 +220,12 @@ func RegisteredCommands(ui cli.Ui) map[string]mcli.CommandFactory { entry{"operator raft", func(cli.Ui) (cli.Command, error) { return operraft.New(), nil }}, entry{"operator raft list-peers", func(ui cli.Ui) (cli.Command, error) { return operraftlist.New(ui), nil }}, entry{"operator raft remove-peer", func(ui cli.Ui) (cli.Command, error) { return operraftremove.New(ui), nil }}, + entry{"peering", func(cli.Ui) (cli.Command, error) { return peering.New(), nil }}, + entry{"peering delete", func(ui cli.Ui) (cli.Command, error) { return peerdelete.New(ui), nil }}, + entry{"peering generate-token", func(ui cli.Ui) (cli.Command, error) { return peergenerate.New(ui), nil }}, + entry{"peering establish", func(ui cli.Ui) (cli.Command, error) { return peerestablish.New(ui), nil }}, + entry{"peering list", func(ui cli.Ui) (cli.Command, error) { return peerlist.New(ui), nil }}, + entry{"peering read", func(ui cli.Ui) (cli.Command, error) { return peerread.New(ui), nil }}, entry{"reload", func(ui cli.Ui) (cli.Command, error) { return reload.New(ui), nil }}, entry{"rtt", func(ui cli.Ui) (cli.Command, error) { return rtt.New(ui), nil }}, entry{"services", func(cli.Ui) (cli.Command, error) { return services.New(), nil }}, diff --git a/testrpc/wait.go b/testrpc/wait.go index d6b72749e..39e3d6592 100644 --- a/testrpc/wait.go +++ b/testrpc/wait.go @@ -11,7 +11,9 @@ import ( type rpcFn func(string, interface{}, interface{}) error -// WaitForLeader ensures we have a leader and a node registration. +// WaitForLeader ensures we have a leader and a node registration. It +// does not wait for the Consul (node) service to be ready. Use `WaitForTestAgent` +// to make sure the Consul service is ready. // // Most uses of this would be better served in the agent/consul package by // using waitForLeaderEstablishment() instead. 
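The peering CLI tests added in this patch follow the usage this comment describes; a minimal sketch of that pattern (the package and test names below are illustrative only, not part of the patch) looks like:

```go
package example_test

import (
	"testing"

	"github.com/hashicorp/consul/agent"
	"github.com/hashicorp/consul/testrpc"
)

// A test that needs the Consul (node) service to be ready waits with
// WaitForTestAgent rather than WaitForLeader.
func TestExample_waitForTestAgent(t *testing.T) {
	a := agent.NewTestAgent(t, ``)
	t.Cleanup(func() { _ = a.Shutdown() })

	// Blocks until the node and its serfHealth check are registered.
	testrpc.WaitForTestAgent(t, a.RPC, "dc1")
}
```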
@@ -91,7 +93,8 @@ func flattenOptions(options []waitOption) waitOption { return flat } -// WaitForTestAgent ensures we have a node with serfHealth check registered +// WaitForTestAgent ensures we have a node with serfHealth check registered. +// You'll want to use this if you expect the Consul (node) service to be ready. func WaitForTestAgent(t *testing.T, rpc rpcFn, dc string, options ...waitOption) { t.Helper() diff --git a/website/content/api-docs/peering.mdx b/website/content/api-docs/peering.mdx index ef50fcb87..102161319 100644 --- a/website/content/api-docs/peering.mdx +++ b/website/content/api-docs/peering.mdx @@ -42,12 +42,9 @@ The table below shows this endpoint's support for - `Partition` `(string: "")` - The admin partition that the peering token is generated from. Uses `default` when not specified. -- `Datacenter` `(string: "")` - Specifies the datacenter where the peering token is generated. Defaults to the - agent's datacenter when not specified. - -- `Token` `(string: "")` - Specifies the ACL token to use in the request. Takes precedence - over the token specified in the `token` query parameter, `X-Consul-Token` request header, - and `CONSUL_HTTP_TOKEN` environment variable. +- `ServerExternalAddresses` `([]string: )` - A list of addresses to put +into the generated token. Addresses are the form of `{host or IP}:port`. +You can specify one or more load balancers or external IPs that route external traffic to this cluster's Consul servers. - `Meta` `(map: )` - Specifies KV metadata to associate with the peering. This parameter is not required and does not directly impact the cluster @@ -116,13 +113,6 @@ The table below shows this endpoint's support for - `PeeringToken` `(string: )` - The peering token fetched from the peer cluster. -- `Datacenter` `(string: "")` - Specifies the datacenter where the peering token is generated. Defaults to the - agent's datacenter when not specified. - -- `Token` `(string: "")` - Specifies the ACL token to use in the request. Takes precedence - over the token specified in the `token` query parameter, `X-Consul-Token` request header, - and `CONSUL_HTTP_TOKEN` environment variable. - - `Meta` `(map: )` - Specifies KV metadata to associate with the peering. This parameter is not required and does not directly impact the cluster peering process. @@ -314,6 +304,6 @@ $ curl --header "X-Consul-Token: 0137db51-5895-4c25-b6cd-d9ed992f4a52" \ }, "CreateIndex": 109, "ModifyIndex": 119 - }, + } ] ``` diff --git a/website/content/commands/index.mdx b/website/content/commands/index.mdx index 7b0b9c2b0..d805d4eb2 100644 --- a/website/content/commands/index.mdx +++ b/website/content/commands/index.mdx @@ -50,6 +50,7 @@ Available commands are: members Lists the members of a Consul cluster monitor Stream logs from a Consul agent operator Provides cluster-level tools for Consul operators + peering Create and manage peering connections between Consul clusters reload Triggers the agent to reload configuration files rtt Estimates network round trip time between nodes services Interact with services diff --git a/website/content/commands/peering/delete.mdx b/website/content/commands/peering/delete.mdx new file mode 100644 index 000000000..04a7e16ba --- /dev/null +++ b/website/content/commands/peering/delete.mdx @@ -0,0 +1,50 @@ +--- +layout: commands +page_title: 'Commands: Peering Delete' +description: Learn how to use the consul peering delete command to remove a peering connection between Consul clusters. 
+--- + +# Consul Peering Delete + +Command: `consul peering delete` + +Corresponding HTTP API Endpoint: [\[DELETE\] /v1/peering/:name](/api-docs/peering#delete-a-peering-connection) + +The `peering delete` removes a peering connection with another cluster. +Consul deletes all data imported from the peer in the background. +The peering connection is removed after all associated data has been deleted. +Operators can still read the peering connections while the data is being removed. +The command adds a `DeletedAt` field to the peering connection object with the timestamp of when the peering was marked for deletion. +You can only use a peering token to establish the connection once. If you need to reestablish a peering connection, you must generate a new token. + +The table below shows this command's [required ACLs](/api#authentication). + +| ACL Required | +| ------------ | +| `peering:write` | + +## Usage + +Usage: `consul peering delete [options] -name ` + +#### Command Options + +- `-name=` - (Required) The name of the peer. + +#### Enterprise Options + +@include 'http_api_partition_options.mdx' + +#### API Options + +@include 'http_api_options_client.mdx' + +## Examples + +The following examples deletes a peering connection to a cluster locally referred to as "cluster-02": + +```shell-session hideClipboard +$ consul peering delete -name cluster-02 +Successfully submitted peering connection, cluster-02, for deletion +``` + diff --git a/website/content/commands/peering/establish.mdx b/website/content/commands/peering/establish.mdx new file mode 100644 index 000000000..d9906e068 --- /dev/null +++ b/website/content/commands/peering/establish.mdx @@ -0,0 +1,52 @@ +--- +layout: commands +page_title: 'Commands: Peering Establish' +description: Learn how to use the consul peering establish command to establish a peering connection between Consul clusters. +--- + +# Consul Peering Establish + +Command: `consul peering establish` + +Corresponding HTTP API Endpoint: [\[POST\] /v1/peering/establish](/api-docs/peering#establish-a-peering-connection) + +The `peering establish` starts a peering connection with the cluster that generated the peering token. +You can generate cluster peering tokens using the [`consul peering generate-token`](/commands/operator/generate-token) command or the [HTTP API](https://www.consul.io/api-docs/peering#generate-a-peering-token). + +You can only use a peering token to establish the connection once. If you need to reestablish a peering connection, you must generate a new token. + +The table below shows this command's [required ACLs](/api#authentication). + +| ACL Required | +| ------------ | +| `peering:write` | + +## Usage + +Usage: `consul peering establish [options] -name -peering-token ` + +#### Command Options + +- `-name=` - (Required) Specifies a local name for the cluster you are establishing a connection with. The `name` is only used to identify the connection with the peer. + +- `-peering-token=` - (Required) Specifies the peering token from the cluster that generated the token. + +- `-meta==` - Specifies key/value pairs to associate with the peering connection in `-meta="key"="value"` format. You can use the flag multiple times to set multiple metadata fields. 
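The `-meta` flag may be repeated to attach several key/value pairs at once. For example, the following command attaches environment and region metadata while establishing the connection; the token value is a truncated placeholder:

```shell-session hideClipboard
$ consul peering establish -name cluster-01 \
    -peering-token eyJDQSI6bnVs...5Yi0wNzk5NTA1YTRmYjYifQ== \
    -meta=env=production -meta=region=us-west-1
Successfully established peering connection with cluster-01
```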
+ +#### Enterprise Options + +@include 'http_api_partition_options.mdx' + +#### API Options + +@include 'http_api_options_client.mdx' + +## Examples + +The following examples establishes a peering connection with a cluster locally referred to as "cluster-01": + +```shell-session hideClipboard +$ consul peering establish -name cluster-01 -peering-token eyJDQSI6bnVs...5Yi0wNzk5NTA1YTRmYjYifQ== +Successfully established peering connection with cluster-01 +``` + diff --git a/website/content/commands/peering/generate-token.mdx b/website/content/commands/peering/generate-token.mdx new file mode 100644 index 000000000..961122fc6 --- /dev/null +++ b/website/content/commands/peering/generate-token.mdx @@ -0,0 +1,68 @@ +--- +layout: commands +page_title: 'Commands: Peering Generate Token' +description: Learn how to use the consul peering generate-token command to generate token that enables you to peer Consul clusters. +--- + +# Consul Peering Generate Token + +Command: `consul peering generate-token` + +Corresponding HTTP API Endpoint: [\[POST\] /v1/peering/token](/api-docs/peering#generate-a-peering-token) + +The `peering generate-token` generates a peering token. The token is base 64-encoded string containing the token details. +This token should be transferred to the other cluster being peered and consumed using [`consul peering establish`](/commands/peering/establish). + +Generating a token and specifying the same local name associated with a previously-generated token does not affect active connections established with the original token. If the previously-generated token is not actively being used for a peer connection, however, it will become invalid when the new token with the same local name is generated. + +The table below shows this command's [required ACLs](/api#authentication). + +| ACL Required | +| ------------ | +| `peering:write` | + +## Usage + +Usage: `consul peering generate-token [options] -name ` + +#### Command Options + +- `-name=` - (Required) Specifies a local name for the cluster that the token is intended for. +The `name` is only used to identify the connection with the peer. +Generating a token and specifying the same local name associated with a previously-generated token does not affect active connections established with the original token. +If the previously-generated token is not actively being used for a peer connection, however, it will become invalid when the new token with the same local name is generated. + +- `-meta==` - Specifies key/value pairs to associate with the peering connection token in `-meta="key"="value"` format. You can use the flag multiple times to set multiple metadata fields. + +<<<<<<< HEAD +- `-server-external-addresses=[,string,...]` - Specifies a comma-separated list of addresses +to put into the generated token. Addresses are of the form of `{host or IP}:port`. +You can specify one or more load balancers or external IPs that route external traffic to this cluster's Consul servers. + +- `-format={pretty|json}` - Command output format. The default value is `pretty`. 
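With `-format=json`, the command prints the full token response as a JSON object rather than the bare token, with the base64-encoded token carried in the `PeeringToken` field. An abbreviated example, where the token value is a truncated placeholder:

```shell-session hideClipboard
$ consul peering generate-token -name cluster-02 -format=json
{"PeeringToken":"eyJDQSI6bnVs...5Yi0wNzk5NTA1YTRmYjYifQ=="}
```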
+ +#### Enterprise Options + +@include 'http_api_partition_options.mdx' + +#### API Options + +@include 'http_api_options_client.mdx' + +## Examples + +The following example generates a peering token for a cluster called "cluster-02": + +```shell-session hideClipboard +$ consul peering generate-token -name cluster-02 +eyJDQSI6bnVs...5Yi0wNzk5NTA1YTRmYjYifQ== +``` + +### Using a Load Balancer for Consul Servers + +The following example generates a token for a cluster where servers are proxied by a load balancer: + +```shell-session hideClipboard +$ consul peering generate-token -server-external-addresses my-load-balancer-1234567890abcdef.elb.us-east-2.amazonaws.com -name cluster-02 +eyJDQSI6bnVs...5Yi0wNzk5NTA1YTRmYjYifQ== +``` diff --git a/website/content/commands/peering/index.mdx b/website/content/commands/peering/index.mdx new file mode 100644 index 000000000..47311a444 --- /dev/null +++ b/website/content/commands/peering/index.mdx @@ -0,0 +1,40 @@ +--- +layout: commands +page_title: 'Commands: Peering' +--- + +# Consul Peering + +Command: `consul peering` + +Use the `peering` command to create and manage peering connections between Consul clusters, including token generation and consumption. Refer to +[Create and Manage Peerings Connections](/docs/connect/cluster-peering/create-manage-peering) for an +overview of the CLI workflow for cluster peering. + +## Usage + +```text +Usage: consul peering [options] + + # ... + +Subcommands: + + delete Close and delete a peering connection + establish Consume a peering token and establish a connection with the accepting cluster + generate-token Generate a peering token for use by a dialing cluster + list List the local cluster's peering connections + read Read detailed information on a peering connection +``` + +For more information, examples, and usage about a subcommand, click on the name +of the subcommand in the sidebar or one of the links below: + +- [delete](/commands/peering/delete) +- [establish](/commands/peering/establish) +- [generate-token](/commands/peering/generate-token) +- [list](/commands/peering/list) +- [read](/commands/peering/read) + + + diff --git a/website/content/commands/peering/list.mdx b/website/content/commands/peering/list.mdx new file mode 100644 index 000000000..27f9f748f --- /dev/null +++ b/website/content/commands/peering/list.mdx @@ -0,0 +1,47 @@ +--- +layout: commands +page_title: 'Commands: Peering List' +--- + +# Consul Peering List + +Command: `consul peering List` + +Corresponding HTTP API Endpoint: [\[GET\] /v1/peerings](/api-docs/peering#list-all-peerings) + +The `peering list` lists all peering connections. +The results are filtered according to ACL policy configuration. + +The table below shows this command's [required ACLs](/api#authentication). + +| ACL Required | +| ------------ | +| `peering:read` | + +## Usage + +Usage: `consul peering list [options]` + +#### Command Options + +- `-format={pretty|json}` - Command output format. The default value is `pretty`. 
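With `-format=json`, the command emits the peering connections as a JSON array of peering objects, and an empty result is printed as `[]`. An abbreviated example, trimmed to two fields for readability:

```shell-session hideClipboard
$ consul peering list -format=json
[{"Name":"cluster-02","State":"ACTIVE"},{"Name":"cluster-03","State":"PENDING"}]
```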
+
+#### Enterprise Options
+
+@include 'http_api_partition_options.mdx'
+
+#### API Options
+
+@include 'http_api_options_client.mdx'
+
+## Examples
+
+The following example lists all peering connections associated with the cluster:
+
+```shell-session hideClipboard
+$ consul peering list
+Name        State    Imported Svcs  Exported Svcs  Meta
+cluster-02  ACTIVE   0              2              env=production
+cluster-03  PENDING  0              0
+```
+
diff --git a/website/content/commands/peering/read.mdx b/website/content/commands/peering/read.mdx
new file mode 100644
index 000000000..59d3cc74f
--- /dev/null
+++ b/website/content/commands/peering/read.mdx
@@ -0,0 +1,62 @@
+---
+layout: commands
+page_title: 'Commands: Peering Read'
+---
+
+# Consul Peering Read
+
+Command: `consul peering read`
+
+Corresponding HTTP API Endpoint: [\[GET\] /v1/peering/:name](/api-docs/peering#read-a-peering-connection)
+
+The `peering read` command displays information on the status of a peering connection.
+
+The table below shows this command's [required ACLs](/api#authentication).
+
+| ACL Required |
+| ------------ |
+| `peering:read` |
+
+## Usage
+
+Usage: `consul peering read [options] -name `
+
+#### Command Options
+
+- `-name=` - (Required) The name of the peer associated with a connection that you want to read.
+
+- `-format={pretty|json}` - Command output format. The default value is `pretty`.
+
+#### Enterprise Options
+
+@include 'http_api_partition_options.mdx'
+
+#### API Options
+
+@include 'http_api_options_client.mdx'
+
+## Examples
+
+The following example outputs information about a peering connection locally referred to as "cluster-02":
+
+```shell-session hideClipboard
+$ consul peering read -name cluster-02
+Name:         cluster-02
+ID:           3b001063-8079-b1a6-764c-738af5a39a97
+State:        ACTIVE
+Meta:
+    env=production
+
+Peer ID:               e83a315c-027e-bcb1-7c0c-a46650904a05
+Peer Server Name:      server.dc1.consul
+Peer CA Pems:          0
+Peer Server Addresses:
+    10.0.0.1:8300
+
+Imported Services: 0
+Exported Services: 2
+
+Create Index: 89
+Modify Index: 89
+```
+
diff --git a/website/content/docs/connect/cluster-peering/create-manage-peering.mdx b/website/content/docs/connect/cluster-peering/create-manage-peering.mdx
index ee0a69a94..5af8324e2 100644
--- a/website/content/docs/connect/cluster-peering/create-manage-peering.mdx
+++ b/website/content/docs/connect/cluster-peering/create-manage-peering.mdx
@@ -57,6 +57,19 @@ Create a JSON file that contains the first cluster's name and the peering token.
+
+
+In `cluster-01`, use the [`consul peering generate-token` command](/commands/peering/generate-token) to issue a request for a peering token.
+
+```shell-session
+$ consul peering generate-token -name cluster-02
+```
+
+The CLI outputs the peering token, which is a base64-encoded string containing the token details.
+Save this value to a file or clipboard to be used in the next step on `cluster-02`.
+
+
+
 1. In the Consul UI for the datacenter associated with `cluster-01`, click **Peers**.
@@ -88,6 +101,25 @@ You can dial the `peering/establish` endpoint once per peering token. Peering to
+
+
+In one of the client agents in "cluster-02," issue the [`consul peering establish` command](/commands/peering/establish) and specify the token generated in the previous step. The command establishes the peering connection.
+The command prints "Successfully established peering connection with cluster-01" after the connection is established.
+
+```shell-session
+$ consul peering establish -name cluster-01 -peering-token token-from-generate
+```
+
+When you connect server agents through cluster peering, they peer their default partitions.
+To establish peering connections for other partitions through server agents, you must add the `-partition` flag to the `establish` command and specify the partitions you want to peer.
+For additional configuration information, refer to the [`consul peering establish` command](/commands/peering/establish).
+
+You can run the `peering establish` command once per peering token.
+Peering tokens cannot be reused after being used to establish a connection.
+If you need to re-establish a connection, you must generate a new peering token.
+
+
+
 1. In the Consul UI for the datacenter associated with `cluster 02`, click **Peers** and then **Add peer connection**.
@@ -213,6 +245,20 @@ $ curl http://127.0.0.1:8500/v1/peerings
 ```
+
+
+After you establish a peering connection, run the [`consul peering list`](/commands/peering/list) command to get a list of all peering connections.
+For example, the following command requests a list of all peering connections and returns the information in a table:
+
+```shell-session
+$ consul peering list
+
+Name        State    Imported Svcs  Exported Svcs  Meta
+cluster-02  ACTIVE   0              2              env=production
+cluster-03  PENDING  0              0
+```
+
 In the Consul UI, click **Peers**. The UI lists peering connections you created for clusters in a datacenter.
@@ -248,6 +294,35 @@ $ curl http://127.0.0.1:8500/v1/peering/cluster-02
 ```
+
+
+After you establish a peering connection, run the [`consul peering read`](/commands/peering/read) command to get peering information about a specific cluster.
+For example, the following command requests peering connection information for "cluster-02":
+
+```shell-session
+$ consul peering read -name cluster-02
+
+Name:         cluster-02
+ID:           3b001063-8079-b1a6-764c-738af5a39a97
+State:        ACTIVE
+Meta:
+    env=production
+
+Peer ID:               e83a315c-027e-bcb1-7c0c-a46650904a05
+Peer Server Name:      server.dc1.consul
+Peer CA Pems:          0
+Peer Server Addresses:
+    10.0.0.1:8300
+
+Imported Services: 0
+Exported Services: 2
+
+Create Index: 89
+Modify Index: 89
+
+```
+
 In the Consul UI, click **Peers**. The UI lists peering connections you created for clusters in that datacenter. Click the name of a peered cluster to view additional details about the peering connection.
@@ -281,6 +356,17 @@ $ curl --request DELETE http://127.0.0.1:8500/v1/peering/cluster-02
 ```
+
+In "cluster-01," request the deletion through the [`consul peering delete`](/commands/peering/delete) command.
+
+```shell-session
+$ consul peering delete -name cluster-02
+
+Successfully submitted peering connection, cluster-02, for deletion
+```
+
 In the Consul UI, click **Peers**. The UI lists peering connections you created for clusters in that datacenter.
diff --git a/website/data/commands-nav-data.json b/website/data/commands-nav-data.json index 565851223..3a3bb0609 100644 --- a/website/data/commands-nav-data.json +++ b/website/data/commands-nav-data.json @@ -436,6 +436,35 @@ "title": "partition", "path": "partition" }, + { + "title": "peering", + "routes": [ + { + "title": "Overview", + "path": "peering" + }, + { + "title": "delete", + "path": "peering/delete" + }, + { + "title": "establish", + "path": "peering/establish" + }, + { + "title": "generate-token", + "path": "peering/generate-token" + }, + { + "title": "list", + "path": "peering/list" + }, + { + "title": "read", + "path": "peering/read" + } + ] + }, { "title": "reload", "path": "reload" From 90fa16c8b5e158863f87371e224a35d5db71d547 Mon Sep 17 00:00:00 2001 From: Kyle Havlovitz Date: Thu, 1 Sep 2022 14:24:30 -0700 Subject: [PATCH 44/55] Prune intermediates before appending new one --- agent/consul/leader_connect_ca.go | 13 ++++++++----- 1 file changed, 8 insertions(+), 5 deletions(-) diff --git a/agent/consul/leader_connect_ca.go b/agent/consul/leader_connect_ca.go index d2cd02113..fe780f7f2 100644 --- a/agent/consul/leader_connect_ca.go +++ b/agent/consul/leader_connect_ca.go @@ -1098,9 +1098,13 @@ func setLeafSigningCert(caRoot *structs.CARoot, pem string) error { return fmt.Errorf("error parsing leaf signing cert: %w", err) } + if err := pruneExpiredIntermediates(caRoot); err != nil { + return err + } + caRoot.IntermediateCerts = append(caRoot.IntermediateCerts, pem) caRoot.SigningKeyID = connect.EncodeSigningKeyID(cert.SubjectKeyId) - return pruneExpiredIntermediates(caRoot) + return nil } // pruneExpiredIntermediates removes expired intermediate certificates @@ -1108,15 +1112,14 @@ func setLeafSigningCert(caRoot *structs.CARoot, pem string) error { func pruneExpiredIntermediates(caRoot *structs.CARoot) error { var newIntermediates []string now := time.Now() - for i, intermediatePEM := range caRoot.IntermediateCerts { + for _, intermediatePEM := range caRoot.IntermediateCerts { cert, err := connect.ParseCert(intermediatePEM) if err != nil { return fmt.Errorf("error parsing leaf signing cert: %w", err) } - // Only keep the intermediate cert if it's still valid, or if it's the most - // recently added (and thus the active signing cert). - if cert.NotAfter.After(now) || i == len(caRoot.IntermediateCerts) { + // Only keep the intermediate cert if it's still valid. 
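+		// Note: the caller appends the newly issued signing certificate only
+		// after this pruning pass, so the active signing cert is never removed here.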
+ if cert.NotAfter.After(now) { newIntermediates = append(newIntermediates, intermediatePEM) } } From 6b6b53860723e6106662114e4c9c01d4716a05b8 Mon Sep 17 00:00:00 2001 From: David Yu Date: Thu, 1 Sep 2022 16:21:36 -0700 Subject: [PATCH 45/55] docs: Consul K8s 0.48.0 release notes (#14414) Co-authored-by: Thomas Eckert --- .../docs/release-notes/consul-k8s/v0_47_x.mdx | 2 +- .../docs/release-notes/consul-k8s/v0_48_x.mdx | 66 +++++++++++++++++++ .../docs/release-notes/consul/v1_11_x.mdx | 10 +-- .../docs/release-notes/consul/v1_12_x.mdx | 2 +- .../docs/release-notes/consul/v1_13_x.mdx | 2 +- website/data/docs-nav-data.json | 4 ++ 6 files changed, 78 insertions(+), 8 deletions(-) create mode 100644 website/content/docs/release-notes/consul-k8s/v0_48_x.mdx diff --git a/website/content/docs/release-notes/consul-k8s/v0_47_x.mdx b/website/content/docs/release-notes/consul-k8s/v0_47_x.mdx index a9228e998..b13d858fd 100644 --- a/website/content/docs/release-notes/consul-k8s/v0_47_x.mdx +++ b/website/content/docs/release-notes/consul-k8s/v0_47_x.mdx @@ -21,7 +21,7 @@ description: >- - Consul 1.11.x, Consul 1.12.x and Consul 1.13.1+ - Kubernetes 1.19-1.23 -- Kubectl 1.21+ +- Kubectl 1.19+ - Envoy proxy support is determined by the Consul version deployed. Refer to [Envoy Integration](/docs/connect/proxies/envoy) for details. diff --git a/website/content/docs/release-notes/consul-k8s/v0_48_x.mdx b/website/content/docs/release-notes/consul-k8s/v0_48_x.mdx new file mode 100644 index 000000000..38c6732bf --- /dev/null +++ b/website/content/docs/release-notes/consul-k8s/v0_48_x.mdx @@ -0,0 +1,66 @@ +--- +layout: docs +page_title: 0.48.x +description: >- + Consul on Kubernetes release notes for version 0.48.x +--- + +# Consul on Kubernetes 0.48.0 + +## Release Highlights + +- **Consul CNI Plugin**: This release introduces the Consul CNI Plugin for Consul on Kubernetes, to allow for configuring traffic redirection rules without escalated container privileges such as `CAP_NET_ADMIN`. Refer to [Enable the Consul CNI Plugin](/docs/k8s/installation/install#enable-the-consul-cni-plugin) for more details. The Consul CNI Plugin is supported for Consul K8s 0.48.0+ and Consul 1.13.1+. + +- **Kubernetes 1.24 Support**: Add support for Kubernetes 1.24 where ServiceAccounts no longer have long-term JWT tokens. [[GH-1431](https://github.com/hashicorp/consul-k8s/pull/1431)] + +- **MaxInboundConnections in service-defaults CRD**: Add support for MaxInboundConnections on the Service Defaults CRD. [[GH-1437](https://github.com/hashicorp/consul-k8s/pull/1437)] + +- **API Gateway: ACL auth when using WAN Federation**: Configure ACL auth for controller correctly when deployed in secondary datacenter with federation enabled [[GH-1462](https://github.com/hashicorp/consul-k8s/pull/1462)] + +## What has Changed + +- **Kubernetes 1.24 Support for multiport applications require Kubernetes secrets**: Users deploying multiple services to the same Pod (multiport) on Kubernetes 1.24+ must also deploy a Kubernetes secret for each ServiceAccount associated with the Consul service. 
The name of the Secret must match the ServiceAccount name and be of type `kubernetes.io/service-account-token`.
+Example:
+
+  ```yaml
+  apiVersion: v1
+  kind: Secret
+  metadata:
+    name: svc1
+    annotations:
+      kubernetes.io/service-account.name: svc1
+  type: kubernetes.io/service-account-token
+  ---
+  apiVersion: v1
+  kind: Secret
+  metadata:
+    name: svc2
+    annotations:
+      kubernetes.io/service-account.name: svc2
+  type: kubernetes.io/service-account-token
+  ```
+
+## Supported Software
+
+- Consul 1.11.x, Consul 1.12.x and Consul 1.13.1+
+- Kubernetes 1.19-1.24
+- Kubectl 1.19+
+- Envoy proxy support is determined by the Consul version deployed. Refer to
+  [Envoy Integration](/docs/connect/proxies/envoy) for details.
+
+## Upgrading
+
+For detailed information on upgrading, please refer to the [Upgrades page](/docs/k8s/upgrade).
+
+## Known Issues
+The following issues are known to exist in the v0.48.0 release:
+
+- Consul CNI Plugin currently does not support Red Hat OpenShift as the CNI Plugin DaemonSet requires additional SecurityContextConstraint objects to run on OpenShift. Support for OpenShift will be added in an upcoming release.
+
+## Changelogs
+
+The changelogs for this major release version and any maintenance versions are listed below.
+
+~> **Note:** The following link takes you to the changelogs on the GitHub website.
+
+- [0.48.0](https://github.com/hashicorp/consul-k8s/releases/tag/v0.48.0)
diff --git a/website/content/docs/release-notes/consul/v1_11_x.mdx b/website/content/docs/release-notes/consul/v1_11_x.mdx
index d26cd6a80..aa5e68f80 100644
--- a/website/content/docs/release-notes/consul/v1_11_x.mdx
+++ b/website/content/docs/release-notes/consul/v1_11_x.mdx
@@ -9,15 +9,15 @@ description: >-
 
 ## Release Highlights
 
-- **Admin Partitions (Enterprise):** Consul 1.11.0 Enterprise introduces a new entity for defining administrative and networking boundaries within a Consul deployment. This feature also enables servers to communicate with clients over a specific gossip segment created for each partition. This release also enables cross partition communication between services across partitions, using Mesh Gateways. For more information refer to the [Admin Partitions](/docs/enterprise/admin-partitions) documentation.
+- **Admin Partitions (Enterprise)**: Consul 1.11.0 Enterprise introduces a new entity for defining administrative and networking boundaries within a Consul deployment. This feature also enables servers to communicate with clients over a specific gossip segment created for each partition. This release also enables cross partition communication between services across partitions, using Mesh Gateways. For more information refer to the [Admin Partitions](/docs/enterprise/admin-partitions) documentation.
 
-- **Virtual IPs for services deployed with Consul Service Mesh:** Consul will now generate a unique virtual IP for each service deployed within Consul Service Mesh, allowing transparent proxy to route to services within a data center that exist in different clusters or outside the service mesh.
+- **Virtual IPs for services deployed with Consul Service Mesh**: Consul will now generate a unique virtual IP for each service deployed within Consul Service Mesh, allowing transparent proxy to route to services within a data center that exist in different clusters or outside the service mesh.
-- **Replace [boltdb](https://github.com/boltdb/bolt) with [etcd-io/bbolt](https://github.com/etcd-io/bbolt) for raft log store:** Consul now leverages `etcd-io/bbolt` as the default implementation of `boltdb` instead of `boltdb/bolt`. This change also exposes a configuration to allow for disabling boltdb freelist syncing. In addition, Consul now emits metrics for the raft boltdb store to provide insights into boltdb performance. +- **Replace [boltdb](https://github.com/boltdb/bolt) with [etcd-io/bbolt](https://github.com/etcd-io/bbolt) for raft log store**: Consul now leverages `etcd-io/bbolt` as the default implementation of `boltdb` instead of `boltdb/bolt`. This change also exposes a configuration to allow for disabling boltdb freelist syncing. In addition, Consul now emits metrics for the raft boltdb store to provide insights into boltdb performance. -- **TLS Certificates for Ingress Gateways via an SDS source:**: Ingress Gateways can now be configured to retrieve TLS certificates from an external SDS Service and load the TLS certificates for Ingress listeners. This configuration is set using the `ingress-gateway` configuration entry via the [SDS](/docs/connect/config-entries/ingress-gateway#sds) stanza within the Ingress Gateway TLS configuration. +- **TLS Certificates for Ingress Gateways via an SDS source**: Ingress Gateways can now be configured to retrieve TLS certificates from an external SDS Service and load the TLS certificates for Ingress listeners. This configuration is set using the `ingress-gateway` configuration entry via the [SDS](/docs/connect/config-entries/ingress-gateway#sds) stanza within the Ingress Gateway TLS configuration. -- **Vault Auth Method support for Connect CA Vault Provider:** Consul now supports configuring the Connect CA Vault provider to use auth methods for authentication to Vault. Consul supports using any non-deprecated auth method that is available in Vault v1.8.5, including AppRole, AliCloud, AWS, Azure, Cloud Foundry, GitHub, Google Cloud, JWT/OIDC, Kerberos, Kubernetes, LDAP, Oracle Cloud Infrastructure, Okta, Radius, TLS Certificates, and Username & Password. The Vault Auth Method for Connect CA Provider is utilized by default for the [Vault Secrets Backend](/docs/k8s/installation/vault) feature on Consul on Kubernetes. Utilizing a Vault Auth method would no longer require a Vault token to be managed or provisioned ahead of time to be used for authentication to Vault. +- **Vault Auth Method support for Connect CA Vault Provider**: Consul now supports configuring the Connect CA Vault provider to use auth methods for authentication to Vault. Consul supports using any non-deprecated auth method that is available in Vault v1.8.5, including AppRole, AliCloud, AWS, Azure, Cloud Foundry, GitHub, Google Cloud, JWT/OIDC, Kerberos, Kubernetes, LDAP, Oracle Cloud Infrastructure, Okta, Radius, TLS Certificates, and Username & Password. The Vault Auth Method for Connect CA Provider is utilized by default for the [Vault Secrets Backend](/docs/k8s/installation/vault) feature on Consul on Kubernetes. Utilizing a Vault Auth method would no longer require a Vault token to be managed or provisioned ahead of time to be used for authentication to Vault. 
## What's Changed diff --git a/website/content/docs/release-notes/consul/v1_12_x.mdx b/website/content/docs/release-notes/consul/v1_12_x.mdx index 842dfb31c..dd354d60b 100644 --- a/website/content/docs/release-notes/consul/v1_12_x.mdx +++ b/website/content/docs/release-notes/consul/v1_12_x.mdx @@ -15,7 +15,7 @@ description: >- - **AWS Lambda**: Adds the ability to invoke AWS Lambdas through terminating gateways, which allows for cross-datacenter communication, transparent proxy, and intentions with Consul Service Mesh. Refer to [AWS Lambda](/docs]/lambda) and [Invoke Lambda Functions](/docs/lambda/invocation) for more details. -- **Mesh-wide TLS min/max versions and cipher suites:** Using the [Mesh](/docs/connect/config-entries/mesh#tls) Config Entry or CRD, it is now possible to set TLS min/max versions and cipher suites for both inbound and outbound mTLS connections. +- **Mesh-wide TLS min/max versions and cipher suites**: Using the [Mesh](/docs/connect/config-entries/mesh#tls) Config Entry or CRD, it is now possible to set TLS min/max versions and cipher suites for both inbound and outbound mTLS connections. - **Expanded details for ACL Permission Denied errors**: Details are now provided when a permission denied errors surface for RPC calls. Details include the accessor ID of the ACL token, the missing permission, and any namespace or partition that the error occurred on. diff --git a/website/content/docs/release-notes/consul/v1_13_x.mdx b/website/content/docs/release-notes/consul/v1_13_x.mdx index 23b694a91..268712667 100644 --- a/website/content/docs/release-notes/consul/v1_13_x.mdx +++ b/website/content/docs/release-notes/consul/v1_13_x.mdx @@ -31,7 +31,7 @@ For more detailed information, please refer to the [upgrade details page](/docs/ The following issues are know to exist in the 1.13.0 release: - Consul 1.13.1 fixes a compatibility issue when restoring snapshots from pre-1.13.0 versions of Consul. Refer to GitHub issue [[GH-14149](https://github.com/hashicorp/consul/issues/14149)] for more details. -- Consul 1.13.0 and Consul 1.13.1 default to requiring TLS for gRPC communication with Envoy proxies when auto-encrypt and auto-config are enabled. In environments where Envoy proxies are not already configured to use TLS for gRPC, upgrading Consul 1.13 will cause Envoy proxies to disconnect from the control plane (Consul agents). A future patch release will default to disabling TLS by default for GRPC communication with Envoy proxies when using Service Mesh and auto-config or auto-encrypt. Refer to GitHub issue [GH-14253](https://github.com/hashicorp/consul/issues/14253) and [Service Mesh deployments using auto-config and auto-enrypt](https://www.consul.io/docs/upgrading/upgrade-specific#service-mesh-deployments-using-auto-encrypt-or-auto-config) for more details. +- Consul 1.13.0 and Consul 1.13.1 default to requiring TLS for gRPC communication with Envoy proxies when auto-encrypt and auto-config are enabled. In environments where Envoy proxies are not already configured to use TLS for gRPC, upgrading Consul 1.13 will cause Envoy proxies to disconnect from the control plane (Consul agents). A future patch release will default to disabling TLS by default for GRPC communication with Envoy proxies when using Service Mesh and auto-config or auto-encrypt. 
Refer to GitHub issue [[GH-14253](https://github.com/hashicorp/consul/issues/14253)] and [Service Mesh deployments using auto-config and auto-enrypt](https://www.consul.io/docs/upgrading/upgrade-specific#service-mesh-deployments-using-auto-encrypt-or-auto-config) for more details. ## Changelogs diff --git a/website/data/docs-nav-data.json b/website/data/docs-nav-data.json index 49b1d9110..cb33486b7 100644 --- a/website/data/docs-nav-data.json +++ b/website/data/docs-nav-data.json @@ -1273,6 +1273,10 @@ { "title": "Consul K8s", "routes": [ + { + "title": "v0.48.x", + "path": "release-notes/consul-k8s/v0_48_x" + }, { "title": "v0.47.x", "path": "release-notes/consul-k8s/v0_47_x" From 58233f616b8e6359752d8166a3eff4c0f6cf6d30 Mon Sep 17 00:00:00 2001 From: trujillo-adam <47586768+trujillo-adam@users.noreply.github.com> Date: Thu, 1 Sep 2022 16:22:11 -0700 Subject: [PATCH 46/55] Docs cni plugin (#14009) Co-authored-by: Jeff Boruszak <104028618+boruszak@users.noreply.github.com> --- .../docs/connect/transparent-proxy.mdx | 287 ++++++++++-------- .../clients-outside-kubernetes.mdx | 0 .../consul-enterprise.mdx | 0 .../multi-cluster/index.mdx | 0 .../multi-cluster/kubernetes.mdx | 0 .../multi-cluster/vms-and-kubernetes.mdx | 0 .../servers-outside-kubernetes.mdx | 0 .../single-dc-multi-k8s.mdx | 0 .../data-integration/bootstrap-token.mdx | 0 .../vault/data-integration/connect-ca.mdx | 0 .../data-integration/enterprise-license.mdx | 0 .../vault/data-integration/gossip.mdx | 0 .../vault/data-integration/index.mdx | 0 .../data-integration/partition-token.mdx | 0 .../data-integration/replication-token.mdx | 0 .../vault/data-integration/server-tls.mdx | 0 .../snapshot-agent-config.mdx | 0 .../vault/data-integration/webhook-certs.mdx | 0 .../vault/index.mdx | 0 .../vault/systems-integration.mdx | 0 .../vault/wan-federation.mdx | 0 .../docs/k8s/installation/install-cli.mdx | 98 +++++- .../content/docs/k8s/installation/install.mdx | 202 +++++------- .../platforms/self-hosted-kubernetes.mdx | 0 website/data/docs-nav-data.json | 146 ++++----- website/redirects.js | 246 +++++++++++++++ 26 files changed, 652 insertions(+), 327 deletions(-) rename website/content/docs/k8s/{installation => }/deployment-configurations/clients-outside-kubernetes.mdx (100%) rename website/content/docs/k8s/{installation => }/deployment-configurations/consul-enterprise.mdx (100%) rename website/content/docs/k8s/{installation => deployment-configurations}/multi-cluster/index.mdx (100%) rename website/content/docs/k8s/{installation => deployment-configurations}/multi-cluster/kubernetes.mdx (100%) rename website/content/docs/k8s/{installation => deployment-configurations}/multi-cluster/vms-and-kubernetes.mdx (100%) rename website/content/docs/k8s/{installation => }/deployment-configurations/servers-outside-kubernetes.mdx (100%) rename website/content/docs/k8s/{installation => }/deployment-configurations/single-dc-multi-k8s.mdx (100%) rename website/content/docs/k8s/{installation => deployment-configurations}/vault/data-integration/bootstrap-token.mdx (100%) rename website/content/docs/k8s/{installation => deployment-configurations}/vault/data-integration/connect-ca.mdx (100%) rename website/content/docs/k8s/{installation => deployment-configurations}/vault/data-integration/enterprise-license.mdx (100%) rename website/content/docs/k8s/{installation => deployment-configurations}/vault/data-integration/gossip.mdx (100%) rename website/content/docs/k8s/{installation => deployment-configurations}/vault/data-integration/index.mdx (100%) 
rename website/content/docs/k8s/{installation => deployment-configurations}/vault/data-integration/partition-token.mdx (100%) rename website/content/docs/k8s/{installation => deployment-configurations}/vault/data-integration/replication-token.mdx (100%) rename website/content/docs/k8s/{installation => deployment-configurations}/vault/data-integration/server-tls.mdx (100%) rename website/content/docs/k8s/{installation => deployment-configurations}/vault/data-integration/snapshot-agent-config.mdx (100%) rename website/content/docs/k8s/{installation => deployment-configurations}/vault/data-integration/webhook-certs.mdx (100%) rename website/content/docs/k8s/{installation => deployment-configurations}/vault/index.mdx (100%) rename website/content/docs/k8s/{installation => deployment-configurations}/vault/systems-integration.mdx (100%) rename website/content/docs/k8s/{installation => deployment-configurations}/vault/wan-federation.mdx (100%) rename website/content/docs/k8s/{installation => }/platforms/self-hosted-kubernetes.mdx (100%) diff --git a/website/content/docs/connect/transparent-proxy.mdx b/website/content/docs/connect/transparent-proxy.mdx index 57ad48ba7..a60bb72d0 100644 --- a/website/content/docs/connect/transparent-proxy.mdx +++ b/website/content/docs/connect/transparent-proxy.mdx @@ -9,72 +9,59 @@ description: |- # Transparent Proxy -Transparent proxy allows applications to communicate through the mesh without changing their configuration. -Transparent proxy also hardens application security by preventing direct inbound connections that bypass the mesh. +This topic describes how to use Consul’s transparent proxy feature, which allows applications to communicate through the service mesh without modifying their configurations. Transparent proxy also hardens application security by preventing direct inbound connections that bypass the mesh. -#### Without Transparent Proxy +## Introduction -![Diagram demonstrating that without transparent proxy, applications must "opt in" to connecting to their dependencies through the mesh](/img/consul-connect/without-transparent-proxy.png) +When transparent proxy is enabled, Consul is able to perform the following actions automatically: -Without transparent proxy, application owners need to: +- Infer the location of upstream services using service intentions. +- Redirect outbound connections that point to KubeDNS through the proxy. +- Force traffic through the proxy to prevent unauthorized direct access to the application. -1. Explicitly configure upstream services, choosing a local port to access them. -1. Change application to access `localhost:`. -1. Configure application to listen only on the loopback interface to prevent unauthorized - traffic from bypassing the mesh. - -#### With Transparent Proxy +The following diagram shows how transparent proxy routes traffic: ![Diagram demonstrating that with transparent proxy, connections are automatically routed through the mesh](/img/consul-connect/with-transparent-proxy.png) -With transparent proxy: +When transparent proxy is disabled, you must manually specify the following configurations so that your applications can communicate with other services in the mesh: -1. Local upstreams are inferred from service intentions and peered upstreams are - inferred from imported services, so no explicit configuration is needed. -1. Outbound connections pointing to a Kubernetes DNS record "just work" — network rules - redirect them through the proxy. -1. 
Inbound traffic is forced to go through the proxy to prevent unauthorized - direct access to the application. +* Explicitly configure upstream services by specifying a local port to access them. +* Change application to access `localhost:`. +* Configure applications to only listen on the loopback interface to prevent unauthorized traffic from bypassing the mesh. + +The following diagram shows how traffic flows through the mesh without transparent proxy enabled: -#### Overview +![Diagram demonstrating that without transparent proxy, applications must "opt in" to connecting to their dependencies through the mesh](/img/consul-connect/without-transparent-proxy.png) -Transparent proxy allows users to reach other services in the service mesh while ensuring that inbound and outbound -traffic for services in the mesh are directed through the sidecar proxy. Traffic is secured -and only reaches intended destinations since the proxy can enforce security and policy like TLS and Service Intentions. +Transparent proxy is available for Kubernetes environments. As part of the integration with Kubernetes, Consul registers Kubernetes Services, injects sidecar proxies, and enables traffic redirection. -Previously, service mesh users would need to explicitly define upstreams for a service as a local listener on the sidecar -proxy, and dial the local listener to reach the appropriate upstream. Users would also have to set intentions to allow -specific services to talk to one another. Transparent proxying reduces this duplication, by determining upstreams -implicitly from Service Intentions and imported services from a peer. Explicit upstreams are still supported in the [proxy service -registration](/docs/connect/registration/service-registration) on VMs and via the -[annotation](/docs/k8s/annotations-and-labels#consul-hashicorp-com-connect-service-upstreams) in Kubernetes. +## Requirements -To support transparent proxying, Consul's CLI now has a command -[`consul connect redirect-traffic`](/commands/connect/redirect-traffic) to redirect traffic through an inbound and -outbound listener on the sidecar. Consul also watches Service Intentions and imported services then configures the Envoy -proxy with the appropriate upstream IPs. If the default ACL policy is "allow", then Service Intentions are not required. -In Consul on Kubernetes, the traffic redirection command is automatically set up via an init container. +Your network must meet the following environment and software requirements to use transparent proxy. -## Prerequisites +* Transparent proxy is available for Kubernetes environments. +* Consul 1.10.0+ +* Consul Helm chart 0.32.0+. If you want to use the Consul CNI plugin to redirect traffic, Helm chart 0.48.0+ is required. Refer to [Enable the Consul CNI plugin](#enable-the-consul-cni-plugin) for additional information. +* [Service intentions](/docs/connect/intentions) must be configured to allow communication between intended services. +* The `ip_tables` kernel module must be running on all worker nodes within a Kubernetes cluster. If you are using the `modprobe` Linux utility, for example, issue the following command: -### Kubernetes + `$ modprobe ip_tables` -* To use transparent proxy on Kubernetes, Consul-helm >= `0.32.0` and Consul-k8s >= `0.26.0` are required in addition to Consul >= `1.10.0`. -* If the default policy for ACLs is "deny", then Service Intentions should be set up to allow intended services to connect to each other. -Otherwise, all Connect services can talk to all other services. 
-* If using Transparent Proxy, all worker nodes within a Kubernetes cluster must have the `ip_tables` kernel module running, e.g. `modprobe ip_tables`. - -The Kubernetes integration takes care of registering Kubernetes services with Consul, injecting a sidecar proxy, and -enabling traffic redirection. - -## Upgrading to Transparent Proxy - -~> When upgrading from older versions (i.e Consul-k8s < `0.26.0` or Consul-helm < `0.32.0`) to Consul-k8s >= `0.26.0` and Consul-helm >= `0.32.0`, please make sure to follow the upgrade steps [here](/docs/upgrading/upgrade-specific/#transparent-proxy-on-kubernetes). +~> **Upgrading to a supported version**: Always follow the [proper upgrade path](/docs/upgrading/upgrade-specific/#transparent-proxy-on-kubernetes) when upgrading to a supported version of Consul, Consul on Kubernetes (`consul-k8s`), and the Consul Helm chart. ## Configuration -### Enabling Transparent Proxy -Transparent proxy can be enabled in Kubernetes on the whole cluster via the Helm value: +This section describes how to configure the transparent proxy. + +### Enable transparent proxy + +You can enable the transparent proxy for an entire cluster, individual Kubernetes namespaces, and individual services. + +When you install Consul using the Helm chart, transparent proxy is enabled for the entire cluster by default. + +#### Entire cluster + +Use the `connectInject.transparentProxy.defaultEnabled` Helm value to enable or disable transparent proxy for the entire cluster: ```yaml connectInject: @@ -82,15 +69,16 @@ connectInject: defaultEnabled: true ``` -It can also be enabled on a per namespace basis by setting the label `consul.hashicorp.com/transparent-proxy=true` on the -Kubernetes namespace. This will override the Helm value `connectInject.transparentProxy.defaultEnabled` and define the -default behavior of Pods in the namespace. For example: +#### Kubernetes namespace + +Apply the `consul.hashicorp.com/transparent-proxy=true` label to enable transparent proxy for a Kubernetes namespace. The label overrides the `connectInject.transparentProxy.defaultEnabled` Helm value and defines the default behavior of Pods in the namespace. The following example enables transparent proxy for Pods in the `my-app` namespace: + ```bash kubectl label namespaces my-app "consul.hashicorp.com/transparent-proxy=true" ``` +#### Individual service -It can also be enabled on a per service basis via the annotation `consul.hashicorp.com/transparent-proxy=true` on the -Pod for each service, which will override both the Helm value and the namespace label: +Apply the `consul.hashicorp.com/transparent-proxy=true` annotation to eanble transparent proxy on the Pod for each service. The annotation overrides the Helm value and the namespace label. The following example enables transparent proxy for the `static-server` service: ```yaml apiVersion: v1 @@ -140,78 +128,136 @@ spec: serviceAccountName: static-server ``` -### Kubernetes HTTP Health Probes Configuration -Traffic redirection interferes with [Kubernetes HTTP health -probes](https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/) since the -probes expect that kubelet can directly reach the application container on the probe's endpoint, but that traffic will -be redirected through the sidecar proxy, causing errors because kubelet itself is not encrypting that traffic using a -mesh proxy. 
For this reason, Consul allows you to [overwrite Kubernetes HTTP health probes](/docs/k8s/connect/health) to point to the proxy instead. -This can be done using the Helm value `connectInject.transparentProxy.defaultOverwriteProbes` -or the Pod annotation `consul.hashicorp.com/transparent-proxy-overwrite-probes`. +### Enable the Consul CNI plugin -### Traffic Redirection Configuration -Pods with transparent proxy enabled will have an init container injected that sets up traffic redirection for all -inbound and outbound traffic through the sidecar proxies. This will include all traffic by default, with the ability to -configure exceptions on a per-Pod basis. The following Pod annotations allow you to exclude certain traffic from redirection to the sidecar proxies: +By default, Consul generates a `connect-inject init` container as part of the Kubernetes Pod startup process. The container configures traffic redirection in the service mesh through the sidecar proxy. To configure redirection, the container requires elevated CAP_NET_ADMIN privileges, which may not be compatible with security policies in your organization. -- [`consul.hashicorp.com/transparent-proxy-exclude-inbound-ports`](/docs/k8s/annotations-and-labels#consul-hashicorp-com-transparent-proxy-exclude-inbound-ports) -- [`consul.hashicorp.com/transparent-proxy-exclude-outbound-ports`](/docs/k8s/annotations-and-labels#consul-hashicorp-com-transparent-proxy-exclude-outbound-ports) -- [`consul.hashicorp.com/transparent-proxy-exclude-outbound-cidrs`](/docs/k8s/annotations-and-labels#consul-hashicorp-com-transparent-proxy-exclude-outbound-cidrs) -- [`consul.hashicorp.com/transparent-proxy-exclude-uids`](/docs/k8s/annotations-and-labels#consul-hashicorp-com-transparent-proxy-exclude-uids) +Instead, you can enable the Consul container network interface (CNI) plugin to perform traffic redirection. Because the plugin is executed by the Kubernetes kubelet, it already has the elevated privileges necessary to configure the network. Additionally, you do not need to specify annotations that automatically overwrite Kubernetes HTTP health probes when the plugin is enabled (see [Overwrite Kubernetes HTTP health probes](#overwrite-kubernetes-http-health-probes)). +The Consul Helm chart installs the CNI plugin, but it is disabled by default. Refer to the [instructions for enabling the CNI plugin](/docs/k8s/installation/install#enable-the-consul-cni-plugin) in the Consul on Kubernetes installation documentation for additional information. -### Dialing Services Across Kubernetes Clusters +### Traffic redirection -- You cannot use transparent proxy in a deployment configuration with [federation between Kubernetes clusters](/docs/k8s/installation/multi-cluster/kubernetes). - Instead, services in one Kubernetes cluster must explicitly dial a service to a Consul datacenter in another Kubernetes cluster using the - [consul.hashicorp.com/connect-service-upstreams](/docs/k8s/annotations-and-labels#consul-hashicorp-com-connect-service-upstreams) - annotation. For example, an annotation of - `"consul.hashicorp.com/connect-service-upstreams": "my-service:1234:dc2"` reaches an upstream service called `my-service` - in the datacenter `dc2` on port `1234`. +There are two mechanisms for redirecting traffic through the sidecar proxies. By default, Consul injects an init container that redirects all inbound and outbound traffic. The default mechanism requires elevated permissions (CAP_NET_ADMIN) in order to redirect traffic to the service mesh. 
-- You cannot use transparent proxy in a deployment configuration with a - [single Consul datacenter spanning multiple Kubernetes clusters](/docs/k8s/installation/deployment-configurations/single-dc-multi-k8s). Instead, - services in one Kubernetes cluster must explicitly dial a service in another Kubernetes cluster using the - [consul.hashicorp.com/connect-service-upstreams](/docs/k8s/annotations-and-labels#consul-hashicorp-com-connect-service-upstreams) - annotation. For example, an annotation of - `"consul.hashicorp.com/connect-service-upstreams": "my-service:1234"`, - reaches an upstream service called `my-service` in another Kubernetes cluster and on port `1234`. - Although transparent proxy is enabled, Kubernetes DNS is not utilized when communicating between services that exist on separate Kubernetes clusters. +Alternatively, you can enable the Consul CNI plugin to handle traffic redirection. Because the Kubernetes kubelet runs CNI plugins, the Consul CNI plugin has the necessary privileges to apply routing tables in the network. -- In a deployment configuration with [cluster peering](/docs/connect/cluster-peering), - transparent proxy is fully supported and thus dialing services explicitly is not required. +Both mechanisms redirect all inbound and outbound traffic, but you can configure exceptions for specific Pods or groups of Pods. The following annotations enable you to exclude certain traffic from being redirected to sidecar proxies. +#### Exclude inbound ports -## Known Limitations +The [`consul.hashicorp.com/transparent-proxy-exclude-inbound-ports`](/docs/k8s/annotations-and-labels#consul-hashicorp-com-transparent-proxy-exclude-inbound-ports) annotation defines a comma-separated list of inbound ports to exclude from traffic redirection when running in transparent proxy mode. The port numbers are string data values. In the following example, services in the pod at port `8200` and `8201` are not redirected through the transparent proxy: -- Deployment configurations with federation across or a single datacenter spanning multiple clusters must explicitly dial a - service in another datacenter or cluster using annotations. + -- When dialing headless services, the request is proxied using a plain TCP proxy. The upstream's protocol is not considered. +```yaml +"metadata": { + "annotations": { + "consul.hashicorp.com/transparent-proxy-exclude-inbound-ports" : "8200, 8201” + } +} +``` + -## Using Transparent Proxy +#### Exclude outbound ports -In Kubernetes, services can reach other services via their -[Kubernetes DNS](https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/) address or through Pod IPs, and that -traffic will be transparently sent through the proxy. Connect services in Kubernetes are required to have a Kubernetes -service selecting the Pods. +The [`consul.hashicorp.com/transparent-proxy-exclude-outbound-ports`](/docs/k8s/annotations-and-labels#consul-hashicorp-com-transparent-proxy-exclude-outbound-ports) annotation defines a comma-separated list of outbound ports to exclude from traffic redirection when running in transparent proxy mode. The port numbers are string data values. In the following example, services in the pod at port `8200` and `8201` are not redirected through the transparent proxy: -~> **Note**: In order to use Kubernetes DNS, the Kubernetes service name needs to match the Consul service name. 
This is the -case by default, unless the service Pods have the annotation `consul.hashicorp.com/connect-service` overriding the -Consul service name. + -Transparent proxy is enabled by default in Consul-helm >=`0.32.0`. The Helm value used to enable/disable transparent -proxy for all applications in a Kubernetes cluster is `connectInject.transparentProxy.defaultEnabled`. +```yaml +"metadata": { + "annotations": { + "consul.hashicorp.com/transparent-proxy-exclude-outbound-ports" : "8200, 8201” + } +} +``` -Each Pod for the service will be configured with iptables rules to direct all inbound and outbound traffic through an -inbound and outbound listener on the sidecar proxy. The proxy will be configured to know how to route traffic to the -appropriate upstream services based on [Service -Intentions](/docs/connect/config-entries/service-intentions). This means Connect services no longer -need to use the `consul.hashicorp.com/connect-service-upstreams` annotation to configure upstreams explicitly. Once the -Service Intentions are set, they can simply address the upstream services using Kubernetes DNS. + -As of Consul-k8s >= `0.26.0` and Consul-helm >= `0.32.0`, a Kubernetes service that selects application pods is required -for Connect applications, i.e: +#### Exclude outbound CIDR blocks + +The [`consul.hashicorp.com/transparent-proxy-exclude-outbound-cidrs`](/docs/k8s/annotations-and-labels#consul-hashicorp-com-transparent-proxy-exclude-outbound-cidrs) annotation +defines a comma-separated list of outbound CIDR blocks to exclude from traffic redirection when running in transparent proxy mode. The CIDR blocks are string data values. +In the following example, services in the `3.3.3.3/24` IP range are not redirected through the transparent proxy: + + + +```yaml +"metadata": { + "annotations": { + "consul.hashicorp.com/transparent-proxy-exclude-outbound-cidrs" : "3.3.3.3,3.3.3.3/24" + } +} +``` + + +#### Exclude user IDs + +The [`consul.hashicorp.com/transparent-proxy-exclude-uids`](/docs/k8s/annotations-and-labels#consul-hashicorp-com-transparent-proxy-exclude-uids) annotation +defines a comma-separated list of additional user IDs to exclude from traffic redirection when running in transparent proxy mode. The user IDs are string data values. +In the following example, services with the IDs `4444 ` and `44444 ` are not redirected through the transparent proxy: + + + +```yaml +"metadata": { + "annotations": { + "consul.hashicorp.com/transparent-proxy-exclude-uids" : "4444,44444” + } +} +``` + + + +### Kubernetes HTTP health probes configuration + +By default, `connect-inject` is disabled. As a result, Consul on Kubernetes uses a mechanism for traffic redirection that interferes with [Kubernetes HTTP health +probes](https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/). This is because probes expect the kubelet to reach the application container on the probe's endpoint. Instead, traffic is redirected through the sidecar proxy. As a result, health probes return errors because the kubelet does not encrypt that traffic using a mesh proxy. + +There are two methods for solving this issue. The first method is to set the `connectInject.transparentProxy.defaultOverwriteProbes` annotation to overwrite the Kubernetes HTTP health probes so that they point to the proxy. The second method is to [enable the Consul container network interface (CNI) plugin](#enable-the-consul-cni-plugin) to perform traffic redirection. 
Refer to the [Consul on Kubernetes installation instructions](/docs/k8s/installation/install) for additional information. + +#### Overwrite Kubernetes HTTP health probes + +You can either include the `connectInject.transparentProxy.defaultOverwriteProbes` Helm value to your command or add the `consul.hashicorp.com/transparent-proxy-overwrite-probes` Kubernetes annotation to your pod configuration to overwrite health probes. + +Refer to [Kubernetes Health Checks in Consul on Kubernetes](/docs/k8s/connect/health) for additional information. + +### Dial services across Kubernetes cluster + +If your [Consul servers are federated between Kubernetes clusters](/docs/k8s/installation/multi-cluster/kubernetes), +then you must configure services in one Kubernetes cluster to explicitly dial a service in the datacenter of another Kubernetes cluster using the +[consul.hashicorp.com/connect-service-upstreams](/docs/k8s/annotations-and-labels#consul-hashicorp-com-connect-service-upstreams) annotation. +The following example configures the service to dial an upstream service called `my-service` in datacenter `dc2` on port `1234`: + +```yaml + "consul.hashicorp.com/connect-service-upstreams": "my-service:1234:dc2" +``` + +If your Consul cluster is deployed to a [single datacenter spanning multiple Kubernetes clusters](/docs/k8s/installation/deployment-configurations/single-dc-multi-k8s), +then you must configure services in one Kubernetes cluster to explicitly dial a service in another Kubernetes cluster using the +[consul.hashicorp.com/connect-service-upstreams](/docs/k8s/annotations-and-labels#consul-hashicorp-com-connect-service-upstreams) annotation. +The following example configures the service to dial an upstream service called `my-service` in another Kubernetes cluster on port `1234`: + +```yaml +"consul.hashicorp.com/connect-service-upstreams": "my-service:1234" +``` + +You do not need to configure services to explicitlly dial upstream services if your Consul clusters are connected with a [peering connection](/docs/connect/cluster-peering). + +## Usage + +When transparent proxy is enabled, traffic sent to [KubeDNS](https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/) +or Pod IP addresses is redirected through the proxy. You must use a selector to bind Kubernetes Services to Pods as you define Kubernetes Services in the mesh. +The Kubernetes Service name must match the Consul service name to use KubeDNS. This is the default behavior unless you have applied the `consul.hashicorp.com/connect-service` +Kubernetes annotation to the service pods. The annotation overrides the Consul service name. + +Consul configures redirection for each Pod bound to the Kubernetes Service using `iptables` rules. The rules redirect all inbound and outbound traffic through an inbound and outbound listener on the sidecar proxy. Consul configures the proxy to route traffic to the appropriate upstream services based on [service +intentions](/docs/connect/config-entries/service-intentions), which address the upstream services using KubeDNS. + +In the following example, the Kubernetes service selects `sample-app` application Pods so that they can be reached within the mesh. + + ```yaml apiVersion: v1 @@ -227,22 +273,17 @@ spec: port: 80 ``` -In the example above, if another service wants to reach `sample-app` via transparent proxying, -it can dial `sample-app.default.svc.cluster.local`, using -[Kubernetes DNS](https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/). 
-If ACLs with default "deny" policy are enabled, it also needs a -[ServiceIntention](/docs/connect/config-entries/service-intentions) allowing it to talk to -`sample-app`. + + +Additional services can query the [KubeDNS](https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/) at `sample-app.default.svc.cluster.local` to reach `sample-app`. If ACLs are enabled and configured with default `deny` policies, the configuration also requires a [`ServiceIntention`](/docs/connect/config-entries/service-intentions) to allow it to talk to `sample-app`. ### Headless Services -For services that are not addressed using a virtual cluster IP, the upstream service must be -configured using the [DialedDirectly](/docs/connect/config-entries/service-defaults#dialeddirectly) -option. +For services that are not addressed using a virtual cluster IP, you must configure the upstream service using the [DialedDirectly](/docs/connect/config-entries/service-defaults#dialeddirectly) option. Then, use DNS to discover individual instance addresses and dial them through the transparent proxy. When this mode is enabled on the upstream, services present connect certificates for mTLS and intentions are enforced at the destination. -Individual instance addresses can then be discovered using DNS, and dialed through the transparent proxy. -When this mode is enabled on the upstream, connect certificates will be presented for mTLS and -intentions will be enforced at the destination. +Note that when dialing individual instances, Consul ignores the HTTP routing rules configured with configuration entries. The transparent proxy acts as a TCP proxy to the original destination IP address. -Note that when dialing individual instances HTTP routing rules configured with config entries -will **not** be considered. The transparent proxy acts as a TCP proxy to the original -destination IP address. +## Known Limitations + +- Deployment configurations with federation across or a single datacenter spanning multiple clusters must explicitly dial a service in another datacenter or cluster using annotations. + +- When dialing headless services, the request is proxied using a plain TCP proxy. Consul does not take into consideration the upstream's protocol. 
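+
+The following sketch shows one way to opt an upstream into direct dialing from Kubernetes. It assumes the `ServiceDefaults` custom resource exposes the [`DialedDirectly`](/docs/connect/config-entries/service-defaults#dialeddirectly) option under a `transparentProxy` block; substitute your own service name:
+
+```yaml
+apiVersion: consul.hashicorp.com/v1alpha1
+kind: ServiceDefaults
+metadata:
+  name: sample-app
+spec:
+  transparentProxy:
+    # Present mesh TLS certificates and enforce intentions when instances
+    # of this service are dialed directly by IP address.
+    dialedDirectly: true
+```
+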
diff --git a/website/content/docs/k8s/installation/deployment-configurations/clients-outside-kubernetes.mdx b/website/content/docs/k8s/deployment-configurations/clients-outside-kubernetes.mdx similarity index 100% rename from website/content/docs/k8s/installation/deployment-configurations/clients-outside-kubernetes.mdx rename to website/content/docs/k8s/deployment-configurations/clients-outside-kubernetes.mdx diff --git a/website/content/docs/k8s/installation/deployment-configurations/consul-enterprise.mdx b/website/content/docs/k8s/deployment-configurations/consul-enterprise.mdx similarity index 100% rename from website/content/docs/k8s/installation/deployment-configurations/consul-enterprise.mdx rename to website/content/docs/k8s/deployment-configurations/consul-enterprise.mdx diff --git a/website/content/docs/k8s/installation/multi-cluster/index.mdx b/website/content/docs/k8s/deployment-configurations/multi-cluster/index.mdx similarity index 100% rename from website/content/docs/k8s/installation/multi-cluster/index.mdx rename to website/content/docs/k8s/deployment-configurations/multi-cluster/index.mdx diff --git a/website/content/docs/k8s/installation/multi-cluster/kubernetes.mdx b/website/content/docs/k8s/deployment-configurations/multi-cluster/kubernetes.mdx similarity index 100% rename from website/content/docs/k8s/installation/multi-cluster/kubernetes.mdx rename to website/content/docs/k8s/deployment-configurations/multi-cluster/kubernetes.mdx diff --git a/website/content/docs/k8s/installation/multi-cluster/vms-and-kubernetes.mdx b/website/content/docs/k8s/deployment-configurations/multi-cluster/vms-and-kubernetes.mdx similarity index 100% rename from website/content/docs/k8s/installation/multi-cluster/vms-and-kubernetes.mdx rename to website/content/docs/k8s/deployment-configurations/multi-cluster/vms-and-kubernetes.mdx diff --git a/website/content/docs/k8s/installation/deployment-configurations/servers-outside-kubernetes.mdx b/website/content/docs/k8s/deployment-configurations/servers-outside-kubernetes.mdx similarity index 100% rename from website/content/docs/k8s/installation/deployment-configurations/servers-outside-kubernetes.mdx rename to website/content/docs/k8s/deployment-configurations/servers-outside-kubernetes.mdx diff --git a/website/content/docs/k8s/installation/deployment-configurations/single-dc-multi-k8s.mdx b/website/content/docs/k8s/deployment-configurations/single-dc-multi-k8s.mdx similarity index 100% rename from website/content/docs/k8s/installation/deployment-configurations/single-dc-multi-k8s.mdx rename to website/content/docs/k8s/deployment-configurations/single-dc-multi-k8s.mdx diff --git a/website/content/docs/k8s/installation/vault/data-integration/bootstrap-token.mdx b/website/content/docs/k8s/deployment-configurations/vault/data-integration/bootstrap-token.mdx similarity index 100% rename from website/content/docs/k8s/installation/vault/data-integration/bootstrap-token.mdx rename to website/content/docs/k8s/deployment-configurations/vault/data-integration/bootstrap-token.mdx diff --git a/website/content/docs/k8s/installation/vault/data-integration/connect-ca.mdx b/website/content/docs/k8s/deployment-configurations/vault/data-integration/connect-ca.mdx similarity index 100% rename from website/content/docs/k8s/installation/vault/data-integration/connect-ca.mdx rename to website/content/docs/k8s/deployment-configurations/vault/data-integration/connect-ca.mdx diff --git a/website/content/docs/k8s/installation/vault/data-integration/enterprise-license.mdx 
b/website/content/docs/k8s/deployment-configurations/vault/data-integration/enterprise-license.mdx similarity index 100% rename from website/content/docs/k8s/installation/vault/data-integration/enterprise-license.mdx rename to website/content/docs/k8s/deployment-configurations/vault/data-integration/enterprise-license.mdx diff --git a/website/content/docs/k8s/installation/vault/data-integration/gossip.mdx b/website/content/docs/k8s/deployment-configurations/vault/data-integration/gossip.mdx similarity index 100% rename from website/content/docs/k8s/installation/vault/data-integration/gossip.mdx rename to website/content/docs/k8s/deployment-configurations/vault/data-integration/gossip.mdx diff --git a/website/content/docs/k8s/installation/vault/data-integration/index.mdx b/website/content/docs/k8s/deployment-configurations/vault/data-integration/index.mdx similarity index 100% rename from website/content/docs/k8s/installation/vault/data-integration/index.mdx rename to website/content/docs/k8s/deployment-configurations/vault/data-integration/index.mdx diff --git a/website/content/docs/k8s/installation/vault/data-integration/partition-token.mdx b/website/content/docs/k8s/deployment-configurations/vault/data-integration/partition-token.mdx similarity index 100% rename from website/content/docs/k8s/installation/vault/data-integration/partition-token.mdx rename to website/content/docs/k8s/deployment-configurations/vault/data-integration/partition-token.mdx diff --git a/website/content/docs/k8s/installation/vault/data-integration/replication-token.mdx b/website/content/docs/k8s/deployment-configurations/vault/data-integration/replication-token.mdx similarity index 100% rename from website/content/docs/k8s/installation/vault/data-integration/replication-token.mdx rename to website/content/docs/k8s/deployment-configurations/vault/data-integration/replication-token.mdx diff --git a/website/content/docs/k8s/installation/vault/data-integration/server-tls.mdx b/website/content/docs/k8s/deployment-configurations/vault/data-integration/server-tls.mdx similarity index 100% rename from website/content/docs/k8s/installation/vault/data-integration/server-tls.mdx rename to website/content/docs/k8s/deployment-configurations/vault/data-integration/server-tls.mdx diff --git a/website/content/docs/k8s/installation/vault/data-integration/snapshot-agent-config.mdx b/website/content/docs/k8s/deployment-configurations/vault/data-integration/snapshot-agent-config.mdx similarity index 100% rename from website/content/docs/k8s/installation/vault/data-integration/snapshot-agent-config.mdx rename to website/content/docs/k8s/deployment-configurations/vault/data-integration/snapshot-agent-config.mdx diff --git a/website/content/docs/k8s/installation/vault/data-integration/webhook-certs.mdx b/website/content/docs/k8s/deployment-configurations/vault/data-integration/webhook-certs.mdx similarity index 100% rename from website/content/docs/k8s/installation/vault/data-integration/webhook-certs.mdx rename to website/content/docs/k8s/deployment-configurations/vault/data-integration/webhook-certs.mdx diff --git a/website/content/docs/k8s/installation/vault/index.mdx b/website/content/docs/k8s/deployment-configurations/vault/index.mdx similarity index 100% rename from website/content/docs/k8s/installation/vault/index.mdx rename to website/content/docs/k8s/deployment-configurations/vault/index.mdx diff --git a/website/content/docs/k8s/installation/vault/systems-integration.mdx 
b/website/content/docs/k8s/deployment-configurations/vault/systems-integration.mdx
similarity index 100%
rename from website/content/docs/k8s/installation/vault/systems-integration.mdx
rename to website/content/docs/k8s/deployment-configurations/vault/systems-integration.mdx
diff --git a/website/content/docs/k8s/installation/vault/wan-federation.mdx b/website/content/docs/k8s/deployment-configurations/vault/wan-federation.mdx
similarity index 100%
rename from website/content/docs/k8s/installation/vault/wan-federation.mdx
rename to website/content/docs/k8s/deployment-configurations/vault/wan-federation.mdx
diff --git a/website/content/docs/k8s/installation/install-cli.mdx b/website/content/docs/k8s/installation/install-cli.mdx
index 4cd3ea9f5..ae6de69bf 100644
--- a/website/content/docs/k8s/installation/install-cli.mdx
+++ b/website/content/docs/k8s/installation/install-cli.mdx
@@ -1,23 +1,43 @@
 ---
 layout: docs
-page_title: Installing the Consul K8s CLI
+page_title: Install Consul from the Consul K8s CLI
 description: >-
-  Consul K8s CLI is a tool for quickly installing and interacting with Consul on Kubernetes.
+  This topic describes how to install Consul on Kubernetes using the Consul K8s CLI tool.
 ---
 
-# Installing the Consul K8s CLI
-Consul K8s CLI is a tool for quickly installing and interacting with Consul on Kubernetes. Ensure that you are installing the correct version of the CLI for your Consul on Kubernetes deployment, as the CLI and the control plane are version dependent.
+# Install Consul on Kubernetes from the Consul K8s CLI
+
+This topic describes how to install Consul on Kubernetes using the Consul K8s CLI tool. The Consul K8s CLI tool enables you to quickly install and interact with Consul on Kubernetes. Use the Consul K8s CLI tool to install Consul on Kubernetes if you are deploying a single cluster. We recommend using the [Helm chart installation method](/docs/k8s/installation/install) if you are installing Consul on Kubernetes for multi-cluster deployments that involve cross-partition or cross datacenter communication.
+
+## Introduction
+
+If it is your first time installing Consul on Kubernetes, then you must first install the Consul K8s CLI tool. You can install Consul on Kubernetes using the Consul K8s tool after installing the CLI.
+
+## Requirements
+
+- The `kubectl` client must already be configured to authenticate to the Kubernetes cluster using a valid `kubeconfig` file.
+- Install one of the following package managers so that you can install the Consul K8s CLI tool. The installation instructions also provide commands for installing and using the package managers:
+  - MacOS: [Homebrew](https://brew.sh)
+  - Ubuntu/Debian: apt
+  - CentOS/RHEL: yum
+
+You must install the correct version of the CLI for your Consul on Kubernetes deployment. To deploy a previous version of Consul on Kubernetes, download the specific version of the CLI that matches the version of the control plane that you would like to deploy. Refer to the [compatibility matrix](/docs/k8s/compatibility) for details.
+
 ## Install the CLI
 
-These instructions describe how to install the latest version of the CLI depending on your Operating System, and are suited for fresh installations of Consul on Kubernetes.
+The following instructions describe how to install the latest version of the Consul K8s CLI tool, as well as earlier versions, so that you can install an appropriate version of the tool for your control plane. 
+
+### Install the latest version
+
+Complete the following instructions for a fresh installation of Consul on Kubernetes.
 
-The [Homebrew](https://brew.sh) package manager is required to complete the following installation instructions. The Homebrew formulae will always install the latest version of a binary. If you are looking to install a specific version of the CLI please follow [Install a specific version of Consul K8s CLI](#install-a-specific-version-of-the-cli).
+The [Homebrew](https://brew.sh) package manager is required to complete the following installation instructions. The Homebrew formula always installs the latest version of a binary.
 
 1. Install the HashiCorp `tap`, which is a repository of all Homebrew packages for HashiCorp:
 
    ```shell-session
   $ brew tap hashicorp/tap
   ```
@@ -104,17 +124,15 @@ The [Homebrew](https://brew.sh) package manager is required to complete the foll
 
 
-## Install a specific version of the CLI
+### Install a previous version
 
-These instructions describe how to install a specific version of the CLI and are best suited for installing or managing specific versions of the Consul on Kubernetes control plane.
+Complete the following instructions to install a specific version of the CLI so that your tool is compatible with your Consul on Kubernetes control plane. Refer to the [compatibility matrix](/docs/k8s/compatibility) for additional information.
 
-Homebrew does not provide a method to install previous versions of a package. The Consul K8s CLI will need to be installed manually. Previous versions of the Consul K8s CLI could be used to install a specific version of Consul on the Kubernetes control plane. Manual upgrades to the Consul K8s CLI is also performed in the same manner, provided that the Consul K8s CLI was manually installed before.
-
-1. Download the desired Consul K8s CLI using the following `curl` command. Enter the appropriate version for your deployment via the `$VERSION` environment variable.
+1. Download the appropriate version of the Consul K8s CLI using the following `curl` command. Set the `$VERSION` environment variable to the appropriate version for your deployment.
 
    ```shell-session
   $ export VERSION=0.39.0 && \
@@ -203,3 +221,61 @@ Homebrew does not provide a method to install previous versions of a package. Th
 
+
+## Install Consul on Kubernetes
+
+After installing the Consul K8s CLI tool (`consul-k8s`), issue the `install` subcommand and any additional options to install Consul on Kubernetes. Refer to the [Consul K8s CLI reference](/docs/k8s/k8s-cli) for details about all commands and available options. If you do not include any additional options, the `consul-k8s` CLI installs Consul on Kubernetes using the default settings from the Consul Helm chart values. The following example installs Consul on Kubernetes with service mesh and CRDs enabled.
+
+```shell-session
+$ consul-k8s install -set connectInject.enabled=true -set controller.enabled=true
+
+==> Pre-Install Checks
+No existing installations found.
+ ✓ No previous persistent volume claims found
+ ✓ No previous secrets found
+==> Consul Installation Summary
+    Installation name: consul
+    Namespace: consul
+    Overrides:
+    connectInject:
+      enabled: true
+    controller:
+      enabled: true
+
+    Proceed with installation? 
(y/N) y + +==> Running Installation + ✓ Downloaded charts +--> creating 1 resource(s) +--> creating 45 resource(s) +--> beginning wait for 45 resources with timeout of 10m0s + ✓ Consul installed into namespace "consul" +``` + +You can include the `-auto-approve` option set to `true` to proceed with the installation if the pre-install checks pass. + +The pre-install checks may fail if existing `PersistentVolumeClaims` (PVC) are detected. Refer to the [uninstall instructions](/docs/k8s/operations/uninstall#uninstall-consul) for information about removing PVCs. + +## Check the Consul cluster status + +Issue the `consul-k8s status` command to view the status of the installed Consul cluster. + +```shell-session +$ consul-k8s status + +==> Consul-K8s Status Summary + NAME | NAMESPACE | STATUS | CHARTVERSION | APPVERSION | REVISION | LAST UPDATED +---------+-----------+----------+--------------+------------+----------+-------------------------- + consul | consul | deployed | 0.40.0 | 1.11.2 | 1 | 2022/01/31 16:58:51 PST + +==> Config: + connectInject: + enabled: true + controller: + enabled: true + global: + name: consul + +✓ Consul servers healthy (3/3) +✓ Consul clients healthy (3/3) +``` \ No newline at end of file diff --git a/website/content/docs/k8s/installation/install.mdx b/website/content/docs/k8s/installation/install.mdx index ffed47349..7247013d6 100644 --- a/website/content/docs/k8s/installation/install.mdx +++ b/website/content/docs/k8s/installation/install.mdx @@ -1,133 +1,37 @@ --- layout: docs -page_title: Installing Consul on Kubernetes +page_title: Install Consul on Kubernetes from the Helm Chart description: >- - Consul can run directly on Kubernetes, both in server or client mode. For - pure-Kubernetes workloads, this enables Consul to also exist purely within - Kubernetes. For heterogeneous workloads, Consul agents can join a server - running inside or outside of Kubernetes. + This topic describes how to install Consul on Kubernetes using the official Consul Helm chart. --- -# Installing Consul on Kubernetes +# Install Consul on Kubernetes from the Helm Chart -Consul can run directly on Kubernetes, both in server or client mode. -For pure-Kubernetes workloads, this enables Consul to also exist purely -within Kubernetes. For heterogeneous workloads, Consul agents can join -a server running inside or outside of Kubernetes. +This topic describes how to install Consul on Kubernetes using the official Consul Helm chart. For instruction on how to install Consul on Kubernetes using the Consul K8s CLI, refer to [Installing the Consul K8s CLI](/docs/k8s/installation/install-cli). -You can install Consul on Kubernetes using the following methods: +## Introduction -1. [Consul K8s CLI install](#consul-k8s-cli-installation) -1. [Helm chart install](#helm-chart-installation) +We recommend using the Consul Helm chart to install Consul on Kubernetes for multi-cluster installations that involve cross-partition or cross datacenter communication. The Helm chart installs and configures all necessary components to run Consul. The configuration enables you to run a server cluster, a client cluster, or both. + +Consul can run directly on Kubernetes in server or client mode so that you can leverage Consul functionality if your workloads are fully deployed to Kubernetes. For heterogeneous workloads, Consul agents can join a server running inside or outside of Kubernetes. Refer to the [architecture section](/docs/k8s/architecture) to learn more about the general architecture of Consul on Kubernetes. 
+ +The Helm chart exposes several useful configurations and automatically sets up complex resources, but it does not automatically operate Consul. You must still become familiar with how to monitor, backup, and upgrade the Consul cluster. + +The Helm chart has no required configuration, so it installs a Consul cluster with default configurations. We strongly recommend that you [learn about the configuration options](/docs/k8s/helm#configuration-values) prior to going to production. + +-> **Security warning**: By default, Helm installs Consul with security configurations disabled so that the out-of-box experience is optimized for new users. We strongly recommend using a properly-secured Kubernetes cluster or making sure that you understand and enable [Consul’s security features](/docs/security) before going into production. Some security features are not supported in the Helm chart and require additional manual configuration. Refer to the [architecture](/docs/k8s/installation/install#architecture) section to learn more about the general architecture of Consul on Kubernetes. + For a hands-on experience with Consul as a service mesh for Kubernetes, follow the [Getting Started with Consul service mesh](https://learn.hashicorp.com/tutorials/consul/service-mesh-deploy?utm_source=WEBSITE&utm_medium=WEB_IO&utm_offer=ARTICLE_PAGE&utm_content=DOCS) tutorial. -## Consul K8s CLI Installation +## Requirements -We recommend using the [Consul K8s CLI](/docs/k8s/k8s-cli) to install Consul on Kubernetes for single-cluster deployments. You can install Consul on Kubernetes using the Consul K8s CLI tool after installing the CLI. +- Helm version 3.2+. Visit the [Helm website](https://helm.sh/docs/intro/install/) to download the latest version. -Before beginning the installation process, verify that `kubectl` is already configured to authenticate to the Kubernetes cluster using a valid `kubeconfig` file. - -The [Homebrew](https://brew.sh) package manager is required to complete the following installation instructions. - --> ** NOTE:** To deploy a previous version of Consul on Kubernetes via the CLI, you will need to first download the specific version of the CLI that matches the version of the control plane that you would like to deploy. Please follow [Install a specific version of Consul K8s CLI](/docs/k8s/installation/install-cli#install-a-specific-version-of-the-cli). - -1. Install the HashiCorp `tap`, which is a repository of all Homebrew packages for HashiCorp: - ```shell-session - $ brew tap hashicorp/tap - ``` - -1. Install the Consul K8s CLI with the `hashicorp/tap/consul` formula. - ```shell-session - $ brew install hashicorp/tap/consul-k8s - ``` - -1. Issue the `install` subcommand to install Consul on Kubernetes. Refer to the [Consul K8s CLI reference](/docs/k8s/k8s-cli) for details about all commands and available options. Without any additional options passed, the `consul-k8s` CLI will install Consul on Kubernetes by using the Consul Helm chart's default values. Below is an example that installs Consul on Kubernetes with Service Mesh and CRDs enabled. If you did not set the `-auto-approve` option to `true`, you will be prompted to proceed with the installation if the pre-install checks pass. - - -> The pre-install checks may fail if existing `PersistentVolumeClaims` (PVC) are detected. Refer to the [uninstall instructions](/docs/k8s/operations/uninstall#uninstall-consul) for information about removing PVCs. 
- - ```shell-session - $ consul-k8s install -set connectInject.enabled=true -set controller.enabled=true - - ==> Pre-Install Checks - No existing installations found. - ✓ No previous persistent volume claims found - ✓ No previous secrets found - - ==> Consul Installation Summary - Installation name: consul - Namespace: consul - Overrides: - connectInject: - enabled: true - controller: - enabled: true - - Proceed with installation? (y/N) y - - ==> Running Installation - ✓ Downloaded charts - --> creating 1 resource(s) - --> creating 45 resource(s) - --> beginning wait for 45 resources with timeout of 10m0s - ✓ Consul installed into namespace "consul" - ``` - -1. (Optional) Issue the `consul-k8s status` command to quickly glance at the status of the installed Consul cluster. - - ```shell-session - $ consul-k8s status - - ==> Consul-K8s Status Summary - NAME | NAMESPACE | STATUS | CHARTVERSION | APPVERSION | REVISION | LAST UPDATED - ---------+-----------+----------+--------------+------------+----------+-------------------------- - consul | consul | deployed | 0.40.0 | 1.11.2 | 1 | 2022/01/31 16:58:51 PST - - ==> Config: - connectInject: - enabled: true - controller: - enabled: true - global: - name: consul - - ✓ Consul servers healthy (3/3) - ✓ Consul clients healthy (3/3) - ``` - -## Helm Chart Installation - -We recommend using the Consul Helm chart to install Consul on Kubernetes for multi-cluster installations that involve cross-partition of cross datacenter communication. The Helm chart installs and configures all necessary components to run Consul. The configuration enables you to run a server cluster, a client cluster, or both. - -Step-by-step tutorials for how to deploy Consul to Kubernetes, please see -our [Deploy to Kubernetes](https://learn.hashicorp.com/collections/consul/kubernetes-deploy) -collection. This collection includes configuration caveats for single-node deployments. - -The Helm chart exposes several useful configurations and automatically -sets up complex resources, but it **does not automatically operate Consul.** -You must still become familiar with how to monitor, backup, -upgrade, etc. the Consul cluster. - -The Helm chart has no required configuration and will install a Consul -cluster with default configurations. We strongly recommend [learning about the configuration options](/docs/k8s/helm#configuration-values) prior to going to production. - -~> **Security Warning:** By default, the chart will install an insecure configuration -of Consul. This provides a less complicated out-of-box experience for new users, -but is not appropriate for a production setup. We strongly recommend using -a properly-secured Kubernetes cluster or making sure that you understand and enable -the [recommended security features](/docs/security). Currently, -some of these features are not supported in the Helm chart and require additional -manual configuration. - -### Prerequisites - -The Consul Helm only supports Helm 3.2+. Install the latest version of the Helm CLI here: -[Installing Helm](https://helm.sh/docs/intro/install/). - -### Installing Consul +## Install Consul 1. Add the HashiCorp Helm Repository: @@ -171,14 +75,14 @@ The Consul Helm only supports Helm 3.2+. Install the latest version of the Helm ``` -### Customizing Your Installation +## Custom installation If you want to customize your installation, create a `config.yaml` file to override the default settings. 
You can learn what settings are available by running `helm inspect values hashicorp/consul` or by reading the [Helm Chart Reference](/docs/k8s/helm). -#### Minimal `config.yaml` for Consul Service Mesh +### Minimal `config.yaml` for Consul service mesh The minimal settings to enable [Consul Service Mesh]((/docs/k8s/connect)) would be captured in the following `config.yaml` config file: @@ -203,7 +107,59 @@ NAME: consul ... ``` -#### Enable Consul Service Mesh on select namespaces +### Enable the Consul CNI plugin + +By default, Consul generates a `connect-inject init` container as part of the Kubernetes pod startup process when Consul is in [transparent proxy mode](/docs/connect/transparent-proxy). The container configures traffic redirection in the service mesh through the sidecar proxy. To configure redirection, the container requires elevated CAP_NET_ADMIN privileges, which may not be compatible with security policies in your organization. + +Instead, you can enable the Consul container network interface (CNI) plugin to perform traffic redirection. Because the plugin is executed by the Kubernetes kubelet, the plugin already has the elevated privileges necessary to configure the network. + +Add the following configuration to your `config.yaml` file to enable the Consul CNI plugin: + + + + + +```yaml +global: + name: consul +connectInject: + enabled: true + cni: + enabled: true + logLevel: info + cniBinDir: "/opt/cni/bin" + cniNetDir: "/etc/cni/net.d" +``` + + + + +```yaml +global: + name: consul +connectInject: + enabled: true + cni: + enabled: true + logLevel: info + cniBinDir: "/home/kubernetes/bin" + cniNetDir: "/etc/cni/net.d" +``` + + + + + +The following table describes the available CNI plugin options: + +| Option | Description | Default | +| --- | --- | --- | +| `cni.enabled` | Boolean value that enables or disables the CNI plugin. If `true`, the plugin is responsible for redirecting traffic in the service mesh. If `false`, redirection is handled by the `connect-inject init` container. | `false` | +| `cni.logLevel` | String value that specifies the log level for the installer and plugin. You can specify the following values: `info`, `debug`, `error`. | `info` | +| `cni.cniBinDir` | String value that specifies the location on the Kubernetes node where the CNI plugin is installed. | `/opt/cni/bin` | +| `cni.cniNetDir` | String value that specifies the location on the Kubernetes node for storing the CNI configuration. | `/etc/cni/net.d` | + +### Enable Consul service mesh on select namespaces By default, Consul Service Mesh is enabled on almost all namespaces (with the exception of `kube-system` and `local-path-storage`) within a Kubernetes cluster. You can restrict this to a subset of namespaces by specifying a `namespaceSelector` that matches a label attached to each namespace denoting whether to enable Consul service mesh. In order to default to enabling service mesh on select namespaces by label, the `connectInject.default` value must be set to `true`. @@ -239,12 +195,16 @@ NAME: consul ... ``` -#### Updating your Consul on Kubernetes configuration +### Update your Consul on Kubernetes configuration If you've already installed Consul and want to make changes, you'll need to run `helm upgrade`. See [Upgrading](/docs/k8s/upgrade) for more details. -## Viewing the Consul UI +## Usage + +You can view the Consul UI and access the Consul HTTP API after installation. + +### Viewing the Consul UI The Consul UI is enabled by default when using the Helm chart. 
For security reasons, it isn't exposed via a `LoadBalancer` Service by default so you must @@ -293,14 +253,14 @@ Then paste the token into the UI under the ACLs tab (without the `%`). to retrieve the bootstrap token since secondary datacenters use a separate token with less permissions. -### Exposing the UI via a service +#### Exposing the UI via a service If you want to expose the UI via a Kubernetes Service, configure the [`ui.service` chart values](/docs/k8s/helm#v-ui-service). This service will allow requests to the Consul servers so it should not be open to the world. -## Accessing the Consul HTTP API +### Accessing the Consul HTTP API The Consul HTTP API should be accessed by communicating to the local agent running on the same node. While technically any listening agent (client or diff --git a/website/content/docs/k8s/installation/platforms/self-hosted-kubernetes.mdx b/website/content/docs/k8s/platforms/self-hosted-kubernetes.mdx similarity index 100% rename from website/content/docs/k8s/installation/platforms/self-hosted-kubernetes.mdx rename to website/content/docs/k8s/platforms/self-hosted-kubernetes.mdx diff --git a/website/data/docs-nav-data.json b/website/data/docs-nav-data.json index cb33486b7..b89ec4098 100644 --- a/website/data/docs-nav-data.json +++ b/website/data/docs-nav-data.json @@ -419,85 +419,53 @@ "title": "Architecture", "path": "k8s/architecture" }, + { - "title": "Get Started", + "title": "Installation", "routes": [ { - "title": "Installing Consul on Kubernetes", - "path": "k8s/installation/install" - }, - { - "title": "Installing Consul K8s CLI", + "title": "Install from Consul K8s CLI", "path": "k8s/installation/install-cli" }, { - "title": "Platform Guides", - "routes": [ - { - "title": "Minikube", - "href": "https://learn.hashicorp.com/tutorials/consul/kubernetes-minikube?utm_source=consul.io&utm_medium=docs&utm_content=k8s&utm_term=mk" - }, - { - "title": "Kind", - "href": "https://learn.hashicorp.com/tutorials/consul/kubernetes-kind?utm_source=consul.io&utm_medium=docs&utm_content=k8s&utm_term=kind" - }, - { - "title": "AKS (Azure)", - "href": "https://learn.hashicorp.com/tutorials/consul/kubernetes-aks-azure?utm_source=consul.io&utm_medium=docs&utm_content=k8s&utm_term=aks" - }, - { - "title": "EKS (AWS)", - "href": "https://learn.hashicorp.com/tutorials/consul/kubernetes-eks-aws?utm_source=consul.io&utm_medium=docs&utm_content=k8s&utm_term=eks" - }, - { - "title": "GKE (Google Cloud)", - "href": "https://learn.hashicorp.com/tutorials/consul/kubernetes-gke-google?utm_source=consul.io&utm_medium=docs&utm_content=k8s&utm_term=gke" - }, - { - "title": "Red Hat OpenShift", - "href": "https://learn.hashicorp.com/tutorials/consul/kubernetes-openshift-red-hat?utm_source=consul.io&utm_medium=docs&utm_content=k8s&utm_term=openshift" - }, - { - "title": "Self Hosted Kubernetes", - "path": "k8s/installation/platforms/self-hosted-kubernetes" - } - ] + "title": "Install from Helm Chart", + "path": "k8s/installation/install" + } + ] + }, + { + "title": "Deployment Configurations", + "routes": [ + { + "title": "Consul Clients Outside Kubernetes", + "path": "k8s/deployment-configurations/clients-outside-kubernetes" }, { - "title": "Deployment Configurations", - "routes": [ - { - "title": "Consul Clients Outside Kubernetes", - "path": "k8s/installation/deployment-configurations/clients-outside-kubernetes" - }, - { - "title": "Consul Servers Outside Kubernetes", - "path": "k8s/installation/deployment-configurations/servers-outside-kubernetes" - }, - { - "title": "Single 
Consul Datacenter in Multiple Kubernetes Clusters", - "path": "k8s/installation/deployment-configurations/single-dc-multi-k8s" - }, - { - "title": "Consul Enterprise", - "path": "k8s/installation/deployment-configurations/consul-enterprise" - } - ] + "title": "Consul Servers Outside Kubernetes", + "path": "k8s/deployment-configurations/servers-outside-kubernetes" + }, + { + "title": "Single Consul Datacenter in Multiple Kubernetes Clusters", + "path": "k8s/deployment-configurations/single-dc-multi-k8s" + }, + { + "title": "Consul Enterprise", + "path": "k8s/deployment-configurations/consul-enterprise" }, { "title": "Multi-Cluster Federation", "routes": [ { "title": "Overview", - "path": "k8s/installation/multi-cluster" + "path": "k8s/deployment-configurations/multi-cluster" }, { "title": "Federation Between Kubernetes Clusters", - "path": "k8s/installation/multi-cluster/kubernetes" + "path": "k8s/deployment-configurations/multi-cluster/kubernetes" }, { "title": "Federation Between VMs and Kubernetes", - "path": "k8s/installation/multi-cluster/vms-and-kubernetes" + "path": "k8s/deployment-configurations/multi-cluster/vms-and-kubernetes" } ] }, @@ -506,65 +474,98 @@ "routes": [ { "title": "Overview", - "path": "k8s/installation/vault" + "path": "k8s/deployment-configurations/vault" }, { "title": "Systems Integration", - "path": "k8s/installation/vault/systems-integration" + "path": "k8s/deployment-configurations/vault/systems-integration" }, { "title": "Data Integration", "routes": [ { "title": "Overview", - "path": "k8s/installation/vault/data-integration" + "path": "k8s/deployment-configurations/vault/data-integration" }, { "title": "Bootstrap Token", - "path": "k8s/installation/vault/data-integration/bootstrap-token" + "path": "k8s/deployment-configurations/vault/data-integration/bootstrap-token" }, { "title": "Enterprise License", - "path": "k8s/installation/vault/data-integration/enterprise-license" + "path": "k8s/deployment-configurations/vault/data-integration/enterprise-license" }, { "title": "Gossip Encryption Key", - "path": "k8s/installation/vault/data-integration/gossip" + "path": "k8s/deployment-configurations/vault/data-integration/gossip" }, { "title": "Partition Token", - "path": "k8s/installation/vault/data-integration/partition-token" + "path": "k8s/deployment-configurations/vault/data-integration/partition-token" }, { "title": "Replication Token", - "path": "k8s/installation/vault/data-integration/replication-token" + "path": "k8s/deployment-configurations/vault/data-integration/replication-token" }, { "title": "Server TLS", - "path": "k8s/installation/vault/data-integration/server-tls" + "path": "k8s/deployment-configurations/vault/data-integration/server-tls" }, { "title": "Service Mesh Certificates", - "path": "k8s/installation/vault/data-integration/connect-ca" + "path": "k8s/deployment-configurations/vault/data-integration/connect-ca" }, { "title": "Snapshot Agent Config", - "path": "k8s/installation/vault/data-integration/snapshot-agent-config" + "path": "k8s/deployment-configurations/vault/data-integration/snapshot-agent-config" }, { "title": "Webhook Certificates", - "path": "k8s/installation/vault/data-integration/webhook-certs" + "path": "k8s/deployment-configurations/vault/data-integration/webhook-certs" } ] }, { "title": "WAN Federation", - "path": "k8s/installation/vault/wan-federation" + "path": "k8s/deployment-configurations/vault/wan-federation" } ] } ] }, + { + "title": "Platform Guides", + "routes": [ + { + "title": "Minikube", + "href": 
"https://learn.hashicorp.com/tutorials/consul/kubernetes-minikube?utm_source=consul.io&utm_medium=docs&utm_content=k8s&utm_term=mk" + }, + { + "title": "Kind", + "href": "https://learn.hashicorp.com/tutorials/consul/kubernetes-kind?utm_source=consul.io&utm_medium=docs&utm_content=k8s&utm_term=kind" + }, + { + "title": "AKS (Azure)", + "href": "https://learn.hashicorp.com/tutorials/consul/kubernetes-aks-azure?utm_source=consul.io&utm_medium=docs&utm_content=k8s&utm_term=aks" + }, + { + "title": "EKS (AWS)", + "href": "https://learn.hashicorp.com/tutorials/consul/kubernetes-eks-aws?utm_source=consul.io&utm_medium=docs&utm_content=k8s&utm_term=eks" + }, + { + "title": "GKE (Google Cloud)", + "href": "https://learn.hashicorp.com/tutorials/consul/kubernetes-gke-google?utm_source=consul.io&utm_medium=docs&utm_content=k8s&utm_term=gke" + }, + { + "title": "Red Hat OpenShift", + "href": "https://learn.hashicorp.com/tutorials/consul/kubernetes-openshift-red-hat?utm_source=consul.io&utm_medium=docs&utm_content=k8s&utm_term=openshift" + }, + { + "title": "Self Hosted Kubernetes", + "path": "k8s/platforms/self-hosted-kubernetes" + } + ] + }, { "title": "Service Mesh", "routes": [ @@ -693,6 +694,7 @@ } ] }, + { "title": "AWS ECS", "routes": [ diff --git a/website/redirects.js b/website/redirects.js index a4b3f272b..c9c5b668b 100644 --- a/website/redirects.js +++ b/website/redirects.js @@ -1278,6 +1278,129 @@ module.exports = [ destination: '/docs/k8s/installation/vault/data-integration/connect-ca', permanent: true, }, + { + source: '/docs/k8s/installation/install#consul-k8s-cli-installation', + destination: '/docs/k8s/installation/install-cli', + permanent: true, + }, + { + source: + '/docs/k8s/installation/deployment-configurations/clients-outside-kubernetes', + destination: + '/docs/k8s/deployment-configurations/clients-outside-kubernetes', + permanent: true, + }, + { + source: + '/docs/k8s/installation/deployment-configurations/servers-outside-kubernetes', + destination: + '/docs/k8s/deployment-configurations/servers-outside-kubernetes', + permanent: true, + }, + { + source: + '/docs/k8s/installation/deployment-configurations/single-dc-multi-k8s', + destination: '/docs/k8s/deployment-configurations/single-dc-multi-k8s', + permanent: true, + }, + { + source: + '/docs/k8s/installation/deployment-configurations/consul-enterprise', + destination: '/docs/k8s/deployment-configurations/consul-enterprise', + permanent: true, + }, + { + source: '/docs/k8s/installation/multi-cluster', + destination: '/docs/k8s/deployment-configurations/multi-cluster', + permanent: true, + }, + { + source: '/docs/k8s/installation/multi-cluster/kubernetes', + destination: '/docs/k8s/deployment-configurations/multi-cluster/kubernetes', + permanent: true, + }, + { + source: '/docs/k8s/installation/multi-cluster/vms-and-kubernetes', + destination: + '/docs/k8s/deployent-configurations/multi-cluster/vms-and-kubernetes', + permanent: true, + }, + { + source: '/docs/k8s/installation/vault', + destination: '/docs/k8s/deployment-configurations/vault', + permanent: true, + }, + { + source: '/docs/k8s/installation/vault/systems-integration', + destination: + '/docs/k8s/deployment-configurations/vault/systems-integration', + permanent: true, + }, + { + source: '/docs/k8s/installation/vault/data-integration', + destination: '/docs/k8s/deployment-configurations/vault/data-integration', + permanent: true, + }, + { + source: '/docs/k8s/installation/vault/data-integration/bootstrap-token', + destination: + 
'/docs/k8s/deployment-configurations/vault/data-integration/bootstrap-token', + permanent: true, + }, + { + source: '/docs/k8s/installation/vault/data-integration/enterprise-license', + destination: + '/docs/k8s/deployment-configurations/vault/data-integration/enterprise-license', + permanent: true, + }, + { + source: '/docs/k8s/installation/vault/data-integration/gossip', + destination: + '/docs/k8s/deployment-configurations/vault/data-integration/gossip', + permanent: true, + }, + { + source: '/docs/k8s/installation/vault/data-integration/partition-token', + destination: + '/docs/k8s/deployment-configurations/vault/data-integration/partition-token', + permanent: true, + }, + { + source: '/docs/k8s/installation/vault/data-integration/replication-token', + destination: + '/docs/k8s/deployment-configurations/vault/data-integration/replication-token', + permanent: true, + }, + { + source: '/docs/k8s/installation/vault/data-integration/server-tls', + destination: + '/docs/k8s/deployment-configurations/vault/data-integration/server-tls', + permanent: true, + }, + { + source: '/docs/k8s/installation/vault/data-integration/connect-ca', + destination: + '/docs/k8s/deployment-configurations/vault/data-integration/connect-ca', + permanent: true, + }, + { + source: + '/docs/k8s/installation/vault/data-integration/snapshot-agent-config', + destination: + '/docs/k8s/deployment-configurations/vault/data-integration/snapshot-agent-config', + permanent: true, + }, + { + source: '/docs/k8s/installation/vault/data-integration/webhook-certs', + destination: + '/docs/k8s/deployment-configurations/vault/data-integration/webhook-certs', + permanent: true, + }, + { + source: '/docs/k8s/installation/vault/wan-federation', + destination: '/docs/k8s/deployment-configurations/vault/wan-federation', + permanent: true, + }, { source: '/docs/api-gateway/common-errors', destination: '/docs/api-gateway/usage#error-messages', @@ -1288,4 +1411,127 @@ module.exports = [ destination: '/docs/api-gateway/upgrades', permanent: true, }, + { + source: '/docs/k8s/installation/install#consul-k8s-cli-installation', + destination: '/docs/k8s/installation/install-cli', + permanent: true, + }, + { + source: + '/docs/k8s/installation/deployment-configurations/clients-outside-kubernetes', + destination: + '/docs/k8s/deployment-configurations/clients-outside-kubernetes', + permanent: true, + }, + { + source: + '/docs/k8s/installation/deployment-configurations/servers-outside-kubernetes', + destination: + '/docs/k8s/deployment-configurations/servers-outside-kubernetes', + permanent: true, + }, + { + source: + '/docs/k8s/installation/deployment-configurations/single-dc-multi-k8s', + destination: '/docs/k8s/deployment-configurations/single-dc-multi-k8s', + permanent: true, + }, + { + source: + '/docs/k8s/installation/deployment-configurations/consul-enterprise', + destination: '/docs/k8s/deployment-configurations/consul-enterprise', + permanent: true, + }, + { + source: '/docs/k8s/installation/multi-cluster', + destination: '/docs/k8s/deployment-configurations/multi-cluster', + permanent: true, + }, + { + source: '/docs/k8s/installation/multi-cluster/kubernetes', + destination: '/docs/k8s/deployment-configurations/multi-cluster/kubernetes', + permanent: true, + }, + { + source: '/docs/k8s/installation/multi-cluster/vms-and-kubernetes', + destination: + '/docs/k8s/deployent-configurations/multi-cluster/vms-and-kubernetes', + permanent: true, + }, + { + source: '/docs/k8s/installation/vault', + destination: 
'/docs/k8s/deployment-configurations/vault', + permanent: true, + }, + { + source: '/docs/k8s/installation/vault/systems-integration', + destination: + '/docs/k8s/deployment-configurations/vault/systems-integration', + permanent: true, + }, + { + source: '/docs/k8s/installation/vault/data-integration', + destination: '/docs/k8s/deployment-configurations/vault/data-integration', + permanent: true, + }, + { + source: '/docs/k8s/installation/vault/data-integration/bootstrap-token', + destination: + '/docs/k8s/deployment-configurations/vault/data-integration/bootstrap-token', + permanent: true, + }, + { + source: '/docs/k8s/installation/vault/data-integration/enterprise-license', + destination: + '/docs/k8s/deployment-configurations/vault/data-integration/enterprise-license', + permanent: true, + }, + { + source: '/docs/k8s/installation/vault/data-integration/gossip', + destination: + '/docs/k8s/deployment-configurations/vault/data-integration/gossip', + permanent: true, + }, + { + source: '/docs/k8s/installation/vault/data-integration/partition-token', + destination: + '/docs/k8s/deployment-configurations/vault/data-integration/partition-token', + permanent: true, + }, + { + source: '/docs/k8s/installation/vault/data-integration/replication-token', + destination: + '/docs/k8s/deployment-configurations/vault/data-integration/replication-token', + permanent: true, + }, + { + source: '/docs/k8s/installation/vault/data-integration/server-tls', + destination: + '/docs/k8s/deployment-configurations/vault/data-integration/server-tls', + permanent: true, + }, + { + source: '/docs/k8s/installation/vault/data-integration/connect-ca', + destination: + '/docs/k8s/deployment-configurations/vault/data-integration/connect-ca', + permanent: true, + }, + { + source: + '/docs/k8s/installation/vault/data-integration/snapshot-agent-config', + destination: + '/docs/k8s/deployment-configurations/vault/data-integration/snapshot-agent-config', + permanent: true, + }, + { + source: '/docs/k8s/installation/vault/data-integration/webhook-certs', + destination: + '/docs/k8s/deployment-configurations/vault/data-integration/webhook-certs', + permanent: true, + }, + { + source: '/docs/k8s/installation/vault/wan-federation', + destination: '/docs/k8s/deployment-configurations/vault/wan-federation', + permanent: true, + }, ] From 14994212c5bc2ce7de8a6a7a092a9a1bc811b363 Mon Sep 17 00:00:00 2001 From: Kyle Schochenmaier Date: Thu, 1 Sep 2022 19:21:27 -0500 Subject: [PATCH 47/55] update helm docs for release 0.48.0 (#14459) --- website/content/docs/k8s/helm.mdx | 52 +++++++++++++++++++++++++++---- 1 file changed, 46 insertions(+), 6 deletions(-) diff --git a/website/content/docs/k8s/helm.mdx b/website/content/docs/k8s/helm.mdx index be0c34080..c39bfaac2 100644 --- a/website/content/docs/k8s/helm.mdx +++ b/website/content/docs/k8s/helm.mdx @@ -270,10 +270,10 @@ Use these links to navigate to a particular top-level stanza. - `authMethodPath` ((#v-global-secretsbackend-vault-connectca-authmethodpath)) (`string: kubernetes`) - The mount path of the Kubernetes auth method in Vault. - `rootPKIPath` ((#v-global-secretsbackend-vault-connectca-rootpkipath)) (`string: ""`) - The path to a PKI secrets engine for the root certificate. - For more details, [Vault Connect CA configuration](https://www.consul.io/docs/connect/ca/vault#rootpkipath). + For more details, please refer to [Vault Connect CA configuration](https://www.consul.io/docs/connect/ca/vault#rootpkipath). 
- `intermediatePKIPath` ((#v-global-secretsbackend-vault-connectca-intermediatepkipath)) (`string: ""`) - The path to a PKI secrets engine for the generated intermediate certificate.
-    For more details, [Vault Connect CA configuration](https://www.consul.io/docs/connect/ca/vault#intermediatepkipath).
+    For more details, please refer to [Vault Connect CA configuration](https://www.consul.io/docs/connect/ca/vault#intermediatepkipath).
 
   - `additionalConfig` ((#v-global-secretsbackend-vault-connectca-additionalconfig)) (`string: {}`) - Additional Connect CA configuration in JSON format.
     Please refer to [Vault Connect CA configuration](https://www.consul.io/docs/connect/ca/vault#configuration)
@@ -286,8 +286,8 @@
       {
        "connect": [{
          "ca_config": [{
-            "leaf_cert_ttl": "36h",
-            "namespace": "my-vault-ns"
+            "namespace": "my-vault-ns",
+            "leaf_cert_ttl": "36h"
          }]
        }]
      }
@@ -505,8 +505,7 @@
     `-federation` (if setting `global.name`), otherwise
     `-consul-federation`.
 
-  - `primaryDatacenter` ((#v-global-federation-primarydatacenter)) (`string: null`) - The name of the primary datacenter. This should only be set for datacenters
-    that are not the primary datacenter.
+  - `primaryDatacenter` ((#v-global-federation-primarydatacenter)) (`string: null`) - The name of the primary datacenter.
 
   - `primaryGateways` ((#v-global-federation-primarygateways)) (`array: []`) - A list of addresses of the primary mesh gateways in the form `:`.
     (e.g. ["1.1.1.1:443", "2.3.4.5:443"]
@@ -1577,6 +1576,47 @@
     --set 'connectInject.disruptionBudget.maxUnavailable=0'` flag to the helm chart installation command
     because of a limitation in the Helm templating language.
 
+  - `cni` ((#v-connectinject-cni)) - Configures consul-cni plugin for Consul Service mesh services
+
+    - `enabled` ((#v-connectinject-cni-enabled)) (`boolean: false`) - If true, then all traffic redirection setup will use the consul-cni plugin.
+      Requires connectInject.enabled to also be true.
+
+    - `logLevel` ((#v-connectinject-cni-loglevel)) (`string: null`) - Log level for the installer and plugin. Overrides global.logLevel
+
+    - `cniBinDir` ((#v-connectinject-cni-cnibindir)) (`string: /opt/cni/bin`) - Location on the kubernetes node where the CNI plugin is installed. Should be the absolute path and start with a '/'
+      Example on GKE:
+
+      ```yaml
+      cniBinDir: "/home/kubernetes/bin"
+      ```
+
+    - `cniNetDir` ((#v-connectinject-cni-cninetdir)) (`string: /etc/cni/net.d`) - Location on the kubernetes node of all CNI configuration. Should be the absolute path and start with a '/'
+
+    - `resources` ((#v-connectinject-cni-resources)) (`map`) - The resource settings for CNI installer daemonset.
+
+    - `resourceQuota` ((#v-connectinject-cni-resourcequota)) - Resource quotas for running the daemonset as system critical pods
+
+      - `pods` ((#v-connectinject-cni-resourcequota-pods)) (`integer: 5000`)
+
+    - `securityContext` ((#v-connectinject-cni-securitycontext)) (`map`) - The security context for the CNI installer daemonset. This should be a YAML map corresponding to a
+      Kubernetes [SecurityContext](https://kubernetes.io/docs/tasks/configure-pod-container/security-context/) object.
+      By default, servers will run as root, with user ID `0` and group ID `0`.
+      Note: if running on OpenShift, this setting is ignored because the user and group are set automatically
+      by the OpenShift platform. 
+ + - `updateStrategy` ((#v-connectinject-cni-updatestrategy)) (`string: null`) - updateStrategy for the CNI installer DaemonSet. + See https://kubernetes.io/docs/tasks/manage-daemon/update-daemon-set/#daemonset-update-strategy. + This should be a multi-line string mapping directly to the updateStrategy + + Example: + + ```yaml + updateStrategy: | + rollingUpdate: + maxUnavailable: 5 + type: RollingUpdate + ``` + - `metrics` ((#v-connectinject-metrics)) - Configures metrics for Consul Connect services. All values are overridable via annotations on a per-pod basis. From 9390d71cc527f6a1ecaa10f69c62d258271ee69d Mon Sep 17 00:00:00 2001 From: "Chris S. Kim" Date: Fri, 2 Sep 2022 11:57:28 -0400 Subject: [PATCH 48/55] Fix early return in prototest.AssertElementsMatch (#14467) --- proto/prototest/testing.go | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/proto/prototest/testing.go b/proto/prototest/testing.go index 275d8502b..bf25fb0a1 100644 --- a/proto/prototest/testing.go +++ b/proto/prototest/testing.go @@ -57,7 +57,7 @@ func AssertElementsMatch[V any]( } } - if len(outX) == len(outY) && len(outX) == len(listX) { + if len(outX) == len(outY) && len(listX) == len(listY) { return // matches } From 098cd512b20e8a37049a811b578a4c76e79ad646 Mon Sep 17 00:00:00 2001 From: DanStough Date: Wed, 31 Aug 2022 17:15:32 -0400 Subject: [PATCH 49/55] fix(api): OSS<->ENT exported service incompatibility --- api/agent_test.go | 16 +++++----- api/catalog_test.go | 18 +++++------ api/config_entry_discoverychain_test.go | 40 ++++++++++++------------- api/config_entry_exports.go | 2 +- api/config_entry_exports_test.go | 6 ++-- api/config_entry_test.go | 4 +-- api/coordinate_test.go | 2 +- api/health_test.go | 4 +-- api/{oss.go => oss_test.go} | 4 +-- api/txn_test.go | 30 +++++++++---------- 10 files changed, 63 insertions(+), 63 deletions(-) rename api/{oss.go => oss_test.go} (79%) diff --git a/api/agent_test.go b/api/agent_test.go index d67aba7b8..0c1660b1e 100644 --- a/api/agent_test.go +++ b/api/agent_test.go @@ -363,7 +363,7 @@ func TestAPI_AgentServicesWithFilterOpts(t *testing.T) { } require.NoError(t, agent.ServiceRegister(reg)) - opts := &QueryOptions{Namespace: splitDefaultNamespace} + opts := &QueryOptions{Namespace: defaultNamespace} services, err := agent.ServicesWithFilterOpts("foo in Tags", opts) require.NoError(t, err) require.Len(t, services, 1) @@ -791,8 +791,8 @@ func TestAPI_AgentService(t *testing.T) { Warning: 1, }, Meta: map[string]string{}, - Namespace: splitDefaultNamespace, - Partition: splitDefaultPartition, + Namespace: defaultNamespace, + Partition: defaultPartition, Datacenter: "dc1", } require.Equal(t, expect, got) @@ -932,7 +932,7 @@ func TestAPI_AgentUpdateTTLOpts(t *testing.T) { } } - opts := &QueryOptions{Namespace: splitDefaultNamespace} + opts := &QueryOptions{Namespace: defaultNamespace} if err := agent.UpdateTTLOpts("service:foo", "foo", HealthWarning, opts); err != nil { t.Fatalf("err: %v", err) @@ -1007,7 +1007,7 @@ func TestAPI_AgentChecksWithFilterOpts(t *testing.T) { reg.TTL = "15s" require.NoError(t, agent.CheckRegister(reg)) - opts := &QueryOptions{Namespace: splitDefaultNamespace} + opts := &QueryOptions{Namespace: defaultNamespace} checks, err := agent.ChecksWithFilterOpts("Name == foo", opts) require.NoError(t, err) require.Len(t, checks, 1) @@ -1382,7 +1382,7 @@ func TestAPI_ServiceMaintenanceOpts(t *testing.T) { } // Specify namespace in query option - opts := &QueryOptions{Namespace: splitDefaultNamespace} + opts := &QueryOptions{Namespace: 
defaultNamespace} // Enable maintenance mode if err := agent.EnableServiceMaintenanceOpts("redis", "broken", opts); err != nil { @@ -1701,7 +1701,7 @@ func TestAPI_AgentHealthServiceOpts(t *testing.T) { requireServiceHealthID := func(t *testing.T, serviceID, expected string, shouldExist bool) { msg := fmt.Sprintf("service id:%s, shouldExist:%v, expectedStatus:%s : bad %%s", serviceID, shouldExist, expected) - opts := &QueryOptions{Namespace: splitDefaultNamespace} + opts := &QueryOptions{Namespace: defaultNamespace} state, out, err := agent.AgentHealthServiceByIDOpts(serviceID, opts) require.Nil(t, err, msg, "err") require.Equal(t, expected, state, msg, "state") @@ -1715,7 +1715,7 @@ func TestAPI_AgentHealthServiceOpts(t *testing.T) { requireServiceHealthName := func(t *testing.T, serviceName, expected string, shouldExist bool) { msg := fmt.Sprintf("service name:%s, shouldExist:%v, expectedStatus:%s : bad %%s", serviceName, shouldExist, expected) - opts := &QueryOptions{Namespace: splitDefaultNamespace} + opts := &QueryOptions{Namespace: defaultNamespace} state, outs, err := agent.AgentHealthServiceByNameOpts(serviceName, opts) require.Nil(t, err, msg, "err") require.Equal(t, expected, state, msg, "state") diff --git a/api/catalog_test.go b/api/catalog_test.go index 2c926d199..b0071e87f 100644 --- a/api/catalog_test.go +++ b/api/catalog_test.go @@ -51,7 +51,7 @@ func TestAPI_CatalogNodes(t *testing.T) { want := &Node{ ID: s.Config.NodeID, Node: s.Config.NodeName, - Partition: splitDefaultPartition, + Partition: defaultPartition, Address: "127.0.0.1", Datacenter: "dc1", TaggedAddresses: map[string]string{ @@ -1144,8 +1144,8 @@ func TestAPI_CatalogGatewayServices_Terminating(t *testing.T) { expect := []*GatewayService{ { - Service: CompoundServiceName{Name: "api", Namespace: splitDefaultNamespace, Partition: splitDefaultPartition}, - Gateway: CompoundServiceName{Name: "terminating", Namespace: splitDefaultNamespace, Partition: splitDefaultPartition}, + Service: CompoundServiceName{Name: "api", Namespace: defaultNamespace, Partition: defaultPartition}, + Gateway: CompoundServiceName{Name: "terminating", Namespace: defaultNamespace, Partition: defaultPartition}, GatewayKind: ServiceKindTerminatingGateway, CAFile: "api/ca.crt", CertFile: "api/client.crt", @@ -1153,8 +1153,8 @@ func TestAPI_CatalogGatewayServices_Terminating(t *testing.T) { SNI: "my-domain", }, { - Service: CompoundServiceName{Name: "redis", Namespace: splitDefaultNamespace, Partition: splitDefaultPartition}, - Gateway: CompoundServiceName{Name: "terminating", Namespace: splitDefaultNamespace, Partition: splitDefaultPartition}, + Service: CompoundServiceName{Name: "redis", Namespace: defaultNamespace, Partition: defaultPartition}, + Gateway: CompoundServiceName{Name: "terminating", Namespace: defaultNamespace, Partition: defaultPartition}, GatewayKind: ServiceKindTerminatingGateway, CAFile: "ca.crt", CertFile: "client.crt", @@ -1212,15 +1212,15 @@ func TestAPI_CatalogGatewayServices_Ingress(t *testing.T) { expect := []*GatewayService{ { - Service: CompoundServiceName{Name: "api", Namespace: splitDefaultNamespace, Partition: splitDefaultPartition}, - Gateway: CompoundServiceName{Name: "ingress", Namespace: splitDefaultNamespace, Partition: splitDefaultPartition}, + Service: CompoundServiceName{Name: "api", Namespace: defaultNamespace, Partition: defaultPartition}, + Gateway: CompoundServiceName{Name: "ingress", Namespace: defaultNamespace, Partition: defaultPartition}, GatewayKind: ServiceKindIngressGateway, Protocol: "tcp", 
Port: 8888, }, { - Service: CompoundServiceName{Name: "redis", Namespace: splitDefaultNamespace, Partition: splitDefaultPartition}, - Gateway: CompoundServiceName{Name: "ingress", Namespace: splitDefaultNamespace, Partition: splitDefaultPartition}, + Service: CompoundServiceName{Name: "redis", Namespace: defaultNamespace, Partition: defaultPartition}, + Gateway: CompoundServiceName{Name: "ingress", Namespace: defaultNamespace, Partition: defaultPartition}, GatewayKind: ServiceKindIngressGateway, Protocol: "tcp", Port: 9999, diff --git a/api/config_entry_discoverychain_test.go b/api/config_entry_discoverychain_test.go index c990fa0c6..8facb72e1 100644 --- a/api/config_entry_discoverychain_test.go +++ b/api/config_entry_discoverychain_test.go @@ -139,8 +139,8 @@ func TestAPI_ConfigEntry_DiscoveryChain(t *testing.T) { entry: &ServiceResolverConfigEntry{ Kind: ServiceResolver, Name: "test-failover", - Partition: splitDefaultPartition, - Namespace: splitDefaultNamespace, + Partition: defaultPartition, + Namespace: defaultNamespace, DefaultSubset: "v1", Subsets: map[string]ServiceResolverSubset{ "v1": { @@ -159,7 +159,7 @@ func TestAPI_ConfigEntry_DiscoveryChain(t *testing.T) { }, "v1": { Service: "alternate", - Namespace: splitDefaultNamespace, + Namespace: defaultNamespace, }, "v3": { Targets: []ServiceResolverFailoverTarget{ @@ -182,12 +182,12 @@ func TestAPI_ConfigEntry_DiscoveryChain(t *testing.T) { entry: &ServiceResolverConfigEntry{ Kind: ServiceResolver, Name: "test-redirect", - Partition: splitDefaultPartition, - Namespace: splitDefaultNamespace, + Partition: defaultPartition, + Namespace: defaultNamespace, Redirect: &ServiceResolverRedirect{ Service: "test-failover", ServiceSubset: "v2", - Namespace: splitDefaultNamespace, + Namespace: defaultNamespace, Datacenter: "d", }, }, @@ -198,8 +198,8 @@ func TestAPI_ConfigEntry_DiscoveryChain(t *testing.T) { entry: &ServiceResolverConfigEntry{ Kind: ServiceResolver, Name: "test-redirect", - Partition: splitDefaultPartition, - Namespace: splitDefaultNamespace, + Partition: defaultPartition, + Namespace: defaultNamespace, Redirect: &ServiceResolverRedirect{ Service: "test-failover", Peer: "cluster-01", @@ -212,14 +212,14 @@ func TestAPI_ConfigEntry_DiscoveryChain(t *testing.T) { entry: &ServiceSplitterConfigEntry{ Kind: ServiceSplitter, Name: "test-split", - Partition: splitDefaultPartition, - Namespace: splitDefaultNamespace, + Partition: defaultPartition, + Namespace: defaultNamespace, Splits: []ServiceSplit{ { Weight: 90, Service: "test-failover", ServiceSubset: "v1", - Namespace: splitDefaultNamespace, + Namespace: defaultNamespace, RequestHeaders: &HTTPHeaderModifiers{ Set: map[string]string{ "x-foo": "bar", @@ -232,7 +232,7 @@ func TestAPI_ConfigEntry_DiscoveryChain(t *testing.T) { { Weight: 10, Service: "test-redirect", - Namespace: splitDefaultNamespace, + Namespace: defaultNamespace, }, }, Meta: map[string]string{ @@ -247,8 +247,8 @@ func TestAPI_ConfigEntry_DiscoveryChain(t *testing.T) { entry: &ServiceRouterConfigEntry{ Kind: ServiceRouter, Name: "test-route", - Partition: splitDefaultPartition, - Namespace: splitDefaultNamespace, + Partition: defaultPartition, + Namespace: defaultNamespace, Routes: []ServiceRoute{ { Match: &ServiceRouteMatch{ @@ -265,8 +265,8 @@ func TestAPI_ConfigEntry_DiscoveryChain(t *testing.T) { Destination: &ServiceRouteDestination{ Service: "test-failover", ServiceSubset: "v2", - Namespace: splitDefaultNamespace, - Partition: splitDefaultPartition, + Namespace: defaultNamespace, + Partition: defaultPartition, 
PrefixRewrite: "/", RequestTimeout: 5 * time.Second, NumRetries: 5, @@ -358,8 +358,8 @@ func TestAPI_ConfigEntry_ServiceResolver_LoadBalancer(t *testing.T) { entry: &ServiceResolverConfigEntry{ Kind: ServiceResolver, Name: "test-least-req", - Partition: splitDefaultPartition, - Namespace: splitDefaultNamespace, + Partition: defaultPartition, + Namespace: defaultNamespace, LoadBalancer: &LoadBalancer{ Policy: "least_request", LeastRequestConfig: &LeastRequestConfig{ChoiceCount: 10}, @@ -372,8 +372,8 @@ func TestAPI_ConfigEntry_ServiceResolver_LoadBalancer(t *testing.T) { entry: &ServiceResolverConfigEntry{ Kind: ServiceResolver, Name: "test-ring-hash", - Namespace: splitDefaultNamespace, - Partition: splitDefaultPartition, + Namespace: defaultNamespace, + Partition: defaultPartition, LoadBalancer: &LoadBalancer{ Policy: "ring_hash", RingHashConfig: &RingHashConfig{ diff --git a/api/config_entry_exports.go b/api/config_entry_exports.go index e162b5fa6..0827e5816 100644 --- a/api/config_entry_exports.go +++ b/api/config_entry_exports.go @@ -57,7 +57,7 @@ type ServiceConsumer struct { func (e *ExportedServicesConfigEntry) GetKind() string { return ExportedServices } func (e *ExportedServicesConfigEntry) GetName() string { return e.Name } func (e *ExportedServicesConfigEntry) GetPartition() string { return e.Name } -func (e *ExportedServicesConfigEntry) GetNamespace() string { return splitDefaultNamespace } +func (e *ExportedServicesConfigEntry) GetNamespace() string { return "" } func (e *ExportedServicesConfigEntry) GetMeta() map[string]string { return e.Meta } func (e *ExportedServicesConfigEntry) GetCreateIndex() uint64 { return e.CreateIndex } func (e *ExportedServicesConfigEntry) GetModifyIndex() uint64 { return e.ModifyIndex } diff --git a/api/config_entry_exports_test.go b/api/config_entry_exports_test.go index 8d56be0ab..4a6f3c7a2 100644 --- a/api/config_entry_exports_test.go +++ b/api/config_entry_exports_test.go @@ -17,7 +17,7 @@ func TestAPI_ConfigEntries_ExportedServices(t *testing.T) { testutil.RunStep(t, "set and get", func(t *testing.T) { exports := &ExportedServicesConfigEntry{ Name: PartitionDefaultName, - Partition: splitDefaultPartition, + Partition: defaultPartition, Meta: map[string]string{ "gir": "zim", }, @@ -48,7 +48,7 @@ func TestAPI_ConfigEntries_ExportedServices(t *testing.T) { Services: []ExportedService{ { Name: "db", - Namespace: splitDefaultNamespace, + Namespace: defaultNamespace, Consumers: []ServiceConsumer{ { PeerName: "alpha", @@ -60,7 +60,7 @@ func TestAPI_ConfigEntries_ExportedServices(t *testing.T) { "foo": "bar", "gir": "zim", }, - Partition: splitDefaultPartition, + Partition: defaultPartition, } _, wm, err := entries.Set(updated, nil) diff --git a/api/config_entry_test.go b/api/config_entry_test.go index 63aba11b8..a897cdeb0 100644 --- a/api/config_entry_test.go +++ b/api/config_entry_test.go @@ -215,8 +215,8 @@ func TestAPI_ConfigEntries(t *testing.T) { "foo": "bar", "gir": "zim", }, - Partition: splitDefaultPartition, - Namespace: splitDefaultNamespace, + Partition: defaultPartition, + Namespace: defaultNamespace, } ce := c.ConfigEntries() diff --git a/api/coordinate_test.go b/api/coordinate_test.go index 984167e17..071b1f99e 100644 --- a/api/coordinate_test.go +++ b/api/coordinate_test.go @@ -87,7 +87,7 @@ func TestAPI_CoordinateUpdate(t *testing.T) { newCoord.Height = 0.5 entry := &CoordinateEntry{ Node: node, - Partition: splitDefaultPartition, + Partition: defaultPartition, Coord: newCoord, } _, err = coord.Update(entry, nil) diff --git 
a/api/health_test.go b/api/health_test.go index 7fc7a3f12..b69e9275f 100644 --- a/api/health_test.go +++ b/api/health_test.go @@ -223,8 +223,8 @@ func TestAPI_HealthChecks(t *testing.T) { ServiceName: "foo", ServiceTags: []string{"bar"}, Type: "ttl", - Partition: splitDefaultPartition, - Namespace: splitDefaultNamespace, + Partition: defaultPartition, + Namespace: defaultNamespace, }, } diff --git a/api/oss.go b/api/oss_test.go similarity index 79% rename from api/oss.go rename to api/oss_test.go index 93d639e69..e4e266a38 100644 --- a/api/oss.go +++ b/api/oss_test.go @@ -6,5 +6,5 @@ package api // The following defaults return "default" in enterprise and "" in OSS. // This constant is useful when a default value is needed for an // operation that will reject non-empty values in OSS. -const splitDefaultNamespace = "" -const splitDefaultPartition = "" +const defaultNamespace = "" +const defaultPartition = "" diff --git a/api/txn_test.go b/api/txn_test.go index bf69b7bc8..81348a8c2 100644 --- a/api/txn_test.go +++ b/api/txn_test.go @@ -187,7 +187,7 @@ func TestAPI_ClientTxn(t *testing.T) { CreateIndex: ret.Results[0].KV.CreateIndex, ModifyIndex: ret.Results[0].KV.ModifyIndex, Namespace: ret.Results[0].KV.Namespace, - Partition: splitDefaultPartition, + Partition: defaultPartition, }, }, &TxnResult{ @@ -199,14 +199,14 @@ func TestAPI_ClientTxn(t *testing.T) { CreateIndex: ret.Results[1].KV.CreateIndex, ModifyIndex: ret.Results[1].KV.ModifyIndex, Namespace: ret.Results[0].KV.Namespace, - Partition: splitDefaultPartition, + Partition: defaultPartition, }, }, &TxnResult{ Node: &Node{ ID: nodeID, Node: "foo", - Partition: splitDefaultPartition, + Partition: defaultPartition, Address: "2.2.2.2", Datacenter: "dc1", CreateIndex: ret.Results[2].Node.CreateIndex, @@ -218,8 +218,8 @@ func TestAPI_ClientTxn(t *testing.T) { ID: "foo1", CreateIndex: ret.Results[3].Service.CreateIndex, ModifyIndex: ret.Results[3].Service.CreateIndex, - Partition: splitDefaultPartition, - Namespace: splitDefaultNamespace, + Partition: defaultPartition, + Namespace: defaultNamespace, }, }, &TxnResult{ @@ -237,8 +237,8 @@ func TestAPI_ClientTxn(t *testing.T) { DeregisterCriticalServiceAfterDuration: 20 * time.Second, }, Type: "tcp", - Partition: splitDefaultPartition, - Namespace: splitDefaultNamespace, + Partition: defaultPartition, + Namespace: defaultNamespace, CreateIndex: ret.Results[4].Check.CreateIndex, ModifyIndex: ret.Results[4].Check.CreateIndex, }, @@ -258,8 +258,8 @@ func TestAPI_ClientTxn(t *testing.T) { DeregisterCriticalServiceAfterDuration: 160 * time.Second, }, Type: "tcp", - Partition: splitDefaultPartition, - Namespace: splitDefaultNamespace, + Partition: defaultPartition, + Namespace: defaultNamespace, CreateIndex: ret.Results[4].Check.CreateIndex, ModifyIndex: ret.Results[4].Check.CreateIndex, }, @@ -279,8 +279,8 @@ func TestAPI_ClientTxn(t *testing.T) { DeregisterCriticalServiceAfterDuration: 20 * time.Second, }, Type: "udp", - Partition: splitDefaultPartition, - Namespace: splitDefaultNamespace, + Partition: defaultPartition, + Namespace: defaultNamespace, CreateIndex: ret.Results[4].Check.CreateIndex, ModifyIndex: ret.Results[4].Check.CreateIndex, }, @@ -300,8 +300,8 @@ func TestAPI_ClientTxn(t *testing.T) { DeregisterCriticalServiceAfterDuration: 20 * time.Second, }, Type: "udp", - Partition: splitDefaultPartition, - Namespace: splitDefaultNamespace, + Partition: defaultPartition, + Namespace: defaultNamespace, CreateIndex: ret.Results[4].Check.CreateIndex, ModifyIndex: 
ret.Results[4].Check.CreateIndex, }, @@ -342,14 +342,14 @@ func TestAPI_ClientTxn(t *testing.T) { CreateIndex: ret.Results[0].KV.CreateIndex, ModifyIndex: ret.Results[0].KV.ModifyIndex, Namespace: ret.Results[0].KV.Namespace, - Partition: splitDefaultPartition, + Partition: defaultPartition, }, }, &TxnResult{ Node: &Node{ ID: s.Config.NodeID, Node: s.Config.NodeName, - Partition: splitDefaultPartition, + Partition: defaultPartition, Address: "127.0.0.1", Datacenter: "dc1", TaggedAddresses: map[string]string{ From 8de0aefea4a6a06b0e162bd5e78b4d777711bc76 Mon Sep 17 00:00:00 2001 From: alex <8968914+acpana@users.noreply.github.com> Date: Fri, 2 Sep 2022 09:56:40 -0700 Subject: [PATCH 50/55] lint net/rpc usage (#12816) Signed-off-by: acpana <8968914+acpana@users.noreply.github.com> Co-authored-by: R.B. Boyer --- .golangci.yml | 26 ++++++++++++++++++++++++++ 1 file changed, 26 insertions(+) diff --git a/.golangci.yml b/.golangci.yml index 5dd923583..d71c93d16 100644 --- a/.golangci.yml +++ b/.golangci.yml @@ -8,6 +8,8 @@ linters: - ineffassign - unparam - forbidigo + - gomodguard + - depguard issues: # Disable the default exclude list so that all excludes are explicitly @@ -75,6 +77,30 @@ linters-settings: # Exclude godoc examples from forbidigo checks. # Default: true exclude_godoc_examples: false + gomodguard: + blocked: + # List of blocked modules. + modules: + # Blocked module. + - github.com/hashicorp/net-rpc-msgpackrpc: + recommendations: + - github.com/hashicorp/consul-net-rpc/net-rpc-msgpackrpc + - github.com/hashicorp/go-msgpack: + recommendations: + - github.com/hashicorp/consul-net-rpc/go-msgpack + + depguard: + list-type: denylist + include-go-root: true + # A list of packages for the list type specified. + # Default: [] + packages: + - net/rpc + # A list of packages for the list type specified. + # Specify an error message to output when a denied package is used. 
+ # Default: [] + packages-with-error-message: + - net/rpc: 'only use forked copy in github.com/hashicorp/consul-net-rpc/net/rpc' run: timeout: 10m From 4641a78d2739954fdc7f7550f36077367283c0c6 Mon Sep 17 00:00:00 2001 From: cskh Date: Fri, 2 Sep 2022 14:28:05 -0400 Subject: [PATCH 51/55] fix(txn api): missing proxy config in registering proxy service (#14471) * fix(txn api): missing proxy config in registering proxy service --- agent/txn_endpoint.go | 37 ++++++++++ agent/txn_endpoint_test.go | 136 +++++++++++++++++++++++++++++++++---- api/catalog.go | 1 + api/health.go | 5 +- 4 files changed, 165 insertions(+), 14 deletions(-) diff --git a/agent/txn_endpoint.go b/agent/txn_endpoint.go index 4e898bfce..f528bbda4 100644 --- a/agent/txn_endpoint.go +++ b/agent/txn_endpoint.go @@ -185,6 +185,7 @@ func (s *HTTPHandlers) convertOps(resp http.ResponseWriter, req *http.Request) ( Address: node.Address, Datacenter: node.Datacenter, TaggedAddresses: node.TaggedAddresses, + PeerName: node.PeerName, Meta: node.Meta, RaftIndex: structs.RaftIndex{ ModifyIndex: node.ModifyIndex, @@ -207,6 +208,7 @@ func (s *HTTPHandlers) convertOps(resp http.ResponseWriter, req *http.Request) ( Service: structs.NodeService{ ID: svc.ID, Service: svc.Service, + Kind: structs.ServiceKind(svc.Kind), Tags: svc.Tags, Address: svc.Address, Meta: svc.Meta, @@ -226,6 +228,39 @@ func (s *HTTPHandlers) convertOps(resp http.ResponseWriter, req *http.Request) ( }, }, } + + if svc.Proxy != nil { + out.Service.Service.Proxy = structs.ConnectProxyConfig{} + t := &out.Service.Service.Proxy + if svc.Proxy.DestinationServiceName != "" { + t.DestinationServiceName = svc.Proxy.DestinationServiceName + } + if svc.Proxy.DestinationServiceID != "" { + t.DestinationServiceID = svc.Proxy.DestinationServiceID + } + if svc.Proxy.LocalServiceAddress != "" { + t.LocalServiceAddress = svc.Proxy.LocalServiceAddress + } + if svc.Proxy.LocalServicePort != 0 { + t.LocalServicePort = svc.Proxy.LocalServicePort + } + if svc.Proxy.LocalServiceSocketPath != "" { + t.LocalServiceSocketPath = svc.Proxy.LocalServiceSocketPath + } + if svc.Proxy.MeshGateway.Mode != "" { + t.MeshGateway.Mode = structs.MeshGatewayMode(svc.Proxy.MeshGateway.Mode) + } + + if svc.Proxy.TransparentProxy != nil { + if svc.Proxy.TransparentProxy.DialedDirectly { + t.TransparentProxy.DialedDirectly = svc.Proxy.TransparentProxy.DialedDirectly + } + + if svc.Proxy.TransparentProxy.OutboundListenerPort != 0 { + t.TransparentProxy.OutboundListenerPort = svc.Proxy.TransparentProxy.OutboundListenerPort + } + } + } opsRPC = append(opsRPC, out) case in.Check != nil: @@ -265,6 +300,8 @@ func (s *HTTPHandlers) convertOps(resp http.ResponseWriter, req *http.Request) ( ServiceID: check.ServiceID, ServiceName: check.ServiceName, ServiceTags: check.ServiceTags, + PeerName: check.PeerName, + ExposedPort: check.ExposedPort, Definition: structs.HealthCheckDefinition{ HTTP: check.Definition.HTTP, TLSServerName: check.Definition.TLSServerName, diff --git a/agent/txn_endpoint_test.go b/agent/txn_endpoint_test.go index 4b529d5de..f6f47b8fa 100644 --- a/agent/txn_endpoint_test.go +++ b/agent/txn_endpoint_test.go @@ -585,6 +585,7 @@ func TestTxnEndpoint_UpdateCheck(t *testing.T) { "Output": "success", "ServiceID": "", "ServiceName": "", + "ExposedPort": 5678, "Definition": { "IntervalDuration": "15s", "TimeoutDuration": "15s", @@ -600,12 +601,8 @@ func TestTxnEndpoint_UpdateCheck(t *testing.T) { req, _ := http.NewRequest("PUT", "/v1/txn", buf) resp := httptest.NewRecorder() obj, err := a.srv.Txn(resp, req) - 
if err != nil { - t.Fatalf("err: %v", err) - } - if resp.Code != 200 { - t.Fatalf("expected 200, got %d", resp.Code) - } + require.NoError(t, err) + require.Equal(t, 200, resp.Code, resp.Body) txnResp, ok := obj.(structs.TxnResponse) if !ok { @@ -662,12 +659,13 @@ func TestTxnEndpoint_UpdateCheck(t *testing.T) { }, &structs.TxnResult{ Check: &structs.HealthCheck{ - Node: a.config.NodeName, - CheckID: "nodecheck", - Name: "Node http check", - Status: api.HealthPassing, - Notes: "Http based health check", - Output: "success", + Node: a.config.NodeName, + CheckID: "nodecheck", + Name: "Node http check", + Status: api.HealthPassing, + Notes: "Http based health check", + Output: "success", + ExposedPort: 5678, Definition: structs.HealthCheckDefinition{ Interval: 15 * time.Second, Timeout: 15 * time.Second, @@ -686,3 +684,117 @@ func TestTxnEndpoint_UpdateCheck(t *testing.T) { } assert.Equal(t, expected, txnResp) } + +func TestTxnEndpoint_NodeService(t *testing.T) { + if testing.Short() { + t.Skip("too slow for testing.Short") + } + + t.Parallel() + a := NewTestAgent(t, "") + defer a.Shutdown() + testrpc.WaitForTestAgent(t, a.RPC, "dc1") + + // Make sure the fields of a check are handled correctly when both creating and + // updating, and test both sets of duration fields to ensure backwards compatibility. + buf := bytes.NewBuffer([]byte(fmt.Sprintf(` +[ + { + "Service": { + "Verb": "set", + "Node": "%s", + "Service": { + "Service": "test", + "Port": 4444 + } + } + }, + { + "Service": { + "Verb": "set", + "Node": "%s", + "Service": { + "Service": "test-sidecar-proxy", + "Port": 20000, + "Kind": "connect-proxy", + "Proxy": { + "DestinationServiceName": "test", + "DestinationServiceID": "test", + "LocalServiceAddress": "127.0.0.1", + "LocalServicePort": 4444, + "upstreams": [ + { + "DestinationName": "fake-backend", + "LocalBindPort": 25001 + } + ] + } + } + } + } +] +`, a.config.NodeName, a.config.NodeName))) + req, _ := http.NewRequest("PUT", "/v1/txn", buf) + resp := httptest.NewRecorder() + obj, err := a.srv.Txn(resp, req) + require.NoError(t, err) + require.Equal(t, 200, resp.Code) + + txnResp, ok := obj.(structs.TxnResponse) + if !ok { + t.Fatalf("bad type: %T", obj) + } + require.Equal(t, 2, len(txnResp.Results)) + + index := txnResp.Results[0].Service.ModifyIndex + expected := structs.TxnResponse{ + Results: structs.TxnResults{ + &structs.TxnResult{ + Service: &structs.NodeService{ + Service: "test", + ID: "test", + Port: 4444, + Weights: &structs.Weights{ + Passing: 1, + Warning: 1, + }, + RaftIndex: structs.RaftIndex{ + CreateIndex: index, + ModifyIndex: index, + }, + EnterpriseMeta: *structs.DefaultEnterpriseMetaInDefaultPartition(), + }, + }, + &structs.TxnResult{ + Service: &structs.NodeService{ + Service: "test-sidecar-proxy", + ID: "test-sidecar-proxy", + Port: 20000, + Kind: "connect-proxy", + Weights: &structs.Weights{ + Passing: 1, + Warning: 1, + }, + Proxy: structs.ConnectProxyConfig{ + DestinationServiceName: "test", + DestinationServiceID: "test", + LocalServiceAddress: "127.0.0.1", + LocalServicePort: 4444, + }, + TaggedAddresses: map[string]structs.ServiceAddress{ + "consul-virtual": { + Address: "240.0.0.1", + Port: 20000, + }, + }, + RaftIndex: structs.RaftIndex{ + CreateIndex: index, + ModifyIndex: index, + }, + EnterpriseMeta: *structs.DefaultEnterpriseMetaInDefaultPartition(), + }, + }, + }, + } + assert.Equal(t, expected, txnResp) +} diff --git a/api/catalog.go b/api/catalog.go index 80ae325ea..84a2bdbc6 100644 --- a/api/catalog.go +++ b/api/catalog.go @@ -20,6 +20,7 
@@ type Node struct { CreateIndex uint64 ModifyIndex uint64 Partition string `json:",omitempty"` + PeerName string `json:",omitempty"` } type ServiceAddress struct { diff --git a/api/health.go b/api/health.go index 2bcb3cb52..0886bb12a 100644 --- a/api/health.go +++ b/api/health.go @@ -45,6 +45,8 @@ type HealthCheck struct { Type string Namespace string `json:",omitempty"` Partition string `json:",omitempty"` + ExposedPort int + PeerName string `json:",omitempty"` Definition HealthCheckDefinition @@ -176,8 +178,7 @@ type HealthChecks []*HealthCheck // attached, this function determines the best representative of the status as // as single string using the following heuristic: // -// maintenance > critical > warning > passing -// +// maintenance > critical > warning > passing func (c HealthChecks) AggregatedStatus() string { var passing, warning, critical, maintenance bool for _, check := range c { From 07c5d4247f7047c9987344fa893e6d2b051437d8 Mon Sep 17 00:00:00 2001 From: David Yu Date: Fri, 2 Sep 2022 15:34:15 -0700 Subject: [PATCH 52/55] docs: Update single dc multiple k8s clusters doc (#14476) Co-authored-by: Jona Apelbaum --- .../deployment-configurations/single-dc-multi-k8s.mdx | 9 ++++----- 1 file changed, 4 insertions(+), 5 deletions(-) diff --git a/website/content/docs/k8s/deployment-configurations/single-dc-multi-k8s.mdx b/website/content/docs/k8s/deployment-configurations/single-dc-multi-k8s.mdx index d27c23fed..b854ebc3e 100644 --- a/website/content/docs/k8s/deployment-configurations/single-dc-multi-k8s.mdx +++ b/website/content/docs/k8s/deployment-configurations/single-dc-multi-k8s.mdx @@ -6,6 +6,8 @@ description: Single Consul Datacenter deployed in multiple Kubernetes clusters # Single Consul Datacenter in Multiple Kubernetes Clusters +~> **Note:** For running Consul across multiple Kubernetes, it is generally recommended to utilize [Admin Partitions](/docs/enterprise/admin-partitions) for production environments. This Consul Enterprise feature allows for the ability to accommodate for multiple tenants without concerns of resource collisions when administering a cluster at scale, and for the ability to run Consul on Kubernetes clusters across a non-flat network. + This page describes deploying a single Consul datacenter in multiple Kubernetes clusters, with servers and clients running in one cluster and only clients in the rest of the clusters. This example uses two Kubernetes clusters, but this approach could be extended to using more than two. @@ -19,16 +21,13 @@ to pods or nodes in another. In many hosted Kubernetes environments, this may ha * [Azure AKS CNI](https://docs.microsoft.com/en-us/azure/aks/concepts-network#azure-cni-advanced-networking) * [AWS EKS CNI](https://docs.aws.amazon.com/eks/latest/userguide/pod-networking.html) * [GKE VPC-native clusters](https://cloud.google.com/kubernetes-engine/docs/concepts/alias-ips). - -If a flat network is unavailable across all Kubernetes clusters, follow the instructions for using [Admin Partitions](/docs/enterprise/admin-partitions), which is a Consul Enterprise feature. - +* Either the Helm release name for each Kubernetes cluster must be unique, or `global.name` for each Kubernetes cluster must be unique to prevent collisions of ACL resources with the same prefix. ## Prepare Helm release name ahead of installs The Helm release name must be unique for each Kubernetes cluster. The Helm chart uses the Helm release name as a prefix for the -ACL resources that it creates, such as tokens and auth methods. 
If the names of the Helm releases -are identical, subsequent Consul on Kubernetes clusters overwrite existing ACL resources and cause the clusters to fail. +ACL resources that it creates, such as tokens and auth methods. If the names of the Helm releases are identical, or if `global.name` for each cluster is identical, subsequent Consul on Kubernetes clusters will overwrite existing ACL resources and cause the clusters to fail. Before proceeding with installation, prepare the Helm release names as environment variables for both the server and client install. From 9d555e538ed81b598d75a9a4ddcb85c8f441d900 Mon Sep 17 00:00:00 2001 From: John Cowen Date: Mon, 5 Sep 2022 19:17:33 +0100 Subject: [PATCH 53/55] ui: Additionally use message for displaying errors in DataWriter (#14074) --- ui/packages/consul-ui/app/components/data-writer/index.hbs | 2 ++ 1 file changed, 2 insertions(+) diff --git a/ui/packages/consul-ui/app/components/data-writer/index.hbs b/ui/packages/consul-ui/app/components/data-writer/index.hbs index 5f0ecdfcd..36f418459 100644 --- a/ui/packages/consul-ui/app/components/data-writer/index.hbs +++ b/ui/packages/consul-ui/app/components/data-writer/index.hbs @@ -108,6 +108,8 @@ as |after|}} There was an error saving your {{or label type}}. {{#if (and api.error.status api.error.detail)}}
    {{api.error.status}}: {{api.error.detail}} + {{else if api.error.message}} +
    {{api.error.message}} {{/if}}
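The `[PATCH 51/55]` change above maps the `Proxy` block of a transaction's service operation onto `structs.ConnectProxyConfig`, so a connect-proxy registered through `/v1/txn` no longer loses its proxy configuration. The Go sketch below illustrates that same kind of registration driven through the `api` package instead of raw JSON; it is an illustration only, and the node name, service names, and ports are assumed values for the example rather than anything required by the patch.

```go
package main

import (
	"fmt"
	"log"

	"github.com/hashicorp/consul/api"
)

func main() {
	// Client against a local agent; the default config is used purely for
	// illustration.
	client, err := api.NewClient(api.DefaultConfig())
	if err != nil {
		log.Fatal(err)
	}

	ops := api.TxnOps{
		// Register the destination service. The node must already exist in
		// the catalog; in the endpoint test this is the agent's own node
		// name, "node1" here is an assumption for the sketch.
		{
			Service: &api.ServiceTxnOp{
				Verb: api.ServiceSet,
				Node: "node1",
				Service: api.AgentService{
					ID:      "test",
					Service: "test",
					Port:    4444,
				},
			},
		},
		// Register its sidecar proxy. Before the fix, the Proxy block was
		// dropped when the registration went through /v1/txn.
		{
			Service: &api.ServiceTxnOp{
				Verb: api.ServiceSet,
				Node: "node1",
				Service: api.AgentService{
					ID:      "test-sidecar-proxy",
					Service: "test-sidecar-proxy",
					Kind:    api.ServiceKindConnectProxy,
					Port:    20000,
					Proxy: &api.AgentServiceConnectProxyConfig{
						DestinationServiceName: "test",
						DestinationServiceID:   "test",
						LocalServiceAddress:    "127.0.0.1",
						LocalServicePort:       4444,
					},
				},
			},
		},
	}

	ok, resp, _, err := client.Txn().Txn(ops, nil)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("committed=%v results=%d errors=%d\n", ok, len(resp.Results), len(resp.Errors))
}
```

The second operation mirrors the `test-sidecar-proxy` entry added in `TestTxnEndpoint_NodeService`, which asserts that `DestinationServiceName`, `DestinationServiceID`, `LocalServiceAddress`, and `LocalServicePort` survive the round trip through the transaction endpoint.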
    From 9780aba54a3361edccba04851d723b5ed50d0db7 Mon Sep 17 00:00:00 2001 From: John Cowen Date: Tue, 6 Sep 2022 11:13:51 +0100 Subject: [PATCH 54/55] ui: Add support for prefixing the API path (#14342) --- .../consul-ui/app/services/client/http.js | 3 ++- .../consul-ui/app/utils/get-environment.js | 7 ++++++ ui/packages/consul-ui/config/environment.js | 8 ++++++- .../mock-api/prefixed-api/v1/catalog/.config | 7 ++++++ .../prefixed-api/v1/catalog/datacenters | 13 +++++++++++ .../prefixed-api/v1/internal/acl/authorize | 14 ++++++++++++ .../prefixed-api/v1/internal/ui/services | 22 +++++++++++++++++++ .../node-tests/config/environment.js | 8 ++++++- .../tests/acceptance/api-prefix.feature | 7 ++++++ .../acceptance/steps/api-prefix-steps.js | 11 ++++++++++ ui/packages/consul-ui/tests/steps.js | 1 + .../consul-ui/tests/steps/doubles/http.js | 3 +++ 12 files changed, 101 insertions(+), 3 deletions(-) create mode 100644 ui/packages/consul-ui/mock-api/prefixed-api/v1/catalog/.config create mode 100644 ui/packages/consul-ui/mock-api/prefixed-api/v1/catalog/datacenters create mode 100644 ui/packages/consul-ui/mock-api/prefixed-api/v1/internal/acl/authorize create mode 100644 ui/packages/consul-ui/mock-api/prefixed-api/v1/internal/ui/services create mode 100644 ui/packages/consul-ui/tests/acceptance/api-prefix.feature create mode 100644 ui/packages/consul-ui/tests/acceptance/steps/api-prefix-steps.js diff --git a/ui/packages/consul-ui/app/services/client/http.js b/ui/packages/consul-ui/app/services/client/http.js index 9b7736501..708b64d2a 100644 --- a/ui/packages/consul-ui/app/services/client/http.js +++ b/ui/packages/consul-ui/app/services/client/http.js @@ -203,12 +203,13 @@ export default class HttpService extends Service { // also see adapters/kv content-types in requestForCreate/UpdateRecord // also see https://github.com/hashicorp/consul/issues/3804 params.headers[CONTENT_TYPE] = 'application/json; charset=utf-8'; + params.url = `${this.env.var('CONSUL_API_PREFIX')}${params.url}`; return params; } fetchWithToken(path, params) { return this.settings.findBySlug('token').then(token => { - return fetch(`${path}`, { + return fetch(`${this.env.var('CONSUL_API_PREFIX')}${path}`, { ...params, credentials: 'include', headers: { diff --git a/ui/packages/consul-ui/app/utils/get-environment.js b/ui/packages/consul-ui/app/utils/get-environment.js index fd3757fbf..9fabe8b7a 100644 --- a/ui/packages/consul-ui/app/utils/get-environment.js +++ b/ui/packages/consul-ui/app/utils/get-environment.js @@ -132,6 +132,12 @@ export default function(config = {}, win = window, doc = document) { return operatorConfig.LocalDatacenter; case 'CONSUL_DATACENTER_PRIMARY': return operatorConfig.PrimaryDatacenter; + case 'CONSUL_API_PREFIX': + // we want API prefix to look like an env var for if we ever change + // operator config to be an API request, we need this variable before we + // make and API request so this specific variable should never be be + // retrived via an API request + return operatorConfig.APIPrefix; case 'CONSUL_UI_CONFIG': dashboards = { service: undefined, @@ -246,6 +252,7 @@ export default function(config = {}, win = window, doc = document) { case 'CONSUL_UI_CONFIG': case 'CONSUL_DATACENTER_LOCAL': case 'CONSUL_DATACENTER_PRIMARY': + case 'CONSUL_API_PREFIX': case 'CONSUL_ACLS_ENABLED': case 'CONSUL_NSPACES_ENABLED': case 'CONSUL_PEERINGS_ENABLED': diff --git a/ui/packages/consul-ui/config/environment.js b/ui/packages/consul-ui/config/environment.js index a85b0093d..4bd096799 100644 --- 
a/ui/packages/consul-ui/config/environment.js +++ b/ui/packages/consul-ui/config/environment.js @@ -86,6 +86,7 @@ module.exports = function(environment, $ = process.env) { PartitionsEnabled: false, LocalDatacenter: env('CONSUL_DATACENTER_LOCAL', 'dc1'), PrimaryDatacenter: env('CONSUL_DATACENTER_PRIMARY', 'dc1'), + APIPrefix: env('CONSUL_API_PREFIX', '') }, // Static variables used in multiple places throughout the UI @@ -111,6 +112,7 @@ module.exports = function(environment, $ = process.env) { PartitionsEnabled: env('CONSUL_PARTITIONS_ENABLED', false), LocalDatacenter: env('CONSUL_DATACENTER_LOCAL', 'dc1'), PrimaryDatacenter: env('CONSUL_DATACENTER_PRIMARY', 'dc1'), + APIPrefix: env('CONSUL_API_PREFIX', '') }, '@hashicorp/ember-cli-api-double': { @@ -118,6 +120,7 @@ module.exports = function(environment, $ = process.env) { enabled: true, endpoints: { '/v1': '/mock-api/v1', + '/prefixed-api': '/mock-api/prefixed-api', }, }, APP: Object.assign({}, ENV.APP, { @@ -162,6 +165,7 @@ module.exports = function(environment, $ = process.env) { PartitionsEnabled: env('CONSUL_PARTITIONS_ENABLED', true), LocalDatacenter: env('CONSUL_DATACENTER_LOCAL', 'dc1'), PrimaryDatacenter: env('CONSUL_DATACENTER_PRIMARY', 'dc1'), + APIPrefix: env('CONSUL_API_PREFIX', '') }, '@hashicorp/ember-cli-api-double': { @@ -176,7 +180,9 @@ module.exports = function(environment, $ = process.env) { ENV = Object.assign({}, ENV, { // in production operatorConfig is populated at consul runtime from // operator configuration - operatorConfig: {}, + operatorConfig: { + APIPrefix: '' + }, }); break; } diff --git a/ui/packages/consul-ui/mock-api/prefixed-api/v1/catalog/.config b/ui/packages/consul-ui/mock-api/prefixed-api/v1/catalog/.config new file mode 100644 index 000000000..93326539b --- /dev/null +++ b/ui/packages/consul-ui/mock-api/prefixed-api/v1/catalog/.config @@ -0,0 +1,7 @@ +--- +"*": + GET: + "*": + headers: + response: + X-Consul-Default-Acl-Policy: ${env('CONSUL_ACL_POLICY', fake.helpers.randomize(['allow', 'deny']))} diff --git a/ui/packages/consul-ui/mock-api/prefixed-api/v1/catalog/datacenters b/ui/packages/consul-ui/mock-api/prefixed-api/v1/catalog/datacenters new file mode 100644 index 000000000..5cfa3ee6b --- /dev/null +++ b/ui/packages/consul-ui/mock-api/prefixed-api/v1/catalog/datacenters @@ -0,0 +1,13 @@ +[ + ${ + range(env('CONSUL_DATACENTER_COUNT', 10)).map((item, i) => { + if(i === 0) { + return `"${env('CONSUL_DATACENTER_LOCAL', 'dc1')}"`; + } + return ` + "${fake.address.countryCode().toLowerCase()}_${ i % 2 ? 
"west" : "east"}-${i}" +`; + } + ) + } +] diff --git a/ui/packages/consul-ui/mock-api/prefixed-api/v1/internal/acl/authorize b/ui/packages/consul-ui/mock-api/prefixed-api/v1/internal/acl/authorize new file mode 100644 index 000000000..11d84ed6e --- /dev/null +++ b/ui/packages/consul-ui/mock-api/prefixed-api/v1/internal/acl/authorize @@ -0,0 +1,14 @@ +[ +${ + http.body.map(item => { + return JSON.stringify( + Object.assign( + item, + { + Allow: !!JSON.parse(env(`CONSUL_RESOURCE_${item.Resource.toUpperCase()}_${item.Access.toUpperCase()}`, 'true')) + } + ) + ); + }) +} +] diff --git a/ui/packages/consul-ui/mock-api/prefixed-api/v1/internal/ui/services b/ui/packages/consul-ui/mock-api/prefixed-api/v1/internal/ui/services new file mode 100644 index 000000000..ccf51a864 --- /dev/null +++ b/ui/packages/consul-ui/mock-api/prefixed-api/v1/internal/ui/services @@ -0,0 +1,22 @@ +[ + { + "Name": "consul", + "Datacenter": "dc1", + "Tags": null, + "Nodes": [ + "node" + ], + "ExternalSources": null, + "InstanceCount": 1, + "ChecksPassing": 1, + "ChecksWarning": 0, + "ChecksCritical": 0, + "GatewayConfig": {}, + "TransparentProxy": false, + "ConnectNative": false, + "Partition": "default", + "Namespace": "default", + "ConnectedWithProxy": false, + "ConnectedWithGateway": false + } +] diff --git a/ui/packages/consul-ui/node-tests/config/environment.js b/ui/packages/consul-ui/node-tests/config/environment.js index fc64faf94..286fb741d 100644 --- a/ui/packages/consul-ui/node-tests/config/environment.js +++ b/ui/packages/consul-ui/node-tests/config/environment.js @@ -11,7 +11,9 @@ test( { environment: 'production', CONSUL_BINARY_TYPE: 'oss', - operatorConfig: {} + operatorConfig: { + APIPrefix: '', + } }, { environment: 'test', @@ -24,6 +26,7 @@ test( PeeringEnabled: true, LocalDatacenter: 'dc1', PrimaryDatacenter: 'dc1', + APIPrefix: '', } }, { @@ -40,6 +43,7 @@ test( PeeringEnabled: true, LocalDatacenter: 'dc1', PrimaryDatacenter: 'dc1', + APIPrefix: '', } }, { @@ -56,6 +60,7 @@ test( PeeringEnabled: true, LocalDatacenter: 'dc1', PrimaryDatacenter: 'dc1', + APIPrefix: '', } }, { @@ -69,6 +74,7 @@ test( PeeringEnabled: true, LocalDatacenter: 'dc1', PrimaryDatacenter: 'dc1', + APIPrefix: '', } } ].forEach( diff --git a/ui/packages/consul-ui/tests/acceptance/api-prefix.feature b/ui/packages/consul-ui/tests/acceptance/api-prefix.feature new file mode 100644 index 000000000..bc4a9d87e --- /dev/null +++ b/ui/packages/consul-ui/tests/acceptance/api-prefix.feature @@ -0,0 +1,7 @@ +@setupApplicationTest +Feature: api-prefix + Scenario: + Given 1 datacenter model with the value "dc1" + And an API prefix of "/prefixed-api" + When I visit the index page + Then a GET request was made to "/prefixed-api/v1/catalog/datacenters" diff --git a/ui/packages/consul-ui/tests/acceptance/steps/api-prefix-steps.js b/ui/packages/consul-ui/tests/acceptance/steps/api-prefix-steps.js new file mode 100644 index 000000000..f40e97910 --- /dev/null +++ b/ui/packages/consul-ui/tests/acceptance/steps/api-prefix-steps.js @@ -0,0 +1,11 @@ +import steps from './steps'; + +// step definitions that are shared between features should be moved to the +// tests/acceptance/steps/steps.js file + +export default function(assert) { + return steps(assert) + .then('I should find a file', function() { + assert.ok(true, this.step); + }); +} diff --git a/ui/packages/consul-ui/tests/steps.js b/ui/packages/consul-ui/tests/steps.js index 43b1c0c55..973098dae 100644 --- a/ui/packages/consul-ui/tests/steps.js +++ b/ui/packages/consul-ui/tests/steps.js @@ -87,6 
+87,7 @@ export default function({ api.server.respondWith(url.split('?')[0], data); }; const setCookie = function(key, value) { + document.cookie = `${key}=${value}`; api.server.setCookie(key, value); }; diff --git a/ui/packages/consul-ui/tests/steps/doubles/http.js b/ui/packages/consul-ui/tests/steps/doubles/http.js index b92ec8ae7..3624cc1aa 100644 --- a/ui/packages/consul-ui/tests/steps/doubles/http.js +++ b/ui/packages/consul-ui/tests/steps/doubles/http.js @@ -17,5 +17,8 @@ export default function(scenario, respondWith, set, oidc) { }) .given('a network latency of $number', function(number) { set('CONSUL_LATENCY', number); + }) + .given('an API prefix of "$prefix"', function(prefix) { + set('CONSUL_API_PREFIX', prefix); }); } From d771725a142e42acf0edd1bbf2c13105044e8335 Mon Sep 17 00:00:00 2001 From: Derek Menteer Date: Fri, 2 Sep 2022 14:56:02 -0500 Subject: [PATCH 55/55] Add kv txn get-not-exists operation. --- .changelog/14474.txt | 3 +++ agent/consul/kvs_endpoint.go | 2 +- agent/consul/state/txn.go | 9 ++++++++- agent/consul/state/txn_test.go | 32 ++++++++++++++++++++++++++++++++ api/txn.go | 1 + website/content/api-docs/txn.mdx | 29 +++++++++++++++-------------- 6 files changed, 60 insertions(+), 16 deletions(-) create mode 100644 .changelog/14474.txt diff --git a/.changelog/14474.txt b/.changelog/14474.txt new file mode 100644 index 000000000..fcc326547 --- /dev/null +++ b/.changelog/14474.txt @@ -0,0 +1,3 @@ +```release-note:feature +http: Add new `get-or-empty` operation to the txn api. Refer to the [API docs](https://www.consul.io/api-docs/txn#kv-operations) for more information. +``` \ No newline at end of file diff --git a/agent/consul/kvs_endpoint.go b/agent/consul/kvs_endpoint.go index 434ebcada..3f2cbe1cc 100644 --- a/agent/consul/kvs_endpoint.go +++ b/agent/consul/kvs_endpoint.go @@ -49,7 +49,7 @@ func kvsPreApply(logger hclog.Logger, srv *Server, authz resolver.Result, op api return false, err } - case api.KVGet, api.KVGetTree: + case api.KVGet, api.KVGetTree, api.KVGetOrEmpty: // Filtering for GETs is done on the output side. case api.KVCheckSession, api.KVCheckIndex: diff --git a/agent/consul/state/txn.go b/agent/consul/state/txn.go index 087bb4fe8..af6e98995 100644 --- a/agent/consul/state/txn.go +++ b/agent/consul/state/txn.go @@ -60,6 +60,13 @@ func (s *Store) txnKVS(tx WriteTxn, idx uint64, op *structs.TxnKVOp) (structs.Tx err = fmt.Errorf("key %q doesn't exist", op.DirEnt.Key) } + case api.KVGetOrEmpty: + _, entry, err = kvsGetTxn(tx, nil, op.DirEnt.Key, op.DirEnt.EnterpriseMeta) + if entry == nil && err == nil { + entry = &op.DirEnt + entry.Value = nil + } + case api.KVGetTree: var entries structs.DirEntries _, entries, err = s.kvsListTxn(tx, nil, op.DirEnt.Key, op.DirEnt.EnterpriseMeta) @@ -95,7 +102,7 @@ func (s *Store) txnKVS(tx WriteTxn, idx uint64, op *structs.TxnKVOp) (structs.Tx // value (we have to clone so we don't modify the entry being used by // the state store). 
if entry != nil { - if op.Verb == api.KVGet { + if op.Verb == api.KVGet || op.Verb == api.KVGetOrEmpty { result := structs.TxnResult{KV: entry} return structs.TxnResults{&result}, nil } diff --git a/agent/consul/state/txn_test.go b/agent/consul/state/txn_test.go index f98325df3..a7694089b 100644 --- a/agent/consul/state/txn_test.go +++ b/agent/consul/state/txn_test.go @@ -577,6 +577,22 @@ func TestStateStore_Txn_KVS(t *testing.T) { }, }, }, + &structs.TxnOp{ + KV: &structs.TxnKVOp{ + Verb: api.KVGetOrEmpty, + DirEnt: structs.DirEntry{ + Key: "foo/update", + }, + }, + }, + &structs.TxnOp{ + KV: &structs.TxnKVOp{ + Verb: api.KVGetOrEmpty, + DirEnt: structs.DirEntry{ + Key: "foo/not-exists", + }, + }, + }, &structs.TxnOp{ KV: &structs.TxnKVOp{ Verb: api.KVCheckIndex, @@ -702,6 +718,22 @@ func TestStateStore_Txn_KVS(t *testing.T) { }, }, }, + &structs.TxnResult{ + KV: &structs.DirEntry{ + Key: "foo/update", + Value: []byte("stale"), + RaftIndex: structs.RaftIndex{ + CreateIndex: 5, + ModifyIndex: 5, + }, + }, + }, + &structs.TxnResult{ + KV: &structs.DirEntry{ + Key: "foo/not-exists", + Value: nil, + }, + }, &structs.TxnResult{ KV: &structs.DirEntry{ diff --git a/api/txn.go b/api/txn.go index 59fd1c0d9..4aa06d9f5 100644 --- a/api/txn.go +++ b/api/txn.go @@ -67,6 +67,7 @@ const ( KVLock KVOp = "lock" KVUnlock KVOp = "unlock" KVGet KVOp = "get" + KVGetOrEmpty KVOp = "get-or-empty" KVGetTree KVOp = "get-tree" KVCheckSession KVOp = "check-session" KVCheckIndex KVOp = "check-index" diff --git a/website/content/api-docs/txn.mdx b/website/content/api-docs/txn.mdx index 97eefeece..a42176cbb 100644 --- a/website/content/api-docs/txn.mdx +++ b/website/content/api-docs/txn.mdx @@ -266,20 +266,21 @@ look like this: The following tables summarize the available verbs and the fields that apply to those operations ("X" means a field is required and "O" means it is optional): -| Verb | Operation | Key | Value | Flags | Index | Session | -| ------------------ | --------------------------------------- | :-: | :---: | :---: | :---: | :-----: | -| `set` | Sets the `Key` to the given `Value` | `x` | `x` | `o` | | | -| `cas` | Sets, but with CAS semantics | `x` | `x` | `o` | `x` | | -| `lock` | Lock with the given `Session` | `x` | `x` | `o` | | `x` | -| `unlock` | Unlock with the given `Session` | `x` | `x` | `o` | | `x` | -| `get` | Get the key, fails if it does not exist | `x` | | | | | -| `get-tree` | Gets all keys with the prefix | `x` | | | | | -| `check-index` | Fail if modify index != index | `x` | | | `x` | | -| `check-session` | Fail if not locked by session | `x` | | | | `x` | -| `check-not-exists` | Fail if key exists | `x` | | | | | -| `delete` | Delete the key | `x` | | | | | -| `delete-tree` | Delete all keys with a prefix | `x` | | | | | -| `delete-cas` | Delete, but with CAS semantics | `x` | | | `x` | | +| Verb | Operation | Key | Value | Flags | Index | Session | +| ------------------ | ----------------------------------------- | :-: | :---: | :---: | :---: | :-----: | +| `set` | Sets the `Key` to the given `Value` | `x` | `x` | `o` | | | +| `cas` | Sets, but with CAS semantics | `x` | `x` | `o` | `x` | | +| `lock` | Lock with the given `Session` | `x` | `x` | `o` | | `x` | +| `unlock` | Unlock with the given `Session` | `x` | `x` | `o` | | `x` | +| `get` | Get the key, fails if it does not exist | `x` | | | | | +| `get-or-empty` | Get the key, or null if it does not exist | `x` | | | | | +| `get-tree` | Gets all keys with the prefix | `x` | | | | | +| `check-index` | Fail if modify index != index 
| `x` | | | `x` | | +| `check-session` | Fail if not locked by session | `x` | | | | `x` | +| `check-not-exists` | Fail if key exists | `x` | | | | | +| `delete` | Delete the key | `x` | | | | | +| `delete-tree` | Delete all keys with a prefix | `x` | | | | | +| `delete-cas` | Delete, but with CAS semantics | `x` | | | `x` | | #### Node Operations