Backport of PmTLS and tproxy improvements with failover and L7 traffic mgmt for k8s into release/1.16.x (#17645)

* backport of commit e4c2789cefde79333e10c3af7a3bbd6c594b20a6

* backport of commit c3a2d0b9696cdda90169b646404cf86f7f37f76e

* backport of commit 81f8f7c04ec70b9e513b2e40f8c2f29d105a7c4d

* backport of commit 63d12fbc04e89ad0d1cc6aa34f1a2d7d1c32ff3c

* backport of commit 73d7179c55de6780c27fa05bdcbf1ef1c84862f0

* backport of commit f8873368cb6289d1460337ee54604d2eae0489b8

---------

Co-authored-by: trujillo-adam <ajosetru@gmail.com>
Co-authored-by: trujillo-adam <47586768+trujillo-adam@users.noreply.github.com>
hc-github-team-consul-core 2023-06-12 10:06:15 -04:00 committed by GitHub
9 changed files with 791 additions and 192 deletions

---
layout: docs
page_title: Failover configuration overview
description: Learn about failover strategies and service mesh features you can implement to route traffic if services become unhealthy or unreachable, including sameness groups, prepared queries, and service resolvers.
---
# Failover overview
Services in your mesh may become unhealthy or unreachable for many reasons, but you can mitigate some of the effects associated with infrastructure issues by configuring Consul to automatically route traffic to and from failover service instances. This topic provides an overview of the failover strategies you can implement with Consul.
## Service failover strategies in Consul
There are several methods for implementing failover strategies between datacenters in Consul. You can adopt one of the following strategies based on your deployment configuration and network requirements:
- Configure the `Failover` stanza in a service resolver configuration entry to explicitly define which services should failover and the targeting logic they should follow.
- Make a prepared query for each service that you can use to automate geo-failover.
- Create a sameness group to identify partitions with identical namespaces and service names to establish default failover targets.
The following table compares these strategies in deployments with multiple datacenters to help you determine the best approach for your service:
| Failover Strategy | Supports WAN Federation | Supports Cluster Peering | Multi-Datacenter Failover Strength | Multi-Datacenter Usage Scenario |
| :---------------: | :---------------------: | :----------------------: | :--------------------------------- | :------------------------------ |
| `Failover` stanza | &#9989; | &#9989; | Enables more granular logic for failover targeting | Configuring failover for a single service or service subset, especially for testing or debugging purposes |
| Prepared query | &#9989; | &#9989; | Central policies that can automatically target the nearest datacenter | WAN-federated deployments where a primary datacenter is configured. Prepared queries are not replicated over peer connections. |
| Sameness groups | &#10060; | &#9989; | Group size changes without edits to existing member configurations | Cluster peering deployments with consistently named services and namespaces |
### Failover configurations for a service mesh with a single datacenter
You can implement a service resolver configuration entry and specify a pool of failover service instances that other services can exchange messages with when the primary service becomes unhealthy or unreachable. We recommend adopting this strategy as a minimum baseline when implementing Consul service mesh and layering additional failover strategies to build resilience into your application network.
Refer to the [`Failover` configuration](/consul/docs/connect/config-entries/service-resolver#failover) for examples of how to configure failover services in the service resolver configuration entry on both VMs and Kubernetes deployments.
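As a minimal sketch, a service resolver that fails over to instances in another datacenter when all local instances are unhealthy might look like the following; the service name `api` and datacenter `dc2` are illustrative:

```hcl
Kind = "service-resolver"
Name = "api"

# Fail over to healthy instances of "api" in dc2 when
# no healthy instances exist in the local datacenter.
Failover = {
  "*" = {
    Datacenters = ["dc2"]
  }
}
```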
### Failover configuration for WAN-federated datacenters
If your network has multiple Consul datacenters that are WAN-federated, you can configure your applications to look for failover services with prepared queries. [Prepared queries](/consul/api-docs/) are configurations that enable you to define complex service discovery lookups. This strategy hinges on the secondary datacenter containing service instances that have the same name and reside in the same namespace as their counterparts in the primary datacenter.
Refer to the [Automate geo-failover with prepared queries tutorial](/consul/tutorials/developer-discovery/automate-geo-failover) for additional information.
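For illustration, a prepared query that automatically fails over to the two nearest federated datacenters can be registered by sending a payload like the following sketch to the `/v1/query` HTTP endpoint; the query name and service name `api` are assumptions:

```json
{
  "Name": "api-failover",
  "Service": {
    "Service": "api",
    "Failover": {
      "NearestN": 2
    }
  }
}
```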
### Failover configuration for peered clusters and partitions
In networks with multiple datacenters or partitions that share a peer connection, each datacenter or partition functions as an independent unit. As a result, Consul does not correlate services that have the same name, even if they are in the same namespace.
You can configure sameness groups for this type of network. Sameness groups allow you to define a group of admin partitions where identical services are deployed in identical namespaces. After you configure the sameness group, you can reference the `SamenessGroup` parameter in service resolver, exported service, and service intention configuration entries, enabling you to add or remove cluster peers from the group without making changes to every cluster peer every time.
Refer to the [Sameness groups usage page](/consul/docs/connect/cluster-peering/usage/sameness-groups) for more information.
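As a sketch, a sameness group that establishes its members as default failover targets might look like the following; the group name and peer name are assumptions:

```hcl
Kind               = "sameness-group"
Name               = "products-api-group"
DefaultForFailover = true

# Members are listed in failover order.
Members = [
  { Partition = "default" },
  { Peer = "cluster-02" }
]
```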

---
layout: docs
page_title: Service mesh traffic management overview
description: >-
Consul can route, split, and resolve Layer 7 traffic in a service mesh to support workflows like canary testing and blue/green deployments. Learn about the three configuration entry kinds that define L7 traffic management behavior in Consul.
---
# Service mesh traffic management overview
This topic provides overview information about the application layer traffic management capabilities available in Consul service mesh. These capabilities are also referred to as *Layer 7* or *L7 traffic management*.
Consul service mesh allows you to divide application layer traffic between different subsets of service instances. You can leverage L7 traffic management capabilities to perform complex processes, such as configuring backup services for failover scenarios, canary and A/B testing, blue-green deployments, and soft multi-tenancy in which production, QA, and staging environments share compute resources. L7 traffic management with Consul service mesh allows you to designate groups of service instances in the Consul catalog smaller than all instances of a single service and configure when that subset should receive traffic.
You cannot manage L7 traffic with the [built-in proxy](/consul/docs/connect/proxies/built-in),
[native proxies](/consul/docs/connect/native), or some [Envoy proxy escape hatches](/consul/docs/connect/proxies/envoy#escape-hatch-overrides).
## Discovery chain
Consul uses a series of stages to discover service mesh proxy upstreams. Each stage represents different ways of managing L7 traffic. They are referred to as the _discovery chain_:
- routing
- splitting
- resolution
For information about integrating service mesh proxy upstream discovery using the discovery chain, refer to [Discovery Chain for Service Mesh Traffic Management](/consul/docs/connect/l7-traffic/discovery-chain).
The Consul UI shows discovery chain stages in the **Routing** tab of the **Services** page:
![screenshot of L7 traffic visualization in the UI](/img/l7-routing/full.png)
You can define how Consul manages each stage of the discovery chain in a Consul _configuration entry_. [Configuration entries](/consul/docs/connect/config-entries) modify the default behavior of the Consul service mesh.
When managing L7 traffic with cluster peering, there are additional configuration requirements to resolve peers in the discovery chain. Refer to [Cluster peering L7 traffic management](/consul/docs/connect/cluster-peering/usage/peering-traffic-management) for more information.
### Routing
The first stage of the discovery chain is the service router. Routers intercept traffic according to a set of L7 attributes, such as path prefixes and HTTP headers, and route the traffic to a different service or service subset.
Apply a [service router configuration entry](/consul/docs/connect/config-entries/service-router) to implement a router. Service router configuration entries can only reference service splitter or service resolver configuration entries.
![screenshot of service router in the UI](/img/l7-routing/Router.png)
### Splitting
The second stage of the discovery chain is the service splitter. Service splitters split incoming requests and route them to different services or service subsets. Splitters enable staged canary rollouts, versioned releases, and similar use cases.
Apply a [service splitter configuration entry](/consul/docs/connect/config-entries/service-splitter) to implement a splitter. Service splitter configuration entries can only reference other service splitter or service resolver configuration entries.
![screenshot of service splitter in the UI](/img/l7-routing/Splitter.png)
If multiple service splitters are chained, Consul flattens the splits so that they behave as a single service splitter. In the following example, `splitter[B]` references `splitter[A]`:
```text
splitter[A]: A_v1=50%, A_v2=50%
splitter[B]: A=50%, B=50%
---------------------
splitter[effective_B]: A_v1=25%, A_v2=25%, B=50%
```
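For example, a service splitter that shifts a small percentage of traffic to a newer subset during a canary rollout might look like this sketch; the service name and subset names are assumptions:

```hcl
Kind = "service-splitter"
Name = "web"

Splits = [
  {
    # 90% of traffic continues to the v1 subset.
    Weight        = 90
    ServiceSubset = "v1"
  },
  {
    # 10% of traffic is shifted to the v2 canary subset.
    Weight        = 10
    ServiceSubset = "v2"
  }
]
```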
### Resolution
The third stage of the discovery chain is the service resolver. Service resolvers specify which instances of a service satisfy discovery requests for the provided service name. Service resolvers enable several use cases, including:
- Designate failovers when service instances become unhealthy or unreachable.
- Configure service subsets based on DNS values.
- Route traffic to the latest version of a service.
- Route traffic to specific Consul datacenters.
- Create virtual services that route traffic to instances of the actual service in specific Consul datacenters.
Apply a [service resolver configuration entry](/consul/docs/connect/config-entries/service-resolver) to implement a resolver. Service resolver configuration entries can only reference other service resolvers.
![screenshot of service resolver in the UI](/img/l7-routing/Resolver.png)
If no resolver is configured for a service, Consul sends all traffic to healthy instances of the service that have the same name in the current datacenter or specified namespace and ends the discovery chain.
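As an illustrative sketch, a service resolver that defines subsets from `Service.Meta.version` values and routes unqualified traffic to the `v1` subset might look like the following; the service name and version values are assumptions:

```hcl
Kind          = "service-resolver"
Name          = "web"
DefaultSubset = "v1"

# Subsets are defined by filtering on service instance metadata.
Subsets = {
  v1 = {
    Filter = "Service.Meta.version == v1"
  }
  v2 = {
    Filter = "Service.Meta.version == v2"
  }
}
```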
Service resolver configuration entries can also process network layer, also called level 4 (L4), traffic. As a result, you can implement service resolvers for services that communicate over `tcp` and other non-HTTP protocols.

# How does Consul Service Mesh Work on Kubernetes?
Consul service mesh automates service-to-service authorization and encryption across your Consul services. You can use service mesh in Kubernetes-orchestrated networks to secure communication between pods as well as communication between pods and external Kubernetes services.
Where you encounter the _noun_ connect, it usually refers to functionality specific to service mesh.
## Workflow
Consul service mesh is enabled by default when you install Consul on Kubernetes using the Consul Helm chart. Consul also automatically injects sidecars into the pods in your clusters that run Envoy. These sidecar proxies, called Consul dataplanes, are enabled when `connectInject.default` is set to `true` in the Helm chart. Refer to the following documentation for additional information about these concepts:
- [Installation and Configuration](#installation-and-configuration) in this topic
- [Consul Helm chart reference](/consul/docs/k8s/helm)
- [Simplified Service Mesh with Consul Dataplane](/consul/docs/connect/dataplane)
If `connectInject.default` is set to `false` or you want to explicitly enable service mesh sidecar proxy injection for a specific deployment, add the `consul.hashicorp.com/connect-inject` annotation to the pod specification template and set it to `true` when connecting services to the mesh.
Installing Consul on Kubernetes with [`connect-inject` enabled](/consul/docs/k8s/connect#installation-and-configuration) adds a sidecar to all pods. By default, it enables service mesh functionality with Consul Dataplane by injecting an Envoy proxy. You can also configure Consul to inject a client agent sidecar to connect to your service mesh. Refer to [Simplified Service Mesh with Consul Dataplane](/consul/docs/connect/dataplane) for more information.
### Example
The following example shows a Kubernetes configuration that specifically enables service mesh connections for the `static-server` service. Consul starts and registers a sidecar proxy that listens on port 20000 by default and proxies valid inbound connections to port 8080.
```yaml
apiVersion: v1
spec:
  serviceAccountName: static-server
```
To establish a connection to the Pod using service mesh, a client must use another mesh proxy. The client mesh proxy will use Consul service discovery to find all available upstream proxies and their public ports.
### Service names
When the service is onboarded, the name registered in Consul is set to the name of the Kubernetes Service associated with the Pod. You can specify a custom name for the service in the [`consul.hashicorp.com/connect-service` annotation](/consul/docs/k8s/annotations-and-labels#consul-hashicorp-com-connect-service), but if ACLs are enabled, then the name of the service registered in Consul must match the Pod's `ServiceAccount` name.
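For example, a pod specification template that opts into sidecar injection and overrides the registered service name might include annotations like the following fragment; the `custom-service-name` value is illustrative:

```yaml
spec:
  template:
    metadata:
      annotations:
        # Opt this pod into service mesh sidecar injection.
        consul.hashicorp.com/connect-inject: 'true'
        # Override the name registered in Consul (must match the
        # ServiceAccount name when ACLs are enabled).
        consul.hashicorp.com/connect-service: 'custom-service-name'
```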
### Transparent proxy mode
By default, the Consul service mesh runs in transparent proxy mode. This mode forces inbound and outbound traffic through the sidecar proxy even though the service binds to all interfaces. Transparent proxy infers the location of upstream services using Consul service intentions, and also allows you to use Kubernetes DNS as you normally would for your workloads.
When transparent proxy mode is enabled, all service-to-service traffic is required to use mTLS. While onboarding new services to service mesh, your network may have mixed mTLS and non-mTLS traffic, which can result in broken service-to-service communication. You can temporarily enable permissive mTLS mode during the onboarding process so that existing mesh services can accept traffic from services that are not yet fully onboarded. Permissive mTLS enables sidecar proxies to access both mTLS and non-mTLS traffic. Refer to [Onboard mesh services in transparent proxy mode](/consul/docs/k8s/connect/onboarding-tproxy-mode) for additional information.
### Connecting to Mesh-Enabled Services

---
layout: docs
page_title: Onboard services in transparent proxy mode
description: Learn how to enable permissive mutual transport layer security (permissive mTLS) so that you can safely add services to your service mesh when transparent proxy is enabled in Kubernetes deployments.
---
# Onboard services while in transparent proxy mode
This topic describes how to run Consul in permissive mTLS mode so that you can safely onboard existing applications to Consul service mesh when transparent proxy mode is enabled.
## Background
When [transparent proxy mode](/consul/docs/k8s/transparent-proxy/) is enabled, all service-to-service traffic is secured by mTLS. Until the services that you want to add to the network are fully onboarded, your network may have a mix of mTLS and non-mTLS traffic, which can result in broken service-to-service communication. This situation occurs because sidecar proxies for existing mesh services reject traffic from services that are not yet onboarded.
You can enable the `permissive` mTLS mode to ensure existing non-mTLS service-to-service traffic is allowed during the onboarding phase. The `permissive` mTLS mode enables sidecar proxies to accept both mTLS and non-mTLS traffic to an application. Using this mode enables you to onboard without downtime and without being required to reconfigure or redeploy your application.
We recommend enabling permissive mTLS as a temporary operating mode. After onboarding is complete, you should reconfigure all services to `strict` mTLS mode to ensure all service-to-service communication is automatically secured by Consul service mesh.
!> **Security warning**: We recommend that you disable permissive mTLS mode after onboarding services to prevent non-mTLS connections to the service. Intentions are not enforced and encryption is not enabled for non-mTLS connections.
## Workflow
The workflow to configure mTLS settings depends on the applications you are onboarding and the order you intend to onboard them, but the following steps describe the general workflow:
1. **Configure global settings**: Configure the mesh to allow services to send non-mTLS messages to services outside the mesh. Additionally, configure the mesh to let services in the mesh use permissive mTLS mode.
1. **Enable permissive mTLS mode**: If you are onboarding an upstream service prior to its related downstream services, then enable permissive mTLS mode in the service defaults configuration entry. This allows the upstream service to send encrypted messages from the mesh when you register the service with Consul.
1. **Configure intentions**: Intentions are controls that authorize traffic between services in the mesh. Transparent proxy uses intentions to infer traffic routes between Envoy proxies. Consul does not enforce intentions for non-mTLS connections made while proxies are in permissive mTLS mode, but intentions are necessary for completing the onboarding process.
1. **Register the service**: Create the service definition and configure and deploy its sidecar proxy.
1. **Re-secure the mesh**: If you enabled permissive mTLS mode, switch back to strict mTLS mode and revert the global settings to disable non-mTLS traffic in the service mesh.
## Requirements
Permissive mTLS is only supported for services running in transparent proxy mode. Transparent proxy mode is only available on Kubernetes deployments.
## Configure global settings
Configure Consul to allow services that are already in the mesh to send non-mTLS messages to services outside the mesh. You can also configure Consul to allow services to run in permissive mTLS mode. Set both configurations in the mesh configuration entry, which is the global configuration entry that defines service mesh proxy behavior.
### Allow outgoing non-mTLS traffic
You can configure a global setting that allows services in the mesh to send non-mTLS messages to services outside the mesh.
Add the `MeshDestinationsOnly` property to the mesh configuration entry and set the property to `false`. If the services belong to multiple admin partitions, you must apply the setting in each partition:
<CodeTabs heading="Allow non-mTLS traffic">
```hcl
Kind = "mesh"
TransparentProxy {
MeshDestinationsOnly = false
}
```
```yaml
apiVersion: consul.hashicorp.com/v1alpha1
kind: Mesh
metadata:
  name: mesh
spec:
  transparentProxy:
    meshDestinationsOnly: false
```
```json
{
"Kind": "mesh",
"TransparentProxy": [
{
"MeshDestinationsOnly": false
}
]
}
```
</CodeTabs>
Alternatively, you can selectively allow outgoing traffic on a per-service basis by configuring [outbound port exclusions](/consul/docs/k8s/connect/transparent-proxy/enable-transparent-proxy#exclude-outbound-ports). This setting excludes outgoing traffic from traffic redirection imposed by transparent proxy. When changing this setting, you must redeploy your application.
### Allow permissive mTLS modes for incoming traffic
Set the `AllowEnablingPermissiveMutualTLS` parameter in the mesh configuration entry to `true` so that services in the mesh _are able_ to use permissive mTLS mode for incoming traffic. The parameter does not direct services to use permissive mTLS. It is a global parameter that allows services to run in permissive mTLS mode.
<CodeTabs heading="Allow permissive mTLS mode">
```hcl
Kind = "mesh"
AllowEnablingPermissiveMutualTLS = true
TransparentProxy {
MeshDestinationsOnly = false
}
```
```yaml
apiVersion: consul.hashicorp.com/v1alpha1
kind: Mesh
metadata:
  name: mesh
spec:
  allowEnablingPermissiveMutualTLS: true
  transparentProxy:
    meshDestinationsOnly: false
```
```json
{
"Kind": "mesh",
"AllowEnablingPermissiveMutualTLS": true,
"TransparentProxy": [
{
"MeshDestinationsOnly": false
}
]
}
```
</CodeTabs>
You can change this setting back to `false` at any time, even if there are services currently running in permissive mode. Doing so allows you to decide at which point during the onboarding process to stop allowing services to use permissive mTLS. When `MeshDestinationsOnly` is set to `false`, you must configure all new services added to the mesh with `MutualTLSMode=strict` for Consul to securely route traffic throughout the mesh.
## Enable permissive mTLS mode
Depending on the services you are onboarding, you may not need to enable permissive mTLS mode. If the service does not accept incoming traffic or accepts traffic from downstream services that are already part of the service mesh, then permissive mTLS mode is not required to continue.
To enable permissive mTLS mode for the service, set [`MutualTLSMode=permissive`](/consul/docs/connect/config-entries/service-defaults#mutualtlsmode) in the service defaults configuration entry for the service. The following example shows how to configure this setting for a service named `example-service`.
<CodeTabs heading="Enable permissive mTLS for applicable services">
```hcl
Kind = "service-defaults"
Name = "example-service"
MutualTLSMode = "permissive"
```
```yaml
apiVersion: consul.hashicorp.com/v1alpha1
kind: ServiceDefaults
metadata:
  name: example-service
spec:
  mutualTLSMode: "permissive"
```
```json
{
"Kind": "service-defaults",
"Name": "example-service",
"MutualTLSMode": "permissive"
}
```
</CodeTabs>
Refer to the [service defaults configuration reference](/consul/docs/connect/config-entries/service-defaults) for information about all settings.
You can change this setting back to `strict` at any time to ensure mTLS is required for incoming traffic to this service.
## Configure intentions
Service intentions are mechanisms in Consul that control traffic between services in the mesh.
We recommend creating intentions that restrict services to accepting only necessary traffic. You must identify the downstream services that send messages to the service you want to add to the mesh and then create an intention to allow traffic to the service from its downstreams.
When transparent proxy is enabled and the `MutualTLSMode` parameter is set to `permissive`, incoming traffic from a downstream service to another upstream service is not secured by mTLS unless that upstream relationship is known to Consul. You must either define an intention so that Consul can infer the upstream relationship from the intention, or you must include an explicit upstream as part of the service definition for the downstream.
Refer to [Service intentions](/consul/docs/connect/intentions) for additional information about how intentions work and how to create them.
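As a sketch, an intention that allows a known downstream to send traffic to the service being onboarded might look like the following; both service names are assumptions:

```hcl
Kind = "service-intentions"
Name = "example-service"

Sources = [
  {
    # Allow traffic from the downstream service to example-service.
    Name   = "example-downstream"
    Action = "allow"
  }
]
```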
## Add the service to the mesh
Register your service into the catalog and update your application to deploy a sidecar proxy. You should also monitor your service to verify its configuration. Refer to the [Consul on Kubernetes service mesh overview](/consul/docs/k8s/connect) for additional information.
## Re-secure mesh traffic
If the newly added service was placed in permissive mTLS mode for onboarding, then you should switch to strict mode when it is safe to do so. You should also revert the global settings that allow services to send and receive non-mTLS traffic.
### Disable permissive mTLS mode
Configure the service to operate in strict mTLS mode after the service is no longer receiving incoming non-mTLS traffic. After the downstream services that send messages to this service are all onboarded to the mesh, this service should no longer receive non-mTLS traffic.
Check the following Envoy listener statistics for the sidecar proxy to determine if the sidecar is receiving non-mTLS traffic:
- The `tcp.permissive_public_listener.*` statistics indicate non-mTLS traffic. If these metrics are static over a sufficient period of time, that indicates the sidecar is not receiving non-mTLS traffic.
- The `tcp.public_listener.*` statistics indicate mTLS traffic. If incoming traffic is expected to this service and these statistics are changing, then the sidecar is receiving mTLS traffic.
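You can turn this check into a quick script. The following sketch compares two snapshots of the `tcp.permissive_public_listener` connection counter. The snapshots are canned for illustration; the commented `kubectl`/`curl` invocation assumes the Envoy admin API is reachable on port `19000` in the `consul-dataplane` container, which may differ in your deployment:

```shell
# Sketch: decide whether a sidecar is still receiving non-mTLS traffic by
# comparing two snapshots of its tcp.permissive_public_listener counters.
# In a live cluster, each snapshot would come from the Envoy admin API:
#   kubectl exec "$POD" -c consul-dataplane -- \
#     curl -s 'localhost:19000/stats?filter=tcp.permissive_public_listener'
# Canned snapshots stand in for the live queries here.
before='tcp.permissive_public_listener.downstream_cx_total: 42'
after='tcp.permissive_public_listener.downstream_cx_total: 42'

# Extract the downstream connection counter from a stats snapshot.
count() { printf '%s\n' "$1" | awk -F': ' '/downstream_cx_total/ {print $2}'; }

if [ "$(count "$before")" -eq "$(count "$after")" ]; then
  echo "counter static: no new non-mTLS connections"
else
  echo "counter changed: sidecar is still receiving non-mTLS traffic"
fi
```

If the counter stays static across a sufficient observation window, the sidecar is not accepting new non-mTLS connections.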
Refer to the [service mesh observability overview](/consul/docs/connect/observability) and [metrics configuration for Consul on Kubernetes documentation](/consul/docs/k8s/connect/observability/metrics) for additional information.
If your service is still receiving non-mTLS traffic, complete the following steps to determine the source of the non-mTLS traffic:
1. Verify the list of downstream services. Optionally, you can enable [Envoy access logging](/consul/docs/connect/observability/access-logs) to determine source IP addresses for incoming traffic, and cross-reference those IP addresses with services in your network.
1. Verify that each downstream is onboarded to the service mesh. If a downstream is not onboarded, consider onboarding it next.
1. Verify that each downstream has an intention that allows it to send traffic to the upstream service.
After you determine it is safe to move the service to strict mode, set `MutualTLSMode=strict` in the service defaults configuration entry.
<CodeTabs heading="Enable strict mTLS mode for applicable services">
```hcl
Kind = "service-defaults"
Name = "example-service"
MutualTLSMode = "strict"
```
```yaml
apiVersion: consul.hashicorp.com/v1alpha1
kind: ServiceDefaults
metadata:
name: example-service
spec:
mutualTLSMode: "strict"
```
```json
{
"Kind": "service-defaults",
"MutualTLSMode": "strict",
"Name": "example-service"
}
```
</CodeTabs>
### Disable non-mTLS traffic
After all services are onboarded, revert the global settings that allow non-mTLS traffic and verify that permissive mTLS mode is not being used in the mesh.
Set `AllowEnablingPermissiveMutualTLS=false` and `MeshDestinationsOnly=true` in the mesh config entry.
<CodeTabs heading="Disable non-mTLS traffic">
```hcl
Kind = "mesh"
AllowEnablingPermissiveMutualTLS = false
TransparentProxy {
MeshDestinationsOnly = true
}
```
```yaml
apiVersion: consul.hashicorp.com/v1alpha1
kind: Mesh
metadata:
name: mesh
spec:
allowEnablingPermissiveMutualTLS: false
transparentProxy:
meshDestinationsOnly: true
```
```json
{
"Kind": "mesh",
"AllowEnablingPermissiveMutualTLS": false,
"TransparentProxy": {
"MeshDestinationsOnly": true
}
}
```
</CodeTabs>
For each namespace, admin partition, and datacenter in your Consul deployment, run the `consul config list` and `consul config read` commands to verify that no services are using `permissive` mTLS mode.
The following command returns any service defaults configuration entries in which `MutualTLSMode` is set to `permissive`:
```shell-session
$ consul config list -kind service-defaults -filter 'MutualTLSMode == "permissive"'
```
In each admin partition and datacenter, verify that `MutualTLSMode = "permissive"` is not set in the proxy defaults configuration entry. If `MutualTLSMode` is empty or the configuration entry is not found, the mode defaults to `strict`.
The following command fetches the proxy defaults configuration entry:
```shell-session
$ consul config read -kind proxy-defaults -name global
{
"Kind": "proxy-defaults",
"Name": "global",
"Partition": "default",
"Namespace": "default",
"TransparentProxy": {},
"MutualTLSMode": "",
"MeshGateway": {},
"Expose": {},
"AccessLogs": {},
"CreateIndex": 26,
"ModifyIndex": 30
}
```
@ -1,37 +1,13 @@
---
layout: docs
page_title: Service Mesh - Enable Transparent Proxy Mode
page_title: Enable transparent proxy mode
description: >-
Learn how transparent proxy enables Consul on Kubernetes to direct inbound and outbound traffic through the service mesh. Use transparent proxying to increase application security without configuring individual upstream services.
Learn how to enable transparent proxy mode, which enables Consul on Kubernetes to direct inbound and outbound traffic through the service mesh and increase application security without configuring individual upstream services.
---
# Enable Transparent Proxy Mode
# Enable transparent proxy mode
This topic describes how to use Consul's transparent proxy feature, which allows applications to communicate through the service mesh without modifying their configurations. Transparent proxy also hardens application security by preventing direct inbound connections that bypass the mesh.
## Introduction
When transparent proxy is enabled, Consul is able to perform the following actions automatically:
- Infer the location of upstream services using service intentions.
- Redirect outbound connections that point to KubeDNS through the proxy.
- Force traffic through the proxy to prevent unauthorized direct access to the application.
The following diagram shows how transparent proxy routes traffic:
![Diagram demonstrating that with transparent proxy, connections are automatically routed through the mesh](/img/consul-connect/with-transparent-proxy.png)
When transparent proxy is disabled, you must manually specify the following configurations so that your applications can communicate with other services in the mesh:
* Explicitly configure upstream services by specifying a local port to access them.
* Change the application to access `localhost:<chosen port>`.
* Configure applications to only listen on the loopback interface to prevent unauthorized traffic from bypassing the mesh.
The following diagram shows how traffic flows through the mesh without transparent proxy enabled:
![Diagram demonstrating that without transparent proxy, applications must "opt in" to connecting to their dependencies through the mesh](/img/consul-connect/without-transparent-proxy.png)
Transparent proxy is available for Kubernetes environments. As part of the integration with Kubernetes, Consul registers Kubernetes Services, injects sidecar proxies, and enables traffic redirection.
This topic describes how to use transparent proxy mode in your service mesh. Transparent proxy allows applications to communicate through the service mesh without modifying their configurations. Transparent proxy also hardens application security by preventing direct inbound connections that bypass the mesh. Refer to [Transparent proxy overview](/consul/docs/k8s/connect/transparent-proxy) for additional information.
## Requirements
@ -40,24 +16,20 @@ Your network must meet the following environment and software requirements to us
* Transparent proxy is available for Kubernetes environments.
* Consul 1.10.0+
* Consul Helm chart 0.32.0+. If you want to use the Consul CNI plugin to redirect traffic, Helm chart 0.48.0+ is required. Refer to [Enable the Consul CNI plugin](#enable-the-consul-cni-plugin) for additional information.
* [Service intentions](/consul/docs/connect/intentions) must be configured to allow communication between intended services.
* You must create [service intentions](/consul/docs/connect/intentions) that explicitly allow communication between intended services so that Consul can infer upstream connections and use sidecar proxies to route messages appropriately.
* The `ip_tables` kernel module must be running on all worker nodes within a Kubernetes cluster. If you are using the `modprobe` Linux utility, for example, issue the following command:
`$ modprobe ip_tables`
~> **Upgrading to a supported version**: Always follow the [proper upgrade path](/consul/docs/upgrading/upgrade-specific/#transparent-proxy-on-kubernetes) when upgrading to a supported version of Consul, Consul on Kubernetes (`consul-k8s`), and the Consul Helm chart.
## Configuration
## Enable transparent proxy
This section describes how to configure the transparent proxy.
Transparent proxy mode is enabled for the entire cluster by default when you install Consul on Kubernetes using the Consul Helm chart. Refer to the [Consul Helm chart reference](/consul/docs/k8s/helm) for information about all default configurations.
### Enable transparent proxy
You can explicitly enable transparent proxy for the entire cluster, individual namespaces, and individual services.
You can enable the transparent proxy for an entire cluster, individual Kubernetes namespaces, and individual services.
When you install Consul using the Helm chart, transparent proxy is enabled for the entire cluster by default.
#### Entire cluster
### Entire cluster
Use the `connectInject.transparentProxy.defaultEnabled` Helm value to enable or disable transparent proxy for the entire cluster:
@ -67,14 +39,14 @@ connectInject:
defaultEnabled: true
```
#### Kubernetes namespace
### Kubernetes namespace
Apply the `consul.hashicorp.com/transparent-proxy=true` label to enable transparent proxy for a Kubernetes namespace. The label overrides the `connectInject.transparentProxy.defaultEnabled` Helm value and defines the default behavior of Pods in the namespace. The following example enables transparent proxy for Pods in the `my-app` namespace:
```bash
kubectl label namespaces my-app "consul.hashicorp.com/transparent-proxy=true"
```
#### Individual service
### Individual service
Apply the `consul.hashicorp.com/transparent-proxy=true` annotation to enable transparent proxy on the Pod for each service. The annotation overrides the Helm value and the namespace label. The following example enables transparent proxy for the `static-server` service:
@ -126,7 +98,7 @@ spec:
serviceAccountName: static-server
```
### Enable the Consul CNI plugin
## Enable the Consul CNI plugin
By default, Consul generates a `connect-inject init` container as part of the Kubernetes Pod startup process. The container configures traffic redirection in the service mesh through the sidecar proxy. To configure redirection, the container requires elevated CAP_NET_ADMIN privileges, which may not be compatible with security policies in your organization.
@ -134,7 +106,7 @@ Instead, you can enable the Consul container network interface (CNI) plugin to p
The Consul Helm chart installs the CNI plugin, but it is disabled by default. Refer to the [instructions for enabling the CNI plugin](/consul/docs/k8s/installation/install#enable-the-consul-cni-plugin) in the Consul on Kubernetes installation documentation for additional information.
### Traffic redirection
## Traffic redirection
There are two mechanisms for redirecting traffic through the sidecar proxies. By default, Consul injects an init container that redirects all inbound and outbound traffic. The default mechanism requires elevated permissions (CAP_NET_ADMIN) in order to redirect traffic to the service mesh.
@ -142,7 +114,7 @@ Alternatively, you can enable the Consul CNI plugin to handle traffic redirectio
Both mechanisms redirect all inbound and outbound traffic, but you can configure exceptions for specific Pods or groups of Pods. The following annotations enable you to exclude certain traffic from being redirected to sidecar proxies.
#### Exclude inbound ports
### Exclude inbound ports
The [`consul.hashicorp.com/transparent-proxy-exclude-inbound-ports`](/consul/docs/k8s/annotations-and-labels#consul-hashicorp-com-transparent-proxy-exclude-inbound-ports) annotation defines a comma-separated list of inbound ports to exclude from traffic redirection when running in transparent proxy mode. The port numbers are string data values. In the following example, services in the pod at port `8200` and `8201` are not redirected through the transparent proxy:
@ -157,7 +129,7 @@ The [`consul.hashicorp.com/transparent-proxy-exclude-inbound-ports`](/consul/doc
```
</CodeBlockConfig>
#### Exclude outbound ports
### Exclude outbound ports
The [`consul.hashicorp.com/transparent-proxy-exclude-outbound-ports`](/consul/docs/k8s/annotations-and-labels#consul-hashicorp-com-transparent-proxy-exclude-outbound-ports) annotation defines a comma-separated list of outbound ports to exclude from traffic redirection when running in transparent proxy mode. The port numbers are string data values. In the following example, services in the pod at port `8200` and `8201` are not redirected through the transparent proxy:
@ -173,7 +145,7 @@ The [`consul.hashicorp.com/transparent-proxy-exclude-outbound-ports`](/consul/do
</CodeBlockConfig>
#### Exclude outbound CIDR blocks
### Exclude outbound CIDR blocks
The [`consul.hashicorp.com/transparent-proxy-exclude-outbound-cidrs`](/consul/docs/k8s/annotations-and-labels#consul-hashicorp-com-transparent-proxy-exclude-outbound-cidrs) annotation
defines a comma-separated list of outbound CIDR blocks to exclude from traffic redirection when running in transparent proxy mode. The CIDR blocks are string data values.
@ -190,7 +162,7 @@ In the following example, services in the `3.3.3.3/24` IP range are not redirect
```
</CodeBlockConfig>
#### Exclude user IDs
### Exclude user IDs
The [`consul.hashicorp.com/transparent-proxy-exclude-uids`](/consul/docs/k8s/annotations-and-labels#consul-hashicorp-com-transparent-proxy-exclude-uids) annotation
defines a comma-separated list of additional user IDs to exclude from traffic redirection when running in transparent proxy mode. The user IDs are string data values.
@ -208,20 +180,20 @@ In the following example, services with the IDs `4444 ` and `44444 ` are not red
</CodeBlockConfig>
### Kubernetes HTTP health probes configuration
## Kubernetes HTTP health probes configuration
By default, Consul on Kubernetes uses a mechanism for traffic redirection that interferes with [Kubernetes HTTP health
probes](https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/). Probes expect the kubelet to reach the application container directly on the probe's endpoint, but traffic is instead redirected through the sidecar proxy. As a result, health probes return errors because the kubelet does not encrypt that traffic using a mesh proxy.
There are two methods for solving this issue. The first method is to set the `connectInject.transparentProxy.defaultOverwriteProbes` Helm value to overwrite the Kubernetes HTTP health probes so that they point to the proxy. The second method is to [enable the Consul container network interface (CNI) plugin](#enable-the-consul-cni-plugin) to perform traffic redirection. Refer to the [Consul on Kubernetes installation instructions](/consul/docs/k8s/installation/install) for additional information.
#### Overwrite Kubernetes HTTP health probes
### Overwrite Kubernetes HTTP health probes
You can either add the `connectInject.transparentProxy.defaultOverwriteProbes` Helm value to your installation command or apply the `consul.hashicorp.com/transparent-proxy-overwrite-probes` Kubernetes annotation to your Pod configuration to overwrite health probes.
Refer to [Kubernetes Health Checks in Consul on Kubernetes](/consul/docs/k8s/connect/health) for additional information.
### Dial services across Kubernetes cluster
## Dial services across Kubernetes cluster
If your [Consul servers are federated between Kubernetes clusters](/consul/docs/k8s/deployment-configurations/multi-cluster/kubernetes),
then you must configure services in one Kubernetes cluster to explicitly dial a service in the datacenter of another Kubernetes cluster using the
@ -243,7 +215,7 @@ The following example configures the service to dial an upstream service called
You do not need to configure services to explicitly dial upstream services if your Consul clusters are connected with a [peering connection](/consul/docs/connect/cluster-peering).
## Usage
## Configure service selectors
When transparent proxy is enabled, traffic sent to [KubeDNS](https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/)
or Pod IP addresses is redirected through the proxy. You must use a selector to bind Kubernetes Services to Pods as you define Kubernetes Services in the mesh.
@ -275,13 +247,13 @@ spec:
Additional services can query the [KubeDNS](https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/) at `sample-app.default.svc.cluster.local` to reach `sample-app`. If ACLs are enabled and configured with default `deny` policies, the configuration also requires a [`ServiceIntention`](/consul/docs/connect/config-entries/service-intentions) to allow it to talk to `sample-app`.
### Headless Services
### Headless services
For services that are not addressed using a virtual cluster IP, you must configure the upstream service using the [DialedDirectly](/consul/docs/connect/config-entries/service-defaults#dialeddirectly) option. Then, use DNS to discover individual instance addresses and dial them through the transparent proxy. When this mode is enabled on the upstream, services present service mesh certificates for mTLS and intentions are enforced at the destination.
Note that when dialing individual instances, Consul ignores the HTTP routing rules configured with configuration entries. The transparent proxy acts as a TCP proxy to the original destination IP address.
## Known Limitations
## Known limitations
- Deployment configurations with federation across multiple datacenters, or with a single datacenter spanning multiple clusters, must explicitly dial a service in another datacenter or cluster using annotations.
- When dialing headless services, the request is proxied using a plain TCP proxy. Consul does not take into consideration the upstream's protocol.
- When dialing headless services, the request is proxied using a plain TCP proxy. Consul does not take into consideration the upstream's protocol.
@ -0,0 +1,47 @@
---
layout: docs
page_title: Transparent proxy overview
description: >-
Transparent proxy enables Consul on Kubernetes to direct inbound and outbound traffic through the service mesh. Learn how transparent proxying increases application security without configuring individual upstream services.
---
# Transparent proxy overview
This topic provides overview information about transparent proxy mode, which allows applications to communicate through the service mesh without modifying their configurations. Transparent proxy also hardens application security by preventing direct inbound connections that bypass the mesh.
## Introduction
When service mesh proxies are in transparent mode, Consul service mesh uses iptables to direct all inbound and outbound traffic to the sidecar. Consul also uses information configured in service intentions to infer routes, which eliminates the need to explicitly configure upstreams.
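As a rough illustration of that redirection, the following fragment shows the kind of NAT rules the init container or CNI plugin installs in each Pod's network namespace. The chain names, listener ports (`15001` outbound, `15006` inbound), and the `$PROXY_UID` exclusion are sidecar conventions assumed for illustration, not the exact rules Consul generates:

```shell
# Illustrative only -- do not apply as-is.
# Send outbound TCP to the sidecar's outbound listener, excluding traffic
# from the proxy's own user ID to avoid redirect loops.
iptables -t nat -N MESH_REDIRECT_OUT
iptables -t nat -A MESH_REDIRECT_OUT -p tcp -j REDIRECT --to-port 15001
iptables -t nat -A OUTPUT -p tcp -m owner ! --uid-owner "$PROXY_UID" -j MESH_REDIRECT_OUT

# Send inbound TCP to the sidecar's inbound listener.
iptables -t nat -N MESH_REDIRECT_IN
iptables -t nat -A MESH_REDIRECT_IN -p tcp -j REDIRECT --to-port 15006
iptables -t nat -A PREROUTING -p tcp -j MESH_REDIRECT_IN
```

The annotations described in the enablement documentation, such as excluded ports, UIDs, and CIDR blocks, carve exceptions out of rules like these.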
### Transparent proxy enabled
The following diagram shows how Consul routes traffic when proxies are in transparent mode:
![Diagram demonstrating that with transparent proxy, connections are automatically routed through the mesh](/img/consul-connect/with-transparent-proxy.png)
### Transparent proxy disabled
When transparent proxy mode is disabled, you must manually configure explicit upstreams, configure your applications to query for services at `localhost:<port>`, and configure applications to only listen on the loopback interface to prevent services from bypassing the mesh.
The following diagram shows how Consul routes traffic when transparent proxy mode is disabled:
![Diagram demonstrating that without transparent proxy, applications must "opt in" to connecting to their dependencies through the mesh](/img/consul-connect/without-transparent-proxy.png)
Transparent proxy is available for Kubernetes environments. As part of the integration with Kubernetes, Consul registers Kubernetes Services, injects sidecar proxies, and enables traffic redirection.
## Supported networking architectures
Transparent proxy mode enables several networking architectures and workflows. You can query Consul DNS to discover upstreams for single services, virtual services, and failover service instances that are in peered clusters.
Consul supports the following connection types for discovering upstreams when transparent proxy mode is enabled:
- KubeDNS lookups across WAN-federated datacenters
- Consul DNS lookups across WAN-federated datacenters
- KubeDNS lookups in peered clusters and admin partitions
- Consul DNS lookups in peered clusters and admin partitions
## Mutual TLS for transparent proxy mode
Transparent proxy mode is enabled by default when you install Consul on Kubernetes using the Consul Helm chart. As a result, all services in the mesh must communicate through sidecar proxies, which enforce service intentions and mTLS encryption for the service mesh. While onboarding new services to the service mesh, your network may have mixed mTLS and non-mTLS traffic, which can result in broken service-to-service communication.
You can temporarily enable permissive mTLS mode during the onboarding process so that existing mesh services can accept traffic from services that are not yet fully onboarded. Permissive mTLS enables sidecar proxies to access both mTLS and non-mTLS traffic. Refer to [Onboard mesh services in transparent proxy mode](/consul/docs/k8s/connect/onboarding-tproxy-mode) for additional information.
@ -0,0 +1,124 @@
---
layout: docs
page_title: Configure failover service instances
description: Learn how to define failover services in Consul on Kubernetes when proxies are in transparent proxy mode. Consul can send traffic to backup service instances if a destination service becomes unhealthy or unresponsive.
---
# Configure failover service instances
This topic describes how to configure failover service instances in Consul on Kubernetes when proxies are in transparent proxy mode. If a service becomes unhealthy or unresponsive, Consul can use the service resolver configuration entry to send inbound requests to backup services. Service resolvers are part of the service mesh proxy upstream discovery chain. Refer to [Service mesh traffic management](/consul/docs/connect/l7-traffic) for additional information.
## Overview
Complete the following steps to configure failover service instances in Consul on Kubernetes when proxies are in transparent proxy mode:
- Create a service resolver configuration entry.
- Create intentions that allow the downstream service to access the primary and failover service instances.
- Configure your application to call the discovery chain using the Consul DNS or KubeDNS.
## Requirements
- `consul-k8s` v1.2.0-beta1 or newer.
- Consul service mesh must be enabled. Refer to [How does Consul service mesh work on Kubernetes](/consul/docs/k8s/connect).
- Proxies must be configured to run in transparent proxy mode.
- To query virtual DNS names, you must use Consul DNS.
- To query the discovery chain using KubeDNS, the service resolver must be in the same partition as the running service.
### ACL requirements
The default ACLs that the Consul Helm chart configures are suitable for most cases, but if you have more complex security policies for Consul API access, refer to the [ACL documentation](/consul/docs/security/acl) for additional guidance.
## Create a service resolver configuration entry
Specify the failover target in the [`spec.failover.targets`](/consul/docs/connect/config-entries/service-resolver#failover-targets-service) field in the service resolver configuration entry. In the following example, the `api-beta` service is configured to fail over to the `api` service for all service subsets:
<CodeBlockConfig heading="api-beta-failover.yaml">
```yaml
apiVersion: consul.hashicorp.com/v1alpha1
kind: ServiceResolver
metadata:
name: api-beta
spec:
failover:
'*':
targets:
- service: api
```
</CodeBlockConfig>
Refer to the [service resolver configuration entry reference](/consul/docs/connect/config-entries/service-resolver) documentation for information about all service resolver configurations.
You can apply the configuration using the `kubectl apply` command:
```shell-session
$ kubectl apply -f api-beta-failover.yaml
```
## Create service intentions
If intentions are not already defined, create and apply intentions that allow the appropriate downstream to access the target service and the failover service. In the following examples, the `frontend` service is allowed to send messages to the `api` service, which is allowed to send messages to the `api-beta` failover service.
<CodeBlockConfig heading="frontend-api-api-beta-allow.yaml">
```yaml
apiVersion: consul.hashicorp.com/v1alpha1
kind: ServiceIntentions
metadata:
name: api
spec:
destination:
name: api
sources:
- name: frontend
action: allow
---
apiVersion: consul.hashicorp.com/v1alpha1
kind: ServiceIntentions
metadata:
name: api-beta
spec:
destination:
name: api-beta
sources:
- name: frontend
action: allow
```
</CodeBlockConfig>
Refer to the [service intentions configuration entry reference](/consul/docs/connect/config-entries/service-intentions) for additional information about configuring service intentions.
You can apply the configuration using the `kubectl apply` command:
```shell-session
$ kubectl apply -f frontend-api-api-beta-allow.yaml
```
## Configure your application to call the DNS
Configure your application to contact the discovery chain through either Consul DNS or KubeDNS.
### Consul DNS
You can query the Consul DNS using the `<service>.virtual.consul` lookup format. For Consul Enterprise, your query string may need to include the namespace, partition, or both. Refer to the [DNS documentation](/consul/docs/services/discovery/dns-static-lookups#service-virtual-ip-lookups) for details on building virtual service lookups.
In the following example, the application queries the Consul catalog for `api-beta` over HTTP. By default, the lookup would query the `default` partition and `default` namespace if Consul Enterprise manages the network infrastructure:
```text
http://api-beta.virtual.consul/
```
### KubeDNS
You can query the KubeDNS if the failover service is in the same Kubernetes cluster as the primary service. In the following example, the application queries KubeDNS for `api-beta` over HTTP:
```text
http://api-beta.<namespace>.svc.cluster.local
```
Note that you cannot use KubeDNS if a corresponding Kubernetes service and pod do not exist.
@ -0,0 +1,122 @@
---
layout: docs
page_title: Route traffic to a virtual service
description: Define virtual services in service resolver config entries so that Consul on Kubernetes can route traffic to virtual services when transparent proxy mode is enabled for Envoy proxies.
---
# Route traffic to a virtual service
This topic describes how to define virtual services so that Consul on Kubernetes can route traffic to virtual services when transparent proxy mode is enabled for Envoy proxies.
## Overview
You can define virtual services in service resolver configuration entries so that downstream applications can send requests to a virtual service using a Consul DNS name in peered clusters. Your applications can send requests to virtual services in the same cluster using KubeDNS. Service resolvers are part of the service mesh proxy upstream discovery chain. Refer to [Service mesh traffic management](/consul/docs/connect/l7-traffic) for additional information.
Complete the following steps to route traffic to a virtual service in Consul on Kubernetes when proxies are in transparent proxy mode:
1. Create a service resolver configuration entry.
1. Create intentions that allow the downstream service to access the real service and the virtual service.
1. Configure your application to call the discovery chain using the Consul DNS or KubeDNS.
## Requirements
- `consul-k8s` v1.2.0-beta1 or newer.
- Consul service mesh must be enabled. Refer to [How does Consul service mesh work on Kubernetes](/consul/docs/k8s/connect).
- Proxies must be configured to run in transparent proxy mode.
- To query virtual DNS names, you must use Consul DNS.
- To query the discovery chain using KubeDNS, the service resolver must be in the same partition as the running service.
### ACL requirements
The default ACLs that the Consul Helm chart configures are suitable for most cases, but if you have more complex security policies for Consul API access, refer to the [ACL documentation](/consul/docs/security/acl) for additional guidance.
## Create a service resolver configuration entry
Specify the redirect target in the [`spec.redirect.service`](/consul/docs/connect/config-entries/service-resolver#spec-redirect-service) field in the service resolver configuration entry. In the following example, the `virtual-api` service is configured to redirect to the `real-api` service:
<CodeBlockConfig heading="virtual-api-redirect.yaml">
```yaml
apiVersion: consul.hashicorp.com/v1alpha1
kind: ServiceResolver
metadata:
name: virtual-api
spec:
redirect:
service: real-api
```
</CodeBlockConfig>
Refer to the [service resolver configuration entry reference](/consul/docs/connect/config-entries/service-resolver) documentation for information about all service resolver configurations.
You can apply the configuration using the `kubectl apply` command:
```shell-session
$ kubectl apply -f virtual-api-redirect.yaml
```
## Create service intentions
If intentions are not already defined, create and apply intentions that allow the appropriate downstream to access the real service and the target redirect service. In the following examples, the `frontend` service is allowed to send messages to the `virtual-api` and `real-api` services:
<CodeBlockConfig heading="frontend-api-api-beta-allow.yaml">
```yaml
apiVersion: consul.hashicorp.com/v1alpha1
kind: ServiceIntentions
metadata:
name: virtual-api
spec:
destination:
name: virtual-api
sources:
- name: frontend
action: allow
---
apiVersion: consul.hashicorp.com/v1alpha1
kind: ServiceIntentions
metadata:
name: real-api
spec:
destination:
name: real-api
sources:
- name: frontend
action: allow
```
</CodeBlockConfig>
Refer to the [service intentions configuration entry reference](/consul/docs/connect/config-entries/service-intentions) for additional information about configuring service intentions.
You can apply the configuration using the `kubectl apply` command:
```shell-session
$ kubectl apply -f frontend-api-api-beta-allow.yaml
```
## Configure your application to call the DNS
Configure your application to contact the discovery chain through either Consul DNS or KubeDNS.
### Consul DNS
You can query the Consul DNS using the `<service>.virtual.consul` lookup format. For Consul Enterprise, your query string may need to include the namespace, partition, or both. Refer to the [DNS documentation](/consul/docs/services/discovery/dns-static-lookups#service-virtual-ip-lookups) for details on building virtual service lookups.
In the following example, the application queries the Consul catalog for `virtual-api` over HTTP. By default, the lookup would query the `default` partition and `default` namespace if Consul Enterprise manages the network infrastructure:
```text
http://virtual-api.virtual.consul/
```
### KubeDNS
You can query the KubeDNS if the real and virtual services are in the same Kubernetes cluster by specifying the name of the service. In the following example, the application queries KubeDNS for `virtual-api` over HTTP:
```text
http://virtual-api.<namespace>.svc.cluster.local
```
Note that you cannot use KubeDNS if a corresponding Kubernetes service and pod do not exist.
@ -559,10 +559,6 @@
}
]
},
{
"title": "Transparent Proxy",
"path": "connect/transparent-proxy"
},
{
"title": "Observability",
"routes": [
@ -729,6 +725,45 @@
}
]
},
{
"title": "Failover",
"routes": [
{
"title": "Overview",
"path": "connect/failover"
},
{
"title": "Usage",
"routes": [
{
"title": "Configure failover services for Kubernetes",
"href": "/docs/k8s/l7-traffic/failover-tproxy"
},
{
"title": "Automate geo-failover with prepared queries",
"href": "/tutorials/developer-discovery/automate-geo-failover"
}
]
},
{
"title": "Configuration",
"routes": [
{
"title": "Sameness groups",
"href": "/consul/docs/connect/config-entries/sameness-group"
},
{
"title": "Service resolver",
"href": "/consul/docs/connect/config-entries/service-resolver"
},
{
"title": "Service intentions",
"href": "/consul/docs/connect/config-entries/service-intentions"
}
]
}
]
},
{
"title": "Nomad",
"path": "connect/nomad"
@ -1222,7 +1257,24 @@
},
{
"title": "Transparent Proxy",
"href": "/docs/connect/transparent-proxy"
"routes": [
{
"title": "Overview",
"path": "k8s/connect/transparent-proxy"
},
{
"title": "Enable transparent proxy",
"path": "k8s/connect/transparent-proxy/enable-transparent-proxy"
},
{
"title": "Route traffic to virtual services",
"href": "/docs/k8s/l7-traffic/route-to-virtual-services"
}
]
},
{
"title": "Onboarding services in transparent proxy mode",
"path": "k8s/connect/onboarding-tproxy-mode"
},
{
"title": "Ingress Gateways",
@ -1255,6 +1307,23 @@
}
]
},
{
"title": "L7 traffic management",
"routes": [
{
"title": "Overview",
"href": "/docs/connect/l7-traffic"
},
{
"title": "Configure failover services",
"path": "k8s/l7-traffic/failover-tproxy"
},
{
"title": "Route traffic to virtual services",
"path": "k8s/l7-traffic/route-to-virtual-services"
}
]
},
{
"title": "Service Sync",
"path": "k8s/service-sync"