---
layout: docs
page_title: Cluster Peering on Kubernetes
description: >-
  If you use Consul on Kubernetes, learn how to enable cluster peering, create peering CRDs, and then manage peering connections in consul-k8s.
---
# Cluster Peering on Kubernetes
~> **Cluster peering is currently in beta:** Functionality associated with cluster peering is subject to change. You should never use the beta release in secure environments or production scenarios. Features in beta may have performance issues, scaling issues, and limited support.<br/><br/>Cluster peering is not currently available in the HCP Consul offering.
To establish a cluster peering connection on Kubernetes, you need to enable the feature in the Helm chart and create custom resource definitions (CRDs) for each side of the peering.

The following CRDs are used to create and manage a peering connection:
- `PeeringAcceptor`: Generates a peering token and accepts an incoming peering connection.
- `PeeringDialer`: Uses a peering token to make an outbound peering connection with the cluster that generated the token.

As of Consul v1.14, you can also [implement service failovers and redirects to control traffic](/consul/docs/connect/l7-traffic) between peers.

> To learn how to peer clusters and connect services across peers in AWS Elastic Kubernetes Service (EKS) environments, complete the [Consul Cluster Peering on Kubernetes tutorial](https://learn.hashicorp.com/tutorials/consul/cluster-peering-aws?utm_source=docs).
## Prerequisites
You must meet the following requirements to create and use cluster peering connections with Kubernetes:

- Consul v1.13.1 or later
- At least two Kubernetes clusters
- Consul on Kubernetes version 0.47.1 or later

### Prepare for installation

1. After you provision a Kubernetes cluster and set up your kubeconfig file to manage access to multiple Kubernetes clusters, use the `kubectl` command to retrieve the Kubernetes context names and then export them as environment variables for later use. For more information about kubeconfig files and contexts, refer to the [Kubernetes docs on configuring access to multiple clusters](https://kubernetes.io/docs/tasks/access-application-cluster/configure-access-multiple-clusters/).

   You can use the following methods to get the context names for your clusters:

   - Use the `kubectl config current-context` command to get the context for the cluster you are currently in.
   - Use the `kubectl config get-contexts` command to get all configured contexts in your kubeconfig file.

   ```shell-session
   $ export CLUSTER1_CONTEXT=<CONTEXT for first Kubernetes cluster>
   $ export CLUSTER2_CONTEXT=<CONTEXT for second Kubernetes cluster>
   ```
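
   For example, `kubectl config get-contexts` prints output similar to the following. The context names shown here are hypothetical; substitute the `NAME` values from your own kubeconfig when you export the variables:

   ```shell-session
   $ kubectl config get-contexts
   CURRENT   NAME         CLUSTER      AUTHINFO     NAMESPACE
   *         cluster-01   cluster-01   admin-01
             cluster-02   cluster-02   admin-02
   ```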

1. To establish cluster peering through Kubernetes, create a `values.yaml` file with the following Helm values.

   <CodeBlockConfig filename="values.yaml">

   ```yaml
   global:
     name: consul
     image: "hashicorp/consul:1.13.1"
     peering:
       enabled: true
   connectInject:
     enabled: true
   dns:
     enabled: true
     enableRedirection: true
   server:
     exposeService:
       enabled: true
   controller:
     enabled: true
   meshGateway:
     enabled: true
     replicas: 1
   ```

   </CodeBlockConfig>

   These Helm values configure the servers in each cluster so that they expose ports over a Kubernetes load balancer service. For additional configuration options, refer to [`server.exposeService`](/docs/k8s/helm#v-server-exposeservice).

   When generating a peering token from one of the clusters, Consul includes a load balancer address in the token so that the peering stream goes through the load balancer in front of the servers. For additional configuration options, refer to [`global.peering.tokenGeneration`](/docs/k8s/helm#v-global-peering-tokengeneration).

### Install Consul on Kubernetes

Install Consul on Kubernetes by using the Helm CLI to apply `values.yaml` to each cluster.

1. In `cluster-01`, run the following commands:

   ```shell-session
   $ export HELM_RELEASE_NAME=cluster-01
   ```

   ```shell-session
   $ helm install ${HELM_RELEASE_NAME} hashicorp/consul --create-namespace --namespace consul --version "0.47.1" --values values.yaml --kube-context $CLUSTER1_CONTEXT
   ```

1. In `cluster-02`, run the following commands:

   ```shell-session
   $ export HELM_RELEASE_NAME=cluster-02
   ```

   ```shell-session
   $ helm install ${HELM_RELEASE_NAME} hashicorp/consul --create-namespace --namespace consul --version "0.47.1" --values values.yaml --kube-context $CLUSTER2_CONTEXT
   ```
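
   Optionally, before you continue, you can confirm that the Consul server pods are running and that the expose service was created. This check is a sketch; the exact pod and service names depend on your release names:

   ```shell-session
   $ kubectl --context $CLUSTER2_CONTEXT get pods,services --namespace consul
   ```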

## Create a peering connection for Consul on Kubernetes

### Create a peering token

To peer Kubernetes clusters running Consul, you need to create a peering token and share it with the other cluster. Peers identify each other using the `metadata.name` values you establish when creating the `PeeringAcceptor` and `PeeringDialer` CRDs.

1. In `cluster-01`, create the `PeeringAcceptor` custom resource.

   <CodeBlockConfig filename="acceptor.yaml">

   ```yaml
   apiVersion: consul.hashicorp.com/v1alpha1
   kind: PeeringAcceptor
   metadata:
     name: cluster-02 ## The name of the peer you want to connect to
   spec:
     peer:
       secret:
         name: "peering-token"
         key: "data"
         backend: "kubernetes"
   ```

   </CodeBlockConfig>

1. Apply the `PeeringAcceptor` resource to the first cluster.

   ```shell-session
   $ kubectl --context $CLUSTER1_CONTEXT apply --filename acceptor.yaml
   ```

1. Save your peering token so that you can export it to the other cluster.

   ```shell-session
   $ kubectl --context $CLUSTER1_CONTEXT get secret peering-token --output yaml > peering-token.yaml
   ```
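
   You can also check the status of the `PeeringAcceptor` resource to confirm that a token was generated. This is an optional sketch; the exact status fields may differ by consul-k8s version:

   ```shell-session
   $ kubectl --context $CLUSTER1_CONTEXT get peeringacceptor cluster-02 --output yaml
   ```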

### Establish a peering connection between clusters

1. Apply the peering token to the second cluster.

   ```shell-session
   $ kubectl --context $CLUSTER2_CONTEXT apply --filename peering-token.yaml
   ```
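
   Optionally, verify that the secret arrived in the second cluster before you create the dialer:

   ```shell-session
   $ kubectl --context $CLUSTER2_CONTEXT get secret peering-token
   ```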

1. In `cluster-02`, create the `PeeringDialer` custom resource.

   <CodeBlockConfig filename="dialer.yaml">

   ```yaml
   apiVersion: consul.hashicorp.com/v1alpha1
   kind: PeeringDialer
   metadata:
     name: cluster-01 ## The name of the peer you want to connect to
   spec:
     peer:
       secret:
         name: "peering-token"
         key: "data"
         backend: "kubernetes"
   ```

   </CodeBlockConfig>

1. Apply the `PeeringDialer` resource to the second cluster.

   ```shell-session
   $ kubectl --context $CLUSTER2_CONTEXT apply --filename dialer.yaml
   ```
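
   After the dialer establishes the connection, you can list peerings from a Consul server pod to confirm that the peering is active. This command is a sketch; the server pod name below assumes the Helm release name `cluster-02` used earlier in this guide:

   ```shell-session
   $ kubectl --context $CLUSTER2_CONTEXT exec --namespace consul cluster-02-consul-server-0 -- consul peering list
   ```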

### Export services between clusters

~> **Tip**: The examples in this section demonstrate how to export a service named `backend`. You should change instances of `backend` in the example code to the name of the service you want to export.

1. For the service in `cluster-02` that you want to export, add the following [annotations](/docs/k8s/annotations-and-labels) to your service's pods.

   <CodeBlockConfig filename="backend.yaml">

   ```yaml
   # Service to expose backend
   apiVersion: v1
   kind: Service
   metadata:
     name: backend
   spec:
     selector:
       app: backend
     ports:
       - name: http
         protocol: TCP
         port: 80
         targetPort: 9090
   ---
   apiVersion: v1
   kind: ServiceAccount
   metadata:
     name: backend
   ---
   # Deployment for backend
   apiVersion: apps/v1
   kind: Deployment
   metadata:
     name: backend
     labels:
       app: backend
   spec:
     replicas: 1
     selector:
       matchLabels:
         app: backend
     template:
       metadata:
         labels:
           app: backend
         annotations:
           "consul.hashicorp.com/connect-inject": "true"
       spec:
         serviceAccountName: backend
         containers:
           - name: backend
             image: nicholasjackson/fake-service:v0.22.4
             ports:
               - containerPort: 9090
             env:
               - name: "LISTEN_ADDR"
                 value: "0.0.0.0:9090"
               - name: "NAME"
                 value: "backend"
               - name: "MESSAGE"
                 value: "Response from backend"
   ```

   </CodeBlockConfig>

1. In `cluster-02`, create an `ExportedServices` custom resource.

   <CodeBlockConfig filename="exportedsvc.yaml">

   ```yaml
   apiVersion: consul.hashicorp.com/v1alpha1
   kind: ExportedServices
   metadata:
     name: default ## The name of the partition containing the service
   spec:
     services:
       - name: backend ## The name of the service you want to export
         consumers:
           - peer: cluster-01 ## The name of the peer that receives the service
   ```

   </CodeBlockConfig>

1. Apply the service file and the `ExportedServices` resource to the second cluster.

   ```shell-session
   $ kubectl apply --context $CLUSTER2_CONTEXT --filename backend.yaml --filename exportedsvc.yaml
   ```
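
   Optionally, confirm that the `ExportedServices` resource was created. This check only verifies that the Kubernetes object exists, not that Consul has synced it:

   ```shell-session
   $ kubectl --context $CLUSTER2_CONTEXT get exportedservices default
   ```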

### Authorize services for peers

1. Create service intentions for the second cluster.

   <CodeBlockConfig filename="intention.yml">

   ```yaml
   apiVersion: consul.hashicorp.com/v1alpha1
   kind: ServiceIntentions
   metadata:
     name: backend-deny
   spec:
     destination:
       name: backend
     sources:
       - name: "*"
         action: deny
       - name: frontend
         action: allow
   ```

   </CodeBlockConfig>

1. Apply the intentions to the second cluster.

   ```shell-session
   $ kubectl --context $CLUSTER2_CONTEXT apply --filename intention.yml
   ```
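
   You can confirm that the intentions resource was created with a quick read of the object (optional; the columns printed for the CRD may vary by consul-k8s version):

   ```shell-session
   $ kubectl --context $CLUSTER2_CONTEXT get serviceintentions backend-deny
   ```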

1. For the services in `cluster-01` that you want to access `backend`, add the following annotations to the service file. To dial the upstream service from an application, ensure that the requests are sent to the correct DNS name as specified in [Service Virtual IP Lookups](/docs/discovery/dns#service-virtual-ip-lookups).

   <CodeBlockConfig filename="frontend.yaml">

   ```yaml
   # Service to expose frontend
   apiVersion: v1
   kind: Service
   metadata:
     name: frontend
   spec:
     selector:
       app: frontend
     ports:
       - name: http
         protocol: TCP
         port: 9090
         targetPort: 9090
   ---
   apiVersion: v1
   kind: ServiceAccount
   metadata:
     name: frontend
   ---
   apiVersion: apps/v1
   kind: Deployment
   metadata:
     name: frontend
     labels:
       app: frontend
   spec:
     replicas: 1
     selector:
       matchLabels:
         app: frontend
     template:
       metadata:
         labels:
           app: frontend
         annotations:
           "consul.hashicorp.com/connect-inject": "true"
       spec:
         serviceAccountName: frontend
         containers:
           - name: frontend
             image: nicholasjackson/fake-service:v0.22.4
             securityContext:
               capabilities:
                 add: ["NET_ADMIN"]
             ports:
               - containerPort: 9090
             env:
               - name: "LISTEN_ADDR"
                 value: "0.0.0.0:9090"
               - name: "UPSTREAM_URIS"
                 value: "http://backend.virtual.cluster-02.consul"
               - name: "NAME"
                 value: "frontend"
               - name: "MESSAGE"
                 value: "Hello World"
               - name: "HTTP_CLIENT_KEEP_ALIVES"
                 value: "false"
   ```

   </CodeBlockConfig>

1. Apply the service file to the first cluster.

   ```shell-session
   $ kubectl --context $CLUSTER1_CONTEXT apply --filename frontend.yaml
   ```

1. Run the following command in the `frontend` pod and then check the output to confirm that you peered your clusters successfully.

   ```shell-session
   $ kubectl --context $CLUSTER1_CONTEXT exec -it $(kubectl --context $CLUSTER1_CONTEXT get pod -l app=frontend -o name) -- curl localhost:9090

   {
     "name": "frontend",
     "uri": "/",
     "type": "HTTP",
     "ip_addresses": [
       "10.16.2.11"
     ],
     "start_time": "2022-08-26T23:40:01.167199",
     "end_time": "2022-08-26T23:40:01.226951",
     "duration": "59.752279ms",
     "body": "Hello World",
     "upstream_calls": {
       "http://backend.virtual.cluster-02.consul": {
         "name": "backend",
         "uri": "http://backend.virtual.cluster-02.consul",
         "type": "HTTP",
         "ip_addresses": [
           "10.32.2.10"
         ],
         "start_time": "2022-08-26T23:40:01.223503",
         "end_time": "2022-08-26T23:40:01.224653",
         "duration": "1.149666ms",
         "headers": {
           "Content-Length": "266",
           "Content-Type": "text/plain; charset=utf-8",
           "Date": "Fri, 26 Aug 2022 23:40:01 GMT"
         },
         "body": "Response from backend",
         "code": 200
       }
     },
     "code": 200
   }
   ```

## End a peering connection

To end a peering connection, delete both the `PeeringAcceptor` and `PeeringDialer` resources.
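
For example, using the resource files created earlier in this guide (a sketch; adjust the contexts and filenames to your environment):

```shell-session
$ kubectl --context $CLUSTER1_CONTEXT delete --filename acceptor.yaml
$ kubectl --context $CLUSTER2_CONTEXT delete --filename dialer.yaml
```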

To confirm that you deleted your peering connection, in `cluster-01`, query the `/health` HTTP endpoint. The peered services should no longer appear.

```shell-session
$ curl "localhost:8500/v1/health/connect/backend?peer=cluster-02"
```
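
If the peering was deleted successfully, you would expect the endpoint to return an empty list (`[]`).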

## Recreate or reset a peering connection

To recreate or reset the peering connection, you need to generate a new peering token from the cluster where you created the `PeeringAcceptor`.

1. In the `PeeringAcceptor` CRD, add the annotation `consul.hashicorp.com/peering-version`. If the annotation already exists, update its value to a higher version.

   <CodeBlockConfig filename="acceptor.yml" highlight="6" hideClipboard>

   ```yaml
   apiVersion: consul.hashicorp.com/v1alpha1
   kind: PeeringAcceptor
   metadata:
     name: cluster-02
     annotations:
       consul.hashicorp.com/peering-version: "1" ## The peering version you want to set
   spec:
     peer:
       secret:
         name: "peering-token"
         key: "data"
         backend: "kubernetes"
   ```

   </CodeBlockConfig>

2. After you update `PeeringAcceptor`, repeat the steps to create a peering connection, including saving a new peering token and exporting it to the other cluster. Your peering connection is re-established with the updated token.

~> **Note:** The only way to create or set a new peering token is to manually adjust the value of the annotation `consul.hashicorp.com/peering-version`. Creating a new token causes the previous token to expire.

## Traffic management between peers

As of Consul v1.14, you can use [dynamic traffic management](/consul/docs/connect/l7-traffic) to configure your service mesh so that services automatically fail over and redirect between peers.

To configure automatic service failovers and redirects, edit the `ServiceResolver` CRD so that traffic resolves to a backup service instance on a peer. The following example updates the `ServiceResolver` CRD in `cluster-01` so that Consul redirects traffic intended for the `frontend` service to a backup instance in `cluster-02` when it detects multiple connection failures to the primary instance.

<CodeBlockConfig filename="service-resolver.yaml" hideClipboard>

```yaml
apiVersion: consul.hashicorp.com/v1alpha1
kind: ServiceResolver
metadata:
  name: frontend
spec:
  connectTimeout: 15s
  failover:
    '*':
      targets:
        - peer: 'cluster-02'
          service: 'backup'
          namespace: 'default'
```

</CodeBlockConfig>
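
As with the other CRDs in this guide, apply the resolver to the cluster where you created it, shown here for `cluster-01` with the filename from the example above:

```shell-session
$ kubectl --context $CLUSTER1_CONTEXT apply --filename service-resolver.yaml
```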