---
layout: docs
page_title: Establish Cluster Peering Connections on Kubernetes
description: >-
  To establish a cluster peering connection on Kubernetes, generate a peering token to establish communication. Then export services and authorize requests with service intentions.
---

# Establish cluster peering connections on Kubernetes

This page details the process for establishing a cluster peering connection between services in a Consul on Kubernetes deployment.

The overall process for establishing a cluster peering connection consists of the following steps:

1. Create a peering token in one cluster.
1. Use the peering token to establish peering with a second cluster.
1. Export services between clusters.
1. Create intentions to authorize services for peers.

Cluster peering between services cannot be established until all four steps are complete.

For general guidance for establishing cluster peering connections, refer to [Establish cluster peering connections](/consul/docs/connect/cluster-peering/usage/establish-cluster-peering).

## Prerequisites

You must meet the following requirements to use Consul's cluster peering features with Kubernetes:

- Consul v1.14.1 or higher
- Consul on Kubernetes v1.0.0 or higher
- At least two Kubernetes clusters

In Consul on Kubernetes, peers identify each other using the `metadata.name` values you establish when creating the `PeeringAcceptor` and `PeeringDialer` CRDs. For additional information about requirements for cluster peering on Kubernetes deployments, refer to [Cluster peering on Kubernetes technical specifications](/consul/docs/k8s/connect/cluster-peering/tech-specs).

### Assign cluster IDs to environmental variables

After you provision a Kubernetes cluster and set up your kubeconfig file to manage access to multiple Kubernetes clusters, you can assign your clusters to environmental variables for future use.

1. Get the context names for your Kubernetes clusters using one of these methods:

   - Run the `kubectl config current-context` command to get the context for the cluster you are currently in.
   - Run the `kubectl config get-contexts` command to get all configured contexts in your kubeconfig file.

1. Use the `kubectl` command to export the Kubernetes context names and then set them to variables. For more information on how to use kubeconfig and contexts, refer to the [Kubernetes docs on configuring access to multiple clusters](https://kubernetes.io/docs/tasks/access-application-cluster/configure-access-multiple-clusters/).

   ```shell-session
   $ export CLUSTER1_CONTEXT=<CONTEXT for first Kubernetes cluster>
   $ export CLUSTER2_CONTEXT=<CONTEXT for second Kubernetes cluster>
   ```
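
   To confirm that each variable resolves to a reachable cluster, you can optionally run a quick check against both contexts:

   ```shell-session
   $ kubectl --context $CLUSTER1_CONTEXT get nodes
   $ kubectl --context $CLUSTER2_CONTEXT get nodes
   ```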

### Update the Helm chart

To use cluster peering with Consul on Kubernetes deployments, update the Helm chart with [the required values](/consul/docs/k8s/connect/cluster-peering/tech-specs#helm-requirements). After updating the Helm chart, you can use the `helm` CLI to apply `values.yaml` to each cluster.
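
The required values depend on your Consul version; refer to the linked Helm requirements for the authoritative list. As a minimal sketch, a peering-ready `values.yaml` generally enables TLS, connect injection, and mesh gateways, along the lines of the following (names and replica counts are illustrative):

<CodeBlockConfig filename="values.yaml">

```yaml
global:
  name: consul
  peering:
    enabled: true # enable cluster peering features
  tls:
    enabled: true # peering connections require TLS
connectInject:
  enabled: true # inject sidecar proxies so workloads can join the mesh
meshGateway:
  enabled: true # mesh gateways carry traffic between peered clusters
  replicas: 1
server:
  replicas: 1
```

</CodeBlockConfig>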

1. In `cluster-01`, run the following commands:

   ```shell-session
   $ export HELM_RELEASE_NAME1=cluster-01
   ```

   ```shell-session
   $ helm install ${HELM_RELEASE_NAME1} hashicorp/consul --create-namespace --namespace consul --version "1.0.1" --values values.yaml --set global.datacenter=dc1 --kube-context $CLUSTER1_CONTEXT
   ```

1. In `cluster-02`, run the following commands:

   ```shell-session
   $ export HELM_RELEASE_NAME2=cluster-02
   ```

   ```shell-session
   $ helm install ${HELM_RELEASE_NAME2} hashicorp/consul --create-namespace --namespace consul --version "1.0.1" --values values.yaml --set global.datacenter=dc2 --kube-context $CLUSTER2_CONTEXT
   ```
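
   After both releases are installed, you can optionally wait for the Consul pods in each cluster to become ready before continuing:

   ```shell-session
   $ kubectl --context $CLUSTER1_CONTEXT get pods --namespace consul
   $ kubectl --context $CLUSTER2_CONTEXT get pods --namespace consul
   ```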

### Configure the mesh gateway mode for traffic between services

In Kubernetes deployments, you can configure mesh gateways to use `local` mode so that a service dialing a service in a remote peer dials the local mesh gateway instead of the remote mesh gateway. To configure the mesh gateway mode so that this traffic always leaves through the local mesh gateway, you can use the `ProxyDefaults` CRD.

1. In `cluster-01`, apply the following `ProxyDefaults` CRD to configure the mesh gateway mode.

   <CodeBlockConfig filename="proxy-defaults.yaml">

   ```yaml
   apiVersion: consul.hashicorp.com/v1alpha1
   kind: ProxyDefaults
   metadata:
     name: global
   spec:
     meshGateway:
       mode: local
   ```

   </CodeBlockConfig>

   ```shell-session
   $ kubectl --context $CLUSTER1_CONTEXT apply -f proxy-defaults.yaml
   ```

1. In `cluster-02`, apply the following `ProxyDefaults` CRD to configure the mesh gateway mode.

   <CodeBlockConfig filename="proxy-defaults.yaml">

   ```yaml
   apiVersion: consul.hashicorp.com/v1alpha1
   kind: ProxyDefaults
   metadata:
     name: global
   spec:
     meshGateway:
       mode: local
   ```

   </CodeBlockConfig>

   ```shell-session
   $ kubectl --context $CLUSTER2_CONTEXT apply -f proxy-defaults.yaml
   ```
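
   To confirm that Consul accepted the configuration, you can optionally read the resource back from either cluster and check that its `SYNCED` column reports `True`:

   ```shell-session
   $ kubectl --context $CLUSTER1_CONTEXT get proxydefaults global
   ```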

## Create a peering token

To begin the cluster peering process, generate a peering token in one of your clusters. The other cluster uses this token to establish the peering connection.

Every time you generate a peering token, a single-use secret for establishing the peering connection is embedded in the token. Because regenerating a peering token invalidates the previously generated secret, you must use the most recently created token to establish peering connections.

1. In `cluster-01`, create the `PeeringAcceptor` custom resource. To ensure cluster peering connections are secure, the `metadata.name` field cannot be duplicated. Refer to the peer by a specific name.

   <CodeBlockConfig filename="acceptor.yaml">

   ```yaml
   apiVersion: consul.hashicorp.com/v1alpha1
   kind: PeeringAcceptor
   metadata:
     name: cluster-02 ## The name of the peer you want to connect to
   spec:
     peer:
       secret:
         name: "peering-token"
         key: "data"
         backend: "kubernetes"
   ```

   </CodeBlockConfig>

1. Apply the `PeeringAcceptor` resource to the first cluster.

   ```shell-session
   $ kubectl --context $CLUSTER1_CONTEXT apply --filename acceptor.yaml
   ```

1. Save your peering token so that you can export it to the other cluster.

   ```shell-session
   $ kubectl --context $CLUSTER1_CONTEXT get secret peering-token --output yaml > peering-token.yaml
   ```
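
   If the `peering-token` secret does not appear immediately, you can optionally inspect the `PeeringAcceptor` status to confirm that the token was generated. This assumes the resource name `cluster-02` from the earlier step:

   ```shell-session
   $ kubectl --context $CLUSTER1_CONTEXT get peeringacceptor cluster-02 --output yaml
   ```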

## Establish a connection between clusters

Next, use the peering token to establish a secure connection between the clusters.

1. Apply the peering token to the second cluster.

   ```shell-session
   $ kubectl --context $CLUSTER2_CONTEXT apply --filename peering-token.yaml
   ```

1. In `cluster-02`, create the `PeeringDialer` custom resource. To ensure cluster peering connections are secure, the `metadata.name` field cannot be duplicated. Refer to the peer by a specific name.

   <CodeBlockConfig filename="dialer.yaml">

   ```yaml
   apiVersion: consul.hashicorp.com/v1alpha1
   kind: PeeringDialer
   metadata:
     name: cluster-01 ## The name of the peer you want to connect to
   spec:
     peer:
       secret:
         name: "peering-token"
         key: "data"
         backend: "kubernetes"
   ```

   </CodeBlockConfig>

1. Apply the `PeeringDialer` resource to the second cluster.

   ```shell-session
   $ kubectl --context $CLUSTER2_CONTEXT apply --filename dialer.yaml
   ```
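
   To verify that the connection is active, you can optionally list peerings from a Consul server pod in either cluster. The pod name below assumes the default server naming from the Helm chart; if ACLs are enabled, the CLI also needs a token:

   ```shell-session
   $ kubectl --context $CLUSTER2_CONTEXT exec --namespace consul consul-server-0 -- consul peering list
   ```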

## Export services between clusters

After you establish a connection between the clusters, you need to create an `exported-services` CRD that defines the services that are available to another admin partition.

While the CRD can target admin partitions either locally or remotely, cluster peering always exports services to remote admin partitions. Refer to [exported service consumers](/consul/docs/connect/config-entries/exported-services#consumers-1) for more information.

1. For the service in `cluster-02` that you want to export, add the `"consul.hashicorp.com/connect-inject": "true"` annotation to your service's pods prior to deploying. The annotation allows the workload to join the mesh. It is highlighted in the following example:

   <CodeBlockConfig filename="backend.yaml" highlight="37">

   ```yaml
   # Service to expose backend
   apiVersion: v1
   kind: Service
   metadata:
     name: backend
   spec:
     selector:
       app: backend
     ports:
       - name: http
         protocol: TCP
         port: 80
         targetPort: 9090
   ---
   apiVersion: v1
   kind: ServiceAccount
   metadata:
     name: backend
   ---
   # Deployment for backend
   apiVersion: apps/v1
   kind: Deployment
   metadata:
     name: backend
     labels:
       app: backend
   spec:
     replicas: 1
     selector:
       matchLabels:
         app: backend
     template:
       metadata:
         labels:
           app: backend
         annotations:
           "consul.hashicorp.com/connect-inject": "true"
       spec:
         serviceAccountName: backend
         containers:
           - name: backend
             image: nicholasjackson/fake-service:v0.22.4
             ports:
               - containerPort: 9090
             env:
               - name: "LISTEN_ADDR"
                 value: "0.0.0.0:9090"
               - name: "NAME"
                 value: "backend"
               - name: "MESSAGE"
                 value: "Response from backend"
   ```

   </CodeBlockConfig>

1. Deploy the `backend` service to the second cluster.

   ```shell-session
   $ kubectl --context $CLUSTER2_CONTEXT apply --filename backend.yaml
   ```
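
   You can optionally confirm that the sidecar proxy was injected; the `backend` pod should report an additional ready container:

   ```shell-session
   $ kubectl --context $CLUSTER2_CONTEXT get pods --selector app=backend
   ```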

1. In `cluster-02`, create an `ExportedServices` custom resource. The name of the peer that consumes the service should be identical to the name set in the `PeeringDialer` CRD.

   <CodeBlockConfig filename="exported-service.yaml">

   ```yaml
   apiVersion: consul.hashicorp.com/v1alpha1
   kind: ExportedServices
   metadata:
     name: default ## The name of the partition containing the service
   spec:
     services:
       - name: backend ## The name of the service you want to export
         consumers:
           - peer: cluster-01 ## The name of the peer that receives the service
   ```

   </CodeBlockConfig>

1. Apply the `ExportedServices` resource to the second cluster.

   ```shell-session
   $ kubectl --context $CLUSTER2_CONTEXT apply --filename exported-service.yaml
   ```
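
   To confirm that Consul accepted the export, you can optionally check that the resource synced. This assumes the resource name `default` from the previous step:

   ```shell-session
   $ kubectl --context $CLUSTER2_CONTEXT get exportedservices default
   ```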

## Authorize services for peers

Before you can call services from peered clusters, you must set service intentions that authorize those clusters to use specific services. Consul prevents services from being exported to unauthorized clusters.

1. Create service intentions for the second cluster. The name of the peer should match the name set in the `PeeringDialer` CRD.

   <CodeBlockConfig filename="intention.yaml">

   ```yaml
   apiVersion: consul.hashicorp.com/v1alpha1
   kind: ServiceIntentions
   metadata:
     name: backend-deny
   spec:
     destination:
       name: backend
     sources:
       - name: "*"
         action: deny
       - name: frontend
         action: allow
         peer: cluster-01 ## The peer of the source service
   ```

   </CodeBlockConfig>

1. Apply the intentions to the second cluster.

   ```shell-session
   $ kubectl --context $CLUSTER2_CONTEXT apply --filename intention.yaml
   ```
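
   As with the other CRDs, you can optionally read the intention back to confirm that it synced:

   ```shell-session
   $ kubectl --context $CLUSTER2_CONTEXT get serviceintentions backend-deny
   ```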
1. Add the `"consul.hashicorp.com/connect-inject": "true"` annotation to your service's pods before deploying the workload so that the services in `cluster-01` can dial `backend` in `cluster-02`. To dial the upstream service from an application, configure the application so that that requests are sent to the correct DNS name as specified in [Service Virtual IP Lookups](/consul/docs/services/discovery/dns-static-lookups#service-virtual-ip-lookups). In the following example, the annotation that allows the workload to join the mesh and the configuration provided to the workload that enables the workload to dial the upstream service using the correct DNS name is highlighted. [Service Virtual IP Lookups for Consul Enterprise](/consul/docs/services/discovery/dns-static-lookups#service-virtual-ip-lookups-for-consul-enterprise) details how you would similarly format a DNS name including partitions and namespaces.

   <CodeBlockConfig filename="frontend.yaml" highlight="36,51">

   ```yaml
   # Service to expose frontend
   apiVersion: v1
   kind: Service
   metadata:
     name: frontend
   spec:
     selector:
       app: frontend
     ports:
       - name: http
         protocol: TCP
         port: 9090
         targetPort: 9090
   ---
   apiVersion: v1
   kind: ServiceAccount
   metadata:
     name: frontend
   ---
   apiVersion: apps/v1
   kind: Deployment
   metadata:
     name: frontend
     labels:
       app: frontend
   spec:
     replicas: 1
     selector:
       matchLabels:
         app: frontend
     template:
       metadata:
         labels:
           app: frontend
         annotations:
           "consul.hashicorp.com/connect-inject": "true"
       spec:
         serviceAccountName: frontend
         containers:
           - name: frontend
             image: nicholasjackson/fake-service:v0.22.4
             securityContext:
               capabilities:
                 add: ["NET_ADMIN"]
             ports:
               - containerPort: 9090
             env:
               - name: "LISTEN_ADDR"
                 value: "0.0.0.0:9090"
               - name: "UPSTREAM_URIS"
                 value: "http://backend.virtual.cluster-02.consul"
               - name: "NAME"
                 value: "frontend"
               - name: "MESSAGE"
                 value: "Hello World"
               - name: "HTTP_CLIENT_KEEP_ALIVES"
                 value: "false"
   ```

   </CodeBlockConfig>

1. Apply the service file to the first cluster.

   ```shell-session
   $ kubectl --context $CLUSTER1_CONTEXT apply --filename frontend.yaml
   ```

1. Run the following command in `frontend` and then check the output to confirm that you peered your clusters successfully.

   ```shell-session
   $ kubectl --context $CLUSTER1_CONTEXT exec -it $(kubectl --context $CLUSTER1_CONTEXT get pod -l app=frontend -o name) -- curl localhost:9090
   ```

   <CodeBlockConfig highlight="29" hideClipboard>

   ```json
   {
     "name": "frontend",
     "uri": "/",
     "type": "HTTP",
     "ip_addresses": [
       "10.16.2.11"
     ],
     "start_time": "2022-08-26T23:40:01.167199",
     "end_time": "2022-08-26T23:40:01.226951",
     "duration": "59.752279ms",
     "body": "Hello World",
     "upstream_calls": {
       "http://backend.virtual.cluster-02.consul": {
         "name": "backend",
         "uri": "http://backend.virtual.cluster-02.consul",
         "type": "HTTP",
         "ip_addresses": [
           "10.32.2.10"
         ],
         "start_time": "2022-08-26T23:40:01.223503",
         "end_time": "2022-08-26T23:40:01.224653",
         "duration": "1.149666ms",
         "headers": {
           "Content-Length": "266",
           "Content-Type": "text/plain; charset=utf-8",
           "Date": "Fri, 26 Aug 2022 23:40:01 GMT"
         },
         "body": "Response from backend",
         "code": 200
       }
     },
     "code": 200
   }
   ```

   </CodeBlockConfig>

### Authorize service reads with ACLs

If ACLs are enabled on a Consul cluster, sidecar proxies that access exported services as an upstream must have an ACL token that grants read access.

Read access to all imported services is granted using either of the following rules associated with an ACL token:

- `service:write` permissions for any service in the sidecar's partition.
- `service:read` and `node:read` for all services and nodes, respectively, in the sidecar's namespace and partition.

For Consul Enterprise, the permissions apply to all imported services in the service's partition. These permissions are satisfied when using a [service identity](/consul/docs/security/acl/acl-roles#service-identities).
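
As a sketch, an ACL policy granting the second set of permissions might contain the following rules, assuming the default namespace and partition:

```hcl
# Allow reading all services and nodes so the sidecar can
# resolve imported (peered) upstream services.
service_prefix "" {
  policy = "read"
}

node_prefix "" {
  policy = "read"
}
```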

Refer to [Reading services](/consul/docs/connect/config-entries/exported-services#reading-services) in the `exported-services` configuration entry documentation for example rules.

For additional information about how to configure and use ACLs, refer to [ACLs system overview](/consul/docs/security/acl).