---
layout: docs
page_title: Deploy Single Consul Datacenter Across Multiple K8s Clusters
description: >-
  A single Consul datacenter can run across multiple Kubernetes clusters in a flat network as long as only one cluster runs the Consul servers. Learn how to configure the Helm chart, deploy the clusters in sequence, and verify your service mesh.
---
# Deploy Single Consul Datacenter Across Multiple Kubernetes Clusters
~> **Note:** When running Consul across multiple Kubernetes clusters, we recommend using [admin partitions](/consul/docs/enterprise/admin-partitions) for production environments. This Consul Enterprise feature allows you to accommodate multiple tenants without resource collisions when administering a cluster at scale. Admin partitions also enable you to run Consul on Kubernetes clusters across a non-flat network.
This page describes how to deploy a single Consul datacenter across multiple Kubernetes clusters,
with servers running in one cluster and only Consul on Kubernetes components in the rest of the clusters.
This example uses two Kubernetes clusters, but the approach can be extended to more than two.
## Requirements
* Consul-Helm version `v1.0.0` or higher
* Either the Helm release name for each Kubernetes cluster must be unique, or `global.name` for each Kubernetes cluster must be unique to prevent collisions of ACL resources with the same prefix.
## Prepare Helm release names ahead of installs
The Helm release name must be unique for each Kubernetes cluster.
The Helm chart uses the Helm release name as a prefix for the
ACL resources that it creates, such as tokens and auth methods. If the names of the Helm releases are identical, or if `global.name` for each cluster is identical, subsequent Consul on Kubernetes clusters will overwrite existing ACL resources and cause the clusters to fail.
Before proceeding with installation, prepare the Helm release names as environment variables for both the server and client install.
```shell-session
$ export HELM_RELEASE_SERVER=server
$ export HELM_RELEASE_CONSUL=consul
...
$ export HELM_RELEASE_CONSUL2=consul2
```
## Deploying Consul servers in the first cluster
First, deploy the first cluster with Consul servers according to the following example Helm configuration.

<CodeBlockConfig filename="cluster1-values.yaml">

```yaml
global:
  datacenter: dc1
  tls:
    enabled: true
    enableAutoEncrypt: true
  acls:
    manageSystemACLs: true
  gossipEncryption:
    secretName: consul-gossip-encryption-key
    secretKey: key
ui:
  service:
    type: NodePort
```

</CodeBlockConfig>

Note that this will deploy a secure configuration with gossip encryption,
TLS for all components, and ACLs enabled. In addition, this will enable the Consul Service Mesh and the controller for CRDs
that can be used later to verify the connectivity of services across clusters.
The UI's service type is set to `NodePort`.
This is needed to connect to the servers from another cluster without using the servers' pod IPs,
which are likely to change.
To deploy, first generate the gossip encryption key and save it as a Kubernetes secret.
```shell-session
$ kubectl create secret generic consul-gossip-encryption-key --from-literal=key=$(consul keygen)
```
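If the `consul` binary is not available on your machine, any 32-byte, base64-encoded value works as a gossip encryption key. A minimal alternative sketch, assuming `openssl` is installed, that swaps `consul keygen` for `openssl rand`:

```shell-session
$ kubectl create secret generic consul-gossip-encryption-key --from-literal=key=$(openssl rand -base64 32)
```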
Now install the Consul cluster with Helm:
```shell-session
$ helm install ${HELM_RELEASE_SERVER} --values cluster1-values.yaml hashicorp/consul
```
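Installation can take a few minutes while the servers elect a leader and the ACL system is bootstrapped. One way to watch the rollout, assuming the default `app=consul` label that the Helm chart applies to its workloads:

```shell-session
$ kubectl get pods --selector app=consul --watch
```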
Once the installation finishes and all components are running and ready, the following information needs to be extracted (using the command below) and applied to the second Kubernetes cluster.
* The CA certificate generated during installation
* The ACL bootstrap token generated during installation
```shell-session
$ kubectl get secret ${HELM_RELEASE_SERVER}-consul-ca-cert ${HELM_RELEASE_SERVER}-consul-bootstrap-acl-token --output yaml > cluster1-credentials.yaml
```
## Deploying Consul Kubernetes in the second cluster
~> **Note:** If multiple Kubernetes clusters will be joined to the Consul Datacenter, then the following instructions will need to be repeated for each additional Kubernetes cluster.
Switch to the second Kubernetes cluster, where you will deploy the Consul on Kubernetes components
that join the first Consul cluster.
```shell-session
$ kubectl config use-context <K8S_CONTEXT_NAME>
```
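If you are unsure of the exact context name, list the contexts available in your kubeconfig first:

```shell-session
$ kubectl config get-contexts
```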
First, apply the credentials extracted from the first cluster to the second cluster:
```shell-session
$ kubectl apply --filename cluster1-credentials.yaml
```
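As a quick sanity check, you can confirm that the CA certificate and bootstrap ACL token secrets now exist in the second cluster:

```shell-session
$ kubectl get secrets | grep consul
```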
To deploy in the second cluster, the following example Helm configuration will be used:

<CodeBlockConfig filename="cluster2-values.yaml" highlight="6-11,15-17">

```yaml
global:
  enabled: false
  datacenter: dc1
  acls:
    manageSystemACLs: true
    bootstrapToken:
      secretName: cluster1-consul-bootstrap-acl-token
      secretKey: token
  tls:
    enabled: true
    caCert:
      secretName: cluster1-consul-ca-cert
      secretKey: tls.crt
externalServers:
  enabled: true
  # This should be any node IP of the first k8s cluster or the load balancer IP if using LoadBalancer service type for the UI.
  hosts: ["10.0.0.4"]
  # The node port of the UI's NodePort service or the load balancer port.
  httpsPort: 31557
  tlsServerName: server.dc1.consul
  # The address of the kube API server of this Kubernetes cluster
  k8sAuthMethodHost: https://kubernetes.example.com:443
connectInject:
  enabled: true
```

</CodeBlockConfig>

Note the references in the ACL and TLS configuration to the secrets extracted from the first cluster and applied in the previous step.
The `externalServers.hosts` and `externalServers.httpsPort`
refer to the IP and port of the UI's NodePort service deployed in the first cluster.
Set `externalServers.hosts` to any node IP of the first cluster,
which you can see by running `kubectl get nodes --output wide`.
Set `externalServers.httpsPort` to the `nodePort` of the `cluster1-consul-ui` service.
In our example, the port is `31557`.
```shell-session
$ kubectl get service cluster1-consul-ui --context cluster1
NAME                 TYPE       CLUSTER-IP    EXTERNAL-IP   PORT(S)         AGE
cluster1-consul-ui   NodePort   10.0.240.80   <none>        443:31557/TCP   40h
```
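If you prefer to capture just the node port programmatically, a small sketch using kubectl's JSONPath output (assuming the `cluster1-consul-ui` service name from this example):

```shell-session
$ kubectl get service cluster1-consul-ui --context cluster1 --output jsonpath='{.spec.ports[0].nodePort}'
31557
```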
Set `externalServers.tlsServerName` to `server.dc1.consul`. This is the DNS SAN
(Subject Alternative Name) that is present in the Consul server's certificate.
This is required because the connection to the Consul servers uses the node IP,
but that IP isn't present in the server's certificate.
To make sure that the hostname verification succeeds during the TLS handshake, set the TLS
server name to a DNS name that *is* present in the certificate.
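If you want to confirm which DNS SANs the server certificate contains, one way is to inspect the certificate served on the exposed port. This is a sketch, assuming `openssl` is available and using the node IP and node port from this example:

```shell-session
$ echo | openssl s_client -connect 10.0.0.4:31557 2>/dev/null | openssl x509 -noout -text | grep -A 1 'Subject Alternative Name'
```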
Next, set `externalServers.k8sAuthMethodHost` to the address of the second Kubernetes cluster's API server.
This should be an address that is reachable from the first cluster, so it cannot be the internal DNS
name available within each Kubernetes cluster. Consul needs it so that `consul login` with the Kubernetes auth method works
from the second cluster.
More specifically, the Consul server needs to verify the Kubernetes service account
whenever `consul login` is called, and to verify service accounts from the second cluster, it needs to
reach the Kubernetes API in that cluster.
The easiest way to get this address is from the `kubeconfig`: run `kubectl config view` and grab
the value of `cluster.server` for the second cluster.
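For example, with your kubectl context pointed at the second cluster, the following prints the API server address for the current context only (`--minify` restricts the output to that context):

```shell-session
$ kubectl config view --minify --output jsonpath='{.clusters[0].cluster.server}'
```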
Now, proceed with the installation of the second cluster.
```shell-session
$ helm install ${HELM_RELEASE_CONSUL} --values cluster2-values.yaml hashicorp/consul
```
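Before moving on, you can confirm that the Consul on Kubernetes components in the second cluster are running and ready. A quick check, again assuming the default `app=consul` label applied by the Helm chart:

```shell-session
$ kubectl get pods --selector app=consul
```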
## Verifying the Consul Service Mesh works
~> When transparent proxy is enabled, services in one Kubernetes cluster that need to communicate with a service in another Kubernetes cluster must have an explicit upstream configured through the ["consul.hashicorp.com/connect-service-upstreams"](/consul/docs/k8s/annotations-and-labels#consul-hashicorp-com-connect-service-upstreams) annotation.
Now that the Consul cluster spanning multiple Kubernetes clusters is up and running, deploy two services in separate Kubernetes clusters and verify that they can connect to each other.
First, deploy the `static-server` service in the first cluster:

<CodeBlockConfig filename="static-server.yaml">

```yaml
---
apiVersion: consul.hashicorp.com/v1alpha1
kind: ServiceIntentions
metadata:
  name: static-server
spec:
  destination:
    name: static-server
  sources:
    - name: static-client
      action: allow
---
apiVersion: v1
kind: Service
metadata:
  name: static-server
spec:
  type: ClusterIP
  selector:
    app: static-server
  ports:
    - protocol: TCP
      port: 80
      targetPort: 8080
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: static-server
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: static-server
spec:
  replicas: 1
  selector:
    matchLabels:
      app: static-server
  template:
    metadata:
      name: static-server
      labels:
        app: static-server
      annotations:
        "consul.hashicorp.com/connect-inject": "true"
    spec:
      containers:
        - name: static-server
          image: hashicorp/http-echo:latest
          args:
            - -text="hello world"
            - -listen=:8080
          ports:
            - containerPort: 8080
              name: http
      serviceAccountName: static-server
```

</CodeBlockConfig>

Note that defining a `ServiceIntentions` resource is required so that our services are allowed to talk to each other.
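With your kubectl context pointed at the first cluster, apply the manifest. The file name `static-server.yaml` is just the name used in this example:

```shell-session
$ kubectl apply --filename static-server.yaml
```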
Next, deploy `static-client` in the second cluster with the following configuration:

<CodeBlockConfig filename="static-client.yaml">

```yaml
apiVersion: v1
kind: Service
metadata:
  name: static-client
spec:
  selector:
    app: static-client
  ports:
    - port: 80
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: static-client
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: static-client
spec:
  replicas: 1
  selector:
    matchLabels:
      app: static-client
  template:
    metadata:
      name: static-client
      labels:
        app: static-client
      annotations:
        "consul.hashicorp.com/connect-inject": "true"
        "consul.hashicorp.com/connect-service-upstreams": "static-server:1234"
    spec:
      containers:
        - name: static-client
          image: curlimages/curl:latest
          command: [ "/bin/sh", "-c", "--" ]
          args: [ "while true; do sleep 30; done;" ]
      serviceAccountName: static-client
```

</CodeBlockConfig>
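Switch your kubectl context back to the second cluster and apply the manifest, again assuming it is saved as `static-client.yaml`:

```shell-session
$ kubectl apply --filename static-client.yaml
```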
Once both services are up and running, try connecting to the `static-server` from `static-client`:
```shell-session
$ kubectl exec deploy/static-client -- curl --silent localhost:1234
"hello world"
```
A successful installation returns `hello world` as the output of the above curl command.