update docs for single-dc-multi-k8s install (#13008)
* update docs for single-dc-multi-k8s install

Co-authored-by: David Yu <dyu@hashicorp.com>
Co-authored-by: Jeff Boruszak <104028618+boruszak@users.noreply.github.com>
commit ce1c58f0f5 (parent bccd920db7)
@@ -14,7 +14,23 @@ In this example, we will use two Kubernetes clusters, but this approach could be
 
 ~> **Note:** This deployment topology requires that your Kubernetes clusters have a flat network
 for both pods and nodes, so that pods or nodes from one cluster can connect
-to pods or nodes in another.
+to pods or nodes in another. If a flat network is not available across all Kubernetes clusters, follow the instructions for using [Admin Partitions](/docs/enterprise/admin-partitions), which is a Consul Enterprise feature.
+
+## Prepare Helm release name ahead of installs
+
+The Helm release name must be unique for each Kubernetes cluster.
+The Helm chart uses the Helm release name as a prefix for the
+ACL resources that it creates, such as tokens and auth methods. If the names of the Helm releases
+are identical, subsequent Consul on Kubernetes clusters overwrite existing ACL resources and cause the clusters to fail.
+
+Before you proceed with installation, prepare the Helm release names as environment variables for both the server and client installs to use.
+
+```shell-session
+$ export HELM_RELEASE_SERVER=server
+$ export HELM_RELEASE_CLIENT=client
+...
+$ export HELM_RELEASE_CLIENT2=client2
+```
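To illustrate why distinct release names matter: the chart prefixes the Kubernetes resources it creates with the release name, following the `<release>-consul-<resource>` pattern visible in the commands later in this diff (for example `${HELM_RELEASE_SERVER}-consul-ca-cert`). A minimal sketch of that naming pattern, assuming the release names exported above:

```shell
# Illustration only: the chart names its resources "<release>-consul-<resource>",
# so distinct release names yield distinct ACL/secret names per cluster.
export HELM_RELEASE_SERVER=server
export HELM_RELEASE_CLIENT=client
echo "${HELM_RELEASE_SERVER}-consul-bootstrap-acl-token"
echo "${HELM_RELEASE_CLIENT}-consul-bootstrap-acl-token"
```

With identical release names, both clusters would compute the same resource names, which is exactly the collision the paragraph above warns about.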
+
 ## Deploying Consul servers and clients in the first cluster
@@ -60,14 +76,10 @@ $ kubectl create secret generic consul-gossip-encryption-key --from-literal=key=
 ```
 
 Now we can install our Consul cluster with Helm:
-```shell
-$ helm install cluster1 --values cluster1-config.yaml hashicorp/consul
+```shell-session
+$ helm install ${HELM_RELEASE_SERVER} --values cluster1-config.yaml hashicorp/consul
 ```
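The `cluster1-config.yaml` file itself lies outside this diff hunk. A hedged sketch of what such a values file might contain, assuming TLS, gossip encryption, and ACLs are enabled as the surrounding steps imply; the key names follow the `hashicorp/consul` Helm chart, and the secret name matches the `consul-gossip-encryption-key` secret created above:

```yaml
# Hypothetical cluster1-config.yaml sketch -- not the exact file from the guide.
global:
  datacenter: dc1
  tls:
    enabled: true
  gossipEncryption:
    secretName: consul-gossip-encryption-key   # secret created with kubectl above
    secretKey: key
  acls:
    manageSystemACLs: true
connectInject:
  enabled: true
```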
 
-~> **Note:** The Helm release name must be unique for each Kubernetes cluster.
-That is because the Helm chart will use the Helm release name as a prefix for the
-ACL resources that it creates, such as tokens and auth methods. If the names of the Helm releases
-are the same, the Helm installation in subsequent clusters will clobber existing ACL resources.
-
 Once the installation finishes and all components are running and ready,
 we need to extract the gossip encryption key we've created, the CA certificate
@@ -75,7 +87,7 @@ and the ACL bootstrap token generated during installation,
 so that we can apply them to our second Kubernetes cluster.
 
 ```shell-session
-$ kubectl get secret consul-gossip-encryption-key cluster1-consul-ca-cert cluster1-consul-bootstrap-acl-token --output yaml > cluster1-credentials.yaml
+$ kubectl get secret consul-gossip-encryption-key ${HELM_RELEASE_SERVER}-consul-ca-cert ${HELM_RELEASE_SERVER}-consul-bootstrap-acl-token --output yaml > cluster1-credentials.yaml
 ```
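The exported secrets then need to exist in the second cluster before the client install can reference them. A sketch of that step, assuming your kubeconfig has a context for the second cluster; the context name `cluster2-context` is a placeholder, not a name from this guide:

```shell-session
$ kubectl config use-context cluster2-context
$ kubectl apply --filename cluster1-credentials.yaml
```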
 
 ## Deploying Consul clients in the second cluster
@@ -183,7 +195,7 @@ for more details.
 Now we're ready to install!
 
 ```shell-session
-$ helm install cluster2 --values cluster2-config.yaml hashicorp/consul
+$ helm install ${HELM_RELEASE_CLIENT} --values cluster2-config.yaml hashicorp/consul
 ```
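As with the first cluster, `cluster2-config.yaml` is outside this diff hunk. A hedged sketch, assuming the second cluster runs clients only and consumes the secrets copied over from `cluster1-credentials.yaml`; key names follow the `hashicorp/consul` chart, the secret names assume `HELM_RELEASE_SERVER=server`, and the join address is a placeholder:

```yaml
# Hypothetical cluster2-config.yaml sketch -- not the exact file from the guide.
global:
  enabled: false                   # run no servers in this cluster; clients only
  datacenter: dc1
  tls:
    enabled: true
    caCert:
      secretName: server-consul-ca-cert            # from cluster1-credentials.yaml
      secretKey: tls.crt
  gossipEncryption:
    secretName: consul-gossip-encryption-key
    secretKey: key
  acls:
    manageSystemACLs: true
    bootstrapToken:
      secretName: server-consul-bootstrap-acl-token # from cluster1-credentials.yaml
      secretKey: token
client:
  enabled: true
  join: ["10.0.0.10"]              # placeholder: addresses of the cluster1 servers
```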
 
 ## Verifying the Consul Service Mesh works