docs: remaining agentless docs updates (#15455)

* Update servers-outside-kubernetes.mdx

* Update single-dc-multi-k8s.mdx

* update Vault data integration for snapshot agent

* update k8s health checks page

* remove all instances of controller.enabled in helm values examples

* API Gateway update

* Apply suggestions from code review

Co-authored-by: Riddhi Shah <riddhi@hashicorp.com>

* Apply suggestions from code review

* Apply suggestions from code review

* Cleaner diagram

* added change around clients to workloads

Co-authored-by: Jeff Boruszak <104028618+boruszak@users.noreply.github.com>
Co-authored-by: boruszak <jeffrey.boruszak@hashicorp.com>
Co-authored-by: Riddhi Shah <riddhi@hashicorp.com>
Co-authored-by: David Yu <dyu@hashicorp.com>
Iryna Shustava 2022-11-18 10:33:02 -07:00 committed by GitHub
parent d71f071bec
commit 57a2c201fa
19 changed files with 142 additions and 251 deletions


@ -37,8 +37,6 @@ Ensure that the environment you are deploying Consul API Gateway in meets the re
name: consul
connectInject:
enabled: true
controller:
enabled: true
apiGateway:
enabled: true
image: hashicorp/consul-api-gateway:$VERSION


@ -18,7 +18,7 @@ The following CRDs are used to create and manage a peering connection:
- `PeeringAcceptor`: Generates a peering token and accepts an incoming peering connection.
- `PeeringDialer`: Uses a peering token to make an outbound peering connection with the cluster that generated the token.
Peering connections, including both data-plane and control-plane traffic, will be routed over mesh gateways.
Peering connections, including both data plane and control plane traffic, are routed through mesh gateways.
As of Consul v1.14, you can also [implement service failovers and redirects to control traffic](/consul/docs/connect/l7-traffic) between peers.
> To learn how to peer clusters and connect services across peers in AWS Elastic Kubernetes Service (EKS) environments, complete the [Consul Cluster Peering on Kubernetes tutorial](https://learn.hashicorp.com/tutorials/consul/cluster-peering-aws?utm_source=docs).
@ -33,15 +33,15 @@ You must implement the following requirements to create and use cluster peering
### Prepare for installation
Complete the following procedure after you have provisioned a Kubernetes cluster and set up your kubeconfig file to manage access to multiple Kubernetes clusters.
1. Use the `kubectl` command to export the Kubernetes context names and then set them to variables for future use. For more information on how to use kubeconfig and contexts, refer to the [Kubernetes docs on configuring access to multiple clusters](https://kubernetes.io/docs/tasks/access-application-cluster/configure-access-multiple-clusters/).
You can use the following methods to get the context names for your clusters:
- Use the `kubectl config current-context` command to get the context for the cluster you are currently in.
- Use the `kubectl config get-contexts` command to get all configured contexts in your kubeconfig file.
```shell-session
$ export CLUSTER1_CONTEXT=<CONTEXT for first Kubernetes cluster>
$ export CLUSTER2_CONTEXT=<CONTEXT for second Kubernetes cluster>
@ -64,21 +64,19 @@ Complete the following procedure after you have provisioned a Kubernetes cluster
dns:
enabled: true
enableRedirection: true
controller:
enabled: true
meshGateway:
enabled: true
replicas: 1
```
</CodeBlockConfig>
### Install Consul on Kubernetes
Install Consul on Kubernetes by using the CLI to apply `values.yaml` to each cluster.
1. In `cluster-01`, run the following commands:
```shell-session
$ export HELM_RELEASE_NAME=cluster-01
```
@ -88,7 +86,7 @@ Install Consul on Kubernetes by using the CLI to apply `values.yaml` to each clu
```
1. In `cluster-02`, run the following commands:
```shell-session
$ export HELM_RELEASE_NAME=cluster-02
```
@ -166,14 +164,14 @@ Peers identify each other using the `metadata.name` values you establish when cr
1. Apply the `PeeringDialer` resource to the second cluster.
```shell-session
$ kubectl --context $CLUSTER2_CONTEXT apply --filename dialer.yaml
```
### Export services between clusters
The examples described in this section demonstrate how to export a service named `backend`. You should change instances of `backend` in the example code to the name of the service you want to export.
1. For the service in `cluster-02` that you want to export, add the `"consul.hashicorp.com/connect-inject": "true"` annotation to your service's pods prior to deploying. The annotation allows the workload to join the mesh. It is highlighted in the following example:
<CodeBlockConfig filename="backend.yaml" highlight="37">
@ -234,7 +232,7 @@ The examples described in this section demonstrate how to export a service named
</CodeBlockConfig>
1. Deploy the `backend` service to the second cluster.
```shell-session
$ kubectl apply --context $CLUSTER2_CONTEXT --filename backend.yaml
```
@ -374,7 +372,7 @@ The examples described in this section demonstrate how to export a service named
```shell-session
$ kubectl --context $CLUSTER1_CONTEXT exec -it $(kubectl --context $CLUSTER1_CONTEXT get pod -l app=frontend -o name) -- curl localhost:9090
{
"name": "frontend",
"uri": "/",
@ -409,17 +407,17 @@ The examples described in this section demonstrate how to export a service named
"code": 200
}
```
</CodeBlockConfig>
## End a peering connection
To end a peering connection, delete both the `PeeringAcceptor` and `PeeringDialer` resources.
1. Delete the `PeeringDialer` resource from the second cluster.
```shell-session
$ kubectl --context $CLUSTER2_CONTEXT delete --filename dialer.yaml
```
1. Delete the `PeeringAcceptor` resource from the first cluster.
@ -430,27 +428,27 @@ To end a peering connection, delete both the `PeeringAcceptor` and `PeeringDiale
1. Confirm that you deleted your peering connection in `cluster-01` by querying the `/health` HTTP endpoint. The peered services should no longer appear.
1. Exec into the server pod for the first cluster.
```shell-session
$ kubectl exec -it consul-server-0 --context $CLUSTER1_CONTEXT -- /bin/sh
```
1. If you've enabled ACLs, export an ACL token to access the `/health` HTTP endpoint for services. The bootstrap token may be used if an ACL token is not already provisioned.
```shell-session
$ export CONSUL_HTTP_TOKEN=<INSERT BOOTSTRAP ACL TOKEN>
```
1. Query the `/health` HTTP endpoint. The peered services should no longer appear.
```shell-session
$ curl "localhost:8500/v1/health/connect/backend?peer=cluster-02"
```
## Recreate or reset a peering connection
To recreate or reset the peering connection, you need to generate a new peering token from the cluster where you created the `PeeringAcceptor`.
1. In the `PeeringAcceptor` CRD, add the annotation `consul.hashicorp.com/peering-version`. If the annotation already exists, update its value to a higher version.
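   For reference, a sketch of what the updated `PeeringAcceptor` might look like; the peer name, secret name, and version value shown here are illustrative, not prescriptive:

   ```yaml
   apiVersion: consul.hashicorp.com/v1alpha1
   kind: PeeringAcceptor
   metadata:
     name: cluster-02
     annotations:
       # Bumping this value instructs Consul to generate a new peering token.
       consul.hashicorp.com/peering-version: "2"
   spec:
     peer:
       secret:
         name: peering-token
         key: data
         backend: kubernetes
   ```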
@ -478,7 +476,7 @@ To recreate or reset the peering connection, you need to generate a new peering
1. [Establish a peering connection between clusters](#establish-a-peering-connection-between-clusters)
1. [Export services between clusters](#export-services-between-clusters)
1. [Authorize services for peers](#authorize-services-for-peers)
Your peering connection is re-established with the updated token.
~> **Note:** The only way to create or set a new peering token is to manually adjust the value of the annotation `consul.hashicorp.com/peering-version`. Creating a new token causes the previous token to expire.


@ -69,10 +69,10 @@ Consul Dataplane supports the following features:
- xDS load balancing is supported.
- Servers running in Kubernetes and servers external to Kubernetes are both supported.
- HCP Consul is supported.
- Consul API Gateway is supported.
### Technical Constraints
Be aware of the following limitations and recommendations for Consul Dataplane:
- Consul API Gateway is not currently supported.
- Consul Dataplane is not supported on Windows.


@ -111,7 +111,7 @@ For other runtimes, refer to the documentation for your infrastructure environme
One of the primary use cases for admin partitions is for enabling a service mesh across multiple Kubernetes clusters. The following requirements must be met to create admin partitions on Kubernetes:
- If you are deploying Consul servers on Kubernetes, then ensure that the Consul servers are deployed within the same Kubernetes cluster. Consul servers may be deployed external to Kubernetes and configured using the `externalServers` stanza.
- Consul clients deployed on the same Kubernetes cluster as the Consul Servers must use the `default` partition. If the clients are required to run on a non-default partition, then the clients must be deployed in a separate Kubernetes cluster.
- Workloads deployed on the same Kubernetes cluster as the Consul servers must use the `default` partition. If the workloads are required to run on a non-default partition, then the workloads must be deployed in a separate Kubernetes cluster.
- For Kubernetes clusters that join the Consul datacenter as admin partitions, ensure that a unique `global.name` value is assigned for the corresponding Helm `values.yaml` file.
- A Consul Enterprise license must be installed on each Kubernetes cluster.
- The helm chart for consul-k8s v0.39.0 or greater.


@ -2,7 +2,7 @@
layout: docs
page_title: Configure Health Checks for Consul on Kubernetes
description: >-
Kubernetes has built-in health probes you can sync with Consul's health checks to ensure service mesh traffic is routed to healthy pods. Learn how to register a TTL Health check and use mutating webhooks to redirect k8s liveness, readiness, and startup probes through Envoy proxies.
Kubernetes has built-in health probes you can sync with Consul's health checks to ensure service mesh traffic is routed to healthy pods.
---
# Configure Health Checks for Consul on Kubernetes
@ -14,14 +14,17 @@ Health check synchronization with Consul is done automatically whenever `connect
For each Kubernetes pod that is connect-injected the following will be configured:
1. A [TTL health check](/docs/discovery/checks#ttl) is registered within Consul.
The Consul health check's state will reflect the pod's readiness status,
which is the combination of all Kubernetes probes registered with the pod.
1. A [Consul health check](/consul/api-docs/catalog#register-entity) is registered within Consul catalog.
The Consul health check's state reflects the pod's readiness status.
1. If the pod is utilizing [Transparent Proxy](/docs/connect/transparent-proxy) mode, the mutating webhook will mutate all `http` based Startup, Liveness, and Readiness probes in the pod to redirect through the Envoy proxy.
This is done with [`ExposePaths` configuration](/docs/connect/registration/service-registration#expose-paths-configuration-reference) for each probe so that kubelet can access the endpoint through the Envoy proxy.
1. If the pod is using [transparent proxy mode](/docs/connect/transparent-proxy),
the mutating webhook redirects all `http` based startup, liveness, and readiness probes in the pod through the Envoy proxy.
The redirection is configured with the
[`ExposePaths` configuration](/docs/connect/registration/service-registration#expose-paths-configuration-reference)
for each probe so that kubelet can access the endpoint through the Envoy proxy.
~> The mutation behavior can be disabled by either setting the `consul.hashicorp.com/transparent-proxy-overwrite-probes` pod annotation to `false` or the `connectInject.defaultOverwriteProbes` Helm value to `false`.
The mutation behavior can be disabled by setting either the `consul.hashicorp.com/transparent-proxy-overwrite-probes`
pod annotation to `false` or the `connectInject.defaultOverwriteProbes` Helm value to `false`.
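For example, a minimal Helm values sketch that disables probe overwriting for all injected pods, assuming you want kubelet probes to keep targeting the pod directly:

```yaml
connectInject:
  enabled: true
  # Kubelet probes are left untouched instead of being redirected through Envoy.
  defaultOverwriteProbes: false
```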
When readiness probes are set for a pod, the status of the pod will be reflected within Consul and will cause Consul to redirect service
mesh traffic to the pod based on the pod's health. If the pod has failing health checks, Consul will no longer use


@ -382,9 +382,6 @@ upgrade the installation using `helm upgrade` for existing installs or
```yaml
connectInject:
enabled: true
controller:
enabled: true
```
This will configure the injector to inject when the


@ -34,8 +34,6 @@ global:
name: consul
connectInject:
enabled: true
controller:
enabled: true
ingressGateways:
enabled: true
gateways:
@ -268,8 +266,6 @@ global:
name: consul
connectInject:
enabled: true
controller:
enabled: true
ingressGateways:
enabled: false # Set to false
gateways:


@ -60,11 +60,6 @@ new and existing services:
1. Next, modify your Helm values:
1. Remove the `defaultProtocol` config. This won't affect existing services.
1. Set:
```yaml
controller:
enabled: true
```
1. Now you can upgrade your Helm chart to the latest version with the new Helm values.
1. From now on, any new service will require a [`ServiceDefaults`](/docs/connect/config-entries/service-defaults)
resource to set its protocol:
@ -164,13 +159,6 @@ You will need to perform the following steps to upgrade:
1. Next, remove this annotation from existing deployments. This will have no
effect on the deployments because the annotation was only used when the
service was first created.
1. Modify your Helm values and add:
```yaml
controller:
enabled: true
```
1. Now you can upgrade your Helm chart to the latest version.
1. From now on, any new service will require a [`ServiceDefaults`](/docs/connect/config-entries/service-defaults)
resource to set its protocol:


@ -68,9 +68,6 @@ connectInject:
# Consul Connect service mesh must be enabled for federation.
enabled: true
controller:
enabled: true
meshGateway:
# Mesh gateways are gateways between datacenters. They must be enabled
# for federation in Kubernetes since the communication between datacenters
@ -358,8 +355,6 @@ global:
secretKey: gossipEncryptionKey
connectInject:
enabled: true
controller:
enabled: true
meshGateway:
enabled: true
server:


@ -270,7 +270,7 @@ You'll need:
```
With Consul Enterprise, use:
```hcl
partition_prefix "" {
namespace_prefix "" {
@ -283,16 +283,16 @@ You'll need:
}
}
```
These permissions are needed to allow cross-datacenter requests. To make a cross-datacenter request, the sidecar proxy in the originating DC needs to know about the
services running in the remote DC. To do so, it needs an ACL token that allows it to look up the services in the remote DC. Because of the way tokens are created in
Kubernetes, the sidecar proxies have local ACL tokens, i.e. tokens that are only valid in the local DC. When a request goes from one DC to another, if the
request has a local token, it is stripped from the request because the remote DC won't be able to validate it. When the request lands in the other DC,
it has no ACL token and so is subject to the anonymous token policy. This is why the anonymous token policy must be configured to allow read access
to all services. When the Kubernetes DC is the primary, this is handled automatically, but when the primary DC is on VMs, this must be configured manually.
To configure the anonymous token policy, first create a policy with the above rules, then attach it to the anonymous token. For example using the CLI:
```sh
echo 'node_prefix "" {
policy = "read"
@ -300,7 +300,7 @@ You'll need:
service_prefix "" {
policy = "read"
}' | consul acl policy create -name anonymous -rules -
consul acl token update -id 00000000-0000-0000-0000-000000000002 -policy-name anonymous
```
@ -374,8 +374,6 @@ global:
connectInject:
enabled: true
controller:
enabled: true
meshGateway:
enabled: true
server:


@ -8,25 +8,22 @@ description: >-
# Join External Servers to Consul on Kubernetes
If you have a Consul cluster already running, you can configure your
Consul clients inside Kubernetes to join this existing cluster.
Consul on Kubernetes installation to join this existing cluster.
The below `values.yaml` file shows how to configure the Helm chart to install
Consul clients that will join an existing cluster.
Consul so that it joins an existing Consul server cluster.
The `global.enabled` value first disables all chart components by default
so that each component is opt-in. This allows us to _only_ setup the client
agents. We then opt-in to the client agents by setting `client.enabled` to
`true`.
Next, `client.exposeGossipPorts` can be set to `true` or `false` depending on if
you want the clients to be exposed on the Kubernetes internal node IPs (`true`) or
their pod IPs (`false`).
Finally, `client.join` is set to an array of valid
[`-retry-join` values](/docs/agent/config/cli-flags#retry-join). In the
example above, a fake [cloud auto-join](/docs/install/cloud-auto-join)
value is specified. This should be set to resolve to the proper addresses of
your existing Consul cluster.
Next, configure the `externalServers` stanza to point to your Consul servers.
The `externalServers.hosts` value must be provided and should be set to a DNS name, an IP address,
or an `exec=` string with a command that returns Consul server IPs. Refer to the [go-netaddrs documentation](https://github.com/hashicorp/go-netaddrs)
for details on how the `exec=` string works.
Other values in the `externalServers` section are optional. Please refer to
[Helm Chart configuration](https://developer.hashicorp.com/consul/docs/k8s/helm#h-externalservers) for more details.
<CodeBlockConfig filename="values.yaml">
@ -34,26 +31,16 @@ your existing Consul cluster.
global:
enabled: false
client:
enabled: true
# Set this to true to expose the Consul clients using the Kubernetes node
# IPs. If false, the pod IPs must be routable from the external servers.
exposeGossipPorts: true
join:
- 'provider=my-cloud config=val ...'
externalServers:
hosts: [<consul server DNS, IP or exec= string>]
```
</CodeBlockConfig>
-> **Networking:** Note that for the Kubernetes nodes to join an existing
cluster, the nodes (and specifically the agent pods) must be able to connect
to all other server and client agents inside and _outside_ of Kubernetes over [LAN](/docs/install/glossary#lan-gossip).
If this isn't possible, consider running a separate Consul cluster inside Kubernetes
and federating it with your cluster outside Kubernetes.
You may also consider adopting Consul Enterprise for
[network segments](/docs/enterprise/network-segments).
**Note:** To join Consul on Kubernetes to an existing Consul server cluster running outside of Kubernetes,
refer to [Consul servers outside of Kubernetes](/docs/k8s/deployment-configurations/servers-outside-kubernetes).
## Configuring TLS with Auto-encrypt
## Configuring TLS
-> **Note:** Consul on Kubernetes currently does not support external servers that require mutual authentication
for the HTTPS clients of the Consul servers, that is when servers have either
@ -62,10 +49,7 @@ As noted in the [Security Model](/docs/security#secure-configuration),
that setting isn't strictly necessary to support Consul's threat model as it is recommended that
all requests contain a valid ACL token.
Consul's auto-encrypt feature allows clients to automatically provision their certificates by making a request to the servers at startup.
If you would like to use this feature with external Consul servers, you need to configure the Helm chart with information about the servers
so that it can retrieve the clients' CA to use for securing the rest of the cluster.
To do that, you must add the following values, in addition to the values mentioned above:
If the Consul server has TLS enabled, you need to provide the CA certificate so that Consul on Kubernetes can communicate with the server. Save the certificate in a Kubernetes secret and then provide it in your Helm values, as demonstrated in the following example:
<CodeBlockConfig filename="values.yaml" highlight="2-8">
@ -73,19 +57,17 @@ To do that, you must add the following values, in addition to the values mention
global:
tls:
enabled: true
enableAutoEncrypt: true
caCert:
secretName: <CA certificate secret name>
secretKey: <CA Certificate secret key>
externalServers:
enabled: true
hosts:
- 'provider=my-cloud config=val ...'
hosts: [<consul server DNS, IP or exec= string>]
```
</CodeBlockConfig>
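As an illustration, the CA secret referenced above could be created from a PEM file with `kubectl`; the secret name, key, and file name below are placeholders:

```shell-session
$ kubectl create secret generic <CA certificate secret name> \
    --from-file=<CA Certificate secret key>=ca.pem
```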
In most cases, `externalServers.hosts` will be the same as `client.join`, however, both keys must be set because
they are used for different purposes: one for Serf LAN and the other for HTTPS connections.
Please see the [reference documentation](/docs/k8s/helm#v-externalservers-hosts)
for more info. If your HTTPS port is different from Consul's default `8501`, you must also set
If your HTTPS port is different from Consul's default `8501`, you must also set
`externalServers.httpsPort`.
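A hedged sketch of that configuration, assuming the external servers listen for HTTPS on port 443 rather than the default:

```yaml
externalServers:
  enabled: true
  hosts: [<consul server DNS, IP or exec= string>]
  # Assumption: the servers expose HTTPS on 443 instead of the default 8501.
  httpsPort: 443
```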
## Configuring ACLs
@ -137,8 +119,7 @@ with `consul login`.
```yaml
externalServers:
enabled: true
hosts:
- 'provider=my-cloud config=val ...'
hosts: [<consul server DNS, IP or exec= string>]
k8sAuthMethodHost: 'https://kubernetes.example.com:443'
```
@ -156,17 +137,9 @@ global:
bootstrapToken:
secretName: bootstrap-token
secretKey: token
client:
enabled: true
# Set this to true to expose the Consul clients using the Kubernetes node
# IPs. If false, the pod IPs must be routable from the external servers.
exposeGossipPorts: true
join:
- 'provider=my-cloud config=val ...'
externalServers:
enabled: true
hosts:
- 'provider=my-cloud config=val ...'
hosts: [<consul server DNS, IP or exec= string>]
k8sAuthMethodHost: 'https://kubernetes.example.com:443'
```
@ -184,17 +157,9 @@ global:
enabled: false
acls:
manageSystemACLs: true
client:
enabled: true
# Set this to true to expose the Consul clients using the Kubernetes node
# IPs. If false, the pod IPs must be routable from the external servers.
exposeGossipPorts: true
join:
- 'provider=my-cloud config=val ...'
externalServers:
enabled: true
hosts:
- 'provider=my-cloud config=val ...'
hosts: [<consul server DNS, IP or exec= string>]
k8sAuthMethodHost: 'https://kubernetes.example.com:443'
```


@ -10,18 +10,12 @@ description: >-
~> **Note:** When running Consul across multiple Kubernetes clusters, we recommend using [admin partitions](/docs/enterprise/admin-partitions) for production environments. This Consul Enterprise feature allows you to accommodate multiple tenants without resource collisions when administering a cluster at scale. Admin partitions also enable you to run Consul on Kubernetes clusters across a non-flat network.
This page describes deploying a single Consul datacenter in multiple Kubernetes clusters,
with servers and clients running in one cluster and only clients in the rest of the clusters.
with servers running in one cluster and only Consul on Kubernetes components in the rest of the clusters.
This example uses two Kubernetes clusters, but this approach could be extended to using more than two.
## Requirements
* Consul-Helm version `v0.32.1` or higher
* This deployment topology requires that the Kubernetes clusters have a flat network
for both pods and nodes so that pods or nodes from one cluster can connect
to pods or nodes in another. In many hosted Kubernetes environments, this may have to be explicitly configured based on the hosting provider's network. Refer to the following documentation for instructions:
* [Azure AKS CNI](https://docs.microsoft.com/en-us/azure/aks/concepts-network#azure-cni-advanced-networking)
* [AWS EKS CNI](https://docs.aws.amazon.com/eks/latest/userguide/pod-networking.html)
* [GKE VPC-native clusters](https://cloud.google.com/kubernetes-engine/docs/concepts/alias-ips).
* Consul-Helm version `v1.0.0` or higher
* Either the Helm release name for each Kubernetes cluster must be unique, or `global.name` for each Kubernetes cluster must be unique to prevent collisions of ACL resources with the same prefix.
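For instance, a minimal sketch of overriding `global.name` in the second cluster's values file; the value shown is illustrative:

```yaml
global:
  # A unique name prevents collisions of ACL resources with the same prefix.
  name: consul-cluster2
```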
## Prepare Helm release name ahead of installs
@ -34,14 +28,14 @@ Before proceeding with installation, prepare the Helm release names as environme
```shell-session
$ export HELM_RELEASE_SERVER=server
$ export HELM_RELEASE_CLIENT=client
$ export HELM_RELEASE_CONSUL=consul
...
$ export HELM_RELEASE_CLIENT2=client2
$ export HELM_RELEASE_CONSUL2=consul2
```
## Deploying Consul servers and clients in the first cluster
## Deploying Consul servers in the first cluster
First, deploy the first cluster with Consul Servers and Clients with the example Helm configuration below.
First, deploy the first cluster with Consul servers according to the following example Helm configuration.
<CodeBlockConfig filename="cluster1-values.yaml">
@ -56,10 +50,6 @@ global:
gossipEncryption:
secretName: consul-gossip-encryption-key
secretKey: key
connectInject:
enabled: true
controller:
enabled: true
ui:
service:
type: NodePort
@ -86,17 +76,15 @@ Now install Consul cluster with Helm:
$ helm install ${HELM_RELEASE_SERVER} --values cluster1-values.yaml hashicorp/consul
```
Once the installation finishes and all components are running and ready, the following information needs to be extracted (using the below command) and applied to the second Kubernetes cluster.
* The Gossip encryption key created
* The CA certificate generated during installation
* The ACL bootstrap token generated during installation
```shell-session
$ kubectl get secret consul-gossip-encryption-key ${HELM_RELEASE_SERVER}-consul-ca-cert ${HELM_RELEASE_SERVER}-consul-bootstrap-acl-token --output yaml > cluster1-credentials.yaml
$ kubectl get secret ${HELM_RELEASE_SERVER}-consul-ca-cert ${HELM_RELEASE_SERVER}-consul-bootstrap-acl-token --output yaml > cluster1-credentials.yaml
```
## Deploying Consul clients in the second cluster
## Deploying Consul Kubernetes in the second cluster
~> **Note:** If multiple Kubernetes clusters will be joined to the Consul Datacenter, then the following instructions will need to be repeated for each additional Kubernetes cluster.
Switch to the second Kubernetes cluster where the Consul on Kubernetes components will be deployed
@ -124,38 +112,27 @@ global:
bootstrapToken:
secretName: cluster1-consul-bootstrap-acl-token
secretKey: token
gossipEncryption:
secretName: consul-gossip-encryption-key
secretKey: key
tls:
enabled: true
enableAutoEncrypt: true
caCert:
secretName: cluster1-consul-ca-cert
secretKey: tls.crt
externalServers:
enabled: true
# This should be any node IP of the first k8s cluster
# This should be any node IP of the first k8s cluster or the load balancer IP if using LoadBalancer service type for the UI.
hosts: ["10.0.0.4"]
# The node port of the UI's NodePort service
# The node port of the UI's NodePort service or the load balancer port.
httpsPort: 31557
tlsServerName: server.dc1.consul
# The address of the kube API server of this Kubernetes cluster
k8sAuthMethodHost: https://kubernetes.example.com:443
client:
enabled: true
join: ["provider=k8s kubeconfig=/consul/userconfig/cluster1-kubeconfig/kubeconfig label_selector=\"app=consul,component=server\""]
extraVolumes:
- type: secret
name: cluster1-kubeconfig
load: false
connectInject:
enabled: true
```
</CodeBlockConfig>
Note the references to the secrets extracted and applied from the first cluster in ACL, gossip, and TLS configuration.
Note the references to the secrets extracted and applied from the first cluster in ACL and TLS configuration.
The `externalServers.hosts` and `externalServers.httpsPort`
refer to the IP and port of the UI's NodePort service deployed in the first cluster.
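A hedged sketch of looking those values up against the first cluster, assuming the UI service follows the default Helm naming (`<release>-consul-ui`) and exposes a single NodePort:

```shell-session
$ kubectl get nodes --output jsonpath='{.items[0].status.addresses[?(@.type=="InternalIP")].address}'
$ kubectl get service ${HELM_RELEASE_CONSUL}-consul-ui --output jsonpath='{.spec.ports[0].nodePort}'
```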
@ -187,23 +164,10 @@ reach the Kubernetes API in that cluster.
The easiest way to get it is from the `kubeconfig` by running `kubectl config view` and grabbing
the value of `cluster.server` for the second cluster.
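For instance, a sketch of pulling that value with `kubectl`, where the cluster name is whatever appears in your kubeconfig:

```shell-session
$ kubectl config view --output jsonpath='{.clusters[?(@.name == "<SECOND CLUSTER NAME>")].cluster.server}'
```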
Lastly, set up the clients so that they can discover the servers in the first cluster.
For this, Consul's cloud auto-join feature
for the [Kubernetes provider](/docs/install/cloud-auto-join#kubernetes-k8s) can be used.
This can be configured by saving the `kubeconfig` for the first cluster as a Kubernetes secret in the second cluster
and referencing it in the `clients.join` value. Note that the secret is made available to the client pods
by setting it in `client.extraVolumes`.
~> **Note:** The kubeconfig provided to the client should have minimal permissions.
The cloud auto-join provider will only need permission to read pods.
Please see [Kubernetes Cloud auto-join](/docs/install/cloud-auto-join#kubernetes-k8s)
for more details.
Now, proceed with the installation of the second cluster.
```shell-session
$ helm install ${HELM_RELEASE_CLIENT} --values cluster2-values.yaml hashicorp/consul
$ helm install ${HELM_RELEASE_CONSUL} --values cluster2-values.yaml hashicorp/consul
```
## Verifying the Consul Service Mesh works


@ -20,7 +20,7 @@ For each secret you want to store in Vault, you must complete two multi-step pro
Complete the following steps once:
1. Store the secret in Vault.
1. Create a Vault policy that authorizes the desired level of access to the secret.
Repeat the following steps for each datacenter in the cluster:
1. Create Vault Kubernetes auth roles that link the policy to each Consul on Kubernetes service account that requires access.
1. Update the Consul on Kubernetes Helm chart.
@ -42,7 +42,7 @@ It includes things like terminating gateways, ingress gateways, etc.)
|[ACL Replication token](/docs/k8s/deployment-configurations/vault/data-integration/replication-token) | Consul server-acl-init job | [`global.secretsBackend.vault.manageSystemACLsRole`](/docs/k8s/helm#v-global-secretsbackend-vault-managesystemaclsrole)|
|[Enterprise license](/docs/k8s/deployment-configurations/vault/data-integration/enterprise-license) | Consul servers<br/>Consul clients | [`global.secretsBackend.vault.consulServerRole`](/docs/k8s/helm#v-global-secretsbackend-vault-consulserverrole)<br/>[`global.secretsBackend.vault.consulClientRole`](/docs/k8s/helm#v-global-secretsbackend-vault-consulclientrole)|
|[Gossip encryption key](/docs/k8s/deployment-configurations/vault/data-integration/gossip) | Consul servers<br/>Consul clients | [`global.secretsBackend.vault.consulServerRole`](/docs/k8s/helm#v-global-secretsbackend-vault-consulserverrole)<br/>[`global.secretsBackend.vault.consulClientRole`](/docs/k8s/helm#v-global-secretsbackend-vault-consulclientrole)|
|[Snapshot Agent config](/docs/k8s/deployment-configurations/vault/data-integration/snapshot-agent-config) | Consul snapshot agent | [`global.secretsBackend.vault.consulSnapshotAgentRole`](/docs/k8s/helm#v-global-secretsbackend-vault-consulsnapshotagentrole)|
|[Snapshot Agent config](/docs/k8s/deployment-configurations/vault/data-integration/snapshot-agent-config) | Consul servers | [`global.secretsBackend.vault.consulServerRole`](/docs/k8s/helm#v-global-secretsbackend-vault-consulserverrole)|
|[Server TLS credentials](/docs/k8s/deployment-configurations/vault/data-integration/server-tls) | Consul servers<br/>Consul clients<br/>Consul components | [`global.secretsBackend.vault.consulServerRole`](/docs/k8s/helm#v-global-secretsbackend-vault-consulserverrole)<br/>[`global.secretsBackend.vault.consulClientRole`](/docs/k8s/helm#v-global-secretsbackend-vault-consulclientrole)<br/>[`global.secretsBackend.vault.consulCARole`](/docs/k8s/helm#v-global-secretsbackend-vault-consulcarole)|
|[Service Mesh and Consul client TLS credentials](/docs/k8s/deployment-configurations/vault/data-integration/connect-ca) | Consul servers | [`global.secretsBackend.vault.consulServerRole`](/docs/k8s/helm#v-global-secretsbackend-vault-consulserverrole)|
|[Webhook TLS certificates for controller and connect inject](/docs/k8s/deployment-configurations/vault/data-integration/connect-ca) | Consul controllers<br/>Consul connect inject | [`global.secretsBackend.vault.controllerRole`](/docs/k8s/helm#v-global-secretsbackend-vault-controllerrole)<br />[`global.secretsBackend.vault.connectInjectRole`](/docs/k8s/helm#v-global-secretsbackend-vault-controllerrole)|
@ -61,14 +61,14 @@ The mapping for secondary data centers is similar with the following differences
|[ACL Replication token](/docs/k8s/deployment-configurations/vault/data-integration/replication-token) | Consul server-acl-init job<br/>Consul servers | [`global.secretsBackend.vault.manageSystemACLsRole`](/docs/k8s/helm#v-global-secretsbackend-vault-managesystemaclsrole)<br/>[`global.secretsBackend.vault.consulServerRole`](/docs/k8s/helm#v-global-secretsbackend-vault-consulserverrole)|
|[Enterprise license](/docs/k8s/deployment-configurations/vault/data-integration/enterprise-license) | Consul servers<br/>Consul clients | [`global.secretsBackend.vault.consulServerRole`](/docs/k8s/helm#v-global-secretsbackend-vault-consulserverrole)<br/>[`global.secretsBackend.vault.consulClientRole`](/docs/k8s/helm#v-global-secretsbackend-vault-consulclientrole)|
|[Gossip encryption key](/docs/k8s/deployment-configurations/vault/data-integration/gossip) | Consul servers<br/>Consul clients | [`global.secretsBackend.vault.consulServerRole`](/docs/k8s/helm#v-global-secretsbackend-vault-consulserverrole)<br/>[`global.secretsBackend.vault.consulClientRole`](/docs/k8s/helm#v-global-secretsbackend-vault-consulclientrole)|
|[Snapshot Agent config](/docs/k8s/deployment-configurations/vault/data-integration/snapshot-agent-config) | Consul snapshot agent | [`global.secretsBackend.vault.consulSnapshotAgentRole`](/docs/k8s/helm#v-global-secretsbackend-vault-consulsnapshotagentrole)|
|[Snapshot Agent config](/docs/k8s/deployment-configurations/vault/data-integration/snapshot-agent-config) | Consul servers | [`global.secretsBackend.vault.consulServerRole`](/docs/k8s/helm#v-global-secretsbackend-vault-consulserverrole)|
|[Server TLS credentials](/docs/k8s/deployment-configurations/vault/data-integration/server-tls) | Consul servers<br/>Consul clients<br/>Consul components | [`global.secretsBackend.vault.consulServerRole`](/docs/k8s/helm#v-global-secretsbackend-vault-consulserverrole)<br/>[`global.secretsBackend.vault.consulClientRole`](/docs/k8s/helm#v-global-secretsbackend-vault-consulclientrole)<br/>[`global.secretsBackend.vault.consulCARole`](/docs/k8s/helm#v-global-secretsbackend-vault-consulcarole)|
|[Service Mesh and Consul client TLS credentials](/docs/k8s/deployment-configurations/vault/data-integration/connect-ca) | Consul servers | [`global.secretsBackend.vault.consulServerRole`](/docs/k8s/helm#v-global-secretsbackend-vault-consulserverrole)|
|[Webhook TLS certificates for controller and connect inject](/docs/k8s/deployment-configurations/vault/data-integration/connect-ca) | Consul controllers<br/>Consul connect inject | [`global.secretsBackend.vault.controllerRole`](/docs/k8s/helm#v-global-secretsbackend-vault-controllerrole)<br />[`global.secretsBackend.vault.connectInjectRole`](/docs/k8s/helm#v-global-secretsbackend-vault-controllerrole)|
### Combining policies within roles
As you can see in the table above, depending upon your needs, a Consul on Kubernetes service account could have the need to request more than one secret. In these cases, you will want to create one role for the Consul on Kubernetes service account that is mapped to multiple policies, each of which allows it access to a given secret.
Depending upon your needs, a Consul on Kubernetes service account may need to request more than one secret. To request multiple secrets, create one role for the Consul on Kubernetes service account that is mapped to multiple policies associated with the required secrets.
For example, if your Consul on Kubernetes servers need access to [Consul Server TLS credentials](/docs/k8s/deployment-configurations/vault/data-integration/server-tls) and an [Enterprise license](/docs/k8s/deployment-configurations/vault/data-integration/enterprise-license):
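For instance, a minimal sketch of such a combined role, assuming the two policies were created with the illustrative names `consul-server-policy` and `enterprise-license-policy`:

```shell-session
$ vault write auth/kubernetes/role/consul-server \
    bound_service_account_names=<Consul server service account> \
    bound_service_account_namespaces=<Consul installation namespace> \
    policies=consul-server-policy,enterprise-license-policy \
    ttl=1h
```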


@ -20,7 +20,7 @@ Repeat the following steps for each datacenter in the cluster:
1. Update the Consul on Kubernetes helm chart.
## Prerequisites
Prior to setting up the data integration between Vault and Consul on Kubernetes, you will need to have:
Before you set up data integration between Vault and Consul on Kubernetes, complete the following prerequisites:
1. Read and completed the steps in the [Systems Integration](/docs/k8s/deployment-configurations/vault/systems-integration) section of [Vault as a Secrets Backend](/docs/k8s/deployment-configurations/vault).
2. Read the [Data Integration Overview](/docs/k8s/deployment-configurations/vault/data-integration) section of [Vault as a Secrets Backend](/docs/k8s/deployment-configurations/vault).
@ -56,21 +56,23 @@ $ vault policy write snapshot-agent-config-policy snapshot-agent-config-policy.h
## Create Vault Authorization Roles for Consul
Next, you will create a Kubernetes auth role for the Consul snapshot agent:
Next, add this policy to your Consul server Kubernetes auth role:
```shell-session
$ vault write auth/kubernetes/role/consul-server \
bound_service_account_names=<Consul snapshot agent service account> \
bound_service_account_names=<Consul server service account> \
bound_service_account_namespaces=<Consul installation namespace> \
policies=snapshot-agent-config-policy \
ttl=1h
```
Note that if you have other policies associated
with the Consul server service account that are not in the example, you need to include those as well.
To find out the service account name of the Consul server,
you can run the following `helm template` command with your Consul on Kubernetes values file:
```shell-session
$ helm template --release-name ${RELEASE_NAME} -s templates/client-snapshot-agent-serviceaccount.yaml hashicorp/consul -f values.yaml
$ helm template --release-name ${RELEASE_NAME} -s templates/server-serviceaccount.yaml hashicorp/consul -f values.yaml
```
## Update Consul on Kubernetes Helm chart
@ -85,7 +87,7 @@ global:
secretsBackend:
vault:
enabled: true
consulSnapshotAgentRole: snapshot-agent
consulServerRole: consul-server
client:
snapshotAgent:
configSecret:


@ -10,9 +10,9 @@ description: >-
This topic describes how to configure the Consul Helm chart to use TLS certificates issued by Vault in the Consul controller and connect inject webhooks.
## Overview
In a Consul Helm chart configuration that does not use Vault, webhook-cert-manager normally fulfills the role of ensuring that a valid certificate is updated to the `mutatingwebhookconfiguration` of either controller or connect inject to ensure that Kubernetes can communicate with each of these services.
In a Consul Helm chart configuration that does not use Vault, `webhook-cert-manager` ensures that a valid certificate is updated to the `mutatingwebhookconfiguration` of either the controller or connect inject to ensure that Kubernetes can communicate with each of these services.
When Vault is configured as the controller and connect inject Webhook Certificate Provider on Kubernetes:
- `webhook-cert-manager` is no longer deployed to the cluster.
- controller and connect inject each get their webhook certificates from their own Vault PKI mounts via the injected Vault Agent.
- controller and connect inject each need to be configured with their own Vault role that has the necessary permissions to receive certificates from its respective PKI mount.
@ -28,10 +28,10 @@ These following steps will be repeated for each datacenter:
1. Configure the Vault Kubernetes auth roles in the Consul on Kubernetes helm chart.
## Prerequisites
Complete the following prerequisites prior to implementing the integration described in this topic:
1. Verify that you have completed the steps described in [Systems Integration](/docs/k8s/deployment-configurations/vault/systems-integration) section of [Vault as a Secrets Backend](/docs/k8s/deployment-configurations/vault).
1. You should be familiar with the [Data Integration Overview](/docs/k8s/deployment-configurations/vault/data-integration) section of [Vault as a Secrets Backend](/docs/k8s/deployment-configurations/vault).
1. Configure [Vault as the Server TLS Certificate Provider on Kubernetes](/docs/k8s/deployment-configurations/vault/data-integration/server-tls)
1. Configure [Vault as the Service Mesh Certificate Provider on Kubernetes](/docs/k8s/deployment-configurations/vault/data-integration/connect-ca)
## Bootstrapping the PKI Engines
@ -204,11 +204,6 @@ global:
consulCARole: "consul-ca"
controllerRole: "controller-role"
connectInjectRole: "connect-inject-role"
controller:
caCert:
secretName: "controller/cert/ca"
tlsCert:
secretName: "controller/issue/controller-role"
connectInject:
caCert:
secretName: "connect-inject/cert/ca"
@ -228,8 +223,6 @@ server:
load: "false"
connectInject:
enabled: true
controller:
enabled: true
```
</CodeBlockConfig>


@ -16,7 +16,7 @@ The [Federation Between Kubernetes Clusters](/docs/k8s/deployment-configurations
## Usage
The expected use case is to create WAN Federation on Kubernetes clusters. The following procedure will result in a WAN Federation with Vault as the secrets backend between two clusters, dc1 and dc2. dc1 will act as the primary Consul cluster and will also contain the Vault server installation. dc2 will be the secondary Consul cluster.
The expected use case is to create WAN Federation on Kubernetes clusters. The following procedure results in a WAN Federation with Vault as the secrets backend between two clusters, dc1 and dc2. dc1 acts as the primary Consul cluster and also contains the Vault server installation. dc2 is the secondary Consul cluster.
![Consul on Kubernetes with Vault as the Secrets Backend](/img/k8s/consul-vault-wan-federation-topology.svg 'Consul on Kubernetes with Vault as the Secrets Backend')
@ -36,7 +36,7 @@ The two data centers will federated using mesh gateways. This communication top
In this setup, you will deploy a Vault server in the primary datacenter (dc1) Kubernetes cluster, which is also the primary Consul datacenter. You will configure your Vault Helm installation in the secondary datacenter (dc2) Kubernetes cluster to use it as an external server. This way, there is a single Vault server cluster used by both Consul datacenters.
~> **Note**: For demonstration purposes, you will deploy a Vault server in dev mode. For production installations, this is not recommended. Please visit the [Vault Deployment Guide](https://learn.hashicorp.com/tutorials/vault/raft-deployment-guide) for guidance on how to install Vault in a production setting.
~> **Note**: For demonstration purposes, the following example deploys a Vault server in dev mode. Do not use dev mode for production installations. Refer to the [Vault Deployment Guide](https://learn.hashicorp.com/tutorials/vault/raft-deployment-guide) for guidance on how to install Vault in a production setting.
1. Change your current Kubernetes context to target the primary datacenter (dc1).
@ -143,8 +143,8 @@ Repeat the following steps for each datacenter in the cluster:
```
### Primary Datacenter (dc1)
1. Install the Vault Injector in your Consul Kubernetes cluster (dc1), which is used for accessing secrets from Vault.
-> **Note**: In the primary datacenter (dc1), you do not have to configure the `injector.externalVaultAddr` value because the Vault server is in the same primary datacenter (dc1) cluster.
<CodeBlockConfig filename="vault-dc1.yaml" linenumbers lineNumbers highlight="7,8,9">
@ -165,7 +165,7 @@ Repeat the following steps for each datacenter in the cluster:
</CodeBlockConfig>
Next, install Vault in the Kubernetes cluster.
```shell-session
$ helm upgrade vault-dc1 --values vault-dc1.yaml hashicorp/vault --wait
```
@ -202,7 +202,7 @@ Repeat the following steps for each datacenter in the cluster:
1. Install the Vault Injector in the secondary datacenter (dc2).
In the secondary datacenter (dc2), you will configure the `externalVaultAddr` value to point to the external address of the Vault server in the primary datacenter (dc1).
Change your Kubernetes context to target the secondary datacenter (dc2):
```shell-session
@ -210,7 +210,7 @@ Repeat the following steps for each datacenter in the cluster:
```
<CodeBlockConfig filename="vault-dc2.yaml" linenumbers highlight="5,6">
```yaml
server:
enabled: false
@ -219,9 +219,9 @@ Repeat the following steps for each datacenter in the cluster:
externalVaultAddr: ${VAULT_ADDR}
authPath: auth/kubernetes-dc2
```
</CodeBlockConfig>
Next, install Vault in the Kubernetes cluster.
```shell-session
$ helm install vault-dc2 --values vault-dc2.yaml hashicorp/vault --wait
@ -273,7 +273,7 @@ Repeat the following steps for each datacenter in the cluster:
$ export K8S_DC2_JWT_TOKEN="$(kubectl get secret `kubectl get serviceaccounts vault-dc2-auth-method --output jsonpath='{.secrets[0].name}'` --output jsonpath='{.data.token}' | base64 --decode)"
```
1. Configure Auth Method with the JWT token of service account. You will have to get the externally reachable address of the secondary Consul datacenter (dc2) in the secondary Kubernetes cluster and set `kubernetes_host` within the Auth Method configuration.
1. Configure the auth method with the JWT token of the service account. First, get the externally reachable address of the secondary Consul datacenter (dc2) in the secondary Kubernetes cluster. Then set `kubernetes_host` in the auth method configuration.
```shell-session
$ export KUBE_API_URL_DC2=$(kubectl config view --output jsonpath="{.clusters[?(@.name == \"$(kubectl config current-context)\")].cluster.server}")
@ -297,7 +297,7 @@ Repeat the following steps for each datacenter in the cluster:
enabled: true
```
</CodeBlockConfig>
## Data Integration
There are two main procedures for using Vault as the service mesh certificate provider in Kubernetes.
@ -319,10 +319,10 @@ Repeat the following steps for each datacenter in the cluster:
```shell-session
$ vault kv put consul/secret/replication token="$(uuidgen | tr '[:upper:]' '[:lower:]')"
```
```shell-session
$ vault write pki/root/generate/internal common_name="Consul CA" ttl=87600h
```
1. Create Vault policies that authorize the desired level of access to the secrets.
@ -475,8 +475,6 @@ Repeat the following steps for each datacenter in the cluster:
connectInject:
replicas: 1
enabled: true
controller:
enabled: true
meshGateway:
enabled: true
replicas: 1
@ -490,7 +488,7 @@ Repeat the following steps for each datacenter in the cluster:
```
### Pre-installation for Secondary Datacenter (dc2)
1. Update the Consul on Kubernetes helm chart. For secondary datacenter (dc2), you will need to get the address of the mesh gateway from the **primary datacenter (dc1)** cluster.
1. Update the Consul on Kubernetes Helm chart. For secondary datacenter (dc2), you need to get the address of the mesh gateway from the _primary datacenter (dc1)_ cluster.
Keep your Kubernetes context targeting dc1 and set the `MESH_GW_HOST` environment variable that you will use in the Consul Helm chart for secondary datacenter (dc2).
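For example, a hedged sketch that reads the address from the mesh gateway's LoadBalancer service, assuming the default Helm naming (`consul-mesh-gateway` when `global.name` is `consul`):

```shell-session
$ export MESH_GW_HOST=$(kubectl get service consul-mesh-gateway --output jsonpath='{.status.loadBalancer.ingress[0].ip}')
```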
@ -610,7 +608,7 @@ Repeat the following steps for each datacenter in the cluster:
```
1. Configure and install Consul in the secondary datacenter (dc2).
-> **Note**: To configure Vault as the Connect CA in secondary datacenters, you need to make sure that the Root CA path is the same. The intermediate path is different for each datacenter. In the `connectCA` Helm configuration for a secondary datacenter, you can specify an `intermediatePKIPath` that is, for example, prefixed with the datacenter for which this configuration is intended (e.g. `dc2/connect-intermediate`).
<CodeBlockConfig filename="consul-dc2.yaml" linenumbers highlight="4,5,6,7,8,9,10,11,12,13,14,15,19,20,21,22,23,24,25,26,29,30,31,32,33,34,37,38">


@ -7,7 +7,7 @@ description: >-
# Install Consul on K8s CLI
# Install Consul on Kubernetes from Consul K8s CLI
This topic describes how to install Consul on Kubernetes using the Consul K8s CLI tool. The Consul K8s CLI tool enables you to quickly install and interact with Consul on Kubernetes. Use the Consul K8s CLI tool to install Consul on Kubernetes if you are deploying a single cluster. We recommend using the [Helm chart installation method](/docs/k8s/installation/install) if you are installing Consul on Kubernetes for multi-cluster deployments that involve cross-partition or cross datacenter communication.
@ -20,33 +20,33 @@ If it is your first time installing Consul on Kubernetes, then you must first in
- The `kubectl` client must already be configured to authenticate to the Kubernetes cluster using a valid `kubeconfig` file.
- Install one of the following package managers so that you can install the Consul K8s CLI tool. The installation instructions also provide commands for installing and using the package managers:
- MacOS: [Homebrew](https://brew.sh)
- Ubuntu/Debian: apt
- CentOS/RHEL: yum
You must install the correct version of the CLI for your Consul on Kubernetes deployment. To deploy a previous version of Consul on Kubernetes, download the specific version of the CLI that matches the version of the control plane that you would like to deploy. Refer to the [compatibility matrix](/docs/k8s/compatibility) for details.
## Install the CLI
The following instructions describe how to install the latest version of the Consul K8s CLI tool, as well as earlier versions, so that you can install an appropriate version of the tool for your control plane.
### Install the latest version
Complete the following instructions for a fresh installation of Consul on Kubernetes.
<Tabs>
<Tab heading="MacOS">
The [Homebrew](https://brew.sh) package manager is required to complete the following installation instructions. The Homebrew formula always installs the latest version of a binary.
1. Install the HashiCorp `tap`, which is a repository of all Homebrew packages for HashiCorp:
```shell-session
$ brew tap hashicorp/tap
```
1. Install the Consul K8s CLI with the `hashicorp/tap/consul-k8s` formula.
```shell-session
$ brew install hashicorp/tap/consul-k8s
```
@ -63,7 +63,7 @@ The [Homebrew](https://brew.sh) package manager is required to complete the foll
$ consul-k8s version
consul-k8s 1.0
```
</Tab>
<Tab heading="Linux - Ubuntu/Debian">
@ -73,19 +73,19 @@ The [Homebrew](https://brew.sh) package manager is required to complete the foll
```shell-session
$ curl -fsSL https://apt.releases.hashicorp.com/gpg | sudo apt-key add -
```
1. Add the HashiCorp apt repository.
```shell-session
$ sudo apt-add-repository "deb [arch=amd64] https://apt.releases.hashicorp.com $(lsb_release -cs) main"
```
1. Run apt-get install to install the `consul-k8s` CLI.
```shell-session
$ sudo apt-get update && sudo apt-get install consul-k8s
```
1. (Optional) Issue the `consul-k8s version` command to verify the installation.
```shell-session
@ -109,26 +109,26 @@ The [Homebrew](https://brew.sh) package manager is required to complete the foll
$ sudo yum-config-manager --add-repo https://rpm.releases.hashicorp.com/RHEL/hashicorp.repo
```
1. Install the `consul-k8s` CLI.
```shell-session
$ sudo yum -y install consul-k8s
```
1. (Optional) Issue the `consul-k8s version` command to verify the installation.
```shell-session
$ consul-k8s version
consul-k8s 1.0
```
</Tab>
</Tabs>
### Install a previous version
Complete the following instructions to install a specific version of the CLI so that your tool is compatible with your Consul on Kubernetes control plane. Refer to the [compatibility matrix](/docs/k8s/compatibility) for additional information.
<Tabs>
@ -146,13 +146,13 @@ Complete the following instructions to install a specific version of the CLI so
```shell-session
$ unzip -o consul-k8s-cli.zip -d ~/.consul-k8s
```
1. Add the path to your directory. In order to persist the `$PATH` across sessions, you will need to add this to your shellrc (i.e. shell run commands) file for the shell used by your terminal.
1. Add the path to your directory. In order to persist the `$PATH` across sessions, add it to your shellrc (i.e. shell run commands) file for the shell used by your terminal.
```shell-session
$ export PATH=$PATH:$HOME/.consul-k8s/
```
1. (Optional) Issue the `consul-k8s version` command to verify the installation.
```shell-session
@ -169,20 +169,20 @@ Complete the following instructions to install a specific version of the CLI so
```shell-session
$ curl -fsSL https://apt.releases.hashicorp.com/gpg | sudo apt-key add -
```
1. Add the HashiCorp apt repository.
```shell-session
$ sudo apt-add-repository "deb [arch=amd64] https://apt.releases.hashicorp.com $(lsb_release -cs) main"
```
1. Run apt-get install to install the `consul-k8s` CLI.
```shell-session
$ export VERSION=0.39.0 && \
sudo apt-get update && sudo apt-get install consul-k8s=${VERSION}
```
1. (Optional) Issue the `consul-k8s version` command to verify the installation.
```shell-session
@ -206,27 +206,27 @@ Complete the following instructions to install a specific version of the CLI so
$ sudo yum-config-manager --add-repo https://rpm.releases.hashicorp.com/RHEL/hashicorp.repo
```
1. Install the `consul-k8s` CLI.
```shell-session
$ export VERSION=-1.0 && \
sudo yum -y install consul-k8s-${VERSION}-1
```
2. (Optional) Issue the `consul-k8s version` command to verify the installation.
```shell-session
$ consul-k8s version
consul-k8s 1.0
```
</Tab>
</Tabs>
## Install Consul on Kubernetes
After installing the Consul K8s CLI tool (`consul-k8s`), issue the `install` subcommand and any additional options to install Consul on Kubernetes. Refer to the [Consul K8s CLI reference](/docs/k8s/k8s-cli) for details about all commands and available options. If you do not include any additional options, the `consul-k8s` CLI installs Consul on Kubernetes using the default settings from the Consul Helm chart values. The following example installs Consul on Kubernetes with service mesh and CRDs enabled.
```shell-session
$ consul-k8s install -set connectInject.enabled=true
@ -241,11 +241,9 @@ No existing installations found.
Overrides:
connectInject:
enabled: true
controller:
enabled: true
Proceed with installation? (y/N) y
==> Running Installation
✓ Downloaded charts
--> creating 1 resource(s)
@ -260,7 +258,7 @@ The pre-install checks may fail if existing `PersistentVolumeClaims` (PVC) are d
## Check the Consul cluster status
Issue the `consul-k8s status` command to view the status of the installed Consul cluster.
```shell-session
$ consul-k8s status
@ -272,4 +270,4 @@ $ consul-k8s status
✓ Consul servers healthy (3/3)
✓ Consul clients healthy (3/3)
```


@ -472,8 +472,6 @@ $ consul-k8s status
defaultEnableMerging: true
defaultEnabled: true
enableGatewayMetrics: true
controller:
enabled: true
global:
metrics:
enableAgentMetrics: true

website/public/img/dataplanes-diagram.png (binary file, stored with Git LFS)