Add docs about rolling out TLS on k8s (#7096)

* Add docs about gradually rolling out TLS on k8s

Co-authored-by: Luke Kysow <1034429+lkysow@users.noreply.github.com>
Iryna Shustava 2020-01-21 19:29:55 -08:00 committed by GitHub
parent ce7ab8abc1
commit 2163f79170
4 changed files with 121 additions and 1 deletions


@@ -11,4 +11,5 @@ description: |-
This section holds documentation on various operational tasks you may need to perform.
* [Upgrading](/docs/platform/k8s/upgrading.html) - Upgrading your Consul servers or clients and the Helm chart.
* [Configuring TLS on an Existing Cluster](/docs/platform/k8s/tls-on-existing-cluster.html) - Configuring TLS on an existing Consul cluster running in Kubernetes.
* [Uninstalling](/docs/platform/k8s/uninstalling.html) - Uninstalling the Helm chart.


@@ -0,0 +1,109 @@
---
layout: "docs"
page_title: "Configuring TLS on an Existing Cluster"
sidebar_current: "docs-platform-k8s-ops-tls-on-existing-cluster"
description: |-
  Configuring TLS on an existing Consul cluster running in Kubernetes
---

# Configuring TLS on an Existing Cluster

As of Consul Helm version `0.16.0`, the chart supports TLS for communication
within the cluster. If you already have a Consul cluster deployed on Kubernetes,
you may want to configure TLS in a way that minimizes downtime to your applications.
Consul already supports rolling out TLS on an existing cluster without downtime.
However, depending on your Kubernetes use case, your upgrade procedure may be different.
## Gradual TLS Rollout without Consul Connect

If you're **not using Consul Connect**, follow this process.

1. Run a Helm upgrade with the following config:

    ```yaml
    global:
      tls:
        enabled: true
        # This configuration sets `verify_outgoing`, `verify_server_hostname`,
        # and `verify_incoming` to `false` on servers and clients,
        # which allows TLS-disabled nodes to join the cluster.
        verify: false
    server:
      updatePartition: <number_of_server_replicas>
    ```

    This upgrade will trigger a rolling update of the clients, as well as any
    other `consul-k8s` components, such as sync catalog or client snapshot deployments.

1. Perform a rolling upgrade of the servers, as described in
   [Upgrade Consul Servers](/docs/platform/k8s/upgrading.html#upgrading-consul-servers).

1. Repeat steps 1 and 2, turning on TLS verification by setting `global.tls.verify`
   to `true`.
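The repeated upgrade in the last step uses the same values with verification switched on. A minimal sketch (`<number_of_server_replicas>` remains a placeholder for your replica count):

```yaml
global:
  tls:
    enabled: true
    # With every node now TLS-enabled, turn verification on so agents
    # require and verify certificates for RPC communication.
    verify: true
server:
  updatePartition: <number_of_server_replicas>
```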
## Gradual TLS Rollout with Consul Connect

Because the sidecar Envoy proxies need to talk to the Consul client agent regularly
for service discovery, we can't enable TLS on the clients without also re-injecting a
TLS-enabled proxy into the application pods. To perform a TLS rollout with minimal
downtime, we recommend adding a new Kubernetes node pool and migrating your
applications to it.

1. Add a new, identical node pool.

1. Cordon all nodes in the **old** pool by running `kubectl cordon`
   to ensure Kubernetes doesn't schedule any new workloads on those nodes
   and instead schedules them onto the new nodes, which will shortly be TLS-enabled.

1. Create the following Helm config file for the upgrade:

    ```yaml
    global:
      tls:
        enabled: true
        # This configuration sets `verify_outgoing`, `verify_server_hostname`,
        # and `verify_incoming` to `false` on servers and clients,
        # which allows TLS-disabled nodes to join the cluster.
        verify: false
    server:
      updatePartition: <number_of_server_replicas>
    client:
      updateStrategy: |
        type: OnDelete
    ```

    In this configuration, we're setting `server.updatePartition` to the number of
    server replicas, as described in [Upgrade Consul Servers](/docs/platform/k8s/upgrading.html#upgrading-consul-servers),
    and `client.updateStrategy` to `OnDelete` so that client upgrades must be triggered manually.

1. Run `helm upgrade` with the above config file. The upgrade will trigger an update of all
   components except the clients and servers, such as the Consul Connect webhook deployment
   or the sync catalog deployment. Note that the sync catalog and client
   snapshot deployments will not be in the `ready` state until the clients on their
   nodes are upgraded. It is OK to proceed to the next step without them being ready,
   because Kubernetes will keep the old deployment pods around, so there will be no
   downtime.

1. Gradually upgrade the clients by deleting client pods on the **new** node pool;
   with the `OnDelete` strategy, each deleted pod is recreated with the new, TLS-enabled configuration.

1. At this point, all components (e.g., the Consul Connect webhook and sync catalog) should be running
   on the new node pool.

1. Redeploy all your Connect-enabled applications.
   One way to trigger a redeploy is to run `kubectl drain` on the nodes in the old pool.
   Now that the Connect webhook is TLS-aware, it will add TLS configuration to
   the sidecar proxies, and Kubernetes should schedule these applications onto the new node pool.

1. Perform a rolling upgrade of the servers, as described in
   [Upgrade Consul Servers](/docs/platform/k8s/upgrading.html#upgrading-consul-servers).

1. If everything is healthy, delete the old node pool.

1. Finally, set `global.tls.verify` to `true` in your Helm config file, remove the
   `client.updateStrategy` property, and perform a rolling upgrade of the servers.

-> **Note:** It is possible to do this upgrade without fully duplicating the node pool.
You could drain a subset of the Kubernetes nodes within your existing node pool and treat it
as your "new node pool." Then follow the above instructions. Repeat this process for the rest
of the nodes in the node pool.
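The node-pool migration above can be sketched as a dry-run script. The node names, the release name `consul`, the chart path `./consul-helm`, the values file `tls.yaml`, and the client pod name are all hypothetical placeholders for your environment; the `run` wrapper only prints each command, so the sketch is safe to execute as written.

```shell
# Dry-run sketch of the node-pool migration steps above.
# "run" just echoes each command; drop it to execute for real.
run() { echo "+ $*"; }

OLD_NODES="old-pool-node-1 old-pool-node-2"   # hypothetical old-pool nodes

# Cordon the old pool so new workloads land on the new pool.
for node in $OLD_NODES; do
  run kubectl cordon "$node"
done

# Upgrade with TLS enabled but verification off (the Helm config
# from the steps above saved as tls.yaml).
run helm upgrade consul ./consul-helm -f tls.yaml

# Clients use the OnDelete strategy, so deleting a client pod on the
# new pool is what triggers its TLS-enabled replacement.
run kubectl delete pod consul-client-abc12

# Drain the old pool to redeploy Connect-enabled apps onto the new,
# TLS-aware pool (the webhook re-injects TLS-enabled sidecars).
for node in $OLD_NODES; do
  run kubectl drain "$node" --ignore-daemonsets --delete-local-data
done
```

Because every command is echoed rather than executed, you can review the full sequence before removing the wrapper and running it against a real cluster.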


@@ -22,7 +22,7 @@ be updated in batches.
To initiate the upgrade, change the `server.image` value to the
desired Consul version. For illustrative purposes, the example below will
use `consul:123.456`. Also, set the `server.updatePartition` value
_equal to the number of server replicas_:
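The example values truncated by the diff view presumably resemble the following sketch (a replica count of 3 is illustrative):

```yaml
server:
  image: "consul:123.456"
  # Set equal to the number of server replicas so that no server pod
  # is replaced until the partition value is lowered.
  updatePartition: 3
```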
@@ -61,3 +61,10 @@ With the servers upgraded, it is time to upgrade
the clients, set the `client.image` value to the desired Consul version.
Then, run `helm upgrade`. This will upgrade the clients in batches, waiting
until the clients come up healthy before continuing.
## Configuring TLS on an Existing Cluster
If you already have a Consul cluster deployed on Kubernetes and
would like to turn on TLS for internal Consul communication,
please see
[Configuring TLS on an Existing Cluster](/docs/platform/k8s/tls-on-existing-cluster.html).


@@ -624,6 +624,9 @@
<li<%= sidebar_current("docs-platform-k8s-ops-upgrading") %>>
  <a href="/docs/platform/k8s/upgrading.html">Upgrading</a>
</li>
<li<%= sidebar_current("docs-platform-k8s-ops-tls-on-existing-cluster") %>>
  <a href="/docs/platform/k8s/tls-on-existing-cluster.html">Configuring TLS on an Existing Cluster</a>
</li>
<li<%= sidebar_current("docs-platform-k8s-ops-uninstalling") %>>
  <a href="/docs/platform/k8s/uninstalling.html">Uninstalling</a>
</li>