Reorg kube docs
parent befb914cf6
commit 520d37fcd5
@@ -3,18 +3,20 @@ layout: "docs"
page_title: "Install Consul"
sidebar_current: "docs-install-install"
description: |-
  Installing Consul is simple. You can download a precompiled binary or compile
  from source. This page details both methods.
  Installing Consul is simple. You can download a precompiled binary, compile
  from source, or run on Kubernetes. This page details each method.
---

# Install Consul

Installing Consul is simple. There are two approaches to installing Consul:
Installing Consul is simple. There are three approaches to installing Consul:

1. Using a [precompiled binary](#precompiled-binaries)

1. Installing [from source](#compiling-from-source)

1. Installing [on Kubernetes](/docs/platform/k8s/run.html)

Downloading a precompiled binary is easiest, and we provide downloads over TLS
along with SHA256 sums to verify the binary. We also distribute a PGP signature
with the SHA256 sums that can be verified.
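
A sketch of that verification flow, with illustrative filenames that are not part of this commit:

```bash
# Check the downloaded zip against the published SHA256 sums
# (assumes both files are in the current directory).
shasum -a 256 -c consul_1.4.3_SHA256SUMS 2>/dev/null | grep consul_1.4.3_linux_amd64.zip

# Verify the PGP signature over the checksum file (assumes HashiCorp's
# public key has already been imported into the local keyring).
gpg --verify consul_1.4.3_SHA256SUMS.sig consul_1.4.3_SHA256SUMS
```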

@@ -1,15 +1,14 @@
---
layout: "docs"
page_title: "Out-of-Cluster Nodes - Kubernetes"
sidebar_current: "docs-platform-k8s-ooc-nodes"
page_title: "Consul Clients Outside of Kubernetes - Kubernetes"
sidebar_current: "docs-platform-k8s-run-clients-outside"
description: |-
  Non-Kubernetes nodes can join a Consul cluster running within Kubernetes. These are considered "out-of-cluster" nodes.
  Consul clients running on non-Kubernetes nodes can join a Consul cluster running within Kubernetes.
---

# Out-of-Cluster Nodes
# Consul Clients Outside Kubernetes

Non-Kubernetes nodes can join a Consul cluster running within Kubernetes.
These are considered "out-of-cluster" nodes.
Consul clients running on non-Kubernetes nodes can join a Consul cluster running within Kubernetes.

## Auto-join

@@ -37,8 +36,8 @@ different pods to have different exposed ports.

## Networking

Consul typically requires a fully connected network. Therefore, out-of-cluster
nodes joining a cluster running within Kubernetes must be able to communicate
Consul typically requires a fully connected network. Therefore,
nodes outside of Kubernetes joining a cluster running within Kubernetes must be able to communicate
to pod IPs or Kubernetes node IPs via the network.

-> **Consul Enterprise customers** may use

@@ -0,0 +1,115 @@
---
layout: "docs"
page_title: "Consul Enterprise"
sidebar_current: "docs-platform-k8s-run-consul-ent"
description: |-
  Configuration for running Consul Enterprise
---

# Consul Enterprise

You can use this Helm chart to deploy Consul Enterprise by following a few extra steps.

Find the license file that you received in your welcome email. It should have a `.hclic` extension. You will use the contents of this file to create a Kubernetes secret before installing the Helm chart.

You can use the following commands to create the secret with name `consul-ent-license` and key `key`:

```bash
secret=$(cat 1931d1f4-bdfd-6881-f3f5-19349374841f.hclic)
kubectl create secret generic consul-ent-license --from-literal="key=${secret}"
```
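
An equivalent one-liner, assuming the same license filename, reads the file directly into the secret:

```bash
# --from-file=key=<path> stores the file contents under the key "key".
kubectl create secret generic consul-ent-license \
  --from-file="key=1931d1f4-bdfd-6881-f3f5-19349374841f.hclic"
```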

-> **Note:** If you cannot find your `.hclic` file, please contact your sales team or Technical Account Manager.

In your `config.yaml`, change the value of `global.image` to one of the enterprise [release tags](https://hub.docker.com/r/hashicorp/consul-enterprise/tags).

```yaml
# config.yaml
global:
  image: "hashicorp/consul-enterprise:1.4.3-ent"
```

Add the name and key of the secret you just created to `server.enterpriseLicense`.

```yaml
# config.yaml
global:
  image: "hashicorp/consul-enterprise:1.4.3-ent"
server:
  enterpriseLicense:
    secretName: "consul-ent-license"
    secretKey: "key"
```

Now run `helm install`:

```bash
$ helm install --wait hashicorp ./consul-helm -f config.yaml
```

Once the cluster is up, you can verify the nodes are running Consul Enterprise by
using the `consul license get` command.

First, forward your local port 8500 to the Consul servers so you can run `consul`
commands locally against the Consul servers in Kubernetes:

```bash
$ kubectl port-forward service/hashicorp-consul-server 8500:8500
```

In a separate tab, run the `consul license get` command (if using ACLs, see below):

```bash
$ consul license get
License is valid
License ID: 1931d1f4-bdfd-6881-f3f5-19349374841f
Customer ID: b2025a4a-8fdd-f268-95ce-1704723b9996
Expires At: 2020-03-09 03:59:59.999 +0000 UTC
Datacenter: *
Package: premium
Licensed Features:
        Automated Backups
        Automated Upgrades
        Enhanced Read Scalability
        Network Segments
        Redundancy Zone
        Advanced Network Federation
$ consul members
Node                       Address           Status  Type    Build      Protocol  DC   Segment
hashicorp-consul-server-0  10.60.0.187:8301  alive   server  1.4.3+ent  2         dc1  <all>
hashicorp-consul-server-1  10.60.1.229:8301  alive   server  1.4.3+ent  2         dc1  <all>
hashicorp-consul-server-2  10.60.2.197:8301  alive   server  1.4.3+ent  2         dc1  <all>
```

If you get an error:

```bash
Error getting license: invalid character 'r' looking for beginning of value
```

then you have likely enabled ACLs. You need to specify your ACL token when
running the `license get` command. First, assign the ACL token to the `CONSUL_HTTP_TOKEN` environment variable:

```bash
$ export CONSUL_HTTP_TOKEN=$(kubectl get secrets/hashicorp-consul-bootstrap-acl-token --template={{.data.token}} | base64 -D)
```

Now the token will be used when running Consul commands:

```bash
$ consul license get
License is valid
License ID: 1931d1f4-bdfd-6881-f3f5-19349374841f
Customer ID: b2025a4a-8fdd-f268-95ce-1704723b9996
Expires At: 2020-03-09 03:59:59.999 +0000 UTC
Datacenter: *
Package: premium
Licensed Features:
        Automated Backups
        Automated Upgrades
        Enhanced Read Scalability
        Network Segments
        Redundancy Zone
        Advanced Network Federation
```

@@ -353,113 +353,6 @@ to run the sync program.
}
```

## Using the Helm Chart to deploy Consul Enterprise

You can also use this Helm chart to deploy Consul Enterprise by following a few extra steps.

Find the license file that you received in your welcome email. It should have the extension `.hclic`. You will use the contents of this file to create a Kubernetes secret before installing the Helm chart.

You can use the following commands to create the secret:

```bash
secret=$(cat 1931d1f4-bdfd-6881-f3f5-19349374841f.hclic)
kubectl create secret generic consul-ent-license --from-literal="key=${secret}"
```

-> **Note:** If you cannot find your `.hclic` file, please contact your sales team or Technical Account Manager.

In your `config.yaml`, change the value of `global.image` to one of the enterprise [release tags](https://hub.docker.com/r/hashicorp/consul-enterprise/tags).

```yaml
# config.yaml
global:
  image: "hashicorp/consul-enterprise:1.4.3-ent"
```

Add the name of the secret you just created to `server.enterpriseLicense`.

```yaml
# config.yaml
global:
  image: "hashicorp/consul-enterprise:1.4.3-ent"
server:
  enterpriseLicense:
    secretName: "consul-ent-license"
    secretKey: "key"
```

Now run `helm install`:

```bash
$ helm install --wait hashicorp ./consul-helm -f config.yaml
```

Once the cluster is up, you can verify the nodes are running Consul Enterprise by
using the `consul license get` command.

First, forward your local port 8500 to the Consul servers so you can run `consul`
commands locally against the Consul servers in Kubernetes:

```bash
$ kubectl port-forward service/hashicorp-consul-server 8500:8500
```

In a separate tab, run the `consul license get` command (if using ACLs see below):

```bash
$ consul license get
License is valid
License ID: 1931d1f4-bdfd-6881-f3f5-19349374841f
Customer ID: b2025a4a-8fdd-f268-95ce-1704723b9996
Expires At: 2020-03-09 03:59:59.999 +0000 UTC
Datacenter: *
Package: premium
Licensed Features:
        Automated Backups
        Automated Upgrades
        Enhanced Read Scalability
        Network Segments
        Redundancy Zone
        Advanced Network Federation
$ consul members
Node                       Address           Status  Type    Build      Protocol  DC   Segment
hashicorp-consul-server-0  10.60.0.187:8301  alive   server  1.4.3+ent  2         dc1  <all>
hashicorp-consul-server-1  10.60.1.229:8301  alive   server  1.4.3+ent  2         dc1  <all>
hashicorp-consul-server-2  10.60.2.197:8301  alive   server  1.4.3+ent  2         dc1  <all>
```

If you get an error:

```bash
Error getting license: invalid character 'r' looking for beginning of value
```

Then you have likely enabled ACLs. You need to specify your ACL token when
running the `license get` command. First, get the ACL token:

```bash
$ kubectl get secrets/hashicorp-consul-bootstrap-acl-token --template={{.data.token}} | base64 -D
4dae8373-b4d7-8009-9880-a796850caef9%
```

Now use the token when running the `license get` command:

```bash
$ consul license get -token=4dae8373-b4d7-8009-9880-a796850caef9
License is valid
License ID: 1931d1f4-bdfd-6881-f3f5-19349374841f
Customer ID: b2025a4a-8fdd-f268-95ce-1704723b9996
Expires At: 2020-03-09 03:59:59.999 +0000 UTC
Datacenter: *
Package: premium
Licensed Features:
        Automated Backups
        Automated Upgrades
        Enhanced Read Scalability
        Network Segments
        Redundancy Zone
        Advanced Network Federation
```

## Helm Chart Examples

@@ -0,0 +1,14 @@
---
layout: "docs"
page_title: "Operations"
sidebar_current: "docs-platform-k8s-ops"
description: |-
  Operating Consul on Kubernetes
---

# Operations

This section holds documentation on various operational tasks you may need to perform.

* [Upgrading](/docs/platform/k8s/upgrading.html) - Upgrading your Consul servers or clients and the Helm chart.
* [Uninstalling](/docs/platform/k8s/uninstalling.html) - Uninstalling the Helm chart.

@@ -0,0 +1,32 @@
---
layout: "docs"
page_title: "Predefined PVCs"
sidebar_current: "docs-platform-k8s-run-pvcs"
description: |-
  Using predefined Persistent Volume Claims
---

# Predefined Persistent Volume Claims (PVCs)

The only way to use pre-created PVCs is to name them in the format Kubernetes expects:

```
data-<kubernetes namespace>-<helm release name>-consul-server-<ordinal>
```

The Kubernetes namespace you are installing into, Helm release name, and ordinal
must match between your Consul servers and your pre-created PVCs. You only
need as many PVCs as you have Consul servers. For example, given a Kubernetes
namespace of "vault," a release name of "consul," and 5 servers, you would need
to create PVCs with the following names:

```
data-vault-consul-consul-server-0
data-vault-consul-consul-server-1
data-vault-consul-consul-server-2
data-vault-consul-consul-server-3
data-vault-consul-consul-server-4
```
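
A sketch of one such pre-created claim; the size and class below mirror the `10Gi`/`standard` values shown in the uninstall docs on this page, but they are assumptions, not chart defaults:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  # Must follow the data-<namespace>-<release>-consul-server-<ordinal> format.
  name: data-vault-consul-consul-server-0
  namespace: vault
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: standard
  resources:
    requests:
      storage: 10Gi
```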

If you are using your own storage, you'll need to configure a storage class. See the
documentation for configuring storage classes [here](https://kubernetes.io/docs/concepts/storage/storage-classes/).
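
If you go that route, a minimal storage class might look like the following; the provisioner here is an assumption for illustration (GCE persistent disks), so substitute the one for your platform:

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: standard
# Platform-specific: use the provisioner appropriate to your cluster.
provisioner: kubernetes.io/gce-pd
parameters:
  type: pd-ssd
```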

@@ -117,33 +117,6 @@ If you've already installed Consul and want to make changes, you'll need to run
`helm upgrade`. See the [Upgrading Consul on Kubernetes](/docs/platform/k8s/run.html#upgrading-consul-on-kubernetes)
section for more details.

## Uninstalling Consul
Consul can be uninstalled via the `helm delete` command:

```bash
$ helm delete hashicorp
release "hashicorp" uninstalled
```

-> If using Helm 2, run `helm delete --purge hashicorp`

After deleting the Helm release, you need to delete the `PersistentVolumeClaim`'s
for the persistent volumes that store Consul's data. These are not deleted by Helm due to a [bug](https://github.com/helm/helm/issues/5156).
To delete, run:

```bash
$ kubectl get pvc -l chart=consul-helm
NAME                                     STATUS   VOLUME                                      CAPACITY   ACCESS MODES   STORAGECLASS   AGE
data-default-hashicorp-consul-server-0   Bound    pvc-32cb296b-1213-11ea-b6f0-42010a8001db    10Gi       RWO            standard       17m
data-default-hashicorp-consul-server-1   Bound    pvc-32d79919-1213-11ea-b6f0-42010a8001db    10Gi       RWO            standard       17m
data-default-hashicorp-consul-server-2   Bound    pvc-331581ea-1213-11ea-b6f0-42010a8001db    10Gi       RWO            standard       17m

$ kubectl delete pvc -l chart=consul-helm
persistentvolumeclaim "data-default-hashicorp-consul-server-0" deleted
persistentvolumeclaim "data-default-hashicorp-consul-server-1" deleted
persistentvolumeclaim "data-default-hashicorp-consul-server-2" deleted
```

## Viewing the Consul UI

The Consul UI is enabled by default when using the Helm chart.

@@ -162,44 +135,6 @@ the [`ui.service` chart values](/docs/platform/k8s/helm.html#v-ui-service).
This service will allow requests to the Consul servers so it should
not be open to the world.
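
For quick local access without exposing a service, the same server port-forward used elsewhere on this page works; then browse to http://localhost:8500/ui (release name assumed to be `hashicorp`):

```bash
$ kubectl port-forward service/hashicorp-consul-server 8500:8500
```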

## Joining an Existing Consul Cluster

If you have a Consul cluster already running, you can configure your
Kubernetes nodes to join this existing cluster.

```yaml
# config.yaml
global:
  enabled: false

client:
  enabled: true
  join:
    - "provider=my-cloud config=val ..."
```

The `config.yaml` file to configure the Helm chart sets the proper
configuration to join an existing cluster.

The `global.enabled` value first disables all chart components by default
so that each component is opt-in. This allows us to _only_ setup the client
agents. We then opt-in to the client agents by setting `client.enabled` to
`true`.

Next, `client.join` is set to an array of valid
[`-retry-join` values](/docs/agent/options.html#retry-join). In the
example above, a fake [cloud auto-join](/docs/agent/cloud-auto-join.html)
value is specified. This should be set to resolve to the proper addresses of
your existing Consul cluster.

-> **Networking:** Note that for the Kubernetes nodes to join an existing
cluster, the nodes (and specifically the agent pods) must be able to connect
to all other server and client agents inside and _outside_ of Kubernetes.
If this isn't possible, consider running the Kubernetes agents as a separate
DC or adopting Enterprise for
[network segments](/docs/enterprise/network-segments/index.html).

## Accessing the Consul HTTP API

The Consul HTTP API should be accessed by communicating to the local agent
|
|||
consul kv put hello world
|
||||
```
|
||||
|
||||
## Upgrading Consul on Kubernetes
|
||||
|
||||
To upgrade Consul on Kubernetes, we follow the same pattern as
|
||||
[generally upgrading Consul](/docs/upgrading.html), except we can use
|
||||
the Helm chart to step through a rolling deploy. It is important to understand
|
||||
how to [generally upgrade Consul](/docs/upgrading.html) before reading this
|
||||
section.
|
||||
|
||||
Upgrading Consul on Kubernetes will follow the same pattern: each server
|
||||
will be updated one-by-one. After that is successful, the clients will
|
||||
be updated in batches.
|
||||
|
||||
### Upgrading Consul Servers
|
||||
|
||||
To initiate the upgrade, change the `server.image` value to the
|
||||
desired Consul version. For illustrative purposes, the example below will
|
||||
use `consul:123.456`. Also set the `server.updatePartition` value
|
||||
_equal to the number of server replicas_:
|
||||
|
||||
```yaml
|
||||
server:
|
||||
image: "consul:123.456"
|
||||
replicas: 3
|
||||
updatePartition: 3
|
||||
```
|
||||
|
||||
The `updatePartition` value controls how many instances of the server
|
||||
cluster are updated. Only instances with an index _greater than_ the
|
||||
`updatePartition` value are updated (zero-indexed). Therefore, by setting
|
||||
it equal to replicas, none should update yet.
|
||||
|
||||
Next, run the upgrade. You should run this with `--dry-run` first to verify
|
||||
the changes that will be sent to the Kubernetes cluster.
|
||||
|
||||
```
|
||||
$ helm upgrade consul ./
|
||||
...
|
||||
```
|
||||
|
||||
This should cause no changes (although the resource will be updated). If
|
||||
everything is stable, begin by decreasing the `updatePartition` value by one,
|
||||
and running `helm upgrade` again. This should cause the first Consul server
|
||||
to be stopped and restarted with the new image.
|
||||
|
||||
Wait until the Consul server cluster is healthy again (30s to a few minutes)
|
||||
then decrease `updatePartition` and upgrade again. Continue until
|
||||
`updatePartition` is `0`. At this point, you may remove the
|
||||
`updatePartition` configuration. Your server upgrade is complete.
|
||||
|
||||
### Upgrading Consul Clients
|
||||
|
||||
With the servers upgraded, it is time to upgrade the clients. To upgrade
|
||||
the clients, set the `client.image` value to the desired Consul version.
|
||||
Then, run `helm upgrade`. This will upgrade the clients in batches, waiting
|
||||
until the clients come up healthy before continuing.
|
||||
|
||||
### Using Existing Persistent Volume Claims (PVCs)
|
||||
|
||||
The only way to use a pre-created PVC is to name them in the format Kubernetes expects:
|
||||
|
||||
```
|
||||
data-<kubernetes namespace>-<helm release name>-consul-server-<ordinal>
|
||||
```
|
||||
|
||||
The Kubernetes namespace you are installing into, helm release name, and ordinal
|
||||
must match between your Consul servers and your pre-created PVCs. You only
|
||||
need as many PVCs as you have Consul servers. For example, given a Kubernetes
|
||||
namespace of "vault" and a release name of "consul" and 5 servers, you would need
|
||||
to create PVCs with the following names:
|
||||
|
||||
```
|
||||
data-vault-consul-consul-server-0
|
||||
data-vault-consul-consul-server-1
|
||||
data-vault-consul-consul-server-2
|
||||
data-vault-consul-consul-server-3
|
||||
data-vault-consul-consul-server-4
|
||||
```
|
||||
|
||||
If you are using your own storage, you'll need to configure a storage class. See the
|
||||
documentation for configuring storage classes [here](https://kubernetes.io/docs/concepts/storage/storage-classes/).
|
||||
|
||||
## Architecture
|
||||
|
||||
|
|
|

@@ -0,0 +1,54 @@
---
layout: "docs"
page_title: "Consul Servers Outside of Kubernetes - Kubernetes"
sidebar_current: "docs-platform-k8s-run-servers-outside"
description: |-
  Running Consul servers outside of Kubernetes
---

# Consul Servers Outside of Kubernetes

If you have a Consul cluster already running, you can configure your
Consul clients inside Kubernetes to join this existing cluster.

The `config.yaml` file below shows how to configure the Helm chart to install
Consul clients that will join an existing cluster.

The `global.enabled` value first disables all chart components by default
so that each component is opt-in. This allows us to _only_ setup the client
agents. We then opt-in to the client agents by setting `client.enabled` to
`true`.

Next, `client.exposeGossipPorts` can be set to true or false depending on
whether you want the clients to be exposed on the Kubernetes node IPs (`true`) or
their pod IPs (`false`).

Finally, `client.join` is set to an array of valid
[`-retry-join` values](/docs/agent/options.html#retry-join). In the
example below, a fake [cloud auto-join](/docs/agent/cloud-auto-join.html)
value is specified. This should be set to resolve to the proper addresses of
your existing Consul cluster.

```yaml
# config.yaml
global:
  enabled: false

client:
  enabled: true
  # Set this to true to expose the Consul clients using the Kubernetes node
  # IPs. If false, the pod IPs must be routable from the external servers.
  exposeGossipPorts: true
  join:
    - "provider=my-cloud config=val ..."
```
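
Installing with this configuration then follows the same pattern used throughout these docs (release name assumed):

```bash
$ helm install --wait hashicorp ./consul-helm -f config.yaml
```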

-> **Networking:** Note that for the Kubernetes nodes to join an existing
cluster, the nodes (and specifically the agent pods) must be able to connect
to all other server and client agents inside and _outside_ of Kubernetes.
If this isn't possible, consider running a separate Consul cluster inside Kubernetes
and federating it with your cluster outside Kubernetes.
You may also consider adopting Consul Enterprise for
[network segments](/docs/enterprise/network-segments/index.html).

@@ -0,0 +1,34 @@
---
layout: "docs"
page_title: "Uninstalling"
sidebar_current: "docs-platform-k8s-ops-uninstalling"
description: |-
  Uninstalling Consul on Kubernetes
---

# Uninstalling Consul

Consul can be uninstalled via the `helm delete` command:

```bash
$ helm delete hashicorp
release "hashicorp" uninstalled
```

-> If using Helm 2, run `helm delete --purge hashicorp`.

After deleting the Helm release, you need to delete the `PersistentVolumeClaim`s
for the persistent volumes that store Consul's data. These are not deleted by Helm due to a [bug](https://github.com/helm/helm/issues/5156).
To delete them, run:

```bash
$ kubectl get pvc -l chart=consul-helm
NAME                                     STATUS   VOLUME                                      CAPACITY   ACCESS MODES   STORAGECLASS   AGE
data-default-hashicorp-consul-server-0   Bound    pvc-32cb296b-1213-11ea-b6f0-42010a8001db    10Gi       RWO            standard       17m
data-default-hashicorp-consul-server-1   Bound    pvc-32d79919-1213-11ea-b6f0-42010a8001db    10Gi       RWO            standard       17m
data-default-hashicorp-consul-server-2   Bound    pvc-331581ea-1213-11ea-b6f0-42010a8001db    10Gi       RWO            standard       17m

$ kubectl delete pvc -l chart=consul-helm
persistentvolumeclaim "data-default-hashicorp-consul-server-0" deleted
persistentvolumeclaim "data-default-hashicorp-consul-server-1" deleted
persistentvolumeclaim "data-default-hashicorp-consul-server-2" deleted
```

@@ -0,0 +1,63 @@
---
layout: "docs"
page_title: "Upgrading"
sidebar_current: "docs-platform-k8s-ops-upgrading"
description: |-
  Upgrading Consul on Kubernetes
---

# Upgrading Consul on Kubernetes

To upgrade Consul on Kubernetes, we follow the same pattern as
[generally upgrading Consul](/docs/upgrading.html), except we can use
the Helm chart to step through a rolling deploy. It is important to understand
how to [generally upgrade Consul](/docs/upgrading.html) before reading this
section.

Upgrading Consul on Kubernetes will follow the same pattern: each server
will be updated one-by-one. After that is successful, the clients will
be updated in batches.

## Upgrading Consul Servers

To initiate the upgrade, change the `server.image` value to the
desired Consul version. For illustrative purposes, the example below will
use `consul:123.456`. Also set the `server.updatePartition` value
_equal to the number of server replicas_:

```yaml
server:
  image: "consul:123.456"
  replicas: 3
  updatePartition: 3
```

The `updatePartition` value controls how many instances of the server
cluster are updated. Only instances with an index _greater than_ the
`updatePartition` value are updated (zero-indexed). Therefore, by setting
it equal to replicas, none should update yet.

Next, run the upgrade. You should run this with `--dry-run` first to verify
the changes that will be sent to the Kubernetes cluster.
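
A sketch of that preview step, assuming the chart directory is the working directory:

```bash
# --dry-run renders and validates the release without applying it.
$ helm upgrade consul ./ --dry-run
```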

```bash
$ helm upgrade consul ./
...
```

This should cause no changes (although the resource will be updated). If
everything is stable, begin by decreasing the `updatePartition` value by one,
and running `helm upgrade` again. This should cause the first Consul server
to be stopped and restarted with the new image.

Wait until the Consul server cluster is healthy again (30s to a few minutes),
then decrease `updatePartition` and upgrade again. Continue until
`updatePartition` is `0`. At this point, you may remove the
`updatePartition` configuration. Your server upgrade is complete.
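
One way to check health between steps (a sketch; assumes the server port-forward described in the running docs):

```bash
# All servers should report alive; the Build column shows which run the new image.
$ consul members

# The raft peer list should show a single leader with all servers present.
$ consul operator raft list-peers
```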

## Upgrading Consul Clients

With the servers upgraded, it is time to upgrade the clients. To upgrade
the clients, set the `client.image` value to the desired Consul version.
Then, run `helm upgrade`. This will upgrade the clients in batches, waiting
until the clients come up healthy before continuing.
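
The corresponding values change mirrors the server example above (version tag illustrative):

```yaml
client:
  image: "consul:123.456"
```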

@@ -108,3 +108,7 @@ on `consul -v`.
of Consul, especially newer features, may not be available. If this is the
case, Consul will typically warn you. In general, you should always upgrade
your cluster so that you can run the latest protocol version.

## Upgrading on Kubernetes

See the dedicated [Upgrading Consul on Kubernetes](/docs/platform/k8s/upgrading.html) page.

@@ -593,25 +593,41 @@
<a href="/docs/platform/k8s/index.html">Kubernetes</a>
<ul class="nav">
  <li<%= sidebar_current("docs-platform-k8s-run") %>>
    <a href="/docs/platform/k8s/run.html">Installing Consul</a>
    <a href="/docs/platform/k8s/run.html">Installation</a>
    <ul class="nav">
      <li<%= sidebar_current("docs-platform-k8s-run-aks") %>>
        <a href="/docs/platform/k8s/aks.html">Consul on Azure Cloud</a>
        <a href="/docs/platform/k8s/aks.html">Azure Kubernetes Service (AKS)</a>
      </li>
      <li<%= sidebar_current("docs-platform-k8s-run-gke") %>>
        <a href="/docs/platform/k8s/gke.html">Consul on Google Cloud</a>
        <a href="/docs/platform/k8s/gke.html">Google Kubernetes Engine (GKE)</a>
      </li>
      <li<%= sidebar_current("docs-platform-k8s-run-mini") %>>
        <a href="/docs/platform/k8s/minikube.html">Consul on Minikube</a>
        <a href="/docs/platform/k8s/minikube.html">Minikube</a>
      </li>
      <li<%= sidebar_current("docs-platform-k8s-run-consul-ent") %>>
        <a href="/docs/platform/k8s/consul-enterprise.html">Consul Enterprise</a>
      </li>
      <li<%= sidebar_current("docs-platform-k8s-run-clients-outside") %>>
        <a href="/docs/platform/k8s/clients-outside-kubernetes.html">Consul Clients Outside Kubernetes</a>
      </li>
      <li<%= sidebar_current("docs-platform-k8s-run-servers-outside") %>>
        <a href="/docs/platform/k8s/servers-outside-kubernetes.html">Consul Servers Outside Kubernetes</a>
      </li>
      <li<%= sidebar_current("docs-platform-k8s-run-pvcs") %>>
        <a href="/docs/platform/k8s/predefined-pvcs.html">Predefined Persistent Volume Claims</a>
      </li>
    </ul>
  </li>
  <li<%= sidebar_current("docs-platform-k8s-helm") %>>
    <a href="/docs/platform/k8s/helm.html">Helm Chart Reference</a>
  </li>
  <li<%= sidebar_current("docs-platform-k8s-ooc-nodes") %>>
    <a href="/docs/platform/k8s/out-of-cluster-nodes.html">Out-of-Cluster Nodes</a>
  </li>
  <li<%= sidebar_current("docs-platform-k8s-ops") %>>
    <a href="/docs/platform/k8s/operations.html">Operations</a>
    <ul class="nav">
      <li<%= sidebar_current("docs-platform-k8s-ops-upgrading") %>>
        <a href="/docs/platform/k8s/upgrading.html">Upgrading</a>
      </li>
      <li<%= sidebar_current("docs-platform-k8s-ops-uninstalling") %>>
        <a href="/docs/platform/k8s/uninstalling.html">Uninstalling</a>
      </li>
    </ul>
  </li>
  <li<%= sidebar_current("docs-platform-k8s-dns") %>>
    <a href="/docs/platform/k8s/dns.html">Consul DNS</a>
  </li>

@@ -624,6 +640,9 @@
  <li<%= sidebar_current("docs-platform-k8s-ambassador") %>>
    <a href="/docs/platform/k8s/ambassador.html">Ambassador Integration</a>
  </li>
  <li<%= sidebar_current("docs-platform-k8s-helm") %>>
    <a href="/docs/platform/k8s/helm.html">Helm Chart Reference</a>
  </li>
</ul>
</li>