Language changes from review

Co-Authored-By: Derek Strickland <1111455+DerekStrickland@users.noreply.github.com>
Joel Watson 2020-09-15 16:03:17 -05:00
parent 9ff9a8c478
commit 7097a7b92c
6 changed files with 97 additions and 106 deletions


@@ -47,18 +47,13 @@ Consul is A, and version B is released.

## Large Version Jumps

Operating a Consul datacenter that is multiple major versions behind the current
major version can increase the risk incurred during upgrades. We encourage our
customers to remain no more than two major versions behind (i.e., if 1.8.x is the
current release, do not use versions older than 1.6.x). If you find yourself in a
situation where you are many major versions behind and need to upgrade, please
review our [Upgrade Instructions page](/docs/upgrading/instructions) for
information on how to perform those upgrades.

## Backward Incompatible Upgrades


@@ -11,28 +11,27 @@ description: >-

## Introduction

This document describes some best practices that you should follow when
upgrading Consul. Some versions also have steps that are specific to that
version, so make sure you also review the [upgrade instructions](/docs/upgrading/instructions)
for the version you are on.

## Download the New Version

First, download the binary for the new version you want.

<Tabs>
<Tab heading="Binary">

All current and past versions of the OSS and Enterprise releases are
available here:

- https://releases.hashicorp.com/consul
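
For example, a specific release can be fetched and unpacked directly from that
site. A minimal sketch; the version and platform below are only placeholders,
so substitute the ones you actually need:

```shell
# Download and unpack a specific Consul release (example version/platform shown).
curl -O https://releases.hashicorp.com/consul/1.8.4/consul_1.8.4_linux_amd64.zip
unzip consul_1.8.4_linux_amd64.zip   # produces a `consul` binary in the current directory
```
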
</Tab>
<Tab heading="Docker">

Docker containers are available at these locations:

- **OSS:** https://hub.docker.com/_/consul
- **Enterprise:** https://hub.docker.com/r/hashicorp/consul-enterprise
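
If you go this route, it is worth pinning the image to the exact version you are
upgrading to rather than relying on `latest`. A sketch only; the tags shown are
examples, not a recommendation of a specific version:

```shell
# Pull a specific Consul version from Docker Hub (tags shown are examples).
docker pull consul:1.8.4
# Enterprise images follow a similar versioned-tag pattern:
docker pull hashicorp/consul-enterprise:1.8.4-ent
```
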
@@ -40,7 +39,7 @@ If you're using Docker containers, then you can find those here:

</Tab>
<Tab heading="Kubernetes">

If you are using Kubernetes, then please review our documentation for
[Upgrading Consul on Kubernetes](/docs/k8s/operations/upgrading).

</Tab>

@@ -60,7 +59,7 @@ You can inspect the snapshot to ensure it was successful with:

```
consul snapshot inspect backup.snap
```

Example output:

```
ID 2-1182-1542056499724
```

@@ -71,25 +70,24 @@ Version 1

This will ensure you have a safe fallback option in case something goes wrong. Store
this snapshot somewhere safe. More documentation on snapshot usage is available here:

- https://www.consul.io/docs/commands/snapshot
- https://learn.hashicorp.com/tutorials/consul/backup-and-restore
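
If you do need to fall back, the snapshot can be restored onto the cluster. A
minimal sketch, assuming the `backup.snap` file taken above and an agent
reachable from the node where you run the command:

```shell
# Restore the cluster state from the snapshot taken before the upgrade.
consul snapshot restore backup.snap
```
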
**2.** Temporarily modify your Consul configuration so that its [log_level](/docs/agent/options.html#_log_level)
is set to `debug`. After doing this, issue the `consul reload` command on your servers. This will
give you more information to work with in the event something goes wrong.
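
For example, on each server you might make a change along these lines. The
config path is only an example; adjust it to wherever your agent configuration
actually lives:

```shell
# Add or update the log level in the agent configuration, e.g. in /etc/consul.d/consul.hcl:
#   log_level = "debug"
# Then apply the change without restarting the agent:
consul reload
```
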
## Perform the Upgrade

**1.** Issue the following command to discover which server is currently the leader:

```
consul operator raft list-peers
```

You should receive output similar to this (exact formatting and content may differ based on version):

```
Node ID Address State Voter RaftProtocol
```

@@ -105,23 +103,23 @@ binary with the new one.

**3.** Perform a rolling restart of Consul on your servers, leaving the leader agent
for last. Only restart one server at a time. After restarting each server, validate
that it has rejoined the cluster and is in sync with the leader by issuing the `consul info` command,
and checking whether the `commit_index` and `last_log_index` fields have the same value.
If done properly, this should avoid an unexpected leadership election due to loss of quorum.
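
One quick way to compare those two fields after each restart is to filter the
`consul info` output; a minimal sketch:

```shell
# Compare the raft indexes on the restarted server; the two values should match
# once the server has caught up with the leader.
consul info | grep -E 'commit_index|last_log_index'
```
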
~> It is important to issue a `consul leave` command on each server node when shutting
Consul down. Make sure your service management system (e.g., systemd, upstart, etc.) is
performing that action. If not, make sure you do it manually or you _will_ end up in a
bad cluster state.
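
As a sanity check, you can confirm that your service manager is wired up to do
this. A sketch assuming systemd and a unit named `consul`; adjust the unit name
and binary path to your setup:

```shell
# Show the unit definition and confirm it issues `consul leave` on stop,
# typically via a line such as: ExecStop=/usr/bin/consul leave
systemctl cat consul | grep -i execstop
```
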
**4.** Double-check that all servers are showing up in the cluster as expected and are on
the correct version by issuing:

```
consul members
```

You should receive output similar to this:

```
Node Address Status Type Build Protocol DC
```

@@ -130,13 +128,13 @@ dc1-node2 10.11.0.3:8301 alive server 1.8.3 2 dc1

```
dc1-node3 10.11.0.4:8301 alive server 1.8.3 2 dc1
```

Also double-check the raft state to make sure there is a leader and sufficient voters:

```
consul operator raft list-peers
```

You should receive output similar to this:

```
Node ID Address State Voter RaftProtocol
```

@@ -145,12 +143,12 @@ dc1-node2 20e6be1b-f1cb-4aab-929f-f7d2d43d9a96 10.11.0.3:8300 follower true

```
dc1-node3 658c343b-8769-431f-a71a-236f9dbb17b3 10.11.0.4:8300 follower true 3
```

**5.** Set your `log_level` back to what you had it at prior to the upgrade and issue
`consul reload` again.

## Troubleshooting

Most problems with upgrading occur due to either failing to upgrade the leader agent last,
or due to failing to wait for a follower agent to fully rejoin a cluster before moving
on to another server. This can cause a loss of quorum and occasionally can result in
all of your servers attempting to kick off leadership elections endlessly without ever
reaching a quorum and electing a leader.

@@ -158,21 +156,21 @@ reaching a quorum and electing a leader.

Most of these problems can be solved by following the steps outlined in our
[Outage Recovery](https://learn.hashicorp.com/tutorials/consul/recovery-outage) document.

If you are still having trouble after trying the recovery steps outlined there,
then these options for further assistance are available:

- OSS users without paid support plans can request help in our [Community Forum](https://discuss.hashicorp.com/c/consul/29)
- Enterprise and OSS users with paid support plans can contact [HashiCorp Support](https://support.hashicorp.com/)

If you end up contacting support, please make sure you include the following information
in your support ticket:

- Consul version you were upgrading FROM and TO.
- [Debug level logs](/docs/agent/options.html#_log_level) from all servers in the cluster
that you are having trouble with. These should include logs from prior to the upgrade attempt
up through the current time. If your logs were not set at debug level prior to the
upgrade, please include those logs as well. Also, update your config to use debug logs,
and include logs from after that was done.
- Your Consul config files (please redact any secrets).
- Output from `consul members -detailed` and `consul operator raft list-peers` from each
server in your cluster.
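
A small sketch of gathering those last two outputs on each server; the file
names are arbitrary examples:

```shell
# Capture the membership and raft peer details for this server so they can be
# attached to the support ticket.
consul members -detailed > members-$(hostname).txt
consul operator raft list-peers > raft-peers-$(hostname).txt
```
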


@@ -11,14 +11,14 @@ description: >-

This document is intended to help customers who find themselves many versions behind to upgrade safely.

Our recommended upgrade path is moving from version 0.8.5 to 1.2.4 to 1.6.9 to the current version. To get
started, you will want to choose the version you are currently on below and then follow the instructions
until you are on the latest version. The upgrade guides will mention notable changes and link to relevant
changelogs. We recommend reviewing the changelog for versions between the one you are on and the
one you are upgrading to at each step to familiarize yourself with changes.
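
If you are not sure which release series you are currently running, the binary
itself can tell you (assuming `consul` is on the PATH of the node you check):

```shell
# Print the version of the local Consul binary/agent.
consul version
```
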
## Getting Started

To get instructions for your upgrade, please choose the release series you are _currently using_:

- [0.8.x](/docs/upgrading/instructions/upgrade-to-1-2-x)
- [0.9.x](/docs/upgrading/instructions/upgrade-to-1-2-x)
@@ -31,4 +31,4 @@ To get instructions for your upgrade, please choose the release series you're _c

- [1.6.x](/docs/upgrading/instructions/upgrade-to-1-8-x)
- [1.7.x](/docs/upgrading/instructions/upgrade-to-1-8-x)

If you are using <= 0.7.x, please [contact support](https://support.hashicorp.com) for assistance.


@@ -11,14 +11,13 @@ description: >-

## Introduction

This guide explains how to best upgrade a multi-datacenter Consul deployment that is using
a version of Consul >= 0.8.5 and < 1.2.4 while maintaining replication. If you are on a version
older than 0.8.5, but are in the 0.8.x series, please upgrade to 0.8.5 by following our
[General Upgrade Process](/docs/upgrading/instructions/general-process). If you are on a version
older than 0.8.0, please [contact support](https://support.hashicorp.com).

In this guide, we will be using an example with two datacenters (DCs) and will be
referring to them as DC1 and DC2. DC1 will be the primary datacenter.

## Requirements

@@ -26,35 +25,35 @@ referring to them as DC1 and DC2. DC1 will be the primary datacenter.

- All Consul servers should be on a version of Consul >= 0.8.5 and < 1.2.4.
- You need a Consul cluster with at least 3 nodes to perform this upgrade as documented. If
you either have a single node cluster or several single node clusters joined via WAN, the
servers will come up in a `No cluster leader` loop after upgrading. If that happens, you will
need to recover the cluster using the method described [here](https://learn.hashicorp.com/tutorials/consul/recovery-outage#manual-recovery-using-peers-json).
You can avoid this issue entirely by growing your cluster to 3 nodes prior to upgrading.
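
A quick way to confirm how many servers (and voters) the cluster currently has
before you begin:

```shell
# Each server appears as one row; make sure at least three are listed and voting
# before starting the upgrade.
consul operator raft list-peers
```
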
## Assumptions

This guide makes the following assumptions:

- You have at least two datacenters configured and have ACL replication enabled. If you are
not using multiple datacenters, you can follow along and simply skip the instructions related
to replication.

## Considerations

There are not too many major changes that might cause issues upgrading from 1.0.8, but notable changes
are called out in our [Specific Version Details](/docs/upgrading/upgrade-specific#consul-1-1-0)
page. You can find more granular details in the full [changelog](https://github.com/hashicorp/consul/blob/master/CHANGELOG.md#124-november-27-2018).
Looking through these changes prior to upgrading is highly recommended.

## Procedure

**1.** Check replication status in DC1 by issuing the following curl command from a
consul server in that DC:

```shell
curl -s -H "X-Consul-Token: $MASTER_TOKEN" localhost:8500/v1/acl/replication?pretty
```

You should receive output similar to this:

```json
{
```

@@ -70,14 +69,14 @@ You should see output that looks like this:

-> The primary datacenter (indicated by `acl_datacenter`) will always show as having replication
disabled, so this is normal even if replication is happening.
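
If you prefer a terser check, the same endpoint can be filtered down to the
fields that matter for replication health. A sketch that assumes `jq` is
installed on the server:

```shell
# Show only the replication fields we care about; Running should be true and
# ReplicatedIndex should be advancing in secondary datacenters.
curl -s -H "X-Consul-Token: $MASTER_TOKEN" \
  localhost:8500/v1/acl/replication | jq '{Enabled, Running, ReplicatedIndex, LastSuccess}'
```
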
**2.** Check replication status in DC2 by issuing the following curl command from a
consul server in that DC:

```shell
curl -s -H "X-Consul-Token: $MASTER_TOKEN" localhost:8500/v1/acl/replication?pretty
```

You should receive output similar to this:

```json
{
```

@@ -93,14 +92,14 @@ You should see output that looks like this:

**3.** Upgrade the Consul agents in all DCs to version 1.2.4 by following our [General Upgrade Process](/docs/upgrading/instructions/general-process).
This should be done one DC at a time, leaving the primary DC for last.

**4.** Confirm that replication is still working in DC2 by issuing the following curl command from a
consul server in that DC:

```shell
curl -s -H "X-Consul-Token: $MASTER_TOKEN" localhost:8500/v1/acl/replication?pretty
```

You should receive output similar to this:

```json
{
```

@@ -115,17 +114,17 @@ You should see output that looks like this:

## Post-Upgrade Configuration Changes

If you moved from a pre-1.0.0 version of Consul, you will find that _many_ of the configuration
options were renamed. Backwards compatibility has been maintained, so your old config options
will continue working after upgrading, but you will want to update those now to avoid issues when
moving to newer versions.

The full list of changes is available here:

- https://www.consul.io/docs/upgrading/upgrade-specific#deprecated-options-have-been-removed

You can make sure your config changes are valid by copying your existing configuration files,
making the changes, and then verifying them by running `consul validate $CONFIG_FILE1_PATH $CONFIG_FILE2_PATH ...`.
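
A minimal sketch of that workflow, assuming the agent configuration lives under
`/etc/consul.d` (adjust the paths to your environment):

```shell
# Work on a copy so the running agent keeps its known-good configuration.
cp -r /etc/consul.d /tmp/consul.d-new
# ...edit the files under /tmp/consul.d-new to use the renamed options...
consul validate /tmp/consul.d-new
```
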
Once your config is passing the validation check, replace your old config files with the new ones
and slowly roll your cluster again one server at a time leaving the leader agent for last in each


@@ -11,9 +11,9 @@ description: >-

## Introduction

This guide explains how to best upgrade a multi-datacenter Consul deployment that is using
a version of Consul >= 1.2.4 and < 1.6.9 while maintaining replication. If you are on a version
older than 1.2.4, please review our [Upgrading to 1.2.4](/docs/upgrading/instructions/upgrade-to-1-2-x)
guide. Due to changes to the ACL system, an ACL token migration will need to be performed
as part of this upgrade. The 1.6.x series is the last series that had support for legacy
ACL tokens, so this migration _must_ happen before upgrading past the 1.6.x release series.

@@ -25,7 +25,7 @@ Here is some documentation that may prove useful for reference during this upgra

around legacy ACL and new ACL configuration options here. Legacy ACL config options
will be listed as deprecated as of 1.4.0.

In this guide, we will be using an example with two datacenters (DCs) and will be
referring to them as DC1 and DC2. DC1 will be the primary datacenter.

## Requirements

@@ -34,9 +34,9 @@ referring to them as DC1 and DC2. DC1 will be the primary datacenter.

## Assumptions

This guide makes the following assumptions:

- You have at least two datacenters configured and have ACL replication enabled. If you are
not using multiple datacenters, you can follow along and simply skip the instructions related
to replication.
- You have not already performed the ACL token migration. If you have, please skip all related

@@ -56,14 +56,14 @@ Two very notable items are:

## Procedure

**1.** Check replication status in DC1 by issuing the following curl command from a
consul server in that DC:

```shell
curl -s -H "X-Consul-Token: $MASTER_TOKEN" localhost:8500/v1/acl/replication?pretty
```

You should receive output similar to this:

```json
{
```

@@ -79,14 +79,14 @@ You should see output that looks like this:

-> The primary datacenter (indicated by `acl_datacenter`) will always show as having replication
disabled, so this is normal even if replication is happening.

**2.** Check replication status in DC2 by issuing the following curl command from a
consul server in that DC:

```shell
curl -s -H "X-Consul-Token: $MASTER_TOKEN" localhost:8500/v1/acl/replication?pretty
```

You should receive output similar to this:

```json
{
```

@@ -99,20 +99,20 @@ You should see output that looks like this:

```json
}
```

**3.** Upgrade DC2 agents to version 1.6.9 by following our [General Upgrade Process](/docs/upgrading/instructions/general-process). _**Leave all DC1 agents at 1.2.4.**_ You should start observing log messages like this after that:

```log
2020/09/08 15:51:29 [DEBUG] acl: Cannot upgrade to new ACLs, servers in acl datacenter have not upgraded - found servers: true, mode: 3
2020/09/08 15:51:32 [ERR] consul: RPC failed to server 192.168.5.2:8300 in DC "dc1": rpc error making call: rpc: can't find service ConfigEntry.ListAll
```
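
If you want to watch for these messages as the rollout proceeds, something along
these lines works on systemd-based hosts (a sketch only; adjust the unit name
and filter to your setup):

```shell
# Follow the Consul unit's logs and surface ACL-related lines as they appear.
journalctl -u consul -f | grep -i acl
```
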
!> **Warning:** _It is important to upgrade your primary datacenter **last**_ (the one
specified in `acl_datacenter`). If you upgrade the primary datacenter first, it will
break replication between your other datacenters. If you upgrade your other datacenters
first, they will be in legacy mode and replication from your primary datacenter will
continue working.

**4.** Check that replication is still working in DC2.

From a Consul server in DC2:

@@ -147,30 +147,30 @@ curl -s -H "X-Consul-Token: $MASTER_TOKEN" localhost:8500/v1/acl/replication?pre

```shell
curl -s -H "X-Consul-Token: $MASTER_TOKEN" localhost:8500/v1/acl/list?pretty
```

`ReplicatedIndex` should have incremented and you should find the new token listed. If you try using CLI ACL commands you will receive this error:

```log
Failed to retrieve the token list: Unexpected response code: 500 (The ACL system is currently in legacy mode.)
```

This is because Consul is running in legacy mode. ACL CLI commands will not work and you have to hit the old ACL HTTP endpoints (which is why `curl` is being used above rather than the `consul` CLI client).
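
For reference, creating a test token through the legacy HTTP API looks roughly
like this. A sketch only; the token name is an arbitrary example, and the
request should be made against the primary DC:

```shell
# Create a throwaway token via the legacy ACL endpoint, then re-run the
# /v1/acl/list check in DC2 to confirm it replicates.
curl -s -X PUT -H "X-Consul-Token: $MASTER_TOKEN" \
  -d '{"Name": "replication-smoke-test", "Type": "client"}' \
  localhost:8500/v1/acl/create?pretty
```
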
**5.** Upgrade DC1 agents to version 1.6.9 by following our [General Upgrade Process](/docs/upgrading/instructions/general-process).

Once this is complete, you should observe a log entry like this from your server agents:

```log
2020/09/10 22:11:49 [DEBUG] acl: transitioning out of legacy ACL mode
```

**6.** Confirm that replication is still working in DC2 by issuing the following curl command from a
consul server in that DC:

```shell
curl -s -H "X-Consul-Token: $MASTER_TOKEN" localhost:8500/v1/acl/replication?pretty
```

You should receive output similar to this:

```json
{
```

@@ -192,15 +192,15 @@ You should see output that looks like this:

## Post-Upgrade Configuration Changes

When moving from a pre-1.4.0 version of Consul, you will find that several of the ACL-related
configuration options were renamed. Backwards compatibility is maintained in the 1.6.x release
series, so your old config options will continue working after upgrading, but you will want to
update those now to avoid issues when moving to newer versions.

These are the changes you will need to make:

- `acl_datacenter` is now named `primary_datacenter` (review our [docs](https://www.consul.io/docs/agent/options#primary_datacenter) for more info)
- `acl_default_policy`, `acl_down_policy`, `acl_ttl`, `acl_*_token` and `enable_acl_replication` options are now specified like this (review our [docs](https://www.consul.io/docs/agent/options#acl) for more info):

```hcl
acl {
  enabled = true/false
```

@@ -221,7 +221,7 @@ These are the changes you will need to make:

You can make sure your config changes are valid by copying your existing configuration files,
making the changes, and then verifying them by running `consul validate $CONFIG_FILE1_PATH $CONFIG_FILE2_PATH ...`.
Once your config is passing the validation check, replace your old config files with the new ones
and slowly roll your cluster again one server at a time leaving the leader agent for last in each


@@ -11,12 +11,11 @@ description: >-

## Introduction

This guide explains how to best upgrade a multi-datacenter Consul deployment that is using
a version of Consul >= 1.6.9 and < 1.8.4 while maintaining replication. If you are on a version
older than 1.6.9, please follow the link for the version you are on from [here](/docs/upgrading/instructions).

In this guide, we will be using an example with two datacenters (DCs) and will be
referring to them as DC1 and DC2. DC1 will be the primary datacenter.

## Requirements

@@ -27,27 +26,27 @@ referring to them as DC1 and DC2. DC1 will be the primary datacenter.

This guide makes the following assumptions:

- You have at least two datacenters configured and have ACL replication enabled. If you are
not using multiple datacenters, you can follow along and simply skip the instructions related
to replication.

## Considerations

There are not too many major changes that might cause issues upgrading from 1.6.9, but notable changes
are called out in our [Specific Version Details](/docs/upgrading/upgrade-specific#consul-1-8-0)
page. You can find more granular details in the full [changelog](https://github.com/hashicorp/consul/blob/master/CHANGELOG.md#183-august-12-2020).
Looking through these changes prior to upgrading is highly recommended.

## Procedure

**1.** Check replication status in DC1 by issuing the following curl command from a
consul server in that DC:

```shell
curl -s -H "X-Consul-Token: $MASTER_TOKEN" localhost:8500/v1/acl/replication?pretty
```

You should receive output similar to this:

```json
{
```

@@ -66,14 +65,14 @@ You should see output that looks like this:

-> The primary datacenter (indicated by `primary_datacenter`) will always show as having replication
disabled, so this is normal even if replication is happening.

**2.** Check replication status in DC2 by issuing the following curl command from a
consul server in that DC:

```shell
curl -s -H "X-Consul-Token: $MASTER_TOKEN" localhost:8500/v1/acl/replication?pretty
```

You should receive output similar to this:

```json
{
```

@@ -91,14 +90,14 @@ You should see output that looks like this:

**3.** Upgrade the Consul agents in all DCs to version 1.8.4 by following our [General Upgrade Process](/docs/upgrading/instructions/general-process).

**4.** Confirm that replication is still working in DC2 by issuing the following curl command from a
consul server in that DC:

```shell
curl -s -H "X-Consul-Token: $MASTER_TOKEN" localhost:8500/v1/acl/replication?pretty
```

You should receive output similar to this:

```json
{
```