diff --git a/website/data/docs-navigation.js b/website/data/docs-navigation.js
index 94cc45888..cdd3ebabe 100644
--- a/website/data/docs-navigation.js
+++ b/website/data/docs-navigation.js
@@ -247,7 +247,19 @@ export default [
   'download-tools',
   {
     category: 'upgrading',
-    content: ['compatibility', 'upgrade-specific'],
+    content: [
+      'compatibility',
+      'upgrade-specific',
+      {
+        category: 'instructions',
+        content: [
+          'general-process',
+          'upgrade-to-1-2-x',
+          'upgrade-to-1-6-x',
+          'upgrade-to-1-8-x',
+        ],
+      },
+    ],
   },
   {
     category: 'troubleshoot',
diff --git a/website/pages/docs/upgrading/index.mdx b/website/pages/docs/upgrading/index.mdx
index 11f1bb62c..a9875bdf1 100644
--- a/website/pages/docs/upgrading/index.mdx
+++ b/website/pages/docs/upgrading/index.mdx
@@ -45,6 +45,16 @@ Consul is A, and version B is released.
   by running `consul members` to make sure all members have the latest build and
   highest protocol version.

+## Large Version Jumps
+
+Operating a Consul datacenter that is multiple major versions behind the current major
+version can increase the risk incurred during upgrades. We encourage our users to
+remain no more than two major versions behind (e.g., if 1.8.x is the current release,
+do not use versions older than 1.6.x). If you find yourself many major versions behind
+and need to upgrade, please review our
+[Upgrade Instructions page](/docs/upgrading/instructions) for information on
+how to perform those upgrades.
+
 ## Backward Incompatible Upgrades

 In some cases, a backwards incompatible update may be released. This has not
diff --git a/website/pages/docs/upgrading/instructions/general-process.mdx b/website/pages/docs/upgrading/instructions/general-process.mdx
new file mode 100644
index 000000000..bcebb94dc
--- /dev/null
+++ b/website/pages/docs/upgrading/instructions/general-process.mdx
@@ -0,0 +1,194 @@
+---
+layout: docs
+page_title: General Upgrade Process
+sidebar_title: General Process
+description: >-
+  Specific versions of Consul may have additional information about the upgrade
+  process beyond the standard flow.
+---
+
+# General Upgrade Process
+
+## Introduction
+
+This document describes some best practices that you should follow when
+upgrading Consul. Some versions also have steps that are specific to that
+version, so make sure you also review the [upgrade instructions](/docs/upgrading/instructions)
+for the version you are on.
+
+## Download the New Version
+
+First, download the binary for the new version you want.
+
+All current and past versions of the OSS and Enterprise releases are
+available here:
+
+- https://releases.hashicorp.com/consul
+
+Docker containers are available at these locations:
+
+- **OSS:** https://hub.docker.com/_/consul
+- **Enterprise:** https://hub.docker.com/r/hashicorp/consul-enterprise
+
+If you are using Kubernetes, then please review our documentation for
+[Upgrading Consul on Kubernetes](/docs/k8s/operations/upgrading).
+
+## Prepare for the Upgrade
+
+**1.** Take a snapshot:
+
+```
+consul snapshot save backup.snap
+```
+
+You can inspect the snapshot to ensure it was successful with:
+
+```
+consul snapshot inspect backup.snap
+```
+
+Example output:
+
+```
+ID           2-1182-1542056499724
+Size         4115
+Index        1182
+Term         2
+Version      1
+```
+
+This will ensure you have a safe fallback option in case something goes wrong. Store
+this snapshot somewhere safe.
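+
+For example, a timestamped backup could be taken and verified like this (a minimal
+sketch; the storage destination is a placeholder you should replace):
+
+```shell
+# Save a snapshot with a timestamped name, verify it, then copy it off the server
+SNAP="consul-backup-$(date +%Y%m%d-%H%M%S).snap"
+consul snapshot save "$SNAP"
+consul snapshot inspect "$SNAP" && cp "$SNAP" /path/to/safe/storage/
+```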
+More documentation on snapshot usage is available here:
+
+- https://www.consul.io/docs/commands/snapshot
+- https://learn.hashicorp.com/tutorials/consul/backup-and-restore
+
+**2.** Temporarily modify your Consul configuration so that its [log_level](/docs/agent/options.html#_log_level)
+is set to `debug`. After doing this, issue the following command on your servers to
+reload the configuration:
+
+```
+consul reload
+```
+
+This change will give you more information to work with in the event something goes wrong.
+
+## Perform the Upgrade
+
+**1.** Issue the following command to discover which server is currently the leader:
+
+```
+consul operator raft list-peers
+```
+
+You should receive output similar to this (exact formatting and content may differ based on version):
+
+```
+Node       ID                                    Address         State     Voter  RaftProtocol
+dc1-node1  ae15858f-7f5f-4dcb-b7d5-710fdcdd2745  10.11.0.2:8300  leader    true   3
+dc1-node2  20e6be1b-f1cb-4aab-929f-f7d2d43d9a96  10.11.0.3:8300  follower  true   3
+dc1-node3  658c343b-8769-431f-a71a-236f9dbb17b3  10.11.0.4:8300  follower  true   3
+```
+
+Take note of which agent is the leader.
+
+**2.** Copy the new `consul` binary onto your servers and replace the existing
+binary with the new one.
+
+**3.** The following steps must be done in order on the server agents, leaving the leader
+agent for last. First force the server agent to leave the cluster with the following command:
+
+```
+consul leave
+```
+
+Then, use a service management system (e.g., systemd or upstart) to restart the Consul service. If
+you are not using a service management system, you must restart the agent manually.
+
+To validate that the agent has rejoined the cluster and is in sync with the leader, issue the
+following command:
+
+```
+consul info
+```
+
+Check whether the `commit_index` and `last_log_index` fields have the same value. If done properly,
+this should avoid an unexpected leadership election due to loss of quorum.
+
+**4.** Double-check that all servers are showing up in the cluster as expected and are on
+the correct version by issuing:
+
+```
+consul members
+```
+
+You should receive output similar to this:
+
+```
+Node       Address         Status  Type    Build  Protocol  DC
+dc1-node1  10.11.0.2:8301  alive   server  1.8.3  2         dc1
+dc1-node2  10.11.0.3:8301  alive   server  1.8.3  2         dc1
+dc1-node3  10.11.0.4:8301  alive   server  1.8.3  2         dc1
+```
+
+Also double-check the raft state to make sure there is a leader and sufficient voters:
+
+```
+consul operator raft list-peers
+```
+
+You should receive output similar to this:
+
+```
+Node       ID                                    Address         State     Voter  RaftProtocol
+dc1-node1  ae15858f-7f5f-4dcb-b7d5-710fdcdd2745  10.11.0.2:8300  leader    true   3
+dc1-node2  20e6be1b-f1cb-4aab-929f-f7d2d43d9a96  10.11.0.3:8300  follower  true   3
+dc1-node3  658c343b-8769-431f-a71a-236f9dbb17b3  10.11.0.4:8300  follower  true   3
+```
+
+**5.** Set your `log_level` back to its original value and issue the following command
+on your servers to reload the configuration:
+
+```
+consul reload
+```
+
+## Troubleshooting
+
+Most problems with upgrading occur due to either failing to upgrade the leader agent last,
+or failing to wait for a follower agent to fully rejoin a cluster before moving
+on to another server. This can cause a loss of quorum and occasionally can result in
+all of your servers attempting to kick off leadership elections endlessly without ever
+reaching a quorum and electing a leader.
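+
+One way to avoid the second failure mode is to poll `consul info` on the restarted
+follower until its raft indexes converge (a minimal sketch using standard shell tools;
+the `commit_index` and `last_log_index` field names come from the `consul info` check
+in step 3 of the upgrade procedure above):
+
+```shell
+# Block until this server's commit_index and last_log_index report the same value
+until [ "$(consul info | awk '/commit_index|last_log_index/ {print $3}' | sort -u | wc -l)" -eq 1 ]; do
+  sleep 1
+done
+```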
+
+Most of these problems can be solved by following the steps outlined in our
+[Outage Recovery](https://learn.hashicorp.com/tutorials/consul/recovery-outage) document.
+If you are still having trouble after trying the recovery steps outlined there,
+then the following options for further assistance are available:
+
+- OSS users without paid support plans can request help in our [Community Forum](https://discuss.hashicorp.com/c/consul/29)
+- Enterprise and OSS users with paid support plans can contact [HashiCorp Support](https://support.hashicorp.com/)
+
+When contacting HashiCorp Support, please include the following information in your ticket:
+
+- The Consul version you were upgrading FROM and TO.
+- [Debug level logs](/docs/agent/options.html#_log_level) from all servers in the cluster
+  that you are having trouble with. These should include logs from prior to the upgrade attempt
+  up through the current time. If your logs were not set at debug level prior to the
+  upgrade, please include those older logs as well, then switch your config to debug level
+  and also include the logs gathered after that change.
+- Your Consul config files (please redact any secrets).
+- Output from `consul members -detailed` and `consul operator raft list-peers` from each
+  server in your cluster.
diff --git a/website/pages/docs/upgrading/instructions/index.mdx b/website/pages/docs/upgrading/instructions/index.mdx
new file mode 100644
index 000000000..ece299c7b
--- /dev/null
+++ b/website/pages/docs/upgrading/instructions/index.mdx
@@ -0,0 +1,37 @@
+---
+layout: docs
+page_title: Upgrade Instructions
+sidebar_title: Upgrade Instructions
+description: >-
+  Specific versions of Consul may have additional information about the upgrade
+  process beyond the standard flow.
+---
+
+# Upgrade Instructions
+
+This document is intended to help users who are many versions behind upgrade safely.
+Our recommended upgrade path is moving from version 0.8.5 to 1.2.4, then to 1.6.9, and
+then to the current version. To get started, choose the release series you are currently
+on below and follow the instructions until you are on the latest version. The upgrade
+guides will mention notable changes and link to relevant changelogs. At each step, we
+recommend reviewing the changelogs for the versions between the one you are on and the
+one you are upgrading to, in order to familiarize yourself with the changes.
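+
+If you are not sure which release series you are on, you can check the local binary and
+the build running on every agent (a minimal sketch; the `Build` column position is an
+assumption about your version's `consul members` output format):
+
+```shell
+# Version of the local binary
+consul version
+
+# Node name and build of each agent in the datacenter
+consul members | awk 'NR > 1 {print $1, $5}'
+```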
+
+## Getting Started
+
+To get instructions for your upgrade, please choose the release series you are _currently using_:
+
+- [0.8.x](/docs/upgrading/instructions/upgrade-to-1-2-x)
+- [0.9.x](/docs/upgrading/instructions/upgrade-to-1-2-x)
+- [1.0.x](/docs/upgrading/instructions/upgrade-to-1-2-x)
+- [1.1.x](/docs/upgrading/instructions/upgrade-to-1-2-x)
+- [1.2.x](/docs/upgrading/instructions/upgrade-to-1-6-x)
+- [1.3.x](/docs/upgrading/instructions/upgrade-to-1-6-x)
+- [1.4.x](/docs/upgrading/instructions/upgrade-to-1-6-x)
+- [1.5.x](/docs/upgrading/instructions/upgrade-to-1-6-x)
+- [1.6.x](/docs/upgrading/instructions/upgrade-to-1-8-x)
+- [1.7.x](/docs/upgrading/instructions/upgrade-to-1-8-x)
+
+If you are using version 0.7.x or older, please contact support for assistance:
+
+- OSS users without paid support plans can request help in our [Community Forum](https://discuss.hashicorp.com/c/consul/29)
+- Enterprise and OSS users with paid support plans can contact [HashiCorp Support](https://support.hashicorp.com/)
diff --git a/website/pages/docs/upgrading/instructions/upgrade-to-1-2-x.mdx b/website/pages/docs/upgrading/instructions/upgrade-to-1-2-x.mdx
new file mode 100644
index 000000000..658fcd630
--- /dev/null
+++ b/website/pages/docs/upgrading/instructions/upgrade-to-1-2-x.mdx
@@ -0,0 +1,131 @@
+---
+layout: docs
+page_title: Upgrading to 1.2.4
+sidebar_title: Upgrading to 1.2.4
+description: >-
+  Specific versions of Consul may have additional information about the upgrade
+  process beyond the standard flow.
+---
+
+# Upgrading to 1.2.4
+
+## Introduction
+
+This guide explains how to best upgrade a multi-datacenter Consul deployment that is using
+a version of Consul >= 0.8.5 and < 1.2.4 while maintaining replication. If you are on a version
+older than 0.8.5, but are in the 0.8.x series, please upgrade to 0.8.5 by following our
+[General Upgrade Process](/docs/upgrading/instructions/general-process). If you are on a version
+older than 0.8.0, please [contact support](https://support.hashicorp.com).
+
+In this guide, we will be using an example with two datacenters (DCs) and will be
+referring to them as DC1 and DC2. DC1 will be the primary datacenter.
+
+## Requirements
+
+- All Consul servers should be on a version of Consul >= 0.8.5 and < 1.2.4.
+- You need a Consul cluster with at least 3 nodes to perform this upgrade as documented. If
+  you either have a single node cluster or several single node clusters joined via WAN, the
+  servers will come up in a `No cluster leader` loop after upgrading. If that happens, you will
+  need to recover the cluster using the method described [here](https://learn.hashicorp.com/tutorials/consul/recovery-outage#manual-recovery-using-peers-json).
+  You can avoid this issue entirely by growing your cluster to 3 nodes prior to upgrading.
+
+## Assumptions
+
+This guide makes the following assumptions:
+
+- You have at least two datacenters configured and have ACL replication enabled. If you are
+  not using multiple datacenters, you can follow along and simply skip the instructions related
+  to replication.
+
+## Considerations
+
+There are not too many major changes that might cause issues upgrading from 0.8.5, but notable changes
+are called out in our [Specific Version Details](/docs/upgrading/upgrade-specific#consul-1-1-0)
+page. You can find more granular details in the full [changelog](https://github.com/hashicorp/consul/blob/master/CHANGELOG.md#124-november-27-2018).
+Looking through these changes prior to upgrading is highly recommended.
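+
+If you need to confirm that ACL replication is actually enabled before you begin, the
+pre-1.4.0 server options involved look roughly like this (a sketch of the legacy-style
+options; the token value is a placeholder):
+
+```hcl
+# On servers in the secondary datacenter (e.g., dc2)
+acl_datacenter = "dc1"
+acl_replication_token = "<token with ACL read access in dc1>"
+```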
+
+## Procedure
+
+**1.** Check replication status in DC1 by issuing the following curl command from a
+consul server in that DC:
+
+```shell
+curl -s -H "X-Consul-Token: $MASTER_TOKEN" localhost:8500/v1/acl/replication?pretty
+```
+
+You should receive output similar to this:
+
+```json
+{
+  "Enabled": false,
+  "Running": false,
+  "SourceDatacenter": "",
+  "ReplicatedIndex": 0,
+  "LastSuccess": "0001-01-01T00:00:00Z",
+  "LastError": "0001-01-01T00:00:00Z"
+}
+```
+
+-> The primary datacenter (indicated by `acl_datacenter`) will always show as having replication
+disabled, so this is normal even if replication is happening.
+
+**2.** Check replication status in DC2 by issuing the following curl command from a
+consul server in that DC:
+
+```shell
+curl -s -H "X-Consul-Token: $MASTER_TOKEN" localhost:8500/v1/acl/replication?pretty
+```
+
+You should receive output similar to this:
+
+```json
+{
+  "Enabled": true,
+  "Running": true,
+  "SourceDatacenter": "dc1",
+  "ReplicatedIndex": 9,
+  "LastSuccess": "2020-09-10T21:16:15Z",
+  "LastError": "0001-01-01T00:00:00Z"
+}
+```
+
+**3.** Upgrade the Consul agents in all DCs to version 1.2.4 by following our [General Upgrade Process](/docs/upgrading/instructions/general-process).
+This should be done one DC at a time, leaving the primary DC for last.
+
+**4.** Confirm that replication is still working in DC2 by issuing the following curl command from a
+consul server in that DC:
+
+```shell
+curl -s -H "X-Consul-Token: $MASTER_TOKEN" localhost:8500/v1/acl/replication?pretty
+```
+
+You should receive output similar to this:
+
+```json
+{
+  "Enabled": true,
+  "Running": true,
+  "SourceDatacenter": "dc1",
+  "ReplicatedIndex": 9,
+  "LastSuccess": "2020-09-10T21:16:15Z",
+  "LastError": "0001-01-01T00:00:00Z"
+}
+```
+
+## Post-Upgrade Configuration Changes
+
+If you moved from a pre-1.0.0 version of Consul, you will find that _many_ of the configuration
+options were renamed. Backwards compatibility has been maintained, so your old config options
+will continue working after upgrading, but you will want to update them now to avoid issues when
+moving to newer versions.
+
+The full list of changes is available here:
+
+- https://www.consul.io/docs/upgrading/upgrade-specific#deprecated-options-have-been-removed
+
+You can make sure your config changes are valid by copying your existing configuration files,
+making the changes, and then verifying them with `consul validate $CONFIG_FILE1_PATH $CONFIG_FILE2_PATH ...`.
+
+Once your config is passing the validation check, replace your old config files with the new ones
+and slowly roll your cluster again one server at a time, leaving the leader agent for last in each
+datacenter.
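+
+For example, validating a copied-and-edited configuration directory might look like
+this (a minimal sketch; paths are placeholders):
+
+```shell
+# Work on a copy so the running agent's config stays untouched
+cp -r /etc/consul.d /etc/consul.d.new
+# ...edit the files under /etc/consul.d.new, then:
+consul validate /etc/consul.d.new
+```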
diff --git a/website/pages/docs/upgrading/instructions/upgrade-to-1-6-x.mdx b/website/pages/docs/upgrading/instructions/upgrade-to-1-6-x.mdx
new file mode 100644
index 000000000..a441e87a4
--- /dev/null
+++ b/website/pages/docs/upgrading/instructions/upgrade-to-1-6-x.mdx
@@ -0,0 +1,228 @@
+---
+layout: docs
+page_title: Upgrading to 1.6.9
+sidebar_title: Upgrading to 1.6.9
+description: >-
+  Specific versions of Consul may have additional information about the upgrade
+  process beyond the standard flow.
+---
+
+# Upgrading to 1.6.9
+
+## Introduction
+
+This guide explains how to best upgrade a multi-datacenter Consul deployment that is using
+a version of Consul >= 1.2.4 and < 1.6.9 while maintaining replication. If you are on a version
+older than 1.2.4, please review our [Upgrading to 1.2.4](/docs/upgrading/instructions/upgrade-to-1-2-x)
+guide. Due to changes to the ACL system, an ACL token migration will need to be performed
+as part of this upgrade. The 1.6.x series is the last series that supports legacy
+ACL tokens, so this migration _must_ happen before upgrading past the 1.6.x release series.
+Here is some documentation that may prove useful for reference during this upgrade process:
+
+- [ACL System in Legacy Mode](https://www.consul.io/docs/acl/acl-legacy) - You can find
+  information about legacy configuration options and differences between modes here.
+- [Configuration](https://www.consul.io/docs/agent/options) - You can find more details
+  about legacy ACL and new ACL configuration options here. Legacy ACL config options
+  are listed as deprecated as of 1.4.0.
+
+In this guide, we will be using an example with two datacenters (DCs) and will be
+referring to them as DC1 and DC2. DC1 will be the primary datacenter.
+
+## Requirements
+
+- All Consul servers should be on a version of Consul >= 1.2.4 and < 1.6.9.
+
+## Assumptions
+
+This guide makes the following assumptions:
+
+- You have at least two datacenters configured and have ACL replication enabled. If you are
+  not using multiple datacenters, you can follow along and simply skip the instructions related
+  to replication.
+- You have not already performed the ACL token migration. If you have, please skip all related
+  steps.
+
+## Considerations
+
+There are quite a number of changes between these release series. Notable changes
+are called out in our [Specific Version Details](/docs/upgrading/upgrade-specific#consul-1-6-3)
+page. You can find more granular details in the full [changelog](https://github.com/hashicorp/consul/blob/master/CHANGELOG.md).
+Looking through these changes prior to upgrading is highly recommended.
+
+Two very notable items are:
+
+- 1.6.2 introduced stricter JSON decoding. Invalid JSON that was previously ignored might
+  result in errors now (e.g., `Connect: null` in service definitions; see the example below).
+  See [[GH#6680](https://github.com/hashicorp/consul/pull/6680)].
+- 1.6.3 introduced the [http_max_conns_per_client](https://www.consul.io/docs/agent/options.html#http_max_conns_per_client)
+  limit, which defaults to 200. Prior to this, connections per client were unbounded.
+  [[GH#7159](https://github.com/hashicorp/consul/issues/7159)]
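+
+For the first item, a service definition like the following would have been accepted
+before 1.6.2 but will be rejected afterwards (an illustrative snippet; remove the null
+field, or replace it with a valid object, before upgrading):
+
+```json
+{
+  "service": {
+    "name": "web",
+    "port": 8080,
+    "Connect": null
+  }
+}
+```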
+
+## Procedure
+
+**1.** Check the replication status of the primary datacenter (DC1) by issuing the following curl command from a
+consul server in that DC:
+
+```shell
+curl -s -H "X-Consul-Token: $MASTER_TOKEN" localhost:8500/v1/acl/replication?pretty
+```
+
+You should receive output similar to this:
+
+```json
+{
+  "Enabled": false,
+  "Running": false,
+  "SourceDatacenter": "",
+  "ReplicatedIndex": 0,
+  "LastSuccess": "0001-01-01T00:00:00Z",
+  "LastError": "0001-01-01T00:00:00Z"
+}
+```
+
+-> The primary datacenter (indicated by `acl_datacenter`) will always show as having replication
+disabled, so this is normal even if replication is happening.
+
+**2.** Check replication status in DC2 by issuing the following curl command from a
+consul server in that DC:
+
+```shell
+curl -s -H "X-Consul-Token: $MASTER_TOKEN" localhost:8500/v1/acl/replication?pretty
+```
+
+You should receive output similar to this:
+
+```json
+{
+  "Enabled": true,
+  "Running": true,
+  "SourceDatacenter": "dc1",
+  "ReplicatedIndex": 9,
+  "LastSuccess": "2020-09-10T21:16:15Z",
+  "LastError": "0001-01-01T00:00:00Z"
+}
+```
+
+**3.** Upgrade DC2 agents to version 1.6.9 by following our [General Upgrade Process](/docs/upgrading/instructions/general-process).
+_**Leave all DC1 agents at 1.2.4.**_ After that, you should start observing log messages like this:
+
+```log
+2020/09/08 15:51:29 [DEBUG] acl: Cannot upgrade to new ACLs, servers in acl datacenter have not upgraded - found servers: true, mode: 3
+2020/09/08 15:51:32 [ERR] consul: RPC failed to server 192.168.5.2:8300 in DC "dc1": rpc error making call: rpc: can't find service ConfigEntry.ListAll
+```
+
+!> **Warning:** _It is important to upgrade your primary datacenter **last**_ (the one
+specified in `acl_datacenter`). If you upgrade the primary datacenter first, it will
+break replication between your other datacenters. If you upgrade your other datacenters
+first, they will be in legacy mode and replication from your primary datacenter will
+continue working.
+
+**4.** Check that replication is still working in DC2.
+
+From a Consul server in DC2:
+
+```shell
+curl -s -H "X-Consul-Token: $MASTER_TOKEN" localhost:8500/v1/acl/replication?pretty
+curl -s -H "X-Consul-Token: $MASTER_TOKEN" localhost:8500/v1/acl/list?pretty
+```
+
+Take note of the `ReplicatedIndex` value.
+
+Create a new file named `test-ui-token.json` containing the payload for creating a new token:
+
+```json
+{
+  "Name": "UI Token",
+  "Type": "client",
+  "Rules": "key \"\" { policy = \"write\" } node \"\" { policy = \"read\" } service \"\" { policy = \"read\" }"
+}
+```
+
+From a Consul server in DC1, create a new token using that file:
+
+```shell
+curl -X PUT -H "X-Consul-Token: $MASTER_TOKEN" -d @test-ui-token.json localhost:8500/v1/acl/create
+```
+
+From a Consul server in DC2:
+
+```shell
+curl -s -H "X-Consul-Token: $MASTER_TOKEN" localhost:8500/v1/acl/replication?pretty
+curl -s -H "X-Consul-Token: $MASTER_TOKEN" localhost:8500/v1/acl/list?pretty
+```
+
+`ReplicatedIndex` should have incremented and you should find the new token listed. If you try
+using the CLI ACL commands, you will receive this error:
+
+```log
+Failed to retrieve the token list: Unexpected response code: 500 (The ACL system is currently in legacy mode.)
+```
+
+This is because Consul is in legacy mode. The ACL CLI commands will not work, so you have to use
+the old ACL HTTP endpoints (which is why `curl` is being used above rather than the `consul` CLI client).
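+
+If you want to double-check a specific token, the legacy read endpoint can be queried
+directly in DC2 (a sketch; `$TOKEN_ID` is assumed to hold the `ID` value returned by
+the create call above):
+
+```shell
+curl -s -H "X-Consul-Token: $MASTER_TOKEN" "localhost:8500/v1/acl/info/$TOKEN_ID?pretty"
+```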
+
+**5.** Upgrade DC1 agents to version 1.6.9 by following our [General Upgrade Process](/docs/upgrading/instructions/general-process).
+
+Once this is complete, you should observe a log entry like this from your server agents:
+
+```log
+2020/09/10 22:11:49 [DEBUG] acl: transitioning out of legacy ACL mode
+```
+
+**6.** Confirm that replication is still working in DC2 by issuing the following curl command from a
+consul server in that DC:
+
+```shell
+curl -s -H "X-Consul-Token: $MASTER_TOKEN" localhost:8500/v1/acl/replication?pretty
+```
+
+You should receive output similar to this:
+
+```json
+{
+  "Enabled": true,
+  "Running": true,
+  "SourceDatacenter": "dc1",
+  "ReplicationType": "tokens",
+  "ReplicatedIndex": 259,
+  "ReplicatedRoleIndex": 1,
+  "ReplicatedTokenIndex": 260,
+  "LastSuccess": "2020-09-10T22:11:51Z",
+  "LastError": "2020-09-10T22:11:43Z"
+}
+```
+
+**7.** Migrate your legacy ACL tokens to the new ACL system by following the instructions in our
+[ACL Token Migration guide](https://www.consul.io/docs/acl/acl-migrate-tokens).
+
+~> This step _must_ be completed before upgrading to a version higher than 1.6.x.
+
+## Post-Upgrade Configuration Changes
+
+When moving from a pre-1.4.0 version of Consul, you will find that several of the ACL-related
+configuration options were renamed. Backwards compatibility is maintained in the 1.6.x release
+series, so your old config options will continue working after upgrading, but you will want to
+update them now to avoid issues when moving to newer versions.
+
+These are the changes you will need to make:
+
+- `acl_datacenter` is now named `primary_datacenter` (review our [docs](https://www.consul.io/docs/agent/options#primary_datacenter) for more info)
+- `acl_default_policy`, `acl_down_policy`, `acl_ttl`, `acl_*_token` and `enable_acl_replication` options are now specified like this (review our [docs](https://www.consul.io/docs/agent/options#acl) for more info):
+  ```hcl
+  acl {
+    enabled = true/false
+    default_policy = "..."
+    down_policy = "..."
+    policy_ttl = "..."
+    role_ttl = "..."
+    enable_token_replication = true/false
+    enable_token_persistence = true/false
+    tokens {
+      master = "..."
+      agent = "..."
+      agent_master = "..."
+      replication = "..."
+      default = "..."
+    }
+  }
+  ```
+
+You can make sure your config changes are valid by copying your existing configuration files,
+making the changes, and then verifying them with `consul validate $CONFIG_FILE1_PATH $CONFIG_FILE2_PATH ...`.
+
+Once your config is passing the validation check, replace your old config files with the new ones
+and slowly roll your cluster again one server at a time, leaving the leader agent for last in each
+datacenter.
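+
+For reference, a legacy configuration like the following maps onto the `acl` block shown
+above (a sketch; values are placeholders):
+
+```hcl
+# Pre-1.4.0 style options being replaced
+acl_datacenter = "dc1"            # becomes primary_datacenter
+acl_default_policy = "deny"       # becomes acl.default_policy
+acl_down_policy = "extend-cache"  # becomes acl.down_policy
+acl_master_token = "..."          # becomes acl.tokens.master
+acl_agent_token = "..."           # becomes acl.tokens.agent
+enable_acl_replication = true     # becomes acl.enable_token_replication
+```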
diff --git a/website/pages/docs/upgrading/instructions/upgrade-to-1-8-x.mdx b/website/pages/docs/upgrading/instructions/upgrade-to-1-8-x.mdx
new file mode 100644
index 000000000..63bc4c740
--- /dev/null
+++ b/website/pages/docs/upgrading/instructions/upgrade-to-1-8-x.mdx
@@ -0,0 +1,118 @@
+---
+layout: docs
+page_title: Upgrading to 1.8.4
+sidebar_title: Upgrading to 1.8.4
+description: >-
+  Specific versions of Consul may have additional information about the upgrade
+  process beyond the standard flow.
+---
+
+# Upgrading to 1.8.4
+
+## Introduction
+
+This guide explains how to best upgrade a multi-datacenter Consul deployment that is using
+a version of Consul >= 1.6.9 and < 1.8.4 while maintaining replication. If you are on a version
+older than 1.6.9, please follow the appropriate link for your version from the
+[Upgrade Instructions index](/docs/upgrading/instructions).
+
+In this guide, we will be using an example with two datacenters (DCs) and will be
+referring to them as DC1 and DC2. DC1 will be the primary datacenter.
+
+## Requirements
+
+- All Consul servers should be on a version of Consul >= 1.6.9 and < 1.8.4.
+
+## Assumptions
+
+This guide makes the following assumptions:
+
+- You have at least two datacenters configured and have ACL replication enabled. If you are
+  not using multiple datacenters, you can follow along and simply skip the instructions related
+  to replication.
+
+## Considerations
+
+There are not too many major changes that might cause issues upgrading from 1.6.9, but notable changes
+are called out in our [Specific Version Details](/docs/upgrading/upgrade-specific#consul-1-8-0)
+page. You can find more granular details in the full [changelog](https://github.com/hashicorp/consul/blob/master/CHANGELOG.md#183-august-12-2020).
+Looking through these changes prior to upgrading is highly recommended.
+
+## Procedure
+
+**1.** Check replication status in DC1 by issuing the following curl command from a
+consul server in that DC:
+
+```shell
+curl -s -H "X-Consul-Token: $MASTER_TOKEN" localhost:8500/v1/acl/replication?pretty
+```
+
+You should receive output similar to this:
+
+```json
+{
+  "Enabled": false,
+  "Running": false,
+  "SourceDatacenter": "",
+  "ReplicationType": "",
+  "ReplicatedIndex": 0,
+  "ReplicatedRoleIndex": 0,
+  "ReplicatedTokenIndex": 0,
+  "LastSuccess": "0001-01-01T00:00:00Z",
+  "LastError": "0001-01-01T00:00:00Z"
+}
+```
+
+-> The primary datacenter (indicated by `primary_datacenter`) will always show as having replication
+disabled, so this is normal even if replication is happening.
+
+**2.** Check replication status in DC2 by issuing the following curl command from a
+consul server in that DC:
+
+```shell
+curl -s -H "X-Consul-Token: $MASTER_TOKEN" localhost:8500/v1/acl/replication?pretty
+```
+
+You should receive output similar to this:
+
+```json
+{
+  "Enabled": true,
+  "Running": true,
+  "SourceDatacenter": "dc1",
+  "ReplicationType": "tokens",
+  "ReplicatedIndex": 672,
+  "ReplicatedRoleIndex": 1,
+  "ReplicatedTokenIndex": 677,
+  "LastSuccess": "2020-09-14T17:06:07Z",
+  "LastError": "2020-09-14T16:53:22Z"
+}
+```
+
+**3.** Upgrade the Consul agents in all DCs to version 1.8.4 by following our [General Upgrade Process](/docs/upgrading/instructions/general-process).
+
+**4.** Confirm that replication is still working in DC2 by issuing the following curl command from a
+consul server in that DC:
+
+```shell
+curl -s -H "X-Consul-Token: $MASTER_TOKEN" localhost:8500/v1/acl/replication?pretty
+```
+
+You should receive output similar to this:
+
+```json
+{
+  "Enabled": true,
+  "Running": true,
+  "SourceDatacenter": "dc1",
+  "ReplicationType": "tokens",
+  "ReplicatedIndex": 672,
+  "ReplicatedRoleIndex": 1,
+  "ReplicatedTokenIndex": 677,
+  "LastSuccess": "2020-09-14T17:15:16Z",
+  "LastError": "0001-01-01T00:00:00Z"
+}
+```
+
+## Post-Upgrade Configuration Changes
+
+No configuration changes are required for this upgrade.
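+
+As a final check, you can confirm that every agent reports the new build (a minimal
+sketch; the `Build` column position is an assumption about the `consul members` output
+format shown in the General Upgrade Process):
+
+```shell
+# Print any agent that is not yet running 1.8.4
+consul members | awk 'NR > 1 && $5 != "1.8.4" {print $1 " is still running " $5}'
+```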