---
layout: docs
page_title: General Upgrade Process
description: >-
  Specific versions of Consul may have additional information about the upgrade
  process beyond the standard flow.
---

# General Upgrade Process

## Introduction

This document describes some best practices that you should follow when
upgrading Consul. Some versions also have steps that are specific to that
version, so make sure you also review the [upgrade instructions](/docs/upgrading/instructions)
for the version you are on.

## Download the New Version

First, download the binary for the new version you want.

<Tabs>
<Tab heading="Binary">

All current and past versions of the OSS and Enterprise releases are
available here:

- https://releases.hashicorp.com/consul
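
For example, fetching a specific release on a 64-bit Linux server might look
like the sketch below (the version and platform are illustrative; substitute
the ones you actually need):

```
# Download and unpack the release (1.8.3 / linux_amd64 are example values)
curl -O https://releases.hashicorp.com/consul/1.8.3/consul_1.8.3_linux_amd64.zip
unzip consul_1.8.3_linux_amd64.zip

# Confirm the binary reports the expected version
./consul version
```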

</Tab>
<Tab heading="Docker">

Docker containers are available at these locations:

- **OSS:** https://hub.docker.com/_/consul
- **Enterprise:** https://hub.docker.com/r/hashicorp/consul-enterprise
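
For example, pulling a specific OSS release might look like this (the tag is
illustrative; pin whichever version you are upgrading to):

```
docker pull consul:1.8.3
```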

</Tab>
<Tab heading="Kubernetes">

If you are using Kubernetes, then please review our documentation for
[Upgrading Consul on Kubernetes](/docs/k8s/upgrade).
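
For deployments managed with the official Helm chart, the upgrade is typically
driven through Helm, roughly along these lines (the release name `consul` and
the values file are assumptions; the linked guide covers the full procedure):

```
# Refresh the chart repository, then upgrade the release
# ("consul" and "values.yaml" are placeholders for your deployment)
helm repo update
helm upgrade consul hashicorp/consul -f values.yaml
```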

</Tab>
</Tabs>

## Prepare for the Upgrade

**1.** Take a snapshot:

```
consul snapshot save backup.snap
```

You can inspect the snapshot to ensure it was successful with:

```
consul snapshot inspect backup.snap
```

Example output:

```
ID           2-1182-1542056499724
Size         4115
Index        1182
Term         2
Version      1
```

This will ensure you have a safe fallback option in case something goes wrong. Store
this snapshot somewhere safe. More documentation on snapshot usage is available here:

- [consul.io/commands/snapshot](/commands/snapshot)
- [Backup Consul Data and State tutorial](https://learn.hashicorp.com/tutorials/consul/backup-and-restore)
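
If you ever need to fall back to this snapshot, the same subcommand family can
restore it (a sketch; restoring overwrites current cluster state, so treat it
as a deliberate recovery step):

```
consul snapshot restore backup.snap
```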

**2.** Temporarily modify your Consul configuration so that its [log_level](/docs/agent/config/cli-flags#_log_level)
is set to `debug`. After doing this, issue the following command on your servers to
reload the configuration:

```
consul reload
```

This change will give you more information to work with in the event something goes wrong.
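
In an HCL agent configuration file, for example, this is a one-line change (a
minimal sketch; the file path is an assumption, so adjust it to wherever your
configuration lives):

```
# /etc/consul.d/consul.hcl (example path)
log_level = "debug"
```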

## Perform the Upgrade

**1.** Issue the following command to discover which server is currently the leader:

```
consul operator raft list-peers
```

You should receive output similar to this (exact formatting and content may differ based on version):

```
Node       ID                                    Address         State     Voter  RaftProtocol
dc1-node1  ae15858f-7f5f-4dcb-b7d5-710fdcdd2745  10.11.0.2:8300  leader    true   3
dc1-node2  20e6be1b-f1cb-4aab-929f-f7d2d43d9a96  10.11.0.3:8300  follower  true   3
dc1-node3  658c343b-8769-431f-a71a-236f9dbb17b3  10.11.0.4:8300  follower  true   3
```

Take note of which agent is the leader.

**2.** Copy the new `consul` binary onto your servers and replace the existing
binary with the new one.
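
For a single server, that might look something like the sketch below (the
hostname and install path are placeholders; run `which consul` on your servers
to find the real location):

```
# Copy the new binary to the server (hostname is a placeholder)
scp ./consul consul-server-1:/tmp/consul

# On the server, overwrite the old binary and confirm the version
sudo mv /tmp/consul /usr/local/bin/consul
consul version
```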

**3.** The following steps must be done in order on the server agents, leaving the leader
agent for last. First, force the server agent to leave the cluster with the following command:

```
consul leave
```

Then use a service management system (e.g., systemd or upstart) to restart the Consul service. If
you are not using a service management system, you must restart the agent manually.
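
With systemd, for example, the restart might look like this (the unit name
`consul` is an assumption; use whatever your deployment defines):

```
sudo systemctl restart consul
sudo systemctl status consul
```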

To validate that the agent has rejoined the cluster and is in sync with the leader, issue the
following command:

```
consul info
```

Check whether the `commit_index` and `last_log_index` fields have the same value. If done properly,
this should avoid an unexpected leadership election due to loss of quorum.
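
A quick way to compare the two fields is to filter the `raft` section of the
output with standard shell tools, for example:

```
consul info | grep -E 'commit_index|last_log_index'
```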

**4.** Double-check that all servers are showing up in the cluster as expected and are on
the correct version by issuing:

```
consul members
```

You should receive output similar to this:

```
Node       Address         Status  Type    Build  Protocol  DC
dc1-node1  10.11.0.2:8301  alive   server  1.8.3  2         dc1
dc1-node2  10.11.0.3:8301  alive   server  1.8.3  2         dc1
dc1-node3  10.11.0.4:8301  alive   server  1.8.3  2         dc1
```

Also double-check the raft state to make sure there is a leader and sufficient voters:

```
consul operator raft list-peers
```

You should receive output similar to this:

```
Node       ID                                    Address         State     Voter  RaftProtocol
dc1-node1  ae15858f-7f5f-4dcb-b7d5-710fdcdd2745  10.11.0.2:8300  leader    true   3
dc1-node2  20e6be1b-f1cb-4aab-929f-f7d2d43d9a96  10.11.0.3:8300  follower  true   3
dc1-node3  658c343b-8769-431f-a71a-236f9dbb17b3  10.11.0.4:8300  follower  true   3
```

**5.** Set your `log_level` back to its original value and issue the following command
on your servers to reload the configuration:

```
consul reload
```

## Troubleshooting

Most problems with upgrading occur due to either failing to upgrade the leader agent last,
or failing to wait for a follower agent to fully rejoin a cluster before moving
on to another server. This can cause a loss of quorum and occasionally can result in
all of your servers attempting to kick off leadership elections endlessly without ever
reaching a quorum and electing a leader.

Most of these problems can be solved by following the steps outlined in our
[Outage Recovery](https://learn.hashicorp.com/tutorials/consul/recovery-outage) document.
If you are still having trouble after trying the recovery steps outlined there,
then the following options for further assistance are available:

- OSS users without paid support plans can request help in our [Community Forum](https://discuss.hashicorp.com/c/consul/29)
- Enterprise and OSS users with paid support plans can contact [HashiCorp Support](https://support.hashicorp.com/)

When contacting HashiCorp Support, please include the following information in your ticket:

- The Consul version you were upgrading FROM and TO.
- [Debug level logs](/docs/agent/config/cli-flags#_log_level) from all servers in the cluster
  that you are having trouble with. These should include logs from prior to the upgrade attempt
  up through the current time. If your logs were not set at debug level prior to the
  upgrade, please include those logs as well. Also, update your config to use debug logs,
  and include logs from after that was done.
- Your Consul config files (please redact any secrets).
- Output from `consul members -detailed` and `consul operator raft list-peers` from each
  server in your cluster.
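
If changing the configured log level on a struggling server is impractical,
`consul monitor` can stream debug-level logs from a running agent without a
restart, which may help when gathering the items above:

```
consul monitor -log-level=debug > consul-debug.log
```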
|