---
layout: docs
page_title: Raft - Storage Backends - Configuration
sidebar_title: Raft
description: |-
  The Raft storage backend is used to persist Vault's data. Unlike all the other
  storage backends, this backend does not operate from a single source for the
  data. Instead, all the nodes in a Vault cluster will have a replicated copy of
  the entire data. The data is replicated across the nodes using the Raft
  Consensus Algorithm.
---
# Raft Storage Backend
~> **NOTE:** Vault's Integrated Storage is currently a **_Beta_**
feature and not recommended for deployment in production.

The Raft storage backend is used to persist Vault's data. Unlike other storage
backends, Raft storage does not operate from a single source of data. Instead,
all the nodes in a Vault cluster will have a replicated copy of Vault's data.
Data gets replicated across all the nodes via the [Raft Consensus
Algorithm][raft].

- **High Availability** – the Raft storage backend supports high availability.

- **HashiCorp Supported** – the Raft storage backend is officially supported
  by HashiCorp.

```hcl
storage "raft" {
  path    = "/path/to/raft/data"
  node_id = "raft_node_1"
}

cluster_addr = "http://127.0.0.1:8201"
```

~> **Note:** When using the Raft storage backend, it is required to provide
`cluster_addr` to indicate the address and port to be used for communication
between the nodes in the Raft cluster.

~> **Note:** Raft cannot be used as the configured `ha_storage` backend at this
time. To use Raft for HA coordination, users must also use Raft for storage and
set `ha_enabled = true`.

## `raft` Parameters
- `path` `(string: "")` – The file system path where all the Vault data gets
  stored. This value can be overridden by setting the `VAULT_RAFT_PATH`
  environment variable.

- `node_id` `(string: "")` - The identifier for the node in the Raft cluster.
  This value can be overridden by setting the `VAULT_RAFT_NODE_ID` environment
  variable.

- `performance_multiplier` `(integer: 0)` - An integer multiplier used by
  servers to scale key Raft timing parameters. Tuning this affects the time it
  takes Vault to detect leader failures and to perform leader elections, at the
  expense of requiring more network and CPU resources for better performance.
  Omitting this value or setting it to 0 uses the default timing described
  below. Lower values tighten timing and increase sensitivity, while higher
  values relax timings and reduce sensitivity.

  By default, Vault uses lower-performance timing that's suitable for minimal
  Vault servers, currently equivalent to setting this to a value of 5 (this
  default may be changed in future versions of Vault, depending on whether the
  target minimum server profile changes). Setting this to a value of 1
  configures Raft for its highest-performance mode and is recommended for
  production Vault servers. The maximum allowed value is 10.

- `trailing_logs` `(integer: 10000)` - This controls how many log entries are
  left in the log store on disk after a snapshot is made. This should only be
  adjusted when followers cannot catch up to the leader due to a very large
  snapshot size and high write throughput causing log truncation before a
  snapshot can be fully installed. If you need to use this to recover a
  cluster, consider reducing write throughput or the amount of data stored on
  Vault. The default value is 10000, which is suitable for all normal
  workloads.

- `snapshot_threshold` `(integer: 8192)` - This controls the minimum number of
  raft commit entries between snapshots that are saved to disk. This is a
  low-level parameter that should rarely need to be changed. Very busy clusters
  experiencing excessive disk IO may increase this value to reduce disk IO and
  minimize the chances of all servers taking snapshots at the same time.
  Increasing this trades off disk IO for disk space, since the log will grow
  much larger and the space in the raft.db file can't be reclaimed until the
  next snapshot. Servers may take longer to recover from crashes or failover if
  this is increased significantly, as more logs will need to be replayed.

- `retry_join` `(list: [])` - There can be one or more `retry_join` stanzas.
  When the raft cluster is getting bootstrapped, if the connection details of
  all the nodes are known beforehand, then specifying these config stanzas
  enables the nodes to automatically join a raft cluster. All the nodes would
  mention all other nodes that they could join using this config. When one of
  the nodes is initialized, it becomes the leader, and all the other nodes will
  join the leader node to form the cluster. When using a Shamir seal, the
  joined nodes will still need to be unsealed manually. See the section below
  that describes the parameters accepted by the `retry_join` stanza.

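Taken together, the tuning parameters above can be combined in a single
`storage "raft"` stanza. A minimal sketch is shown below; the specific values
are illustrative assumptions, not recommendations, so tune them against your
own workload:

```hcl
storage "raft" {
  path    = "/path/to/raft/data"
  node_id = "raft_node_1"

  # 1 puts Raft in its highest-performance mode, as recommended above
  # for production servers (the maximum allowed value is 10).
  performance_multiplier = 1

  # Illustrative values only: keep more log entries after each snapshot
  # and snapshot less often, trading disk space for reduced disk IO.
  trailing_logs      = 20000
  snapshot_threshold = 16384
}
```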
### `retry_join` stanza
- `leader_api_addr` `(string: "")` - Address of a possible leader node.

- `leader_ca_cert` `(string: "")` - CA cert of the possible leader node.

- `leader_client_cert` `(string: "")` - Client certificate for the follower
  node to establish client authentication with the possible leader node.

- `leader_client_key` `(string: "")` - Client key for the follower node to
  establish client authentication with the possible leader node.

Example Configuration:

```hcl
storage "raft" {
  path    = "/Users/foo/raft/"
  node_id = "node1"
  retry_join {
    leader_api_addr    = "http://127.0.0.2:8200"
    leader_ca_cert     = "/path/to/ca1"
    leader_client_cert = "/path/to/client/cert1"
    leader_client_key  = "/path/to/client/key1"
  }
  retry_join {
    leader_api_addr    = "http://127.0.0.3:8200"
    leader_ca_cert     = "/path/to/ca2"
    leader_client_cert = "/path/to/client/cert2"
    leader_client_key  = "/path/to/client/key2"
  }
  retry_join {
    leader_api_addr    = "http://127.0.0.4:8200"
    leader_ca_cert     = "/path/to/ca3"
    leader_client_cert = "/path/to/client/cert3"
    leader_client_key  = "/path/to/client/key3"
  }
}
```

[raft]: https://raft.github.io/ 'The Raft Consensus Algorithm'