---
layout: docs
page_title: Vault Enterprise Eventual Consistency
description: Vault Enterprise Consistency Model
---

# Vault Eventual Consistency

When running in a cluster, Vault has an eventual consistency model. Only one
node (the leader) can write to Vault's storage. Users generally expect
read-after-write consistency: in other words, after writing foo=1, a
subsequent read of foo should return 1. Depending on the Vault configuration,
this isn't always the case. When using performance standbys with Integrated
Storage, or when using performance replication, there are some sequences of
operations that don't always yield read-after-write consistency.

## Performance Standby Nodes

When using the Integrated Storage backend without performance standbys, only a
single Vault node (the active node) handles requests. Requests sent to regular
standbys are handled by forwarding them to the active node. This configuration
gives Vault the same behavior as the default Consul consistency model.

When using the Integrated Storage backend with performance standbys, both the
active node and performance standbys can handle requests. If a performance
standby handles a login request, or a request that generates a dynamic secret,
the performance standby will issue a remote procedure call (RPC) to the active
node to store the token and/or lease. If the performance standby handles any
other request that results in a storage write, it will forward that request to
the active node in the same way a regular standby forwards all requests.

With Integrated Storage, all writes occur on the active node, which then
issues RPCs to update the local storage on every other node. Between when the
active node writes the data to its local disk, and when those RPCs are handled
on the other nodes to write the data to their local disks, those nodes present
a stale view of the data.

As a result, even if you're always talking to the same performance standby,
you may not get read-after-write semantics. The write gets sent to the active
node, and if the subsequent read request occurs before the new data gets sent
to the node handling the read request, the read request won't be able to take
the write into account because the new data isn't present on that node yet.

## Performance replication

A similar phenomenon occurs when using performance replication. One example of
how this manifests is when using shared mounts. If a KV secrets engine is
mounted on the primary with `local=false`, it will exist on the secondary
cluster as well. The secondary cluster can handle requests to that mount,
though as with performance standbys, write requests must be forwarded - in
this case to the primary active node. Once data is written to the primary
cluster, it won't be visible on the secondary cluster until it has been
replicated from the primary. Until then, it appears on the secondary cluster
as if the write hasn't happened.

If the secondary cluster is using Integrated Storage, and the read request is
being handled by one of its performance standbys, the problem is exacerbated:
the data must first travel from the primary active node to the secondary
active node, and then from there to the secondary performance standby, and
each hop can introduce its own lag.

Even without shared secret engines, stale reads can still happen with
performance replication. The Identity subsystem aims to provide a view of
entities and groups that spans clusters. As such, when logging in to a
secondary cluster using a shared mount, Vault tries to generate an entity and
alias if they don't already exist, and these must be stored on the primary
using an RPC. Something similar happens with groups.

## Mitigations

There has long been a partial mitigation for the above problems. When writing
data via RPC, e.g. when a performance standby registers tokens and leases on
the active node after a login or after generating a dynamic secret, part of
the response includes a number known as the "WAL index", aka the Write-Ahead
Log index.

A full explanation of this is outside the scope of this document, but the
short version is that both performance replication and performance standbys
use log shipping to stay in sync with the upstream source of writes. The
mitigation historically used by nodes doing writes via RPC is to look at the
WAL index in the response and wait up to 2 seconds to see if that WAL index
appears in the logs being shipped from upstream. Once the WAL index is seen,
the Vault node handling the request that resulted in RPCs can return its own
response to the client: it knows that any subsequent reads will be able to see
the value that was just written. If the WAL index isn't seen within those 2
seconds, the Vault node completes the request anyway, returning a warning in
the response.

This mitigation still exists in Vault 1.7, and there is now a configuration
option to adjust the wait time:
[best_effort_wal_wait_duration](/vault/docs/configuration/replication).

## Vault 1.7 Mitigations

There are now a variety of other mitigations available:

- per-request option to always forward the request to the active node
- per-request option to conditionally forward the request to the active node
  if it would otherwise result in a stale read
- per-request option to fail requests if they might result in a stale read
- Vault Agent configuration to do the above for proxied requests

The remainder of this document describes the tradeoffs of these mitigations
and how to use them.

Note that any headers requesting forwarding are disabled by default, and must
be enabled using
[allow_forwarding_via_header](/vault/docs/configuration/replication).

### Unconditional Forwarding (Performance standbys only)

The simplest way to never experience stale reads from a performance standby is
to provide the following HTTP header in the request:

```
X-Vault-Forward: active-node
```

The drawback here is that if all your requests are forwarded to the active
node, you might as well not be using performance standbys. So this mitigation
only makes sense to use selectively.

This mitigation will not help with stale reads relating to performance
replication.

### Conditional Forwarding (Performance standbys only)

As of Vault Enterprise 1.7, all requests that modify storage now return a new
HTTP response header:

```
X-Vault-Index: <base64 value>
```

To ensure that the state resulting from that write request is visible to a
subsequent request, add these headers to that second request:

```
X-Vault-Index: <base64 value taken from previous response>
X-Vault-Inconsistent: forward-active-node
```

The effect is that the node handling the request will look at the state it has
locally and, if it doesn't contain the state described by the `X-Vault-Index`
header, forward the request to the active node.

The drawback here is that when requests are forwarded to the active node,
performance standbys provide less value. If this happens often enough, the
active node can become a bottleneck, limiting the horizontal read scalability
that performance standbys are intended to provide.

### Retry stale requests

As of Vault Enterprise 1.7, all requests that modify storage now return a new
HTTP response header:

```
X-Vault-Index: <base64 value>
```

To ensure that the state resulting from that write request is visible to a
subsequent request, add this header to that second request:

```
X-Vault-Index: <base64 value taken from previous response>
```

When the desired state isn't present, Vault will return a failure response
with HTTP status code 412. This tells the client that it should retry the
request. The advantage over the Conditional Forwarding solution above is
twofold: first, there's no additional load on the active node. Second, this
solution is applicable to performance replication as well as performance
standbys.

The Vault Go API will now automatically retry 412s, and provides convenience
methods for propagating the `X-Vault-Index` response header into the request
header of subsequent requests. Those not using the Vault Go API will want to
build equivalent functionality into their client library.

### Vault Agent and consistency headers

When configured, the [Vault Agent API Proxy](/vault/docs/agent/apiproxy) will
proxy incoming requests to Vault. There is Agent configuration available in
the `api_proxy` stanza that allows making use of some of the above mitigations
without modifying clients.

By setting `enforce_consistency = "always"`, Agent will always provide the
`X-Vault-Index` consistency header. The value it uses for the header will be
based on the responses that have passed through the Agent previously.

The option `when_inconsistent` controls how stale reads are prevented:

- `"fail"` means that when a `412` response is seen, it is returned to the
  client
- `"retry"` means that `412` responses will be retried automatically by Agent,
  so the client doesn't have to deal with them
- `"forward"` makes Agent provide the
  `X-Vault-Inconsistent: forward-active-node` header as described above under
  Conditional Forwarding
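
A minimal Agent configuration fragment using these options might look like the
following sketch (other required Agent configuration, such as the `vault` and
`listener` stanzas, is omitted, and the `use_auto_auth_token` line is purely
illustrative):

```hcl
api_proxy {
  use_auto_auth_token = true
  enforce_consistency = "always"
  when_inconsistent   = "retry"
}
```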

## Vault 1.10 Mitigations

In Vault 1.10, the token format has changed: service tokens now employ
server-side consistency. This means that, by default, requests made to nodes
which cannot support read-after-write consistency, because they don't yet have
the WAL index needed to check the Vault token locally, will return a 412
status code. The Vault Go API automatically retries when receiving 412s, so
unless there is a considerable replication delay, users will experience
read-after-write consistency.

The replication option
[allow_forwarding_via_token](/vault/docs/configuration/replication) can be
used to ensure that requests which would otherwise return a 412 in this way
are instead forwarded to the active node.

Refer to the [Server Side Consistent Token FAQ](/vault/docs/faq/ssct) for
details.

## Client API helpers

There are some new helpers in the `api` package to work with the new headers.
`WithRequestCallbacks` and `WithResponseCallbacks` create a shallow clone of
the client and populate it with the given callbacks. `RecordState` and
`RequireState` are used to store the response header from one request and
provide it in a subsequent request. For example:

```go
client, err := api.NewClient(api.DefaultConfig())
if err != nil {
	return err
}

var state string
_, err = client.WithResponseCallbacks(api.RecordState(&state)).Logical().Write(path, data)

secret, err := client.WithRequestCallbacks(api.RequireState(state)).Logical().Read(path)
```

This will retry the `Read` until the data stored by the `Write` is present.
There are also callbacks to use forwarding: `ForwardInconsistent` and
`ForwardAlways`.
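
For instance, continuing the fragment above, the forwarding callbacks could be
used like this (a sketch; `path` is a placeholder and error handling is
elided):

```go
// Forward this request to the active node unconditionally.
secret, err := client.WithRequestCallbacks(api.ForwardAlways()).Logical().Read(path)

// Or forward only when the node handling the request doesn't yet have the
// state recorded earlier via api.RecordState.
secret, err = client.WithRequestCallbacks(
	api.RequireState(state),
	api.ForwardInconsistent(),
).Logical().Read(path)
```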