* Update integrated-storage.mdx
The quorum paragraph should also be updated to match the table.
Instead of:
"A Raft cluster of 3 nodes can tolerate a single node failure while a cluster
of 5 can tolerate 2 node failures. The recommended configuration is to either
run 3 or 5 Vault servers per cluster."
it should read:
"A Raft cluster of 3 nodes can tolerate a single node failure while a cluster
of 5 can tolerate 2 node failures. The recommended configuration is to either
run 5 or 7 Vault servers per cluster."
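For reference, the table being added is presumably the standard quorum-size table; with Raft's majority math (quorum = (n/2)+1), three, five, and seven servers give:

| Servers | Quorum size | Failure tolerance |
| ------- | ----------- | ----------------- |
| 3       | 2           | 1                 |
| 5       | 3           | 2                 |
| 7       | 4           | 3                 |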
* Give an explicit node recommendation
---------
Co-authored-by: Yoko Hyakuna <yoko@hashicorp.com>
* Update integrated-storage.mdx
The old paragraph "The recommended deployment is either 3 or 5 servers." should match the purpose of the "Minimums & Scaling" section below.
It also reaffirms that, in a PRODUCTION environment, a 5-node cluster is the minimum requirement.
A rewording is also necessary: "The recommended deployment is a minimum of 5 servers, kept at an odd total (5, 7, etc.)."
* Update website/content/docs/internals/integrated-storage.mdx
Co-authored-by: Yoko Hyakuna <yoko@hashicorp.com>
Add some metrics helpful for monitoring raft cluster state.
Furthermore, we weren't emitting bolt metrics on regular (non-perf) standbys, and there were other metrics
in metricsLoop that would make sense to include in OSS but weren't. We now have an active-node-only func,
emitMetricsActiveNode. This runs metricsLoop on the active node. Standbys and perf-standbys run metricsLoop
from a goroutine managed by the runStandby rungroup.
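A minimal sketch of that split, for orientation only: go-metrics' SetGauge is real, but the Core fields, gauge name, and interval below are assumptions rather than the actual Vault internals.

```go
// Illustrative sketch of the active-node/standby split described above.
package vault

import (
	"time"

	metrics "github.com/armon/go-metrics"
)

type Core struct {
	standby     bool
	perfStandby bool
}

// metricsLoop periodically emits gauges and now runs on every node type.
func (c *Core) metricsLoop(stopCh chan struct{}) {
	ticker := time.NewTicker(10 * time.Second)
	defer ticker.Stop()
	for {
		select {
		case <-ticker.C:
			active := float32(0)
			if !c.standby && !c.perfStandby {
				active = 1
			}
			// Gauge name is illustrative of the cluster-state metrics added.
			metrics.SetGauge([]string{"core", "active"}, active)
		case <-stopCh:
			return
		}
	}
}

// emitMetricsActiveNode is the active-node-only wrapper: the active node runs
// metricsLoop from here, while standbys and perf-standbys start it from a
// goroutine managed by the runStandby rungroup.
func (c *Core) emitMetricsActiveNode(stopCh chan struct{}) {
	c.metricsLoop(stopCh)
}
```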
* Refactor tidy steps into two separate helpers
This refactors the tidy goroutine into two separate helpers, making it
clear where the boundaries of each are: variables are passed into these
methods and concerns are separated. As more operations are rolled into
tidy, we can continue adding more helpers as appropriate. Additionally,
as we move toward automatic tidy, we can use these helpers as points to
hook into periodic tidying.
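Roughly, the split has this shape; the helper, struct, and field names below are placeholders, not the actual PKI backend symbols.

```go
// Placeholder sketch of the two-helper split.
package pki

import "context"

type tidyConfig struct {
	TidyCertStore    bool
	TidyRevokedCerts bool
}

type backend struct{}

// doTidy is what the tidy goroutine calls: each concern gets its own helper,
// and everything the helpers need is passed in explicitly.
func (b *backend) doTidy(ctx context.Context, config *tidyConfig) error {
	if config.TidyCertStore {
		if err := b.doTidyCertStore(ctx, config); err != nil {
			return err
		}
	}
	if config.TidyRevokedCerts {
		return b.doTidyRevocationStore(ctx, config)
	}
	return nil
}

func (b *backend) doTidyCertStore(ctx context.Context, config *tidyConfig) error {
	// Prune expired certificates from storage.
	return nil
}

func (b *backend) doTidyRevocationStore(ctx context.Context, config *tidyConfig) error {
	// Prune expired entries from the revoked-certificate store.
	return nil
}
```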
Signed-off-by: Alexander Scheel <alex.scheel@hashicorp.com>
* Refactor revInfo checking to helper
This allows us to validate whether a revInfo entry references a
presently valid issuer from the existing mapping. Coupled with the
changeset to identify the issuer on revocation, we can begin adding
capabilities to tidy to update this association, decreasing CRL build
time and increasing the performance of OCSP.
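In shape, the helper answers a question like the one below; the revocationInfo and issuerID types here are illustrative stand-ins for the real entries.

```go
// Placeholder sketch: does this revocation entry point at an issuer that
// still exists in the current issuer mapping?
package pki

type issuerID string

type revocationInfo struct {
	CertificateIssuer issuerID
}

// revInfoHasValidIssuer reports whether the entry's recorded issuer is still
// present, so CRL building and OCSP do not have to re-resolve it.
func revInfoHasValidIssuer(rev *revocationInfo, currentIssuers map[issuerID]struct{}) bool {
	if rev.CertificateIssuer == "" {
		return false
	}
	_, ok := currentIssuers[rev.CertificateIssuer]
	return ok
}
```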
Signed-off-by: Alexander Scheel <alex.scheel@hashicorp.com>
* Refactor issuer fetching for revocation purposes
Revocation needs to gracefully handle the legacy cert bundle, so fetching
issuers (and parsing them) needs to be done slightly differently than in
other places. Refactor this from revokeCert into a common helper that can
be used by tidy.
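A sketch of the common helper under those assumptions; every name below (storageContext, the legacy-bundle flag, the two loaders) is illustrative rather than the real code.

```go
// Placeholder sketch: pick the parsing path based on whether the mount still
// has an unmigrated legacy CA bundle.
package pki

import (
	"context"
	"crypto/x509"
)

type storageContext struct {
	useLegacyBundle bool
}

func (sc *storageContext) fetchIssuersForRevocation(ctx context.Context) ([]*x509.Certificate, error) {
	if sc.useLegacyBundle {
		// Pre-migration: the single CA cert lives in the legacy bundle and
		// must be parsed out of it directly.
		return sc.parseLegacyBundleIssuer(ctx)
	}
	// Post-migration: load and parse each issuer entry from storage.
	return sc.parseAllIssuers(ctx)
}

// The two loaders below are stubs standing in for the real storage reads.
func (sc *storageContext) parseLegacyBundleIssuer(ctx context.Context) ([]*x509.Certificate, error) {
	return nil, nil
}

func (sc *storageContext) parseAllIssuers(ctx context.Context) ([]*x509.Certificate, error) {
	return nil, nil
}
```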
Signed-off-by: Alexander Scheel <alex.scheel@hashicorp.com>
* Allow tidy to associate revoked certs, issuers
When revoking a certificate, we need to associate the issuer that signed
it back to the revInfo entry. Historically this was performed during CRL
building (and still is), but when running without CRL building and with
only OCSP, performance will degrade because the issuer needs to be found
on each request.
Instead, allow the tidy operation to take over this role. This lets us
increase the performance of OCSP and CRL in this scenario by decoupling
issuer identification from CRL building in the ideal case.
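Sketched at a high level, with all names as placeholders, the new tidy step looks roughly like:

```go
// Placeholder sketch of the new tidy step: backfill the issuer association on
// revocation entries so CRL building and OCSP can skip the per-request lookup.
package pki

type issuerID string

type revocationInfo struct {
	CertificateIssuer issuerID
	CertificatePEM    string
}

// associateRevokedCertWithIssuer fills in the issuer for an entry that lacks
// one; resolveIssuer stands in for matching the stored cert to its signer.
// It reports whether the entry changed so the caller knows to persist it.
func associateRevokedCertWithIssuer(rev *revocationInfo, resolveIssuer func(certPEM string) (issuerID, bool)) bool {
	if rev.CertificateIssuer != "" {
		return false // already associated, e.g. during CRL building
	}
	id, ok := resolveIssuer(rev.CertificatePEM)
	if !ok {
		return false // no currently known issuer signed this cert
	}
	rev.CertificateIssuer = id
	return true
}
```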
Signed-off-by: Alexander Scheel <alex.scheel@hashicorp.com>
* Add tests for tidy updates
Signed-off-by: Alexander Scheel <alex.scheel@hashicorp.com>
* Add changelog entry
Signed-off-by: Alexander Scheel <alex.scheel@hashicorp.com>
* Add documentation on new tidy parameter, metrics
Signed-off-by: Alexander Scheel <alex.scheel@hashicorp.com>
* Refactor tidy config into shared struct
Finish adding metrics and status messages for the new tidy operation.
Signed-off-by: Alexander Scheel <alex.scheel@hashicorp.com>
This explanation of the root key is incorrect. The root key is not sharded and reconstructed. The root key is encrypted by the unseal key, which is sharded and reconstructed during the unsealing process.
The explanation differed from the correct one at https://www.vaultproject.io/docs/concepts/seal
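To make the corrected relationship concrete, here is a conceptual sketch assuming the hashicorp/vault/shamir Split/Combine helpers and using AES-GCM as an illustrative wrapper; it is not Vault's actual seal code, and error handling is elided for brevity.

```go
// Conceptual sketch only: the root key is never sharded. It is encrypted by
// the unseal key; the unseal key is what gets split into shares and
// recombined during unseal.
package main

import (
	"crypto/aes"
	"crypto/cipher"
	"crypto/rand"
	"fmt"

	"github.com/hashicorp/vault/shamir"
)

func main() {
	rootKey := make([]byte, 32)   // protects the keyring / barrier
	unsealKey := make([]byte, 32) // protects the root key
	rand.Read(rootKey)
	rand.Read(unsealKey)

	// Seal time: encrypt the root key with the unseal key (AES-GCM here is
	// illustrative), then split only the unseal key into shares.
	block, _ := aes.NewCipher(unsealKey)
	gcm, _ := cipher.NewGCM(block)
	nonce := make([]byte, gcm.NonceSize())
	rand.Read(nonce)
	encryptedRootKey := gcm.Seal(nil, nonce, rootKey, nil)

	shares, _ := shamir.Split(unsealKey, 5, 3) // 5 shares, threshold 3

	// Unseal time: combine enough shares to rebuild the unseal key, then
	// decrypt the stored root key with it.
	recoveredUnsealKey, _ := shamir.Combine(shares[:3])
	block, _ = aes.NewCipher(recoveredUnsealKey)
	gcm, _ = cipher.NewGCM(block)
	recoveredRootKey, _ := gcm.Open(nil, nonce, encryptedRootKey, nil)

	fmt.Println(string(rootKey) == string(recoveredRootKey)) // true
}
```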
* docs/multiplexing: overhaul plugin documentation
* update nav data
* remove dupe nav data
* add external plugin section to index
* move custom plugin backends under internals/plugins
* remove ref to moved page
* revert moving custom plugin backends
* add building plugins from source section to plug dev
* add mux section to plugin arch
* add mux section to custom plugin page
* reorder custom database page
* use 'external plugin' where appropriate
* add link to plugin multiplexing
* fix example serve multiplex func call (see the sketch after this list)
* address review comments
* address review comments
* Minor format updates (#14590)
* mv Plugins to top-level; update upgrading plugins
* update links after changing paths
* add section on external plugin scaling characteristics
* add updates on plugin registration in plugin management page
* add plugin learn resource
* be more explicit about mux upgrade steps; add notes on when to avoid db muxing
* add plugin upgrade built-in section
* add caveats to built-in plugin upgrade
* improvements to built-in plugin override
* formatting, add redirects, correct multiplexing use case
* fix go-plugin link
* Apply suggestions from code review
Co-authored-by: Loann Le <84412881+taoism4504@users.noreply.github.com>
* remove single item list; add link to Database interface
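For the serve call fixed in the list above, a multiplexed database plugin's entrypoint looks roughly like this; the mydb package and its New factory are placeholders for a real plugin implementation, assuming the v5 database plugin SDK.

```go
// Placeholder main for a multiplexed database plugin; "example.com/mydb" and
// its New factory are illustrative stand-ins for a real plugin.
package main

import (
	"log"
	"os"

	dbplugin "github.com/hashicorp/vault/sdk/database/dbplugin/v5"

	"example.com/mydb"
)

func main() {
	if err := Run(); err != nil {
		log.Println(err)
		os.Exit(1)
	}
}

// Run serves the plugin with multiplexing enabled: one plugin process can
// then host multiple mounts, instead of one process per mount.
func Run() error {
	// ServeMultiplex takes a factory returning a new Database instance
	// (as interface{}, error) for each multiplexed connection.
	dbplugin.ServeMultiplex(mydb.New)
	return nil
}
```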
Co-authored-by: Yoko Hyakuna <yoko@hashicorp.com>
Co-authored-by: Loann Le <84412881+taoism4504@users.noreply.github.com>