* Move seal barrier type field from Access to autoSeal struct.
Remove method Access.SetType(), which was only used by a single test; that
test can instead use the name option of NewTestSeal() to specify the type.
* Change method signatures of Access to match those of Wrapper.
* Turn seal.Access struct into an interface.
* Tweak Access implementation.
Change `access` struct to have a field of type wrapping.Wrapper, rather than
extending it.
* Add method Seal.GetShamirWrapper().
Add method Seal.GetShamirWrapper() for use by code that needs to perform
Shamir-specific operations.
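A compact, self-contained sketch of the shape these changes describe; the real code lives in vault/seal and uses go-kms-wrapping, and everything here (including the trimmed-down Wrapper interface) is simplified and illustrative:

```go
package seal

import "context"

// Wrapper stands in for wrapping.Wrapper here, trimmed to three methods.
type Wrapper interface {
	Type(ctx context.Context) (string, error)
	Encrypt(ctx context.Context, plaintext []byte) ([]byte, error)
	Decrypt(ctx context.Context, ciphertext []byte) ([]byte, error)
}

// Access is an interface now, with signatures matching Wrapper's.
type Access interface {
	Wrapper
}

// access holds its Wrapper in a field rather than embedding it, so the
// wrapped implementation can be swapped without exposing its other methods.
type access struct {
	w Wrapper
}

func (a *access) Type(ctx context.Context) (string, error) { return a.w.Type(ctx) }
func (a *access) Encrypt(ctx context.Context, pt []byte) ([]byte, error) {
	return a.w.Encrypt(ctx, pt)
}
func (a *access) Decrypt(ctx context.Context, ct []byte) ([]byte, error) {
	return a.w.Decrypt(ctx, ct)
}

// A Shamir-backed Seal can then expose its concrete wrapper for
// Shamir-specific operations, roughly:
//
//	func (d *defaultSeal) GetShamirWrapper() (*shamir.Wrapper, error) { ... }
```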
* ensure we supply the node type when it's for a voter
* bumped autopilot version back to v0.2.0 and ran go mod tidy
* changed condition in knownServers and added some comments
* Export GetRaftBackend
* Updated tests for autopilot (related to dead server cleanup)
* Export Raft NewDelegate
Co-authored-by: Nick Cabatoff <ncabatoff@hashicorp.com>
Break grabLockOrStop into two pieces to facilitate investigating deadlocks. Without this change, the "grab" goroutine looks the same regardless of who was calling grabLockOrStop, so there's no way to identify one of the deadlock parties.
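A sketch of the idea, simplified relative to the real Vault code: the goroutine that grabs the lock calls a distinctly named function supplied by each caller, so a goroutine dump taken during a deadlock shows which code path asked for the lock.

```go
package core

import "sync"

// grabLockOrStop acquires the lock via grab unless stopCh closes first.
func grabLockOrStop(grab, unlock func(), stopCh chan struct{}) (stopped bool) {
	done := make(chan struct{})
	go func() {
		grab() // the caller-specific frame for grab appears in stack dumps
		close(done)
	}()
	select {
	case <-done:
		return false
	case <-stopCh:
		// Stopped first: once the lock is eventually acquired, release it
		// so it isn't leaked.
		go func() {
			<-done
			unlock()
		}()
		return true
	}
}

type Core struct{ stateLock sync.RWMutex }

// Named methods like these are what make a deadlocked goroutine
// identifiable, unlike an anonymous closure shared by every caller.
func (c *Core) grabStateLock()   { c.stateLock.Lock() }
func (c *Core) unlockStateLock() { c.stateLock.Unlock() }
```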
OSS parts of ent PR #3172: assume nodes we haven't received heartbeats from are running the same version as we are. Failing to provide a version/upgrade_version will result in Autopilot (on ent) demoting those unversioned nodes to non-voters until we receive a heartbeat from them.
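A sketch of that fallback with hypothetical names (the ent-side types differ):

```go
package autopilot

// FollowerState is a stand-in for the follower tracking struct; only the
// field used here is shown.
type FollowerState struct {
	Version string // empty until the first heartbeat arrives
}

// effectiveVersion assumes an unheard-from node runs our own version, so
// ent Autopilot doesn't demote it as unversioned before its first heartbeat.
func effectiveVersion(f *FollowerState, selfVersion string) string {
	if f == nil || f.Version == "" {
		return selfVersion
	}
	return f.Version
}
```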
* OSS portion of wrapper-v2
* Prefetch barrier type to avoid encountering an error in the simple BarrierType() getter
* Rename OverriddenType to WrapperType and use it for the barrier type prefetch (sketched below)
* Fix unit test
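A self-contained sketch of the prefetch described above; the real names in vault/seal differ, and the Wrapper interface is simplified here:

```go
package seal

import "context"

type Wrapper interface {
	Type(ctx context.Context) (string, error)
}

// autoSeal caches the barrier type at construction time, so the getter
// below cannot fail at call sites that have no error path.
type autoSeal struct {
	Wrapper
	barrierType string
}

func newAutoSeal(w Wrapper) (*autoSeal, error) {
	t, err := w.Type(context.Background())
	if err != nil {
		return nil, err // surface the error once, up front
	}
	return &autoSeal{Wrapper: w, barrierType: t}, nil
}

// BarrierType is now a simple getter; the lookup that could error already
// happened in newAutoSeal.
func (a *autoSeal) BarrierType() string { return a.barrierType }
```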
* storage/raft: Fix cluster init with retry_join
Commit 8db66f4853abce3f432adcf1724b1f237b275415 introduced an error
wherein a join() would return nil (no error) with no information on its
channel if a joining node had been initialized. This was not handled
properly by the caller and resulted in a canceled `retry_join`.
Fix this by treating the `nil` channel response as an error, allowing the
existing retry mechanics to work as intended.
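A sketch of the fix with hypothetical names; the real join flow lives in vault/raft.go:

```go
package main

import (
	"errors"
	"fmt"
)

type joinAnswer struct{ leaderAddr string }

var errJoinNoResponse = errors.New("no join response; node may already be initialized")

// awaitJoin reads one answer from the join attempt. A nil value used to be
// treated as success with no information, cancelling retry_join; treating
// it as an error lets the existing retry mechanics take over.
func awaitJoin(answerCh <-chan *joinAnswer) (*joinAnswer, error) {
	a := <-answerCh
	if a == nil {
		return nil, errJoinNoResponse
	}
	return a, nil
}

func main() {
	ch := make(chan *joinAnswer, 1)
	ch <- nil
	if _, err := awaitJoin(ch); err != nil {
		fmt.Println("retrying:", err) // retry loop continues instead of cancelling
	}
}
```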
* storage/raft: Improve retry_join go test
* storage/raft: Make VerifyRaftPeers pollable
* storage/raft: Add changelog entry for retry_join fix
* storage/raft: Add description to VerifyRaftPeers
* storage/raft: Make raftInfo atomic
This fixes some racy behavior discovered in parallel testing. Change the
core struct member to an atomic and update references throughout.
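A sketch of the pattern (field and type names are illustrative): storing the struct pointer in an atomic.Value removes the read/write race seen in parallel tests.

```go
package vault

import "sync/atomic"

type raftInformation struct {
	joinInProgress bool
	leaderAddr     string
}

type Core struct {
	// was: raftInfo *raftInformation (racy under concurrent access)
	raftInfo atomic.Value // holds *raftInformation
}

func (c *Core) setRaftInfo(ri *raftInformation) {
	c.raftInfo.Store(ri)
}

func (c *Core) getRaftInfo() *raftInformation {
	ri, _ := c.raftInfo.Load().(*raftInformation)
	return ri
}
```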
* raft: Ensure init before setting suffrage
As reported in https://hashicorp.atlassian.net/browse/VAULT-6773:
The /sys/storage/raft/join endpoint is intended to be unauthenticated. We rely
on the seal to manage trust.
It’s possible to use multiple join requests to switch nodes from voter to
non-voter. The screenshot shows a 3-node cluster where vault_2 is the leader,
and vault_3 and vault_4 are followers with non-voters set to false. We sent two
requests to the raft join endpoint to have vault_3 and vault_4 join the cluster
with non_voters:true.
This commit fixes the issue by delaying the call to SetDesiredSuffrage until after
the initialization check, preventing unauthenticated mangling of voter status.
Tested locally using
https://github.com/hashicorp/vault-tools/blob/main/users/ncabatoff/cluster/raft.sh
and the reproducer outlined in VAULT-6773.
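A simplified sketch of the ordering fix, with stubbed types; the real check and join flow live in vault/raft.go:

```go
package vault

import "errors"

// Core is a stand-in for vault.Core; all methods here are simplified stubs.
type Core struct {
	initialized     bool
	desiredNonVoter bool
}

func (c *Core) Initialized() (bool, error) { return c.initialized, nil }

func (c *Core) setDesiredSuffrage(nonVoter bool) error {
	c.desiredNonVoter = nonVoter
	return nil
}

func (c *Core) join() error { return nil }

// JoinRaftCluster sketches the fix: check initialization first, and only
// then record the requested suffrage, so unauthenticated joins cannot flip
// an existing voter to non-voter.
func (c *Core) JoinRaftCluster(nonVoter bool) error {
	init, err := c.Initialized()
	if err != nil {
		return err
	}
	if init {
		// Already in a cluster: refuse before touching suffrage.
		return errors.New("node already initialized")
	}
	if err := c.setDesiredSuffrage(nonVoter); err != nil {
		return err
	}
	return c.join()
}
```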
* raft: Return join err on failure
This is necessary to correctly distinguish errors returned from the Join
workflow. Previously, errors were being masked as timeouts.
* raft: Default autopilot parameters in teststorage
Change some defaults so we don't have to pass in parameters or set them
in the originating tests. These storage types are only used in two
places:
1) Raft HA testing
2) Seal migration testing
Both consumers have been tested and pass with this change.
* changelog: Unauthn voter status change bugfix
* add func to set level for specific logger
* add endpoints to modify log level
* initialize base logger with IndependentLevels (see the sketch after these items)
* test to ensure other loggers remain unchanged
* add DELETE loggers endpoints to revert back to config
* add API docs page
* add changelog entry
* remove extraneous line
* add log level field to Core struct
* add godoc for getLogLevel
* add some loggers to c.allLoggers
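A minimal sketch of the logging behavior these items rely on, using github.com/hashicorp/go-hclog directly (a version recent enough to support IndependentLevels); the Vault endpoint plumbing is omitted and the request shapes in the comments are illustrative:

```go
package main

import hclog "github.com/hashicorp/go-hclog"

func main() {
	// IndependentLevels gives each named sublogger its own level state, so
	// changing one logger's level doesn't change its siblings.
	base := hclog.New(&hclog.LoggerOptions{
		Name:              "core",
		Level:             hclog.Info,
		IndependentLevels: true,
	})

	storage := base.Named("storage")
	expiration := base.Named("expiration")

	// Roughly what the modify endpoint does for one named logger:
	storage.SetLevel(hclog.Trace)

	storage.Trace("visible: storage is now at trace")
	expiration.Trace("suppressed: expiration is still at info")

	// Roughly what the DELETE endpoint does: revert to the configured level.
	storage.SetLevel(hclog.Info)
}
```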
* fix raft tls key rotation panic when rotation time in past
* add changelog entry
* push out the next raft TLS rotation time if it is close to elapsing
* consolidate tls key rotation duration calculation
* reduce raft getNextRotationTime padding to 10 seconds
* move tls rotation ticker reset to where its duration is calculated
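A sketch of the scheduling logic these items describe (constants and names illustrative): compute the next rotation time from the active key, apply the padding, and clamp to now if the deadline has already passed, so the ticker never sees a non-positive duration.

```go
package main

import "time"

const rotationPadding = 10 * time.Second // reduced padding, per the item above

// nextRotationTime computes when the TLS keyring should next rotate,
// clamping to now if the stored time has already elapsed (e.g. the cluster
// was down past the rotation deadline).
func nextRotationTime(keyCreated time.Time, rotationPeriod time.Duration, now time.Time) time.Time {
	next := keyCreated.Add(rotationPeriod).Add(-rotationPadding)
	if !next.After(now) {
		return now
	}
	return next
}

func main() {
	now := time.Now()
	next := nextRotationTime(now.Add(-48*time.Hour), 24*time.Hour, now)
	d := time.Until(next)
	if d <= 0 {
		d = time.Millisecond // rotate essentially immediately; NewTicker panics on d <= 0
	}
	ticker := time.NewTicker(d) // reset where the duration is computed, per the last item
	defer ticker.Stop()
}
```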
* See what it looks like to replace "master key" with "root key". There are two places that would require more challenging code changes: the storage path `core/master`, and its contents (the JSON-serialized EncodedKeyring structure).
* Restore accidentally deleted line
* Add changelog
* Update root->recovery
* Fix test
Co-authored-by: Nick Cabatoff <ncabatoff@hashicorp.com>
* Auto-join support for IPv6 discovery
The go-discover library returns IP addresses, not URLs. It just so happens
that net/url will parse "127.0.0.1", even though that isn't a valid URL.
Instead, we construct the URL ourselves, taking care to check whether the
address is IPv6 and, if so, making sure it's in explicit form (see the
sketch below).
Fixes #12323
* feedback: addrs & ipv6 test
Rename addrs to clusterIPs to improve clarity and intent
Tighten up our IPv6 address detection to be more correct and to ensure
it's actually in implicit form
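A sketch of the URL construction described above: go-discover hands back bare IPs, so build the URL explicitly and let net.JoinHostPort bracket IPv6 addresses, instead of hoping url.Parse guesses right. Names here are illustrative.

```go
package main

import (
	"fmt"
	"net"
)

// joinAddr turns a discovered IP into an addressable https URL on the given
// port, handling both IPv4 and IPv6.
func joinAddr(ip string, port int) (string, error) {
	parsed := net.ParseIP(ip)
	if parsed == nil {
		return "", fmt.Errorf("discovered invalid IP: %q", ip)
	}
	// JoinHostPort adds [brackets] for IPv6 (explicit form) automatically.
	return fmt.Sprintf("https://%s", net.JoinHostPort(parsed.String(), fmt.Sprint(port))), nil
}

func main() {
	u4, _ := joinAddr("127.0.0.1", 8200)
	u6, _ := joinAddr("2001:db8::1", 8200)
	fmt.Println(u4) // https://127.0.0.1:8200
	fmt.Println(u6) // https://[2001:db8::1]:8200
}
```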
* k8s doc: update for 0.9.1 and 0.8.0 releases (#10825)
* k8s doc: update for 0.9.1 and 0.8.0 releases
* Update website/content/docs/platform/k8s/helm/configuration.mdx
Co-authored-by: Theron Voran <tvoran@users.noreply.github.com>
Co-authored-by: Theron Voran <tvoran@users.noreply.github.com>
* Autopilot initial commit
* Move autopilot related backend implementations to its own file
* Abstract promoter creation
* Add nil check for health
* Add server state oss no-ops
* Config ext stub for oss
* Make way for non-voters
* s/health/state
* s/ReadReplica/NonVoter
* Add synopsis and description
* Remove struct tags from AutopilotConfig
* Use var for config storage path
* Handle nil config when reading
* Enable testing autopilot by using inmem cluster
* First passing test
* Only report the server as known if it is present in raft config
* Autopilot defaults to on for all existing and new clusters
* Add locking to some functions
* Persist initial config
* Clarify the command usage doc
* Add health metric for each node
* Fix audit logging issue
* Don't set DisablePerformanceStandby to true in test
* Use node id label for health metric
* Log updates to autopilot config
* Less aggressively consume config loading failures
* Return a mutable config
* Return early from known servers if the raft config cannot be pulled
* Update metrics name
* Reduce log level for potentially noisy log
* Add knob to disable autopilot
* Don't persist if default config is in use
* Autopilot: Dead server cleanup (#10857)
* Dead server cleanup
* Initialize channel in any case
* Fix a bunch of tests
* Fix panic
* Add follower locking in heartbeat tracker
* Add LastContactFailureThreshold to config
* Add log when marking node as dead
* Update follower state locking in heartbeat tracker
* Avoid follower states being nil
* Pull test to its own file
* Add execution status to state response
* Optionally enable autopilot in some tests
* Updates
* Added API function to fetch autopilot configuration
* Add test for default autopilot configuration
* Configuration tests
* Add State API test
* Update test
* Added TestClusterOptions.PhysicalFactoryConfig
* Update locking
* Adjust locking in heartbeat tracker
* s/last_contact_failure_threshold/left_server_last_contact_threshold
* Add disabling autopilot as a core config option
* Disable autopilot in some tests
* s/left_server_last_contact_threshold/dead_server_last_contact_threshold
* Set the last heartbeat of followers to now when setting up the active node
* Don't use config defaults from CLI command
* Remove config file support
* Remove HCL test as well
* Persist only supplied config; merge supplied config with default to operate
* Use pointer to structs for storing follower information
* Test update
* Retrieve non-voter status from the config bucket and set it up when a node comes up
* Manage desired suffrage
* Consider bucket being created already
* Move desired suffrage to its own entry
* s/DesiredSuffrageKey/LocalNodeConfigKey
* s/witnessSuffrage/recordSuffrage
* Fix test compilation
* Handle local node config post a snapshot install
* Commit to storage first; then record suffrage in fsm
* No need to handle a nil local node config post snapshot restore
* Reconcile autopilot config when a new leader takes over duty
* Grab fsm lock when recording suffrage
* s/Suffrage/DesiredSuffrage in FollowerState
* Instantiate autopilot only in leader
* Default to old ways in more scenarios
* Make API gracefully handle 404
* Address some feedback
* Make IsDead an atomic.Value
* Simplify follower heartbeat tracking
* Use uber.atomic
* Don't have multiple causes for having autopilot disabled
* Don't remove node from follower states if we fail to remove the dead server
* Autopilot server removals map (#11019)
* Don't remove node from follower states if we fail to remove the dead server
* Use map to track dead server removals (see the sketch after this commit)
* Use lock and map
* Use delegate lock
* Adjust when to remove entry from map
* Only hold the lock while accessing map
* Fix race
* Don't set default min_quorum
* Fix test
* Ensure follower states is not nil before starting autopilot
* Fix race
Co-authored-by: Jason O'Donnell <2160810+jasonodonnell@users.noreply.github.com>
Co-authored-by: Theron Voran <tvoran@users.noreply.github.com>
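A sketch of the removals tracking from the #11019 items above, with hypothetical names:

```go
package raft

import "sync"

type Delegate struct {
	mu              sync.Mutex      // held only while touching pendingRemovals
	pendingRemovals map[string]bool // dead servers with a removal in flight
}

func NewDelegate() *Delegate {
	return &Delegate{pendingRemovals: make(map[string]bool)}
}

// markRemoval records that a removal is in flight; it returns false if one
// already is, so we don't issue duplicate raft removals for the same node.
func (d *Delegate) markRemoval(serverID string) bool {
	d.mu.Lock()
	defer d.mu.Unlock()
	if d.pendingRemovals[serverID] {
		return false
	}
	d.pendingRemovals[serverID] = true
	return true
}

// clearRemoval forgets a server once its removal attempt has resolved.
func (d *Delegate) clearRemoval(serverID string) {
	d.mu.Lock()
	defer d.mu.Unlock()
	delete(d.pendingRemovals, serverID)
}
```

On a dead-server notification, the delegate would call markRemoval, attempt the raft removal, and only on success drop the follower state and call clearRemoval, matching "Don't remove node from follower states if we fail to remove the dead server".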
* raft: initial work on raft ha storage support
* add note on join
* add todo note
* raft: add support for bootstrapping and joining existing nodes
* raft: gate bootstrap join by reading leader api address from storage
* raft: properly check for raft-only for certain conditionals
* raft: add bootstrap to api and cli
* raft: fix bootstrap cli command
* raft: add test for setting up new cluster with raft HA
* raft: extend TestRaft_HA_NewCluster to include inmem and consul backends
* raft: add test for updating an existing cluster to use raft HA
* raft: remove debug log lines, clean up verifyRaftPeers
* raft: minor cleanup
* raft: minor cleanup
* Update physical/raft/raft.go
Co-authored-by: Brian Kassouf <briankassouf@users.noreply.github.com>
* Update vault/ha.go
Co-authored-by: Brian Kassouf <briankassouf@users.noreply.github.com>
* Update vault/ha.go
Co-authored-by: Brian Kassouf <briankassouf@users.noreply.github.com>
* Update vault/logical_system_raft.go
Co-authored-by: Brian Kassouf <briankassouf@users.noreply.github.com>
* Update vault/raft.go
Co-authored-by: Brian Kassouf <briankassouf@users.noreply.github.com>
* Update vault/raft.go
Co-authored-by: Brian Kassouf <briankassouf@users.noreply.github.com>
* address feedback comments
* address feedback comments
* raft: refactor tls keyring logic
* address feedback comments
* Update vault/raft.go
Co-authored-by: Alexander Bezobchuk <alexanderbez@users.noreply.github.com>
* Update vault/raft.go
Co-authored-by: Alexander Bezobchuk <alexanderbez@users.noreply.github.com>
* address feedback comments
* testing: fix import ordering
* raft: rename var, cleanup comment line
* docs: remove ha_storage restriction note on raft
* docs: more raft HA interaction updates with migration and recovery mode
* docs: update the raft join command
* raft: update comments
* raft: add missing isRaftHAOnly check for clearing out state set earlier
* raft: update a few ha_storage config checks
* Update command/operator_raft_bootstrap.go
Co-authored-by: Vishal Nayak <vishalnayak@users.noreply.github.com>
* raft: address feedback comments
* raft: fix panic when checking for config.HAStorage.Type (sketched after this commit)
* Update vault/raft.go
Co-authored-by: Alexander Bezobchuk <alexanderbez@users.noreply.github.com>
* Update website/pages/docs/commands/operator/raft.mdx
Co-authored-by: Alexander Bezobchuk <alexanderbez@users.noreply.github.com>
* raft: remove bootstrap cli command
* Update vault/raft.go
Co-authored-by: Brian Kassouf <briankassouf@users.noreply.github.com>
* Update vault/raft.go
Co-authored-by: Brian Kassouf <briankassouf@users.noreply.github.com>
* raft: address review feedback
* raft: revert vendored sdk
* raft: don't send applied index and node ID info if we're HA-only
Co-authored-by: Brian Kassouf <briankassouf@users.noreply.github.com>
Co-authored-by: Alexander Bezobchuk <alexanderbez@users.noreply.github.com>
Co-authored-by: Vishal Nayak <vishalnayak@users.noreply.github.com>
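A sketch of the nil-guard behind the panic fix above, with simplified types: HAStorage is optional in the server config, so check the pointer before dereferencing Type.

```go
package server

type Storage struct{ Type string }

type ServerConfig struct {
	Storage   *Storage
	HAStorage *Storage // nil when no ha_storage stanza is configured
}

// isRaftHAOnly reports whether raft is used for HA coordination but not for
// data storage.
func isRaftHAOnly(c *ServerConfig) bool {
	return c.HAStorage != nil && c.HAStorage.Type == "raft" &&
		c.Storage != nil && c.Storage.Type != "raft"
}
```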
* storage/raft: Advertise the configured cluster address
* Don't allow raft to start with unspecified IP
* Fix concurrent map write panic
* Add test file
* changelog++
* changelog++
* changelog++
* Update tcp_layer.go
* Update tcp_layer.go
* Only set the advertise addr if set (see the sketch below)
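A sketch of the two guards above (simplified, names illustrative): prefer the configured cluster address when present, and refuse to advertise an unspecified address (0.0.0.0 / ::).

```go
package raft

import (
	"fmt"
	"net"
)

// advertiseAddr picks the address raft should advertise: the configured
// cluster address when explicitly set, otherwise the listener's address,
// rejected if it is unspecified.
func advertiseAddr(configured string, listener net.Addr) (string, error) {
	if configured != "" {
		return configured, nil // only override when explicitly set
	}
	host, _, err := net.SplitHostPort(listener.String())
	if err != nil {
		return "", err
	}
	if ip := net.ParseIP(host); ip != nil && ip.IsUnspecified() {
		return "", fmt.Errorf("cannot advertise unspecified address %q; set cluster_addr", host)
	}
	return listener.String(), nil
}
```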
* storage/raft: Add committed and applied indexes to the status output
* Update api vendor
* changelog++
* Update http/sys_leader.go
Co-authored-by: Jim Kalafut <jkalafut@hashicorp.com>
Co-authored-by: Jim Kalafut <jkalafut@hashicorp.com>
* Raft retry join
* update
* Make retry join work with shamir seal
* Return upon context completion
* Update vault/raft.go
Co-Authored-By: Brian Kassouf <briankassouf@users.noreply.github.com>
* Address some review comments
* send leader information slice as a parameter
* Make retry join work properly with the Shamir case. This commit has a blocking issue, addressed below
* Fix join goroutine exiting before the job is done
* Polishing changes
* Don't return after a successful join during unseal (see the retry-loop sketch after these items)
* Added config parsing test
* Add test and fix bugs
* minor changes
* Address review comments
* Fix build error
Co-authored-by: Brian Kassouf <briankassouf@users.noreply.github.com>
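A sketch of the retry-join loop shape these items describe (simplified; the real flow lives in vault/raft.go): cycle through the supplied leader infos until a join succeeds or the context completes, and with Shamir seals don't tear the goroutine down after the join call, since unseal still has to finish separately.

```go
package raft

import (
	"context"
	"time"
)

type leaderInfo struct{ apiAddr string }

// retryJoin keeps attempting to join via each known leader until one attempt
// succeeds or ctx is cancelled.
func retryJoin(ctx context.Context, leaderInfos []leaderInfo, tryJoin func(leaderInfo) error) error {
	ticker := time.NewTicker(2 * time.Second)
	defer ticker.Stop()
	for {
		for _, li := range leaderInfos {
			if err := tryJoin(li); err == nil {
				return nil // joined; with Shamir, unseal proceeds separately
			}
		}
		select {
		case <-ctx.Done():
			return ctx.Err() // return upon context completion
		case <-ticker.C:
			// next round of attempts
		}
	}
}
```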