We don't check for errors in the Consul storage TLS setup. Setup can fail
because of a missing certificate, bad permissions, etc. If anything is wrong,
Vault currently just ignores the issue and continues, resulting in a lot of
confusion. Instead, let's return an error to the caller if this fails.
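A minimal sketch of the intended behavior, assuming the hashicorp/consul/api client (the function name is illustrative):

```go
package consulstorage

import (
	"fmt"

	"github.com/hashicorp/consul/api"
)

// newConsulClient builds the Consul client, surfacing TLS setup failures
// (missing certificate, bad permissions, ...) instead of dropping them.
// conf is expected to come from api.DefaultConfig(), which sets Transport.
func newConsulClient(conf *api.Config) (*api.Client, error) {
	tlsClientConfig, err := api.SetupTLSConfig(&conf.TLSConfig)
	if err != nil {
		// Previously this error was ignored; now the caller sees it.
		return nil, fmt.Errorf("failed to set up TLS for consul storage: %w", err)
	}
	conf.Transport.TLSClientConfig = tlsClientConfig
	return api.NewClient(conf)
}
```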
* See what it looks like to replace "master key" with "root key". There are two places that would require more challenging code changes: the storage path `core/master`, and its contents (the JSON-serialized EncodedKeyring structure); both are sketched after these commit notes.
* Restore accidentally deleted line
* Add changelog
* Update root->recovery
* Fix test
Co-authored-by: Nick Cabatoff <ncabatoff@hashicorp.com>
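A hedged sketch of why those two spots are hard (illustrative only, fields abridged):

```go
package vault

// masterKeyPath is the existing storage path; live clusters have data
// here, so renaming it needs a migration, not a find-and-replace.
const masterKeyPath = "core/master"

// EncodedKeyring is what is stored at that path, JSON-serialized; the
// serialized field names are part of the on-disk format.
type EncodedKeyring struct {
	MasterKey []byte
	// ... other fields elided
}
```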
* upgrade aerospike-client-go to v5.2.0
* use strings.Contains to check an error
* add changelog file
* go mod tidy
* go mod tidy
* update the changelog
* revert .gitignore update
* go mod tidy
* Update Go client libraries for etcd
* Added etcd server container to run etcd3 tests automatically.
* Removed etcd2 test case: it fails the backend tests but the failure is
unrelated to the uplift. The etcd2 backend implementation does not
remove empty nested nodes when removing leaf (see comments in #11980).
Unlike the other libraries that were migrated, there are no usages of
this lib in any of our plugins, and the only other known usage was in
go-kms-wrapping, which has been updated. Aliasing it like the other libs
would still keep the aws-sdk-go dep in the sdk module because of the
function signatures. So I've simply removed it entirely here.
* Diagnose warns if HTTPS is not used for ha-storage-tls-consul
* Skip the TLS verification check if HTTPS is not used for Consul HA storage
* Adding diagnose skip message for consul service registration
* raft file and quorum checks
* raft checks
* backup
* raft file checks test
* address comments and add more raft and file and process checks
* syntax issues
* modularize functions to compile differently on different os
* compile raft checks everywhere
* more build tag issues
* raft-diagnose
* correct file permission checks (sketched after these commit notes)
* upgrade tests and add a getConfigOffline test that currently does not work
* comment
* update file checks method signature on windows
* Update physical/raft/raft_test.go
Co-authored-by: Brian Kassouf <briankassouf@users.noreply.github.com>
* raft tests
* add todo comment for windows root ownership
* voter count message
* raft checks test fixes
Co-authored-by: Brian Kassouf <briankassouf@users.noreply.github.com>
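A minimal sketch of one of the file checks above, assuming Unix-style permission bits; names are illustrative, and as noted the Windows variant is compiled separately via build tags:

```go
package diagnose

import (
	"fmt"
	"os"
)

// checkFilePermissions warns when a raft file or directory is group- or
// world-accessible, returning diagnose-style warnings rather than failing.
func checkFilePermissions(path string) []string {
	info, err := os.Stat(path)
	if err != nil {
		return []string{fmt.Sprintf("cannot stat %q: %v", path, err)}
	}
	if info.Mode().Perm()&0o077 != 0 {
		return []string{fmt.Sprintf("%q is accessible by group or world", path)}
	}
	return nil
}
```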
* build out zombie lease system
* add typo for CI
* undo test CI commit
* time equality test isn't working on CI, so let's see what this does...
* add unrecoverable proto error, make proto, go mod vendor
* zombify leases if unrecoverable error, tests
* test fix: somehow a pointer-in-pointer receiver is nil after the pointer receiver is called
* tweaks based on roy feedback
* improve zombie errors
* update which errors are unrecoverable
* combine zombie logic
* keep subset of zombie lease in memory (sketched below)
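A minimal sketch of the zombie-lease idea, with hypothetical types (the real logic lives in Vault's expiration manager):

```go
package expiration

import (
	"errors"
	"sync"
)

var errUnrecoverable = errors.New("unrecoverable revocation error")

type leaseEntry struct{ LeaseID string }

// zombieLease keeps only a small subset of lease fields in memory.
type zombieLease struct {
	LeaseID string
	Err     string // why revocation can never succeed
}

type leaseManager struct{ zombies sync.Map }

// markZombie parks a lease whose revocation hit an unrecoverable error
// instead of retrying revocation forever.
func (m *leaseManager) markZombie(le *leaseEntry, err error) {
	if !errors.Is(err, errUnrecoverable) {
		return // recoverable errors stay on the normal retry path
	}
	m.zombies.Store(le.LeaseID, &zombieLease{LeaseID: le.LeaseID, Err: err.Error()})
}
```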
* Create helpers which integrate with OpenTelemetry for diagnose collection (sketched below)
* Go mod vendor
* Comments
* Update vault/diagnose/helpers.go
Co-authored-by: swayne275 <swayne275@gmail.com>
* Add unit test/example
* tweak output
* More comments
* add spot check concept
* Get unit tests working on Result structs
* Fix unit test
* Get unit tests working, and make diagnose sessions local rather than global
* Comments
* Last comments
* No need for init
* :|
* Fix helpers_test
Co-authored-by: swayne275 <swayne275@gmail.com>
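A minimal sketch of the helper idea, assuming the OpenTelemetry trace API; the real helpers in vault/diagnose differ in detail:

```go
package diagnose

import (
	"context"

	"go.opentelemetry.io/otel"
	"go.opentelemetry.io/otel/codes"
)

// Test wraps one diagnose step in a span so results can be collected
// from the trace rather than from ad-hoc logging.
func Test(ctx context.Context, name string, fn func(context.Context) error) error {
	ctx, span := otel.Tracer("diagnose").Start(ctx, name)
	defer span.End()
	if err := fn(ctx); err != nil {
		span.RecordError(err)
		span.SetStatus(codes.Error, err.Error())
		return err
	}
	return nil
}
```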
Fixes hashicorp/vault#5836
DynamoDB may, when throttled, return a 2xx response while not committing
all submitted items to the database.
Depending upon load, all actions in a BatchWriteUpdate may be throttled
with ProvisionedThroughputExceededException, in which case the AWS SDK
handles the retry. If some items were throttled but not all,
ProvisionedThroughputExceededException is not returned to the SDK, and it
is then up to us to resubmit the request.
We use an exponential backoff, as recommended by the AWS SDK, for the
cases where we repeatedly get partially throttled.
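A minimal sketch of that resubmission loop, assuming the aws-sdk-go v1 DynamoDB client:

```go
package dynamodbstorage

import (
	"time"

	"github.com/aws/aws-sdk-go/service/dynamodb"
)

// batchWriteWithBackoff resubmits UnprocessedItems from a 2xx
// BatchWriteItem response, backing off exponentially while throttled.
func batchWriteWithBackoff(client *dynamodb.DynamoDB, items map[string][]*dynamodb.WriteRequest) error {
	backoff := 50 * time.Millisecond
	for len(items) > 0 {
		out, err := client.BatchWriteItem(&dynamodb.BatchWriteItemInput{RequestItems: items})
		if err != nil {
			// Fully-throttled requests surface here, and the SDK has
			// already retried them internally before we see the error.
			return err
		}
		items = out.UnprocessedItems
		if len(items) > 0 {
			time.Sleep(backoff)
			backoff *= 2
		}
	}
	return nil
}
```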
* sanity checks for tls config in diagnose (one such check is sketched below)
* backup
* backup
* backup
* added necessary tests
* remove comment
* remove parallels causing test flakiness
* comments
* small fix
* separate out config hcl test case into new hcl file
* newline
* addressed comments
* addressed comments
* addressed comments
* addressed comments
* addressed comments
* reload funcs should be allowed to be nil
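One such sanity check, sketched minimally (the function name is illustrative):

```go
package diagnose

import (
	"crypto/tls"
	"fmt"
)

// checkTLSKeyPair verifies the configured certificate and key actually
// load, so misconfiguration is reported by diagnose instead of surfacing
// later at listener startup.
func checkTLSKeyPair(certFile, keyFile string) error {
	if _, err := tls.LoadX509KeyPair(certFile, keyFile); err != nil {
		return fmt.Errorf("tls config error for %q/%q: %w", certFile, keyFile, err)
	}
	return nil
}
```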
* k8s doc: update for 0.9.1 and 0.8.0 releases (#10825)
* k8s doc: update for 0.9.1 and 0.8.0 releases
* Update website/content/docs/platform/k8s/helm/configuration.mdx
Co-authored-by: Theron Voran <tvoran@users.noreply.github.com>
Co-authored-by: Theron Voran <tvoran@users.noreply.github.com>
* Autopilot initial commit
* Move autopilot related backend implementations to its own file
* Abstract promoter creation
* Add nil check for health
* Add server state oss no-ops
* Config ext stub for oss
* Make way for non-voters
* s/health/state
* s/ReadReplica/NonVoter
* Add synopsis and description
* Remove struct tags from AutopilotConfig
* Use var for config storage path
* Handle nil config when reading
* Enable testing autopilot by using inmem cluster
* First passing test
* Only report the server as known if it is present in raft config
* Autopilot defaults to on for all existing and new clusters
* Add locking to some functions
* Persist initial config
* Clarify the command usage doc
* Add health metric for each node
* Fix audit logging issue
* Don't set DisablePerformanceStandby to true in test
* Use node id label for health metric
* Log updates to autopilot config
* Less aggressively consume config loading failures
* Return a mutable config
* Return early from known servers if raft config is unable to be pulled
* Update metrics name
* Reduce log level for potentially noisy log
* Add knob to disable autopilot
* Don't persist if default config is in use
* Autopilot: Dead server cleanup (#10857)
* Dead server cleanup
* Initialize channel in any case
* Fix a bunch of tests
* Fix panic
* Add follower locking in heartbeat tracker
* Add LastContactFailureThreshold to config
* Add log when marking node as dead
* Update follower state locking in heartbeat tracker
* Avoid follower states being nil
* Pull test to its own file
* Add execution status to state response
* Optionally enable autopilot in some tests
* Updates
* Added API function to fetch autopilot configuration
* Add test for default autopilot configuration
* Configuration tests
* Add State API test
* Update test
* Added TestClusterOptions.PhysicalFactoryConfig
* Update locking
* Adjust locking in heartbeat tracker
* s/last_contact_failure_threshold/left_server_last_contact_threshold
* Add disabling autopilot as a core config option
* Disable autopilot in some tests
* s/left_server_last_contact_threshold/dead_server_last_contact_threshold
* Set the last heartbeat of followers to now when setting up the active node
* Don't use config defaults from CLI command
* Remove config file support
* Remove HCL test as well
* Persist only supplied config; merge supplied config with default to operate (sketched after these commit notes)
* Use pointer to structs for storing follower information
* Test update
* Retrieve non-voter status from the config bucket and set it up when a node comes up
* Manage desired suffrage
* Consider bucket being created already
* Move desired suffrage to its own entry
* s/DesiredSuffrageKey/LocalNodeConfigKey
* s/witnessSuffrage/recordSuffrage
* Fix test compilation
* Handle local node config post a snapshot install
* Commit to storage first; then record suffrage in fsm
* No need to handle the case of local node config being nil post snapshot restore
* Reconcile autopilot config when a new leader takes over duty
* Grab fsm lock when recording suffrage
* s/Suffrage/DesiredSuffrage in FollowerState
* Instantiate autopilot only in leader
* Default to old ways in more scenarios
* Make API gracefully handle 404
* Address some feedback
* Make IsDead an atomic.Value
* Simplify follower heartbeat tracking
* Use uber.atomic
* Don't have multiple causes for having autopilot disabled
* Don't remove node from follower states if we fail to remove the dead server
* Autopilot server removals map (#11019)
* Don't remove node from follower states if we fail to remove the dead server
* Use map to track dead server removals
* Use lock and map
* Use delegate lock
* Adjust when to remove entry from map
* Only hold the lock while accessing map
* Fix race
* Don't set default min_quorum
* Fix test
* Ensure follower states is not nil before starting autopilot
* Fix race
Co-authored-by: Jason O'Donnell <2160810+jasonodonnell@users.noreply.github.com>
Co-authored-by: Theron Voran <tvoran@users.noreply.github.com>
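A minimal sketch of the "persist only supplied config; merge supplied config with default to operate" rule, with hypothetical field names:

```go
package raftautopilot

// AutopilotConfig here is abridged and illustrative.
type AutopilotConfig struct {
	CleanupDeadServers bool
	MinQuorum          uint
}

func defaultAutopilotConfig() *AutopilotConfig {
	return &AutopilotConfig{}
}

// effectiveConfig starts from defaults and overlays only what the
// operator stored, so future default changes still apply to fields the
// operator never touched.
func effectiveConfig(stored *AutopilotConfig) *AutopilotConfig {
	conf := defaultAutopilotConfig()
	if stored != nil {
		conf.CleanupDeadServers = stored.CleanupDeadServers
		if stored.MinQuorum != 0 {
			conf.MinQuorum = stored.MinQuorum
		}
	}
	return conf
}
```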
* Add support for Managed Identity auth for physical/Azure
Obtain an OAuth token from IMDS to allow access to Azure Blob storage with
short-lived dynamic credentials (sketched below).
Fixes #7322
* add tests & update docs/dependencies
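A minimal sketch of the token fetch: the endpoint, api-version, and Metadata header are the documented IMDS contract; everything else (function name, error handling, elided JSON parsing) is illustrative:

```go
package azurestorage

import (
	"io"
	"net/http"
)

// fetchIMDSToken requests a managed-identity OAuth token for Azure
// Storage from the Instance Metadata Service.
func fetchIMDSToken() ([]byte, error) {
	const url = "http://169.254.169.254/metadata/identity/oauth2/token" +
		"?api-version=2018-02-01&resource=https%3A%2F%2Fstorage.azure.com%2F"
	req, err := http.NewRequest(http.MethodGet, url, nil)
	if err != nil {
		return nil, err
	}
	req.Header.Set("Metadata", "true") // IMDS rejects requests without this
	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		return nil, err
	}
	defer resp.Body.Close()
	// The response JSON carries access_token and expires_on for the
	// short-lived credential.
	return io.ReadAll(resp.Body)
}
```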
* Add ability to specify region for OCI Storage Backend
* Fix capitalization in Vault documentation
Co-authored-by: Josh Black <raskchanky@users.noreply.github.com>
Co-authored-by: Vishal Nayak <vishalnayak@users.noreply.github.com>
Adds debug and warn logging around AWS credential chain generation,
specifically to help users debug auto-unseal problems on AWS by logging
which role is being used in the case of a web identity token.
Adds a deferred call to flush the log output as well, to ensure logs
are output in the event of an initialization failure.
Fix some places where raft wasn't hooking into the core logger as it should.
Revisited the code that was setting the log level to Error during cleanup: it's normal for there to be a bunch of errors then, which makes it harder to see what went wrong up to the point where the test was deemed to have failed. So now, instead of setting log level to Error, we actually stop logging altogether. This only applies if the test didn't pass in its own logger during cluster creation, but we should be moving away from that anyway.
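A minimal sketch of that cleanup behavior, assuming hashicorp/go-hclog (the real test harness code differs):

```go
package testcluster

import "github.com/hashicorp/go-hclog"

// quietOnCleanup silences a cluster's logger at teardown. Raising the
// level to Error would still print the flood of errors that are normal
// during shutdown; turning logging off keeps the failure point readable.
func quietOnCleanup(logger hclog.Logger, passedInByTest bool) {
	if passedInByTest {
		return // the test owns its logger; leave it alone
	}
	logger.SetLevel(hclog.Off)
}
```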
* physical/spanner: use separate client for updating locks
We believe this mitigates an issue where a large influx of requests
cause the leader to be unable to update the lock table (since it cannot
grab a client from the pool or the client has no more open connections),
which causes cascading failure.
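A minimal sketch of the mitigation, assuming the cloud.google.com/go/spanner client:

```go
package spannerstorage

import (
	"context"

	"cloud.google.com/go/spanner"
)

// newClients creates a dedicated client (with its own session pool and
// connections) for lock-table updates, so a flood of data requests cannot
// starve the leader's lock renewals.
func newClients(ctx context.Context, database string) (data, lock *spanner.Client, err error) {
	if data, err = spanner.NewClient(ctx, database); err != nil {
		return nil, nil, err
	}
	if lock, err = spanner.NewClient(ctx, database); err != nil {
		data.Close()
		return nil, nil, err
	}
	return data, lock, nil
}
```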
* raft: initial work on raft ha storage support
* add note on join
* add todo note
* raft: add support for bootstrapping and joining existing nodes
* raft: gate bootstrap join by reading leader api address from storage
* raft: properly check for raft-only for certain conditionals
* raft: add bootstrap to api and cli
* raft: fix bootstrap cli command
* raft: add test for setting up new cluster with raft HA
* raft: extend TestRaft_HA_NewCluster to include inmem and consul backends
* raft: add test for updating an existing cluster to use raft HA
* raft: remove debug log lines, clean up verifyRaftPeers
* raft: minor cleanup
* raft: minor cleanup
* Update physical/raft/raft.go
Co-authored-by: Brian Kassouf <briankassouf@users.noreply.github.com>
* Update vault/ha.go
Co-authored-by: Brian Kassouf <briankassouf@users.noreply.github.com>
* Update vault/ha.go
Co-authored-by: Brian Kassouf <briankassouf@users.noreply.github.com>
* Update vault/logical_system_raft.go
Co-authored-by: Brian Kassouf <briankassouf@users.noreply.github.com>
* Update vault/raft.go
Co-authored-by: Brian Kassouf <briankassouf@users.noreply.github.com>
* Update vault/raft.go
Co-authored-by: Brian Kassouf <briankassouf@users.noreply.github.com>
* address feedback comments
* address feedback comments
* raft: refactor tls keyring logic
* address feedback comments
* Update vault/raft.go
Co-authored-by: Alexander Bezobchuk <alexanderbez@users.noreply.github.com>
* Update vault/raft.go
Co-authored-by: Alexander Bezobchuk <alexanderbez@users.noreply.github.com>
* address feedback comments
* testing: fix import ordering
* raft: rename var, cleanup comment line
* docs: remove ha_storage restriction note on raft
* docs: more raft HA interaction updates with migration and recovery mode
* docs: update the raft join command
* raft: update comments
* raft: add missing isRaftHAOnly check for clearing out state set earlier
* raft: update a few ha_storage config checks
* Update command/operator_raft_bootstrap.go
Co-authored-by: Vishal Nayak <vishalnayak@users.noreply.github.com>
* raft: address feedback comments
* raft: fix panic when checking for config.HAStorage.Type
* Update vault/raft.go
Co-authored-by: Alexander Bezobchuk <alexanderbez@users.noreply.github.com>
* Update website/pages/docs/commands/operator/raft.mdx
Co-authored-by: Alexander Bezobchuk <alexanderbez@users.noreply.github.com>
* raft: remove bootstrap cli command
* Update vault/raft.go
Co-authored-by: Brian Kassouf <briankassouf@users.noreply.github.com>
* Update vault/raft.go
Co-authored-by: Brian Kassouf <briankassouf@users.noreply.github.com>
* raft: address review feedback
* raft: revert vendored sdk
* raft: don't send applied index and node ID info if we're HA-only (sketched after these commit notes)
Co-authored-by: Brian Kassouf <briankassouf@users.noreply.github.com>
Co-authored-by: Alexander Bezobchuk <alexanderbez@users.noreply.github.com>
Co-authored-by: Vishal Nayak <vishalnayak@users.noreply.github.com>
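A minimal sketch of that HA-only guard, with illustrative names:

```go
package raftstorage

// When raft is only the ha_storage backend, applied index and node ID
// describe state the storage backend does not own, so heartbeat echo
// info omits them.
type echoInfo struct {
	AppliedIndex uint64
	NodeID       string
}

type RaftBackend struct {
	haOnly       bool
	appliedIndex uint64
	nodeID       string
}

func (b *RaftBackend) collectEchoInfo() *echoInfo {
	info := &echoInfo{}
	if !b.haOnly {
		info.AppliedIndex = b.appliedIndex
		info.NodeID = b.nodeID
	}
	return info
}
```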
* initial work on improving snapshot performance
* Work on snapshots
* rename a few functions
* Cleanup the snapshot file
* vendor the safeio library (its write-then-rename pattern is sketched after these commit notes)
* Add a test
* Add more tests
* Some review comments
* Fix comment
* Update physical/raft/snapshot.go
Co-authored-by: Alexander Bezobchuk <alexanderbez@users.noreply.github.com>
* Update physical/raft/snapshot.go
Co-authored-by: Alexander Bezobchuk <alexanderbez@users.noreply.github.com>
* Review feedback
Co-authored-by: Alexander Bezobchuk <alexanderbez@users.noreply.github.com>
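A minimal sketch of the write-then-rename pattern that a safe-IO helper library provides, written here against the standard library rather than safeio's own API:

```go
package raftsnapshot

import (
	"os"
	"path/filepath"
)

// writeAtomically writes data to a temp file, syncs it, then renames it
// into place, so a crash mid-write never leaves a truncated snapshot at
// the final path.
func writeAtomically(dir, name string, data []byte) error {
	tmp, err := os.CreateTemp(dir, name+".tmp*")
	if err != nil {
		return err
	}
	defer os.Remove(tmp.Name()) // no-op once the rename succeeds
	if _, err := tmp.Write(data); err != nil {
		tmp.Close()
		return err
	}
	if err := tmp.Sync(); err != nil { // flush to disk before renaming
		tmp.Close()
		return err
	}
	if err := tmp.Close(); err != nil {
		return err
	}
	return os.Rename(tmp.Name(), filepath.Join(dir, name))
}
```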
* Refactor PG container creation.
* Rework rotation tests to use shorter sleeps.
* Refactor rotation tests.
* Add a static role rotation test for MongoDB Atlas.
* Adds a safety switch to configuration files.
This requires a user to either use TLS or acknowledge that they are sending
credentials over plaintext (the acknowledgement check is sketched below).
* Warn if plaintext credentials will be passed
* Add true/false support to the plaintext transmission ack
* Updated website docs and ensured ToLower is used for true comparison
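A minimal sketch of the acknowledgement check; the config key name here is hypothetical, and per the commit above ToLower makes "True"/"TRUE" acceptable:

```go
package configutil

import (
	"fmt"
	"strings"
)

// checkPlaintextAck is the safety switch: without TLS, the user must
// explicitly accept that credentials travel in plaintext.
func checkPlaintextAck(conf map[string]string, usingTLS bool) error {
	if usingTLS {
		return nil
	}
	raw := conf["plaintext_transmission_ok"] // hypothetical key name
	switch strings.ToLower(raw) {
	case "true":
		return nil // user explicitly accepted plaintext credentials
	case "", "false":
		return fmt.Errorf("credentials would be sent in plaintext; enable TLS or acknowledge plaintext transmission")
	default:
		return fmt.Errorf("invalid acknowledgement value %q", raw)
	}
}
```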
* storage/raft: Advertise the configured cluster address
* Don't allow raft to start with an unspecified IP (sketched after these commit notes)
* Fix concurrent map write panic
* Add test file
* changelog++
* changelog++
* changelog++
* Update tcp_layer.go
* Update tcp_layer.go
* Only set the advertise addr if set
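A minimal sketch of the unspecified-IP check mentioned above:

```go
package raftstorage

import (
	"fmt"
	"net"
)

// validateAdvertiseAddr rejects unspecified IPs (0.0.0.0 or ::): they
// are listen addresses, not something raft peers can dial.
func validateAdvertiseAddr(host string) error {
	if ip := net.ParseIP(host); ip != nil && ip.IsUnspecified() {
		return fmt.Errorf("cannot advertise unspecified address %q to raft peers", host)
	}
	return nil
}
```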
* storage/raft: Add committed and applied indexes to the status output
* Update api vendor
* changelog++
* Update http/sys_leader.go
Co-authored-by: Jim Kalafut <jkalafut@hashicorp.com>
Co-authored-by: Jim Kalafut <jkalafut@hashicorp.com>
* stub out reusable storage test
* implement reusable inmem test
* work on reusable raft test
* stub out simple raft test
* switch to reusable raft storage
* cleanup tests
* cleanup tests
* refactor tests
* verify raft configuration
* cleanup tests
* stub out reuseStorage
* use common base address across clusters
* attempt to reuse raft cluster
* tinker with test
* fix typo
* start debugging
* debug raft configuration
* add BaseClusterListenPort to TestCluster options
* use BaseClusterListenPort in test
* raft join works now
* misc cleanup of raft tests
* use configurable base port for raft test
* clean up raft tests
* add parallelized tests for all backends
* clean up reusable storage tests
* remove debugging code from startClusterListener()
* improve comments in testhelpers
* improve comments in teststorage
* improve comments and test logging
* fix typo in vault/testing
* fix typo in comments
* remove debugging code
* make number of cores parameterizable in test
* raft: use file paths for TLS info in the retry_join stanza
* raft: maintain backward compat for existing tls params (fallback sketched after these commit notes)
* docs: update raft docs with new file-based TLS params
* Update godoc comment, fix docs
* raft: check for nil on concrete type in SetupCluster
* raft: move check to its own func
* raft: func cleanup
* raft: disallow disable_clustering = true when raft storage is used
* docs: update disable_clustering to mention new behavior
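A minimal sketch of the backward-compat fallback, using the documented retry_join parameter names (the helper itself is illustrative):

```go
package raftstorage

import "os"

// leaderClientCert prefers the new leader_client_cert_file path but
// falls back to the existing inline leader_client_cert PEM value.
func leaderClientCert(stanza map[string]string) (string, error) {
	if path := stanza["leader_client_cert_file"]; path != "" {
		pem, err := os.ReadFile(path)
		if err != nil {
			return "", err // a missing or unreadable file is a config error
		}
		return string(pem), nil
	}
	return stanza["leader_client_cert"], nil // legacy inline PEM, may be empty
}
```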
* Disallow non-voter setting from peers.json (validation sketched after these commit notes)
* Fix bug that would make the actual fix a no-op
* Change order of evaluation
* Error out instead of resetting the value
* Update physical/raft/raft.go
Co-authored-by: Calvin Leung Huang <cleung2010@gmail.com>
* Print node ID
Co-authored-by: Calvin Leung Huang <cleung2010@gmail.com>
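A minimal sketch of the peers.json validation, assuming the documented entry format: error out (printing the node ID) rather than silently resetting the value.

```go
package raftstorage

import "fmt"

// recoveryPeer mirrors a peers.json entry.
type recoveryPeer struct {
	ID       string `json:"id"`
	Address  string `json:"address"`
	NonVoter bool   `json:"non_voter"`
}

// validateRecoveryPeers rejects any entry that marks a node non-voter.
func validateRecoveryPeers(peers []recoveryPeer) error {
	for _, p := range peers {
		if p.NonVoter {
			return fmt.Errorf("setting non_voter is not supported in peers.json (node %s)", p.ID)
		}
	}
	return nil
}
```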