* Make sure we sanitize the rotation config on each clone
* Add regression test for missing rotation config
* use Equals
* simplify
Co-authored-by: Scott G. Miller <smiller@hashicorp.com>
* k8s doc: update for 0.9.1 and 0.8.0 releases (#10825)
* k8s doc: update for 0.9.1 and 0.8.0 releases
* Update website/content/docs/platform/k8s/helm/configuration.mdx
Co-authored-by: Theron Voran <tvoran@users.noreply.github.com>
Co-authored-by: Theron Voran <tvoran@users.noreply.github.com>
* Autopilot initial commit
* Move autopilot related backend implementations to its own file
* Abstract promoter creation
* Add nil check for health
* Add server state oss no-ops
* Config ext stub for oss
* Make way for non-voters
* s/health/state
* s/ReadReplica/NonVoter
* Add synopsis and description
* Remove struct tags from AutopilotConfig
* Use var for config storage path
* Handle nil config when reading
* Enable testing autopilot by using inmem cluster
* First passing test
* Only report the server as known if it is present in raft config
* Autopilot defaults to on for all existing and new clusters
* Add locking to some functions
* Persist initial config
* Clarify the command usage doc
* Add health metric for each node
* Fix audit logging issue
* Don't set DisablePerformanceStandby to true in test
* Use node id label for health metric
* Log updates to autopilot config
* Less aggressively consume config loading failures
* Return a mutable config
* Return early from known servers if raft config is unable to be pulled
* Update metrics name
* Reduce log level for potentially noisy log
* Add knob to disable autopilot
* Don't persist if default config is in use
* Autopilot: Dead server cleanup (#10857)
* Dead server cleanup
* Initialize channel in any case
* Fix a bunch of tests
* Fix panic
* Add follower locking in heartbeat tracker
* Add LastContactFailureThreshold to config
* Add log when marking node as dead
* Update follower state locking in heartbeat tracker
* Avoid follower states being nil
* Pull test to its own file
* Add execution status to state response
* Optionally enable autopilot in some tests
* Updates
* Added API function to fetch autopilot configuration
* Add test for default autopilot configuration
* Configuration tests
* Add State API test
* Update test
* Added TestClusterOptions.PhysicalFactoryConfig
* Update locking
* Adjust locking in heartbeat tracker
* s/last_contact_failure_threshold/left_server_last_contact_threshold
* Add disabling autopilot as a core config option
* Disable autopilot in some tests
* s/left_server_last_contact_threshold/dead_server_last_contact_threshold
* Set the lastheartbeat of followers to now when setting up active node
* Don't use config defaults from CLI command
* Remove config file support
* Remove HCL test as well
* Persist only supplied config; merge supplied config with default to operate
* Use pointer to structs for storing follower information
* Test update
* Retrieve non-voter status from configbucket and set it up when a node comes up
* Manage desired suffrage
* Consider bucket being created already
* Move desired suffrage to its own entry
* s/DesiredSuffrageKey/LocalNodeConfigKey
* s/witnessSuffrage/recordSuffrage
* Fix test compilation
* Handle local node config post a snapshot install
* Commit to storage first; then record suffrage in fsm
* No need to handle the case of local node config being nil post snapshot restore
* Reconcile autopilot config when a new leader takes over duty
* Grab fsm lock when recording suffrage
* s/Suffrage/DesiredSuffrage in FollowerState
* Instantiate autopilot only in leader
* Default to old ways in more scenarios
* Make API gracefully handle 404
* Address some feedback
* Make IsDead an atomic.Value
* Simplify follower heartbeat tracking
* Use uber.atomic
* Don't have multiple causes for having autopilot disabled
* Don't remove node from follower states if we fail to remove the dead server
* Autopilot server removals map (#11019)
* Don't remove node from follower states if we fail to remove the dead server
* Use map to track dead server removals
* Use lock and map
* Use delegate lock
* Adjust when to remove entry from map
* Only hold the lock while accessing map
* Fix race
* Don't set default min_quorum
* Fix test
* Ensure follower states is not nil before starting autopilot
* Fix race
Co-authored-by: Jason O'Donnell <2160810+jasonodonnell@users.noreply.github.com>
Co-authored-by: Theron Voran <tvoran@users.noreply.github.com>
* sketch out partial month activity log client API
* unit test partialMonthClientCount
* cleanup api
* add api doc, fix test, update api nomenclature to match existing
* cleanup
* add PR changelog file
* integration test for API
* report entities and tokens separately
* upgrade vault dependency set
* etcd and grpc issues:
* better for tests
* testing
* all upgrades for hashicorp deps
* kubernetes plugin upgrade seems to work
* kubernetes plugin upgrade seems to work
* etcd and a bunch of other stuff
* all vulnerable packages upgraded
* k8s is broken in linux env but not locally
* test fixes
* fix testing
* fix etcd and grpc
* fix etcd and grpc
* use master branch of go-testing-interface
* roll back etcd upgrade
* have to fix grpc since other vendors pull in grpc 1.35.0 but we can't due to etcd
* rolling back in the replace directives
* a few more testing dependencies to clean up
* fix go mod vendor
* basic pool and start testing
* refactor a bit for testing
* workFunc, start/stop safety, testing
* cleanup function for worker quit, more tests
* redo public/private members
* improve tests, export types, switch uuid package
* fix loop capture bug, cleanup
* cleanup tests
* update worker pool file name, other improvements
* add job manager prototype
* remove remnants
* add functions to wait for job manager and worker pool to stop, other fixes
* test job manager functionality, fix bugs
* encapsulate how jobs are distributed to workers
* make worker job channel read only
* add job interface, more testing, fixes
* set name for dispatcher
* fix test races
* wire up expiration manager most of the way
* dispatcher and job manager constructors don't return errors
* logger now dependency injected
* make some members private, test fcn to get worker pool size
* make GetNumWorkers public
* Update helper/fairshare/jobmanager_test.go
Co-authored-by: Brian Kassouf <briankassouf@users.noreply.github.com>
* update fairsharing usage, add tests
* make workerpool private
* remove custom worker names
* concurrency improvements
* remove worker pool cleanup function
* remove cleanup func from job manager, remove non blocking stop from fairshare
* update job manager for new constructor
* stop job manager when expiration manager stopped
* unset env var after test
* stop fairshare when started in tests
* stop leaking job manager goroutine
* prototype channel for waking up to assign work
* fix typo/bug and add tests
* improve job manager wake up, fix test typo
* put channel drain back
* better start/pause test for job manager
* comment cleanup
* degrade possible noisy log
* remove closure, clean up context
* improve revocation context timer
* test: reduce number of revocation workers during many tests
* Update vault/expiration.go
Co-authored-by: Brian Kassouf <briankassouf@users.noreply.github.com>
* feedback tweaks
Co-authored-by: Brian Kassouf <briankassouf@users.noreply.github.com>
Co-authored-by: Brian Kassouf <briankassouf@users.noreply.github.com>
* Updates identity/group to allow updating a group by name (#10223)
* Now that lookup by name is outside handleGroupUpdateCommon, do not
use the second name lookup as the object to update.
* Added changelog.
Co-authored-by: dr-db <25711615+dr-db@users.noreply.github.com>
* Add NIST guidance on rotating keys used for AES-GCM encryption
* Capture more places barrier encryption is used
* spacing issue
* Probabilistically track an estimated encryption count by key term
* Un-reorder imports
* wip
* get rid of sampling
* Make the error response to the sys/internal/ui/mounts with no client token consistent
* changelog
* Don't test against an empty mount path
* One other spot
* Instead, do all token checks first and early out before even looking for the mount
* Adding snowflake as a bundled database secrets plugin
* Add snowflake-database-plugin to expected bundled plugins
* Add snowflake plugin name to the mockBuiltinRegistry
Test was failing (once we specified the expected error to check) because when we create a token via the TokenStore, without registering the lease in the expiration manager, lookupInternal will see that there is an expiring token with no lease and delete it immediately, yielding the "no parent found" error.
* fix setting enable, update tests
* improve wording
* fix typo - had left the testing enabled setting in originally
* improve warning handling
* move from nested if to switch - TIL
* Send a test message before committing a new audit device.
Also, lower timeout on connection attempts in socket device.
* added changelog
* go mod vendor (picked up some unrelated changes.)
* Skip audit device check in integration test.
Co-authored-by: swayne275 <swayne@hashicorp.com>
* core: Record the time a node became active
* Update vault/core.go
Co-authored-by: Nick Cabatoff <ncabatoff@hashicorp.com>
* Add omitempty field
* Update vendor
* Added CL entry and fixed test
* Fix test
* Fix command package tests
Co-authored-by: Nick Cabatoff <ncabatoff@hashicorp.com>
* fix race that can cause deadlock on core state lock
The bug is in the grabLockOrStop function. For specific concurrent
executions the grabLockOrStop function can return stopped=true when
the lock is still held. A comment in grabLockOrStop indicates that the
function is only used when the stateLock is held, but grabLockOrStop is
being used to acquire the stateLock. If there are concurrent goroutines
using grabLockOrStop then some concurrent executions result in
stopped=true being returned when the lock is acquired.
The fix is to add a lock and some state around which the parent and
child goroutine in the grabLockOrStop function can coordinate so that
the different concurrent executions can be handled.
This change includes a non-deterministic unit test which reliably
reproduces the problem before the fix.
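To make the coordination concrete, here is a minimal Go sketch of the fixed grabLockOrStop pattern described above; the variable names and exact structure are illustrative rather than a verbatim copy of Vault's code:

```go
package vault

import "sync"

// grabLockOrStop tries to acquire the lock unless stopCh closes first.
// A mutex plus shared state lets the parent and the lock-grabbing child
// goroutine agree on who is responsible for releasing the lock, so that
// stopped=true is never returned while the lock is still held.
func grabLockOrStop(lockFunc, unlockFunc func(), stopCh chan struct{}) (stopped bool) {
	var l sync.Mutex
	parentWaiting := true
	childLocked := false

	doneCh := make(chan struct{})
	go func() {
		defer close(doneCh)
		lockFunc()

		l.Lock()
		defer l.Unlock()
		if parentWaiting {
			// The parent will return stopped=false and use the lock.
			childLocked = true
			return
		}
		// The parent already gave up waiting; it will never use the
		// lock, so release it here instead of leaking it.
		unlockFunc()
	}()

	select {
	case <-doneCh:
		return false
	case <-stopCh:
	}

	// Stop was requested. Decide, under the mutex, whether the child
	// already acquired the lock (unlock it now) or will acquire it later
	// (it will see parentWaiting == false and unlock it itself).
	l.Lock()
	defer l.Unlock()
	parentWaiting = false
	if childLocked {
		unlockFunc()
	}
	return true
}
```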
* use rand instead of time for random test stopCh close
Using time.Now().UnixNano()%2 ends up being system dependent because
different operating systems and hardware have different clock
resolution. A lower resolution will return the same unix time for a
longer period of time.
It is better to avoid this issue by using a random number generator.
This change uses the rand package default random number generator. It's
generally good to avoid using the default random number generator,
because it creates extra lock contention. For a test it should be fine.
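For instance, a sketch of the change (the helper name is illustrative):

```go
package vault

import "math/rand"

// maybeCloseStopCh closes stopCh on roughly half of invocations. Using
// rand.Intn instead of time.Now().UnixNano()%2 avoids the bias introduced
// by coarse clock resolution on some operating systems, where consecutive
// calls can observe the same UnixNano value.
func maybeCloseStopCh(stopCh chan struct{}) {
	if rand.Intn(2) == 0 {
		close(stopCh)
	}
}
```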
* fix racy activity log tests and move testing utilities elsewhere
* remove TODO
* move SetEnable out of activity log
* clarify not waiting on waitgroup
* remove todo
* merge activity log invalidation work from vault-enterprise PR 1546
* skip failing test due to enabled config on oss
Co-authored-by: Mark Gritter <mgritter@hashicorp.com>
* Add a flag to enable a permit pool to gate lease expiration
* Use the env var to get the size
* Add logs and metrics to help debug this
Co-authored-by: Hridoy Roy <roy@hashicorp.com>
Vault uses http.ServeMux which issues an HTTP 301 redirect if the
request path contains a double slash (`//`). Additionally, vault
handles all paths to ensure that the path only contains printable
characters. Therefore use the same validation on the to/from parameters
for remounting.
Not doing this can result in a Vault mount originally mounted at
`pki/foo` being remounted at `pki/foo//bar`, resulting in mounts
that cannot be accessed.
Co-authored-by: Vishal Nayak <vishalnayak@users.noreply.github.com>
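A sketch in Go of the shared validation described above; the helper name and exact checks are illustrative:

```go
package vault

import (
	"fmt"
	"strings"
	"unicode"
)

// validateMountPath applies to the remount to/from parameters the same
// constraints the HTTP layer effectively enforces on request paths.
func validateMountPath(p string) error {
	// http.ServeMux 301-redirects paths containing "//", so a mount with
	// an empty path segment can never be reached.
	if strings.Contains(p, "//") {
		return fmt.Errorf("path %q contains an empty segment", p)
	}
	for _, r := range p {
		if !unicode.IsPrint(r) {
			return fmt.Errorf("path %q contains non-printable characters", p)
		}
	}
	return nil
}
```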
* first commit
* update
* removed some ent features from backport
* final refactor
* backport patch
Co-authored-by: Hridoy Roy <hridoyroy@Hridoys-MacBook-Pro.local>
Co-authored-by: Hridoy Roy <hridoyroy@Hridoys-MBP.hitronhub.home>
* Consolidate locking for sys/health
This avoids a second state lock read-lock on every sys/health hit
* Address review feedback
Co-authored-by: Vishal Nayak <vishalnayakv@gmail.com>
Co-authored-by: Vishal Nayak <vishalnayak@users.noreply.github.com>
* auth: store period value on tokens created via login
* test: reduce potential flakiness due to ttl check
* test: govet on package declaration
* changelog++
* Temporarily remove CL entry
* Add back the CL entry
Co-authored-by: Vishal Nayak <vishalnayakv@gmail.com>
* Add test for 400 status on missing token
* Return logical.StatusBadRequest on missing token
* remove commented out code
Co-authored-by: Vishal Nayak <vishalnayak@users.noreply.github.com>
This also temporarily disables couchbase, elasticsearch, and
mongodbatlas because the `Serve` function needs to change signatures
and those plugins are vendored in from external repos, causing problems
when building.
* backport VAULT-672
* backport VAULT-672
* go mod tidy
* go mod tidy
* add back indirect import
* replace go mod and go sum with master version
* go mod vendor
* more go mod vendor
Co-authored-by: Hridoy Roy <hridoyroy@Hridoys-MBP.hitronhub.home>
Co-authored-by: Hridoy Roy <hridoyroy@Hridoys-MacBook-Pro.local>
This is part 1 of 4 for renaming the `newdbplugin` package. This copies the existing package to the new location but keeps the current one in place so we can migrate the existing references over more easily.
Vault creates an LRU cache that is used when interacting with the
physical backend. Add telemetry when the cache is hit, missed, written
to and deleted from. Use the MetricSink from ClusterMetrics
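A simplified sketch of the read path with this telemetry; the metric names and struct layout are assumptions, not Vault's exact code:

```go
package physical

import (
	"context"

	metrics "github.com/armon/go-metrics"
	lru "github.com/hashicorp/golang-lru"
)

// Cache is a simplified stand-in for Vault's physical cache wrapper.
type Cache struct {
	backend    Backend
	lru        *lru.TwoQueueCache
	metricSink metrics.MetricSink
}

// Get emits hit/miss counters around the LRU lookup.
func (c *Cache) Get(ctx context.Context, key string) (*Entry, error) {
	if raw, ok := c.lru.Get(key); ok {
		c.metricSink.IncrCounter([]string{"cache", "hit"}, 1)
		return raw.(*Entry), nil
	}
	c.metricSink.IncrCounter([]string{"cache", "miss"}, 1)
	// Fall through to the underlying backend; writing the result back into
	// the LRU (and the write/delete counters) is omitted for brevity.
	return c.backend.Get(ctx, key)
}
```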
Fix some places where raft wasn't hooking into the core logger as it should.
Revisited the code that was setting the log level to Error during cleanup: it's normal for there to be a bunch of errors then, which makes it harder to see what went wrong up to the point where the test was deemed to have failed. So now, instead of setting log level to Error, we actually stop logging altogether. This only applies if the test didn't pass in its own logger during cluster creation, but we should be moving away from that anyway.
* Increase expiration timeouts on leases to avoid races in NoopBackend
* Set timeouts depending on whether they are relevant to the test: 1s for irrelevant, back to 20ms if they are
* revert one more
* quotas: fix data race that could occur if ApplyQuota was called during a db reset
* Abstract out the locking caller
* Remove unneeded lock
* Update
Co-authored-by: Vishal Nayak <vishalnayakv@gmail.com>
Co-authored-by: Vishal Nayak <vishalnayak@users.noreply.github.com>
* Carefully move changes from the plugin-cluster-reload branch into this clean branch off master.
* Don't test this at this level, adequately covered in the api level tests
* Change PR link
* go.mod
* Vendoring
* Vendor api/sys_plugins.go
* Revert "Some of the OSS changes were clobbered when merging with quotas out of, master (#9343)"
This reverts commit 8719a9b7c4d6ca7afb2e0a85e7c570cc17081f41.
* Revert "OSS side of Global Plugin Reload (#9340)"
This reverts commit f98afb998ae50346849050e882b6be50807983ad.
* Add the initialized tag to Consul registration for parity with k8s (and for easy automated testing). Ensure that whenever we flag Vault as unsealed, we also flag it as initialized.
* Update API docs.
Co-authored-by: Jason O'Donnell <2160810+jasonodonnell@users.noreply.github.com>
* raft: initial work on raft ha storage support
* add note on join
* add todo note
* raft: add support for bootstrapping and joining existing nodes
* raft: gate bootstrap join by reading leader api address from storage
* raft: properly check for raft-only for certain conditionals
* raft: add bootstrap to api and cli
* raft: fix bootstrap cli command
* raft: add test for setting up new cluster with raft HA
* raft: extend TestRaft_HA_NewCluster to include inmem and consul backends
* raft: add test for updating an existing cluster to use raft HA
* raft: remove debug log lines, clean up verifyRaftPeers
* raft: minor cleanup
* raft: minor cleanup
* Update physical/raft/raft.go
Co-authored-by: Brian Kassouf <briankassouf@users.noreply.github.com>
* Update vault/ha.go
Co-authored-by: Brian Kassouf <briankassouf@users.noreply.github.com>
* Update vault/ha.go
Co-authored-by: Brian Kassouf <briankassouf@users.noreply.github.com>
* Update vault/logical_system_raft.go
Co-authored-by: Brian Kassouf <briankassouf@users.noreply.github.com>
* Update vault/raft.go
Co-authored-by: Brian Kassouf <briankassouf@users.noreply.github.com>
* Update vault/raft.go
Co-authored-by: Brian Kassouf <briankassouf@users.noreply.github.com>
* address feedback comments
* address feedback comments
* raft: refactor tls keyring logic
* address feedback comments
* Update vault/raft.go
Co-authored-by: Alexander Bezobchuk <alexanderbez@users.noreply.github.com>
* Update vault/raft.go
Co-authored-by: Alexander Bezobchuk <alexanderbez@users.noreply.github.com>
* address feedback comments
* testing: fix import ordering
* raft: rename var, cleanup comment line
* docs: remove ha_storage restriction note on raft
* docs: more raft HA interaction updates with migration and recovery mode
* docs: update the raft join command
* raft: update comments
* raft: add missing isRaftHAOnly check for clearing out state set earlier
* raft: update a few ha_storage config checks
* Update command/operator_raft_bootstrap.go
Co-authored-by: Vishal Nayak <vishalnayak@users.noreply.github.com>
* raft: address feedback comments
* raft: fix panic when checking for config.HAStorage.Type
* Update vault/raft.go
Co-authored-by: Alexander Bezobchuk <alexanderbez@users.noreply.github.com>
* Update website/pages/docs/commands/operator/raft.mdx
Co-authored-by: Alexander Bezobchuk <alexanderbez@users.noreply.github.com>
* raft: remove bootstrap cli command
* Update vault/raft.go
Co-authored-by: Brian Kassouf <briankassouf@users.noreply.github.com>
* Update vault/raft.go
Co-authored-by: Brian Kassouf <briankassouf@users.noreply.github.com>
* raft: address review feedback
* raft: revert vendored sdk
* raft: don't send applied index and node ID info if we're HA-only
Co-authored-by: Brian Kassouf <briankassouf@users.noreply.github.com>
Co-authored-by: Alexander Bezobchuk <alexanderbez@users.noreply.github.com>
Co-authored-by: Vishal Nayak <vishalnayak@users.noreply.github.com>
* Replaced ClusterMetricSink's cluster name with an atomic.Value.
This should permit go-race tests to pass which seal and unseal
the core.
* Replace metric sink before unseal to avoid data races.
* This package is new for 1.5 so this is not a breaking change.
* This is being moved because this code was originally intended to be used
within plugins, however the design of password policies has changed such
that this is no longer needed. Thus, this code doesn't need to be in the
public SDK.
* move adjustForSealMigration to vault package
* fix adjustForSealMigration
* begin working on new seal migration test
* create shamir seal migration test
* refactor testhelpers
* add VerifyRaftConfiguration to testhelpers
* stub out TestTransit
* Revert "refactor testhelpers"
This reverts commit 39593defd0d4c6fd79aedfd37df6298391abb9db.
* get shamir test working again
* stub out transit join
* work on transit join
* remove debug code
* initTransit now works with raft join
* runTransit works with inmem
* work on runTransit with raft
* runTransit works with raft
* cleanup tests
* TestSealMigration_TransitToShamir_Pre14
* TestSealMigration_ShamirToTransit_Pre14
* split for pre-1.4 testing
* add simple tests for transit and shamir
* fix typo in test suite
* debug wrapper type
* test debug
* test-debug
* refactor core migration
* Revert "refactor core migration"
This reverts commit a776452d32a9dca7a51e3df4a76b9234d8c0c7ce.
* begin refactor of adjustForSealMigration
* fix bug in adjustForSealMigration
* clean up tests
* clean up core refactoring
* fix bug in shamir->transit migration
* stub out test that brings individual nodes up and down
* refactor NewTestCluster
* pass listeners into newCore()
* simplify cluster address setup
* simplify extra test core setup
* refactor TestCluster for readability
* refactor TestCluster for readability
* refactor TestCluster for readability
* add shutdown func to TestCore
* add cleanup func to TestCore
* create RestartCore
* stub out TestSealMigration_ShamirToTransit_Post14
* refactor address handling in NewTestCluster
* fix listener setup in newCore()
* remove unnecessary lock from setSealsForMigration()
* rename sealmigration test package
* use ephemeral ports below 30000
* work on post-1.4 migration testing
* clean up pre-1.4 test
* TestSealMigration_ShamirToTransit_Post14 works for non-raft
* work on raft TestSealMigration_ShamirToTransit_Post14
* clean up test code
* refactor TestClusterCore
* clean up TestClusterCore
* stub out some temporary tests
* use HardcodedServerAddressProvider in seal migration tests
* work on raft for TestSealMigration_ShamirToTransit_Post14
* always use hardcoded raft address provider in seal migration tests
* debug TestSealMigration_ShamirToTransit_Post14
* fix bug in RestartCore
* remove debug code
* TestSealMigration_ShamirToTransit_Post14 works now
* clean up debug code
* clean up tests
* cleanup tests
* refactor test code
* stub out TestSealMigration_TransitToShamir_Post14
* set seals properly for transit->shamir migration
* migrateFromTransitToShamir_Post14 works for inmem
* migrateFromTransitToShamir_Post14 works for raft
* use base ports per-test
* fix seal verification test code
* simplify seal migration test suite
* simplify test suite
* cleanup test suite
* use explicit ports below 30000
* simplify use of numTestCores
* Update vault/external_tests/sealmigration/seal_migration_test.go
Co-authored-by: Calvin Leung Huang <cleung2010@gmail.com>
* Update vault/external_tests/sealmigration/seal_migration_test.go
Co-authored-by: Calvin Leung Huang <cleung2010@gmail.com>
* clean up imports
* rename to StartCore()
* Update vault/testing.go
Co-authored-by: Calvin Leung Huang <cleung2010@gmail.com>
* simplify test suite
* clean up tests
Co-authored-by: Calvin Leung Huang <cleung2010@gmail.com>
* Changes to expiration manager to walk tokens (including non-expiring ones.)
* Count by namespace in token manager.
* Keep a dictionary of policy lists and deduplicate based on it.
* enable seal wrap in all seal migration tests
* move adjustForSealMigration to vault package
* fix adjustForSealMigration
* begin working on new seal migration test
* create shamir seal migration test
* refactor testhelpers
* add VerifyRaftConfiguration to testhelpers
* stub out TestTransit
* Revert "refactor testhelpers"
This reverts commit 39593defd0d4c6fd79aedfd37df6298391abb9db.
* get shamir test working again
* stub out transit join
* work on transit join
* Revert "move resuable storage test to avoid creating import cycle"
This reverts commit b3ff2317381a5af12a53117f87d1c6fbb093af6b.
* remove debug code
* initTransit now works with raft join
* runTransit works with inmem
* work on runTransit with raft
* runTransit works with raft
* get rid of unused test
* cleanup tests
* TestSealMigration_TransitToShamir_Pre14
* TestSealMigration_ShamirToTransit_Pre14
* split for pre-1.4 testing
* add simple tests for transit and shamir
* fix typo in test suite
* debug wrapper type
* test debug
* test-debug
* refactor core migration
* Revert "refactor core migration"
This reverts commit a776452d32a9dca7a51e3df4a76b9234d8c0c7ce.
* begin refactor of adjustForSealMigration
* fix bug in adjustForSealMigration
* clean up tests
* clean up core refactoring
* fix bug in shamir->transit migration
* remove unnecessary lock from setSealsForMigration()
* rename sealmigration test package
* use ephemeral ports below 30000
* simplify use of numTestCores
* Add token creation counters.
* Created a utility to convert a TTL to a bucket name.
* Add counter covering token creation for response wrapping.
* Fix namespace label, with a new utility function.
* Refactor PG container creation.
* Rework rotation tests to use shorter sleeps.
* Refactor rotation tests.
* Add a static role rotation test for MongoDB Atlas.
* Populate token_ttl and token_issue_time fields on the Auth struct of audit log entries, and in the Auth portion of a response for login methods
* Revert go fmt, better zero checking
* Update unit tests
* changelog++
* Add random string generator with rules engine
This adds a random string generation library that validates random
strings against a set of rules. The library is designed for generating
passwords, but can be used to generate any random string.
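A miniature of the idea, assuming a hypothetical Rule interface; the real library's API differs:

```go
package random

import (
	"crypto/rand"
	"errors"
	"math/big"
	"unicode"
)

// Rule validates a candidate string.
type Rule interface {
	Pass(candidate []rune) bool
}

// MinDigits requires at least Min digit characters.
type MinDigits struct{ Min int }

func (m MinDigits) Pass(candidate []rune) bool {
	n := 0
	for _, r := range candidate {
		if unicode.IsDigit(r) {
			n++
		}
	}
	return n >= m.Min
}

// Generate draws candidates from charset (using crypto/rand, suitable for
// passwords) until one passes every rule or the attempt budget runs out.
func Generate(length int, charset []rune, rules []Rule) (string, error) {
	for attempt := 0; attempt < 1000; attempt++ {
		candidate := make([]rune, length)
		for i := range candidate {
			idx, err := rand.Int(rand.Reader, big.NewInt(int64(len(charset))))
			if err != nil {
				return "", err
			}
			candidate[i] = charset[idx.Int64()]
		}
		passes := true
		for _, rule := range rules {
			if !rule.Pass(candidate) {
				passes = false
				break
			}
		}
		if passes {
			return string(candidate), nil
		}
	}
	return "", errors.New("no candidate satisfied all rules")
}
```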
* storage/raft: Advertise the configured cluster address
* Don't allow raft to start with unspecified IP
* Fix concurrent map write panic
* Add test file
* changelog++
* changelog++
* changelog++
* Update tcp_layer.go
* Update tcp_layer.go
* Only set the advertise addr if set
* storage/raft: Add committed and applied indexes to the status output
* Update api vendor
* changelog++
* Update http/sys_leader.go
Co-authored-by: Jim Kalafut <jkalafut@hashicorp.com>
Co-authored-by: Jim Kalafut <jkalafut@hashicorp.com>
* serviceregistration: refactor service registration logic to run later
* move state check to the internal func
* sr/kubernetes: update setInitialStateInternal godoc
* sr/kubernetes: remove return in setInitialState
* core/test: fix mockServiceRegistration
* address review feedback
* stub out reusable storage test
* implement reusable inmem test
* work on reusable raft test
* stub out simple raft test
* switch to reusable raft storage
* cleanup tests
* cleanup tests
* refactor tests
* verify raft configuration
* cleanup tests
* stub out reuseStorage
* use common base address across clusters
* attempt to reuse raft cluster
* tinker with test
* fix typo
* start debugging
* debug raft configuration
* add BaseClusterListenPort to TestCluster options
* use BaseClusterListenPort in test
* raft join works now
* misc cleanup of raft tests
* use configurable base port for raft test
* clean up raft tests
* add parallelized tests for all backends
* clean up reusable storage tests
* remove debugging code from startClusterListener()
* improve comments in testhelpers
* improve comments in teststorage
* improve comments and test logging
* fix typo in vault/testing
* fix typo in comments
* remove debugging code
* make number of cores parameterizable in test
* raft: check for nil on concrete type in SetupCluster
* raft: move check to its own func
* raft: func cleanup
* raft: disallow disable_clustering = true when raft storage is used
* docs: update disable_clustering to mention new behavior
This avoids SetConfig, which is called on a SIGHUP, from having to grab a write lock; that write lock, together with a long-running request holding a read lock, may block any other operation that requires the state lock.
* token/renewal: return full set of token and identity policies in the policies field
* extend tests to cover additional token and identity policies on a token
* verify identity_policies returned on login and renewals
* fix #7623: add missing description field for GET /sys/auth/:path/tune endpoint
* fix #7623: allow empty description
* fix #7623: update tests with description field
* provide vault server flag to exit on core shutdown
* Update command/server.go
Co-Authored-By: Jeff Mitchell <jeffrey.mitchell@gmail.com>
Co-authored-by: Jeff Mitchell <jeffrey.mitchell@gmail.com>
Co-authored-by: Brian Kassouf <briankassouf@users.noreply.github.com>
* Seal migration after unsealing
* Refactor migration fields migrationInformation in core
* Perform seal migration as part of postUnseal
* Remove the sleep logic
* Use proper seal in the unseal function
* Fix migration from Auto to Shamir
* Fix the recovery config missing issue
* Address the non-ha migration case
* Fix the multi cluster case
* Avoid re-running seal migration
* Run the post migration code in new leaders
* Fix the issue of wrong recovery being set
* Address review feedback
* Add more complete testing coverage for seal migrations. (#8247)
* Add more complete testing coverage for seal migrations. Also remove VAULT_ACC gate from some tests that just depend on docker, cleanup dangling recovery config in storage after migration, and fix a call in adjustCoreForSealMigration that seems broken.
* Fix the issue of wrong recovery key being set
* Adapt tests to work with multiple cores.
* Add missing line to disable raft join.
Co-authored-by: Vishal Nayak <vishalnayak@users.noreply.github.com>
* Fix all known issues
* Remove warning
* Review feedback.
* Revert my previous change that broke raft tests. We'll need to come back and at least comment
this once we better understand why it's needed.
* Don't allow migration between same types for now
* Disable auto to auto tests for now since it uses migration between same types which is not allowed
* Update vault/core.go
Co-Authored-By: Brian Kassouf <briankassouf@users.noreply.github.com>
* Add migration logs
* Address review comments
* Add the recovery config check back
* Skip a few steps if migration is already done
* Return from waitForLeadership if migration fails
Co-authored-by: ncabatoff <nick.cabatoff@gmail.com>
Co-authored-by: Brian Kassouf <briankassouf@users.noreply.github.com>
* external_tests: ensure derived cores are stable before proceeding on tests
* testhelpers: add min duration tolerance when checking stability on derived core
* use observer pattern for service discovery
* update perf standby method
* fix test
* revert usersTags to being called serviceTags
* use previous consul code
* Vault isn't a performance standby before starting
* log err
* changes from feedback
* add Run method to interface
* changes from feedback
* fix core test
* update example
* Use Shamir as KEK when migrating from auto-seal to shamir
* Use the correct number of shares/threshold for the migrated seal.
* Fix log message
* Add WaitForActiveNode to test
* Make test fail
* Minor updates
* Test with more shares and a threshold
* Add seal/unseal step to the test
* Update the logic that prepares seal migration (#8187)
* Update the logic that preps seal migration
* Add test and update recovery logic
Co-authored-by: ncabatoff <nick.cabatoff@gmail.com>
Co-authored-by: Brian Kassouf <briankassouf@users.noreply.github.com>
* Raft retry join
* update
* Make retry join work with shamir seal
* Return upon context completion
* Update vault/raft.go
Co-Authored-By: Brian Kassouf <briankassouf@users.noreply.github.com>
* Address some review comments
* send leader information slice as a parameter
* Make retry join work properly with Shamir case. This commit has a blocking issue
* Fix join goroutine exiting before the job is done
* Polishing changes
* Don't return after a successful join during unseal
* Added config parsing test
* Add test and fix bugs
* minor changes
* Address review comments
* Fix build error
Co-authored-by: Brian Kassouf <briankassouf@users.noreply.github.com>
* plugin: fix panic on router.MatchingSystemView if backend is nil
* correctly determine the plugin binary file in the directory
* docs: simplify plugin file removal
* move ServiceDiscovery into methods
* add ServiceDiscoveryFactory
* add serviceDiscovery field to vault.Core
* refactor ConsulServiceDiscovery into separate struct
* cleanup
* revert accidental change to go.mod
* cleanup
* get rid of un-needed struct tags in vault.CoreConfig
* add service_discovery parser
* add ServiceDiscovery to config
* cleanup
* cleanup
* add test for ConfigServiceDiscovery to Core
* unit testing for config service_discovery stanza
* cleanup
* get rid of un-needed redirect_addr stuff in service_discovery stanza
* improve test suite
* cleanup
* clean up test a bit
* create docs for service_discovery
* check if service_discovery is configured, but storage does not support HA
* tinker with test
* tinker with test
* tweak docs
* move ServiceDiscovery into its own package
* tweak a variable name
* fix comment
* rename service_discovery to service_registration
* tweak service_registration config
* Revert "tweak service_registration config"
This reverts commit 5509920a8ab4c5a216468f262fc07c98121dce35.
* simplify naming
* refactor into ./serviceregistration/consul
Go 1.13 flipped TLS 1.3 to opt-out instead of opt-in, and its TLS 1.3
support does not allow configuring cipher suites. Simply remove the
affected test; it's not relevant going forward and there's ample
evidence it works properly prior to Go 1.13.
* core: revoke the proper token on partial failures from token-related requests
* move test to vault package, move test trigger to expiration manager
* update logging messages for clarity
* docstring fix
* Don't allow registering a non-root zero TTL token lease
This is defense-in-depth in that such a token was not allowed to be
used; however it's also a bug fix in that this would then cause no lease
to be generated but the token entry to be written, meaning the token
entry would stick around until it was attempted to be used or tidied (in
both cases the internal lookup would see that this was invalid and do a
revoke on the spot).
* Fix tests
* tidy
* vault: remove dead test helper function testMakeBatchTokenViaCore()
* vault: remove dead test helper function testMakeBatchTokenViaBackend()
* vault: remove dead test helper function mockPolicyStoreNoCache()
* vault: remove dead test helper function mockPolicyStore()
* vault: remove unused test imports
* Handpick cluster cipher suites when they're not user-set
There is an undocumented way for users to choose cluster cipher suites
but for the most part this is to paper over the fact that there are
undesirable suites in TLS 1.2.
If not explicitly set, have the set of cipher suites for the cluster
port come from a hand-picked list; either the allowed TLS 1.3 set (for
forwards compatibility) or the three identical ones for TLS 1.2.
The 1.2 suites have been supported in Go since at least Go 1.9, released
two years ago. As a result, in cases where no specific suites have been
chosen, this _ought_ to have no compatibility issues.
Also includes a useful test script.
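A sketch of the selection logic; the constant names come from crypto/tls, but the exact hand-picked set shown here is illustrative:

```go
package vault

import "crypto/tls"

// clusterCipherSuites returns hand-picked TLS 1.2 AEAD suites for the
// cluster port when the operator has not configured any. TLS 1.3 suites
// are not configurable in Go and apply automatically, giving forwards
// compatibility.
func clusterCipherSuites(userSuites []uint16) []uint16 {
	if len(userSuites) != 0 {
		return userSuites
	}
	return []uint16{
		tls.TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,
		tls.TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,
		tls.TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,
	}
}
```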
* Abstract generate-root authentication into the strategy interface
* Generate root strategy ncabatoff (#7700)
* Adapt to new shamir-as-kek reality.
* Don't try to verify the master key when we might still be sealed (in
recovery mode). Instead, verify it in the authenticate methods.
because when unsealing it wouldn't wait for core 0 to come up and become
the active node. Much of our testing code assumes that core0 is the
active node.
Shamir seals now come in two varieties: legacy and new-style. Legacy
Shamir is automatically converted to new-style when a rekey operation
is performed. All new Vault initializations using Shamir are new-style.
New-style Shamir writes an encrypted master key to storage, just like
AutoUnseal. The stored master key is encrypted using the shared key that
is split via Shamir's algorithm. Thus when unsealing, we take the key
fragments given, combine them into a Key-Encryption-Key, and use that
to decrypt the master key on disk. Then the master key is used to read
the keyring that decrypts the barrier.
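A conceptual sketch of the new-style unseal flow; shamir.Combine is the real API, while the storage and decryption helpers are illustrative:

```go
package vault

import (
	"context"

	"github.com/hashicorp/vault/shamir"
)

// unsealNewStyleShamir sketches the flow described above.
func unsealNewStyleShamir(ctx context.Context, fragments [][]byte) ([]byte, error) {
	// Combine the given key fragments into the key-encryption key (KEK).
	kek, err := shamir.Combine(fragments)
	if err != nil {
		return nil, err
	}

	// Read the encrypted master key written to storage at init/rekey time
	// and decrypt it with the KEK.
	encryptedMasterKey, err := readStoredMasterKey(ctx) // illustrative helper
	if err != nil {
		return nil, err
	}
	masterKey, err := aeadDecrypt(kek, encryptedMasterKey) // illustrative helper
	if err != nil {
		return nil, err
	}

	// The master key in turn decrypts the keyring that unlocks the barrier.
	return masterKey, nil
}
```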
* core: add postSealMigration method
The postSealMigration method is called at the end of the postUnseal
method if a seal migration has occurred. This starts a seal rewrap
process in the enterprise version of Vault. It is a no-op in the OSS version.
* Initial work
* rework
* s/dr/recovery
* Add sys/raw support to recovery mode (#7577)
* Factor the raw paths out so they can be run with a SystemBackend.
# Conflicts:
# vault/logical_system.go
* Add handleLogicalRecovery which is like handleLogical but is only
sufficient for use with the sys-raw endpoint in recovery mode. No
authentication is done yet.
* Integrate with recovery-mode. We now handle unauthenticated sys/raw
requests, albeit on path v1/raw instead of v1/sys/raw.
* Use sys/raw instead of raw during recovery.
* Don't bother persisting the recovery token. Authenticate sys/raw
requests with it.
* RecoveryMode: Support generate-root for autounseals (#7591)
* Recovery: Abstract config creation and log settings
* Recovery mode integration test. (#7600)
* Recovery: Touch up (#7607)
* Recovery: Touch up
* revert the raw backend creation changes
* Added recovery operation token prefix
* Move RawBackend to its own file
* Update API path and hit it using CLI flag on generate-root
* Fix a panic triggered when handling a request that yields a nil response. (#7618)
* Improve integ test to actually make changes while in recovery mode and
verify they're still there after coming back in regular mode.
* Refuse to allow a second recovery token to be generated.
* Resize raft cluster to size 1 and start as leader (#7626)
* RecoveryMode: Setup raft cluster post unseal (#7635)
* Setup raft cluster post unseal in recovery mode
* Remove marking as unsealed as its not needed
* Address review comments
* Accept only one seal config in recovery mode as there is no scope for migration
* add storage route
* template out the routes and new raft storage overview
* fetch raft config and add new server model
* pngcrush the favicon
* add view components and binary-file component
* add form-save-buttons component
* adjust rawRequest so that it can send a request body and return the response on errors
* hook up restore
* rename binary-file to file-to-array-buffer
* add ember-service-worker
* use forked version of ember-service-worker for now
* scope the service worker to a single endpoint
* show both download buttons for now
* add service worker download with a fallback to JS in-mem download
* add remove peer functionality
* lint go file
* add storage-type to the cluster and node models
* update edit form to take a cancel action
* separate out css table styles to be used by http-requests-table and the raft-overview component
* add raft-join adapter, model, component and use on the init page
* fix styling and gate the menu item on the cluster using raft storage
* style tweaks to the raft-join component
* fix linting
* add form-save-buttons component to storybook
* add cancel functionality for backup uploads, and add a success message for successful uploads
* add component tests
* add filesize.js
* add filesize and modified date to file-to-array-buffer
* fix linting
* fix server section showing in the cluster nav
* don't use babel transforms in service worker lib because we don't want 2 copies of babel polyfill
* add file-to-array-buffer to storybook
* add comments and use removeObjectURL to raft-storage-overview
* update alert-banner markdown
* messaging change for upload alert banner
* Update ui/app/templates/components/raft-storage-restore.hbs
Co-Authored-By: Joshua Ogle <joshua@joshuaogle.com>
* more comments
* actually render the label if passed and update stories with knobs
Seal keys can be rotated. When this happens, the barrier and recovery
keys should be re-encrypted with the new seal key. This change
automatically re-encrypts the barrier and recovery keys with the latest
seal key on the active node during the 'postUnseal' phase.
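A conceptual sketch of that check, with illustrative helper names rather than Vault's exact seal API:

```go
package vault

import "context"

// postSealRewrap sketches the re-encryption described above; both helper
// functions are illustrative.
func postSealRewrap(ctx context.Context) error {
	// Determine whether the stored barrier/recovery keys were encrypted
	// with an older seal key term than the seal's latest one.
	stale, err := storedKeysNeedRewrap(ctx) // illustrative helper
	if err != nil {
		return err
	}
	if !stale {
		return nil
	}
	// Decrypt with the old term and re-encrypt with the latest seal key.
	return rewrapStoredKeys(ctx) // illustrative helper
}
```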
* sys: add host-info endpoint, add client API method
* remove old commented handler
* add http tests, fix bugs
* query all partitions for disk usage
* fix Timestamp decoding
* add comments for clarification
* don't append a nil entry on disk usage query error
* remove HostInfo from the sdk api
We can use Logical().Read(...) to query this endpoint since the payload is contained within the data object. All warnings are preserved under Secret.Warnings.
* ensure that we're testing failure case against a standby node
* add and use TestWaitStandby to ensure core is on standby
* remove TestWaitStandby
* respond with local-only error
* move HostInfo into its own helper package
* fix imports; use new no-forward handler
* add cpu times to collection
* emit clearer multierrors/warnings by collection type
* add comments on HostInfo fields
The OIDC Discovery standard requires the response_types_supported field
to be returned in the .well-known/openid-configuration response.
Also, the AWS IAM OIDC consumer won't accept Vault as an identity
provider without this field.
Based on examples in the OIDC Core documentation, it appears Vault
supports only the `id_token` flow, and thus that is the only value that
makes sense to be set in this field. See:
https://openid.net/specs/openid-connect-core-1_0.html#AuthorizationExamples
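A sketch of the relevant slice of the discovery response; the struct shape and URLs are illustrative, not Vault's exact type:

```go
package vault

// discoveryDoc models part of the .well-known/openid-configuration response.
type discoveryDoc struct {
	Issuer                 string   `json:"issuer"`
	Keys                   string   `json:"jwks_uri"`
	ResponseTypesSupported []string `json:"response_types_supported"`
}

var exampleDiscovery = discoveryDoc{
	Issuer:                 "https://vault.example.com/v1/identity/oidc",
	Keys:                   "https://vault.example.com/v1/identity/oidc/.well-known/keys",
	ResponseTypesSupported: []string{"id_token"}, // the only flow Vault supports
}
```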
* sys/pprof: add pprof routes to the system backend
* sys/pprof: add pprof paths to handler with local-only check
* fix trailing slash on pprof index endpoint
* use new no-forward handler on pprof
* go mod tidy
* add pprof external tests
* disallow streaming requests to exceed DefaultMaxRequestDuration
* add max request duration test
This allows logical operations (along with a non-nil response writer) to
process http handler funcs within the operation function while keeping
auth and audit checks that the logical request flow provides.
* Move SudoPrivilege out of SystemView
We only use this in the token store and it literally doesn't work with
anything that isn't the token store or system mount, so we should stop
exposing something that doesn't work.
* Reconcile extended system view with sdk/logical a bit and put an explanation for why SudoPrivilege isn't moved over
Generalization of the PhysicalFactory notion introduced by Raft, so it can be used by other storage backends in tests. These are the OSS changes needed for my rework of the ent integ tests and cluster helpers.
* Fix create token sudo non-root namespace check
* Moved path trimming to SudoPrivilege
* Changed to tokenCtx instead of request ctx
* Use root context for AllowOperation; details in comment
* also flush nilNamespace when a namespace is flushed
* adds test cases with nilNamespace.ID
* adds a test case
* adds a test for oidcCache.Flush
* fixed a typo in an error message
There are a few different things happening in this change. First, some code that previously lived in enterprise has moved here: this includes some helper code for manipulating clusters and for building storage backends. Second, the existing cluster-building code using inmem storage has been generalized to allow various storage backends. Third, added support for creating two-cluster DR setups. Finally, there are tweaks to handle edge cases that
result in intermittent failures, or to eliminate sleeps in favour of polling to detect state changes.
Also: generalize TestClusterOptions.PhysicalFactory so it can be used either
as a per-core factory (for raft) or a per-cluster factory (for other
storage backends).
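A sketch of the polling style referred to above (replacing sleeps with polling for state changes); the helper name is illustrative:

```go
package testhelpers

import (
	"testing"
	"time"
)

// waitFor polls cond until it returns true or the timeout elapses,
// instead of sleeping for a fixed duration and hoping the state changed.
func waitFor(t *testing.T, timeout time.Duration, cond func() bool) {
	t.Helper()
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if cond() {
			return
		}
		time.Sleep(50 * time.Millisecond)
	}
	t.Fatal("timed out waiting for condition")
}
```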
* Add maximum amount of random characters requested at any given time
* Readability changes
* Removing sys/tools/random from the default policy
* Setting the maxBytes value as const
* Declaring maxBytes in the package to use it everywhere
* Using maxBytes in the error message
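Taken together, a sketch of what these commits describe; the exact limit value and function name are illustrative:

```go
package tools

import "github.com/hashicorp/vault/sdk/logical"

// maxBytes caps the amount of random data a single request may ask for.
const maxBytes = 128 * 1024

// checkRandomBytes returns an error response referencing maxBytes when the
// request exceeds the cap, and nil otherwise.
func checkRandomBytes(n int) *logical.Response {
	if n > maxBytes {
		return logical.ErrorResponse("requested number of bytes %d exceeds maximum of %d", n, maxBytes)
	}
	return nil
}
```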
This eliminates a bunch of noise at the end of a test's logs, making it
easier to diagnose test failures.
Add TestCluster.Logger. This is intended to be used when test helpers
want to log something. If TestClusterOptions.Logger is non-nil, it will
be used as the cluster logger, otherwise we create one based on test
name. In the absence of CoreConfig.Logger, this logger is also used as
the parent of each core's log. This makes it easy when running parallel
tests to identify which log message came from which test.
* flush identity/oidc cache by namespace
* separates and unit tests the logic that looks for a namespace id within a namespace key
* applies pr feedback
* renames nskeyContainsID to isNamespacedKey
We already have a separate log line if rollback fails. It really fills
up logs to always note when rollback is occurring and it usually isn't
useful for incidents.
Wrapping validation was deferring the audit-log function before actually
checking whether we were sealed or standby, and without having the
read lock grabbed.
* upgrade aws roles
* test upgrade aws roles
* Initialize aws credential backend at mount time
* add a TODO
* create end-to-end test for builtin/credential/aws
* fix bug in initializer
* improve comments
* add Initialize() to logical.Backend
* use Initialize() in Core.enableCredentialInternal()
* use InitializeRequest to call Initialize()
* improve unit testing for framework.Backend
* call logical.Backend.Initialize() from all of the places that it needs to be called.
* implement backend.proto changes for logical.Backend.Initialize()
* persist current role storage version when upgrading aws roles
* format comments correctly
* improve comments
* use postUnseal funcs to initialize backends
* simplify test suite
* improve test suite
* simplify logic in aws role upgrade
* simplify aws credential initialization logic
* simplify logic in aws role upgrade
* use the core's activeContext for initialization
* refactor builtin/plugin/Backend
* use a goroutine to upgrade the aws roles
* misc improvements and cleanup
* do not run AWS role upgrade on DR Secondary
* always call logical.Backend.Initialize() when loading a plugin.
* improve comments
* on standbys and DR secondaries we do not want to run any kind of upgrade logic
* fix awsVersion struct
* clarify aws version upgrade
* make the upgrade logic for aws auth more explicit
* aws upgrade is now called from a switch
* fix fallthrough bug
* simplify logic
* simplify logic
* rename things
* introduce currentAwsVersion const to track aws version
* improve comments
* rearrange things once more
* conglomerate things into one function
* stub out aws auth initialize e2e test
* improve aws auth initialize e2e test
* finish aws auth initialize e2e test
* tinker with aws auth initialize e2e test
* tinker with aws auth initialize e2e test
* tinker with aws auth initialize e2e test
* fix typo in test suite
* simplify logic a tad
* rearrange assignment
* Fix a few lifecycle related issues in #7025 (#7075)
* Fix panic when plugin fails to load
* Fix various read only storage errors
A mistake we've seen multiple times in our own plugins and that we've
seen in the GCP plugin now is that control flow (how the code is
structured, helper functions, etc.) can obfuscate whether an error came
from storage or some other Vault-core location (in which case likely it
needs to be a 5XX message) or because of user input (thus 4XX). Error
handling for functions therefore often ends up always treating errors as
either user related or internal.
When the error is logical.ErrReadOnly this means that treating errors as
user errors skips the check that triggers forwarding, instead returning
a read only view error to the user.
While it's obviously more correct to fix that code, it's not always
immediately apparent to reviewers or fixers what the issue is and fixing
it when it's found both requires someone to hit the problem and report
it (thus exposing bugs to users) and selective targeted refactoring that
only helps that one specific case.
If instead we check whether the logical.Response is an error and, if so,
whether it contains the error value, we work around this in all of these
cases automatically. It feels hacky since it's a coding mistake, but
it's one we've made multiple times, and avoiding bugs altogether is
better for our users.
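A sketch of that check; where exactly it runs in the request path is illustrative:

```go
package vault

import (
	"strings"

	"github.com/hashicorp/vault/sdk/logical"
)

// reSurfaceReadOnlyErr converts an error response that folded in
// logical.ErrReadOnly back into the sentinel error, so that request
// forwarding is triggered instead of returning a read-only view error.
func reSurfaceReadOnlyErr(resp *logical.Response, err error) error {
	if err != nil {
		return err
	}
	if resp != nil && resp.IsError() && resp.Error() != nil &&
		strings.Contains(resp.Error().Error(), logical.ErrReadOnly.Error()) {
		return logical.ErrReadOnly
	}
	return nil
}
```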
* storage/raft: When restoring a snapshot preseal first
* best-effort allow standbys to apply the restoreOp before sealing active node
* Don't cache the raft tls key
* Update physical/raft/raft.go
* Move pending raft peers to core
* Fix race on close bool
* Extend the leaderlease time for tests
* Update raft deps
* Fix audit hashing
* Fix race with auditing
* adds allowed_roles field to identity token keys and updates tests
* removed a comment that was redundant
* allowed_roles uses role client_ids instead of role names
* renamed allowed_roles to allowed_clients
* renamed allowed_clients to allowed_clientIDs
* WIP
* Kinda working?
* Handle nil during rotation
* Update discovery document
* WIP
* removes some warning messages and checks on keys when creating a role
* Path issuer ns/specific
* Fix nspath handling
* Update issuer handling
* Add locking around key updates
* Cleanup
* Fix nextRun handling
* saving work
* Include namespace in token
* saving work
* saving work
* happy path
* saving work
* sharing debug msgs
* Merge branch 'master' into refactor_periodic_func_test
# Conflicts:
# vault/identity_store_oidc.go
# vault/identity_store_oidc_test.go
* use MatchingStorageByAPIPath instead of logical.InmemStorage
At the level of role config it doesn't mean anything to use
default-service or default-batch; that's for mount tuning. So disallow
it in tokenutil. This also fixes the fact that the switch statement
wasn't right.
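A sketch of the corrected switch, with an illustrative function name:

```go
package tokenutil

import (
	"fmt"

	"github.com/hashicorp/vault/sdk/logical"
)

// parseRoleTokenType rejects the mount-tuning-only sentinel types at the
// role level and handles each valid value explicitly.
func parseRoleTokenType(raw string) (logical.TokenType, error) {
	switch raw {
	case "service":
		return logical.TokenTypeService, nil
	case "batch":
		return logical.TokenTypeBatch, nil
	case "", "default":
		return logical.TokenTypeDefault, nil
	case "default-service", "default-batch":
		// Only meaningful when tuning a mount, not on a role.
		return logical.TokenTypeDefault, fmt.Errorf("token type %q not allowed on a role", raw)
	default:
		return logical.TokenTypeDefault, fmt.Errorf("unknown token type %q", raw)
	}
}
```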
Add support for hashing time.Time within slices, which unbreaks auditing of requests returning the request counters.
Break Hash into struct-specific func like HashAuth, HashRequest. Move all the copying/hashing logic from FormatRequest/FormatResponse into the new Hash* funcs. HashStructure now modifies in place instead of copying.
Instead of returning an error when trying to hash map keys of type time.Time, ignore them, i.e. pass them through unhashed.
Enable auditing on test clusters by default if the caller didn't specify any audit backends. If they do, they're responsible for setting it up.
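A simplified sketch of the struct-specific hashing shape described here, modifying the entry in place; details differ from Vault's audit package:

```go
package audit

import (
	"github.com/hashicorp/vault/sdk/helper/salt"
	"github.com/hashicorp/vault/sdk/logical"
)

// HashAuth HMACs the sensitive string fields of an audit Auth entry in
// place rather than copying the whole structure first.
func HashAuth(salter *salt.Salt, in *logical.Auth, hmacAccessor bool) error {
	if in == nil {
		return nil
	}
	fn := salter.GetIdentifiedHMAC
	if in.ClientToken != "" {
		in.ClientToken = fn(in.ClientToken)
	}
	if hmacAccessor && in.Accessor != "" {
		in.Accessor = fn(in.Accessor)
	}
	return nil
}
```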