* Customizing HTTP headers in the config file
* Add changelog, fix bad imports
* fixing some bugs
* fixing interaction of custom headers and /ui
* Defining a member in core to set custom response headers
* missing additional file
* Some refactoring
* Adding automated tests for the feature
* Changing some error messages based on some recommendations
* Incorporating custom response headers struct into the request context
* removing some unused references
* fixing a test
* changing some error messages, removing a default header value from /ui
* fixing a test
* wrapping ResponseWriter to set the custom headers (see the sketch after this group of commits)
* adding a new test
* some cleanup
* removing some extra lines
* Addressing comments
* fixing some agent tests
* skipping custom headers from agent listener config,
removing two of the default headers as they cause issues with Vault in UI mode
Adding X-Content-Type-Options to the ui default headers
Let Content-Type be set as before
* Removing default custom headers and renaming some function variables
* some refactoring
* Refactoring and addressing comments
* removing a function and fixing comments
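
A minimal sketch of the ResponseWriter wrapper these commits describe, assuming illustrative type and field names rather than Vault's actual ones. Configured headers are applied only when the response is written, and only if the handler hasn't already set them, which matches the "let Content-Type be set as before" behavior above:

```go
package customheaders

import "net/http"

// customHeaderWriter wraps an http.ResponseWriter and injects headers
// loaded from the listener config just before the status is written.
// The type and field names here are illustrative.
type customHeaderWriter struct {
	http.ResponseWriter
	headers     map[string]string
	wroteHeader bool
}

func (w *customHeaderWriter) WriteHeader(status int) {
	if w.wroteHeader {
		return
	}
	w.wroteHeader = true
	h := w.Header()
	for name, value := range w.headers {
		// Only set a header the handler hasn't already set, so
		// handler-chosen values like Content-Type win.
		if h.Get(name) == "" {
			h.Set(name, value)
		}
	}
	w.ResponseWriter.WriteHeader(status)
}

func (w *customHeaderWriter) Write(b []byte) (int, error) {
	// Apply the headers on implicit 200 writes as well.
	w.WriteHeader(http.StatusOK)
	return w.ResponseWriter.Write(b)
}

// wrapWithCustomHeaders installs the wrapper in front of a handler.
func wrapWithCustomHeaders(next http.Handler, headers map[string]string) http.Handler {
	return http.HandlerFunc(func(rw http.ResponseWriter, r *http.Request) {
		next.ServeHTTP(&customHeaderWriter{ResponseWriter: rw, headers: headers}, r)
	})
}
```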
* move merge and compare states to vault core
* move MergeState, CompareStates and ParseRequiredStates to api package
* fix merge state reference in API Proxy
* move mergeStates test to api package
* add changelog
* ghost commit to trigger CI
* rename CompareStates to CompareReplicationStates
* rename MergeStates and make compareStates and parseStates private methods
* improved error messaging in parseReplicationState
* export ParseReplicationState for enterprise files
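
A hedged sketch of the parse-and-compare flow those commits move into the api package. The `v1:<cluster>:<local_index>:<replicated_index>` string format and all names below are simplifying assumptions, not Vault's exact wire format:

```go
package replication

import (
	"fmt"
	"strconv"
	"strings"
)

// replicationState is a simplified stand-in for the state the api
// package tracks.
type replicationState struct {
	Cluster         string
	LocalIndex      uint64
	ReplicatedIndex uint64
}

// parseReplicationState returns a descriptive error for each failure
// mode, in the spirit of the "improved error messaging" commit.
func parseReplicationState(raw string) (*replicationState, error) {
	parts := strings.Split(raw, ":")
	if len(parts) != 4 || parts[0] != "v1" {
		return nil, fmt.Errorf("invalid replication state %q: expected v1:<cluster>:<local>:<replicated>", raw)
	}
	local, err := strconv.ParseUint(parts[2], 10, 64)
	if err != nil {
		return nil, fmt.Errorf("invalid local index %q: %w", parts[2], err)
	}
	replicated, err := strconv.ParseUint(parts[3], 10, 64)
	if err != nil {
		return nil, fmt.Errorf("invalid replicated index %q: %w", parts[3], err)
	}
	return &replicationState{Cluster: parts[1], LocalIndex: local, ReplicatedIndex: replicated}, nil
}

// compareReplicationStates reports whether a is at least as current as b.
func compareReplicationStates(a, b *replicationState) bool {
	return a.Cluster == b.Cluster &&
		a.LocalIndex >= b.LocalIndex &&
		a.ReplicatedIndex >= b.ReplicatedIndex
}
```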
* initial refactoring of unseal step in run
* remove waitgroup
* remove waitgroup
* backup work
* backup
* backup
* completely modularize run and move into diagnose
* add diagnose errors for incorrect number of unseal keys
* comment tests back in
* backup
* first subspan
* finished subspanning but running into an error with timeouts
* remove runtime checks
* merge main branch
* meeting updates
* remove telemetry block
* roy comment
* subspans for seal finalization and wrapping diagnose latency checks
* backup while I fix something else
* fix storage latency test errors
* runtime checks
* diagnose with timeout on seal
* The main driver for this change was to make the read from a.newFragmentCh time out quickly rather than waiting for the much longer test timeout. While testing the change I observed a panic during shutdown, but it was swallowed and there was no stack trace, so it wasn't obvious. I'm hoping we can get rid of the recover, so I fixed the issue in the activity log tests that needed it.
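
The generic pattern behind that change, as a sketch; `receiveFragment` and the channel type are illustrative, not the actual activity log API:

```go
package fragments

import (
	"fmt"
	"time"
)

// receiveFragment bounds a channel read with its own timeout, so a
// failing test reports quickly instead of waiting for the framework's
// much longer deadline to fire.
func receiveFragment(ch <-chan string, timeout time.Duration) (string, error) {
	select {
	case frag := <-ch:
		return frag, nil
	case <-time.After(timeout):
		return "", fmt.Errorf("timed out after %s waiting for fragment", timeout)
	}
}
```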
* tls tests and root verification
* make the certificate verification check correct for the non-root CA case (see the TLS verification sketch at the end of this group)
* add expiry test
* addressed comments but struggling with the bug in parsing CAs and intermediates from a single file
* final checks on tls and listener
* cleanup
* sanity checks for tls config in diagnose
* backup
* backup
* backup
* added necessary tests
* remove comment
* remove parallels causing test flakiness
* comments
* small fix
* separate out config hcl test case into new hcl file
* newline
* addressed comments
* addressed comments
* addressed comments
* addressed comments
* addressed comments
* reload funcs should be allowed to be nil
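
A sketch of the certificate verification approach those TLS commits point at, using only the Go standard library. Treating self-signed certificates in the bundle as roots (and everything else as intermediates) is an assumption about the method, not Vault's exact code:

```go
package tlscheck

import (
	"bytes"
	"crypto/x509"
	"encoding/pem"
	"fmt"
)

// verifyLeaf checks a leaf certificate against a PEM bundle that may
// mix root and intermediate CAs in a single file. A certificate is
// treated as a root only if it is self-signed; everything else goes
// into the intermediates pool, which is what makes verification work
// in the non-root CA case.
func verifyLeaf(leaf *x509.Certificate, caBundle []byte) error {
	roots := x509.NewCertPool()
	inters := x509.NewCertPool()
	for block, rest := pem.Decode(caBundle); block != nil; block, rest = pem.Decode(rest) {
		if block.Type != "CERTIFICATE" {
			continue
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			return fmt.Errorf("parsing CA bundle: %w", err)
		}
		if bytes.Equal(cert.RawIssuer, cert.RawSubject) {
			roots.AddCert(cert)
		} else {
			inters.AddCert(cert)
		}
	}
	_, err := leaf.Verify(x509.VerifyOptions{Roots: roots, Intermediates: inters})
	return err
}
```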
* k8s doc: update for 0.9.1 and 0.8.0 releases (#10825)
* k8s doc: update for 0.9.1 and 0.8.0 releases
* Update website/content/docs/platform/k8s/helm/configuration.mdx
Co-authored-by: Theron Voran <tvoran@users.noreply.github.com>
Co-authored-by: Theron Voran <tvoran@users.noreply.github.com>
* Autopilot initial commit
* Move autopilot related backend implementations to its own file
* Abstract promoter creation
* Add nil check for health
* Add server state oss no-ops
* Config ext stub for oss
* Make way for non-voters
* s/health/state
* s/ReadReplica/NonVoter
* Add synopsis and description
* Remove struct tags from AutopilotConfig (see the config sketch after this group)
* Use var for config storage path
* Handle nil config when reading
* Enable testing autopilot by using inmem cluster
* First passing test
* Only report the server as known if it is present in raft config
* Autopilot defaults to on for all existing and new clusters
* Add locking to some functions
* Persist initial config
* Clarify the command usage doc
* Add health metric for each node
* Fix audit logging issue
* Don't set DisablePerformanceStandby to true in test
* Use node id label for health metric
* Log updates to autopilot config
* Less aggressively consume config loading failures
* Return a mutable config
* Return early from known servers if raft config is unable to be pulled
* Update metrics name
* Reduce log level for potentially noisy log
* Add knob to disable autopilot
* Don't persist if default config is in use
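
For orientation, a hedged sketch of the configuration shape and the persist-only-what-was-supplied behavior these commits describe; the field names and defaults below approximate Vault's but are not authoritative:

```go
package autopilot

import "time"

// AutopilotConfig approximates the knobs discussed in these commits.
type AutopilotConfig struct {
	CleanupDeadServers             bool
	LastContactThreshold           time.Duration
	DeadServerLastContactThreshold time.Duration
	MaxTrailingLogs                uint64
	MinQuorum                      uint // no default is set, per a later commit in this group
	ServerStabilizationTime        time.Duration
}

// merge fills zero-valued fields from defaults. Only the user-supplied
// config is persisted; the merged result is what autopilot operates on
// ("Persist only supplied config; merge supplied config with default
// to operate").
func (c *AutopilotConfig) merge(def *AutopilotConfig) {
	if c.LastContactThreshold == 0 {
		c.LastContactThreshold = def.LastContactThreshold
	}
	if c.DeadServerLastContactThreshold == 0 {
		c.DeadServerLastContactThreshold = def.DeadServerLastContactThreshold
	}
	if c.MaxTrailingLogs == 0 {
		c.MaxTrailingLogs = def.MaxTrailingLogs
	}
	if c.ServerStabilizationTime == 0 {
		c.ServerStabilizationTime = def.ServerStabilizationTime
	}
}
```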
* Autopilot: Dead server cleanup (#10857)
* Dead server cleanup
* Initialize channel in any case
* Fix a bunch of tests
* Fix panic
* Add follower locking in heartbeat tracker
* Add LastContactFailureThreshold to config
* Add log when marking node as dead
* Update follower state locking in heartbeat tracker
* Avoid follower states being nil
* Pull test to its own file
* Add execution status to state response
* Optionally enable autopilot in some tests
* Updates
* Added API function to fetch autopilot configuration
* Add test for default autopilot configuration
* Configuration tests
* Add State API test
* Update test
* Added TestClusterOptions.PhysicalFactoryConfig
* Update locking
* Adjust locking in heartbeat tracker
* s/last_contact_failure_threshold/left_server_last_contact_threshold
* Add disabling autopilot as a core config option
* Disable autopilot in some tests
* s/left_server_last_contact_threshold/dead_server_last_contact_threshold
* Set the last heartbeat of followers to now when setting up the active node
* Don't use config defaults from CLI command
* Remove config file support
* Remove HCL test as well
* Persist only supplied config; merge supplied config with default to operate
* Use pointer to structs for storing follower information
* Test update
* Retrieve non-voter status from the config bucket and set it up when a node comes up
* Manage desired suffrage
* Consider bucket being created already
* Move desired suffrage to its own entry
* s/DesiredSuffrageKey/LocalNodeConfigKey
* s/witnessSuffrage/recordSuffrage
* Fix test compilation
* Handle local node config post a snapshot install
* Commit to storage first; then record suffrage in fsm
* No need to handle the local node config being nil case post snapshot restore
* Reconcile autopilot config when a new leader takes over duty
* Grab fsm lock when recording suffrage
* s/Suffrage/DesiredSuffrage in FollowerState
* Instantiate autopilot only in leader
* Default to old ways in more scenarios
* Make API gracefully handle 404
* Address some feedback
* Make IsDead an atomic.Value
* Simplify follower heartbeat tracking
* Use uber.atomic
* Don't have multiple causes for having autopilot disabled
* Don't remove node from follower states if we fail to remove the dead server
* Autopilot server removals map (#11019)
* Don't remove node from follower states if we fail to remove the dead server
* Use map to track dead server removals (see the sketch after this group)
* Use lock and map
* Use delegate lock
* Adjust when to remove entry from map
* Only hold the lock while accessing map
* Fix race
* Don't set default min_quorum
* Fix test
* Ensure follower states is not nil before starting autopilot
* Fix race
Co-authored-by: Jason O'Donnell <2160810+jasonodonnell@users.noreply.github.com>
Co-authored-by: Theron Voran <tvoran@users.noreply.github.com>
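
A sketch of the lock-and-map approach from the nested dead-server-cleanup work (#11019); names are illustrative:

```go
package raftutil

import "sync"

// removalTracker remembers which dead servers already have a removal
// in flight, so autopilot doesn't drop a node from follower states
// before the removal actually succeeds.
type removalTracker struct {
	mu       sync.Mutex
	inFlight map[string]struct{} // keyed by server ID
}

// tryStart records a removal attempt; it returns false if one is
// already in flight for this server.
func (t *removalTracker) tryStart(serverID string) bool {
	t.mu.Lock()
	defer t.mu.Unlock()
	if t.inFlight == nil {
		t.inFlight = make(map[string]struct{})
	}
	if _, ok := t.inFlight[serverID]; ok {
		return false
	}
	t.inFlight[serverID] = struct{}{}
	return true
}

// finish clears the entry once the removal has completed, holding the
// lock only while accessing the map.
func (t *removalTracker) finish(serverID string) {
	t.mu.Lock()
	defer t.mu.Unlock()
	delete(t.inFlight, serverID)
}
```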
* basic pool and start testing
* refactor a bit for testing
* workFunc, start/stop safety, testing
* cleanup function for worker quit, more tests
* redo public/private members
* improve tests, export types, switch uuid package
* fix loop capture bug, cleanup (see the worker pool sketch after this group)
* cleanup tests
* update worker pool file name, other improvements
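
A compact sketch of a pool in the shape these commits describe, folding in the read-only job channel and the loop-capture fix; all names are illustrative:

```go
package fairshare

import "sync"

// pool runs jobs on a fixed set of worker goroutines.
type pool struct {
	jobs chan func()
	wg   sync.WaitGroup
}

func newPool(numWorkers int) *pool {
	p := &pool{jobs: make(chan func())}
	for i := 0; i < numWorkers; i++ {
		p.wg.Add(1)
		go func(id int) { // pass i as an argument; capturing it directly was the bug
			defer p.wg.Done()
			p.work(id, p.jobs)
		}(i)
	}
	return p
}

// work receives from a read-only view of the job channel, so a worker
// cannot accidentally send or close ("make worker job channel read only").
func (p *pool) work(id int, jobs <-chan func()) {
	for job := range jobs {
		job()
	}
}

// stop closes the job channel and waits for workers to drain.
func (p *pool) stop() {
	close(p.jobs)
	p.wg.Wait()
}
```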
* add job manager prototype
* remove remnants
* add functions to wait for job manager and worker pool to stop, other fixes
* test job manager functionality, fix bugs
* encapsulate how jobs are distributed to workers
* make worker job channel read only
* add job interface, more testing, fixes
* set name for dispatcher
* fix test races
* wire up expiration manager most of the way
* dispatcher and job manager constructors don't return errors
* logger now dependency injected
* make some members private, add test function to get worker pool size
* make GetNumWorkers public
* Update helper/fairshare/jobmanager_test.go
Co-authored-by: Brian Kassouf <briankassouf@users.noreply.github.com>
* update fairsharing usage, add tests
* make workerpool private
* remove custom worker names
* concurrency improvements
* remove worker pool cleanup function
* remove cleanup func from job manager, remove non blocking stop from fairshare
* update job manager for new constructor
* stop job manager when expiration manager stopped
* unset env var after test
* stop fairshare when started in tests
* stop leaking job manager goroutine
* prototype channel for waking up to assign work (see the sketch at the end of this group)
* fix typo/bug and add tests
* improve job manager wake up, fix test typo
* put channel drain back
* better start/pause test for job manager
* comment cleanup
* downgrade possibly noisy log
* remove closure, clean up context
* improve revocation context timer
* test: reduce number of revocation workers during many tests
* Update vault/expiration.go
Co-authored-by: Brian Kassouf <briankassouf@users.noreply.github.com>
* feedback tweaks
Co-authored-by: Brian Kassouf <briankassouf@users.noreply.github.com>
Co-authored-by: Brian Kassouf <briankassouf@users.noreply.github.com>
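
The wake-up channel prototyped above reduces to this pattern; a sketch with hypothetical names, not the fairshare package's actual API:

```go
package dispatch

// dispatcher sleeps until woken, then assigns queued work. A buffered
// channel of size one coalesces wake-ups, and the non-blocking send
// means producers never stall when a wake-up is already pending.
type dispatcher struct {
	wake chan struct{}
	quit chan struct{}
}

func newDispatcher() *dispatcher {
	return &dispatcher{wake: make(chan struct{}, 1), quit: make(chan struct{})}
}

// notify is called whenever a job is queued.
func (d *dispatcher) notify() {
	select {
	case d.wake <- struct{}{}:
	default: // a wake-up is already pending; dropping this one is safe
	}
}

func (d *dispatcher) run(assign func()) {
	for {
		select {
		case <-d.wake:
			assign() // distribute queued jobs to workers
		case <-d.quit:
			return
		}
	}
}
```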
* core: Record the time a node became active (sketch after this group)
* Update vault/core.go
Co-authored-by: Nick Cabatoff <ncabatoff@hashicorp.com>
* Add omitempty field
* Update vendor
* Added CL entry and fixed test
* Fix test
* Fix command package tests
Co-authored-by: Nick Cabatoff <ncabatoff@hashicorp.com>
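
A sketch of the recorded-activation-time idea, assuming an atomic.Value holder; Vault's actual field and method names may differ:

```go
package core

import (
	"sync/atomic"
	"time"
)

// core holds the activation timestamp in an atomic.Value so health
// endpoints can read it without taking the core's state lock.
type core struct {
	activeTime atomic.Value // holds a time.Time
}

func (c *core) becomeActive() {
	c.activeTime.Store(time.Now())
}

// ActiveTime returns the activation time, or the zero time if this
// node has never been active (hence `omitempty` on the API field).
func (c *core) ActiveTime() time.Time {
	if t, ok := c.activeTime.Load().(time.Time); ok {
		return t
	}
	return time.Time{}
}
```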
* fix racy activity log tests and move testing utilities elsewhere
* remove TODO
* move SetEnable out of activity log
* clarify not waiting on waitgroup
* remove todo