* core: add postSealMigration method
The postSealMigration method is called at the end of the postUnseal
method if a seal migration has occurred. It starts a seal rewrap process
in the enterprise version and is a no-op in the OSS version.
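A minimal sketch of the OSS side of this hook, assuming a stand-in Core type and signature (the real method lives on Vault's core, and the enterprise build replaces the body with the seal-rewrap kickoff):

```go
package vault

import "context"

// Core is a stand-in type so the sketch compiles on its own; the real
// method is defined on Vault's core.
type Core struct{}

// postSealMigration is called at the end of postUnseal when a seal
// migration has occurred. The enterprise build starts a seal rewrap
// here; the OSS build is a no-op, which is all this sketch shows.
func (c *Core) postSealMigration(ctx context.Context) error {
	return nil
}
```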
* cli: initial work on debug; server-status target
* debug: add metrics capture target (#7376)
* check against DR secondary
* debug: add compression
* refactor check into preflight func
* debug: set short test time on tests, fix exit code bug
* debug: use temp dir for output on tests
* debug: use mholt/archiver for compression (bundling sketched below)
* first pass on adding pprof
* use logger for output
* refactor polling target capture logic
* debug: poll and collect replication status
* debug: poll and collect host-info; rename output files and collection refactor
* fix comments
* add archive test; fix bugs found
* rename flag name to singular target
* add target output test; scaffold other test cases
* debug/test: add pprof and index file tests
* debug/test: add min timing check tests
* debug: fix index gen race and collection goroutine race
* debug: extend archive tests, handle race between program exit and polling goroutines
* update docstring
* debug: correctly add to pollingWg
* debug: add config target support
* debug: don't wait on interrupt shutdown; add file exists unit tests
* move pprof bits into its goroutine
* debug: skip empty metrics and some pprof file creation if permission denied, add matching unit test
* address comments and feedback
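A rough sketch of the compression step, assuming mholt/archiver's v3-style Archive helper and placeholder paths (the debug command's real output layout and archive naming are not shown here):

```go
package main

import (
	"log"

	"github.com/mholt/archiver/v3"
)

func main() {
	// The debug command writes its captures (metrics, pprof, host-info,
	// replication status, ...) into an output directory and then bundles
	// it up; archiver picks the format from the destination extension.
	// Both paths below are placeholders.
	if err := archiver.Archive([]string{"vault-debug-output"}, "vault-debug.tar.gz"); err != nil {
		log.Fatalf("failed to compress debug output: %v", err)
	}
}
```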
* Vault debug using run.Group (#7658)
* debug: switch to use oklog/run.Group
* debug: use context to cancel requests and interrupt run.Group actors (pattern sketched below)
* debug: trigger the first interval properly
* debug: metrics collection should use metrics interval
* debug: add missing continue on metrics error
* debug: remove the use of buffered chan to trigger first interval
* debug: don't shadow BaseCommand's client, properly block on interval capture failures
* debug: actually use c.cachedClient everywhere
* go mod vendor
* debug: run all pprof in goroutines; bump pprof timings in tests to reduce flakiness
* debug: update help text
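The run.Group rework above boils down to pairing an interval-capture actor with a signal actor, both unwound through a shared context so an interrupt stops polling immediately. A minimal sketch under assumed names (captureMetrics, the hard-coded durations) rather than the command's real structure:

```go
package main

import (
	"context"
	"fmt"
	"os"
	"os/signal"
	"time"

	"github.com/oklog/run"
)

// captureMetrics stands in for one of the debug command's polling targets
// (metrics, pprof, server-status, ...); the real capture funcs talk to the
// Vault API client and write files into the output directory.
func captureMetrics(ctx context.Context) error {
	fmt.Println("capturing metrics at", time.Now().Format(time.RFC3339))
	return nil
}

func main() {
	duration := 10 * time.Second
	interval := 2 * time.Second

	ctx, cancel := context.WithTimeout(context.Background(), duration)
	defer cancel()

	var g run.Group

	// Polling actor: capture once up front so the first interval fires
	// immediately, then on every tick until the context is done.
	g.Add(func() error {
		ticker := time.NewTicker(interval)
		defer ticker.Stop()

		if err := captureMetrics(ctx); err != nil {
			fmt.Fprintln(os.Stderr, "capture error:", err)
		}
		for {
			select {
			case <-ctx.Done():
				return nil
			case <-ticker.C:
				if err := captureMetrics(ctx); err != nil {
					// Log and keep polling rather than aborting the run.
					fmt.Fprintln(os.Stderr, "capture error:", err)
				}
			}
		}
	}, func(error) {
		cancel()
	})

	// Signal actor: an interrupt cancels the context, which unwinds the
	// polling actor without waiting for the remaining intervals.
	sigCh := make(chan os.Signal, 1)
	signal.Notify(sigCh, os.Interrupt)
	g.Add(func() error {
		select {
		case <-sigCh:
			return fmt.Errorf("interrupted")
		case <-ctx.Done():
			return nil
		}
	}, func(error) {
		cancel()
	})

	if err := g.Run(); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}
```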
* Initial work
* rework
* s/dr/recovery
* Add sys/raw support to recovery mode (#7577)
* Factor the raw paths out so they can be run with a SystemBackend.
* Add handleLogicalRecovery which is like handleLogical but is only
sufficient for use with the sys-raw endpoint in recovery mode. No
authentication is done yet.
* Integrate with recovery-mode. We now handle unauthenticated sys/raw
requests, albeit on path v1/raw instead of v1/sys/raw.
* Use sys/raw instead of raw during recovery.
* Don't bother persisting the recovery token. Authenticate sys/raw
requests with it.
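A toy sketch of the handler shape described in the last few commits, not Vault's implementation: serve only sys/raw while in recovery mode and authenticate with the recovery token. The token literal, its "r." prefix, the status codes, and the response body are all placeholders:

```go
package main

import (
	"io"
	"log"
	"net/http"
	"strings"
)

// recoveryToken would come from running generate-root while in recovery
// mode; the value and prefix here are placeholders.
var recoveryToken = "r.example"

// handleLogicalRecovery is deliberately much simpler than handleLogical:
// it only needs to be good enough for sys/raw, and it checks the recovery
// token directly because the token store is unavailable before a normal
// unseal.
func handleLogicalRecovery(w http.ResponseWriter, r *http.Request) {
	if !strings.HasPrefix(r.URL.Path, "/v1/sys/raw") {
		http.Error(w, "unsupported path in recovery mode", http.StatusNotFound)
		return
	}
	if r.Header.Get("X-Vault-Token") != recoveryToken {
		http.Error(w, "permission denied", http.StatusForbidden)
		return
	}
	io.WriteString(w, `{"data":{}}`)
}

func main() {
	http.HandleFunc("/", handleLogicalRecovery)
	log.Fatal(http.ListenAndServe("127.0.0.1:8200", nil))
}
```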
* RecoveryMode: Support generate-root for autounseals (#7591)
* Recovery: Abstract config creation and log settings
* Recovery mode integration test. (#7600)
* Recovery: Touch up (#7607)
* Recovery: Touch up
* revert the raw backend creation changes
* Added recovery operation token prefix
* Move RawBackend to its own file
* Update API path and hit it using CLI flag on generate-root
* Fix a panic triggered when handling a request that yields a nil response. (#7618)
* Improve integ test to actually make changes while in recovery mode and
verify they're still there after coming back in regular mode.
* Refuse to allow a second recovery token to be generated.
* Resize raft cluster to size 1 and start as leader (#7626)
* RecoveryMode: Setup raft cluster post unseal (#7635)
* Setup raft cluster post unseal in recovery mode
* Remove marking as unsealed as it's not needed
* Address review comments
* Accept only one seal config in recovery mode as there is no scope for migration
Currently, whenever we start a new C* session in the database plugin, we
run `LIST ALL` to determine whether we are a superuser or otherwise have
permissions on roles. This is a fairly sensible check, except it can be
very slow when there are a lot of roles (C* isn't good at listing
things). It also puts significant load on C* and transfers a lot of
data. We've seen timeout issues when running this query; raising the
timeout is possible, but we'd rather be able to switch the check off
entirely.
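A sketch of that check and the proposed opt-out, assuming the option surfaces as something like a skip-verification flag on the connection config; the flag name and wiring here are illustrative, not the plugin's exact fields:

```go
package main

import (
	"fmt"
	"log"

	"github.com/gocql/gocql"
)

// verifyConnection mirrors the permission check described above: run
// `LIST ALL` against a fresh session unless the operator has opted out.
func verifyConnection(session *gocql.Session, skipVerification bool) error {
	if skipVerification {
		// Operator opted out: skip the expensive role listing entirely.
		return nil
	}
	// LIST ALL enumerates every role's permissions; on clusters with many
	// roles this is slow and heavy, which is what motivates the opt-out.
	if err := session.Query("LIST ALL").Exec(); err != nil {
		return fmt.Errorf("session is not a superuser and lacks role permissions: %w", err)
	}
	return nil
}

func main() {
	cluster := gocql.NewCluster("127.0.0.1")
	session, err := cluster.CreateSession()
	if err != nil {
		log.Fatal(err)
	}
	defer session.Close()

	if err := verifyConnection(session, false); err != nil {
		log.Fatal(err)
	}
	fmt.Println("connection verified")
}
```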
* add storage route
* template out the routes and new raft storage overview
* fetch raft config and add new server model
* pngcrush the favicon
* add view components and binary-file component
* add form-save-buttons component
* adjust rawRequest so that it can send a request body and return the response on errors
* hook up restore
* rename binary-file to file-to-array-buffer
* add ember-service-worker
* use forked version of ember-service-worker for now
* scope the service worker to a single endpoint
* show both download buttons for now
* add service worker download with a fallback to JS in-mem download
* add remove peer functionality
* lint go file
* add storage-type to the cluster and node models
* update edit form to take a cancel action
* separate out CSS table styles to be used by http-requests-table and the raft-overview component
* add raft-join adapter, model, component and use on the init page
* fix styling and gate the menu item on the cluster using raft storage
* style tweaks to the raft-join component
* fix linting
* add form-save-buttons component to storybook
* add cancel functionality for backup uploads, and add a success message for successful uploads
* add component tests
* add filesize.js
* add filesize and modified date to file-to-array-buffer
* fix linting
* fix server section showing in the cluster nav
* don't use babel transforms in service worker lib because we don't want 2 copies of babel polyfill
* add file-to-array-buffer to storybook
* add comments and use removeObjectURL in raft-storage-overview
* update alert-banner markdown
* messaging change for upload alert banner
* Update ui/app/templates/components/raft-storage-restore.hbs
Co-Authored-By: Joshua Ogle <joshua@joshuaogle.com>
* more comments
* actually render the label if passed and update stories with knobs
* implement SSRF protection header
* add test for SSRF protection header
* cleanup
* refactor
* implement SSRF header on a per-listener basis (handler pattern sketched below)
* cleanup
* cleanup
* create unit test for agent SSRF
* improve unit test for agent SSRF
* add VaultRequest SSRF header to CLI
* fix unit test
* cleanup
* improve test suite
* simplify check for Vault-Request header
* add constant for Vault-Request header
* improve test suite
* change 'config' to 'agentConfig'
* Revert "change 'config' to 'agentConfig'"
This reverts commit 14ee72d21fff8027966ee3c89dd3ac41d849206f.
* do not remove header from request
* change header name to X-Vault-Request
* simplify http.Handler logic
* cleanup
* simplify http.Handler logic
* use stdlib errors package
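A minimal sketch of the per-listener header check these commits describe, assuming the header constant and a 412 response for requests that lack it; Vault Agent's actual listener plumbing and error body may differ:

```go
package main

import (
	"log"
	"net/http"
)

// VaultRequestHeader mirrors the constant added for the SSRF protection
// header; the agent CLI and clients set it on every request they originate.
const VaultRequestHeader = "X-Vault-Request"

// requireRequestHeader wraps a listener's handler and rejects requests
// that do not carry the header. The header is left on the request rather
// than stripped, and whether a listener is wrapped at all is a
// per-listener decision.
func requireRequestHeader(next http.Handler) http.Handler {
	return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		if r.Header.Get(VaultRequestHeader) == "" {
			http.Error(w, "missing "+VaultRequestHeader+" header", http.StatusPreconditionFailed)
			return
		}
		next.ServeHTTP(w, r)
	})
}

func main() {
	mux := http.NewServeMux()
	mux.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
		w.Write([]byte("ok"))
	})
	// Only this listener opts in to the check.
	log.Fatal(http.ListenAndServe("127.0.0.1:8100", requireRequestHeader(mux)))
}
```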