* Initial work
* rework
* s/dr/recovery
* Add sys/raw support to recovery mode (#7577)
* Factor the raw paths out so they can be run with a SystemBackend.
* Add handleLogicalRecovery, which is like handleLogical but only
sufficient for use with the sys/raw endpoint in recovery mode. No
authentication is done yet.
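A minimal sketch of the idea, assuming hypothetical names and signatures rather than Vault's actual internals: a recovery-mode handler that routes only sys/raw and skips token authentication.

```go
// Hypothetical sketch (names are illustrative, not Vault's actual internals):
// a recovery-mode handler that only serves sys/raw and performs no token
// authentication yet.
package recovery

import (
	"net/http"
	"strings"
)

// handleLogicalRecovery wraps the raw backend handler and rejects anything
// other than sys/raw paths while the server is in recovery mode.
func handleLogicalRecovery(raw http.Handler) http.Handler {
	return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		if !strings.HasPrefix(r.URL.Path, "/v1/sys/raw") {
			http.Error(w, "path not supported in recovery mode", http.StatusNotFound)
			return
		}
		// No authentication is performed at this point; the recovery-token
		// check is layered on later.
		raw.ServeHTTP(w, r)
	})
}
```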
* Integrate with recovery-mode. We now handle unauthenticated sys/raw
requests, albeit on path v1/raw instead of v1/sys/raw.
* Use sys/raw instead of raw during recovery.
* Don't bother persisting the recovery token. Authenticate sys/raw
requests with it.
* RecoveryMode: Support generate-root for autounseals (#7591)
* Recovery: Abstract config creation and log settings
* Recovery mode integration test. (#7600)
* Recovery: Touch up (#7607)
* Recovery: Touch up
* revert the raw backend creation changes
* Added recovery operation token prefix
* Move RawBackend to its own file
* Update API path and hit it using CLI flag on generate-root
* Fix a panic triggered when handling a request that yields a nil response. (#7618)
* Improve integ test to actually make changes while in recovery mode and
verify they're still there after coming back up in regular mode.
* Refuse to allow a second recovery token to be generated.
* Resize raft cluster to size 1 and start as leader (#7626)
* RecoveryMode: Setup raft cluster post unseal (#7635)
* Setup raft cluster post unseal in recovery mode
* Remove marking as unsealed as its not needed
* Address review comments
* Accept only one seal config in recovery mode as there is no scope for migration
* sys/pprof: add pprof routes to the system backend
* sys/pprof: add pprof paths to handler with local-only check
* fix trailing slash on pprof index endpoint
* use new no-forward handler on pprof
* go mod tidy
* add pprof external tests
* disallow streaming requests to exceed DefaultMaxRequestDuration
* add max request duration test
This allows logical operations (along with a non-nil response writer) to
process http handler funcs within the operation function while keeping the
auth and audit checks that the logical request flow provides.
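A hedged sketch of what such a streaming operation can look like; the Request type here is illustrative, not Vault's actual logical request struct. The point is that the operation receives a response writer and hands it to a standard handler after auth and audit have already run.

```go
// Hypothetical sketch: a logical operation that streams a pprof profile
// through the response writer carried on the request, after the normal
// auth and audit checks have already run in the logical request flow.
package pprofstream

import (
	"errors"
	"net/http"
	"net/http/pprof"
)

// Request stands in for a logical request that carries the raw HTTP request
// and an optional response writer for streaming endpoints.
type Request struct {
	HTTPRequest    *http.Request
	ResponseWriter http.ResponseWriter
}

func handlePprofProfile(req *Request) error {
	if req.ResponseWriter == nil {
		return errors.New("streaming endpoint requires a response writer")
	}
	// Hand the writer straight to the standard pprof handler; the operation
	// itself produces no logical response body.
	pprof.Profile(req.ResponseWriter, req.HTTPRequest)
	return nil
}
```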
* Work on raft backend
* Add logstore locally
* Add encryptor and unsealable interfaces
* Add clustering support to raft
* Remove client and handler
* Bootstrap raft on init
* Cleanup raft logic a bit
* More raft work
* Work on TLS config
* More work on bootstrapping
* Fix build
* More work on bootstrapping
* More bootstrapping work
* fix build
* Remove consul dep
* Fix build
* merged oss/master into raft-storage
* Work on bootstrapping
* Get bootstrapping to work
* Clean up FMS and node-id
* Update local node ID logic
* Cleanup node-id change
* Work on snapshotting
* Raft: Add remove peer API (#906)
* Add remove peer API
* Add some comments
* Fix existing snapshotting (#909)
* Raft get peers API (#912)
* Read raft configuration
* address review feedback
* Use the Leadership Transfer API to step-down the active node (#918)
* Raft join and unseal using Shamir keys (#917)
* Raft join using shamir
* Store AEAD instead of master key
* Split the raft join process to answer the challenge after a successful unseal
* get the follower to standby state
* Make unseal work
* minor changes
* Some input checks
* reuse the shamir seal access instead of new default seal access
* refactor joinRaftSendAnswer function
* Synchronously send answer in auto-unseal case
* Address review feedback
* Raft snapshots (#910)
* Fix existing snapshotting
* implement the noop snapshotting
* Add comments and switch log libraries
* add some snapshot tests
* add snapshot test file
* add TODO
* More work on raft snapshotting
* progress on the ConfigStore strategy
* Don't use two buckets
* Update the snapshot store logic to hide the file logic
* Add more backend tests
* Cleanup code a bit
* [WIP] Raft recovery (#938)
* Add recovery functionality
* remove fmt.Printfs
* Fix a few fsm bugs
* Add max size value for raft backend (#942)
* Add max size value for raft backend
* Include physical.ErrValueTooLarge in the message
* Raft snapshot Take/Restore API (#926)
* Initial work on raft snapshot APIs
* Always redirect snapshot install/download requests
* More work on the snapshot APIs
* Cleanup code a bit
* On restore handle special cases
* Use the seal to encrypt the sha sum file
* Add sealer mechanism and fix some bugs
* Call restore while state lock is held
* Send restore cb trigger through raft log
* Make error messages nicer
* Add test helpers
* Add snapshot test
* Add shamir unseal test
* Add more raft snapshot API tests
* Fix locking
* Change wording to initialize
* Add underlying raw object to test cluster core
* Move leaderUUID to core
* Add raft TLS rotation logic (#950)
* Add TLS rotation logic
* Cleanup logic a bit
* Add/Remove from follower state on add/remove peer
* add comments
* Update more comments
* Update request_forwarding_service.proto
* Make sure we populate all nodes in the followerstate obj
* Update times
* Apply review feedback
* Add more raft config setting (#947)
* Add performance config setting
* Add more config options and fix tests
* Test Raft Recovery (#944)
* Test raft recovery
* Leave out a node during recovery
* remove unused struct
* Update physical/raft/snapshot_test.go
* Update physical/raft/snapshot_test.go
* fix vendoring
* Switch to new raft interface
* Remove unused files
* Switch a gogo -> proto instance
* Remove unneeded vault dep in go.sum
* Update helper/testhelpers/testhelpers.go
Co-Authored-By: Calvin Leung Huang <cleung2010@gmail.com>
* Update vault/cluster/cluster.go
* track active key within the keyring itself (#6915)
* track active key within the keyring itself
* lookup and store using the active key ID
* update docstring
* minor refactor
* Small text fixes (#6912)
* Update physical/raft/raft.go
Co-Authored-By: Calvin Leung Huang <cleung2010@gmail.com>
* review feedback
* Move raft logical system into separate file
* Update help text a bit
* Enforce cluster addr is set and use it for raft bootstrapping
* Fix tests
* fix http test panic
* Pull in latest raft-snapshot library
* Add comment
* Save the original request body for forwarding
If we are forwarding a request after initial parsing, the request body is
already consumed; as a result, a forwarded call containing a request body
would have a nil body. This saves the original request body for a given
request via a TeeReader and uses it when forwarding happens after the body
has been consumed.
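A minimal sketch of the TeeReader approach, assuming a hypothetical helper name: everything the handler reads from the original body is also copied into a buffer, so a forwarded request can be rebuilt with its body intact even after parsing has consumed it.

```go
// Sketch of the TeeReader approach: reads from r.Body are mirrored into a
// buffer that can later be used as the forwarded request's body.
package forwardbody

import (
	"bytes"
	"io"
	"io/ioutil"
	"net/http"
)

// teeRequestBody replaces r.Body with a tee'd reader and returns the buffer
// that accumulates whatever has been read so far.
func teeRequestBody(r *http.Request) *bytes.Buffer {
	var buf bytes.Buffer
	r.Body = ioutil.NopCloser(io.TeeReader(r.Body, &buf))
	return &buf
}
```

When the call does have to be forwarded, the buffered bytes stand in for the already-consumed body.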
* Handle ns lease and token renew/revoke via relative paths
* s/usin/using/
* add token and lease lookup paths; set ctx only on non-nil ns
Additionally, use the client token's ns for auth/token/lookup if no token is provided.
The result will still pass gofmtcheck and won't trigger additional
changes if someone isn't using goimports, but it will avoid the
piecemeal imports changes we've been seeing.
We support this in the API as of 0.10.2 so read should support it too.
Trivially tested with some log info:
`core: data: data="map[string]interface {}{"zip":[]string{"zap", "zap2"}}"`
This change makes it so that if a lease is revoked through user action,
we set the expiration time to now and update pending, just as we do with
tokens. This allows the normal retry logic to apply in these cases as
well, instead of just erroring out immediately. The idea being that once
you tell Vault to revoke something it should keep doing its darndest to
actually make that happen.
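A hypothetical, self-contained sketch of that behavior (the types and field names are illustrative, not Vault's expiration manager): a user-initiated revoke sets the lease's expiration to "now" and re-registers it with the pending timers, so the normal expiration/retry machinery keeps attempting the revocation rather than erroring out once.

```go
// Illustrative sketch: mark a lease as expired immediately so the usual
// timer/retry machinery handles the revocation.
package leaseretry

import (
	"sync"
	"time"
)

type leaseEntry struct {
	ID         string
	ExpireTime time.Time
}

type expirationManager struct {
	mu      sync.Mutex
	pending map[string]*time.Timer
	revoke  func(id string) error
}

// revokeByUser marks the lease as expired right now and (re)schedules it
// instead of attempting a one-shot revocation that could fail permanently.
func (m *expirationManager) revokeByUser(le *leaseEntry) {
	m.mu.Lock()
	defer m.mu.Unlock()

	le.ExpireTime = time.Now()
	if t, ok := m.pending[le.ID]; ok {
		t.Stop()
	}
	// Fire right away; on failure the normal retry path (not shown) would
	// reschedule this timer with a backoff.
	m.pending[le.ID] = time.AfterFunc(0, func() { _ = m.revoke(le.ID) })
}
```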
This takes place in two parts, since working on this exposed an issue
with response wrapping when there is a raw body set. The changes are (in
diff order):
* A CurrentWrappingLookupFunc has been added to return the current
value. This is necessary for the lookahead call since we don't want the
lookahead call to be wrapped.
* Support for unwrapping < 0.6.2 tokens via the API/CLI has been
removed, because we now have backends returning 404s with data and can't
rely on the 404 trick. These can still be read manually via
cubbyhole/response.
* KV preflight version request now ensures that its call is not
wrapped, and restores any given function afterward.
* When responding with a raw body, instead of always base64-decoding a
string value and erroring on failure, we now assume on failure that the
value simply wasn't base64-encoded and use it as-is.
* A test that fails on master and passes now, ensuring that raw body
responses that are wrapped and then unwrapped return the expected
values.
* A flag for response data that indicates to the wrapping handling that
the data contained therein is already JSON decoded (more later).
* RespondWithStatusCode now defaults to a string so that the value is
HMAC'd during audit. The function always JSON encodes the body, so before
now it was always returning []byte, which would skip HMACing. We
don't know what's in the data, so this is a "better safe than sorry"
issue. If different behavior is needed, backends can always manually
populate the data instead of relying on the helper function.
* We now check data after unwrapping to see if the raw flags were set. If
so, we try to detect whether the value can be base64-decoded. The reason is
that if it can, it was probably originally a []byte and shouldn't be audit
HMAC'd; if not, it was probably originally a string and should be. In
either case, we then set the value as the raw body and set the flag
indicating that it's already been JSON decoded so we don't decode it again
before auditing. Doing it this way ensures the right typing (see the sketch
after this list).
* There is now a check to see if the data coming from unwrapping is
already JSON decoded and if so the decoding is skipped before setting
the audit response.
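A minimal sketch of the base64 detection described above; the helper name is illustrative. If the unwrapped raw body decodes as base64, it was most likely originally a []byte and should not be audit HMAC'd as a string; if decoding fails, it is kept as the original string so it is HMAC'd.

```go
// Sketch of the base64 fallback used when restoring a raw body after
// unwrapping.
package rawbody

import "encoding/base64"

// rawBodyValue returns the value to use as the raw HTTP body and whether it
// should be treated as an already-decoded []byte.
func rawBodyValue(v string) (interface{}, bool) {
	if b, err := base64.StdEncoding.DecodeString(v); err == nil {
		return b, true // probably originally a []byte; skip audit HMAC
	}
	return v, false // probably originally a plain string; HMAC as usual
}
```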
* plugins/gRPC: fix issues with reserved keywords in response data
* Add the path raw file for mock plugin
* Fix panic when special paths is nil
* Add tests for Listing and raw requests from plugins
* Add json.Number case when decoding the status
* Bump the version required for gRPC defaults
* Fix test for gRPC version check
* Store original request path in WrapInfo as CreationPath
* Add wrapping_token_creation_path to CLI output
* Add CreationPath to AuditResponseWrapInfo
* Fix tests
* Add and fix tests, update API docs with new sample responses
* Add /sys/config/audited-headers endpoint for configuring the headers that will be audited
* Remove some debug lines
* Add a persistence layer and refactor a bit
* update the api endpoints to be more restful
* Add comments and clean up a few functions
* Remove unneeded hash structure functionality
* Fix existing tests
* Add tests
* Add test for Applying the header config
* Add Benchmark for the ApplyConfig method
* ResetTimer on the benchmark
* Update the headers comment
* Add test for audit broker
* Use hyphens instead of camel case
* Add size parameter to the allocation of the result map
* Fix the tests for the audit broker
* PR feedback
* update the path and permissions on config/* paths
* Add docs file
* Fix TestSystemBackend_RootPaths test
This fixes #1911, though not directly; it doesn't address the cause of the
panic. However, it turns out that this is the correct fix anyway, because
it ensures that the value being logged is in RFC3339 format, which is what
the time becomes in JSON but not in its normal string form, so what we
audit log (and HMAC) matches what we return.
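A small illustration of the point above, using only the standard library: JSON-encoding a time.Time produces an RFC 3339 string, which differs from the default String() form, so logging (and HMAC'ing) the RFC 3339 form is what matches the value actually returned to the client.

```go
// Demonstrates that a time.Time's JSON encoding is RFC 3339, while its
// default String() form uses a different layout.
package main

import (
	"encoding/json"
	"fmt"
	"time"
)

func main() {
	t := time.Now()

	b, _ := json.Marshal(t)
	fmt.Println("JSON encoding:   ", string(b))                  // e.g. "2019-10-15T12:34:56.789+02:00"
	fmt.Println("RFC3339 format:  ", t.Format(time.RFC3339Nano)) // matches the JSON form
	fmt.Println("default String():", t.String())                 // different layout, would not match
}
```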