* Work on raft backend
* Add logstore locally
* Add encryptor and unsealable interfaces
* Add clustering support to raft
* Remove client and handler
* Bootstrap raft on init
* Cleanup raft logic a bit
* More raft work
* Work on TLS config
* More work on bootstrapping
* Fix build
* More work on bootstrapping
* More bootstrapping work
* fix build
* Remove consul dep
* Fix build
* merged oss/master into raft-storage
* Work on bootstrapping
* Get bootstrapping to work
* Clean up FSM and node-id
* Update local node ID logic
* Cleanup node-id change
* Work on snapshotting
* Raft: Add remove peer API (#906)
* Add remove peer API
* Add some comments
* Fix existing snapshotting (#909)
* Raft get peers API (#912)
* Read raft configuration
* address review feedback
* Use the Leadership Transfer API to step-down the active node (#918)
* Raft join and unseal using Shamir keys (#917)
* Raft join using shamir
* Store AEAD instead of master key
* Split the raft join process to answer the challenge after a successful unseal
* get the follower to standby state
* Make unseal work
* minor changes
* Some input checks
* reuse the shamir seal access instead of new default seal access
* refactor joinRaftSendAnswer function
* Synchronously send answer in auto-unseal case
* Address review feedback
* Raft snapshots (#910)
* Fix existing snapshotting
* implement the noop snapshotting
* Add comments and switch log libraries
* add some snapshot tests
* add snapshot test file
* add TODO
* More work on raft snapshotting
* progress on the ConfigStore strategy
* Don't use two buckets
* Update the snapshot store logic to hide the file logic
* Add more backend tests
* Cleanup code a bit
* [WIP] Raft recovery (#938)
* Add recovery functionality
* remove fmt.Printfs
* Fix a few fsm bugs
* Add max size value for raft backend (#942)
* Add max size value for raft backend
* Include physical.ErrValueTooLarge in the message
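A rough sketch of what such a size guard could look like; the Entry/RaftBackend types and the maxEntrySize field are assumptions made for this sketch, with only the "value too large" idea taken from the commits above (physical.ErrValueTooLarge).
```go
// Illustrative only: a size check on Put for a raft-backed physical store.
package raft

import (
	"context"
	"fmt"
)

// errValueTooLarge stands in for physical.ErrValueTooLarge to keep the sketch
// self-contained.
const errValueTooLarge = "put failed due to value being too large"

type Entry struct {
	Key   string
	Value []byte
}

type RaftBackend struct {
	maxEntrySize uint64 // configurable cap on a single entry's size
}

func (b *RaftBackend) Put(ctx context.Context, entry *Entry) error {
	if uint64(len(entry.Value)) > b.maxEntrySize {
		return fmt.Errorf("%s: %d bytes (max %d)",
			errValueTooLarge, len(entry.Value), b.maxEntrySize)
	}
	// ... apply the entry through the raft log (omitted) ...
	return nil
}
```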
* Raft snapshot Take/Restore API (#926)
* Initial work on raft snapshot APIs
* Always redirect snapshot install/download requests
* More work on the snapshot APIs
* Cleanup code a bit
* On restore handle special cases
* Use the seal to encrypt the sha sum file
* Add sealer mechanism and fix some bugs
* Call restore while state lock is held
* Send restore cb trigger through raft log
* Make error messages nicer
* Add test helpers
* Add snapshot test
* Add shamir unseal test
* Add more raft snapshot API tests
* Fix locking
* Change wording to initialize
* Add underlying raw object to test cluster core
* Move leaderUUID to core
* Add raft TLS rotation logic (#950)
* Add TLS rotation logic
* Cleanup logic a bit
* Add/Remove from follower state on add/remove peer
* add comments
* Update more comments
* Update request_forwarding_service.proto
* Make sure we populate all nodes in the followerstate obj
* Update times
* Apply review feedback
* Add more raft config setting (#947)
* Add performance config setting
* Add more config options and fix tests
* Test Raft Recovery (#944)
* Test raft recovery
* Leave out a node during recovery
* remove unused struct
* Update physical/raft/snapshot_test.go
* Update physical/raft/snapshot_test.go
* fix vendoring
* Switch to new raft interface
* Remove unused files
* Switch a gogo -> proto instance
* Remove unneeded vault dep in go.sum
* Update helper/testhelpers/testhelpers.go
Co-Authored-By: Calvin Leung Huang <cleung2010@gmail.com>
* Update vault/cluster/cluster.go
* track active key within the keyring itself (#6915)
* track active key within the keyring itself
* lookup and store using the active key ID
* update docstring
* minor refactor
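A rough sketch of the idea behind tracking the active key within the keyring itself; the struct and method names below are hypothetical, not Vault's barrier keyring API.
```go
// Illustrative keyring that records its active key ID and looks entries up by
// that ID; names are assumptions made for this sketch.
package keyring

import "fmt"

type Key struct {
	ID    string
	Value []byte
}

type Keyring struct {
	activeKeyID string
	keys        map[string]*Key
}

// SetActiveKey stores the key and marks it as the active one.
func (k *Keyring) SetActiveKey(key *Key) {
	if k.keys == nil {
		k.keys = make(map[string]*Key)
	}
	k.keys[key.ID] = key
	k.activeKeyID = key.ID
}

// ActiveKey returns the key currently marked active.
func (k *Keyring) ActiveKey() (*Key, error) {
	key, ok := k.keys[k.activeKeyID]
	if !ok {
		return nil, fmt.Errorf("active key %q not found", k.activeKeyID)
	}
	return key, nil
}
```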
* Small text fixes (#6912)
* Update physical/raft/raft.go
Co-Authored-By: Calvin Leung Huang <cleung2010@gmail.com>
* review feedback
* Move raft logical system into separate file
* Update help text a bit
* Enforce cluster addr is set and use it for raft bootstrapping
* Fix tests
* fix http test panic
* Pull in latest raft-snapshot library
* Add comment
* Save the original request body for forwarding
If we are forwarding a request after initial parsing, the request body is
already consumed. As a result, a forwarded call containing a request body
will have a nil body. This saves the original request body for a given
request via a TeeReader and uses that in cases of forwarding past body
consumption.
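A minimal sketch of the TeeReader approach described above, assuming a plain net/http handler; the helper names are invented for illustration.
```go
// Sketch: copy the request body into a buffer as it is read, so a forwarded
// request can be rebuilt after the body has been consumed.
package forwarding

import (
	"bytes"
	"io"
	"io/ioutil"
	"net/http"
)

// teeRequestBody swaps r.Body for a TeeReader and returns the buffer that
// accumulates whatever the handler reads.
func teeRequestBody(r *http.Request) *bytes.Buffer {
	buf := new(bytes.Buffer)
	r.Body = ioutil.NopCloser(io.TeeReader(r.Body, buf))
	return buf
}

// restoreRequestBody puts the captured bytes back so the request can be
// forwarded with its original body intact.
func restoreRequestBody(r *http.Request, buf *bytes.Buffer) {
	r.Body = ioutil.NopCloser(bytes.NewReader(buf.Bytes()))
}
```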
* Fixes a regression in forwarding from #6115
Although removing the authentication header is good defense in depth,
for forwarding mechanisms that use the raw request, we never add it
back. This caused perf standby tests to throw errors. Instead, once
we're past the point at which we would do any raw forwarding, but before
routing the request, remove the header.
To speed this up, a flag is set in the logical.Request to indicate where
the token is sourced from. That way we don't iterate through maps
unnecessarily.
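The token-source flag idea might look roughly like this; the constant and field names are assumptions for the sketch, not the actual logical.Request fields.
```go
// Illustrative only: record where the client token came from so later code can
// decide whether the Authorization header still needs to be stripped, without
// re-scanning the header maps.
package request

type TokenSource int

const (
	TokenSourceUnknown TokenSource = iota
	TokenSourceHeader              // came from the X-Vault-Token header
	TokenSourceBearer              // came from an Authorization: Bearer header
)

type Request struct {
	ClientToken string
	TokenSource TokenSource
}

// stripAuthHeaderIfNeeded removes the Authorization header only when the token
// was actually sourced from it, once raw forwarding is no longer possible but
// before the request is routed.
func stripAuthHeaderIfNeeded(req *Request, headers map[string][]string) {
	if req.TokenSource == TokenSourceBearer {
		delete(headers, "Authorization")
	}
}
```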
The result will still pass gofmtcheck and won't trigger additional
changes if someone isn't using goimports, but it will avoid the
piecemeal imports changes we've been seeing.
* Support Authorization Bearer as token header
* add requestAuth test
* remove spew debug output in test
* Add Authorization in CORS Allowed headers
* use const where applicable
* use less allocations in bearer token checking
* address PR comments on tests and apply last commit
* reorder error checking in a TestHandler_requestAuth
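A hedged sketch of reading the token from either X-Vault-Token or an Authorization: Bearer header; Vault's real handler differs, and the function name here is made up.
```go
package vaulthttp

import (
	"net/http"
	"strings"
)

// requestToken prefers the X-Vault-Token header and falls back to an
// Authorization: Bearer credential. Header-name lookups via http.Header are
// already case-insensitive.
func requestToken(r *http.Request) string {
	if token := r.Header.Get("X-Vault-Token"); token != "" {
		return token
	}
	const prefix = "Bearer "
	auth := r.Header.Get("Authorization")
	if len(auth) > len(prefix) && strings.EqualFold(auth[:len(prefix)], prefix) {
		return strings.TrimSpace(auth[len(prefix):])
	}
	return ""
}
```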
* Initial implementation of returning 529 for rate limits
- bump aws iam and sts packages to v1.14.31 to get mocking interface
- promote the iam and sts clients to the aws backend struct, for mocking in tests
- this also promotes some functions to methods on the Backend struct, so
that we can use the injected client
Generating creds requires reading config/root for credentials to contact
IAM. Here we make pathConfigRoot a method on aws/backend so we can clear
the clients on a successful update of the config/root path. Adds a mutex to
safely clear the clients.
* refactor locking and unlocking into methods on *backend
* refactor/simplify the locking
* check client after grabbing lock
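Conceptually, promoting the IAM/STS clients onto the backend struct with a mutex might look like the sketch below; the field and method names are assumptions, though iamiface/stsiface are the standard aws-sdk-go mocking interfaces the commits refer to.
```go
// Sketch of a backend that caches AWS service clients and clears them when
// config/root changes.
package aws

import (
	"sync"

	"github.com/aws/aws-sdk-go/service/iam/iamiface"
	"github.com/aws/aws-sdk-go/service/sts/stsiface"
)

type backend struct {
	clientMutex sync.RWMutex
	iamClient   iamiface.IAMAPI
	stsClient   stsiface.STSAPI
}

// clearClients drops any cached clients, e.g. after config/root is updated.
func (b *backend) clearClients() {
	b.clientMutex.Lock()
	defer b.clientMutex.Unlock()
	b.iamClient = nil
	b.stsClient = nil
}

// clientIAM returns the cached IAM client, creating it if necessary. The
// cached value is re-checked after the write lock is acquired, since another
// goroutine may have created it in the meantime.
func (b *backend) clientIAM() (iamiface.IAMAPI, error) {
	b.clientMutex.RLock()
	if b.iamClient != nil {
		defer b.clientMutex.RUnlock()
		return b.iamClient, nil
	}
	b.clientMutex.RUnlock()

	b.clientMutex.Lock()
	defer b.clientMutex.Unlock()
	if b.iamClient != nil { // re-check after upgrading the lock
		return b.iamClient, nil
	}
	// ... build the client from config/root credentials (omitted) ...
	return b.iamClient, nil
}
```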
* Add request timeouts in normal request path and to expirations
* Add ability to adjust default max request duration
* Some test fixes
* Ensure tests have defaults set for max request duration
* Add context cancel checking to inmem/file
* Fix tests
* Fix tests
* Set default max request duration to basically infinity for this release for BC
* Address feedback
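In spirit, the request-timeout work amounts to attaching a deadline to each request's context and having backends check for cancellation; a minimal sketch with assumed names.
```go
package httpserver

import (
	"context"
	"net/http"
	"time"
)

// defaultMaxRequestDuration is an assumed placeholder; per the commits above,
// the default was set effectively to "infinity" for that release.
const defaultMaxRequestDuration = 90 * time.Second

// wrapWithTimeout attaches a deadline to each request's context so storage
// backends (e.g. inmem/file) can honor cancellation mid-operation.
func wrapWithTimeout(h http.Handler, max time.Duration) http.Handler {
	return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		ctx, cancel := context.WithTimeout(r.Context(), max)
		defer cancel()
		h.ServeHTTP(w, r.WithContext(ctx))
	})
}

// checkCancel is the sort of guard a backend operation might add.
func checkCancel(ctx context.Context) error {
	select {
	case <-ctx.Done():
		return ctx.Err()
	default:
		return nil
	}
}
```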
If we get to respondStandby but we're actually not in an HA cluster, we
should instead indicate the correct status to the user. Although it
might be better to change any such behavior upstream, if any upstream
code manages this state we should still handle it correctly.
Fixes #4873
* Allow max request size to be user-specified
This turned out to be way more impactful than I'd expected because I
felt like the right granularity was per-listener, since an org may want
to treat external clients differently from internal clients. It's pretty
straightforward though.
This also introduces actually using request contexts for values, which
so far we have not done (using our own logical.Request struct instead),
but this allows non-logical methods to still get this benefit.
* Switch to ioutil.ReadAll()
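A sketch of per-listener request size limiting combined with the context-value plumbing mentioned above; the context key and wrapper are illustrative.
```go
package httpserver

import (
	"context"
	"io/ioutil"
	"net/http"
)

type ctxKeyMaxRequestSize struct{}

// wrapMaxRequestSize stores the listener's configured limit in the request
// context and caps the body reader accordingly, so external and internal
// listeners can carry different limits.
func wrapMaxRequestSize(h http.Handler, maxSize int64) http.Handler {
	return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		ctx := context.WithValue(r.Context(), ctxKeyMaxRequestSize{}, maxSize)
		r.Body = http.MaxBytesReader(w, r.Body, maxSize)
		h.ServeHTTP(w, r.WithContext(ctx))
	})
}

// readBody consumes the (already size-limited) body in one shot, mirroring the
// switch to ioutil.ReadAll noted above.
func readBody(r *http.Request) ([]byte, error) {
	return ioutil.ReadAll(r.Body)
}
```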
* adding UI handlers and UI header configuration
* forcing specific static headers
* properly getting UI config value from config/environment
* fixing formatting in stub UI text
* use http.Header
* case-insensitive X-Vault header check
* fixing var name
* wrap both stubbed and real UI in header handler
* adding test for >1 keys
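Wrapping both the stubbed and real UI in a header handler could look roughly like this; the forced header values and the X-Vault prefix check are simplified for illustration.
```go
package httpserver

import (
	"net/http"
	"strings"
)

// isVaultHeader reports whether a configured header name collides with the
// reserved X-Vault- prefix, compared case-insensitively.
func isVaultHeader(name string) bool {
	return len(name) >= 8 && strings.EqualFold(name[:8], "x-vault-")
}

// uiHeadersHandler applies operator-configured custom headers to every UI
// response, while a handful of static security headers are always forced.
func uiHeadersHandler(h http.Handler, configured http.Header) http.Handler {
	return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		header := w.Header()
		for k, vals := range configured {
			if isVaultHeader(k) {
				continue // reserved X-Vault-* headers can't be overridden
			}
			for _, v := range vals {
				header.Add(k, v)
			}
		}
		// Forced static headers win over anything configured.
		header.Set("X-Content-Type-Options", "nosniff")
		header.Set("Content-Security-Policy", "default-src 'none'")
		h.ServeHTTP(w, r)
	})
}
```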
* logbridge with hclog and identical output
* Initial search & replace
This compiles, but there is a fair amount of TODO
and commented out code, especially around the
plugin logclient/logserver code.
* strip logbridge
* fix majority of tests
* update logxi aliases
* WIP fixing tests
* more test fixes
* Update test to hclog
* Fix format
* Rename hclog -> log
* WIP making hclog and logxi love each other
* update logger_test.go
* clean up merged comments
* Replace RawLogger interface with a Logger
* Add some logger names
* Replace Trace with Debug
* update builtin logical logging patterns
* Fix build errors
* More log updates
* update log approach in command and builtin
* More log updates
* update helper, http, and logical directories
* Update loggers
* Log updates
* Update logging
* Update logging
* Update logging
* Update logging
* update logging in physical
* prefixing and lowercase
* Update logging
* Move physical logging name to server command
* Fix some tests
* address Jim's feedback so far
* incorporate Brian's feedback so far
* strip comments
* move vault.go to logging package
* update Debug to Trace
* Update go-plugin deps
* Update logging based on review comments
* Updates from review
* Unvendor logxi
* Remove null_logger.go
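The logxi-to-hclog migration boils down to named, leveled hclog loggers; a small standalone example of that pattern (logger names are just examples).
```go
package main

import (
	log "github.com/hashicorp/go-hclog"
)

func main() {
	// A root logger for the server, then named sub-loggers per subsystem,
	// which is the pattern the "Add some logger names" commits refer to.
	logger := log.New(&log.LoggerOptions{
		Name:  "vault",
		Level: log.Trace,
	})

	physicalLogger := logger.Named("storage.file")
	physicalLogger.Debug("created file storage backend", "path", "/tmp/vault-data")

	// Trace-level detail replaces what used to be logxi Trace calls.
	logger.Trace("core: entering standby mode")
}
```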
* exclude /sys/leases/renew from registering with expiration manager
* adding sys/leases/renew to return full secret object, adding tests to catch renew errors
Unlike seal, this command has no meaning other than on the active node,
so when issuing it the expected behavior would be for whichever node is
currently active to step down.
* audit: Added token_num_uses to audit response
* Fixed jsonx tests
* Revert logical auth to NumUses instead of TokenNumUses
* s/TokenNumUses/NumUses
* Audit: Add num uses to audit requests as well
* Added RemainingUses to distinguish NumUses in audit requests
This makes it easier to understand the expected lifetime without a lookup
call that would consume the single remaining use on the token.
This also adds a couple of safety checks and, for JSON, uses int rather
than int64 for the wrapped token's TTL.
* Request/Response field extension
* Parsing of header into request object
* Handling of duration/mount point within router
* Tests of router WrapDuration handling
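Parsing the wrap TTL header into the request object might look like this minimal sketch; X-Vault-Wrap-TTL is the real header name, the helper itself is simplified.
```go
package httpserver

import (
	"net/http"
	"strconv"
	"time"
)

// parseWrapTTL reads the X-Vault-Wrap-TTL header, accepting either a bare
// number of seconds or a Go duration string such as "15m".
func parseWrapTTL(r *http.Request) (time.Duration, error) {
	raw := r.Header.Get("X-Vault-Wrap-TTL")
	if raw == "" {
		return 0, nil
	}
	if secs, err := strconv.Atoi(raw); err == nil {
		return time.Duration(secs) * time.Second, nil
	}
	return time.ParseDuration(raw)
}
```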
This endpoint causes the node it's hit to step down from active duty.
It's a noop if the node isn't active or not running in HA mode. The node
will wait one second before attempting to reacquire the lock, to give
other nodes a chance to grab it.
Fixes #1093
Adds a new endpoint, '/sys/audit-hash', which returns the given input
string hashed with the given audit backend's hash function and salt
(currently, always HMAC-SHA256 and a backend-specific salt).
In the process of adding the HTTP handler, this also removes the custom
HTTP handlers for the other audit endpoints, which were simply
forwarding to the logical system backend. This means that the various
audit functions will now redirect correctly from a standby to master.
(Tests all pass.)
Fixes #784
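At its core the endpoint returns an HMAC-SHA256 of the input keyed with the backend's salt; a self-contained sketch (the "hmac-sha256:" prefix follows the audit log format, the rest is illustrative).
```go
package audit

import (
	"crypto/hmac"
	"crypto/sha256"
	"encoding/hex"
)

// hashWithSalt mirrors what /sys/audit-hash conceptually does: HMAC-SHA256 the
// input with the audit backend's salt and render it the way audit logs do.
func hashWithSalt(salt []byte, input string) string {
	mac := hmac.New(sha256.New, salt)
	mac.Write([]byte(input))
	return "hmac-sha256:" + hex.EncodeToString(mac.Sum(nil))
}
```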
In order to implement this efficiently, I have introduced the concept of
"singleton" backends -- currently, 'sys' and 'cubbyhole'. There isn't
much reason to allow sys to be mounted at multiple places, and there
isn't much reason you'd need multiple per-token storage areas. By
restricting it to just one, I can store that particular mount instead of
iterating through them in order to call the appropriate revoke function.
Additionally, because revocation on the backend needs to be triggered by
the token store, the token store's salt is kept in the router and
client tokens going to the cubbyhole backend are double-salted by the
router. This allows the token store to drive when revocation happens
using its salted tokens.
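The double-salting can be pictured as applying a salt function twice before the token reaches the cubbyhole backend; the construction below is purely illustrative and is not Vault's actual salt function.
```go
package router

import (
	"crypto/hmac"
	"crypto/sha256"
	"encoding/hex"
)

// saltID is a stand-in for a salt function; Vault's salting has changed over
// time, so treat the exact construction here as an assumption for the sketch.
func saltID(salt, id string) string {
	mac := hmac.New(sha256.New, []byte(salt))
	mac.Write([]byte(id))
	return hex.EncodeToString(mac.Sum(nil))
}

// cubbyholeStorageToken shows the double-salting idea: the client token is
// salted with the token store's salt and then salted again by the router
// before it is handed to the cubbyhole backend, so revocation can be driven
// entirely from salted tokens.
func cubbyholeStorageToken(tokenStoreSalt, routerSalt, clientToken string) string {
	once := saltID(tokenStoreSalt, clientToken)
	return saltID(routerSalt, once)
}
```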