The result will still pass gofmtcheck and won't trigger additional
changes if someone isn't using goimports, but it will avoid the
piecemeal imports changes we've been seeing.
* Support Authorization Bearer as token header
* add requestAuth test
* remove spew debug output in test
* Add Authorization in CORS Allowed headers
* use const where applicable
* use fewer allocations in bearer token checking
* address PR comments on tests and apply last commit
* reorder error checking in a TestHandler_requestAuth
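A minimal sketch of the bearer-token handling described above, assuming a helper along the lines of the requestAuth being tested; the names and signature here are illustrative, not Vault's exact code:

```go
package vaulthttp

import (
	"net/http"
	"strings"
)

// requestAuth pulls a client token from either the X-Vault-Token header
// or an "Authorization: Bearer <token>" header. Slicing the header value
// rather than splitting it keeps allocations down, per the commit above.
func requestAuth(r *http.Request) string {
	if token := r.Header.Get("X-Vault-Token"); token != "" {
		return token
	}
	const prefix = "Bearer "
	raw := r.Header.Get("Authorization")
	if len(raw) > len(prefix) && strings.EqualFold(raw[:len(prefix)], prefix) {
		return strings.TrimSpace(raw[len(prefix):])
	}
	return ""
}
```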
* Initial implementation of returning 529 for rate limits
- bump aws iam and sts packages to v1.14.31 to get mocking interface
- promote the iam and sts clients to the aws backend struct, for mocking in tests
- this also promotes some functions to methods on the Backend struct, so
that we can use the injected client
Generating creds requires reading config/root for credentials to contact
IAM. Here we make pathConfigRoot a method on aws/backend so we can clear
the clients on successful update of config/root path. Adds a mutex to
safely clear the clients
* refactor locking and unlocking into methods on *backend
* refactor/simplify the locking
* check client after grabbing lock
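A sketch of the client-caching pattern these commits describe: cached IAM/STS clients promoted onto the backend struct (so tests can inject mocks), guarded by a mutex, and cleared when config/root is updated. Field and method names are illustrative, and newIAMClient stands in for the code that reads config/root:

```go
package awsbackend

import (
	"sync"

	"github.com/aws/aws-sdk-go/service/iam/iamiface"
	"github.com/aws/aws-sdk-go/service/sts/stsiface"
)

type backend struct {
	clientMutex sync.RWMutex

	// Cached clients; tests inject mocks via these interfaces.
	iamClient iamiface.IAMAPI
	stsClient stsiface.STSAPI
}

// clientIAM returns the cached client, re-checking after grabbing the
// write lock in case another goroutine populated it first.
func (b *backend) clientIAM() (iamiface.IAMAPI, error) {
	b.clientMutex.RLock()
	if b.iamClient != nil {
		defer b.clientMutex.RUnlock()
		return b.iamClient, nil
	}
	b.clientMutex.RUnlock()

	b.clientMutex.Lock()
	defer b.clientMutex.Unlock()
	if b.iamClient != nil { // check client again after grabbing the lock
		return b.iamClient, nil
	}
	client, err := newIAMClient()
	if err != nil {
		return nil, err
	}
	b.iamClient = client
	return b.iamClient, nil
}

// clearClients is called after a successful config/root update so new
// credentials take effect on the next request.
func (b *backend) clearClients() {
	b.clientMutex.Lock()
	defer b.clientMutex.Unlock()
	b.iamClient = nil
	b.stsClient = nil
}

// newIAMClient is a stand-in for the code that reads config/root and
// builds a real client; elided here.
func newIAMClient() (iamiface.IAMAPI, error) { return nil, nil }
```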
* Add request timeouts in normal request path and to expirations
* Add ability to adjust default max request duration
* Some test fixes
* Ensure tests have defaults set for max request duration
* Add context cancel checking to inmem/file
* Fix tests
* Fix tests
* Set default max request duration to basically infinity for this release for BC
* Address feedback
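A rough sketch of the timeout wiring these commits describe, with DefaultMaxRequestDuration and the handler wrapper as assumed names:

```go
package vaulthttp

import (
	"context"
	"net/http"
	"time"
)

// For backwards compatibility, this release effectively disables the
// timeout by defaulting to a very large value.
var DefaultMaxRequestDuration = 999999 * time.Hour

// wrapWithTimeout attaches a deadline to every request's context.
func wrapWithTimeout(h http.Handler, max time.Duration) http.Handler {
	if max == 0 {
		max = DefaultMaxRequestDuration
	}
	return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		ctx, cancel := context.WithTimeout(r.Context(), max)
		defer cancel()
		h.ServeHTTP(w, r.WithContext(ctx))
	})
}

// The inmem/file change amounts to checks like this at the top of each
// storage operation, so canceled or timed-out requests stop work early.
func getWithCancel(ctx context.Context, read func() ([]byte, error)) ([]byte, error) {
	if err := ctx.Err(); err != nil {
		return nil, err
	}
	return read()
}
```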
If we get to respondStandby but we're actually not in an HA cluster, we
should instead indicate the correct status to the user. Although it
might be better to change any such behavior upstream, if any upstream
code manages this state we should still handle it correctly.
Fixes #4873
* Allow max request size to be user-specified
This turned out to be way more impactful than I'd expected because I
felt like the right granularity was per-listener, since an org may want
to treat external clients differently from internal clients. It's pretty
straightforward though.
This also introduces actually using request contexts for values, which
so far we have not done (using our own logical.Request struct instead),
but this allows non-logical methods to still get this benefit.
* Switch to ioutil.ReadAll()
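A sketch of how a per-listener size limit might ride the request context so non-logical handlers get the same benefit; the context key, wrapper, and readBody helper are illustrative:

```go
package vaulthttp

import (
	"context"
	"io/ioutil"
	"net/http"
)

type ctxKeyMaxRequestSize struct{}

// wrapMaxRequestSize stores the listener's configured limit in the
// request context so any handler downstream can consult it.
func wrapMaxRequestSize(h http.Handler, max int64) http.Handler {
	return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		ctx := context.WithValue(r.Context(), ctxKeyMaxRequestSize{}, max)
		h.ServeHTTP(w, r.WithContext(ctx))
	})
}

// readBody enforces the limit at read time, mirroring the switch to
// ioutil.ReadAll over a capped reader.
func readBody(w http.ResponseWriter, r *http.Request) ([]byte, error) {
	max, _ := r.Context().Value(ctxKeyMaxRequestSize{}).(int64)
	if max > 0 {
		r.Body = http.MaxBytesReader(w, r.Body, max)
	}
	return ioutil.ReadAll(r.Body)
}
```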
* adding UI handlers and UI header configuration
* forcing specific static headers
* properly getting UI config value from config/environment
* fixing formatting in stub UI text
* use http.Header
* case-insensitive X-Vault header check
* fixing var name
* wrap both stubbed and real UI in header handler
* adding test for >1 keys
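A sketch of the header wrapping applied to both the stubbed and real UI handler: configured headers go on first (with a case-insensitive guard against X-Vault-* names), then specific static headers are forced. The forced header chosen here and the function name are assumptions:

```go
package vaulthttp

import (
	"net/http"
	"strings"
)

// wrapUIHeaders wraps either the stub or the real UI handler.
func wrapUIHeaders(h http.Handler, configured http.Header) http.Handler {
	return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		for name, values := range configured {
			// Case-insensitive check: never allow configured headers
			// to masquerade as X-Vault-* headers.
			if strings.HasPrefix(strings.ToLower(name), "x-vault-") {
				continue
			}
			for _, v := range values {
				w.Header().Add(name, v)
			}
		}
		// Forced static headers always win over configured ones.
		w.Header().Set("Service-Worker-Allowed", "/")
		h.ServeHTTP(w, r)
	})
}
```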
* logbridge with hclog and identical output
* Initial search & replace
This compiles, but there is a fair amount of TODO
and commented out code, especially around the
plugin logclient/logserver code.
* strip logbridge
* fix majority of tests
* update logxi aliases
* WIP fixing tests
* more test fixes
* Update test to hclog
* Fix format
* Rename hclog -> log
* WIP making hclog and logxi love each other
* update logger_test.go
* clean up merged comments
* Replace RawLogger interface with a Logger
* Add some logger names
* Replace Trace with Debug
* update builtin logical logging patterns
* Fix build errors
* More log updates
* update log approach in command and builtin
* More log updates
* update helper, http, and logical directories
* Update loggers
* Log updates
* Update logging
* Update logging
* Update logging
* Update logging
* update logging in physical
* prefixing and lowercase
* Update logging
* Move physical logging name to server command
* Fix some tests
* address Jim's feedback so far
* incorporate Brian's feedback so far
* strip comments
* move vault.go to logging package
* update Debug to Trace
* Update go-plugin deps
* Update logging based on review comments
* Updates from review
* Unvendor logxi
* Remove null_logger.go
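For flavor, a small example of the hclog patterns this series converges on: named child loggers instead of string prefixes, lowercase messages, structured key/value pairs, and Debug where logxi code used Trace. The logger names are illustrative:

```go
package main

import (
	log "github.com/hashicorp/go-hclog"
)

func main() {
	logger := log.New(&log.LoggerOptions{
		Name:  "vault",
		Level: log.Debug,
	})

	// Subsystems get named child loggers rather than string prefixes.
	physLogger := logger.Named("storage.file")
	physLogger.Debug("put operation", "key", "secret/foo")

	// What used to be Trace-level logxi calls mostly became Debug.
	logger.Debug("core: post-unseal setup starting")
}
```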
* exclude /sys/leases/renew from registering with expiration manager
* updating sys/leases/renew to return the full secret object, adding tests to catch renew errors
Unlike seal, this command has no meaning other than on the active node,
so when issuing it the expected behavior would be for whichever node is
currently active to step down.
* audit: Added token_num_uses to audit response
* Fixed jsonx tests
* Revert logical auth to NumUses instead of TokenNumUses
* s/TokenNumUses/NumUses
* Audit: Add num uses to audit requests as well
* Added RemainingUses to distinguish NumUses in audit requests
This makes it easier to understand the expected lifetime without a
lookup call that uses the single use left on the token.
This also adds a couple of safety checks and, for JSON, uses int rather
than int64 for the wrapped token's TTL.
* Request/Response field extension
* Parsing of header into request object
* Handling of duration/mount point within router
* Tests of router WrapDuration handling
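A sketch of the header parsing step, assuming the X-Vault-Wrap-TTL header name and acceptance of either a Go duration string or an integer number of seconds; the function name is illustrative:

```go
package vaulthttp

import (
	"net/http"
	"strconv"
	"time"
)

const WrapTTLHeaderName = "X-Vault-Wrap-TTL"

// parseWrapTTL reads the wrap TTL header into a duration that can be
// attached to the request object; the router can then clamp it against
// the mount point's configured maximum.
func parseWrapTTL(r *http.Request) (time.Duration, error) {
	raw := r.Header.Get(WrapTTLHeaderName)
	if raw == "" {
		return 0, nil
	}
	if dur, err := time.ParseDuration(raw); err == nil {
		return dur, nil
	}
	secs, err := strconv.Atoi(raw)
	if err != nil {
		return 0, err
	}
	return time.Duration(secs) * time.Second, nil
}
```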
This endpoint causes the node it's hit to step down from active duty.
It's a noop if the node isn't active or isn't running in HA mode. The node
will wait one second before attempting to reacquire the lock, to give
other nodes a chance to grab it.
Fixes #1093
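The mechanics boil down to something like this sketch, where the lock callbacks are stand-ins for the HA backend's lock handling:

```go
package vaultha

import "time"

// stepDown is a noop unless the node is active; when it is, it drops
// the leader lock and waits one second before trying to reacquire it,
// giving other nodes a chance to grab it first.
func stepDown(isActive bool, releaseLock, reacquireLock func()) {
	if !isActive {
		return
	}
	releaseLock()
	time.Sleep(time.Second) // let standbys compete for the lock
	reacquireLock()
}
```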
Adds a new endpoint '/sys/audit-hash', which returns the given input
string hashed with the given audit backend's hash function and salt
(currently, always HMAC-SHA256 and a backend-specific salt).
In the process of adding the HTTP handler, this also removes the custom
HTTP handlers for the other audit endpoints, which were simply
forwarding to the logical system backend. This means that the various
audit functions will now redirect correctly from a standby to master.
(Tests all pass.)
Fixes #784
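What the endpoint computes is essentially a salted HMAC; a self-contained sketch, with the output prefix assumed to mirror audit log formatting:

```go
package main

import (
	"crypto/hmac"
	"crypto/sha256"
	"encoding/hex"
	"fmt"
)

// auditHash HMACs the input with the backend's salt, so the result can
// be matched against entries in that backend's audit logs.
func auditHash(salt []byte, input string) string {
	mac := hmac.New(sha256.New, salt)
	mac.Write([]byte(input))
	return "hmac-sha256:" + hex.EncodeToString(mac.Sum(nil))
}

func main() {
	// Same input + same backend-specific salt => same hash as the logs.
	fmt.Println(auditHash([]byte("backend-specific-salt"), "my-token"))
}
```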
In order to implement this efficiently, I have introduced the concept of
"singleton" backends -- currently, 'sys' and 'cubbyhole'. There isn't
much reason to allow sys to be mounted at multiple places, and there
isn't much reason you'd need multiple per-token storage areas. By
restricting it to just one, I can store that particular mount instead of
iterating through them in order to call the appropriate revoke function.
Additionally, because revocation on the backend needs to be triggered by
the token store, the token store's salt is kept in the router and
client tokens going to the cubbyhole backend are double-salted by the
router. This allows the token store to drive when revocation happens
using its salted tokens.
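A sketch of the double-salting idea: the router applies the token store's salt a second time before routing to cubbyhole, so the token store can drive revocation while only ever handling salted tokens. saltID stands in for the salt function (historically a salted SHA1), and the names are illustrative:

```go
package main

import (
	"crypto/sha1"
	"encoding/hex"
	"fmt"
)

// saltID mirrors a salt-then-hash helper: SHA1 over salt+id.
func saltID(salt, id string) string {
	sum := sha1.Sum([]byte(salt + id))
	return hex.EncodeToString(sum[:])
}

func main() {
	salt := "token-store-salt"
	clientToken := "s.example"

	// The token store works with the singly-salted form...
	salted := saltID(salt, clientToken)
	// ...and the router salts once more before cubbyhole sees it.
	doubleSalted := saltID(salt, salted)
	fmt.Println(salted, doubleSalted)
}
```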
Allow more concrete error cases to make their way back up the stack.
Over time there is probably a cleaner way of doing this, but that's
looking like a more massive rewrite and this solves some issues in
the meantime.
Use a CodedError to return a more concrete HTTP return code for
operations where you want one. Returning a regular error leaves
the existing behavior in place.
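A sketch of the CodedError pattern as described, with assumed names rather than the exact logical package types:

```go
package main

import (
	"errors"
	"fmt"
	"net/http"
)

// codedError carries the HTTP status code that should be returned.
type codedError struct {
	status int
	msg    string
}

func (e *codedError) Error() string { return e.msg }

// CodedError wraps a message with a concrete HTTP return code.
func CodedError(status int, msg string) error {
	return &codedError{status: status, msg: msg}
}

// respondError uses the concrete code when one was supplied and keeps
// the existing 500 behavior for plain errors.
func respondError(w http.ResponseWriter, err error) {
	status := http.StatusInternalServerError
	var ce *codedError
	if errors.As(err, &ce) {
		status = ce.status
	}
	http.Error(w, err.Error(), status)
}

func main() {
	fmt.Println(CodedError(http.StatusConflict, "upgrade in progress"))
}
```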