* Start work on passing context to backends
* More work on passing context
* Unindent logical system
* Unindent token store
* Unindent passthrough
* Unindent cubbyhole
* Fix tests
* Use requestContext in rollback and expiration managers
* Add logic for using Auth.Period when handling auth login/renew requests
* Set auth.TTL if not set in handleLoginRequest
* Always set auth.TTL = te.TTL on handleLoginRequest, check TTL and period against sys values on RenewToken
* Get sysView from le.Path, revert tests
* Add back auth.Policies
* Fix TokenStore tests, add resp warning when capping values
* Use switch for ttl/period check on RenewToken
* Move comments around
* Start work on context aware backends
* Start work on moving the database plugins to gRPC in order to pass context
* Add context to builtin database plugins
* Use byte slice instead of string
* Context all the things
* Move proto messages to the dbplugin package
* Add a gRPC mechanism for running backend plugins
* Serve the gRPC plugin
* Add backwards compatibility to the database plugins
* Remove backend plugin changes
* Remove backend plugin changes
* Cleanup the transport implementations
* If the gRPC connection is in an unexpected state, restart the plugin
* Fix tests
* Fix tests
* Remove context from the request object, replace it with context.TODO
* Add a test to verify netRPC plugins still work
* Remove unused mapstructure call
* Code review fixes
* Code review fixes
* Code review fixes
* Move location of quit channel closing in exp manager
If it happens after stopping timers, any timers that fire before all of
them are stopped will still run the revocation function. With plugin
auto-crash-recovery this could end up instantiating a plugin that then
tries to unwrap a token from a nil token store.
This also plumbs in core so that we can grab a read lock during the
operation and check standby/sealed status before running it (after
grabbing the lock).
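A minimal sketch of the ordering described above, with invented names (the real expiration manager is considerably more involved): closing the quit channel before stopping timers means any timer that fires during shutdown observes the closed channel and skips revocation instead of reaching into torn-down state.

```go
package main

import (
	"fmt"
	"sync"
	"time"
)

// expirationManager is a toy stand-in for the real expiration manager.
type expirationManager struct {
	mu      sync.Mutex
	quitCh  chan struct{}
	pending map[string]*time.Timer
}

// revokeEntry is what each lease timer runs when it fires.
func (m *expirationManager) revokeEntry(leaseID string) {
	select {
	case <-m.quitCh:
		// Shutting down: skip revocation rather than touch backends
		// (or auto-restarted plugins) that may already be gone.
		return
	default:
	}
	fmt.Println("revoking", leaseID)
}

// Stop closes quitCh *before* stopping timers, so a timer that fires in
// the window before its Stop() call still bails out safely.
func (m *expirationManager) Stop() {
	close(m.quitCh)
	m.mu.Lock()
	defer m.mu.Unlock()
	for _, t := range m.pending {
		t.Stop()
	}
	m.pending = nil
}

func main() {
	m := &expirationManager{
		quitCh:  make(chan struct{}),
		pending: map[string]*time.Timer{},
	}
	m.pending["lease1"] = time.AfterFunc(time.Millisecond, func() { m.revokeEntry("lease1") })
	m.Stop()
	time.Sleep(10 * time.Millisecond) // give a fired timer a chance to run
}
```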
* Use context instead of checking core values directly
* Use official Go context in a few key places
1) Ensure that if we fail to generate a lease for a secret we attempt to revoke it
2) Ensure that any lease that is registered should never have a blank token
In theory, number 2 will let us a) find places where this *is* the case, and b) get a useful signal when errors are encountered while revoking tokens due to a blank client token, since that suggests the client token values are being stripped somewhere along the way, which is also instructive.
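A sketch of those two invariants under hypothetical names and signatures (the real logic lives in Vault's expiration manager and differs in the details):

```go
package main

import (
	"errors"
	"fmt"
)

// Minimal stand-ins; the real types live in Vault's logical and vault
// packages and carry many more fields.
type Request struct{ ClientToken string }
type Secret struct{ LeaseID string }

type ExpirationManager struct{}

func (m *ExpirationManager) persistLease(req *Request, s *Secret) error {
	return errors.New("storage write failed") // simulate a failure
}

func (m *ExpirationManager) revokeSecret(req *Request, s *Secret) error {
	fmt.Println("revoking secret since lease registration failed")
	return nil
}

// Register sketches the two invariants (hypothetical signature).
func (m *ExpirationManager) Register(req *Request, s *Secret) (string, error) {
	// Invariant 2: a registered lease must never have a blank token.
	if req.ClientToken == "" {
		return "", errors.New("cannot register lease: blank client token")
	}
	if err := m.persistLease(req, s); err != nil {
		// Invariant 1: failing to generate/persist the lease means we
		// try to revoke the secret rather than leak it unmanaged.
		if rerr := m.revokeSecret(req, s); rerr != nil {
			return "", fmt.Errorf("registration failed: %v; revoke also failed: %v", err, rerr)
		}
		return "", err
	}
	return s.LeaseID, nil
}

func main() {
	m := &ExpirationManager{}
	if _, err := m.Register(&Request{ClientToken: "t.abc"}, &Secret{LeaseID: "aws/creds/dev/123"}); err != nil {
		fmt.Println("error:", err)
	}
}
```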
* Add a benchmark for expiration.Restore
* Add benchmarks for consul Restore functions
* Add a parallel version of expiration.Restore (see the sketch after this list)
* Remove debug code
* Up the MaxIdleConnsPerHost
* Add tests for etcd
* Return errors and ensure go routines are exited
* Refactor inmem benchmark
* Add s3 bench and refactor a bit
* Few tweaks
* Fix race with waitgroup.Add()
* Fix waitgroup race condition
* Move wait above the info log
* Add helper/consts package to store consts that are needed in cyclic packages
* Remove unused benchmarks
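To make the WaitGroup fixes above concrete, here is a sketch of the parallel restore pattern under invented names: the key points from those commits are that wg.Add runs before each worker goroutine is spawned (not inside it), and wg.Wait completes before the completion log line is emitted.

```go
package main

import (
	"fmt"
	"sync"
)

// restoreLeases fans a fixed worker pool out over lease IDs.
func restoreLeases(leaseIDs []string, workers int) error {
	ids := make(chan string)
	errCh := make(chan error, 1)
	quit := make(chan struct{})

	var wg sync.WaitGroup
	for i := 0; i < workers; i++ {
		wg.Add(1) // correct: Add before the goroutine starts, not inside it
		go func() {
			defer wg.Done()
			for id := range ids {
				if err := restoreOne(id); err != nil {
					select {
					case errCh <- err:
						close(quit) // tell the producer to stop
					default:
					}
					return
				}
			}
		}()
	}

	// Producer feeds the workers and exits early on error.
	go func() {
		defer close(ids)
		for _, id := range leaseIDs {
			select {
			case ids <- id:
			case <-quit:
				return
			}
		}
	}()

	wg.Wait() // wait *before* logging that restore is complete
	select {
	case err := <-errCh:
		return err
	default:
	}
	fmt.Printf("expiration: restored %d leases\n", len(leaseIDs))
	return nil
}

func restoreOne(id string) error { return nil }

func main() { _ = restoreLeases([]string{"a", "b", "c"}, 2) }
```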
This should help with transient issues. Full control over the min/max
retry delays and the number of retries (including the ability to turn
retries off) is provided in the API and via env vars.
Fix tests.
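As a sketch of the retry behavior described (the env var name and defaults below are placeholders, not necessarily what Vault ships):

```go
package main

import (
	"errors"
	"fmt"
	"math/rand"
	"os"
	"strconv"
	"time"
)

// retryConfig mirrors the knobs described above: min/max delay, a retry
// count, and the ability to disable retries entirely.
type retryConfig struct {
	minWait    time.Duration
	maxWait    time.Duration
	maxRetries int // 0 disables retrying
}

func configFromEnv() retryConfig {
	c := retryConfig{minWait: time.Second, maxWait: 16 * time.Second, maxRetries: 2}
	if v := os.Getenv("EXAMPLE_MAX_RETRIES"); v != "" { // hypothetical env var
		if n, err := strconv.Atoi(v); err == nil {
			c.maxRetries = n
		}
	}
	return c
}

// withRetries runs op, backing off exponentially (with jitter) between
// attempts, capped at maxWait.
func withRetries(c retryConfig, op func() error) error {
	var err error
	for attempt := 0; ; attempt++ {
		if err = op(); err == nil {
			return nil
		}
		if attempt >= c.maxRetries {
			return err
		}
		wait := c.minWait << uint(attempt)
		if wait > c.maxWait {
			wait = c.maxWait
		}
		wait += time.Duration(rand.Int63n(int64(wait) / 2)) // jitter
		time.Sleep(wait)
	}
}

func main() {
	err := withRetries(configFromEnv(), func() error {
		return errors.New("transient 502 from server")
	})
	fmt.Println("final:", err)
}
```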
The expiration manager would never be poked to remove token entries upon
token revocation, if that revocation was initiated in the token store
itself. It might have been done this way to avoid deadlock, since during
revocation of tokens the expiration manager is called, which then calls
back into the token store, and so on.
This adds a way to skip that last call back into the token store if we
know that we're on the revocation path because we're in the middle of
revoking a token. That way the lease is cleaned up. This both prevents
log entries appearing for already-revoked tokens, and it also releases
timer/memory resources since we're not keeping the leases around.
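A toy illustration of the cycle-breaking flag described above; the names are invented, and the real change threads this through Vault's token store and expiration manager rather than two small structs.

```go
package main

import "fmt"

type tokenStore struct{ exp *expirationManager }
type expirationManager struct{ ts *tokenStore }

// revokeToken is the entry point when a token itself is revoked.
func (ts *tokenStore) revokeToken(id string) {
	fmt.Println("token store: revoking token", id)
	// Clean up the token's leases, noting that we are already on the
	// token revocation path.
	ts.exp.revokeLeasesByToken(id, true /* skipTokenStore */)
}

// revokeLeasesByToken cleans up leases tied to a token. When the
// revocation originated in the token store, skipTokenStore breaks the
// token-store -> expiration-manager -> token-store cycle.
func (em *expirationManager) revokeLeasesByToken(id string, skipTokenStore bool) {
	fmt.Println("expiration: revoking leases for token", id)
	if skipTokenStore {
		return // the token entry is already being removed by the caller
	}
	em.ts.revokeToken(id)
}

func main() {
	ts := &tokenStore{}
	em := &expirationManager{ts: ts}
	ts.exp = em
	ts.revokeToken("t.example")
}
```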
In some situations, it can be impossible to revoke leases (for instance,
if someone has gone and manually removed users created by Vault). This
can not only cause Vault to cycle trying to revoke them, but it also
prevents mounts from being unmounted, leaving them in a tainted state
where the only operations allowed are to revoke (or rollback), which
will never successfully complete.
This adds a new endpoint that works similarly to `revoke-prefix` but
ignores errors coming from a backend upon revocation (it does not ignore
errors coming from within the expiration manager, such as errors
accessing the data store). This can be used to force Vault to abandon
leases.
Like `revoke-prefix`, this is a very sensitive operation and requires
`sudo`. It is implemented as a separate endpoint, rather than an
argument to `revoke-prefix`, to ensure that control can be delegated
appropriately, as even most administrators should not normally have
this privilege.
Fixes #1135
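Sketching the distinction under illustrative names: the force variant ignores errors from the backend's revoke call but still surfaces errors from the expiration manager's own bookkeeping.

```go
package main

import (
	"errors"
	"fmt"
	"strings"
)

// revokePrefix models each lease as its backend revocation function.
// With force=true, backend errors are logged and ignored; removal of
// the lease entry itself is not covered by force.
func revokePrefix(leases map[string]func() error, prefix string, force bool) error {
	for id, revokeInBackend := range leases {
		if !strings.HasPrefix(id, prefix) {
			continue
		}
		if err := revokeInBackend(); err != nil {
			if !force {
				return fmt.Errorf("failed to revoke %q: %v", id, err)
			}
			fmt.Printf("force-revoke: ignoring backend error for %q: %v\n", id, err)
		}
		delete(leases, id) // abandon the lease
	}
	return nil
}

func main() {
	leases := map[string]func() error{
		"postgresql/creds/readonly/abc": func() error { return errors.New("user already dropped") },
	}
	// Force-revoke abandons the lease despite the backend error.
	_ = revokePrefix(leases, "postgresql/", true)
	fmt.Println("remaining leases:", len(leases))
}
```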
Due to a typo, revoking ensures that index entries are created rather
than removed. This adds a failing, then fixed test case (and helper
function) to ensure that index entries are properly removed on revoke.
Fixes #749
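The class of bug and test is easy to model in miniature (toy names, not the real token store code):

```go
package tokenindex

import "testing"

// store is a toy index keyed by "parent/child", standing in for the
// storage backend behind the token store.
type store map[string]bool

func (s store) saveIndex(parent, child string)     { s[parent+"/"+child] = true }
func (s store) deleteIndex(parent, child string)   { delete(s, parent+"/"+child) }
func (s store) hasIndex(parent, child string) bool { return s[parent+"/"+child] }

// revoke previously called the save path here by mistake; the fix calls
// the delete path.
func revoke(s store, parent, child string) { s.deleteIndex(parent, child) }

func TestIndexRemovedOnRevoke(t *testing.T) {
	s := store{}
	s.saveIndex("root", "t1")
	revoke(s, "root", "t1")
	if s.hasIndex("root", "t1") {
		t.Fatal("index entry should be removed on revoke, not created")
	}
}
```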
In order to implement this efficiently, I have introduced the concept of
"singleton" backends -- currently, 'sys' and 'cubbyhole'. There isn't
much reason to allow sys to be mounted at multiple places, and there
isn't much reason you'd need multiple per-token storage areas. By
restricting it to just one, I can store that particular mount instead of
iterating through them in order to call the appropriate revoke function.
Additionally, because revocation on the backend needs to be triggered by
the token store, the token store's salt is kept in the router and
client tokens going to the cubbyhole backend are double-salted by the
router. This allows the token store to drive when revocation happens
using its salted tokens.
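A rough sketch of the double-salting idea under invented names (the real router and token store code differs):

```go
package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
)

// saltID is a stand-in for a salted-ID helper: hash the token with a
// per-store salt so raw tokens never appear in storage paths.
func saltID(salt, id string) string {
	h := sha256.Sum256([]byte(salt + id))
	return hex.EncodeToString(h[:])
}

// routeToCubbyhole sketches the double-salting described above: the
// token store salts tokens once; the router, which holds the token
// store's salt, salts the client token again before handing the request
// to the cubbyhole backend, so the token store can drive revocation
// using only its salted tokens.
func routeToCubbyhole(tokenStoreSalt, clientToken string) (storageKey string) {
	once := saltID(tokenStoreSalt, clientToken)
	twice := saltID(tokenStoreSalt, once)
	return "cubbyhole/" + twice
}

func main() {
	fmt.Println(routeToCubbyhole("per-cluster-salt", "t.example"))
}
```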
/cc @armon - This is a reasonably major refactor that I think cleans up
a lot of the logic with secrets in responses. The reason for the
refactor is that while implementing Renew/Revoke in logical/framework I
found the existing API to be really awkward to work with.
Primarily, we needed a way to send down internal data for Vault core to
store, since not all the data you need to revoke a key is always sent
down to the user (for example, the user that an AWS key belongs to).
At first, I was doing this manually in logical/framework with
req.Storage, but this is going to be such a common event that I think
it's something core should assist with. Additionally, I think the added
context for secrets will be useful in the future when we have a Vault
API for returning orphaned keys: we can also return the internal
data that might help an operator.
So this leads me to this refactor. I've removed most of the fields in
`logical.Response` and replaced it with a single `*Secret` pointer. If
this is non-nil, then the response represents a secret. The Secret
struct encapsulates all the lease info and such.
It also has some fields on it that are only populated at _request_ time
for Revoke/Renew operations. There is precedent for this sort of
behavior in the Go stdlib where http.Request/http.Response have fields
that differ based on client/server. I copied this style.
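Condensed, the shape being described looks roughly like this (a sketch reconstructed from the prose above, not the exact struct definitions):

```go
package logical

import "time"

// Response carries a single *Secret pointer; a non-nil Secret means the
// response represents a secret.
type Response struct {
	Secret *Secret
	Data   map[string]interface{}
}

// Secret encapsulates the lease info for a secret.
type Secret struct {
	// Lease options returned with the secret.
	Lease     time.Duration
	Renewable bool

	// InternalData is extra data the backend wants core to store and
	// hand back on Revoke/Renew (e.g. which user an AWS key belongs
	// to), without exposing it to the client.
	InternalData map[string]interface{}

	// Populated by core only at request time for Revoke/Renew
	// operations, analogous to http.Request/http.Response fields that
	// differ between client and server use.
	LeaseID string
}
```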
All core unit tests pass. The APIs fail for obvious reasons but I'll fix
that up in the next commit.