This allows us to truly delete policies when we've either invalidated them
(which, since they're singletons/default, should only happen when we're
doing a namespace delete) or are doing a namespace delete on the local
node.
When unmounting, the router entry would be tainted, preventing routing.
However, we would then unmount the router before clearing storage, so if
an error occurred the router would have forgotten the path. For auth
mounts this wasn't a problem since they had a secondary check, but
regular mounts didn't (not sure why, but this is true back to at least
0.2.0). This meant you could then create a duplicate mount using the
same path which would then not conflict in the router until postUnseal.
This adds the extra check to regular mounts, and also moves the location
of the router unmount.
This also ensures that on the next router.Mount, tainted is set to the
mount entry's tainted status.
Fixes #6769
Move audit.LogInput to sdk/logical. Allow the Data values in audited
logical.Request and Response to implement OptMarshaler, in which case
we delegate hashing/serializing responsibility to them. Add new
ClientCertificateSerialNumber audit request field.
SystemView can now be cast to ExtendedSystemView to expose the Auditor
interface, which allows submitting requests and responses to the audit
broker.
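A minimal sketch of what delegating hashing/serialization to a Data value can look like; the OptMarshaler name comes from the change above, but the exact method signature and options type are assumptions for illustration:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// MarshalOptions and the method signature below are assumptions for
// illustration; the real sdk/logical types may differ.
type MarshalOptions struct {
	ValueHasher func(string) string
}

// OptMarshaler lets a value control how it is serialized (and hashed)
// when it appears in an audited request or response.
type OptMarshaler interface {
	MarshalJSONWithOptions(*MarshalOptions) ([]byte, error)
}

// secret is a hypothetical Data value that hashes itself before auditing.
type secret struct {
	Plain string
}

func (s secret) MarshalJSONWithOptions(opts *MarshalOptions) ([]byte, error) {
	return json.Marshal(map[string]string{"value": opts.ValueHasher(s.Plain)})
}

// auditValue delegates to OptMarshaler when implemented, otherwise
// falls back to plain JSON marshaling.
func auditValue(v interface{}, opts *MarshalOptions) ([]byte, error) {
	if om, ok := v.(OptMarshaler); ok {
		return om.MarshalJSONWithOptions(opts)
	}
	return json.Marshal(v)
}

func main() {
	opts := &MarshalOptions{ValueHasher: func(s string) string { return "hmac-sha256:" + s }}
	out, _ := auditValue(secret{Plain: "top-secret"}, opts)
	fmt.Println(string(out))
}
```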
* Port over some SP v2 bits
Specifically:
* Add too-large handling to Physical (Consul only for now)
* Contextify some identity funcs
* Update SP protos
* Add size limiting to inmem storage (sketched below)
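A rough sketch of the size-limiting idea for an in-memory backend; the field name, limit handling, and error value here are assumptions for illustration, not the actual physical/inmem code:

```go
package main

import (
	"errors"
	"fmt"
	"sync"
)

// ErrValueTooLarge mirrors the idea of a "too large" error surfaced by the
// physical layer; the exact error value in Vault may differ.
var ErrValueTooLarge = errors.New("put failed due to value being too large")

// InmemBackend is a toy in-memory store with an entry-size ceiling.
type InmemBackend struct {
	mu           sync.RWMutex
	data         map[string][]byte
	maxEntrySize int // 0 means unlimited
}

func NewInmemBackend(maxEntrySize int) *InmemBackend {
	return &InmemBackend{data: make(map[string][]byte), maxEntrySize: maxEntrySize}
}

// Put rejects entries larger than the configured ceiling before storing.
func (b *InmemBackend) Put(key string, value []byte) error {
	if b.maxEntrySize > 0 && len(value) > b.maxEntrySize {
		return ErrValueTooLarge
	}
	b.mu.Lock()
	defer b.mu.Unlock()
	b.data[key] = value
	return nil
}

func main() {
	b := NewInmemBackend(4)
	fmt.Println(b.Put("ok", []byte("abc")))     // nil
	fmt.Println(b.Put("big", []byte("abcdef"))) // too large
}
```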
These paths are handled directly in handler.go, but the list of special
paths here impacts the x-vault-unauthenticated field in generated
OpenAPI.
Fixes: #6651
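For reference, backends typically advertise their unauthenticated paths through a special-paths declaration, and the OpenAPI generator reads that list to set x-vault-unauthenticated. A minimal sketch, assuming the sdk's framework.Backend and logical.Paths shapes; this is illustrative, not the handler.go change itself:

```go
package example

import (
	"github.com/hashicorp/vault/sdk/framework"
	"github.com/hashicorp/vault/sdk/logical"
)

// Backend returns a minimal backend whose special paths mark "login"
// as unauthenticated; generated OpenAPI uses this list to set
// x-vault-unauthenticated on the matching operations.
func Backend() *framework.Backend {
	return &framework.Backend{
		BackendType: logical.TypeCredential,
		PathsSpecial: &logical.Paths{
			Unauthenticated: []string{
				"login",
				"login/*",
			},
		},
	}
}
```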
Merge both functions for creating mongodb containers into one.
Add retries to docker container cleanups.
Require $VAULT_ACC be set to enable AWS tests.
* Don't bother trying to save metrics when we don't have a barrier. Use stateLock.
* Use c.barrier instead of c.systemBarrierView, so we don't need locking
and don't need to worry about a race with mount setup.
* Remove unnecessary lock.
* Fix a locking issue in the Rollback manager
* Update rollback.go
* Update rollback.go
* move state creation
* Update vault/rollback.go
Co-Authored-By: briankassouf <briankassouf@users.noreply.github.com>
* Simplify logic by canceling the lock grab
* Use context instead of a chan (see the sketch after this list)
* Update vault/rollback.go
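A hedged sketch of the "cancel the lock grab" pattern mentioned in the list above, using a context instead of a channel; the helper name and shape are illustrative rather than the exact code in vault/rollback.go:

```go
package main

import (
	"context"
	"fmt"
	"sync"
	"time"
)

// grabLockOrCancel tries to acquire lock but gives up if ctx is
// canceled first. It returns true only when the lock is held.
func grabLockOrCancel(ctx context.Context, lock *sync.Mutex) bool {
	acquired := make(chan struct{})
	go func() {
		lock.Lock()
		close(acquired)
	}()
	select {
	case <-acquired:
		return true
	case <-ctx.Done():
		// The goroutine above may still grab the lock later; release it
		// once it does so we don't leak a held mutex.
		go func() {
			<-acquired
			lock.Unlock()
		}()
		return false
	}
}

func main() {
	var l sync.Mutex
	l.Lock() // simulate a long-held lock

	ctx, cancel := context.WithTimeout(context.Background(), 50*time.Millisecond)
	defer cancel()

	if !grabLockOrCancel(ctx, &l) {
		fmt.Println("gave up waiting for the lock")
	}
}
```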
* fixing dockertest to run on travis
* try a repo local directory
* precreate the directory
* strip extraneous comment
* check directory was created
* try to print container logs
* try writing out client logs
* one last try
* Attempt to fix test
* convert to insecure tls
* strip test-temp
Increment a counter whenever a request is received.
The in-memory counter is persisted to counters/requests/YYYY/MM.
When the month wraps around, we reset the in-memory counter to
zero.
Add an endpoint for querying the request counters across all time.
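A simplified sketch of the month-bucketed counter described above; the storage path follows the counters/requests/YYYY/MM layout, but the types and persistence are stand-ins for illustration:

```go
package main

import (
	"fmt"
	"time"
)

// requestCounter tracks requests for the current month; it is not
// concurrency-safe here, which the real implementation would need to be.
type requestCounter struct {
	count uint64
	month time.Time // first instant of the month being counted
}

func startOfMonth(t time.Time) time.Time {
	return time.Date(t.Year(), t.Month(), 1, 0, 0, 0, 0, time.UTC)
}

// Increment bumps the counter, resetting it when the month wraps around.
func (c *requestCounter) Increment(now time.Time) {
	if m := startOfMonth(now); m != c.month {
		c.persist()
		c.month = m
		c.count = 0
	}
	c.count++
}

// persist writes the counter under counters/requests/YYYY/MM
// (stubbed out as a print here).
func (c *requestCounter) persist() {
	path := fmt.Sprintf("counters/requests/%04d/%02d", c.month.Year(), int(c.month.Month()))
	fmt.Printf("persist %d requests to %s\n", c.count, path)
}

func main() {
	c := &requestCounter{month: startOfMonth(time.Now().UTC())}
	c.Increment(time.Now().UTC())
	c.persist()
}
```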
* Add ability to migrate autoseal to autoseal
This adds the ability to migrate from shamir to autoseal, autoseal to
shamir, or autoseal to autoseal, by allowing multiple seal stanzas. A
disabled stanza will be used as the config being migrated from; this can
also be used to provide an unwrap seal on Enterprise over multiple unseals.
A new test is added to ensure that autoseal to autoseal works as
expected.
* Fix test
* Provide default shamir info if not given in config
* Linting feedback
* Remove context var that isn't used
* Don't run auto unseal watcher when in migration, and move SetCores to SetSealsForMigration func
* Slight logic cleanup
* Fix test build and fix bug
* Updates
* remove GetRecoveryKey function
* First pass at the filtered-path endpoint. It seems to be working, but tests are missing, and some optimization may be needed to handle large key sets.
* Vendor go-cmp.
* Fix incomplete vendoring of go-cmp.
* Improve test coverage. Fix bug whereby access to a subtree named X would expose the existence of the key named X at the same level.
* Add benchmarks, which showed that hasNonDenyCapability would be "expensive" to call for every member of a large folder. Made a couple of minor tweaks so that now it can be done without allocations.
* Comment cleanup.
* Review requested changes: rename some funcs, use routeCommon instead of
querying storage directly.
* Keep the same endpoint for now, but move it from a LIST to a POST and allow multiple paths to be queried in one operation.
* Modify test to pass multiple paths in at once.
* Add endpoint to default policy.
* Move endpoint to /sys/access/filtered-path.
* Handle ns lease and token renew/revoke via relative paths
* s/usin/using/
* add token and lease lookup paths; set ctx only on non-nil ns
Additionally, use the client token's ns for auth/token/lookup if no token is provided
* Adding Transit Autoseal
* adding tests
* adding more tests
* updating seal info
* send a value to test and set current key id
* updating message
* cleanup
* Adding tls config, addressing some feedback
* adding tls testing
* renaming config fields for tls
* Path globbing
* Add glob support at the beginning
* Ensure when evaluating an ACL that our path never has a leading slash. This already happens in the normal request path but not in tests; putting it here provides it for tests and extra safety in case the request path changes
* Simplify the algorithm; we don't really need to validate the prefix first, as the glob won't apply if the prefix doesn't match
* Add path segment wildcarding (see the sketch after this list)
* Disable path globbing for now
* Remove now-unneeded test
* Remove commented out globbing bits
* Remove more holdover glob bits
* Rename k var to something more clear
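As referenced in the list above, a hedged sketch of per-segment wildcarding, where a `+` pattern segment matches exactly one path segment; this is illustrative and not the ACL code from this change:

```go
package main

import (
	"fmt"
	"strings"
)

// segmentWildcardMatch reports whether path matches pattern, where a
// "+" pattern segment matches exactly one path segment and a trailing
// "*" matches any suffix (glob behavior was disabled in this change,
// so treat the "*" handling as illustrative only).
func segmentWildcardMatch(pattern, path string) bool {
	pSegs := strings.Split(pattern, "/")
	segs := strings.Split(path, "/")

	for i, p := range pSegs {
		if p == "*" && i == len(pSegs)-1 {
			return true
		}
		if i >= len(segs) {
			return false
		}
		if p == "+" {
			continue
		}
		if p != segs[i] {
			return false
		}
	}
	return len(pSegs) == len(segs)
}

func main() {
	fmt.Println(segmentWildcardMatch("secret/+/config", "secret/app1/config")) // true
	fmt.Println(segmentWildcardMatch("secret/+/config", "secret/a/b/config"))  // false
}
```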
* Port over OSS cluster port refactor components
* Start forwarding
* Cleanup a bit
* Fix copy error
* Return error from perf standby creation
* Add some more comments
* Fix copy/paste error
* initial commit for prometheus and sys/metrics support
* Throw an error if prometheusRetentionTime is 0; add prometheus in dev mode
* Return early when format=prometheus is used and prometheus is disabled
* Parse prometheus_retention_time from a string instead of an int (see the sketch after this list)
* Initialize config.Telemetry if nil
* address PR issues
* add sys/metrics framework.Path in a factory
* Apply requiredMountTable entries' MountConfig to the existing core table
* address pr comments
* enable prometheus sink by default
* Move Metric-related code in a separate metricsutil helper
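As referenced in the list above, a hedged sketch of parsing prometheus_retention_time from a duration string and rejecting a zero value; the default and function name are illustrative, not Vault's actual telemetry parsing:

```go
package main

import (
	"fmt"
	"time"
)

// parsePrometheusRetention converts the config string (e.g. "24h") into
// a duration and rejects zero, since a zero retention would make the
// prometheus sink useless.
func parsePrometheusRetention(raw string) (time.Duration, error) {
	if raw == "" {
		return 24 * time.Hour, nil // illustrative default, not Vault's actual default
	}
	d, err := time.ParseDuration(raw)
	if err != nil {
		return 0, fmt.Errorf("invalid prometheus_retention_time %q: %w", raw, err)
	}
	if d == 0 {
		return 0, fmt.Errorf("prometheus_retention_time must be greater than 0")
	}
	return d, nil
}

func main() {
	d, err := parsePrometheusRetention("30s")
	fmt.Println(d, err)
	_, err = parsePrometheusRetention("0s")
	fmt.Println(err)
}
```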
* Fixes a regression in forwarding from #6115
Although removing the authentication header is good defense in depth,
for forwarding mechanisms that use the raw request, we never add it
back. This caused perf standby tests to throw errors. Instead, once
we're past the point at which we would do any raw forwarding, but before
routing the request, remove the header.
To speed this up, a flag is set in the logical.Request to indicate where
the token is sourced from. That way we don't iterate through maps
unnecessarily.
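A hedged illustration of that token-source flag; the enum values, field names, and header handling are assumptions patterned on the description above, not necessarily the real logical.Request fields:

```go
package main

import "fmt"

// tokenSource records how the client token arrived on the request, so
// later code can decide whether a raw forwarded request still carries
// an auth header that needs stripping, without re-scanning header maps.
type tokenSource int

const (
	tokenFromHeader      tokenSource = iota // X-Vault-Token style header
	tokenFromAuthzHeader                    // Authorization: Bearer ...
	tokenFromBody                           // e.g. a wrapped token in the body
)

type request struct {
	ClientToken       string
	ClientTokenSource tokenSource
}

// stripAuthHeaderIfNeeded removes the auth header only when the token
// actually came from a header, avoiding unnecessary map iteration.
func stripAuthHeaderIfNeeded(headers map[string][]string, req *request) {
	switch req.ClientTokenSource {
	case tokenFromHeader:
		delete(headers, "X-Vault-Token")
	case tokenFromAuthzHeader:
		delete(headers, "Authorization")
	}
}

func main() {
	h := map[string][]string{"X-Vault-Token": {"s.example"}}
	stripAuthHeaderIfNeeded(h, &request{ClientToken: "s.example", ClientTokenSource: tokenFromHeader})
	fmt.Println(h)
}
```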
* Merge entities during unseal only on the primary
* Add another guard check
* Add perf standby to the check
* Ensure the primary does not differ from secondaries with respect to case-insensitivity status
* Ensure mutual exclusivity between loading and invalidations
* Both primary and secondaries won't persist during startup and invalidations
* Allow primary to persist when loading case sensitively
* Using core.perfStandby
* Add a tweak in core for testing
* Address review feedback
* update memdb but not storage in secondaries
* Wire all the things directly to mergeEntity
* Fix persist behavior
* Address review feedback
* Two things:
* Change how we populate and clear the leader UUID. This fixes a case where,
if a standby disconnects from an active node and reconnects without the
active node restarting, the UUID doesn't change, so triggers on a new
active node don't get run.
* Add a bunch of test helpers and minor updates to things.
This lets other parts of Vault that can't depend on the vault package
take advantage of the subview functionality.
This also allows getting rid of BarrierStorage and vault.Entry, two
totally redundant abstractions.
* Add helper for checking if an error is a fatal error
The double negative was really confusing, and this pattern is used in a few places in Vault. This helper removes the double negative, making the devx a bit easier to follow.
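A small sketch of the kind of helper being described, assuming a non-fatal error wrapper type; names here are illustrative rather than the actual vault package code:

```go
package main

import (
	"errors"
	"fmt"
)

// nonFatalError wraps errors that the caller may safely retry or ignore.
// The name mirrors the pattern described above; the real type lives in
// the vault package.
type nonFatalError struct {
	err error
}

func (e *nonFatalError) Error() string { return e.err.Error() }

func newNonFatalError(err error) error { return &nonFatalError{err: err} }

// isFatalError replaces the old "not a non-fatal error" double negative
// with a single, readable check.
func isFatalError(err error) bool {
	if err == nil {
		return false
	}
	var nf *nonFatalError
	return !errors.As(err, &nf)
}

func main() {
	fmt.Println(isFatalError(newNonFatalError(errors.New("try again")))) // false
	fmt.Println(isFatalError(errors.New("disk on fire")))                // true
}
```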
* Check return value of UnsealWithStoredKeys in sys/init
* Return proper error types when attempting unseal with stored key
Prior to this commit, "nil" could have meant unsupported auto-unseal, a transient error, or success. This updates the function to return the correct error type, signaling to the caller whether they should retry or fail.
* Continuously attempt to unseal if sealed keys are supported
This fixes a bug that occurs when bootstrapping an initial cluster. Given a collection of Vault nodes and an initialized storage backend, they will all go into standby, waiting for initialization. After one node is initialized, the other nodes have no mechanism by which they "re-check" to see if unseal keys are present. This adds a goroutine to the server command which continually waits for unseal keys to exist. It exits in the following conditions:
- the node is unsealed
- the node does not support stored keys
- a fatal error occurs (as defined by Vault)
- the server is shutting down
In all other situations, the routine wakes up at the specified interval and attempts to unseal with the stored keys.
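A simplified sketch of that retry loop; function names like unsealWithStoredKeys and supportsStoredKeys are placeholders for whatever the server command actually calls:

```go
package main

import (
	"context"
	"errors"
	"log"
	"time"
)

// errFatal stands in for whatever Vault classifies as a fatal unseal error.
var errFatal = errors.New("fatal unseal error")

// runUnsealLoop retries stored-key unsealing until one of the exit
// conditions above is hit: unsealed, stored keys unsupported, a fatal
// error, or server shutdown (ctx canceled).
func runUnsealLoop(ctx context.Context, interval time.Duration,
	supportsStoredKeys func() bool,
	unsealWithStoredKeys func() (unsealed bool, err error)) {

	if !supportsStoredKeys() {
		return
	}
	ticker := time.NewTicker(interval)
	defer ticker.Stop()

	for {
		select {
		case <-ctx.Done():
			return
		case <-ticker.C:
			unsealed, err := unsealWithStoredKeys()
			if errors.Is(err, errFatal) {
				log.Printf("giving up on stored-key unseal: %v", err)
				return
			}
			if err != nil {
				continue // transient error: try again next tick
			}
			if unsealed {
				return
			}
		}
	}
}

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 50*time.Millisecond)
	defer cancel()
	runUnsealLoop(ctx, 10*time.Millisecond,
		func() bool { return true },
		func() (bool, error) { return false, nil })
}
```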
* Upgrade to new Cloud KMS client libraries
We recently released the new Cloud KMS client libraries which use GRPC
instead of HTTP. They are faster and look nicer (</opinion>), but more
importantly they drastically simplify a lot of the logic around client
creation, encryption, and decryption. In particular, we can drop all the
logic around looking up credentials and base64-encoding/decoding.
Tested on a brand new cluster (no pre-existing unseal keys) and against
a cluster with stored keys from a previous version of Vault to ensure no
regressions.
* Use the default scopes the client requests
The client already does the right thing here, so we don't need to
surface it, especially since we aren't allowing users to configure it.
* fix cubbyhole deletion
* Fix error handling
* Move the cubbyhole tidy logic to token store and track the revocation count
* Move fetching of cubby keys before the tidy loop
* Fix context getting cancelled
* Test the cubbyhole cleanup logic
* Add progress counter for cubbyhole cleanup
* Minor polish
* Use map instead of slice for faster computation
* Add test for cubbyhole deletion
* Add a log statement for deletion
* Add SHA1 hashed tokens into the mix
The result will still pass gofmtcheck and won't trigger additional
changes if someone isn't using goimports, but it will avoid the
piecemeal imports changes we've been seeing.
This changes the behavior of the GCPCKMS auto-unsealer setup to attempt
encryption instead of a key lookup. Key lookups are a different API
method not covered by roles/cloudkms.cryptoKeyEncrypterDecrypter. This
means users must grant an extended scope to their service account
(granting the ability to read key data) which only seems to be used to
validate the existence of the key.
Worse, the only roles that include this permission are overly verbose
(e.g. roles/viewer which gives readonly access to everything in the
project and roles/cloudkms.admin which gives full control over all key
operations). This leaves the user stuck between choosing to create a
custom IAM role (which isn't fun) or grant overly broad permissions.
By changing to an encrypt call, we get better verification of the unseal
permissions and users can reduce scope to a single role.
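Roughly what the permission check via an encrypt call looks like with the GRPC client library; treat this as a hedged sketch (key name, plaintext, and error handling are illustrative), not the exact seal code:

```go
package main

import (
	"context"
	"fmt"
	"log"

	kms "cloud.google.com/go/kms/apiv1"
	kmspb "google.golang.org/genproto/googleapis/cloud/kms/v1"
)

// verifyEncrypter checks that the service account can actually encrypt
// with the configured key, which only requires
// roles/cloudkms.cryptoKeyEncrypterDecrypter (no key-read permission).
func verifyEncrypter(ctx context.Context, keyName string) error {
	client, err := kms.NewKeyManagementClient(ctx)
	if err != nil {
		return fmt.Errorf("creating KMS client: %w", err)
	}
	defer client.Close()

	_, err = client.Encrypt(ctx, &kmspb.EncryptRequest{
		Name:      keyName,
		Plaintext: []byte("vault-gcpckms-permission-check"),
	})
	return err
}

func main() {
	// Key resource name format:
	// projects/PROJECT/locations/LOCATION/keyRings/RING/cryptoKeys/KEY
	if err := verifyEncrypter(context.Background(),
		"projects/my-project/locations/global/keyRings/vault/cryptoKeys/vault-unseal"); err != nil {
		log.Fatal(err)
	}
}
```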
* Refactor mount tune to support upserting options values and unsetting options.
* Do not allow unsetting options map
* add secret tune version regression test
* Only accept valid options version
* s/meVersion/optVersion/
We're having issues with leases in the GCS backend storage being
corrupted and failing MAC checking. When that happens, we need to know
the lease ID so we can address the corruption by hand and take
appropriate action.
This will hopefully prevent any instances of incomplete data being sent
to GSS