The extra core setup is no longer needed in Vault Enterprise, and the
licensing code here has no effect either here or in Vault Enterprise.
I pulled this commit into Vault Enterprise and it still compiled fine,
and all tests pass. (Though a few functions can be deleted there as
well after this is merged.)
When testing the rollback mechanism, there are two categories of tests
typically written:
1. Ones in which the rollback manager is left entirely alone; these are
usually a bit slower and less predictable, but still sufficient in many
scenarios.
2. Ones in which the rollback manager is explicitly probed by tests
and "stepped" to achieve the next rollback.
Here, without a mechanism to fully disable the rollback manager's
periodic ticker (without affecting its ability to work!), we'll continue
to see races of the sort:
> --- FAIL: TestRevocationQueue (50.95s)
> panic: sync: WaitGroup is reused before previous Wait has returned [recovered]
> panic: sync: WaitGroup is reused before previous Wait has returned
This allows us to disable the ticker, returning control to the test
suite entirely.
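A minimal sketch of the pattern (names are illustrative, not the actual RollbackManager API): a ticker-driven worker whose ticker can be stopped in tests while the work itself stays callable, so the test can "step" rollbacks deterministically.
```go
// Illustrative sketch (not the actual RollbackManager API): a ticker-driven
// worker with a test hook that stops the ticker while keeping the work
// callable, so a test can trigger rollbacks deterministically.
package rollback

import (
	"sync"
	"time"
)

type manager struct {
	mu     sync.Mutex
	ticker *time.Ticker
	stopCh chan struct{}
}

func newManager(period time.Duration) *manager {
	m := &manager{
		ticker: time.NewTicker(period),
		stopCh: make(chan struct{}),
	}
	go m.run()
	return m
}

func (m *manager) run() {
	for {
		select {
		case <-m.ticker.C:
			m.triggerOnce()
		case <-m.stopCh:
			return
		}
	}
}

// triggerOnce performs a single rollback pass; tests may call it directly.
func (m *manager) triggerOnce() {
	m.mu.Lock()
	defer m.mu.Unlock()
	// ... perform the actual rollbacks here ...
}

// DisableTicker stops the periodic trigger without affecting triggerOnce,
// returning control of rollback timing entirely to the test suite.
func (m *manager) DisableTicker() {
	m.ticker.Stop()
	close(m.stopCh)
}
```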
Signed-off-by: Alexander Scheel <alex.scheel@hashicorp.com>
For plugin tests, we copy the test binary. On macOS, if the
destination binary already exists, then copying over it will result
in an invalid signature.
The easiest workaround is to delete the file before copying.
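A minimal sketch of that workaround (helper name and paths are placeholders, not the actual test helper):
```go
// Minimal sketch of the workaround (helper and paths are placeholders): remove
// any existing destination binary first, so macOS never sees an in-place
// overwrite of a signed file.
package pluginhelpers

import (
	"io"
	"os"
)

func copyPluginBinary(src, dst string) error {
	// Delete the old binary if present; a missing file is fine.
	if err := os.Remove(dst); err != nil && !os.IsNotExist(err) {
		return err
	}
	in, err := os.Open(src)
	if err != nil {
		return err
	}
	defer in.Close()
	out, err := os.OpenFile(dst, os.O_CREATE|os.O_WRONLY|os.O_TRUNC, 0o755)
	if err != nil {
		return err
	}
	defer out.Close()
	_, err = io.Copy(out, in)
	return err
}
```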
* add RequestResponseCallback to core/options
Signed-off-by: Daniel Huckins <dhuckins@users.noreply.github.com>
* pass in router and apply function on requests
Signed-off-by: Daniel Huckins <dhuckins@users.noreply.github.com>
* add callback
Signed-off-by: Daniel Huckins <dhuckins@users.noreply.github.com>
* cleanup
Signed-off-by: Daniel Huckins <dhuckins@users.noreply.github.com>
* Apply suggestions from code review
Co-authored-by: Anton Averchenkov <84287187+averche@users.noreply.github.com>
* Update vault/core.go
* bad typo...
Signed-off-by: Daniel Huckins <dhuckins@users.noreply.github.com>
* use pvt interface, can't downcast to child struct
Signed-off-by: Daniel Huckins <dhuckins@users.noreply.github.com>
* finer grained errors
Signed-off-by: Daniel Huckins <dhuckins@users.noreply.github.com>
* trim path for backend
Signed-off-by: Daniel Huckins <dhuckins@users.noreply.github.com>
* remove entire mount point instead of just the first part of url
Signed-off-by: Daniel Huckins <dhuckins@users.noreply.github.com>
* Update vault/testing.go
Co-authored-by: Anton Averchenkov <84287187+averche@users.noreply.github.com>
* add doc string
Signed-off-by: Daniel Huckins <dhuckins@users.noreply.github.com>
* update docstring
Signed-off-by: Daniel Huckins <dhuckins@users.noreply.github.com>
* reformat
Signed-off-by: Daniel Huckins <dhuckins@users.noreply.github.com>
* added changelog
---------
Signed-off-by: Daniel Huckins <dhuckins@users.noreply.github.com>
Co-authored-by: Anton Averchenkov <84287187+averche@users.noreply.github.com>
* test/plugin: refactor compilePlugin for reuse
- move compilePlugin to helper package
- make NewTestCluster use compilePlugin
* do not overwrite plugin directory in CoreConfig if set
* fix getting plugin directory path for go build
This certainly isn't perfect yet, but it's solidifying and becoming a useful
base to work from.
This routes events sent from auth and secrets plugins to the main
`EventBus` in the Vault Core. Events sent from plugins are automatically
tagged with the namespace and plugin information associated with them.
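A hypothetical sketch of the routing idea (not the actual core or SDK types), showing an event being tagged with its originating namespace and plugin before being published on a shared bus:
```go
// Hypothetical sketch (not the actual core or SDK types): an event received
// from a plugin is tagged with its originating namespace and plugin before
// being published on a shared bus.
package main

import "fmt"

// Event is a simplified stand-in for a plugin-sent event payload.
type Event struct {
	Type     string
	Metadata map[string]string
}

// EventBus is a minimal stand-in for the core's event bus.
type EventBus struct{ ch chan Event }

func (b *EventBus) Publish(e Event) { b.ch <- e }

// routePluginEvent tags the event with its origin so subscribers can tell
// which namespace and plugin produced it, then publishes it.
func routePluginEvent(bus *EventBus, namespace, plugin string, e Event) {
	if e.Metadata == nil {
		e.Metadata = map[string]string{}
	}
	e.Metadata["namespace"] = namespace
	e.Metadata["plugin"] = plugin
	bus.Publish(e)
}

func main() {
	bus := &EventBus{ch: make(chan Event, 1)}
	routePluginEvent(bus, "ns1/", "kv", Event{Type: "kv-v2/data-write"})
	fmt.Printf("%+v\n", <-bus.ch)
}
```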
* Move some test helper stuff from the vault package to a new helper/testhelpers/corehelpers package. Consolidate on a single "noop audit" implementation.
* Add global, cross-cluster revocation queue to PKI
This adds a global, cross-cluster replicated revocation queue, allowing
operators to revoke certificates by serial number across any cluster. We
don't support revoking with private key (PoP) in the initial
implementation.
In particular, building on the PBPWF work, we add a special storage
location for handling non-local revocations which gets replicated up to
the active, primary cluster node and back down to all secondary PR
clusters. These then check the pending revocation entry and revoke the
serial locally if it exists, writing a cross-cluster confirmation entry.
Listing capabilities are present under pki/certs/revocation-queue,
allowing operators to see which certs are present. However, a future
improvement to the tidy subsystem will allow automatic cleanup of stale
entries.
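For example, assuming a mount named "pki" and the official Go client, a sketch of listing the pending requests:
```go
// Sketch using the official Go client, github.com/hashicorp/vault/api; the
// mount name "pki" is an assumption, adjust to your mount path.
package main

import (
	"fmt"
	"log"

	vault "github.com/hashicorp/vault/api"
)

func main() {
	client, err := vault.NewClient(vault.DefaultConfig()) // reads VAULT_ADDR / VAULT_TOKEN
	if err != nil {
		log.Fatal(err)
	}
	secret, err := client.Logical().List("pki/certs/revocation-queue")
	if err != nil {
		log.Fatal(err)
	}
	if secret == nil || secret.Data == nil {
		fmt.Println("no pending cross-cluster revocation requests")
		return
	}
	fmt.Println(secret.Data["keys"])
}
```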
Signed-off-by: Alexander Scheel <alex.scheel@hashicorp.com>
* Allow tidying revocation queue entries
No manual operator control of revocation queue entries is allowed.
However, entries are stored with their request time, allowing tidy to,
after a suitable safety buffer, remove these unconfirmed and presumably
invalid requests.
Notably, when a cluster goes offline, it will be unable to process
cross-cluster revocations for certificates it holds. If tidy runs,
potentially valid revocations may be removed. However, it is up to the
administrator to ensure the tidy window is sufficiently long that any
required maintenance is done (or, prior to maintenance when an issue is
first noticed, tidy is temporarily disabled).
Signed-off-by: Alexander Scheel <alex.scheel@hashicorp.com>
* Only allow enabling global revocation queue on Vault Enterprise
Signed-off-by: Alexander Scheel <alex.scheel@hashicorp.com>
* Use a locking queue to handle revocation requests
This queue attempts to guarantee that PKI's invalidateFunc won't have
to wait long to execute: by locking only around access to the queue
proper, and internally using a list, we minimize the time spent locked,
waiting for queue accesses.
Previously, we held a lock during tidy and processing that would've
prevented us from processing invalidateFunc calls.
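An illustrative sketch of the locking-queue idea (simplified, not the actual revocationQueue type):
```go
// Illustrative sketch (simplified, not the actual revocationQueue type): the
// mutex guards only the backing slice, so enqueuing from invalidateFunc
// returns quickly and the slow processing happens outside the lock.
package revocation

import "sync"

type request struct{ serial string }

type lockingQueue struct {
	mu    sync.Mutex
	items []request
}

// Add appends a request; only the append itself runs under the lock.
func (q *lockingQueue) Add(r request) {
	q.mu.Lock()
	q.items = append(q.items, r)
	q.mu.Unlock()
}

// Drain swaps out the pending items under the lock, then processes them
// unlocked so that new Add calls are never blocked by slow work.
func (q *lockingQueue) Drain(process func(request)) {
	q.mu.Lock()
	pending := q.items
	q.items = nil
	q.mu.Unlock()

	for _, r := range pending {
		process(r)
	}
}
```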
Signed-off-by: Alexander Scheel <alex.scheel@hashicorp.com>
* use_global_queue->cross_cluster_revocation
Signed-off-by: Alexander Scheel <alex.scheel@hashicorp.com>
* Grab revocation storage lock when processing queue
We need to grab the storage lock as we'll actively be revoking new
certificates in the revocation queue. This ensures nobody else is
competing for storage access, across periodic funcs, new revocations,
and tidy operations.
Signed-off-by: Alexander Scheel <alex.scheel@hashicorp.com>
* Fix expected tidy status test
Signed-off-by: Alexander Scheel <alex.scheel@hashicorp.com>
* Allow probing RollbackManager directly in tests
Signed-off-by: Alexander Scheel <alex.scheel@hashicorp.com>
* Address review feedback on revocationQueue
Signed-off-by: Alexander Scheel <alex.scheel@hashicorp.com>
* Add more cancel checks, fix starting manual tidy
Signed-off-by: Alexander Scheel <alex.scheel@hashicorp.com>
Signed-off-by: Alexander Scheel <alex.scheel@hashicorp.com>
* add core state lock deadlock detection config option v2
* add changelog
* split out NewTestCluster function to maintain build flag
* replace long func with constant
* remove line
* rename file, and move where detect deadlock flag is set
* Allow mounting external plugins with same name/type as deprecated builtins
* Add some go tests for deprecation status handling
* Move timestamp storage to post-unseal
* Add upgrade-aware deprecation shutdown and tests
Move version out of SDK. For now it's a copy rather than a move: the part not addressed by this change is sdk/helper/useragent.String, which we'll want to remove in favour of PluginString. That will have to wait until we've removed uses of useragent.String from all builtins.
* Add test that fails due to audit log panic
* Rebuild VersionedPlugin as map of primitive types before adding to response
* Changelog
* Fix casting in external plugin tests
Create global quotas of each type in every NewTestCluster. Also switch some key locks to use DeadlockMutex to make it easier to discover deadlocks in testing.
NewTestCluster also now starts the cluster, and the Start method becomes a no-op. Unless SkipInit is provided, we also wait for a node to become active, eliminating the need for WaitForActiveNode. This was needed because otherwise we can't safely make the quota api call. We can't do it in Start because Start doesn't return an error, and I didn't want to begin storing the testing object T inside TestCluster just so we could call t.Fatal inside Start.
The last change here was to address the problem of how to skip setting up quotas when creating a cluster with a nonstandard handler that might not even implement the quotas endpoint. The challenge is that, because we were taking a func pointer to generate the real handler func, we had no way to compare that func pointer to the standard handler-generating func http.Handler without creating a circular dependency between packages vault and http. The solution was to pass a method instead of an anonymous func pointer so that we can do reflection on it.
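A small sketch of that comparison trick (names are invented): reflect can expose a function value's code pointer, so a known, named handler function can be recognized, which is not possible with freshly created anonymous closures.
```go
// Illustrative sketch (names are invented): references to the same named
// function resolve to the same code pointer, which reflect exposes, so a
// "standard" handler can be detected by pointer comparison.
package main

import (
	"fmt"
	"reflect"
)

type handlerProps struct{}

func standardHandler(p *handlerProps) {}
func customHandler(p *handlerProps)   {}

func isStandardHandler(fn func(*handlerProps)) bool {
	return reflect.ValueOf(fn).Pointer() == reflect.ValueOf(standardHandler).Pointer()
}

func main() {
	fmt.Println(isStandardHandler(standardHandler)) // true
	fmt.Println(isStandardHandler(customHandler))   // false
}
```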
* Removes _builtin_ versions from mount storage where it already exists
* Stops new builtin versions being put into storage on mount creation/tuning
* Stops the plugin catalog from returning a builtin plugin that has been overridden, so it more accurately reflects the plugins that are available to actually run
Ensure that we don't try to access Core.perfStandby or Core.PerfStandby() from dynamicSystemView, which might be accessed with or without stateLock held.
Change the multiplexing key to use all `PluginRunner` config (converted to a struct which is comparable), so that plugins with the same name but different env, args, types, versions etc are not incorrectly multiplexed together.
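An illustrative sketch of the keying change (field names are examples, not the real PluginRunner fields):
```go
// Illustrative sketch (field names are examples, not the real PluginRunner
// fields): multiplexed plugin clients are keyed by a comparable struct built
// from the full runner configuration instead of by name alone, so plugins
// that differ in env, args, type, or version never share a client.
package plugincache

type runnerKey struct {
	Name    string
	Type    string
	Version string
	Command string
	Args    string // e.g. the joined args slice, kept as a string so the struct stays comparable
	Env     string // likewise for env vars
}

type client struct{ /* multiplexed plugin client */ }

var multiplexed = map[runnerKey]*client{}

// getOrCreate reuses a client only when every part of the key matches.
func getOrCreate(key runnerKey, create func() *client) *client {
	if c, ok := multiplexed[key]; ok {
		return c
	}
	c := create()
	multiplexed[key] = c
	return c
}
```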
Co-authored-by: Christopher Swenson <christopher.swenson@hashicorp.com>
* Support version selection for database plugins
* Don't consider unversioned plugins for version selection algorithm
* Added version to 'plugin not found' error
* Add PluginFactoryVersion function to avoid changing sdk/ API
* OSS parts of ent #3157. Some activity log tests were flaky because background workers could race with them; now we overload DisableTimers to stop some of them from running, and add some channels we can use to wait for others to complete before we start testing.
* Add CL
* HCP link integration
* update configure-git.yml
* more OSS stuff
* removing internal repos
* adding a nil check
* removing config test to be included in ENT only
* updating hcp-sdk-go to v0.22.0
* remove Hostname and AuthURL link config params
Co-authored-by: Chris Capurso <1036769+ccapurso@users.noreply.github.com>
* auth: Add Deprecation Status to auth list -detailed
* secrets: Add Deprecation Status to secrets list -detailed
* Add changelog entry for deprecation status list
* Add ability to perform automatic tidy operations
This enables the PKI secrets engine to start tidy operations periodically
on its own, avoiding the need for manual operator interaction.
This operation is disabled by default (to avoid load on clusters which
don't need tidy to be run) but can be enabled.
In particular, a default tidy configuration is written (via
/config/auto-tidy) which mirrors the options passed to /tidy. Two
additional parameters, enabled and interval, are accepted, allowing
auto-tidy to be enabled or disabled and controlling the interval
(between successful tidy runs) at which auto-tidy is attempted.
Notably, a manual execution of tidy will delay additional auto-tidy
operations. Status is reported via the existing /tidy-status endpoint.
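A sketch of enabling auto-tidy with the Go client; the parameter names follow the description above and the extra tidy options are examples, so check the API docs for the exact field names on your Vault version.
```go
// Sketch of enabling auto-tidy via the Go client; the parameter names follow
// the description above (enabled, interval) plus example tidy options, and may
// differ slightly from the exact API field names on a given Vault version.
package main

import (
	"log"

	vault "github.com/hashicorp/vault/api"
)

func main() {
	client, err := vault.NewClient(vault.DefaultConfig())
	if err != nil {
		log.Fatal(err)
	}
	_, err = client.Logical().Write("pki/config/auto-tidy", map[string]interface{}{
		"enabled":         true,
		"interval":        "12h", // time between successful auto-tidy runs
		"tidy_cert_store": true,  // mirrored /tidy option (example)
		"safety_buffer":   "72h", // mirrored /tidy option (example)
	})
	if err != nil {
		log.Fatal(err)
	}
}
```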
Signed-off-by: Alexander Scheel <alex.scheel@hashicorp.com>
* Add changelog entry
Signed-off-by: Alexander Scheel <alex.scheel@hashicorp.com>
* Add documentation on auto-tidy
Signed-off-by: Alexander Scheel <alex.scheel@hashicorp.com>
* Add tests for auto-tidy
Signed-off-by: Alexander Scheel <alex.scheel@hashicorp.com>
* Prevent race during parallel testing
We modified the RollbackManager's execution window to allow more
faithful testing of the periodicFunc. However, TestAutoRebuild and
the new TestAutoTidy would then race against each other when modifying
the period and creating their clusters (before resetting to the old
value).
This changeset adds a lock around this, preventing the races.
Signed-off-by: Alexander Scheel <alex.scheel@hashicorp.com>
* Use tidyStatusLock to gate lastTidy time
This prevents a data race between the periodic func and the running tidy
operation.
Signed-off-by: Alexander Scheel <alex.scheel@hashicorp.com>
* Add read lock around tidyStatus gauges
When reading from tidyStatus to compute gauges, the underlying values
aren't atomics, so we should gate these reads with a read lock around
the status access.
Signed-off-by: Alexander Scheel <alex.scheel@hashicorp.com>
Signed-off-by: Alexander Scheel <alex.scheel@hashicorp.com>
* enable registering backend muxed plugins in plugin catalog
* set the sysview on the pluginconfig to allow enabling secrets/auth plugins
* store backend instances in map
* store single implementations in the instances map
cleanup instance map and ensure we don't deadlock
* fix system backend unit tests
move GetMultiplexIDFromContext to pluginutil package
fix pluginutil test
fix dbplugin ut
* return error(s) if we can't get the plugin client
update comments
* refactor/move GetMultiplexIDFromContext test
* add changelog
* remove unnecessary field on pluginClient
* add unit tests to PluginCatalog for secrets/auth plugins
* fix comment
* return pluginClient from TestRunTestPlugin
* add multiplexed backend test
* honor metadatamode value in newbackend pluginconfig
* check that connection exists on cleanup
* add automtls to secrets/auth plugins
* don't remove apiclientmeta parsing
* use formatting directive for fmt.Errorf
* fix ut: remove tls provider func
* remove tlsproviderfunc from backend plugin tests
* use env var to prevent test plugin from running as a unit test
* WIP: remove lazy loading
* move non lazy loaded backend to new package
* use version wrapper for backend plugin factory
* remove backendVersionWrapper type
* implement getBackendPluginType for plugin catalog
* handle backend plugin v4 registration
* add plugin automtls env guard
* modify plugin factory to determine the backend to use
* remove old pluginsets from v5 and log pid in plugin catalog
* add reload mechanism via context
* readd v3 and v4 to pluginset
* call cleanup from reload if non-muxed
* move v5 backend code to new package
* use context reload for ErrPluginShutdown case
* add wrapper on v5 backend
* fix run config UTs
* fix unit tests
- use v4/v5 mapping for plugin versions
- fix test build err
- add reload method on fakePluginClient
- add multiplexed cases for integration tests
* remove comment and update AutoMTLS field in test
* remove comment
* remove errwrap and unused context
* only support metadatamode false for v5 backend plugins
* update plugin catalog errors
* use const for env variables
* rename locks and remove unused
* remove unneeded nil check
* improvements based on staticcheck recommendations
* use const for single implementation string
* use const for context key
* use info default log level
* move pid to pluginClient struct
* remove v3 and v4 from multiplexed plugin set
* return from reload when non-multiplexed
* update automtls env string
* combine getBackend and getBrokeredClient
* update comments for plugin reload, Backend return val and log
* revert Backend return type
* allow non-muxed plugins to serve v5
* move v5 code to existing sdk plugin package
* do not export sdk fields now that we have removed extra plugin pkg
* set TLSProvider in ServeMultiplex for backwards compat
* use bool to flag multiplexing support on grpc backend server
* revert userpass main.go
* refactor plugin sdk
- update comments
- make use of multiplexing boolean and single implementation ID const
* update comment and use multierr
* attempt v4 if dispense fails on getPluginTypeForUnknown
* update comments on sdk plugin backend
Adds support for using semantic version information when registering
and managing plugins. Adds a new `detailed` field to the response data for
listing plugins and a new `version` field to the response data for reading a
single plugin.
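For example, a sketch of reading a single plugin's catalog entry with the Go client and inspecting the new `version` field (the plugin type and name are placeholders):
```go
// Sketch of reading a single plugin's catalog entry with the Go client and
// inspecting the new version field; the plugin type and name are placeholders.
package main

import (
	"fmt"
	"log"

	vault "github.com/hashicorp/vault/api"
)

func main() {
	client, err := vault.NewClient(vault.DefaultConfig())
	if err != nil {
		log.Fatal(err)
	}
	secret, err := client.Logical().Read("sys/plugins/catalog/database/my-db-plugin")
	if err != nil {
		log.Fatal(err)
	}
	if secret != nil {
		fmt.Println("version:", secret.Data["version"])
	}
}
```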
* Allow automatic rebuilding of CRLs
When enabled, periodic rebuilding of CRLs will improve PKI mounts in two
ways:
1. Reduced load during periods of high (new) revocations, as the CRL
isn't rebuilt after each revocation but instead on a fixed schedule.
2. Ensuring the CRL is never stale as long as the cluster remains up,
by checking for next CRL expiry and regenerating CRLs before that
happens. This may increase cluster load when operators have large
CRLs that they'd prefer to let go stale, rather than regenerating
fresh copies.
In particular, we set a grace period before expiration of CRLs where,
when the periodic function triggers (about once a minute), we check
upcoming CRL expirations and decide whether the CRLs need to be rebuilt.
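The decision itself is a simple time comparison; an illustrative sketch (not the actual PKI code):
```go
// Illustrative sketch (not the actual PKI code) of the grace-period check the
// periodic function performs, roughly once a minute.
package crlutil

import "time"

// shouldRebuildCRL reports whether the current time has entered the grace
// period before the CRL's NextUpdate, meaning a rebuild should happen now.
func shouldRebuildCRL(nextUpdate time.Time, grace time.Duration, now time.Time) bool {
	return !now.Before(nextUpdate.Add(-grace))
}
```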
Signed-off-by: Alexander Scheel <alex.scheel@hashicorp.com>
* Add changelog entry
Signed-off-by: Alexander Scheel <alex.scheel@hashicorp.com>
* Add documentation on periodic rebuilding
Signed-off-by: Alexander Scheel <alex.scheel@hashicorp.com>
* Allow modification of rollback period for testing
When testing backends that use the periodic func, and specifically when
testing the behavior of that periodic func, waiting for the usual 1m
interval can lead to excessively long test execution. By switching to a
shorter period--strictly for testing--we can make these tests execute
faster.
Signed-off-by: Alexander Scheel <alex.scheel@hashicorp.com>
* Add tests for auto-rebuilding of CRLs
Signed-off-by: Alexander Scheel <alex.scheel@hashicorp.com>
* Remove non-updating getConfig variant
Signed-off-by: Alexander Scheel <alex.scheel@hashicorp.com>
* Avoid double reload of config
Signed-off-by: Alexander Scheel <alex.scheel@hashicorp.com>
Signed-off-by: Alexander Scheel <alex.scheel@hashicorp.com>
* add func to set level for specific logger
* add endpoints to modify log level
* initialize base logger with IndependentLevels
* test to ensure other loggers remain unchanged
* add DELETE loggers endpoints to revert back to config
* add API docs page
* add changelog entry
* remove extraneous line
* add log level field to Core struct
* add godoc for getLogLevel
* add some loggers to c.allLoggers
* fix dev-plugin-dir when backend is builtin
* use builtinRegistry.Contains
* revert aa76337
* use correct plugin type for logical backend after revert
* fix factory func default setting after revert
* add ut coverage for builtin plugin with plugin directory set
* add coverage for secrets plugin type
* use totp in tests to avoid test import cycle in ssh package
* use nomad in tests to avoid test import cycle
* remove secrets mount tests due to unavoidable test import cycle
* Various changes to try to ensure that fewer goroutines survive after a test completes:
* add Core.ShutdownWait that doesn't return until shutdown is done
* create the usedCodes cache on seal and nil it out on pre-seal so that the finalizer kills the janitor goroutine
* stop seal health checks on seal rather than wait for them to discover the active context is done
* make sure all lease-loading goroutines are done before returning from restore
* make uniquePoliciesGc discover closed quitCh immediately instead of only when the ticker fires (see the sketch after this list)
* make sure all loading goroutines are done before returning from loadEntities, loadCachedEntitiesOfLocalAliases
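A minimal sketch of the quitCh change referenced above (simplified, not the actual implementation):
```go
// Illustrative sketch (simplified): selecting on quitCh alongside the ticker
// lets the goroutine observe a closed quit channel immediately rather than
// waiting for the next tick.
package worker

import "time"

func uniquePoliciesGC(quitCh <-chan struct{}, interval time.Duration, gc func()) {
	ticker := time.NewTicker(interval)
	defer ticker.Stop()
	for {
		select {
		case <-quitCh: // a closed channel is always ready
			return
		case <-ticker.C:
			gc()
		}
	}
}
```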
* port SSCT OSS
* port header hmac key to ent and generate token proto without make command
* remove extra nil check in request handling
* add changelog
* add comment to router.go
* change test var to use length constants
* remove local index is 0 check and extra defer which can be removed after use of ExternalID
This function call was previously used to generate mappings from
potential subjects (or SANs) to certificates within the TLS client
object. However, newer Go versions have deprecated this method, instead
building the mapping automatically based on present certificates at
request time. Because the corresponding client configuration field
(NameToCertificate) is not used in Vault, it is safe to remove this call
and leave the field nil.
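A sketch of the resulting client-side TLS setup (helper name and paths are placeholders): setting Certificates is sufficient on modern Go, with no BuildNameToCertificate call.
```go
// Sketch of the resulting client TLS configuration (paths are placeholders):
// setting Certificates is sufficient; Go builds the name-to-certificate
// mapping at handshake time, so no BuildNameToCertificate call is needed and
// NameToCertificate stays nil.
package tlsutil

import (
	"crypto/tls"
	"fmt"
)

func clientTLSConfig(certFile, keyFile string) (*tls.Config, error) {
	cert, err := tls.LoadX509KeyPair(certFile, keyFile)
	if err != nil {
		return nil, fmt.Errorf("failed to load client keypair: %w", err)
	}
	return &tls.Config{
		Certificates: []tls.Certificate{cert},
		// Deliberately no cfg.BuildNameToCertificate() call here.
	}, nil
}
```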
See also: 67d894ee65
See also: https://pkg.go.dev/crypto/tls#Config.BuildNameToCertificate
Signed-off-by: Alexander Scheel <alex.scheel@hashicorp.com>