* Add basic autocompletion (see the sketch after this list)
* Add autocomplete to some common commands
* Autocomplete the generate-root flags
* Add information about autocomplete to the docs
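A hedged sketch of how completion like this can be wired up, assuming the github.com/posener/complete library; the flag set shown under generate-root is illustrative, not Vault's full completion tree:

```go
// Hedged sketch, not Vault's actual wiring: registering shell
// completion with the github.com/posener/complete library. The
// flags shown for generate-root are illustrative.
package main

import "github.com/posener/complete"

func main() {
	generateRoot := complete.Command{
		Flags: complete.Flags{
			"-genotp":  complete.PredictNothing,    // boolean flag, no value
			"-decode":  complete.PredictAnything,   // free-form value
			"-pgp-key": complete.PredictFiles("*"), // complete file paths
		},
	}

	root := complete.Command{
		Sub: complete.Commands{
			"generate-root": generateRoot,
		},
		GlobalFlags: complete.Flags{
			"-address": complete.PredictAnything,
		},
	}

	// Run only performs completion when the shell's completion hook
	// invokes the binary; in a normal run it is a no-op.
	complete.New("vault", root).Run()
}
```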
@jefferai and I discussed this on Friday. With three fully-documented
SSH backends, the page is lengthy, ungreppable, and intimidating. This
commit separates the SSH backends into their own pages with as few
textual changes as possible.
When we "nest" like this, it's important to use a common suffix,
"Database Secret Backend" in this case, so that the SEO minions can
properly group search results for end users.
* Add backend plugin changes
* Fix totp backend plugin tests
* Fix logical/plugin InvalidateKey test
* Fix plugin catalog CRUD test, fix NoopBackend
* Clean up commented code block
* Fix system backend mount test
* Set plugin_name to omitempty, fix handleMountTable config parsing
* Clean up comments, keep shim connections alive until cleanup
* Include pluginClient, disallow LookupPlugin call from within a plugin
* Add wrapper around backendPluginClient for proper cleanup
* Add logger shim tests
* Add logger, storage, and system shim tests
* Use pointer receivers for system view shim
* Use plugin name if no path is provided on mount
* Enable plugins for auth backends
* Add backend type attribute, move builtin/plugin/package
* Fix merge conflict
* Fix missing plugin name in mount config
* Add integration tests on enabling auth backend plugins
* Remove dependency cycle on mock-plugin
* Add passthrough backend plugin, use logical.BackendType to determine lease generation
* Remove vault package dependency on passthrough package
* Add basic impl test for passthrough plugin
* Incorporate feedback; set b.backend after shim creation on backendPluginServer
* Fix totp plugin test
* Add plugin backends docs
* Fix tests
* Fix builtin/plugin tests
* Remove flatten from PluginRunner fields
* Move mock plugin to logical/plugin, remove totp and passthrough plugins
* Move pluginMap into newPluginClient
* Do not create storage RPC connection on HandleRequest and HandleExistenceCheck
* Change shim logger's Fatal to no-op
* Change BackendType to uint32, match UX backend types
* Change framework.Backend Setup signature
* Add Setup func to logical.Backend interface
* Move OptionallyEnableMlock call into plugin.Serve, update docs and comments (see the sketch after this list)
* Remove commented var in plugin package
* RegisterLicense on logical.Backend interface (#3017)
* Add RegisterLicense to logical.Backend interface
* Update RegisterLicense to use callback func on framework.Backend
* Refactor framework.Backend.RegisterLicense
* plugin: Prevent plugin.SystemViewClient.ResponseWrapData from getting JWTs
* plugin: Revert BackendType to remove TypePassthrough and related references
* Fix typo in plugin backends docs
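As referenced above, here is a hedged sketch of what a standalone plugin backend looks like after the Setup and plugin.Serve changes. It assumes the 0.8-era logical/plugin API; the exact ServeOpts fields, the Setup signature, and the logical.TypeLogical constant are best-effort reconstructions, not verbatim copies of the shipped interface:

```go
// Hedged sketch of a minimal external plugin backend, assuming the
// 0.8-era github.com/hashicorp/vault/logical/plugin API. Exact
// signatures here are assumptions.
package main

import (
	"log"

	"github.com/hashicorp/vault/logical"
	"github.com/hashicorp/vault/logical/framework"
	"github.com/hashicorp/vault/logical/plugin"
)

// Factory builds the backend; per the commits above, Setup is now
// part of the logical.Backend interface and is called with the
// config Vault hands across the plugin boundary.
func Factory(conf *logical.BackendConfig) (logical.Backend, error) {
	b := &framework.Backend{
		// BackendType tells core whether this mounts as a secret
		// (logical) or auth (credential) backend.
		BackendType: logical.TypeLogical,
	}
	if err := b.Setup(conf); err != nil {
		return nil, err
	}
	return b, nil
}

func main() {
	// plugin.Serve wires up the RPC shims (storage, logger, system
	// view) and, per the commits above, now calls
	// OptionallyEnableMlock itself.
	if err := plugin.Serve(&plugin.ServeOpts{
		BackendFactoryFunc: Factory,
	}); err != nil {
		log.Fatal(err)
	}
}
```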
* Added HANA dynamic secret backend
* Added acceptance tests for HANA secret backend
* Add HANA backend as a logical backend to server
* Added documentation to HANA secret backend
* Added vendored libraries
* Go fmt
* Migrate HANA credential creation to plugin
* Removed deprecated hana logical backend
* Migrated documentation for HANA database plugin
* Updated HANA DB plugin to use role name in credential generation
* Update HANA plugin tests
* If env vars are not configured, tests will skip rather than succeed
* Fixed some improperly named string variables
* Removed unused import
* Import SAP hdb driver (see the sketch after this list)
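A small, hedged sketch of the driver usage underlying the plugin, assuming the github.com/SAP/go-hdb driver imported above; the DSN is a placeholder:

```go
// Hedged sketch: opening a connection with the SAP go-hdb driver.
// The DSN below is a placeholder, not a real endpoint.
package main

import (
	"database/sql"
	"log"

	_ "github.com/SAP/go-hdb/driver" // registers the "hdb" driver
)

func main() {
	db, err := sql.Open("hdb", "hdb://user:password@localhost:39013")
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()

	// Verify the connection before handing it to credential logic.
	if err := db.Ping(); err != nil {
		log.Fatal(err)
	}
}
```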
Previously, the renew method would ALWAYS check to ensure the
authenticated IAM principal ARN matched the bound ARN. However, there
is a valid use case in which no bound_iam_principal_arn is specified and
all bindings are done through inferencing. When a role is configured
like this, clients won't be able to renew their token because of the
check.
This now checks to ensure that the bound_iam_principal_arn is not empty
before requiring that it match the originally authenticated client.
Fixes #2781
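A hedged sketch of the corrected check; the type and function names are illustrative, not the ones in the AWS auth backend:

```go
// Hypothetical sketch of the fix described above: only enforce the
// principal match when a bound ARN is actually configured, so roles
// that bind purely through inferencing can still renew.
package awsauth

import "fmt"

type iamRole struct {
	BoundIamPrincipalARN string // empty when bindings are inference-only
}

func checkRenewBinding(role *iamRole, authenticatedARN string) error {
	if role.BoundIamPrincipalARN != "" &&
		role.BoundIamPrincipalARN != authenticatedARN {
		return fmt.Errorf("role is no longer bound to ARN %q", authenticatedARN)
	}
	return nil
}
```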
* Add max_parallel parameter to MySQL backend.
This limits the number of concurrent connections, so that Vault does not
die suddenly from "Too many connections".
This can happen when e.g. Vault starts up and tries to load all of its
existing leases in parallel. At the time of writing, the value
ExpirationRestoreWorkerCount in vault/helper/consts/const.go is set to
64, meaning that if there are enough leases in Vault's DB, it will
generate AT LEAST 64 concurrent connections to MySQL when loading the
data during start-up. On certain configurations, e.g. smaller AWS
RDS/Aurora instances, this will cause Vault to fail startup.
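Mechanically, this kind of cap maps onto database/sql's connection-pool limit; a minimal sketch, assuming max_parallel has already been parsed from the backend configuration:

```go
// Sketch: bounding concurrent MySQL connections. SetMaxOpenConns is
// stock database/sql; maxParallel stands in for the parsed
// max_parallel config value.
package mysql

import "database/sql"

func applyMaxParallel(db *sql.DB, maxParallel int) {
	// Once maxParallel connections are in use, database/sql queues
	// further queries instead of opening new connections, so the
	// server never sees more than maxParallel from this process,
	// even when 64+ expiration restore workers run at once.
	db.SetMaxOpenConns(maxParallel)
}
```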
* Fix a typo in the MySQL storage README