* porting identity to OSS
* changes that glue things together
* add testing bits
* wrapped entity id
* fix mount error
* some more changes to core
* fix storagepacker tests
* fix some more tests
* fix mount tests
* fix http mount tests
* audit changes for identity
* remove upgrade structs on the oss side
* added go-memdb to vendor
* Lazy load plugins to avoid setup-unwrap cycle
* Remove commented blocks
* Refactor NewTestCluster, use single core cluster on basic plugin tests
* Set c.pluginDirectory in TestAddTestPlugin for setupPluginCatalog to work properly
* Add special path to mock plugin
* Move ensureCoresSealed to vault/testing.go
* Use same method for EnsureCoresSealed and Cleanup
* Bump ensureCoresSealed timeout to 60s
* Correctly handle nil opts on NewTestCluster
* Add metadata flag to APIClientMeta, use meta-enabled plugin when mounting to bootstrap
* Check metadata flag directly on the plugin process
* Plumb isMetadataMode down to PluginRunner
* Add NOOP shims when running in metadata mode
* Remove unused flag from the APIMetadata object
* Remove setupSecretPlugins and setupCredentialPlugins functions
* Move when we setup rollback manager to after the plugins are initialized
* Fix tests
* Fix merge issue
* start rollback manager after the credential setup
* Add guards against running certain client and server functions while in metadata mode
* Call initialize once a plugin is loaded on the fly
* Add more tests, update basic secret/auth plugin tests to trigger lazy loading
* Skip mount if plugin removed from catalog
* Fixup
* Remove commented line on LookupPlugin
* Fail on mount operation if plugin is re-added to catalog and mount is on existing path
* Check type and special paths on startBackend
* Fix merge conflicts
* Refactor PluginRunner run methods to use runCommon, fix TestSystemBackend_Plugin_auth
* Add plugin reload capability on all mounts for a specific plugin type
* Comments cleanup
* Add per-mount plugin backend reload, add tests
* Fix typos
* Remove old comment
* Reuse existing storage view in reloadPluginCommon
* Correctly handle reloading auth plugin backends
* Update path to plugin/backend/reload
* Use multierrors on reloadMatchingPluginMounts, attempt to reload all mounts provided
* Use internal value as check to ensure plugin backend reload
* Remove connection state from request for plugins at the moment
* Minor cleanup
* Refactor tests
* Rename builtin/credential/aws-ec2 to aws
The aws-ec2 authentication backend is being expanded and will become the
generic aws backend. This is a small rename commit to keep the commit
history clean.
* Expand aws-ec2 backend to more generic aws
This adds the ability to authenticate arbitrary AWS IAM principals using
AWS's sts:GetCallerIdentity method. The AWS-EC2 auth backend is being
renamed to just AWS as part of the expansion.
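A minimal client-side sketch of the scheme, assuming the aws-sdk-go v1 STS client (the login field names follow the iam auth type's request parameters): the client signs a GetCallerIdentity request locally and ships the signed pieces to the login endpoint, which replays them against AWS STS to learn the caller's identity ARN.

```go
package main

import (
	"encoding/base64"
	"encoding/json"
	"fmt"
	"io/ioutil"

	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/sts"
)

func main() {
	// Build an sts:GetCallerIdentity request and sign it with whatever
	// credentials the default chain resolves (env vars, instance profile, ...).
	sess := session.Must(session.NewSession())
	svc := sts.New(sess)
	req, _ := svc.GetCallerIdentityRequest(&sts.GetCallerIdentityInput{})
	if err := req.Sign(); err != nil {
		panic(err)
	}

	headers, _ := json.Marshal(req.HTTPRequest.Header)
	body, _ := ioutil.ReadAll(req.HTTPRequest.Body)

	// These are the pieces the server forwards to AWS STS; the client's
	// secret keys never leave the client.
	login := map[string]string{
		"iam_http_request_method": req.HTTPRequest.Method,
		"iam_request_url":         base64.StdEncoding.EncodeToString([]byte(req.HTTPRequest.URL.String())),
		"iam_request_headers":     base64.StdEncoding.EncodeToString(headers),
		"iam_request_body":        base64.StdEncoding.EncodeToString(body),
	}
	fmt.Println(login) // POST this map to auth/aws/login
}
```

Because the server, not the client, submits the signed request to AWS, any IAM principal can authenticate without ever handing its credentials to the server.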
* Add missing aws auth handler to CLI
This was omitted from the previous commit
* aws auth backend general variable name cleanup
Also fixed a bug where allowed auth types weren't being checked upon
login, and added tests for it.
* Update docs for the aws auth backend
* Refactor aws bind validation
* Fix env var override in aws backend test
Intent is to override the AWS environment variables with the TEST_*
versions if they are set, but the reverse was happening.
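A tiny sketch of the intended direction (the variable list is illustrative, not the exact set from the test):

```go
package main

import (
	"fmt"
	"os"
)

// applyTestOverrides copies each TEST_-prefixed variable over its real AWS
// counterpart when the TEST_ version is set. The reported bug ran this in
// the opposite direction, clobbering the TEST_ values.
func applyTestOverrides() {
	for _, key := range []string{
		"AWS_ACCESS_KEY_ID",
		"AWS_SECRET_ACCESS_KEY",
		"AWS_SESSION_TOKEN",
	} {
		if v := os.Getenv("TEST_" + key); v != "" {
			os.Setenv(key, v)
		}
	}
}

func main() {
	applyTestOverrides()
	fmt.Println(os.Getenv("AWS_ACCESS_KEY_ID"))
}
```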
* Update docs on use of IAM authentication profile
AWS now allows you to change the instance profile of a running instance,
so the use case of "a long-lived instance that's not in an instance
profile" no longer means you have to use the EC2 auth method. You
can now just change the instance profile on the fly.
* Fix typo in aws auth cli help
* Respond to PR feedback
* More PR feedback
* Respond to additional PR feedback
* Address more feedback on aws auth PR
* Make aws auth_type immutable per role
* Address more aws auth PR feedback
* Address more iam auth PR feedback
* Rename aws-ec2.html.md to aws.html.md
Per PR feedback, to go along with new backend name.
* Add MountType to logical.Request
* Make default aws auth_type dependent upon MountType
When MountType is aws-ec2, default to ec2 auth_type for backwards
compatibility with legacy roles. Otherwise, default to iam.
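A sketch of that compatibility rule (the function name is mine, not the backend's; the real code reads the mount type off the incoming request):

```go
package main

import "fmt"

// defaultAuthType keeps the old default for mounts created under the
// legacy "aws-ec2" type so existing roles still work, and defaults
// everything else to the new iam auth type.
func defaultAuthType(mountType string) string {
	if mountType == "aws-ec2" {
		return "ec2" // legacy mount: preserve pre-rename behavior
	}
	return "iam"
}

func main() {
	fmt.Println(defaultAuthType("aws-ec2")) // ec2
	fmt.Println(defaultAuthType("aws"))     // iam
}
```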
* Pass MountPoint and MountType back up to the core
Previously the request router reset the MountPoint and MountType back to
the empty string before returning to the core. This change ensures they
get set back to the correct values.
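A minimal illustration of the routing fix, with stand-in types rather than Vault's own:

```go
package main

import "fmt"

// Stand-in request type; the real logical.Request carries these fields.
type request struct {
	MountPoint string
	MountType  string
}

// route hands the request to the backend mounted at mountPath, then
// restores the mount's path and type on the request. Previously both
// fields were reset to "" here, which is what this commit fixes.
func route(req *request, mountPath, mountType string) {
	// ... dispatch to the backend mounted at mountPath ...
	req.MountPoint = mountPath
	req.MountType = mountType
}

func main() {
	req := &request{}
	route(req, "auth/aws/", "aws")
	fmt.Println(req.MountPoint, req.MountType) // auth/aws/ aws
}
```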
* audit: Added token_num_uses to audit response
* Fixed jsonx tests
* Revert logical auth to NumUses instead of TokenNumUses
* s/TokenNumUses/NumUses
* Audit: Add num uses to audit requests as well
* Added RemainingUses to distinguish it from NumUses in audit requests
* Add /sys/config/audited-headers endpoint for configuring the headers that will be audited
* Remove some debug lines
* Add a persistent layer and refactor a bit
* update the API endpoints to be more RESTful
* Add comments and clean up a few functions
* Remove unneeded hash structure functionality
* Fix existing tests
* Add tests
* Add test for Applying the header config
* Add Benchmark for the ApplyConfig method
* Call ResetTimer on the benchmark
* Update the headers comment
* Add test for audit broker
* Use hyphens instead of camel case
* Add size parameter to the allocation of the result map
* Fix the tests for the audit broker
* PR feedback
* update the path and permissions on config/* paths
* Add docs file
* Fix TestSystemBackend_RootPaths test
This commit splits ACL policies into more fine-grained capabilities.
This both drastically simplifies the checking code and makes it possible
to support needed workflows that are not possible with the previous
method. It is backwards compatible; policies containing a "policy"
string are simply converted to a set of capabilities matching previous
behavior.
Fixes #724 (and others).
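A sketch of the conversion shim; the capability sets below reflect my reading of the old policy levels rather than a verbatim copy of the table:

```go
package main

import "fmt"

// capabilitiesFromPolicy expands an old-style "policy" string into the
// capability set that matches its previous behavior, so existing policies
// keep working unchanged.
func capabilitiesFromPolicy(level string) []string {
	switch level {
	case "deny":
		return []string{"deny"}
	case "read":
		return []string{"read", "list"}
	case "write":
		return []string{"create", "read", "update", "delete", "list"}
	case "sudo":
		return []string{"create", "read", "update", "delete", "list", "sudo"}
	default:
		return nil
	}
}

func main() {
	fmt.Println(capabilitiesFromPolicy("write"))
}
```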
In order to implement this efficiently, I have introduced the concept of
"singleton" backends -- currently, 'sys' and 'cubbyhole'. There isn't
much reason to allow sys to be mounted at multiple places, and there
isn't much reason you'd need multiple per-token storage areas. By
restricting it to just one, I can store that particular mount instead of
iterating through them in order to call the appropriate revoke function.
Additionally, because revocation on the backend needs to be triggered by
the token store, the token store's salt is kept in the router and
client tokens going to the cubbyhole backend are double-salted by the
router. This allows the token store to drive when revocation happens
using its salted tokens.
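An illustrative sketch of the double-salting idea; the hash and helper names here are stand-ins, not Vault's salt implementation:

```go
package main

import (
	"crypto/sha256"
	"fmt"
)

// saltID hashes a salt together with an ID so raw tokens never appear in
// storage keys. The hash choice here is illustrative only.
func saltID(salt, id string) string {
	sum := sha256.Sum256([]byte(salt + id))
	return fmt.Sprintf("%x", sum)
}

func main() {
	salt := "per-cluster-random-salt"
	token := "client-token-uuid"

	// The token store already tracks tokens in salted form.
	once := saltID(salt, token)

	// The router, holding the token store's salt, salts again on the way
	// into cubbyhole; revocation driven by the token store's salted IDs
	// can then locate the matching cubbyhole storage.
	twice := saltID(salt, once)
	fmt.Println(once, twice)
}
```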