* Move SudoPrivilege out of SystemView
We only use this in the token store, and it doesn't work for anything
other than the token store or the system mount, so we should stop
exposing something that doesn't work.
* Reconcile the extended system view with sdk/logical a bit and add an explanation of why SudoPrivilege isn't moved over
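A minimal sketch of the split this describes (the type and method names below are illustrative, not Vault's exact sdk/logical definitions): the sudo check lives on an extended interface that only core-internal callers assert for, so ordinary backends never see it.

```go
package example

import (
	"context"
	"time"
)

// SystemView stands in for the view every backend receives; the sudo
// check is deliberately absent from it.
type SystemView interface {
	DefaultLeaseTTL() time.Duration
	MaxLeaseTTL() time.Duration
}

// ExtendedSystemView is only implemented by the core-provided view, so
// only internal callers (token store, system mount) can reach SudoPrivilege.
type ExtendedSystemView interface {
	SystemView
	SudoPrivilege(ctx context.Context, path, token string) bool
}

// sudoAllowed shows how an internal caller opts in: assert the extended
// interface and fall back to "no" when the view doesn't provide it.
func sudoAllowed(ctx context.Context, sv SystemView, path, token string) bool {
	esv, ok := sv.(ExtendedSystemView)
	return ok && esv.SudoPrivilege(ctx, path, token)
}
```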
* Generalize the PhysicalFactory notion introduced for Raft so it can be used by other storage backends in tests. These are the OSS changes needed for my rework of the enterprise integration tests and cluster helpers.
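A rough sketch of the factory shape being generalized (the names below approximate Vault's test helpers rather than quoting them): the cluster test options accept a per-node factory, so any storage backend, not just Raft, can be plugged into the shared cluster helpers.

```go
package testhelpers

import (
	"testing"

	hclog "github.com/hashicorp/go-hclog"
	"github.com/hashicorp/vault/sdk/physical"
)

// PhysicalBackendBundle carries whatever a test core needs from its
// storage backend, plus a cleanup hook for teardown.
type PhysicalBackendBundle struct {
	Backend   physical.Backend
	HABackend physical.HABackend
	Cleanup   func()
}

// PhysicalFactory builds the storage for node coreIdx of a test cluster.
// Raft supplies one implementation; other backends can now supply their own.
type PhysicalFactory func(t *testing.T, coreIdx int, logger hclog.Logger) *PhysicalBackendBundle
```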
* secret/aws: Pass policy ARNs to AssumedRole and FederationToken roles
AWS now allows you to pass managed policy ARNs instead of, or in
addition to, an inline policy document for AssumeRole and
GetFederationToken (see
https://aws.amazon.com/about-aws/whats-new/2019/05/session-permissions/).
Vault already collects policy ARNs for the iam_user credential type; it
now also accepts policy ARNs for the assumed_role and federation_token
credential types and plumbs them through to the appropriate AWS calls
(a sketch of those STS calls follows this note).
This brings along a minor breaking change. Vault roles of the
federation_token credential type are now required to have either a
policy_document or a policy_arns specified. This was implicit
previously; a missing policy_document would result in a validation error
from the AWS SDK when retrieving credentials. However, it would still
allow creating a role that didn't have a policy_document specified and
then later specifying it, after which retrieving the AWS credentials
would work. Similar workflows in which the Vault role didn't have a
policy_document specified for some period of time, such as deleting the
policy_document and then later adding it back, would also have worked
previously but will now be broken.
The reason for this breaking change is that a credential_type of
federation_token without either a policy_document or policy_arns
specified returns credentials with permissions equivalent to those of
the credentials the Vault server itself is using. This is quite
dangerous (e.g., it could allow Vault clients to retrieve credentials
capable of modifying Vault's underlying storage) and so should be
discouraged. This scenario is still possible when passing in an
appropriate policy_document or policy_arns parameter, but clients should
be explicitly aware of what they are doing and opt in to it by passing
in the appropriate role parameters.
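For reference, a hedged sketch of the underlying STS calls (not Vault's actual code path; the ARNs and names are placeholders): both AssumeRole and GetFederationToken take a PolicyArns field alongside the optional inline Policy, and the session's effective permissions are the intersection of the principal's permissions and whatever is passed here.

```go
package main

import (
	"fmt"
	"log"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/sts"
)

func main() {
	client := sts.New(session.Must(session.NewSession()))

	// Managed policy ARNs can be passed alongside (or instead of) an
	// inline policy document.
	policyARNs := []*sts.PolicyDescriptorType{
		{Arn: aws.String("arn:aws:iam::aws:policy/ReadOnlyAccess")}, // placeholder ARN
	}

	assumed, err := client.AssumeRole(&sts.AssumeRoleInput{
		RoleArn:         aws.String("arn:aws:iam::123456789012:role/example"), // placeholder
		RoleSessionName: aws.String("vault-example"),
		PolicyArns:      policyARNs,
		// Policy: aws.String("{...inline policy document...}"), // optional, may be combined
	})
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(*assumed.Credentials.AccessKeyId)

	// GetFederationToken accepts the same PolicyArns field. Calling it with
	// neither Policy nor PolicyArns is what Vault now refuses to do, since the
	// resulting credentials would mirror the Vault server's own permissions.
	fed, err := client.GetFederationToken(&sts.GetFederationTokenInput{
		Name:       aws.String("vault-example"),
		PolicyArns: policyARNs,
	})
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(*fed.Credentials.AccessKeyId)
}
```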
* Error out on dangerous federation token retrieval
The AWS secrets role code now disallows creating this dangerous role
configuration; however, pre-existing roles could still trigger the
now-dangerous code path, so a check for the configuration has also been
added at credential retrieval time.
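A minimal sketch of that retrieval-time guard (the struct and field names are illustrative, not Vault's role schema):

```go
package awsguard

import "errors"

// role stands in for a stored AWS secret role; only the fields relevant
// to the check are shown.
type role struct {
	CredentialType string
	PolicyDocument string
	PolicyArns     []string
}

// checkFederationToken refuses to hand out federation tokens for legacy
// roles that predate the creation-time validation and carry no policy at all.
func checkFederationToken(r *role) error {
	if r.CredentialType == "federation_token" &&
		r.PolicyDocument == "" && len(r.PolicyArns) == 0 {
		return errors.New("role with credential_type federation_token must have policy_document or policy_arns set")
	}
	return nil
}
```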
* Run makefmt
* Fix tests
* Fix comments/docs
* be more specific about node version, and specify a yarn version
* update ember, ember-cli, ember-data, ember-data-model-fragments
* use router handlers to access transition information
* fix shadowing of component helper
* update ivy-codemirror, ember-cli-inject-live-reload
* remove custom router service
* don't use transition.queryParams
* update ember-cli-deprecation-workflow
* refactor kv v1 to use 'path' instead of 'id' on creation
* fix auth-jwt-test and toolbar-link-test
* update ember composable helpers
* remove Ember.copy from test file
* no more deprecations in the workflow
* fix more secret tests
* fix remaining failed tests
* move select component to core because it's used by ttl-picker
* generate new model class for each test instead of reusing an existing one
* fix selectors on kmip tests
* refactor how control groups construct urls from the new transition objects
* add router service override back in, and have it be evented so that we can trigger router events on it
* move stories and markdown files to core if the component lives in core
* update ember-cli, ember-cli-babel, ember-auto-import
* update base64js, date-fns, deepmerge, codemirror, broccoli-asset-rev
* update linting rules
* fix test selectors
* update ember-api-actions, ember-concurrency, ember-load-initializers, escape-string-regexp, normalize.css, prettier-eslint-cli, jsdoc-to-markdown
* remove test-results dir
* update base64js, ember-cli-clipboard, ember-cli-sass, ember-cli-string-helpers, ember-cli-template-lint, ember-cli-uglify, ember-link-action
* fix linting
* run yarn install without restoring from cache
* refactor how tests are run and handle the vault server subprocess
* update makefile for new test task names
* update circle config to use the new yarn task
* fix writing the seal keys when starting the dev server
* remove optional deps from the lockfile
* don't ignore-optional on yarn install
* remove errant console.log
* update ember-basic-dropdown-hover, jsonlint, yargs-parser
* update ember-cli-flash
* add back optionalDeps
* update @babel/core@7.5.5, ember-basic-dropdown@1.1.3, eslint-plugin-ember@6.8.2
* update storybook to the latest release
* add a babel config with targets so that the ember babel plugin works properly
* update ember-resolver, move ember-cli-storybook to devDependencies
* revert normalize.css upgrade
* silence fetchadapter warning for now
* exclude 3rd party array helper now that ember includes one
* fix switch and entity lookup styling
* only add -root suffix if it's not in versions mode
* make sure drop always has an array on the aws role form
* fix labels like we did with the backport
* update eslintignore
* update the yarn version in the docker build file
* update eslint ignore
* Store less data in Cassandra prefix buckets
The Cassandra physical backend relies on storing data for sys/foo/bar
under sys, sys/foo, and sys/foo/bar. This is necessary so that we can
list the sys bucket, get a list of all child keys, and then trim that
down to find child 'folders', e.g. foo/. Right now, however, we store
the full value of every storage entry in all three buckets. This is
unnecessary, since the value is only ever read out of the leaf bucket,
i.e. sys/foo/bar; the intermediary buckets exist purely for listing
keys.
We have seen issues around compaction where certain buckets,
particularly intermediary buckets used exclusively for listing, get so
clogged up with data that they can no longer be listed. Buckets like
sys/expire/id are huge, combining lease expiry data for all auth
methods, and need to be listed for Vault to successfully become leader.
This PR cuts down on the amount of data stored in intermediary buckets.
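A toy sketch of the layout being changed (not the backend's actual schema or code): every prefix of a key gets a row so that LIST works, but only the leaf row needs to carry the value.

```go
package main

import (
	"fmt"
	"strings"
)

// buckets returns every prefix bucket for a key:
// "sys/foo/bar" -> ["sys", "sys/foo", "sys/foo/bar"].
func buckets(key string) []string {
	parts := strings.Split(strings.Trim(key, "/"), "/")
	out := make([]string, 0, len(parts))
	for i := range parts {
		out = append(out, strings.Join(parts[:i+1], "/"))
	}
	return out
}

type row struct {
	bucket string
	key    string
	value  []byte
}

// put writes the full value only into the leaf bucket; intermediary
// buckets get an empty value, since they exist purely so their keys can
// be listed.
func put(key string, value []byte) []row {
	bs := buckets(key)
	rows := make([]row, 0, len(bs))
	for i, b := range bs {
		var v []byte
		if i == len(bs)-1 {
			v = value // leaf bucket carries the data
		}
		rows = append(rows, row{bucket: b, key: key, value: v})
	}
	return rows
}

func main() {
	for _, r := range put("sys/foo/bar", []byte("secret")) {
		fmt.Printf("bucket=%q key=%q value=%q\n", r.bucket, r.key, r.value)
	}
}
```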
* Avoid goroutine leak by buffering results channel up to the bucket count
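The pattern is roughly this (a generic sketch, not the Cassandra backend's code): size the results channel to the number of producer goroutines, so every send completes even if the consumer bails out early on the first error.

```go
package main

import "fmt"

func gatherBuckets(buckets []string, fetch func(string) (string, error)) error {
	type result struct {
		val string
		err error
	}
	// One slot per bucket: no send ever blocks, so no goroutine can leak.
	results := make(chan result, len(buckets))

	for _, b := range buckets {
		go func(bucket string) {
			v, err := fetch(bucket)
			results <- result{val: v, err: err}
		}(b)
	}

	for range buckets {
		r := <-results
		if r.err != nil {
			// Returning early is safe: the remaining sends land in the buffer.
			return r.err
		}
		fmt.Println(r.val)
	}
	return nil
}

func main() {
	err := gatherBuckets([]string{"sys", "sys/foo"}, func(b string) (string, error) {
		return "listed " + b, nil
	})
	fmt.Println("err:", err)
}
```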
* command/server: fix TestLoadConfigFile_json2 test, fix hcl tags
Fixes the test to actually call the equality check and adds missing values to the expected object. Also fixes the hcl tags in the Telemetry structs.
* fix PrometheusRetentionTime tag
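Why the tags matter, in a hedged illustration (the field names and config keys below are assumptions, not the exact Telemetry struct): the hcl decoder maps config keys to struct fields via the `hcl:"..."` tag, so a missing or misspelled tag silently leaves the field at its zero value, which is exactly the kind of bug a config-loading test catches.

```go
package main

import (
	"fmt"

	"github.com/hashicorp/hcl"
)

// Telemetry is a stand-in config struct; the tags tie HCL keys to fields.
type Telemetry struct {
	StatsdAddr              string `hcl:"statsd_address"`
	PrometheusRetentionTime string `hcl:"prometheus_retention_time"`
}

func main() {
	config := `
statsd_address = "127.0.0.1:8125"
prometheus_retention_time = "30s"
`
	var tel Telemetry
	if err := hcl.Decode(&tel, config); err != nil {
		panic(err)
	}
	// With a wrong tag, the corresponding field would stay empty here.
	fmt.Printf("%+v\n", tel)
}
```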