If the value is being "piped", we don't print colors or the newline character at the end. If it's not, we still give users pretty output when selecting a raw field/value.
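A minimal sketch of the terminal-vs-pipe check this describes, using only the standard library (the color/newline handling below is illustrative, not the command's actual output code):

```go
package main

import (
	"fmt"
	"os"
)

// isPiped reports whether stdout is not a character device, i.e. it is
// being redirected or piped rather than attached to a terminal.
func isPiped() bool {
	fi, err := os.Stdout.Stat()
	if err != nil {
		return false
	}
	return fi.Mode()&os.ModeCharDevice == 0
}

func printField(value string) {
	if isPiped() {
		// Piped: emit the raw value only, no colors and no trailing
		// newline, so other tools can consume the output directly.
		fmt.Print(value)
		return
	}
	// Interactive: keep the pretty output with a trailing newline.
	fmt.Printf("\x1b[32m%s\x1b[0m\n", value)
}
```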
* disable raw endpoint by default
* adding docs
* config option raw -> raw_storage_endpoint
* docs updates
* adding listing on raw endpoint
* reworking tests for enabled raw endpoints
* root protecting base raw endpoint
* Add basic autocompletion (a sketch of the general approach follows this list)
* Add autocomplete to some common commands
* Autocomplete the generate-root flags
* Add information about autocomplete to the docs
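A minimal sketch of wiring subcommand and flag completion using the posener/complete library (an assumption about the underlying mechanism here; the command and flag names below are illustrative, not the actual set Vault registers):

```go
package main

import "github.com/posener/complete"

func main() {
	// Each subcommand can declare its own flag predictors.
	// Flag names here are illustrative only.
	generateRoot := complete.Command{
		Flags: complete.Flags{
			"-init":   complete.PredictNothing,
			"-cancel": complete.PredictNothing,
			"-status": complete.PredictNothing,
		},
	}

	cmd := complete.Command{
		Sub: complete.Commands{
			"read":          complete.Command{},
			"write":         complete.Command{},
			"generate-root": generateRoot,
		},
		GlobalFlags: complete.Flags{
			"-address": complete.PredictAnything,
		},
	}

	// Run handles the shell's completion request (if any) and returns
	// whether a completion was performed.
	complete.New("vault", cmd).Run()
}
```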
Previously we lowercased names on ingress but not on lookup or delete,
which could cause unexpected results. Now, just unilaterally lowercase
policy names on write and delete. On get, to avoid the performance hit
of always lowercasing when it isn't necessary (it's in the critical
path), we have a minor optimization: we check the LRU first before
normalizing. For tokens, because their policies are already normalized
when they are added during creation, this should always work; it might
just be slower for API calls.
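A minimal sketch of the check-the-LRU-before-normalizing idea (the `Policy`/`PolicyStore` types and `getPolicyFromStorage` helper below are simplified stand-ins, not the actual policy store internals):

```go
package vault

import (
	"strings"

	lru "github.com/hashicorp/golang-lru"
)

// Policy and PolicyStore are simplified stand-ins for the real types.
type Policy struct {
	Name string
}

type PolicyStore struct {
	lru *lru.TwoQueueCache
}

// GetPolicy looks up a policy by name. Since writes and deletes lowercase
// names unconditionally, the cache is keyed on the lowercased form; on the
// read path we try the incoming name against the LRU first and only pay for
// normalization (and a second lookup) on a miss.
func (ps *PolicyStore) GetPolicy(name string) (*Policy, error) {
	if raw, ok := ps.lru.Get(name); ok {
		return raw.(*Policy), nil
	}
	lower := strings.ToLower(name)
	if lower != name {
		if raw, ok := ps.lru.Get(lower); ok {
			return raw.(*Policy), nil
		}
	}
	// Cache miss: fall through to storage with the normalized name.
	return ps.getPolicyFromStorage(lower)
}

func (ps *PolicyStore) getPolicyFromStorage(name string) (*Policy, error) {
	// Storage read elided in this sketch.
	return nil, nil
}
```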
Fixes #3187
1. The current implementation of the SSH command is heavily tied to the
assumptions of OTP/dynamic key types. The SSH CA backend is
fundamentally a different approach to login and authentication. As a
result, there was some restructuring of existing methods to share more
code and state.
2. Each authentication method (ca, otp, dynamic) is now fully contained
in its own handle* function (see the sketch after this list).
3. -mode and -role are going to be required for SSH CA, and I don't
think the magic (and overhead) of guessing them makes for a good UX. It's
confusing as to which role is used and how Vault guesses it. We can cut
the API calls by 66% and make the CLI more declarative by making -mode
and -role required. This commit adds deprecation warnings for the
guessing behavior; these values are both required for CA-type
authentication.
4. The principal and extensions are currently fixed, and I personally
believe that's good enough for the first pass at this. Until we
understand what configuration options users will want, I think we should
ship with all the local extensions enabled. Users who don't want that
can generate the key themselves directly (current behavior) or submit
PRs to make the map of extensions customizable.
5. Host key checking for the CA backend is not currently implemented.
It's not strictly required at setup, so I need to think about whether it
belongs here.
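A minimal sketch of the handle-per-mode dispatch described in point 2 (the struct fields, handler names, and error message are illustrative of the shape, not the actual command code):

```go
package command

import "github.com/mitchellh/cli"

// SSHCommand is a simplified stand-in for the real command struct.
type SSHCommand struct {
	UI       cli.Ui
	flagMode string
	flagRole string
}

// Run dispatches to a fully self-contained handler per authentication mode;
// -mode (and -role) are required rather than guessed.
func (c *SSHCommand) Run(args []string) int {
	switch c.flagMode {
	case "ca":
		return c.handleTypeCA(args)
	case "otp":
		return c.handleTypeOTP(args)
	case "dynamic":
		return c.handleTypeDynamic(args)
	default:
		c.UI.Error("unknown -mode; must be one of: ca, otp, dynamic")
		return 1
	}
}

func (c *SSHCommand) handleTypeCA(args []string) int      { return 0 } // elided
func (c *SSHCommand) handleTypeOTP(args []string) int     { return 0 } // elided
func (c *SSHCommand) handleTypeDynamic(args []string) int { return 0 } // elided
```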
This is not ready for merge, but it's ready for early review.
From the sshpass manpage:
> The -p option should be considered the least secure of all of sshpass's options. All system users can see the password in the command line with a simple "ps" command. Sshpass makes a minimal attempt to hide the password, but such attempts are doomed to create race conditions without actually solving the problem. Users of sshpass are encouraged to use one of the other password passing techniques, which are all more secure.
This PR changes the sshpass behavior to execute a subprocess with the
SSHPASS envvar (which is generally regarded as more secure) rather than
using the -p option.
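A minimal sketch of spawning sshpass this way: with `-e`, sshpass reads the password from the SSHPASS environment variable instead of the command line. The user, host, and password values below are placeholders:

```go
package main

import (
	"os"
	"os/exec"
)

// runWithSSHPass execs sshpass in -e mode so the password travels in the
// child's environment rather than on the command line, where any user
// could read it from ps output.
func runWithSSHPass(password, user, host string) error {
	cmd := exec.Command("sshpass", "-e", "ssh", user+"@"+host)
	cmd.Env = append(os.Environ(), "SSHPASS="+password)
	cmd.Stdin = os.Stdin
	cmd.Stdout = os.Stdout
	cmd.Stderr = os.Stderr
	return cmd.Run()
}

func main() {
	if err := runWithSSHPass("example-otp", "ubuntu", "192.0.2.10"); err != nil {
		os.Exit(1)
	}
}
```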
* Store original request path in WrapInfo as CreationPath (see the sketch after this list)
* Add wrapping_token_creation_path to CLI output
* Add CreationPath to AuditResponseWrapInfo
* Fix tests
* Add and fix tests, update API docs with new sample responses
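A minimal sketch of what the change amounts to on the response-wrapping metadata (a simplified stand-in, not the exact Vault types or JSON field names):

```go
package logical

import "time"

// WrapInfo is a simplified stand-in for Vault's response-wrap metadata.
// CreationPath records the original request path that produced the wrapped
// response, which the CLI surfaces as wrapping_token_creation_path and the
// audit log includes in its wrap info.
type WrapInfo struct {
	Token        string    `json:"token"`
	TTL          int       `json:"ttl"`
	CreationTime time.Time `json:"creation_time"`
	CreationPath string    `json:"creation_path"`
}
```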
* Add backend plugin changes
* Fix totp backend plugin tests
* Fix logical/plugin InvalidateKey test
* Fix plugin catalog CRUD test, fix NoopBackend
* Clean up commented code block
* Fix system backend mount test
* Set plugin_name to omitempty, fix handleMountTable config parsing
* Clean up comments, keep shim connections alive until cleanup
* Include pluginClient, disallow LookupPlugin call from within a plugin
* Add wrapper around backendPluginClient for proper cleanup
* Add logger shim tests
* Add logger, storage, and system shim tests
* Use pointer receivers for system view shim
* Use plugin name if no path is provided on mount
* Enable plugins for auth backends
* Add backend type attribute, move builtin/plugin/package
* Fix merge conflict
* Fix missing plugin name in mount config
* Add integration tests on enabling auth backend plugins
* Remove dependency cycle on mock-plugin
* Add passthrough backend plugin, use logical.BackendType to determine lease generation
* Remove vault package dependency on passthrough package
* Add basic impl test for passthrough plugin
* Incorporate feedback; set b.backend after shims creation on backendPluginServer
* Fix totp plugin test
* Add plugin backends docs
* Fix tests
* Fix builtin/plugin tests
* Remove flatten from PluginRunner fields
* Move mock plugin to logical/plugin, remove totp and passthrough plugins
* Move pluginMap into newPluginClient
* Do not create storage RPC connection on HandleRequest and HandleExistenceCheck
* Change shim logger's Fatal to no-op
* Change BackendType to uint32, match UX backend types
* Change framework.Backend Setup signature
* Add Setup func to logical.Backend interface
* Move OptionallyEnableMlock call into plugin.Serve, update docs and comments (a sketch of a plugin entrypoint follows this list)
* Remove commented var in plugin package
* RegisterLicense on logical.Backend interface (#3017)
* Add RegisterLicense to logical.Backend interface
* Update RegisterLicense to use callback func on framework.Backend
* Refactor framework.Backend.RegisterLicense
* plugin: Prevent plugin.SystemViewClient.ResponseWrapData from getting JWTs
* plugin: Revert BackendType to remove TypePassthrough and related references
* Fix typo in plugin backends docs
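A rough sketch of what a standalone backend plugin's entrypoint looks like under this model. The exact ServeOpts fields, pluginutil helpers, and the mock package path are assumptions from memory, so treat this as the general shape rather than the precise API:

```go
package main

import (
	"log"
	"os"

	"github.com/hashicorp/vault/helper/pluginutil"
	"github.com/hashicorp/vault/logical/plugin"
	"github.com/hashicorp/vault/logical/plugin/mock"
)

func main() {
	// The plugin binary is launched by Vault itself; details about the
	// unwrap token and TLS come in via flags and the environment.
	apiClientMeta := &pluginutil.APIClientMeta{}
	flags := apiClientMeta.FlagSet()
	flags.Parse(os.Args[1:])

	tlsConfig := apiClientMeta.GetTLSConfig()
	tlsProviderFunc := pluginutil.VaultPluginTLSProvider(tlsConfig)

	// plugin.Serve handles mlock configuration and serves the backend's
	// logical.Backend implementation over the plugin RPC shims.
	err := plugin.Serve(&plugin.ServeOpts{
		BackendFactoryFunc: mock.Factory,
		TLSProviderFunc:    tlsProviderFunc,
	})
	if err != nil {
		log.Fatal(err)
	}
}
```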
* exclude /sys/leases/renew from registering with expiration manager
* adding sys/leases/renew to return full secret object, adding tests to catch renew errors
* Normalize "X arguments expected" messages
* Use "Vault" when referring to the product and "vault" when referring to an instance of the product
* Various minor tweaks to improve readability and/or provide clarity
Adds HUP support for audit log files, causing them to close and reopen
their file. This makes it much easier to deal with normal log rotation
methods.
As part of testing this I noticed that HUP and other items that come out
of command/server.go were going to stderr, which is where our normal log
lines go. This isn't especially problematic with our normal output, but as
we officially move to supporting other formats it can cause
interleaving issues, so I moved those to stdout instead.
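A minimal sketch of the HUP-to-reopen pattern this enables (the reopen function and log path below are placeholders, not Vault's actual audit backend code):

```go
package main

import (
	"fmt"
	"os"
	"os/signal"
	"syscall"
)

func main() {
	hupCh := make(chan os.Signal, 1)
	signal.Notify(hupCh, syscall.SIGHUP)

	for range hupCh {
		// Informational messages go to stdout so they don't interleave
		// with log output on stderr.
		fmt.Fprintln(os.Stdout, "==> Vault reload triggered")
		if err := reopenAuditLogFile("/var/log/vault_audit.log"); err != nil {
			fmt.Fprintln(os.Stdout, "==> Error reloading audit file:", err)
		}
	}
}

// reopenAuditLogFile closes and reopens the file so an external rotation
// (e.g. logrotate moving the old file aside) takes effect.
func reopenAuditLogFile(path string) error {
	f, err := os.OpenFile(path, os.O_APPEND|os.O_WRONLY|os.O_CREATE, 0600)
	if err != nil {
		return err
	}
	// Hand f off to the audit backend here; elided in this sketch.
	return f.Close()
}
```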
* Provide base64 keys in addition to hex-encoded ones.
Accept these at unseal/rekey time (see the decoding sketch below).
Also fix a bug where backup would not be honored when doing a rekey with
no operation currently ongoing.
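A minimal sketch of accepting either encoding for an unseal/rekey key share, using only the standard library (not the actual command code):

```go
package main

import (
	"encoding/base64"
	"encoding/hex"
	"fmt"
)

// decodeKeyShare accepts a key share as either hex (the historical format)
// or standard base64, trying hex first and falling back to base64.
func decodeKeyShare(s string) ([]byte, error) {
	if b, err := hex.DecodeString(s); err == nil {
		return b, nil
	}
	b, err := base64.StdEncoding.DecodeString(s)
	if err != nil {
		return nil, fmt.Errorf("key must be hex or base64 encoded: %v", err)
	}
	return b, nil
}

func main() {
	for _, in := range []string{"6261723d", "YmFyPQ=="} {
		b, err := decodeKeyShare(in)
		fmt.Println(string(b), err)
	}
}
```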
This makes it easier to understand the expected lifetime without a
lookup call that would use up the single use left on the token.
This also adds a couple of safety checks and, for JSON, uses int rather
than int64 for the wrapped token's TTL.
Vault will now register itself with Consul. The active node can be found using `active.vault.service.consul`. All standby vaults are available via `standby.vault.service.consul`. All unsealed vaults are considered healthy and available via `vault.service.consul`. Changes in status and registration are event driven and should happen at the speed of a write to Consul (~network RTT + ~1x fsync(2)).
Healthy/active:
```
curl -X GET 'http://127.0.0.1:8500/v1/health/service/vault?pretty' && echo;
[
  {
    "Node": {
      "Node": "vm1",
      "Address": "127.0.0.1",
      "TaggedAddresses": {
        "wan": "127.0.0.1"
      },
      "CreateIndex": 3,
      "ModifyIndex": 20
    },
    "Service": {
      "ID": "vault:127.0.0.1:8200",
      "Service": "vault",
      "Tags": [
        "active"
      ],
      "Address": "127.0.0.1",
      "Port": 8200,
      "EnableTagOverride": false,
      "CreateIndex": 17,
      "ModifyIndex": 20
    },
    "Checks": [
      {
        "Node": "vm1",
        "CheckID": "serfHealth",
        "Name": "Serf Health Status",
        "Status": "passing",
        "Notes": "",
        "Output": "Agent alive and reachable",
        "ServiceID": "",
        "ServiceName": "",
        "CreateIndex": 3,
        "ModifyIndex": 3
      },
      {
        "Node": "vm1",
        "CheckID": "vault-sealed-check",
        "Name": "Vault Sealed Status",
        "Status": "passing",
        "Notes": "Vault service is healthy when Vault is in an unsealed status and can become an active Vault server",
        "Output": "",
        "ServiceID": "vault:127.0.0.1:8200",
        "ServiceName": "vault",
        "CreateIndex": 19,
        "ModifyIndex": 19
      }
    ]
  }
]
```
Healthy/standby:
```
[snip]
    "Service": {
      "ID": "vault:127.0.0.2:8200",
      "Service": "vault",
      "Tags": [
        "standby"
      ],
      "Address": "127.0.0.2",
      "Port": 8200,
      "EnableTagOverride": false,
      "CreateIndex": 17,
      "ModifyIndex": 20
    },
    "Checks": [
      {
        "Node": "vm2",
        "CheckID": "serfHealth",
        "Name": "Serf Health Status",
        "Status": "passing",
        "Notes": "",
        "Output": "Agent alive and reachable",
        "ServiceID": "",
        "ServiceName": "",
        "CreateIndex": 3,
        "ModifyIndex": 3
      },
      {
        "Node": "vm2",
        "CheckID": "vault-sealed-check",
        "Name": "Vault Sealed Status",
        "Status": "passing",
        "Notes": "Vault service is healthy when Vault is in an unsealed status and can become an active Vault server",
        "Output": "",
        "ServiceID": "vault:127.0.0.2:8200",
        "ServiceName": "vault",
        "CreateIndex": 19,
        "ModifyIndex": 19
      }
    ]
  }
]
```
Sealed:
```
"Checks": [
{
"Node": "vm2",
"CheckID": "serfHealth",
"Name": "Serf Health Status",
"Status": "passing",
"Notes": "",
"Output": "Agent alive and reachable",
"ServiceID": "",
"ServiceName": "",
"CreateIndex": 3,
"ModifyIndex": 3
},
{
"Node": "vm2",
"CheckID": "vault-sealed-check",
"Name": "Vault Sealed Status",
"Status": "critical",
"Notes": "Vault service is healthy when Vault is in an unsealed status and can become an active Vault server",
"Output": "Vault Sealed",
"ServiceID": "vault:127.0.0.2:8200",
"ServiceName": "vault",
"CreateIndex": 19,
"ModifyIndex": 38
}
]
```