* Do some best-effort cleanup in the file backend
If a Put results in an encoding error and, after the file is closed, we
detect that it is zero bytes, the cause may be an out-of-space error on
the disk: file metadata often lives in reserved filesystem space, so
creating the file can succeed even when writing its contents cannot.
This tries to detect that scenario and perform best-effort cleanup. We
only do this on zero-length files to ensure that if an encode fails to
write but the system hasn't already performed truncation, we leave the
existing data alone.
Vault should never write a zero-byte file (as opposed to a zero-byte
value in the encoded JSON), so if this case is hit it is always an
error.
* Also run the same zero-byte check on Get (a sketch of both paths
follows below)
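A minimal sketch of both paths in Go, under simplifying assumptions:
`put` and `get` here are hypothetical stand-ins for the real backend
methods, not the actual file backend code.

```go
package file

import (
	"encoding/json"
	"fmt"
	"os"
)

// put encodes an entry to disk. If encoding (or closing) fails and the
// file is left at zero bytes, the file is removed as best-effort cleanup.
func put(path string, entry interface{}) error {
	f, err := os.OpenFile(path, os.O_CREATE|os.O_TRUNC|os.O_WRONLY, 0o600)
	if err != nil {
		return err
	}

	encErr := json.NewEncoder(f).Encode(entry)
	closeErr := f.Close()

	if encErr != nil || closeErr != nil {
		// An out-of-space condition can let file creation succeed (the
		// metadata often lives in reserved filesystem space) while writing
		// the payload fails, leaving a zero-byte file behind. Only
		// zero-length files are removed: if any bytes made it to disk, we
		// leave them alone rather than guess.
		if fi, statErr := os.Stat(path); statErr == nil && fi.Size() == 0 {
			os.Remove(path) // best-effort: the removal error is ignored
		}
	}
	if encErr != nil {
		return fmt.Errorf("failed to encode entry: %w", encErr)
	}
	return closeErr
}

// get runs the same check: a zero-byte file is never valid output, so one
// found on read is treated as debris from a failed write and cleaned up.
func get(path string) ([]byte, error) {
	fi, err := os.Stat(path)
	if os.IsNotExist(err) {
		return nil, nil
	}
	if err != nil {
		return nil, err
	}
	if fi.Size() == 0 {
		os.Remove(path) // best-effort cleanup
		return nil, fmt.Errorf("zero-length file %q: likely a failed write", path)
	}
	return os.ReadFile(path)
}
```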
* Add entity information request to system view (see the sketch after
this list)
* Fix a few comments
* Share types between plugin and logical
* Fix the output directory for proto
* Remove an extra replacement
* Add mount type lookup
* Empty entities return nil instead of an error
* Add some comments
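A rough sketch of the shape this takes, assuming illustrative field sets
and a narrowed interface rather than the real definitions:

```go
package logical

// Alias and Entity are shared between the logical package and plugins so
// that a backend can request identity information through the system
// view. The mount type is filled in via a mount-accessor lookup.
type Alias struct {
	MountType     string
	MountAccessor string
	Name          string
}

type Entity struct {
	ID      string
	Name    string
	Aliases []*Alias
}

// The system view gains an entity lookup; a token with no entity attached
// returns (nil, nil) rather than an error.
type SystemView interface {
	EntityInfo(entityID string) (*Entity, error)
}
```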
* Add router service polyfill
* Add refresh command
* Move async code into an ember-concurrency task and implement refresh
that way
* Use ember-concurrency derived state to show a loading spinner while
the task is running
* Scroll after appending to the log, too
Don't set a default value for the UserKnownHostsFile flag.
Only append `-o UserKnownHostsFile` to the ssh command if it has been
specified by the user, or if vault ssh has set it based on another flag
(such as flagHostKeyMountPoint).
Fixes https://github.com/hashicorp/vault/issues/4672
This implements the same fix that was added for the CA mode of vault
ssh in https://github.com/hashicorp/vault/pull/3922.
Using the IP address caused `Host` entries in ssh_config to stop
matching, meaning you would need to hardcode all of your IP addresses
in your ssh config instead of using DNS to connect to hosts.
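A sketch of the resulting argument handling, with illustrative names
rather than the actual vault ssh source:

```go
package command

import (
	"fmt"
	"os/exec"
)

// buildSSHCmd assembles the ssh invocation. -o UserKnownHostsFile is only
// appended when a value exists, either because the user passed the flag
// or because vault ssh derived one from another flag such as
// flagHostKeyMountPoint; otherwise ssh keeps its own default.
func buildSSHCmd(hostname, userKnownHostsFile string, rest []string) *exec.Cmd {
	var args []string
	if userKnownHostsFile != "" {
		args = append(args, "-o", fmt.Sprintf("UserKnownHostsFile=%s", userKnownHostsFile))
	}
	// Connect by hostname rather than a resolved IP so Host entries in
	// the user's ssh_config continue to match.
	args = append(args, hostname)
	args = append(args, rest...)
	return exec.Command("ssh", args...)
}
```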