This PR adds a new Storage Backend for Triton's Object Storage - Manta
```
make testacc TEST=./physical/manta
==> Checking that code complies with gofmt requirements...
==> Checking that build is using go version >= 1.9.1...
go generate
VAULT_ACC=1 go test -tags='vault' ./physical/manta -v -timeout 45m
=== RUN TestMantaBackend
--- PASS: TestMantaBackend (61.18s)
PASS
ok github.com/hashicorp/vault/physical/manta 61.210s
```
Manta behaves differently from S3: it has no concept of buckets and is essentially a filesystem-style object store.
We have therefore chosen to map a secret named `foo` to the object `foo/.vault_value` (on disk).
The reason is that if we wrote the secret `foo/bar` directly and then tried to delete a key named `foo`, Manta
would complain that the folder is not empty because `foo/bar` exists. So `foo/bar` is written as `foo/bar/.vault_value`.
The value of a key is *always* stored in a `.vault_value` file inside a directory tree named after the key.
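As an illustration of the mapping described above, the sketch below simply joins the key with a fixed `.vault_value` leaf; the function name is hypothetical and this is not the backend's actual code:

```go
package main

import (
	"fmt"
	"path"
)

// objectPath maps a Vault key to the Manta object that stores its value.
// Every key becomes a directory and the value lives in ".vault_value"
// inside it, so deleting "foo" never collides with a child key like "foo/bar".
func objectPath(key string) string {
	return path.Join(key, ".vault_value")
}

func main() {
	fmt.Println(objectPath("foo"))     // foo/.vault_value
	fmt.Println(objectPath("foo/bar")) // foo/bar/.vault_value
}
```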
The example in the documentation correctly passes a quoted boolean (i.e.
true or false as a string) instead of a "real" HCL boolean. This commit
corrects the parameter list to document that fact.
While it would be more desirable to change the implementation to accept
an unquoted boolean, the use of `hcl.DecodeObject` for parameters that
are not common to all storage backends would make that a rather more
involved change than this necessarily warrants.
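For context, backend-specific parameters effectively reach the backend as plain strings, which is why the quoted form is what works today. The sketch below is illustrative only; the conf map and option name are assumptions, not Vault's exact code:

```go
package main

import (
	"fmt"
	"strconv"
)

func main() {
	// Hypothetical conf map: storage parameters arrive as strings, so the
	// boolean has to be quoted in the HCL config and parsed explicitly.
	conf := map[string]string{"ha_enabled": "true"}

	haEnabled, err := strconv.ParseBool(conf["ha_enabled"])
	if err != nil {
		fmt.Printf("invalid ha_enabled value %q: %v\n", conf["ha_enabled"], err)
		return
	}
	fmt.Println("HA enabled:", haEnabled)
}
```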
* Mention api_addr on VaultPluginTLSProvider logs, update docs
* Clarify message and mention automatic api_address detection
* Change error message to use api_addr
* Change error messages to use api_addr
* disable raw endpoint by default
* adding docs
* config option raw -> raw_storage_endpoint
* docs updates
* adding listing on raw endpoint
* reworking tests for enabled raw endpoints
* root protecting base raw endpoint
* Add max_parallel parameter to MySQL backend.
This limits the number of concurrent connections, so that Vault does not die
suddenly from "Too many connections".
This can happen when, e.g., Vault starts up and tries to load all the
existing leases in parallel. At the time of writing, the value
ExpirationRestoreWorkerCount in vault/helper/consts/const.go is set to
64, meaning that if there are enough leases in Vault's DB, it will
generate AT LEAST 64 concurrent connections to MySQL when loading the
data during start-up. On certain configurations, e.g. smaller AWS
RDS/Aurora instances, this will cause Vault to fail to start.
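As a rough sketch of what the limit amounts to (the config handling, names, and DSN below are illustrative, not the backend's exact code), `max_parallel` caps the number of open connections on the underlying `database/sql` pool:

```go
package main

import (
	"database/sql"
	"log"
	"strconv"

	_ "github.com/go-sql-driver/mysql" // MySQL driver
)

// openLimited opens a MySQL handle and caps concurrent connections so a
// flood of lease-restore workers at startup cannot exhaust the server's
// connection limit. Names and defaults here are assumptions for the sketch.
func openLimited(dsn string, conf map[string]string) (*sql.DB, error) {
	maxParallel := 128 // assumed default
	if v, ok := conf["max_parallel"]; ok {
		n, err := strconv.Atoi(v)
		if err != nil {
			return nil, err
		}
		maxParallel = n
	}

	db, err := sql.Open("mysql", dsn)
	if err != nil {
		return nil, err
	}
	// Limit concurrent connections to the configured value.
	db.SetMaxOpenConns(maxParallel)
	return db, nil
}

func main() {
	conf := map[string]string{"max_parallel": "32"}
	db, err := openLimited("user:pass@tcp(127.0.0.1:3306)/vault", conf)
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()
}
```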
* Fix a typo in mysql storage readme
SIGHUP support is denoted in the sections/options that support actions on SIGHUP, so with the new docs layout it's confusing to have the old statement in there. Remove in favor of the inline comments.
Fixes #2572
With `ha_enabled = true`, Vault crashes with the following error:
```
error parsing 'storage': storage.dynamodb: At 17:16: root.ha_enabled: unknown type for string *ast.LiteralType
```
This seems related to https://github.com/hashicorp/vault/issues/1559