* Add useragent package
This helper provides a consistent user-agent header for Vault, taking the running Vault version into account (a sketch follows after this list).
* Add user-agent headers to spanner and gcs
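As a rough sketch of what such a helper can look like (the function name, signature, and exact format string here are illustrative assumptions, not necessarily the package's real API):
```
package useragent

import (
	"fmt"
	"runtime"
)

// String builds a user-agent value such as
// "Vault/0.9.0 (+https://www.vaultproject.io/; go1.9.1)".
// The format and the way the version is supplied are illustrative only.
func String(vaultVersion string) string {
	return fmt.Sprintf("Vault/%s (+https://www.vaultproject.io/; %s)",
		vaultVersion, runtime.Version())
}
```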
This PR adds a new storage backend for Triton's Object Storage, Manta.
```
make testacc TEST=./physical/manta
==> Checking that code complies with gofmt requirements...
==> Checking that build is using go version >= 1.9.1...
go generate
VAULT_ACC=1 go test -tags='vault' ./physical/manta -v -timeout 45m
=== RUN TestMantaBackend
--- PASS: TestMantaBackend (61.18s)
PASS
ok github.com/hashicorp/vault/physical/manta 61.210s
```
Manta behaves differently from S3: it has no concept of buckets and is merely a
filesystem-style object store. We have therefore chosen to map a secret `foo`
on disk to `foo/.vault_value`. The reason is that if we wrote the secret `foo/bar`
and then tried to delete a key named `foo`, Manta would complain that the folder
is not empty because `foo/bar` exists. Therefore `foo/bar` is written as
`foo/bar/.vault_value`. The value of a key is *always* written into a directory
tree named after the key, in a `.vault_value` file.
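A minimal sketch of that mapping (illustrative only, not the backend's actual code):
```
package main

import (
	"fmt"
	"path"
)

// objectPath maps a Vault key to the Manta object that holds its value,
// following the layout described above: the value always lives in a
// .vault_value file inside a directory tree named after the key.
func objectPath(key string) string {
	return path.Join(key, ".vault_value")
}

func main() {
	fmt.Println(objectPath("foo"))     // foo/.vault_value
	fmt.Println(objectPath("foo/bar")) // foo/bar/.vault_value
}
```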
The original reason for the split was physical's dependencies, but those
haven't been onerous for a long time. Meanwhile it's a totally separate
implementation so we could be getting faulty results from tests. Get rid
of it and use the unified physical/inmem.
This change makes these errors transient instead of permanent:
[ERROR] core: failed to acquire lock: error=etcdserver: requested lease not found
After this change, there can still be one of these errors when a
standby vault that lost its lease tries to become leader, but on the
next lock acquisition attempt a new session will be created. With this
new session, the standby will be able to become the leader.
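A condensed sketch of that retry behavior, assuming the etcd v3 `concurrency` package (the import paths depend on the etcd client version, and the surrounding code is illustrative, not the backend's actual lock implementation):
```
package etcdlock

import (
	"context"
	"log"
	"time"

	"go.etcd.io/etcd/clientv3"
	"go.etcd.io/etcd/clientv3/concurrency"
)

// acquireLock keeps retrying until it holds the lock. A fresh session (and
// therefore a fresh lease) is created on every attempt, so a
// "requested lease not found" error on one attempt does not permanently
// prevent this node from becoming the leader.
func acquireLock(cli *clientv3.Client, prefix string) *concurrency.Mutex {
	for {
		session, err := concurrency.NewSession(cli)
		if err != nil {
			log.Printf("failed to create session: %v", err)
			time.Sleep(time.Second)
			continue
		}
		mutex := concurrency.NewMutex(session, prefix)
		if err := mutex.Lock(context.Background()); err != nil {
			log.Printf("failed to acquire lock: %v", err)
			session.Close()
			time.Sleep(time.Second)
			continue
		}
		return mutex
	}
}
```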
* Fix cassandra tests, explicitly set cluster port if provided
* Update cassandra.yml test-fixture
* Add port as part of the config option, fix tests
* Remove hostport splitting in cassandraConnectionProducer.createSession
* Include port in API docs
* Add max_parallel parameter to MySQL backend.
This limits the number of concurrent connections so that Vault does not die
suddenly from "Too many connections".
This can happen when, e.g., Vault starts up and tries to load all the
existing leases in parallel. At the time of writing, the value
ExpirationRestoreWorkerCount in vault/helper/consts/const.go is set to
64, meaning that if there are enough leases in Vault's DB, it will
generate AT LEAST 64 concurrent connections to MySQL when loading the
data during start-up. On certain configurations, e.g. smaller AWS
RDS/Aurora instances, this will cause Vault to fail startup. (A sketch
of the limiting mechanism follows after this list.)
* Fix a typo in mysql storage readme
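A rough sketch of the idea behind `max_parallel`: a counting semaphore caps how many MySQL operations run concurrently. The real backend wires this into its storage methods; the table, column, and helper names below are illustrative assumptions only.
```
package mysqlexample

import "database/sql"

// permitPool is a counting semaphore sized by max_parallel.
type permitPool chan struct{}

func newPermitPool(maxParallel int) permitPool {
	return make(permitPool, maxParallel)
}

func (p permitPool) acquire() { p <- struct{}{} }
func (p permitPool) release() { <-p }

// get reads one key while holding a permit, so no more than max_parallel
// queries (and therefore connections) are in flight at once.
func get(db *sql.DB, pool permitPool, key string) ([]byte, error) {
	pool.acquire()
	defer pool.release()

	var value []byte
	err := db.QueryRow("SELECT vault_value FROM vault WHERE vault_key = ?", key).Scan(&value)
	if err == sql.ErrNoRows {
		return nil, nil
	}
	return value, err
}
```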
This change addresses an issue where deep paths would not be enumerated if parent paths did not contain a key.
Given the keys `shallow` and `deep` at the following paths...
```
secret/shallow
secret/path/deep
```
... a `LIST` request against `/v1/secret` would produce only one result, `shallow`. With this change, the same list request will now list `shallow` and `path/`.
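A simplified sketch of the listing semantics (an illustration only, not the actual backend code): given the full set of stored keys, a list on a prefix now reports both direct keys and the next-level "directory" entries implied by deeper keys.
```
package listexample

import (
	"sort"
	"strings"
)

// list returns the direct children of prefix: plain keys, plus "dir/" entries
// implied by deeper keys even when no key exists at the intermediate path.
func list(keys []string, prefix string) []string {
	seen := map[string]bool{}
	for _, key := range keys {
		if !strings.HasPrefix(key, prefix) {
			continue
		}
		rest := strings.TrimPrefix(key, prefix)
		if i := strings.Index(rest, "/"); i >= 0 {
			seen[rest[:i+1]] = true // a deeper key implies an intermediate "directory"
		} else {
			seen[rest] = true
		}
	}
	out := make([]string, 0, len(seen))
	for k := range seen {
		out = append(out, k)
	}
	sort.Strings(out)
	return out
}

// list([]string{"secret/shallow", "secret/path/deep"}, "secret/")
// returns []string{"path/", "shallow"}.
```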
* Add a benchmark for expiration.Restore
* Add benchmarks for consul Restore functions
* Add a parallel version of expiration.Restore (a sketch of the pattern follows after this list)
* remove debug code
* Up the MaxIdleConnsPerHost
* Add tests for etcd
* Return errors and ensure go routines are exited
* Refactor inmem benchmark
* Add s3 bench and refactor a bit
* Few tweaks
* Fix race with waitgroup.Add()
* Fix waitgroup race condition
* Move wait above the info log
* Add helper/consts package to store consts that are needed in cyclic packages
* Remove unused benchmarks
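A condensed sketch of the parallel-restore pattern referenced in the list above: a fixed pool of workers drains a channel of lease IDs, `wg.Add` is called before each goroutine starts (avoiding the Add/Wait race mentioned above), and errors are reported back so the goroutines exit cleanly. Names and structure are illustrative, not the actual expiration-manager code.
```
package restoreexample

import "sync"

func restoreInParallel(leaseIDs []string, workers int, restoreOne func(string) error) error {
	ids := make(chan string, len(leaseIDs))
	for _, id := range leaseIDs {
		ids <- id
	}
	close(ids)

	// Buffered so every worker can report an error without blocking.
	errs := make(chan error, workers)

	var wg sync.WaitGroup
	for i := 0; i < workers; i++ {
		wg.Add(1) // Add before the goroutine starts, never inside it.
		go func() {
			defer wg.Done()
			for id := range ids {
				if err := restoreOne(id); err != nil {
					errs <- err
					return
				}
			}
		}()
	}

	// Wait for every worker to finish before reading results (or logging).
	wg.Wait()
	close(errs)
	return <-errs // nil when no worker reported an error
}
```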
S3 results require paging to ensure that all results are returned. This
PR changes the S3 physical backend to use the new ListObjectsV2 method
and pages through all the results.
Fixes #2223.
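A hedged sketch of paging with ListObjectsV2 via the AWS SDK for Go; bucket/prefix handling and key post-processing are simplified relative to the real backend.
```
package s3example

import (
	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/service/s3"
	"github.com/aws/aws-sdk-go/service/s3/s3iface"
)

// listAllKeys pages through every ListObjectsV2 result instead of stopping
// at the first (up to 1000-key) page.
func listAllKeys(client s3iface.S3API, bucket, prefix string) ([]string, error) {
	var keys []string
	err := client.ListObjectsV2Pages(&s3.ListObjectsV2Input{
		Bucket: aws.String(bucket),
		Prefix: aws.String(prefix),
	}, func(page *s3.ListObjectsV2Output, lastPage bool) bool {
		for _, obj := range page.Contents {
			keys = append(keys, *obj.Key)
		}
		return true // keep requesting pages until all results are returned
	})
	return keys, err
}
```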
This patch fixes two bugs in the Zookeeper backend:
* The backend was determining whether a node is a leaf based on the number
of children the node has. This is incorrect, because deleting a nested node
can leave empty prefixes/dirs behind that have neither children nor data.
The fix is to test whether the node has any data set; if it does not, it is
not a leaf.
* Zookeeper, unlike Consul, does not delete nodes that have no children,
which leads to empty nodes being left behind. To fix this, we scan the
logical path of a secret being deleted for empty dirs/prefixes and remove
them up until the first non-empty one (sketched below).
The existing tests were not checking whether backends properly remove
nested secrets. We follow the behaviour of the Consul backend here, where
empty "directories/prefixes" are automatically removed by Consul itself.
Prepared statements prevent the use of connection multiplexing software
such as PGBouncer. Even when PGBouncer is configured for [session mode][1]
there's a possibility that a connection to PostgreSQL can be re-used by
different clients. This leads to errors when clients use session-based
features (like prepared statements).
This change removes prepared statements from the PostgreSQL physical
backend. This will allow Vault to successfully work in infrastructures
that employ the use of PGBouncer or other connection multiplexing
software.
[1]: https://pgbouncer.github.io/config.html#poolmode
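A simplified sketch of the change: rather than preparing statements at startup and holding `*sql.Stmt` handles for the life of a connection, each call now passes the SQL text and parameters directly. The table and column names below are illustrative, not the backend's actual schema.
```
package pgexample

import "database/sql"

// get reads a key without relying on a long-lived prepared statement, so a
// multiplexer such as PgBouncer is free to hand the underlying connection to
// a different client between calls.
func get(db *sql.DB, key string) ([]byte, error) {
	var value []byte
	err := db.QueryRow("SELECT value FROM vault_kv_store WHERE key = $1", key).Scan(&value)
	if err == sql.ErrNoRows {
		return nil, nil
	}
	return value, err
}
```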
If the local Consul agent is not available while attempting to step down from active or up to active, retry once a second. Allow for concurrent changes to the state with a single registration updater. Fix standby initialization.
Vault will now register itself with Consul. The active node can be found using `active.vault.service.consul`. All standby vaults are available via `standby.vault.service.consul`. All unsealed vaults are considered healthy and available via `vault.service.consul`. Change in status and registration is event driven and should happen at the speed of a write to Consul (~network RTT + ~1x fsync(2)).
Healthy/active:
```
curl -X GET 'http://127.0.0.1:8500/v1/health/service/vault?pretty' && echo;
[
  {
    "Node": {
      "Node": "vm1",
      "Address": "127.0.0.1",
      "TaggedAddresses": {
        "wan": "127.0.0.1"
      },
      "CreateIndex": 3,
      "ModifyIndex": 20
    },
    "Service": {
      "ID": "vault:127.0.0.1:8200",
      "Service": "vault",
      "Tags": [
        "active"
      ],
      "Address": "127.0.0.1",
      "Port": 8200,
      "EnableTagOverride": false,
      "CreateIndex": 17,
      "ModifyIndex": 20
    },
    "Checks": [
      {
        "Node": "vm1",
        "CheckID": "serfHealth",
        "Name": "Serf Health Status",
        "Status": "passing",
        "Notes": "",
        "Output": "Agent alive and reachable",
        "ServiceID": "",
        "ServiceName": "",
        "CreateIndex": 3,
        "ModifyIndex": 3
      },
      {
        "Node": "vm1",
        "CheckID": "vault-sealed-check",
        "Name": "Vault Sealed Status",
        "Status": "passing",
        "Notes": "Vault service is healthy when Vault is in an unsealed status and can become an active Vault server",
        "Output": "",
        "ServiceID": "vault:127.0.0.1:8200",
        "ServiceName": "vault",
        "CreateIndex": 19,
        "ModifyIndex": 19
      }
    ]
  }
]
```
Healthy/standby:
```
[snip]
    "Service": {
      "ID": "vault:127.0.0.2:8200",
      "Service": "vault",
      "Tags": [
        "standby"
      ],
      "Address": "127.0.0.2",
      "Port": 8200,
      "EnableTagOverride": false,
      "CreateIndex": 17,
      "ModifyIndex": 20
    },
    "Checks": [
      {
        "Node": "vm2",
        "CheckID": "serfHealth",
        "Name": "Serf Health Status",
        "Status": "passing",
        "Notes": "",
        "Output": "Agent alive and reachable",
        "ServiceID": "",
        "ServiceName": "",
        "CreateIndex": 3,
        "ModifyIndex": 3
      },
      {
        "Node": "vm2",
        "CheckID": "vault-sealed-check",
        "Name": "Vault Sealed Status",
        "Status": "passing",
        "Notes": "Vault service is healthy when Vault is in an unsealed status and can become an active Vault server",
        "Output": "",
        "ServiceID": "vault:127.0.0.2:8200",
        "ServiceName": "vault",
        "CreateIndex": 19,
        "ModifyIndex": 19
      }
    ]
  }
]
```
Sealed:
```
"Checks": [
{
"Node": "vm2",
"CheckID": "serfHealth",
"Name": "Serf Health Status",
"Status": "passing",
"Notes": "",
"Output": "Agent alive and reachable",
"ServiceID": "",
"ServiceName": "",
"CreateIndex": 3,
"ModifyIndex": 3
},
{
"Node": "vm2",
"CheckID": "vault-sealed-check",
"Name": "Vault Sealed Status",
"Status": "critical",
"Notes": "Vault service is healthy when Vault is in an unsealed status and can become an active Vault server",
"Output": "Vault Sealed",
"ServiceID": "vault:127.0.0.2:8200",
"ServiceName": "vault",
"CreateIndex": 19,
"ModifyIndex": 38
}
]
```
From the PostgreSQL docs
(http://www.postgresql.org/docs/9.4/static/datatype-character.html):
> Tip: There is no performance difference among these three types,
> apart from increased storage space when using the blank-padded type,
> and a few extra CPU cycles to check the length when storing into a
> length-constrained column. While character(n) has performance
> advantages in some other database systems, there is no such advantage
> in PostgreSQL; in fact character(n) is usually the slowest of the
> three because of its additional storage costs. In most situations
> text or character varying should be used instead.
Some etcd configurations (such as that provided by compose.io) place the
etcd cluster behind multiple load balancers or proxies. In this
configuration, calling Sync (or AutoSync) on the etcd client will
replace the load balancer addresses with the underlying etcd server
address.
This will cause the etcd client to bypass the load balancers, and may
cause the connection to fail completely if the etcd servers are
protected by a firewall.
This patch provides a "sync" option for the etcd backend, which defaults
to the current behavior but can be used to turn sync off.
This corresponds to etcdctl's --no-sync option.
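A rough sketch of what the option controls, assuming the etcd v2 client (the surrounding code is illustrative, not the backend's actual setup):
```
package etcdexample

import (
	"context"

	"github.com/coreos/etcd/client"
)

func newEtcdClient(endpoints []string, sync bool) (client.Client, error) {
	c, err := client.New(client.Config{Endpoints: endpoints})
	if err != nil {
		return nil, err
	}
	if sync {
		// Sync replaces the configured endpoints (e.g. a load balancer) with
		// the advertised addresses of the etcd members, which is undesirable
		// when the members sit behind a proxy or firewall.
		if err := c.Sync(context.Background()); err != nil {
			return nil, err
		}
	}
	return c, nil
}
```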
When Vault is killed without the chance to clean up the lock
entry in DynamoDB, no further Vault nodes can become leaders after
that.
To recover from this situation, this commit adds an environment
variable and a configuration flag that, when set to "1", cause Vault
to delete the lock entry from DynamoDB.
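As an illustration of what the recovery step amounts to, here is a hedged sketch of deleting a stale lock item at startup. The environment variable name, table schema, and lock key below are hypothetical placeholders, not the backend's real names.
```
package dynamodbexample

import (
	"os"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/service/dynamodb"
	"github.com/aws/aws-sdk-go/service/dynamodb/dynamodbiface"
)

// maybeRecoverLock deletes the HA lock record when the recovery flag is set,
// allowing another node to become leader after an unclean shutdown.
func maybeRecoverLock(client dynamodbiface.DynamoDBAPI, table string) error {
	if os.Getenv("RECOVERY_FLAG") != "1" { // hypothetical variable name
		return nil
	}
	_, err := client.DeleteItem(&dynamodb.DeleteItemInput{
		TableName: aws.String(table),
		Key: map[string]*dynamodb.AttributeValue{
			// Illustrative key attributes for the HA lock record.
			"Path": {S: aws.String("core/")},
			"Key":  {S: aws.String("lock")},
		},
	})
	return err
}
```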