* Use the AWS SDK's UnmarshalMap method for dynamodb backend, not the deprecated ConvertFromMap method
* Use the AWS SDK's MarshalMap method for dynamodb backend, not the deprecated ConvertToMap method
* Use the AWS SDK's session.NewSession method for dynamodb backend, not the deprecated session.New method
* Fix variable name awserr that collides with imported package in dynamodb backend
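For reference, a minimal sketch of the non-deprecated SDK calls mentioned above (the `record` type and setup here are illustrative, not the backend's actual code):
```
package main

import (
	"fmt"

	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/dynamodb"
	"github.com/aws/aws-sdk-go/service/dynamodb/dynamodbattribute"
)

// record is a placeholder standing in for the backend's stored entry.
type record struct {
	Path  string `dynamodbav:"Path"`
	Value []byte `dynamodbav:"Value"`
}

func main() {
	// session.NewSession replaces the deprecated session.New and surfaces
	// configuration errors at creation time instead of deferring them.
	sess, err := session.NewSession()
	if err != nil {
		panic(err)
	}
	_ = dynamodb.New(sess)

	// dynamodbattribute.MarshalMap replaces the deprecated ConvertToMap.
	item, err := dynamodbattribute.MarshalMap(record{Path: "foo", Value: []byte("bar")})
	if err != nil {
		panic(err)
	}

	// dynamodbattribute.UnmarshalMap replaces the deprecated ConvertFromMap.
	var out record
	if err := dynamodbattribute.UnmarshalMap(item, &out); err != nil {
		panic(err)
	}
	fmt.Println(out.Path, string(out.Value))
}
```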
* logbridge with hclog and identical output
* Initial search & replace
This compiles, but there is a fair amount of TODO
and commented out code, especially around the
plugin logclient/logserver code.
* strip logbridge
* fix majority of tests
* update logxi aliases
* WIP fixing tests
* more test fixes
* Update test to hclog
* Fix format
* Rename hclog -> log
* WIP making hclog and logxi love each other
* update logger_test.go
* clean up merged comments
* Replace RawLogger interface with a Logger
* Add some logger names
* Replace Trace with Debug
* update builtin logical logging patterns
* Fix build errors
* More log updates
* update log approach in command and builtin
* More log updates
* update helper, http, and logical directories
* Update loggers
* Log updates
* Update logging
* Update logging
* Update logging
* Update logging
* update logging in physical
* prefixing and lowercase
* Update logging
* Move physical logging name to server command
* Fix some tests
* address Jim's feedback so far
* incorporate Brian's feedback so far
* strip comments
* move vault.go to logging package
* update Debug to Trace
* Update go-plugin deps
* Update logging based on review comments
* Updates from review
* Unvendor logxi
* Remove null_logger.go
* Switch S3 reads from io.ReadFull to io.Copy
If the Content-Length header wasn't being sent back, the current
behavior could panic. It's unclear when it will not be sent; it appears
to be CORS dependent. But this works around it by not trying to
preallocate a buffer of a specific size and instead just reading until EOF.
In addition, I noticed that Close wasn't being called.
https://docs.aws.amazon.com/sdk-for-go/api/service/s3/#GetObjectOutput
specifies that Body is an io.ReadCloser so I added a call to Close.
Fixes #4222
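A minimal sketch of the pattern described above (names are illustrative, not the backend's actual code):
```
package main

import (
	"bytes"
	"fmt"
	"io"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/s3"
)

// getObject reads an S3 object without relying on Content-Length:
// io.Copy reads until EOF, so a missing header cannot cause a panic,
// and Body is closed since it is an io.ReadCloser.
func getObject(client *s3.S3, bucket, key string) ([]byte, error) {
	resp, err := client.GetObject(&s3.GetObjectInput{
		Bucket: aws.String(bucket),
		Key:    aws.String(key),
	})
	if err != nil {
		return nil, err
	}
	defer resp.Body.Close()

	var buf bytes.Buffer
	if _, err := io.Copy(&buf, resp.Body); err != nil {
		return nil, err
	}
	return buf.Bytes(), nil
}

func main() {
	sess := session.Must(session.NewSession())
	data, err := getObject(s3.New(sess), "my-bucket", "my-key")
	if err != nil {
		fmt.Println("read failed:", err)
		return
	}
	fmt.Printf("read %d bytes\n", len(data))
}
```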
* Add some extra efficiency
Allow the MySQL storage backend to use a custom connection lifetime and max idle connection value if those parameters are specified in Vault's config file; otherwise, leave them unset so the driver defaults apply.
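A minimal sketch of the conditional setup described above, assuming hypothetical raw config values `connLifetimeStr` and `maxIdleStr` parsed from Vault's config file:
```
package mysqlsketch

import (
	"database/sql"
	"strconv"
	"time"

	_ "github.com/go-sql-driver/mysql"
)

// configureConnPool applies the optional settings only when they appear in
// the config; otherwise the database/sql defaults are left in place.
func configureConnPool(db *sql.DB, connLifetimeStr, maxIdleStr string) error {
	if connLifetimeStr != "" {
		seconds, err := strconv.Atoi(connLifetimeStr)
		if err != nil {
			return err
		}
		db.SetConnMaxLifetime(time.Duration(seconds) * time.Second)
	}
	if maxIdleStr != "" {
		n, err := strconv.Atoi(maxIdleStr)
		if err != nil {
			return err
		}
		db.SetMaxIdleConns(n)
	}
	return nil
}
```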
* Consul service address is blank
Setting an explicit service address eliminates the ability for Consul
to dynamically decide what it should be based on its translate_wan_addrs
setting.
translate_wan_addrs configures Consul to return its LAN address to nodes
in the same datacenter but its WAN address to nodes in foreign
datacenters.
* service_address parameter for Consul storage backend
This parameter allows users to override the use of what Vault knows to
be its HA redirect address.
This option is particularly compelling because if it is set to a blank
string, Consul will leverage the node configuration where the service is
registered, which includes the `translate_wan_addrs` option. That option
conditionally associates nodes' LAN or WAN addresses based on where
requests originate.
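A rough sketch of how the registration might honor this (assuming a hypothetical `registerService` helper; the real ConsulBackend wiring differs):
```
package consulsketch

import "github.com/hashicorp/consul/api"

// registerService registers Vault's service in Consul. serviceAddress is nil
// when "service_address" was not set in the config, in which case Vault's own
// HA redirect address is used. A non-nil value overrides it; an empty string
// leaves the address blank so Consul decides (translate_wan_addrs applies).
func registerService(client *api.Client, id, name, redirectAddr string, port int, serviceAddress *string) error {
	addr := redirectAddr
	if serviceAddress != nil {
		addr = *serviceAddress
	}
	return client.Agent().ServiceRegister(&api.AgentServiceRegistration{
		ID:      id,
		Name:    name,
		Port:    port,
		Address: addr,
	})
}
```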
* Add TestConsul_ServiceAddress
Ensures that the service_address configuration parameter is setting the
serviceAddress field of ConsulBackend instances properly.
If the "service_address" parameter is not set, the ConsulBackend
serviceAddress field must instantiate as nil to indicate that it can be
ignored.
* Add useragent package
This helper provides a consistent user-agent header for Vault, taking into account different versions.
* Add user-agent headers to spanner and gcs
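A sketch of what such a helper can look like (the actual package's signature and output format may differ):
```
package useragent

import (
	"fmt"
	"runtime"
)

// String returns a consistent user-agent value for outbound requests,
// e.g. "Vault/0.10.0 (+https://www.vaultproject.io/; go1.9.1)".
// In practice the version would come from Vault's version package.
func String(version string) string {
	return fmt.Sprintf("Vault/%s (+https://www.vaultproject.io/; %s)",
		version, runtime.Version())
}
```
Backends such as GCS and Spanner can then pass this value to their client libraries, e.g. via `option.WithUserAgent` from `google.golang.org/api/option`.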
This PR adds a new Storage Backend for Triton's Object Storage - Manta
```
make testacc TEST=./physical/manta
==> Checking that code complies with gofmt requirements...
==> Checking that build is using go version >= 1.9.1...
go generate
VAULT_ACC=1 go test -tags='vault' ./physical/manta -v -timeout 45m
=== RUN TestMantaBackend
--- PASS: TestMantaBackend (61.18s)
PASS
ok github.com/hashicorp/vault/physical/manta 61.210s
```
Manta behaves differently from S3: it has no concept of buckets; it is merely a filesystem-style object store.
Therefore, we have chosen the approach that writing a secret `foo` actually maps (on disk) to `foo/.vault_value`.
The reason is that if we write the secret `foo/bar` and then try to delete a key named `foo`, Manta
will complain that the folder is not empty because `foo/bar` exists. Therefore, `foo/bar` is written as `foo/bar/.vault_value`.
The value of a key is *always* written to a directory tree of that name and placed in a `.vault_value` file.
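A sketch of the key-to-path mapping described above (illustrative, not the backend's exact code):
```
package mantasketch

import "path"

// objectPath maps a Vault storage key to the Manta object that holds its
// value: every key becomes a directory and the value lives in a
// ".vault_value" file beneath it, so "foo" and "foo/bar" can coexist.
func objectPath(basePath, key string) string {
	return path.Join(basePath, key, ".vault_value")
}
```
For example, `objectPath("vault", "foo/bar")` yields `vault/foo/bar/.vault_value`.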
The original reason for the split was physical's dependencies, but those
haven't been onerous for a long time. Meanwhile it's a totally separate
implementation so we could be getting faulty results from tests. Get rid
of it and use the unified physical/inmem.
This change makes these errors transient instead of permanent:
[ERROR] core: failed to acquire lock: error=etcdserver: requested lease not found
After this change, there can still be one of these errors when a
standby vault that lost its lease tries to become leader, but on the
next lock acquisition attempt a new session will be created. With this
new session, the standby will be able to become the leader.
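A sketch of the acquisition loop this implies, using the etcd v3 `concurrency` package (the TTL and function shape are illustrative, not the backend's exact code):
```
package etcdsketch

import (
	"context"

	"github.com/coreos/etcd/clientv3"
	"github.com/coreos/etcd/clientv3/concurrency"
)

// acquireLock retries until the lock is held. A fresh session (and therefore a
// fresh lease) is created on every attempt, so a "requested lease not found"
// error from an expired lease is transient rather than permanent.
func acquireLock(ctx context.Context, client *clientv3.Client, prefix string) (*concurrency.Mutex, *concurrency.Session, error) {
	for {
		session, err := concurrency.NewSession(client, concurrency.WithTTL(15))
		if err != nil {
			return nil, nil, err
		}
		mutex := concurrency.NewMutex(session, prefix)
		if err := mutex.Lock(ctx); err != nil {
			// Drop the session and retry with a new lease on the next pass.
			session.Close()
			if ctx.Err() != nil {
				return nil, nil, ctx.Err()
			}
			continue
		}
		return mutex, session, nil
	}
}
```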
* Fix cassandra tests, explicitly set cluster port if provided
* Update cassandra.yml test-fixture
* Add port as part of the config option, fix tests
* Remove hostport splitting in cassandraConnectionProducer.createSession
* Include port in API docs
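A sketch of the port handling described above, assuming gocql and a hypothetical `newSession` helper (the backend's actual config plumbing differs):
```
package cassandrasketch

import "github.com/gocql/gocql"

// newSession builds a gocql session, applying an explicit port only when one
// was provided in the config; otherwise gocql's default (9042) is used.
func newSession(hosts []string, port int) (*gocql.Session, error) {
	cluster := gocql.NewCluster(hosts...)
	if port != 0 {
		cluster.Port = port
	}
	return cluster.CreateSession()
}
```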
* Add max_parallel parameter to MySQL backend.
This limits the number of concurrent connections, so that Vault does not die
suddenly from "Too many connections" errors.
This can happen when, for example, Vault starts up and tries to load all of the
existing leases in parallel. At the time of writing, the value
ExpirationRestoreWorkerCount in vault/helper/consts/const.go is set to
64, meaning that if there are enough leases in Vault's DB, it will
generate at least 64 concurrent connections to MySQL when loading the
data during start-up. On certain configurations, e.g. smaller AWS
RDS/Aurora instances, this will cause Vault to fail startup.
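A sketch of how such a limit maps onto `database/sql` (the parameter mirrors the `max_parallel` option; the backend's actual wiring may differ):
```
package mysqlsketch

import (
	"database/sql"

	_ "github.com/go-sql-driver/mysql"
)

// openWithLimit caps the number of concurrent MySQL connections so that bursts
// such as lease restoration at startup (dozens of parallel workers) cannot
// exhaust the server's connection limit.
func openWithLimit(dsn string, maxParallel int) (*sql.DB, error) {
	db, err := sql.Open("mysql", dsn)
	if err != nil {
		return nil, err
	}
	db.SetMaxOpenConns(maxParallel)
	return db, nil
}
```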
* Fix a typo in mysql storage readme