* Allow exposing access to the underlying container
This exposes the Container response from the Docker API, allowing
consumers of the test helper to interact with the newly started
container instance. This will be useful for two reasons:
1. Allowing radiusd container to start its own daemon after modifying
its configuration.
2. For loading certificates into a future similar integration test
using the PKI secrets engine.
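A minimal sketch of what the exposed handle might look like, assuming a Docker-SDK-based helper; the struct, field, and method names here are illustrative, not the actual Vault test helper API:

```go
package radiusdhelper

import (
	"context"

	"github.com/docker/docker/api/types"
	"github.com/docker/docker/client"
)

// StartedService is illustrative only; the real helper's return type differs.
type StartedService struct {
	Addr        string         // host:port the RADIUS server listens on
	ContainerID string         // ID of the running radiusd container
	Docker      *client.Client // Docker client for follow-up calls against it
}

// Exec runs a command inside the started container, e.g. to rewrite its
// configuration and restart the daemon, or to copy PKI certificates in.
func (s *StartedService) Exec(ctx context.Context, cmd []string) error {
	resp, err := s.Docker.ContainerExecCreate(ctx, s.ContainerID, types.ExecConfig{Cmd: cmd})
	if err != nil {
		return err
	}
	return s.Docker.ContainerExecStart(ctx, resp.ID, types.ExecStartCheck{})
}
```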
Signed-off-by: Alexander Scheel <alex.scheel@hashicorp.com>
* Allow any client to connect to test radiusd daemon
This fixes test failures of the following form:
> 2022-09-07T10:46:19.332-0400 [TRACE] core: adding local paths: paths=[]
> 2022-09-07T10:46:19.333-0400 [INFO] core: enabled credential backend: path=mnt/ type=test
> 2022-09-07T10:46:19.334-0400 [WARN] Executing test step: step_number=1
> 2022-09-07T10:46:19.334-0400 [WARN] Executing test step: step_number=2
> 2022-09-07T10:46:29.334-0400 [WARN] Executing test step: step_number=3
> 2022-09-07T10:46:29.335-0400 [WARN] Executing test step: step_number=4
> 2022-09-07T10:46:39.336-0400 [WARN] Requesting RollbackOperation
> --- FAIL: TestBackend_acceptance (28.56s)
> testing.go:364: Failed step 4: erroneous response:
>
> &logical.Response{Secret:<nil>, Auth:<nil>, Data:map[string]interface {}{"error":"context deadline exceeded"}, Redirect:"", Warnings:[]string(nil), WrapInfo:(*wrapping.ResponseWrapInfo)(nil), Headers:map[string][]string(nil)}
> FAIL
> FAIL github.com/hashicorp/vault/builtin/credential/radius 29.238s
In particular, radiusd container ships with a default clients.conf which
restricts connections to ranges associated with the Docker daemon. When
creating new networks (such as in CircleCI) or when running via Podman
(which has its own set of network ranges), this initial config will no
longer be applicable. We thus need to write a new config into the image;
while we could do this by rebuilding a new image on top of the existing
layers (provisioning our config), we would then need to manage those
changes and provide hooks for the service setup to build it.
Thus, post-startup modification is probably easier to execute in our
case.
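A sketch of that post-startup step, assuming a Docker-SDK exec helper like the one above; the clients.conf contents, path, and secret are illustrative, not the exact values used by the Vault test:

```go
package radiusdhelper

import (
	"context"

	"github.com/docker/docker/api/types"
	"github.com/docker/docker/client"
)

// permitAllClients writes a permissive clients.conf into the running radiusd
// container and then starts the daemon, since the Docker/Podman network
// ranges the tests connect from are not known ahead of time.
func permitAllClients(ctx context.Context, cli *client.Client, containerID string) error {
	clientsConf := `client 0.0.0.0/0 {
  secret = testing123
  shortname = all
}`
	// Overwrite the default config and launch radiusd (which daemonizes).
	cmd := []string{"sh", "-c",
		"printf '%s' '" + clientsConf + "' > /etc/raddb/clients.conf && radiusd"}

	resp, err := cli.ContainerExecCreate(ctx, containerID, types.ExecConfig{Cmd: cmd})
	if err != nil {
		return err
	}
	return cli.ContainerExecStart(ctx, resp.ID, types.ExecStartCheck{})
}
```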
Signed-off-by: Alexander Scheel <alex.scheel@hashicorp.com>
* OSS portion of wrapper-v2
* Prefetch barrier type to avoid encountering an error in the simple BarrierType() getter
* Rename the OveriddenType to WrapperType and use it for the barrier type prefetch
* Fix unit test
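The barrier-type prefetch above can be sketched as follows, assuming go-kms-wrapping v2 semantics; the seal struct and constructor names are assumptions, not the actual Vault types:

```go
package sealsketch

import (
	"context"

	wrapping "github.com/hashicorp/go-kms-wrapping/v2"
)

// autoSeal caches the wrapper's type at construction time, where returning an
// error is acceptable, so the BarrierType() getter can remain infallible.
type autoSeal struct {
	wrapper     wrapping.Wrapper
	wrapperType wrapping.WrapperType // prefetched barrier type
}

func newAutoSeal(ctx context.Context, w wrapping.Wrapper) (*autoSeal, error) {
	t, err := w.Type(ctx) // in wrapper-v2, Type() can return an error
	if err != nil {
		return nil, err
	}
	return &autoSeal{wrapper: w, wrapperType: t}, nil
}

// BarrierType returns the cached type; no error path is needed in the getter.
func (a *autoSeal) BarrierType() wrapping.WrapperType {
	return a.wrapperType
}
```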
* storage/raft: Fix cluster init with retry_join
Commit 8db66f4853abce3f432adcf1724b1f237b275415 introduced an error
wherein a join() would return nil (no error) with no information on its
channel if a joining node had been initialized. This was not handled
properly by the caller and resulted in a canceled `retry_join`.
Fix this by treating the `nil` channel response as an error, allowing the
existing retry mechanics to work as intended.
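A simplified sketch of the fixed handling, with assumed shapes for join() and the retry loop (the real code in Vault's raft backend differs):

```go
package raftjoinsketch

import (
	"context"
	"errors"
	"time"
)

// retryJoin keeps attempting join() until it succeeds or the context ends.
// The key change: a nil channel with a nil error is now treated as an error,
// so the existing retry mechanics keep running instead of retry_join being
// silently canceled.
func retryJoin(ctx context.Context, join func(context.Context) (<-chan struct{}, error)) error {
	for {
		ch, err := join(ctx)
		if err == nil && ch == nil {
			err = errors.New("join returned no result; node may already be initialized")
		}
		if err == nil {
			select {
			case <-ch:
				return nil
			case <-ctx.Done():
				return ctx.Err()
			}
		}
		select {
		case <-time.After(2 * time.Second):
		case <-ctx.Done():
			return ctx.Err()
		}
	}
}
```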
* storage/raft: Improve retry_join go test
* storage/raft: Make VerifyRaftPeers pollable
* storage/raft: Add changelog entry for retry_join fix
* storage/raft: Add description to VerifyRaftPeers
* raft: Ensure init before setting suffrage
As reported in https://hashicorp.atlassian.net/browse/VAULT-6773:
The /sys/storage/raft/join endpoint is intended to be unauthenticated. We rely
on the seal to manage trust.
It's possible to use multiple join requests to switch nodes from voter to
non-voter. The screenshot in the issue shows a 3-node cluster where vault_2 is
the leader, and vault_3 and vault_4 are followers with non-voters set to false.
Two requests were then sent to the raft join endpoint to have vault_3 and
vault_4 join the cluster with non_voters:true.
This commit fixes the issue by delaying the call to SetDesiredSuffrage until after
the initialization check, preventing unauthenticated mangling of voter status.
Tested locally using
https://github.com/hashicorp/vault-tools/blob/main/users/ncabatoff/cluster/raft.sh
and the reproducer outlined in VAULT-6773.
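An order-of-operations sketch of the fix; SetDesiredSuffrage is named in the commit, but the interface and the Initialized check here are simplifications for illustration:

```go
package raftsuffragesketch

import "context"

// raftJoiner abstracts the pieces involved, purely for illustration.
type raftJoiner interface {
	Initialized(ctx context.Context) (bool, error)
	SetDesiredSuffrage(nonVoter bool) error
}

// handleJoin applies the suffrage change only after the initialization check,
// so an unauthenticated join request cannot flip an already-initialized voter
// into a non-voter.
func handleJoin(ctx context.Context, b raftJoiner, nonVoter bool) error {
	initialized, err := b.Initialized(ctx)
	if err != nil {
		return err
	}
	if initialized {
		// Already part of a cluster: ignore the request, including any
		// requested change to voter status.
		return nil
	}
	return b.SetDesiredSuffrage(nonVoter)
}
```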
* raft: Return join err on failure
This is necessary to correctly distinguish errors returned from the Join
workflow. Previously, errors were being masked as timeouts.
* raft: Default autopilot parameters in teststorage
Change some defaults so we don't have to pass in parameters or set them
in the originating tests. These storage types are only used in two
places:
1) Raft HA testing
2) Seal migration testing
Both consumers have been tested and pass with this change.
* changelog: Unauthn voter status change bugfix
* WIP replacing lib/pq
* change timezone param to be URI format
* add changelog
* add changelog for redshift
* update changelog
* add test for DSN style connection string
* move parseurl and quoteidentify to sdk; include copyright and license
* call dbutil.ParseURL instead, fix import ordering
Co-authored-by: Calvin Leung Huang <1883212+calvn@users.noreply.github.com>
The URL password redaction operation did not handle the case where the
database connection URL was provided as a percent-encoded string, and
its password component contained reserved characters. It attempted to
redact the password by replacing the unescaped password in the
percent-encoded URL. This resulted in the password being revealed when
reading the configuration from Vault.
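A sketch of a safer redaction approach (the function name is illustrative, not the actual Vault helper): go through net/url so the password component is replaced structurally instead of by substring replacement on the percent-encoded string:

```go
package dbredactsketch

import "net/url"

// redactURLPassword replaces the password component, if present, with a
// fixed placeholder. Reserved characters in the password (e.g. '@', '/',
// '%') are handled correctly even when the input URL was percent-encoded.
func redactURLPassword(raw string) (string, error) {
	u, err := url.Parse(raw)
	if err != nil {
		return "", err
	}
	if u.User != nil {
		if _, hasPassword := u.User.Password(); hasPassword {
			u.User = url.UserPassword(u.User.Username(), "*****")
		}
	}
	return u.String(), nil
}
```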
* Fix upndomain bug causing alias name to change
* Fix nil map
* Add changelog
* revert
* Update changelog
* Add test for alias metadata name
* Fix code comment
- Add a 'Connect Timeout' query parameter to the test helper to set
a timeout value of 30 seconds in an attempt to address the following
failure we see at times in TestDeleteUser and TestUpdateUser:
mssql_test.go:253: Failed to initialize: error verifying connection: TLS Handshake failed: cannot read handshake packet: EOF
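A sketch of how the helper's connection URL might carry that timeout, mirroring the 'Connect Timeout' parameter named above; the exact parameter spelling accepted by the driver and the helper's real construction are assumptions here:

```go
package mssqlhelpersketch

import (
	"fmt"
	"net/url"
)

// connectionURL builds a sqlserver:// URL with a 30-second connect timeout,
// so slow container startup fails the connection attempt cleanly instead of
// surfacing as a TLS handshake EOF during verification.
func connectionURL(user, password, host string, port int) string {
	u := &url.URL{
		Scheme: "sqlserver",
		User:   url.UserPassword(user, password),
		Host:   fmt.Sprintf("%s:%d", host, port),
	}
	q := url.Values{}
	q.Set("Connect Timeout", "30")
	u.RawQuery = q.Encode()
	return u.String()
}
```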
* Update Go client libraries for etcd
* Added etcd server container to run etcd3 tests automatically.
* Removed etcd2 test case: it fails the backend tests but the failure is
unrelated to the uplift. The etcd2 backend implementation does not
remove empty nested nodes when removing leaf (see comments in #11980).
* Refactor TLS parsing
The ParsePEMBundle and ParsePKIJSON functions in the certutil package assume
both a client certificate and a custom CA are specified. Cassandra needs to
allow for either a client certificate, a custom CA, or both. This revamps the
parsing of pem_json and pem_bundle to accommodate any of these configurations.
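A simplified sketch of the accommodating behavior (not the certutil code itself): build a TLS config from whichever pieces are present, requiring at least one of them:

```go
package cassandratlssketch

import (
	"crypto/tls"
	"crypto/x509"
	"errors"
)

// buildTLSConfig accepts a client certificate, a custom CA, or both.
func buildTLSConfig(clientCertPEM, clientKeyPEM, caPEM []byte) (*tls.Config, error) {
	cfg := &tls.Config{}

	haveClientCert := len(clientCertPEM) > 0 && len(clientKeyPEM) > 0
	haveCA := len(caPEM) > 0
	if !haveClientCert && !haveCA {
		return nil, errors.New("no client certificate or CA provided")
	}

	if haveClientCert {
		cert, err := tls.X509KeyPair(clientCertPEM, clientKeyPEM)
		if err != nil {
			return nil, err
		}
		cfg.Certificates = []tls.Certificate{cert}
	}

	if haveCA {
		pool := x509.NewCertPool()
		if !pool.AppendCertsFromPEM(caPEM) {
			return nil, errors.New("failed to parse CA certificate")
		}
		cfg.RootCAs = pool
	}

	return cfg, nil
}
```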
* k8s doc: update for 0.9.1 and 0.8.0 releases (#10825)
* k8s doc: update for 0.9.1 and 0.8.0 releases
* Update website/content/docs/platform/k8s/helm/configuration.mdx
Co-authored-by: Theron Voran <tvoran@users.noreply.github.com>
* Autopilot initial commit
* Move autopilot related backend implementations to its own file
* Abstract promoter creation
* Add nil check for health
* Add server state oss no-ops
* Config ext stub for oss
* Make way for non-voters
* s/health/state
* s/ReadReplica/NonVoter
* Add synopsis and description
* Remove struct tags from AutopilotConfig
* Use var for config storage path
* Handle nil config when reading
* Enable testing autopilot by using inmem cluster
* First passing test
* Only report the server as known if it is present in raft config
* Autopilot defaults to on for all existing and new clusters
* Add locking to some functions
* Persist initial config
* Clarify the command usage doc
* Add health metric for each node
* Fix audit logging issue
* Don't set DisablePerformanceStandby to true in test
* Use node id label for health metric
* Log updates to autopilot config
* Less aggressively consume config loading failures
* Return a mutable config
* Return early from known servers if raft config is unable to be pulled
* Update metrics name
* Reduce log level for potentially noisy log
* Add knob to disable autopilot
* Don't persist if default config is in use
* Autopilot: Dead server cleanup (#10857)
* Dead server cleanup
* Initialize channel in any case
* Fix a bunch of tests
* Fix panic
* Add follower locking in heartbeat tracker
* Add LastContactFailureThreshold to config
* Add log when marking node as dead
* Update follower state locking in heartbeat tracker
* Avoid follower states being nil
* Pull test to its own file
* Add execution status to state response
* Optionally enable autopilot in some tests
* Updates
* Added API function to fetch autopilot configuration
* Add test for default autopilot configuration
* Configuration tests
* Add State API test
* Update test
* Added TestClusterOptions.PhysicalFactoryConfig
* Update locking
* Adjust locking in heartbeat tracker
* s/last_contact_failure_threshold/left_server_last_contact_threshold
* Add disabling autopilot as a core config option
* Disable autopilot in some tests
* s/left_server_last_contact_threshold/dead_server_last_contact_threshold
* Set the lastheartbeat of followers to now when setting up active node
* Don't use config defaults from CLI command
* Remove config file support
* Remove HCL test as well
* Persist only supplied config; merge supplied config with default to operate
* Use pointer to structs for storing follower information
* Test update
* Retrieve non-voter status from configbucket and set it up when a node comes up
* Manage desired suffrage
* Consider bucket being created already
* Move desired suffrage to its own entry
* s/DesiredSuffrageKey/LocalNodeConfigKey
* s/witnessSuffrage/recordSuffrage
* Fix test compilation
* Handle local node config post a snapshot install
* Commit to storage first; then record suffrage in fsm
* No need to handle the local node config being nil post snapshot restore
* Reconcile autopilot config when a new leader takes over duty
* Grab fsm lock when recording suffrage
* s/Suffrage/DesiredSuffrage in FollowerState
* Instantiate autopilot only in leader
* Default to old ways in more scenarios
* Make API gracefully handle 404
* Address some feedback
* Make IsDead an atomic.Value
* Simplify follower heartbeat tracking
* Use uber.atomic
* Don't have multiple causes for having autopilot disabled
* Don't remove node from follower states if we fail to remove the dead server
* Autopilot server removals map (#11019)
* Don't remove node from follower states if we fail to remove the dead server
* Use map to track dead server removals
* Use lock and map
* Use delegate lock
* Adjust when to remove entry from map
* Only hold the lock while accessing map
* Fix race
* Don't set default min_quorum
* Fix test
* Ensure follower states is not nil before starting autopilot
* Fix race
Co-authored-by: Jason O'Donnell <2160810+jasonodonnell@users.noreply.github.com>
Co-authored-by: Theron Voran <tvoran@users.noreply.github.com>
* upgrade vault dependency set
* etcd and grpc issues:
* better for tests
* testing
* all upgrades for hashicorp deps
* kubernetes plugin upgrade seems to work
* kubernetes plugin upgrade seems to work
* etcd and a bunch of other stuff
* all vulnerable packages upgraded
* k8s is broken in linux env but not locally
* test fixes
* fix testing
* fix etcd and grpc
* fix etcd and grpc
* use master branch of go-testing-interface
* roll back etcd upgrade
* have to fix grpc since other vendors pull in grpc 1.35.0 but we can't due to etcd
* rolling back in the replace directives
* a few more testing dependencies to clean up
* fix go mod vendor
* added retry to mssql testing
* setting num retry to 3
* removed a comment and moved svc into loop
Co-authored-by: HridoyRoy <hridoyroy@Hridoys-MacBook-Pro.local>
Co-authored-by: HridoyRoy <hridoyroy@Hridoys-MBP.hitronhub.home>
Fix some places where raft wasn't hooking into the core logger as it should.
Revisited the code that was setting the log level to Error during cleanup: it's normal for there to be a bunch of errors then, which makes it harder to see what went wrong up to the point where the test was deemed to have failed. So now, instead of setting log level to Error, we actually stop logging altogether. This only applies if the test didn't pass in its own logger during cluster creation, but we should be moving away from that anyway.
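A sketch of that cleanup behavior, assuming an hclog-based cluster logger; the helper name and the callerSupplied flag are illustrative, not the actual test harness API:

```go
package testclustersketch

import hclog "github.com/hashicorp/go-hclog"

// silenceOnCleanup stops logging entirely during cluster teardown when the
// logger was created by the cluster itself. Cleanup-time errors are expected
// noise, so silencing them (rather than raising the level to Error) keeps the
// real failure easier to spot. Loggers supplied by the test are left alone.
func silenceOnCleanup(logger hclog.Logger, callerSupplied bool) {
	if logger == nil || callerSupplied {
		return
	}
	logger.SetLevel(hclog.Off)
}
```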