This fixes a couple of references to loop variables in parallel tests
and deferred functions. When a parallel test (one that calls
`t.Parallel()`) is combined with the table-driven pattern, the test
case loop variable must be copied; otherwise only the last test case
is exercised. This is documented in the `testing` package:
https://pkg.go.dev/testing#hdr-Subtests_and_Sub_benchmarks
`defer` statements that invoke a closure should likewise not reference
a loop variable directly, since the referenced value changes on each
iteration of the loop.
Issues were automatically found with the `loopvarcapture` linter.
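A minimal sketch of the pattern the linter flags and its fix, under
pre-Go-1.22 loop-variable semantics (the test and its cases here are
hypothetical):

```go
package example

import "testing"

func TestSquare(t *testing.T) {
	testCases := []struct {
		name string
		in   int
		want int
	}{
		{"zero", 0, 0},
		{"two", 2, 4},
	}
	for _, tc := range testCases {
		tc := tc // copy the loop variable; without this, every parallel
		// subtest observes the final value of tc
		t.Run(tc.name, func(t *testing.T) {
			t.Parallel()
			if got := tc.in * tc.in; got != tc.want {
				t.Errorf("got %d, want %d", got, tc.want)
			}
		})
	}
}
```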
* HCP link integration
* update configure-git.yml
* more OSS stuff
* removing internal repos
* adding a nil check
* removing config test to be included in ENT only
* updating hcp-sdk-go to v0.22.0
* remove Hostname and AuthURL link config params
Co-authored-by: Chris Capurso <1036769+ccapurso@users.noreply.github.com>
* OSS portion of wrapper-v2
* Prefetch barrier type to avoid encountering an error in the simple BarrierType() getter
* Rename the OveriddenType to WrapperType and use it for the barrier type prefetch
* Fix unit test
* add func to set level for specific logger
* add endpoints to modify log level
* initialize base logger with IndependentLevels (see the sketch after this list)
* test to ensure other loggers remain unchanged
* add DELETE loggers endpoints to revert back to config
* add API docs page
* add changelog entry
* remove extraneous line
* add log level field to Core struct
* add godoc for getLogLevel
* add some loggers to c.allLoggers
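A minimal go-hclog sketch of the `IndependentLevels` behavior the
endpoints above rely on (the logger names are illustrative, not
Vault's actual subsystem loggers):

```go
package main

import "github.com/hashicorp/go-hclog"

func main() {
	// With IndependentLevels, loggers derived from the base each keep
	// their own level, so adjusting one does not affect its siblings.
	base := hclog.New(&hclog.LoggerOptions{
		Name:              "vault",
		Level:             hclog.Info,
		IndependentLevels: true,
	})

	storage := base.Named("storage")
	expiration := base.Named("expiration")

	// Raise verbosity for one subsystem only.
	storage.SetLevel(hclog.Trace)

	storage.Trace("visible: storage is now at trace")
	expiration.Trace("suppressed: expiration remains at info")
}
```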
This was causing failures when running `vault server -dev`:
> panic: runtime error: invalid memory address or nil pointer dereference
> [signal SIGSEGV: segmentation violation code=0x2 addr=0x20 pc=0x105c41c1c]
>
> goroutine 1 [running]:
> github.com/hashicorp/vault/command.(*ServerCommand).parseConfig(0x140005a2180)
> .../vault/command/server.go:429 +0x5c
Interestingly, we do not have a test case for running the dev
server.
Signed-off-by: Alexander Scheel <alex.scheel@hashicorp.com>
* Add warning about EA in FIPS mode
Signed-off-by: Alexander Scheel <alex.scheel@hashicorp.com>
* Add changelog
Signed-off-by: Alexander Scheel <alex.scheel@hashicorp.com>
* port SSCT OSS
* port header hmac key to ent and generate token proto without make command
* remove extra nil check in request handling
* add changelog
* add comment to router.go
* change test var to use length constants
* remove local index is 0 check and extra defer which can be removed after use of ExternalID
In future Vault Enterprise versions, we'll be building Vault with
FIPS-validated cryptography. To help operators understand their
environment, we'll want to expose information about their FIPS status
when they're running a FIPS version of Vault.
Signed-off-by: Alexander Scheel <alex.scheel@hashicorp.com>
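One common way to expose that — a sketch of the build-tag pattern
only, not necessarily what Vault Enterprise does — is a pair of files
selected by a `fips` build tag:

```go
//go:build !fips

package version

// FIPSEnabled reports whether this binary was built with
// FIPS-validated cryptography. A sibling file guarded by
// //go:build fips returns true instead.
func FIPSEnabled() bool { return false }
```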
In doing some testing, I found that the listener clusteraddr isn't really used, or at least isn't as important as the top-level `cluster_addr` setting. As such, go-sockaddr templating needs to be implemented for the top-level `cluster_addr` setting, or it's unusable for HA.
Also fix a nil pointer panic I discovered at the same time.
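For reference, a minimal sketch of how a go-sockaddr template
renders, using the `go-sockaddr/template` package (the address scheme
and port are illustrative):

```go
package main

import (
	"fmt"
	"log"

	sockaddrtmpl "github.com/hashicorp/go-sockaddr/template"
)

func main() {
	// Render the template the way a config value such as
	// cluster_addr = "https://{{ GetPrivateIP }}:8201" would expand.
	addr, err := sockaddrtmpl.Parse("{{ GetPrivateIP }}")
	if err != nil {
		log.Fatalf("failed to render cluster_addr template: %v", err)
	}
	fmt.Printf("cluster_addr resolves to https://%s:8201\n", addr)
}
```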
* VAULT-1564 report in-flight requests
* adding a changelog
* Changing some variable names and fixing comments
* minor style change
* adding unauthenticated support for in-flight-req
* adding documentation for the listener.profiling stanza
* adding an atomic counter for the inflight requests (sketched after this list)
addressing comments
* addressing comments
* logging completed requests
* fixing a test
* providing log_requests_info as a config option to determine at which level requests should be logged
* removing a member and a method from the StatusHeaderResponseWriter struct
* adding API docs
* revert changes in NewHTTPResponseWriter
* Fix logging invalid log_requests_info value
* Addressing comments
* Fixing a test
* use an atomic value for logRequestsInfo, and move the CreateClientID function to Core
* fixing go.sum
* minor refactoring
* protecting InFlightRequests from data race
* another try on fixing a data race
* another try to fix a data race
* addressing comments
* fixing couple of tests
* changing log_requests_info to log_requests_level
* minor style change
* fixing a test
* removing the lock in InFlightRequests
* use single-argument form for interface assertion
* adding doc for the new configuration parameter
* adding the new doc to the nav data file
* minor fix
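The core of the in-flight tracking is an atomic counter maintained by
middleware, roughly like this sketch (not Vault's implementation; the
endpoint path and handler are illustrative):

```go
package main

import (
	"fmt"
	"log"
	"net/http"
	"sync/atomic"
)

// inFlight counts the requests currently being handled.
var inFlight uint64

// countInFlight increments the counter on entry and decrements it via
// defer when the wrapped handler returns.
func countInFlight(next http.Handler) http.Handler {
	return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		atomic.AddUint64(&inFlight, 1)
		defer atomic.AddUint64(&inFlight, ^uint64(0)) // atomic decrement
		next.ServeHTTP(w, r)
	})
}

func main() {
	mux := http.NewServeMux()
	mux.HandleFunc("/in-flight-req", func(w http.ResponseWriter, r *http.Request) {
		fmt.Fprintf(w, "%d\n", atomic.LoadUint64(&inFlight))
	})
	log.Fatal(http.ListenAndServe(":8080", countInFlight(mux)))
}
```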
* Customizing HTTP headers in the config file
* Add changelog, fix bad imports
* fixing some bugs
* fixing interaction of custom headers and /ui
* Defining a member in core to set custom response headers
* missing additional file
* Some refactoring
* Adding automated tests for the feature
* Changing some error messages based on some recommendations
* Incorporating custom response headers struct into the request context
* removing some unused references
* fixing a test
* changing some error messages, removing a default header value from /ui
* fixing a test
* wrapping ResponseWriter to set the custom headers (sketched after this list)
* adding a new test
* some cleanup
* removing some extra lines
* Addressing comments
* fixing some agent tests
* skipping custom headers from agent listener config,
removing two of the default headers as they cause issues with Vault in UI mode
Adding X-Content-Type-Options to the ui default headers
Let Content-Type be set as before
* Removing default custom headers, and renaming some function variables
* some refactoring
* Refactoring and addressing comments
* removing a function and fixing comments
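The gist of wrapping the ResponseWriter is middleware that applies
the configured headers before the wrapped handler writes anything,
since headers set after the first `Write` are dropped by `net/http`.
A sketch with hypothetical config values:

```go
package main

import (
	"log"
	"net/http"
)

// customHeaders applies operator-configured response headers before
// delegating to the wrapped handler.
func customHeaders(headers map[string]string, next http.Handler) http.Handler {
	return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		for k, v := range headers {
			w.Header().Set(k, v)
		}
		next.ServeHTTP(w, r)
	})
}

func main() {
	cfg := map[string]string{ // stand-in for values parsed from the config file
		"X-Content-Type-Options":    "nosniff",
		"Strict-Transport-Security": "max-age=31536000",
	}
	ok := http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		w.Write([]byte("ok"))
	})
	log.Fatal(http.ListenAndServe(":8080", customHeaders(cfg, ok)))
}
```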
* Actually call config.Validate in diagnose
* Wire configuration checks into diagnose and fix resulting bugs.
* go mod vendor
* Merge to vendorless version
* Remove sentinel section to allow diagnose_ok to pass
* Fix unit tests
* initial refactoring of unseal step in run
* remove waitgroup
* remove waitgroup
* backup work
* backup
* backup
* completely modularize run and move into diagnose
* add diagnose errors for incorrect number of unseal keys
* comment tests back in
* backup
* first subspan
* finished subspanning but running into error with timeouts
* remove runtime checks
* merge main branch
* meeting updates
* remove telemetry block
* roy comment
* subspans for seal finalization and wrapping diagnose latency checks
* backup while I fix something else
* fix storage latency test errors
* runtime checks
* diagnose with timeout on seal (see the sketch after this list)
* review comments
* use random uuid for latency checks instead of static id
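The seal timeout boils down to bounding each check with a context
deadline, roughly as in this sketch (the hung-backend stand-in is
illustrative):

```go
package main

import (
	"context"
	"errors"
	"fmt"
	"time"
)

// withTimeout bounds a single diagnose check so a hung seal backend
// cannot stall the entire run.
func withTimeout(ctx context.Context, d time.Duration, check func(context.Context) error) error {
	ctx, cancel := context.WithTimeout(ctx, d)
	defer cancel()

	done := make(chan error, 1)
	go func() { done <- check(ctx) }()

	select {
	case err := <-done:
		return err
	case <-ctx.Done():
		return errors.New("seal check timed out")
	}
}

func main() {
	err := withTimeout(context.Background(), 50*time.Millisecond, func(context.Context) error {
		time.Sleep(time.Second) // stand-in for a hung seal backend
		return nil
	})
	fmt.Println(err) // seal check timed out
}
```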
* Create helpers which integrate with OpenTelemetry for diagnose collection (see the sketch after this list)
* Go mod vendor
* Comments
* Update vault/diagnose/helpers.go
Co-authored-by: swayne275 <swayne275@gmail.com>
* Add unit test/example
* tweak output
* More comments
* add spot check concept
* Get unit tests working on Result structs
* Fix unit test
* Get unit tests working, and make diagnose sessions local rather than global
* Comments
* Last comments
* No need for init
* :|
* Fix helpers_test
Co-authored-by: swayne275 <swayne275@gmail.com>
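A sketch of the span-per-check idea (not the actual helpers in
`vault/diagnose/helpers.go`; the tracer name and function signature
are assumptions):

```go
package diagnose

import (
	"context"

	"go.opentelemetry.io/otel"
	"go.opentelemetry.io/otel/codes"
)

// Test runs fn inside its own OpenTelemetry span and records failure
// as the span status, so a collector can render it as a check result.
func Test(ctx context.Context, name string, fn func(context.Context) error) error {
	ctx, span := otel.Tracer("diagnose").Start(ctx, name)
	defer span.End()

	if err := fn(ctx); err != nil {
		span.RecordError(err)
		span.SetStatus(codes.Error, err.Error())
		return err
	}
	span.SetStatus(codes.Ok, "")
	return nil
}
```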
* sanity checks for tls config in diagnose (sketched after this list)
* backup
* backup
* backup
* added necessary tests
* remove comment
* remove parallels causing test flakiness
* comments
* small fix
* separate out config hcl test case into new hcl file
* newline
* addressed comments
* addressed comments
* addressed comments
* addressed comments
* addressed comments
* reload funcs should be allowed to be nil
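The TLS sanity checks amount to loading the configured pair and
validating the leaf certificate, along these lines (file names are
placeholders):

```go
package main

import (
	"crypto/tls"
	"crypto/x509"
	"fmt"
	"log"
	"time"
)

// checkTLS verifies that the cert/key pair parses and matches, and
// that the leaf certificate is currently within its validity window.
func checkTLS(certFile, keyFile string) error {
	cert, err := tls.LoadX509KeyPair(certFile, keyFile)
	if err != nil {
		return fmt.Errorf("loading key pair: %w", err)
	}
	leaf, err := x509.ParseCertificate(cert.Certificate[0])
	if err != nil {
		return fmt.Errorf("parsing leaf certificate: %w", err)
	}
	if now := time.Now(); now.After(leaf.NotAfter) || now.Before(leaf.NotBefore) {
		return fmt.Errorf("certificate not valid at %s", now)
	}
	return nil
}

func main() {
	if err := checkTLS("tls.crt", "tls.key"); err != nil {
		log.Fatal(err)
	}
	fmt.Println("tls config ok")
}
```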
* Stub "operator diagnose" command.
* Parse configuration files.
* Refactor storage setup to call from diagnose.
* Add the ability to run Diagnose as a prequel to server start.
Adds debug and warn logging around AWS credential chain generation,
specifically to help users debugging auto-unseal problems on AWS, by
logging which role is being used in the case of a webidentity token.
Adds a deferred call to flush the log output as well, to ensure logs
are output in the event of an initialization failure.
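The deferred flush is the usual buffered-writer pattern, sketched
here with hclog (the logger name and fields are illustrative, not the
actual awsutil output):

```go
package main

import (
	"bufio"
	"os"

	"github.com/hashicorp/go-hclog"
)

func main() {
	// Buffer log output, and flush it even if initialization fails
	// partway through.
	out := bufio.NewWriter(os.Stderr)
	defer out.Flush()

	logger := hclog.New(&hclog.LoggerOptions{
		Name:   "awsutil",
		Level:  hclog.Debug,
		Output: out,
	})
	logger.Debug("building AWS credential chain")
	logger.Warn("using web identity role", "role_arn", "arn:aws:iam::123456789012:role/example") // hypothetical
}
```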
* Register a log sink that delays the printing of the big dev warning until logs have settled down
* Since this is always an intercept logger, just be explicit about the type
* changelog++