* update bindata_assetfs.go
* Release v1.8.2
* Putting source back into Dev Mode
* changelog: add entries for 1.7.6, 1.7.5 and 1.6.7
Co-authored-by: hashicorp-ci <hashicorp-ci@users.noreply.github.com>
NotifyShutdown was only used for testing. Now that t.Cleanup exists, we
can use that instead of attaching cleanup to the Server shutdown.
The Autopilot test which used NotifyShutdown doesn't need this
notification because Shutdown is synchronous. Waiting for the function
to return is equivalent.
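A minimal sketch of the pattern, assuming a NewServer constructor, a testConfig helper, and a synchronous Shutdown method (all names illustrative):

    package agent

    import "testing"

    // newTestServer registers shutdown with t.Cleanup instead of relying
    // on a NotifyShutdown channel. NewServer, testConfig, and Server are
    // stand-ins for the real test helpers.
    func newTestServer(t *testing.T) *Server {
        srv, err := NewServer(testConfig())
        if err != nil {
            t.Fatalf("failed to start server: %v", err)
        }
        // t.Cleanup runs after the test (and its subtests) finish, so
        // the server is torn down without a separate notification hook.
        t.Cleanup(func() {
            if err := srv.Shutdown(); err != nil {
                t.Errorf("failed to shut down server: %v", err)
            }
        })
        return srv
    }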
AutoConfig will generate local tokens for clients, and the ability to use local tokens is gated on token replication being enabled and a replication token being configured. We therefore already have a hard requirement on token replication being enabled; this commit surfaces that requirement to the operator instead of forcing them to discern the issue from RPC errors.
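A sketch of the kind of up-front check this adds; the function and parameters are illustrative, not Consul's actual configuration types:

    package config

    import "fmt"

    // validateAutoConfig surfaces the token replication requirement at
    // startup instead of leaving it to opaque RPC errors downstream.
    func validateAutoConfig(autoConfigEnabled, tokenReplicationEnabled bool, replicationToken string) error {
        if autoConfigEnabled && (!tokenReplicationEnabled || replicationToken == "") {
            return fmt.Errorf("AutoConfig requires ACL token replication to be enabled and a replication token to be configured")
        }
        return nil
    }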
Occasionally we are seeing the go-test-api job timeout at 10 minutes.
Looking at the stack trace I saw the following:
1. Lots of tests blocked on server.Stop in NewTestServerConfigT. This
suggests that SIGINT is being sent to the server, but the server is
not properly shutting down.
2. Over 20k goroutines that look like this:
    goroutine 16355 [select, 8 minutes]:
    net/http.(*persistConn).readLoop(0xc004270240)
        /usr/local/go/src/net/http/transport.go:2099 +0x99e
    created by net/http.(*Transport).dialConn
        /usr/local/go/src/net/http/transport.go:1647 +0xc56
Issue 1 seems to be the main problem, but debugging it directly is not
possible because our buffered logs do not get sent when the tests time
out. To mitigate this problem I've added a timeout around the cmd.Wait()
call to force kill the process and return an error.
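A sketch of the mitigation (function name and timeout are illustrative):

    package testutil

    import (
        "fmt"
        "os/exec"
        "time"
    )

    // waitWithTimeout waits for the process to exit, but force kills it
    // after the timeout so the test fails with an error instead of
    // hanging until the CI job timeout.
    func waitWithTimeout(cmd *exec.Cmd, timeout time.Duration) error {
        done := make(chan error, 1)
        go func() { done <- cmd.Wait() }()
        select {
        case err := <-done:
            return err
        case <-time.After(timeout):
            _ = cmd.Process.Kill()
            return fmt.Errorf("timeout waiting for command to exit")
        }
    }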
Unfortunately because we retry this operation, we still may not see the
cause because the next attempt will likely pass. I'm tempted to remove
the retry around NewTestServerConfigT.
Issue 2 seems to be caused by not closing the response body. Since the
request is performed many times in a loop, many goroutines are created,
and they do not exit until the response body is closed.
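A sketch of the fix (URL and helper are illustrative): draining and closing the body lets the transport reuse or release the connection instead of leaking a readLoop goroutine per request.

    package api

    import (
        "fmt"
        "io"
        "net/http"
    )

    func getOnce(client *http.Client, url string) error {
        resp, err := client.Get(url)
        if err != nil {
            return err
        }
        // Close (and drain) the body on every attempt, including retries.
        defer resp.Body.Close()
        if _, err := io.Copy(io.Discard, resp.Body); err != nil {
            return err
        }
        if resp.StatusCode != http.StatusOK {
            return fmt.Errorf("unexpected status: %s", resp.Status)
        }
        return nil
    }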
This code started as an optimization to avoid doing an RPC Ping to
itself. But in a single-server cluster the rebalancing logic was led to
believe that there were no healthy servers because foundHealthyServer
was never set. Now it is set properly.
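A sketch of the corrected check; Pinger and the server representation are stand-ins for the real types:

    package router

    // Pinger abstracts the RPC Ping used during rebalancing.
    type Pinger interface {
        Ping(server string) (bool, error)
    }

    func hasHealthyServer(servers []string, local string, pinger Pinger) bool {
        for _, srv := range servers {
            if srv == local {
                // Skip the RPC Ping to ourselves, but still count this
                // server as healthy -- forgetting to do so was the bug.
                return true
            }
            if ok, err := pinger.Ping(srv); err == nil && ok {
                return true
            }
        }
        return false
    }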
Fixes #8401 and #8403.
When consul is restarted and an envoy that had already sent
DiscoveryRequests to the previous consul process sends a request to the
new process, envoy does not respect the setting and never populates
DiscoveryRequest.Node for the life of the new consul process, due to
this bug: https://github.com/envoyproxy/envoy/issues/9682

Fixes #8430
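A sketch of one way to tolerate this (types abbreviated; stream stands for the xDS DiscoveryRequest stream handle): remember the Node from the first request that carries it and reuse it when later requests omit it.

    var node *core.Node
    for {
        req, err := stream.Recv()
        if err != nil {
            return err
        }
        if req.Node != nil {
            node = req.Node
        } else {
            req.Node = node // fall back to the last known Node
        }
        // ... handle the DiscoveryRequest as usual ...
    }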
Now that it is no longer used, we can remove this unnecessary field. This is a pre-step in cleaning up RuntimeConfig->Consul.Config, which is a pre-step to adding a gRPCHandler component to Server for streaming.
Removing this field also allows us to remove one of the return values from logging.Setup.
The main bug was fixed in cb050b280ceb4186de765118611a7a92d8158c3f, but the return value of 'result' is still misleading.
Change the return value to nil to make the code clearer.
Related changes:
- hard-fail the xDS connection attempt if the envoy version is known to be too old to be supported (see the sketch after this list)
- remove the RouterMatchSafeRegex proxy feature, since all supported envoy versions have it
- stop using --max-obj-name-len (due to envoyproxy/envoy#11740)
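The version gate sketched (the helper, minimum version, and error code are stand-ins, not Consul's actual symbols):

    package xds

    import (
        "github.com/hashicorp/go-version"
        "google.golang.org/grpc/codes"
        "google.golang.org/grpc/status"
    )

    // minSupportedEnvoyVersion is illustrative, not the real cutoff.
    var minSupportedEnvoyVersion = version.Must(version.NewVersion("1.12.0"))

    // checkEnvoyVersion hard-fails the xDS connection attempt when the
    // advertised envoy version is known to be too old.
    func checkEnvoyVersion(v *version.Version) error {
        if v == nil {
            return nil // version unknown: proceed as before
        }
        if v.LessThan(minSupportedEnvoyVersion) {
            return status.Errorf(codes.Aborted,
                "Envoy version %s is too old and is not supported by Consul", v)
        }
        return nil
    }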
Ensure that enabling AutoConfig sets the TLS configurator properly.
This also refactors the TLS configurator a bit so the naming doesn't imply that AutoEncrypt is the only source of automatically set up TLS cert info.
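A hypothetical sketch of the wiring (method and field names illustrative): AutoConfig pushes the TLS material it receives through the same generically named update path that AutoEncrypt uses.

    // updateTLS hands the CA roots and leaf certificate obtained via
    // AutoConfig to the shared TLS configurator.
    func (ac *AutoConfig) updateTLS(caPEMs []string, certPEM, keyPEM string) error {
        return ac.tlsConfigurator.UpdateAutoTLS(caPEMs, certPEM, keyPEM)
    }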
* Add sorting to ACL policies with comparator
* Add acls/roles sorting test
* Add navigation test for acls/policies
* Update onchange value for sorting policies