Occasionally we are seeing the go-test-api job time out at 10 minutes.
Looking at the stack trace I saw the following:
1. Lots of tests blocked on server.Stop in NewTestServerConfigT. This
suggests that SIGINT is being sent to the server, but the server is
not properly shutting down.
2. Over 20k goroutines that look like this:
goroutine 16355 [select, 8 minutes]:
net/http.(*persistConn).readLoop(0xc004270240)
/usr/local/go/src/net/http/transport.go:2099 +0x99e
created by net/http.(*Transport).dialConn
/usr/local/go/src/net/http/transport.go:1647 +0xc56
Issue 1 seems to be the main problem, but debugging it directly is not
possible because our buffered logs do not get sent when the tests time
out. To mitigate this problem I've added a timeout to cmd.Wait() to
force kill the process and return an error.
Unfortunately because we retry this operation, we still may not see the
cause because the next attempt will likely pass. I'm tempted to remove
the retry around NewTestServerConfigT.
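A rough sketch of that mitigation, with assumed names (the real change lives in the test server helper, not in this exact form):

```go
// waitWithTimeout waits for the test server process to exit, force killing it
// if it does not exit before the timeout, so the CI job cannot hang silently.
func waitWithTimeout(cmd *exec.Cmd, timeout time.Duration) error {
	done := make(chan error, 1)
	go func() { done <- cmd.Wait() }()

	select {
	case err := <-done:
		return err
	case <-time.After(timeout):
		_ = cmd.Process.Kill() // force kill instead of waiting forever
		return fmt.Errorf("timeout after %s waiting for the server process to exit", timeout)
	}
}
```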
Issue 2 seems to be caused by not closing the response body. Since the
request is performed many times in a loop, many goroutines are created,
and they do not exit until the response body is closed.
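A sketch of the fix; `client`, `url`, and the surrounding loop are stand-ins for the real code:

```go
for {
	resp, err := client.Get(url)
	if err != nil {
		time.Sleep(25 * time.Millisecond)
		continue
	}
	ok := resp.StatusCode == http.StatusOK
	// Close the body on every iteration. Without this, each request leaks a
	// net/http readLoop goroutine like the ones in the dump above.
	resp.Body.Close()
	if ok {
		break
	}
	time.Sleep(25 * time.Millisecond)
}
```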
Replaces #7559
Running tests in parallel, with background goroutines, results in test output not being associated with the correct test. `go test` does not make any guarantees about output from goroutines being attributed to the correct test case.
Attaching log output from background goroutines also causes data races. If the goroutine outlives the test, it will race with the test being marked done. Previously this was noticed as a panic when logging, but with the race detector enabled it is shown as a data race.
The previous solution did not address the problem of correct test attribution because test output could still be hidden when it was associated with a test that did not fail. You would have to look at all of the log output to find the relevant lines. It also made debugging test failures more difficult because each log line was very long.
This commit attempts a new approach. Instead of printing all the logs, only print when a test fails. This should work well when there are a small number of failures, but may not work well when there are many test failures at the same time. In those cases the failures are unlikely to be the result of a specific test, and the log output is likely less useful.
All of the logs are printed from the test goroutine, so they should be associated with the correct test.
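A minimal sketch of the approach, with illustrative names (not the exact implementation in this change):

```go
// runWithBufferedLogs collects log output written by the code under test and
// prints it, from the test goroutine, only if the test failed. A real
// implementation needs to synchronize writes from background goroutines,
// e.g. by guarding the buffer with a mutex.
func runWithBufferedLogs(t *testing.T, run func(logOutput io.Writer)) {
	var buf bytes.Buffer
	run(&buf)
	if t.Failed() {
		t.Log(buf.String())
	}
}
```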
Also removes some test helpers that were not used, or only had a single caller. Packages which expose many functions with similar names can be difficult to use correctly.
Related:
https://github.com/golang/go/issues/38458 (may be fixed in go1.15)
https://github.com/golang/go/issues/38382#issuecomment-612940030
Switch from /v1/agent/self to /v1/status/leader when checking if the test server has come up successfully in the waitForAPI function.
Previously, the test server was relying (probably not intentionally) on the default value of acl_enforce_version_8 in the TestConfig, which was false. So if you created a test server with ACLs enabled, they would not be enforced, and the server would come up quickly because /v1/agent/self would return a 200 status almost as soon as the agent was running, most likely before leader election had finished.
Now that we have removed the acl_enforce_version_8 property (equivalent to it being true by default), a test server created with ACLs enabled will need to wait for leader election and for ACLs to be initialized before it gets a successful response from /v1/agent/self.
Note: With this change, the waitForAPI function no longer requires a 200 response status from the /v1/status/leader endpoint. This is because in some tests, namely TestAPI_AgentLeave, we are only running clients, and this endpoint returns a 500 status.
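A rough sketch of the readiness check after this change (assumed shape, not the exact testutil code):

```go
// waitForAPI polls /v1/status/leader until the agent's HTTP API answers at
// all. The status code is deliberately not checked, because client-only
// agents return a 500 from this endpoint.
func waitForAPI(addr string) error {
	deadline := time.Now().Add(2 * time.Second)
	for time.Now().Before(deadline) {
		resp, err := http.Get("http://" + addr + "/v1/status/leader")
		if err == nil {
			resp.Body.Close()
			return nil
		}
		time.Sleep(50 * time.Millisecond)
	}
	return fmt.Errorf("timed out waiting for the agent HTTP API at %s", addr)
}
```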
* Unflake the TestAPI_AgentConnectCALeaf test
* Modify WaitForActiveCARoot to actually verify that at least one root exists
Also verify that the active root ID field is set
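A sketch of what the strengthened check might look like (the exact signature of WaitForActiveCARoot may differ):

```go
func waitForActiveCARoot(t *testing.T, client *api.Client) {
	retry.Run(t, func(r *retry.R) {
		roots, _, err := client.Connect().CARoots(nil)
		if err != nil {
			r.Fatal(err)
		}
		// Require at least one root and a populated active root ID before
		// letting the test proceed.
		if roots == nil || len(roots.Roots) == 0 {
			r.Fatal("no CA roots yet")
		}
		if roots.ActiveRootID == "" {
			r.Fatal("active CA root ID is not set yet")
		}
	})
}
```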
* Fix zombie consul process in Windows
Windows doesn't support the Interrupt signal, so stopping the server on
Windows would fail and leave a zombie consul process behind.
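A minimal sketch of the fix, assuming a helper along these lines (not Consul's exact code):

```go
// stopServer sends an interrupt where that is supported. Process.Signal with
// os.Interrupt is not implemented on Windows, so the process must be killed
// there to avoid leaving a zombie consul process behind.
func stopServer(cmd *exec.Cmd) error {
	if runtime.GOOS == "windows" {
		return cmd.Process.Kill()
	}
	return cmd.Process.Signal(os.Interrupt)
}
```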
Because our goroutine lifespan tracking is imperfect, piping our logs
through testing.T.Logf leaves a chance of a stray goroutine attempting
to log after the test that spawned it completes.
This results in a panic of:
panic: Log in goroutine after TestLeader_SecondaryCA_Initialize has completed...
This isn't great and should be fixed, but doing so quickly runs into
issues around externally cancelling blocking queries, which isn't really
possible at the moment. The concession here is to ignore these specific
panics for now.
This can be triggered easily when running some tests with a high
`-count=YYY` value.
Also needed to update some funcs that were taking a *testing.T to take a testing.TB instead. This prevents passing a nil pointer as a non-nil interface value,
which would make it impossible to detect nil before using the interface's functions.
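For reference, the typed-nil pitfall mentioned above can be demonstrated like this:

```go
package main

import (
	"fmt"
	"testing"
)

func main() {
	// A nil *testing.T stored in a testing.TB interface is a non-nil
	// interface value, so a callee that accepts testing.TB cannot detect
	// the nil pointer with a simple tb == nil check.
	var t *testing.T      // nil pointer
	var tb testing.TB = t // interface wrapping a nil pointer

	fmt.Println(t == nil)  // true
	fmt.Println(tb == nil) // false
}
```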
This should cut down on test flakiness.
Problems handled:
- If you had enough parallel test cases running, the former circular
approach to handling the port block could hand out the same port to
multiple cases before each had a chance to bind it, leading one of the
tests to fail.
- The freeport library would allocate out of the ephemeral port range.
This has been corrected for Linux (which should cover CI).
- The library now waits until a formerly-in-use port is verified to be
free before putting it back into circulation.
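A minimal sketch of the "verify before reuse" idea (not the actual freeport implementation):

```go
// portIsFree reports whether the port can be bound again. A previously
// handed-out port is only returned to circulation once this succeeds.
func portIsFree(port int) bool {
	ln, err := net.Listen("tcp", fmt.Sprintf("127.0.0.1:%d", port))
	if err != nil {
		return false // still in use, or lingering in TIME_WAIT
	}
	_ = ln.Close()
	return true
}
```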
All these changes should have no side-effects or change behavior:
- Use bytes.Buffer's String() instead of a conversion
- Use time.Since and time.Until where fitting
- Drop unnecessary returns and assignment
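For reference, the idioms in question look like this (illustrative only):

```go
func idioms(buf *bytes.Buffer, start, deadline time.Time) (string, time.Duration, time.Duration) {
	s := buf.String()                 // instead of string(buf.Bytes())
	elapsed := time.Since(start)      // instead of time.Now().Sub(start)
	remaining := time.Until(deadline) // instead of deadline.Sub(time.Now())
	return s, elapsed, remaining
}
```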
* Ensure the mesh gateway configuration comes back in the api within each upstream
* Add a test for the MeshGatewayConfig in the ToAPI functions
* Ensure we don’t use gateways for dc local connections
* Update the svc kind index for deletions
* Replace the proxycfg.state cache with an interface for testing
Also start implementing proxycfg state testing.
* Update the state tests to verify some gateway watches for upstream-targets of a discovery chain.
* Retry the creation of the test server three times.
* Reduce the retry timeout for the API wait to 2 seconds, opting to fail faster and start over.
* Remove wait for leader from server creation. This wait can be added on a test by test basis now that the function is being exported.
* Remove wait for anti-entropy sync. This is built into the existing WaitForSerfCheck func, so that can be used if the anti-entropy wait is needed
* Add HTTP endpoints for config entry management
* Finish implementing decoding in the HTTP Config entry apply endpoint
* Add CAS operation to the config entry apply endpoint
Also use this for the bootstrapping and move the config entry decoding function into the structs package.
* First pass at the API client for the config entries
* Fixup some of the ConfigEntry APIs
Return a singular response object instead of a list for the ConfigEntry.Get RPC. This gets plumbed through the HTTP API as well.
Don't return QueryMeta in the JSON response for the config entry listing HTTP API. Instead just return a list of config entries.
* Minor API client fixes
* Attempt at some ConfigEntry api client tests
These don’t currently work due to weak typing in JSON
* Get some of the api client tests passing
* Implement reflectwalk magic to correct JSON encoding a ProxyConfigEntry
Also added a test for the HTTP endpoint that exposes the problem. However, since the test doesn't actually do the JSON encode/decode, it's still failing.
* Move MapWalk magic into a binary marshaller instead of JSON.
* Add a MapWalk test
* Get rid of unused func
* Get rid of unused imports
* Fixup some tests now that the decoding from msgpack coerces things into json compat types
* Stub out most of the central config cli
Fully implement the config read command.
* Basic config delete command implementation
* Implement config write command
* Implement config list subcommand
Not entirely sure about the output here. It's basically the read output, indented, with a line specifying the kind/name of each entry, which is also duplicated in the indented output.
* Update command usage
* Update some help usage formatting
* Add the connect enable helper cli command
* Update list command output
* Rename the config entry API client methods.
* Use the renamed APIs
* Implement config write tests
Stub the others with the noTabs tests.
* Change list output format
Now simply output one line per named config
* Add config read tests
* Add invalid args write test.
* Add config delete tests
* Add config list tests
* Add connect enable tests
* Update some CLI commands to use CAS ops
This also modifies the HTTP API for a write op to return a boolean indicating whether the value was written or not.
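A sketch of a CAS write through the new client, assuming method and type names along these lines (the exact names were still being renamed in this PR):

```go
func casWrite(client *api.Client, lastIndex uint64) error {
	entry := &api.ServiceConfigEntry{
		Kind:     api.ServiceDefaults,
		Name:     "web", // example service name
		Protocol: "http",
	}
	// The returned bool mirrors the HTTP API change above: it reports
	// whether the entry was actually written.
	written, _, err := client.ConfigEntries().CAS(entry, lastIndex, nil)
	if err != nil {
		return err
	}
	if !written {
		return fmt.Errorf("config entry changed since index %d; not written", lastIndex)
	}
	return nil
}
```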
* Fix up the HTTP API CAS tests as I realized they weren’t testing what they should.
* Update config entry rpc tests to properly test CAS
* Fix up a few more tests
* Fix some tests that use ConfigEntries.Apply
* Update config_write_test.go
* Get rid of unused import