## Backport
This PR is auto-generated from #17754 to be assessed for backporting due
to the inclusion of the label backport/1.16.
🚨
> **Warning:** automatic cherry-pick of commits failed. If the first commit
> failed, you will see a blank no-op commit below. If at least one commit
> succeeded, you will see the cherry-picked commits up to, _not including_, the
> commit where the merge conflict occurred.
>
> The person who merged the original PR is: @WenInCode
>
> This person should manually cherry-pick the original PR into a new backport
> PR, and close this one when the manual backport PR is merged in.
>
> merge conflict error: unable to process merge commit:
> "1c757b8a2c1160ad53421b7b8bd7f74b205c4b89", automatic backport requires rebase workflow
The below text is copied from the body of the original PR.
---
Fixes #17097: Consul version of each node in the UI nodes section
@jkirschner-hashicorp @huikang @team @Maintainers
The Consul version is now included in the node registration request and stored as node metadata, and the UI fetches this new metadata.
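For context, here is a minimal sketch of the idea using the public catalog API rather than the agent's internal registration path; the `consul-version` meta key and the values are assumptions for illustration:

```go
package main

import (
	"log"

	"github.com/hashicorp/consul/api"
)

func main() {
	client, err := api.NewClient(api.DefaultConfig())
	if err != nil {
		log.Fatal(err)
	}

	// Hypothetical illustration: the real change records the agent's own
	// version in its node metadata during registration. The "consul-version"
	// key name here is an assumption.
	reg := &api.CatalogRegistration{
		Node:    "node-1",
		Address: "10.0.0.1",
		NodeMeta: map[string]string{
			"consul-version": "1.16.0",
		},
	}
	if _, err := client.Catalog().Register(reg, nil); err != nil {
		log.Fatal(err)
	}
}
```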
<img width="1512" alt="Screenshot 2023-06-15 at 4 21 33 PM"
src="https://github.com/hashicorp/consul/assets/3139634/94f7cf6b-701f-4230-b9f7-d8c4342d0737">
Also made this backward compatible and tested it.
Backward compatible in this context means: if a Consul binary with the above changes is deployed to one of the nodes and the UI is served from that node, the version is displayed not only for the current (upgraded) node but also for older nodes, provided those older nodes are Consul servers. For older client (non-server) nodes the version is not added to the node metadata, so no version is displayed for them. If an old node is a Consul server, its version is displayed, because the endpoint `v1/internal/ui/nodes?dc=dc1` already returned the version in the service meta, and the UI changes make use of that.
<img width="1480" alt="Screenshot 2023-06-16 at 6 58 32 PM"
src="https://github.com/hashicorp/consul/assets/3139634/257942f4-fbed-437d-a492-37849d2bec4c">
---
<details>
<summary> Overview of commits </summary>
- 931fdfc7ecdc26bb7cc20b698c5e14c1b65fcc6e
- b3e2ec1ccaca3832a088ffcac54257fa6653c6c1
- 8d0e9a54907039c09330c6cd7b9e761566af6856
- 04e5d88cca37821f6667be381c16aaa5958b5c92
- 28286a2e98f8cd66ef8593c2e2893b4db6080417
- 43e50ad38207952a9c4d04d45d08b6b8f71b31fe
- 0cf1b7077cdf255596254d9dc1624a269c42b94d
- 27f34ce1c2973591f75b1e38a81ccbe7cee6cee3
- 2ac76d62b8cbae76b1a903021aebb9b865e29d6e
- 3d618df9ef1d10dd5056c8b1ed865839c553a0e0
- 1c757b8a2c1160ad53421b7b8bd7f74b205c4b89
- 23ce82b4cee8f74dd634dbe145313e9a56c0077d
- 4dc1c9b4c5aafdb8883ef977dfa9b39da138b6cb
- 85a12a92528bfa267a039a9bb258170be914abf7
- 25d30a3fa980d130a30d445d26d47ef2356cb553
- 7f1d6192dce3352e92307175848b89f91e728c24
- 5174cbff84b0795d4cb36eb8980d0d5336091ac9
</details>
---------
Co-authored-by: Vijay Srinivas <vijayraghav22@gmail.com>
Co-authored-by: John Murret <john.murret@hashicorp.com>
Co-authored-by: Jared Kirschner <85913323+jkirschner-hashicorp@users.noreply.github.com>
* backport of commit dc9c08d3b8cc1eda95a05a8b041ab2a3a5248dd0
* backport of commit 1271705a5cce5fe5e9487fed2ac965ab7aac3d59
---------
Co-authored-by: Ronald Ekambi <ronekambi@gmail.com>
Co-authored-by: Ronald <roncodingenthusiast@users.noreply.github.com>
* Add a ReplaceType dep mapper and move them into their own file
* Implement the service endpoints controller
* Implement a Catalog Controllers Integration Test
This change was necessary because the configuration was always generated with a gRPC TLS port, which did not exist in Consul 1.13 and would cause the server to fail to launch with an error.
This code checks the Consul version and conditionally adds the gRPC TLS port only if the version number is greater than 1.14.
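A minimal sketch of that kind of version gate, using `github.com/hashicorp/go-version`; the helper name, port map, and the exact cutoff are illustrative, not the actual test code:

```go
package testconfigsketch

import (
	"fmt"

	goversion "github.com/hashicorp/go-version"
)

// grpcTLSMinVersion is the assumed first release that exposes a gRPC TLS port,
// based on the description above.
var grpcTLSMinVersion = goversion.Must(goversion.NewVersion("1.14.0"))

// maybeAddGRPCTLSPort adds the port to a test config only when the Consul
// version under test is new enough to know about it.
func maybeAddGRPCTLSPort(consulVersion string, ports map[string]int) error {
	v, err := goversion.NewVersion(consulVersion)
	if err != nil {
		return fmt.Errorf("invalid consul version %q: %w", consulVersion, err)
	}
	if v.GreaterThanOrEqual(grpcTLSMinVersion) {
		ports["grpc_tls"] = 8503
	}
	return nil
}
```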
* update go version to 1.18 for api and sdk, go mod tidy
* removes ioutil usage everywhere; it was deprecated in Go 1.16 in favour of the io and os packages. Also introduces a lint rule that forbids use of ioutil going forward.
Co-authored-by: R.B. Boyer <4903+rboyer@users.noreply.github.com>
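For reference, the usual one-to-one replacements for the deprecated ioutil helpers look like this (standard library only; the file names are placeholders):

```go
package main

import (
	"io"
	"log"
	"net/http"
	"os"
)

func main() {
	// ioutil.ReadFile  -> os.ReadFile
	data, err := os.ReadFile("config.json")
	if err != nil {
		log.Fatal(err)
	}

	// ioutil.WriteFile -> os.WriteFile
	if err := os.WriteFile("copy.json", data, 0o644); err != nil {
		log.Fatal(err)
	}

	// ioutil.TempDir   -> os.MkdirTemp
	dir, err := os.MkdirTemp("", "example")
	if err != nil {
		log.Fatal(err)
	}
	defer os.RemoveAll(dir)

	// ioutil.ReadAll -> io.ReadAll, ioutil.Discard -> io.Discard
	resp, err := http.Get("https://example.com")
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()
	if _, err := io.Copy(io.Discard, resp.Body); err != nil {
		log.Fatal(err)
	}
}
```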
A previous commit introduced an internally-managed server certificate
to use for peering-related purposes.
Now the peering token has been updated to match that behavior:
- The server name matches the structure of the server cert
- The CA PEMs correspond to the Connect CA
Note that if Connect is disabled, and by extension the Connect CA, we
fall back to the previous behavior of returning the manually configured
certs and the local server SNI.
Several tests were updated to use the gRPC TLS port since they enable
Connect by default. This means that the peering token will embed the
Connect CA, and the dialer will expect a TLS listener.
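A loose sketch of that fallback logic; every type and helper name here is hypothetical and not the actual Consul internals:

```go
package peeringsketch

// peeringTokenCA is a hypothetical view of the token fields discussed above.
type peeringTokenCA struct {
	ServerName string
	CAPEMs     []string
}

// buildTokenCA prefers the Connect CA roots; if Connect (and therefore the
// Connect CA) is disabled, it falls back to the manually configured certs
// and the local server SNI, as described above.
func buildTokenCA(connectEnabled bool, connectCARoots, manualCertPEMs []string, serverSNI, internalServerName string) peeringTokenCA {
	if connectEnabled && len(connectCARoots) > 0 {
		return peeringTokenCA{
			ServerName: internalServerName, // matches the internally-managed server cert
			CAPEMs:     connectCARoots,
		}
	}
	return peeringTokenCA{
		ServerName: serverSNI,
		CAPEMs:     manualCertPEMs,
	}
}
```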
* defaulting to false because peering will be released as beta
* Ignore peering disabled error in bundles cachetype
Co-authored-by: Matt Keeler <mkeeler@users.noreply.github.com>
Co-authored-by: freddygv <freddy@hashicorp.com>
Co-authored-by: Matt Keeler <mjkeeler7@gmail.com>
Peer replication is intended to be between separate Consul installs and
effectively should be considered "external". This PR moves the peer
stream replication bidirectional RPC endpoint to the external gRPC
server and ensures that things continue to function.
- Adding a 'Partition' and 'RetryJoin' command allows test cases where
one would like to spin up a Consul agent in a non-default partition, to
test use cases that are common when enabling Admin Partitions on
Kubernetes.
* add root_cert_ttl option for consul connect, vault ca providers
Signed-off-by: FFMMM <FFMMM@users.noreply.github.com>
* Apply suggestions from code review
Co-authored-by: Chris S. Kim <ckim@hashicorp.com>
* add changelog, pr feedback
Signed-off-by: FFMMM <FFMMM@users.noreply.github.com>
* Update .changelog/11428.txt, more docs
Co-authored-by: Daniel Nephin <dnephin@hashicorp.com>
* Update website/content/docs/agent/options.mdx
Co-authored-by: Kyle Havlovitz <kylehav@gmail.com>
Co-authored-by: Chris S. Kim <ckim@hashicorp.com>
Co-authored-by: Daniel Nephin <dnephin@hashicorp.com>
Co-authored-by: Kyle Havlovitz <kylehav@gmail.com>
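For illustration, the root_cert_ttl option mentioned above could be exercised through the Connect CA configuration API roughly like this; the `RootCertTTL` config key and the value are assumptions based on the option name, not a quote from the docs:

```go
package main

import (
	"log"

	"github.com/hashicorp/consul/api"
)

func main() {
	client, err := api.NewClient(api.DefaultConfig())
	if err != nil {
		log.Fatal(err)
	}

	// Assumed mapping of the root_cert_ttl option onto the CA provider config.
	conf := &api.CAConfig{
		Provider: "consul",
		Config: map[string]interface{}{
			"RootCertTTL": "87600h", // ~10 years
		},
	}
	if _, err := client.Connect().CASetConfig(conf, nil); err != nil {
		log.Fatal(err)
	}
}
```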
- return errors in TestAgent.Start so that the retry works correctly
- remove duplicate logging, the error is returned already
- add a missing t.Helper() to retry.Run
- properly set a.Agent to nil so that subsequent retry attempts will actually try to start
On a few occasions I've had to read timeout stack traces for tests and
noticed that retry.Run runs the function in a goroutine. This makes
debugging a timeout more difficult because the goroutine of the retryable
function is disconnected from the stack of the actual test; it requires
searching through the entire stack trace to find the other goroutine.
By using panic instead of runtime.Goexit() we remove the need for a
separate goroutine.
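A minimal sketch of that approach, not the real retry package: the retryable function fails by panicking with a sentinel value that the runner recovers, so everything stays on a single goroutine:

```go
package retrysketch

import "time"

// failNowSentinel is panicked by FailNow so the runner can recover it.
type failNowSentinel struct{}

// R is a tiny stand-in for retry.R.
type R struct{ failed bool }

func (r *R) FailNow() {
	r.failed = true
	panic(failNowSentinel{}) // unwinds back to run() on the same goroutine
}

// run executes f once, converting the sentinel panic back into a failure flag.
func run(f func(r *R)) (failed bool) {
	r := &R{}
	defer func() {
		if p := recover(); p != nil {
			if _, ok := p.(failNowSentinel); !ok {
				panic(p) // unrelated panics still propagate
			}
			failed = true
		}
	}()
	f(r)
	return r.failed
}

// Run retries f until it succeeds or the attempts run out; it reports success.
func Run(attempts int, wait time.Duration, f func(r *R)) bool {
	for i := 0; i < attempts; i++ {
		if !run(f) {
			return true
		}
		time.Sleep(wait)
	}
	return false
}
```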
Also a few other small improvements:
* add `R.Helper` so that an assertion function can be used with both
testing.T and retry.R.
* Pass t to `Retryer.NextOr`, and call `t.Helper` in a number of places
so that the line number reported by `t.Log` is the line in the test
where `retry.Run` was called, instead of some line in `retry.go` that
is not relevant to the failure.
* improve the implementation of `dedup` by removing the need to iterate
twice. Instead, track the lines and skip any duplicate when writing to
the buffer (see the sketch after this list).
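The single-pass `dedup` mentioned above could look roughly like this sketch, not the exact implementation:

```go
package retrysketch

import (
	"bytes"
	"strings"
)

// dedup removes duplicate lines in a single pass: seen lines are tracked in a
// set and skipped while writing straight into the output buffer.
func dedup(lines []string) string {
	if len(lines) == 0 {
		return ""
	}
	var buf bytes.Buffer
	seen := make(map[string]struct{}, len(lines))
	for _, line := range lines {
		line = strings.TrimSpace(line)
		if _, ok := seen[line]; ok || line == "" {
			continue
		}
		seen[line] = struct{}{}
		buf.WriteString(line)
		buf.WriteString("\n")
	}
	return buf.String()
}
```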
- Upgrade the ConfigEntry.ListAll RPC to be kind-aware so that older
copies of Consul will not see new config entries they don't understand
replicate down.
- Add shim conversion code so that the old API/CLI method of interacting
with intentions will continue to work so long as none of these are
edited via config entry endpoints. Almost all of the read-only APIs will
continue to function indefinitely.
- Add new APIs that operate on individual intentions without IDs so that
the UI doesn't need to implement CAS operations.
- Add a new serf feature flag indicating support for
intentions-as-config-entries.
- The old line-item intentions way of interacting with the state store
will transparently flip between the legacy memdb table and the config
entry representations so that readers will never see a hiccup during
migration where the results are incomplete. It uses a piece of system
metadata to control the flip.
- The primary datacenter will begin migrating intentions into config
entries on startup once all servers in the datacenter are on a version
of Consul with the intentions-as-config-entries feature flag. When it is
complete the old state store representations will be cleared. We also
record a piece of system metadata indicating this has occurred. We use
this metadata to skip ALL of this code the next time the leader starts
up.
- The secondary datacenters continue to run the old intentions
replicator until all servers in the secondary DC and primary DC support
intentions-as-config-entries (via serf flag). Once this condition is met,
the old intentions replicator ceases.
- The secondary datacenters replicate the new config entries as they are
migrated in the primary. When a secondary detects that the primary has zeroed
its old state store table, it waits until all config entries up to that
point are replicated and then zeroes its own copy of the old state store
table. We also record a piece of system metadata indicating this has
occurred. We use this metadata to skip ALL of this code the next time
the leader starts up.
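A rough sketch of the "all servers advertise the feature flag" gate described above; the serf tag name here is an assumption, not the actual flag Consul uses:

```go
package serfsketch

import "github.com/hashicorp/serf/serf"

// allServersSupportIntentionConfigEntries reports whether every alive Consul
// server member advertises the (hypothetical) intentions-as-config-entries tag.
func allServersSupportIntentionConfigEntries(members []serf.Member) bool {
	for _, m := range members {
		if m.Status != serf.StatusAlive {
			continue
		}
		if m.Tags["role"] != "consul" { // Consul servers advertise role=consul
			continue
		}
		if m.Tags["ft_intention_config_entries"] != "1" { // assumed flag name
			return false
		}
	}
	return true
}
```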
Occasionally we are seeing the go-test-api job time out at 10 minutes.
Looking at the stack trace I saw the following:
1. Lots of tests blocked on server.Stop in NewTestServerConfigT. This
suggests that SIGINT is being sent to the server, but the server is
not properly shutting down.
2. Over 20k goroutines that look like this:
    goroutine 16355 [select, 8 minutes]:
    net/http.(*persistConn).readLoop(0xc004270240)
        /usr/local/go/src/net/http/transport.go:2099 +0x99e
    created by net/http.(*Transport).dialConn
        /usr/local/go/src/net/http/transport.go:1647 +0xc56
Issue 1 seems to be the main problem, but debugging that directly is not
possible because our buffered logs do not get sent when the tests
time out. To mitigate this problem I've added a timeout to the cmd.Wait()
to force kill the process and return an error.
Unfortunately because we retry this operation, we still may not see the
cause because the next attempt will likely pass. I'm tempted to remove
the retry around NewTestServerConfigT.
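The wait-with-timeout mitigation could look something like this sketch; the helper name and timeout handling are illustrative, not the actual testutil code:

```go
package testutilsketch

import (
	"fmt"
	"os/exec"
	"time"
)

// waitWithTimeout waits for a started cmd to exit; if it does not exit within
// the timeout, the process is force-killed and an error is returned so the
// caller can surface the hang instead of blocking forever.
func waitWithTimeout(cmd *exec.Cmd, timeout time.Duration) error {
	done := make(chan error, 1)
	go func() { done <- cmd.Wait() }()

	select {
	case err := <-done:
		return err
	case <-time.After(timeout):
		_ = cmd.Process.Kill()
		<-done // reap the process after killing it
		return fmt.Errorf("timeout waiting for test server to exit; process killed")
	}
}
```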
Issue 2 seems to be caused by not closing the response body. Since the
request is performed many times in a loop, many goroutines are created
and are not closed until the response body is closed.
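The fix for the leaked readLoop goroutines is the standard one: always drain and close the response body on every iteration, for example:

```go
package testutilsketch

import (
	"fmt"
	"io"
	"net/http"
)

// pollOnce performs a single request and always drains and closes the body,
// so the underlying connection (and its readLoop goroutine) can be reused or
// shut down instead of piling up across retries.
func pollOnce(client *http.Client, url string) (int, error) {
	resp, err := client.Get(url)
	if err != nil {
		return 0, err
	}
	defer resp.Body.Close()
	if _, err := io.Copy(io.Discard, resp.Body); err != nil {
		return 0, fmt.Errorf("draining body: %w", err)
	}
	return resp.StatusCode, nil
}
```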
Replaces #7559
Running tests in parallel, with background goroutines, results in test output not being associated with the correct test. `go test` does not make any guarantees about output from goroutines being attributed to the correct test case.
Attaching log output from background goroutines also causes data races. If the goroutine outlives the test, it will race with the test being marked done. Previously this was noticed as a panic when logging, but with the race detector enabled it is shown as a data race.
The previous solution did not address the problem of correct test attribution because test output could still be hidden when it was associated with a test that did not fail. You would have to look at all of the log output to find the relevant lines. It also made debugging test failures more difficult because each log line was very long.
This commit attempts a new approach. Instead of printing all the logs, only print when a test fails. This should work well when there are a small number of failures, but may not work well when there are many test failures at the same time. In those cases the failures are unlikely a result of a specific test, and the log output is likely less useful.
All of the logs are printed from the test goroutine, so they should be associated with the correct test.
Also removes some test helpers that were not used, or only had a single caller. Packages which expose many functions with similar names can be difficult to use correctly.
Related:
https://github.com/golang/go/issues/38458 (may be fixed in go1.15)
https://github.com/golang/go/issues/38382#issuecomment-612940030
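A minimal sketch of the "only print logs when a test fails" approach, assuming logs are buffered in memory and flushed from the test goroutine via t.Cleanup; the helper names are illustrative:

```go
package testutilsketch

import (
	"bytes"
	"io"
	"sync"
	"testing"
)

// NewLogBuffer returns a writer that buffers log output and only dumps it if
// the test ends up failing. The dump happens on the test goroutine via
// t.Cleanup, so the output is attributed to the right test.
func NewLogBuffer(t *testing.T) io.Writer {
	buf := &syncBuffer{}
	t.Cleanup(func() {
		if t.Failed() {
			t.Log("test failed, dumping buffered logs:\n" + buf.String())
		}
	})
	return buf
}

// syncBuffer makes the buffer safe for writes from background goroutines.
type syncBuffer struct {
	mu  sync.Mutex
	buf bytes.Buffer
}

func (b *syncBuffer) Write(p []byte) (int, error) {
	b.mu.Lock()
	defer b.mu.Unlock()
	return b.buf.Write(p)
}

func (b *syncBuffer) String() string {
	b.mu.Lock()
	defer b.mu.Unlock()
	return b.buf.String()
}
```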
Switch from /v1/agent/self to /v1/status/leader when checking if the test server has come up successfully in the waitForAPI function.
Previously, the test server was relying (probably not intentionally) on the default value of acl_enforce_version_8 in the TestConfig, which was false. So if you created a test server and enabled ACLs, they would not be enforced, and the server would be able to come up pretty quickly because /v1/agent/self would return a 200 status pretty much as soon as the agent was running and most likely before leader election was finished.
Now that we have removed the acl_enforce_version_8 property (equivalent to it being true by default), if you create a test server with ACLs enabled, it will need to wait for leader election and for ACLs to be initialized before it gets a successful response from /v1/agent/self.
Note: With this change, the waitForAPI function no longer requires a 200 response status from the v1/status/leader endpoint. This is because in some tests, namely TestAPI_AgentLeave, we are only running clients, and this endpoint returns a 500 status.
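A sketch of that readiness check: poll /v1/status/leader and treat any HTTP response, even a non-200 from a client-only agent, as proof that the API is up (names and intervals here are illustrative):

```go
package testutilsketch

import (
	"fmt"
	"io"
	"net/http"
	"time"
)

// waitForAPI polls the status/leader endpoint until the HTTP API answers at
// all. Any status code is accepted: a client-only agent can return a 500 here,
// but that still proves the API is serving requests.
func waitForAPI(baseURL string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := http.Get(baseURL + "/v1/status/leader")
		if err == nil {
			io.Copy(io.Discard, resp.Body)
			resp.Body.Close()
			return nil
		}
		time.Sleep(250 * time.Millisecond)
	}
	return fmt.Errorf("timed out waiting for the test server API at %s", baseURL)
}
```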
* Unflake the TestAPI_AgentConnectCALeaf test
* Modify WaitForActiveCARoot to actually verify that at least one root exists
Also verify that the active root id field is set
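The stricter check could look roughly like this sketch against the public API client, not the actual test helper:

```go
package testutilsketch

import (
	"fmt"
	"time"

	"github.com/hashicorp/consul/api"
)

// waitForActiveCARoot polls until the agent reports at least one CA root and
// a non-empty active root ID, rather than accepting an empty root list.
func waitForActiveCARoot(client *api.Client, timeout time.Duration) (*api.CARoot, error) {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		list, _, err := client.Agent().ConnectCARoots(nil)
		if err == nil && list != nil && list.ActiveRootID != "" && len(list.Roots) > 0 {
			for _, root := range list.Roots {
				if root.ID == list.ActiveRootID {
					return root, nil
				}
			}
		}
		time.Sleep(250 * time.Millisecond)
	}
	return nil, fmt.Errorf("timed out waiting for an active CA root")
}
```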