TestEnvoy.Close used `e.stream.recvCh == nil` to indicate the channel had already
been closed, so that TestEnvoy.Close could be called multiple times. The recvCh
field was not protected by a lock, so setting it to nil caused a data race with any
goroutine trying to read from the channel.
Instead, set the stream itself to nil. The stream field is guarded by a lock, so it does not race.
This change allows us to test the agent/xds package using `-race`.
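A minimal sketch of the resulting pattern (the type and field names here are illustrative, not the actual agent/xds types):

```go
package xds_test

import "sync"

// Illustrative only; the real TestEnvoy in agent/xds has more fields.
type testStream struct {
	recvCh chan struct{}
}

type TestEnvoy struct {
	mu     sync.Mutex
	stream *testStream
}

// Close is safe to call multiple times: both the nil check and the nil
// assignment happen under e.mu, so there is no unsynchronized write to
// recvCh for reader goroutines to race with.
func (e *TestEnvoy) Close() error {
	e.mu.Lock()
	defer e.mu.Unlock()
	if e.stream == nil {
		return nil // already closed
	}
	close(e.stream.recvCh)
	e.stream = nil
	return nil
}
```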
This fixes failing integration tests. The latest version (`1.7.4.0-r0`)
appears to not cat all the bytes, so the expected metrics are
missing from the output.
This way we only have to wait for the serf barrier to pass once before
we can upgrade to v2 ACLs. Without this patch, every restart needs to
re-compute the change, and if a stray older node joins after
a migration, the cluster might regress back to v1 mode, which would be problematic.
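The general shape of the idea, as a hypothetical sketch; Consul's actual implementation persists this through its own state machinery rather than a marker file:

```go
package acl

import (
	"os"
	"path/filepath"
)

// Hypothetical sketch: once the serf barrier has passed (every server
// has been seen at a version supporting v2 ACLs), record that fact
// durably so restarts don't recompute it and a stray older node
// joining later can't flip the cluster back to v1 mode.
const aclUpgradeMarker = "acl-v2-upgrade-complete"

func markACLUpgradeComplete(dataDir string) error {
	// An empty marker file is enough; its presence is the signal.
	return os.WriteFile(filepath.Join(dataDir, aclUpgradeMarker), nil, 0o600)
}

func aclUpgradeComplete(dataDir string) bool {
	_, err := os.Stat(filepath.Join(dataDir, aclUpgradeMarker))
	return err == nil
}
```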
* Adds a new React component, `ConfigEntryReference`, that allows us to
document config entries for both HCL and Kubernetes CRDs
* Rewrites the config entry docs to use the new component
* Adds CRD examples alongside the HCL examples
* Removes the examples from the Kube CRD doc because they are now in the
main CRD docs
* ui: Add EmptyState for exposed paths, plus additional details:
  1. Add the port to the combined address
  2. Use the proxy address for the IP address
* Convert to a Glimmer component
* Add spaces to the ListenerPort and LocalPathPort tooltips
* ui: Remove all vestiges of `role=tabpanel`
* Switch out the `tablist` role for a label, default to Secondary
* Move healthcheck-output headers to `h2`; ideally these would live outside the component
* Add an `aria-label` for the empty button
* Fix up non-unique ids in the topology component
* Temporarily fix up the `h2` in KV > LockSession
* Fix up `dl` with no `dt`
* Change `h3` to `h2`
* Fix up page objects that relied on ids
This can happen when another node in the cluster, such as a client, is unable to communicate with the leader server and sees it as failed. When that happens, its failing status eventually gets propagated to the other servers in the cluster, which can result in RPCs returning a “No cluster leader” error.
That error is misleading and unhelpful for determining the root cause of the issue, as the problem is not Raft stability but rather client -> server networking. Therefore this commit adds a new error that will be returned in that case, to differentiate between the two.
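A hedged sketch of the distinction (the error names and wording here are hypothetical, not Consul's actual error values):

```go
package rpc

import "errors"

// Hypothetical sentinel errors; the real Consul error strings differ.
var (
	// ErrNoLeader: Raft genuinely has no elected leader.
	ErrNoLeader = errors.New("No cluster leader")
	// ErrLeaderUnreachable: a leader exists, but this node cannot
	// reach it -- a client -> server networking problem, not Raft
	// instability.
	ErrLeaderUnreachable = errors.New("Leader is known but unreachable; check client-to-server connectivity")
)

// leaderRPCError picks which error to surface based on whether a
// leader is known to the local server-tracking state.
func leaderRPCError(knownLeaderAddr string) error {
	if knownLeaderAddr == "" {
		return ErrNoLeader
	}
	return ErrLeaderUnreachable
}
```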
This PR adds the `ns=*` query parameter when namespaces are enabled to keep backwards compatibility with how the UI used to work (the Intentions page always lists all intentions across all namespaces you have access to).
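For illustration, the same wildcard-namespace query made through the official Go API client (assuming a namespaces-enabled Consul; `Namespace: "*"` maps to the `ns=*` query parameter):

```go
package main

import (
	"fmt"
	"log"

	"github.com/hashicorp/consul/api"
)

func main() {
	client, err := api.NewClient(api.DefaultConfig())
	if err != nil {
		log.Fatal(err)
	}
	// Equivalent to GET /v1/connect/intentions?ns=*
	intentions, _, err := client.Connect().Intentions(&api.QueryOptions{
		Namespace: "*",
	})
	if err != nil {
		log.Fatal(err)
	}
	for _, ixn := range intentions {
		fmt.Println(ixn.SourceName, "->", ixn.DestinationName)
	}
}
```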
I also found a tiny dev bug in printing out the current URL during acceptance testing and fixed that up while I was there.