Care must be taken when replacing mesh gateways in the primary
datacenter: if the old addresses become unreachable before the
secondary datacenters receive the new ones, the primary datacenter
as a whole becomes unreachable.
This commit adds documentation covering this class of upgrades.
In 1.13.2 we added a new flag called use_auto_cert to address issues
previously documented in the upgrade guide. Originally there was no way
to disable TLS for gRPC when auto-encrypt was in use, because the mere
presence of auto-encrypt certs enabled TLS for gRPC.
As of 1.13.2, using auto-encrypt certs as the signal to enable TLS for
gRPC is opt-in only, meaning that anyone who upgraded to 1.13 and
relied on that side effect must now enable it explicitly.
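For anyone affected, the opt-in is a single agent TLS setting. A minimal configuration sketch, assuming the `tls.grpc.use_auto_cert` placement documented for 1.13.2:

```hcl
tls {
  grpc {
    # Opt back in to the pre-1.13 behavior: serve TLS on the gRPC port
    # using the auto-encrypt certificate.
    use_auto_cert = true
  }
}
```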
Use a local-storage service, prototyped at https://github.com/LevelbossMike/local-storage-service, to manage local storage usage in an Octane way. By default it does not write to local storage in tests, and it is easy to stub out.
A bug was discovered in the error-handling code of the agent cache subsystem:
1. NotifyCallback calls notifyBlockingQuery, which calls getWithIndex in
   a loop (backing off on error, up to 1 minute)
2. getWithIndex calls fetch if there’s no valid entry in the cache
3. fetch starts a goroutine which calls Fetch on the cache type, waits
   for a while (again with backoff up to 1 minute on error), and then
   calls fetch again to trigger a refresh
The end result is that every minute notifyBlockingQuery spawns a
lineage of goroutines that essentially lives forever.
This PR ensures that the goroutine started by `fetch` cancels any prior
goroutine spawned by the same call site for the same key, as sketched
below.
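A minimal sketch of the cancellation pattern, using hypothetical `fetcher`/`doFetch` names rather than the real cache internals:

```go
package cache

import (
	"context"
	"sync"
)

// fetcher is a simplified stand-in (not Consul's actual cache code) showing
// the shape of the fix: each new fetch for a key cancels the goroutine
// spawned by the previous fetch for that key, so erroring fetches cannot
// accumulate.
type fetcher struct {
	mu      sync.Mutex
	cancels map[string]context.CancelFunc
}

func newFetcher() *fetcher {
	return &fetcher{cancels: make(map[string]context.CancelFunc)}
}

func (f *fetcher) fetch(key string, doFetch func(ctx context.Context)) {
	f.mu.Lock()
	if cancel, ok := f.cancels[key]; ok {
		cancel() // stop the goroutine from the previous fetch of this key
	}
	ctx, cancel := context.WithCancel(context.Background())
	f.cancels[key] = cancel
	f.mu.Unlock()

	// doFetch must return promptly once ctx is cancelled, otherwise the
	// goroutine still leaks.
	go doFetch(ctx)
}
```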
In isolated testing where a cache type was tweaked to error
indefinitely, this patch prevented goroutine counts from skyrocketing.
In practice the leak was masked by #14956 and was only uncovered while
fixing that bug.
With only #14956 fixed, `go test ./agent -run TestAgentConnectCALeafCert_goodNotLocal`
would still fail.
Fixes a `go vet` warning caused by the pragma.DoNotCopy on the protobuf
message type.
Originally I'd hoped we wouldn't need any reflection in the proxycfg hot
path, but it seems proto.Clone is the only supported way to copy a message.
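For illustration, the pattern looks like the following. This is a generic sketch using a well-known-types message, not Consul's actual proxycfg code:

```go
package main

import (
	"fmt"

	"google.golang.org/protobuf/proto"
	"google.golang.org/protobuf/types/known/structpb"
)

// cloneMessage deep-copies a protobuf message. Copying the struct by value
// (e.g. *dst = *src) trips go vet's copylocks check, because generated
// messages embed pragma.DoNotCopy, a zero-length array of sync.Mutex.
func cloneMessage(m proto.Message) proto.Message {
	return proto.Clone(m)
}

func main() {
	src, _ := structpb.NewStruct(map[string]any{"port": 8080})
	dst := cloneMessage(src).(*structpb.Struct)
	fmt.Println(dst.Fields["port"].GetNumberValue()) // 8080
}
```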
Adds a user-configurable rate limiter to proxycfg snapshot delivery,
with a default limit of 250 updates per second.
This addresses a problem observed in our load testing of Consul
Dataplane where updating a "global" resource such as a wildcard
intention or the proxy-defaults config entry could starve the Raft or
Memberlist goroutines of CPU time, causing general cluster instability.
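A sketch of the delivery-side rate limiting, assuming hypothetical `snapshotSender`/`deliver` names and using golang.org/x/time/rate; the real implementation differs:

```go
package proxycfg

import (
	"context"

	"golang.org/x/time/rate"
)

// ConfigSnapshot stands in for the real proxycfg snapshot type.
type ConfigSnapshot struct{ /* ... */ }

// snapshotSender caps how many snapshots per second are pushed to
// watchers, so a "global" change fanning out to every proxy cannot
// monopolize the CPU.
type snapshotSender struct {
	limiter *rate.Limiter
	out     chan<- *ConfigSnapshot
}

// newSnapshotSender allows up to maxPerSecond deliveries; 250 corresponds
// to the default described above.
func newSnapshotSender(maxPerSecond rate.Limit, out chan<- *ConfigSnapshot) *snapshotSender {
	return &snapshotSender{
		limiter: rate.NewLimiter(maxPerSecond, 1),
		out:     out,
	}
}

func (s *snapshotSender) deliver(ctx context.Context, snap *ConfigSnapshot) error {
	// Wait blocks until the limiter grants a token or ctx is cancelled.
	if err := s.limiter.Wait(ctx); err != nil {
		return err
	}
	select {
	case s.out <- snap:
		return nil
	case <-ctx.Done():
		return ctx.Err()
	}
}
```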
Replaces the reflection-based implementation of proxycfg's
ConfigSnapshot.Clone with code generated by deep-copy.
While load testing server-based xDS (for consul-dataplane) we discovered
this method is extremely expensive. The ConfigSnapshot struct, directly
or indirectly, contains a copy of many of the structs in the agent/structs
package, which creates a large graph for copystructure.Copy to traverse
at runtime, on every proxy reconfiguration.
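To show the shape of the change, here is a hand-written sketch of the kind of code a deep-copy generator emits. `upstreamConfig` is a simplified stand-in; the generated ConfigSnapshot.DeepCopy covers far more fields:

```go
package proxycfg

// upstreamConfig is a hypothetical example type with the same kinds of
// fields (scalars, slices, maps) that make reflection-based copying slow.
type upstreamConfig struct {
	Name      string
	Addresses []string
	Meta      map[string]string
}

// DeepCopy mirrors what a deep-copy code generator emits: explicit
// field-by-field copies with loops for slices and maps, so no reflection
// runs at proxy-reconfiguration time.
func (o *upstreamConfig) DeepCopy() *upstreamConfig {
	if o == nil {
		return nil
	}
	cp := *o // copies scalar fields
	if o.Addresses != nil {
		cp.Addresses = make([]string, len(o.Addresses))
		copy(cp.Addresses, o.Addresses)
	}
	if o.Meta != nil {
		cp.Meta = make(map[string]string, len(o.Meta))
		for k, v := range o.Meta {
			cp.Meta[k] = v
		}
	}
	return &cp
}
```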