3adfe86f03
This includes fixes that improve gossip scalability on very large (> 10k node) clusters.

Serf changes:
- Take snapshot disk IO out of the critical path for handling messages. hashicorp/serf#524
- Make snapshot compaction much less aggressive: the old fixed threshold caused snapshots to be constantly compacted (synchronously with request handling) on clusters larger than about 2000 nodes. hashicorp/serf#525

Memberlist changes:
- Prioritize handling alive messages over suspect/dead to improve stability, and handle the queue in LIFO order so we don't act on information that is already stale by the time we process it. hashicorp/memberlist#159
- Limit the number of concurrent pushPull requests being handled at once to 128 (see the sketch after this list). In one test scenario with tens of thousands of servers, channel and lock blocking caused over 3000 pushPulls in flight at once, which ballooned the server's memory: each pushPull held a de-serialised list of all 10k+ known nodes and their tags, for a total of about 60 million objects and roughly 7GB of memory held. While the rest of the fixes here should prevent the same root cause from blocking in the same way, this cap stops any other bug or source of contention from letting pushPull messages stack up and eat resources. hashicorp/memberlist#158
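As a rough illustration of the concurrency cap described above, here is a minimal sketch (not the actual memberlist implementation) of bounding concurrent pushPull handling with a buffered-channel semaphore; the names `maxPushPullConcurrency` and `handlePushPull` are illustrative, only the limit of 128 comes from the change description.

```go
package main

import (
	"fmt"
	"sync"
	"time"
)

const maxPushPullConcurrency = 128 // cap described in the change above

// pushPullSem is a counting semaphore: a handler must take a slot before
// doing the expensive full-state exchange, so at most 128 run concurrently.
var pushPullSem = make(chan struct{}, maxPushPullConcurrency)

// handlePushPull either processes the request or rejects it immediately when
// all slots are busy, instead of letting blocked work queue up without bound.
func handlePushPull(id int) {
	select {
	case pushPullSem <- struct{}{}: // acquire a slot
		defer func() { <-pushPullSem }() // release it on return
	default:
		fmt.Printf("pushPull %d rejected: too many concurrent requests\n", id)
		return
	}

	// Stand-in for decoding the remote node list and merging state.
	time.Sleep(10 * time.Millisecond)
}

func main() {
	// Simulate a burst of incoming pushPull requests; only 128 are
	// processed at any one time, the rest are rejected up front.
	var wg sync.WaitGroup
	for i := 0; i < 500; i++ {
		wg.Add(1)
		go func(i int) {
			defer wg.Done()
			handlePushPull(i)
		}(i)
	}
	wg.Wait()
}
```

Rejecting excess requests rather than queueing them is the point: queued pushPulls each pin a full copy of the cluster state, so refusing early keeps memory bounded even if some other bug causes contention.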
Vendored paths in this commit's tree:
- cloud.google.com/go
- github.com
- golang.org/x
- google.golang.org
- gopkg.in
- vendor.json