When calling `GetDatacentersByDistance()` or `GetDatacentersMap()`, an
incorrect condition was used to display a log message, flooding
Consul's logs.
Example message:
```
[WARN] agent.router: Non-server in server-only area: non_server=myClientNode area=lan
```
This message is only valid for WAN areas, so it is now filtered out to
avoid creating hundreds of log lines per second on our clusters each
time someone calls these methods.
Our logs were flooded by such messages when migrating our Consul servers
from 1.7.7 to 1.8.4.
This fixes issue #8663.
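As a rough sketch of the shape of the fix (simplified, not the exact
Consul source; the helper name is invented for illustration), the
warning is now suppressed for the LAN area:
```go
package main

import "log"

// AreaID mirrors Consul's types.AreaID; "lan" is the local area that
// legitimately contains client nodes.
type AreaID string

const AreaLAN AreaID = "lan"

// warnNonServer sketches the corrected condition: the "Non-server in
// server-only area" warning only makes sense for WAN areas, so it is
// skipped for the LAN area instead of firing on every call to
// GetDatacentersByDistance()/GetDatacentersMap().
func warnNonServer(isServer bool, node string, area AreaID) {
	if isServer || area == AreaLAN {
		return
	}
	log.Printf("[WARN] agent.router: Non-server in server-only area: non_server=%s area=%s",
		node, area)
}

func main() {
	warnNonServer(false, "myClientNode", AreaLAN) // silent after the fix
	warnNonServer(false, "myClientNode", "wan")   // still warned about
}
```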
Occasionally this test would flake. The flakes were fixed by:
1. Stopping the service and retrying the check on metrics. This way we
   also include active_streams going to 0 in the metric calls.
2. Using a reference to the global Metrics (see the sketch below this
   list). This way, when other tests have background goroutines that
   are still shutting down, they won't emit metrics to the metric
   instance with the fake Sink. The stats test can patch the local
   reference to the global, so the existing statHandlers will continue
   to emit to the global, while the stats test sends all its metrics
   to the replacement.
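A minimal sketch of technique 2, assuming a simplified metrics client
(Consul's real code uses armon/go-metrics; all names here are
illustrative):
```go
package stats

import "testing"

// Sink and Metrics stand in for the real metrics client; only what
// this sketch needs is defined.
type Sink interface{ IncrCounter(name string, val float32) }

type nullSink struct{}

func (nullSink) IncrCounter(string, float32) {}

type Metrics struct{ Sink Sink }

// globalMetrics is the process-wide instance. Background goroutines
// left over from other tests may still be emitting to it while they
// shut down.
var globalMetrics = &Metrics{Sink: nullSink{}}

// getMetrics is the local reference to the global. Handlers resolve
// it once at construction, so patching it only affects handlers
// created afterwards; pre-existing handlers keep using the global.
var getMetrics = func() *Metrics { return globalMetrics }

type statsHandler struct{ metrics *Metrics }

func newStatsHandler() *statsHandler {
	return &statsHandler{metrics: getMetrics()}
}

// fakeSink records emissions so the test can assert on them.
type fakeSink struct{ counts map[string]float32 }

func (f *fakeSink) IncrCounter(name string, val float32) { f.counts[name] += val }

func TestStatsHandler(t *testing.T) {
	fake := &Metrics{Sink: &fakeSink{counts: map[string]float32{}}}
	orig := getMetrics
	getMetrics = func() *Metrics { return fake }
	t.Cleanup(func() { getMetrics = orig })

	h := newStatsHandler()
	h.metrics.Sink.IncrCounter("grpc.server.active_streams", 1)
	// Handlers created before the patch still hold globalMetrics, so
	// nothing they emit lands in the fake sink.
}
```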
Load balancing policies are configured in service-resolvers. All Envoy load balancing algorithms are currently supported, including least_request, round_robin, random, maglev, and ring_hash.
At the moment hash-based load balancing configuration is not applied at mesh gateways since they cannot decrypt traffic to inspect HTTP attributes like headers.
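For reference, a hash-based policy in a service-resolver might look
like the following sketch (the service name and header are
placeholders):
```hcl
Kind = "service-resolver"
Name = "web"

LoadBalancer = {
  Policy = "ring_hash"
  RingHashConfig = {
    MinimumRingSize = 1024
  }
  # Hash the x-user-id header; applied at sidecar proxies, but not at
  # mesh gateways, which cannot inspect HTTP attributes.
  HashPolicies = [
    {
      Field      = "header"
      FieldValue = "x-user-id"
    }
  ]
}
```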
During development an HTTP request will pause for 1 minute ONLY when an
`?index` parameter is set. This gives a realistic emulation of blocking
queries. During testing we can change this latency when we are testing
blocking queries, which we do in numerous places.
A problem can arise during testing on a very slow machine.
If you are not testing blocking queries, and therefore have not set a
latency to test with, a normal page can finish asserting during a test
yet not tear down before a further blocking query request is made. That
blocking query then uses the default latency, which causes the page to
hang for 1 minute and in turn causes the test to time out.
This only seems to happen on a very slow system, but it does potentially
explain why we occasionally see the odd flaky test popping up.
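The mechanism is simple to picture; here is a minimal sketch of the
idea in Go (the UI's actual mock API is JavaScript, and every name
below is invented for illustration):
```go
package main

import (
	"fmt"
	"log"
	"net/http"
	"time"
)

// blockingLatency is how long a request with ?index set is held open,
// emulating a blocking query. Development uses the long default; tests
// that exercise blocking queries override it with something short. A
// test that never lowers it but still triggers an ?index request sits
// in the Sleep for the full minute, producing the timeout described
// above.
var blockingLatency = 1 * time.Minute

func mockAPI(w http.ResponseWriter, r *http.Request) {
	if r.URL.Query().Get("index") != "" {
		// Emulate the server holding the connection open until the
		// watched data changes or the wait elapses.
		time.Sleep(blockingLatency)
	}
	fmt.Fprintln(w, `{"mock": "response"}`)
}

func main() {
	http.HandleFunc("/", mockAPI)
	log.Fatal(http.ListenAndServe("127.0.0.1:8500", nil))
}
```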
* k8s > ambassador integration moved to k8s > service mesh > ambassador integration
* k8s > get started > overview moved to k8s > get started > install with
helm chart
* k8s > helm chart reference renamed to helm chart configuration