This field was always read by the same function that populated it, so it
does not need to be a field. Passing the value as an argument to the
functions that need it makes it more obvious where the value comes from,
and also significantly reduces the scope of the variable.
[The documentation for context](https://golang.org/pkg/context/)
recommends not storing context in a struct field:
> Do not store Contexts inside a struct type; instead, pass a Context
> explicitly to each function that needs it. The Context should be the
> first parameter, typically named ctx...
Sometimes there are good reasons to not follow this recommendation, but
in this case it seems easy enough to follow.
Also moved ctx to be the first argument in one of the function calls,
following the same recommendation.
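A minimal sketch of the pattern, using illustrative names rather than the
actual types this change touched:

```go
package main

import (
	"context"
	"fmt"
)

// doRPC stands in for whatever call actually consumes the context.
func doRPC(ctx context.Context, key string) error {
	select {
	case <-ctx.Done():
		return ctx.Err()
	default:
		fmt.Println("fetched", key)
		return nil
	}
}

// Before: storing the context in a struct field hides where it came
// from and extends its scope to the lifetime of the struct.
type fetcher struct {
	ctx context.Context
}

func (f *fetcher) fetch(key string) error {
	return doRPC(f.ctx, key)
}

// After: the context is passed explicitly as the first parameter, so
// its origin and lifetime are obvious at every call site.
func fetch(ctx context.Context, key string) error {
	return doRPC(ctx, key)
}

func main() {
	_ = (&fetcher{ctx: context.Background()}).fetch("a") // before
	_ = fetch(context.Background(), "b")                 // after
}
```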
Blocking queries will still be uncancellable (that cannot be helped until
we get rid of net/rpc). However, this makes it so that callers of
getWithIndex (for example, a cache Notify goroutine) can cancel the outer
routine. Previously it would keep issuing more blocking queries until the
result state actually changed.
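A sketch of the cancellation pattern this enables; blockingQuery and
notify here are illustrative stand-ins, not Consul's actual getWithIndex
or cache code:

```go
package main

import (
	"context"
	"fmt"
	"time"
)

// blockingQuery stands in for a net/rpc-backed blocking query. The call
// itself cannot be interrupted, which is the limitation noted above.
func blockingQuery(index uint64) (uint64, string) {
	time.Sleep(100 * time.Millisecond)
	return index + 1, "state"
}

// notify keeps issuing blocking queries, but checks the context between
// iterations so the outer goroutine can be cancelled even if the result
// state never changes.
func notify(ctx context.Context, ch chan<- string) {
	var index uint64
	for {
		if ctx.Err() != nil {
			return // cancelled between blocking queries
		}
		var state string
		index, state = blockingQuery(index)
		select {
		case ch <- state:
		case <-ctx.Done():
			return
		}
	}
}

func main() {
	ctx, cancel := context.WithCancel(context.Background())
	ch := make(chan string)
	go notify(ctx, ch)
	fmt.Println(<-ch)
	cancel() // stops the loop before it issues another query
	time.Sleep(200 * time.Millisecond)
}
```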
Also handle []interface{} in HookWeakDecodeFromSlice
Without this change, only the top-level []map[string]interface{} will be
unpacked as a single item. With this change, any nested config will be
unpacked as well.
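A simplified sketch of the idea using github.com/mitchellh/mapstructure;
this is illustrative, not the actual HookWeakDecodeFromSlice
implementation:

```go
package main

import (
	"fmt"
	"reflect"

	"github.com/mitchellh/mapstructure"
)

// weakDecodeFromSlice unpacks a one-element slice when the target is
// not a slice. Handling []interface{} as well as
// []map[string]interface{} is what lets nested config be unpacked.
func weakDecodeFromSlice(from, to reflect.Kind, data interface{}) (interface{}, error) {
	if from != reflect.Slice || to == reflect.Slice {
		return data, nil
	}
	switch v := data.(type) {
	case []map[string]interface{}:
		if len(v) == 1 {
			return v[0], nil
		}
	case []interface{}: // the case added by this change
		if len(v) == 1 {
			return v[0], nil
		}
	}
	return data, nil
}

type Config struct {
	Name string
}

func main() {
	var cfg Config
	d, err := mapstructure.NewDecoder(&mapstructure.DecoderConfig{
		DecodeHook: weakDecodeFromSlice,
		Result:     &cfg,
	})
	if err != nil {
		panic(err)
	}
	// HCL-style input where a value was parsed as a one-element slice.
	if err := d.Decode(map[string]interface{}{
		"name": []interface{}{"web"},
	}); err != nil {
		panic(err)
	}
	fmt.Println(cfg.Name) // web
}
```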
Co-authored-by: Matt Keeler <mkeeler@users.noreply.github.com>
Currently, when passing hostname clusters to Envoy, we set each service instance registered with Consul as an LbEndpoint for the cluster.
However, Envoy only allows a single endpoint per LOGICAL_DNS cluster:
[2020-06-04 18:32:34.094][1][warning][config] [source/common/config/grpc_subscription_impl.cc:87] gRPC config for type.googleapis.com/envoy.api.v2.Cluster rejected: Error adding/updating cluster(s) dc2.internal.ddd90499-9b47-91c5-4616-c0cbf0fc358a.consul: LOGICAL_DNS clusters must have a single locality_lb_endpoint and a single lb_endpoint, server.dc2.consul: LOGICAL_DNS clusters must have a single locality_lb_endpoint and a single lb_endpoint
Envoy currently handles this gracefully by picking only one of the endpoints. However, we should avoid passing multiple endpoints to prevent these warning logs.
This PR:
* Ensures we only pass one endpoint, which is tied to one service instance.
* Prefers sending an endpoint which is marked as healthy by Consul.
* If no endpoints are healthy, emits a warning and skips the cluster.
* If multiple unique hostnames are spread across service instances, emits a warning and tells the user which one will be resolved (see the sketch after this list).
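A sketch of that selection logic with illustrative types (not Consul's
actual structures):

```go
package main

import "fmt"

// instance is an illustrative stand-in for a Consul service instance.
type instance struct {
	Hostname string
	Healthy  bool
}

// pickEndpoint mirrors the rules above: prefer a healthy instance,
// warn and skip the cluster when none are healthy, and warn when
// instances disagree on the hostname.
func pickEndpoint(cluster string, instances []instance) (instance, bool) {
	hostnames := map[string]struct{}{}
	var chosen *instance
	for i := range instances {
		hostnames[instances[i].Hostname] = struct{}{}
		if instances[i].Healthy && chosen == nil {
			chosen = &instances[i]
		}
	}
	if chosen == nil {
		fmt.Printf("[WARN] no healthy instances for %s; skipping cluster\n", cluster)
		return instance{}, false
	}
	if len(hostnames) > 1 {
		fmt.Printf("[WARN] multiple hostnames for %s; resolving %s\n", cluster, chosen.Hostname)
	}
	return *chosen, true
}

func main() {
	ep, ok := pickEndpoint("dc2.internal", []instance{
		{Hostname: "a.example.com", Healthy: false},
		{Hostname: "b.example.com", Healthy: true},
	})
	fmt.Println(ep, ok) // {b.example.com true} true
}
```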
* Add Health Checks and update Tooltips in Ingress Upstreams
* Update Tooltip in Proxy Info tab Upstreams
* Add Tooltips to Proxy Info tab Exposed Paths
* Add Health Checks with Tooltips to Service List page
This excludes any /components/**/pageobject.js files from our production
builds, which means we can co-locate all of our component page objects
(and selectors) with the components themselves.
In discussion with the team, it was pointed out that query parameters
tend to be a filtering mechanism, and that semantically the
"/v1/health/connect" endpoint should return "all healthy connect-enabled
endpoints (e.g. could be sidecar proxies or native instances) for this
service so I can connect with mTLS".
That does not fit an ingress gateway, so we remove the query parameter
and add a new endpoint, "/v1/health/ingress", that semantically means
"all the healthy ingress gateway instances that I can connect to without
mTLS in order to access this connect-enabled service".
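For illustration, assuming the new endpoint takes the service name as a
path segment (like the existing "/v1/health/connect/:service" endpoint)
and a hypothetical service named "web", a query against a local agent
could look like:

```go
package main

import (
	"fmt"
	"io"
	"net/http"
)

func main() {
	// Ask the local agent for the healthy ingress gateway instances
	// through which the hypothetical "web" service can be reached.
	resp, err := http.Get("http://127.0.0.1:8500/v1/health/ingress/web")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	body, err := io.ReadAll(resp.Body)
	if err != nil {
		panic(err)
	}
	fmt.Println(string(body))
}
```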
* ui: Reduce discovery-chain log spam
Currently the only way that the UI can know whether connect is enabled
is whether we get 500 errors from certain endpoints.
One of these endpoints we already use, so as well as recovering from a
500 error, we also remember that connect is disabled for the rest of the
page 'session' (i.e. until the page is refreshed) and make no further
HTTP requests to that endpoint for that specific datacenter.
This means that log spam is reduced to one log per page refresh per
datacenter instead of one log per service navigation.
Longer term we'll need some way to dynamically discover whether connect
is enabled per datacenter without relying on something that adds error
logs to Consul.
Get the tertiary links to wrap below buttons
Adjust color/spacing of tertiary via override
Remove overrides, implement custom link
Extract arrow icon to file
Increase top margin for third link
Apply Brandon's fixes
Co-authored-by: Brandon Romano <BrandonRRomano@gmail.com>