* Update k8s sync docs
- remove docs that said that for a NodePort service we register each
instance on a node with the same node name; we instead register each
instance onto the k8s-sync node
- add docs describing which ports and IPs are used
This is a small step toward allowing Agent to accept its dependencies
instead of creating them in New.
There were two fields in autoconfig.Config that were used exclusively
to load config. These were replaced with a single function, allowing us
to move LoadConfig back to the config package.
Also removed the WithX functions for building a Config. Since these
were simple assignments, it appeared we were not getting much value
from them.
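A hedged sketch of the shape of this change; the names below are
approximations, not the exact Consul types:

    package autoconfig

    // Source and RuntimeConfig are placeholders standing in for the
    // config package's types.
    type Source struct{}
    type RuntimeConfig struct{}

    // LoaderFunc replaces the two former config-loading fields with a
    // single injected function, letting LoadConfig live in the config
    // package again.
    type LoaderFunc func(source Source) (*RuntimeConfig, error)

    type Config struct {
    	// Loader is how Config now obtains configuration; Config no
    	// longer carries options describing how config is built.
    	Loader LoaderFunc

    	// ... other dependencies, accepted rather than constructed.
    }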
Making these functions allows us to clean up how an agent is
initialized. They only make use of a config and a logger, so they do
not need to be agent methods.
Also cleaned up the tests to use t.Run and require.
Fixes #8466
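A hedged sketch of both points; setupLogging and its error are
hypothetical stand-ins, not the actual functions:

    package agent

    import (
    	"errors"
    	"testing"

    	"github.com/hashicorp/go-hclog"
    	"github.com/stretchr/testify/require"
    )

    type Config struct{ /* ... */ }

    // setupLogging stands in for the functions described above: it takes
    // only the config and logger it needs, so it need not be an Agent
    // method and can run before an Agent exists.
    func setupLogging(cfg *Config, logger hclog.Logger) error {
    	if cfg == nil {
    		return errors.New("config is required")
    	}
    	// ... configure logging from cfg ...
    	return nil
    }

    func TestSetupLogging(t *testing.T) {
    	// Tests now use t.Run subtests and require assertions.
    	t.Run("nil config", func(t *testing.T) {
    		require.Error(t, setupLogging(nil, hclog.NewNullLogger()))
    	})
    	t.Run("empty config", func(t *testing.T) {
    		require.NoError(t, setupLogging(&Config{}, hclog.NewNullLogger()))
    	})
    }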
Since Consul 1.8.0 there has been a bug in how ingress gateway protocol
compatibility was enforced. At the point in time that an ingress-gateway
config entry was modified, the discovery chain for each upstream was
checked to ensure the ingress gateway protocol matched. Unfortunately,
future modifications of other config entries were not validated against
existing ingress-gateway definitions, such as:
1. create tcp ingress-gateway pointing to 'api' (ok)
2. create service-defaults for 'api' setting protocol=http (worked, but not ok)
3. create service-splitter or service-router for 'api' (worked, but caused an agent panic)
If you were to do these in a different order, it would fail without a
crash:
1. create service-defaults for 'api' setting protocol=http (ok)
2. create service-splitter or service-router for 'api' (ok)
3. create tcp ingress-gateway pointing to 'api' (fail with message about
protocol mismatch)
This PR introduces the missing validation. The two new behaviors are:
1. create tcp ingress-gateway pointing to 'api' (ok)
2. (NEW) create service-defaults for 'api' setting protocol=http ("ok" for back compat)
3. (NEW) create service-splitter or service-router for 'api' (fail with
message about protocol mismatch)
In consideration of any existing users who may inadvertently have
fallen into item (2) above, that is now officially a valid
configuration to be in. For anyone falling into item (3) above: while
you can no longer use the API to manufacture that scenario, anyone with
old (now invalid) data will still have the agent use it just enough to
generate a new agent/proxycfg error message rather than a panic.
Unfortunately we just don't have enough information to properly fix the
config entries.
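As a hedged illustration of the new behavior using the Go API client
(github.com/hashicorp/consul/api); the gateway name, listener port, and
split weight are invented for the example:

    package main

    import (
    	"log"

    	"github.com/hashicorp/consul/api"
    )

    func main() {
    	client, err := api.NewClient(api.DefaultConfig())
    	if err != nil {
    		log.Fatal(err)
    	}
    	entries := client.ConfigEntries()

    	// 1. tcp ingress-gateway pointing to 'api' (ok)
    	mustSet(entries, &api.IngressGatewayConfigEntry{
    		Kind: api.IngressGateway,
    		Name: "ingress",
    		Listeners: []api.IngressListener{{
    			Port:     8080,
    			Protocol: "tcp",
    			Services: []api.IngressService{{Name: "api"}},
    		}},
    	})

    	// 2. service-defaults setting protocol=http ("ok" for back compat)
    	mustSet(entries, &api.ServiceConfigEntry{
    		Kind:     api.ServiceDefaults,
    		Name:     "api",
    		Protocol: "http",
    	})

    	// 3. service-splitter for 'api': now rejected with a protocol
    	// mismatch error instead of causing an agent panic.
    	_, _, err = entries.Set(&api.ServiceSplitterConfigEntry{
    		Kind:   api.ServiceSplitter,
    		Name:   "api",
    		Splits: []api.ServiceSplit{{Weight: 100, Service: "api"}},
    	}, nil)
    	log.Printf("expected protocol mismatch: %v", err)
    }

    func mustSet(entries *api.ConfigEntries, e api.ConfigEntry) {
    	if _, _, err := entries.Set(e, nil); err != nil {
    		log.Fatal(err)
    	}
    }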
* ui: Reduce reconnection attempts on disconnection
The UI will attempt to reconnect/retry a blocking query to Consul after
a disconnection in certain circumstances.
1. On receipt of a 5xx error (used to keep blocking queries running
through reverse proxies that have lower timeouts than Consul itself)
2. When a user switches to a different tab and back again
3. When the connection to Consul is dropped entirely (when Consul itself
has exited)
In the last case the retry attempts were not using a 3 second interval
between attempts like the first case does.
This commit changes the last case to use the same 3 second pause as the
first case.
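As a conceptual sketch only (the actual implementation lives in the
Ember UI, not Go), the shared fixed-interval retry behaves like this;
runBlockingQuery and its signature are invented for illustration:

    package ui

    import (
    	"context"
    	"time"
    )

    const retryInterval = 3 * time.Second

    // runBlockingQuery re-issues a blocking query until the context is
    // cancelled, pausing a fixed 3 seconds after any failure, whether a
    // 5xx from an intermediate proxy or a dropped connection.
    func runBlockingQuery(ctx context.Context, query func(context.Context) error) {
    	for ctx.Err() == nil {
    		if err := query(ctx); err != nil {
    			select {
    			case <-ctx.Done():
    			case <-time.After(retryInterval):
    			}
    			continue
    		}
    		// On success the blocking query is re-issued immediately.
    	}
    }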
* ui: Switch selects to use more HTML-like approach for optgroups
* Add KV comparator
* Use new option/optgroup approach for sort/select
* Fix up tests for new order of menu items
* changes some functions to return data instead of modifying pointer
arguments
* renames globalRPC() to keyringRPCs() to make its purpose clearer
* restructures KeyringOperation() to make it more understandable
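A hedged sketch of the return-data-instead-of-mutating-pointers change;
the types and names are illustrative, not the actual keyring code:

    package consul

    type keyringResponse struct{ Keys map[string]int }

    // Before: callers passed in a pointer that the function appended to.
    func keyringRPCsOld(resps *[]keyringResponse) error {
    	*resps = append(*resps, keyringResponse{})
    	return nil
    }

    // After: the function returns its results, which makes the data
    // flow explicit and the function easier to test in isolation.
    func keyringRPCs() ([]keyringResponse, error) {
    	return []keyringResponse{{}}, nil
    }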
* Add sorting to namespaces
* ui: Fix up default namespace no delete test (#8467)
Co-authored-by: John Cowen <johncowen@users.noreply.github.com>
* ui: Move more menu subcomponents deeper down into popovermenu
* ui: Simplify aria-menu component+remove auto menu close on route change
* Add ember-string-fns
* Use new PopoverMenu sub components and fix up tests
* Fix up wrong closing let
* Remove dcs from the service show page now we have it in the navigation
* update bindata_assetfs.go
* Release v1.8.2
* Putting source back into Dev Mode
* changelog: add entries for 1.7.6, 1.7.5 and 1.6.7
Co-authored-by: hashicorp-ci <hashicorp-ci@users.noreply.github.com>
NotifyShutdown was only used for testing. Now that t.Cleanup exists, we
can use that instead of attaching cleanup to the Server shutdown.
The Autopilot test which used NotifyShutdown doesn't need this
notification because Shutdown is synchronous. Waiting for the function
to return is equivalent.
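A hedged sketch of the resulting pattern; newTestServer and
testServerConfig stand in for the real constructors and are not the
actual helper names:

    package consul

    import (
    	"testing"

    	"github.com/stretchr/testify/require"
    )

    // newTestServer registers shutdown via t.Cleanup instead of a
    // NotifyShutdown channel.
    func newTestServer(t *testing.T) *Server {
    	t.Helper()
    	srv, err := NewServer(testServerConfig(t))
    	require.NoError(t, err)
    	t.Cleanup(func() {
    		// Shutdown is synchronous; when it returns the server has
    		// stopped, so no extra notification is needed.
    		srv.Shutdown()
    	})
    	return srv
    }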
AutoConfig will generate local tokens for clients, and the ability to
use local tokens is gated on token replication being enabled and a
replication token being configured. We therefore already have a hard
requirement on token replication being enabled; this commit just
surfaces that to the operator instead of leaving them to discern the
issue from RPC errors.
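A hedged sketch of the up-front check this implies; the field names are
illustrative, not Consul's actual runtime config:

    package agent

    import "fmt"

    type runtimeACLs struct {
    	AutoConfigAuthorizationEnabled bool
    	TokenReplicationEnabled        bool
    	ReplicationToken               string
    }

    // validateAutoConfig surfaces the token replication requirement at
    // startup rather than letting it manifest as opaque RPC errors.
    func validateAutoConfig(c runtimeACLs) error {
    	if !c.AutoConfigAuthorizationEnabled {
    		return nil
    	}
    	if !c.TokenReplicationEnabled {
    		return fmt.Errorf("AutoConfig requires ACL token replication to be enabled")
    	}
    	if c.ReplicationToken == "" {
    		return fmt.Errorf("AutoConfig requires an ACL replication token to be configured")
    	}
    	return nil
    }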
Occasionally we are seeing the go-test-api job time out at 10 minutes.
Looking at the stack trace I saw the following:
1. Lots of tests blocked on server.Stop in NewTestServerConfigT. This
suggests that SIGINT is being sent to the server, but the server is
not properly shutting down.
2. Over 20k goroutines that look like this:
goroutine 16355 [select, 8 minutes]:
net/http.(*persistConn).readLoop(0xc004270240)
/usr/local/go/src/net/http/transport.go:2099 +0x99e
created by net/http.(*Transport).dialConn
/usr/local/go/src/net/http/transport.go:1647 +0xc56
Issue 1 seems to be the main problem, but debugging it directly is not
possible because our buffered logs do not get sent when the tests time
out. To mitigate this problem I've added a timeout to cmd.Wait() to
force kill the process and return an error.
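A minimal sketch of that mitigation, assuming a helper along these
lines (the name and timeout value are illustrative):

    package testutil

    import (
    	"fmt"
    	"os/exec"
    	"time"
    )

    // waitWithTimeout bounds cmd.Wait and force kills the process on
    // expiry, so the test surfaces an error instead of hanging until
    // the CI job's 10 minute limit.
    func waitWithTimeout(cmd *exec.Cmd, timeout time.Duration) error {
    	done := make(chan error, 1)
    	go func() { done <- cmd.Wait() }()
    	select {
    	case err := <-done:
    		return err
    	case <-time.After(timeout):
    		_ = cmd.Process.Kill()
    		return fmt.Errorf("timeout: process %d did not exit, killed", cmd.Process.Pid)
    	}
    }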
Unfortunately because we retry this operation, we still may not see the
cause because the next attempt will likely pass. I'm tempted to remove
the retry around NewTestServerConfigT.
Issue 2 seems to be caused by not closing the response body. Since the
request is performed many times in a loop, many goroutines are created
and are not closed until the response body is closed.
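A sketch of the fix for issue 2; the helper name, URL, and retry bounds
are illustrative. Because the request runs in a loop, every response
body must be drained and closed, or each iteration leaks the net/http
readLoop goroutine seen in the stack trace above:

    package testutil

    import (
    	"fmt"
    	"io"
    	"net/http"
    	"time"
    )

    func waitForAPI(url string) error {
    	for i := 0; i < 100; i++ {
    		resp, err := http.Get(url)
    		if err != nil {
    			time.Sleep(100 * time.Millisecond)
    			continue
    		}
    		ok := resp.StatusCode == http.StatusOK
    		_, _ = io.Copy(io.Discard, resp.Body) // drain so the connection is reused
    		resp.Body.Close()                     // the missing call behind the goroutine pile-up
    		if ok {
    			return nil
    		}
    		time.Sleep(100 * time.Millisecond)
    	}
    	return fmt.Errorf("%s never returned 200", url)
    }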