* Improve startup message to avoid confusing users when no error occurs
Users who are not very familiar with Consul are sometimes confused
by this error message at startup:
`[INFO] agent: (LAN) joined: 1 Err: <nil>`
Having `Err: <nil>` looks odd to many users, so I propose the
following instead:
* Success: `[INFO] agent: (LAN) joined: 1`
* Error: `[WARN] agent: (LAN) couldn't join: %d Err: ERROR`
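A minimal sketch of the proposed behaviour, with illustrative names rather than Consul's actual variables: the error is only mentioned when the join actually failed.

```go
// Hypothetical sketch: log at INFO without the error on success, and at
// WARN with the error on failure.
package main

import (
	"errors"
	"log"
	"os"
)

func logJoinResult(logger *log.Logger, joined int, err error) {
	if err == nil {
		logger.Printf("[INFO] agent: (LAN) joined: %d", joined)
		return
	}
	logger.Printf("[WARN] agent: (LAN) couldn't join: %d Err: %v", joined, err)
}

func main() {
	logger := log.New(os.Stderr, "", log.LstdFlags)
	logJoinResult(logger, 1, nil)
	logJoinResult(logger, 0, errors.New("connection refused"))
}
```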
Roles are named and can express the same bundle of permissions that can
currently be assigned to a Token (lists of Policies and Service
Identities). The difference is that a Role is not itself a bearer
token, but just another entity that can be tied to a Token.
This lets an operator potentially curate a set of smaller reusable
Policies and compose them together into reusable Roles, rather than
repeating that same list of Policies on every Token that needs
similar permissions.
This also refactors the ACL replication code to be semi-generic to avoid
three copies of near-identical code.
* Move the watch package into the api module
It was already just a thin wrapper around the API anyway. The biggest change was to the testing: instead of using a test agent directly from the agent package, it now uses the binary on the PATH just like the other API tests.
The other big change was to fix up the Connect-based watch tests so that we didn't need to pull in the connect package (and therefore all of Consul).
The DNS config parameters `recursors` and `dns_config.*` are now hot
reloaded on SIGHUP or `consul reload` and do not need an agent restart
to be modified.
Config is stored in an atomic.Value and loaded at the beginning of each
request. Reloading only affects requests that start _after_ the
reload. Ongoing requests are not affected. To match the current
behavior the recursor handler is loaded and unloaded as needed on config
reload.
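A minimal sketch of this pattern, assuming a simplified dnsConfig type (not Consul's actual structs): the active config lives in an atomic.Value, each request loads it once at the start, and a reload simply stores a new value so only requests that begin afterwards see it.

```go
package main

import (
	"fmt"
	"sync/atomic"
)

type dnsConfig struct {
	Recursors []string
}

type dnsServer struct {
	cfg atomic.Value // holds *dnsConfig
}

// ReloadConfig swaps in a new config; ongoing requests keep the one they
// already loaded.
func (s *dnsServer) ReloadConfig(c *dnsConfig) {
	s.cfg.Store(c)
}

func (s *dnsServer) handleRequest(name string) {
	cfg := s.cfg.Load().(*dnsConfig) // loaded once at the start of each request
	fmt.Printf("resolving %s with recursors %v\n", name, cfg.Recursors)
}

func main() {
	s := &dnsServer{}
	s.ReloadConfig(&dnsConfig{Recursors: []string{"8.8.8.8:53"}})
	s.handleRequest("example.com")
	s.ReloadConfig(&dnsConfig{Recursors: []string{"1.1.1.1:53"}})
	s.handleRequest("example.com")
}
```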
This PR introduces reloading of the TLS configuration. Consul will now be able to reload the TLS configuration, which previously required a restart. It is not yet possible to turn TLS on or off with these changes; only when TLS is already turned on can the configuration be reloaded, most importantly the certificates and CAs.
Prevent race between register and deregister requests by saving them
together in the local state on registration.
Also adds more cleanup in case of failure when registering services
/ checks.
This PR adds two features which will be useful for operators when ACLs are in use.
1. Tokens set in configuration files are now reloadable.
2. If `acl.enable_token_persistence` is set to `true` in the configuration, tokens set via the `v1/agent/token` endpoint are now persisted to disk and loaded when the agent starts (or during configuration reload)
Note that token persistence is opt-in so our users who do not want tokens on the local disk will see no change.
Some other secondary changes:
* Refactored a bunch of places where the replication token is retrieved from the token store. This token isn't used only for replicating ACLs anymore, and it is now named accordingly.
* Allowed better paths in the `v1/agent/token/` API. Instead of paths like `v1/agent/token/acl_replication_token`, the path can now be just `v1/agent/token/replication` (see the example after this list). The old paths remain valid.
* Added a couple of new API functions to set tokens via the new paths. Deprecated the old ones and pointed them at the new names. The names are also generally better and don't imply that what you are setting is for ACLs, but rather that you are setting ACL tokens. There is a minor semantic difference there, especially for the replication token since, again, it's no longer used only for ACL token/policy replication. The new functions will detect 404s and fall back to using the older token paths when talking to pre-1.4.3 agents.
* Docs updated to reflect the API additions and to show using the new endpoints.
* Updated the ACL CLI set-agent-tokens command to use the non-deprecated APIs.
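As an illustration of the shorter path, here is a hedged example of setting the replication token via `v1/agent/token/replication` using plain net/http; the request body shape and the placeholder address/token values are assumptions, not taken verbatim from the PR.

```go
package main

import (
	"bytes"
	"fmt"
	"net/http"
)

func main() {
	body := bytes.NewBufferString(`{"Token": "00000000-0000-0000-0000-000000000000"}`)
	req, err := http.NewRequest(http.MethodPut,
		"http://127.0.0.1:8500/v1/agent/token/replication", body)
	if err != nil {
		panic(err)
	}
	req.Header.Set("Content-Type", "application/json")
	req.Header.Set("X-Consul-Token", "management-token-here") // placeholder

	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	fmt.Println("status:", resp.Status)
}
```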
This PR is based on #5366 and continues to centralise the tls configuration in order to be reloadable eventually!
This PR is another refactoring. No existing tests are changed beyond calling other functions or cosmetic adjustments. I added a bunch of new tests, even though they might be redundant.
In order to be able to reload the TLS configuration, we need one way to generate the different configurations.
This PR introduces a `tlsutil.Configurator` which holds a `tlsutil.Config`. Afterwards it is responsible for rendering every `tls.Config`. In this particular PR I moved `IncomingHTTPSConfig`, `IncomingTLSConfig`, and `OutgoingTLSWrapper` into `tlsutil.Configurator`.
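A much-simplified sketch of that idea, with illustrative fields (the real `tlsutil.Config` has many more options): one Configurator holds the agent-level settings and is the single place that renders `*tls.Config` values such as the incoming HTTPS config.

```go
package main

import (
	"crypto/tls"
	"sync"
)

type Config struct {
	CertFile       string
	KeyFile        string
	VerifyIncoming bool
}

type Configurator struct {
	mu   sync.RWMutex
	base Config
}

// Update swaps the stored config in one place, which is what makes a later
// reload possible.
func (c *Configurator) Update(newCfg Config) {
	c.mu.Lock()
	defer c.mu.Unlock()
	c.base = newCfg
}

// IncomingHTTPSConfig renders a *tls.Config for the HTTPS listener from the
// currently stored settings.
func (c *Configurator) IncomingHTTPSConfig() (*tls.Config, error) {
	c.mu.RLock()
	defer c.mu.RUnlock()
	cert, err := tls.LoadX509KeyPair(c.base.CertFile, c.base.KeyFile)
	if err != nil {
		return nil, err
	}
	cfg := &tls.Config{Certificates: []tls.Certificate{cert}}
	if c.base.VerifyIncoming {
		cfg.ClientAuth = tls.RequireAndVerifyClientCert
	}
	return cfg, nil
}

func main() {
	c := &Configurator{}
	c.Update(Config{CertFile: "agent.pem", KeyFile: "agent-key.pem"})
	_, _ = c.IncomingHTTPSConfig() // would succeed once real files exist
}
```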
This PR is a pure refactoring - not a single feature added. And not a single test added. I only slightly modified existing tests as necessary.
Adds two new configuration parameters "dns_config.use_cache" and
"dns_config.cache_max_age" controlling how DNS requests use the agent
cache when querying servers.
Currently the gRPC server assumes that if you have configured TLS
certs on the agent (for RPC), you want gRPC to be encrypted.
If gRPC is bound to localhost this can be overkill. For the API we
let the user choose to offer HTTP or HTTPS API endpoints
independently of the TLS cert configuration for a similar reason.
This setting lets someone encrypt RPC traffic with TLS but avoid
encrypting local gRPC traffic if that is what they want to do, by only
enabling TLS on gRPC when the HTTPS API port is enabled.
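In other words (an illustrative snippet, not the actual agent code):

```go
package main

import "fmt"

// gRPC only uses TLS when certificates are configured AND the HTTPS API
// port is enabled, so a localhost-only gRPC listener can stay plaintext.
func useTLSForGRPC(tlsCertsConfigured bool, httpsPort int) bool {
	return tlsCertsConfigured && httpsPort > 0
}

func main() {
	fmt.Println(useTLSForGRPC(true, 8501)) // true: encrypt gRPC
	fmt.Println(useTLSForGRPC(true, 0))    // false: local gRPC stays plaintext
}
```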
* Support rate limiting and concurrency limiting CSR requests on servers; handle CA rotations gracefully with jitter and backoff-on-rate-limit in the client (see the sketch after this list)
* Add CSR rate limiting docs
* Fix config naming and add tests for new CA configs
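A hedged sketch of the server-side rate limiting from the first item above, using golang.org/x/time/rate, which is one natural way to implement it; the names and configuration knobs here are illustrative, not necessarily Consul's.

```go
package main

import (
	"errors"
	"fmt"

	"golang.org/x/time/rate"
)

var errRateLimited = errors.New("connect CSR rate limit reached, try again later")

type csrServer struct {
	limiter *rate.Limiter
}

func newCSRServer(csrsPerSecond float64, burst int) *csrServer {
	return &csrServer{limiter: rate.NewLimiter(rate.Limit(csrsPerSecond), burst)}
}

// SignCSR rejects immediately instead of queueing; the client is expected to
// back off with jitter and retry, as described above.
func (s *csrServer) SignCSR(csr string) error {
	if !s.limiter.Allow() {
		return errRateLimited
	}
	fmt.Println("signing", csr)
	return nil
}

func main() {
	s := newCSRServer(1, 1)
	for i := 0; i < 3; i++ {
		if err := s.SignCSR(fmt.Sprintf("csr-%d", i)); err != nil {
			fmt.Println("error:", err)
		}
	}
}
```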
For established xDS gRPC streams, recheck ACLs for each DiscoveryRequest
or DiscoveryResponse. If more than 5 minutes have elapsed since the last
ACL check, recheck even without an incoming DiscoveryRequest or
DiscoveryResponse. ACL failures will terminate the stream.
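A minimal sketch of that recheck loop, assuming a hypothetical checkACL callback; the real xDS server is structured differently, but the timing logic is the same idea.

```go
package main

import (
	"fmt"
	"time"
)

const aclRecheckInterval = 5 * time.Minute

// streamLoop handles one stream: ACLs are rechecked on every request and,
// via the ticker, whenever 5 minutes pass without traffic. Any ACL failure
// terminates the stream by returning the error.
func streamLoop(requests <-chan string, checkACL func() error) error {
	lastCheck := time.Now()
	recheck := time.NewTicker(aclRecheckInterval)
	defer recheck.Stop()

	for {
		select {
		case req, ok := <-requests:
			if !ok {
				return nil // stream closed by the client
			}
			if err := checkACL(); err != nil {
				return err
			}
			lastCheck = time.Now()
			fmt.Println("handling", req)
		case <-recheck.C:
			if time.Since(lastCheck) >= aclRecheckInterval {
				if err := checkACL(); err != nil {
					return err
				}
				lastCheck = time.Now()
			}
		}
	}
}

func main() {
	reqs := make(chan string, 1)
	reqs <- "DiscoveryRequest"
	close(reqs)
	fmt.Println("stream ended:", streamLoop(reqs, func() error { return nil }))
}
```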
* Add State storage and LastResult argument into Cache so that cache.Types can safely store additional data that is eventually expired.
* New Leaf cache type working and basic tests passing. TODO: more extensive testing for the Root change jitter across blocking requests, test concurrent fetches for different leaves interact nicely with rootsWatcher.
* Add multi-client and delayed rotation tests.
* Typos and cleanup error handling in roots watch
* Add comment about how the FetchResult can be used and change ca leaf state to use a non-pointer state.
* Plumb test override of root CA jitter through TestAgent so that tests are deterministic again!
* Fix failing config test
* Add default weights when adding a service with no weights to local state to prevent constant AE re-sync (see the sketch after this list).
This fix was contributed by @42wim in https://github.com/hashicorp/consul/pull/5096 but was merged against the wrong base. This adds it to master and adds a test to cover the behaviour.
* Fix tests that broke due to comparing internal state which now has default weights
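A hedged sketch of the defaulting described in the first item above, using simplified structs and illustrative default values:

```go
package main

import "fmt"

type Weights struct {
	Passing int
	Warning int
}

type Service struct {
	ID      string
	Weights *Weights
}

// applyDefaultWeights fills in weights for services registered without any,
// so local state and the catalog agree and anti-entropy stops re-syncing.
func applyDefaultWeights(s *Service) {
	if s.Weights == nil {
		s.Weights = &Weights{Passing: 1, Warning: 1}
	}
}

func main() {
	svc := &Service{ID: "web"}
	applyDefaultWeights(svc)
	fmt.Printf("%+v\n", *svc.Weights) // {Passing:1 Warning:1}
}
```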
Fixes point `#2` of: https://github.com/hashicorp/consul/issues/4903
When registering a service each healthcheck status is saved and restored (https://github.com/hashicorp/consul/blob/master/agent/agent.go#L1914) to avoid unnecessary flaps in health state.
This change extends this feature to single check registration by moving this protection in `AddCheck()` so that both `PUT /v1/agent/service/register` and `PUT /v1/agent/check/register` behave in the same idempotent way.
#### Steps to reproduce
1. Register a check:
```
curl -X PUT \
http://127.0.0.1:8500/v1/agent/check/register \
-H 'Content-Type: application/json' \
-d '{
"Name": "my_check",
"ServiceID": "srv",
"Interval": "10s",
"Args": ["true"]
}'
```
2. The check will initialize and change to `passing`
3. Run the same request again
4. The check status will quickly go from `critical` to `passing` (the delay for this transition is determined by https://github.com/hashicorp/consul/blob/master/agent/checks/check.go#L95)
This PR is almost a complete rewrite of the ACL system within Consul. It brings the features more in line with other HashiCorp products. Obviously there is quite a bit left to do here, but most of it is related to docs, testing, and finishing the last few commands in the CLI. I will update the PR description and check off the todos as I finish them over the next few days/weeks.
Description
At a high level this PR is mainly to split ACL tokens from Policies and to split the concepts of Authorization from Identities. A lot of this PR is mostly just to support CRUD operations on ACLTokens and ACLPolicies. These in and of themselves are not particularly interesting. The bigger conceptual changes are in how tokens get resolved, how backwards compatibility is handled and the separation of policy from identity which could lead the way to allowing for alternative identity providers.
On the surface, and with a new cluster, the ACL system will look very similar to Nomad's. Both have tokens and policies. Both have local tokens. The ACL management APIs for both are very similar. I even ripped off Nomad's ACL bootstrap resetting procedure. There are a few key differences though.
Nomad requires token and policy replication, whereas Consul only requires policy replication, with token replication being opt-in. In Consul, however, local tokens only work when token replication is enabled.
All policies in Nomad are globally applicable. In Consul all policies are stored and replicated globally but can be scoped to a subset of the datacenters. This allows for more granular access management.
Unlike Nomad, Consul has legacy baggage in the form of the original ACL system. The ramifications of this are:
* A server running the new system must still support other clients using the legacy system.
* A client running the new system must be able to use the legacy RPCs when the servers in its datacenter are running the legacy system.
* The primary ACL DC's servers running in legacy mode must act as a gate that keeps everything else in the entire multi-DC cluster running in legacy mode.
So not only does this PR implement the new ACL system, but it also has a legacy mode built in for when the cluster isn't ready for new ACLs. Detecting that new ACLs can be used is automatic and requires no configuration on the part of administrators. This process is detailed more in the "Transitioning from Legacy to New ACL Mode" section below.
* Add -enable-local-script-checks options
These options allow for a finer control over when script checks are enabled by
giving the option to only allow them when they are declared from the local
file system.
* Add documentation for the new option
* Nitpick doc wording
* Initial draft of Sidecar Service and Managed Proxy deprecation docs
* Service definition deprecation notices and sidecar service
* gRPC and sidecar service config options; Deprecate managed proxy options
* Envoy Docs: Basic envoy command; envoy getting started/intro
* Remove change that snuck in
* Envoy custom config example
* Add agent/service API docs; deprecate proxy config endpoint
* Misc grep cleanup for managed proxies; capitalize Envoy
* Updates to getting started guide
* Add missing link
* Refactor Envoy guide into a separate guide and add bootstrap reference notes.
* Add limitations to Envoy docs; Highlight no fixes for known managed proxy issues on deprecation page; clarify snake case stuff; Sidecar Service lifecycle
* Plumb xDS server and proxycfg into the agent startup
* Add `consul connect envoy` command to allow running Envoy as a connect sidecar.
* Add test for help tabs; typos and style fixups from review
- A new endpoint `/v1/agent/service/:service_id` which is a generic way to look up the service for a single instance. The primary value here is that it:
- **supports hash-based blocking** (see the sketch after this list) and so;
- **replaces `/agent/connect/proxy/:proxy_id`** as the mechanism the built-in proxy uses to read its config.
- It's not proxy specific and so works for any service.
- It has a temporary shim to call through to the existing endpoint to preserve current managed proxy config defaulting behaviour until that is removed entirely (tested).
- The built-in proxy now uses the new endpoint exclusively for its config
- The built-in proxy now has a `-sidecar-for` flag that allows the service ID of the _target_ service to be specified, on the condition that there is exactly one "sidecar" proxy (that is, one that has `Proxy.DestinationServiceID` set) for the service registered.
- Several fixes for edge cases for SidecarService
- A fix for `Alias` checks - when running locally they didn't update their state until some external thing updated the target. If the target service has no checks registered as below, then the alias never made it past critical.
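For illustration, here is a hedged sketch of a client polling the new endpoint with hash-based blocking; the `hash`/`wait` query parameters and the `X-Consul-ContentHash` header are assumptions here rather than details taken from this PR.

```go
package main

import (
	"fmt"
	"io"
	"net/http"
	"net/url"
	"time"
)

func watchService(serviceID string) error {
	hash := ""
	for {
		u := fmt.Sprintf("http://127.0.0.1:8500/v1/agent/service/%s", url.PathEscape(serviceID))
		if hash != "" {
			// Block until the content hash changes (or the wait times out).
			u += "?hash=" + url.QueryEscape(hash) + "&wait=5m"
		}
		resp, err := http.Get(u)
		if err != nil {
			return err
		}
		body, err := io.ReadAll(resp.Body)
		resp.Body.Close()
		if err != nil {
			return err
		}
		newHash := resp.Header.Get("X-Consul-ContentHash")
		if newHash != hash {
			fmt.Printf("service definition changed (%d bytes)\n", len(body))
			hash = newHash
		}
		time.Sleep(time.Second) // small pause between polls
	}
}

func main() {
	if err := watchService("web-sidecar-proxy"); err != nil {
		fmt.Println("watch ended:", err)
	}
}
```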
* Added new Config for SidecarService in ServiceDefinitions.
* WIP: all the code needed for SidecarService is written... none of it is tested other than config :). Need API updates too.
* Test coverage for the new sidecarServiceFromNodeService method.
* Test API registration with SidecarService
* Recursive Key Translation 🤦
* Add tests for nested sidecar definition arrays to ensure they are translated correctly
* Use dedicated internal state rather than Service Meta for tracking sidecars for deregistration.
Add tests for deregistration.
* API struct for agent register. No other endpoint should be affected yet.
* Additional test cases to cover updates to API registrations
* Rename agent/proxy package to reflect that it is limited to managed proxy processes
Rationale: we have several other components of the agent that relate to Connect proxies, for example the ProxyConfigManager component needed for the Envoy work. Those things are pretty separate from the focus of this package so far, which is only concerned with managing external proxy processes, so it's not a good fit to put code for that in here; yet there is a naming clash if we have other packages related to proxy functionality that are not in the `agent/proxy` package.
Happy to bikeshed the name. I started by calling it `managedproxy` but `managedproxy.Manager` is especially unpleasant. `proxyprocess` seems good in that it's more specific about purpose but less clearly connected with the concept of "managed proxies". The names in use are cleaner though e.g. `proxyprocess.Manager`.
This rename was completed automatically using golang.org/x/tools/cmd/gomvpkg.
Depends on #4541
* Fix missed windows tagged files
* Add cache types for catalog/services and health/services and basic test that caching works
* Support non-blocking cache types with Cache-Control semantics (see the example after this list).
* Update API docs to include caching info for every endpoint.
* Comment updates per PR feedback.
* Add note on caching to the 10,000 foot view on the architecture page to make the new data path more clear.
* Document prepared query staleness quirk and force all background requests to AllowStale so we can spread service discovery load across servers.
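A hedged example of what a client request against these cached endpoints might look like; the `cached` query parameter and the `Cache-Control`/`X-Cache` header names follow the agent-caching docs of this era and should be treated as assumptions here.

```go
package main

import (
	"fmt"
	"net/http"
)

func main() {
	req, err := http.NewRequest(http.MethodGet,
		"http://127.0.0.1:8500/v1/health/service/web?cached", nil)
	if err != nil {
		panic(err)
	}
	// Accept a cached result up to 30s old; fall back to a stale one for up
	// to 5 minutes if the servers cannot be reached.
	req.Header.Set("Cache-Control", "max-age=30, stale-if-error=300")

	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	fmt.Println("status:", resp.Status)
	fmt.Println("X-Cache:", resp.Header.Get("X-Cache")) // HIT or MISS when caching applies
}
```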
In a real agent the `cache` instance is alive until the agent shuts down, so this is not a real leak in production. In our test suite, however, every TestAgent that is started and stopped leaks goroutines that never get cleaned up; they accumulate, consuming CPU and memory through subsequent tests in the `agent` package, which doesn't help our test flakiness.
This adds a Close method that doesn't invalidate or clean up the cache, and still allows concurrent blocking queries to run (for up to 10 mins which might still affect tests). But at least it doesn't maintain them forever with background refresh and an expiry watcher routine.
It would be nice to cancel any outstanding blocking requests as well when we close but that requires much more invasive surgery right into our RPC protocol since we don't have a way to cancel requests currently.
Unscientifically this seems to make tests pass a bit quicker and more reliably locally but I can't really be sure of that!
If you provide an invalid HTTP configuration, Consul will still start instead of failing. But if you do so, the built-in proxy, which you might need for Connect, won't be able to start.
Fixes: #4578
Prior to this fix if there was an error binding to ports for the DNS servers the error would be swallowed by the gated log writer and never output. This fix propagates the DNS server errors back to the shell with a multierror.
Fixes #4515
This just slightly refactors the logic to only attempt to set the serf wan reconnect timeout when the rest of the serf wan settings are configured - thus avoiding a segfault.
Also change how loadProxies works. Now it will load all persisted proxies into a map; then, when loading config-file proxies, it will look up the previous proxy token in that map.
- Dev mode assumed no persistence of services although proxy state is persisted which caused proxies to be killed on startup as their services were no longer registered. Fixed.
- Didn't snapshot the ProxyID which meant that proxies were adopted OK from snapshot but failed to restart if they died since there was no proxyID in the ENV on restart
- Dev mode with no persistence just kills all proxies on shutdown since it can't recover them later
- Naming things
This turns out to have a lot more subtlety than we accounted for. The test suite is especially prone to races now that we can only poll the child, and many extra levels of indirection are needed to correctly run the daemon process without it becoming a zombie.
I ran this test suite in a loop with parallel enabled to verify for races (-race doesn't find any as they are logical inter-process ones not actual data races). I made it through ~50 runs before hitting an error due to timing which is much better than before. I want to go back and see if we can do better though. Just getting this up.
Uses struct/interface embedding with the embedded structs/interfaces being empty for OSS. Methods on the server/client types also default to doing nothing for OSS.
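A minimal sketch of that pattern with made-up names (EnterpriseServer, enterpriseInit): the OSS build embeds an empty struct and no-op methods, and an enterprise build would swap in real ones behind build tags.

```go
package main

import "fmt"

// EnterpriseServer is empty in the OSS build; an enterprise build would
// replace this (and the method below) via build-tagged files.
type EnterpriseServer struct{}

// Server embeds the enterprise-only state, which adds nothing in OSS.
type Server struct {
	EnterpriseServer
	Datacenter string
}

// enterpriseInit defaults to a no-op in the OSS build.
func (s *Server) enterpriseInit() error { return nil }

func main() {
	s := &Server{Datacenter: "dc1"}
	if err := s.enterpriseInit(); err != nil {
		panic(err)
	}
	fmt.Println("server started in", s.Datacenter)
}
```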
Previously a change was made to make the file writing atomic,
but that wasn't enough to cover something like an OS crash so we
needed something here to handle the situation more gracefully.
Fixes #1221.
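A hedged sketch of the "handle it more gracefully" idea from the note above, with an illustrative persistence layout rather than Consul's actual format: a state file left truncated or corrupted by an OS crash is logged and skipped instead of aborting startup.

```go
package main

import (
	"encoding/json"
	"log"
	"os"
	"path/filepath"
)

type persistedService struct {
	ID   string
	Name string
}

func loadPersistedServices(dir string) []persistedService {
	var out []persistedService
	entries, err := os.ReadDir(dir)
	if err != nil {
		return out // nothing persisted yet
	}
	for _, e := range entries {
		path := filepath.Join(dir, e.Name())
		raw, err := os.ReadFile(path)
		if err != nil {
			log.Printf("[WARN] agent: failed reading %s: %v", path, err)
			continue
		}
		var svc persistedService
		if err := json.Unmarshal(raw, &svc); err != nil {
			// Corrupt (e.g. partially written before a crash): drop it and move on.
			log.Printf("[WARN] agent: dropping corrupt state file %s: %v", path, err)
			_ = os.Remove(path)
			continue
		}
		out = append(out, svc)
	}
	return out
}

func main() {
	log.Println(loadPersistedServices("services"))
}
```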
Since commit 9685bdcd0ba4b4b3adb04f9c1dd67d637ca7894e, service tags are added to the health checks.
However, when adding a service through the agent, tags are not added to its check.
In updateSyncState, we compare the checks of the local agent with the checks of the catalog.
It appears that the service tags are different (missing in one case), and so the check is synchronized.
That increases the ModifyIndex periodically even when nothing changes.
Fixed it by adding serviceTags to the check.
Note that the issue appeared in version 0.8.2.
Looks related to #3259.
* Refactors the HTTP listen path to create servers in the same spot.
* Adds HTTP/2 support to Consul's HTTPS server (sketched below).
* Vendors Go HTTP/2 library and associated deps.
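A hedged sketch of wiring up HTTP/2 on an HTTPS listener with the vendored golang.org/x/net/http2 package; the address, certificate paths, and handler are placeholders.

```go
package main

import (
	"fmt"
	"net/http"

	"golang.org/x/net/http2"
)

func main() {
	mux := http.NewServeMux()
	mux.HandleFunc("/v1/status/leader", func(w http.ResponseWriter, r *http.Request) {
		fmt.Fprintf(w, "served over %s\n", r.Proto) // HTTP/2.0 for h2 clients
	})

	srv := &http.Server{Addr: ":8501", Handler: mux}
	// Register the HTTP/2 handler so TLS connections can negotiate h2 via ALPN.
	if err := http2.ConfigureServer(srv, &http2.Server{}); err != nil {
		panic(err)
	}
	if err := srv.ListenAndServeTLS("agent.pem", "agent-key.pem"); err != nil {
		panic(err)
	}
}
```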
The state of the service and health check records was spread out over
multiple maps guarded by a single lock. Access to the maps has to happen
in a coordinated effort and the tests often violated this which made
them brittle and racy.
This patch replaces the multiple maps with a single one for both checks
and services to make the code less fragile.
This is also necessary since moving the local state into its own package
creates circular dependencies for the tests. To avoid this the tests can
no longer access internal data structures which they should not be doing
in the first place.
The tests still don't compile but this is a necessary step in that
direction.
The anti-entropy code manages background synchronizations of the local
state on a regular basis or on demand when either the state has changed
or a new consul server has been added.
This patch moves the anti-entropy code into its own package and
decouples it from the local state code since they are performing
two different functions.
To simplify code-review this revision does not make any optimizations,
renames or refactorings. This will happen in subsequent commits.
* agent: add option to discard health output
In highly volatile environments Consul will have checks with "noisy"
output which changes every time even though the status does not change.
Since the output is stored in the raft log, every health check update
unblocks blocking calls on health checks because the raft index has
changed, even though the status of the health checks may not have changed
at all. By discarding the output of the health checks, users can
choose a different tradeoff: less visibility into why a check failed in
exchange for a reduced change rate on the raft log (see the sketch after
this list).
* agent: discard output also when adding a check
* agent: add test for discard check output
* agent: update docs
* go vet
* Adds discard_check_output to reloadable config table.
* Updates the change log.
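A hedged sketch of what discarding the output amounts to, using illustrative agent/check types rather than Consul's real ones: with the option enabled the output is blanked before deciding whether the update needs to be synced, so unchanged statuses stop dirtying the raft log.

```go
package main

import "fmt"

type CheckUpdate struct {
	CheckID string
	Status  string
	Output  string
}

type Agent struct {
	DiscardCheckOutput bool
	lastStatus         map[string]string
}

// updateCheck returns true when the update actually needs to be synced.
func (a *Agent) updateCheck(u CheckUpdate) bool {
	if a.DiscardCheckOutput {
		u.Output = "" // noisy output no longer forces a new raft entry
	}
	if a.lastStatus[u.CheckID] == u.Status && u.Output == "" {
		return false
	}
	a.lastStatus[u.CheckID] = u.Status
	return true
}

func main() {
	a := &Agent{DiscardCheckOutput: true, lastStatus: map[string]string{}}
	fmt.Println(a.updateCheck(CheckUpdate{"c1", "passing", "load avg 0.41"})) // true: first sync
	fmt.Println(a.updateCheck(CheckUpdate{"c1", "passing", "load avg 0.97"})) // false: nothing changed
}
```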
* Fixes agent error handling when the check definition is invalid. Distinguishes between empty checks and invalid checks
* Made CheckTypes return Checks from the service definition struct rather than a new copy, and other changes from code review. This also errors when the JSON payload contains empty structs
* Simplify and improve validate method, and make sure that CheckTypes always returns a new copy of validated check definitions
* Tweaks some small style things and error messages.
* Updates the change log.
* Adds client-side retry for no leader errors.
This paves over the case where the client was connected to the leader
when it loses leadership.
* Adds a configurable server RPC drain time and a fail-fast path for RPCs.
When a server leaves it gets removed from the Raft configuration, so it will
never know who the new leader server ends up being. Without this we'd be
doomed to wait out the RPC hold timeout and then fail. This makes things fail
a little quicker while a server is draining, and since we added a client retry
AND since the server doing this has already shut down and left the Serf LAN,
clients should retry against some other server.
* Makes the RPC hold timeout configurable.
* Reorders struct members.
* Sets the RPC hold timeout default for test servers.
* Bumps the leave drain time up to 5 seconds.
* Robustifies retries with a simpler client-side RPC hold.
* Reverts untended delete.
* Clean up handling of subprocesses and make using a shell optional
* Update docs for subprocess changes
* Fix tests for new subprocess behavior
* More cleanup of subprocesses
* Minor adjustments and cleanup for subprocess logic
* Makes the watch handler reload test use the new path.
* Adds check tests for new args path, and updates existing tests to use new path.
* Adds support for script args in Docker checks.
* Fixes the sanitize unit test.
* Adds panic for unknown watch type, and reverts back to Run().
* Adds shell option back to consul lock command.
* Adds shell option back to consul exec command.
* Adds shell back into consul watch command.
* Refactors signal forwarding and makes Windows-friendly.
* Adds a clarifying comment.
* Changes error wording to a warning.
* Scopes signals to interrupt and kill.
This avoids us trying to send SIGCHLD to the dead process.
* Adds an error for shell=false for consul exec.
* Adds notes about the deprecated script and handler fields.
* De-nests an if statement.
Remove an error in watch reloading that happens when http and https
are both disabled, and use an https address for running watches if
no http addresses are present.
Fixes #3425.
* new config parser for agent
This patch implements a new config parser for the consul agent which
makes the following changes to the previous implementation:
* add HCL support
* all configuration fragments in tests and for default config are
expressed as HCL fragments
* HCL fragments can be provided on the command line so that they
can eventually replace the command line flags.
* HCL/JSON fragments are parsed into a temporary Config structure
which can be merged using reflection (all values are pointers).
The existing merge logic of overwrite for values and append
for slices has been preserved.
* A single builder process generates a typed runtime configuration
for the agent.
The new implementation is more strict and fails in the builder process
if no valid runtime configuration can be generated. Therefore,
additional validations in other parts of the code should be removed.
The builder also pre-computes all required network addresses so that no
address/port magic should be required where the configuration is used
and should therefore be removed.
* Upgrade github.com/hashicorp/hcl to support int64
* improve error messages
* fix directory permission test
* Fix rtt test
* Fix ForceLeave test
* Skip performance test for now until we know what to do
* Update github.com/hashicorp/memberlist to update log prefix
* Make memberlist use the default logger
* improve config error handling
* do not fail on non-existing data-dir
* experiment with non-uniform timeouts to get a handle on stalled leader elections
* Run tests for packages separately to eliminate the spurious port conflicts
* refactor private address detection and unify approach for ipv4 and ipv6.
Fixes #2825
* do not allow unix sockets for DNS
* improve bind and advertise addr error handling
* go through builder using test coverage
* minimal update to the docs
* more coverage tests fixed
* more tests
* fix makefile
* cleanup
* fix port conflicts with external port server 'porter'
* stop test server on error
* do not run api tests that change global ENV concurrently with the other tests
* Run remaining api tests concurrently
* no need for retry with the port number service
* monkey patch race condition in go-sockaddr until we understand why that fails
* monkey patch hcl decoder race condition until we understand why that fails
* monkey patch spurious errors in strings.EqualFold from here
* add test for hcl decoder race condition. Run with go test -parallel 128
* Increase timeout again
* cleanup
* don't log port allocations by default
* use base command arg parsing to format help output properly
* handle -dc deprecation case in Build
* switch autopilot.max_trailing_logs to int
* remove duplicate test case
* remove unused methods
* remove comments about flag/config value inconsistencies
* switch got and want around since the error message was misleading.
* Removes a stray debug log.
* Removes a stray newline in imports.
* Fixes TestACL_Version8.
* Runs go fmt.
* Adds a default case for unknown address types.
* Reorders and reformats some imports.
* Adds some comments and fixes typos.
* Reorders imports.
* add unix socket support for dns later
* drop all deprecated flags and arguments
* fix wrong field name
* remove stray node-id file
* drop unnecessary patch section in test
* drop duplicate test
* add test for LeaveOnTerm and SkipLeaveOnInt in client mode
* drop "bla" and add clarifying comment for the test
* split up tests to support enterprise/non-enterprise tests
* drop raft multiplier and derive values during build phase
* sanitize runtime config reflectively and add test
* detect invalid config fields
* fix tests with invalid config fields
* use different values for wan sanitization test
* drop recursor in favor of recursors
* allow dns_config.udp_answer_limit to be zero
* make sure tests run on machines with multiple ips
* Fix failing tests in a few more places by providing a bind address in the test
* Gets rid of skipped TestAgent_CheckPerformanceSettings and adds case for builder.
* Add porter to server_test.go to make tests there less flaky
* go fmt
* Added rate limiting for agent RPC calls.
* Initializes the rate limiter based on the config.
* Adds the rate limiter into the snapshot RPC path.
* Adds unit tests for the RPC rate limiter.
* Groups the RPC limit parameters under "limits" in the config.
* Adds some documentation about the RPC limiter.
* Sends a 429 response when the rate limiter kicks in.
* Adds docs for new telemetry.
* Makes snapshot telemetry look like RPC telemetry and cleans up comments.
The error handling of the ACL code relies on the presence of certain
magic error messages. Since the error values are sent via RPC between
older and newer consul agents we cannot just replace the magic values
with typed errors and switch to type checks since this would break
compatibility with older clients.
Therefore, this patch moves all magic ACL error messages into the acl
package and provides default error values and helper functions which
determine the type of error.
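A hedged sketch of that approach with hypothetical names (the real helpers in the acl package may differ): the magic string lives in one place, newer code returns a canonical error value, and a helper classifies errors that arrive as plain strings over RPC from older agents.

```go
package main

import (
	"errors"
	"fmt"
	"strings"
)

const errPermissionDeniedText = "Permission denied"

// ErrPermissionDenied is the canonical value newer code returns.
var ErrPermissionDenied = errors.New(errPermissionDeniedText)

// IsErrPermissionDenied also matches errors reconstructed from RPC strings
// sent by older agents, where only the message survives the wire.
func IsErrPermissionDenied(err error) bool {
	return err != nil && strings.Contains(err.Error(), errPermissionDeniedText)
}

func main() {
	fromRPC := fmt.Errorf("rpc error: %s", errPermissionDeniedText)
	fmt.Println(IsErrPermissionDenied(ErrPermissionDenied))      // true
	fmt.Println(IsErrPermissionDenied(fromRPC))                  // true
	fmt.Println(IsErrPermissionDenied(errors.New("no leader"))) // false
}
```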
Note that there is no test since the correct way to solve (and test)
this is to replace the different maps with a single one or to hide
that functionality behind a separate data structure. This will be
addressed in #3294.
Fixes #3265
This patch replaces the Docker client which is used
for health checks with a simplified version tailored
for that purpose.
See #3254
See #3257. Fixes #3270
* Moves magic check and service constants into shared structs package.
* Removes the "consul" service from local state.
Since this service is added by the leader, it doesn't really make sense to
also keep it in local state (which requires special ACLs to configure), and
requires a bunch of special cases in the local state logic. This requires
fewer special cases and makes ACL bootstrapping cleaner.
* Makes coordinate update ACL log message a warning, similar to other AE warnings.
* Adds much more detailed examples for bootstrapping ACLs.
This can hopefully replace https://gist.github.com/slackpad/d89ce0e1cc0802c3c4f2d84932fa3234.
The agent configuration for the consul server is a partial configuration
which needs to be cloned to avoid data races.
This is a stop-gap measure before moving the configuration into
a separate package.
This patch fixes watch registration through the config file and a broken log line when the watch registration fails. It also plumbs all the watch loading through a common function and tweaks the
unit test to create the watch before the reload.
When the agent is triggered to shut down via an external 'consul leave'
command delivered via the HTTP API, the client expects to receive a
response when the agent is down. This creates a race between shutting down
the agent itself (the RPC server, the checks and the state) and the
external endpoints (DNS and HTTP).
This patch splits the shutdown process into two parts:
* shutdown the agent
* shutdown the endpoints (http and dns)
They can be executed multiple times, concurrently and in any order, but
should be executed agent first, then endpoints, to provide consistent
behavior across all use cases. Both calls have to be executed for a
proper shutdown.
This could be partially hidden in a single function but would introduce
some magic that happens behind the scenes which one has to know about but
isn't obvious.
Fixes #2880
This patch hides the RPC handler overwrite mechanism from the
rest of the code so that it works in all cases and that there
is no cooperation required from the tested code, i.e. we can
drop a.getEndpoint().
When the agent is triggered to shut down via an external 'consul leave'
command delivered via the HTTP API, the client expects to receive a
response when the agent is down. This creates a race between shutting down
the agent itself (the RPC server, the checks and the state) and the
external endpoints (DNS and HTTP). Ideally, the external endpoints
should be shut down before the internal state, but if the goal is to
respond reliably that the agent is down then this is not possible.
This patch splits the agent shutdown into two parts implemented in a
single method to keep it simple and unambiguous for the caller. The first
stage shuts down the internal state, checks, RPC server, ...
synchronously and then triggers the shutdown of the external endpoints
asynchronously. This way the caller is guaranteed that the internal state
services are down when Shutdown returns and there remains enough time to
send a response.
Fixes #2880