Fixes #4969
This implements non-blocking request polling at the cache layer, which is currently only used for prepared queries. Additionally, this enables the proxycfg manager to poll prepared queries for use in Envoy proxy upstreams.
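As a rough sketch of the polling idea (not the actual agent/cache code; the function and parameter names below are made up for illustration), a result type that cannot block, such as a prepared query lookup, can simply be re-fetched on a timer and each result pushed to watchers such as proxycfg:

```go
package example

import (
	"context"
	"time"
)

// pollNonBlocking re-fetches a result type that does not support blocking
// queries (e.g. prepared query execution) on a fixed interval and notifies
// watchers with each successful result.
func pollNonBlocking(ctx context.Context, interval time.Duration,
	fetch func(context.Context) (interface{}, error),
	notify func(interface{})) {

	ticker := time.NewTicker(interval)
	defer ticker.Stop()

	for {
		if result, err := fetch(ctx); err == nil {
			notify(result) // push the latest result to watchers
		}
		select {
		case <-ctx.Done():
			return // the watcher (e.g. a proxy config snapshot) went away
		case <-ticker.C:
			// time for the next poll
		}
	}
}
```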
* Started updating links for the learn migration
* Language change cluster -> datacenter (#5212)
* Updating the language from cluster to datacenter in the backup guide to be consistent and more accurate.
* missed some clusters
* updated three broken links for the sidebar nav
Adds info about `k8stag` and `nodePortSyncType` options that were
added in consul-helm v0.5.0.
Additionally moves the k8sprefix to match the order in the Helm chart
values file, while also clarifying that it only affects one sync
direction.
If a user installs Consul with the default Helm chart on a Kubernetes cluster that is open to the internet, it will be missing some important security configurations.
* Store leaf cert indexes in raft and use them for the ModifyIndex on the returned certs
This ensures that future certificate signings will have a strictly greater ModifyIndex than any previous certs signed.
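A minimal illustration of the invariant (the types here are simplified stand-ins, not the real structs): the raft index of the signing write becomes the cert's ModifyIndex, and since raft indexes only increase, successive certs get strictly greater indexes.

```go
package example

// IssuedCert is a simplified stand-in for a signed leaf certificate.
type IssuedCert struct {
	SerialNumber string
	ModifyIndex  uint64
}

// applyLeafSign runs when the signing operation is applied through raft;
// stamping the raft index onto the cert guarantees a later signing (which
// necessarily has a higher raft index) carries a strictly greater ModifyIndex.
func applyLeafSign(raftIndex uint64, cert *IssuedCert) {
	cert.ModifyIndex = raftIndex
}
```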
## Background
When making a blocking query on a missing service (one that was never registered, or is no longer registered), the query returns as soon as any service is updated.
On clusters with frequent updates (5~10 updates/s in our DCs) these queries virtually do not block, and clients with no protection against this waste resources on both the agent and server side. Clients that do protect against this get updates later than they should because of the backoff time they implement between requests.
## Implementation
While reducing the number of unnecessary updates, we still want:
* Clients to be notified as soon as the last instance of a service disappears.
* Clients to be notified whenever there is an update for the service.
* Clients to be notified as soon as the first instance of the requested service is added.
To reduce the number of unnecessary updates we need to block when a request for a missing service is made. However, consider the following case:
1. Client `client1` makes a query for service `foo`, gets back a node and X-Consul-Index 42
2. `foo` is unregistered
3. `client1` makes a query for `foo` with `index=42` -> `foo` does not exist, the query blocks and `client1` is not notified of the change on `foo`
We could store the raft index at which each service was last alive to know whether we should block on the incoming query or not, but that list could grow indefinitely.
We instead store the last raft index at which a service was unregistered and use it when a query targets a service that does not exist.
When a service `srv` is unregistered, this "missing service index" is always greater than any X-Consul-Index held by clients while `srv` was up, allowing us to notify them immediately.
1. Client `client1` makes a query for service `foo`, gets back a node and `X-Consul-Index: 42`
2. `foo` is unregistered, we set the "missing service index" to 43
3. `client1` makes a blocking query for `foo` with `index=42` -> `foo` does not exist, we check against the "missing service index" and return immediately with `X-Consul-Index: 43`
4. `client1` makes a blocking query for `foo` with `index=43` -> we block
5. Other changes happen in the cluster, but `foo` still doesn't exist and the "missing service index" hasn't changed, so the query remains blocked
6. `foo` is registered again on index 62 -> `foo` exists and its index is greater than 43, we unblock the query
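The walkthrough above can be condensed into a small sketch. This is a toy model, not the actual state-store code; `servicesState` and the helper names are hypothetical.

```go
package main

import "fmt"

// servicesState is a toy model of the catalog: per-service indexes plus a
// single "missing service index" bumped on every unregistration.
type servicesState struct {
	serviceIndex        map[string]uint64 // last raft index that touched each live service
	missingServiceIndex uint64            // raft index of the most recent unregistration
}

// watchIndex picks the index a blocking query compares against: the service's
// own index when it exists, otherwise the missing-service index.
func (s *servicesState) watchIndex(name string) uint64 {
	if idx, ok := s.serviceIndex[name]; ok {
		return idx
	}
	return s.missingServiceIndex
}

// shouldReturn reports whether a query made with ?index=minIndex can return
// immediately instead of blocking.
func shouldReturn(s *servicesState, name string, minIndex uint64) (uint64, bool) {
	idx := s.watchIndex(name)
	return idx, idx > minIndex
}

func main() {
	s := &servicesState{serviceIndex: map[string]uint64{"foo": 42}}

	// Step 1: a client reads foo at index 42, then blocks with ?index=42.
	// Step 2: foo is unregistered at raft index 43.
	delete(s.serviceIndex, "foo")
	s.missingServiceIndex = 43

	// Step 3: the blocked query now compares 43 > 42 and returns immediately.
	fmt.Println(shouldReturn(s, "foo", 42)) // 43 true

	// Step 4/5: a new query with ?index=43 keeps blocking (43 > 43 is false)
	// until foo is registered again at a higher index, e.g. 62.
	fmt.Println(shouldReturn(s, "foo", 43)) // 43 false
	s.serviceIndex["foo"] = 62
	fmt.Println(shouldReturn(s, "foo", 43)) // 62 true
}
```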
This adds a MaxQueryTime field to the Connect CA leaf cache request type and populates it via the `wait` query param. The cache will then do the right thing and time out the operation as expected if no new leaf cert is available within that time.
Fixes #4462
The reproduction scenario in the original issue now times out appropriately.
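As a hedged sketch of the plumbing (the request type and helper below are hypothetical, not the real Connect CA leaf types), the `wait` query parameter is parsed into a duration that bounds how long the cache may block:

```go
package example

import (
	"fmt"
	"net/http"
	"time"
)

// leafCertRequest is a simplified stand-in for the cache request type; the
// real type carries more fields, but MaxQueryTime is the one of interest here.
type leafCertRequest struct {
	Service      string
	MaxQueryTime time.Duration // upper bound on how long the cache may block
}

// parseLeafCertRequest fills MaxQueryTime from the ?wait= query parameter so
// the cache can time out the blocking fetch when no new cert shows up.
func parseLeafCertRequest(r *http.Request) (leafCertRequest, error) {
	req := leafCertRequest{Service: r.URL.Query().Get("service")}
	if wait := r.URL.Query().Get("wait"); wait != "" {
		d, err := time.ParseDuration(wait) // e.g. "10s" or "5m"
		if err != nil {
			return req, fmt.Errorf("invalid wait value %q: %v", wait, err)
		}
		req.MaxQueryTime = d
	}
	return req, nil
}
```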
Bump `shirou/gopsutil` to include https://github.com/shirou/gopsutil/pull/603
This allows the node-id to remain consistent even when a machine is reinstalled, when using `"disable_host_node_id": false`.
It fixes https://github.com/hashicorp/consul/issues/4914 and allows keeping the
same node-id even when reinstalling a node from scratch. However, the node-id
is only consistent within a single OS (installing Windows instead will change
the node-id, but that seems acceptable).
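For reference, the stable identifier comes from gopsutil's host information. A minimal sketch of reading it, assuming the `host.Info()` API and `HostID` field as I understand them:

```go
package main

import (
	"fmt"

	"github.com/shirou/gopsutil/host"
)

func main() {
	// HostID is derived from the machine's hardware/OS identifier, so it
	// survives a reinstall of the same OS on the same machine.
	info, err := host.Info()
	if err != nil {
		panic(err)
	}
	fmt.Println("host id:", info.HostID)
}
```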
Fixes #4897
Also, token deletion could apparently segfault in secondary DCs when attempting to delete non-existent tokens. For that reason both checks are wrapped within the non-nil check.
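A generic sketch of the guard (hypothetical names, not Consul's ACL code): run the validation checks only after confirming the lookup returned a token, so deleting a non-existent token is a no-op rather than a nil dereference.

```go
package example

// aclToken and tokenStore are hypothetical stand-ins for the real ACL types.
type aclToken struct{ AccessorID string }

type tokenStore interface {
	Lookup(accessorID string) (*aclToken, error)
	Delete(accessorID string) error
}

// deleteToken keeps every check that dereferences the token inside the
// non-nil guard, so a missing token never causes a crash.
func deleteToken(store tokenStore, accessorID string, checks ...func(*aclToken) error) error {
	token, err := store.Lookup(accessorID)
	if err != nil {
		return err
	}
	if token == nil {
		return nil // nothing to delete
	}
	for _, check := range checks {
		if err := check(token); err != nil {
			return err
		}
	}
	return store.Delete(accessorID)
}
```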
* Add State storage and LastResult argument into Cache so that cache.Types can safely store additional data that is eventually expired.
* New Leaf cache type working and basic tests passing. TODO: more extensive testing for the Root change jitter across blocking requests, and tests that concurrent fetches for different leaves interact nicely with rootsWatcher.
* Add multi-client and delayed rotation tests.
* Fix typos and clean up error handling in roots watch
* Add comment about how the FetchResult can be used and change ca leaf state to use a non-pointer state.
* Plumb test override of root CA jitter through TestAgent so that tests are deterministic again!
* Fix failing config test
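A rough sketch of the State/LastResult idea (simplified shapes, not the real agent/cache API): a cache type can stash private data on the result it returns and get it back on the next fetch via the previous result.

```go
package example

import "time"

// fetchResult is a simplified result shape: Value goes to callers, State is
// opaque data the cache hands back to the same type on its next fetch.
type fetchResult struct {
	Value interface{}
	State interface{}
	Index uint64
}

// fetchOptions carries the previous result so the type can recover its State.
type fetchOptions struct {
	MinIndex   uint64
	LastResult *fetchResult
}

// leafState is the kind of thing a leaf-cert type might keep between fetches:
// when the root CA last changed and the per-client jitter applied to rotation.
type leafState struct {
	rootChangedAt  time.Time
	rotationJitter time.Duration
}

// fetchLeaf recovers the state stored by the previous fetch, uses it to decide
// whether to re-sign, and stores it again on the new result.
func fetchLeaf(opts fetchOptions, sign func(leafState) (string, leafState)) fetchResult {
	var st leafState
	if opts.LastResult != nil {
		if prev, ok := opts.LastResult.State.(leafState); ok {
			st = prev // reuse state carried over from the previous fetch
		}
	}
	cert, st := sign(st)
	return fetchResult{Value: cert, State: st, Index: opts.MinIndex + 1}
}
```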
* Re-worked the ACL guide into two docs and an updated guide.
Co-Authored-By: kaitlincarter-hc <43049322+kaitlincarter-hc@users.noreply.github.com>
* Updating syntax based on amayer5125's comments.
* Missed one of amayer5125's comments
* found a bad link in the acl system docs
* fixing a link in the rules docs
* Add default weights when adding a service with no weights to local state, to prevent constant anti-entropy (AE) re-sync.
This fix was contributed by @42wim in https://github.com/hashicorp/consul/pull/5096 but was merged against the wrong base. This adds it to master and adds a test to cover the behaviour.
* Fix tests that broke due to comparing internal state which now has default weights
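A sketch of the fix's shape (simplified structs; Passing=1/Warning=1 are Consul's documented default weights): fill in explicit defaults before the service lands in local state, so the local copy and the catalog compare equal during anti-entropy.

```go
package example

// Weights is a simplified stand-in for the service weights struct.
type Weights struct {
	Passing int
	Warning int
}

// Service is a simplified stand-in for a locally registered service.
type Service struct {
	ID      string
	Weights *Weights
}

// addServiceToLocalState gives weight-less services the default weights so the
// local copy matches what the servers store, stopping the constant re-sync.
func addServiceToLocalState(state map[string]*Service, svc *Service) {
	if svc.Weights == nil {
		svc.Weights = &Weights{Passing: 1, Warning: 1}
	}
	state[svc.ID] = svc
}
```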
* Fixes #4480. Don't leave old errors around in the cache that can be hit in specific circumstances.
* Move error reset to cover extreme edge case of nil Value, nil err Fetch
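Roughly, the fix looks like the following sketch (hypothetical cache-entry shape, not the real cache internals): clear the remembered error whenever a fetch completes without one, even if it also returned a nil value.

```go
package example

// cacheEntry is a hypothetical stand-in for a cached result plus its last error.
type cacheEntry struct {
	Value interface{}
	Error error
}

// updateEntry records a fetch outcome. The error reset happens on any
// successful fetch, covering the edge case of a nil Value with a nil error,
// so stale errors are not served to later callers.
func updateEntry(e *cacheEntry, value interface{}, fetchErr error) {
	if fetchErr != nil {
		e.Error = fetchErr
		return
	}
	e.Error = nil // reset even when value is nil
	if value != nil {
		e.Value = value
	}
}
```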