This adds a MaxQueryTime field to the Connect CA leaf cache request type and populates it via the `wait` query param. The cache will then do the right thing and time out the operation as expected if no new leaf cert is available within that time.
Fixes #4462
The reproduction scenario in the original issue now times out appropriately.
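A rough sketch of the plumbing, using assumed names rather than Consul's exact types: the `wait` query parameter only needs to be parsed into the request's MaxQueryTime before the cache lookup.

```go
package sketch

import (
	"fmt"
	"net/http"
	"time"
)

// ConnectCALeafRequest stands in for the cache request type that now
// carries a MaxQueryTime field (illustrative, not Consul's exact struct).
type ConnectCALeafRequest struct {
	Service      string
	MaxQueryTime time.Duration
}

// parseWait copies the ?wait= query parameter into MaxQueryTime so a
// blocking fetch gives up once that duration elapses.
func parseWait(req *http.Request, out *ConnectCALeafRequest) error {
	wait := req.URL.Query().Get("wait")
	if wait == "" {
		return nil // no deadline requested; the cache default applies
	}
	d, err := time.ParseDuration(wait)
	if err != nil {
		return fmt.Errorf("invalid wait duration %q: %v", wait, err)
	}
	out.MaxQueryTime = d
	return nil
}
```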
Bump `shirou/gopsutil` to include https://github.com/shirou/gopsutil/pull/603
This will allow having a consistent node-id even when a machine is reinstalled,
when using `"disable_host_node_id": false`.
It will fix https://github.com/hashicorp/consul/issues/4914 and allow keeping
the same node-id even when reinstalling a node from scratch. However,
the ID is only consistent within a single OS (installing Windows on the same
machine will change the node-id, but that seems acceptable).
Fixes #4897
Also, apparently token deletion could segfault in secondary DCs when attempting to delete non-existent tokens. For that reason, both checks are wrapped within the non-nil check.
* Add State storage and a LastResult argument to the Cache so that cache.Types can safely store additional data that is eventually expired (see the sketch after this list).
* New Leaf cache type working and basic tests passing. TODO: more extensive testing of the root-change jitter across blocking requests; test that concurrent fetches for different leaves interact nicely with the rootsWatcher.
* Add multi-client and delayed rotation tests.
* Fix typos and clean up error handling in the roots watch
* Add a comment about how the FetchResult can be used, and change the CA leaf state to use a non-pointer state.
* Plumb test override of root CA jitter through TestAgent so that tests are deterministic again!
* Fix failing config test
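As a minimal sketch of the State/LastResult mechanism (the field names here are assumptions, not Consul's exact cache API):

```go
package sketch

import "time"

// FetchResult and FetchOptions mirror the idea described above.
type FetchResult struct {
	Value interface{}
	Index uint64
	State interface{} // type-private data kept alongside the cached value
}

type FetchOptions struct {
	MinIndex   uint64
	Timeout    time.Duration
	LastResult *FetchResult // previous result, including State, if any
}

// leafState is the kind of per-entry bookkeeping a leaf-cert cache type
// might keep between fetches.
type leafState struct {
	authorityKeyID string
	issuedAt       time.Time
}

// fetchLeaf shows how a cache type resumes from the state stored by the
// previous fetch instead of recomputing it.
func fetchLeaf(opts FetchOptions) (FetchResult, error) {
	var st leafState
	if opts.LastResult != nil {
		if prev, ok := opts.LastResult.State.(leafState); ok {
			st = prev
		}
	}
	// ... issue or renew the leaf certificate here, updating st ...
	return FetchResult{Index: opts.MinIndex + 1, State: st}, nil
}
```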
* Re-worked the ACL guide into two docs and an updated guide.
Co-Authored-By: kaitlincarter-hc <43049322+kaitlincarter-hc@users.noreply.github.com>
* Updating syntax based on amayer5125's comments.
* Missed one of amayer5125's comments
* Found a bad link in the ACL system docs
* Fix a link in the rules docs
* Add default weights when adding a service with no weights to local state, to prevent constant anti-entropy (AE) re-syncs (see the sketch after this list).
This fix was contributed by @42wim in https://github.com/hashicorp/consul/pull/5096 but was merged against the wrong base. This adds it to master and adds a test to cover the behaviour.
* Fix tests that broke due to comparing internal state which now has default weights
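A minimal sketch of the fix, with simplified stand-in structs; the 1/1 defaults match Consul's documented default weights:

```go
package sketch

// Weights and NodeService are simplified stand-ins for the real structs.
type Weights struct {
	Passing int
	Warning int
}

type NodeService struct {
	ID      string
	Weights *Weights
}

// ensureDefaultWeights fills in the same defaults the servers apply, so the
// locally stored copy matches the remote one and anti-entropy sees no diff.
func ensureDefaultWeights(s *NodeService) {
	if s.Weights == nil {
		s.Weights = &Weights{Passing: 1, Warning: 1}
	}
}
```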
* Fixes #4480. Don't leave old errors around in the cache where they can be hit in specific circumstances (a short sketch of the idea follows this list).
* Move the error reset to cover the extreme edge case of a Fetch returning a nil Value and a nil error
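The gist of the fix, as a hedged sketch with an assumed entry shape:

```go
package sketch

// cacheEntry is a simplified stand-in for the cache's internal entry type.
type cacheEntry struct {
	Value interface{}
	Error error
}

// updateEntry records the outcome of a Fetch. Storing err unconditionally
// means a successful fetch (err == nil) clears any stale error, so later
// reads cannot trip over an error from a previous attempt.
func updateEntry(entry *cacheEntry, value interface{}, err error) {
	entry.Error = err
	if value != nil {
		entry.Value = value
	}
}
```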
* Avoid infinite recursion in DNS lookups when resolving CNAMEs
This avoids killing Consul when a Service.Address uses a CNAME
pointing to a Consul CNAME that creates an infinite recursion.
This will fix https://github.com/hashicorp/consul/issues/4907
* Use maxRecursionLevel = 3 to allow several levels of recursion (see the sketch below)
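A self-contained sketch of the bounded recursion (the map stands in for DNS lookups; the names are illustrative):

```go
package main

import "fmt"

// cnames simulates a zone containing a CNAME loop: a -> b -> a.
var cnames = map[string]string{
	"a.service.consul": "b.service.consul",
	"b.service.consul": "a.service.consul",
}

const maxRecursionLevel = 3

// resolveCNAME follows CNAMEs but decrements the remaining depth on every
// hop; at zero it gives up instead of recursing forever.
func resolveCNAME(name string, depth int) (string, bool) {
	if depth <= 0 {
		return "", false // likely a loop or an overly deep chain
	}
	target, isCNAME := cnames[name]
	if !isCNAME {
		return name, true // terminal record
	}
	return resolveCNAME(target, depth-1)
}

func main() {
	if _, ok := resolveCNAME("a.service.consul", maxRecursionLevel); !ok {
		fmt.Println("recursion limit reached, refusing CNAME loop")
	}
}
```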
* bugfix: use ServiceTags to generate the cache key hash
* update unit test
* update
* remove print log
* Update .gitignore
* Completely deprecate ServiceTag field internally for clarity
* Add explicit test for CacheInfo cases (a sketch of the keying idea follows this list)
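To illustrate the keying idea with assumed names (the real code hashes the request fields rather than concatenating strings):

```go
package main

import (
	"fmt"
	"sort"
	"strings"
)

// HealthServiceRequest is a stand-in for a cacheable request type.
type HealthServiceRequest struct {
	Service     string
	ServiceTags []string
}

// CacheKey includes the tags so requests differing only by tag no longer
// collide in the cache.
func (r *HealthServiceRequest) CacheKey() string {
	tags := append([]string(nil), r.ServiceTags...)
	sort.Strings(tags) // order-independent key
	return fmt.Sprintf("%s|%s", r.Service, strings.Join(tags, ","))
}

func main() {
	a := &HealthServiceRequest{Service: "web", ServiceTags: []string{"v1"}}
	b := &HealthServiceRequest{Service: "web", ServiceTags: []string{"v2"}}
	fmt.Println(a.CacheKey() != b.CacheKey()) // true: distinct cache entries
}
```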
Fixes point `#2` of: https://github.com/hashicorp/consul/issues/4903
When registering a service, each health check's status is saved and restored (https://github.com/hashicorp/consul/blob/master/agent/agent.go#L1914) to avoid unnecessary flaps in health state.
This change extends this feature to single-check registration by moving this protection into `AddCheck()`, so that both `PUT /v1/agent/service/register` and `PUT /v1/agent/check/register` behave in the same idempotent way (see the sketch below).
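A minimal sketch of the idea, using simplified stand-in types rather than Consul's actual `Agent` and check structs:

```go
package sketch

// HealthCheck and Agent are simplified stand-ins for the real types.
type HealthCheck struct {
	CheckID string
	Status  string // "passing", "warning", "critical"
}

type Agent struct {
	checks map[string]*HealthCheck
}

// AddCheck restores the previous status when a check is re-registered, so a
// repeated registration does not flap the check back to critical.
func (a *Agent) AddCheck(check *HealthCheck) {
	if a.checks == nil {
		a.checks = make(map[string]*HealthCheck)
	}
	if existing, ok := a.checks[check.CheckID]; ok {
		check.Status = existing.Status // keep the known state
	} else if check.Status == "" {
		check.Status = "critical" // brand-new checks start critical
	}
	a.checks[check.CheckID] = check
}
```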
#### Steps to reproduce
1. Register a check:
```
curl -X PUT \
http://127.0.0.1:8500/v1/agent/check/register \
-H 'Content-Type: application/json' \
-d '{
"Name": "my_check",
"ServiceID": "srv",
"Interval": "10s",
"Args": ["true"]
}'
```
2. The check will initialize and change to `passing`
3. Run the same request again
4. The check status will quickly go from `critical` to `passing` (the delay for this transition is determined by https://github.com/hashicorp/consul/blob/master/agent/checks/check.go#L95)
This adds the `agent/connect/ca/plugin` library for consuming/serving Connect CA providers as [go-plugin](https://github.com/hashicorp/go-plugin) plugins. This **does not** wire this up in any way to Consul itself, so this will not enable using these plugins yet.
## Why?
We want to enable CA providers to be pluggable without modifying Consul so that any CA or PKI system can potentially back the Connect certificates. This CA system may also be used in the future for easier bootstrapping and internal cluster security.
### go-plugin
The benefit of `go-plugin` is that, for the plugin consumer, the fact that the interface implementation is communicating over multi-process RPC is invisible. Internals of Consul will continue to just use `ca.Provider` interface implementations as if they're local. Plugin _authors_ simply have to implement the interface; the network/transport/process management issues are handled by go-plugin itself.
The CA provider plugins support both `net/rpc` and gRPC transports, which enables easy authoring in any language. go-plugin handles the actual protocol handshake and connection; this is just a feature of go-plugin.
`go-plugin` has been in production use for years by Packer, Terraform, Nomad, Vault, and Sentinel. It has proven stable for both desktop and server-side software, and it is very mature.
## Implementation Details
### `map[string]interface{}`
The `Configure` method passes a `map[string]interface{}`. This map contains only Go primitives and containers of primitives (no funcs, chans, etc.). For `net/rpc` we encode as-is using Gob. For gRPC we marshal to JSON and transmit as a `bytes` type. This is the same approach we take with Vault and other software.
Note that this is just the transport protocol; the end software sees it fully decoded.
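A rough sketch of the gRPC path described above (the variable names are illustrative, not the plugin protocol's actual message definitions):

```go
package main

import (
	"encoding/json"
	"fmt"
)

func main() {
	// The provider config as the consumer sees it: primitives only.
	conf := map[string]interface{}{
		"Address": "https://vault.example.com:8200",
		"TTL":     "72h",
	}

	// For gRPC the map is marshaled to JSON and sent as a `bytes` field.
	raw, err := json.Marshal(conf)
	if err != nil {
		panic(err)
	}

	// The other side decodes it back into a plain map.
	var decoded map[string]interface{}
	if err := json.Unmarshal(raw, &decoded); err != nil {
		panic(err)
	}
	fmt.Println(decoded["Address"]) // fully decoded on arrival
}
```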
### `x509.Certificate` and `CertificateRequest`
We transmit the raw ASN.1 bytes and decode them on the other side. Unit tests verify we get the same certs/CSRs across the wire.
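For certificates, the round trip could look like the following sketch (CSRs work the same way via `x509.ParseCertificateRequest`); the function names are illustrative:

```go
package sketch

import "crypto/x509"

// sendCert returns the raw ASN.1 DER bytes already held by a parsed
// certificate; these are what cross the plugin boundary.
func sendCert(cert *x509.Certificate) []byte {
	return cert.Raw
}

// receiveCert re-parses the DER bytes on the other side; the unit tests
// mentioned above assert that both ends see an identical certificate.
func receiveCert(wire []byte) (*x509.Certificate, error) {
	return x509.ParseCertificate(wire)
}
```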
### Testing
`go-plugin` exposes test helpers that enable testing the full plugin RPC over real loopback network connections. We test all endpoints for success and error for both `net/rpc` and gRPC.
### Vendoring
This PR doesn't introduce vendoring for two reasons:
1. @banks's `f-envoy` branch introduces a lot of these and I didn't want conflicts.
2. The library isn't actually used yet so it doesn't introduce compile-time errors (it does introduce test errors).
## Next Steps
With this in place, we need to figure out the proper way to actually hook these up to Consul, load them, etc. This discussion can happen elsewhere, since the plugin library implementation is exactly the same regardless of approach.
This endpoint aggregates all checks related to <service id> on the agent
and returns an appropriate HTTP code plus a string describing the worst
check.
This cleanly exposes a service's status to other components, hiding the
complexity of multiple checks.
This is especially useful when using Consul to feed a load balancer that
delegates health checking to the Consul agent.
Exposing this endpoint on the agent is necessary to avoid hitting the
Consul servers and to avoid decreasing resiliency (this endpoint will work
even if there is no Consul leader in the cluster).
Fixes: https://github.com/hashicorp/consul/issues/3676
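A sketch of the aggregation logic; the status-to-code mapping shown is one plausible choice, not necessarily the endpoint's final contract:

```go
package sketch

import "net/http"

// rank orders check statuses from best to worst.
var rank = map[string]int{"passing": 0, "warning": 1, "critical": 2}

// worstStatus collapses all of a service's check statuses into the single
// worst one.
func worstStatus(statuses []string) string {
	worst := "passing"
	for _, s := range statuses {
		if rank[s] > rank[worst] {
			worst = s
		}
	}
	return worst
}

// statusCode maps the aggregated status to an HTTP code a load balancer can
// act on without parsing a body.
func statusCode(status string) int {
	switch status {
	case "passing":
		return http.StatusOK // 200
	case "warning":
		return http.StatusTooManyRequests // 429: serving, but degraded
	default:
		return http.StatusServiceUnavailable // 503
	}
}
```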
This fixes a bug where registering an agent with a non-existent ACL token can prevent other
services registered with a good token from being synced to the server when using
`acl_enforce_version_8 = false`.
## Background
When `acl_enforce_version_8` is off, the agent does not check ACL token validity before
storing the service in its state.
When syncing a service registered with a missing ACL token, we fall into the default
error-handling case (https://github.com/hashicorp/consul/blob/master/agent/local/state.go#L1255)
and stop the sync (https://github.com/hashicorp/consul/blob/master/agent/local/state.go#L1082)
without setting its Synced property to true, as is done in the permission-denied case.
This means that the sync will always stop at the faulty service(s).
The order in which the services are synced is random, since we iterate over a map. So eventually
all services with good ACL tokens will be synced; however, this can take some time and is influenced
by the cluster size (the bigger the cluster, the slower the sync, because retries are less frequent).
Having a service in this state also prevents all further syncing of checks, as they are done after
the services.
## Changes
This change modifies the sync process to continue even if there is an error.
This fixes the issue described above and also makes the sync more error-tolerant: the server may
repeatedly refuse a service (the ACL token could have been deleted by the time the service is synced,
or the servers may have been upgraded to a newer version with stricter checks on the service
definition...). With this change, all services and checks that can be synced will be, and those that
cannot will be marked as errors in the logs instead of blocking the whole process (see the sketch below).
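A minimal sketch of the continue-on-error loop, using simplified stand-in types (not the actual `agent/local` code):

```go
package sketch

import "log"

// Service and State are simplified stand-ins for the agent's local state.
type Service struct{ ID string }

type State struct{}

func (s *State) syncService(svc *Service) error { return nil } // stub

// syncServices keeps going after a failure so one service with a bad ACL
// token cannot block the sync of every service and check behind it.
func (s *State) syncServices(services []*Service) error {
	var firstErr error
	for _, svc := range services {
		if err := s.syncService(svc); err != nil {
			log.Printf("[WARN] agent: failed to sync service %q: %v", svc.ID, err)
			if firstErr == nil {
				firstErr = err // remember it, but keep iterating
			}
		}
	}
	return firstErr // surface an error only after trying everything
}
```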