If a KVSet is performed but does not update the entry, do not trigger
watches for this key.
This avoids releasing blocking queries for KV values that did not
actually change.
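A minimal sketch of the idea (hypothetical names, not the actual store code): write the entry, but skip notifying watchers when the stored value is unchanged.
```go
package state

import "bytes"

// kvSet is illustrative only: it writes the entry and reports whether
// anything changed so the caller can skip firing watches for this key.
func kvSet(store map[string][]byte, notify func(key string), key string, value []byte) bool {
	if old, ok := store[key]; ok && bytes.Equal(old, value) {
		// Value is identical; don't wake blocking queries on this key.
		return false
	}
	store[key] = value
	notify(key)
	return true
}
```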
* Add integration test for central config; fix central config WIP
* Add integration test for central config; fix central config WIP
* Set proxy protocol correctly and begin adding upstream support
* Add upstreams to service config cache key and start new notify watcher if they change.
This doesn't update the tests to pass though.
* Fix some merging logic to get things working manually with a hack (TODO fix properly)
* Simplification to not allow enabling sidecars centrally - it makes no sense without upstreams anyway
* Tests compile again and obvious ones pass. Lots of failures locally, not debugged yet, but they may be flakes. Pushing up to see what CI does
* Fix up service manager and API test failures
* Remove the enable command since it no longer makes much sense without being able to turn on sidecar proxies centrally
* Remove version.go hack - will make integration test fail until release
* Remove unused code from commands and upstream merge
* Re-bump version to 1.5.0
* Add HTTP endpoints for config entry management
* Finish implementing decoding in the HTTP Config entry apply endpoint
* Add CAS operation to the config entry apply endpoint
Also use this for the bootstrapping and move the config entry decoding function into the structs package.
* First pass at the API client for the config entries
* Fixup some of the ConfigEntry APIs
Return a singular response object instead of a list for the ConfigEntry.Get RPC. This gets plumbed through the HTTP API as well.
Don't return QueryMeta in the JSON response for the config entry listing HTTP API. Instead just return a list of config entries.
* Minor API client fixes
* Attempt at some ConfigEntry api client tests
These don’t currently work due to weak typing in JSON
* Get some of the api client tests passing
* Implement reflectwalk magic to correct the JSON encoding of a ProxyConfigEntry
Also added a test for the HTTP endpoint that exposes the problem. However, since the test doesn't actually do the JSON encode/decode, it's still failing.
* Move MapWalk magic into a binary marshaller instead of JSON.
* Add a MapWalk test
* Get rid of unused func
* Get rid of unused imports
* Fixup some tests now that the decoding from msgpack coerces things into json compat types
* Stub out most of the central config cli
Fully implement the config read command.
* Basic config delete command implementation
* Implement config write command
* Implement config list subcommand
Not entirely sure about the output here. It's basically the read output indented, with a line specifying the kind/name of each type, which is also duplicated in the indented output.
* Update command usage
* Update some help usage formatting
* Add the connect enable helper cli command
* Update list command output
* Rename the config entry API client methods.
* Use renamed apis
* Implement config write tests
Stub the others with the noTabs tests.
* Change list output format
Now just simply output 1 line per named config
* Add config read tests
* Add invalid args write test.
* Add config delete tests
* Add config list tests
* Add connect enable tests
* Update some CLI commands to use CAS ops
This also modifies the HTTP API for a write op to return a boolean indicating whether the value was written or not (see the sketch after this list).
* Fix up the HTTP API CAS tests as I realized they weren’t testing what they should.
* Update config entry rpc tests to properly test CAS
* Fix up a few more tests
* Fix some tests that use ConfigEntries.Apply
* Update config_write_test.go
* Get rid of unused import
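A hedged example of the check-and-set flow against the config HTTP endpoint (address, index, and payload are illustrative): write with `?cas=<index>` and inspect the boolean response body.
```go
package main

import (
	"bytes"
	"fmt"
	"io"
	"net/http"
)

func main() {
	// Illustrative only: write a service-defaults entry with a
	// check-and-set index. The response body is "true" when the write
	// happened and "false" when the supplied index did not match.
	entry := []byte(`{"Kind": "service-defaults", "Name": "web", "Protocol": "http"}`)
	req, err := http.NewRequest(http.MethodPut,
		"http://127.0.0.1:8500/v1/config?cas=42", bytes.NewReader(entry))
	if err != nil {
		panic(err)
	}
	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	written, _ := io.ReadAll(resp.Body)
	fmt.Printf("written: %s\n", written)
}
```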
When receiving a serf failed message for a node which is not in the
catalog, do not perform a register request to set its serf health to
critical, as it could overwrite the node information and services if it
was renamed.
Fixes: #5518
Roles are named and can express the same bundle of permissions that can
currently be assigned to a Token (lists of Policies and Service
Identities). The difference with a Role is that it is not itself a bearer
token, but just another entity that can be tied to a Token.
This lets an operator potentially curate a set of smaller reusable
Policies and compose them together into reusable Roles, rather than
always exploding that same list of Policies on any Token that needs
similar permissions.
This also refactors the acl replication code to be semi-generic to avoid
3x copypasta.
This is mainly to avoid having the API return "0001-01-01T00:00:00Z" as
a value for the ExpirationTime field when it is not set. Unfortunately
time.Time doesn't respect the json marshalling "omitempty" directive.
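The usual Go workaround, sketched here with an illustrative struct rather than the real definition, is to make the field a pointer so `omitempty` can drop it when unset.
```go
package structs

import "time"

// Illustrative only: with a *time.Time, a nil ExpirationTime is simply
// omitted from the JSON instead of rendering "0001-01-01T00:00:00Z".
type ACLTokenView struct {
	AccessorID     string
	ExpirationTime *time.Time `json:",omitempty"`
}
```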
These act like a special cased version of a Policy Template for granting
a token the privileges necessary to register a service and its connect
proxy, and read upstreams from the catalog.
Fixes: #4222
# Data Filtering
This PR will implement filtering for the following endpoints:
## Supported HTTP Endpoints
- `/agent/checks`
- `/agent/services`
- `/catalog/nodes`
- `/catalog/service/:service`
- `/catalog/connect/:service`
- `/catalog/node/:node`
- `/health/node/:node`
- `/health/checks/:service`
- `/health/service/:service`
- `/health/connect/:service`
- `/health/state/:state`
- `/internal/ui/nodes`
- `/internal/ui/services`
More can be added going forward; any endpoint that is used to list data is a good candidate.
## Usage
When using the HTTP API a `filter` query parameter can be used to pass a filter expression to Consul. Filter Expressions take the general form of:
```
<selector> == <value>
<selector> != <value>
<value> in <selector>
<value> not in <selector>
<selector> contains <value>
<selector> not contains <value>
<selector> is empty
<selector> is not empty
not <other expression>
<expression 1> and <expression 2>
<expression 1> or <expression 2>
```
Normal boolean logic and precedence are supported. All of the actual filtering and evaluation logic comes from the [go-bexpr](https://github.com/hashicorp/go-bexpr) library.
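As a hedged usage sketch (agent address, service name, and the expression itself are just examples), the expression travels URL-encoded in the `filter` query parameter:
```go
package main

import (
	"fmt"
	"io"
	"net/http"
	"net/url"
)

func main() {
	// Illustrative only: ask for health results of the "web" service
	// whose registrations carry the "primary" tag.
	filter := url.QueryEscape(`Service.Tags contains "primary"`)
	resp, err := http.Get("http://127.0.0.1:8500/v1/health/service/web?filter=" + filter)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Println(string(body))
}
```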
## Other changes
Adding the `Internal.ServiceDump` RPC endpoint. This will allow the UI to filter services better.
* First conversion
* Use serf 0.8.2 tag and associated updated deps
* Move freeport and testutil into internal/
* Make internal/ its own module
* Update imports
* Add replace statements so API and normal Consul code are
self-referencing for ease of development
* Adapt to newer goe/values
* Bump to new cleanhttp
* Fix ban nonprintable chars test
* Update lock bad args test
The error message when the duration cannot be parsed changed in Go 1.12
(ae0c435877d3aacb9af5e706c40f9dddde5d3e67). This updates that test.
* Update another test as well
* Bump travis
* Bump circleci
* Bump go-discover and godo to get rid of launchpad dep
* Bump dockerfile go version
* fix tar command
* Bump go-cleanhttp
* Make Connect health queries unblock correctly in all cases and use the optimal number of watch chans. Fixes #5506.
* Node check test cases and clearer bug test doc
* Comment update
Refs #4984.
Watching chans for every node we touch in a health query is wasteful. #4984 shows that if there are more than 682 service instances we always fall back to watching all services, which kills performance.
We already have a record in MemDB that is reliably updated whenever the service health result should change, thanks to per-service watch indexes.
So in general, provided there is at least one service instance and we actually have a service index for it (we always do now), we only ever need to watch a single channel.
This saves us from ever falling back to the general index and causing the performance cliff in #4984, but it also means fewer goroutines and work done for every blocking health query.
It also saves some allocations made during the query because we no longer have to populate a WatchSet with 3 chans per service instance which saves the internal map allocation.
This passes all state store tests except the one that explicitly checked for the fallback behaviour we've now optimized away and in general seems safe.
acl: reduce complexity of token resolution process with alternative singleflighting
Switches acl resolution to use golang.org/x/sync/singleflight. For the
identity/legacy lookups this is a drop-in replacement with the same
overall approach to request coalescing.
For policies this is technically a change in behavior, but when
considered holistically is approximately performance neutral (with the
benefit of less code).
There are two goals with this blob of code (speaking specifically of
policy resolution here):
1) Minimize cross-DC requests.
2) Minimize client-to-server LAN requests.
The previous iteration of this code was optimizing for the case of many
possibly different tokens being resolved concurrently that have a
significant overlap in linked policies such that deduplication would be
worth the complexity. While this is laudable there are some things to
consider that can help to adjust expectations:
1) For v1.4+ policies are always replicated, and once a single policy
shows up in a secondary DC the replicated data is considered
authoritative for requests made in that DC. This means that our
earlier concerns about minimizing cross-DC requests are irrelevant
because there will be no cross-DC policy reads that occur.
2) For Server nodes the in-memory ACL policy cache is capped at zero,
meaning it has no caching. Only Client nodes run with a cache. This
means that instead of having an entire DC's worth of tokens (what a
Server might see) that can have policy resolutions coalesced, these
nodes will only ever be seeing node-local token resolutions. In a
reasonable worst-case scenario where a scheduler like Kubernetes has
"filled" a node with Connect services, even that will only schedule
~100 connect services per node. If every service has a unique token
there will only be 100 tokens to coalesce and even then those requests
have to occur concurrently AND be hitting an empty consul cache.
Instead of seeing a great coalescing opportunity for cutting down on
redundant Policy resolutions, in practice it's far more likely given
node densities that you'd see requests for the same token concurrently
than you would for two tokens sharing a policy concurrently (to a degree
that would warrant the overhead of the current variation of
singleflighting).
Given that, this patch switches the Policy resolution process to only
singleflight by requesting token (but keeps the cache as by-policy).
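A minimal sketch of that by-token singleflighting using `golang.org/x/sync/singleflight`; the resolver type and fetch function are hypothetical stand-ins for the real resolution code.
```go
package acl

import "golang.org/x/sync/singleflight"

// resolver is an illustrative stand-in for the real ACL resolver;
// fetchPolicies represents the RPC/cache lookup for one token.
type resolver struct {
	group         singleflight.Group
	fetchPolicies func(token string) ([]string, error)
}

// resolveTokenPolicies coalesces concurrent resolutions of the same
// token: the first caller does the lookup, later callers share it.
func (r *resolver) resolveTokenPolicies(token string) ([]string, error) {
	v, err, _ := r.group.Do(token, func() (interface{}, error) {
		return r.fetchPolicies(token)
	})
	if err != nil {
		return nil, err
	}
	return v.([]string), nil
}
```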
This PR introduces reloading of the TLS configuration. Consul will now be able to reload the TLS configuration, which previously required a restart. It is not yet possible to turn TLS on or off with these changes; the configuration can only be reloaded when TLS is already turned on. Most importantly, the certificates and CAs can be reloaded.
Node updates were not updating the service indexes, which are used for
service related queries. This caused the X-Consul-Index to stay the same
after a node update as seen from a service query even though the node
data is returned in health queries. If that happened in between queries
the client would miss this change.
We now update the indexes of the services on the node when it is
updated.
Fixes: #5450
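A simplified sketch of the fix (the index table and helper are hypothetical, not the real memdb code): bumping the per-service indexes alongside the node index releases blocking service queries.
```go
package state

// bumpNodeServiceIndexes is illustrative: "indexes" stands in for the
// state store's index table, and nodeServices for the services
// currently registered on the updated node.
func bumpNodeServiceIndexes(indexes map[string]uint64, raftIndex uint64, nodeServices []string) {
	indexes["nodes"] = raftIndex
	for _, svc := range nodeServices {
		// Previously only the node index moved, so blocking service
		// queries kept seeing the old X-Consul-Index.
		indexes["service."+svc] = raftIndex
	}
}
```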
In TestServer_LANReap autopilot is running, so the alternate flow
through the serf reaping function is possible. In that situation the
ReconnectTimeout is not relevant so for parity also override the
TombstoneTimeout value as well.
For additional parity update the TestServer_WANReap and
TestClient_LANReap versions of this test in the same way even though
autopilot is irrelevant here.
Fix error in detecting raft replication errors.
Detect redacted token secrets and prevent attempting to insert.
Add a Redacted field to the TokenBatchRead and TokenRead RPC endpoints.
This will indicate whether token secrets have been redacted.
Ensure any token with a redacted secret in secondary datacenters is removed.
Test that redacted tokens cannot be replicated.
Previously we were fixing up the token links directly on the *ACLToken returned by memdb. This invalidated the assumption that a snapshot is immutable and could potentially cause a crash.
The fix here is to give the policy link fixing function copy on write semantics. When no fixes are necessary we can return the memdb object directly, otherwise we copy it and create a new list of links.
Eventually we might find a better way to keep those policy links in sync but for now this fixes the issue.
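A hedged sketch of the copy-on-write semantics (types and names are illustrative): return the memdb object untouched when no link needs fixing, and only then build a copy with a new link list.
```go
package state

type policyLink struct {
	ID   string
	Name string
}

type token struct {
	Policies []policyLink
}

// fixupPolicyLinks returns the original token when every link already
// carries the right policy name; otherwise it returns a copy with a
// freshly built link slice, leaving the memdb snapshot untouched.
func fixupPolicyLinks(t *token, nameByID map[string]string) *token {
	var fixed []policyLink
	for i, link := range t.Policies {
		name, ok := nameByID[link.ID]
		if !ok || name == link.Name {
			if fixed != nil {
				fixed = append(fixed, link)
			}
			continue
		}
		if fixed == nil {
			// First fix needed: copy the links seen so far.
			fixed = append(fixed, t.Policies[:i]...)
		}
		link.Name = name
		fixed = append(fixed, link)
	}
	if fixed == nil {
		return t
	}
	clone := *t
	clone.Policies = fixed
	return &clone
}
```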
This PR adds two features which will be useful for operators when ACLs are in use.
1. Tokens set in configuration files are now reloadable.
2. If `acl.enable_token_persistence` is set to `true` in the configuration, tokens set via the `v1/agent/token` endpoint are now persisted to disk and loaded when the agent starts (or during configuration reload)
Note that token persistence is opt-in so our users who do not want tokens on the local disk will see no change.
Some other secondary changes:
* Refactored a bunch of places where the replication token is retrieved from the token store. This token isn't just for replicating ACLs and now it is named accordingly.
* Allowed better paths in the `v1/agent/token/` API. Instead of paths like `v1/agent/token/acl_replication_token`, the path can now be just `v1/agent/token/replication`. The old paths remain valid.
* Added a couple of new API functions to set tokens via the new paths. Deprecated the old ones and pointed to the new names. The names are also generally better and don't imply that what you are setting is for ACLs, but rather that you are setting ACL tokens. There is a minor semantic difference there, especially for the replication token, as again it's no longer used only for ACL token/policy replication. The new functions will detect 404s and fall back to using the older token paths when talking to pre-1.4.3 agents (see the sketch after this list).
* Docs updated to reflect the API additions and to show using the new endpoints.
* Updated the ACL CLI set-agent-tokens command to use the non-deprecated APIs.
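As a hedged illustration of the shorter paths (token values and agent address are fake), setting the replication token over the raw HTTP API now looks roughly like this:
```go
package main

import (
	"bytes"
	"fmt"
	"net/http"
)

func main() {
	// Hypothetical example: set the replication token via the new,
	// shorter path. The older /v1/agent/token/acl_replication_token
	// path continues to work.
	body := bytes.NewBufferString(`{"Token": "5d05a204-3b2f-4b33-bc2c-7bcbb6a35c18"}`)
	req, err := http.NewRequest(http.MethodPut,
		"http://127.0.0.1:8500/v1/agent/token/replication", body)
	if err != nil {
		panic(err)
	}
	req.Header.Set("X-Consul-Token", "<management token>")
	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	fmt.Println(resp.Status)
}
```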
In order to be able to reload the TLS configuration, we need one way to generate the different configurations.
This PR introduces a `tlsutil.Configurator` which holds a `tlsutil.Config`. Afterwards it is responsible for rendering every `tls.Config`. In this particular PR I moved `IncomingHTTPSConfig`, `IncomingTLSConfig`, and `OutgoingTLSWrapper` into `tlsutil.Configurator`.
This PR is a pure refactoring - not a single feature added. And not a single test added. I only slightly modified existing tests as necessary.
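A rough sketch of the resulting shape, greatly simplified from the real `tlsutil.Configurator` (field names and the single method shown are illustrative):
```go
package tlsutil

import "crypto/tls"

// Config is a simplified stand-in for the agent-level TLS settings.
type Config struct {
	CertFile string
	KeyFile  string
}

// Configurator holds the base Config and renders concrete *tls.Config
// values from it, so a reload only has to swap the base Config.
type Configurator struct {
	base Config
}

func NewConfigurator(c Config) *Configurator { return &Configurator{base: c} }

// IncomingHTTPSConfig is illustrative of the methods that used to be
// free functions and now hang off the Configurator.
func (c *Configurator) IncomingHTTPSConfig() (*tls.Config, error) {
	cert, err := tls.LoadX509KeyPair(c.base.CertFile, c.base.KeyFile)
	if err != nil {
		return nil, err
	}
	return &tls.Config{Certificates: []tls.Certificate{cert}}, nil
}
```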
Also in acl_endpoint_test.go:
* convert logical blocks in some token tests to subtests
* remove use of require.New
This removes a lot of noise in a later PR.
There was an errant early-return in PolicyDelete() that bypassed the
rest of the function. This was ok because the only caller of this
function ignores the results.
This removes the early-return making it structurally behave like
TokenDelete() and for both PolicyDelete and TokenDelete clarify the lone
callers to indicate that the return values are ignored.
We may wish to avoid the entire return value as well, but this patch
doesn't go that far.
`establishLeadership` invoked during leadership monitoring may use autopilot to do promotions etc. There was a race between doing that and autopilot being initialized; this fixes it.
Given a query like:
```
{
  "Name": "tagged-connect-query",
  "Service": {
    "Service": "foo",
    "Tags": ["tag"],
    "Connect": true
  }
}
```
And a Consul configuration like:
```
{
  "services": [
    {
      "name": "foo",
      "port": 8080,
      "connect": { "sidecar_service": {} },
      "tags": ["tag"]
    }
  ]
}
```
If you executed the query it would always turn up with 0 results. This was because the sidecar service was being created without any tags. You could instead make your config look like:
```
{
  "services": [
    {
      "name": "foo",
      "port": 8080,
      "connect": {
        "sidecar_service": {
          "tags": ["tag"]
        }
      },
      "tags": ["tag"]
    }
  ]
}
```
However that is a bit redundant for most cases. This PR ensures that the tags and service meta of the parent service get copied to the sidecar service. If there are any tags or service meta set in the sidecar service definition then this copying does not take place. After the changes, the query will now return the expected results.
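A hedged sketch of the copying rule with simplified types: the parent's tags and meta are inherited only when the sidecar definition leaves them unset.
```go
package agent

type serviceDef struct {
	Tags []string
	Meta map[string]string
}

// inheritSidecarDefaults copies tags and service meta from the parent
// service onto the sidecar only when the sidecar doesn't set its own.
func inheritSidecarDefaults(parent, sidecar *serviceDef) {
	if len(sidecar.Tags) == 0 {
		sidecar.Tags = append([]string(nil), parent.Tags...)
	}
	if len(sidecar.Meta) == 0 {
		sidecar.Meta = make(map[string]string, len(parent.Meta))
		for k, v := range parent.Meta {
			sidecar.Meta[k] = v
		}
	}
}
```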
A second change was made to prepared queries in this PR which is to allow filtering on ServiceMeta just like we allow for filtering on NodeMeta.
When tests fail, only the logs for the failing run are dumped to the
console which helps in diagnosis. This is easily added to other test
scenarios as they come up.
* Fix 2 remote ACL policy resolution issues
1. Use the right method to fire async not-found errors when the ACL.PolicyResolve RPC returns that error. This was previously accidentally firing a token result instead of a policy result, which would have effectively done nothing (unless there happened to be a token with a secret ID == the policy ID being resolved).
2. When concurrent policy resolution is being done we single-flight the requests. The bug before was that the policy resolution that was going to piggyback on another's RPC results wasn't waiting long enough for the results to come back, due to looping with the wrong variable.
* Fix a handful of other edge case ACL scenarios
The main issue was that token specific issues (not able to access a particular policy or the token being deleted after initial fetching) were poisoning the policy cache.
A second issue was that for concurrent token resolutions, the first resolution to get started would go fetch all the policies. If before the policies were retrieved a second resolution request came in, the new request would register watchers for those policies but then never block waiting for them to complete. This resulted in using the default policy when it shouldn't have.
* Support rate limiting and concurrency limiting CSR requests on servers; handle CA rotations gracefully with jitter and backoff-on-rate-limit in client
* Add CSR rate limiting docs
* Fix config naming and add tests for new CA configs
* Store leaf cert indexes in raft and use for the ModifyIndex on the returned certs
This ensures that future certificate signings will have a strictly greater ModifyIndex than any previous certs signed.
## Background
When making a blocking query on a missing service (was never registered, or is not registered anymore) the query returns as soon as any service is updated.
On clusters with frequent updates (5~10 updates/s in our DCs) these queries virtually do not block, and clients with no protection against this waste resources on the agent and server side. Clients that do protect against this get updates later than they should because of the backoff time they implement between requests.
## Implementation
While reducing the number of unnecessary updates we still want:
* Clients to be notified as soon as the last instance of a service disappears.
* Clients to be notified whenever there is an update for the service.
* Clients to be notified as soon as the first instance of the requested service is added.
To reduce the number of unnecessary updates we need to block when a request to a missing service is made. However, in the following case:
1. Client `client1` makes a query for service `foo`, gets back a node and X-Consul-Index 42
2. `foo` is unregistered
3. `client1` makes a query for `foo` with `index=42` -> `foo` does not exist, the query blocks and `client1` is not notified of the change on `foo`
We could store the last raft index when each service was last alive to know whether we should block on the incoming query or not, but that list could grow indefinitely.
We instead store the last raft index at which a service was unregistered and use it when a query targets a service that does not exist.
When a service `srv` is unregistered this "missing service index" is always greater than any X-Consul-Index held by the clients while `srv` was up, allowing us to notify them immediately.
1. Client `client1` makes a query for service `foo`, gets back a node and `X-Consul-Index: 42`
2. `foo` is unregistered, we set the "missing service index" to 43
3. `client1` makes a blocking query for `foo` with `index=42` -> `foo` does not exist, we check against the "missing service index" and return immediately with `X-Consul-Index: 43`
4. `client1` makes a blocking query for `foo` with `index=43` -> we block
5. Other changes happen in the cluster, but foo still doesn't exist and "missing service index" hasn't changed, the query is still blocked
6. `foo` is registered again on index 62 -> `foo` exists and its index is greater than 43, we unblock the query
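A simplified sketch of the index selection (hedged; the real logic lives in the state store and its per-service index tables):
```go
package state

// blockingIndexForService is illustrative: when the service exists we
// return its own index; when it doesn't, we return the index recorded
// at its last deregistration so clients holding an older index are
// released immediately instead of blocking forever.
func blockingIndexForService(exists bool, serviceIdx, missingServiceIdx uint64) uint64 {
	if exists {
		return serviceIdx
	}
	return missingServiceIdx
}
```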
Fixes #4897
Also apparently token deletion could segfault in secondary DCs when attempting to delete non-existent tokens. For that reason both checks are wrapped within the non-nil check.
* Avoid to have infinite recursion in DNS lookups when resolving CNAMEs
This will avoid killing Consul when a Service.Address is using CNAME
to a Consul CNAME that creates an infinite recursion.
This will fix https://github.com/hashicorp/consul/issues/4907
* Use maxRecursionLevel = 3 to allow several recursions
* bugfix: use ServiceTags to generate cache key hash
* update unit test
* update
* remove print log
* Update .gitignore
* Completely deprecate ServiceTag field internally for clarity
* Add explicit test for CacheInfo cases