* clarify possibilities for centralized proxy configuration
* add line breaks to config entries file
* add info about centralized config to built in proxy doc
* modify connect landing page to help with navigation
* move internals details to its own page
* link fixes and shortening text on main page
* put built-in proxy options on its own page
* add configuration details for connect
* clarify security title and add observability page
* reorganize menu
* remove observability from configuration section
* Update website/source/docs/connect/configuration.html.md
Co-Authored-By: Paul Banks <banks@banksco.de>
* Update website/source/docs/connect/index.html.md
Co-Authored-By: Paul Banks <banks@banksco.de>
* Update website/source/docs/agent/config_entries.html.md
Co-Authored-By: Paul Banks <banks@banksco.de>
* Update website/source/docs/connect/configuration.html.md
Co-Authored-By: Paul Banks <banks@banksco.de>
* rename connect section to include service mesh
* reorganize sections per suggestions from Paul
* add configuration edits from Paul
* add internals edits from Paul
* add observability edits from Paul
* reorganize pages and menu
* Update website/source/docs/connect/configuration.html.md
Co-Authored-By: Paul Banks <banks@banksco.de>
* menu corrections and edits
* incorporate some of Paul's comments
* incorporate more of Paul's comments
* Update website/source/docs/connect/configuration.html.md
Co-Authored-By: kaitlincarter-hc <43049322+kaitlincarter-hc@users.noreply.github.com>
* Update website/source/docs/connect/index.html.md
Co-Authored-By: kaitlincarter-hc <43049322+kaitlincarter-hc@users.noreply.github.com>
* Update website/source/docs/connect/index.html.md
Co-Authored-By: kaitlincarter-hc <43049322+kaitlincarter-hc@users.noreply.github.com>
* Update website/source/docs/connect/registration.html.md
Co-Authored-By: kaitlincarter-hc <43049322+kaitlincarter-hc@users.noreply.github.com>
* incorporate kaitlin and pavanni feedback
* add redirect
* fix conflicts in index file
* Resolve conflicts in index file
* correct links for new organization
* Update website/source/docs/connect/proxies.html.md
Co-Authored-By: Paul Banks <banks@banksco.de>
* Update website/source/docs/connect/registration.html.md
Co-Authored-By: Paul Banks <banks@banksco.de>
* Update website/source/docs/connect/registration.html.md
Co-Authored-By: Paul Banks <banks@banksco.de>
* Update website/source/docs/connect/registration.html.md
Co-Authored-By: Paul Banks <banks@banksco.de>
* add title to service registration page
* Add some config entry docs
* Update website/source/docs/agent/config_entries.html.md
Co-Authored-By: mkeeler <mkeeler@users.noreply.github.com>
* Update website/source/docs/agent/config_entries.html.md
Co-Authored-By: mkeeler <mkeeler@users.noreply.github.com>
* Get rid of double negative
* Some incremental updates
* Update the config list docs to not point to service-defaults-related things.
* A few more doc updates to get rid of some service-defaults-specific linking info in the CLI docs
* In progress update
* Update website/source/docs/agent/config_entries.html.md
Co-Authored-By: mkeeler <mkeeler@users.noreply.github.com>
* Reword bootstrap section
* Update example proxy-defaults config
* Finish up the examples section for managing config entries with the CLI (a sketch follows this commit list)
* Update website/source/docs/agent/config_entries.html.md
Co-Authored-By: mkeeler <mkeeler@users.noreply.github.com>
* Use $ for shell command start
* Make it very clear that the normal way to manage things is via the API/CLI
* Update website/source/docs/agent/options.html.md
Co-Authored-By: mkeeler <mkeeler@users.noreply.github.com>
* Add api docs for the config entry endpoints
* Add enable_central_service_config field to agent docs
* Add docs for config entry CLI operations
* Fix wording and links in config entry docs
* Add links to the central service config option
* Update the central service config setting description.
* starting broken link fixes
* Updating the other links for ACLs
* Updating the rest of the links
* fixing acl required links.
* update a bunch of other links
* updated a couple more broken links based on Alvin's checker
* removed the extra s
Fixes: #4222
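The config entry commits above mention managing config entries with the CLI and a `proxy-defaults` example; a minimal sketch of that workflow, assuming the `consul config write/read/list/delete` subcommands and an illustrative entry named `global`:
```
# Illustrative only: write, inspect, and delete a proxy-defaults config entry.
$ cat > proxy-defaults.hcl <<'EOF'
Kind = "proxy-defaults"
Name = "global"
Config {
  local_connect_timeout_ms = 1000
}
EOF
$ consul config write proxy-defaults.hcl
$ consul config read -kind proxy-defaults -name global
$ consul config list -kind proxy-defaults
$ consul config delete -kind proxy-defaults -name global
```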
# Data Filtering
This PR will implement filtering for the following endpoints:
## Supported HTTP Endpoints
- `/agent/checks`
- `/agent/services`
- `/catalog/nodes`
- `/catalog/service/:service`
- `/catalog/connect/:service`
- `/catalog/node/:node`
- `/health/node/:node`
- `/health/checks/:service`
- `/health/service/:service`
- `/health/connect/:service`
- `/health/state/:state`
- `/internal/ui/nodes`
- `/internal/ui/services`
More can be added going forward; any endpoint that lists data is a good candidate.
## Usage
When using the HTTP API, a `filter` query parameter can be used to pass a filter expression to Consul. Filter expressions take the general form of:
```
<selector> == <value>
<selector> != <value>
<value> in <selector>
<value> not in <selector>
<selector> contains <value>
<selector> not contains <value>
<selector> is empty
<selector> is not empty
not <other expression>
<expression 1> and <expression 2>
<expression 1> or <expression 2>
```
Normal boolean logic and precedence are supported. All of the actual filtering and evaluation logic comes from the [go-bexpr](https://github.com/hashicorp/go-bexpr) library.
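For example, a minimal sketch of filtering one of the endpoints above, assuming the usual `/v1` API prefix and that `Service` is a valid selector for `/agent/services` (the service name is illustrative):
```
# Illustrative only: list agent services, keeping only those named "web".
$ curl -G http://127.0.0.1:8500/v1/agent/services \
    --data-urlencode 'filter=Service == "web"'
```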
## Other changes
Adding the `Internal.ServiceDump` RPC endpoint. This will allow the UI to filter services better.
* website: specify value of acquire/release params for kv
* website: clarify leader election usage in TTL docs
* website: document minimal value of lockdelay
I believe Consul falls back to the default when parsing 0, since it treats that as an empty parameter in this case.
This PR adds two features which will be useful for operators when ACLs are in use.
1. Tokens set in configuration files are now reloadable.
2. If `acl.enable_token_persistence` is set to `true` in the configuration, tokens set via the `v1/agent/token` endpoint are now persisted to disk and loaded when the agent starts (or during configuration reload)
Note that token persistence is opt-in so our users who do not want tokens on the local disk will see no change.
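A minimal sketch of opting in, assuming a JSON agent configuration file and the `acl.tokens` block (the token values are placeholders):
```
{
  "acl": {
    "enabled": true,
    "enable_token_persistence": true,
    "tokens": {
      "default": "<token used when a request supplies none>",
      "agent": "<token used for internal agent operations>"
    }
  }
}
```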
Some other secondary changes:
* Refactored a bunch of places where the replication token is retrieved from the token store. This token isn't just for replicating ACLs anymore, and it is now named accordingly.
* Allowed better paths in the `v1/agent/token/` API. Instead of paths like `v1/agent/token/acl_replication_token`, the path can now be just `v1/agent/token/replication` (see the example after this list). The old paths remain valid.
* Added a couple of new API functions to set tokens via the new paths, deprecated the old ones, and pointed them at the new names. The new names are also generally better: they don't imply that what you are setting is for ACLs, but rather that you are setting ACL tokens. There is a minor semantic difference there, especially for the replication token since, again, it's no longer used only for ACL token/policy replication. The new functions will detect 404s and fall back to using the older token paths when talking to pre-1.4.3 agents.
* Docs updated to reflect the API additions and to show using the new endpoints.
* Updated the ACL CLI set-agent-tokens command to use the non-deprecated APIs.
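For example, a hedged sketch of setting the replication token via the new path, assuming the endpoint accepts a `{"Token": ...}` body (the token value is a placeholder):
```
# Illustrative only: the old v1/agent/token/acl_replication_token path still works too.
$ curl -X PUT -d '{"Token": "<your-replication-token>"}' \
    http://127.0.0.1:8500/v1/agent/token/replication
```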
* Add common blocking implementation details to docs
These come up over and over again with blocking query loops in our own code and in third-party code. #5333 is possibly a case (unconfirmed) where "badly behaved" blocking clients cause issues; however, since we've never explicitly documented these things, it's not reasonable to expect third-party clients to have guessed that they are needed!
This hopefully gives us something to point to for the future.
It's a little wordy; I'm happy to consider breaking some of the blocking content out of this page if we think that's appropriate, but I just wanted to quickly plaster over this gap in our docs for now.
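As a rough illustration of the kind of loop those details apply to, here is a hedged shell sketch of a blocking query against a health endpoint (the service name and the downstream `process-update` step are placeholders):
```
# Illustrative blocking loop: pass the last X-Consul-Index back as ?index=,
# cap each wait, back off on errors, and reset the index if it ever comes
# back missing, malformed, or smaller than the last one seen.
index=0
while true; do
  headers=$(mktemp)
  body=$(curl -sf -D "$headers" \
    "http://127.0.0.1:8500/v1/health/service/web?index=${index}&wait=5m") || {
    sleep 5   # back off instead of hammering the servers on errors
    continue
  }
  new_index=$(grep -i '^x-consul-index:' "$headers" | awk '{print $2}' | tr -d '\r')
  rm -f "$headers"
  case "$new_index" in
    ''|*[!0-9]*) new_index=0 ;;   # treat a missing or malformed index as a reset
  esac
  if [ "$new_index" -lt "$index" ]; then
    index=0   # the index went backwards; reset rather than trusting it
  else
    index=$new_index
  fi
  echo "$body" | process-update   # placeholder for whatever consumes the result
done
```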
* Update index.html.md
* Apply suggestions from code review
Co-Authored-By: banks <banks@banksco.de>
* Update index.html.md
* Update index.html.md
* Clarified monotonically
* Fixing formatting
Given a query like:
```
{
  "Name": "tagged-connect-query",
  "Service": {
    "Service": "foo",
    "Tags": ["tag"],
    "Connect": true
  }
}
```
And a Consul configuration like:
```
{
  "services": [
    {
      "name": "foo",
      "port": 8080,
      "connect": { "sidecar_service": {} },
      "tags": ["tag"]
    }
  ]
}
```
If you executed the query, it would always return 0 results. This was because the sidecar service was being created without any tags. You could instead make your config look like:
```
{
  "services": [
    {
      "name": "foo",
      "port": 8080,
      "connect": {
        "sidecar_service": {
          "tags": ["tag"]
        }
      },
      "tags": ["tag"]
    }
  ]
}
```
However, that is a bit redundant for most cases. This PR ensures that the tags and service meta of the parent service get copied to the sidecar service. If any tags or service meta are set in the sidecar service definition, this copying does not take place. After the changes, the query will now return the expected results.
A second change was made to prepared queries in this PR: filtering on ServiceMeta is now allowed, just like filtering on NodeMeta.
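For illustration, a hedged sketch of a prepared query using that new filter, assuming the field sits alongside `Connect` in the `Service` block and is named `ServiceMeta` (names and values are illustrative):
```
{
  "Name": "prod-connect-query",
  "Service": {
    "Service": "foo",
    "Connect": true,
    "ServiceMeta": { "env": "prod" }
  }
}
```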
This endpoint aggregates all checks related to <service id> on the agent and returns an appropriate HTTP code plus a string describing the worst check.
This allows service status to be cleanly exposed to other components, hiding the complexity of multiple checks.
This is especially useful when using Consul to feed a load balancer that delegates health checking to the Consul agent.
Exposing this endpoint on the agent is necessary to avoid a hit on the Consul servers and to avoid decreasing resiliency (this endpoint will work even if there is no Consul leader in the cluster).
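A hedged example of what a load balancer health check could call, assuming the aggregated endpoint is exposed at `/v1/agent/health/service/name/<service name>` (an ID-based variant is assumed as well) and maps passing/warning/critical to HTTP 200/429/503:
```
# Illustrative only: the response code alone conveys the worst check state,
# so a load balancer only needs to inspect the HTTP status.
$ curl -i http://127.0.0.1:8500/v1/agent/health/service/name/web
```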
* Update the ACL API docs
* Add a CreateTime to the anon token
Also require at least acl:read permissions to perform rule translation. We don't want someone DoSing the system with an open endpoint that actually does a bit of work.
* Fix one place where I was referring to id instead of AccessorID
* Add godocs for the API package additions.
* Minor updates: removed some extra commas and updated the acl intro paragraph
* minor tweaks
* Updated the language to be clearer
* Updated the language to be clearer for policy page
* I was also confused by that! Your updates are much clearer.
Co-Authored-By: kaitlincarter-hc <43049322+kaitlincarter-hc@users.noreply.github.com>
* Sounds much better.
Co-Authored-By: kaitlincarter-hc <43049322+kaitlincarter-hc@users.noreply.github.com>
* Updated sidebar layout and deprecated warning
This PR is almost a complete rewrite of the ACL system within Consul. It brings the features more in line with other HashiCorp products. Obviously there is quite a bit left to do here, but most of it relates to docs, testing, and finishing the last few commands in the CLI. I will update the PR description and check off the todos as I finish them over the next few days/week.
Description
At a high level, this PR is mainly to split ACL tokens from policies and to separate the concept of authorization from that of identity. Much of this PR just supports CRUD operations on ACLTokens and ACLPolicies, which in and of themselves are not particularly interesting. The bigger conceptual changes are in how tokens get resolved, how backwards compatibility is handled, and the separation of policy from identity, which could lead the way to allowing alternative identity providers.
On the surface, and with a new cluster, the ACL system will look very similar to Nomad's. Both have tokens and policies. Both have local tokens. The ACL management APIs for both are very similar. I even ripped off Nomad's ACL bootstrap resetting procedure. There are a few key differences, though.
Nomad requires token and policy replication, whereas Consul only requires policy replication, with token replication being opt-in. In Consul, though, local tokens only work with token replication enabled.
All policies in Nomad are globally applicable. In Consul, all policies are stored and replicated globally but can be scoped to a subset of the datacenters. This allows for more granular access management.
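For illustration, a hedged sketch of creating a datacenter-scoped policy through the new HTTP API, assuming a `PUT /v1/acl/policy` endpoint with a `Datacenters` field (names, rules, and the token are placeholders):
```
# Illustrative only: the Datacenters field limits where the policy is usable.
$ curl -X PUT \
    -H "X-Consul-Token: <management token>" \
    -d '{
          "Name": "dc1-node-read",
          "Rules": "node_prefix \"\" { policy = \"read\" }",
          "Datacenters": ["dc1"]
        }' \
    http://127.0.0.1:8500/v1/acl/policy
```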
Unlike Nomad, Consul has legacy baggage in the form of the original ACL system. The ramifications of this are:
- A server running the new system must still support other clients using the legacy system.
- A client running the new system must be able to use the legacy RPCs when the servers in its datacenter are running the legacy system.
- The primary ACL DC's servers running in legacy mode need to act as a gate that keeps everything else in the entire multi-DC cluster running in legacy mode.
So not only does this PR implement the new ACL system, but it also has a legacy mode built in for when the cluster isn't ready for new ACLs. Detecting that new ACLs can be used is automatic and requires no configuration on the part of administrators. This process is detailed more in the "Transitioning from Legacy to New ACL Mode" section below.