* Add integration test for central config; fix central config WIP
* Set proxy protocol correctly and begin adding upstream support
* Add upstreams to service config cache key and start new notify watcher if they change.
This doesn't update the tests to pass though.
* Fix some merging logic to get things working manually with a hack (TODO: fix properly)
* Simplification to not allow enabling sidecars centrally - it makes no sense without upstreams anyway
* Tests compile again and the obvious ones pass. Lots of local failures not debugged yet, but they may be flakes. Pushing up to see what CI does
* Fix up service manager and API test failures
* Remove the enable command since it no longer makes much sense without being able to turn on sidecar proxies centrally
* Remove version.go hack - will make integration test fail until release
* Remove unused code from commands and upstream merge
* Re-bump version to 1.5.0
* Add support for HTTP proxy listeners
* Add customizable bootstrap configuration options
* Debug logging for xDS AuthZ
* Add Envoy Integration test suite with basic test coverage
* Add envoy command tests to cover new cases
* Add tracing integration test
* Add gRPC support WIP
* Merged Docker changes from master; got CI integration to work with the same Dockerfile now
* Make docker build optional for integration
* Enable integration tests again!
* HTTP/2 and gRPC integration tests and fixes
* Fix up command config tests
* Store all container logs as artifacts in circle on fail
* Add retries to outer part of stats measurements as we keep missing them in CI
* Only dump logs on failing cases
* Fix typos from code review
* Review tidying and make tests pass again
* Add debug logs to exec test.
* Fix legit test failure caused by upstream rename in envoy config
* Attempt to reduce cases of bad TLS handshake in CI integration tests
* Bring up the right service
* Add prometheus integration test
* Add test for denied AuthZ both HTTP and TCP
* Try ANSI term for Circle
* Move the watch package into the api module
It was already just a thin wrapper around the API anyway. The biggest change was to the testing: instead of using a test agent directly from the agent package, it now uses the binary on the PATH just like the other API tests.
The other big change was to fix up the connect-based watch tests so that we didn't need to pull in the connect package (and therefore all of Consul).
Fixes #4673
Supersedes: #5677
There was an error decoding `map[string]string` values due to Go strings being immutable. This was fixed in our go-msgpack fork.
Fixes: #4222
# Data Filtering
This PR implements filtering for the following endpoints:
## Supported HTTP Endpoints
- `/agent/checks`
- `/agent/services`
- `/catalog/nodes`
- `/catalog/service/:service`
- `/catalog/connect/:service`
- `/catalog/node/:node`
- `/health/node/:node`
- `/health/checks/:service`
- `/health/service/:service`
- `/health/connect/:service`
- `/health/state/:state`
- `/internal/ui/nodes`
- `/internal/ui/services`
More can be added going forward; any endpoint that lists data is a good candidate.
## Usage
When using the HTTP API a `filter` query parameter can be used to pass a filter expression to Consul. Filter Expressions take the general form of:
```
<selector> == <value>
<selector> != <value>
<value> in <selector>
<value> not in <selector>
<selector> contains <value>
<selector> not contains <value>
<selector> is empty
<selector> is not empty
not <other expression>
<expression 1> and <expression 2>
<expression 1> or <expression 2>
```
Normal boolean logic and precedence is supported. All of the actual filtering and evaluation logic comes from the [go-bexpr](https://github.com/hashicorp/go-bexpr) library.
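For illustration, here is a minimal Go sketch (not part of the PR) of passing a filter expression via the `filter` query parameter; the agent address, the endpoint, and the selectors used in the expression are made-up examples and depend on the payload of the endpoint being filtered.
```
package main

import (
	"fmt"
	"io/ioutil"
	"net/http"
	"net/url"
)

func main() {
	// Example expression: selectors such as Service and Tags depend on the
	// shape of the data returned by the endpoint being filtered.
	filter := `Service == "redis" and "primary" in Tags`

	u := url.URL{
		Scheme:   "http",
		Host:     "127.0.0.1:8500",
		Path:     "/v1/agent/services",
		RawQuery: url.Values{"filter": []string{filter}}.Encode(),
	}

	resp, err := http.Get(u.String())
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	body, _ := ioutil.ReadAll(resp.Body)
	fmt.Println(string(body))
}
```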
## Other changes
Adding the `Internal.ServiceDump` RPC endpoint. This will allow the UI to filter services better.
acl: reduce complexity of token resolution process with alternative singleflighting
Switches acl resolution to use golang.org/x/sync/singleflight. For the
identity/legacy lookups this is a drop-in replacement with the same
overall approach to request coalescing.
For policies this is technically a change in behavior, but when
considered holistically is approximately performance neutral (with the
benefit of less code).
There are two goals with this blob of code (speaking specifically of
policy resolution here):
1) Minimize cross-DC requests.
2) Minimize client-to-server LAN requests.
The previous iteration of this code was optimizing for the case of many
possibly different tokens being resolved concurrently that have a
significant overlap in linked policies such that deduplication would be
worth the complexity. While this is laudable there are some things to
consider that can help to adjust expectations:
1) For v1.4+ policies are always replicated, and once a single policy
shows up in a secondary DC the replicated data is considered
authoritative for requests made in that DC. This means that our
earlier concerns about minimizing cross-DC requests are irrelevant
because there will be no cross-DC policy reads that occur.
2) For Server nodes the in-memory ACL policy cache is capped at zero,
meaning it has no caching. Only Client nodes run with a cache. This
means that instead of having an entire DC's worth of tokens (what a
Server might see) whose policy resolutions can be coalesced, these
nodes will only ever see node-local token resolutions. In a
reasonable worst-case scenario where a scheduler like Kubernetes has
"filled" a node with Connect services, even that will only schedule
~100 Connect services per node. If every service has a unique token
there will only be 100 tokens to coalesce, and even then those requests
have to occur concurrently AND hit an empty Consul cache.
Instead of seeing a great coalescing opportunity for cutting down on
redundant policy resolutions, in practice it's far more likely given
node densities that you'd see concurrent requests for the same token
than for two tokens sharing a policy (to a degree that would warrant
the overhead of the current variation of singleflighting).
Given that, this patch switches the Policy resolution process to only
singleflight by requesting token (but keeps the cache as by-policy).
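As a rough sketch of the mechanism (not the actual Consul code), this is how golang.org/x/sync/singleflight coalesces concurrent resolutions of the same token: all callers presenting the same secret share one backend lookup. The `Identity` type and `fetch` function are hypothetical stand-ins.
```
package resolver

import (
	"golang.org/x/sync/singleflight"
)

type Identity struct {
	AccessorID string
	PolicyIDs  []string
}

type Resolver struct {
	group singleflight.Group
	// fetch is whatever actually hits the servers / replicated store.
	fetch func(secretID string) (*Identity, error)
}

// ResolveToken singleflights by token secret so that N concurrent requests
// carrying the same token result in a single backend lookup.
func (r *Resolver) ResolveToken(secretID string) (*Identity, error) {
	v, err, _ := r.group.Do(secretID, func() (interface{}, error) {
		return r.fetch(secretID)
	})
	if err != nil {
		return nil, err
	}
	return v.(*Identity), nil
}
```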
Bump `shirou/gopsutil` to include https://github.com/shirou/gopsutil/pull/603
This allows a consistent node-id even when the machine is reinstalled,
when using `"disable_host_node_id": false`.
It will fix https://github.com/hashicorp/consul/issues/4914 and allow keeping
the same node-id even when reinstalling a node from scratch. However,
this only holds within a single OS (reinstalling with Windows will change
the node-id, but that seems acceptable).
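For context, a minimal sketch (assumed usage, not Consul's code) of reading the stable host ID that gopsutil exposes; this is the kind of value the node-id is derived from when `"disable_host_node_id": false`.
```
package main

import (
	"fmt"

	"github.com/shirou/gopsutil/host"
)

func main() {
	info, err := host.Info()
	if err != nil {
		panic(err)
	}
	// HostID is stable across reboots and, with the referenced gopsutil fix,
	// across reinstalls on the same operating system.
	fmt.Println("host id:", info.HostID)
}
```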
This PR is almost a complete rewrite of the ACL system within Consul. It brings the features more in line with other HashiCorp products. Obviously there is quite a bit left to do here, but most of it relates to docs, testing and finishing the last few commands in the CLI. I will update the PR description and check off the todos as I finish them over the next few days/week.
Description
At a high level this PR is mainly to split ACL tokens from Policies and to split the concepts of Authorization from Identities. A lot of this PR is mostly just to support CRUD operations on ACLTokens and ACLPolicies. These in and of themselves are not particularly interesting. The bigger conceptual changes are in how tokens get resolved, how backwards compatibility is handled and the separation of policy from identity which could lead the way to allowing for alternative identity providers.
On the surface, and with a new cluster, the ACL system will look very similar to Nomad's. Both have tokens and policies. Both have local tokens. The ACL management APIs for both are very similar. I even ripped off Nomad's ACL bootstrap resetting procedure. There are a few key differences though.
Nomad requires token and policy replication, whereas Consul only requires policy replication, with token replication being opt-in. In Consul, though, local tokens only work when token replication is enabled.
All policies in Nomad are globally applicable. In Consul all policies are stored and replicated globally but can be scoped to a subset of the datacenters. This allows for more granular access management.
Unlike Nomad, Consul has legacy baggage in the form of the original ACL system. The ramifications of this are:
- A server running the new system must still support other clients using the legacy system.
- A client running the new system must be able to use the legacy RPCs when the servers in its datacenter are running the legacy system.
- The primary ACL DC's servers running in legacy mode need to act as a gate that keeps everything else in the entire multi-DC cluster running in legacy mode.
So not only does this PR implement the new ACL system, it also has a legacy mode built in for when the cluster isn't ready for new ACLs. Detecting that new ACLs can be used is automatic and requires no configuration on the part of administrators. This process is detailed more in the "Transitioning from Legacy to New ACL Mode" section below.
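As a rough illustration of the token/policy CRUD described above, here is a hedged sketch using the Go api client; the method names and struct fields are assumed from the 1.4-era `api` package, and the policy rules and descriptions are made-up examples.
```
package main

import (
	"fmt"

	"github.com/hashicorp/consul/api"
)

func main() {
	client, err := api.NewClient(api.DefaultConfig())
	if err != nil {
		panic(err)
	}

	// Policies are stored and replicated globally but can be scoped to datacenters.
	policy, _, err := client.ACL().PolicyCreate(&api.ACLPolicy{
		Name:        "service-read",
		Rules:       `service_prefix "" { policy = "read" }`,
		Datacenters: []string{"dc1"}, // omit to make the policy valid everywhere
	}, nil)
	if err != nil {
		panic(err)
	}

	// Tokens are separate identities that link to one or more policies.
	token, _, err := client.ACL().TokenCreate(&api.ACLToken{
		Description: "read-only token for dc1 services",
		Policies:    []*api.ACLTokenPolicyLink{{ID: policy.ID}},
		Local:       false,
	}, nil)
	if err != nil {
		panic(err)
	}
	fmt.Println("accessor:", token.AccessorID)
}
```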
* agent/debug: add package for debugging, host info
* api: add v1/agent/host endpoint
* agent: add v1/agent/host endpoint
* command/debug: implementation of static capture
* command/debug: tests and only configured targets
* agent/debug: add basic test for host metrics
* command/debug: add methods for dynamic data capture
* api: add debug/pprof endpoints
* command/debug: add pprof
* command/debug: timing, wg, logs to disk
* vendor: add gopsutil/disk
* command/debug: add a usage section
* website: add docs for consul debug
* agent/host: require operator:read
* api/host: improve docs and no retry timing
* command/debug: fail on extra arguments
* command/debug: fixup file permissions to 0644
* command/debug: remove server flags
* command/debug: improve clarity of usage section
* api/debug: add Trace for profiling, fix profile
* command/debug: capture profile and trace at the same time
* command/debug: add index document
* command/debug: use "clusters" in place of members
* command/debug: remove address in output
* command/debug: improve comment on metrics sleep
* command/debug: clarify usage
* agent: always register pprof handlers and protect
This allows us to avoid restarting a target agent
for profiling by always registering the pprof handlers.
Given this is a potentially sensitive path, it is protected
with an operator:read ACL and enable_debug being
set to true on the target agent; enable_debug still requires
a restart.
If ACLs are disabled, enable_debug is sufficient. (See the sketch after this list.)
* command/debug: use trace.out instead of .prof
More in line with golang docs.
* agent: fix comment wording
* agent: wrap table driven tests in t.Run()
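A simplified sketch of the pprof protection idea mentioned in the list above (the gating helper `hasOperatorRead` is hypothetical, and the real agent combines enable_debug and the ACL check in its own way): handlers are always registered, and access is checked per request instead of at startup.
```
package agenthttp

import (
	"net/http"
	"net/http/pprof"
)

// protect wraps a pprof handler so it only serves requests when enable_debug
// is set or the (hypothetical) operator:read check passes.
func protect(enableDebug bool, hasOperatorRead func(*http.Request) bool, h http.HandlerFunc) http.HandlerFunc {
	return func(w http.ResponseWriter, r *http.Request) {
		if !enableDebug && !hasOperatorRead(r) {
			http.Error(w, "Permission denied", http.StatusForbidden)
			return
		}
		h(w, r)
	}
}

// registerPprof always registers the handlers; protection happens per request,
// so no agent restart is needed just to start profiling.
func registerPprof(mux *http.ServeMux, enableDebug bool, hasOperatorRead func(*http.Request) bool) {
	mux.HandleFunc("/debug/pprof/", protect(enableDebug, hasOperatorRead, pprof.Index))
	mux.HandleFunc("/debug/pprof/profile", protect(enableDebug, hasOperatorRead, pprof.Profile))
	mux.HandleFunc("/debug/pprof/trace", protect(enableDebug, hasOperatorRead, pprof.Trace))
}
```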
* Vendor updates for gRPC and xDS server
* xDS server implementation for serving Envoy as a Connect proxy
* Address initial review comments
* consistent envoy package aliases; typos fixed; override TLS and authz for custom listeners
* Moar Typos
* Moar typos
This includes fixes that improve gossip scalability on very large (> 10k node) clusters.
The Serf changes:
- take snapshot disk IO out of the critical path for handling messages hashicorp/serf#524
- make snapshot compaction much less aggressive - the old fixed threshold caused snapshots to be constantly compacted (synchronously with request handling) on clusters larger than about 2000 nodes! hashicorp/serf#525
Memberlist changes:
- prioritize handling alive messages over suspect/dead to improve stability, and handle the queue in LIFO order to avoid acting on info that's already stale in the queue by the time we handle it. hashicorp/memberlist#159
- limit the number of concurrent pushPull requests being handled at once to 128 (see the sketch after this list). In one test scenario with tens of thousands of servers, we saw channel and lock blocking cause over 3000 pushPulls at once, which ballooned the memory of the server because each pushPull contained a deserialised list of all known 10k+ nodes and their tags, for a total of about 60 million objects and 7GB of memory stuck. While the rest of the fixes here should prevent the same root cause from blocking in the same way, this prevents any other bug or source of contention from allowing pushPull messages to stack up and eat resources. hashicorp/memberlist#158
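An illustrative sketch of the bounding technique (not memberlist's actual code): a buffered channel used as a counting semaphore so that no more than 128 push/pull exchanges are deserialised and handled at once, with extra ones failing fast instead of piling up in memory.
```
package gossip

import "errors"

const maxPushPullConcurrency = 128

var pushPullSem = make(chan struct{}, maxPushPullConcurrency)

var errTooManyPushPulls = errors.New("too many concurrent push/pull requests")

// handlePushPull is a stand-in for the real handler; process does the actual
// deserialisation and state merge.
func handlePushPull(process func() error) error {
	select {
	case pushPullSem <- struct{}{}:
		defer func() { <-pushPullSem }()
		return process()
	default:
		// At the limit: fail fast rather than buffering another full
		// node list in memory.
		return errTooManyPushPulls
	}
}
```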
* New Providers added and updated vendoring for go-discover
* Vendor.json formatted using make vendorfmt
* Docs/Agent/auto-join: Added documentation for the new providers introduced in this PR
* Updated the golang.org/x/sys/unix in the vendor directory
* Agent: TestGoDiscoverRegistration updated to reflect the addition of new providers
* Deleted terraform.tfstate from vendor.
* Deleted terraform.tfstate.backup
Deleted terraform state file artifacts from unknown runs.
* Updated x/sys/windows vendor for Windows binary compilation
The need was identified in issue https://github.com/hashicorp/consul/issues/3687.
Using "NYTimes/gziphandler", HTTP API responses can now be compressed if required.
The Go API client requests compressed responses when possible and handles them.
We change only the HTTP API here (not the UI, for instance).
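A brief sketch of the wiring (assumed, not the exact change): NYTimes/gziphandler wraps an existing handler so responses are gzipped only when the client sends `Accept-Encoding: gzip`. The endpoint and payload below are placeholders.
```
package main

import (
	"net/http"

	"github.com/NYTimes/gziphandler"
)

func main() {
	mux := http.NewServeMux()
	mux.HandleFunc("/v1/agent/self", func(w http.ResponseWriter, r *http.Request) {
		w.Write([]byte(`{"Config":{}}`))
	})

	// Wrap the whole API mux; gziphandler only compresses when the request
	// advertises gzip support.
	http.ListenAndServe("127.0.0.1:8500", gziphandler.GzipHandler(mux))
}
```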
Note that the vendor.json is already correct but the actual files were never checked in, so they are reported as missing:
```
$ govendor list | grep testify
v github.com/stretchr/testify/assert
m github.com/stretchr/testify/require
```
See https://github.com/hashicorp/consul/issues/3977
While trying to further improve #3948 (that pull request is still valid since we are not using compression to compute the result anyway),
I saw some strange behaviour in the DNS library.
Basically, msg.Len() and len(msg.Pack()) disagree on the message length.
Thus, the calculated size of the DNS response is wrong, since Consul relies on msg.Len() instead of the length of the result of Pack().
This is linked to miekg/dns#453 and a fix has been provided with miekg/dns#454
Would it be possible to upgrade miekg/dns to a more recent version?
Consul might, for instance, upgrade to a post-1.0 release such as https://github.com/miekg/dns/releases/tag/v1.0.4
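A small sketch showing where the two length calculations can be compared with miekg/dns; the query name is a made-up example, and on affected versions `Len()` and the packed size drift apart once answer records are appended.
```
package main

import (
	"fmt"

	"github.com/miekg/dns"
)

func main() {
	msg := new(dns.Msg)
	msg.SetQuestion("web.service.consul.", dns.TypeSRV)
	// In a real response many answer records would be appended here,
	// which is where the two length calculations can disagree.

	estimated := msg.Len() // what Consul used for truncation decisions

	packed, err := msg.Pack()
	if err != nil {
		panic(err)
	}
	fmt.Printf("Len()=%d, len(Pack())=%d\n", estimated, len(packed))
}
```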
Update posener/complete to revision v1.1.
Leave only one occurrence of posener/complete in vendor; there was an occurrence for
each package individually.
The formatting of vendor/vendor.json has changed after using
the command "govendor fetch github.com/posener/complete@=v1.1"
* Refactors the HTTP listen path to create servers in the same spot.
* Adds HTTP/2 support to Consul's HTTPS server.
* Vendors Go HTTP/2 library and associated deps.
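A minimal sketch of what vendoring the HTTP/2 library enables, using `http2.ConfigureServer` on a TLS server; the address and certificate paths are placeholders, not Consul's actual wiring.
```
package main

import (
	"net/http"

	"golang.org/x/net/http2"
)

func main() {
	srv := &http.Server{
		Addr:    ":8501",
		Handler: http.NewServeMux(),
	}

	// Advertise "h2" via ALPN and register HTTP/2 support on the server.
	if err := http2.ConfigureServer(srv, &http2.Server{}); err != nil {
		panic(err)
	}

	// Certificate and key paths are placeholders.
	if err := srv.ListenAndServeTLS("server.crt", "server.key"); err != nil {
		panic(err)
	}
}
```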
This patch backports a fix which will show the correct usage screen for
command line flags.
This is considered a temporary fix until the code has been refactored.
Newer versions of the cli library require that the flag set is populated
when Help() is called or that it is populated within Help() itself.
Fixes #3536
This patch updates the go-discover library to use the new config parser
which uses a different encoding scheme for the go-discover config DSL.
Values are no longer URL encoded but taken literally unless they contain
spaces, backslashes or double quotes. To support keys or values with
these special characters the string needs to be double quoted and usual
escaping rules apply.
Fixes #3417
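For illustration, made-up discovery strings showing the quoting described above, written as Go constants: plain values are taken literally, while values containing spaces, backslashes or double quotes are double quoted with the usual escaping.
```
package config

// Example values of the kind passed to retry_join / -retry-join;
// provider and tag names are illustrative only.
const (
	plainDiscovery  = `provider=aws region=us-east-1 tag_key=consul tag_value=server`
	quotedDiscovery = `provider=aws tag_key="consul role" tag_value="server \"primary\""`
)
```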
* vendor: add github.com/sergi/go-diff/diffmatchpatch for diff'ing test output
* config: refactor Sanitize to recursively clean runtime config and format complex fields
* Removes an extra int cast.
* Adds a top-level check test case for sanitization.