Commit Graph

540 Commits

Author SHA1 Message Date
Kyle Havlovitz 9df597b257 Fill out state store/FSM functions and add tests 2019-03-19 15:56:17 -07:00
Kyle Havlovitz 53913461db Add config types and state store table 2019-03-19 10:06:46 -07:00
Kyle Havlovitz bb0839ea5b Condense some test logic and add a comment about renaming 2019-03-18 16:15:36 -07:00
Paul Banks dd08426b04
Optimize health watching to single chan/goroutine. (#5449)
Refs #4984.

Watching chans for every node we touch in a health query is wasteful. In #4984 it shows that if there are more than 682 service instances we always fall back to watching all services, which kills performance.

We already have a record in MemDB that is reliably updated whenever the service health result should change, thanks to per-service watch indexes.

So in general, provided there is at least one service instance and we actually have a service index for it (we always do now), we only ever need to watch a single channel.

This saves us from ever falling back to the general index and causing the performance cliff in #4984, but it also means fewer goroutines and less work done for every blocking health query.

It also saves some allocations made during the query because we no longer have to populate a WatchSet with 3 chans per service instance which saves the internal map allocation.

This passes all state store tests except the one that explicitly checked for the fallback behaviour we've now optimized away, and in general seems safe.
2019-03-15 20:18:48 +00:00
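
A minimal sketch of the single-channel watch described in the commit above, assuming a memdb-style index table; `watchServiceHealth` and the `"index"` table layout here are illustrative, not the exact state store code.

```go
package state

import (
	memdb "github.com/hashicorp/go-memdb"
)

// watchServiceHealth adds exactly one watch channel for a health query:
// the per-service index record. Every write that should change the health
// result bumps this record and wakes the channel, so we never have to fall
// back to watching the whole services table or add 3 chans per instance.
func watchServiceHealth(tx *memdb.Txn, ws memdb.WatchSet, service string) error {
	watchCh, _, err := tx.FirstWatch("index", "id", "service."+service)
	if err != nil {
		return err
	}
	ws.Add(watchCh) // one chan and one goroutine regardless of instance count
	return nil
}
```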
R.B. Boyer d65008700a
acl: reduce complexity of token resolution process with alternative singleflighting (#5480)

Switches acl resolution to use golang.org/x/sync/singleflight. For the
identity/legacy lookups this is a drop-in replacement with the same
overall approach to request coalescing.

For policies this is technically a change in behavior, but when
considered holistically is approximately performance neutral (with the
benefit of less code).

There are two goals with this blob of code (speaking specifically of
policy resolution here):

  1) Minimize cross-DC requests.
  2) Minimize client-to-server LAN requests.

The previous iteration of this code was optimizing for the case of many
possibly different tokens being resolved concurrently that have a
significant overlap in linked policies such that deduplication would be
worth the complexity. While this is laudable there are some things to
consider that can help to adjust expectations:

  1) For v1.4+ policies are always replicated, and once a single policy
  shows up in a secondary DC the replicated data is considered
  authoritative for requests made in that DC. This means that our
  earlier concerns about minimizing cross-DC requests are irrelevant
  because there will be no cross-DC policy reads that occur.

  2) For Server nodes the in-memory ACL policy cache is capped at zero,
  meaning it has no caching. Only Client nodes run with a cache. This
  means that instead of having an entire DC's worth of tokens (what a
  Server might see) that can have policy resolutions coalesced these
  nodes will only ever be seeing node-local token resolutions. In a
  reasonable worst-case scenario where a scheduler like Kubernetes has
  "filled" a node with Connect services, even that will only schedule
  ~100 connect services per node. If every service has a unique token
  there will only be 100 tokens to coalesce and even then those requests
  have to occur concurrently AND be hitting an empty consul cache.

Instead of seeing a great coalescing opportunity for cutting down on
redundant Policy resolutions, in practice it's far more likely given
node densities that you'd see requests for the same token concurrently
than you would for two tokens sharing a policy concurrently (to a degree
that would warrant the overhead of the current variation of
singleflighting).

Given that, this patch switches the Policy resolution process to
singleflight keyed by the requesting token (but keeps the cache
keyed by policy), as sketched below.
2019-03-14 09:35:34 -05:00
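
A hedged sketch of the coalescing the commit above describes, using `golang.org/x/sync/singleflight` keyed by token secret; `fetchToken` stands in for the real RPC-plus-cache resolution logic.

```go
package acl

import "golang.org/x/sync/singleflight"

// Identity is a simplified stand-in for the resolved token identity.
type Identity struct{ AccessorID string }

type Resolver struct {
	group singleflight.Group
}

// fetchToken is a placeholder for the uncached resolution work.
func (r *Resolver) fetchToken(secretID string) (*Identity, error) {
	return &Identity{AccessorID: "..."}, nil
}

// ResolveToken coalesces concurrent resolutions of the same token into a
// single in-flight request; policies underneath stay cached by policy ID.
func (r *Resolver) ResolveToken(secretID string) (*Identity, error) {
	v, err, _ := r.group.Do(secretID, func() (interface{}, error) {
		return r.fetchToken(secretID)
	})
	if err != nil {
		return nil, err
	}
	return v.(*Identity), nil
}
```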
Kyle Havlovitz 3aec844fd2 Update state store test for changing node ID 2019-03-13 17:05:31 -07:00
Kyle Havlovitz df4ec913f0 Add a test for changing a failed node's ID 2019-03-13 15:39:07 -07:00
Hans Hasselberg d511e86491
agent: enable reloading of tls config (#5419)
This PR introduces TLS configuration reloading. Consul can now reload its TLS configuration, most importantly the certificates and CAs, where previously a restart was required. It is not yet possible to turn TLS on or off with these changes; the configuration can only be reloaded when TLS is already turned on.
2019-03-13 10:29:06 +01:00
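
A sketch of how such a reload can work without restarting listeners, assuming the agent keeps the current `*tls.Config` behind a lock; the `Configurator` shape here is illustrative, not the exact agent code.

```go
package tlsutil

import (
	"crypto/tls"
	"sync"
)

// Configurator holds the current TLS material and hands out configs whose
// callbacks always consult the latest version, so running listeners pick
// up new certificates and CAs without a restart.
type Configurator struct {
	mu   sync.RWMutex
	base *tls.Config
}

// Update swaps in a freshly built config during a reload.
func (c *Configurator) Update(next *tls.Config) {
	c.mu.Lock()
	defer c.mu.Unlock()
	c.base = next
}

// IncomingConfig is what listeners are started with; the callback defers
// to whatever config was most recently stored.
func (c *Configurator) IncomingConfig() *tls.Config {
	return &tls.Config{
		GetConfigForClient: func(*tls.ClientHelloInfo) (*tls.Config, error) {
			c.mu.RLock()
			defer c.mu.RUnlock()
			return c.base, nil
		},
	}
}
```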
R.B. Boyer e9614ee92f
acl: correctly extend the cache for acl identities during resolution (#5475) 2019-03-12 10:23:43 -05:00
Aestek 071fcb28ba [catalog] Update the node's services indexes on update (#5458)
Node updates were not updating the service indexes, which are used for
service-related queries. This caused the X-Consul-Index to stay the same
after a node update as seen from a service query, even though the node
data is returned in health queries. If that happened in between queries,
the client would miss this change.
We now update the indexes of the services on the node when it is
updated.

Fixes: #5450
2019-03-11 14:48:19 +00:00
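
A simplified sketch of the fix above, with stand-in types for the state store's service and index records: after writing the node, bump each `service.<name>` index so service blocking queries observe a fresh X-Consul-Index.

```go
package state

import (
	memdb "github.com/hashicorp/go-memdb"
)

// ServiceNode is a stand-in for structs.ServiceNode.
type ServiceNode struct{ Node, ServiceName string }

type IndexEntry struct {
	Key   string
	Value uint64
}

func updateNodeServiceIndexes(tx *memdb.Txn, idx uint64, nodeName string) error {
	services, err := tx.Get("services", "node", nodeName)
	if err != nil {
		return err
	}
	for raw := services.Next(); raw != nil; raw = services.Next() {
		sn := raw.(*ServiceNode)
		// Bump the per-service index so blocking service queries wake up.
		if err := tx.Insert("index", &IndexEntry{"service." + sn.ServiceName, idx}); err != nil {
			return err
		}
	}
	return nil
}
```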
Kyle Havlovitz bf09061e86 Add logic to allow changing a failed node's ID 2019-03-07 22:42:54 -08:00
Alvin Huang ece3b5907d fix typos 2019-03-06 14:47:33 -05:00
R.B. Boyer 91e78e00c7
fix typos reported by golangci-lint:misspell (#5434) 2019-03-06 11:13:28 -06:00
R.B. Boyer c24e3584be improve flaky LANReap tests by explicitly configuring the tombstone timeout
In TestServer_LANReap autopilot is running, so the alternate flow
through the serf reaping function is possible. In that situation the
ReconnectTimeout is not relevant so for parity also override the
TombstoneTimeout value as well.

For additional parity, update the TestServer_WANReap and
TestClient_LANReap versions of this test in the same way even though
autopilot is irrelevant here.
2019-03-05 14:34:03 -06:00
Matt Keeler 87f9365eee Fixes for CVE-2019-8336
Fix error in detecting raft replication errors.

Detect redacted token secrets and prevent attempting to insert them.

Add a Redacted field to the TokenBatchRead and TokenRead RPC endpoints

This will indicate whether token secrets have been redacted.

Ensure any token with a redacted secret in secondary datacenters is removed.

Test that redacted tokens cannot be replicated.
2019-03-04 19:13:24 +00:00
Matt Keeler 612aba7ced
Dont modify memdb owned token data for get/list requests of tokens (#5412)
Previously we were fixing up the token links directly on the *ACLToken returned by memdb. This violated the assumption that a snapshot is immutable and could potentially cause a crash.

The fix here is to give the policy link fixing function copy on write semantics. When no fixes are necessary we can return the memdb object directly, otherwise we copy it and create a new list of links.

Eventually we might find a better way to keep those policy links in sync but for now this fixes the issue.
2019-03-04 09:28:46 -05:00
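
A copy-on-write sketch of the fix above, with simplified stand-ins for `*structs.ACLToken`: the memdb object is returned untouched when no links need fixing, and copied (including the link slice) otherwise.

```go
package acl

// Simplified stand-ins for structs.ACLToken and its policy links.
type ACLTokenPolicyLink struct{ ID, Name string }

type ACLToken struct {
	Policies []ACLTokenPolicyLink
}

// fixupPolicyLinks has copy-on-write semantics: if every link already has
// the right name we return the memdb-owned token directly; otherwise we
// copy the token and its link slice before fixing, so the immutable
// snapshot is never mutated in place.
func fixupPolicyLinks(tok *ACLToken, nameByID map[string]string) *ACLToken {
	for i, link := range tok.Policies {
		if link.Name == nameByID[link.ID] {
			continue
		}
		clone := *tok
		clone.Policies = make([]ACLTokenPolicyLink, len(tok.Policies))
		copy(clone.Policies, tok.Policies)
		for j := i; j < len(clone.Policies); j++ {
			clone.Policies[j].Name = nameByID[clone.Policies[j].ID]
		}
		return &clone
	}
	return tok // no fixes needed: share the memdb object directly
}
```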
Matt Keeler 416a6543a6
Call RemoveServer for reap events (#5317)
This ensures that servers are removed from RPC routing when they are reaped.
2019-03-04 09:19:35 -05:00
R.B. Boyer d3be5c1d3a fix ignored errors in state store internals as reported by errcheck 2019-03-01 14:18:00 -06:00
Matt Keeler 0c76a4389f
ACL Token Persistence and Reloading (#5328)
This PR adds two features which will be useful for operators when ACLs are in use.

1. Tokens set in configuration files are now reloadable.
2. If `acl.enable_token_persistence` is set to `true` in the configuration, tokens set via the `v1/agent/token` endpoint are now persisted to disk and loaded when the agent starts (or during configuration reload)

Note that token persistence is opt-in so our users who do not want tokens on the local disk will see no change.

Some other secondary changes:

* Refactored a bunch of places where the replication token is retrieved from the token store. This token isn't just for replicating ACLs and now it is named accordingly.
* Allowed better paths in the `v1/agent/token/` API. Instead of paths like `v1/agent/token/acl_replication_token`, the path can now be just `v1/agent/token/replication`. The old paths remain valid.
* Added a couple of new API functions to set tokens via the new paths, and deprecated the old ones with pointers to the new names (see the sketch below). The names are also generally better and don't imply that what you are setting is for ACLs, but rather that you are setting ACL tokens. There is a minor semantic difference there, especially for the replication token since, again, it is no longer used only for ACL token/policy replication. The new functions will detect 404s and fall back to using the older token paths when talking to pre-1.4.3 agents.
* Docs updated to reflect the API additions and to show using the new endpoints.
* Updated the ACL CLI set-agent-tokens command to use the non-deprecated APIs.
2019-02-27 14:28:31 -05:00
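
A usage sketch of the api package helpers the commit above mentions, assuming `UpdateReplicationACLToken` targets the new `v1/agent/token/replication` path and falls back to the legacy one against pre-1.4.3 agents; the token value is a placeholder.

```go
package main

import (
	"log"

	"github.com/hashicorp/consul/api"
)

func main() {
	client, err := api.NewClient(api.DefaultConfig())
	if err != nil {
		log.Fatal(err)
	}
	// PUTs v1/agent/token/replication; falls back to the deprecated
	// v1/agent/token/acl_replication_token path on older agents.
	if _, err := client.Agent().UpdateReplicationACLToken("11111111-2222-3333-4444-555555555555", nil); err != nil {
		log.Fatal(err)
	}
}
```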
Hans Hasselberg 75ababb54f
Centralise tls configuration part 1 (#5366)
In order to be able to reload the TLS configuration, we need one way to generate the different configurations.

This PR introduces a `tlsutil.Configurator` which holds a `tlsutil.Config`. Afterwards it is responsible for rendering every `tls.Config`. In this particular PR I moved `IncomingHTTPSConfig`, `IncomingTLSConfig`, and `OutgoingTLSWrapper` into `tlsutil.Configurator`.

This PR is a pure refactoring - not a single feature added. And not a single test added. I only slightly modified existing tests as necessary.
2019-02-26 16:52:07 +01:00
Alvin Huang c4168e6dfc add wait to TestClient_JoinLAN 2019-02-22 17:34:45 -05:00
Alvin Huang 2e961d6539 add retry to TestResetSessionTimerLocked 2019-02-22 17:34:45 -05:00
R.B. Boyer ae1cb27126 fix incorrect body of TestACLEndpoint_PolicyBatchRead
Lifted from PR #5307 as it was an unrelated drive-by fix on that PR anyway.

s/token/policy/
2019-02-22 09:32:51 -06:00
R.B. Boyer 8e344c0218 test: switch test file from assert -> require for consistency
Also in acl_endpoint_test.go:

* convert logical blocks in some token tests to subtests
* remove use of require.New

This removes a lot of noise in a later PR.
2019-02-14 14:21:19 -06:00
R.B. Boyer 57be6ca215 correct some typos 2019-02-13 13:02:12 -06:00
R.B. Boyer a3e0fb8370 ensure that we plumb our configured logger into all parts of the raft library 2019-02-13 13:02:09 -06:00
R.B. Boyer 3b60891bf8 reduce the local scope of variable 2019-02-13 11:54:28 -06:00
R.B. Boyer 77d28fe9ce
clarify the ACL.PolicyDelete endpoint (#5337)
There was an errant early-return in PolicyDelete() that bypassed the
rest of the function.  This was ok because the only caller of this
function ignores the results.

This removes the early-return making it structurally behave like
TokenDelete() and for both PolicyDelete and TokenDelete clarify the lone
callers to indicate that the return values are ignored.

We may wish to avoid the entire return value as well, but this patch
doesn't go that far.
2019-02-13 09:16:30 -06:00
R.B. Boyer 106d87a4a8
update TestStateStore_ACLBootstrap to not rely upon request mutation (#5335) 2019-02-12 16:09:26 -06:00
Matt Keeler fa2c7059a2
Move autopilot initialization to prevent race (#5322)
`establishLeadership` invoked during leadership monitoring may use autopilot to do promotions, etc. There was a race between doing that and initializing autopilot; this fixes it.
2019-02-11 11:12:24 -05:00
Matt Keeler 210c3a56b0
Improve Connect with Prepared Queries (#5291)
Given a query like:

```
{
   "Name": "tagged-connect-query",
   "Service": {
      "Service": "foo",
      "Tags": ["tag"],
      "Connect": true
   }
}
```

And a Consul configuration like:

```
{
   "services": [
      {
         "name": "foo",
         "port": 8080,
         "connect": { "sidecar_service": {} },
         "tags": ["tag"]
      }
   ]
}
```

If you executed the query it would always turn up with 0 results. This was because the sidecar service was being created without any tags. You could instead make your config look like:

```
{
   "services": [
      {
         "name": "foo",
         "port": 8080,
         "connect": { "sidecar_service": {
            "tags": ["tag"]
         } },
         "tags": ["tag"]
      }
   ]
}
```

However that is a bit redundant for most cases. This PR ensures that the tags and service meta of the parent service get copied to the sidecar service. If there are any tags or service meta set in the sidecar service definition then this copying does not take place. After the changes, the query will now return the expected results.

A second change was made to prepared queries in this PR which is to allow filtering on ServiceMeta just like we allow for filtering on NodeMeta.
2019-02-04 09:36:51 -05:00
R.B. Boyer b5d71ea779
testutil: redirect some test agent logs to testing.T.Logf (#5304)
When tests fail, only the logs for the failing run are dumped to the
console which helps in diagnosis. This is easily added to other test
scenarios as they come up.
2019-02-01 09:21:54 -06:00
Kyle Havlovitz b30b541007
connect: Forward intention RPCs if this isn't the primary 2019-01-22 11:29:21 -08:00
Kyle Havlovitz a731173661
Merge pull request #5249 from hashicorp/ca-fixes-oss
Minor CA fixes
2019-01-22 11:25:09 -08:00
Kyle Havlovitz b0f07d9b5e
Merge pull request #4869 from hashicorp/txn-checks
Add node/service/check operations to transaction api
2019-01-22 11:16:09 -08:00
Matt Keeler cc2cd75f5c
Fix several ACL token/policy resolution issues. (#5246)
* Fix 2 remote ACL policy resolution issues

1. Use the right method to fire async not-found errors when the ACL.PolicyResolve RPC returns that error. This was previously accidentally firing a token result instead of a policy result, which would have effectively done nothing (unless there happened to be a token with a secret ID == the policy ID being resolved).

2. When concurrent policy resolution is being done we singleflight the requests. The bug before was that a policy resolution that was going to piggyback on another's RPC results wasn't waiting long enough for the results to come back, due to looping over the wrong variable.

* Fix a handful of other edge case ACL scenarios

The main issue was that token-specific issues (not being able to access a particular policy, or the token being deleted after the initial fetch) were poisoning the policy cache.

A second issue was that for concurrent token resolutions, the first resolution to get started would go fetch all the policies. If before the policies were retrieved a second resolution request came in, the new request would register watchers for those policies but then never block waiting for them to complete. This resulted in using the default policy when it shouldn't have.
2019-01-22 13:14:43 -05:00
Paul Banks 1c4dfbcd2e
connect: tame thundering herd of CSRs on CA rotation (#5228)
* Support rate limiting and concurrency limiting of CSR requests on servers; handle CA rotations gracefully with jitter and backoff-on-rate-limit in the client (see the sketch below)

* Add CSR rate limiting docs

* Fix config naming and add tests for new CA configs
2019-01-22 17:19:36 +00:00
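
A server-side sketch of the rate limit described above, using a token bucket from `golang.org/x/time/rate`; the reject-rather-than-queue behaviour follows the commit's description, but the names and wiring are illustrative.

```go
package ca

import (
	"errors"

	"golang.org/x/time/rate"
)

// ErrRateLimited tells the client to back off with jitter and retry,
// rather than queueing the CSR on the server.
var ErrRateLimited = errors.New("rate limit exceeded, please retry")

type SignLimiter struct {
	csr *rate.Limiter
}

func NewSignLimiter(csrsPerSecond float64) *SignLimiter {
	return &SignLimiter{csr: rate.NewLimiter(rate.Limit(csrsPerSecond), 1)}
}

// AllowCSR rejects excess CSRs instead of blocking, spreading out the
// thundering herd a CA rotation would otherwise cause.
func (s *SignLimiter) AllowCSR() error {
	if !s.csr.Allow() {
		return ErrRateLimited
	}
	return nil
}
```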
Kyle Havlovitz 4f53fe897a
oss: add the enterprise server stub for intention replication check 2019-01-18 17:32:10 -08:00
Matt Keeler 2f6a9edfac
Store leaf cert indexes in raft and use for the ModifyIndex on the returned certs (#5211)
* Store leaf cert indexes in raft and use for the ModifyIndex on the returned certs

This ensures that future certificate signings will have a strictly greater ModifyIndex than any previous certs signed.
2019-01-11 16:04:57 -05:00
Aestek ff13518961 Improve blocking queries on services that do not exist (#4810)
## Background

When making a blocking query on a missing service (was never registered, or is not registered anymore) the query returns as soon as any service is updated.
On clusters with frequent updates (5~10 updates/s in our DCs) these queries virtually do not block, and clients with no protection against this waste resources on the agent and server side. Clients that do protect against this get updates later than they should because of the backoff time they implement between requests.

## Implementation

While reducing the number of unnecessary updates we still want:
* Clients to be notified as soon as the last instance of a service disappears.
* Clients to be notified whenever there is an update for the service.
* Clients to be notified as soon as the first instance of the requested service is added.

To reduce the number of unnecessary updates we need to block when a request for a missing service is made. However, consider the following case:

1. Client `client1` makes a query for service `foo`, gets back a node and X-Consul-Index 42
2. `foo` is unregistered 
3. `client1`  makes a query for `foo` with `index=42` -> `foo` does not exist, the query blocks and `client1` is not notified of the change on `foo` 

We could store the last raft index at which each service was last alive to know whether we should block on the incoming query or not, but that list could grow indefinitely.
We instead store the last raft index at which a service was unregistered and use it when a query targets a service that does not exist.
When a service `srv` is unregistered, this "missing service index" is always greater than any X-Consul-Index held by the clients while `srv` was up, allowing us to notify them immediately (see the sketch after this list).

1. Client `client1` makes a query for service `foo`, gets back a node and `X-Consul-Index: 42`
2. `foo` is unregistered, we set the "missing service index" to 43 
3. `client1` makes a blocking query for `foo` with `index=42` -> `foo` does not exist, we check against the "missing service index" and return immediately with `X-Consul-Index: 43`
4. `client1` makes a blocking query for `foo` with `index=43` -> we block
5. Other changes happen in the cluster, but foo still doesn't exist and "missing service index" hasn't changed, the query is still blocked
6. `foo` is registered again on index 62 -> `foo` exists and its index is greater than 43, we unblock the query
2019-01-11 09:26:14 -05:00
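
A sketch of the index selection described above, with `indexes` standing in for the state store's index table; the `service_last_extinction` key name is an assumption.

```go
package state

// maxIndexForService picks the index a blocking query should use. If the
// service exists we block on its own index. If it is missing, the
// last-unregistration index is greater than any X-Consul-Index handed out
// while the service was alive, so stale watchers return immediately while
// fresh ones keep blocking.
func maxIndexForService(indexes map[string]uint64, serviceName string) uint64 {
	if idx, ok := indexes["service."+serviceName]; ok {
		return idx
	}
	if idx, ok := indexes["service_last_extinction"]; ok {
		return idx
	}
	// Nothing was ever unregistered: fall back to the table-wide index.
	return indexes["services"]
}
```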
Matt Keeler 29b4512120
acl: Prevent tokens from deleting themselves (#5210)
Fixes #4897 

Also apparently token deletion could segfault in secondary DCs when attempting to delete non-existent tokens. For that reason both checks are wrapped within the non-nil check.
2019-01-10 09:22:51 -05:00
Kyle Havlovitz c266277a49 txn: clean up some state store/acl code 2019-01-09 11:59:23 -08:00
Pierre Souchay 5b8a7d7127 Avoid infinite recursion in DNS lookups when resolving CNAMEs (#4918)
* Avoid infinite recursion in DNS lookups when resolving CNAMEs

This avoids killing Consul when a Service.Address uses a CNAME
pointing to a Consul CNAME that creates an infinite recursion.

This will fix https://github.com/hashicorp/consul/issues/4907

* Use maxRecursionLevel = 3 to allow several recursions
2019-01-07 16:53:54 -05:00
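
A sketch of the guard the commit above describes: thread a depth counter through CNAME resolution and give up past `maxRecursionLevel`; `lookup` is a stand-in for the real record lookup.

```go
package agent

const maxRecursionLevel = 3

// resolveCNAME follows a CNAME chain at most maxRecursionLevel deep, so a
// Service.Address CNAME pointing back into Consul can no longer recurse
// forever and kill the agent.
func resolveCNAME(lookup func(name string) (target string, ok bool), name string, depth int) (string, bool) {
	if depth > maxRecursionLevel {
		return "", false // chain too deep: treat as unresolvable
	}
	target, ok := lookup(name)
	if !ok {
		return name, true // terminal record reached
	}
	return resolveCNAME(lookup, target, depth+1)
}
```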
Paul Banks 0962e95e85
bugfix: use ServiceTags to generate cache key hash (#4987)
* bugfix: use ServiceTags to generate cache key hash

* update unit test

* update

* remove print log

* Update .gitignore

* Completely deprecate ServiceTag field internally for clarity

* Add explicit test for CacheInfo cases
2019-01-07 21:30:47 +00:00
Kyle Havlovitz 8b1dc6a22c txn: fix an issue with querying nodes by name instead of ID 2018-12-12 12:46:33 -08:00
Pierre Souchay 61870be137 [Travis][UnstableTests] Fixed unstable tests in travis (#5013)
* [Travis][UnstableTests] Fixed unstable tests in travis as seen in https://travis-ci.org/hashicorp/consul/jobs/460824602

* Fixed unstable tests in https://travis-ci.org/hashicorp/consul/jobs/460857687
2018-12-12 12:09:42 -08:00
Kyle Havlovitz efcdc85e1a api: add support for new txn operations 2018-12-12 10:54:09 -08:00
Kyle Havlovitz 2408f99cca txn: add tests for RPC endpoint 2018-12-12 10:04:10 -08:00
Kyle Havlovitz 9f4f673c4d txn: add ACL enforcement/validation to new txn ops 2018-12-12 10:04:10 -08:00
Kyle Havlovitz 41e8120d3d state: add tests for new txn ops 2018-12-12 10:04:10 -08:00
Kyle Havlovitz a40a346be8 txn: add service operations 2018-12-12 10:04:10 -08:00
Kyle Havlovitz b1aeb3b943 txn: add node operations 2018-12-12 10:04:10 -08:00
Kyle Havlovitz bd6b7ad162 txn: add pre-check operations to txn endpoint 2018-12-12 10:04:10 -08:00
Kyle Havlovitz 8a0d7b65d6 Add check operations to transaction api 2018-12-12 10:04:10 -08:00
Kyle Havlovitz e7946197b8 connect/ca: prevent blank CA config in snapshot
This PR both prevents a blank CA config from being written out to
a snapshot and allows Consul to gracefully recover from a snapshot
with an invalid CA config.

Fixes #4954.
2018-12-06 17:40:53 -08:00
R.B. Boyer c86eff8859
agent: remove some stray fmt.Print* calls (#5015) 2018-11-29 09:45:51 -06:00
Pierre Souchay d0ca1bade9 Fixed another list of unstable unit tests in travis (#4915)
* Fixed another list of unstable unit tests in travis

Fixed failing tests in https://travis-ci.org/hashicorp/consul/jobs/451357061

* Fixed another list of unstable unit tests in travis.

Fixed failing tests in https://travis-ci.org/hashicorp/consul/jobs/451357061
2018-11-20 11:27:26 +00:00
Kyle Havlovitz 3cc7d6ebb5
Merge pull request #4952 from hashicorp/test-version
tests: Bump test server version to 1.4.0
2018-11-13 13:37:10 -08:00
R.B. Boyer 8662a6d260
acl: add stub hooks to support some plumbing in enterprise (#4951) 2018-11-13 15:35:54 -06:00
Kyle Havlovitz 19f9cad3fe
oss: bump test server version to 1.4.0 2018-11-13 13:13:26 -08:00
Aestek 4fb564abbc Fix catalog tag filter backward compat (#4944)
Fix catalog service node filtering (e.g. /v1/catalog/service/srv?tag=tag1)
between agents <=v1.2.3 and servers >=v1.3.0.
The new server version did not account for the old field when filtering,
hence requests made from old agents were not tag-filtered (see the sketch below).
2018-11-13 14:44:36 +00:00
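
A sketch of the compatibility shim, with a simplified request type: fold the legacy singular `ServiceTag` (what <=v1.2.3 agents send) into the new `ServiceTags` list before filtering.

```go
package consul

// ServiceSpecificRequest is a simplified stand-in for the real request struct.
type ServiceSpecificRequest struct {
	ServiceTag  string   // legacy field sent by old agents
	ServiceTags []string // new multi-tag field
}

// effectiveTags merges the deprecated field into the new one so requests
// from old agents are still tag-filtered by new servers.
func effectiveTags(args *ServiceSpecificRequest) []string {
	if len(args.ServiceTags) == 0 && args.ServiceTag != "" {
		return []string{args.ServiceTag}
	}
	return args.ServiceTags
}
```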
Kyle Havlovitz b0dcf54e50
Merge pull request #4917 from hashicorp/replication-token-cleanup
Use acl replication_token for connect
2018-11-12 09:12:54 -08:00
Kyle Havlovitz 038aefa0bc update non-voting server test to fix enterprise diff 2018-11-09 12:50:24 -08:00
Kyle Havlovitz 70accbb2e0 oss: do a proper check-and-set on the CA roots/config fsm operation 2018-11-09 12:36:23 -08:00
R.B. Boyer 2e29f234b1
acl: fixes ACL replication for legacy tokens without AccessorIDs (#4885) 2018-11-07 07:59:44 -08:00
Kyle Havlovitz 1a4204f363
agent: fix formatting 2018-11-07 02:16:03 -08:00
R.B. Boyer a5d57f5326
fix comment typos (#4890) 2018-11-02 12:00:39 -05:00
Kyle Havlovitz 5b7b8bf842
Merge pull request #4872 from hashicorp/node-snapshot-fix
Node ID/datacenter snapshot fix
2018-10-31 15:51:07 -07:00
Matt Keeler 26b1873b3b Adds documentation for the new ACL APIs (#4851)
* Update the ACL API docs

* Add a CreateTime to the anon token

Also require acl:read permissions at least to perform rule translation. Don’t want someone DoSing the system with an open endpoint that actually does a bit of work.

* Fix one place where I was referring to id instead of AccessorID

* Add godocs for the API package additions.

* Minor updates: removed some extra commas and updated the acl intro paragraph

* minor tweaks

* Updated the language to be clearer

* Updated the language to be clearer for policy page

* I was also confused by that! Your updates are much clearer.

Co-Authored-By: kaitlincarter-hc <43049322+kaitlincarter-hc@users.noreply.github.com>

* Sounds much better.

Co-Authored-By: kaitlincarter-hc <43049322+kaitlincarter-hc@users.noreply.github.com>

* Updated sidebar layout and deprecated warning
2018-10-31 15:11:51 -07:00
Matt Keeler ec9934b6f8 Remaining ACL Unit Tests (#4852)
* Add leader token upgrade test and fix various ACL enablement bugs

* Update the leader ACL initialization tests.

* Add a StateStore ACL tests for ACLTokenSet and ACLTokenGetBy* functions

* Advertise the agent's ACL support status with the agent/self endpoint.

* Make batch token upsert CAS’able to prevent consistency issues with token auto-upgrade

* Finish up the ACL state store token tests

* Finish the ACL state store unit tests

Also rename some things to make them more consistent.

* Do as much ACL replication testing as I can.
2018-10-31 13:00:46 -07:00
Kyle Havlovitz cf2210b5c5 fsm: update snapshot/restore test to include ID and datacenter 2018-10-30 15:53:14 -07:00
Kyle Havlovitz 58ff5e46cb fsm: add missing ID/datacenter to persistNodes 2018-10-30 15:52:54 -07:00
Matt Keeler 0dd537e506
Fix the NonVoter Bootstrap test (#4786) 2018-10-24 10:23:50 -04:00
Kyle Havlovitz 6f40708aca fsm: add Intention operations to transactions for internal use 2018-10-19 10:02:28 -07:00
Matt Keeler df507a4a55 A few misc fixes found by go vet 2018-10-19 12:28:36 -04:00
Matt Keeler 99e0a124cb
New ACLs (#4791)
This PR is almost a complete rewrite of the ACL system within Consul. It brings the features more in line with other HashiCorp products. Obviously there is quite a bit left to do here, but most of it is related to docs, testing, and finishing the last few commands in the CLI. I will update the PR description and check off the todos as I finish them over the next few days/week.
Description

At a high level this PR is mainly to split ACL tokens from Policies and to split the concepts of Authorization from Identities. A lot of this PR is mostly just to support CRUD operations on ACLTokens and ACLPolicies. These in and of themselves are not particularly interesting. The bigger conceptual changes are in how tokens get resolved, how backwards compatibility is handled and the separation of policy from identity which could lead the way to allowing for alternative identity providers.

On the surface, and with a new cluster, the ACL system will look very similar to Nomad's. Both have tokens and policies. Both have local tokens. The ACL management APIs for both are very similar. I even ripped off Nomad's ACL bootstrap resetting procedure. There are a few key differences though.

    Nomad requires token and policy replication, whereas Consul only requires policy replication, with token replication being opt-in. In Consul, local tokens only work with token replication enabled though.
    All policies in Nomad are globally applicable. In Consul all policies are stored and replicated globally but can be scoped to a subset of the datacenters. This allows for more granular access management.
    Unlike Nomad, Consul has legacy baggage in the form of the original ACL system. The ramifications of this are:
        A server running the new system must still support other clients using the legacy system.
        A client running the new system must be able to use the legacy RPCs when the servers in its datacenter are running the legacy system.
        The primary ACL DC's servers running in legacy mode need to act as a gate that keeps everything else in the entire multi-DC cluster running in legacy mode.

So not only does this PR implement the new ACL system, it also has a legacy mode built in for when the cluster isn't ready for new ACLs. Detecting that new ACLs can be used is automatic and requires no configuration on the part of administrators. This process is detailed more in the "Transitioning from Legacy to New ACL Mode" section below.
2018-10-19 12:04:07 -04:00
Pierre Souchay a72f92cac6 dns: implements prefix lookups for DNS TTL (#4605)
This will fix https://github.com/hashicorp/consul/issues/4509 and allow, for instance, lb-* to match services lb-001 or lb-service-007 (see the sketch below).
2018-10-19 08:41:04 -07:00
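
A linear sketch of the wildcard prefix match, assuming TTL configuration like `service_ttl = { "lb-*" = "10s", "*" = "5s" }`; the real lookup can use a radix tree so the longest prefix wins deterministically, while this scan just shows the idea.

```go
package agent

import (
	"strings"
	"time"
)

// ttlForService resolves a DNS TTL for a service name. An exact entry
// wins; otherwise the first wildcard whose prefix matches is used.
func ttlForService(ttls map[string]time.Duration, service string) (time.Duration, bool) {
	if ttl, ok := ttls[service]; ok {
		return ttl, true
	}
	for pattern, ttl := range ttls {
		if strings.HasSuffix(pattern, "*") &&
			strings.HasPrefix(service, strings.TrimSuffix(pattern, "*")) {
			return ttl, true // e.g. "lb-*" matches lb-001 and lb-service-007
		}
	}
	return 0, false
}
```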
Kyle Havlovitz 96a35f8abc re-add Connect multi-dc config changes
This reverts commit 8bcfbaffb6588b024cd1a3cf0952e6bfa7d9e900.
2018-10-19 08:41:03 -07:00
Jack Pearkes 847a0a5266 Revert "Connect multi-dc config" (#4784) 2018-10-11 17:32:45 +01:00
Aestek 260a9880ae [Security] Add finer control over script checks (#4715)
* Add -enable-local-script-checks options

These options allow finer control over when script checks are enabled by
giving the option to only allow them when they are declared from the local
file system.

* Add documentation for the new option

* Nitpick doc wording
2018-10-11 13:22:11 +01:00
Rebecca Zanzig 0ec6d880f5 Support multiple tags for health and catalog http api endpoints (#4717)
* Support multiple tags for health and catalog api endpoints

Fixes #1781.

Adds a `ServiceTags` field to the ServiceSpecificRequest to support
multiple tags, updates the filter logic in the catalog store, and
propagates these change through to the health and catalog endpoints.

Note: Leaves `ServiceTag` in the struct, since it is being used as
part of the DNS lookup, which in turn uses the health check.

* Update the api package to support multiple tags

Includes additional tests.

* Update new tests to use the `require` library

* Update HealthConnect check after a bad merge
2018-10-11 12:50:05 +01:00
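
A usage sketch of the api package addition the commit above mentions, assuming the `Health().ServiceMultipleTags` helper it introduces; only instances carrying all listed tags are returned.

```go
package main

import (
	"fmt"
	"log"

	"github.com/hashicorp/consul/api"
)

func main() {
	client, err := api.NewClient(api.DefaultConfig())
	if err != nil {
		log.Fatal(err)
	}
	// Only passing instances registered with BOTH tags are returned.
	entries, _, err := client.Health().ServiceMultipleTags("web", []string{"primary", "v2"}, true, nil)
	if err != nil {
		log.Fatal(err)
	}
	for _, e := range entries {
		fmt.Println(e.Service.ID, e.Service.Tags)
	}
}
```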
Pierre Souchay b0fc91a1d2 [Performance On Large clusters] Reduce updates on large services (#4720)
* [Performance On Large clusters] Checks do update services/nodes only when really modified to avoid too many updates on very large clusters

In a large cluster with a few thousand nodes, the anti-entropy
mechanism performs lots of changes (several per second) while
there is no real change. This patch improves that in order
to increase Consul's scalability when using many blocking requests on
health, for instance (see the sketch below).

* [Performance for large clusters] Only updates index of service if service is really modified

* [Performance for large clusters] Only updates index of nodes if node is really modified

* Added comments / ensure IsSame() has clear semantics

* Avoid having a modified boolean; return nil directly if structures are the same

* Fixed unstable unit tests TestLeader_ChangeServerID

* Rewrite TestNode_IsSame() for better readability as suggested by @banks

* Rename ServiceNode.IsSame() into IsSameService() + added unit tests

* Do not duplicate TestStructs_ServiceNode_Conversions() and increase test coverage of IsSameService

* Clearer documentation in IsSameService

* Take into account ServiceProxy into ServiceNode.IsSameService()

* Fixed IsSameService() with all new structures
2018-10-11 12:42:39 +01:00
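
A simplified sketch of the guard described above: compare the stored object with the incoming registration and skip the write (and its index bump) when nothing really changed. `Node` here is a stand-in for the real struct.

```go
package state

// Node is a simplified stand-in for structs.Node.
type Node struct {
	Name    string
	Address string
	Meta    map[string]string
}

// IsSame reports whether two registrations carry identical data.
func (n *Node) IsSame(o *Node) bool {
	if n.Name != o.Name || n.Address != o.Address || len(n.Meta) != len(o.Meta) {
		return false
	}
	for k, v := range n.Meta {
		if o.Meta[k] != v {
			return false
		}
	}
	return true
}

// ensureNode skips the write, and therefore the index bump, when the
// incoming anti-entropy registration changes nothing, so blocking queries
// on large clusters stop waking spuriously.
func ensureNode(existing, incoming *Node, write func(*Node) error) error {
	if existing != nil && existing.IsSame(incoming) {
		return nil // no real change: ModifyIndex stays put
	}
	return write(incoming)
}
```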
Pierre Souchay 42f250fa53 Added SOA configuration for DNS settings. (#4714)
This allows fine-tuning the SOA settings sent by Consul in DNS responses,
for instance to be able to control the negative TTL.

Will fix: https://github.com/hashicorp/consul/issues/4713

# Example

Override all settings:

* min_ttl: 0 => 60s
* retry: 600 (10m) => 300s (5 minutes),
* expire: 86400 (24h) => 43200 (12h)
* refresh: 3600 (1h) => 1800 (30 minutes)

```
consul agent -dev -hcl 'dns_config={soa={min_ttl=60,retry=300,expire=43200,refresh=1800}}'
```

Result:
```
dig +multiline @localhost -p 8600 service.consul

; <<>> DiG 9.12.1 <<>> +multiline @localhost -p 8600 service.consul
; (2 servers found)
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NXDOMAIN, id: 36557
;; flags: qr aa rd; QUERY: 1, ANSWER: 0, AUTHORITY: 1, ADDITIONAL: 1
;; WARNING: recursion requested but not available

;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 4096
;; QUESTION SECTION:
;service.consul.		IN A

;; AUTHORITY SECTION:
consul.			0 IN SOA ns.consul. hostmaster.consul. (
				1537959133 ; serial
				1800       ; refresh (30 minutes)
				300        ; retry (5 minutes)
				43200      ; expire (12 hours)
				60         ; minimum (1 minute)
				)

;; Query time: 4 msec
;; SERVER: 127.0.0.1#8600(127.0.0.1)
;; WHEN: Wed Sep 26 12:52:13 CEST 2018
;; MSG SIZE  rcvd: 93
```
2018-10-10 15:50:56 -04:00
Kyle Havlovitz 0cbd176a48 connect/ca: more OSS split for multi-dc 2018-10-10 12:17:59 -07:00
Kyle Havlovitz 6d5160c139 connect/ca: split CA initialization logic between oss/enterprise 2018-10-10 12:17:59 -07:00
Kyle Havlovitz 5b98a602af agent: add primary_datacenter and connect replication config options 2018-10-10 12:17:59 -07:00
Kyle Havlovitz 304595f7a6 connect: add ExternalTrustDomain to CARoot fields 2018-10-10 12:16:47 -07:00
Kyle Havlovitz 475afd0300 docs: deprecate acl_datacenter and replace it with primary_datacenter 2018-10-10 12:16:47 -07:00
Paul Banks 92fe8c8e89 Add Proxy Upstreams to Service Definition (#4639)
* Refactor Service Definition ProxyDestination.

This includes:
 - Refactoring all internal structs used
 - Updated tests for both deprecated and new input for:
   - Agent Services endpoint response
   - Agent Service endpoint response
   - Agent Register endpoint
     - Unmanaged deprecated field
     - Unmanaged new fields
     - Managed deprecated upstreams
     - Managed new
   - Catalog Register
     - Unmanaged deprecated field
     - Unmanaged new fields
     - Managed deprecated upstreams
     - Managed new
   - Catalog Services endpoint response
   - Catalog Node endpoint response
   - Catalog Service endpoint response
 - Updated API tests for all of the above too (both deprecated and new forms of register)

TODO:
 - config package changes for on-disk service definitions
 - proxy config endpoint
 - built-in proxy support for new fields

* Agent proxy config endpoint updated with upstreams

* Config file changes for upstreams.

* Add upstream opaque config and update all tests to ensure it works everywhere.

* Built in proxy working with new Upstreams config

* Command fixes and deprecations

* Fix key translation, upstream type defaults and a spate of other subtle bugs found with end-to-end test scripts...

TODO: tests still failing on one case that needs a fix. I think it's key translation for upstreams nested in Managed proxy struct.

* Fix translated keys in API registration.

* Fixes from docs
 - omit some empty undocumented fields in API
 - Bring back ServiceProxyDestination in Catalog responses to not break backwards compat - this was removed assuming it was only used internally.

* Documentation updates for Upstreams in service definition

* Fixes for tests broken by many refactors.

* Enable travis on f-connect branch in this branch too.

* Add consistent Deprecation comments to ProxyDestination uses

* Update version number on deprecation notices, and correct upstream datacenter field with explanation in docs
2018-10-10 16:55:34 +01:00
Alex Dadgar 90ed72fd70 do not bootstrap with non voters 2018-09-19 17:41:36 -07:00
Kyle Havlovitz 9b8f8975c6
Merge pull request #4644 from hashicorp/ca-refactor
connect/ca: rework initialization/root generation in providers
2018-09-13 13:08:34 -07:00
Paul Banks 09e4c2995b
Fix CA pruning when CA config uses string durations. (#4669)
* Fix CA pruning when CA config uses string durations.

The tl;dr here is:

 - Configuring LeafCertTTL with a string like "72h" is how we do it by default and should be supported
 - Most of our tests managed to escape this by defining them as time.Duration directly
 - Our actual default value is a string
 - Since this is stored in a map[string]interface{} config, when it is written to Raft it goes through a msgpack encode/decode cycle (even though it's written from server not over RPC).
 - msgpack decode leaves the string as a `[]uint8`
 - Some of our parsers required a string and failed
 - So after 1 hour, a default-configured server would throw an error about pruning old CAs
 - If a new CA was configured that set LeafCertTTL as a time.Duration, things might be OK after that, but if a new CA was just configured from a config file, initialization would hit the same issue and always fail, so it would never prune the old CA.
 - Mostly this is just a janky error that got past tests due to many levels of complicated encoding/decoding.

tl;dr of the tl;dr: Yay for type safety. Map[string]interface{} combined with msgpack always goes wrong but we somehow get bitten every time in a new way :D

We already fixed this once! The main CA config had the same problem so @kyhavlov already wrote the mapstructure DecodeHook that fixes it. It wasn't used in several places it needed to be, and one of those is now in `structs`, which caused a dependency cycle, so I've moved them.

This adds a whole new test that explicitly tests the case that broke here. It also adds tests that would have failed in other places before (the Consul and Vault provider parsing functions). I'm not sure if they would ever be affected as it is now, as we've not seen things broken with them, but it seems better to explicitly test that and support it so we aren't bitten a third time!

* Typo fix

* Fix bad Uint8 usage
2018-09-13 15:43:00 +01:00
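
A sketch of the DecodeHook fix discussed above, assuming msgpack hands durations back as `string` or `[]uint8` inside the `map[string]interface{}` config; written against `github.com/mitchellh/mapstructure`.

```go
package structs

import (
	"reflect"
	"time"

	"github.com/mitchellh/mapstructure"
)

// durationHook decodes durations that arrive as "72h" strings, or as the
// []uint8 a msgpack round trip through map[string]interface{} leaves
// behind, into time.Duration.
func durationHook() mapstructure.DecodeHookFunc {
	return func(from, to reflect.Type, data interface{}) (interface{}, error) {
		if to != reflect.TypeOf(time.Duration(0)) {
			return data, nil
		}
		switch v := data.(type) {
		case string:
			return time.ParseDuration(v)
		case []uint8:
			return time.ParseDuration(string(v))
		}
		return data, nil
	}
}
```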
Pierre Souchay 5ecf9823d2 Fix more unstable tests in agent and command 2018-09-12 14:49:27 +01:00
Kyle Havlovitz 8fc2c77fdf
connect/ca: some cleanup and reorganizing of the new methods 2018-09-11 16:43:04 -07:00
Pierre Souchay 7a42c31330 Fix unstable tests in agent, api, and command/watch 2018-09-10 16:58:53 +01:00
Pierre Souchay 473e589d86 Implementation of Weights Data structures (#4468)
* Implementation of Weights Data structures

Adding this data structure will allow us to resolve
issues #1088 and #4198

This new structure defaults to values:
```
   { Passing: 1, Warning: 0 }
```

This means a weight of 0 is used for a Service in Warning state,
while a weight of 1 is used for a Healthy Service.
Thus it remains compatible with previous Consul versions.

* Implemented weights for DNS SRV Records

* DNS properly support agents with weight support while server does not (backwards compatibility)

* Use Warning value of Weights of 1 by default

When using the DNS interface with only_passing = false, all nodes
with a non-Critical healthcheck used to have a weight value of 1.
Having weight.Warning = 0 as the default value is probably
a bad idea, as it breaks backward compatibility.

Thus, we put a default value of 1 to be consistent with existing behaviour.

* Added documentation for new weight field in service description

* Better documentation about weights as suggested by @banks

* Return weight = 1 for unknown Check states as suggested by @banks

* Fixed typo (of -> or) in error message as requested by @mkeeler

* Fixed unstable unit test TestRetryJoin

* Fixed unstable tests

* Fixed wrong Fatalf format in `testrpc/wait.go`

* Added notes regarding DNS SRV lookup limitations regarding number of instances

* Documentation fixes and clarification regarding SRV records with weights as requested by @banks

* Rephrase docs
2018-09-07 15:30:47 +01:00
Kyle Havlovitz e184a18e4b
connect/ca: add Configure/GenerateRoot to provider interface 2018-09-06 19:18:59 -07:00
Pierre Souchay 54d8157ee1 Fixed more flaky tests in ./agent/consul (#4617) 2018-09-04 14:02:47 +01:00
Freddy 10d3048bd6
Bugfix: Use "%#v" when formatting structs (#4600) 2018-08-28 12:37:34 -04:00
Pierre Souchay 9b5cf0c1d0 [BUGFIX] Avoid returning empty data on startup of a non-leader server (#4554)
Ensure that DB is properly initialized when performing stale queries

Addresses:
- https://github.com/hashicorp/consul-replicate/issues/82
- https://github.com/hashicorp/consul/issues/3975
- https://github.com/hashicorp/consul-template/issues/1131
2018-08-23 12:06:39 -04:00