Commit Graph

58 Commits

R.B. Boyer 05c7373a28 bulk rewrite using this script
    set -euo pipefail

    unset CDPATH

    cd "$(dirname "$0")"

    for f in $(git grep '\brequire := require\.New(' | cut -d':' -f1 | sort -u); do
        echo "=== require: $f ==="
        sed -i '/require := require.New(t)/d' $f
        # require.XXX(blah) but not require.XXX(tblah) or require.XXX(rblah)
        sed -i 's/\brequire\.\([a-zA-Z0-9_]*\)(\([^tr]\)/require.\1(t,\2/g' $f
        # require.XXX(tblah) but not require.XXX(t, blah)
        sed -i 's/\brequire\.\([a-zA-Z0-9_]*\)(\(t[^,]\)/require.\1(t,\2/g' $f
        # require.XXX(rblah) but not require.XXX(r, blah)
        sed -i 's/\brequire\.\([a-zA-Z0-9_]*\)(\(r[^,]\)/require.\1(t,\2/g' $f
        gofmt -s -w $f
    done

    for f in $(git grep '\bassert := assert\.New(' | cut -d':' -f1 | sort -u); do
        echo "=== assert: $f ==="
        sed -i '/assert := assert.New(t)/d' $f
        # assert.XXX(blah) but not assert.XXX(tblah) or assert.XXX(rblah)
        sed -i 's/\bassert\.\([a-zA-Z0-9_]*\)(\([^tr]\)/assert.\1(t,\2/g' $f
        # assert.XXX(tblah) but not assert.XXX(t, blah)
        sed -i 's/\bassert\.\([a-zA-Z0-9_]*\)(\(t[^,]\)/assert.\1(t,\2/g' $f
        # assert.XXX(rblah) but not assert.XXX(r, blah)
        sed -i 's/\bassert\.\([a-zA-Z0-9_]*\)(\(r[^,]\)/assert.\1(t,\2/g' $f
        gofmt -s -w $f
    done
2022-01-20 10:46:23 -06:00
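For illustration, here is the shape of the transformation the script performs (a hedged sketch with a made-up test, not a file from the actual diff):

    package structs

    import (
        "testing"

        "github.com/stretchr/testify/require"
    )

    func foo() string { return "bar" }

    func TestFoo(t *testing.T) {
        // Before the rewrite, the helpers were bound once per test:
        //   require := require.New(t)
        //   require.Equal("bar", foo())
        // After it, the package-level helpers take t explicitly:
        require.Equal(t, "bar", foo())
    }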
Dan Upton 0efe478044
Groundwork for exposing when queries are filtered by ACLs (#11569) 2021-12-03 17:11:26 +00:00
Kyle Havlovitz db88f95fbe consul: add virtual IP generation for connect services 2021-12-02 15:42:47 -08:00
Daniel Nephin b73b68d696 acl: remove ACL.GetPolicy endpoint and resolve legacy acls
Also remove all code that was no longer used once those two were removed.
2021-09-29 14:33:19 -04:00
Dhia Ayachi 96d7842118
partition discovery chains (#10983)
* partition discovery chains

* fix default partition for OSS
2021-09-07 16:29:32 -04:00
Freddy f19b1f0058
Relax validation for expose.paths config (#10394)
Previously we would return an error if duplicate paths were specified.
This could lead to problems in cases where a user has the same path,
say /healthz, on two different ports.

This validation was added to signal a potential misconfiguration.
Instead we will only check for duplicate listener ports, since that is
what would lead to ambiguity issues when generating xDS config.

In the future we could look into using a single listener and creating
distinct filter chains for each path/port.
2021-06-14 14:04:11 -06:00
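A hedged sketch of the relaxed rule described above (illustrative types, not the actual agent/structs code):

    package structs

    import "fmt"

    type ExposePath struct {
        ListenerPort int
        Path         string
    }

    // validateExpose allows duplicate paths (the same /healthz can live on
    // two different ports) but rejects duplicate listener ports, since those
    // are what make the generated xDS config ambiguous.
    func validateExpose(paths []ExposePath) error {
        seen := make(map[int]string)
        for _, p := range paths {
            if prev, ok := seen[p.ListenerPort]; ok {
                return fmt.Errorf("duplicate listener port %d for paths %q and %q",
                    p.ListenerPort, prev, p.Path)
            }
            seen[p.ListenerPort] = p.Path
        }
        return nil
    }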
Daniel Nephin dcf80907a9 structs: fix cache keys
So that requests are cached properly, and the cache does not return the wrong data for a
request.
2021-05-31 17:22:16 -04:00
Daniel Nephin 01790fbcb7 structs: improve the interface of assertCacheInfoKeyIsComplete 2021-05-31 16:54:41 -04:00
Daniel Nephin 9de439f66a structs: Add more cache key tests 2021-05-31 16:54:40 -04:00
Mark Anderson 83c083e70c Fixup more structs_test
Signed-off-by: Mark Anderson <manderson@hashicorp.com>
2021-05-05 10:13:56 -07:00
Mark Anderson 626b27a874 Continue working through proxy and agent
Rework listeners, rename makeListener

Refactor, tests pass

Signed-off-by: Mark Anderson <manderson@hashicorp.com>
2021-05-04 12:41:43 -07:00
freddygv 042753fc26 Prevent wildcard destinations for proxies and upstreams 2021-04-07 09:32:47 -06:00
freddygv 8566495f4f Prevent synthetic upstreams without addresses from failing duplicate ip/port validation 2021-04-07 09:32:47 -06:00
Daniel Nephin ef9d44fdfe structs: fix caching of ServiceSpecificRequest when ingress=true
The field was not being included in the cache info key. This would result in a DNS request for
web.service.consul returning the same result as web.ingress.consul, when those results should
not be the same.
2021-01-14 17:01:40 -05:00
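To see why a missing field matters, a minimal sketch of the failure mode (illustrative types and key format, not the real CacheInfo implementation):

    package structs

    import "fmt"

    type svcRequest struct {
        ServiceName string
        Ingress     bool
    }

    // cacheKey includes the Ingress flag; before the fix it was omitted, so
    // web.service.consul and web.ingress.consul hashed to the same cache entry.
    func (r svcRequest) cacheKey() string {
        return fmt.Sprintf("%s/ingress=%v", r.ServiceName, r.Ingress)
    }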
kevinkengne 18a12d6517
add completeness test for types with CacheInfo method (#9480)
include all fields when fuzzing in tests
split tests by struct type

Ensure the new value for the field is different

fuzzer.Fuzz could produce the same value again in some cases.

Use a custom fuzz function for QueryOptions. That type is an embedded struct in the request types
but only one of the fields is important to include in the cache key.

Move enterpriseMetaField to an oss file so that we can change it in enterprise.
2021-01-12 19:45:46 -05:00
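A hedged sketch of the guard mentioned above, assuming the github.com/google/gofuzz package; the helper name is made up:

    package structs

    import fuzz "github.com/google/gofuzz"

    // fuzzUntilChanged re-fuzzes when fuzzer.Fuzz happens to reproduce the
    // original value, so the test really proves the field feeds the cache key.
    func fuzzUntilChanged(fuzzer *fuzz.Fuzzer, field *string) {
        orig := *field
        for *field == orig {
            fuzzer.Fuzz(field)
        }
    }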
Daniel Nephin 8a017c4f43 structs: add a namespace test for CheckServiceNode.CanRead 2020-10-30 15:07:04 -04:00
Daniel Nephin eb6f2a8d72 structs: add CheckServiceNode.CanRead
And use it from the subscribe endpoint.
2020-10-07 18:15:13 -04:00
Paul Banks 0594667c3a
Fix bad int -> string conversions caught by go vet changes in 1.15 (#8739) 2020-09-24 11:14:07 +01:00
Daniel Nephin 4155cae1cb Fix panic when decoding 'Connect: null'
Surprisingly, json.Unmarshal sets the aux pointer to nil.
2020-08-20 17:52:14 -04:00
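A minimal repro of that gotcha with illustrative types; encoding/json overwrites an existing pointer with nil when it decodes a JSON null, rather than leaving it untouched:

    package main

    import (
        "encoding/json"
        "fmt"
    )

    type connect struct{ Native bool }
    type serviceDef struct{ Connect *connect }

    func main() {
        s := serviceDef{Connect: &connect{}}
        if err := json.Unmarshal([]byte(`{"Connect": null}`), &s); err != nil {
            panic(err)
        }
        // The pointer is now nil, so a later s.Connect.Native would panic.
        fmt.Println(s.Connect == nil) // true
    }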
Daniel Nephin 89d95561df Enable gofmt simplify
Code changes done automatically with 'gofmt -s -w'
2020-06-16 13:21:11 -04:00
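A few of the rewrites `gofmt -s` applies automatically (hedged examples, not lines from this commit's diff):

    package structs

    func simplified(s []int, ch chan int) {
        _ = [][]int{{1, 2}, {3}} // was [][]int{[]int{1, 2}, []int{3}}
        _ = s[1:]                // was s[1:len(s)]
        for range ch {           // was for _ = range ch
        }
    }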
freddygv 1e7e716742 Move compound service names to use ServiceName type 2020-06-12 13:47:43 -06:00
Jono Sosulska 7a13c96a2a
Replace whitelist/blacklist terminology with allowlist/denylist (#7971)
* Replace whitelist/blacklist terminology with allowlist/denylist
2020-05-29 14:19:16 -04:00
Daniel Nephin c2499418ed test: Remove t.Parallel() from agent/structs tests
go test will only run tests in parallel within a single package. In this case the package test run time is exactly the same with or without t.Parallel() (~0.7s).

In general we should avoid t.Parallel() as it causes a number of problems with `go test` not reporting failure messages correctly. I encountered one of these problems, which is what prompted this change. Since `t.Parallel` is not providing any benefit in this package, this commit removes it.

The change was automated with:

    git grep -l 't.Parallel' | xargs sed -i -e '/t.Parallel/d'
2020-05-08 14:06:10 -04:00
R.B. Boyer 9faf8c42d1
sdk: extracting testutil.RequireErrorContains from various places it was duplicated (#7753) 2020-05-01 11:56:34 -05:00
Freddy f5c1e5268b
TLS Origination for Terminating Gateways (#7671) 2020-04-27 16:25:37 -06:00
Freddy c1f79c6b3c
Terminating gateway discovery (#7571)
* Enable discovering terminating gateways

* Add TerminatingGatewayServices to state store

* Use GatewayServices RPC endpoint for ingress/terminating
2020-04-08 12:37:24 -06:00
Kyle Havlovitz 01a23b8eb4
Add config entry/state for Ingress Gateways (#7483)
* Add Ingress gateway config entry and other relevant structs

* Add api package tests for ingress gateways

* Embed EnterpriseMeta into ingress service struct

* Add namespace fields to api module and test consul config write decoding

* Don't require a port for ingress gateways

* Add snakeJSON and camelJSON cases in command test

* Run Normalize on service's ent metadata

Sadly cannot think of a way to test this in OSS.

* Every protocol requires at least 1 service

* Validate ingress protocols

* Update agent/structs/config_entry_gateways.go

Co-authored-by: Chris Piraino <cpiraino@hashicorp.com>
Co-authored-by: Freddy <freddygv@users.noreply.github.com>
2020-03-31 11:59:10 -05:00
Freddy cb55fa3742
Enable CLI to register terminating gateways (#7500)
* Enable CLI to register terminating gateways

* Centralize gateway proxy configuration
2020-03-26 10:20:56 -06:00
R.B. Boyer a7fb26f50f
wan federation via mesh gateways (#6884)
This is like a Möbius strip of code due to the fact that low-level components (serf/memberlist) are connected to high-level components (the catalog and mesh-gateways) in a twisty maze of references which make it hard to dive into. With that in mind here's a high level summary of what you'll find in the patch:

There are several distinct chunks of code that are affected:

* new flags and config options for the server

* retry join WAN is slightly different

* retry join code is shared to discover primary mesh gateways from secondary datacenters

* because retry join logic runs in the *agent* and the results of that
  operation for primary mesh gateways are needed in the *server* there are
  some methods like `RefreshPrimaryGatewayFallbackAddresses` that must occur
  at multiple layers of abstraction just to pass the data down to the right
  layer.

* new cache type `FederationStateListMeshGatewaysName` for use in `proxycfg/xds` layers

* the function signature for RPC dialing picked up a new required field (the
  node name of the destination)

* several new RPCs for manipulating a FederationState object:
  `FederationState:{Apply,Get,List,ListMeshGateways}`

* 3 read-only internal APIs for debugging use to invoke those RPCs from curl

* raft and fsm changes to persist these FederationStates

* replication for FederationStates as they are canonically stored in the
  Primary and replicated to the Secondaries.

* a special derivative of anti-entropy that runs in secondaries to snapshot
  their local mesh gateway `CheckServiceNodes` and sync them into their upstream
  FederationState in the primary (this works in conjunction with the
  replication to distribute addresses for all mesh gateways in all DCs to all
  other DCs)

* a "gateway locator" convenience object to make use of this data to choose
  the addresses of gateways to use for any given RPC or gossip operation to a
  remote DC. This gets data from the "retry join" logic in the agent and also
  directly calls into the FSM.

* RPC (`:8300`) on the server sniffs the first byte of a new connection to
  determine if it's actually doing native TLS. If so it checks the ALPN header
  for protocol determination (just like how the existing system uses the
  type-byte marker).

* 2 new kinds of protocols are exclusively decoded via this native TLS
  mechanism: one for ferrying "packet" operations (udp-like) from the gossip
  layer and one for "stream" operations (tcp-like). The packet operations
  re-use sockets (using length-prefixing) to cut down on TLS re-negotiation
  overhead.

* the server instances specially wrap the `memberlist.NetTransport` when running
  with gateway federation enabled (in a `wanfed.Transport`). The general gist is
  that if it tries to dial a node in the SAME datacenter (deduced by looking
  at the suffix of the node name) there is no change. If dialing a DIFFERENT
  datacenter it is wrapped up in a TLS+ALPN blob and sent through some mesh
  gateways to eventually end up in a server's :8300 port.

* a new flag when launching a mesh gateway via `consul connect envoy` to
  indicate that the servers are to be exposed. This sets a special service
  meta when registering the gateway into the catalog.

* `proxycfg/xds` notice this metadata blob to activate additional watches for
  the FederationState objects as well as the location of all of the consul
  servers in that datacenter.

* `xds:` if the extra metadata is in place additional clusters are defined in a
  DC to bulk sink all traffic to another DC's gateways. For the current
  datacenter we listen on a wildcard name (`server.<dc>.consul`) that load
  balances all servers as well as one mini-cluster per node
  (`<node>.server.<dc>.consul`)

* the `consul tls cert create` command got a new flag (`-node`) to help create
  an additional SAN in certs that can be used with this flavor of federation.
2020-03-09 15:59:02 -05:00
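As a hedged sketch of the first-byte sniffing described for the :8300 RPC port (illustrative code, not the real wanfed implementation):

    package wanfed

    import (
        "bufio"
        "net"
    )

    // bufferedConn replays the peeked byte to the next reader.
    type bufferedConn struct {
        net.Conn
        r *bufio.Reader
    }

    func (c *bufferedConn) Read(p []byte) (int, error) { return c.r.Read(p) }

    // sniffTLS peeks at the first byte of a new connection: 0x16 is the TLS
    // handshake record type, so a match means native TLS (the ALPN header is
    // checked next for protocol determination); anything else falls through
    // to the existing type-byte RPC multiplexer.
    func sniffTLS(conn net.Conn) (net.Conn, bool, error) {
        br := bufio.NewReader(conn)
        b, err := br.Peek(1)
        if err != nil {
            return nil, false, err
        }
        return &bufferedConn{Conn: conn, r: br}, b[0] == 0x16, nil
    }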
R.B. Boyer b4325dfbce
agent: ensure that we always use the same settings for msgpack (#7245)
We set RawToString=true so that []uint8 => string when decoding an interface{}.
We set the MapType so that map[interface{}]interface{} decodes to map[string]interface{}.

Add tests to ensure that this doesn't break existing usages.

Fixes #7223
2020-02-07 15:50:24 -06:00
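A sketch of those settings using the go-msgpack codec package; the handle construction here is illustrative, the real declaration lives in agent/structs:

    package structs

    import (
        "reflect"

        "github.com/hashicorp/go-msgpack/codec"
    )

    func newMsgpackHandle() *codec.MsgpackHandle {
        h := &codec.MsgpackHandle{}
        // []uint8 decodes as string when the target is an interface{}.
        h.RawToString = true
        // Maps decoded into an interface{} come out as map[string]interface{}
        // instead of map[interface{}]interface{}.
        h.MapType = reflect.TypeOf(map[string]interface{}(nil))
        return h
    }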
Freddy 5eace88ce2
Expose HTTP-based paths through Connect proxy (#6446)
Fixes: #5396

This PR adds a proxy configuration stanza called expose. These flags register
listeners in Connect sidecar proxies to allow requests to specific HTTP paths from outside of the node. This allows services to protect themselves by only
listening on the loopback interface, while still accepting traffic from
non-Connect-enabled services.

Under expose there is a boolean checks flag that would automatically expose all
registered HTTP and gRPC check paths.

This stanza also accepts a paths list to expose individual paths. The primary
use case for this functionality would be to expose paths for third parties like
Prometheus or the kubelet.

Listeners for requests to exposed paths are configured dynamically at run
time. Any time a proxy or check can be registered, a listener can also be
created.

In this initial implementation requests to these paths are not
authenticated/encrypted.
2019-09-25 20:55:52 -06:00
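The stanza's shape, sketched as Go structs from the description above (field names are assumptions, not necessarily the exact source):

    package structs

    type ExposeConfig struct {
        // Checks auto-exposes every registered HTTP and gRPC check path.
        Checks bool
        // Paths exposes individual HTTP paths, e.g. for Prometheus or the kubelet.
        Paths []ExposePath
    }

    type ExposePath struct {
        ListenerPort  int    // port the sidecar listens on for outside traffic
        Path          string // HTTP path to expose, e.g. /metrics
        LocalPathPort int    // local port the service itself listens on
        Protocol      string // e.g. "http"
    }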
Mike Morris 88df658243
connect: remove managed proxies (#6220)
* connect: remove managed proxies implementation and all supporting config options and structs

* connect: remove deprecated ProxyDestination

* command: remove CONNECT_PROXY_TOKEN env var

* agent: remove entire proxyprocess proxy manager

* test: remove all managed proxy tests

* test: remove irrelevant managed proxy note from TestService_ServerTLSConfig

* test: update ContentHash to reflect managed proxy removal

* test: remove deprecated ProxyDestination test

* telemetry: remove managed proxy note

* http: remove /v1/agent/connect/proxy endpoint

* ci: remove deprecated test exclusion

* website: update managed proxies deprecation page to note removal

* website: remove managed proxy configuration API docs

* website: remove managed proxy note from built-in proxy config

* website: add note on removing proxy subdirectory of data_dir
2019-08-09 15:19:30 -04:00
R.B. Boyer 6bbbfde88b
connect: validate upstreams and prevent duplicates (#6224)
* connect: validate upstreams and prevent duplicates

* Actually run Upstream.Validate() instead of ignoring it as dead code.

* Prevent two upstreams from declaring the same bind address and port.
  It wouldn't work anyway.

* Prevent two upstreams from being declared that use the same
  type+name+namespace+datacenter. Due to how the Upstream.Identity()
  function worked this ended up mostly being enforced in xDS at use-time,
  but it should be enforced more clearly at register-time.
2019-08-01 13:26:02 -05:00
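A hedged sketch of the two duplicate checks (illustrative Upstream fields; the real struct lives in agent/structs):

    package structs

    import "fmt"

    type Upstream struct {
        DestinationType      string
        DestinationNamespace string
        DestinationName      string
        Datacenter           string
        LocalBindAddress     string
        LocalBindPort        int
    }

    // Identity is the type+name+namespace+datacenter tuple from the commit.
    func (u Upstream) Identity() string {
        return fmt.Sprintf("%s/%s/%s?dc=%s",
            u.DestinationType, u.DestinationNamespace, u.DestinationName, u.Datacenter)
    }

    func validateUpstreams(us []Upstream) error {
        binds := make(map[string]struct{})
        ids := make(map[string]struct{})
        for _, u := range us {
            bind := fmt.Sprintf("%s:%d", u.LocalBindAddress, u.LocalBindPort)
            if _, dup := binds[bind]; dup {
                return fmt.Errorf("two upstreams cannot share bind address %s", bind)
            }
            binds[bind] = struct{}{}
            if _, dup := ids[u.Identity()]; dup {
                return fmt.Errorf("duplicate upstream %s", u.Identity())
            }
            ids[u.Identity()] = struct{}{}
        }
        return nil
    }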
Matt Keeler 63c344727c
Envoy CLI bind addresses (#6107)
* Ensure we MapWalk the proxy config in the NodeService and ServiceNode structs

This gets rid of some json encoder errors in the catalog endpoints

* Allow passing explicit bind addresses to envoy

* Move map walking to the ConnectProxyConfig struct

Any place where this struct gets JSON encoded will benefit as opposed to having to implement it everywhere.

* Fail when a non-empty address is provided and not bindable

* camel case

* Update command/connect/envoy/envoy.go

Co-Authored-By: Paul Banks <banks@banksco.de>
2019-07-12 12:57:31 -04:00
Matt Keeler 39bb0e3e77 Implement Mesh Gateways
This includes both ingress and egress functionality.
2019-07-01 16:28:30 -04:00
Matt Keeler b6688a6b5b
Add tagged addresses for services (#5965)
This allows addresses to be tagged at the service level similar to what we allow for nodes already. The address translation that can be enabled with the `translate_wan_addrs` config was updated to take these new addresses into account as well.
2019-06-17 10:51:50 -04:00
Paul Banks 078f4cf5bb Add integration test for central config; fix central config WIP (#5752)
* Add integration test for central config; fix central config WIP

* Add integration test for central config; fix central config WIP

* Set proxy protocol correctly and begin adding upstream support

* Add upstreams to service config cache key and start new notify watcher if they change.

This doesn't update the tests to pass though.

* Fix some merging logic to get things working manually with a hack (TODO fix properly)

* Simplification to not allow enabling sidecars centrally - it makes no sense without upstreams anyway

* Test compile again and obvious ones pass. Lots of failures locally not debugged yet but may be flakes. Pushing up to see what CI does

* Fix up service manager and API test failures

* Remove the enable command since it no longer makes much sense without being able to turn on sidecar proxies centrally

* Remove version.go hack - will make integration test fail until release

* Remove unused code from commands and upstream merge

* Re-bump version to 1.5.0
2019-05-01 16:39:31 -07:00
Kyle Havlovitz 1fc96c770b Make central service config opt-in and rework the initial registration 2019-04-24 06:11:08 -07:00
Hans Hasselberg cbe53e68f0 correct name 2019-01-25 11:00:56 +01:00
Hans Hasselberg fc2f2b6bd7 demo nomad problem 2019-01-24 16:45:54 +01:00
Paul Banks 0962e95e85
bugfix: use ServiceTags to generate cache key hash (#4987)
* bugfix: use ServiceTags to generate cache key hash

* update unit test

* update

* remove print log

* Update .gitignore

* Completely deprecate ServiceTag field internally for clarity

* Add explicit test for CacheInfo cases
2019-01-07 21:30:47 +00:00
Matt Keeler ec9934b6f8 Remaining ACL Unit Tests (#4852)
* Add leader token upgrade test and fix various ACL enablement bugs

* Update the leader ACL initialization tests.

* Add a StateStore ACL tests for ACLTokenSet and ACLTokenGetBy* functions

* Advertise the agent's ACL support status with the agent/self endpoint.

* Make batch token upsert CAS’able to prevent consistency issues with token auto-upgrade

* Finish up the ACL state store token tests

* Finish the ACL state store unit tests

Also rename some things to make them more consistent.

* Do as much ACL replication testing as I can.
2018-10-31 13:00:46 -07:00
Matt Keeler 99e0a124cb
New ACLs (#4791)
This PR is almost a complete rewrite of the ACL system within Consul. It brings the features more in line with other HashiCorp products. Obviously there is quite a bit left to do here but most of it is related to docs, testing, and finishing the last few commands in the CLI. I will update the PR description and check off the todos as I finish them over the next few days/week.
Description

At a high level this PR is mainly to split ACL tokens from Policies and to split the concepts of Authorization from Identities. A lot of this PR is mostly just to support CRUD operations on ACLTokens and ACLPolicies. These in and of themselves are not particularly interesting. The bigger conceptual changes are in how tokens get resolved, how backwards compatibility is handled and the separation of policy from identity which could lead the way to allowing for alternative identity providers.

On the surface and with a new cluster the ACL system will look very similar to Nomad's. Both have tokens and policies. Both have local tokens. The ACL management APIs for both are very similar. I even ripped off Nomad's ACL bootstrap resetting procedure. There are a few key differences though.

    Nomad requires token and policy replication, where Consul only requires policy replication, with token replication being opt-in. In Consul, though, local tokens only work when token replication is enabled.
    All policies in Nomad are globally applicable. In Consul all policies are stored and replicated globally but can be scoped to a subset of the datacenters. This allows for more granular access management.
    Unlike Nomad, Consul has legacy baggage in the form of the original ACL system. The ramifications of this are:
        A server running the new system must still support other clients using the legacy system.
        A client running the new system must be able to use the legacy RPCs when the servers in its datacenter are running the legacy system.
        The primary ACL DC's servers running in legacy mode need to act as a gate that keeps everything else in the entire multi-DC cluster running in legacy mode.

So not only does this PR implement the new ACL system, but it also has a legacy mode built in for when the cluster isn't ready for new ACLs. Detecting that new ACLs can be used is automatic and requires no configuration on the part of administrators. This process is detailed more in the "Transitioning from Legacy to New ACL Mode" section below.
2018-10-19 12:04:07 -04:00
Pierre Souchay b0fc91a1d2 [Performance On Large clusters] Reduce updates on large services (#4720)
* [Performance On Large clusters] Checks update services/nodes only when really modified, to avoid too many updates on very large clusters

In a large cluster with a few thousand nodes, the anti-entropy
mechanism performs lots of changes (several per second) while
there is no real change. This patch improves this in order
to increase Consul's scalability, for instance when using many
blocking requests on health.

* [Performance for large clusters] Only updates index of service if service is really modified

* [Performance for large clusters] Only updates index of nodes if node is really modified

* Added comments / ensure IsSame() has clear semantics

* Avoid having a modified boolean, return nil directly if structures are Same

* Fixed unstable unit tests TestLeader_ChangeServerID

* Rewrite TestNode_IsSame() for better readability as suggested by @banks

* Rename ServiceNode.IsSame() into IsSameService() + added unit tests

* Do not duplicate TestStructs_ServiceNode_Conversions() and increase test coverage of IsSameService

* Clearer documentation in IsSameService

* Take into account ServiceProxy into ServiceNode.IsSameService()

* Fixed IsSameService() with all new structures
2018-10-11 12:42:39 +01:00
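The core pattern, as a hedged sketch (minimal stand-in types; the real comparison covers every field, including ServiceProxy):

    package state

    type ServiceNode struct {
        ID      string
        Service string
        Port    int
    }

    // IsSameService reports whether the registration carries no real change.
    func (s *ServiceNode) IsSameService(o *ServiceNode) bool {
        return s.Service == o.Service && s.Port == o.Port
    }

    // ensureService skips the write entirely when nothing changed, so the
    // modify index does not move and blocking queries are not woken up.
    func ensureService(existing, svc *ServiceNode) bool {
        if existing != nil && existing.IsSameService(svc) {
            return false // anti-entropy no-op
        }
        // ...persist svc and bump the modify index...
        return true
    }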
Paul Banks 7038fe6b71 Add SidecarService Syntax sugar to Service Definition (#4686)
* Added new Config for SidecarService in ServiceDefinitions.

* WIP: all the code needed for SidecarService is written... none of it is tested other than config :). Need API updates too.

* Test coverage for the new sidecarServiceFromNodeService method.

* Test API registration with SidecarService

* Recursive Key Translation 🤦

* Add tests for nested sidecar definition arrays to ensure they are translated correctly

* Use dedicated internal state rather than Service Meta for tracking sidecars for deregistration.

Add tests for deregistration.

* API struct for agent register. No other endpoint should be affected yet.

* Additional test cases to cover updates to API registrations
2018-10-10 16:55:34 +01:00
Paul Banks 92fe8c8e89 Add Proxy Upstreams to Service Definition (#4639)
* Refactor Service Definition ProxyDestination.

This includes:
 - Refactoring all internal structs used
 - Updated tests for both deprecated and new input for:
   - Agent Services endpoint response
   - Agent Service endpoint response
   - Agent Register endpoint
     - Unmanaged deprecated field
     - Unmanaged new fields
     - Managed deprecated upstreams
     - Managed new
   - Catalog Register
     - Unmanaged deprecated field
     - Unmanaged new fields
     - Managed deprecated upstreams
     - Managed new
   - Catalog Services endpoint response
   - Catalog Node endpoint response
   - Catalog Service endpoint response
 - Updated API tests for all of the above too (both deprecated and new forms of register)

TODO:
 - config package changes for on-disk service definitions
 - proxy config endpoint
 - built-in proxy support for new fields

* Agent proxy config endpoint updated with upstreams

* Config file changes for upstreams.

* Add upstream opaque config and update all tests to ensure it works everywhere.

* Built in proxy working with new Upstreams config

* Command fixes and deprecations

* Fix key translation, upstream type defaults and a spate of other subtle bugs found with end to end test scripts...

TODO: tests still failing on one case that needs a fix. I think it's key translation for upstreams nested in Managed proxy struct.

* Fix translated keys in API registration.

* Fixes from docs
 - omit some empty undocumented fields in API
 - Bring back ServiceProxyDestination in Catalog responses to not break backwards compat - this was removed assuming it was only used internally.

* Documentation updates for Upstreams in service definition

* Fixes for tests broken by many refactors.

* Enable travis on f-connect branch in this branch too.

* Add consistent Deprecation comments to ProxyDestination uses

* Update version number on deprecation notices, and correct upstream datacenter field with explanation in docs
2018-10-10 16:55:34 +01:00
Pierre Souchay 473e589d86 Implementation of Weights Data structures (#4468)
* Implementation of Weights Data structures

Adding this data structure will allow us to resolve
issues #1088 and #4198

This new structure defaults to values:
```
   { Passing: 1, Warning: 0 }
```

Which means: use a weight of 0 for a service in warning state
and a weight of 1 for a healthy service.
Thus it remains compatible with previous Consul versions.

* Implemented weights for DNS SRV Records

* DNS properly supports agents with weight support while the server does not (backwards compatibility)

* Use Warning value of Weights of 1 by default

When using the DNS interface with only_passing = false, all nodes
with a non-critical health check used to have a weight value of 1.
Having weight.Warning = 0 as the default value is probably
a bad idea, as it breaks backwards compatibility.

Thus, we use a default value of 1 to be consistent with existing behaviour.

* Added documentation for new weight field in service description

* Better documentation about weights as suggested by @banks

* Return weight = 1 for unknown Check states as suggested by @banks

* Fixed typo (of -> or) in error message as requested by @mkeeler

* Fixed unstable unit test TestRetryJoin

* Fixed unstable tests

* Fixed wrong Fatalf format in `testrpc/wait.go`

* Added notes regarding DNS SRV lookup limitations regarding number of instances

* Documentation fixes and clarification regarding SRV records with weights as requested by @banks

* Rephrase docs
2018-09-07 15:30:47 +01:00
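A hedged sketch of how a check state maps to an SRV weight under the rules above (the status strings and helper are illustrative):

    package agent

    type Weights struct {
        Passing int
        Warning int
    }

    func srvWeight(w Weights, status string) int {
        switch status {
        case "passing":
            return w.Passing
        case "warning":
            return w.Warning // defaults to 1 so only_passing=false keeps working
        default:
            return 1 // unknown check states get weight 1, per the commit
        }
    }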
Mitchell Hashimoto ad382d7351 agent: switch ConnectNative to an embedded struct 2018-06-25 12:24:10 -07:00
Mitchell Hashimoto a3e0ac1ee3 agent/consul/state: support querying by Connect native 2018-06-25 12:24:08 -07:00
Mitchell Hashimoto 8e02bbc897 agent/consul: support catalog registration with Connect native 2018-06-25 12:24:07 -07:00