Commit Graph

15 Commits

Author SHA1 Message Date
Mark Anderson 5591cb1e11
Bulk acl message fixup oss (#12470)
* First pass at a helper for bulk changes (see the sketch below)

Signed-off-by: Mark Anderson <manderson@hashicorp.com>

* Convert ACLRead and ACLWrite to new form

Signed-off-by: Mark Anderson <manderson@hashicorp.com>

* AgentRead and AgentWrite

Signed-off-by: Mark Anderson <manderson@hashicorp.com>

* Fix EventWrite

Signed-off-by: Mark Anderson <manderson@hashicorp.com>

* KeyRead, KeyWrite, KeyList

Signed-off-by: Mark Anderson <manderson@hashicorp.com>

* KeyRing

Signed-off-by: Mark Anderson <manderson@hashicorp.com>

* NodeRead NodeWrite

Signed-off-by: Mark Anderson <manderson@hashicorp.com>

* OperatorRead and OperatorWrite

Signed-off-by: Mark Anderson <manderson@hashicorp.com>

* PreparedQuery

Signed-off-by: Mark Anderson <manderson@hashicorp.com>

* Intention partial

Signed-off-by: Mark Anderson <manderson@hashicorp.com>

* Fix ServiceRead, ServiceWrite, etc.

Signed-off-by: Mark Anderson <manderson@hashicorp.com>

* Error check ServiceRead?

Signed-off-by: Mark Anderson <manderson@hashicorp.com>

* Fix SessionRead/SessionWrite

Signed-off-by: Mark Anderson <manderson@hashicorp.com>

* Fixup snapshot ACL

Signed-off-by: Mark Anderson <manderson@hashicorp.com>

* Error fixups for txn

Signed-off-by: Mark Anderson <manderson@hashicorp.com>

* Add changelog

Signed-off-by: Mark Anderson <manderson@hashicorp.com>

* Fixup review comments

Signed-off-by: Mark Anderson <manderson@hashicorp.com>
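
The bullets above convert individual ACL checks to a shared helper so that denied requests can report which resource and access level were missing. A minimal sketch of that style of error, with hypothetical names that may differ from what this PR actually adds:

```go
package aclhelp

import "fmt"

// PermissionDeniedError is a hypothetical stand-in for the richer ACL error
// these endpoints move toward: it names the resource and access level that
// were denied instead of returning a bare "Permission denied".
type PermissionDeniedError struct {
    Resource string // e.g. "key", "node", "service"
    Access   string // e.g. "read", "write"
    Name     string // the specific resource that was requested
}

func (e PermissionDeniedError) Error() string {
    return fmt.Sprintf("Permission denied: missing %s:%s on %q", e.Resource, e.Access, e.Name)
}
```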
2022-03-10 18:48:27 -08:00
R.B. Boyer 07b92a2855
server: fix spurious blocking query suppression for discovery chains (#12512)
Minor fix for behavior in #12362

IsDefault sometimes returns true even if there was a proxy-defaults or service-defaults config entry that was consulted. This PR fixes that.
2022-03-03 16:54:41 -06:00
R.B. Boyer 3804677570
server: suppress spurious blocking query returns where multiple config entries are involved (#12362)
Starting from and extending the mechanism introduced in #12110 we can specially handle the 3 main special Consul RPC endpoints that react to many config entries in a single blocking query in Connect:

- `DiscoveryChain.Get`
- `ConfigEntry.ResolveServiceConfig`
- `Intentions.Match`

All of these will internally watch many config entries, and at least one of those will likely not be found in any given query. Because these are blends of multiple reads, the exact solution from #12110 isn't perfectly aligned, but we can tweak the approach slightly and regain the utility of that mechanism.

### No Config Entries Found

In this case, despite looking for many config entries, none may be found at all. Unlike #12110, in this scenario we do not return an empty reply to the caller, but instead synthesize a struct from default values to return. This can be handled nearly identically to #12110, with the first 1-2 replies being non-empty payloads followed by the standard spurious wakeup suppression mechanism from #12110.

### No Change Since Last Wakeup

Once a blocking query loop on the server has completed and slept at least once, there is a further optimization we can make: detect whether the config entries that were present at specific versions during the prior execution of the loop are identical for the loop we just woke up for. In that scenario we can return a slightly different internal sentinel error and handle it externally in much the same way as #12110.

This would mean that even if 20 discovery chain read RPC handling goroutines wake up due to the creation of an unrelated config entry, the only ones that will terminate and reply with a blob of data are those that genuinely have new data to report.
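
A minimal sketch of that comparison, using illustrative names rather than Consul's real internals:

```go
package watch

import "errors"

// errSameConfigEntries stands in for the "slightly different internal
// sentinel error" described above; the name is illustrative.
var errSameConfigEntries = errors.New("config entries unchanged since last wakeup")

// watchSet remembers the version of every config entry consulted on the
// previous pass of the blocking query loop, keyed by kind/name.
type watchSet struct {
    lastSeen map[string]uint64
}

// checkChanged compares the entries seen on this wakeup against the previous
// pass. If nothing the query depends on has changed, it returns the sentinel
// so the caller can suppress the reply and keep blocking.
func (w *watchSet) checkChanged(current map[string]uint64) error {
    if len(current) == len(w.lastSeen) {
        same := true
        for key, idx := range current {
            if w.lastSeen[key] != idx {
                same = false
                break
            }
        }
        if same {
            return errSameConfigEntries // spurious wakeup: nothing to report
        }
    }
    w.lastSeen = current
    return nil
}
```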

### Extra Endpoints

Since this pattern is pretty reusable, other key config-entry-adjacent endpoints used by `agent/proxycfg` also were updated:

- `ConfigEntry.List`
- `Internal.IntentionUpstreams` (tproxy)
2022-02-25 15:46:34 -06:00
freddygv feaebde1f1 Remove useInDatacenter from disco chain requests
useInDatacenter was used to determine whether the mesh gateway mode of
the upstream should be returned in the discovery chain target. This
commit makes it so that the mesh gateway mode is returned every time,
and it is up to the caller to decide whether mesh gateways should be
watched or used.
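
A hedged sketch of the caller-side decision this enables; the type and constants mirror the modes carried on each target, while the helper is illustrative:

```go
// MeshGatewayMode mirrors the mode now carried on every discovery chain
// target; the string values follow Consul's "local" / "remote" / default ("")
// modes.
type MeshGatewayMode string

const (
    MeshGatewayModeDefault MeshGatewayMode = ""
    MeshGatewayModeLocal   MeshGatewayMode = "local"
    MeshGatewayModeRemote  MeshGatewayMode = "remote"
)

// useMeshGateway is an illustrative caller-side check: since the mode is
// always returned on the target, the watcher decides whether to reach the
// upstream through a mesh gateway or connect to its instances directly.
func useMeshGateway(mode MeshGatewayMode) bool {
    return mode == MeshGatewayModeLocal || mode == MeshGatewayModeRemote
}
```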
2021-10-26 23:35:21 -06:00
Dhia Ayachi f766b6dff7
oss portion of ent #1069 (#10883) 2021-08-20 12:57:45 -04:00
Daniel Nephin 608b291565 acl: use authz consistently as the variable name for an acl.Authorizer
Follow up to https://github.com/hashicorp/consul/pull/10737#discussion_r682147950

Renames all variables for acl.Authorizer to use `authz`. Previously some
places used `rule`, which I believe was an old name carried over from the
legacy ACL system.

A couple of places also used `authorizer`.

This commit also removes another couple of authorizer nil checks that
are no longer necessary.
2021-08-17 12:14:10 -04:00
Daniel Nephin 9b41e7287f acl: use acl.ManageAll when ACLs are disabled
Instead of returning nil and checking for nilness

Removes a bunch of nil checks and fixes one test failure.
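
A minimal sketch of the change, assuming a resolver-style wrapper; acl.ManageAll() is the allow-all authorizer named in the title, while the resolver type itself is illustrative:

```go
package agent

import "github.com/hashicorp/consul/acl"

// Resolver is an illustrative stand-in for the server-side token resolver.
type Resolver struct {
    ACLsEnabled bool
    resolve     func(token string) (acl.Authorizer, error)
}

// Authorizer sketches the idea: when ACLs are disabled, hand back the
// allow-all acl.ManageAll() authorizer rather than nil, so callers never
// need a nil check before calling authorization methods.
func (r *Resolver) Authorizer(token string) (acl.Authorizer, error) {
    if !r.ACLsEnabled {
        return acl.ManageAll(), nil
    }
    return r.resolve(token)
}
```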
2021-07-30 12:58:24 -04:00
Daniel Nephin c38f4869ad rpc: remove unnecessary arg to ForwardRPC 2021-05-06 13:30:07 -04:00
freddygv 3653045cb0 Add method for downstreams from disco chain 2020-10-05 10:24:50 -06:00
Matt Keeler a77ed471c8
Rename (*Server).forward to (*Server).ForwardRPC
Also gets rid of the preexisting shim in server.go that existed only to expose this name by calling the unexported one.
2020-07-08 11:05:44 -04:00
Matt Keeler 485a0a65ea
Updates to Config Entries and Connect for Namespaces (#7116) 2020-01-24 10:04:58 -05:00
Matt Keeler f9a43a1e2d
ACL Authorizer overhaul (#6620)
* ACL Authorizer overhaul

To account for upcoming features, every Authorization function can now take an extra *acl.EnterpriseAuthorizerContext. These are unused in OSS and will always be nil.

Additionally, the acl package has received some thorough refactoring to enable all of the extra Consul Enterprise-specific authorizations, including moving Sentinel enforcement into the stubbed structs. The Authorizer funcs now return an acl.EnforcementDecision instead of a boolean. This improves the overall interface: multiple Authorizers are now easily chainable, since each indicates whether it made an authoritative decision or should defer to some other default. A ChainedAuthorizer was added to handle this Authorizer enforcement chain; it will never itself return a non-authoritative decision.
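
A minimal sketch of that chaining behavior, with simplified types standing in for the acl package's real ones:

```go
// EnforcementDecision mirrors the three-way result described above: a
// non-authoritative Default that defers to later authorizers, or an
// authoritative Allow/Deny.
type EnforcementDecision int

const (
    Default EnforcementDecision = iota
    Allow
    Deny
)

// Authorizer is trimmed to a single method for illustration.
type Authorizer interface {
    NodeRead(name string) EnforcementDecision
}

// ChainedAuthorizer consults each Authorizer in order and returns the first
// authoritative decision; it never returns Default itself.
type ChainedAuthorizer struct {
    chain []Authorizer
}

func (c *ChainedAuthorizer) NodeRead(name string) EnforcementDecision {
    for _, a := range c.chain {
        if d := a.NodeRead(name); d != Default {
            return d
        }
    }
    return Deny // no authorizer was authoritative, so fall back to deny
}
```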

* Include stub for extra enterprise rules in the global management policy

* Allow for an upgrade of the global-management policy
2019-10-15 16:58:50 -04:00
R.B. Boyer 0675e0606e
connect: generate the full SNI names for discovery targets in the compiler rather than in the xds package (#6340) 2019-08-19 13:03:03 -05:00
R.B. Boyer 64fc002e03
connect: fix failover through a mesh gateway to a remote datacenter (#6259)
Failover is pushed entirely down to the data plane by creating envoy
clusters and putting each successive destination in a different load
assignment priority band. For example, this shows that normally requests
go to 1.2.3.4:8080 but when that fails they go to 6.7.8.9:8080:

- name: foo
  load_assignment:
    cluster_name: foo
    policy:
      overprovisioning_factor: 100000
    endpoints:
    - priority: 0
      lb_endpoints:
      - endpoint:
          address:
            socket_address:
              address: 1.2.3.4
              port_value: 8080
    - priority: 1
      lb_endpoints:
      - endpoint:
          address:
            socket_address:
              address: 6.7.8.9
              port_value: 8080

Mesh gateways route requests based solely on the SNI header tacked onto
the TLS layer. Envoy currently only lets you configure the outbound SNI
header at the cluster layer.

If you try to failover through a mesh gateway you ideally would
configure the SNI value per endpoint, but that's not possible in envoy
today.

This PR introduces a simpler way around the problem for now:

1. We identify any target of failover that will use mesh gateway mode local or
   remote and then further isolate any resolver node in the compiled discovery
   chain that has a failover destination set to one of those targets.

2. For each of these resolvers we will perform a small measurement of
   comparative health of the endpoints that come back from the health API for the
   primary target and its serial failover targets. We walk the list of targets
   in order and if any endpoint is healthy we return that target; otherwise we
   move on to the next target (see the sketch after this list).

3. The CDS and EDS endpoints both perform the measurements in (2) for the
   affected resolver nodes.

4. For CDS this measurement selects which TLS SNI field to use for the cluster
   (note the cluster is always going to be named for the primary target)

5. For EDS this measurement selects which set of endpoints will populate the
   cluster. Priority tiered failover is ignored.
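
A minimal sketch of the selection described in step (2), with illustrative types standing in for the real health measurement:

```go
// targetHealth pairs a failover target with the number of passing instances
// the health API reported for it; both names are illustrative.
type targetHealth struct {
    TargetID string
    Passing  int
}

// pickTarget walks the primary target and its serial failover targets in
// order and returns the first one with any passing instances; if none have
// passing instances it falls back to the primary target.
func pickTarget(targets []targetHealth) string {
    for _, t := range targets {
        if t.Passing > 0 {
            return t.TargetID
        }
    }
    return targets[0].TargetID
}
```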

One of the big downsides to this approach to failover is that the failover
detection and correction is going to be controlled by consul rather than
deferring that entirely to the data plane as with the prior version. This also
means that we are bound to only failover using official health signals and
cannot make use of data plane signals like outlier detection to affect
failover.

In this specific scenario the lack of data plane signals is ok because the
effectiveness is already muted by the fact that the ultimate destination
endpoints will have their data plane signals scrambled when they pass through
the mesh gateway wrapper anyway, so we're not losing much.

Another related fix is that we now use the endpoint health from the
underlying service, not the health of the gateway (regardless of
failover mode).
2019-08-05 13:30:35 -05:00
R.B. Boyer 0165e93517
connect: expose an API endpoint to compile the discovery chain (#6248)
In addition to exposing compilation over the API, this cleans up the structures that would be exchanged so that they are easier to support and understand.
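
A hedged example of reading a compiled chain through the new endpoint; the /v1/discovery-chain/<service> path follows Consul's HTTP API, while the agent address and service name are illustrative:

```go
package main

import (
    "encoding/json"
    "fmt"
    "log"
    "net/http"
)

func main() {
    // Ask the local agent to compile the discovery chain for service "web";
    // host, port, and service name are illustrative.
    resp, err := http.Get("http://127.0.0.1:8500/v1/discovery-chain/web")
    if err != nil {
        log.Fatal(err)
    }
    defer resp.Body.Close()

    var out map[string]interface{}
    if err := json.NewDecoder(resp.Body).Decode(&out); err != nil {
        log.Fatal(err)
    }
    // The compiled chain includes the protocol, the start node, and the
    // resolved targets.
    fmt.Println(out["Chain"])
}
```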

Also removed the ability to configure the envoy OverprovisioningFactor.
2019-08-02 15:34:54 -05:00