* First pass at a helper for bulk changes
* Convert ACLRead and ACLWrite to the new form
* AgentRead and AgentWrite
* Fix EventWrite
* KeyRead, KeyWrite, KeyList
* KeyRing
* NodeRead, NodeWrite
* OperatorRead and OperatorWrite
* PreparedQuery
* Intention (partial)
* Fix ServiceRead, ServiceWrite, etc.
* Add error check for ServiceRead
* Fix SessionRead/SessionWrite
* Fix up snapshot ACL
* Error fixups for txn
* Add changelog
* Fix up review comments
Signed-off-by: Mark Anderson <manderson@hashicorp.com>
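The commits above describe converting each endpoint's ACL check from a boolean decision comparison to an error-returning helper. The following self-contained sketch illustrates that before/after pattern; the type and method names here (`Authorizer`, `ACLReadAllowed`, `ErrPermissionDenied`) are simplified stand-ins for illustration, not the real `acl` package API.

```go
package main

import (
	"errors"
	"fmt"
)

// Minimal stand-ins for the ACL types involved; illustrative only.
type EnforcementDecision int

const (
	Deny EnforcementDecision = iota
	Allow
)

var ErrPermissionDenied = errors.New("Permission denied")

type Authorizer struct{ allowACLRead bool }

func (a *Authorizer) ACLRead() EnforcementDecision {
	if a.allowACLRead {
		return Allow
	}
	return Deny
}

// Old form: every endpoint compares the decision and builds its own error.
func endpointOldForm(authz *Authorizer) error {
	if authz.ACLRead() != Allow {
		return ErrPermissionDenied
	}
	return nil
}

// New form: a helper performs the comparison and returns a descriptive
// error, so endpoints reduce to a single error check.
func (a *Authorizer) ACLReadAllowed() error {
	if a.ACLRead() != Allow {
		return fmt.Errorf("%w: provided token lacks permission 'acl:read'", ErrPermissionDenied)
	}
	return nil
}

func endpointNewForm(authz *Authorizer) error {
	if err := authz.ACLReadAllowed(); err != nil {
		return err
	}
	return nil
}

func main() {
	authz := &Authorizer{allowACLRead: false}
	fmt.Println(endpointOldForm(authz)) // Permission denied
	fmt.Println(endpointNewForm(authz)) // Permission denied: provided token lacks permission 'acl:read'
}
```

The benefit is that the denied permission can be reported in the error itself, instead of every call site returning a bare permission-denied error.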
Minor fix for behavior in #12362
IsDefault sometimes returns true even when a proxy-defaults or service-defaults config entry was consulted. This PR fixes that.
Extending the mechanism introduced in #12110, we can specially handle the three main Consul RPC endpoints in Connect that react to many config entries in a single blocking query:
- `DiscoveryChain.Get`
- `ConfigEntry.ResolveServiceConfig`
- `Intentions.Match`
All of these internally watch many config entries, and at least one of those entries will likely not be found in any given query. Because these endpoints are blends of multiple reads, the exact solution from #12110 doesn't align perfectly, but we can tweak the approach slightly and regain the utility of that mechanism.
### No Config Entries Found
In this case, despite looking for many config entries, none may be found at all. Unlike #12110, in this scenario we do not return an empty reply to the caller; instead we synthesize a struct from default values and return that. This can be handled nearly identically to #12110, with the first one or two replies being non-empty payloads, followed by the standard spurious-wakeup suppression mechanism from #12110.
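As a rough illustration of that flow, here is a self-contained sketch of a blocking query that replies with synthesized defaults the first time nothing is found and then suppresses wakeups until real data appears. The names (`errNotFound`, `blockingQuery`, the `minIndex == 0` heuristic for "first call") are assumptions made for the sketch, not Consul's actual internals.

```go
package main

import (
	"errors"
	"fmt"
)

// errNotFound is a stand-in sentinel returned when none of the watched
// config entries exist.
var errNotFound = errors.New("no config entries found")

type result struct {
	Index   uint64
	Payload string
}

// fetchFn represents one execution of the query body against the state store.
type fetchFn func() (result, error)

// waitFn represents sleeping until some watched table is modified.
type waitFn func()

// blockingQuery replies with synthesized defaults the first time nothing is
// found, but on later wakeups it only replies once the fetch yields data
// newer than minIndex, suppressing spurious wakeups.
func blockingQuery(minIndex uint64, fetch fetchFn, wait waitFn) result {
	for {
		res, err := fetch()
		switch {
		case err == nil && res.Index > minIndex:
			// Genuine new data: reply to the caller.
			return res
		case errors.Is(err, errNotFound) && minIndex == 0:
			// First query from this caller and nothing exists yet: reply
			// once with a struct built from default values rather than an
			// empty payload.
			return result{Index: res.Index, Payload: "synthesized defaults"}
		}
		// Still nothing found (or nothing newer than minIndex): treat the
		// wakeup as spurious and keep blocking.
		wait()
	}
}

func main() {
	// First call: nothing exists, so the caller gets defaults immediately.
	fmt.Println(blockingQuery(0,
		func() (result, error) { return result{Index: 7}, errNotFound },
		func() {}))

	// Later call: two spurious wakeups are suppressed before real data at a
	// newer index finally terminates the query.
	calls := 0
	fetch := func() (result, error) {
		calls++
		if calls < 3 {
			return result{Index: 7}, errNotFound
		}
		return result{Index: 9, Payload: "service-defaults for web"}, nil
	}
	fmt.Println(blockingQuery(7, fetch, func() {}))
}
```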
### No Change Since Last Wakeup
Once a blocking query loop on the server has completed and slept at least once, there is a further optimization we can make: detect whether the config entries that were present at specific versions during the prior execution of the loop are identical for the execution we just woke up for. In that scenario we can return a slightly different internal sentinel error and handle it externally much like #12110.
This means that even if 20 goroutines handling discovery-chain read RPCs wake up due to the creation of an unrelated config entry, the only ones that will terminate and reply with a blob of data are those that genuinely have new data to report.
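A minimal sketch of that "unchanged since last wakeup" check follows; the sentinel name (`errSameAsLastTime`) and the name-to-index map are illustrative assumptions, not the actual internal error or data structure.

```go
package main

import (
	"errors"
	"fmt"
)

// errSameAsLastTime stands in for the internal sentinel described above: it
// signals that every watched config entry is unchanged since the previous
// pass of the blocking loop.
var errSameAsLastTime = errors.New("config entries unchanged since last wakeup")

// entryVersions maps a config entry name to the raft index it was last
// modified at (zero meaning "not found").
type entryVersions map[string]uint64

func unchanged(prev, cur entryVersions) bool {
	if prev == nil || len(prev) != len(cur) {
		return false
	}
	for name, idx := range cur {
		if prev[name] != idx {
			return false
		}
	}
	return true
}

// checkForUpdates is what one pass of the blocking loop would call after
// re-reading the watched entries: if nothing relevant moved, it returns the
// sentinel so the handler goes back to sleep instead of replying.
func checkForUpdates(prev, cur entryVersions) (entryVersions, error) {
	if unchanged(prev, cur) {
		return cur, errSameAsLastTime
	}
	return cur, nil
}

func main() {
	prev := entryVersions{"proxy-defaults/global": 4, "service-resolver/web": 9}

	// An unrelated config entry was created, so the watched versions are
	// identical: suppress the wakeup.
	cur := entryVersions{"proxy-defaults/global": 4, "service-resolver/web": 9}
	if _, err := checkForUpdates(prev, cur); errors.Is(err, errSameAsLastTime) {
		fmt.Println("suppressed:", err)
	}

	// A watched entry actually changed: reply with fresh data.
	cur["service-resolver/web"] = 12
	if _, err := checkForUpdates(prev, cur); err == nil {
		fmt.Println("replying with new data")
	}
}
```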
### Extra Endpoints
Since this pattern is quite reusable, other key config-entry-adjacent endpoints used by `agent/proxycfg` were also updated:
- `ConfigEntry.List`
- `Internal.IntentionUpstreams` (tproxy)
`useInDatacenter` was used to determine whether the mesh gateway mode of the upstream should be returned in the discovery chain target. With this commit the mesh gateway mode is returned every time, and it is up to the caller to decide whether mesh gateways should be watched or used.
Follow up to https://github.com/hashicorp/consul/pull/10737#discussion_r682147950
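To make that division of responsibility concrete, here is a small self-contained sketch in which the target always carries a mesh gateway mode and the caller decides whether to watch gateways. The type and field names (`DiscoveryTarget`, `MeshGateway`, `needsGatewayWatch`) are simplified stand-ins, not the exact `structs` package definitions.

```go
package main

import "fmt"

// Illustrative stand-ins for the discovery chain types.
type MeshGatewayMode string

const (
	MeshGatewayModeNone   MeshGatewayMode = "none"
	MeshGatewayModeLocal  MeshGatewayMode = "local"
	MeshGatewayModeRemote MeshGatewayMode = "remote"
)

type DiscoveryTarget struct {
	Service    string
	Datacenter string
	// MeshGateway is now populated on every target, regardless of whether
	// the target is in the local datacenter.
	MeshGateway MeshGatewayMode
}

// needsGatewayWatch is the kind of decision that moves to the caller: mesh
// gateways are only watched for remote targets with a gateway mode set.
func needsGatewayWatch(t DiscoveryTarget, localDC string) bool {
	if t.Datacenter == localDC {
		return false
	}
	return t.MeshGateway == MeshGatewayModeLocal || t.MeshGateway == MeshGatewayModeRemote
}

func main() {
	targets := []DiscoveryTarget{
		{Service: "web", Datacenter: "dc1", MeshGateway: MeshGatewayModeLocal},
		{Service: "web", Datacenter: "dc2", MeshGateway: MeshGatewayModeLocal},
		{Service: "api", Datacenter: "dc2", MeshGateway: MeshGatewayModeNone},
	}
	for _, t := range targets {
		fmt.Printf("%s in %s: watch gateways = %v\n", t.Service, t.Datacenter, needsGatewayWatch(t, "dc1"))
	}
}
```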
Renames all variables for acl.Authorizer to use `authz`. Previously some places used `rule`, which I believe was an old name carried over from the legacy ACL system. A couple of places also used `authorizer`.
This commit also removes a couple more authorizer nil checks that are no longer necessary.
* ACL Authorizer overhaul
To account for upcoming features, every authorization function can now take an extra `*acl.EnterpriseAuthorizerContext`. These are unused in OSS and will always be nil.
Additionally, the acl package has received some thorough refactoring to enable all of the extra Consul Enterprise-specific authorizations, including moving sentinel enforcement into the stubbed structs. The Authorizer funcs now return an `acl.EnforcementDecision` instead of a boolean. This improves the overall interface, as it makes multiple Authorizers easily chainable: each one now indicates whether it made an authoritative decision or should defer to some other default. A ChainedAuthorizer was added to handle this Authorizer enforcement chain and will never itself return a non-authoritative decision (see the sketch following this list).
* Include stub for extra enterprise rules in the global management policy
* Allow for an upgrade of the global-management policy
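Here is a self-contained sketch of the chaining idea, reduced to a single check; the types (`EnforcementDecision`, `ChainedAuthorizer`) mirror the names above but are simplified stand-ins rather than the real acl package implementation.

```go
package main

import "fmt"

// Simplified stand-ins for the acl package types discussed above.
type EnforcementDecision int

const (
	// Default means "no authoritative answer, ask the next authorizer".
	Default EnforcementDecision = iota
	Allow
	Deny
)

// Authorizer is reduced to a single check for the sake of the sketch.
type Authorizer interface {
	NodeRead(name string) EnforcementDecision
}

// ChainedAuthorizer walks its members in order and returns the first
// authoritative (non-Default) decision; the chain itself never returns
// Default.
type ChainedAuthorizer struct {
	chain []Authorizer
}

func (c *ChainedAuthorizer) NodeRead(name string) EnforcementDecision {
	for _, authz := range c.chain {
		if d := authz.NodeRead(name); d != Default {
			return d
		}
	}
	return Deny // fall back to an authoritative denial
}

// staticAuthorizer answers the same decision for every node.
type staticAuthorizer struct{ decision EnforcementDecision }

func (s staticAuthorizer) NodeRead(string) EnforcementDecision { return s.decision }

func main() {
	authz := &ChainedAuthorizer{chain: []Authorizer{
		staticAuthorizer{Default}, // e.g. a policy with no node rules
		staticAuthorizer{Allow},   // e.g. a policy that grants node "read"
	}}
	fmt.Println(authz.NodeRead("web-1") == Allow) // true
}
```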
Failover is pushed entirely down to the data plane by creating Envoy clusters and putting each successive destination in a different load assignment priority band. For example, this shows that requests normally go to 1.2.3.4:8080, but when that fails they go to 6.7.8.9:8080:
```yaml
- name: foo
  load_assignment:
    cluster_name: foo
    policy:
      overprovisioning_factor: 100000
    endpoints:
    - priority: 0
      lb_endpoints:
      - endpoint:
          address:
            socket_address:
              address: 1.2.3.4
              port_value: 8080
    - priority: 1
      lb_endpoints:
      - endpoint:
          address:
            socket_address:
              address: 6.7.8.9
              port_value: 8080
```
Mesh gateways route requests based solely on the SNI value tacked onto the TLS layer. Envoy currently only lets you configure the outbound SNI at the cluster layer.
If you try to fail over through a mesh gateway, you would ideally configure the SNI value per endpoint, but that's not possible in Envoy today.
This PR introduces a simpler way around the problem for now:
1. We identify any failover target that will use mesh gateway mode `local` or `remote`, and then further isolate any resolver node in the compiled discovery chain that has a failover destination set to one of those targets.
2. For each of these resolvers we perform a small measurement of the comparative health of the endpoints that come back from the health API for the primary target and the serial failover targets. We walk the list of targets in order and return the first target with a healthy endpoint; otherwise we move on to the next target (see the sketch after this list).
3. The CDS and EDS endpoints both perform the measurement in (2) for the affected resolver nodes.
4. For CDS this measurement selects which TLS SNI value to use for the cluster (note that the cluster is always named for the primary target).
5. For EDS this measurement selects which set of endpoints will populate the cluster. Priority-tiered failover is ignored.
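The health comparison in step (2) boils down to a walk over the target list. Below is a self-contained sketch of that selection; the types (`CheckServiceNode` reduced to an address and a health flag, `pickFailoverTarget`) are simplified stand-ins for illustration, not the real structs.

```go
package main

import "fmt"

// CheckServiceNode is reduced here to an address and a health flag.
type CheckServiceNode struct {
	Address string
	Healthy bool
}

// targetHealth pairs a discovery chain target with its health API results.
type targetHealth struct {
	TargetID  string
	Endpoints []CheckServiceNode
}

func hasHealthy(nodes []CheckServiceNode) bool {
	for _, n := range nodes {
		if n.Healthy {
			return true
		}
	}
	return false
}

// pickFailoverTarget walks the primary target and its serial failover
// targets in order and returns the first one with a healthy endpoint; if
// none qualify it falls back to the primary target (first in the list).
func pickFailoverTarget(targets []targetHealth) string {
	for _, t := range targets {
		if hasHealthy(t.Endpoints) {
			return t.TargetID
		}
	}
	return targets[0].TargetID
}

func main() {
	targets := []targetHealth{
		{TargetID: "web.default.dc1", Endpoints: []CheckServiceNode{{Address: "1.2.3.4", Healthy: false}}},
		{TargetID: "web.default.dc2", Endpoints: []CheckServiceNode{{Address: "6.7.8.9", Healthy: true}}},
	}
	// The primary target is unhealthy, so CDS/EDS would use the dc2
	// target's SNI and endpoints instead.
	fmt.Println(pickFailoverTarget(targets)) // web.default.dc2
}
```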
One of the big downsides to this approach to failover is that failover detection and correction are controlled by Consul rather than deferred entirely to the data plane, as in the prior version. This also means that we are bound to fail over only on official health signals and cannot make use of data plane signals like outlier detection to affect failover.
In this specific scenario the lack of data plane signals is acceptable because their effectiveness is already muted: the ultimate destination endpoints have their data plane signals scrambled when they pass through the mesh gateway wrapper anyway, so we're not losing much.
Another related fix is that we now use the endpoint health from the
underlying service, not the health of the gateway (regardless of
failover mode).
In addition to exposing compilation over the API, this cleans up the exchanged structures to be easier to support and understand.
It also removes the ability to configure the Envoy OverprovisioningFactor.