* agent: configure server lastseen timestamp
Signed-off-by: Dan Bond <danbond@protonmail.com>
* use correct config
Signed-off-by: Dan Bond <danbond@protonmail.com>
* add comments
Signed-off-by: Dan Bond <danbond@protonmail.com>
* use default age in test golden data
Signed-off-by: Dan Bond <danbond@protonmail.com>
* add changelog
Signed-off-by: Dan Bond <danbond@protonmail.com>
* fix runtime test
Signed-off-by: Dan Bond <danbond@protonmail.com>
* agent: add server_metadata
Signed-off-by: Dan Bond <danbond@protonmail.com>
* update comments
Signed-off-by: Dan Bond <danbond@protonmail.com>
* correctly check if metadata file does not exist
Signed-off-by: Dan Bond <danbond@protonmail.com>
* follow instructions for adding new config
Signed-off-by: Dan Bond <danbond@protonmail.com>
* add comments
Signed-off-by: Dan Bond <danbond@protonmail.com>
* update comments
Signed-off-by: Dan Bond <danbond@protonmail.com>
* Update agent/agent.go
Co-authored-by: Dan Upton <daniel@floppy.co>
* agent/config: add validation for duration with min
Signed-off-by: Dan Bond <danbond@protonmail.com>
* docs: add new server_rejoin_age_max config definition
Signed-off-by: Dan Bond <danbond@protonmail.com>
* agent: add unit test for checking server last seen
Signed-off-by: Dan Bond <danbond@protonmail.com>
* agent: log continually for 60s before erroring
Signed-off-by: Dan Bond <danbond@protonmail.com>
* pr comments
Signed-off-by: Dan Bond <danbond@protonmail.com>
* remove unneeded todo
* agent: fix error message
Signed-off-by: Dan Bond <danbond@protonmail.com>
---------
Signed-off-by: Dan Bond <danbond@protonmail.com>
Co-authored-by: Dan Upton <daniel@floppy.co>
* Add v1/internal/service-virtual-ip for manually setting service VIPs
* Attach service virtual IP info to compiled discovery chain
* Separate auto-assigned and manual VIPs in response
The grpc resolver implementation is fed from changes to the
router.Router. Within the router there is a map of various areas storing
the addressing information for servers in those areas. All map entries
are of the WAN variety except a single special entry for the LAN.
Addressing information in the LAN "area" consists of local addresses intended
for use when making client-to-server or server-to-server requests.
The client agent correctly updates this LAN area when receiving LAN serf
events, so by extension the grpc resolver works fine in that scenario.
The server agent only initially populates a single entry in the LAN area
(for itself) on startup, and then never mutates that area map again.
For normal RPCs a different structure is used for LAN routing.
Additionally, when selecting a server to contact in the local datacenter,
the grpc resolver will randomly select addresses from either the LAN or WAN
entries in the map.
Unfortunately this means that the grpc resolver stack as it exists on
server agents is either broken, or only functions accidentally because
servers dial each other over the WAN-accessible address. If the operator
disables the serf WAN port completely, this incidental functioning would
likely break.
This PR enforces that local requests for servers (both for stale reads
and leader-forwarded requests) exclusively use the LAN "area" information
and also fixes it so that servers keep that area up to date in the
router.
A test for the grpc resolver logic was added, as well as a higher level
full-stack test to ensure the externally perceived bug does not return.
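A minimal sketch of the intended behavior, using hypothetical types rather than Consul's real router internals: the server agent keeps the LAN "area" entries current from LAN serf member events, and local-datacenter server selection draws only from those LAN addresses.

```go
package main

import (
	"fmt"
	"sync"
)

// serverAddrs is a hypothetical stand-in for the router's per-area address map.
type serverAddrs struct {
	mu  sync.RWMutex
	lan map[string]string // server node name -> LAN ip:port
}

// updateFromSerfLAN mirrors the fix conceptually: every LAN serf member event
// (join, update, leave) mutates the LAN "area" entries so the map never goes
// stale on server agents.
func (s *serverAddrs) updateFromSerfLAN(node, addr string, alive bool) {
	s.mu.Lock()
	defer s.mu.Unlock()
	if alive {
		s.lan[node] = addr
	} else {
		delete(s.lan, node)
	}
}

// localServerAddrs is what the grpc resolver should consume for in-DC
// requests: exclusively LAN addressing, never the WAN entries.
func (s *serverAddrs) localServerAddrs() []string {
	s.mu.RLock()
	defer s.mu.RUnlock()
	out := make([]string, 0, len(s.lan))
	for _, addr := range s.lan {
		out = append(out, addr)
	}
	return out
}

func main() {
	s := &serverAddrs{lan: map[string]string{}}
	s.updateFromSerfLAN("server-1", "10.0.0.1:8300", true)
	s.updateFromSerfLAN("server-2", "10.0.0.2:8300", true)
	fmt.Println(s.localServerAddrs())
}
```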
* snapshot: some improvements to the snapshot process
Co-authored-by: trujillo-adam <47586768+trujillo-adam@users.noreply.github.com>
Co-authored-by: Chris S. Kim <ckim@hashicorp.com>
Fix multiple issues related to proxycfg health queries.
1. The datacenter was not being provided to a proxycfg query, which resulted in
bypassing agentless query optimizations and using the normal API instead.
2. The health rpc endpoint would return a zero index when insufficient ACLs were
detected. This would result in the agent cache performing an infinite loop of
queries in rapid succession without backoff.
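As a hedged illustration of the second fix (the function below is made up, not Consul's actual code): a blocking-query consumer treats a returned index of zero as unusable and re-issues the query immediately, so guaranteeing a non-zero index is what breaks the tight retry loop.

```go
package main

import "fmt"

// clampQueryIndex illustrates the guard: an RPC endpoint that returns index 0
// on an ACL-filtered result causes the agent cache to retry in a tight loop,
// because index 0 cannot be used as a blocking-query watermark. Returning at
// least 1 lets the next blocking query actually wait.
func clampQueryIndex(idx uint64) uint64 {
	if idx == 0 {
		return 1
	}
	return idx
}

func main() {
	fmt.Println(clampQueryIndex(0)) // 1: safe to block on
	fmt.Println(clampQueryIndex(42))
}
```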
This adds filtering for service-defaults: `consul config list -filter 'MutualTLSMode == "permissive"'`.
It also adds CLI warnings when the CLI writes a config entry and sees that either service-defaults or proxy-defaults contains MutualTLSMode=permissive, or that the mesh config entry contains AllowEnablingPermissiveMutualTLSMode=true.
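As a rough illustration of what that filter expression selects (plain Go standing in for the real `-filter` evaluation; the struct is hypothetical):

```go
package main

import "fmt"

type serviceDefaults struct {
	Name          string
	MutualTLSMode string
}

// filterPermissive mimics what
// `consul config list -filter 'MutualTLSMode == "permissive"'`
// is meant to surface from the stored service-defaults entries.
func filterPermissive(entries []serviceDefaults) []serviceDefaults {
	var out []serviceDefaults
	for _, e := range entries {
		if e.MutualTLSMode == "permissive" {
			out = append(out, e)
		}
	}
	return out
}

func main() {
	entries := []serviceDefaults{
		{Name: "web", MutualTLSMode: "permissive"},
		{Name: "api", MutualTLSMode: "strict"},
	}
	fmt.Println(filterPermissive(entries)) // only "web" is returned
}
```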
* Persist HCP management token from server config
We want to move away from injecting an initial management token into
Consul clusters linked to HCP. The reasoning is that by using a separate
class of token we can have more flexibility in terms of allowing HCP's
token to co-exist with the user's management token.
Down the line we can also more easily adjust the permissions attached to
HCP's token to limit its scope.
With these changes, the cloud management token is like the initial
management token in that it has the same global management policy, and
if it is created it effectively bootstraps the ACL system.
* Update SDK and mock HCP server
The HCP management token will now be sent in a special field rather than
as Consul's "initial management" token configuration.
This commit also updates the mock HCP server to more accurately reflect
the behavior of the CCM backend.
* Refactor HCP bootstrapping logic and add tests
We want to allow users to link Consul clusters that already exist to
HCP. Existing clusters need care when bootstrapped by HCP, since we do
not want to do things like change ACL/TLS settings for a running
cluster.
Additional changes:
* Deconstruct MaybeBootstrap so that it can be tested. The HCP Go SDK
requires HTTPS to fetch a token from the Auth URL, even if the backend
server is mocked. By pulling the hcp.Client creation out we can modify
its TLS configuration in tests while keeping the secure behavior in
production code.
* Add light validation for data received/loaded.
* Sanitize initial_management token from received config, since HCP will
only ever use the CloudConfig.ManagementToken.
* Add changelog entry
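A hedged sketch of the sanitization step, with hypothetical field names, since HCP is only ever expected to use the cloud management token:

```go
package main

import "fmt"

// receivedBootstrapConfig is a hypothetical stand-in for the configuration
// payload received from HCP during bootstrap.
type receivedBootstrapConfig struct {
	InitialManagementToken string
	CloudManagementToken   string
}

// sanitize drops the initial_management token from the received config so the
// only credential carried forward is the cloud management token.
func sanitize(c receivedBootstrapConfig) receivedBootstrapConfig {
	c.InitialManagementToken = ""
	return c
}

func main() {
	cfg := receivedBootstrapConfig{
		InitialManagementToken: "should-be-dropped",
		CloudManagementToken:   "hcp-managed-token",
	}
	fmt.Printf("%+v\n", sanitize(cfg))
}
```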
* Move status condition for invalid certificate to reference the listener
that is using the certificate
* Fix where we set the condition status for listeners and certificate
refs, added tests
* Add changelog
* Add MaxEjectionPercent to config entry
* Add BaseEjectionTime to config entry
* Add MaxEjectionPercent and BaseEjectionTime to protobufs
* Add MaxEjectionPercent and BaseEjectionTime to api
* Fix integration test breakage
* Verify MaxEjectionPercent and BaseEjectionTime in integration test upstream configs
* Website docs for MaxEjectionPercent and BaseEjectionTime
* Add `make docs` to browse docs at http://localhost:3000
* Changelog entry
* so that is the difference between consul-docker and dev-docker
* blah
* update proto funcs
* update proto
---------
Co-authored-by: Maliz <maliheh.monshizadeh@hashicorp.com>
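For illustration, a sketch of the two new outlier-detection knobs; the struct below is a local stand-in rather than the exact api package type, and the values are arbitrary:

```go
package main

import (
	"fmt"
	"time"
)

// passiveHealthCheck is a local stand-in for the passive health check block in
// a config entry; the field names mirror the commit titles above.
type passiveHealthCheck struct {
	// MaxEjectionPercent caps the share of upstream hosts that outlier
	// detection may eject at once.
	MaxEjectionPercent uint32
	// BaseEjectionTime is the base duration a host stays ejected; the actual
	// ejection time typically grows with repeated ejections of the same host.
	BaseEjectionTime time.Duration
}

func main() {
	phc := passiveHealthCheck{
		MaxEjectionPercent: 50,
		BaseEjectionTime:   30 * time.Second,
	}
	fmt.Printf("%+v\n", phc)
}
```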
* normalize status conditions for gateways and routes
* Added tests for checking condition status and panic conditions for
validating combinations, added dummy code for fsm store
* get rid of unneeded gateway condition generator struct
* Remove unused file
* run go mod tidy
* Update tests, add conflicted gateway status
* put back removed status for test
* Fix linting violation, remove custom conflicted status
* Update fsm commands oss
* Fix incorrect combination of type/condition/status
* cleaning up from PR review
* Change "invalidCertificate" to be of accepted status
* Move status condition enums into api package
* Update gateways controller and generated code
* Update conditions in fsm oss tests
* run go mod tidy on consul-container module to fix linting
* Fix type for gateway endpoint test
* go mod tidy from changes to api
* go mod tidy on troubleshoot
* Fix route conflicted reason
* fix route conflict reason rename
* Fix text for gateway conflicted status
* Add valid certificate ref condition setting
* Revert change to resolved refs to be handled in future PR
* added method for converting SamenessGroupConfigEntry
- added new method `ToQueryFailoverTargets` for converting a SamenessGroupConfigEntry's members to a list of QueryFailoverTargets
- renamed `ToFailoverTargets` to `ToServiceResolverFailoverTargets` to distinguish it from `ToQueryFailoverTargets`
* Added SamenessGroup to PreparedQuery
- exposed Service.Partition to API when defining a prepared query
- added a method for determining if a QueryFailoverOptions is empty
- This will be useful for validation
- added unit tests
* added method for retrieving a SamenessGroup to state store
* added logic for using PQ with SamenessGroup
- added branching path for SamenessGroup handling in execute. It will be handled separately from the normal PQ case
- added a new interface so that the `GetSamenessGroupFailoverTargets` can be properly tested
- separated the execute logic into a `targetSelector` function so that it can be used for both failover and sameness group PQs
- split OSS only methods into new PQ OSS files
- added validation that `samenessGroup` is an enterprise only feature
* added documentation for PQ SamenessGroup
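A hedged sketch of the branching added in execute (types and names are illustrative, not the real prepared-query structs): a query that names a sameness group derives its failover targets from the group's ordered members; otherwise the explicit failover targets are used as before.

```go
package main

import "fmt"

type failoverTarget struct {
	Peer      string
	Partition string
}

// targetsForQuery illustrates the branch: sameness-group membership, when
// present, becomes the ordered failover target list; otherwise the query's
// explicitly configured targets are used.
func targetsForQuery(samenessGroupMembers, explicit []failoverTarget) []failoverTarget {
	if len(samenessGroupMembers) > 0 {
		return samenessGroupMembers
	}
	return explicit
}

func main() {
	group := []failoverTarget{{Partition: "ap1"}, {Peer: "dc2-peer"}}
	fmt.Println(targetsForQuery(group, nil))
}
```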
Before this change, we were not fetching service-resolver (and therefore
service-defaults) configuration entries for services on members of sameness
groups.
This implements permissive mTLS, which allows toggling services into "permissive" mTLS mode.
Permissive mTLS mode allows incoming non-Consul-mTLS traffic to be forwarded unmodified to the application.
* Update service-defaults and proxy-defaults config entries with a MutualTLSMode field
* Update the mesh config entry with an AllowEnablingPermissiveMutualTLS field and implement the necessary validation. AllowEnablingPermissiveMutualTLS must be true to allow changing to MutualTLSMode=permissive, but this does not require that all proxy-defaults and service-defaults are currently in strict mode.
* Update xDS listener config to add a "permissive filter chain" when MutualTLSMode=permissive for a particular service. The permissive filter chain matches incoming traffic by the destination port. If the destination port matches the service port from the catalog, then no mTLS is required and the traffic sent is forwarded unmodified to the application.
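A minimal sketch of the listener-side decision described above, using made-up helpers: in permissive mode, only traffic whose destination port matches the service's catalog port bypasses mTLS.

```go
package main

import "fmt"

// requireMTLS sketches the permissive-mode routing decision for an inbound
// connection: in permissive mode only, traffic whose destination port equals
// the service's catalog port is forwarded without mTLS; all other inbound
// traffic (and every connection in strict mode) must present Consul mTLS.
func requireMTLS(mode string, destPort, servicePort int) bool {
	if mode == "permissive" && destPort == servicePort {
		return false
	}
	return true
}

func main() {
	fmt.Println(requireMTLS("permissive", 8080, 8080)) // false: pass through
	fmt.Println(requireMTLS("strict", 8080, 8080))     // true: mTLS required
}
```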
This commit adds the PrioritizeByLocality field to both proxy-config
and service-resolver config entries for locality-aware routing. The
field is currently intended for enterprise only, and will be used to
enable prioritization of service-mesh connections to services based
on geographical region / zone.
- added Sameness Group to config entries
- added Sameness Group to subscriptions
* generated proto files
* added Sameness Group events to the state store
- added test cases
* Refactored health RPC Client
- moved code that is common to rpcclient under rpcclient common.go. This will help set us up to support future RPC clients
* Refactored proxycfg glue views
- Moved views to rpcclient config entry. This will allow us to reuse this code for a config entry client
* added config entry RPC Client
- Copied most of the testing code from rpcclient/health
* hooked up new rpcclient in agent
* fixed documentation and comments for clarity
* Add a test to reproduce the race condition
* Fix race condition by publishing the event after the commit and adding a lock to prevent out of order events.
* split publish to generate the list of events before committing the transaction.
* add changelog
* remove extra func
* Apply suggestions from code review
Co-authored-by: Dan Upton <daniel@floppy.co>
* add comment to explain test
---------
Co-authored-by: Dan Upton <daniel@floppy.co>
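A hedged sketch of the ordering fix with hypothetical types: the event list is generated while the transaction is open, and publishing happens only after a successful commit, under a lock so batches cannot interleave out of order.

```go
package main

import (
	"fmt"
	"sync"
)

type event struct{ Index uint64 }

type publisher struct {
	mu sync.Mutex
}

// publish holds a lock while emitting so concurrent commits cannot interleave
// their event batches out of order.
func (p *publisher) publish(events []event) {
	p.mu.Lock()
	defer p.mu.Unlock()
	for _, e := range events {
		fmt.Println("published index", e.Index)
	}
}

// commitThenPublish mirrors the fix: build the event list before committing,
// commit first, and only then publish.
func commitThenPublish(p *publisher, commit func() error, events []event) error {
	if err := commit(); err != nil {
		return err // nothing is published on a failed commit
	}
	p.publish(events)
	return nil
}

func main() {
	p := &publisher{}
	_ = commitThenPublish(p, func() error { return nil }, []event{{Index: 7}})
}
```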
Prior to this change, peer services would be targeted by service-default
overrides as long as the new `peer` field was not found in the config entry.
This commit removes that deprecated backwards-compatibility behavior. Now
it is necessary to specify the `peer` field in order for upstream overrides
to apply to a peer upstream.
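A small illustrative sketch of the new matching rule (local types, not Consul's real structs): an upstream override now applies to a peered upstream only when the override's `peer` field matches.

```go
package main

import "fmt"

type upstreamOverride struct {
	Name string
	Peer string
}

type upstream struct {
	Name string
	Peer string
}

// applies reflects the post-change behavior: an override without a peer no
// longer falls through to peer upstreams of the same name.
func applies(o upstreamOverride, u upstream) bool {
	return o.Name == u.Name && o.Peer == u.Peer
}

func main() {
	o := upstreamOverride{Name: "billing"} // no Peer set
	fmt.Println(applies(o, upstream{Name: "billing", Peer: "partner"})) // false now
	fmt.Println(applies(o, upstream{Name: "billing"}))                  // true
}
```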
The old setting of 24 hours was not enough time to deal with an expiring certificate. This change increases it to 28 days OR 40% of the full cert duration, whichever is shorter. It also adds detail to the log message to indicate which certificate it is logging about, along with a suggested action.
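That works out to min(28 days, 40% of the certificate's total lifetime); a small sketch of the arithmetic:

```go
package main

import (
	"fmt"
	"time"
)

// expiryWarningWindow returns how long before expiration to start warning:
// 28 days, or 40% of the certificate's full lifetime if that is shorter.
func expiryWarningWindow(certLifetime time.Duration) time.Duration {
	window := 28 * 24 * time.Hour
	if pct := time.Duration(float64(certLifetime) * 0.4); pct < window {
		return pct
	}
	return window
}

func main() {
	fmt.Println(expiryWarningWindow(365 * 24 * time.Hour)) // 28 days
	fmt.Println(expiryWarningWindow(10 * 24 * time.Hour))  // 4 days (40% of 10 days)
}
```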
Currently, if an acceptor peer deletes a peering the dialer's peering
will eventually get to a "terminated" state. If the two clusters need to
be re-peered the acceptor will re-generate the token but the dialer will
encounter this error on the call to establish:
"failed to get addresses to dial peer: failed to refresh peer server
addresses, will continue to use initial addresses: there is no active
peering for "<<<ID>>>""
This is because in `exchangeSecret().GetDialAddresses()` we will get an
error when fetching addresses for an inactive peering. The peering shows
up as inactive at this point because of the existing terminated state.
Rather than checking whether a peering is active we can instead check
whether it was deleted. This way users do not need to delete terminated
peerings in the dialing cluster before re-establishing them.
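A hedged sketch of the changed guard (fields are hypothetical): address refresh now rejects only peerings that have actually been deleted, rather than anything that is not active, so a terminated peering can still be re-established.

```go
package main

import (
	"errors"
	"fmt"
)

type peering struct {
	State     string // e.g. "ACTIVE", "TERMINATED"
	DeletedAt int64  // non-zero once the peering has been deleted
}

// canRefreshAddresses encodes the fix: previously this effectively required an
// active peering, which rejected terminated peerings during re-establishment;
// now only deleted peerings are refused.
func canRefreshAddresses(p peering) error {
	if p.DeletedAt != 0 {
		return errors.New("peering has been deleted")
	}
	return nil
}

func main() {
	fmt.Println(canRefreshAddresses(peering{State: "TERMINATED"})) // <nil>: re-peering allowed
	fmt.Println(canRefreshAddresses(peering{DeletedAt: 1}))
}
```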
* Rename Intermediate cert references to LeafSigningCert
Within the Consul CA subsystem, the term "Intermediate"
is confusing because the meaning changes depending on
provider and datacenter (primary vs secondary). For
example, when using the Consul CA the "ActiveIntermediate"
may return the root certificate in a primary datacenter.
At a high level, we are interested in knowing which
CA is responsible for signing leaf certs, regardless of
its position in a certificate chain. This rename makes
the intent clearer.
* Move provider state check earlier
* Remove calls to GenerateLeafSigningCert
GenerateLeafSigningCert (formerly known
as GenerateIntermediate) is vestigial in
non-Vault providers, as it simply returns
the root certificate in primary
datacenters.
By folding Vault's intermediate cert logic
into `GenerateRoot` we can encapsulate
the intermediate cert handling within
`newCARoot`.
* Move GenerateLeafSigningCert out of PrimaryProvider
Now that the Vault Provider calls
GenerateLeafSigningCert within
GenerateRoot, we can remove the method
from all other providers that never
used it in a meaningful way.
* Add test for IntermediatePEM
* Rename GenerateRoot to GenerateCAChain
"Root" was being overloaded in the Consul CA
context, as different providers and configs
resulted in a single root certificate or
a chain originating from an external trusted
CA. Since the Vault provider also generates
intermediates, it seems more accurate to
call this a CAChain.
This PR adds the sameness-group field to exported-service
config entries, which allows for services to be exported
to multiple destination partitions / peers easily.
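A small illustrative sketch (local types, not the real config entry structs) of what the new consumer field enables: one exported-services consumer naming a sameness group expands to every partition/peer in the group.

```go
package main

import "fmt"

type consumer struct {
	Partition     string
	Peer          string
	SamenessGroup string
}

// expandConsumers illustrates the effect of the new field: a consumer naming a
// sameness group fans out to all of the group's members, so the operator does
// not have to list each destination partition/peer individually.
func expandConsumers(consumers []consumer, groups map[string][]consumer) []consumer {
	var out []consumer
	for _, c := range consumers {
		if members, ok := groups[c.SamenessGroup]; c.SamenessGroup != "" && ok {
			out = append(out, members...)
			continue
		}
		out = append(out, c)
	}
	return out
}

func main() {
	groups := map[string][]consumer{
		"sg-east": {{Partition: "ap1"}, {Peer: "dc2"}},
	}
	exported := []consumer{{SamenessGroup: "sg-east"}}
	fmt.Println(expandConsumers(exported, groups))
}
```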