* docs: add envoy to the proxycfg diagram (#16834)
* docs: add envoy to the proxycfg diagram
* increase deep-copy job to use large runner. disable lint-enums on ENT
* set lint-enums to a large runner
* remove redundant installation of deep-copy
---------
Co-authored-by: cskh <hui.kang@hashicorp.com>
This is part of an effort to raise awareness that you need to monitor
your mesh CA if it comes from an external source, as you'll need to
manage its rotation.
Currently, if an acceptor peer deletes a peering, the dialer's peering
will eventually reach a "terminated" state. If the two clusters need to
be re-peered, the acceptor will re-generate the token, but the dialer
will encounter this error on the call to establish:
"failed to get addresses to dial peer: failed to refresh peer server
addresses, will continue to use initial addresses: there is no active
peering for "<<<ID>>>""
This is because `exchangeSecret().GetDialAddresses()` returns an error
when fetching addresses for an inactive peering, and the peering shows
up as inactive at this point because of its existing terminated state.
Rather than checking whether a peering is active, we can instead check
whether it was deleted. This way users do not need to delete terminated
peerings in the dialing cluster before re-establishing them.
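For illustration, a minimal Go sketch of the change in the check; the
type and field names below are hypothetical and do not mirror Consul's
internal peering structs exactly:

```go
// Illustrative sketch only; types and fields are hypothetical.
package peering

import (
	"errors"
	"fmt"
	"time"
)

// Peering captures just the fields relevant to the address-refresh check.
type Peering struct {
	ID        string
	State     string    // e.g. "ACTIVE", "TERMINATED"
	DeletedAt time.Time // zero value means the peering was never deleted
}

// Before: any non-active peering (including a "terminated" one) blocked the
// address refresh, so re-establishing after termination failed.
func dialAddressCheckOld(p *Peering) error {
	if p.State != "ACTIVE" {
		return fmt.Errorf("there is no active peering for %q", p.ID)
	}
	return nil
}

// After: only peerings that were actually deleted are rejected, so a
// terminated peering can be re-established without deleting it first.
func dialAddressCheckNew(p *Peering) error {
	if !p.DeletedAt.IsZero() {
		return errors.New("cannot dial addresses for a deleted peering")
	}
	return nil
}
```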
* Rename Intermediate cert references to LeafSigningCert
Within the Consul CA subsystem, the term "Intermediate"
is confusing because the meaning changes depending on
provider and datacenter (primary vs secondary). For
example, when using the Consul CA the "ActiveIntermediate"
may return the root certificate in a primary datacenter.
At a high level, we are interested in knowing which
CA is responsible for signing leaf certs, regardless of
its position in a certificate chain. This rename makes
the intent clearer.
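A hypothetical Go excerpt showing the direction of the rename; the real
provider interface has more methods and may use different signatures:

```go
// Illustrative only; method names are hypothetical.
package ca

// Before: "Intermediate" is ambiguous, since with the Consul CA in a
// primary datacenter this method may actually return the root certificate.
type leafSignerBefore interface {
	ActiveIntermediate() (string, error)
}

// After: the name states what callers care about, the certificate that
// signs leaf certs, regardless of where it sits in the chain.
type leafSignerAfter interface {
	ActiveLeafSigningCert() (string, error)
}
```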
* Move provider state check earlier
* Remove calls to GenerateLeafSigningCert
GenerateLeafSigningCert (formerly known
as GenerateIntermediate) is vestigial in
non-Vault providers, as it simply returns
the root certificate in primary
datacenters.
By folding Vault's intermediate cert logic
into `GenerateRoot` we can encapsulate
the intermediate cert handling within
`newCARoot`.
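A rough sketch of the encapsulation idea, assuming simplified and
hypothetical struct and function names rather than Consul's actual CA
internals:

```go
// Illustrative sketch only; names and fields are hypothetical.
package ca

// CAChainResult bundles what a chain-aware GenerateRoot hands back so that
// newCARoot can assemble the root/intermediate relationship in one place.
type CAChainResult struct {
	RootPEM          string   // trusted root, possibly from an external CA
	LeafSigningPEM   string   // certificate that actually signs leaf certs
	IntermediatePEMs []string // any additional links in the chain
}

// CARoot is a simplified stand-in for the stored root entry.
type CARoot struct {
	RootCert          string
	IntermediateCerts []string
	SigningCert       string
}

// newCARoot encapsulates the intermediate handling that previously required
// separate GenerateIntermediate/GenerateLeafSigningCert calls per provider.
func newCARoot(res CAChainResult) *CARoot {
	root := &CARoot{
		RootCert:          res.RootPEM,
		IntermediateCerts: res.IntermediatePEMs,
		// Non-Vault primaries sign leaves directly with the root.
		SigningCert: res.RootPEM,
	}
	if res.LeafSigningPEM != "" {
		// Vault (and secondaries) sign leaves with an intermediate instead.
		root.IntermediateCerts = append(root.IntermediateCerts, res.LeafSigningPEM)
		root.SigningCert = res.LeafSigningPEM
	}
	return root
}
```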
* Move GenerateLeafSigningCert out of PrimaryProvider
Now that the Vault Provider calls
GenerateLeafSigningCert within
GenerateRoot, we can remove the method
from all other providers that never
used it in a meaningful way.
* Add test for IntermediatePEM
* Rename GenerateRoot to GenerateCAChain
"Root" was being overloaded in the Consul CA
context, as different providers and configs
resulted in a single root certificate or
a chain originating from an external trusted
CA. Since the Vault provider also generates
intermediates, it seems more accurate to
call this a CAChain.
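A hypothetical before/after excerpt of the rename; the actual Consul
provider interface and return types differ:

```go
// Illustrative only; signatures are hypothetical.
package ca

// Before: "Root" suggested a single self-signed certificate, which is not
// accurate for Vault-backed or externally rooted configurations.
type chainProviderBefore interface {
	GenerateRoot() (string, error)
}

// After: the name reflects that a provider may return a whole chain, from
// the leaf-signing certificate up to an externally trusted root.
type chainProviderAfter interface {
	GenerateCAChain() (string, error)
}
```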
* ci: add build-artifacts workflow
Signed-off-by: Dan Bond <danbond@protonmail.com>
* makefile for gha dev-docker
Signed-off-by: Dan Bond <danbond@protonmail.com>
* use docker actions instead of make
Signed-off-by: Dan Bond <danbond@protonmail.com>
* Add context
Signed-off-by: Dan Bond <danbond@protonmail.com>
* testing push
Signed-off-by: Dan Bond <danbond@protonmail.com>
* set short sha
Signed-off-by: Dan Bond <danbond@protonmail.com>
* upload to s3
Signed-off-by: Dan Bond <danbond@protonmail.com>
* rm s3 upload
Signed-off-by: Dan Bond <danbond@protonmail.com>
* use runner setup job
Signed-off-by: Dan Bond <danbond@protonmail.com>
* on push
Signed-off-by: Dan Bond <danbond@protonmail.com>
* testing
Signed-off-by: Dan Bond <danbond@protonmail.com>
* on pr
Signed-off-by: Dan Bond <danbond@protonmail.com>
* revert testing
Signed-off-by: Dan Bond <danbond@protonmail.com>
* OSS/ENT logic
Signed-off-by: Dan Bond <danbond@protonmail.com>
* add comments
Signed-off-by: Dan Bond <danbond@protonmail.com>
* Update .github/workflows/build-artifacts.yml
Co-authored-by: John Murret <john.murret@hashicorp.com>
---------
Signed-off-by: Dan Bond <danbond@protonmail.com>
Co-authored-by: John Murret <john.murret@hashicorp.com>
This PR adds the sameness-group field to exported-services
config entries, which allows services to be exported
to multiple destination partitions / peers easily.
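A minimal Go sketch, assuming simplified struct names, of where such a
consumer field sits on the exported-services config entry types; the
real types carry additional fields:

```go
// Illustrative sketch only; types are simplified.
package configentry

// ExportedServicesConfigEntry lists services exported from a partition.
type ExportedServicesConfigEntry struct {
	Name     string
	Services []ExportedService
}

// ExportedService names a service and who may consume it.
type ExportedService struct {
	Name      string
	Consumers []ServiceConsumer
}

// ServiceConsumer previously targeted a single Partition or Peer; the new
// SamenessGroup field exports to every member of the group at once.
type ServiceConsumer struct {
	Partition     string
	Peer          string
	SamenessGroup string
}
```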
* go-tests workflow
* add test splitting to go-tests
* fix re-run fails report path
* fix re-run fails report path in another place
* fixing tests for 32-bit and race
* use script file to generate runners
* fixing run path
* add checkout
* Apply suggestions from code review
Co-authored-by: Dan Bond <danbond@protonmail.com>
* Apply suggestions from code review
Co-authored-by: Dan Bond <danbond@protonmail.com>
* Apply suggestions from code review
Co-authored-by: Dan Bond <danbond@protonmail.com>
* passing runs-on
* setting up runs-on as a parameter to check-go-mod
* making on pull_request
* Update .github/scripts/rerun_fails_report.sh
Co-authored-by: Dan Bond <danbond@protonmail.com>
* Apply suggestions from code review
Co-authored-by: Dan Bond <danbond@protonmail.com>
* make runs-on required
* removing go-version param that is not used.
* removing go-version param that is not used.
* Modify build-distros to use medium runners (#16773)
* go-tests workflow
* add test splitting to go-tests
* fix re-run fails report path
* fix re-run fails report path in another place
* fixing tests for 32-bit and race
* use script file to generate runners
* fixing run path
* add checkout
* Apply suggestions from code review
Co-authored-by: Dan Bond <danbond@protonmail.com>
* Apply suggestions from code review
Co-authored-by: Dan Bond <danbond@protonmail.com>
* Apply suggestions from code review
Co-authored-by: Dan Bond <danbond@protonmail.com>
* passing runs-on
* setting up runs-on as a parameter to check-go-mod
* trying medium runners
* adding in script
* fixing runs-on to be parameter
* fixing merge conflict
* changing to on push
* removing whitespace
* go-tests workflow
* add test splitting to go-tests
* fix re-run fails report path
* fix re-run fails report path in another place
* fixing tests for 32-bit and race
* use script file to generate runners
* fixing run path
* add checkout
* Apply suggestions from code review
Co-authored-by: Dan Bond <danbond@protonmail.com>
* Apply suggestions from code review
Co-authored-by: Dan Bond <danbond@protonmail.com>
* Apply suggestions from code review
Co-authored-by: Dan Bond <danbond@protonmail.com>
* passing runs-on
* setting up runs-on as a parameter to check-go-mod
* changing back to on pull_request
---------
Co-authored-by: Dan Bond <danbond@protonmail.com>
* Github Actions Migration - move verify-ci workflows to GHA (#16777)
* add verify-ci workflow
* adding comment and changing to on pull request.
* changing to pull_requests
* changing to pull_request
* Apply suggestions from code review
Co-authored-by: Dan Bond <danbond@protonmail.com>
* [NET-3029] Migrate frontend to GHA (#16731)
* changing setup to a small runner
* using consuls own custom runner pool.
---------
Co-authored-by: Dan Bond <danbond@protonmail.com>
* copyright headers for agent folder
* Ignore test data files
* fix proto files and remove headers in agent/uiserver folder
* ignore deep-copy files
* copyright headers for agent folder
* fix merge conflicts
* copyright headers for agent folder
* Ignore test data files
* fix proto files
* ignore agent/uiserver folder for now
* copyright headers for agent folder
* Add copyright headers for acl, api and bench folders
* Use merge of enterprise meta's rather than new custom method
* Add merge logic for tcp routes
* Add changelog
* Normalize certificate refs on gateways
* Fix infinite call loop
* Explicitly call enterprise meta
* Expand route flattening test for multiple namespaces
* Add helper for checking http route config entry exists without
checking for bound status
* Fix port and hostname check for http route flattening test
Introduces `storage.Backend`, which will serve as the interface between the
Resource Service and the underlying storage system (Raft today, but in the
future, who knows!).
The primary design goal of this interface is to keep its surface area small,
and push as much functionality as possible into the layers above, so that new
implementations can be added with little effort, and easily proven to be
correct. To that end, we also provide a suite of "conformance" tests that can
be run against a backend implementation to check it behaves correctly.
In this commit, we introduce an initial in-memory storage backend, which is
suitable for tests and when running Consul in development mode. This backend is
a thin wrapper around the `Store` type, which implements a resource database
using go-memdb and our internal pub/sub system. `Store` will also be used to
handle reads in our Raft backend, and in the future, used as a local cache for
external storage systems.
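A rough Go sketch of the kind of small-surface interface described
above; the method set, types, and signatures here are hypothetical and
much smaller than the real `storage.Backend`:

```go
// Illustrative sketch only; not the actual storage.Backend signature.
package storage

import "context"

// Resource is a stand-in for the resource type managed by the Resource Service.
type Resource struct {
	ID      string
	Version string
	Data    []byte
}

// Backend keeps its surface area deliberately small: basic reads, writes,
// deletes, and listing, so that new implementations (in-memory, Raft,
// external stores) stay easy to write and to prove correct.
type Backend interface {
	Read(ctx context.Context, id string) (*Resource, error)
	Write(ctx context.Context, res *Resource) (*Resource, error)
	Delete(ctx context.Context, id string, version string) error
	List(ctx context.Context, prefix string) ([]*Resource, error)
}
```

Because the surface is this small, the shared conformance suite can be
run against any implementation, including the in-memory backend built on
`Store`, to check it behaves correctly.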