* migrate build distros to GHA
Signed-off-by: Dan Bond <danbond@protonmail.com>
* build-arm
Signed-off-by: Dan Bond <danbond@protonmail.com>
* don't use matrix
Signed-off-by: Dan Bond <danbond@protonmail.com>
* check-go-mod
Signed-off-by: Dan Bond <danbond@protonmail.com>
* add notify slack script
Signed-off-by: Dan Bond <danbond@protonmail.com>
* notify slack if failure
Signed-off-by: Dan Bond <danbond@protonmail.com>
* rm notify slack script
Signed-off-by: Dan Bond <danbond@protonmail.com>
* fix check-go-mod job
Signed-off-by: Dan Bond <danbond@protonmail.com>
---------
Signed-off-by: Dan Bond <danbond@protonmail.com>
* Remove unused 'are hosts set' check
* Remove all traces of unused 'AreHostsSet' parameter
* Remove unused Hosts attribute
* Remove commented out use of snap.APIGateway.Hosts
* NET-2397: Add readme.md to upgrade test subdirectory
* remove test code
* fix link and update steps of adding new test cases (#16654)
* fix link and update steps of adding new test cases
* Apply suggestions from code review
Co-authored-by: Nick Irvine <115657443+nfi-hashicorp@users.noreply.github.com>
---------
Co-authored-by: Nick Irvine <115657443+nfi-hashicorp@users.noreply.github.com>
---------
Co-authored-by: cskh <hui.kang@hashicorp.com>
Co-authored-by: Nick Irvine <115657443+nfi-hashicorp@users.noreply.github.com>
* add snapshot restore test
* add logstore as test parameter
* Use the correct image version
* make sure we read the logs from a follower to test the follower snapshot install path.
* update to raft-wal v0.3.0
* add changelog.
* update changelog with the bug description and remove the integration test.
* set up the test container builder to only set logStore for 1.15 and higher
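A minimal sketch of that version gate, assuming a hypothetical `ClusterConfig` builder type and field names; the real test-container builder API may differ:

```go
// Illustrative only, not the actual test-container builder code.
package testcluster

import goversion "github.com/hashicorp/go-version"

// ClusterConfig and LogStore are illustrative names.
type ClusterConfig struct {
	Version  string // Consul image version under test, e.g. "1.15.2"
	LogStore string // e.g. "wal"; left empty for older images
}

// applyLogStore only sets the logStore option when the image is 1.15 or
// newer, since older versions do not understand the setting.
func applyLogStore(cfg *ClusterConfig, logStore string) error {
	v, err := goversion.NewVersion(cfg.Version)
	if err != nil {
		return err
	}
	min := goversion.Must(goversion.NewVersion("1.15.0"))
	if v.GreaterThanOrEqual(min) {
		cfg.LogStore = logStore
	}
	return nil
}
```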
---------
Co-authored-by: Paul Banks <pbanks@hashicorp.com>
Co-authored-by: John Murret <john.murret@hashicorp.com>
* First cluster grpc service should be NodePort
This is based on the issue opened here https://github.com/hashicorp/consul-k8s/issues/1903
If you follow the documentation at https://developer.hashicorp.com/consul/docs/k8s/deployment-configurations/single-dc-multi-k8s exactly as written, the first cluster only exposes the Consul UI service via NodePort, but not the rest of the services (including gRPC). By default, the Helm chart creates them as headless services by setting clusterIP to None. This prevents the second cluster from discovering the Consul servers in the first cluster over gRPC, because it simply cannot connect through the default gRPC port 8502, and it ends up with the error shown in https://github.com/hashicorp/consul-k8s/issues/1903
As a solution, the gRPC service should be exposed using NodePort (or LoadBalancer). I added the required changes to both cluster1-values.yaml and cluster2-values.yaml, along with a description of those changes so that users can understand them. Kindly review; I hope this PR will be accepted.
* Update website/content/docs/k8s/deployment-configurations/single-dc-multi-k8s.mdx
Co-authored-by: trujillo-adam <47586768+trujillo-adam@users.noreply.github.com>
* Update website/content/docs/k8s/deployment-configurations/single-dc-multi-k8s.mdx
Co-authored-by: trujillo-adam <47586768+trujillo-adam@users.noreply.github.com>
* Update website/content/docs/k8s/deployment-configurations/single-dc-multi-k8s.mdx
Co-authored-by: trujillo-adam <47586768+trujillo-adam@users.noreply.github.com>
---------
Co-authored-by: trujillo-adam <47586768+trujillo-adam@users.noreply.github.com>
* Refactored "NewGatewayService" to handle namespaces, fixed
TestHTTPRouteFlattening test
* Fixed existing http_route tests for namespacing
* Squash aclEnterpriseMeta for ResourceRefs and HTTPServices, accept
namespace for creating connect services and regular services
* Use require instead of assert after creating namespaces in
http_route_tests
* Refactor NewConnectService and NewGatewayService functions to use cfg
objects to reduce number of method args
* Rename field on SidecarConfig in tests from `SidecarServiceName` to
`Name` to avoid stutter
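A rough sketch of the config-object pattern described in these bullets, with hypothetical field names (the actual `GatewayConfig`/`SidecarConfig` structs in the test libraries may differ):

```go
// Illustrative only: the config structs collapse long positional argument
// lists into a single value, and the sidecar field is just Name rather than
// SidecarServiceName.
package testservices

type GatewayConfig struct {
	Name      string
	Kind      string // e.g. "api"
	Namespace string
}

type SidecarConfig struct {
	Name      string
	Namespace string
}

// NewGatewayService registers a gateway described by cfg instead of taking
// each attribute as a separate parameter.
func NewGatewayService(cfg GatewayConfig) error {
	// ... create the gateway service in cfg.Namespace ...
	return nil
}
```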
This commit fixes an issue where trust bundles could not be read
by services in a non-default namespace unless they were granted
excessive ACL permissions.
Prior to this change, `service:write` was required in the default
namespace in order to read the trust bundle. Now, `service:write`
to a service in any namespace is sufficient.
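A simplified sketch of the relaxed check, using a hypothetical authorizer interface rather than Consul's real ACL types:

```go
// Hypothetical stand-ins for the real ACL plumbing; the sketch only shows
// the relaxed namespace requirement.
package trustbundle

// Authorizer reports whether the token grants service:write on the named
// service in the named namespace.
type Authorizer interface {
	ServiceWriteAllowed(service, namespace string) bool
}

// canReadTrustBundle used to consult only the "default" namespace; now
// service:write on a service in the requesting service's own namespace is
// sufficient.
func canReadTrustBundle(authz Authorizer, service, namespace string) bool {
	return authz.ServiceWriteAllowed(service, namespace)
}
```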
If a CA config update did not cause a root change, the code path would return early and skip the steps that preserve the root's intermediate certificates and signing key ID. This commit re-orders that code so that such updates no longer generate new intermediate certificates.
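A sketch of the re-ordered flow, with illustrative types rather than Consul's actual CA manager code:

```go
// Illustrative types only; not Consul's actual CA manager API.
package ca

type caState struct {
	SigningKeyID      string
	IntermediateCerts []string
}

// applyConfigUpdate preserves the existing intermediates and signing key ID
// on the early-return path taken when the root is unchanged, instead of
// letting a later step mint a fresh intermediate certificate.
func applyConfigUpdate(prev, next *caState, rootChanged bool) *caState {
	if !rootChanged {
		next.SigningKeyID = prev.SigningKeyID
		next.IntermediateCerts = prev.IntermediateCerts
		return next
	}
	// ... full root-rotation path ...
	return next
}
```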
This commit adds a sameness-group config entry to the API and structs packages. It includes some validation logic and a new memdb index that tracks the default sameness-group for each partition. Sameness groups will simplify the effort of managing failovers / intentions / exports for peers and partitions.
Note that this change is purely to introduce the configuration entry and does not include the full functionality of sameness groups.
Co-authored-by: Ashvitha Sridharan <ashvitha.sridharan@hashicorp.com>
Co-authored-by: Freddy <freddygv@users.noreply.github.com>
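Roughly, the new entry could look like the following sketch; the field names are inferred from the description above and may not match the structs package exactly:

```go
// Sketch of a sameness-group config entry. Illustrative only.
package configentry

import "fmt"

type SamenessGroupConfigEntry struct {
	Name               string
	Partition          string
	DefaultForFailover bool // the memdb index tracks at most one default per partition
	Members            []SamenessGroupMember
}

// SamenessGroupMember names either a local admin partition or a peered cluster.
type SamenessGroupMember struct {
	Partition string
	Peer      string
}

// Validate enforces a basic invariant: each member sets exactly one of
// Partition or Peer.
func (e *SamenessGroupConfigEntry) Validate() error {
	for i, m := range e.Members {
		if (m.Partition == "") == (m.Peer == "") {
			return fmt.Errorf("member %d must set exactly one of Partition or Peer", i)
		}
	}
	return nil
}
```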
Add a new envoy flag: "envoy_hcp_metrics_bind_socket_dir", a directory
where a unix socket will be created with the name
`<namespace>_<proxy_id>.sock` to forward Envoy metrics.
If set, this will configure:
- In the bootstrap configuration, a local stats_sink and static cluster.
These forward metrics to a loopback listener delivered over xDS.
- A dynamic listener at the socket path that the previously
defined static cluster sends metrics to.
- A dynamic cluster that will forward traffic received at this listener
to the hcp-metrics-collector service.
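As a small illustration of the socket naming described above (the helper is hypothetical, not Consul's actual code):

```go
package hcpmetrics

import (
	"fmt"
	"path/filepath"
)

// metricsSocketPath joins the configured envoy_hcp_metrics_bind_socket_dir
// with the "<namespace>_<proxy_id>.sock" naming scheme used for the stats
// listener.
func metricsSocketPath(bindSocketDir, namespace, proxyID string) string {
	return filepath.Join(bindSocketDir, fmt.Sprintf("%s_%s.sock", namespace, proxyID))
}
```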
Reasons for having a static cluster pointing at a dynamic listener:
- We want to secure the metrics stream using TLS, but the stats sink can
only be defined in bootstrap config. With dynamic listeners/clusters
we can use the proxy's leaf certificate issued by the Connect CA,
which isn't available at bootstrap time.
- We want to intelligently route to the HCP collector. Configuring its
address at bootstrap time limits our flexibility routing-wise. More
on this below.
Reasons for defining the collector as an upstream in `proxycfg`:
- The HCP collector will be deployed as a mesh service.
- Certificate management is taken care of, as mentioned above.
- Service discovery and routing logic is automatically taken care of,
meaning that no code changes are required in the xds package.
- Custom routing rules can be added for the collector using discovery
chain config entries. Initially the collector is expected to be
deployed to each admin partition, but in the future could be deployed
centrally in the default partition. These config entries could even be
managed by HCP itself.