open-nomad/e2e
Seth Hoenig 8b05efcf88 consul/connect: Add support for Connect terminating gateways
This PR implements Nomad built-in support for running Consul Connect
terminating gateways. Such a gateway can be used by services running
inside the service mesh to access "legacy" services running outside
the service mesh while still making use of Consul's service identity
based networking and ACL policies.

https://www.consul.io/docs/connect/gateways/terminating-gateway

These gateways are declared as part of a task group level service
definition within the connect stanza.

service {
  connect {
    gateway {
      proxy {
        // envoy proxy configuration
      }
      terminating {
        // terminating-gateway configuration entry
      }
    }
  }
}

Currently Envoy is the only supported gateway implementation in
Consul. The gateway task can be customized by configuring the
connect.sidecar_task block.

When the gateway.terminating field is set, Nomad will write/update
the Configuration Entry into Consul on job submission. Because CEs
are global in scope and there may be more than one Nomad cluster
communicating with Consul, there is an assumption that any terminating
gateway defined in Nomad for a particular service will be the same
among Nomad clusters.
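
As a sketch, a terminating gateway fronting a hypothetical non-mesh service
might be declared as follows. The service names here ("api-terminating-gateway",
"legacy-mysql") are illustrative, not from the source; the nested block names
follow the stanza layout above:

```hcl
service {
  name = "api-terminating-gateway"

  connect {
    gateway {
      proxy {
        # envoy proxy configuration; the defaults usually suffice
      }
      terminating {
        # one service block per non-mesh service the gateway exposes;
        # this becomes part of the terminating-gateway configuration
        # entry Nomad writes into Consul on job submission
        service {
          name = "legacy-mysql" # hypothetical service outside the mesh
        }
      }
    }
  }
}
```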

Gateways require Consul 1.8.0+, checked by a node constraint.
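
The injected constraint is roughly equivalent to the following; the exact
attribute and operator are an assumption based on Nomad's Consul version
fingerprinting, shown here only to illustrate how the 1.8.0+ requirement is
enforced at placement time:

```hcl
constraint {
  # node attribute populated by the Consul fingerprinter
  attribute = "${attr.consul.version}"
  operator  = "semver"
  value     = ">= 1.8.0"
}
```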

Closes #9445
2021-01-25 10:36:04 -06:00
affinities e2e: setup consul ACLs a little more correctly 2020-01-31 19:06:11 -06:00
bin e2e/bin/run: run & update only attempt to contact linux servers (#8517) 2020-07-24 10:52:12 -04:00
cli Apply some suggested fixes from staticcheck (#9598) 2020-12-10 07:29:18 -08:00
clientstate e2e: setup consul ACLs a little more correctly 2020-01-31 19:06:11 -06:00
connect consul/connect: Add support for Connect terminating gateways 2021-01-25 10:36:04 -06:00
consul e2e: added tests for check restart behavior 2021-01-22 10:55:40 -05:00
consulacls e2e: add e2e test for consul connect ingress gateway demo 2020-11-25 16:54:02 -06:00
consultemplate modified e2e test so that it explicitly tested the use case in #6929 2021-01-04 22:25:39 +00:00
csi E2E: CSI driver provisioning (#9443) 2020-11-25 09:05:22 -05:00
deployment e2e: setup consul ACLs a little more correctly 2020-01-31 19:06:11 -06:00
e2eutil prevent double job status update (#9768) 2021-01-22 09:18:17 -05:00
events e2e: deflake events 2021-01-21 10:25:42 -05:00
example e2e/cli: fix formatting 2018-07-31 13:52:25 -04:00
execagent Apply some suggested fixes from staticcheck (#9598) 2020-12-10 07:29:18 -08:00
framework Add gosimple linter (#9590) 2020-12-09 11:05:18 -08:00
lifecycle lifecycle: update e2e test for service job with new docker signal #8932 2020-12-01 23:41:32 -08:00
metrics remove references to default mbits 2020-11-23 10:32:13 -06:00
namespaces e2e deflake namespaces: only check namespace jobs 2021-01-21 10:26:24 -05:00
networking e2e: networking test job needs to outlast assert (#9113) 2020-10-16 10:13:16 -04:00
nodedrain e2e: cleanup errors should use assert, not require (#8989) 2020-09-30 09:00:37 -04:00
nomad09upgrade e2e: setup consul ACLs a little more correctly 2020-01-31 19:06:11 -06:00
nomadexec e2e: setup consul ACLs a little more correctly 2020-01-31 19:06:11 -06:00
periodic prevent double job status update (#9768) 2021-01-22 09:18:17 -05:00
podman e2e: ensure tests are constrained to Linux (#8990) 2020-09-30 09:43:30 -04:00
quotas e2e: ENT placeholder for namespace/quotas tests (#8973) 2020-09-28 11:23:37 -04:00
rescheduling e2e: eliminate race condition causing rescheduling test flake (#9085) 2020-10-14 11:35:30 -04:00
scaling e2e: add job scaling test suite. 2021-01-11 11:34:19 +01:00
scalingpolicies e2e: add ScalingPolicies test suite with initial test case. 2021-01-07 14:39:55 +01:00
spread e2e: add reporting to flaky spread test (#9115) 2020-10-16 11:01:07 -04:00
systemsched Add gocritic to golangci-lint config (#9556) 2020-12-08 12:47:04 -08:00
taskevents e2e: fix flaky TaskEventsTest (#9114) 2020-10-16 10:22:40 -04:00
terraform prevent double job status update (#9768) 2021-01-22 09:18:17 -05:00
upgrades script e2e/upgrades: cluster upgrade scripts 2019-09-24 14:35:45 -04:00
vaultcompat E2E: vault secrets (#9081) 2020-10-14 08:43:28 -04:00
vaultsecrets Add gosimple linter (#9590) 2020-12-09 11:05:18 -08:00
volumes e2e: deflake TestVolumeMounts 2021-01-21 10:28:41 -05:00
.gitignore e2e: have TF write-out HCL for CSI volume registration (#7599) 2020-04-02 12:16:43 -04:00
e2e_test.go prevent double job status update (#9768) 2021-01-22 09:18:17 -05:00
README.md e2e: provision cluster entirely through Terraform (#8748) 2020-09-18 11:27:24 -04:00

End to End Tests

This package contains integration tests. Unlike the tests that live alongside Nomad's source code, these tests expect a functional Nomad cluster to already be accessible (either on localhost or at the address in the NOMAD_ADDR environment variable).

See framework/doc.go for how to write tests.

The NOMAD_E2E=1 environment variable must be set for these tests to run.

Provisioning Test Infrastructure on AWS

The terraform/ folder has provisioning code to spin up a Nomad cluster on AWS. You'll need both Terraform and AWS credentials to set up the AWS instances on which the e2e tests will run. See the README for details. The number of servers and clients is configurable, as is the specific build of Nomad to deploy and the configuration file for each client and server.
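
A minimal hypothetical terraform.tfvars for such a run might look like the
following. Only nomad_version is confirmed by this document; the count
variable names are assumptions, so check variables.tf for the real names:

```hcl
# hypothetical terraform.tfvars; see e2e/terraform/variables.tf
# for the authoritative variable names
server_count  = 3
client_count  = 4
nomad_version = "1.0.2"
```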

Provisioning Local Clusters

To run tests against a local cluster, you'll need to make sure the following environment variables are set:

  • NOMAD_ADDR should point to one of the Nomad servers
  • CONSUL_HTTP_ADDR should point to one of the Consul servers
  • NOMAD_E2E=1
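
For a local cluster on default ports, that might look like the following (the
addresses are illustrative defaults, not taken from the source):

```shell
# hypothetical local-cluster addresses; substitute your own
export NOMAD_ADDR=http://127.0.0.1:4646
export CONSUL_HTTP_ADDR=http://127.0.0.1:8500
export NOMAD_E2E=1
```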

TODO: the scripts in ./bin currently work only with Terraform; it would be nice to have a way to deploy Nomad to Vagrant or local clusters.

Running

After completing the provisioning step above, you can set the client environment for NOMAD_ADDR and run the tests as shown below:

# from the ./e2e/terraform directory, set your client environment
# if you haven't already
$(terraform output environment)

cd ..
go test -v .

If you want to run a specific suite, you can specify the -suite flag as shown below. Only the suite with a matching Framework.TestSuite.Component will be run, and all others will be skipped.

go test -v -suite=Consul .

If you want to run a specific test, you'll need to regex-escape some of the test's name so that the test runner doesn't skip over framework struct method names in the full name of the tests:

go test -v . -run 'TestE2E/Consul/\*consul\.ScriptChecksE2ETest/TestGroup'
                              ^       ^             ^               ^
                              |       |             |               |
                          Component   |             |           Test func
                                      |             |
                                  Go Package      Struct

I Want To...

...SSH Into One Of The Test Machines

You can use the Terraform output to find the IP address. The keys will be in the ./terraform/keys/ directory.

ssh -i keys/nomad-e2e-*.pem ubuntu@${EC2_IP_ADDR}

Run terraform output for IP addresses and details.

...Deploy a Cluster of Mixed Nomad Versions

The variables.tf file describes the nomad_sha, nomad_version, and nomad_local_binary variables, which cover most circumstances. But if you want to deploy mixed Nomad versions, you can provide a list of versions in your terraform.tfvars file.

For example, if you want to provision 3 servers all using Nomad 0.12.1, and 2 Linux clients using 0.12.1 and 0.12.2, you can use the following variables:

# will be used for servers
nomad_version = "0.12.1"

# will override the nomad_version for Linux clients
nomad_version_client_linux = [
    "0.12.1",
    "0.12.2"
]

...Deploy Custom Configuration Files

Set the profile field to "custom" and put the configuration files in ./terraform/config/custom/ as described in the README.

...Deploy More Than 4 Linux Clients

Use the "custom" profile as described above.

...Change the Nomad Version After Provisioning

You can update the nomad_sha or nomad_version variables, or simply rebuild the binary at the nomad_local_binary path so that Terraform picks up the change. Then run terraform plan and terraform apply again. This will update Nomad in place, making the minimum number of changes necessary.