End to End Tests

This package contains integration tests. Unlike tests alongside Nomad code, these tests expect there to already be a functional Nomad cluster accessible (either on localhost or via the NOMAD_ADDR env var).

See framework/doc.go for how to write tests.

The NOMAD_E2E=1 environment variable must be set for these tests to run.

Provisioning Test Infrastructure on AWS

The terraform/ folder has provisioning code to spin up a Nomad cluster on AWS. You'll need both Terraform and AWS credentials to set up the AWS instances on which the e2e tests will run. See the README for details. The number of servers and clients is configurable, as is the specific build of Nomad to deploy and the configuration file for each client and server.
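
A provisioning run is roughly the following (a sketch only; the required variables and AWS credential setup are covered in the terraform/README):

# assumes AWS credentials are already available in the environment,
# e.g. via AWS_ACCESS_KEY_ID / AWS_SECRET_ACCESS_KEY or a shared profile
cd ./e2e/terraform
terraform init
terraform apply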

Provisioning Local Clusters

To run tests against a local cluster, you'll need to make sure the following environment variables are set:

  • NOMAD_ADDR should point to one of the Nomad servers
  • CONSUL_HTTP_ADDR should point to one of the Consul servers
  • NOMAD_E2E=1
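
For example, for a cluster whose agents are listening on the default ports on localhost (the addresses below are placeholders; substitute your own cluster's):

export NOMAD_ADDR=http://127.0.0.1:4646
export CONSUL_HTTP_ADDR=http://127.0.0.1:8500
export NOMAD_E2E=1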

TODO: the scripts in ./bin currently work only with Terraform; it would be nice to have a way to deploy Nomad to Vagrant or local clusters.

Running

After completing the provisioning step above, you can set the client environment, including NOMAD_ADDR, and run the tests as shown below:

# from the ./e2e/terraform directory, set your client environment
# if you haven't already
$(terraform output environment)

cd ..
go test -v .

If you want to run a specific suite, you can specify the -suite flag as shown below. Only the suite with a matching Framework.TestSuite.Component will be run, and all others will be skipped.

go test -v -suite=Consul .

If you want to run a specific test, you'll need to regex-escape parts of the test's name so that the test runner doesn't skip over the framework struct method names that appear in the full test name:

go test -v . -run 'TestE2E/Consul/\*consul\.ScriptChecksE2ETest/TestGroup'
                              ^       ^             ^               ^
                              |       |             |               |
                          Component   |             |           Test func
                                      |             |
                                  Go Package      Struct

I Want To...

...SSH Into One Of The Test Machines

You can use the Terraform output to find the IP address. The keys will be in the ./terraform/keys/ directory.

ssh -i keys/nomad-e2e-*.pem ubuntu@${EC2_IP_ADDR}

Run terraform output for IP addresses and details.

...Deploy a Cluster of Mixed Nomad Versions

The variables.tf file describes the nomad_version and nomad_local_binary variables, which cover most circumstances. But if you want to deploy mixed Nomad versions, you can provide a list of versions in your terraform.tfvars file.

For example, if you want to provision 3 servers all using Nomad 0.12.1, and 2 Linux clients using 0.12.1 and 0.12.2, you can use the following variables:

# will be used for servers
nomad_version = "0.12.1"

# will override the nomad_version for Linux clients
nomad_version_client_linux = [
    "0.12.1",
    "0.12.2"
]

...Deploy Custom Configuration Files

Set the profile field to "custom" and put the configuration files in ./terraform/config/custom/ as described in the README.
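
A minimal sketch of the steps, using hypothetical file names (my-server.hcl, my-client.hcl) for the configuration you want to ship; see the README for the expected layout:

# file names below are illustrative
mkdir -p ./terraform/config/custom
cp my-server.hcl my-client.hcl ./terraform/config/custom/
# then set the profile in ./terraform/terraform.tfvars:
#   profile = "custom"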

...Deploy More Than 4 Linux Clients

Use the "custom" profile as described above.

...Change the Nomad Version After Provisioning

You can update the nomad_version variable, or simply rebuild the binary you have at the nomad_local_binary path so that Terraform picks up the changes. Then run terraform plan/terraform apply again. This will update Nomad in place, making the minimum number of changes necessary.
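
For example, to move the cluster to a different release (the version number here is only an illustration):

# in ./e2e/terraform/terraform.tfvars:
#   nomad_version = "0.12.2"
terraform plan
terraform apply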