open-nomad/e2e/csi/input/plugin-aws-efs-nodes.nomad
Tim Gross cd1c6173f4 csi: e2e tests for EBS and EFS plugins (#7343)
This changeset provides two basic e2e tests for CSI plugins targeting
common AWS use cases.

The EBS test launches the EBS plugin (controller + nodes) and registers
an EBS volume as a Nomad CSI volume. We deploy a job that writes to
the volume, stop that job, and reuse the volume for another job which
should be able to read the data written by the first job.
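For reference, the controller half of the EBS plugin runs as a separate task whose csi_plugin stanza declares itself as a controller rather than a node; a minimal sketch follows, with the id and mount_dir values chosen for illustration rather than copied from the actual EBS jobspec.

  csi_plugin {
    # hypothetical plugin ID; volume registrations reference this
    # same value via their plugin_id field
    id        = "aws-ebs0"
    type      = "controller"
    mount_dir = "/csi"
  }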

The EFS test launches the EFS plugin (nodes-only) and registers an EFS
volume as a Nomad CSI volume. We deploy a job that writes to the
volume, stop that job, and reuse the volume for another job which
should be able to read the data written by the first job.
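Registering the volume itself happens out of band with `nomad volume register` and an HCL volume spec; a minimal sketch follows, where the volume ID, name, and EFS filesystem ID are placeholder assumptions and only the plugin_id is taken from the jobspec below.

  # volume.hcl, registered with: nomad volume register volume.hcl
  id          = "efs-vol0"      # hypothetical Nomad volume ID
  name        = "efs-vol0"
  type        = "csi"
  external_id = "fs-0123abcd"   # hypothetical EFS filesystem ID
  plugin_id   = "aws-efs0"      # matches the csi_plugin id in this jobspec
  access_mode     = "single-node-writer"
  attachment_mode = "file-system"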

The writer jobs mount the CSI volume at a location within the alloc
dir.
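A sketch of how such a writer job might claim and mount the registered volume, assuming the hypothetical volume ID from the registration sketch above and a busybox workload standing in for the real test task:

  group "writer" {
    # claim the registered CSI volume at the group level
    volume "test" {
      type   = "csi"
      source = "efs-vol0"   # hypothetical volume ID
    }

    task "task" {
      driver = "docker"

      config {
        image   = "busybox:1"
        command = "/bin/sh"
        # /local is the task dir as the Docker driver exposes it inside
        # the container, so this writes into the mounted volume
        args = ["-c", "touch /local/test/foo; sleep 3600"]
      }

      # mount the claimed volume under the task dir, which lives
      # inside the alloc dir on the host
      volume_mount {
        volume      = "test"
        destination = "/local/test"
      }
    }
  }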
2020-03-23 13:59:18 -04:00


# jobspec for running CSI plugin for AWS EFS, derived from
# the kubernetes manifests found at
# https://github.com/kubernetes-sigs/aws-efs-csi-driver/tree/master/deploy/kubernetes
job "plugin-aws-efs-nodes" {
datacenters = ["dc1"]
# you can run node plugins as service jobs as well, but this ensures
# that all nodes in the DC have a copy.
type = "system"
group "nodes" {
task "plugin" {
driver = "docker"
config {
image = "amazon/aws-efs-csi-driver:latest"
# note: the EFS driver doesn't seem to respect the --endpoint
# flag and always sets up the listener at '/tmp/csi.sock'
args = [
"node",
"--endpoint=unix://tmp/csi.sock",
"--logtostderr",
"--v=5",
]
privileged = true
}
csi_plugin {
id = "aws-efs0"
type = "node"
mount_dir = "/tmp"
}
# note: there's no upstream guidance on resource usage so
# this is a best guess until we profile it in heavy use
resources {
cpu = 500
memory = 256
}
}
}
}