The job factory now accepts a resourceSpecs array, a shorthand notation for
memory, cpu, disk, and iops requirements.
These specs are passed down to task groups. The task group factory splits the
resource requirements nearly evenly (within a variance threshold) across all
expected tasks.
Allocations then construct task-resource objects based on the resources
from the matching task.
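For illustration only, here is a minimal Go sketch of that near-even split;
splitNearEvenly and its variance handling are hypothetical helpers, not the
factory code itself:

```go
package main

import (
	"fmt"
	"math/rand"
)

// splitNearEvenly divides a total resource amount across n tasks, jittering
// each share by up to `variance` (a fraction such as 0.1) while keeping the
// overall sum equal to total. The last task absorbs any rounding remainder.
func splitNearEvenly(total, n int, variance float64) []int {
	shares := make([]int, n)
	remaining := total
	for i := 0; i < n-1; i++ {
		base := remaining / (n - i)
		jitter := int(float64(base) * variance * (rand.Float64()*2 - 1))
		shares[i] = base + jitter
		remaining -= shares[i]
	}
	shares[n-1] = remaining
	return shares
}

func main() {
	// e.g. split 1024 MB of memory across 3 expected tasks
	fmt.Println(splitNearEvenly(1024, 3, 0.1))
}
```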
This changeset stages work for upcoming E2E provisioning improvements. It
splits the existing shared configuration directory into three profiles:
* "full-cluster": the set of configurations currently in use.
* "dev-cluster": a simplified set of mostly existing configurations that
  weren't in use.
* "custom": an empty profile for developers to keep non-standard
  configurations during complex feature development.
The tooling to switch between profiles will be in a later changeset.
Also drops some unused configuration knobs from the provisioning scripts to
make the next stage of work easier.
Our provisioning process for E2E doesn't require the `depends_on` field to be
set for client instances, so dropping it allows all instances to be started in
parallel.
We don't use the extra EBS volumes (they aren't even mounted), so remove them
to reduce costs.
The `-recursor` flag in the Consul service unit files is specific to a given
cloud, but we already have cloud-specific configuration files. Consolidate all
the cloud-specific items into those configuration files.
As we add new Linux targets for E2E, the existing setup.sh script will be used
only for Ubuntu. Rather than having the service and config files echoed from
the script, move them into files we upload so they can be reused.
Includes some general noise reduction in the setup.sh script and removal of
unused bits.
This eases adoption of the jobspec package by other projects (e.g. the
Terraform Nomad provider, Levant), either by consuming it directly as a library
(hopefully without `go mod` pulling in the rest of Nomad) or by copying the
package without modification.
Ideally, this package would be published as an independent module. We aren't
ready for that, considering we'll be switching to HCLv2 "soon", but either way
this seems like a reasonable intermediate step if we choose to do so.
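As a sketch of library-style consumption, assuming the package's ParseFile
entry point (which returns an *api.Job in recent Nomad versions); the file path
below is a placeholder:

```go
package main

import (
	"fmt"
	"log"

	"github.com/hashicorp/nomad/jobspec"
)

func main() {
	// Parse an HCL jobspec from disk into the api.Job representation.
	job, err := jobspec.ParseFile("example.nomad")
	if err != nil {
		log.Fatalf("failed to parse jobspec: %v", err)
	}
	if job.Name != nil {
		fmt.Println("parsed job:", *job.Name)
	}
}
```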
* Removed deprecated Spark pieces.
* Bumped HashiCorp stack versions to current as of the commit date.
* Bumped versions of HashiCorp stack tools.
* Bumped versions, added VAULT_ADDR in GCP, and removed references to Spark in
  the shared README.
This commit wraps memdb.DB with a changeTrackerDB, a thin wrapper that enables
go-memdb's TrackChanges on all write transactions. When a transaction is
committed, the changes are sent to an eventPublisher, which will be used to
create and emit change events.
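A rough sketch of the shape of such a wrapper (not the actual implementation);
the eventPublisher interface here is a hypothetical placeholder:

```go
package state

import (
	memdb "github.com/hashicorp/go-memdb"
)

// eventPublisher is a hypothetical placeholder for the component that turns
// tracked memdb changes into change events and emits them to subscribers.
type eventPublisher interface {
	Publish(index uint64, changes memdb.Changes)
}

// changeTrackerDB is a thin wrapper around memdb.MemDB that enables
// go-memdb's change tracking on every write transaction.
type changeTrackerDB struct {
	db        *memdb.MemDB
	publisher eventPublisher
}

// WriteTxn returns a write transaction with TrackChanges enabled. The index
// is carried along so the resulting events can be tagged with it.
func (c *changeTrackerDB) WriteTxn(index uint64) *txn {
	t := c.db.Txn(true)
	t.TrackChanges()
	return &txn{Txn: t, index: index, publisher: c.publisher}
}

// txn wraps memdb.Txn so Commit can hand the accumulated changes to the
// publisher after the underlying transaction commits.
type txn struct {
	*memdb.Txn
	index     uint64
	publisher eventPublisher
}

func (t *txn) Commit() {
	changes := t.Txn.Changes() // collect tracked changes before committing
	t.Txn.Commit()
	t.publisher.Publish(t.index, changes)
}
```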
Fixes snapshot handling to actually use a snapshot.