* Honor value for distinct_hosts constraint
* Add test for feasibility checking for `false`
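A rough illustration of the behavior now being honored (group name and placement are assumptions for illustration only):
```hcl
job "example" {
  group "cache" {
    # value = "false" disables the distinct_hosts behavior, allowing
    # multiple allocations of this group to land on the same host.
    constraint {
      operator = "distinct_hosts"
      value    = "false"
    }
  }
}
```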
---------
Co-authored-by: Michael Schurter <mschurter@hashicorp.com>
* api: enable support for setting original source alongside job
This PR adds support for setting job source material along with
the registration of a job.
This includes a new HTTP endpoint and a new RPC endpoint for
making queries for the original source of a job. The
HTTP endpoint is /v1/job/<id>/submission?version=<version> and
the RPC method is Job.GetJobSubmission.
The job source (submitting it is always optional) is stored in the
job_submission memdb table, separately from the actual job. This way we
do not incur the overhead of reading the large string field throughout
normal job operations.
The server config now includes job_max_source_size for configuring
the maximum size the job source may be, before the server simply
drops the source material. This should help prevent Bad Things from
happening when huge jobs are submitted. If the value is set to 0,
all job source material will be dropped.
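A sketch of the corresponding agent configuration, assuming the attribute lives in the server block and accepts a human-readable size (the value shown is only an example):
```hcl
server {
  enabled = true

  # Job source material larger than this is dropped rather than stored;
  # a value of 0 drops all source material.
  job_max_source_size = "1M"
}
```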
* api: avoid writing var content to disk for parsing
* api: move submission validation into RPC layer
* api: return an error if updating a job submission without namespace or job id
* api: be exact about the job index we associate a submission with (modify)
* api: reword api docs scheduling
* api: prune all but the last 6 job submissions
* api: protect against nil job submission in job validation
* api: set max job source size in test server
* api: fixups from pr
When a Nomad agent starts and loads jobs that already existed in the
cluster, the default template uid and gid were being set to 0, since this
is the zero value for int. This caused these jobs to fail in
environments where it was not possible to use 0, such as in Windows
clients.
In order to differentiate between an explicit 0 and a template where
these properties were not set we need to use a pointer.
UID/GID 0 is usually reserved for the root user/group. While Nomad
clients are expected to run as root, that may not always be the case.
Setting these values to -1 when they are not defined falls back to the
previous behaviour of not attempting to set file ownership, using whatever
UID/GID the Nomad agent is running as. It also keeps backwards
compatibility, which is especially important for platforms where this
feature is not supported, like Windows.
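For illustration (paths, template contents, and IDs are invented), a template that opts into explicit ownership next to one that keeps the default:
```hcl
task "app" {
  template {
    destination = "local/app.conf"
    data        = "key = value"

    # Explicitly set ownership of the rendered file.
    uid = 1000
    gid = 1000
  }

  template {
    destination = "local/other.conf"
    data        = "key = value"

    # uid/gid omitted: they now default to -1, meaning no ownership
    # change, preserving the previous behaviour (and Windows support).
  }
}
```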
Add new namespace ACL requirement for the /v1/jobs/parse endpoint and
return early if HCLv2 parsing fails.
The endpoint now requires the new `parse-job` ACL capability or
`submit-job`.
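A minimal ACL policy sketch granting the new capability (the namespace name is just an example):
```hcl
namespace "default" {
  # parse-job allows calling /v1/jobs/parse; submit-job also satisfies
  # the new requirement.
  capabilities = ["parse-job"]
}
```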
This PR exposes the following existing `consul-template` configuration options to Nomad jobspec authors in the `{job.group.task.template}` stanza.
- `wait`
It also exposes the following `consul-template` configuration to Nomad operators in the `{client.template}` stanza.
- `max_stale`
- `block_query_wait`
- `consul_retry`
- `vault_retry`
- `wait`
Finally, it adds the following new Nomad-specific configuration to the `{client.template}` stanza that allows operators to set bounds on what jobspec authors configure; a sketch of both the `wait` and `wait_bounds` settings follows the list.
- `wait_bounds`
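A hedged sketch of the two new knobs (all durations are illustrative):
```hcl
# Jobspec side: ask consul-template to wait for stability before rendering.
template {
  destination = "local/app.conf"
  data        = "key = value"

  wait {
    min = "5s"
    max = "10s"
  }
}
```
```hcl
# Client agent side: operators bound whatever jobspec authors configure.
client {
  template {
    wait_bounds {
      min = "1s"
      max = "60s"
    }
  }
}
```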
Co-authored-by: Tim Gross <tgross@hashicorp.com>
Co-authored-by: Michael Schurter <mschurter@hashicorp.com>
This fixes a regression in #10326, to handle unquoted unknown variables.
The HCL job may contain unquoted undefined variable references without ${...} wrapping, e.g. value = meta.node_class. In 1.0.4, this got parsed as value = "${meta.node_class}".
This code performs a scan to find the relevant `${` and `}`, and only tries to find the closest ones, with whitespace as the only separator.
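A hypothetical snippet of the affected input (the key names are invented):
```hcl
env {
  # Unquoted reference to an undefined variable; in 1.0.4 this was
  # rewritten to the quoted interpolation form shown below it.
  NODE_CLASS   = meta.node_class
  NODE_CLASS_Q = "${meta.node_class}"
}
```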
With the updated undefined variable code, we attempt to pick the text of
`${....}` verbatim from the hcl body. Previously, we'd attempt to
regenerate the string from the AST and pray it matches input; the
generation is lossy, as the same AST can represent multiple variations
(e.g. `${v.0}` and `${v[0]}` have the same HCLv2 AST). In this change,
we attempt to go back to the hcl2 source and find the string snippet
corresponding to the variable reference.
Variables that are unset return the correct diagnostic but throw a panic when
we later parse the job body. Return early if there are any variable parsing
errors instead of continuing in a potentially invalid state.
Allow expressing `meta` and `env` blocks as map attributes as well.
`env` and `meta` should support arbitrary keys and values, yet hcl2
restricts the keys to valid identifiers. For example, block attribute
identifiers may not contain dots, `.`, which are frequently used in meta
fields, and sometimes in environment variable fields.
This change attempts to parse `env`/`meta` both as an attribute and as a
block.
This additionally allows better expressivity for env/meta blocks, using
functions. For example, one can reuse a set of environment variables for
multiple tasks, using a local common_envs value:
```hcl
env = merge(local.common_envs, { "more_env_key" = "..." })
```
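As a further hedged illustration of the map form (the key and value are invented), dotted keys that would be rejected as block attribute identifiers can be written as:
```hcl
meta = {
  # Dots are not valid in block attribute identifiers, but are fine
  # as keys of a map attribute.
  "rate.limit" = "100"
}
```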
Previously, if decoding the job, tasks, or vault portion of the config
failed, we would not return an error; it was silently ignored.
This also includes a little refactor to reduce some duplication.
The HCL2 parser needs to apply special parsing tweaks so it can parse
the task config the same way as HCL1. In particular, it needs to
reinterpret `map[string]interface{}` fields and blocks that appear when
attributes are expected.
This commit restricts the special casing to the Job fields, and ignores
the `variables` and `locals` blocks.
state store: call-out to generic update of job recommendations from job update method
recommendations API work, and http endpoint errors for OSS
support for scaling policies in the task block of the job spec
add query filters for ScalingPolicy list endpoint
command: nomad scaling policy list: added -job and -type
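A rough sketch of a task-level scaling policy (resource values and bounds are assumptions):
```hcl
task "app" {
  resources {
    cpu    = 100
    memory = 128
  }

  # Vertical scaling policy attached directly to the task.
  scaling "cpu" {
    enabled = true
    min     = 100
    max     = 500

    policy {
      # autoscaler-specific configuration goes here
    }
  }
}
```
The new flags on the list command can then narrow the output, for example `nomad scaling policy list -job example`, optionally combined with `-type` (the job name here is illustrative).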