This PR adds the ability to set HTTP headers when downloading
an artifact from an `http` or `https` resource.
The implementation in `go-getter` is such that a new `HTTPGetter`
must be created for each artifact that sets headers (as opposed
to conveniently setting headers per-request). This PR maintains
the memoization of the default Getter objects, creating new ones
only for artifacts where headers are set.
Closes #9306
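A minimal sketch of what this could look like from the jobspec side, assuming the headers are exposed as a `headers` block on the `artifact` stanza (header names and values here are illustrative):

```hcl
artifact {
  source      = "https://example.internal/files/app.tar.gz"
  destination = "local/"

  # Each key/value pair is sent as an HTTP header on the download request.
  headers {
    Authorization = "Bearer s3cr3t-token"
  }
}
```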
Update the default value for `client.bridge_network_subnet` in docs
to match the new value from 99742f2665. Was `172.26.66.0/23`, is
now `172.26.64.0/20`.
Fixes #9316
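For reference, a sketch of the client configuration this default applies to; setting it explicitly is only needed when the default conflicts with existing networks on the host:

```hcl
client {
  enabled = true

  # New documented default; override if it overlaps other networks.
  bridge_network_subnet = "172.26.64.0/20"
}
```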
The default behavior for `docker.volumes.enabled` is intended to be `false`,
but the HCL schema defaults to `true` if the value is unset. Set the default
literal value to `false`.
Additionally, Docker driver mounts of type "volume" (but not "bind") are not
being properly sandboxed with that setting. Disable Docker mounts with type
"volume" entirely whenever the `docker.volumes.enabled` flag is set to
false. Note this is unrelated to the `volume_mount` feature, which is
constrained to preconfigured host volumes or whatever is mounted by a CSI
plugin.
This changeset includes updates to unit tests that should have been failing
under the documented behavior but were not.
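For context, a sketch of the plugin configuration this flag lives in; with `enabled = false`, both task-config `volumes` and Docker mounts with type `"volume"` are refused:

```hcl
plugin "docker" {
  config {
    volumes {
      # Intended default; when false, Docker volume mounts
      # (including mounts with type = "volume") are disabled.
      enabled = false
    }
  }
}
```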
I believe there’s a typo where “workloads” was changed to “jobs” (or possibly the other way around) but the original word wasn’t removed, leaving an orphaned one-word sentence.
We recently added documentation disambiguating the terminology of the
allocation/task working directories. This changeset adds an internals document
that describes in more detail exactly what goes into the allocation working
directory, how this interacts with the filesystem isolation provided by task
drivers, and how this interacts with features like `artifact` and `template`.
Co-authored-by: Charlie Voiselle <464492+angrycub@users.noreply.github.com>
This is a first draft of the HCLv2 docs - I added the details under the hcl2 doc with some minimal info highlighting what HCL2 introduces.
As a longer-term strategy, we'll want to mimic the Packer HCL docs structure that details all the blocks and allowed expressions/functions in greater detail. However, given that the exact functions and templating syntax are still somewhat in flux, I opted to push that to another time.
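A rough sketch of the kind of HCL2 syntax the new doc highlights - input variables, expressions, and function calls - with the caveat that the exact set of supported functions is still settling:

```hcl
variable "instances" {
  type    = number
  default = 3
}

job "example" {
  datacenters = ["dc1"]

  group "cache" {
    # HCL2 allows expressions and variable references in attribute values.
    count = var.instances * 2
  }
}
```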
Docker Hub is going to rate limit unauthenticated pulls.
Use our HashiCorp internal mirror for builds run through CircleCI.
Co-authored-by: Mahmood Ali <mahmood@hashicorp.com>
The `template.allow_host_source` configuration was not operable, leading to
the recent security patch in 0.12.6. We forgot to update this piece of the
documentation to refer to the correct configuration value.
Previously, tasks and field replacements did not have access to the
unique ID of their job or its parent. This change adds that information as
new environment variables.
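As a sketch, the new values can be consumed like any other runtime environment variable; the exact names below are assumptions that follow the existing NOMAD_* naming convention:

```hcl
task "example" {
  template {
    destination = "local/job-info.env"

    # NOMAD_JOB_ID / NOMAD_JOB_PARENT_ID are assumed variable names,
    # following the existing NOMAD_* convention.
    data = <<-EOT
      JOB_ID={{ env "NOMAD_JOB_ID" }}
      PARENT_JOB_ID={{ env "NOMAD_JOB_PARENT_ID" }}
    EOT
  }
}
```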
* remove event durability
temporarily removing go-memdb event durability until a new strategy is developed on how to best handle increased durability needs
* drop events table schema and state store methods
* fix neweventbuffer invocations
The terms task directory and allocation directory are used throughout the
documentation but these directories are not the same as the `NOMAD_TASK_DIR`
and `NOMAD_ALLOC_DIR` locations. This is confusing when trying to use the
`template` and `artifact` stanzas, especially when trying to use a destination
outside the Nomad-mounted directories for Docker and similar drivers.
This changeset introduces "allocation working directory" to mean the location
on disk where the various directories and artifacts are staged, and "task
working directory" for the task. Clarify how specific task drivers interact
with the task working directory.
Co-authored-by: Charlie Voiselle <464492+angrycub@users.noreply.github.com>
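To make the distinction concrete, a sketch of a jobspec fragment that stages files in both locations; paths are illustrative, and the on-disk location of the task working directory depends on the task driver's filesystem isolation:

```hcl
task "web" {
  # The destination is relative to the task working directory on the host,
  # which is not always the same path as NOMAD_TASK_DIR inside the task.
  artifact {
    source      = "https://example.internal/files/app.tar.gz"
    destination = "local/app"
  }

  # Rendered into the shared allocation directory visible to all tasks.
  template {
    data        = "port = 8080"
    destination = "alloc/web.conf"
  }
}
```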
* consul: advertise cni and multi host interface addresses
* structs: add service/check address_mode validation
* ar/groupservices: fetch networkstatus at hook runtime
* ar/groupservice: nil check network status getter before calling
* consul: comment network status can be nil
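For example, a sketch of a group service advertising the allocation's bridge/CNI address rather than the host address; the `"alloc"` value is illustrative of the address modes being validated above:

```hcl
group "api" {
  network {
    mode = "bridge"
    port "http" {
      to = 8080
    }
  }

  service {
    name = "api"
    port = "http"

    # Advertise the allocation's network address to Consul instead of the
    # host interface address.
    address_mode = "alloc"
  }
}
```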
As newer versions of Consul are released, the minimum version of Envoy
it supports as a sidecar proxy also gets bumped. Starting with the upcoming
Consul v1.9.X series, Envoy v1.11.X will no longer be supported. Current
versions of Nomad hardcode a version of Envoy v1.11.2 to be used as the
default implementation of Connect sidecar proxy.
This PR introduces a change such that each Nomad Client will query its
local Consul for a list of Envoy proxies that it supports (https://github.com/hashicorp/consul/pull/8545)
and then launch the Connect sidecar proxy task using the latest supported version
of Envoy. If the `SupportedProxies` API component is not available from
Consul, Nomad will fall back to the old version of Envoy supported by older
versions of Consul.
Setting the meta configuration option `meta.connect.sidecar_image` or
setting the `connect.sidecar_task` stanza will take precedence as is
the current behavior for sidecar proxies.
Setting the meta configuration option `meta.connect.gateway_image`
will take precedence as is the current behavior for connect gateways.
`meta.connect.sidecar_image` and `meta.connect.gateway_image` may make
use of the special `${NOMAD_envoy_version}` variable interpolation, which
resolves to the newest version of Envoy supported by the Consul agent.
Addresses #8585 #7665
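A sketch of how an operator could pin or template the sidecar image with the new interpolation, assuming the stock Envoy image name:

```hcl
client {
  meta {
    # ${NOMAD_envoy_version} resolves to the newest Envoy version the
    # local Consul agent reports as supported.
    "connect.sidecar_image" = "envoyproxy/envoy:v${NOMAD_envoy_version}"
  }
}
```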
When we try to prefix match the `nomad volume detach` node ID argument, the
node may have been already GC'd. The volume unpublish workflow gracefully
handles this case so that we can free the claim. So make a best effort to find
a node ID among the volume's claimed allocations, or otherwise just use the
node ID we've been given by the user as-is.
CSI plugins with the same plugin ID and type (controller, node, monolith) will
collide on a host, both in the communication socket and in the dynamic plugin
registry. Until this can be fixed, leave notice to operators in the
documentation.
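Until then, operators should ensure plugin jobs that can land on the same host use distinct plugin IDs; a sketch of the relevant stanza:

```hcl
task "plugin" {
  driver = "docker"

  csi_plugin {
    # Must not collide with another plugin of the same type on the same host.
    id        = "ebs-prod"
    type      = "node"
    mount_dir = "/csi"
  }
}
```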
Previously, Nomad was using a hand-made lookup table for looking
up EC2 CPU performance characteristics (core count + speed = ticks).
This data was incomplete and incorrect depending on region. The AWS
API has the correct data but requires API keys to use (i.e. should not
be queried directly from Nomad).
This change introduces a lookup table generated by a small command line
tool in Nomad's tools module which uses the Amazon AWS API.
Running the tool requires AWS_* environment variables to be set.
$ # in nomad/tools/cpuinfo
$ go run .
Going forward, Nomad can incorporate regeneration of the lookup table
somewhere in the CI pipeline so that we remain up-to-date on the latest
offerings from EC2.
Fixes #7830
The CSI specification for `ValidateVolumeCapability` says that we shall
"reconcile successful capability-validation responses by comparing the
validated capabilities with those that it had originally requested" but leaves
the details of that reconciliation unspecified. This API is not implemented in
Kubernetes, so controller plugins don't have a real-world implementation to
verify their behavior against.
We have found that CSI plugins in the wild may return "successful" but
incomplete `VolumeCapability` responses, so we can't require that all
capabilities we expect have been validated, only that the ones that have been
validated match. This appears to violate the CSI specification but until
that's been resolved in upstream we have to loosen our validation
requirements. The tradeoff is that we're more likely to have runtime errors
during `NodeStageVolume` instead of at the time of volume registration.
The CSI specification allows only the `file-system` attachment mode to have
mount options. The `block-device` mode is left "intentionally empty, for now"
in the protocol. We should be validating against this problem, but our
documentation also had it backwards.
Also adds missing mount_options on group volume.
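A sketch of a volume registration that is allowed to carry mount options under the corrected rule; field values are illustrative:

```hcl
id              = "prod-db"
name            = "prod-db"
type            = "csi"
plugin_id       = "ebs-prod"
external_id     = "vol-0123456789abcdef0"
access_mode     = "single-node-writer"
attachment_mode = "file-system" # only file-system volumes may set mount_options

mount_options {
  fs_type     = "ext4"
  mount_flags = ["noatime"]
}
```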