The terms task directory and allocation directory are used throughout the
documentation, but these directories are not the same as the `NOMAD_TASK_DIR`
and `NOMAD_ALLOC_DIR` locations. This is confusing when trying to use the
`template` and `artifact` stanzas, especially when trying to use a destination
outside the Nomad-mounted directories for Docker and similar drivers.
This changeset introduces "allocation working directory" to mean the location
on disk where the various directories and artifacts are staged, and "task
working directory" for the task-specific subdirectory within it. It also
clarifies how specific task drivers interact with the task working directory.
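For illustration, a minimal sketch of a task whose `artifact` and `template`
destinations both land in the task working directory (the URL, image, and file
names are placeholders):

```hcl
task "app" {
  driver = "docker"

  # Relative destinations resolve inside the task working directory,
  # which the docker driver exposes to the container as NOMAD_TASK_DIR.
  artifact {
    source      = "https://example.com/app.tar.gz" # placeholder URL
    destination = "local/"
  }

  template {
    data        = "port = 8080"
    destination = "local/app.conf"
  }
}
```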
* consul: advertise cni and multi host interface addresses
* structs: add service/check address_mode validation
* ar/groupservices: fetch networkstatus at hook runtime
* ar/groupservice: nil check network status getter before calling
* consul: comment network status can be nil
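A hedged sketch of where the validated `address_mode` fields sit in a job
spec (names and values are illustrative):

```hcl
service {
  name         = "metrics"
  port         = "9100"
  address_mode = "driver" # advertise the driver-assigned address

  check {
    type         = "http"
    path         = "/metrics"
    interval     = "10s"
    timeout      = "2s"
    address_mode = "driver" # now validated alongside the service's mode
  }
}
```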
As newer versions of Consul are released, the minimum version of Envoy
it supports as a sidecar proxy also gets bumped. Starting with the upcoming
Consul v1.9.X series, Envoy v1.11.X will no longer be supported. Current
versions of Nomad hardcode Envoy v1.11.2 as the default implementation of
the Connect sidecar proxy.
This PR introduces a change such that each Nomad Client will query its
local Consul for a list of Envoy proxies that it supports (https://github.com/hashicorp/consul/pull/8545)
and then launch the Connect sidecar proxy task using the latest supported version
of Envoy. If the `SupportedProxies` API component is not available from
Consul, Nomad will fall back to the old version of Envoy supported by older
versions of Consul.
Setting the meta configuration option `meta.connect.sidecar_image` or
setting the `connect.sidecar_task` stanza will take precedence, as is
the current behavior for sidecar proxies.
Likewise, setting the meta configuration option `meta.connect.gateway_image`
will take precedence, as is the current behavior for Connect gateways.
`meta.connect.sidecar_image` and `meta.connect.gateway_image` may make
use of the special `${NOMAD_envoy_version}` variable interpolation, which
resolves to the newest version of Envoy supported by the Consul agent.
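A minimal client configuration sketch using this interpolation (the image
names mirror the documented defaults and are illustrative):

```hcl
client {
  meta {
    # resolves to the newest Envoy version supported by the local Consul agent
    "connect.sidecar_image" = "envoyproxy/envoy:v${NOMAD_envoy_version}"
    "connect.gateway_image" = "envoyproxy/envoy:v${NOMAD_envoy_version}"
  }
}
```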
Addresses #8585, #7665
When we try to prefix match the `nomad volume detach` node ID argument, the
node may have been already GC'd. The volume unpublish workflow gracefully
handles this case so that we can free the claim. So make a best effort to find
a node ID among the volume's claimed allocations, or otherwise just use the
node ID we've been given by the user as-is.
CSI plugins with the same plugin ID and type (controller, node, monolith) will
collide on a host, both in the communication socket and in the dynamic plugin
registry. Until this can be fixed, leave notice to operators in the
documentation.
Previously, Nomad was using a hand-made lookup table for looking
up EC2 CPU performance characteristics (core count + speed = ticks).
This data was incomplete and incorrect depending on region. The AWS
API has the correct data but requires API keys to use (i.e. should not
be queried directly from Nomad).
This change introduces a lookup table generated by a small command line
tool in Nomad's tools module which uses the Amazon AWS API.
Running the tool requires AWS_* environment variables to be set:

```
$ # in nomad/tools/cpuinfo
$ go run .
```
Going forward, Nomad can incorporate regeneration of the lookup table
somewhere in the CI pipeline so that we remain up-to-date on the latest
offerings from EC2.
Fixes #7830
The CSI specification for `ValidateVolumeCapability` says that we shall
"reconcile successful capability-validation responses by comparing the
validated capabilities with those that it had originally requested" but leaves
the details of that reconciliation unspecified. This API is not implemented in
Kubernetes, so controller plugins don't have a real-world implementation to
verify their behavior against.
We have found that CSI plugins in the wild may return "successful" but
incomplete `VolumeCapability` responses, so we can't require that all
capabilities we expect have been validated, only that the ones that have been
validated match. This appears to violate the CSI specification but until
that's been resolved in upstream we have to loosen our validation
requirements. The tradeoff is that we're more likely to have runtime errors
during `NodeStageVolume` instead of at the time of volume registration.
The CSI specification allows only the `file-system` attachment mode to have
mount options; the `block-device` mode is left "intentionally empty, for now"
in the protocol. We should be validating against this constraint, but our
documentation also had it backwards.
Also adds the missing `mount_options` field on group volumes.
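A hedged sketch of a volume registration file showing where these fields
live (the plugin ID and mount flags are illustrative):

```hcl
id              = "data-vol"
name            = "data-vol"
type            = "csi"
plugin_id       = "aws-ebs0"           # illustrative plugin ID
access_mode     = "single-node-writer"
attachment_mode = "file-system"        # "block-device" cannot carry mount options

mount_options {
  fs_type     = "ext4"
  mount_flags = ["noatime"]
}
```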
This PR adds a version specific upgrade note about the docker stop
signal behavior. Also adds test for the signal logic in docker driver.
Closes #8932, which was fixed in #8933
`nomad volume detach volume-id 00000000-0000-0000-0000-000000000000` produces an API call containing the UUID as part of the query string. This is the only way the API accepts the request correctly; if you pass it in the payload you get `detach requires node ID`.
When defining a script-check in a group-level service, Nomad needs to
know which task is associated with the check so that it can use the
correct task driver to execute the check.
This PR fixes two bugs:
1) validate service.task or service.check.task is configured
2) make service.check.task inherit service.task if it is itself unset
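The sketch below illustrates both rules (names and the check command are
illustrative):

```hcl
group "cache" {
  # hypothetical single task named "redis" elided for brevity

  service {
    name = "cache"
    task = "redis" # rule 1: must be set on the service or on each check

    check {
      # rule 2: with check.task unset, it inherits service.task ("redis")
      type     = "script"
      command  = "/bin/sh"
      args     = ["-c", "redis-cli ping"]
      interval = "10s"
      timeout  = "2s"
    }
  }
}
```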
Fixes #8952
The `docker.config.infra_image` option would default to an amd64 container.
It is possible to reference the correct image for a platform using
the `runtime.GOARCH` variable, eliminating the need to explicitly set
`infra_image` on non-amd64 platforms.
Also upgrade to Google's pause container version 3.1 from 3.0, which
includes some enhancements around process management.
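If the GOARCH-derived default is still unsuitable, the image can be
overridden in the client's plugin configuration; a sketch with an
illustrative image name:

```hcl
plugin "docker" {
  config {
    # override only when the runtime.GOARCH-derived default is not wanted
    infra_image = "gcr.io/google_containers/pause-arm64:3.1"
  }
}
```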
Fixes #8926
This PR adds initial support for running Consul Connect Ingress Gateways (CIGs) in Nomad. These gateways are declared as part of a task group level service definition within the connect stanza.
```hcl
service {
  connect {
    gateway {
      proxy {
        // envoy proxy configuration
      }
      ingress {
        // ingress-gateway configuration entry
      }
    }
  }
}
```
A gateway can be run in `bridge` or `host` networking mode, with the caveat that host networking necessitates manually specifying the Envoy admin listener (which cannot be disabled) via the service port value.
Currently Envoy is the only supported gateway implementation in Consul, and Nomad only supports running Envoy as a gateway using the docker driver.
Aims to address #8294 and tangentially #8647
* docker: support group allocated ports
* docker: add new ports driver config to specify which group ports are mapped
* docker: update port mapping docs
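A sketch of the new `ports` config mapping a group-allocated port into a
docker task (image and names are illustrative):

```hcl
group "web" {
  network {
    mode = "bridge"
    port "http" {
      to = 8080
    }
  }

  task "server" {
    driver = "docker"

    config {
      image = "nginx:alpine" # illustrative image
      ports = ["http"]       # map the group-level "http" port
    }
  }
}
```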
The soundness guarantees of the CSI specification leave a little to be desired
in our ability to provide a 100% reliable automated solution for managing
volumes. This changeset provides a new command to bridge this gap by providing
the operator the ability to intervene.
The command doesn't take an allocation ID so that the operator doesn't have to
keep track of alloc IDs that may have been GC'd. Handle this case in the
unpublish RPC by sending the client RPC for all the terminal/nil allocs on the
selected node.
This change adds the ability to set the fields `success_before_passing` and
`failures_before_critical` on Consul service check definitions. This is a
feature added to Consul v1.7.0 and later.
https://www.consul.io/docs/agent/checks#success-failures-before-passing-critical
Nomad doesn't do much besides pass the fields through to Consul.
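A sketch of the pass-through fields on a Consul service check (values are
illustrative):

```hcl
service {
  name = "api"
  port = "http"

  check {
    type     = "http"
    path     = "/health"
    interval = "10s"
    timeout  = "2s"

    success_before_passing   = 3 # passes required before Consul marks passing
    failures_before_critical = 3 # failures required before Consul marks critical
  }
}
```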
Fixes #6913
* update vault integration docs
docs/integrations/vault-integration was a copy of the Learn guide. Remove that and move /docs/vault-integration to this location instead.
* revert accidental deletion
Co-authored-by: Charlie Voiselle <464492+angrycub@users.noreply.github.com>
In order to prevent staleness, changed driver links to point to releases page rather than a specific version.
Co-authored-by: Michael Schurter <mschurter@hashicorp.com>
Postrun hooks for allocation runners don't currently block the registration of
terminal health with the servers, which is what allows system jobs to be
drained. So draining nodes with jobs that claim CSI volumes requires the
`-ignore-system` flag to ensure that the postrun hook for service jobs gets a
chance to execute.
The Nomad binary size has been described inconsistently in different places
and is subject to change almost daily. We should therefore remove it to
avoid confusion and misrepresentation.
* adds in OSS components to support the enterprise multi-Vault namespace feature
* upgrade-specific doc on Vault multi-namespaces
* vault docs
* update test to reflect new error
Also fixed the same typo in a test. Fixing the typo fixes the link, but
the link was still broken when running the website locally due to the
trailing slash. It would have worked in prod thanks to redirects, but
using the canonical URL seems ideal.
Not sure if this was meant to imply adding more schedulers to Nomad is
easy, or that we plan on adding pluggable schedulers. Either way,
neither of those statements is really true unless you really stretch the
definitions of "easy" and "plan".
So remove this sentence as I can't imagine it does anything other than
confuse people.
Before docker, the only default for `kill_signal` was `SIGINT`. The
docker driver, however, defaults to `SIGTERM`, and we should document it
as such.
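A sketch of pinning the signal explicitly when the docker default is not
wanted:

```hcl
task "app" {
  driver = "docker"

  # without this the docker driver sends SIGTERM on stop,
  # while most other drivers default to SIGINT
  kill_signal = "SIGINT"
}
```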
Fixes #7140
This change fixes a syntax error in the autoscaling APM plugin
docs and updates the scaling stanza doc. The stanza wording
implied its use was only for external autoscalers, whereas it is
also used by the UI.
Before, the service definition for a Connect Native service would always
require setting the `service.task` parameter. Now, that parameter is
automatically inferred when there is only one task in the task group.
Fixes #8274
The `nomad volume deregister` command currently returns an error if the volume
has any claims, but in cases where the claims can't be dropped because of
plugin errors, providing a `-force` flag gives the operator an escape hatch.
If the volume has no allocations or if they are all terminal, this flag
deletes the volume from the state store, immediately and implicitly dropping
all claims without further CSI RPCs. Note that this will not also
unmount/detach the volume, which we'll make the responsibility of a separate
`nomad volume detach` command.
The suggested plugin configuration to re-enable Docker volumes was erroneously
using the singular `volume` instead of the correct `volumes`, making the
client fail to parse the configuration and causing it not to start.
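For reference, the corrected plugin configuration with the plural block name:

```hcl
plugin "docker" {
  config {
    volumes {
      enabled = true
    }
  }
}
```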
Adds a `-global` flag for stopping multiregion jobs in all regions at
once. Warn the user if they attempt to stop a multiregion job in a single
region.
Per Slack conversation on April 30 with Schmichael and Gale. Aligning navigation verbiage with other .io sites (Community instead of Resources). Page updates remove Gitter and Mailing List links to direct users to the Forum. Also changed some header content and positioning. If this all doesn't work based on full content of page, happy to brainstorm a better approach. Thank you for reviewing!
This PR adds the capability of running Connect Native Tasks on Nomad,
particularly when TLS and ACLs are enabled on Consul.
The `connect` stanza now includes a `native` parameter, which can be
set to the name of task that backs the Connect Native Consul service.
There is a new Client configuration parameter for the `consul` stanza
called `share_ssl`. Like `allow_unauthenticated`, the default value is
true, but it is recommended to disable it in production environments. When
enabled, the Nomad Client's Consul TLS information is shared with
Connect Native tasks through the normal Consul environment variables.
This does NOT include auth or token information.
If Consul ACLs are enabled, Service Identity Tokens are automatically
created and injected into the Connect Native task through the CONSUL_HTTP_TOKEN
environment variable.
Any of the automatically set environment variables can be overridden by
the Connect Native task using the `env` stanza.
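A sketch of the relevant client `consul` stanza (the address is
illustrative):

```hcl
consul {
  address               = "127.0.0.1:8501" # illustrative TLS-enabled agent
  ssl                   = true
  share_ssl             = true  # default; shares TLS info with native tasks
  allow_unauthenticated = false # recommended in production
}
```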
Fixes #6083
- Changed boilerplate intro copy to match messaging in approved 0.12 announcement copy launching next Monday
- Added Virtual Talks section with each of their Youtube links and year timestamps from this year
- Updated the Who Uses Nomad section to align with the ordering in the Nomad GitHub README
- Added new customer talks such as Cloudflare and yearly timestamps to each of them
- Removed outdated Community Tools and Integrations section
> If you do not run Nomad as root, make sure you add the Nomad user to the Docker group so Nomad can communicate with the Docker daemon.
Changing the username in the example from `vagrant` to `nomad`. Vagrant isn't mentioned anywhere in the document, so I assume this was a mistake.
- Guides now point to HashiCorp Learn, rather than old website
- Condensed the documentation & guides section for brevity
- Updated "Who Uses Nomad" page and section in README with new names collected from past 6 months
- Added yearly publication dates to each of the public talks
The tasklet passes the timeout for the script check into the task
driver's `Exec`, and it's up to the task driver to enforce that via a
golang `context.WithDeadline`. In practice, this deadline is started
before the task driver starts setting up the execution
environment (because we need it to do things like time out Docker API
calls).
Under even moderate load, the time it takes to set up the execution
context for the script check regularly exceeds a full second or
two. This can cause script checks to unexpectedly time out, or even never
execute if the context expires before the task driver ever gets a
chance to `execve`.
This changeset adds a notice to operators about setting script check
timeouts with plenty of padding and what to monitor for problems.
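A sketch of a service check fragment with a padded timeout (the command and
values are illustrative):

```hcl
check {
  type     = "script"
  command  = "/usr/local/bin/healthcheck" # hypothetical script
  interval = "30s"

  # pad generously: the deadline also covers the driver's exec setup
  # (e.g. Docker API calls), not just the script's own runtime
  timeout  = "10s"
}
```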