* Added Persistent Workload guide using Host Volumes
* Update website/source/guides/stateful-workloads/stateful-workloads.html.md
Co-Authored-By: Danielle <dani@hashicorp.com>
* fix client config and job spec formatting
* fix typo in description
* fix navigation for both stateful workloads guides
* show output from nomad node status to verify host volumes
* Add value prop info; info about HA
From feedback, added more information about the value proposition for
host volumes (h/t @rkettelerij), and corrected an orphaned bit left
over from the original guide that this one was created from.
* formatting paragraphs
* remove reference to consul 1.6-beta and update nomad agent command
* remove tech preview status and update limitations
* remove beta tag in navigation
* add screenshot of count dashboard
* update example summary and remove redis references
* capitalize Consul
* minor corrections
* hcl formatting
* demo is on localhost not host ip
* clarify consul on PATH
* mention variable interpolation limitation
Splitting the immutable and mutable components of the scriptCheck led
to a bug where the environment interpolation wasn't being incorporated
into the check's ID. This caused UpdateTTL to update a check ID that
Consul didn't have, because our Consul client creates the ID from the
structs.ServiceCheck each time we update.
Task group services don't have access to a task environment at
creation, so their checks get registered before the check can be
interpolated. Use the original check ID so they can be updated.
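The sketch below illustrates the failure mode in Go. It is not Nomad's actual code; the struct and helper names are hypothetical stand-ins. The point is that the ID must be derived from the interpolated check, because hashing the pre-interpolation fields yields an ID Consul never registered, so UpdateTTL has nothing to update.

```go
package main

import (
	"crypto/md5"
	"fmt"
)

// serviceCheck is an illustrative stand-in for the check fields that
// feed into a registered check's identity (not Nomad's real struct).
type serviceCheck struct {
	Name    string
	Type    string
	Command string
	Args    []string
}

// checkID derives a check ID by hashing the check's fields, mimicking
// a client that re-derives the ID from the check struct on every update.
func checkID(serviceID string, c serviceCheck) string {
	h := md5.New()
	fmt.Fprint(h, serviceID, c.Name, c.Type, c.Command)
	for _, a := range c.Args {
		fmt.Fprint(h, a)
	}
	return fmt.Sprintf("%x", h.Sum(nil))
}

func main() {
	// Hashing the raw (uninterpolated) check gives a different ID than
	// hashing the interpolated check that was actually registered.
	raw := serviceCheck{Name: "alive", Type: "script",
		Command: "/bin/check", Args: []string{"${attr.unique.network.ip-address}"}}
	interpolated := serviceCheck{Name: "alive", Type: "script",
		Command: "/bin/check", Args: []string{"10.0.0.1"}}
	fmt.Println(checkID("svc-web", raw) == checkID("svc-web", interpolated)) // false: IDs diverge
}
```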
* ar: refactor network bridge config to use go-cni lib
* ar: use eth as the iface prefix for bridged network namespaces
* vendor: update containerd/go-cni package
* ar: update network hook to use TODO contexts when calling configurator
* unnecessary conversion
Tests typically call join cluster directly rather than rely on Consul
discovery. Worse, Consul discovery seems to cause more leadership
transitions than tests expect when a server is shut down.
In Nomad prior to Consul Connect, all Consul checks work the same
except for Script checks. Because the Task being checked is running in
its own container namespaces, the check is executed by Nomad in the
Task's context. If the Script check passes, Nomad uses the TTL check
feature of Consul to update the check status. This means in order to
run a Script check, we need to know what Task to execute it in.
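A rough Go sketch of that flow follows. It is not Nomad's implementation: executing inside the task's namespaces is elided (exec.CommandContext stands in for it), and the check ID and command are placeholders. It only shows the pattern of running the script ourselves and reporting the result to a Consul TTL check via the official Consul API client.

```go
package main

import (
	"context"
	"os/exec"
	"time"

	consulapi "github.com/hashicorp/consul/api"
)

// runScriptCheck executes a script on behalf of a check and reports the
// result to the TTL check that was registered alongside the service.
func runScriptCheck(client *consulapi.Client, checkID, cmd string, args ...string) error {
	ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
	defer cancel()

	// In Nomad this would run in the task's context; here it is a plain exec.
	out, err := exec.CommandContext(ctx, cmd, args...).CombinedOutput()

	status := consulapi.HealthPassing
	if err != nil {
		status = consulapi.HealthCritical
	}
	return client.Agent().UpdateTTL(checkID, string(out), status)
}

func main() {
	client, err := consulapi.NewClient(consulapi.DefaultConfig())
	if err != nil {
		panic(err)
	}
	// Hypothetical check ID and script path for illustration only.
	_ = runScriptCheck(client, "service:web:1", "/bin/true")
}
```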
To support Consul Connect, we need Group Services, and these need to
be registered in Consul along with their checks. We could push the
Service down into the Task, but this doesn't work if someone wants to
associate a service with one task's ports while running its script
checks in another task in the allocation.
Because Nomad is handling the Script check and not Consul anyway,
this moves the script check handling into the task runner so that the
task runner can own the script check's configuration and
lifecycle. This will allow us to pass the group service check
configuration down into a task without associating the service itself
with the task.
For each task, we walk back through its task group to see if there
are script checks associated with that task. If so, we spin off
script check tasklets for them. The
group-level service and any restart behaviors it needs are entirely
encapsulated within the group service hook.
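As a concrete illustration of that walk, here is a small Go sketch. The types are deliberately simplified stand-ins, not Nomad's structs; it only shows filtering a group's services for script checks that name a particular task.

```go
package main

import "fmt"

// Illustrative types only; Nomad's real structs differ.
type check struct {
	Name string
	Type string
	Task string // which task in the group should run this script check
}

type service struct {
	Name   string
	Checks []check
}

type taskGroup struct {
	Services []*service
}

// scriptChecksForTask walks the group-level services and returns the
// script checks that name the given task, so a tasklet can be spun off
// for each one.
func scriptChecksForTask(tg *taskGroup, taskName string) []check {
	var out []check
	for _, svc := range tg.Services {
		for _, c := range svc.Checks {
			if c.Type == "script" && c.Task == taskName {
				out = append(out, c)
			}
		}
	}
	return out
}

func main() {
	tg := &taskGroup{Services: []*service{{
		Name:   "web",
		Checks: []check{{Name: "alive", Type: "script", Task: "frontend"}},
	}}}
	fmt.Println(scriptChecksForTask(tg, "frontend"))
}
```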
* connect: add unix socket to proxy grpc for envoy
Fixes #6124
Implement an L4 proxy from a unix socket inside a network namespace to
Consul's gRPC endpoint on the host. This allows Envoy to connect to
Consul's xDS configuration API.
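A minimal Go sketch of such an L4 proxy is below. The socket path and gRPC address are illustrative defaults rather than Nomad's configuration; the proxy simply accepts connections on the unix socket and copies bytes in both directions to the host's Consul gRPC listener.

```go
package main

import (
	"io"
	"log"
	"net"
)

// proxyUnixToGRPC listens on a unix socket (visible inside the network
// namespace) and forwards each connection to Consul's gRPC endpoint on
// the host, copying bytes in both directions.
func proxyUnixToGRPC(socketPath, grpcAddr string) error {
	ln, err := net.Listen("unix", socketPath)
	if err != nil {
		return err
	}
	defer ln.Close()

	for {
		conn, err := ln.Accept()
		if err != nil {
			return err
		}
		go func(in net.Conn) {
			defer in.Close()
			out, err := net.Dial("tcp", grpcAddr)
			if err != nil {
				log.Printf("proxy: dial %s: %v", grpcAddr, err)
				return
			}
			defer out.Close()
			go io.Copy(out, in) // client (Envoy) -> Consul gRPC
			io.Copy(in, out)    // Consul gRPC -> client (Envoy)
		}(conn)
	}
}

func main() {
	// Hypothetical socket path; 8502 is Consul's default gRPC port.
	log.Fatal(proxyUnixToGRPC("/tmp/consul_grpc.sock", "127.0.0.1:8502"))
}
```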
* connect: pointer receiver on structs with mutexes
* connect: warn on all proxy errors