update links to use new canonical location

commit 3a3165f127
parent f8be7608a2
@@ -384,7 +384,7 @@ The `Task` object supports the following keys:
 Consul for service discovery. A `Service` object represents a routable and
 discoverable service on the network. Nomad automatically registers when a task
 is started and de-registers it when the task transitions to the dead state.
-[Click here](/guides/operations/consul-integration/index.html#service-discovery) to learn more about
+[Click here](/guides/integrations/consul-integration/index.html#service-discovery) to learn more about
 services. Below is the fields in the `Service` object:
 
 - `Name`: An explicit name for the Service. Nomad will replace `${JOB}`,
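The context above is the service-discovery portion of the job documentation. For orientation only, and not part of this diff, a hedged sketch of a `service` stanza in an HCL job specification might look like the following; the name, port label, tags, and check values are hypothetical:

```hcl
service {
  # Nomad interpolates ${JOB} (and related variables) in the service name.
  name = "${JOB}-web"
  port = "http"
  tags = ["frontend"]

  check {
    type     = "http"
    path     = "/health"
    interval = "10s"
    timeout  = "2s"
  }
}
```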
@@ -9,7 +9,7 @@ description: |-
 # Sentinel Policies HTTP API
 
 The `/sentinel/policies` and `/sentinel/policy/` endpoints are used to manage Sentinel policies.
-For more details about Sentinel policies, please see the [Sentinel Policy Guide](/guides/security/sentinel-policy.html).
+For more details about Sentinel policies, please see the [Sentinel Policy Guide](/guides/governance-and-policy/sentinel/sentinel-policy.html).
 
 Sentinel endpoints are only available when ACLs are enabled. For more details about ACLs, please see the [ACL Guide](/guides/security/acl.html).
 
@@ -14,7 +14,7 @@ or server functionality, including exposing interfaces for client consumption
 and running jobs.
 
 Due to the power and flexibility of this command, the Nomad agent is documented
-in its own section. See the [Nomad Agent](/guides/operations/agent/index.html)
+in its own section. See the [Nomad Agent](/guides/install/production/nomad-agent.html)
 guide and the [Configuration](/docs/configuration/index.html) documentation section for
 more information on how to use this command and the options it has.
 
@@ -141,7 +141,7 @@ isolation is not supported as of now.
 
 [lxc-create]: https://linuxcontainers.org/lxc/manpages/man1/lxc-create.1.html
 [lxc-driver]: https://releases.hashicorp.com/nomad-driver-lxc
-[lxc-guide]: /guides/external/lxc.html
+[lxc-guide]: /guides/operating-a-job/external/lxc.html
 [lxc_man]: https://linuxcontainers.org/lxc/manpages/man5/lxc.container.conf.5.html#lbAM
 [plugin]: /docs/configuration/plugin.html
 [plugin_dir]: /docs/configuration/index.html#plugin_dir
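The hunk above updates reference links for the external LXC driver, which is loaded through the agent's plugin configuration ([plugin], [plugin_dir]). The following is a hedged sketch, not part of this diff, of how such a plugin is typically enabled; the directory path is hypothetical and the supported config keys should be verified against the driver's documentation:

```hcl
# Agent configuration: directory Nomad scans for external driver plugins.
plugin_dir = "/opt/nomad/plugins"

plugin "nomad-driver-lxc" {
  config {
    # Minimal, assumed configuration; consult the LXC driver docs for full options.
    enabled = true
  }
}
```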
@@ -11,7 +11,7 @@ description: |-
 # Nomad Enterprise Namespaces
 
 In [Nomad Enterprise](https://www.hashicorp.com/go/nomad-enterprise), a shared
-cluster can be partitioned into [namespaces](/guides/security/namespaces.html) which allows
+cluster can be partitioned into [namespaces](/guides/governance-and-policy/namespaces.html) which allows
 jobs and their associated objects to be isolated from each other and other users
 of the cluster.
 
@@ -19,8 +19,8 @@ Namespaces enhance the usability of a shared cluster by isolating teams from the
 jobs of others, provide fine grain access control to jobs when coupled with
 [ACLs](/guides/security/acl.html), and can prevent bad actors from negatively impacting
 the whole cluster when used in conjunction with
-[resource quotas](/guides/security/quotas.html). See the
-[Namespaces Guide](/guides/security/namespaces.html) for a thorough overview.
+[resource quotas](/guides/governance-and-policy/quotas.html). See the
+[Namespaces Guide](/guides/governance-and-policy/namespaces.html) for a thorough overview.
 
 Click [here](https://www.hashicorp.com/go/nomad-enterprise) to set up a demo or
 request a trial of Nomad Enterprise.
@@ -11,13 +11,13 @@ description: |-
 # Nomad Enterprise Resource Quotas
 
 In [Nomad Enterprise](https://www.hashicorp.com/go/nomad-enterprise), operators can
-define [quota specifications](/guides/security/quotas.html) and apply them to namespaces.
+define [quota specifications](/guides/governance-and-policy/quotas.html) and apply them to namespaces.
 When a quota is attached to a namespace, the jobs within the namespace may not
 consume more resources than the quota specification allows.
 
 This allows operators to partition a shared cluster and ensure that no single
 actor can consume the whole resources of the cluster. See the
-[Resource Quotas Guide](/guides/security/quotas.html) for more details.
+[Resource Quotas Guide](/guides/governance-and-policy/quotas.html) for more details.
 
 Click [here](https://www.hashicorp.com/go/nomad-enterprise) to set up a demo or
 request a trial of Nomad Enterprise.
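The hunk above is from the Resource Quotas overview. As a hedged illustration, not taken from this commit, a quota specification written in HCL generally looks like the following; the name and limits are hypothetical, and it would be applied with `nomad quota apply` before being attached to a namespace:

```hcl
name        = "dev-quota"
description = "Limit resource usage for the dev namespace"

# Each limit block applies to a single region.
limit {
  region = "global"

  region_limit {
    cpu    = 2500 # MHz
    memory = 2048 # MB
  }
}
```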
@@ -9,7 +9,7 @@ description: |-
 # Nomad Enterprise Sentinel Policy Enforcement
 
 In [Nomad Enterprise](https://www.hashicorp.com/go/nomad-enterprise), operators can
-create [Sentinel policies](/guides/security/sentinel-policy.html) for fine-grained policy
+create [Sentinel policies](/guides/governance-and-policy/sentinel/sentinel-policy.html) for fine-grained policy
 enforcement. Sentinel policies build on top of the ACL system and allow operators to define
 policies such as disallowing jobs to be submitted to production on
 Fridays. These extremely rich policies are defined as code. For example, to
@@ -30,7 +30,7 @@ all_drivers_docker = rule {
 }
 ```
 
-See the [Sentinel Policies Guide](/guides/security/sentinel-policy.html) for additional details and examples.
+See the [Sentinel Policies Guide](/guides/governance-and-policy/sentinel/sentinel-policy.html) for additional details and examples.
 
 Click [here](https://www.hashicorp.com/go/nomad-enterprise) to set up a demo or
 request a trial of Nomad Enterprise.
@@ -622,7 +622,7 @@ system of a task for that driver.</small>
 
 [check_restart_stanza]: /docs/job-specification/check_restart.html "check_restart stanza"
 [consul_grpc]: https://www.consul.io/api/agent/check.html#grpc
-[service-discovery]: /guides/operations/consul-integration/index.html#service-discovery/index.html "Nomad Service Discovery"
+[service-discovery]: /guides/integrations/consul-integration/index.html#service-discovery/index.html "Nomad Service Discovery"
 [interpolation]: /docs/runtime/interpolation.html "Nomad Runtime Interpolation"
 [network]: /docs/job-specification/network.html "Nomad network Job Specification"
 [qemu]: /docs/drivers/qemu.html "Nomad qemu Driver"
@@ -205,7 +205,7 @@ task "server" {
 [java]: /docs/drivers/java.html "Nomad Java Driver"
 [Docker]: /docs/drivers/docker.html "Nomad Docker Driver"
 [rkt]: /docs/drivers/rkt.html "Nomad rkt Driver"
-[service_discovery]: /guides/operations/consul-integration/index.html#service-discovery/index.html "Nomad Service Discovery"
+[service_discovery]: /guides/integrations/consul-integration/index.html#service-discovery/index.html "Nomad Service Discovery"
 [template]: /docs/job-specification/template.html "Nomad template Job Specification"
 [user_drivers]: /docs/configuration/client.html#_quot_user_checked_drivers_quot_
 [user_blacklist]: /docs/configuration/client.html#_quot_user_blacklist_quot_
@@ -19,7 +19,7 @@ application:
 
 The Spark integration will use a generic job template by default. The template
 includes groups and tasks for the driver, executors and (optionally) the
-[shuffle service](/guides/spark/dynamic.html). The job itself and the tasks that
+[shuffle service](/guides/analytical-workloads/spark/dynamic.html). The job itself and the tasks that
 are created have the `spark.nomad.role` meta value defined accordingly:
 
 ```hcl
@@ -57,7 +57,7 @@ job "structure" {
 ```
 
 The default template can be customized indirectly by explicitly [setting
-configuration properties](/guides/spark/configuration.html).
+configuration properties](/guides/analytical-workloads/spark/configuration.html).
 
 ## Using a Custom Job Template
 
@@ -122,5 +122,5 @@ The order of precedence for customized settings is as follows:
 
 ## Next Steps
 
-Learn how to [allocate resources](/guides/spark/resource.html) for your Spark
+Learn how to [allocate resources](/guides/analytical-workloads/spark/resource.html) for your Spark
 applications.
@@ -25,4 +25,4 @@ allocation in Nomad, the resources allocated to the executor tasks are not
 
 ## Next Steps
 
-Learn how to [integrate Spark with HDFS](/guides/spark/hdfs.html).
+Learn how to [integrate Spark with HDFS](/guides/analytical-workloads/spark/hdfs.html).
@@ -135,5 +135,5 @@ Availability](https://hadoop.apache.org/docs/stable/hadoop-project-dist/hadoop-h
 
 ## Next Steps
 
-Learn how to [monitor the output](/guides/spark/monitoring.html) of your
+Learn how to [monitor the output](/guides/analytical-workloads/spark/monitoring.html) of your
 Spark applications.
@@ -10,7 +10,7 @@ description: |-
 
 By default, `spark-submit` in `cluster` mode will submit your application
 to the Nomad cluster and return immediately. You can use the
-[spark.nomad.cluster.monitorUntil](/guides/spark/configuration.html#spark-nomad-cluster-monitoruntil) configuration property to have
+[spark.nomad.cluster.monitorUntil](/guides/analytical-workloads/spark/configuration.html#spark-nomad-cluster-monitoruntil) configuration property to have
 `spark-submit` monitor the job continuously. Note that, with this flag set,
 killing `spark-submit` will *not* stop the spark application, since it will be
 running independently in the Nomad cluster.
@@ -31,7 +31,7 @@ cause the driver process to continue to run. You can force termination
 It is possible to reconstruct the web UI of a completed application using
 Spark’s [history server](https://spark.apache.org/docs/latest/monitoring.html#viewing-after-the-fact).
 The history server requires the event log to have been written to an accessible
-location like [HDFS](/guides/spark/hdfs.html) or Amazon S3.
+location like [HDFS](/guides/analytical-workloads/spark/hdfs.html) or Amazon S3.
 
 Sample history server job file:
 
@@ -85,7 +85,7 @@ job "spark-history-server" {
 
 The job file above can also be found [here](https://github.com/hashicorp/nomad/blob/master/terraform/examples/spark/spark-history-server-hdfs.nomad).
 
-To run the history server, first [deploy HDFS](/guides/spark/hdfs.html) and then
+To run the history server, first [deploy HDFS](/guides/analytical-workloads/spark/hdfs.html) and then
 create a directory in HDFS to store events:
 
 ```shell
@@ -164,4 +164,4 @@ job "template" {
 
 ## Next Steps
 
-Review the Nomad/Spark [configuration properties](/guides/spark/configuration.html).
+Review the Nomad/Spark [configuration properties](/guides/analytical-workloads/spark/configuration.html).
@@ -19,7 +19,7 @@ can be used to quickly provision a Spark-enabled Nomad environment in
 AWS. The embedded [Spark example](https://github.com/hashicorp/nomad/tree/master/terraform/examples/spark)
 provides for a quickstart experience that can be used in conjunction with
 this guide. When you have a cluster up and running, you can proceed to
-[Submitting applications](/guides/spark/submit.html).
+[Submitting applications](/guides/analytical-workloads/spark/submit.html).
 
 ## Manually Provision a Cluster
 
@@ -90,20 +90,20 @@ $ spark-submit \
 ### Using a Docker Image
 
 An alternative to installing the JRE on every client node is to set the
-[spark.nomad.dockerImage](/guides/spark/configuration.html#spark-nomad-dockerimage)
+[spark.nomad.dockerImage](/guides/analytical-workloads/spark/configuration.html#spark-nomad-dockerimage)
 configuration property to the URL of a Docker image that has the Java runtime
 installed. If set, Nomad will use the `docker` driver to run Spark executors in
 a container created from the image. The
-[spark.nomad.dockerAuth](/guides/spark/configuration.html#spark-nomad-dockerauth)
+[spark.nomad.dockerAuth](/guides/analytical-workloads/spark/configuration.html#spark-nomad-dockerauth)
 configuration property can be set to a JSON object to provide Docker repository
 authentication configuration.
 
 When using a Docker image, both the Spark distribution and the application
 itself can be included (in which case local URLs can be used for `spark-submit`).
 
-Here, we include [spark.nomad.dockerImage](/guides/spark/configuration.html#spark-nomad-dockerimage)
+Here, we include [spark.nomad.dockerImage](/guides/analytical-workloads/spark/configuration.html#spark-nomad-dockerimage)
 and use local paths for
-[spark.nomad.sparkDistribution](/guides/spark/configuration.html#spark-nomad-sparkdistribution)
+[spark.nomad.sparkDistribution](/guides/analytical-workloads/spark/configuration.html#spark-nomad-sparkdistribution)
 and the application JAR file:
 
 ```shell
@@ -119,4 +119,4 @@ $ spark-submit \
 
 ## Next Steps
 
-Learn how to [submit applications](/guides/spark/submit.html).
+Learn how to [submit applications](/guides/analytical-workloads/spark/submit.html).
@@ -39,7 +39,7 @@ Resource-related configuration properties are covered below.
 The standard Spark memory properties will be propagated to Nomad to control
 task resource allocation: `spark.driver.memory` (set by `--driver-memory`) and
 `spark.executor.memory` (set by `--executor-memory`). You can additionally specify
-[spark.nomad.shuffle.memory](/guides/spark/configuration.html#spark-nomad-shuffle-memory)
+[spark.nomad.shuffle.memory](/guides/analytical-workloads/spark/configuration.html#spark-nomad-shuffle-memory)
 to control how much memory Nomad allocates to shuffle service tasks.
 
 ## CPU
@@ -48,11 +48,11 @@ Spark sizes its thread pools and allocates tasks based on the number of CPU
 cores available. Nomad manages CPU allocation in terms of processing speed
 rather than number of cores. When running Spark on Nomad, you can control how
 much CPU share Nomad will allocate to tasks using the
-[spark.nomad.driver.cpu](/guides/spark/configuration.html#spark-nomad-driver-cpu)
+[spark.nomad.driver.cpu](/guides/analytical-workloads/spark/configuration.html#spark-nomad-driver-cpu)
 (set by `--driver-cpu`),
-[spark.nomad.executor.cpu](/guides/spark/configuration.html#spark-nomad-executor-cpu)
+[spark.nomad.executor.cpu](/guides/analytical-workloads/spark/configuration.html#spark-nomad-executor-cpu)
 (set by `--executor-cpu`) and
-[spark.nomad.shuffle.cpu](/guides/spark/configuration.html#spark-nomad-shuffle-cpu)
+[spark.nomad.shuffle.cpu](/guides/analytical-workloads/spark/configuration.html#spark-nomad-shuffle-cpu)
 properties. When running on Nomad, executors will be configured to use one core
 by default, meaning they will only pull a single 1-core task at a time. You can
 set the `spark.executor.cores` property (set by `--executor-cores`) to allow
@@ -64,9 +64,9 @@ Nomad does not restrict the network bandwidth of running tasks, bit it does
 allocate a non-zero number of Mbit/s to each task and uses this when bin packing
 task groups onto Nomad clients. Spark defaults to requesting the minimum of 1
 Mbit/s per task, but you can change this with the
-[spark.nomad.driver.networkMBits](/guides/spark/configuration.html#spark-nomad-driver-networkmbits),
-[spark.nomad.executor.networkMBits](/guides/spark/configuration.html#spark-nomad-executor-networkmbits), and
-[spark.nomad.shuffle.networkMBits](/guides/spark/configuration.html#spark-nomad-shuffle-networkmbits)
+[spark.nomad.driver.networkMBits](/guides/analytical-workloads/spark/configuration.html#spark-nomad-driver-networkmbits),
+[spark.nomad.executor.networkMBits](/guides/analytical-workloads/spark/configuration.html#spark-nomad-executor-networkmbits), and
+[spark.nomad.shuffle.networkMBits](/guides/analytical-workloads/spark/configuration.html#spark-nomad-shuffle-networkmbits)
 properties.
 
 ## Log rotation
@@ -74,9 +74,9 @@ properties.
 Nomad performs log rotation on the `stdout` and `stderr` of its tasks. You can
 configure the number number and size of log files it will keep for driver and
 executor task groups using
-[spark.nomad.driver.logMaxFiles](/guides/spark/configuration.html#spark-nomad-driver-logmaxfiles)
-and [spark.nomad.executor.logMaxFiles](/guides/spark/configuration.html#spark-nomad-executor-logmaxfiles).
+[spark.nomad.driver.logMaxFiles](/guides/analytical-workloads/spark/configuration.html#spark-nomad-driver-logmaxfiles)
+and [spark.nomad.executor.logMaxFiles](/guides/analytical-workloads/spark/configuration.html#spark-nomad-executor-logmaxfiles).
 
 ## Next Steps
 
-Learn how to [dynamically allocate Spark executors](/guides/spark/dynamic.html).
+Learn how to [dynamically allocate Spark executors](/guides/analytical-workloads/spark/dynamic.html).
@@ -18,4 +18,4 @@ optionally the application driver itself, run as Nomad tasks in a Nomad job.
 ## Next Steps
 
 The links in the sidebar contain detailed information about specific aspects of
-the integration, beginning with [Getting Started](/guides/spark/pre.html).
+the integration, beginning with [Getting Started](/guides/analytical-workloads/spark/pre.html).
@@ -79,4 +79,4 @@ $ spark-submit --class org.apache.spark.examples.SparkPi \
 
 ## Next Steps
 
-Learn how to [customize applications](/guides/spark/customizing.html).
+Learn how to [customize applications](/guides/analytical-workloads/spark/customizing.html).
@@ -19,4 +19,4 @@ description: |-
 
 ## Next Steps
 
-[Next step](/guides/spark/name.html)
+[Next step](/guides/analytical-workloads/spark/name.html)
@@ -27,7 +27,7 @@ When combined with ACLs, the isolation of namespaces can be enforced, only
 allowing designated users access to read or modify the jobs and associated
 objects in a namespace.
 
-When [resource quotas](/guides/security/quotas.html) are applied to a namespace they
+When [resource quotas](/guides/governance-and-policy/quotas.html) are applied to a namespace they
 provide a means to limit resource consumption by the jobs in the namespace. This
 can prevent a single actor from consuming excessive cluster resources and
 negatively impacting other teams and applications sharing the cluster.
@@ -39,8 +39,8 @@ jobs, allocations, deployments, and evaluations.
 
 Nomad does not namespace objects that are shared across multiple namespaces.
 This includes nodes, [ACL policies](/guides/security/acl.html), [Sentinel
-policies](/guides/security/sentinel-policy.html), and [quota
-specifications](/guides/security/quotas.html).
+policies](/guides/governance-and-policy/sentinel/sentinel-policy.html), and [quota
+specifications](/guides/governance-and-policy/quotas.html).
 
 ## Working with Namespaces
 
@@ -21,7 +21,7 @@ This is not present in the open source version of Nomad.
 When many teams or users are sharing Nomad clusters, there is the concern that a
 single user could use more than their fair share of resources. Resource quotas
 provide a mechanism for cluster administrators to restrict the resources that a
-[namespace](/guides/security/namespaces.html) has access to.
+[namespace](/guides/governance-and-policy/namespaces.html) has access to.
 
 ## Quotas Objects
 
@@ -205,4 +205,4 @@ The following objects are made available in the `submit-job` scope:
 | ------ | ------------------------- |
 | `job` | The job being submitted |
 
-See the [Sentinel Job Object](/guides/security/sentinel/job.html) for details on the fields that are available.
+See the [Sentinel Job Object](/guides/governance-and-policy/sentinel/job.html) for details on the fields that are available.
@@ -11,17 +11,17 @@ ea_version: 0.9
 
 # Nomad Reference Install Guide
 
-This deployment guide covers the steps required to install and configure a single HashiCorp Nomad cluster as defined in the [Nomad Reference Architecture](/guides/operations/reference-architecture.html).
+This deployment guide covers the steps required to install and configure a single HashiCorp Nomad cluster as defined in the [Nomad Reference Architecture](/guides/install/production/reference-architecture.html).
 
 These instructions are for installing and configuring Nomad on Linux hosts running the systemd system and service manager.
 
 ## Reference Material
 
-This deployment guide is designed to work in combination with the [Nomad Reference Architecture](/guides/operations/reference-architecture.html) and [Consul Deployment Guide](https://www.consul.io/docs/guides/deployment-guide.html). Although it is not a strict requirement to follow the Nomad Reference Architecture, please ensure you are familiar with the overall architecture design. For example, installing Nomad server agents on multiple physical or virtual (with correct anti-affinity) hosts for high-availability.
+This deployment guide is designed to work in combination with the [Nomad Reference Architecture](/guides/install/production/reference-architecture.html) and [Consul Deployment Guide](https://www.consul.io/docs/guides/deployment-guide.html). Although it is not a strict requirement to follow the Nomad Reference Architecture, please ensure you are familiar with the overall architecture design. For example, installing Nomad server agents on multiple physical or virtual (with correct anti-affinity) hosts for high-availability.
 
 ## Overview
 
-To provide a highly-available single cluster architecture, we recommend Nomad server agents be deployed to more than one host, as shown in the [Nomad Reference Architecture](/guides/operations/reference-architecture.html).
+To provide a highly-available single cluster architecture, we recommend Nomad server agents be deployed to more than one host, as shown in the [Nomad Reference Architecture](/guides/install/production/reference-architecture.html).
 
 ![Reference diagram](/assets/images/nomad_reference_diagram.png)
 
@@ -22,7 +22,7 @@ The following topics are addressed:
 - [High Availability](#high-availability)
 - [Failure Scenarios](#failure-scenarios)
 
-This document describes deploying a Nomad cluster in combination with, or with access to, a [Consul cluster](/guides/operations/consul-integration/index.html). We recommend the use of Consul with Nomad to provide automatic clustering, service discovery, health checking and dynamic configuration.
+This document describes deploying a Nomad cluster in combination with, or with access to, a [Consul cluster](/guides/integrations/consul-integration/index.html). We recommend the use of Consul with Nomad to provide automatic clustering, service discovery, health checking and dynamic configuration.
 
 ## <a name="ra"></a>Reference Architecture
 
@@ -32,7 +32,7 @@ In a Nomad multi-region architecture, communication happens via [WAN gossip](/do
 
 In cloud environments, a single cluster may be deployed across multiple availability zones. For example, in AWS each Nomad server can be deployed to an associated EC2 instance, and those EC2 instances distributed across multiple AZs. Similarly, Nomad server clusters can be deployed to multiple cloud regions to allow for region level HA scenarios.
 
-For more information on Nomad server cluster design, see the [cluster requirements documentation](/guides/operations/requirements.html).
+For more information on Nomad server cluster design, see the [cluster requirements documentation](/guides/install/production/requirements.html).
 
 The design shared in this document is the recommended architecture for production environments, as it provides flexibility and resilience. Nomad utilizes an existing Consul server cluster; however, the deployment design of the Consul server cluster is outside the scope of this document.
 
@@ -66,7 +66,7 @@ Nomad servers are expected to be able to communicate in high bandwidth, low late
 
 Nomad client clusters require the ability to receive traffic as noted above in the Network Connectivity Details; however, clients can be separated into any type of infrastructure (multi-cloud, on-prem, virtual, bare metal, etc.) as long as they are reachable and can receive job requests from the Nomad servers.
 
-Additional documentation is available to learn more about [Nomad networking](/guides/operations/requirements.html#network-topology).
+Additional documentation is available to learn more about [Nomad networking](/guides/install/production/requirements.html#network-topology).
 
 ## <a name="system-reqs"></a>Deployment System Requirements
 
@@ -127,5 +127,5 @@ In the event of a region-level failure (which would contain an entire Nomad serv
 
 ## Next Steps
 
-- Read [Deployment Guide](/guides/operations/deployment-guide.html) to learn
+- Read [Deployment Guide](/guides/install/production/deployment-guide.html) to learn
 the steps required to install and configure a single HashiCorp Nomad cluster.
@@ -678,7 +678,7 @@ below </h2>
 [role]: https://www.vaultproject.io/docs/auth/token.html
 [seal]: https://www.vaultproject.io/docs/concepts/seal.html
 [secrets-task-directory]: /docs/runtime/environment.html#secrets-
-[step-5]: /guides/operations/vault-integration/index.html#step-5-create-a-token-role
+[step-5]: /guides/integrations/vault-integration/index.html#step-5-create-a-token-role
 [template]: /docs/job-specification/template.html
 [token]: https://www.vaultproject.io/docs/concepts/tokens.html
 [vault]: https://www.vaultproject.io/
@@ -19,9 +19,9 @@ Please refer to the guides below for using affinity and spread in Nomad 0.9.
 - [Affinity][affinity-guide]
 - [Spread][spread-guide]
 
-[affinity-guide]: /guides/advanced-scheduling/affinity.html
+[affinity-guide]: /guides/operating-a-job/advanced-scheduling/affinity.html
 [affinity-stanza]: /docs/job-specification/affinity.html
-[spread-guide]: /guides/advanced-scheduling/spread.html
+[spread-guide]: /guides/operating-a-job/advanced-scheduling/spread.html
 [spread-stanza]: /docs/job-specification/spread.html
 [scheduling]: /docs/internals/scheduling/scheduling.html
 
@@ -16,4 +16,4 @@ drivers available for Nomad.
 
 - [LXC][lxc]
 
-[lxc]: /guides/external/lxc.html
+[lxc]: /guides/operating-a-job/external/lxc.html
@@ -15,7 +15,7 @@ servers, monitoring the state of the Raft cluster, and stable server introductio
 To enable Autopilot features (with the exception of dead server cleanup),
 the `raft_protocol` setting in the [server stanza](/docs/configuration/server.html)
 must be set to 3 on all servers. In Nomad 0.8 this setting defaults to 2; in Nomad 0.9 it will default to 3.
-For more information, see the [Version Upgrade section](/guides/operations/upgrade/upgrade-specific.html#raft-protocol-version-compatibility)
+For more information, see the [Version Upgrade section](/guides/upgrade/upgrade-specific.html#raft-protocol-version-compatibility)
 on Raft Protocol versions.
 
 ## Configuration
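The hunk above refers to the `raft_protocol` setting in the agent's server stanza. A hedged sketch of that configuration follows; the surrounding values are illustrative rather than taken from this commit:

```hcl
server {
  enabled          = true
  bootstrap_expect = 3

  # Required (value 3) for Autopilot features other than dead server cleanup.
  raft_protocol = 3
}
```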
@@ -33,4 +33,4 @@ enough to join just one known server.
 If bootstrapped via Consul and the Consul clusters in the Nomad regions are
 federated, then federation occurs automatically.
 
-[ports]: /guides/operations/requirements.html#ports-used
+[ports]: /guides/install/production/requirements.html#ports-used
@@ -23,7 +23,7 @@ For upgrades we strive to ensure backwards compatibility. For most upgrades, the
 process is as simple as upgrading the binary and restarting the service.
 
 Prior to starting the upgrade please check the
-[specific version details](/guides/operations/upgrade/upgrade-specific.html) page as some
+[specific version details](/guides/upgrade/upgrade-specific.html) page as some
 version differences may require specific steps.
 
 At a high level we complete the following steps to upgrade Nomad:
@@ -118,5 +118,5 @@ are in a `ready` state.
 The process of upgrading to a Nomad Enterprise version is identical to upgrading
 between versions of open source Nomad. The same guidance above should be
 followed and as always, prior to starting the upgrade please check the [specific
-version details](/guides/operations/upgrade/upgrade-specific.html) page as some version
+version details](/guides/upgrade/upgrade-specific.html) page as some version
 differences may require specific steps.
@@ -9,7 +9,7 @@ description: |-
 
 # Upgrade Guides
 
-The [upgrading page](/guides/operations/upgrade/index.html) covers the details of doing
+The [upgrading page](/guides/upgrade/index.html) covers the details of doing
 a standard upgrade. However, specific versions of Nomad may have more
 details provided for their upgrades as a result of new features or changed
 behavior. This page is used to document those details separately from the
@@ -141,7 +141,7 @@ in a Nomad cluster must be running with Raft protocol version 3 or later.
 
 #### Upgrading to Raft Protocol 3
 
-This section provides details on upgrading to Raft Protocol 3 in Nomad 0.8 and higher. Raft protocol version 3 requires Nomad running 0.8.0 or newer on all servers in order to work. See [Raft Protocol Version Compatibility](/guides/operations/upgrade/upgrade-specific.html#raft-protocol-version-compatibility) for more details. Also the format of `peers.json` used for outage recovery is different when running with the latest Raft protocol. See [Manual Recovery Using peers.json](/guides/operations/outage.html#manual-recovery-using-peers-json) for a description of the required format.
+This section provides details on upgrading to Raft Protocol 3 in Nomad 0.8 and higher. Raft protocol version 3 requires Nomad running 0.8.0 or newer on all servers in order to work. See [Raft Protocol Version Compatibility](/guides/upgrade/upgrade-specific.html#raft-protocol-version-compatibility) for more details. Also the format of `peers.json` used for outage recovery is different when running with the latest Raft protocol. See [Manual Recovery Using peers.json](/guides/operations/outage.html#manual-recovery-using-peers-json) for a description of the required format.
 
 Please note that the Raft protocol is different from Nomad's internal protocol as shown in commands like `nomad server members`. To see the version of the Raft protocol in use on each server, use the `nomad operator raft list-peers` command.
 
@@ -19,7 +19,7 @@ Organizations are increasingly moving towards a Docker centric workflow for
 application deployment and management. This transition requires new tooling
 to automate placement, perform job updates, enable self-service for developers,
 and to handle failures automatically. Nomad supports a [first-class Docker workflow](/docs/drivers/docker.html)
-and integrates seamlessly with [Consul](/guides/operations/consul-integration/index.html)
+and integrates seamlessly with [Consul](/guides/integrations/consul-integration/index.html)
 and [Vault](/docs/vault-integration/index.html) to enable a complete solution
 while maximizing operational flexibility. Nomad is easy to use, can scale to
 thousands of nodes in a single cluster, and can easily deploy across private data
@@ -43,7 +43,7 @@ Microservices and Service Oriented Architectures (SOA) are a design paradigm in
 which many services with narrow scope, tight state encapsulation, and API driven
 communication interact together to form a larger solution. However, managing hundreds
 or thousands of services instead of a few large applications creates an operational
-challenge. Nomad elegantly integrates with [Consul](/guides/operations/consul-integration/index.html)
+challenge. Nomad elegantly integrates with [Consul](/guides/integrations/consul-integration/index.html)
 for automatic service registration and dynamic rendering of configuration files. Nomad
 and Consul together provide an ideal solution for managing microservices, making it
 easier to adopt the paradigm.
@@ -6,7 +6,7 @@
 </li>
 
 <li>
-<a href="/guides/operations/install/index.html#compiling-from-source">Build from Source</a>
+<a href="/guides/install/index.html#compiling-from-source">Build from Source</a>
 </li>
 </ul>
 <% end %>
@@ -117,7 +117,7 @@
 </li>
 
 <li<%= sidebar_current("guides-spread") %>>
-<a href="/guides/advanced-scheduling/spread.html">Fault Tolerance with Spread</a>
+<a href="/guides/operating-a-job/advanced-scheduling/spread.html">Fault Tolerance with Spread</a>
 </li>
 
 <li<%= sidebar_current("guides-operating-a-job-external-lxc") %>>