Merge pull request #5667 from hashicorp/yishan/revised-nomadproject-structure

Revised NomadProject Structure
This commit is contained in:
Yishan Lin 2019-05-09 13:03:50 -07:00 committed by GitHub
commit 46bb7bbe08
48 changed files with 710 additions and 677 deletions

View File

@ -0,0 +1,16 @@
---
layout: "guides"
page_title: "Analytical Workloads on Nomad"
sidebar_current: "guides-analytical-workloads"
description: |-
Guidance and best practices for running analytical workloads on Nomad.
---
# Analytical Workloads on Nomad
Nomad is well-suited for analytical workloads, given its [performance](https://www.hashicorp.com/c1m/) and first-class support for
[batch scheduling](/docs/schedulers.html).
This section provides some best practices and guidance for running analytical workloads on Nomad.
Please navigate to the appropriate sub-sections for more information.

View File

@ -1,154 +1,153 @@
---
layout: "guides"
page_title: "Apache Spark Integration - Configuration Properties"
sidebar_current: "guides-spark-configuration"
sidebar_current: "guides-analytical-workloads-spark-configuration"
description: |-
Comprehensive list of Spark configuration properties.
---
# Spark Configuration Properties
Spark [configuration properties](https://spark.apache.org/docs/latest/configuration.html#available-properties)
are generally applicable to the Nomad integration. The properties listed below
are specific to running Spark on Nomad. Configuration properties can be set by
adding `--conf [property]=[value]` to the `spark-submit` command.
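As an illustrative sketch, a submission that sets two of the properties below might look like the following (the application JAR URL and values are placeholders, and `--master nomad` assumes the Nomad-enabled Spark fork with `NOMAD_ADDR` exported):

```shell
# Hypothetical example: pass Nomad-specific properties on the command line.
$ spark-submit \
    --class org.apache.spark.examples.SparkPi \
    --master nomad \
    --conf spark.nomad.region=global \
    --conf spark.nomad.priority=75 \
    https://example.com/spark-examples.jar 100
```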
- `spark.nomad.authToken` `(string: nil)` - Specifies the secret key of the auth
token to use when accessing the API. This falls back to the NOMAD_TOKEN environment
variable. Note that if this configuration setting is set and the cluster deploy
mode is used, this setting will be propagated to the driver application in the
job spec. If it is not set and an auth token is taken from the NOMAD_TOKEN
environment variable, the token will not be propagated to the driver, which will
require the driver to pick up its token from an environment variable.
- `spark.nomad.cluster.expectImmediateScheduling` `(bool: false)` - Specifies
that `spark-submit` should fail if Nomad is not able to schedule the job
immediately.
- `spark.nomad.cluster.monitorUntil` `(string: "submitted")` - Specifies the
length of time that `spark-submit` should monitor a Spark application in cluster
mode. When set to `submitted`, `spark-submit` will return as soon as the
application has been submitted to the Nomad cluster. When set to `scheduled`,
`spark-submit` will return as soon as the Nomad job has been scheduled. When
set to `complete`, `spark-submit` will tail the output from the driver process
and return when the job has completed.
- `spark.nomad.datacenters` `(string: dynamic)` - Specifies a comma-separated
list of Nomad datacenters to use. This property defaults to the datacenter of
the first Nomad server contacted.
- `spark.nomad.docker.email` `(string: nil)` - Specifies the email address to
use when downloading the Docker image specified by
[spark.nomad.dockerImage](#spark.nomad.dockerImage). See the
[Docker driver authentication](/docs/drivers/docker.html#authentication)
docs for more information.
- `spark.nomad.docker.password` `(string: nil)` - Specifies the password to use
when downloading the Docker image specified by
[spark.nomad.dockerImage](#spark.nomad.dockerImage). See the
[Docker driver authentication](/docs/drivers/docker.html#authentication)
docs for more information.
- `spark.nomad.docker.serverAddress` `(string: nil)` - Specifies the server
address (domain/IP without the protocol) to use when downloading the Docker
image specified by [spark.nomad.dockerImage](#spark.nomad.dockerImage). Docker
Hub is used by default. See the
[Docker driver authentication](/docs/drivers/docker.html#authentication)
docs for more information.
- `spark.nomad.docker.username` `(string: nil)` - Specifies the username to use
when downloading the Docker image specified by
[spark.nomad.dockerImage](#spark-nomad-dockerImage). See the
[Docker driver authentication](/docs/drivers/docker.html#authentication)
docs for more information.
- `spark.nomad.dockerImage` `(string: nil)` - Specifies the `URL` for the
[Docker image](/docs/drivers/docker.html#image) to
use to run Spark with Nomad's `docker` driver. When not specified, Nomad's
`exec` driver will be used instead.
- `spark.nomad.driver.cpu` `(string: "1000")` - Specifies the CPU in MHz that
should be reserved for driver tasks.
- `spark.nomad.driver.logMaxFileSize` `(string: "1m")` - Specifies the maximum
size that Nomad should use for driver task log files.
- `spark.nomad.driver.logMaxFiles` `(string: "5")` - Specifies the number of log
files that Nomad should keep for driver tasks.
- `spark.nomad.driver.networkMBits` `(string: "1")` - Specifies the network
bandwidth that Nomad should allocate to driver tasks.
- `spark.nomad.driver.retryAttempts` `(string: "5")` - Specifies the number of
times that Nomad should retry driver task groups upon failure.
- `spark.nomad.driver.retryDelay` `(string: "15s")` - Specifies the length of
time that Nomad should wait before retrying driver task groups upon failure.
- `spark.nomad.driver.retryInterval` `(string: "1d")` - Specifies Nomad's retry
interval for driver task groups.
- `spark.nomad.executor.cpu` `(string: "1000")` - Specifies the CPU in MHz that
should be reserved for executor tasks.
- `spark.nomad.executor.logMaxFileSize` `(string: "1m")` - Specifies the maximum
size that Nomad should use for executor task log files.
- `spark.nomad.executor.logMaxFiles` `(string: "5")` - Specifies the number of
log files that Nomad should keep for executor tasks.
- `spark.nomad.executor.networkMBits` `(string: "1")` - Specifies the network
bandwidth that Nomad should allocate to executor tasks.
- `spark.nomad.executor.retryAttempts` `(string: "5")` - Specifies the number of
times that Nomad should retry executor task groups upon failure.
- `spark.nomad.executor.retryDelay` `(string: "15s")` - Specifies the length of
time that Nomad should wait before retrying executor task groups upon failure.
- `spark.nomad.executor.retryInterval` `(string: "1d")` - Specifies Nomad's retry
interval for executor task groups.
- `spark.nomad.job.template` `(string: nil)` - Specifies the path to a JSON file
containing a Nomad job to use as a template. This can also be set with
`spark-submit's --nomad-template` parameter.
- `spark.nomad.namespace` `(string: nil)` - Specifies the namespace to use. This
falls back first to the NOMAD_NAMESPACE environment variable and then to Nomad's
default namespace.
- `spark.nomad.priority` `(string: nil)` - Specifies the priority for the
Nomad job.
- `spark.nomad.region` `(string: dynamic)` - Specifies the Nomad region to use.
This property defaults to the region of the first Nomad server contacted.
- `spark.nomad.shuffle.cpu` `(string: "1000")` - Specifies the CPU in MHz that
should be reserved for shuffle service tasks.
- `spark.nomad.shuffle.logMaxFileSize` `(string: "1m")` - Specifies the maximum
size that Nomad should use for shuffle service task log files.
- `spark.nomad.shuffle.logMaxFiles` `(string: "5")` - Specifies the number of
log files that Nomad should keep for shuffle service tasks.
- `spark.nomad.shuffle.memory` `(string: "256m")` - Specifies the memory that
Nomad should allocate for the shuffle service tasks.
- `spark.nomad.shuffle.networkMBits` `(string: "1")` - Specifies the network
bandwidth that Nomad should allocate to shuffle service tasks.
- `spark.nomad.sparkDistribution` `(string: nil)` - Specifies the location of
the Spark distribution archive file to use.
- `spark.nomad.tls.caCert` `(string: nil)` - Specifies the path to a `.pem` file
containing the certificate authority that should be used to validate the Nomad
server's TLS certificate.
- `spark.nomad.tls.cert` `(string: nil)` - Specifies the path to a `.pem` file
containing the TLS certificate to present to the Nomad server.
- `spark.nomad.tls.trustStorePassword` `(string: nil)` - Specifies the path to a
`.pem` file containing the private key corresponding to the certificate in
[spark.nomad.tls.cert](#spark-nomad-tls-cert).

View File

@ -1,15 +1,15 @@
---
layout: "guides"
page_title: "Apache Spark Integration - Customizing Applications"
sidebar_current: "guides-spark-customizing"
sidebar_current: "guides-analytical-workloads-spark-customizing"
description: |-
Learn how to customize the Nomad job that is created to run a Spark
application.
---
# Customizing Applications
There are two ways to customize the Nomad job that Spark creates to run an
application:
- Use the default job template and set configuration properties
@ -17,8 +17,8 @@ application:
## Using the Default Job Template
The Spark integration will use a generic job template by default. The template
includes groups and tasks for the driver, executors and (optionally) the
[shuffle service](/guides/spark/dynamic.html). The job itself and the tasks that
are created have the `spark.nomad.role` meta value defined accordingly:
@ -45,7 +45,7 @@ job "structure" {
}
}
# Shuffle service tasks are only added when enabled (as it must be when
# using dynamic allocation)
task "shuffle-service" {
meta {
@ -56,38 +56,38 @@ job "structure" {
}
```
The default template can be customized indirectly by explicitly [setting
configuration properties](/guides/spark/configuration.html).
## Using a Custom Job Template
An alternative to using the default template is to set the
`spark.nomad.job.template` configuration property to the path of a file
containing a custom job template. There are two important considerations:
* The template must use the JSON format. You can convert an HCL jobspec to
JSON by running `nomad job run -output <job.nomad>` (see the example after this list).
* `spark.nomad.job.template` should be set to a path on the submitting
machine, not to a URL (even in cluster mode). The template does not need to
be accessible to the driver or executors.
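A short sketch of the conversion workflow (the file names are placeholders):

```shell
# Convert an HCL jobspec into the JSON form expected by the integration.
$ nomad job run -output template.nomad > template.json

# Reference the JSON template when submitting the application.
$ spark-submit \
    --class org.apache.spark.examples.SparkPi \
    --master nomad \
    --conf spark.nomad.job.template=/path/to/template.json \
    https://example.com/spark-examples.jar 100
```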
Using a job template you can override Spark's default resource utilization, add
additional metadata or constraints, set environment variables, add sidecar
tasks and utilize the Consul and Vault integration. The template does
not need to be a complete Nomad job specification, since Spark will add
everything necessary to run your application. For example, your template
might set `job` metadata, but not contain any task groups, making it an
incomplete Nomad job specification but still a valid template to use with Spark.
To customize the driver task group, include a task group in your template that
has a task that contains a `spark.nomad.role` meta value set to `driver`.
To customize the executor task group, include a task group in your template that
has a task that contains a `spark.nomad.role` meta value set to `executor` or
`shuffle`.
The following template adds a `meta` value at the job level and an environment
variable to the executor task group:
```hcl
@ -122,5 +122,5 @@ The order of precedence for customized settings is as follows:
## Next Steps
Learn how to [allocate resources](/guides/spark/resource.html) for your Spark
applications.

View File

@ -1,25 +1,25 @@
---
layout: "guides"
page_title: "Apache Spark Integration - Dynamic Executors"
sidebar_current: "guides-spark-dynamic"
sidebar_current: "guides-analytical-workloads-spark-dynamic"
description: |-
Learn how to dynamically scale Spark executors based on the queue of pending
tasks.
---
# Dynamically Allocate Spark Executors
By default, the Spark application will use a fixed number of executors. Setting
`spark.dynamicAllocation.enabled` to `true` enables Spark to add and remove executors
during execution depending on the number of Spark tasks scheduled to run. As
described in [Dynamic Resource Allocation](http://spark.apache.org/docs/latest/job-scheduling.html#dynamic-resource-allocation), dynamic allocation requires that `spark.shuffle.service.enabled` be set to `true`.
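As a sketch, a submission with dynamic allocation enabled might look like the following (the JAR URL is a placeholder; both `--conf` settings are standard Spark properties, and `--master nomad` assumes the Nomad-enabled fork):

```shell
$ spark-submit \
    --class org.apache.spark.examples.SparkPi \
    --master nomad \
    --deploy-mode cluster \
    --conf spark.dynamicAllocation.enabled=true \
    --conf spark.shuffle.service.enabled=true \
    https://example.com/spark-examples.jar 100
```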
On Nomad, this adds an additional shuffle service task to the executor
task group. This results in a one-to-one mapping of executors to shuffle
services.
When the executor exits, the shuffle service continues running so that it can
serve any results produced by the executor. Due to the nature of resource
allocation in Nomad, the resources allocated to the executor tasks are not
freed until the shuffle service (and the application) has finished.

View File

@ -1,23 +1,23 @@
---
layout: "guides"
page_title: "Apache Spark Integration - Using HDFS"
sidebar_current: "guides-spark-hdfs"
sidebar_current: "guides-analytical-workloads-spark-hdfs"
description: |-
Learn how to deploy HDFS on Nomad and integrate it with Spark.
---
# Using HDFS
[HDFS](https://en.wikipedia.org/wiki/Apache_Hadoop#Hadoop_distributed_file_system)
is a distributed, replicated and scalable file system written for the Hadoop
framework. Spark was designed to read from and write to HDFS, since it is
common for Spark applications to perform data-intensive processing over large
datasets. HDFS can be deployed as its own Nomad job.
## Running HDFS on Nomad
A sample HDFS job file can be found [here](https://github.com/hashicorp/nomad/blob/master/terraform/examples/spark/hdfs.nomad).
It has two task groups, one for the HDFS NameNode and one for the
DataNodes. Both task groups use a [Docker image](https://github.com/hashicorp/nomad/tree/master/terraform/examples/spark/docker/hdfs) that includes Hadoop:
```hcl
@ -35,7 +35,7 @@ DataNodes. Both task groups use a [Docker image](https://github.com/hashicorp/no
config {
image = "rcgenova/hadoop-2.7.3"
command = "bash"
args = [ "-c", "hdfs namenode -format && exec hdfs namenode
args = [ "-c", "hdfs namenode -format && exec hdfs namenode
-D fs.defaultFS=hdfs://${NOMAD_ADDR_ipc}/ -D dfs.permissions.enabled=false" ]
network_mode = "host"
port_map {
@ -65,7 +65,7 @@ DataNodes. Both task groups use a [Docker image](https://github.com/hashicorp/no
}
```
The NameNode task registers itself in Consul as `hdfs`. This enables the
DataNodes to generically reference the NameNode:
```hcl
@ -77,7 +77,7 @@ DataNodes to generically reference the NameNode:
operator = "distinct_hosts"
value = "true"
}
task "DataNode" {
driver = "docker"
@ -116,10 +116,10 @@ DataNodes to generically reference the NameNode:
}
```
Another viable option for the DataNode task group is to use a dedicated
[system](/docs/schedulers.html#system) job.
This will deploy a DataNode to every client node in the system, which may or may
not be desirable depending on your use case.
The HDFS job can be deployed using the `nomad job run` command:
@ -129,11 +129,11 @@ $ nomad job run hdfs.nomad
## Production Deployment Considerations
A production deployment will typically have redundant NameNodes in an
active/passive configuration (which requires ZooKeeper). See [HDFS High
Availability](https://hadoop.apache.org/docs/stable/hadoop-project-dist/hadoop-hdfs/HDFSHighAvailabilityWithNFS.html).
## Next Steps
Learn how to [monitor the output](/guides/spark/monitoring.html) of your
Spark applications.

View File

@ -1,7 +1,7 @@
---
layout: "guides"
page_title: "Apache Spark Integration - Monitoring Output"
sidebar_current: "guides-spark-monitoring"
sidebar_current: "guides-analytical-workloadsspark-monitoring"
description: |-
Learn how to monitor Spark application output.
---
@ -9,28 +9,28 @@ description: |-
# Monitoring Spark Application Output
By default, `spark-submit` in `cluster` mode will submit your application
to the Nomad cluster and return immediately. You can use the
[spark.nomad.cluster.monitorUntil](/guides/spark/configuration.html#spark-nomad-cluster-monitoruntil) configuration property to have
`spark-submit` monitor the job continuously. Note that, with this flag set,
killing `spark-submit` will *not* stop the Spark application, since it will be
running independently in the Nomad cluster.
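For example, the following sketch keeps `spark-submit` attached until the application finishes and tails the driver output (the JAR URL is a placeholder; `--master nomad` assumes the Nomad-enabled fork):

```shell
$ spark-submit \
    --class org.apache.spark.examples.SparkPi \
    --master nomad \
    --deploy-mode cluster \
    --conf spark.nomad.cluster.monitorUntil=complete \
    https://example.com/spark-examples.jar 100
```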
## Spark UI
In cluster mode, if `spark.ui.enabled` is set to `true` (as it is by default), the
Spark web UI will be dynamically allocated a port. The Web UI will be exposed by
Nomad as a service, and the UI's `URL` will appear in the Spark driver's log. By
default, the Spark web UI will terminate when the application finishes. This can
be problematic when debugging an application. You can delay termination by
setting `spark.ui.stopDelay` (e.g. `5m` for 5 minutes). Note that this will
cause the driver process to continue to run. You can force termination
immediately on the “Jobs” page of the web UI.
## Spark History Server
It is possible to reconstruct the web UI of a completed application using
Spark's [history server](https://spark.apache.org/docs/latest/monitoring.html#viewing-after-the-fact).
The history server requires the event log to have been written to an accessible
location like [HDFS](/guides/spark/hdfs.html) or Amazon S3.
Sample history server job file:
@ -45,7 +45,7 @@ job "spark-history-server" {
task "history-server" {
driver = "docker"
config {
image = "barnardb/spark"
command = "/spark/spark-2.1.0-bin-nomad/bin/spark-class"
@ -85,7 +85,7 @@ job "spark-history-server" {
The job file above can also be found [here](https://github.com/hashicorp/nomad/blob/master/terraform/examples/spark/spark-history-server-hdfs.nomad).
To run the history server, first [deploy HDFS](/guides/spark/hdfs.html) and then
create a directory in HDFS to store events:
```shell
@ -104,10 +104,10 @@ You can get the private IP for the history server with a Consul DNS lookup:
$ dig spark-history.service.consul
```
Find the public IP that corresponds to the private IP returned by the `dig`
command above. You can access the history server at http://PUBLIC_IP:18080.
Use the `spark.eventLog.enabled` and `spark.eventLog.dir` configuration
properties in `spark-submit` to log events for a given application:
```shell
@ -126,9 +126,9 @@ $ spark-submit \
## Logs
Nomad clients collect the `stderr` and `stdout` of running tasks. The CLI or the
HTTP API can be used to inspect logs, as documented in
[Accessing Logs](/guides/operating-a-job/accessing-logs.html).
In cluster mode, the `stderr` and `stdout` of the `driver` application can be
accessed in the same way. The [Log Shipper Pattern](/guides/operating-a-job/accessing-logs.html#log-shipper-pattern) uses sidecar tasks to forward logs to a central location. This
can be done using a job template as follows:

View File

@ -1,35 +1,35 @@
---
layout: "guides"
page_title: "Apache Spark Integration - Getting Started"
sidebar_current: "guides-spark-pre"
sidebar_current: "guides-analytical-workloads-spark-pre"
description: |-
Get started with the Nomad/Spark integration.
---
# Getting Started
To get started, you can use Nomad's example Terraform configuration to
automatically provision an environment in AWS, or you can manually provision a
cluster.
## Provision a Cluster in AWS
Nomad's [Terraform configuration](https://github.com/hashicorp/nomad/tree/master/terraform)
can be used to quickly provision a Spark-enabled Nomad environment in
AWS. The embedded [Spark example](https://github.com/hashicorp/nomad/tree/master/terraform/examples/spark)
provides for a quickstart experience that can be used in conjunction with
this guide. When you have a cluster up and running, you can proceed to
[Submitting applications](/guides/spark/submit.html).
## Manually Provision a Cluster
To manually provision a cluster, see the Nomad
[Getting Started](/intro/getting-started/install.html) guide. There are two
basic prerequisites to using the Spark integration once you have a cluster up
and running:
- Access to a [Spark distribution](https://nomad-spark.s3.amazonaws.com/spark-2.1.0-bin-nomad.tgz)
built with Nomad support. This is required for the machine that will submit
applications as well as the Nomad tasks that will run the Spark executors.
- A Java runtime environment (JRE) for the submitting machine and the executors.
@ -38,15 +38,15 @@ The subsections below explain further.
### Configure the Submitting Machine
To run Spark applications on Nomad, the submitting machine must have access to
the cluster and have the Nomad-enabled Spark distribution installed. The code
snippets below walk through installing Java and Spark on Ubuntu:
Install Java:
```shell
$ sudo add-apt-repository -y ppa:openjdk-r/ppa
$ sudo apt-get update
$ sudo apt-get install -y openjdk-8-jdk
$ JAVA_HOME=$(readlink -f /usr/bin/java | sed "s:bin/java::")
```
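Install the Nomad-enabled Spark distribution referenced above (a sketch; the install location under `/usr/local` is an arbitrary choice, and the archive is assumed to unpack to `spark-2.1.0-bin-nomad`):

```shell
$ wget https://nomad-spark.s3.amazonaws.com/spark-2.1.0-bin-nomad.tgz
$ sudo tar -xzf spark-2.1.0-bin-nomad.tgz -C /usr/local
$ export PATH=$PATH:/usr/local/spark-2.1.0-bin-nomad/bin
```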
@ -68,10 +68,10 @@ $ export NOMAD_ADDR=http://NOMAD_SERVER_IP:4646
### Executor Access to the Spark Distribution
When running on Nomad, Spark creates Nomad tasks to run executors for use by the
application's driver program. The executor tasks need access to a JRE, a Spark
distribution built with Nomad support, and (in cluster mode) the Spark
application itself. By default, Nomad will only place Spark executors on client
nodes that have the Java runtime installed (version 7 or higher).
In this example, the Spark distribution and the Spark application JAR file are
@ -89,21 +89,21 @@ $ spark-submit \
### Using a Docker Image
An alternative to installing the JRE on every client node is to set the
[spark.nomad.dockerImage](/guides/spark/configuration.html#spark-nomad-dockerimage)
configuration property to the URL of a Docker image that has the Java runtime
installed. If set, Nomad will use the `docker` driver to run Spark executors in
a container created from the image. The
[spark.nomad.dockerAuth](/guides/spark/configuration.html#spark-nomad-dockerauth)
configuration property can be set to a JSON object to provide Docker repository
authentication configuration.
When using a Docker image, both the Spark distribution and the application
itself can be included (in which case local URLs can be used for `spark-submit`).
Here, we include [spark.nomad.dockerImage](/guides/spark/configuration.html#spark-nomad-dockerimage)
and use local paths for
[spark.nomad.sparkDistribution](/guides/spark/configuration.html#spark-nomad-sparkdistribution)
and the application JAR file:
```shell

View File

@ -1,15 +1,15 @@
---
layout: "guides"
page_title: "Apache Spark Integration - Resource Allocation"
sidebar_current: "guides-spark-resource"
sidebar_current: "guides-analytical-workloads-spark-resource"
description: |-
Learn how to configure resource allocation for your Spark applications.
---
# Resource Allocation
Resource allocation can be configured using a job template or through
configuration properties. Here is a sample template in HCL syntax (this would
need to be converted to JSON):
```hcl
@ -36,45 +36,45 @@ Resource-related configuration properties are covered below.
## Memory
The standard Spark memory properties will be propagated to Nomad to control
task resource allocation: `spark.driver.memory` (set by `--driver-memory`) and
`spark.executor.memory` (set by `--executor-memory`). You can additionally specify
[spark.nomad.shuffle.memory](/guides/spark/configuration.html#spark-nomad-shuffle-memory)
to control how much memory Nomad allocates to shuffle service tasks.
## CPU
Spark sizes its thread pools and allocates tasks based on the number of CPU
cores available. Nomad manages CPU allocation in terms of processing speed
rather than number of cores. When running Spark on Nomad, you can control how
much CPU share Nomad will allocate to tasks using the
[spark.nomad.driver.cpu](/guides/spark/configuration.html#spark-nomad-driver-cpu)
(set by `--driver-cpu`),
[spark.nomad.executor.cpu](/guides/spark/configuration.html#spark-nomad-executor-cpu)
(set by `--executor-cpu`) and
[spark.nomad.shuffle.cpu](/guides/spark/configuration.html#spark-nomad-shuffle-cpu)
properties. When running on Nomad, executors will be configured to use one core
by default, meaning they will only pull a single 1-core task at a time. You can
set the `spark.executor.cores` property (set by `--executor-cores`) to allow
more tasks to be executed concurrently on a single executor.
## Network
Nomad does not restrict the network bandwidth of running tasks, but it does
allocate a non-zero number of Mbit/s to each task and uses this when bin packing
task groups onto Nomad clients. Spark defaults to requesting the minimum of 1
Mbit/s per task, but you can change this with the
[spark.nomad.driver.networkMBits](/guides/spark/configuration.html#spark-nomad-driver-networkmbits),
[spark.nomad.executor.networkMBits](/guides/spark/configuration.html#spark-nomad-executor-networkmbits), and
[spark.nomad.shuffle.networkMBits](/guides/spark/configuration.html#spark-nomad-shuffle-networkmbits)
properties.
## Log rotation
Nomad performs log rotation on the `stdout` and `stderr` of its tasks. You can
configure the number and size of log files it will keep for driver and
executor task groups using
[spark.nomad.driver.logMaxFiles](/guides/spark/configuration.html#spark-nomad-driver-logmaxfiles)
and [spark.nomad.executor.logMaxFiles](/guides/spark/configuration.html#spark-nomad-executor-logmaxfiles).
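Pulling the properties above together, a resource-tuned submission might look like the following sketch (all values and the JAR URL are illustrative, and `--master nomad` assumes the Nomad-enabled fork):

```shell
$ spark-submit \
    --class org.apache.spark.examples.SparkPi \
    --master nomad \
    --deploy-mode cluster \
    --driver-memory 1g \
    --executor-memory 2g \
    --conf spark.executor.cores=2 \
    --conf spark.nomad.executor.cpu=2000 \
    --conf spark.nomad.executor.networkMBits=10 \
    --conf spark.nomad.executor.logMaxFiles=3 \
    https://example.com/spark-examples.jar 100
```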
## Next Steps

View File

@ -1,24 +1,21 @@
---
layout: "guides"
page_title: "Running Apache Spark on Nomad"
sidebar_current: "guides-spark-spark"
sidebar_current: "guides-analytical-workloads-spark-intro"
description: |-
Learn how to run Apache Spark on a Nomad cluster.
---
# Running Apache Spark on Nomad
Nomad is well-suited for analytical workloads, given its [performance
characteristics](https://www.hashicorp.com/c1m/) and first-class support for
[batch scheduling](/docs/schedulers.html).
Apache Spark is a popular data processing engine/framework that has been
architected to use third-party schedulers. The Nomad ecosystem includes a
[fork of Apache Spark](https://github.com/hashicorp/nomad-spark) that natively
integrates Nomad as a cluster manager and scheduler for Spark. When running on
Nomad, the Spark executors that run Spark tasks for your application, and
optionally the application driver itself, run as Nomad tasks in a Nomad job.
## Next Steps
The links in the sidebar contain detailed information about specific aspects of
the integration, beginning with [Getting Started](/guides/spark/pre.html).

View File

@ -1,41 +1,41 @@
---
layout: "guides"
page_title: "Apache Spark Integration - Submitting Applications"
sidebar_current: "guides-spark-submit"
sidebar_current: "guides-analytical-workloads-spark-submit"
description: |-
Learn how to submit Spark jobs that run on a Nomad cluster.
---
# Submitting Applications
The [`spark-submit`](https://spark.apache.org/docs/latest/submitting-applications.html)
script located in Spark's `bin` directory is used to launch applications on a
cluster. Spark applications can be submitted to Nomad in either `client` mode
or `cluster` mode.
## Client Mode
In `client` mode, the application driver runs on a machine that is not
necessarily in the Nomad cluster. The driver's `SparkContext` creates a Nomad
job to run Spark executors. The executors connect to the driver and run Spark
tasks on behalf of the application. When the driver's `SparkContext` is stopped,
the executors are shut down. Note that the machine running the driver or
`spark-submit` needs to be reachable from the Nomad clients so that the
executors can connect to it.
In `client` mode, application resources need to start out present on the
submitting machine, so JAR files (both the primary JAR and those added with the
`--jars` option) cannot be specified using `http:` or `https:` URLs. You can
either use files on the submitting machine (either as raw paths or `file:` URLs),
or use `local:` URLs to indicate that the files are independently available on
both the submitting machine and all of the Nomad clients where the executors
might run.
In this mode, the `spark-submit` invocation doesn't return until the application
has finished running, and killing the `spark-submit` process kills the
application.
In this example, the `spark-submit` command is used to run the `SparkPi` sample
application:
```shell
@ -48,22 +48,22 @@ $ spark-submit --class org.apache.spark.examples.SparkPi \
## Cluster Mode
In cluster mode, the `spark-submit` process creates a Nomad job to run the Spark
application driver itself. The driver's `SparkContext` then adds Spark executors
to the Nomad job. The executors connect to the driver and run Spark tasks on
behalf of the application. When the driver's `SparkContext` is stopped, the
executors are shut down.
In cluster mode, application resources need to be hosted somewhere accessible
to the Nomad cluster, so JARs (both the primary JAR and those added with the
`--jars` option) can't be specified using raw paths or `file:` URLs. You can either
use `http:` or `https:` URLs, or use `local:` URLs to indicate that the files are
independently available on all of the Nomad clients where the driver and executors
might run.
Note that in cluster mode, the Nomad master URL needs to be routable from both
the submitting machine and the Nomad client node that runs the driver. If the
Nomad cluster is integrated with Consul, you may want to use a DNS name for the
Nomad service served by Consul.
For example, to submit an application in cluster mode:
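A sketch of such a submission (the Consul DNS name, distribution URL, and application JAR are placeholders; the `nomad:` master URL form assumes the Nomad-enabled fork):

```shell
$ spark-submit \
    --class org.apache.spark.examples.SparkPi \
    --master nomad:http://nomad.service.consul:4646 \
    --deploy-mode cluster \
    --conf spark.nomad.sparkDistribution=https://nomad-spark.s3.amazonaws.com/spark-2.1.0-bin-nomad.tgz \
    https://example.com/spark-examples.jar 100
```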

View File

@ -0,0 +1,14 @@
---
layout: "guides"
page_title: "Governance & Policy on Nomad"
sidebar_current: "guides-governance-and-policy"
description: |-
Best practices for operating Nomad in a multi-team setting with namespaces, resource quotas, and Sentinel.
---
# Governance & Policy
This section provides some best practices and guidance for operating Nomad securely in a multi-team setting through features such as namespaces, resource quotas, and Sentinel.
Please navigate to the appropriate sub-sections for more information.

View File

@ -1,7 +1,7 @@
---
layout: "guides"
page_title: "Namespaces"
sidebar_current: "guides-security-namespaces"
sidebar_current: "guides-governance-and-policy-namespaces"
description: |-
Nomad Enterprise provides support for namespaces, which allow jobs and their
associated objects to be segmented from each other and other users of the
@ -10,8 +10,8 @@ description: |-
# Namespaces
[Nomad Enterprise](https://www.hashicorp.com/products/nomad/) has support for
namespaces, which allow jobs and their associated objects to be segmented from
each other and other users of the cluster.
~> **Enterprise Only!** This functionality only exists in Nomad Enterprise.
@ -35,7 +35,7 @@ negatively impacting other teams and applications sharing the cluster.
## Namespaced Objects
Nomad places all jobs and their derived objects into namespaces. These include
jobs, allocations, deployments, and evaluations.
Nomad does not namespace objects that are shared across multiple namespaces.
This includes nodes, [ACL policies](/guides/security/acl.html), [Sentinel

View File

@ -1,7 +1,7 @@
---
layout: "guides"
page_title: "Resource Quotas"
sidebar_current: "guides-security-quotas"
sidebar_current: "guides-governance-and-policy-quotas"
description: |-
Nomad Enterprise provides support for resource quotas, which allow operators
to restrict the aggregate resource usage of namespaces.
@ -9,8 +9,8 @@ description: |-
# Resource Quotas
[Nomad Enterprise](https://www.hashicorp.com/products/nomad/) provides support
for resource quotas, which allow operators to restrict the aggregate resource
usage of namespaces.
~> **Enterprise Only!** This functionality only exists in Nomad Enterprise.

View File

@ -1,7 +1,7 @@
---
layout: "guides"
page_title: "Sentinel Job Object"
sidebar_current: "guides-security-sentinel-job"
sidebar_current: "guides-governance-and-policy-sentinel-job"
description: |-
Job objects can be introspected to apply fine grained Sentinel policies.
---
@ -20,4 +20,3 @@ Sentinel convention for identifiers is lower case and separated by underscores.
| `job.ParentID` | `job.parent_id` |
| `job.TaskGroups` | `job.task_groups` |
| `job.TaskGroups[0].EphemeralDisk.SizeMB`| `job.task_groups[0].ephemeral_disk.size_mb` |

View File

@ -1,7 +1,7 @@
---
layout: "guides"
page_title: "Sentinel Policies"
sidebar_current: "guides-security-sentinel"
sidebar_current: "guides-governance-and-policy-sentinel"
description: |-
Nomad integrates with Sentinel for fine-grained policy enforcement. Sentinel allows operators to express their policies as code, and have their policies automatically enforced. This allows operators to define a "sandbox" and restrict actions to only those compliant with policy. The Sentinel integration builds on the ACL System.
---
@ -206,4 +206,3 @@ The following objects are made available in the `submit-job` scope:
| `job` | The job being submitted |
See the [Sentinel Job Object](/guides/security/sentinel/job.html) for details on the fields that are available.

View File

@ -1,7 +1,7 @@
---
layout: "guides"
page_title: "Installing Nomad"
sidebar_current: "guides-operations-installing"
sidebar_current: "guides-install"
description: |-
Learn how to install Nomad.
---

View File

@ -1,7 +1,7 @@
---
layout: "guides"
page_title: "Nomad Deployment Guide"
sidebar_current: "guides-operations-deployment-guide"
sidebar_current: "guides-install-production-deployment-guide"
description: |-
This deployment guide covers the steps required to install and
configure a single HashiCorp Nomad cluster as defined in the
@ -9,7 +9,7 @@ description: |-
ea_version: 0.9
---
# Nomad Reference Install Guide
This deployment guide covers the steps required to install and configure a single HashiCorp Nomad cluster as defined in the [Nomad Reference Architecture](/guides/operations/reference-architecture.html).

View File

@ -0,0 +1,38 @@
---
layout: "guides"
page_title: "Installing Nomad for Production"
sidebar_current: "guides-install-production"
description: |-
Learn how to install Nomad for Production.
---
# Installing Nomad for Production
This section covers how to install Nomad for production.
There are multiple steps to cover for a successful Nomad deployment:
## Installing Nomad
This page lists the two primary methods for installing Nomad and how to verify a successful installation.
Please refer to the [Installing Nomad](/guides/install/index.html) sub-section.
## Hardware Requirements
This page details the recommended machine resources (instances), port requirements, and network topology for Nomad.
Please refer to the [Hardware Requirements](/guides/install/production/requirements.html) sub-section.
## Setting Nodes with Nomad Agent
These pages explain the Nomad agent process and how to set up the server and client nodes in the cluster.
Please refer to the [Set Server & Client Nodes](/guides/install/production/nomad-agent.html) and [Nomad Agent documentation](/docs/commands/agent.html) pages.
## Reference Architecture
This document provides recommended practices and a reference architecture for HashiCorp Nomad production deployments. This reference architecture conveys a general architecture that should be adapted to accommodate the specific needs of each implementation.
Please refer to the [Reference Architecture](/guides/install/production/reference-architecture.html) sub-section.
## Install Guide Based on Reference Architecture
This guide provides an end-to-end walkthrough of the steps required to install a single production-ready Nomad cluster as defined in the Reference Architecture section.
Please refer to the [Reference Install Guide](/guides/install/production/deployment-guide.html) sub-section.

View File

@ -1,13 +1,13 @@
---
layout: "guides"
page_title: "Nomad Agent"
sidebar_current: "guides-operations-agent"
sidebar_current: "guides-install-production-nomad-agent"
description: |-
The Nomad agent is a long running process which can be used either in
a client or server mode.
---
# Setting Nodes with Nomad Agent
The Nomad agent is a long running process which runs on every machine that
is part of the Nomad cluster. The behavior of the agent depends on if it is

View File

@ -1,7 +1,7 @@
---
layout: "guides"
page_title: "Nomad Reference Architecture"
sidebar_current: "guides-operations-reference-architecture"
sidebar_current: "guides-install-production-reference-architecture"
description: |-
This document provides recommended practices and a reference
architecture for HashiCorp Nomad production deployments.

View File

@ -1,7 +1,7 @@
---
layout: "guides"
page_title: "Hardware Requirements"
sidebar_current: "guides-operations-requirements"
sidebar_current: "guides-install-production-requirements"
description: |-
Learn about Nomad client and server requirements such as memory and CPU
recommendations, network topologies, and more.
@ -83,7 +83,7 @@ you should tune the OS to avoid this overlap.
On Linux this can be checked and set as follows:
```
$ cat /proc/sys/net/ipv4/ip_local_port_range
32768 60999
$ echo "49152 65535" > /proc/sys/net/ipv4/ip_local_port_range
$ echo "49152 65535" > /proc/sys/net/ipv4/ip_local_port_range
```

View File

@ -0,0 +1,32 @@
---
layout: "guides"
page_title: "Installing Nomad for QuickStart"
sidebar_current: "guides-install-quickstart"
description: |-
Learn how to install Nomad for sandbox.
---
# Quickstart
This page lists multiple methods for installing Nomad in a sandbox environment.
These installations are designed to get you started with Nomad easily and should be used only for experimentation purposes. If you are looking to install Nomad in production, please refer to our [Production Installation](/guides/install/production/index.html) guide.
## Local
Install Nomad on your local machine.
* [Vagrant](/intro/getting-started/install.html)
## Cloud
Install Nomad on the public cloud.
* AWS
* [CloudFormation](https://aws.amazon.com/quickstart/architecture/nomad/)
* [Terraform](https://github.com/hashicorp/nomad/blob/master/terraform/aws/README.md)
* Azure
* [Terraform](https://github.com/hashicorp/nomad/tree/master/terraform/azure)
## Katacoda
Experiment with Nomad in your browser via Katacoda's interactive learning platform.
* [Introduction to Nomad](https://www.katacoda.com/hashicorp/scenarios/nomad-introduction)
* [Nomad Playground](https://katacoda.com/hashicorp/scenarios/playground)

View File

@ -1,18 +1,18 @@
---
layout: "guides"
page_title: "Consul Integration"
sidebar_current: "guides-operations-consul-integration"
sidebar_current: "guides-integrations-consul"
description: |-
Learn how to integrate Nomad with Consul and add service discovery to jobs
---
# Consul Integration
[Consul][] is a tool for discovering and configuring services in your
infrastructure. Consul's key features include service discovery, health checking,
a KV store, and robust support for multi-datacenter deployments. Nomad's integration
with Consul enables automatic clustering, built-in service registration, and
dynamic rendering of configuration files and environment variables. The sections
below describe the integration in more detail.
## Configuration
@ -27,9 +27,9 @@ configuration.
## Automatic Clustering with Consul
Nomad servers and clients will be automatically informed of each other's
existence when a running Consul cluster already exists and the Consul agent is
installed and configured on each host. Please see the [Automatic Clustering with
Consul](/guides/operations/cluster/automatic.html) guide for more information.
## Service Discovery
@ -45,12 +45,12 @@ To configure a job to register with service discovery, please see the
## Dynamic Configuration
Nomad's job specification includes a [`template` stanza](/docs/job-specification/template.html)
that utilizes a Consul ecosystem tool called [Consul Template](https://github.com/hashicorp/consul-template). This mechanism creates a convenient way to ship configuration files
that are populated from environment variables, Consul data, Vault secrets, or just
general configurations within a Nomad task.
For more information on Nomad's template stanza and how it leverages Consul Template,
please see the [`template` job specification documentation](/docs/job-specification/template.html).
## Assumptions

View File

@ -0,0 +1,13 @@
---
layout: "guides"
page_title: "Nomad HashiStack Integrations"
sidebar_current: "guides-integrations"
description: |-
This section features Nomad's integrations with Consul and Vault.
---
# HashiCorp Integrations
Nomad integrates seamlessly with Consul and Vault for service discovery and secrets management.
Please navigate the appropriate sub-sections for more information.
View File
@ -1,13 +1,13 @@
---
layout: "guides"
page_title: "Vault Integration and Retrieving Dynamic Secrets"
sidebar_current: "guides-operations-vault-integration"
sidebar_current: "guides-integrations-vault"
description: |-
Learn how to deploy an application in Nomad and retrieve dynamic credentials
by integrating with Vault.
---
# Vault Integration and Retrieving Dynamic Secrets
# Vault Integration
Nomad integrates seamlessly with [Vault][vault] and allows your application to
retrieve dynamic credentials for various tasks. In this guide, you will deploy a
@ -16,7 +16,7 @@ display data from a table to the user.
## Reference Material
- [Vault Integration Docs Page][vault-integration]
- [Vault Integration Documentation][vault-integration]
- [Nomad Template Stanza Integration with Vault][nomad-template-vault]
- [Secrets Task Directory][secrets-task-directory]
@ -261,11 +261,11 @@ Nomad Server's configuration file located at `/etc/nomad.d/nomad.hcl`. Provide
the token you generated in the previous step in the `vault` stanza of your Nomad
server configuration. The token can also be provided as an environment variable
called `VAULT_TOKEN`. Be sure to specify the `nomad-cluster-role` in the
[create_from_role][create-from-role] option. If using
[Vault Namespaces](https://www.vaultproject.io/docs/enterprise/namespaces/index.html),
modify both the client and server configuration to include the namespace;
alternatively, it can be provided in the environment variable `VAULT_NAMESPACE`.
After following these steps and enabling Vault, the `vault` stanza in your Nomad
[create_from_role][create-from-role] option. If using
[Vault Namespaces](https://www.vaultproject.io/docs/enterprise/namespaces/index.html),
modify both the client and server configuration to include the namespace;
alternatively, it can be provided in the environment variable `VAULT_NAMESPACE`.
After following these steps and enabling Vault, the `vault` stanza in your Nomad
server configuration will be similar to what is shown below:
```hcl
@ -285,7 +285,7 @@ Restart the Nomad server
$ sudo systemctl restart nomad
```
NOTE: Nomad servers will renew the token automatically.
NOTE: Nomad servers will renew the token automatically.
Vault integration needs to be enabled on the client nodes as well, but this has
been configured for you already in this environment. You will see the `vault`
@ -348,7 +348,7 @@ job "postgres-nomad-demo" {
Run the job as shown below:
```shell
$ nomad run db.nomad
$ nomad run db.nomad
```
Verify the job is running with the following command:
@ -433,7 +433,7 @@ Recall from the previous step that we specified `accessdb` in the
CREATE USER "{{name}}" WITH ENCRYPTED PASSWORD '{{password}}' VALID UNTIL
'{{expiration}}';
GRANT USAGE ON ALL SEQUENCES IN SCHEMA public TO "{{name}}";
GRANT ALL PRIVILEGES ON ALL TABLES IN SCHEMA public TO "{{name}}";
GRANT ALL PRIVILEGES ON ALL TABLES IN SCHEMA public TO "{{name}}";
GRANT ALL ON SCHEMA public TO "{{name}}";
```
@ -499,7 +499,7 @@ path "database/creds/accessdb" {
Create the `access-tables` policy with the following command:
```shell
$ vault policy write access-tables access-tables-policy.hcl
$ vault policy write access-tables access-tables-policy.hcl
```
You should see the following output:
@ -603,7 +603,7 @@ There are a few key points to note here:
Use the following command to run the job:
```shell
$ nomad run web-app.nomad
$ nomad run web-app.nomad
```
### Step 15: Confirm the Application is Accessing the Database
View File
@ -10,6 +10,7 @@ description: |-
# Load Balancing
There are multiple approaches to set up load balancing across a Nomad cluster.
Most of these methods assume Consul is installed alongside Nomad (see [Load
Balancing Strategies for
Consul](https://www.hashicorp.com/blog/load-balancing-strategies-for-consul)).
View File
@ -1,7 +1,7 @@
---
layout: "guides"
page_title: "Affinity"
sidebar_current: "guides-advanced-scheduling"
sidebar_current: "guides-operating-a-job-advanced-scheduling-affinity"
description: |-
The following guide walks the user through using the affinity stanza in Nomad.
---
@ -127,7 +127,7 @@ value for the [attribute][attributes] `${node.datacenter}`. We used the value `1
Run the Nomad job with the following command:
```shell
$ nomad run redis.nomad
$ nomad run redis.nomad
==> Monitoring evaluation "11388ef2"
Evaluation triggered by job "redis"
Allocation "0dfcf0ba" created: node "6b6e9518", group "cache1"
@ -180,7 +180,7 @@ be different).
```shell
$ nomad alloc status -verbose 0dfcf0ba
```
```
The resulting output will show the `Placement Metrics` section at the bottom.
```shell
@ -205,11 +205,10 @@ changes (use the `nomad alloc status` command as shown in the previous step).
[affinity-stanza]: /docs/job-specification/affinity.html
[alloc status]: /docs/commands/alloc/status.html
[attributes]: /docs/runtime/interpolation.html#node-variables-
[attributes]: /docs/runtime/interpolation.html#node-variables-
[constraint]: /docs/job-specification/constraint.html
[client-metadata]: /docs/configuration/client.html#meta
[node-status]: /docs/commands/node/status.html
[scheduling]: /docs/internals/scheduling/scheduling.html
[verbose]: /docs/commands/alloc/status.html#verbose
[weight]: /docs/job-specification/affinity.html#weight
View File
@ -1,7 +1,7 @@
---
layout: "guides"
page_title: "LXC"
sidebar_current: "guides-external-lxc"
sidebar_current: "guides-operating-a-job-external-lxc"
description: |-
Guide for using LXC external task driver plugin.
---
@ -19,7 +19,7 @@ containers. This guide walks through the steps involved in configuring a Nomad c
- Nomad [LXC][lxc-docs] external driver documentation
- Nomad LXC external driver [repo][lxc-driver-repo]
## Installation Instructions
## Installation Instructions
### Step 1: Install the `lxc` and `lxc-templates` Packages
@ -29,7 +29,7 @@ Before deploying an LXC workload, you will need to install the `lxc` and `lxc-te
sudo apt install -y lxc lxc-templates
```
### Step 2: Download and Install the LXC Driver
### Step 2: Download and Install the LXC Driver
External drivers must be placed in the [plugin_dir][plugin_dir] directory which
defaults to [`data_dir`][data_dir]`/plugins`. Make a directory called `plugins` in [data_dir][data_dir] (which is `/opt/nomad/data` in the example below) and download/place the [LXC driver][lxc_driver_download] in it. The following sequence of commands illustrates this process:
@ -37,7 +37,7 @@ defaults to [`data_dir`][data_dir]`/plugins`. Make a directory called `plugins`
```shell
$ sudo mkdir -p /opt/nomad/data/plugins
$ curl -O https://releases.hashicorp.com/nomad-driver-lxc/0.1.0-rc2/nomad-driver-lxc_0.1.0-rc2_linux_amd64.zip
$ unzip nomad-driver-lxc_0.1.0-rc2_linux_amd64.zip
$ unzip nomad-driver-lxc_0.1.0-rc2_linux_amd64.zip
Archive: nomad-driver-lxc_0.1.0-rc2_linux_amd64.zip
inflating: nomad-driver-lxc
$ sudo mv nomad-driver-lxc /opt/nomad/data/plugins
@ -140,7 +140,7 @@ ID Node ID Task Group Version Desired Status Created Modified
The LXC driver is enabled by default in the client configuration. In order to
provide additional options to the LXC plugin, add [plugin
options][lxc_plugin_options] `volumes_enabled` and `lxc_path` for the `lxc`
driver in the client's configuration file like in the following example:
driver in the client's configuration file like in the following example:
```hcl
plugin "nomad-driver-lxc" {
@ -155,7 +155,7 @@ plugin "nomad-driver-lxc" {
[data_dir]: /docs/configuration/index.html#data_dir
[linux-containers]: https://linuxcontainers.org/lxc/introduction/
[linux-containers-home]: https://linuxcontainers.org
[lxc_driver_download]: https://releases.hashicorp.com/nomad-driver-lxc
[lxc_driver_download]: https://releases.hashicorp.com/nomad-driver-lxc
[lxc-driver-repo]: https://github.com/hashicorp/nomad-driver-lxc
[lxc-docs]: /docs/drivers/external/lxc.html
[lxc-job]: https://github.com/hashicorp/nomad-education-content/blob/master/lxc.nomad
View File
@ -6,7 +6,14 @@ description: |-
Learn how to deploy and manage a Nomad Job.
---
# Job Lifecycle
# Deploying & Managing Applications
Developers deploy and manage their applications in Nomad via jobs.
This section provides some best practices and guidance for operating jobs under
Nomad. Please navigate the appropriate sub-sections for more information.
## Deploying
The general flow for operating a job in Nomad is:
@ -15,6 +22,8 @@ The general flow for operating a job in Nomad is:
1. Submit the job file to a Nomad server
1. (Optional) Review job status and logs
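
For example, a complete pass through this flow with the Nomad CLI might look
like the following (the job file name is illustrative):

```shell
$ nomad run example.nomad        # submit the job file to a Nomad server
$ nomad status example           # review job, task group, and allocation status
$ nomad alloc logs <alloc_id>    # (optional) inspect logs for a specific allocation
```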
## Updating
When updating a job, there are a number of built-in update strategies which may
be defined in the job file. The general flow for updating an existing job in
Nomad is:
@ -27,6 +36,3 @@ Nomad is:
Because the job file defines the update strategy (blue-green, rolling updates,
etc.), the workflow remains the same regardless of whether this is an initial
deployment or a long-running job.
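
As a rough sketch, a rolling update strategy is expressed directly in the job
file via the `update` stanza; the values below are illustrative:

```hcl
job "example" {
  update {
    # Replace one allocation at a time and wait for it to be healthy before
    # continuing; roll back automatically if the deployment fails.
    max_parallel     = 1
    min_healthy_time = "30s"
    healthy_deadline = "5m"
    auto_revert      = true
  }
}
```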
This section provides some best practices and guidance for operating jobs under
Nomad. Please navigate the appropriate sub-sections for more information.
View File
@ -1,13 +1,15 @@
---
layout: "guides"
page_title: "Security and Governance"
page_title: "Security"
sidebar_current: "guides-security"
description: |-
Learn how to use Nomad safely and securely in a multi-team setting.
---
# Security and Governance
# Security
The Nomad Security and Governance guides section provides best practices and
guidance for operating Nomad safely and securely in a multi-team setting. Please
navigate the appropriate sub-sections for more information.
The Nomad Security section provides best practices and guidance for securing
Nomad in an enterprise environment. Please navigate the appropriate
sub-sections for more information.
View File
@ -1,7 +1,7 @@
---
layout: "guides"
page_title: "Upgrading"
sidebar_current: "guides-operations-upgrade"
sidebar_current: "guides-upgrade"
description: |-
Learn how to upgrade Nomad.
---
@ -102,7 +102,7 @@ Use the same actions in step #2 above to confirm cluster health.
Following the successful upgrade of the servers you can now update your
clients using a similar process as the servers. You may either upgrade clients
in-place or start new nodes on the new version. See the [Workload Migration
in-place or start new nodes on the new version. See the [Workload Migration
Guide](/guides/operations/node-draining.html) for instructions on how to migrate running
allocations from the old nodes to the new nodes with the [`nomad node
drain`](/docs/commands/node/drain.html) command.
View File
@ -1,7 +1,7 @@
---
layout: "guides"
page_title: "Upgrade Guides"
sidebar_current: "guides-operations-upgrade-specific"
sidebar_current: "guides-upgrade-specific"
description: |-
Specific versions of Nomad may have additional information about the upgrade
process beyond the standard flow.
View File
@ -8,91 +8,40 @@ description: |-
# Introduction to Nomad
Welcome to the intro guide to Nomad! This guide is the best
place to start with Nomad. We cover what Nomad is, what
problems it can solve, how it compares to existing software,
and contains a quick start for using Nomad.
If you are already familiar with the basics of Nomad, the [Guides](/guides/index.html)
and the [reference documentation](/docs/index.html) will provide a more comprehensive
resource.
Welcome to the intro guide to Nomad! This guide is the best place to start with Nomad. We cover what Nomad is, what problems it can solve, how it compares to existing software, and how you can get started using it. If you are familiar with the basics of Nomad, the [documentation](https://www.nomadproject.io/docs/index.html) and [guides](https://www.nomadproject.io/guides/index.html) provide a more detailed reference of available features.
## What is Nomad?
Nomad is a flexible container orchestration tool that enables an organization to
easily deploy and manage any containerized or legacy application using a single,
unified workflow. Nomad can run a diverse workload of Docker, non-containerized,
microservice, and batch applications, and generally offers the following benefits
to developers and operators:
Nomad is a flexible workload orchestrator that enables an organization to easily deploy and manage any containerized or legacy application using a single, unified workflow. Nomad can run a diverse workload of Docker, non-containerized, microservice, and batch applications.
* **API-driven Automation**: Workload placement, scaling, and upgrades can be
automated, simplifying operations and eliminating the need for homegrown tooling.
* **Self-service Deployments**: Developers are empowered to service application
lifecycles directly, allowing operators to focus on higher value tasks.
* **Workload Reliability**: Application, node, and driver failures are handled
automatically, reducing the need for manual operator intervention
* **Increased Efficiency and Reduced Cost**: Higher application densities allow
operators to reduce fleet sizes and save money.
Nomad enables developers to use declarative infrastructure-as-code for deploying applications. Nomad uses bin packing to efficiently schedule jobs and optimize for resource utilization. Nomad is supported on macOS, Windows, and Linux.
Nomad is trusted by enterprises from a range of sectors including financial,
retail, software, and others to run production workloads at scale across private
infrastructure and the public cloud.
Nomad is widely adopted and used in production by PagerDuty, Target, Citadel, Trivago, SAP, Pandora, Roblox, eBay, Deluxe Entertainment, and more.
## How it Works
## Key Features
At its core, Nomad is a tool for managing a cluster of machines and running applications
on them. Nomad abstracts away machines and the location of applications,
and instead enables users to declare what they want to run while Nomad handles
where and how to run them.
* **Deploy Containers and Legacy Applications**: Nomad's flexibility as an orchestrator enables an organization to run containerized, legacy, and batch applications together on the same infrastructure. Nomad brings core orchestration benefits to legacy applications without needing to containerize, via pluggable [task drivers](https://www.nomadproject.io/docs/drivers/index.html).
The key features of Nomad are:
* **Simple & Reliable**: Nomad runs as a single 75MB binary and is entirely self contained - combining resource management and scheduling into a single system. Nomad does not require any external services for storage or coordination. Nomad automatically handles application, node, and driver failures. Nomad is distributed and resilient, using leader election and state replication to provide high availability in the event of failures.
* **Docker Support**: Nomad supports Docker as a first-class workload type.
Jobs submitted to Nomad can use the `docker` driver to easily deploy containerized
applications to a cluster. Nomad enforces the user-specified constraints,
ensuring the application only runs in the correct region, datacenter, and host
environment. Jobs can specify the number of instances needed and
Nomad will handle placement and recover from failures automatically.
* **Device Plugins & GPU Support**: Nomad offers built-in support for GPU workloads such as machine learning (ML) and artificial intelligence (AI). Nomad uses [device plugins](https://www.nomadproject.io/docs/devices/index.html) to automatically detect and utilize resources from hardware devices such as GPU, FPGAs, and TPUs.
* **Operationally Simple**: Nomad ships as a single binary, both for clients and servers,
and requires no external services for coordination or storage. Nomad combines features
of both resource managers and schedulers into a single system. Nomad builds on the strength
of [Serf](https://www.serf.io) and [Consul](https://www.consul.io), distributed management
tools by [HashiCorp](https://www.hashicorp.com).
* **Federation for Multi-Region**: Nomad has native support for multi-region federation. This built-in capability allows multiple clusters to be linked together, which in turn enables developers to deploy jobs to any cluster in any region. Federation also enables automatic replication of ACL policies, namespaces, resource quotas, and Sentinel policies across all clusters.
* **Multi-Datacenter and Multi-Region Aware**: Nomad models infrastructure as
groups of datacenters which form a larger region. Scheduling operates at the region
level allowing for cross-datacenter scheduling. Multiple regions federate together
allowing jobs to be registered globally.
* **Proven Scalability**: Nomad is optimistically concurrent, which increases throughput and reduces latency for workloads. Nomad has been proven to scale to clusters of 10K+ nodes in real-world production environments.
* **Flexible Workloads**: Nomad has extensible support for task drivers, allowing it to run
containerized, virtualized, and standalone applications. Users can easily start Docker
containers, VMs, or application runtimes like Java. Nomad supports Linux, Windows, BSD and OSX,
providing the flexibility to run any workload.
* **Built for Scale**: Nomad was designed from the ground up to support global scale
infrastructure. Nomad is distributed and highly available, using both
leader election and state replication to provide availability in the face
of failures. Nomad is optimistically concurrent, enabling all servers to participate
in scheduling decisions which increases the total throughput and reduces latency
to support demanding workloads. Nomad has been proven to scale to cluster sizes that
exceed 10k nodes in real-world production environments.
* **HashiCorp Ecosystem**: Nomad integrates seamlessly with Terraform, Consul, and Vault for provisioning, service discovery, and secrets management.
## How Nomad Compares to Other Tools
Nomad differentiates from related tools by virtue of its **simplicity**, **flexibility**,
**scalability**, and **high performance**. Nomad's synergy and integration points with
HashiCorp Terraform, Consul, and Vault make it uniquely suited for easy integration into
an organization's existing workflows, minimizing the time-to-market for critical initiatives.
See the [Nomad vs. Other Software](/intro/vs/index.html) page for additional details and
Nomad differentiates from related tools by virtue of its **simplicity**, **flexibility**,
**scalability**, and **high performance**. Nomad's synergy and integration points with
HashiCorp Terraform, Consul, and Vault make it uniquely suited for easy integration into
an organization's existing workflows, minimizing the time-to-market for critical initiatives.
See the [Nomad vs. Other Software](/intro/vs/index.html) page for additional details and
comparisons.
## Next Steps
See the page on [Nomad use cases](/intro/use-cases.html) to see the
multiple ways Nomad can be used. Then see
[how Nomad compares to other software](/intro/vs/index.html)
to see how it fits into your existing infrastructure. Finally, continue onwards with
the [getting started guide](/intro/getting-started/install.html) to use
Nomad to run a job and see how it works in practice.
See the [Use Cases](/intro/use-cases.html) and [Who Uses Nomad](/intro/who-uses-nomad.html) pages to understand the many ways Nomad is used in production today across many industries to solve critical, real-world business objectives.
View File
@ -3,70 +3,65 @@ layout: "intro"
page_title: "Use Cases"
sidebar_current: "use-cases"
description: |-
This page lists some concrete use cases for Nomad, but the possible use cases
This page lists some concrete use cases for Nomad, but the possible use cases
are much broader than what we cover.
---
# Use Cases
This page lists Nomad's core use cases. Please note that the full range of potential
use cases is much broader than what is currently covered here. Reading through the
[Introduction to Nomad](/intro/index.html) is highly recommended before diving into
the use cases.
This page features Nomad's core use cases.
## Docker Container Management
Note that the full range of potential use cases is broader than what is covered here.
Organizations are increasingly moving towards a Docker centric workflow for
application deployment and management. This transition requires new tooling
to automate placement, perform job updates, enable self-service for developers,
and to handle failures automatically. Nomad supports a [first-class Docker workflow](/docs/drivers/docker.html)
and integrates seamlessly with [Consul](/guides/operations/consul-integration/index.html)
and [Vault](/docs/vault-integration/index.html) to enable a complete solution
while maximizing operational flexibility. Nomad is easy to use, can scale to
thousands of nodes in a single cluster, and can easily deploy across private data
## Docker Container Orchestration
Organizations are increasingly moving towards a Docker centric workflow for
application deployment and management. This transition requires new tooling
to automate placement, perform job updates, enable self-service for developers,
and to handle failures automatically. Nomad supports a [first-class Docker workflow](/docs/drivers/docker.html)
and integrates seamlessly with [Consul](/guides/operations/consul-integration/index.html)
and [Vault](/docs/vault-integration/index.html) to enable a complete solution
while maximizing operational flexibility. Nomad is easy to use, can scale to
thousands of nodes in a single cluster, and can easily deploy across private data
centers and multiple clouds.
## Legacy Application Deployment
A virtual machine based application deployment strategy can lead to low hardware
utilization rates and high infrastructure costs. While a Docker-based deployment
strategy can be impractical for some organizations or use cases, the potential for
greater automation, increased resilience, and reduced cost is very attractive.
Nomad natively supports running legacy applications, static binaries, JARs, and
simple OS commands directly. Workloads are natively isolated at runtime and bin
packed to maximize efficiency and utilization (reducing cost). Developers and
operators benefit from API-driven automation and enhanced reliability for
A virtual machine based application deployment strategy can lead to low hardware
utilization rates and high infrastructure costs. While a Docker-based deployment
strategy can be impractical for some organizations or use cases, the potential for
greater automation, increased resilience, and reduced cost is very attractive.
Nomad natively supports running legacy applications, static binaries, JARs, and
simple OS commands directly. Workloads are natively isolated at runtime and bin
packed to maximize efficiency and utilization (reducing cost). Developers and
operators benefit from API-driven automation and enhanced reliability for
applications through automatic failure handling.
## Microservices
Microservices and Service Oriented Architectures (SOA) are a design paradigm in
which many services with narrow scope, tight state encapsulation, and API driven
communication interact together to form a larger solution. However, managing hundreds
or thousands of services instead of a few large applications creates an operational
challenge. Nomad elegantly integrates with [Consul](/guides/operations/consul-integration/index.html)
for automatic service registration and dynamic rendering of configuration files. Nomad
and Consul together provide an ideal solution for managing microservices, making it
Microservices and Service Oriented Architectures (SOA) are a design paradigm in
which many services with narrow scope, tight state encapsulation, and API driven
communication interact together to form a larger solution. However, managing hundreds
or thousands of services instead of a few large applications creates an operational
challenge. Nomad elegantly integrates with [Consul](/guides/operations/consul-integration/index.html)
for automatic service registration and dynamic rendering of configuration files. Nomad
and Consul together provide an ideal solution for managing microservices, making it
easier to adopt the paradigm.
## Batch Processing Workloads
As data science and analytics teams grow in size and complexity, they increasingly
benefit from highly performant and scalable tools that can run batch workloads with
minimal operational overhead. Nomad can natively run batch jobs, [parameterized](https://www.hashicorp.com/blog/replacing-queues-with-nomad-dispatch) jobs, and [Spark](https://github.com/hashicorp/nomad-spark)
workloads. Nomad's architecture enables easy scalability and an optimistically
concurrent scheduling strategy that can yield [thousands of container deployments per
second](https://www.hashicorp.com/c1m). Alternatives are overly complex and limited
As data science and analytics teams grow in size and complexity, they increasingly
benefit from highly performant and scalable tools that can run batch workloads with
minimal operational overhead. Nomad can natively run batch jobs, [parameterized](https://www.hashicorp.com/blog/replacing-queues-with-nomad-dispatch) jobs, and [Spark](https://github.com/hashicorp/nomad-spark)
workloads. Nomad's architecture enables easy scalability and an optimistically
concurrent scheduling strategy that can yield [thousands of container deployments per
second](https://www.hashicorp.com/c1m). Alternatives are overly complex and limited
in terms of their scheduling throughput, scalability, and multi-cloud capabilities.
**Related video**: [End to End Production Nomad at Citadel](https://www.youtube.com/watch?reload=9&v=ZOBcGpGsboA)
## Multi-Region and Multi-Cloud Federated Deployments
## Multi-region and Multi-cloud Deployments
Nomad is designed to natively handle multi-datacenter and multi-region deployments
and is cloud agnostic. This allows Nomad to schedule in private datacenters running
bare metal, OpenStack, or VMware alongside an AWS, Azure, or GCE cloud deployment.
This makes it easier to migrate workloads incrementally and to utilize the cloud
Nomad is designed to natively handle multi-datacenter and multi-region deployments
and is cloud agnostic. This allows Nomad to schedule in private datacenters running
bare metal, OpenStack, or VMware alongside an AWS, Azure, or GCE cloud deployment.
This makes it easier to migrate workloads incrementally and to utilize the cloud
for bursting.
View File
@ -1,29 +0,0 @@
---
layout: "intro"
page_title: "Nomad vs. Custom Solutions"
sidebar_current: "vs-other-custom"
description: |-
Comparison between Nomad and writing a custom solution.
---
# Nomad vs. Custom Solutions
It is an undisputed fact that distributed systems are hard; building
one is error-prone and time-consuming. As a result, few organizations
build a scheduler due to the inherent challenges. However,
most organizations must develop a means of deploying applications
and typically this evolves into an ad hoc deployment platform.
These deployment platforms are typically special cased to the needs
of the organization at the time of development, reduce future agility,
and require time and resources to build and maintain.
Nomad provides a high-level job specification to easily deploy applications.
It has been designed to work at large scale, with multi-datacenter and
multi-region support built in. Nomad also has extensible drivers giving it
flexibility in the workloads it supports, including Docker.
Nomad provides organizations of any size a solution for deployment
that is simple, robust, and scalable. It reduces the time and effort spent
re-inventing the wheel and users can focus instead on their business applications.
View File
@ -1,28 +0,0 @@
---
layout: "intro"
page_title: "Nomad vs. HTCondor"
sidebar_current: "vs-other-htcondor"
description: |-
Comparison between Nomad and HTCondor
---
# Nomad vs. HTCondor
HTCondor is a batch queuing system that is traditionally deployed in
grid computing environments. These environments have a fixed set of
resources, and large batch jobs that consume the entire cluster or
large portions. HTCondor is used to manage queuing, dispatching and
execution of these workloads.
HTCondor is not designed for services or long-lived applications.
Due to the batch nature of workloads on HTCondor, it does not prioritize
high availability and is operationally complex to set up. It does support
federation in the form of “flocking”, allowing batch workloads to
be run on alternate clusters if they would otherwise be forced to wait.
Nomad is focused on both long-lived services and batch workloads, and
is designed to be a platform for running large scale applications instead
of just managing a queue of batch work. Nomad supports a broader range
of workloads, is designed for high availability, and supports much
richer constraint enforcement and bin packing logic.
View File
@ -3,10 +3,10 @@ layout: "intro"
page_title: "Nomad vs. Mesos with Aurora, Marathon, etc"
sidebar_current: "vs-other-mesos"
description: |-
Comparison between Nomad and Mesos with Aurora, Marathon, etc
Comparison between Nomad and Mesos with Marathon
---
# Nomad vs. Mesos with Aurora, Marathon
# Nomad vs. Mesos with Marathon
Mesos is a resource manager, which is used to pool together the
resources of a datacenter and exposes an API to integrate with
@ -35,4 +35,3 @@ the scale that can be supported.
Mesos does not support federation or multiple failure isolation regions.
Nomad supports multi-datacenter and multi-region configurations for failure
isolation and scalability.
View File
@ -1,37 +0,0 @@
---
layout: "intro"
page_title: "Nomad vs. Docker Swarm"
sidebar_current: "vs-other-swarm"
description: |-
Comparison between Nomad and Docker Swarm
---
# Nomad vs. Docker Swarm
Docker Swarm is the native clustering solution for Docker. It provides
an API compatible with the Docker Remote API, and allows containers to
be scheduled across many machines.
Nomad differs from Docker Swarm in many ways; most obviously, Docker Swarm
can only be used to run Docker containers, while Nomad is more general purpose.
Nomad supports virtualized, containerized and standalone applications, including Docker.
Nomad is designed with extensible drivers and support will be extended to all
common drivers.
Docker Swarm provides API compatibility with their remote API, which focuses
on the container abstraction. Nomad uses a higher-level abstraction of jobs.
Jobs contain task groups, which are sets of tasks. This allows more complex
applications to be expressed and easily managed without reasoning about the
individual containers that compose the application.
The architectures also differ between Nomad and Docker Swarm.
Nomad does not depend on external systems for coordination or storage,
is distributed, highly available, and supports multi-datacenter
and multi-region configurations.
By contrast, Swarm is not distributed or highly available by default.
External systems must be used for coordination to support replication.
When replication is enabled, Swarm uses an active/standby model,
meaning the other servers cannot be used to make scheduling decisions.
Swarm also does not support multiple failure isolation regions or federation.
View File
@ -1,18 +0,0 @@
---
layout: "intro"
page_title: "Nomad vs. YARN"
sidebar_current: "vs-other-yarn"
description: |-
Comparison between Nomad and Hadoop YARN
---
# Nomad vs. Hadoop YARN
YARN, or Yet Another Resource Negotiator, is a system in Hadoop
for both resource management and job scheduling. It is used to allow multiple
Hadoop frameworks like MapReduce and Spark to run on the same hardware.
Nomad is focused on both long-lived services and batch workloads, and
is designed to be a platform for running large scale applications
generally instead of specifically for Hadoop.
View File
@ -0,0 +1,93 @@
---
layout: "intro"
page_title: "Who Uses Nomad"
sidebar_current: "who-uses-nomad"
description: |-
This page features many ways Nomad is used in production today across many industries to solve critical, real-world business objectives
---
# Who Uses Nomad
This page features talks from users on how they use Nomad today to solve critical, real-world business objectives.
Nomad is widely adopted and used in production by PagerDuty, Target, Citadel, Trivago, SAP, Pandora, Roblox, eBay, Deluxe Entertainment, and more.
#### CircleCI
* [How CircleCI Processes 4.5 Million Builds Per Month](https://stackshare.io/circleci/how-circleci-processes-4-5-million-builds-per-month)
* [Security & Scheduling are Not Your Core Competencies](https://www.hashicorp.com/resources/nomad-vault-circleci-security-scheduling)
#### Citadel
* [End-to-End Production Nomad at Citadel](https://www.hashicorp.com/resources/end-to-end-production-nomad-citadel)
* [Extreme Scaling with HashiCorp Nomad & Consul](https://www.hashicorp.com/resources/citadel-scaling-hashicorp-nomad-consul)
#### Deluxe Entertainment
* [How Deluxe Uses the Complete HashiStack for Video Production](https://www.hashicorp.com/resources/deluxe-hashistack-video-production)
#### Jet.com (Walmart)
* [Driving down costs at Jet.com with HashiCorp Nomad](https://www.hashicorp.com/resources/jet-walmart-hashicorp-nomad-azure-run-apps)
#### PagerDuty
* [PagerDutys Nomadic Journey](https://www.hashicorp.com/resources/pagerduty-nomad-journey)
#### Pandora
* [How Pandora Uses Nomad](https://www.youtube.com/watch?v=OsZeKTP2u98&t=2s)
#### SAP
* [HashiCorp Nomad @ SAP Ariba](https://www.hashicorp.com/resources/nomad-community-call-core-team-sap-ariba)
#### SeatGeek
* [Nomad Helper Tools](https://github.com/seatgeek/nomad-helper)
#### Spaceflight Industries
* [Spaceflights Hub-And-Spoke Infrastructure](https://www.hashicorp.com/blog/spaceflight-uses-hashicorp-consul-for-service-discovery-and-real-time-updates-to-their-hub-and-spoke-network-architecture)
#### SpotInst
* [SpotInst and HashiCorp Nomad to Reduce EC2 Costs for Users](https://www.hashicorp.com/blog/spotinst-and-hashicorp-nomad-to-reduce-ec2-costs-fo)
#### Target
* [Nomad at Target: Scaling Microservices Across Public and Private Clouds](https://www.hashicorp.com/resources/nomad-scaling-target-microservices-across-cloud)
* [Playing with Nomad from HashiCorp](https://danielparker.me/nomad/hashicorp/schedulers/nomad/)
#### Trivago
* [Maybe You Dont Need Kubernetes](https://matthias-endler.de/2019/maybe-you-dont-need-kubernetes/)
* [Nomad - Our Experiences and Best Practices](https://tech.trivago.com/2019/01/25/nomad-our-experiences-and-best-practices/)
#### Roblox
* [How Roblox runs a platform for 70 million gamers on HashiCorp Nomad](https://portworx.com/architects-corner-roblox-runs-platform-70-million-gamers-hashicorp-nomad/)
#### Oscar Health
* [Scalable CI at Oscar Health with Nomad and Docker](https://www.hashicorp.com/resources/scalable-ci-oscar-health-insurance-nomad-docker)
#### eBay
* [HashiStack at eBay: A Fully Containerized Platform Based on Infrastructure as Code](https://www.hashicorp.com/resources/ebay-hashistack-fully-containerized-platform-iac)
#### Joyent
* [Build Your Own Autoscaling Feature with HashiCorp Nomad](https://www.hashicorp.com/resources/autoscaling-hashicorp-nomad)
#### Dutch National Police
* [Going Cloud-Native at the Dutch National Police](https://www.hashicorp.com/resources/going-cloud-native-at-the-dutch-national-police)
#### N26
* [Tech at N26 - The Bank in the Cloud](https://medium.com/insiden26/tech-at-n26-the-bank-in-the-cloud-e5ff818b528b)
#### Elsevier
* [Elsevier’s Container Framework with Nomad, Terraform, and Consul](https://www.hashicorp.com/resources/elsevier-nomad-container-framework-demo)
#### Palantir
* [Enterprise Security at Palantir with the HashiCorp stack](https://www.hashicorp.com/resources/enterprise-security-hashicorp-stack)
#### Graymeta
* [Backend Batch Processing At Scale with Nomad](https://www.hashicorp.com/resources/backend-batch-processing-nomad)
#### NIH NCBI
* [NCBIs Legacy Migration to Hybrid Cloud with Consul & Nomad](https://www.hashicorp.com/resources/ncbi-legacy-migration-hybrid-cloud-consul-nomad)
#### Q2Ebanking
* [Q2s Nomad Use and Overview](https://www.youtube.com/watch?v=OsZeKTP2u98&feature=youtu.be&t=1499)
#### imgix
* [Cluster Schedulers & Why We Chose Nomad Over Kubernetes](https://medium.com/@copyconstruct/schedulers-kubernetes-and-nomad-b0f2e14a896)
#### Region Syddanmark
...and more!
View File
@ -10,10 +10,8 @@
<li><a href="/guides/index.html">Guides</a></li>
<li><a href="/docs/index.html">Docs</a></li>
<li><a href="/api/index.html">API</a></li>
<li><a href="/community.html">Community</a></li>
<li><a href="/resources.html">Resources</a></li>
<li><a href="https://www.hashicorp.com/products/nomad/?utm_source=oss&utm_medium=header-nav&utm_campaign=nomad">Enterprise</a></li>
<li><a href="/security.html">Security</a></li>
<li><a href="/assets/files/press-kit.zip">Press Kit</a></li>
</ul>
<div class="divider"></div>
View File
@ -2,28 +2,86 @@
<% content_for :sidebar do %>
<ul class="nav docs-sidenav">
<li<%= sidebar_current("guides-getting-started") %>>
<a href="/guides/getting-started.html">Getting Started</a>
<li<%= sidebar_current("guides-install") %>>
<a href="/guides/install/index.html">Installing Nomad</a>
<ul class="nav">
<li<%= sidebar_current("guides-install-quickstart") %>>
<a href="/guides/install/quickstart/index.html">Quickstart</a>
</li>
<li<%= sidebar_current("guides-install-production") %>>
<a href="/guides/install/production/index.html">Production</a>
<ul class="nav">
<li<%= sidebar_current("guides-install-production-requirements") %>>
<a href="/guides/install/production/requirements.html">Hardware Requirements</a>
</li>
<li<%= sidebar_current("guides-install-production-nomad-agent") %>>
<a href="/guides/install/production/nomad-agent.html">Set Server & Client Nodes</a>
</li>
<li<%= sidebar_current("guides-install-production-reference-architecture") %>>
<a href="/guides/install/production/reference-architecture.html">Reference Architecture</a>
</li>
<li<%= sidebar_current("guides-install-production-deployment-guide") %>>
<a href="/guides/install/production/deployment-guide.html">Reference Install Guide</a>
</li>
</ul>
</li>
</ul>
</li>
<li<%= sidebar_current("guides-operating-a-job") %>>
<a href="/guides/operating-a-job/index.html">Job Lifecycle</a>
<li<%= sidebar_current("guides-upgrade") %>>
<a href="/guides/upgrade/index.html">Upgrading</a>
<ul class="nav">
<li<%= sidebar_current("guides-upgrade-specific") %>>
<a href="/guides/upgrade/upgrade-specific.html">Specific Version Details</a>
</li>
</ul>
</li>
<li<%= sidebar_current("guides-integrations") %>>
<a href="/guides/integrations/index.html">Integrations</a>
<ul class="nav">
<li<%= sidebar_current("guides-integrations-consul") %>>
<a href="/guides/integrations/consul-integration/index.html">Consul</a>
</li>
<li<%= sidebar_current("guides-integrations-vault") %>>
<a href="/guides/integrations/vault-integration/index.html">Vault</a>
</li>
</ul>
</li>
<hr>
<li<%= sidebar_current("guides-operating-a-job") %>>
<a href="/guides/operating-a-job/index.html">Deploying & Managing Applications</a>
<ul class="nav">
<li<%= sidebar_current("guides-operating-a-job-configuring-tasks") %>>
<a href="/guides/operating-a-job/configuring-tasks.html">Configuring Tasks</a>
</li>
<li<%= sidebar_current("guides-operating-a-job-submitting-jobs") %>>
<a href="/guides/operating-a-job/submitting-jobs.html">Submitting Jobs</a>
</li>
<li<%= sidebar_current("guides-operating-a-job-inspecting-state") %>>
<a href="/guides/operating-a-job/inspecting-state.html">Inspecting State</a>
</li>
<li<%= sidebar_current("guides-operating-a-job-accessing-logs") %>>
<a href="/guides/operating-a-job/accessing-logs.html">Accessing Logs</a>
</li>
<li<%= sidebar_current("guides-operating-a-job-resource-utilization") %>>
<a href="/guides/operating-a-job/resource-utilization.html">Resource Utilization</a>
</li>
<li<%= sidebar_current("guides-operating-a-job-updating") %>>
<a href="/guides/operating-a-job/update-strategies/index.html">Update Strategies</a>
<ul class="nav">
@ -38,6 +96,7 @@
</li>
</ul>
</li>
<li<%= sidebar_current("guides-operating-a-job-failure-handling-strategies") %>>
<a href="/guides/operating-a-job/failure-handling-strategies/index.html">Failure Recovery Strategies</a>
<ul class="nav">
@ -52,39 +111,32 @@
</li>
</ul>
</li>
<li<%= sidebar_current("guides-operating-a-job-advanced-scheduling-affinity") %>>
<a href="/guides/operating-a-job/advanced-scheduling/affinity.html">Placement Preferences with Affinities</a>
</li>
<li<%= sidebar_current("guides-spread") %>>
<a href="/guides/advanced-scheduling/spread.html">Fault Tolerance with Spread</a>
</li>
<li<%= sidebar_current("guides-operating-a-job-external-lxc") %>>
<a href="/guides/operating-a-job/external/lxc.html">Running LXC Applications</a>
</li>
</ul>
</li>
<li<%= sidebar_current("guides-operations") %>>
<a href="/guides/operations/index.html">Operations</a>
<a href="/guides/operations/index.html">Operating Nomad</a>
<ul class="nav">
<li<%= sidebar_current("guides-operations-reference-architecture") %>>
<a href="/guides/operations/reference-architecture.html">Reference Architecture</a>
</li>
<li<%= sidebar_current("guides-operations-deployment-guide") %>>
<a href="/guides/operations/deployment-guide.html">Deployment Guide</a>
</li>
<li<%= sidebar_current("guides-operations-installing") %>>
<a href="/guides/operations/install/index.html">Installing Nomad</a>
</li>
<li<%= sidebar_current("guides-operations-agent") %>>
<a href="/guides/operations/agent/index.html">Running the Agent</a>
</li>
<li<%= sidebar_current("guides-operations-consul-integration") %>>
<a href="/guides/operations/consul-integration/index.html">Consul Integration</a>
</li>
<li<%= sidebar_current("guides-operations-cluster") %>>
<a href="/guides/operations/cluster/bootstrapping.html">Clustering</a>
<ul class="nav">
<li<%= sidebar_current("guides-operations-cluster-manual") %>>
<a href="/guides/operations/cluster/manual.html">Manual Clustering</a>
</li>
</li>
<li<%= sidebar_current("guides-operations-cluster-automatic") %>>
<a href="/guides/operations/cluster/automatic.html">Automatic Clustering with Consul</a>
</li>
@ -94,18 +146,10 @@
</ul>
</li>
<li<%= sidebar_current("guides-operations-requirements") %>>
<a href="/guides/operations/requirements.html">Hardware Requirements</a>
</li>
<li<%= sidebar_current("guides-operations-federation") %>>
<a href="/guides/operations/federation.html">Multi-region Federation</a>
<a href="/guides/operations/federation.html">Federation</a>
</li>
<li<%= sidebar_current("guides-operations-vault-integration") %>>
<a href="/guides/operations/vault-integration/index.html">Vault Integration</a>
</li>
<li<%= sidebar_current("guides-operations-decommissioning-nodes") %>>
<a href="/guides/operations/node-draining.html">Workload Migration</a>
</li>
@ -123,15 +167,6 @@
</ul>
</li>
<li<%= sidebar_current("guides-operations-upgrade") %>>
<a href="/guides/operations/upgrade/index.html">Upgrading</a>
<ul class="nav">
<li<%= sidebar_current("guides-operations-upgrade-specific") %>>
<a href="/guides/operations/upgrade/upgrade-specific.html">Upgrade Guides</a>
</li>
</ul>
</li>
<li<%= sidebar_current("guides-operations-autopilot") %>>
<a href="/guides/operations/autopilot.html">Autopilot</a>
</li>
@ -139,94 +174,70 @@
</ul>
</li>
<li<%= sidebar_current("guides-external") %>>
<a href="/guides/external/index.html">External Plugins</a>
<ul class="nav">
<li<%= sidebar_current("guides-external-lxc") %>>
<a href="/guides/external/lxc.html">LXC</a>
</li>
</ul>
</li>
<li<%= sidebar_current("guides-advanced-scheduling") %>>
<a href="/guides/advanced-scheduling/advanced-scheduling.html">Advanced Scheduling Features</a>
<ul class="nav">
<li<%= sidebar_current("guides-affinity") %>>
<a href="/guides/advanced-scheduling/affinity.html">Affinity</a>
</li>
<li<%= sidebar_current("guides-spread") %>>
<a href="/guides/advanced-scheduling/spread.html">Spread</a>
</li>
</ul>
</li>
<li<%= sidebar_current("guides-security") %>>
<a href="/guides/security/index.html">Security and Governance</a>
<a href="/guides/security/index.html">Securing Nomad</a>
<ul class="nav">
<li<%= sidebar_current("guides-security-encryption") %>>
<a href="/guides/security/encryption.html">Encryption Overview</a>
</li>
<li<%= sidebar_current("guides-security-tls") %>>
<a href="/guides/security/securing-nomad.html">Securing Nomad with TLS</a>
</li>
</li>
<li<%= sidebar_current("guides-security-vault-pki") %>>
<a href="/guides/security/vault-pki-integration.html">Vault PKI Secrets Engine Integration</a>
</li>
<li<%= sidebar_current("guides-security-acl") %>>
<a href="/guides/security/acl.html">Access Control</a>
</li>
<li<%= sidebar_current("guides-security-namespaces") %>>
<a href="/guides/security/namespaces.html">Namespaces</a>
<li<%= sidebar_current("guides-security-tls") %>>
<a href="/guides/security/securing-nomad.html">Securing Nomad with TLS</a>
</li>
<li<%= sidebar_current("guides-security-quotas") %>>
<a href="/guides/security/quotas.html">Resource Quotas</a>
</li>
<li<%= sidebar_current("guides-security-sentinel") %>>
<a href="/guides/security/sentinel-policy.html">Sentinel Policies</a>
<ul class="nav">
<li<%= sidebar_current("guides-security-sentinel-job") %>>
<a href="/guides/security/sentinel/job.html">Job Object</a>
</li>
</ul>
<li<%= sidebar_current("guides-security-vault-pki") %>>
<a href="/guides/security/vault-pki-integration.html">Vault PKI Secrets Engine</a>
</li>
</ul>
</li>
</li>
<li<%= sidebar_current("guides-spark") %>>
<a href="/guides/spark/spark.html">Apache Spark Integration</a>
<li<%= sidebar_current("guides-stateful-workloads") %>>
<a href="/guides/stateful-workloads/stateful-workloads.html">Stateful Workloads</a>
<ul class="nav">
<li<%= sidebar_current("guides-spark-pre") %>>
<a href="/guides/spark/pre.html">Getting Started</a>
<li<%= sidebar_current("guides-portworx") %>>
<a href="/guides/stateful-workloads/portworx.html">Portworx</a>
</li>
<li<%= sidebar_current("guides-spark-submit") %>>
<a href="/guides/spark/submit.html">Submitting Applications</a>
</li>
<li<%= sidebar_current("guides-spark-customizing") %>>
<a href="/guides/spark/customizing.html">Customizing Applications</a>
</li>
<li<%= sidebar_current("guides-spark-resource") %>>
<a href="/guides/spark/resource.html">Resource Allocation</a>
</li>
<li<%= sidebar_current("guides-spark-dynamic") %>>
<a href="/guides/spark/dynamic.html">Dynamic Executors</a>
</li>
<li<%= sidebar_current("guides-spark-hdfs") %>>
<a href="/guides/spark/hdfs.html">Using HDFS</a>
</li>
<li<%= sidebar_current("guides-spark-monitoring") %>>
<a href="/guides/spark/monitoring.html">Monitoring Output</a>
</li>
<li<%= sidebar_current("guides-spark-configuration") %>>
<a href="/guides/spark/configuration.html">Configuration Properties</a>
</ul>
</li>
<li<%= sidebar_current("guides-analytical-workloads") %>>
<a href="/guides/analytical-workloads/index.html">Analytical Workloads</a>
<ul class="nav">
<li<%= sidebar_current("guides-analytical-workloads") %>>
<a href="/guides/analytical-workloads/spark/spark.html">Apache Spark</a>
<ul class="nav">
<li<%= sidebar_current("guides-analytical-workloads-spark-pre") %>>
<a href="/guides/analytical-workloads/spark/pre.html">Getting Started</a>
</li>
<li<%= sidebar_current("guides-analytical-workloads-spark-customizing") %>>
<a href="/guides/analytical-workloads/spark/customizing.html">Customizing Applications</a>
</li>
<li<%= sidebar_current("guides-analytical-workloads-spark-resource") %>>
<a href="/guides/analytical-workloads/spark/resource.html">Resource Allocation</a>
</li>
<li<%= sidebar_current("guides-analytical-workloads-spark-dynamic") %>>
<a href="/guides/analytical-workloads/spark/dynamic.html">Dynamic Executors</a>
</li>
<li<%= sidebar_current("guides-analytical-workloads-spark-submit") %>>
<a href="/guides/analytical-workloads/spark/submit.html">Submitting Applications</a>
</li>
<li<%= sidebar_current("guides-analytical-workloads-spark-hdfs") %>>
<a href="/guides/analytical-workloads/spark/hdfs.html">Running HDFS</a>
</li>
<li<%= sidebar_current("guides-analytical-workloads-spark-monitoring") %>>
<a href="/guides/analytical-workloads/spark/monitoring.html">Monitoring Output</a>
</li>
<li<%= sidebar_current("guides-analytical-workloads-spark-configuration") %>>
<a href="/guides/analytical-workloads/spark/configuration.html">Configuration Properties</a>
</li>
</ul>
</li>
</ul>
</li>
@ -240,18 +251,26 @@
</ul>
</li>
<li<%= sidebar_current("guides-stateful-workloads") %>>
<a href="/guides/stateful-workloads/stateful-workloads.html">Stateful Workloads</a>
<li<%= sidebar_current("guides-governance-and-policy") %>>
<a href="/guides/governance-and-policy/index.html">Governance & Policy</a>
<ul class="nav">
<li<%= sidebar_current("guides-portworx") %>>
<a href="/guides/stateful-workloads/portworx.html">Portworx</a>
<li<%= sidebar_current("guides-governance-and-policy-namespaces") %>>
<a href="/guides/governance-and-policy/namespaces.html">Namespaces</a>
</li>
<li<%= sidebar_current("guides-governance-and-policy-quotas") %>>
<a href="/guides/governance-and-policy/quotas.html">Quotas</a>
</li>
<li<%= sidebar_current("guides-governance-and-policy-sentinel") %>>
<a href="/guides/governance-and-policy/sentinel/sentinel-policy.html">Sentinel</a>
<ul class="nav">
<li<%= sidebar_current("guides-governance-and-policy-sentinel-job") %>>
<a href="/guides/governance-and-policy/sentinel/job.html">Job Object</a>
</li>
</ul>
</li>
</ul>
</li>
<li<%= sidebar_current("guides-ui") %>>
<a href="/guides/ui.html">Web UI</a>
</li>
</ul>
<% end %>
View File
@ -9,42 +9,30 @@
<a href="/intro/use-cases.html">Use Cases</a>
</li>
<li<%= sidebar_current("who-uses-nomad") %>>
<a href="/intro/who-uses-nomad.html">Who Uses Nomad</a>
</li>
<li<%= sidebar_current("vs-other") %>>
<a href="/intro/vs/index.html">Nomad vs. Other Software</a>
<ul class="nav">
<li<%= sidebar_current("vs-other-ecs") %>>
<a href="/intro/vs/ecs.html">AWS ECS</a>
</li>
<li<%= sidebar_current("vs-other-swarm") %>>
<a href="/intro/vs/swarm.html">Docker Swarm</a>
</li>
<li<%= sidebar_current("vs-other-yarn") %>>
<a href="/intro/vs/yarn.html">Hadoop YARN</a>
</li>
<li<%= sidebar_current("vs-other-htcondor") %>>
<a href="/intro/vs/htcondor.html">HTCondor</a>
</li>
<li<%= sidebar_current("vs-other-kubernetes") %>>
<a href="/intro/vs/kubernetes.html">Kubernetes</a>
</li>
<li<%= sidebar_current("vs-other-ecs") %>>
<a href="/intro/vs/ecs.html">AWS ECS</a>
</li>
<li<%= sidebar_current("vs-other-mesos") %>>
<a href="/intro/vs/mesos.html">Mesos with Frameworks</a>
<a href="/intro/vs/mesos.html">Mesos & Marathon</a>
</li>
<li<%= sidebar_current("vs-other-terraform") %>>
<a href="/intro/vs/terraform.html">Terraform</a>
</li>
<li<%= sidebar_current("vs-other-custom") %>>
<a href="/intro/vs/custom.html">Custom Solutions</a>
</li>
</ul>
</li>
View File
@ -162,17 +162,6 @@ description: |-
<li><a href="https://www.hashicorp.com/resources/machine-learning-workflows-hashicorp-nomad-apache-spark">Machine Learning Workflows with HashiCorp Nomad & Apache Spark</a></li>
</ul>
<h2>How to Install Nomad</h2>
<ul>
<li><a href="https://www.nomadproject.io/intro/getting-started/install.html">Install Nomad Locally (Sandbox)</a> - via Vagrant</li>
<li><a href="https://github.com/hashicorp/nomad/tree/master/terraform">Nomad on AWS & Azure (Sandbox)</a> - via Terraform</li>
<li><a href="https://aws.amazon.com/quickstart/architecture/nomad/">Nomad on AWS Quickstart</a> - via CloudFormation</li>
<li><a href="https://www.nomadproject.io/guides/operations/deployment-guide.html">Install Nomad for Production</a> - Official HashiCorp Guide</li>
<li><a href="https://www.katacoda.com/hashicorp/scenarios/nomad-introduction">Introduction to Nomad (Katacoda)</a> - Interactive Introduction to Nomad</li>
<li><a href="https://katacoda.com/hashicorp/scenarios/playground">Nomad Playground (Katacoda)</a> - Online Nomad Playground Environment</li>
</ul>
<h2>Integrations</h2>
<ul>