Fix typos in documentation

Mathias Lafeldt 2016-07-18 16:24:30 +02:00
parent 65c93efd95
commit 722669451a
GPG Key ID: 942469C8EC936D44
17 changed files with 32 additions and 32 deletions

View File

@@ -9,7 +9,7 @@ description: |-
# Creating a cluster
Nomad clusters in production comprises of a few Nomad servers (an odd number,
-preferrably 3 or 5, but never an even number to prevent split-brain), clients and
+preferably 3 or 5, but never an even number to prevent split-brain), clients and
optionally Consul servers and clients. Before we start discussing the specifics
around bootstrapping clusters we should discuss the network topology. Nomad
models infrastructure as regions and datacenters. Nomad regions may contain multiple
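The odd-number advice in the hunk above falls out of majority-quorum arithmetic. A minimal sketch (illustrative only, not Nomad code):

```python
def quorum(servers: int) -> int:
    """Votes needed for a majority in a Raft-style cluster."""
    return servers // 2 + 1

def fault_tolerance(servers: int) -> int:
    """How many servers can fail while a majority still survives."""
    return servers - quorum(servers)

# An even count buys no extra tolerance over the odd count below it:
for n in (2, 3, 4, 5):
    print(n, "servers tolerate", fault_tolerance(n), "failure(s)")
```

Four servers tolerate only the same single failure as three while adding one more voter that can split, which is why 3 or 5 is preferred.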
@@ -142,7 +142,7 @@ for the complete set of configuration options.
Nomad clusters across multiple regions can be federated allowing users to submit
jobs or interact with the HTTP API targeting any region, from any server.
-Federating multiple Nomad clusters is as simple as joing servers. From any
+Federating multiple Nomad clusters is as simple as joining servers. From any
server in one region, simply issue a join command to a server in the remote
region:

View File

@@ -130,7 +130,7 @@ The `auth` object supports the following keys:
* `email` - (Optional) The account email.
-* `server_address` - (Optional) The server domain/ip without the protocol.
+* `server_address` - (Optional) The server domain/IP without the protocol.
Docker Hub is used by default.
Example:
@@ -256,7 +256,7 @@ socket. Nomad will need to be able to read/write to this socket. If you do not
run Nomad as root, make sure you add the Nomad user to the Docker group so
Nomad can communicate with the Docker daemon.
-For example, on ubuntu you can use the `usermod` command to add the `vagrant`
+For example, on Ubuntu you can use the `usermod` command to add the `vagrant`
user to the `docker` group so you can run Nomad without root:
sudo usermod -G docker -a vagrant
@@ -312,7 +312,7 @@ options](/docs/agent/config.html#options):
Note: When testing or using the `-dev` flag you can use `DOCKER_HOST`,
`DOCKER_TLS_VERIFY`, and `DOCKER_CERT_PATH` to customize Nomad's behavior. If
`docker.endpoint` is set Nomad will **only** read client configuration from the
-config filie.
+config file.
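The precedence rule in this hunk (an explicit `docker.endpoint` makes Nomad read only the config file) can be sketched as follows; the helper name and defaults are illustrative, not Nomad's actual code:

```python
def resolve_docker_endpoint(client_config: dict, env: dict) -> str:
    """If `docker.endpoint` is set in the config file it wins outright;
    otherwise fall back to DOCKER_HOST, then the default Unix socket."""
    if "docker.endpoint" in client_config:
        return client_config["docker.endpoint"]
    return env.get("DOCKER_HOST", "unix:///var/run/docker.sock")

print(resolve_docker_endpoint({}, {"DOCKER_HOST": "tcp://127.0.0.1:2376"}))
print(resolve_docker_endpoint({"docker.endpoint": "unix:///run/docker.sock"},
                              {"DOCKER_HOST": "tcp://127.0.0.1:2376"}))
```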
An example is given below:

View File

@@ -26,7 +26,7 @@ The `exec` driver supports the following configuration in the job spec:
path can be relative from the allocations's root directory.
* `args` - (Optional) A list of arguments to the optional `command`.
-References to environment variables or any [intepretable Nomad
+References to environment variables or any [interpretable Nomad
variables](/docs/jobspec/interpreted.html) will be interpreted
before launching the task. For example:
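The diff truncates the doc's own example here; as a rough illustration of what "interpreted before launching the task" means, a toy `${...}` substitution (a stand-in for Nomad's interpreter, not its real implementation):

```python
import re

def interpret(arg: str, variables: dict) -> str:
    """Replace ${name} references in an `args` entry with their values,
    leaving unknown references untouched (simplified sketch)."""
    return re.sub(r"\$\{([^}]+)\}",
                  lambda m: str(variables.get(m.group(1), m.group(0))),
                  arg)

print(interpret("--port=${NOMAD_PORT_http}", {"NOMAD_PORT_http": 8080}))  # --port=8080
```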

View File

@@ -24,7 +24,7 @@ The `java` driver supports the following configuration in the job spec:
(`subdir/from_archive/my.jar`).
* `args` - (Optional) A list of arguments to the optional `command`.
-References to environment variables or any [intepretable Nomad
+References to environment variables or any [interpretable Nomad
variables](/docs/jobspec/interpreted.html) will be interpreted
before launching the task. For example:

View File

@@ -36,9 +36,9 @@ The `Qemu` driver supports the following configuration in the job spec:
`kvm` for the `accelerator`. Default is `tcg`
* `port_map` - (Optional) A `map[string]int` that maps port labels to ports
-on the guest. This forwards the host port to the guest vm. For example,
+on the guest. This forwards the host port to the guest VM. For example,
`port_map { db = 6539 }` would forward the host port with label `db` to the
-guest vm's port 6539.
+guest VM's port 6539.
## Examples

View File

@@ -24,7 +24,7 @@ The `raw_exec` driver supports the following configuration in the job spec:
path can be relative from the allocations's root directory.
* `args` - (Optional) A list of arguments to the optional `command`.
-References to environment variables or any [intepretable Nomad
+References to environment variables or any [interpretable Nomad
variables](/docs/jobspec/interpreted.html) will be interpreted
before launching the task. For example:

View File

@@ -25,7 +25,7 @@ The `rkt` driver supports the following configuration in the job spec:
* `command` - (Optional) A command to execute on the ACI.
* `args` - (Optional) A list of arguments to the optional `command`.
-References to environment variables or any [intepretable Nomad
+References to environment variables or any [interpretable Nomad
variables](/docs/jobspec/interpreted.html) will be interpreted
before launching the task. For example:

View File

@@ -506,7 +506,7 @@ region is used; another region can be specified using the `?region=` query param
</li>
<li>
<span class="param">FailedTGAllocs</span>
-A set of metrics to understand any allocation failures that occured for
+A set of metrics to understand any allocation failures that occurred for
the Task Group.
</li>
<li>

View File

@@ -79,7 +79,7 @@ Evaluation "5744eb15" waiting for additional capacity to place remainder
More interesting though is the [`alloc-status`
command](/docs/commands/alloc-status.html). This command gives us the most
-recent events that occured for a task, its resource usage, port allocations and
+recent events that occurred for a task, its resource usage, port allocations and
more:
```
@@ -140,8 +140,8 @@ Time Type Description
```
Not all failures are this easily debuggable. If the `alloc-status` command shows
-many restarts occuring as in the example below, it is a good hint that the error
-is occuring at the application level during start up. These failres can be
+many restarts occurring as in the example below, it is a good hint that the error
+is occurring at the application level during start up. These failures can be
debugged by looking at logs which is covered in the [Nomad Job Logging
documentation](/docs/jobops/logs.html).

View File

@@ -74,7 +74,7 @@ Replacing `stdout` for `stderr` would display the respective `stderr` output.
While this works well for quickly accessing logs, we recommend running a
log-shipper for long term storage of logs. In many cases this will not be needed
and the above will suffice but for use cases in which log retention is needed
-Nomad can accomodate.
+Nomad can accommodate.
Since we place application logs inside the `alloc/` directory, all tasks within
the same task group have access to each others logs. Thus we can have a task

View File

@@ -6,7 +6,7 @@ description: |-
Learn how to see resource utilization of a Nomad Job.
---
-# Determing Resource Utilization
+# Determining Resource Utilization
Understanding the resource utilization of your application is important for many
reasons and Nomad supports reporting detailed statistics in many of its drivers.
@@ -65,7 +65,7 @@ While single point in time resource usage measurements are useful, it is often
more useful to graph resource usage over time to better understand and estimate
resource usage. Nomad supports outputting resource data to statsite and statsd
and is the recommended way of monitoring resources. For more information about
-outputing telemetry see the [Telemetry documentation](/docs/agent/telemetry.html).
+outputting telemetry see the [Telemetry documentation](/docs/agent/telemetry.html).
For more advanced use cases, the resource usage data may also be accessed via
the client's HTTP API. See the documentation of the Client's

View File

@@ -8,7 +8,7 @@ description: |-
# Task Configurations
-Most tasks need to be paramaterized in some way. The simplest is via
+Most tasks need to be parameterized in some way. The simplest is via
command-line arguments but often times tasks consume complex configurations via
config files. Here we explore how to configure Nomad jobs to support many
common configuration use cases.

View File

@@ -69,9 +69,9 @@ the [Jobspec documentation](/docs/jobspec/index.html#update).
## Blue-green and Canaries
-Blue-green deploys have serveral names, Red/Black, A/B, Blue/Green, but the
+Blue-green deploys have several names, Red/Black, A/B, Blue/Green, but the
concept is the same. The idea is to have two sets of applications with only one
-of them being live at a given time, except while transistion from one set to
+of them being live at a given time, except while transitioning from one set to
another. What the term "live" means is that the live set of applications are
the set receiving traffic.
@@ -81,7 +81,7 @@ been tested in a QA environment and is now ready to start accepting production
traffic.
In this case we would consider version 1 to be the live set and we want to
-transistion to version 2. We can model this workflow with the below job:
+transition to version 2. We can model this workflow with the below job:
```
job "my-api" {
@@ -114,7 +114,7 @@ job "my-api" {
```
Here we can see the live group is "api-green" since it has a non-zero count. To
-transistion to v2, we up the count of "api-blue" and down the count of
+transition to v2, we up the count of "api-blue" and down the count of
"api-green". We can now see how the canary process is a special case of
blue-green. If we set "api-blue" to `count = 1` and "api-green" to `count = 9`,
there will still be the original 10 instances but we will be testing only one
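The count arithmetic walked through in these hunks can be sketched with a throwaway helper (illustrative only, not part of Nomad):

```python
def live_groups(counts: dict) -> list:
    """Groups currently 'live' are those with a non-zero count."""
    return sorted(g for g, c in counts.items() if c > 0)

# v1 live, v2 staged:
counts = {"api-green": 10, "api-blue": 0}
print(live_groups(counts))

# Canary: shift one instance to the new version; both sets are live,
# but only one instance runs v2:
counts = {"api-green": 9, "api-blue": 1}
print(live_groups(counts))

# Complete the blue-green transition (or set api-blue back to 0 to roll back):
counts = {"api-green": 0, "api-blue": 10}
print(live_groups(counts))
```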
@@ -124,7 +124,7 @@ If at any time we notice that the new version is behaving incorrectly and we
want to roll back, all that we have to do is drop the count of the new group to
0 and restore the original version back to 10. This fine control lets job
operators be confident that deployments will not cause down time. If the deploy
-is successful and we fully transistion from v1 to v2 the job file will look like
+is successful and we fully transition from v1 to v2 the job file will look like
this:
```
@@ -160,7 +160,7 @@ job "my-api" {
Now "api-blue" is the live group and when we are ready to update the api to v3,
we would modify "api-green" and repeat this process. The rate at which the count
of groups are incremented and decremented is totally up to the user. It is
-usually good practice to start by transistion one at a time until a certain
+usually good practice to start by transition one at a time until a certain
confidence threshold is met based on application specific logs and metrics.
## Handling Drain Signals
@@ -169,6 +169,6 @@ On operating systems that support signals, Nomad will signal the application
before killing it. This gives the application time to gracefully drain
connections and conduct any other cleanup that is necessary. Certain
applications take longer to drain than others and as such Nomad lets the job
-file specify how long to wait inbetween signaling the application to exit and
+file specify how long to wait in-between signaling the application to exit and
forcefully killing it. This is configurable via the `kill_timeout`. More details
can be seen in the [Jobspec documentation](/docs/jobspec/index.html#kill_timeout).
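The signal-then-kill drain pattern described above can be sketched against a local process; `kill_timeout` here is just a function argument, not read from a job file, and this is not Nomad's implementation:

```python
import signal
import subprocess

def stop_task(proc: subprocess.Popen, kill_timeout: float) -> str:
    """Signal the task, give it kill_timeout seconds to drain,
    then kill it forcefully if it is still running."""
    proc.send_signal(signal.SIGTERM)
    try:
        proc.wait(timeout=kill_timeout)
        return "graceful"
    except subprocess.TimeoutExpired:
        proc.kill()
        proc.wait()
        return "forced"

p = subprocess.Popen(["sleep", "60"])
print(stop_task(p, kill_timeout=2.0))  # sleep exits promptly on SIGTERM
```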

View File

@@ -126,7 +126,7 @@ gives time to view the data produced by tasks.
Depending on the driver and operating system being targeted, the directories are
made available in various ways. For example, on `docker` the directories are
-binded to the container, while on `exec` on Linux the directories are mounted into the
+bound to the container, while on `exec` on Linux the directories are mounted into the
chroot. Regardless of how the directories are made available, the path to the
directories can be read through the following environment variables:
`NOMAD_ALLOC_DIR` and `NOMAD_TASK_DIR`.

View File

@@ -431,7 +431,7 @@ Nomad downloads artifacts using
[`go-getter`](https://github.com/hashicorp/go-getter). The `go-getter` library
allows downloading of artifacts from various sources using a URL as the input
source. The key/value pairs given in the `options` block map directly to
-parameters appended to the supplied `source` url. These are then used by
+parameters appended to the supplied `source` URL. These are then used by
`go-getter` to appropriately download the artifact. `go-getter` also has a CLI
tool to validate its URL and can be used to check if the Nomad `artifact` is
valid.
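The options-to-query-parameters behavior described in this hunk can be approximated like so (a simplification of what `go-getter` does, not its actual code):

```python
from urllib.parse import urlencode

def getter_url(source: str, options: dict) -> str:
    """Append the artifact `options` key/value pairs to the
    `source` URL as query parameters (simplified sketch)."""
    if not options:
        return source
    sep = "&" if "?" in source else "?"
    return source + sep + urlencode(sorted(options.items()))

print(getter_url("https://example.com/my_app.tar.gz",
                 {"checksum": "md5:abc123"}))
```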
@@ -494,7 +494,7 @@ artifact {
}
```
-or to override automatic detection in the url, use the s3 specific syntax
+or to override automatic detection in the URL, use the S3-specific syntax
```
artifact {
source = "s3::https://s3-eu-west-1.amazonaws.com/my-bucket-example/my_app.tar.gz"

View File

@@ -483,7 +483,7 @@ Nomad downloads artifacts using
[`go-getter`](https://github.com/hashicorp/go-getter). The `go-getter` library
allows downloading of artifacts from various sources using a URL as the input
source. The key/value pairs given in the `options` block map directly to
-parameters appended to the supplied `source` url. These are then used by
+parameters appended to the supplied `source` URL. These are then used by
`go-getter` to appropriately download the artifact. `go-getter` also has a CLI
tool to validate its URL and can be used to check if the Nomad `artifact` is
valid.
@@ -547,7 +547,7 @@ Path based style:
]
```
-or to override automatic detection in the url, use the s3 specific syntax
+or to override automatic detection in the URL, use the S3-specific syntax
```
"Artifacts": [
{

View File

@@ -177,7 +177,7 @@ potentially invalid.
We can see that the scheduler detected the change in count and informs us that
it will cause 2 new instances to be created. The in-place update that will
occur is to push the update job specification to the existing allocation and
-will not cause any service interuption. We can then run the job with the
+will not cause any service interruption. We can then run the job with the
run command the `plan` emitted.
By running with the `-check-index` flag, Nomad checks that the job has not