Merge pull request #3482 from hashicorp/docs-0.7.0

Docs 0.7.0
Alex Dadgar 2017-11-01 12:06:16 -07:00 committed by GitHub
commit 35bb386a1e
11 changed files with 129 additions and 99 deletions

Vagrantfile vendored
View file

@@ -120,9 +120,9 @@ def configureLinuxProvisioners(vmCfg)
privileged: true,
path: './scripts/vagrant-linux-priv-rkt.sh'
vmCfg.vm.provision "shell",
privileged: false,
path: './scripts/vagrant-linux-priv-ui.sh'
vmCfg.vm.provision "shell",
privileged: false,
path: './scripts/vagrant-linux-priv-ui.sh'
return vmCfg
end

View file

@@ -10,14 +10,14 @@ sudo DEBIAN_FRONTEND=noninteractive apt-get install -y unzip curl vim \
software-properties-common
# Download Nomad
NOMAD_VERSION=0.6.3
NOMAD_VERSION=0.7.0
echo "Fetching Nomad..."
cd /tmp/
curl -sSL https://releases.hashicorp.com/nomad/${NOMAD_VERSION}/nomad_${NOMAD_VERSION}_linux_amd64.zip -o nomad.zip
echo "Fetching Consul..."
curl -sSL https://releases.hashicorp.com/consul/0.9.3/consul_0.9.3_linux_amd64.zip > consul.zip
curl -sSL https://releases.hashicorp.com/consul/1.0.0/consul_1.0.0_linux_amd64.zip > consul.zip
echo "Installing Nomad..."
unzip nomad.zip
@@ -85,6 +85,9 @@ Vagrant.configure(2) do |config|
config.vm.provision "shell", inline: $script, privileged: false
config.vm.provision "docker" # Just install it
# Expose the nomad api and ui to the host
config.vm.network "forwarded_port", guest: 4646, host: 4646, auto_correct: true
# Increase memory for Parallels Desktop
config.vm.provider "parallels" do |p, o|
p.memory = "1024"
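If a box built from this Vagrantfile was already running before the forwarded port was added, the mapping only takes effect once the machine is restarted. A minimal sketch:

```shell
# Restart the VM so the updated Vagrantfile, including the new
# forwarded port, is applied
$ vagrant reload
```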

View file

@@ -2,7 +2,7 @@ set :base_url, "https://www.nomadproject.io/"
activate :hashicorp do |h|
h.name = "nomad"
h.version = "0.6.3"
h.version = "0.7.0"
h.github_slug = "hashicorp/nomad"
end

Binary file not shown.

Binary file not shown.

View file

@@ -14,8 +14,6 @@ Jobs, Deployments, Task Groups, Allocations, Clients, and Servers can all be
monitored from the Web UI. The Web UI also supports the use of ACL tokens for
clusters that are using the [ACL system](/guides/acl.html).
~> **Beta Feature!** This page covers the Web UI, a feature of v0.7 which is still in beta.
## Accessing the Web UI
The Web UI is served on the same address and port as the HTTP API. It is namespaced

View file

@@ -15,9 +15,9 @@ we will create our first real cluster with multiple nodes.
## Starting the Server
The first step is to create the config file for the server. Either download
the file from the [repository here](https://github.com/hashicorp/nomad/tree/master/demo/vagrant),
or paste this into a file called `server.hcl`:
The first step is to create the config file for the server. Either download the
[file from the repository][server.hcl], or paste this into a file called
`server.hcl`:
```hcl
# Increase log verbosity
@@ -43,8 +43,8 @@ corresponding `bootstrap_expect` value.
Once the file is created, start the agent in a new tab:
```
$ sudo nomad agent -config server.hcl
```text
$ nomad agent -config server.hcl
==> WARNING: Bootstrap mode enabled! Potentially unsafe operation.
==> Starting Nomad agent...
==> Nomad agent configuration:
@@ -53,7 +53,7 @@ $ sudo nomad agent -config server.hcl
Log Level: DEBUG
Region: global (DC: dc1)
Server: true
Version: 0.6.0
Version: 0.7.0
==> Nomad agent started! Log data will stream in below:
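For reference, the full demo `server.hcl` is short. A sketch of its shape as a heredoc — the data directory and exact contents are assumptions, so prefer whatever the downloaded file contains:

```shell
# Write a minimal single-node server config; bootstrap_expect = 1
# self-elects, which is fine for a demo but not for production
$ cat > server.hcl <<'EOF'
log_level = "DEBUG"
data_dir  = "/tmp/server1"

server {
  enabled          = true
  bootstrap_expect = 1
}
EOF
```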
@@ -80,11 +80,11 @@ Now we need some agents to run tasks!
## Starting the Clients
Similar to the server, we must first configure the clients. Either download
the configuration for client1 and client2 from the
the configuration for `client1` and `client2` from the
[repository here](https://github.com/hashicorp/nomad/tree/master/demo/vagrant), or
paste the following into `client1.hcl`:
```
```hcl
# Increase log verbosity
log_level = "DEBUG"
@@ -108,11 +108,11 @@ ports {
```
Copy that file to `client2.hcl` and change the `data_dir` to
be "/tmp/client2" and the `http` port to 5657. Once you've created
be `/tmp/client2` and the `http` port to 5657. Once you've created
both `client1.hcl` and `client2.hcl`, open a tab for each and
start the agents:
```
```text
$ sudo nomad agent -config client1.hcl
==> Starting Nomad agent...
==> Nomad agent configuration:
@@ -121,7 +121,7 @@ $ sudo nomad agent -config client1.hcl
Log Level: DEBUG
Region: global (DC: dc1)
Server: false
Version: 0.6.0
Version: 0.7.0
==> Nomad agent started! Log data will stream in below:
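Assuming `client1.hcl` sets its `data_dir` to `/tmp/client1` and its `http` port to 5656, the `client2.hcl` variant described above can be derived mechanically. A sketch; adjust if your file differs:

```shell
# Derive client2.hcl: swap the data directory and bump the http port
$ sed -e 's/client1/client2/g' -e 's/5656/5657/' client1.hcl > client2.hcl
```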
@@ -138,7 +138,7 @@ in managing the cluster or making scheduling decisions.
Using the [`node-status` command](/docs/commands/node-status.html)
we should see both nodes in the `ready` state:
```
```text
$ nomad node-status
ID Datacenter Name Class Drain Status
fca62612 dc1 nomad <none> false ready
@@ -157,7 +157,7 @@ verify that the `count` is still set to 3.
Then, use the [`run` command](/docs/commands/run.html) to submit the job:
```
```text
$ nomad run example.nomad
==> Monitoring evaluation "8e0a7cf9"
Evaluation triggered by job "example"
@@ -217,3 +217,4 @@ Nomad is now up and running. The cluster can be entirely managed from the comman
but Nomad also comes with a web interface that is hosted alongside the HTTP API.
Next, we'll [visit the UI in the browser](ui.html).
[server.hcl]: https://raw.githubusercontent.com/hashicorp/nomad/master/demo/vagrant/server.hcl

View file

@@ -20,15 +20,21 @@ Note: To use the Vagrant Setup first install Vagrant following these instruction
Once you have created a new directory and downloaded the `Vagrantfile`
you must create the virtual machine:
$ vagrant up
```shell
$ vagrant up
```
This will take a few minutes as the base Ubuntu box must be downloaded
and provisioned with both Docker and Nomad. Once this completes, you should
see output similar to:
Bringing machine 'default' up with 'vmware_fusion' provider...
==> default: Checking if box 'puphpet/ubuntu1404-x64' is up to date...
==> default: Machine is already running.
```text
Bringing machine 'default' up with 'virtualbox' provider...
==> default: Importing base box 'bento/ubuntu-16.04'...
...
==> default: Running provisioner: docker...
```
At this point the Vagrant box is running and ready to go.
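If you are unsure whether the box came up, `vagrant status` reports the machine state; the exact output depends on your provider:

```text
$ vagrant status
Current machine states:

default                   running (virtualbox)
```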
@@ -38,15 +44,16 @@ After starting the Vagrant box, verify the installation worked by connecting
to the box using SSH and checking that `nomad` is available. By executing
`nomad`, you should see help output similar to the following:
```
```shell
$ vagrant ssh
...
vagrant@nomad:~$ nomad
usage: nomad [--version] [--help] <command> [<args>]
Usage: nomad [-version] [-help] [-autocomplete-(un)install] <command> [<args>]
Available commands are:
acl Interact with ACL policies and tokens
agent Runs a Nomad agent
agent-info Display status information about the local agent
alloc-status Display allocation status information and metadata
@@ -60,27 +67,31 @@ Available commands are:
keygen Generates a new encryption key
keyring Manages gossip layer encryption keys
logs Streams the logs of a task.
namespace Interact with namespaces
node-drain Toggle drain mode on a given node
node-status Display status information about nodes
operator Provides cluster-level tools for Nomad operators
plan Dry-run a job update to determine its effects
quota Interact with quotas
run Run a new job or update an existing job
sentinel Interact with Sentinel policies
server-force-leave Force a server into the 'left' state
server-join Join server nodes together
server-members Display a list of known servers and their status
status Display status information about jobs
status Display the status output for a resource
stop Stop a running job
ui Open the Nomad Web UI
validate Checks if a given job specification is valid
version Prints the Nomad version
```
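The usage line above also advertises the new autocomplete flags. Installing shell completion is optional and modifies your shell rc file; a sketch:

```shell
# Install tab-completion for nomad subcommands, then reload the shell
$ nomad -autocomplete-install
$ exec $SHELL
```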
If you get an error that Nomad could not be found, then your Vagrant box
may not have provisioned correctly. Check for any error messages that may have
been emitted during `vagrant up`. You can always destroy the box and
been emitted during `vagrant up`. You can always [destroy the box][destroy] and
re-create it.
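Destroying and rebuilding the box looks like this; `-f` skips the confirmation prompt:

```shell
$ vagrant destroy -f
$ vagrant up
```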
## Next Steps
Vagrant is running and Nomad is installed. Let's [start Nomad](/intro/getting-started/running.html)!
[destroy]: https://www.vagrantup.com/docs/cli/destroy.html

View file

@@ -22,7 +22,7 @@ however we recommend only using JSON when the configuration is generated by a ma
To get started, we will use the [`init` command](/docs/commands/init.html) which
generates a skeleton job file:
```
```text
$ nomad init
Example job file written to example.nomad
```
@@ -36,13 +36,14 @@ jobs and to update existing jobs.
We can register our example job now:
```
```text
$ nomad run example.nomad
==> Monitoring evaluation "26cfc69e"
==> Monitoring evaluation "13ebb66d"
Evaluation triggered by job "example"
Allocation "8ba85cef" created: node "171a583b", group "cache"
Allocation "883269bf" created: node "e42d6f19", group "cache"
Evaluation within deployment: "b0a84e74"
Evaluation status changed: "pending" -> "complete"
==> Evaluation "26cfc69e" finished with status "complete"
==> Evaluation "13ebb66d" finished with status "complete"
```
Anytime a job is updated, Nomad creates an evaluation to determine what
@@ -52,11 +53,11 @@ local agent.
To inspect the status of our job we use the [`status` command](/docs/commands/status.html):
```
```text
$ nomad status example
ID = example
Name = example
Submit Date = 07/25/17 23:14:43 UTC
Submit Date = 10/31/17 22:58:40 UTC
Type = service
Priority = 50
Datacenters = dc1
@@ -69,7 +70,7 @@ Task Group Queued Starting Running Failed Complete Lost
cache 0 0 1 0 0 0
Latest Deployment
ID = 11c5cdc8
ID = b0a84e74
Status = successful
Description = Deployment completed successfully
@@ -79,7 +80,7 @@ cache 1 1 1 0
Allocations
ID Node ID Task Group Version Desired Status Created At
8ba85cef 171a583b cache 0 run running 07/25/17 23:14:43 UTC
883269bf e42d6f19 cache 0 run running 10/31/17 22:58:40 UTC
```
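The same summary is also available from the HTTP API, which is handy for scripting, assuming the agent is listening on the default address:

```shell
# Fetch the summary counts for the example job as JSON
$ curl -s http://localhost:4646/v1/job/example/summary
```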
Here we can see that the result of our evaluation was the creation of an
@@ -88,33 +89,39 @@ allocation that is now running on the local node.
An allocation represents an instance of Task Group placed on a node. To inspect
an allocation we use the [`alloc-status` command](/docs/commands/alloc-status.html):
```
$ nomad alloc-status 8ba85cef
ID = 8ba85cef
Eval ID = 61b0b423
```text
$ nomad alloc-status 883269bf
ID = 883269bf
Eval ID = 13ebb66d
Name = example.cache[0]
Node ID = 171a583b
Node ID = e42d6f19
Job ID = example
Job Version = 0
Client Status = running
Client Description = <none>
Desired Status = run
Desired Description = <none>
Created At = 07/25/17 23:14:43 UTC
Deployment ID = fa882a5b
Created At = 10/31/17 22:58:40 UTC
Deployment ID = b0a84e74
Deployment Health = healthy
Task "redis" is "running"
Task Resources
CPU Memory Disk IOPS Addresses
2/500 6.3 MiB/256 MiB 300 MiB 0 db: 127.0.0.1:30329
CPU Memory Disk IOPS Addresses
8/500 MHz 6.3 MiB/256 MiB 300 MiB 0 db: 127.0.0.1:22672
Task Events:
Started At = 10/31/17 22:58:49 UTC
Finished At = N/A
Total Restarts = 0
Last Restart = N/A
Recent Events:
Time Type Description
07/25/17 23:14:53 UTC Started Task started by client
07/25/17 23:14:43 UTC Driver Downloading image redis:3.2
07/25/17 23:14:43 UTC Task Setup Building Task Directory
07/25/17 23:14:43 UTC Received Task received by client
Time Type Description
10/31/17 22:58:49 UTC Started Task started by client
10/31/17 22:58:40 UTC Driver Downloading image redis:3.2
10/31/17 22:58:40 UTC Task Setup Building Task Directory
10/31/17 22:58:40 UTC Received Task received by client
```
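Once the allocation has been running for a moment, resource usage can be included in this output with the `-stats` flag; a sketch with the output omitted:

```shell
$ nomad alloc-status -stats 883269bf
```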
We can see that Nomad reports the state of the allocation as well as its
@@ -123,8 +130,8 @@ usage statistics will be reported.
To see the logs of a task, we can use the [logs command](/docs/commands/logs.html):
```
$ nomad logs 8ba85cef redis
```text
$ nomad logs 883269bf redis
_._
_.-``__ ''-._
_.-`` `. `_. ''-._ Redis 3.2.1 (00000000/0) 64 bit
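The `logs` command can also follow output as it is written, or read a task's stderr; a sketch:

```shell
# Stream stdout as it is written, like tail -f
$ nomad logs -f 883269bf redis

# Read the task's stderr instead of stdout
$ nomad logs -stderr 883269bf redis
```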
@@ -164,7 +171,7 @@ Once you have finished modifying the job specification, use the [`plan`
command](/docs/commands/plan.html) to invoke a dry-run of the scheduler to see
what would happen if you ran the updated job:
```
```text
$ nomad plan example.nomad
+/- Job: "example"
+/- Task Group: "cache" (2 create, 1 in-place update)
@@ -174,10 +181,10 @@ $ nomad plan example.nomad
Scheduler dry-run:
- All tasks successfully allocated.
Job Modify Index: 6
Job Modify Index: 7
To submit the job with version verification run:
nomad run -check-index 6 example.nomad
nomad run -check-index 7 example.nomad
When running the job with the check-index flag, the job will only be run if the
server side version matches the job modify index returned. If the index has
@@ -187,9 +194,9 @@ potentially invalid.
We can see that the scheduler detected the change in count and informs us that
it will cause 2 new instances to be created. The in-place update that will
occur is to push the update job specification to the existing allocation and
will not cause any service interruption. We can then run the job with the
run command the `plan` emitted.
occur is to push the updated job specification to the existing allocation and
will not cause any service interruption. We can then run the job with the run
command that `plan` emitted.
By running with the `-check-index` flag, Nomad checks that the job has not
been modified since the plan was run. This is useful if multiple people are
@@ -197,15 +204,15 @@ interacting with the job at the same time to ensure the job hasn't changed
before you apply your modifications.
```
$ nomad run -check-index 6 example.nomad
==> Monitoring evaluation "127a49d0"
$ nomad run -check-index 7 example.nomad
==> Monitoring evaluation "93d16471"
Evaluation triggered by job "example"
Evaluation within deployment: "2e2c818f"
Allocation "8ab24eef" created: node "171a583b", group "cache"
Allocation "f6c29874" created: node "171a583b", group "cache"
Allocation "8ba85cef" modified: node "171a583b", group "cache"
Evaluation within deployment: "0d06e1b6"
Allocation "3249e320" created: node "e42d6f19", group "cache"
Allocation "453b210f" created: node "e42d6f19", group "cache"
Allocation "883269bf" modified: node "e42d6f19", group "cache"
Evaluation status changed: "pending" -> "complete"
==> Evaluation "127a49d0" finished with status "complete"
==> Evaluation "93d16471" finished with status "complete"
```
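Because this evaluation ran within a deployment, the rollout can also be inspected directly; a sketch using the deployment ID from the output above:

```shell
$ nomad deployment status 0d06e1b6
```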
Because we set the count of the task group to three, Nomad created two
@@ -225,8 +232,7 @@ config {
We can run `plan` again to see what will happen if we submit this change:
```
$ nomad plan example.nomad
```text
+/- Job: "example"
+/- Task Group: "cache" (1 create/destroy update, 2 ignore)
+/- Task: "redis" (forces create/destroy update)
@@ -237,12 +243,11 @@ $ nomad plan example.nomad
Scheduler dry-run:
- All tasks successfully allocated.
- Rolling update, next evaluation will be in 10s.
Job Modify Index: 42
Job Modify Index: 1127
To submit the job with version verification run:
nomad run -check-index 42 example.nomad
nomad run -check-index 1127 example.nomad
When running the job with the check-index flag, the job will only be run if the
server side version matches the job modify index returned. If the index has
@@ -257,14 +262,14 @@ a time.
Once ready, use `run` to push the updated specification:
```
```text
$ nomad run example.nomad
==> Monitoring evaluation "02161762"
==> Monitoring evaluation "293b313a"
Evaluation triggered by job "example"
Evaluation within deployment: "429f8160"
Allocation "de4e3f7a" created: node "6c027e58", group "cache"
Evaluation within deployment: "f4047b3a"
Allocation "27bd4a41" created: node "e42d6f19", group "cache"
Evaluation status changed: "pending" -> "complete"
==> Evaluation "02161762" finished with status "complete"
==> Evaluation "293b313a" finished with status "complete"
```
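One way to follow the rollout from another terminal, assuming the standard `watch` utility is available:

```shell
# Re-run the status view every two seconds
$ watch -n 2 nomad status example
```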
After running, the rolling upgrade can be followed by running `nomad status` and
@@ -281,13 +286,13 @@ scale.
So far we've created, run and modified a job. The final step in a job lifecycle
is stopping the job. This is done with the [`stop` command](/docs/commands/stop.html):
```
```text
$ nomad stop example
==> Monitoring evaluation "ddc4eb7d"
==> Monitoring evaluation "6d4cd6ca"
Evaluation triggered by job "example"
Evaluation within deployment: "ec46fb3b"
Evaluation within deployment: "f4047b3a"
Evaluation status changed: "pending" -> "complete"
==> Evaluation "ddc4eb7d" finished with status "complete"
==> Evaluation "6d4cd6ca" finished with status "complete"
```
When we stop a job, it creates an evaluation which is used to stop all
@@ -295,11 +300,11 @@ the existing allocations. If we now query the job status, we can see it is
now marked as `dead (stopped)`, indicating that the job has been stopped and
Nomad is no longer running it:
```
```text
$ nomad status example
ID = example
Name = example
Submit Date = 07/26/17 17:51:01 UTC
Submit Date = 11/01/17 17:30:40 UTC
Type = service
Priority = 50
Datacenters = dc1
@@ -309,10 +314,10 @@ Parameterized = false
Summary
Task Group Queued Starting Running Failed Complete Lost
cache 0 0 0 0 3 0
cache 0 0 0 0 6 0
Latest Deployment
ID = ec46fb3b
ID = f4047b3a
Status = successful
Description = Deployment completed successfully
@@ -322,9 +327,12 @@ cache 3 3 3 0
Allocations
ID Node ID Task Group Version Desired Status Created At
8ace140d 2cfe061e cache 2 stop complete 07/26/17 17:51:01 UTC
8af5330a 2cfe061e cache 2 stop complete 07/26/17 17:51:01 UTC
df50c3ae 2cfe061e cache 2 stop complete 07/26/17 17:51:01 UTC
7dce5722 e42d6f19 cache 2 stop complete 11/01/17 17:31:16 UTC
8cfab5f4 e42d6f19 cache 2 stop complete 11/01/17 17:31:02 UTC
27bd4a41 e42d6f19 cache 2 stop complete 11/01/17 17:30:40 UTC
3249e320 e42d6f19 cache 1 stop complete 11/01/17 17:28:28 UTC
453b210f e42d6f19 cache 1 stop complete 11/01/17 17:28:28 UTC
883269bf e42d6f19 cache 1 stop complete 10/31/17 22:58:40 UTC
```
If we wanted to start the job again, we could simply `run` it again.
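A stopped job continues to appear in status output as `dead (stopped)`. To remove it from the system entirely, `stop` accepts a `-purge` flag; a sketch:

```shell
# Stop the job and remove it from Nomad's state
$ nomad stop -purge example
```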

View file

@@ -14,8 +14,8 @@ have at least one server, though a cluster of 3 or 5 servers is recommended.
A single server deployment is _**highly**_ discouraged as data loss is inevitable
in a failure scenario.
All other agents run in client mode. A client is a very lightweight
process that registers the host machine, performs heartbeating, and runs any tasks
All other agents run in client mode. A Nomad client is a very lightweight
process that registers the host machine, performs heartbeating, and runs the tasks
that are assigned to it by the servers. The agent must be run on every node that
is part of the cluster so that the servers can assign work to those machines.
@@ -26,7 +26,7 @@ is used to quickly start an agent that is acting as a client and server to test
job configurations or prototype interactions. It should _**not**_ be used in
production as it does not persist state.
```
```text
vagrant@nomad:~$ sudo nomad agent -dev
==> Starting Nomad agent...
@@ -99,7 +99,7 @@ ring using the [`server-members`](/docs/commands/server-members.html) command:
```text
$ nomad server-members
Name Address Port Status Leader Protocol Build Datacenter Region
nomad.global 127.0.0.1 4648 alive true 2 0.6.0 dc1 global
nomad.global 127.0.0.1 4648 alive true 2 0.7.0 dc1 global
```
The output shows our own agent, the address it is running on, its
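The same membership data is exposed over the HTTP API, assuming the default address:

```shell
# List the gossip members known to this agent, as JSON
$ curl -s http://localhost:4646/v1/agent/members
```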

View file

@@ -12,8 +12,6 @@ At this point we have a fully functioning cluster with a job running in it. We h
learned how to inspect a job using `nomad status`; next we'll learn how to inspect
a job in the web client.
~> **Beta Feature!** This page covers the Nomad UI, a feature of v0.7 which is still in beta.
## Opening the Web UI
As long as Nomad is running, the Nomad UI is also running. It is hosted at the same address
@@ -23,6 +21,17 @@ With Nomad running, visit [http://localhost:4646](http://localhost:4646) to open
[![Nomad UI Jobs List][img-jobs-list]][img-jobs-list]
If you can't connect, it's possible that Vagrant was unable to properly map the
port from your host to the VM. Your `vagrant up` output will contain the new
port mapping:
```text
==> default: Fixed port collision for 4646 => 4646. Now on port 2200.
```
In the case above you would connect to
[http://localhost:2200](http://localhost:2200) instead.
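Rather than scrolling back through `vagrant up` output, the effective mapping can be queried directly; `vagrant port` requires Vagrant 1.8 or newer:

```shell
# Show host-to-guest port mappings for the running machine
$ vagrant port
```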
## Inspecting a Job
You should be automatically redirected to `/ui/jobs` upon visiting the UI in your browser. This