This change updates the `nomad node-status -verbose` command to
also include the address of the node. This is helpful for cluster
administrators who need to quickly discover node information and
access nodes when required.
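A minimal sketch of where the address would surface, assuming the
`Address` field on `api.NodeListStub` and the "|"-delimited rows that
Nomad's CLI helpers columnize (the function name here is made up):
```
package main

import (
	"fmt"

	"github.com/hashicorp/nomad/api"
)

// formatVerboseNodeRow is a hypothetical helper showing where the node's
// address could be added to the verbose listing. The exact column set is
// illustrative, not the actual Nomad source.
func formatVerboseNodeRow(node *api.NodeListStub) string {
	return fmt.Sprintf("%s|%s|%s|%s|%v|%s",
		node.ID,
		node.Datacenter,
		node.Name,
		node.Address, // the field this change surfaces
		node.Drain,
		node.Status)
}
```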
This PR moves creation of the API client into the returned predict
function. Creating the client triggers a lookup of all the system
certificates, and doing that for every command on macOS was extremely
slow.
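A minimal sketch of the shape of this fix, assuming the predictor API
from `posener/complete` that Nomad's CLI uses (`newClient` stands in
for however the command builds its API client):
```
package main

import (
	"github.com/posener/complete"

	"github.com/hashicorp/nomad/api"
)

// nodePredictor builds the API client lazily, inside the returned predict
// function, so the expensive system-certificate lookup only happens when a
// completion is actually requested, not every time a command is set up.
func nodePredictor(newClient func() (*api.Client, error)) complete.Predictor {
	return complete.PredictFunc(func(a complete.Args) []string {
		client, err := newClient() // moved from the enclosing function
		if err != nil {
			return nil
		}
		nodes, _, err := client.Nodes().List(nil)
		if err != nil {
			return nil
		}
		out := make([]string, 0, len(nodes))
		for _, n := range nodes {
			out = append(out, n.ID)
		}
		return out
	})
}
```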
This PR fixes our vet script and addresses all the vet issues it had
missed. It also fixes pointers being printed in `nomad stop <job>` and
`nomad node-status <node>`.
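For context, the printed-pointer bug is the classic Go mistake of
formatting the pointer itself rather than the value it points to; a
minimal illustration (not the actual Nomad code):
```
package main

import "fmt"

func main() {
	status := "running"
	p := &status

	// Bug: formatting the pointer prints an address such as 0xc000010250.
	fmt.Printf("Status = %v\n", p)

	// Fix: dereference so the value itself is printed.
	fmt.Printf("Status = %v\n", *p)
}
```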
When alloc-status is called, in its long form only, print the resource
utilization for that single allocation.
When node-status is called, in its long form only, print the TOTAL
resource utilization occurring on that single node.
Nomad Alloc Status:
```
% nomad alloc-status 195d3bf2
ID              = 195d3bf2
Eval ID         = c917e3ee
Name            = example.cache[1]
Node ID         = 1b2520a7
Job ID          = example
Client Status   = running
Evaluated Nodes = 1
Filtered Nodes  = 0
Exhausted Nodes = 0
Allocation Time = 17.73µs
Failures        = 0
==> Task "redis" is "running"
Recent Events:
Time                   Type      Description
04/03/16 21:20:45 EST  Started   Task started by client
04/03/16 21:20:42 EST  Received  Task received by client
==> Status
Allocation "195d3bf2" status "running" (0/1 nodes filtered)
* Score "1b2520a7-6714-e78d-a8f7-68467dda6db7.binpack" = 1.209464
* Score "1b2520a7-6714-e78d-a8f7-68467dda6db7.job-anti-affinity" = -10.000000
==> Resources
CPU  MemoryMB  DiskMB  IOPS
500  256       300     0
```
Nomad Node Status:
```
% nomad node-status 57b3a55a
ID         = 57b3a55a
Name       = biscuits
Class      = <none>
DC         = dc1
Drain      = false
Status     = ready
Attributes = arch:amd64, cpu.frequency:3753.458875, cpu.modelname:Intel(R) Core(TM) i7-4770K CPU @ 3.50GHz, cpu.numcores:8, cpu.totalcompute:30027.671000, driver.docker:1, driver.docker.version:1.10.2, driver.exec:1, driver.raw_exec:1, hostname:biscuits, kernel.name:linux, kernel.version:4.4.0-9-generic, memory.totalbytes:25208934400, os.name:ubuntu, os.version:16.04, unique.cgroup.mountpoint:/sys/fs/cgroup, unique.network.ip-address:127.0.0.1, unique.storage.bytesfree:219781419008, unique.storage.bytestotal:246059892736, unique.storage.volume:/dev/sdb3
==> Allocations
ID        Eval ID   Job ID   Task Group  Desired Status  Client Status
2c236883  aa11aca8  example  cache       run             running
32f6e3d6  aa11aca8  example  cache       run             running
==> Resource Utilization
CPU   MemoryMB  DiskMB  IOPS
1000  512       600     0
```
We recently ran into an issue on a small percentage of nomad-clients
where the nomad-client was running successfully but, due to a race
condition, could not correctly bind to the Docker socket. This caused
all of our Nomad jobs to be allocated to a single nomad-client instead
of being spread evenly across our clients. The only way to discover
this was to run `nomad node-status <node>` and count each job
allocation per node.
This can lead to a fairly long debugging process if there are several
nomad-clients. Including the number of allocations for each node in the
`node-status` output would save a large amount of debugging time.
```
jake@biscuits [12:08:41] [~]
-> % nomad node-status
ID        Datacenter  Name      Class   Drain  Status  Allocations
2b0aabc5  dc1         biscuits  <none>  false  ready   0
```
```
jake@biscuits [12:08:55] [~]
-> % nomad node-status
ID        Datacenter  Name      Class   Drain  Status  Allocations
2b0aabc5  dc1         biscuits  <none>  false  ready   1
```
* Truncate all UUID identifiers to eight characters by default
* Refactor the node identifier to an auto-generated UUID
* Created and updated tests and mocks
* Reverted changes to get methods
* Added prefix query parameter
* Updated node status to use prefix-based searching (see the sketch after this list)
* Fixed tests
* Removed truncation logic
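A minimal sketch of what prefix-based searching looks like from the API
side, assuming the `Prefix` field on `api.QueryOptions`; the
disambiguation handling is illustrative:
```
package main

import (
	"fmt"
	"log"

	"github.com/hashicorp/nomad/api"
)

func main() {
	client, err := api.NewClient(api.DefaultConfig())
	if err != nil {
		log.Fatal(err)
	}

	// Ask the server for nodes whose ID starts with the short identifier.
	nodes, _, err := client.Nodes().List(&api.QueryOptions{Prefix: "57b3a55a"})
	if err != nil {
		log.Fatal(err)
	}

	switch len(nodes) {
	case 0:
		log.Fatal("no node found with that prefix")
	case 1:
		fmt.Println("resolved node:", nodes[0].ID)
	default:
		// Ambiguous prefix: list the candidates and ask for more
		// characters, which is roughly what the CLI does instead of guessing.
		for _, n := range nodes {
			fmt.Println("candidate:", n.ID)
		}
	}
}
```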