edd60b4fb1

When alloc-status is called, in its long form only, print the resource utilization for that single allocation. When node-status is called, in its long form only, print the TOTAL resource utilization that is occurring on that single node.

Nomad Alloc Status:

```
% nomad alloc-status 195d3bf2
ID              = 195d3bf2
Eval ID         = c917e3ee
Name            = example.cache[1]
Node ID         = 1b2520a7
Job ID          = example
Client Status   = running
Evaluated Nodes = 1
Filtered Nodes  = 0
Exhausted Nodes = 0
Allocation Time = 17.73µs
Failures        = 0

==> Task "redis" is "running"
Recent Events:
Time                   Type      Description
04/03/16 21:20:45 EST  Started   Task started by client
04/03/16 21:20:42 EST  Received  Task received by client

==> Status
Allocation "195d3bf2" status "running" (0/1 nodes filtered)
  * Score "1b2520a7-6714-e78d-a8f7-68467dda6db7.binpack" = 1.209464
  * Score "1b2520a7-6714-e78d-a8f7-68467dda6db7.job-anti-affinity" = -10.000000

==> Resources
CPU  MemoryMB  DiskMB  IOPS
500  256       300     0
```

Nomad Node Status:

```
% nomad node-status 57b3a55a
ID         = 57b3a55a
Name       = biscuits
Class      = <none>
DC         = dc1
Drain      = false
Status     = ready
Attributes = arch:amd64, cpu.frequency:3753.458875, cpu.modelname:Intel(R) Core(TM) i7-4770K CPU @ 3.50GHz, cpu.numcores:8, cpu.totalcompute:30027.671000, driver.docker:1, driver.docker.version:1.10.2, driver.exec:1, driver.raw_exec:1, hostname:biscuits, kernel.name:linux, kernel.version:4.4.0-9-generic, memory.totalbytes:25208934400, os.name:ubuntu, os.version:16.04, unique.cgroup.mountpoint:/sys/fs/cgroup, unique.network.ip-address:127.0.0.1, unique.storage.bytesfree:219781419008, unique.storage.bytestotal:246059892736, unique.storage.volume:/dev/sdb3

==> Allocations
ID        Eval ID   Job ID   Task Group  Desired Status  Client Status
2c236883  aa11aca8  example  cache       run             running
32f6e3d6  aa11aca8  example  cache       run             running

==> Resource Utilization
CPU   MemoryMB  DiskMB  IOPS
1000  512       600     0
```
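The node-level figure is simply the sum of the resources reserved by every allocation on that node: two `example.cache` allocations at 500 MHz / 256 MB / 300 MB each yield the 1000 / 512 / 600 total shown above. A minimal Go sketch of that totaling, using a hypothetical `Resources` struct for illustration rather than the actual Nomad types:

```go
package main

import "fmt"

// Resources mirrors the columns in the CLI output above.
// (Hypothetical type for illustration; not the actual Nomad structs.)
type Resources struct {
	CPU      int // MHz
	MemoryMB int
	DiskMB   int
	IOPS     int
}

// totalUtilization sums the resources reserved by every allocation
// on a node, which is what the long form of node-status prints.
func totalUtilization(allocs []Resources) Resources {
	var total Resources
	for _, r := range allocs {
		total.CPU += r.CPU
		total.MemoryMB += r.MemoryMB
		total.DiskMB += r.DiskMB
		total.IOPS += r.IOPS
	}
	return total
}

func main() {
	// Two allocations of the example.cache task group, each reserving
	// the per-allocation resources shown in the alloc-status output.
	allocs := []Resources{
		{CPU: 500, MemoryMB: 256, DiskMB: 300, IOPS: 0},
		{CPU: 500, MemoryMB: 256, DiskMB: 300, IOPS: 0},
	}
	t := totalUtilization(allocs)
	fmt.Printf("CPU %d  MemoryMB %d  DiskMB %d  IOPS %d\n",
		t.CPU, t.MemoryMB, t.DiskMB, t.IOPS)
	// Prints: CPU 1000  MemoryMB 512  DiskMB 600  IOPS 0
}
```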
README.md
Nomad

- Website: https://www.nomadproject.io
- IRC: #nomad-tool on Freenode
- Mailing list: Google Groups
Nomad is a cluster manager, designed for both long-lived services and short-lived batch processing workloads. Developers use a declarative job specification to submit work, and Nomad ensures constraints are satisfied and resource utilization is optimized by efficient task packing. Nomad supports all major operating systems and virtualized, containerized, or standalone applications.
The key features of Nomad are:

- Docker Support: Jobs can specify tasks which are Docker containers. Nomad will automatically run the containers on clients which have Docker installed, scale up and down based on the number of instances requested, and automatically recover from failures.
- Multi-Datacenter and Multi-Region Aware: Nomad is designed to be a global-scale scheduler. Multiple datacenters can be managed as part of a larger region, and jobs can be scheduled across datacenters if requested. Multiple regions join together and federate jobs, making it easy to run jobs anywhere.
- Operationally Simple: Nomad runs as a single binary that can be either a client or server, and is completely self-contained. Nomad does not require any external services for storage or coordination. This means Nomad combines the features of a resource manager and scheduler in a single system.
- Distributed and Highly-Available: Nomad servers cluster together and perform leader election and state replication to provide high availability in the face of failure. The Nomad scheduling engine is optimized for optimistic concurrency, allowing all servers to make scheduling decisions to maximize throughput.
- HashiCorp Ecosystem: Nomad integrates with the entire HashiCorp ecosystem of tools. Along with all HashiCorp tools, Nomad is designed in the Unix philosophy of doing something specific and doing it well. Nomad integrates with tools like Packer, Consul, and Terraform to support building artifacts, service discovery, monitoring, and capacity management.
For more information, see the introduction section of the Nomad website.
Getting Started & Documentation
All documentation is available on the Nomad website.
Developing Nomad
If you wish to work on Nomad itself or any of its built-in systems, you will first need Go installed on your machine (version 1.5+ is required).
Developing with Vagrant

There is an included Vagrantfile that can help bootstrap the process. The created virtual machine is based off of Ubuntu 14, and installs several of the base libraries that can be used by Nomad.

To use this virtual machine, check out Nomad and run `vagrant up` from the root of the repository:
```
$ git clone https://github.com/hashicorp/nomad.git
$ cd nomad
$ vagrant up
```
The virtual machine will launch, and a provisioning script will install the needed dependencies.
Developing locally

For local dev, first make sure Go is properly installed, including setting up a GOPATH. After setting up Go, clone this repository into `$GOPATH/src/github.com/hashicorp/nomad`. Then you can download the required build tools such as vet, cover, and godep by bootstrapping your environment:
```
$ make bootstrap
...
```
Afterwards, type `make test`. This will run the tests. If this exits with exit status 0, then everything is working!
```
$ make test
...
```
To compile a development version of Nomad, run `make dev`. This will put the Nomad binary in the `bin` and `$GOPATH/bin` folders:
```
$ make dev
...
$ bin/nomad
...
```
To cross-compile Nomad, run `make bin`. This will compile Nomad for multiple platforms and place the resulting binaries into the `./pkg` directory:
```
$ make bin
...
$ ls ./pkg
...
```