Merge branch 'master' into vagrant

* master: (57 commits)
  api: use stub structs
  README fillin
  nomad: fixing unit tests
  scheduler: pass failure reason to ExhaustedNode
  nomad: thread alloc fit failure reason through
  nomad: rename region1 to global. Fixes #41
  client: Use Alloc.TaskResouces to override Task.Resources
  scheduler: in-place update should preserve network offer
  scheduler: track dimension of exhaustion
  schedule: avoid in-place update of task if network resources are different
  scheduler: expose reason network offer failed
  nomad: adding reason network offer failed
  mock: use network resources
  scheduler: thread through the TaskResources
  nomad: removing old network index lookup methods
  nomad: Resource Superset ignores network in favor of NetworkIndex
  nomad: update for new AllocsFit API
  nomad: remove PortsOvercommited in favor of NetworkIndex
  scheduler: use the new network index
  nomad: exposing IntContains
  ...
This commit is contained in:
Clint Shryock 2015-09-14 10:13:37 -05:00
commit 676a27da10
245 changed files with 16619 additions and 363 deletions

View file

@ -2,6 +2,12 @@ DEPS = $(shell go list -f '{{range .TestImports}}{{.}} {{end}}' ./...)
PACKAGES = $(shell go list ./...)
VETARGS?=-asmdecl -atomic -bool -buildtags -copylocks -methods \
-nilfunc -printf -rangeloops -shift -structtags -unsafeptr
EXTERNAL_TOOLS=\
github.com/tools/godep \
github.com/mitchellh/gox \
golang.org/x/tools/cmd/cover \
golang.org/x/tools/cmd/vet
all: deps format
@mkdir -p bin/
@ -50,4 +56,11 @@ web:
web-push:
./scripts/website_push.sh
# bootstrap the build by downloading additional tools
bootstrap:
@for tool in $(EXTERNAL_TOOLS) ; do \
echo "Installing $$tool" ; \
go get $$tool; \
done
.PHONY: all cov deps integ test vet web web-push test-nodep

View file

@ -1,2 +1,89 @@
-# nomad
-Where the wild bits roam
+Nomad [![Build Status](https://travis-ci.org/hashicorp/nomad.svg)](https://travis-ci.org/hashicorp/nomad)
+=========
- Website: https://www.nomadproject.io
- IRC: `#nomad-tool` on Freenode
- Mailing list: [Google Groups](https://groups.google.com/group/nomad-tool)
![Nomad](https://raw.githubusercontent.com/hashicorp/nomad/master/website/source/assets/images/logo-header%402x.png?token=AAkIoLO_y1g3wgHMr3QO-559BN22rN0kks5V_2HpwA%3D%3D)
Nomad is a cluster manager, designed for both long lived services and short
lived batch processing workloads. Developers use a declarative job specification
to submit work, and Nomad ensures constraints are satisfied and resource utilization
is optimized by efficient task packing. Nomad supports all major operating systems
and virtualized, containerized, or standalone applications.
The key features of Nomad are:
* **Docker Support**: Jobs can specify tasks which are Docker containers.
Nomad will automatically run the containers on clients which have Docker
installed, scale up and down based on the number of instances requested,
and automatically recover from failures.
* **Multi-Datacenter and Multi-Region Aware**: Nomad is designed to be
a global-scale scheduler. Multiple datacenters can be managed as part
of a larger region, and jobs can be scheduled across datacenters if
requested. Multiple regions join together and federate jobs making it
easy to run jobs anywhere.
* **Operationally Simple**: Nomad runs as a single binary that can be
either a client or server, and is completely self contained. Nomad does
not require any external services for storage or coordination. This means
Nomad combines the features of a resource manager and scheduler in a single
system.
* **Distributed and Highly-Available**: Nomad servers cluster together and
perform leader election and state replication to provide high availability
in the face of failure. The Nomad scheduling engine is optimized for
optimistic concurrency allowing all servers to make scheduling decisions to
maximize throughput.
* **HashiCorp Ecosystem**: Nomad integrates with the entire HashiCorp
ecosystem of tools. Along with all HashiCorp tools, Nomad is designed
in the unix philosophy of doing something specific and doing it well.
Nomad integrates with tools like Packer, Consul, and Terraform to support
building artifacts, service discovery, monitoring and capacity management.
For more information, see the [introduction section](https://www.nomadproject.io/intro)
of the Nomad website.
Getting Started & Documentation
-------------------------------
All documentation is available on the [Nomad website](https://www.nomadproject.io).
Developing Nomad
--------------------
If you wish to work on Nomad itself or any of its built-in systems,
you'll first need [Go](https://www.golang.org) installed on your
machine (version 1.4+ is *required*).
For local dev first make sure Go is properly installed, including setting up a
[GOPATH](https://golang.org/doc/code.html#GOPATH). After setting up Go, you can
download the required build tools such as vet, cover, godep etc by bootstrapping
your environment.
```sh
$ make bootstrap
...
```
Next, clone this repository into `$GOPATH/src/github.com/hashicorp/nomad`.
Then type `make test`. This will run the tests. If this exits with exit status 0,
then everything is working!
```sh
$ make test
...
```
To compile a development version of Nomad, run `make`. This will put the
Nomad binary in the `bin` and `$GOPATH/bin` folders:
```sh
$ make
...
$ bin/nomad
...
```

View file

@ -1,5 +1,9 @@
package api
import (
"time"
)
// Allocations is used to query the alloc-related endpoints.
type Allocations struct {
client *Client
@ -11,8 +15,8 @@ func (c *Client) Allocations() *Allocations {
}
// List returns a list of all of the allocations.
-func (a *Allocations) List(q *QueryOptions) ([]*Allocation, *QueryMeta, error) {
-var resp []*Allocation
+func (a *Allocations) List(q *QueryOptions) ([]*AllocationListStub, *QueryMeta, error) {
+var resp []*AllocationListStub
qm, err := a.client.query("/v1/allocations", &resp, q)
if err != nil {
return nil, nil, err
@ -32,6 +36,41 @@ func (a *Allocations) Info(allocID string, q *QueryOptions) (*Allocation, *Query
// Allocation is used for serialization of allocations.
type Allocation struct {
ID string
EvalID string
Name string
NodeID string
JobID string
Job *Job
TaskGroup string
Resources *Resources
TaskResources map[string]*Resources
Metrics *AllocationMetric
DesiredStatus string
DesiredDescription string
ClientStatus string
ClientDescription string
CreateIndex uint64
ModifyIndex uint64
}
// AllocationMetric is used to deserialize allocation metrics.
type AllocationMetric struct {
NodesEvaluated int
NodesFiltered int
ClassFiltered map[string]int
ConstraintFiltered map[string]int
NodesExhausted int
ClassExhausted map[string]int
DimensionExhaused map[string]int
Scores map[string]float64
AllocationTime time.Duration
CoalescedFailures int
}
// AllocationListStub is used to return a subset of an allocation
// during list operations.
type AllocationListStub struct {
ID string
EvalID string
Name string
@ -42,4 +81,6 @@ type Allocation struct {
DesiredDescription string
ClientStatus string
ClientDescription string
CreateIndex uint64
ModifyIndex uint64
}
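
The List/Info split introduced here mirrors how the new stub types are meant to be consumed. A minimal usage sketch (not part of this commit; it assumes a locally running agent reachable through the api package's default client settings):

```go
package main

import (
	"fmt"
	"log"

	"github.com/hashicorp/nomad/api"
)

func main() {
	// DefaultConfig points the client at the local agent's HTTP API.
	client, err := api.NewClient(api.DefaultConfig())
	if err != nil {
		log.Fatal(err)
	}

	// List returns the trimmed-down AllocationListStub records.
	stubs, _, err := client.Allocations().List(nil)
	if err != nil {
		log.Fatal(err)
	}

	for _, stub := range stubs {
		// Info resolves a stub ID into the full Allocation,
		// including Job, Resources, and Metrics.
		alloc, _, err := client.Allocations().Info(stub.ID, nil)
		if err != nil {
			log.Fatal(err)
		}
		fmt.Println(alloc.ID, alloc.ClientStatus)
	}
}
```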

View file

@ -13,6 +13,8 @@ type configCallback func(c *Config)
func makeClient(t *testing.T, cb1 configCallback,
cb2 testutil.ServerConfigCallback) (*Client, *testutil.TestServer) {
// Always run these tests in parallel
t.Parallel()
// Make client config
conf := DefaultConfig()
@ -48,7 +50,6 @@ func TestDefaultConfig_env(t *testing.T) {
}
func TestSetQueryOptions(t *testing.T) {
// TODO t.Parallel()
c, s := makeClient(t, nil, nil)
defer s.Stop()
@ -76,7 +77,6 @@ func TestSetQueryOptions(t *testing.T) {
}
func TestSetWriteOptions(t *testing.T) {
// TODO t.Parallel()
c, s := makeClient(t, nil, nil)
defer s.Stop()
@ -92,7 +92,6 @@ func TestSetWriteOptions(t *testing.T) {
}
func TestRequestToHTTP(t *testing.T) {
// TODO t.Parallel()
c, s := makeClient(t, nil, nil)
defer s.Stop()
@ -157,7 +156,6 @@ func TestParseWriteMeta(t *testing.T) {
}
func TestQueryString(t *testing.T) {
// TODO t.Parallel()
c, s := makeClient(t, nil, nil)
defer s.Stop()

View file

@ -36,8 +36,8 @@ func (e *Evaluations) Info(evalID string, q *QueryOptions) (*Evaluation, *QueryM
// Allocations is used to retrieve a set of allocations given
// an evaluation ID.
-func (e *Evaluations) Allocations(evalID string, q *QueryOptions) ([]*Allocation, *QueryMeta, error) {
-var resp []*Allocation
+func (e *Evaluations) Allocations(evalID string, q *QueryOptions) ([]*AllocationListStub, *QueryMeta, error) {
+var resp []*AllocationListStub
qm, err := e.client.query("/v1/evaluation/"+evalID+"/allocations", &resp, q)
if err != nil {
return nil, nil, err

View file

@ -32,8 +32,8 @@ func (j *Jobs) Register(job *Job, q *WriteOptions) (string, *WriteMeta, error) {
}
// List is used to list all of the existing jobs.
-func (j *Jobs) List(q *QueryOptions) ([]*Job, *QueryMeta, error) {
-var resp []*Job
+func (j *Jobs) List(q *QueryOptions) ([]*JobListStub, *QueryMeta, error) {
+var resp []*JobListStub
qm, err := j.client.query("/v1/jobs", &resp, q)
if err != nil {
return nil, qm, err
@ -53,8 +53,8 @@ func (j *Jobs) Info(jobID string, q *QueryOptions) (*Job, *QueryMeta, error) {
}
// Allocations is used to return the allocs for a given job ID.
-func (j *Jobs) Allocations(jobID string, q *QueryOptions) ([]*Allocation, *QueryMeta, error) {
-var resp []*Allocation
+func (j *Jobs) Allocations(jobID string, q *QueryOptions) ([]*AllocationListStub, *QueryMeta, error) {
+var resp []*AllocationListStub
qm, err := j.client.query("/v1/job/"+jobID+"/allocations", &resp, q)
if err != nil {
return nil, nil, err
@ -105,6 +105,21 @@ type Job struct {
Meta map[string]string
Status string
StatusDescription string
CreateIndex uint64
ModifyIndex uint64
}
// JobListStub is used to return a subset of information about
// jobs during list operations.
type JobListStub struct {
ID string
Name string
Type string
Priority int
Status string
StatusDescription string
CreateIndex uint64
ModifyIndex uint64
}
// NewServiceJob creates and returns a new service-style job

View file

@ -42,8 +42,7 @@ func TestJobs_Register(t *testing.T) {
assertQueryMeta(t, qm)
// Check that we got the expected response
-expect := []*Job{job}
-if !reflect.DeepEqual(resp, expect) {
+if len(resp) != 1 || resp[0].ID != job.ID {
t.Fatalf("bad: %#v", resp[0])
}
}
@ -76,7 +75,7 @@ func TestJobs_Info(t *testing.T) {
assertQueryMeta(t, qm)
// Check that the result is what we expect
-if !reflect.DeepEqual(result, job) {
+if result == nil || result.ID != job.ID {
t.Fatalf("expect: %#v, got: %#v", job, result)
}
}

api/nodes.go (new file, 101 lines)
View file

@ -0,0 +1,101 @@
package api
import (
"strconv"
)
// Nodes is used to query node-related API endpoints
type Nodes struct {
client *Client
}
// Nodes returns a handle on the node endpoints.
func (c *Client) Nodes() *Nodes {
return &Nodes{client: c}
}
// List is used to list out all of the nodes
func (n *Nodes) List(q *QueryOptions) ([]*NodeListStub, *QueryMeta, error) {
var resp []*NodeListStub
qm, err := n.client.query("/v1/nodes", &resp, q)
if err != nil {
return nil, nil, err
}
return resp, qm, nil
}
// Info is used to query a specific node by its ID.
func (n *Nodes) Info(nodeID string, q *QueryOptions) (*Node, *QueryMeta, error) {
var resp Node
qm, err := n.client.query("/v1/node/"+nodeID, &resp, q)
if err != nil {
return nil, nil, err
}
return &resp, qm, nil
}
// ToggleDrain is used to toggle drain mode on/off for a given node.
func (n *Nodes) ToggleDrain(nodeID string, drain bool, q *WriteOptions) (*WriteMeta, error) {
drainArg := strconv.FormatBool(drain)
wm, err := n.client.write("/v1/node/"+nodeID+"/drain?enable="+drainArg, nil, nil, q)
if err != nil {
return nil, err
}
return wm, nil
}
// Allocations is used to return the allocations associated with a node.
func (n *Nodes) Allocations(nodeID string, q *QueryOptions) ([]*AllocationListStub, *QueryMeta, error) {
var resp []*AllocationListStub
qm, err := n.client.query("/v1/node/"+nodeID+"/allocations", &resp, q)
if err != nil {
return nil, nil, err
}
return resp, qm, nil
}
// ForceEvaluate is used to force-evaluate an existing node.
func (n *Nodes) ForceEvaluate(nodeID string, q *WriteOptions) (string, *WriteMeta, error) {
var resp nodeEvalResponse
wm, err := n.client.write("/v1/node/"+nodeID+"/evaluate", nil, &resp, q)
if err != nil {
return "", nil, err
}
return resp.EvalID, wm, nil
}
// Node is used to deserialize a node entry.
type Node struct {
ID string
Datacenter string
Name string
Attributes map[string]string
Resources *Resources
Reserved *Resources
Links map[string]string
NodeClass string
Drain bool
Status string
StatusDescription string
CreateIndex uint64
ModifyIndex uint64
}
// NodeListStub is a subset of information returned during
// node list operations.
type NodeListStub struct {
ID string
Datacenter string
Name string
NodeClass string
Drain bool
Status string
StatusDescription string
CreateIndex uint64
ModifyIndex uint64
}
// nodeEvalResponse is used to decode a force-eval.
type nodeEvalResponse struct {
EvalID string
}
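
A short usage sketch for these new node endpoints (not part of this commit; it assumes a locally running agent in client mode, so List returns at least one node):

```go
package main

import (
	"fmt"
	"log"

	"github.com/hashicorp/nomad/api"
)

func main() {
	client, err := api.NewClient(api.DefaultConfig())
	if err != nil {
		log.Fatal(err)
	}
	nodes := client.Nodes()

	// List returns NodeListStub records for every registered node.
	stubs, _, err := nodes.List(nil)
	if err != nil || len(stubs) == 0 {
		log.Fatalf("no nodes registered: %v", err)
	}
	nodeID := stubs[0].ID

	// Put the node into drain mode, then force a scheduler evaluation of it.
	if _, err := nodes.ToggleDrain(nodeID, true, nil); err != nil {
		log.Fatal(err)
	}
	evalID, _, err := nodes.ForceEvaluate(nodeID, nil)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("created eval:", evalID)
}
```

The tests that follow exercise the same calls against a dev-mode test server.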

api/nodes_test.go (new file, 203 lines)
View file

@ -0,0 +1,203 @@
package api
import (
"fmt"
"strings"
"testing"
"github.com/hashicorp/nomad/testutil"
)
func TestNodes_List(t *testing.T) {
c, s := makeClient(t, nil, func(c *testutil.TestServerConfig) {
c.DevMode = true
})
defer s.Stop()
nodes := c.Nodes()
var qm *QueryMeta
var out []*NodeListStub
var err error
testutil.WaitForResult(func() (bool, error) {
out, qm, err = nodes.List(nil)
if err != nil {
return false, err
}
if n := len(out); n != 1 {
return false, fmt.Errorf("expected 1 node, got: %d", n)
}
return true, nil
}, func(err error) {
t.Fatalf("err: %s", err)
})
// Check that we got valid QueryMeta.
assertQueryMeta(t, qm)
}
func TestNodes_Info(t *testing.T) {
c, s := makeClient(t, nil, func(c *testutil.TestServerConfig) {
c.DevMode = true
})
defer s.Stop()
nodes := c.Nodes()
// Retrieving a non-existent node returns error
_, _, err := nodes.Info("nope", nil)
if err == nil || !strings.Contains(err.Error(), "not found") {
t.Fatalf("expected not found error, got: %#v", err)
}
// Get the node ID
var nodeID, dc string
testutil.WaitForResult(func() (bool, error) {
out, _, err := nodes.List(nil)
if err != nil {
return false, err
}
if n := len(out); n != 1 {
return false, fmt.Errorf("expected 1 node, got: %d", n)
}
nodeID = out[0].ID
dc = out[0].Datacenter
return true, nil
}, func(err error) {
t.Fatalf("err: %s", err)
})
// Querying for existing nodes returns properly
result, qm, err := nodes.Info(nodeID, nil)
if err != nil {
t.Fatalf("err: %s", err)
}
assertQueryMeta(t, qm)
// Check that the result is what we expect
if result.ID != nodeID || result.Datacenter != dc {
t.Fatalf("expected %s (%s), got: %s (%s)",
nodeID, dc,
result.ID, result.Datacenter)
}
}
func TestNodes_ToggleDrain(t *testing.T) {
c, s := makeClient(t, nil, func(c *testutil.TestServerConfig) {
c.DevMode = true
})
defer s.Stop()
nodes := c.Nodes()
// Wait for node registration and get the ID
var nodeID string
testutil.WaitForResult(func() (bool, error) {
out, _, err := nodes.List(nil)
if err != nil {
return false, err
}
if n := len(out); n != 1 {
return false, fmt.Errorf("expected 1 node, got: %d", n)
}
nodeID = out[0].ID
return true, nil
}, func(err error) {
t.Fatalf("err: %s", err)
})
// Check for drain mode
out, _, err := nodes.Info(nodeID, nil)
if err != nil {
t.Fatalf("err: %s", err)
}
if out.Drain {
t.Fatalf("drain mode should be off")
}
// Toggle it on
wm, err := nodes.ToggleDrain(nodeID, true, nil)
if err != nil {
t.Fatalf("err: %s", err)
}
assertWriteMeta(t, wm)
// Check again
out, _, err = nodes.Info(nodeID, nil)
if err != nil {
t.Fatalf("err: %s", err)
}
if !out.Drain {
t.Fatalf("drain mode should be on")
}
// Toggle off again
wm, err = nodes.ToggleDrain(nodeID, false, nil)
if err != nil {
t.Fatalf("err: %s", err)
}
assertWriteMeta(t, wm)
// Check again
out, _, err = nodes.Info(nodeID, nil)
if err != nil {
t.Fatalf("err: %s", err)
}
if out.Drain {
t.Fatalf("drain mode should be off")
}
}
func TestNodes_Allocations(t *testing.T) {
c, s := makeClient(t, nil, nil)
defer s.Stop()
nodes := c.Nodes()
// Looking up by a non-existent node returns nothing. We
// don't check the index here because it's possible the node
// has already registered, in which case we will get a non-
// zero result anyways.
allocs, _, err := nodes.Allocations("nope", nil)
if err != nil {
t.Fatalf("err: %s", err)
}
if n := len(allocs); n != 0 {
t.Fatalf("expected 0 allocs, got: %d", n)
}
}
func TestNodes_ForceEvaluate(t *testing.T) {
c, s := makeClient(t, nil, func(c *testutil.TestServerConfig) {
c.DevMode = true
})
defer s.Stop()
nodes := c.Nodes()
// Force-eval on a non-existent node fails
_, _, err := nodes.ForceEvaluate("nope", nil)
if err == nil || !strings.Contains(err.Error(), "not found") {
t.Fatalf("expected not found error, got: %#v", err)
}
// Wait for node registration and get the ID
var nodeID string
testutil.WaitForResult(func() (bool, error) {
out, _, err := nodes.List(nil)
if err != nil {
return false, err
}
if n := len(out); n != 1 {
return false, fmt.Errorf("expected 1 node, got: %d", n)
}
nodeID = out[0].ID
return true, nil
}, func(err error) {
t.Fatalf("err: %s", err)
})
// Try force-eval again. We don't check the WriteMeta because
// there are no allocations to process, so we would get an index
// of zero. Same goes for the eval ID.
_, _, err = nodes.ForceEvaluate(nodeID, nil)
if err != nil {
t.Fatalf("err: %s", err)
}
}

api/status.go (new file, 32 lines)
View file

@ -0,0 +1,32 @@
package api
// Status is used to query the status-related endpoints.
type Status struct {
client *Client
}
// Status returns a handle on the status endpoints.
func (c *Client) Status() *Status {
return &Status{client: c}
}
// Leader is used to query for the current cluster leader.
func (s *Status) Leader() (string, error) {
var resp string
_, err := s.client.query("/v1/status/leader", &resp, nil)
if err != nil {
return "", err
}
return resp, nil
}
// Peers is used to query the addresses of the server peers
// in the cluster.
func (s *Status) Peers() ([]string, error) {
var resp []string
_, err := s.client.query("/v1/status/peers", &resp, nil)
if err != nil {
return nil, err
}
return resp, nil
}
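
A minimal sketch of the new status endpoints (not part of this commit; it assumes a locally running agent with at least one server):

```go
package main

import (
	"fmt"
	"log"

	"github.com/hashicorp/nomad/api"
)

func main() {
	client, err := api.NewClient(api.DefaultConfig())
	if err != nil {
		log.Fatal(err)
	}

	// Leader returns the RPC address of the current cluster leader.
	leader, err := client.Status().Leader()
	if err != nil {
		log.Fatal(err)
	}

	// Peers returns the RPC addresses of the known server peers.
	peers, err := client.Status().Peers()
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("leader:", leader, "peers:", peers)
}
```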

api/status_test.go (new file, 20 lines)
View file

@ -0,0 +1,20 @@
package api
import (
"testing"
)
func TestStatus_Leader(t *testing.T) {
c, s := makeClient(t, nil, nil)
defer s.Stop()
status := c.Status()
// Query for leader status should return a result
out, err := status.Leader()
if err != nil {
t.Fatalf("err: %s", err)
}
if out == "" {
t.Fatalf("expected leader, got: %q", out)
}
}

View file

@ -288,6 +288,10 @@ func (r *AllocRunner) Run() {
if _, ok := r.tasks[task.Name]; ok {
continue
}
// Merge in the task resources
task.Resources = alloc.TaskResources[task.Name]
tr := NewTaskRunner(r.logger, r.config, r.setTaskStatus, r.ctx, r.alloc.ID, task)
r.tasks[task.Name] = tr
go tr.Run()
@ -309,6 +313,9 @@ OUTER:
r.taskLock.RLock()
for _, task := range tg.Tasks {
tr := r.tasks[task.Name]
// Merge in the task resources
task.Resources = update.TaskResources[task.Name]
tr.Update(task)
}
r.taskLock.RUnlock()

View file

@ -52,7 +52,7 @@ const (
func DefaultConfig() *config.Config {
return &config.Config{
LogOutput: os.Stderr,
-Region: "region1",
+Region: "global",
}
}
@ -98,6 +98,11 @@ func NewClient(cfg *config.Config) (*Client, error) {
shutdownCh: make(chan struct{}),
}
// Initialize the client
if err := c.init(); err != nil {
return nil, fmt.Errorf("failed intializing client: %v", err)
}
// Restore the state
if err := c.restoreState(); err != nil {
return nil, fmt.Errorf("failed to restore state: %v", err)
@ -123,6 +128,18 @@ func NewClient(cfg *config.Config) (*Client, error) {
return c, nil
}
// init is used to initialize the client and perform any setup
// needed before we begin starting its various components.
func (c *Client) init() error {
// Ensure the alloc dir exists if we have one
if c.config.AllocDir != "" {
if err := os.MkdirAll(c.config.AllocDir, 0700); err != nil {
return fmt.Errorf("failed creating alloc dir: %s", err)
}
}
return nil
}
// Leave is used to prepare the client to leave the cluster
func (c *Client) Leave() error {
// TODO

View file

@ -2,7 +2,10 @@ package client
import (
"fmt"
"io/ioutil"
"net"
"os"
"path/filepath"
"sync/atomic"
"testing"
"time"
@ -155,7 +158,7 @@ func TestClient_Register(t *testing.T) {
req := structs.NodeSpecificRequest{
NodeID: c1.Node().ID,
-QueryOptions: structs.QueryOptions{Region: "region1"},
+QueryOptions: structs.QueryOptions{Region: "global"},
}
var out structs.SingleNodeResponse
@ -188,7 +191,7 @@ func TestClient_Heartbeat(t *testing.T) {
req := structs.NodeSpecificRequest{
NodeID: c1.Node().ID,
-QueryOptions: structs.QueryOptions{Region: "region1"},
+QueryOptions: structs.QueryOptions{Region: "global"},
}
var out structs.SingleNodeResponse
@ -365,3 +368,25 @@ func TestClient_SaveRestoreState(t *testing.T) {
t.Fatalf("bad: %#v", ar.Alloc()) t.Fatalf("bad: %#v", ar.Alloc())
} }
} }
func TestClient_Init(t *testing.T) {
dir, err := ioutil.TempDir("", "nomad")
if err != nil {
t.Fatalf("err: %s", err)
}
defer os.RemoveAll(dir)
allocDir := filepath.Join(dir, "alloc")
client := &Client{
config: &config.Config{
AllocDir: allocDir,
},
}
if err := client.init(); err != nil {
t.Fatalf("err: %s", err)
}
if _, err := os.Stat(allocDir); err != nil {
t.Fatalf("err: %s", err)
}
}

View file

@ -106,6 +106,9 @@ func persistState(path string, data interface{}) error {
func restoreState(path string, data interface{}) error {
buf, err := ioutil.ReadFile(path)
if err != nil {
if os.IsNotExist(err) {
return nil
}
return fmt.Errorf("failed to read state: %v", err)
}
if err := json.Unmarshal(buf, data); err != nil {

View file

@ -1,7 +1,9 @@
package client
import (
"io/ioutil"
"os"
"path/filepath"
"reflect"
"testing"
"time"
@ -80,6 +82,16 @@ func TestShuffleStrings(t *testing.T) {
}
func TestPersistRestoreState(t *testing.T) {
dir, err := ioutil.TempDir("", "nomad")
if err != nil {
t.Fatalf("err: %s", err)
}
defer os.RemoveAll(dir)
// Use a state path inside a non-existent directory. This
// verifies that the directory is created properly.
statePath := filepath.Join(dir, "subdir", "test-persist")
type stateTest struct {
Foo int
Bar string
@ -90,15 +102,14 @@ func TestPersistRestoreState(t *testing.T) {
Bar: "the quick brown fox", Bar: "the quick brown fox",
Baz: true, Baz: true,
} }
defer os.Remove("test-persist")
err := persistState("test-persist", &state) err = persistState(statePath, &state)
if err != nil { if err != nil {
t.Fatalf("err: %v", err) t.Fatalf("err: %v", err)
} }
var out stateTest var out stateTest
err = restoreState("test-persist", &out) err = restoreState(statePath, &out)
if err != nil { if err != nil {
t.Fatalf("err: %v", err) t.Fatalf("err: %v", err)
} }

View file

@ -58,13 +58,9 @@ func NewAgent(config *Config, logOutput io.Writer) (*Agent, error) {
return a, nil
}
-// setupServer is used to setup the server if enabled
-func (a *Agent) setupServer() error {
-if !a.config.Server.Enabled {
-return nil
-}
-// Setup the configuration
+// serverConfig is used to generate a new server configuration struct
+// for initializing a nomad server.
+func (a *Agent) serverConfig() (*nomad.Config, error) {
conf := a.config.NomadConfig
if conf == nil {
conf = nomad.DefaultConfig()
@ -102,19 +98,57 @@ func (a *Agent) setupServer() error {
if len(a.config.Server.EnabledSchedulers) != 0 {
conf.EnabledSchedulers = a.config.Server.EnabledSchedulers
}
-if addr := a.config.Server.AdvertiseAddr; addr != "" {
-tcpAddr, err := net.ResolveTCPAddr("tcp", addr)
-if err != nil {
-return fmt.Errorf("failed to resolve advertise address: %v", err)
-}
-conf.RPCAdvertise = tcpAddr
-}
-if addr := a.config.Server.BindAddr; addr != "" {
-tcpAddr, err := net.ResolveTCPAddr("tcp", addr)
-if err != nil {
-return fmt.Errorf("failed to resolve bind address: %v", err)
-}
-conf.RPCAddr = tcpAddr
+// Set up the advertise addrs
+if addr := a.config.AdvertiseAddrs.Serf; addr != "" {
+serfAddr, err := net.ResolveTCPAddr("tcp", addr)
+if err != nil {
+return nil, fmt.Errorf("error resolving serf advertise address: %s", err)
+}
+conf.SerfConfig.MemberlistConfig.AdvertiseAddr = serfAddr.IP.String()
+conf.SerfConfig.MemberlistConfig.AdvertisePort = serfAddr.Port
+}
+if addr := a.config.AdvertiseAddrs.RPC; addr != "" {
+rpcAddr, err := net.ResolveTCPAddr("tcp", addr)
+if err != nil {
+return nil, fmt.Errorf("error resolving rpc advertise address: %s", err)
+}
+conf.RPCAdvertise = rpcAddr
}
// Set up the bind addresses
if addr := a.config.BindAddr; addr != "" {
conf.RPCAddr.IP = net.ParseIP(addr)
conf.SerfConfig.MemberlistConfig.BindAddr = addr
}
if addr := a.config.Addresses.RPC; addr != "" {
conf.RPCAddr.IP = net.ParseIP(addr)
}
if addr := a.config.Addresses.Serf; addr != "" {
conf.SerfConfig.MemberlistConfig.BindAddr = addr
}
// Set up the ports
if port := a.config.Ports.RPC; port != 0 {
conf.RPCAddr.Port = port
}
if port := a.config.Ports.Serf; port != 0 {
conf.SerfConfig.MemberlistConfig.BindPort = port
}
return conf, nil
}
// setupServer is used to setup the server if enabled
func (a *Agent) setupServer() error {
if !a.config.Server.Enabled {
return nil
}
// Setup the configuration
conf, err := a.serverConfig()
if err != nil {
return fmt.Errorf("server config setup failed: %s", err)
}
// Create the server
@ -122,6 +156,7 @@ func (a *Agent) setupServer() error {
if err != nil {
return fmt.Errorf("server setup failed: %v", err)
}
a.server = server a.server = server
return nil
}

View file

@ -5,6 +5,7 @@ import (
"io/ioutil" "io/ioutil"
"net" "net"
"os" "os"
"strings"
"sync/atomic" "sync/atomic"
"testing" "testing"
"time" "time"
@ -79,3 +80,86 @@ func TestAgent_RPCPing(t *testing.T) {
t.Fatalf("err: %v", err) t.Fatalf("err: %v", err)
} }
} }
func TestAgent_ServerConfig(t *testing.T) {
conf := DefaultConfig()
a := &Agent{config: conf}
// Returns error on bad serf addr
conf.AdvertiseAddrs.Serf = "nope"
_, err := a.serverConfig()
if err == nil || !strings.Contains(err.Error(), "serf advertise") {
t.Fatalf("expected serf address error, got: %#v", err)
}
conf.AdvertiseAddrs.Serf = "127.0.0.1:4000"
// Returns error on bad rpc addr
conf.AdvertiseAddrs.RPC = "nope"
_, err = a.serverConfig()
if err == nil || !strings.Contains(err.Error(), "rpc advertise") {
t.Fatalf("expected rpc address error, got: %#v", err)
}
conf.AdvertiseAddrs.RPC = "127.0.0.1:4001"
// Parses the advertise addrs correctly
out, err := a.serverConfig()
if err != nil {
t.Fatalf("err: %s", err)
}
serfAddr := out.SerfConfig.MemberlistConfig.AdvertiseAddr
if serfAddr != "127.0.0.1" {
t.Fatalf("expect 127.0.0.1, got: %s", serfAddr)
}
serfPort := out.SerfConfig.MemberlistConfig.AdvertisePort
if serfPort != 4000 {
t.Fatalf("expected 4000, got: %d", serfPort)
}
if addr := out.RPCAdvertise; addr.IP.String() != "127.0.0.1" || addr.Port != 4001 {
t.Fatalf("bad rpc advertise addr: %#v", addr)
}
// Sets up the ports properly
conf.Ports.RPC = 4003
conf.Ports.Serf = 4004
out, err = a.serverConfig()
if err != nil {
t.Fatalf("err: %s", err)
}
if addr := out.RPCAddr.Port; addr != 4003 {
t.Fatalf("expect 4003, got: %d", out.RPCAddr.Port)
}
if port := out.SerfConfig.MemberlistConfig.BindPort; port != 4004 {
t.Fatalf("expect 4004, got: %d", port)
}
// Prefers the most specific bind addrs
conf.BindAddr = "127.0.0.3"
conf.Addresses.RPC = "127.0.0.2"
conf.Addresses.Serf = "127.0.0.2"
out, err = a.serverConfig()
if err != nil {
t.Fatalf("err: %s", err)
}
if addr := out.RPCAddr.IP.String(); addr != "127.0.0.2" {
t.Fatalf("expect 127.0.0.2, got: %s", addr)
}
if addr := out.SerfConfig.MemberlistConfig.BindAddr; addr != "127.0.0.2" {
t.Fatalf("expect 127.0.0.2, got: %s", addr)
}
// Defaults to the global bind addr
conf.Addresses.RPC = ""
conf.Addresses.Serf = ""
out, err = a.serverConfig()
if err != nil {
t.Fatalf("err: %s", err)
}
if addr := out.RPCAddr.IP.String(); addr != "127.0.0.3" {
t.Fatalf("expect 127.0.0.3, got: %s", addr)
}
if addr := out.SerfConfig.MemberlistConfig.BindAddr; addr != "127.0.0.3" {
t.Fatalf("expect 127.0.0.3, got: %s", addr)
}
}

View file

@ -4,6 +4,7 @@ import (
"fmt" "fmt"
"io" "io"
"io/ioutil" "io/ioutil"
"net"
"os" "os"
"path/filepath" "path/filepath"
"strings" "strings"
@ -16,7 +17,7 @@ import (
// Config is the configuration for the Nomad agent.
type Config struct {
-// Region is the region this agent is in. Defaults to region1.
+// Region is the region this agent is in. Defaults to global.
Region string `hcl:"region"`
// Datacenter is the datacenter this agent is in. Defaults to dc1
@ -31,13 +32,22 @@ type Config struct {
// LogLevel is the level of the logs to putout
LogLevel string `hcl:"log_level"`
-// HttpAddr is used to control the address and port we bind to.
-// If not specified, 127.0.0.1:4646 is used.
-HttpAddr string `hcl:"http_addr"`
+// BindAddr is the address on which all of nomad's services will
+// be bound. If not specified, this defaults to 127.0.0.1.
+BindAddr string `hcl:"bind_addr"`
// EnableDebug is used to enable debugging HTTP endpoints
EnableDebug bool `hcl:"enable_debug"`
// Ports is used to control the network ports we bind to.
Ports *Ports `hcl:"ports"`
// Addresses is used to override the network addresses we bind to.
Addresses *Addresses `hcl:"addresses"`
// AdvertiseAddrs is used to control the addresses we advertise.
AdvertiseAddrs *AdvertiseAddrs `hcl:"advertise"`
// Client has our client related settings
Client *ClientConfig `hcl:"client"`
@ -112,16 +122,6 @@ type ServerConfig struct {
// ProtocolVersionMin and ProtocolVersionMax.
ProtocolVersion int `hcl:"protocol_version"`
// AdvertiseAddr is the address we use for advertising our Serf,
// and Consul RPC IP. If not specified, bind address is used.
AdvertiseAddr string `mapstructure:"advertise_addr"`
// BindAddr is used to control the address we bind to.
// If not specified, the first private IP we find is used.
// This controls the address we use for cluster facing
// services (Gossip, Server RPC)
BindAddr string `hcl:"bind_addr"`
// NumSchedulers is the number of scheduler thread that are run.
// This can be as many as one per core, or zero to disable this server
// from doing any scheduling work.
@ -140,6 +140,30 @@ type Telemetry struct {
DisableHostname bool `hcl:"disable_hostname"`
}
// Ports is used to encapsulate the various ports we bind to for network
// services. If any are not specified then the defaults are used instead.
type Ports struct {
HTTP int `hcl:"http"`
RPC int `hcl:"rpc"`
Serf int `hcl:"serf"`
}
// Addresses encapsulates all of the addresses we bind to for various
// network services. Everything is optional and defaults to BindAddr.
type Addresses struct {
HTTP string `hcl:"http"`
RPC string `hcl:"rpc"`
Serf string `hcl:"serf"`
}
// AdvertiseAddrs is used to control the addresses we advertise out for
// different network services. Not all network services support an
// advertise address. All are optional and default to BindAddr.
type AdvertiseAddrs struct {
RPC string `hcl:"rpc"`
Serf string `hcl:"serf"`
}
// DevConfig is a Config that is used for dev mode of Nomad.
func DevConfig() *Config {
conf := DefaultConfig()
@ -150,21 +174,22 @@ func DevConfig() *Config {
conf.EnableDebug = true
conf.DisableAnonymousSignature = true
return conf
return &Config{
LogLevel: "DEBUG",
DevMode: true,
EnableDebug: true,
DisableAnonymousSignature: true,
}
}
// DefaultConfig is a the baseline configuration for Nomad
func DefaultConfig() *Config {
return &Config{
LogLevel: "INFO",
-Region: "region1",
+Region: "global",
Datacenter: "dc1",
-HttpAddr: "127.0.0.1:4646",
+BindAddr: "127.0.0.1",
Ports: &Ports{
HTTP: 4646,
RPC: 4647,
Serf: 4648,
},
Addresses: &Addresses{},
AdvertiseAddrs: &AdvertiseAddrs{},
Client: &ClientConfig{ Client: &ClientConfig{
Enabled: false, Enabled: false,
}, },
@ -174,6 +199,15 @@ func DefaultConfig() *Config {
}
}
// GetListener can be used to get a new listener using a custom bind address.
// If the bind provided address is empty, the BindAddr is used instead.
func (c *Config) Listener(proto, addr string, port int) (net.Listener, error) {
if addr == "" {
addr = c.BindAddr
}
return net.Listen(proto, fmt.Sprintf("%s:%d", addr, port))
}
// Merge merges two configurations.
func (a *Config) Merge(b *Config) *Config {
var result Config = *a
@ -193,8 +227,8 @@ func (a *Config) Merge(b *Config) *Config {
if b.LogLevel != "" { if b.LogLevel != "" {
result.LogLevel = b.LogLevel result.LogLevel = b.LogLevel
} }
if b.HttpAddr != "" { if b.BindAddr != "" {
result.HttpAddr = b.HttpAddr result.BindAddr = b.BindAddr
} }
if b.EnableDebug { if b.EnableDebug {
result.EnableDebug = true result.EnableDebug = true
@ -242,6 +276,30 @@ func (a *Config) Merge(b *Config) *Config {
result.Server = result.Server.Merge(b.Server)
}
// Apply the ports config
if result.Ports == nil && b.Ports != nil {
ports := *b.Ports
result.Ports = &ports
} else if b.Ports != nil {
result.Ports = result.Ports.Merge(b.Ports)
}
// Apply the address config
if result.Addresses == nil && b.Addresses != nil {
addrs := *b.Addresses
result.Addresses = &addrs
} else if b.Addresses != nil {
result.Addresses = result.Addresses.Merge(b.Addresses)
}
// Apply the advertise addrs config
if result.AdvertiseAddrs == nil && b.AdvertiseAddrs != nil {
advertise := *b.AdvertiseAddrs
result.AdvertiseAddrs = &advertise
} else if b.AdvertiseAddrs != nil {
result.AdvertiseAddrs = result.AdvertiseAddrs.Merge(b.AdvertiseAddrs)
}
return &result return &result
} }
@ -264,12 +322,6 @@ func (a *ServerConfig) Merge(b *ServerConfig) *ServerConfig {
if b.ProtocolVersion != 0 {
result.ProtocolVersion = b.ProtocolVersion
}
if b.AdvertiseAddr != "" {
result.AdvertiseAddr = b.AdvertiseAddr
}
if b.BindAddr != "" {
result.BindAddr = b.BindAddr
}
if b.NumSchedulers != 0 {
result.NumSchedulers = b.NumSchedulers
}
@ -330,6 +382,51 @@ func (a *Telemetry) Merge(b *Telemetry) *Telemetry {
return &result return &result
} }
// Merge is used to merge two port configurations.
func (a *Ports) Merge(b *Ports) *Ports {
var result Ports = *a
if b.HTTP != 0 {
result.HTTP = b.HTTP
}
if b.RPC != 0 {
result.RPC = b.RPC
}
if b.Serf != 0 {
result.Serf = b.Serf
}
return &result
}
// Merge is used to merge two address configs together.
func (a *Addresses) Merge(b *Addresses) *Addresses {
var result Addresses = *a
if b.HTTP != "" {
result.HTTP = b.HTTP
}
if b.RPC != "" {
result.RPC = b.RPC
}
if b.Serf != "" {
result.Serf = b.Serf
}
return &result
}
// Merge merges two advertise addrs configs together.
func (a *AdvertiseAddrs) Merge(b *AdvertiseAddrs) *AdvertiseAddrs {
var result AdvertiseAddrs = *a
if b.RPC != "" {
result.RPC = b.RPC
}
if b.Serf != "" {
result.Serf = b.Serf
}
return &result
}
// LoadConfig loads the configuration at the given path, regardless if
// its a file or directory.
func LoadConfig(path string) (*Config, error) {
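
To illustrate how the new BindAddr/Addresses/Ports fields feed the Listener helper, a minimal sketch (not part of this commit; the literal values and the exampleListener name are hypothetical, everything else comes from this file):

```go
package agent

import "fmt"

func exampleListener() error {
	conf := DefaultConfig()
	conf.BindAddr = "0.0.0.0" // global fallback for every service
	conf.Addresses.HTTP = ""  // no per-service override, so BindAddr is used
	conf.Ports.HTTP = 5646    // hypothetical non-default port

	// Listener falls back to BindAddr when the per-service address is empty,
	// which is how NewHTTPServer builds its listener.
	ln, err := conf.Listener("tcp", conf.Addresses.HTTP, conf.Ports.HTTP)
	if err != nil {
		return err
	}
	defer ln.Close()

	fmt.Println("listening on", ln.Addr()) // 0.0.0.0:5646
	return nil
}
```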

View file

@ -12,12 +12,11 @@ import (
func TestConfig_Merge(t *testing.T) {
c1 := &Config{
-Region: "region1",
+Region: "global",
Datacenter: "dc1",
NodeName: "node1",
DataDir: "/tmp/dir1",
LogLevel: "INFO",
-HttpAddr: "127.0.0.1:4646",
EnableDebug: false,
LeaveOnInt: false,
LeaveOnTerm: false,
@ -25,6 +24,7 @@ func TestConfig_Merge(t *testing.T) {
SyslogFacility: "local0.info", SyslogFacility: "local0.info",
DisableUpdateCheck: false, DisableUpdateCheck: false,
DisableAnonymousSignature: false, DisableAnonymousSignature: false,
BindAddr: "127.0.0.1",
Telemetry: &Telemetry{ Telemetry: &Telemetry{
StatsiteAddr: "127.0.0.1:8125", StatsiteAddr: "127.0.0.1:8125",
StatsdAddr: "127.0.0.1:8125", StatsdAddr: "127.0.0.1:8125",
@ -43,10 +43,22 @@ func TestConfig_Merge(t *testing.T) {
BootstrapExpect: 1,
DataDir: "/tmp/data1",
ProtocolVersion: 1,
-AdvertiseAddr: "127.0.0.1:4647",
-BindAddr: "127.0.0.1",
NumSchedulers: 1,
},
Ports: &Ports{
HTTP: 4646,
RPC: 4647,
Serf: 4648,
},
Addresses: &Addresses{
HTTP: "127.0.0.1",
RPC: "127.0.0.1",
Serf: "127.0.0.1",
},
AdvertiseAddrs: &AdvertiseAddrs{
RPC: "127.0.0.1",
Serf: "127.0.0.1",
},
}
c2 := &Config{
@ -55,7 +67,6 @@ func TestConfig_Merge(t *testing.T) {
NodeName: "node2", NodeName: "node2",
DataDir: "/tmp/dir2", DataDir: "/tmp/dir2",
LogLevel: "DEBUG", LogLevel: "DEBUG",
HttpAddr: "0.0.0.0:80",
EnableDebug: true, EnableDebug: true,
LeaveOnInt: true, LeaveOnInt: true,
LeaveOnTerm: true, LeaveOnTerm: true,
@ -63,6 +74,7 @@ func TestConfig_Merge(t *testing.T) {
SyslogFacility: "local0.debug", SyslogFacility: "local0.debug",
DisableUpdateCheck: true, DisableUpdateCheck: true,
DisableAnonymousSignature: true, DisableAnonymousSignature: true,
BindAddr: "127.0.0.2",
Telemetry: &Telemetry{ Telemetry: &Telemetry{
StatsiteAddr: "127.0.0.2:8125", StatsiteAddr: "127.0.0.2:8125",
StatsdAddr: "127.0.0.2:8125", StatsdAddr: "127.0.0.2:8125",
@ -83,11 +95,23 @@ func TestConfig_Merge(t *testing.T) {
BootstrapExpect: 2,
DataDir: "/tmp/data2",
ProtocolVersion: 2,
-AdvertiseAddr: "127.0.0.2:4647",
-BindAddr: "127.0.0.2",
NumSchedulers: 2,
EnabledSchedulers: []string{structs.JobTypeBatch},
},
Ports: &Ports{
HTTP: 20000,
RPC: 21000,
Serf: 22000,
},
Addresses: &Addresses{
HTTP: "127.0.0.2",
RPC: "127.0.0.2",
Serf: "127.0.0.2",
},
AdvertiseAddrs: &AdvertiseAddrs{
RPC: "127.0.0.2",
Serf: "127.0.0.2",
},
}
result := c1.Merge(c2)
@ -231,3 +255,44 @@ func TestConfig_LoadConfig(t *testing.T) {
t.Fatalf("bad: %#v", config) t.Fatalf("bad: %#v", config)
} }
} }
func TestConfig_Listener(t *testing.T) {
config := DefaultConfig()
// Fails on invalid input
if _, err := config.Listener("tcp", "nope", 8080); err == nil {
t.Fatalf("expected addr error")
}
if _, err := config.Listener("nope", "127.0.0.1", 8080); err == nil {
t.Fatalf("expected protocol err")
}
if _, err := config.Listener("tcp", "127.0.0.1", -1); err == nil {
t.Fatalf("expected port error")
}
// Works with valid inputs
ln, err := config.Listener("tcp", "127.0.0.1", 24000)
if err != nil {
t.Fatalf("err: %s", err)
}
ln.Close()
if net := ln.Addr().Network(); net != "tcp" {
t.Fatalf("expected tcp, got: %q", net)
}
if addr := ln.Addr().String(); addr != "127.0.0.1:24000" {
t.Fatalf("expected 127.0.0.1:4646, got: %q", addr)
}
// Falls back to default bind address if non provided
config.BindAddr = "0.0.0.0"
ln, err = config.Listener("tcp4", "", 24000)
if err != nil {
t.Fatalf("err: %s", err)
}
ln.Close()
if addr := ln.Addr().String(); addr != "0.0.0.0:24000" {
t.Fatalf("expected 0.0.0.0:24000, got: %q", addr)
}
}

View file

@ -31,9 +31,9 @@ type HTTPServer struct {
// NewHTTPServer starts new HTTP server over the agent
func NewHTTPServer(agent *Agent, config *Config, logOutput io.Writer) (*HTTPServer, error) {
// Start the listener
-ln, err := net.Listen("tcp", config.HttpAddr)
+ln, err := config.Listener("tcp", config.Addresses.HTTP, config.Ports.HTTP)
if err != nil {
-return nil, fmt.Errorf("failed to start HTTP listener on %s: %v", config.HttpAddr, err)
+return nil, fmt.Errorf("failed to start HTTP listener: %v", err)
}
// Create the mux

View file

@ -268,7 +268,7 @@ func TestParseRegion(t *testing.T) {
}
s.Server.parseRegion(req, &region)
-if region != "region1" {
+if region != "global" {
t.Fatalf("bad %s", region)
}
}

View file

@ -16,7 +16,7 @@ func TestHTTP_JobsList(t *testing.T) {
job := mock.Job()
args := structs.JobRegisterRequest{
Job: job,
-WriteRequest: structs.WriteRequest{Region: "region1"},
+WriteRequest: structs.WriteRequest{Region: "global"},
}
var resp structs.JobRegisterResponse
if err := s.Agent.RPC("Job.Register", &args, &resp); err != nil {
@ -62,7 +62,7 @@ func TestHTTP_JobsRegister(t *testing.T) {
job := mock.Job()
args := structs.JobRegisterRequest{
Job: job,
-WriteRequest: structs.WriteRequest{Region: "region1"},
+WriteRequest: structs.WriteRequest{Region: "global"},
}
buf := encodeReq(args)
@ -93,7 +93,7 @@ func TestHTTP_JobsRegister(t *testing.T) {
// Check the job is registered
getReq := structs.JobSpecificRequest{
JobID: job.ID,
-QueryOptions: structs.QueryOptions{Region: "region1"},
+QueryOptions: structs.QueryOptions{Region: "global"},
}
var getResp structs.SingleJobResponse
if err := s.Agent.RPC("Job.GetJob", &getReq, &getResp); err != nil {
@ -112,7 +112,7 @@ func TestHTTP_JobQuery(t *testing.T) {
job := mock.Job()
args := structs.JobRegisterRequest{
Job: job,
-WriteRequest: structs.WriteRequest{Region: "region1"},
+WriteRequest: structs.WriteRequest{Region: "global"},
}
var resp structs.JobRegisterResponse
if err := s.Agent.RPC("Job.Register", &args, &resp); err != nil {
@ -157,7 +157,7 @@ func TestHTTP_JobUpdate(t *testing.T) {
job := mock.Job()
args := structs.JobRegisterRequest{
Job: job,
-WriteRequest: structs.WriteRequest{Region: "region1"},
+WriteRequest: structs.WriteRequest{Region: "global"},
}
buf := encodeReq(args)
@ -188,7 +188,7 @@ func TestHTTP_JobUpdate(t *testing.T) {
// Check the job is registered
getReq := structs.JobSpecificRequest{
JobID: job.ID,
-QueryOptions: structs.QueryOptions{Region: "region1"},
+QueryOptions: structs.QueryOptions{Region: "global"},
}
var getResp structs.SingleJobResponse
if err := s.Agent.RPC("Job.GetJob", &getReq, &getResp); err != nil {
@ -207,7 +207,7 @@ func TestHTTP_JobDelete(t *testing.T) {
job := mock.Job()
args := structs.JobRegisterRequest{
Job: job,
-WriteRequest: structs.WriteRequest{Region: "region1"},
+WriteRequest: structs.WriteRequest{Region: "global"},
}
var resp structs.JobRegisterResponse
if err := s.Agent.RPC("Job.Register", &args, &resp); err != nil {
@ -241,7 +241,7 @@ func TestHTTP_JobDelete(t *testing.T) {
// Check the job is gone
getReq := structs.JobSpecificRequest{
JobID: job.ID,
-QueryOptions: structs.QueryOptions{Region: "region1"},
+QueryOptions: structs.QueryOptions{Region: "global"},
}
var getResp structs.SingleJobResponse
if err := s.Agent.RPC("Job.GetJob", &getReq, &getResp); err != nil {
@ -259,7 +259,7 @@ func TestHTTP_JobForceEvaluate(t *testing.T) {
job := mock.Job()
args := structs.JobRegisterRequest{
Job: job,
-WriteRequest: structs.WriteRequest{Region: "region1"},
+WriteRequest: structs.WriteRequest{Region: "global"},
}
var resp structs.JobRegisterResponse
if err := s.Agent.RPC("Job.Register", &args, &resp); err != nil {
@ -298,7 +298,7 @@ func TestHTTP_JobEvaluations(t *testing.T) {
job := mock.Job()
args := structs.JobRegisterRequest{
Job: job,
-WriteRequest: structs.WriteRequest{Region: "region1"},
+WriteRequest: structs.WriteRequest{Region: "global"},
}
var resp structs.JobRegisterResponse
if err := s.Agent.RPC("Job.Register", &args, &resp); err != nil {
@ -343,7 +343,7 @@ func TestHTTP_JobAllocations(t *testing.T) {
job := mock.Job()
args := structs.JobRegisterRequest{
Job: job,
-WriteRequest: structs.WriteRequest{Region: "region1"},
+WriteRequest: structs.WriteRequest{Region: "global"},
}
var resp structs.JobRegisterResponse
if err := s.Agent.RPC("Job.Register", &args, &resp); err != nil {

View file

@ -16,7 +16,7 @@ func TestHTTP_NodesList(t *testing.T) {
node := mock.Node()
args := structs.NodeRegisterRequest{
Node: node,
-WriteRequest: structs.WriteRequest{Region: "region1"},
+WriteRequest: structs.WriteRequest{Region: "global"},
}
var resp structs.NodeUpdateResponse
if err := s.Agent.RPC("Node.Register", &args, &resp); err != nil {
@ -62,7 +62,7 @@ func TestHTTP_NodeForceEval(t *testing.T) {
node := mock.Node()
args := structs.NodeRegisterRequest{
Node: node,
-WriteRequest: structs.WriteRequest{Region: "region1"},
+WriteRequest: structs.WriteRequest{Region: "global"},
}
var resp structs.NodeUpdateResponse
if err := s.Agent.RPC("Node.Register", &args, &resp); err != nil {
@ -110,7 +110,7 @@ func TestHTTP_NodeAllocations(t *testing.T) {
node := mock.Node()
args := structs.NodeRegisterRequest{
Node: node,
-WriteRequest: structs.WriteRequest{Region: "region1"},
+WriteRequest: structs.WriteRequest{Region: "global"},
}
var resp structs.NodeUpdateResponse
if err := s.Agent.RPC("Node.Register", &args, &resp); err != nil {
@ -164,7 +164,7 @@ func TestHTTP_NodeDrain(t *testing.T) {
node := mock.Node()
args := structs.NodeRegisterRequest{
Node: node,
-WriteRequest: structs.WriteRequest{Region: "region1"},
+WriteRequest: structs.WriteRequest{Region: "global"},
}
var resp structs.NodeUpdateResponse
if err := s.Agent.RPC("Node.Register", &args, &resp); err != nil {
@ -212,7 +212,7 @@ func TestHTTP_NodeQuery(t *testing.T) {
node := mock.Node()
args := structs.NodeRegisterRequest{
Node: node,
-WriteRequest: structs.WriteRequest{Region: "region1"},
+WriteRequest: structs.WriteRequest{Region: "global"},
}
var resp structs.NodeUpdateResponse
if err := s.Agent.RPC("Node.Register", &args, &resp); err != nil {

View file

@ -26,7 +26,7 @@ func TestAllocEndpoint_List(t *testing.T) {
// Lookup the jobs
get := &structs.AllocListRequest{
-QueryOptions: structs.QueryOptions{Region: "region1"},
+QueryOptions: structs.QueryOptions{Region: "global"},
}
var resp structs.AllocListResponse
if err := msgpackrpc.CallWithCodec(codec, "Alloc.List", get, &resp); err != nil {
@ -61,7 +61,7 @@ func TestAllocEndpoint_GetAlloc(t *testing.T) {
// Lookup the jobs
get := &structs.AllocSpecificRequest{
AllocID: alloc.ID,
-QueryOptions: structs.QueryOptions{Region: "region1"},
+QueryOptions: structs.QueryOptions{Region: "global"},
}
var resp structs.SingleAllocResponse
if err := msgpackrpc.CallWithCodec(codec, "Alloc.GetAlloc", get, &resp); err != nil {

View file

@ -16,7 +16,7 @@ import (
)
const (
-DefaultRegion = "region1"
+DefaultRegion = "global"
DefaultDC = "dc1"
DefaultSerfPort = 4648
)

View file

@ -24,7 +24,7 @@ func TestEvalEndpoint_GetEval(t *testing.T) {
// Lookup the eval
get := &structs.EvalSpecificRequest{
EvalID: eval1.ID,
-QueryOptions: structs.QueryOptions{Region: "region1"},
+QueryOptions: structs.QueryOptions{Region: "global"},
}
var resp structs.SingleEvalResponse
if err := msgpackrpc.CallWithCodec(codec, "Eval.GetEval", get, &resp); err != nil {
@ -71,7 +71,7 @@ func TestEvalEndpoint_Dequeue(t *testing.T) {
// Dequeue the eval
get := &structs.EvalDequeueRequest{
Schedulers: defaultSched,
-WriteRequest: structs.WriteRequest{Region: "region1"},
+WriteRequest: structs.WriteRequest{Region: "global"},
}
var resp structs.EvalDequeueResponse
if err := msgpackrpc.CallWithCodec(codec, "Eval.Dequeue", get, &resp); err != nil {
@ -118,7 +118,7 @@ func TestEvalEndpoint_Ack(t *testing.T) {
get := &structs.EvalAckRequest{ get := &structs.EvalAckRequest{
EvalID: out.ID, EvalID: out.ID,
Token: token, Token: token,
WriteRequest: structs.WriteRequest{Region: "region1"}, WriteRequest: structs.WriteRequest{Region: "global"},
} }
var resp structs.GenericResponse var resp structs.GenericResponse
if err := msgpackrpc.CallWithCodec(codec, "Eval.Ack", get, &resp); err != nil { if err := msgpackrpc.CallWithCodec(codec, "Eval.Ack", get, &resp); err != nil {
@ -154,7 +154,7 @@ func TestEvalEndpoint_Nack(t *testing.T) {
get := &structs.EvalAckRequest{ get := &structs.EvalAckRequest{
EvalID: out.ID, EvalID: out.ID,
Token: token, Token: token,
WriteRequest: structs.WriteRequest{Region: "region1"}, WriteRequest: structs.WriteRequest{Region: "global"},
} }
var resp structs.GenericResponse var resp structs.GenericResponse
if err := msgpackrpc.CallWithCodec(codec, "Eval.Nack", get, &resp); err != nil { if err := msgpackrpc.CallWithCodec(codec, "Eval.Nack", get, &resp); err != nil {
@ -202,7 +202,7 @@ func TestEvalEndpoint_Update(t *testing.T) {
get := &structs.EvalUpdateRequest{ get := &structs.EvalUpdateRequest{
Evals: []*structs.Evaluation{eval2}, Evals: []*structs.Evaluation{eval2},
EvalToken: token, EvalToken: token,
WriteRequest: structs.WriteRequest{Region: "region1"}, WriteRequest: structs.WriteRequest{Region: "global"},
} }
var resp structs.GenericResponse var resp structs.GenericResponse
if err := msgpackrpc.CallWithCodec(codec, "Eval.Update", get, &resp); err != nil { if err := msgpackrpc.CallWithCodec(codec, "Eval.Update", get, &resp); err != nil {
@ -247,7 +247,7 @@ func TestEvalEndpoint_Create(t *testing.T) {
get := &structs.EvalUpdateRequest{ get := &structs.EvalUpdateRequest{
Evals: []*structs.Evaluation{eval1}, Evals: []*structs.Evaluation{eval1},
EvalToken: token, EvalToken: token,
WriteRequest: structs.WriteRequest{Region: "region1"}, WriteRequest: structs.WriteRequest{Region: "global"},
} }
var resp structs.GenericResponse var resp structs.GenericResponse
if err := msgpackrpc.CallWithCodec(codec, "Eval.Create", get, &resp); err != nil { if err := msgpackrpc.CallWithCodec(codec, "Eval.Create", get, &resp); err != nil {
@ -280,7 +280,7 @@ func TestEvalEndpoint_Reap(t *testing.T) {
// Reap the eval // Reap the eval
get := &structs.EvalDeleteRequest{ get := &structs.EvalDeleteRequest{
Evals: []string{eval1.ID}, Evals: []string{eval1.ID},
WriteRequest: structs.WriteRequest{Region: "region1"}, WriteRequest: structs.WriteRequest{Region: "global"},
} }
var resp structs.GenericResponse var resp structs.GenericResponse
if err := msgpackrpc.CallWithCodec(codec, "Eval.Reap", get, &resp); err != nil { if err := msgpackrpc.CallWithCodec(codec, "Eval.Reap", get, &resp); err != nil {
@ -313,7 +313,7 @@ func TestEvalEndpoint_List(t *testing.T) {
// Lookup the eval // Lookup the eval
get := &structs.EvalListRequest{ get := &structs.EvalListRequest{
QueryOptions: structs.QueryOptions{Region: "region1"}, QueryOptions: structs.QueryOptions{Region: "global"},
} }
var resp structs.EvalListResponse var resp structs.EvalListResponse
if err := msgpackrpc.CallWithCodec(codec, "Eval.List", get, &resp); err != nil { if err := msgpackrpc.CallWithCodec(codec, "Eval.List", get, &resp); err != nil {
@ -348,7 +348,7 @@ func TestEvalEndpoint_Allocations(t *testing.T) {
// Lookup the eval // Lookup the eval
get := &structs.EvalSpecificRequest{ get := &structs.EvalSpecificRequest{
EvalID: alloc1.EvalID, EvalID: alloc1.EvalID,
QueryOptions: structs.QueryOptions{Region: "region1"}, QueryOptions: structs.QueryOptions{Region: "global"},
} }
var resp structs.EvalAllocationsResponse var resp structs.EvalAllocationsResponse
if err := msgpackrpc.CallWithCodec(codec, "Eval.Allocations", get, &resp); err != nil { if err := msgpackrpc.CallWithCodec(codec, "Eval.Allocations", get, &resp); err != nil {


@ -225,7 +225,7 @@ func TestServer_HeartbeatTTL_Failover(t *testing.T) {
node := mock.Node() node := mock.Node()
req := &structs.NodeRegisterRequest{ req := &structs.NodeRegisterRequest{
Node: node, Node: node,
WriteRequest: structs.WriteRequest{Region: "region1"}, WriteRequest: structs.WriteRequest{Region: "global"},
} }
// Fetch the response // Fetch the response


@ -20,7 +20,7 @@ func TestJobEndpoint_Register(t *testing.T) {
job := mock.Job() job := mock.Job()
req := &structs.JobRegisterRequest{ req := &structs.JobRegisterRequest{
Job: job, Job: job,
WriteRequest: structs.WriteRequest{Region: "region1"}, WriteRequest: structs.WriteRequest{Region: "global"},
} }
// Fetch the response // Fetch the response
@ -87,7 +87,7 @@ func TestJobEndpoint_Register_Existing(t *testing.T) {
job := mock.Job() job := mock.Job()
req := &structs.JobRegisterRequest{ req := &structs.JobRegisterRequest{
Job: job, Job: job,
WriteRequest: structs.WriteRequest{Region: "region1"}, WriteRequest: structs.WriteRequest{Region: "global"},
} }
// Fetch the response // Fetch the response
@ -171,7 +171,7 @@ func TestJobEndpoint_Evaluate(t *testing.T) {
job := mock.Job() job := mock.Job()
req := &structs.JobRegisterRequest{ req := &structs.JobRegisterRequest{
Job: job, Job: job,
WriteRequest: structs.WriteRequest{Region: "region1"}, WriteRequest: structs.WriteRequest{Region: "global"},
} }
// Fetch the response // Fetch the response
@ -186,7 +186,7 @@ func TestJobEndpoint_Evaluate(t *testing.T) {
// Force a re-evaluation // Force a re-evaluation
reEval := &structs.JobEvaluateRequest{ reEval := &structs.JobEvaluateRequest{
JobID: job.ID, JobID: job.ID,
WriteRequest: structs.WriteRequest{Region: "region1"}, WriteRequest: structs.WriteRequest{Region: "global"},
} }
// Fetch the response // Fetch the response
@ -240,7 +240,7 @@ func TestJobEndpoint_Deregister(t *testing.T) {
job := mock.Job() job := mock.Job()
reg := &structs.JobRegisterRequest{ reg := &structs.JobRegisterRequest{
Job: job, Job: job,
WriteRequest: structs.WriteRequest{Region: "region1"}, WriteRequest: structs.WriteRequest{Region: "global"},
} }
// Fetch the response // Fetch the response
@ -252,7 +252,7 @@ func TestJobEndpoint_Deregister(t *testing.T) {
// Deregister // Deregister
dereg := &structs.JobDeregisterRequest{ dereg := &structs.JobDeregisterRequest{
JobID: job.ID, JobID: job.ID,
WriteRequest: structs.WriteRequest{Region: "region1"}, WriteRequest: structs.WriteRequest{Region: "global"},
} }
var resp2 structs.JobDeregisterResponse var resp2 structs.JobDeregisterResponse
if err := msgpackrpc.CallWithCodec(codec, "Job.Deregister", dereg, &resp2); err != nil { if err := msgpackrpc.CallWithCodec(codec, "Job.Deregister", dereg, &resp2); err != nil {
@ -314,7 +314,7 @@ func TestJobEndpoint_GetJob(t *testing.T) {
job := mock.Job() job := mock.Job()
reg := &structs.JobRegisterRequest{ reg := &structs.JobRegisterRequest{
Job: job, Job: job,
WriteRequest: structs.WriteRequest{Region: "region1"}, WriteRequest: structs.WriteRequest{Region: "global"},
} }
// Fetch the response // Fetch the response
@ -328,7 +328,7 @@ func TestJobEndpoint_GetJob(t *testing.T) {
// Lookup the job // Lookup the job
get := &structs.JobSpecificRequest{ get := &structs.JobSpecificRequest{
JobID: job.ID, JobID: job.ID,
QueryOptions: structs.QueryOptions{Region: "region1"}, QueryOptions: structs.QueryOptions{Region: "global"},
} }
var resp2 structs.SingleJobResponse var resp2 structs.SingleJobResponse
if err := msgpackrpc.CallWithCodec(codec, "Job.GetJob", get, &resp2); err != nil { if err := msgpackrpc.CallWithCodec(codec, "Job.GetJob", get, &resp2); err != nil {
@ -371,7 +371,7 @@ func TestJobEndpoint_ListJobs(t *testing.T) {
// Lookup the jobs // Lookup the jobs
get := &structs.JobListRequest{ get := &structs.JobListRequest{
QueryOptions: structs.QueryOptions{Region: "region1"}, QueryOptions: structs.QueryOptions{Region: "global"},
} }
var resp2 structs.JobListResponse var resp2 structs.JobListResponse
if err := msgpackrpc.CallWithCodec(codec, "Job.List", get, &resp2); err != nil { if err := msgpackrpc.CallWithCodec(codec, "Job.List", get, &resp2); err != nil {
@ -409,7 +409,7 @@ func TestJobEndpoint_Allocations(t *testing.T) {
// Lookup the jobs // Lookup the jobs
get := &structs.JobSpecificRequest{ get := &structs.JobSpecificRequest{
JobID: alloc1.JobID, JobID: alloc1.JobID,
QueryOptions: structs.QueryOptions{Region: "region1"}, QueryOptions: structs.QueryOptions{Region: "global"},
} }
var resp2 structs.JobAllocationsResponse var resp2 structs.JobAllocationsResponse
if err := msgpackrpc.CallWithCodec(codec, "Job.Allocations", get, &resp2); err != nil { if err := msgpackrpc.CallWithCodec(codec, "Job.Allocations", get, &resp2); err != nil {
@ -444,7 +444,7 @@ func TestJobEndpoint_Evaluations(t *testing.T) {
// Lookup the jobs // Lookup the jobs
get := &structs.JobSpecificRequest{ get := &structs.JobSpecificRequest{
JobID: eval1.JobID, JobID: eval1.JobID,
QueryOptions: structs.QueryOptions{Region: "region1"}, QueryOptions: structs.QueryOptions{Region: "global"},
} }
var resp2 structs.JobEvaluationsResponse var resp2 structs.JobEvaluationsResponse
if err := msgpackrpc.CallWithCodec(codec, "Job.Evaluations", get, &resp2); err != nil { if err := msgpackrpc.CallWithCodec(codec, "Job.Evaluations", get, &resp2); err != nil {


@ -20,9 +20,8 @@ func Node() *structs.Node {
IOPS: 150, IOPS: 150,
Networks: []*structs.NetworkResource{ Networks: []*structs.NetworkResource{
&structs.NetworkResource{ &structs.NetworkResource{
Public: true, Device: "eth0",
CIDR: "192.168.0.100/32", CIDR: "192.168.0.100/32",
ReservedPorts: []int{22},
MBits: 1000, MBits: 1000,
}, },
}, },
@ -31,6 +30,14 @@ func Node() *structs.Node {
CPU: 0.1, CPU: 0.1,
MemoryMB: 256, MemoryMB: 256,
DiskMB: 4 * 1024, DiskMB: 4 * 1024,
Networks: []*structs.NetworkResource{
&structs.NetworkResource{
Device: "eth0",
IP: "192.168.0.100",
ReservedPorts: []int{22},
MBits: 1,
},
},
}, },
Links: map[string]string{ Links: map[string]string{
"consul": "foobar.dc1", "consul": "foobar.dc1",
@ -75,6 +82,12 @@ func Job() *structs.Job {
Resources: &structs.Resources{ Resources: &structs.Resources{
CPU: 0.5, CPU: 0.5,
MemoryMB: 256, MemoryMB: 256,
Networks: []*structs.NetworkResource{
&structs.NetworkResource{
MBits: 50,
DynamicPorts: 1,
},
},
}, },
}, },
}, },
@ -113,16 +126,30 @@ func Alloc() *structs.Allocation {
NodeID: "foo", NodeID: "foo",
TaskGroup: "web", TaskGroup: "web",
Resources: &structs.Resources{ Resources: &structs.Resources{
CPU: 1.0, CPU: 0.5,
MemoryMB: 1024, MemoryMB: 256,
DiskMB: 1024,
IOPS: 10,
Networks: []*structs.NetworkResource{ Networks: []*structs.NetworkResource{
&structs.NetworkResource{ &structs.NetworkResource{
Public: true, Device: "eth0",
CIDR: "192.168.0.100/32", IP: "192.168.0.100",
ReservedPorts: []int{12345}, ReservedPorts: []int{12345},
MBits: 100, MBits: 100,
DynamicPorts: 1,
},
},
},
TaskResources: map[string]*structs.Resources{
"web": &structs.Resources{
CPU: 0.5,
MemoryMB: 256,
Networks: []*structs.NetworkResource{
&structs.NetworkResource{
Device: "eth0",
IP: "192.168.0.100",
ReservedPorts: []int{5000},
MBits: 50,
DynamicPorts: 1,
},
}, },
}, },
}, },
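The mock allocation above now carries both an allocation-level Resources total and a per-task TaskResources map. A minimal sketch, assuming the nomad/structs package is imported as structs and using an illustrative helper name, of summing the per-task entries back into a total with the new Resources.Add:

func sumTaskResources(alloc *structs.Allocation) (*structs.Resources, error) {
	// Per-task resources are expected to add up to the allocation-level
	// Resources; networks on the same device are merged by Add.
	total := new(structs.Resources)
	for _, r := range alloc.TaskResources {
		if err := total.Add(r); err != nil {
			return nil, err
		}
	}
	return total, nil
}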


@ -21,7 +21,7 @@ func TestClientEndpoint_Register(t *testing.T) {
node := mock.Node() node := mock.Node()
req := &structs.NodeRegisterRequest{ req := &structs.NodeRegisterRequest{
Node: node, Node: node,
WriteRequest: structs.WriteRequest{Region: "region1"}, WriteRequest: structs.WriteRequest{Region: "global"},
} }
// Fetch the response // Fetch the response
@ -57,7 +57,7 @@ func TestClientEndpoint_Deregister(t *testing.T) {
node := mock.Node() node := mock.Node()
reg := &structs.NodeRegisterRequest{ reg := &structs.NodeRegisterRequest{
Node: node, Node: node,
WriteRequest: structs.WriteRequest{Region: "region1"}, WriteRequest: structs.WriteRequest{Region: "global"},
} }
// Fetch the response // Fetch the response
@ -69,7 +69,7 @@ func TestClientEndpoint_Deregister(t *testing.T) {
// Deregister // Deregister
dereg := &structs.NodeDeregisterRequest{ dereg := &structs.NodeDeregisterRequest{
NodeID: node.ID, NodeID: node.ID,
WriteRequest: structs.WriteRequest{Region: "region1"}, WriteRequest: structs.WriteRequest{Region: "global"},
} }
var resp2 structs.GenericResponse var resp2 structs.GenericResponse
if err := msgpackrpc.CallWithCodec(codec, "Node.Deregister", dereg, &resp2); err != nil { if err := msgpackrpc.CallWithCodec(codec, "Node.Deregister", dereg, &resp2); err != nil {
@ -100,7 +100,7 @@ func TestClientEndpoint_UpdateStatus(t *testing.T) {
node := mock.Node() node := mock.Node()
reg := &structs.NodeRegisterRequest{ reg := &structs.NodeRegisterRequest{
Node: node, Node: node,
WriteRequest: structs.WriteRequest{Region: "region1"}, WriteRequest: structs.WriteRequest{Region: "global"},
} }
// Fetch the response // Fetch the response
@ -119,7 +119,7 @@ func TestClientEndpoint_UpdateStatus(t *testing.T) {
dereg := &structs.NodeUpdateStatusRequest{ dereg := &structs.NodeUpdateStatusRequest{
NodeID: node.ID, NodeID: node.ID,
Status: structs.NodeStatusInit, Status: structs.NodeStatusInit,
WriteRequest: structs.WriteRequest{Region: "region1"}, WriteRequest: structs.WriteRequest{Region: "global"},
} }
var resp2 structs.NodeUpdateResponse var resp2 structs.NodeUpdateResponse
if err := msgpackrpc.CallWithCodec(codec, "Node.UpdateStatus", dereg, &resp2); err != nil { if err := msgpackrpc.CallWithCodec(codec, "Node.UpdateStatus", dereg, &resp2); err != nil {
@ -159,7 +159,7 @@ func TestClientEndpoint_UpdateStatus_HeartbeatOnly(t *testing.T) {
node := mock.Node() node := mock.Node()
reg := &structs.NodeRegisterRequest{ reg := &structs.NodeRegisterRequest{
Node: node, Node: node,
WriteRequest: structs.WriteRequest{Region: "region1"}, WriteRequest: structs.WriteRequest{Region: "global"},
} }
// Fetch the response // Fetch the response
@ -178,7 +178,7 @@ func TestClientEndpoint_UpdateStatus_HeartbeatOnly(t *testing.T) {
dereg := &structs.NodeUpdateStatusRequest{ dereg := &structs.NodeUpdateStatusRequest{
NodeID: node.ID, NodeID: node.ID,
Status: node.Status, Status: node.Status,
WriteRequest: structs.WriteRequest{Region: "region1"}, WriteRequest: structs.WriteRequest{Region: "global"},
} }
var resp2 structs.NodeUpdateResponse var resp2 structs.NodeUpdateResponse
if err := msgpackrpc.CallWithCodec(codec, "Node.UpdateStatus", dereg, &resp2); err != nil { if err := msgpackrpc.CallWithCodec(codec, "Node.UpdateStatus", dereg, &resp2); err != nil {
@ -205,7 +205,7 @@ func TestClientEndpoint_UpdateDrain(t *testing.T) {
node := mock.Node() node := mock.Node()
reg := &structs.NodeRegisterRequest{ reg := &structs.NodeRegisterRequest{
Node: node, Node: node,
WriteRequest: structs.WriteRequest{Region: "region1"}, WriteRequest: structs.WriteRequest{Region: "global"},
} }
// Fetch the response // Fetch the response
@ -218,7 +218,7 @@ func TestClientEndpoint_UpdateDrain(t *testing.T) {
dereg := &structs.NodeUpdateDrainRequest{ dereg := &structs.NodeUpdateDrainRequest{
NodeID: node.ID, NodeID: node.ID,
Drain: true, Drain: true,
WriteRequest: structs.WriteRequest{Region: "region1"}, WriteRequest: structs.WriteRequest{Region: "global"},
} }
var resp2 structs.NodeDrainUpdateResponse var resp2 structs.NodeDrainUpdateResponse
if err := msgpackrpc.CallWithCodec(codec, "Node.UpdateDrain", dereg, &resp2); err != nil { if err := msgpackrpc.CallWithCodec(codec, "Node.UpdateDrain", dereg, &resp2); err != nil {
@ -249,7 +249,7 @@ func TestClientEndpoint_GetNode(t *testing.T) {
node := mock.Node() node := mock.Node()
reg := &structs.NodeRegisterRequest{ reg := &structs.NodeRegisterRequest{
Node: node, Node: node,
WriteRequest: structs.WriteRequest{Region: "region1"}, WriteRequest: structs.WriteRequest{Region: "global"},
} }
// Fetch the response // Fetch the response
@ -263,7 +263,7 @@ func TestClientEndpoint_GetNode(t *testing.T) {
// Lookup the node // Lookup the node
get := &structs.NodeSpecificRequest{ get := &structs.NodeSpecificRequest{
NodeID: node.ID, NodeID: node.ID,
QueryOptions: structs.QueryOptions{Region: "region1"}, QueryOptions: structs.QueryOptions{Region: "global"},
} }
var resp2 structs.SingleNodeResponse var resp2 structs.SingleNodeResponse
if err := msgpackrpc.CallWithCodec(codec, "Node.GetNode", get, &resp2); err != nil { if err := msgpackrpc.CallWithCodec(codec, "Node.GetNode", get, &resp2); err != nil {
@ -300,7 +300,7 @@ func TestClientEndpoint_GetAllocs(t *testing.T) {
node := mock.Node() node := mock.Node()
reg := &structs.NodeRegisterRequest{ reg := &structs.NodeRegisterRequest{
Node: node, Node: node,
WriteRequest: structs.WriteRequest{Region: "region1"}, WriteRequest: structs.WriteRequest{Region: "global"},
} }
// Fetch the response // Fetch the response
@ -323,7 +323,7 @@ func TestClientEndpoint_GetAllocs(t *testing.T) {
// Lookup the allocs // Lookup the allocs
get := &structs.NodeSpecificRequest{ get := &structs.NodeSpecificRequest{
NodeID: node.ID, NodeID: node.ID,
QueryOptions: structs.QueryOptions{Region: "region1"}, QueryOptions: structs.QueryOptions{Region: "global"},
} }
var resp2 structs.NodeAllocsResponse var resp2 structs.NodeAllocsResponse
if err := msgpackrpc.CallWithCodec(codec, "Node.GetAllocs", get, &resp2); err != nil { if err := msgpackrpc.CallWithCodec(codec, "Node.GetAllocs", get, &resp2); err != nil {
@ -360,7 +360,7 @@ func TestClientEndpoint_GetAllocs_Blocking(t *testing.T) {
node := mock.Node() node := mock.Node()
reg := &structs.NodeRegisterRequest{ reg := &structs.NodeRegisterRequest{
Node: node, Node: node,
WriteRequest: structs.WriteRequest{Region: "region1"}, WriteRequest: structs.WriteRequest{Region: "global"},
} }
// Fetch the response // Fetch the response
@ -388,7 +388,7 @@ func TestClientEndpoint_GetAllocs_Blocking(t *testing.T) {
get := &structs.NodeSpecificRequest{ get := &structs.NodeSpecificRequest{
NodeID: node.ID, NodeID: node.ID,
QueryOptions: structs.QueryOptions{ QueryOptions: structs.QueryOptions{
Region: "region1", Region: "global",
MinQueryIndex: 50, MinQueryIndex: 50,
MaxQueryTime: time.Second, MaxQueryTime: time.Second,
}, },
@ -422,7 +422,7 @@ func TestClientEndpoint_UpdateAlloc(t *testing.T) {
node := mock.Node() node := mock.Node()
reg := &structs.NodeRegisterRequest{ reg := &structs.NodeRegisterRequest{
Node: node, Node: node,
WriteRequest: structs.WriteRequest{Region: "region1"}, WriteRequest: structs.WriteRequest{Region: "global"},
} }
// Fetch the response // Fetch the response
@ -448,7 +448,7 @@ func TestClientEndpoint_UpdateAlloc(t *testing.T) {
// Update the alloc // Update the alloc
update := &structs.AllocUpdateRequest{ update := &structs.AllocUpdateRequest{
Alloc: []*structs.Allocation{clientAlloc}, Alloc: []*structs.Allocation{clientAlloc},
WriteRequest: structs.WriteRequest{Region: "region1"}, WriteRequest: structs.WriteRequest{Region: "global"},
} }
var resp2 structs.NodeAllocsResponse var resp2 structs.NodeAllocsResponse
if err := msgpackrpc.CallWithCodec(codec, "Node.UpdateAlloc", update, &resp2); err != nil { if err := msgpackrpc.CallWithCodec(codec, "Node.UpdateAlloc", update, &resp2); err != nil {
@ -551,7 +551,7 @@ func TestClientEndpoint_Evaluate(t *testing.T) {
// Re-evaluate // Re-evaluate
req := &structs.NodeEvaluateRequest{ req := &structs.NodeEvaluateRequest{
NodeID: alloc.NodeID, NodeID: alloc.NodeID,
WriteRequest: structs.WriteRequest{Region: "region1"}, WriteRequest: structs.WriteRequest{Region: "global"},
} }
// Fetch the response // Fetch the response
@ -614,7 +614,7 @@ func TestClientEndpoint_ListNodes(t *testing.T) {
node := mock.Node() node := mock.Node()
reg := &structs.NodeRegisterRequest{ reg := &structs.NodeRegisterRequest{
Node: node, Node: node,
WriteRequest: structs.WriteRequest{Region: "region1"}, WriteRequest: structs.WriteRequest{Region: "global"},
} }
// Fetch the response // Fetch the response
@ -627,7 +627,7 @@ func TestClientEndpoint_ListNodes(t *testing.T) {
// Lookup the node // Lookup the node
get := &structs.NodeListRequest{ get := &structs.NodeListRequest{
QueryOptions: structs.QueryOptions{Region: "region1"}, QueryOptions: structs.QueryOptions{Region: "global"},
} }
var resp2 structs.NodeListResponse var resp2 structs.NodeListResponse
if err := msgpackrpc.CallWithCodec(codec, "Node.List", get, &resp2); err != nil { if err := msgpackrpc.CallWithCodec(codec, "Node.List", get, &resp2); err != nil {


@ -194,6 +194,6 @@ func evaluateNodePlan(snap *state.StateSnapshot, plan *structs.Plan, nodeID stri
proposed = append(proposed, plan.NodeAllocation[nodeID]...) proposed = append(proposed, plan.NodeAllocation[nodeID]...)
// Check if these allocations fit // Check if these allocations fit
fit, _, err := structs.AllocsFit(node, proposed) fit, _, _, err := structs.AllocsFit(node, proposed, nil)
return fit, err return fit, err
} }


@ -13,7 +13,7 @@ func testRegisterNode(t *testing.T, s *Server, n *structs.Node) {
// Create the register request // Create the register request
req := &structs.NodeRegisterRequest{ req := &structs.NodeRegisterRequest{
Node: n, Node: n,
WriteRequest: structs.WriteRequest{Region: "region1"}, WriteRequest: structs.WriteRequest{Region: "global"},
} }
// Fetch the response // Fetch the response


@ -40,7 +40,7 @@ func TestPlanEndpoint_Submit(t *testing.T) {
plan.EvalToken = token plan.EvalToken = token
req := &structs.PlanRequest{ req := &structs.PlanRequest{
Plan: plan, Plan: plan,
WriteRequest: structs.WriteRequest{Region: "region1"}, WriteRequest: structs.WriteRequest{Region: "global"},
} }
var resp structs.PlanResponse var resp structs.PlanResponse
if err := msgpackrpc.CallWithCodec(codec, "Plan.Submit", req, &resp); err != nil { if err := msgpackrpc.CallWithCodec(codec, "Plan.Submit", req, &resp); err != nil {


@ -44,7 +44,7 @@ func TestRPC_forwardRegion(t *testing.T) {
t.Fatalf("err: %v", err) t.Fatalf("err: %v", err)
} }
err = s2.forwardRegion("region1", "Status.Ping", struct{}{}, &out) err = s2.forwardRegion("global", "Status.Ping", struct{}{}, &out)
if err != nil { if err != nil {
t.Fatalf("err: %v", err) t.Fatalf("err: %v", err)
} }


@ -29,7 +29,7 @@ func TestStatusVersion(t *testing.T) {
arg := &structs.GenericRequest{ arg := &structs.GenericRequest{
QueryOptions: structs.QueryOptions{ QueryOptions: structs.QueryOptions{
Region: "region1", Region: "global",
AllowStale: true, AllowStale: true,
}, },
} }
@ -72,7 +72,7 @@ func TestStatusLeader(t *testing.T) {
arg := &structs.GenericRequest{ arg := &structs.GenericRequest{
QueryOptions: structs.QueryOptions{ QueryOptions: structs.QueryOptions{
Region: "region1", Region: "global",
AllowStale: true, AllowStale: true,
}, },
} }
@ -92,7 +92,7 @@ func TestStatusPeers(t *testing.T) {
arg := &structs.GenericRequest{ arg := &structs.GenericRequest{
QueryOptions: structs.QueryOptions{ QueryOptions: structs.QueryOptions{
Region: "region1", Region: "global",
AllowStale: true, AllowStale: true,
}, },
} }


@ -41,60 +41,49 @@ func FilterTerminalAllocs(allocs []*Allocation) []*Allocation {
return allocs[:n] return allocs[:n]
} }
// PortsOvercommited checks if any ports are over-committed. // AllocsFit checks if a given set of allocations will fit on a node.
// This does not handle CIDR subsets, and computes for the entire // The netIdx can optionally be provided if its already been computed.
// CIDR block currently. // If the netIdx is provided, it is assumed that the client has already
func PortsOvercommited(r *Resources) bool { // ensured there are no collisions.
for _, net := range r.Networks { func AllocsFit(node *Node, allocs []*Allocation, netIdx *NetworkIndex) (bool, string, *Resources, error) {
ports := make(map[int]struct{})
for _, port := range net.ReservedPorts {
if _, ok := ports[port]; ok {
return true
}
ports[port] = struct{}{}
}
}
return false
}
// AllocsFit checks if a given set of allocations will fit on a node
func AllocsFit(node *Node, allocs []*Allocation) (bool, *Resources, error) {
// Compute the utilization from zero // Compute the utilization from zero
used := new(Resources) used := new(Resources)
for _, net := range node.Resources.Networks {
used.Networks = append(used.Networks, &NetworkResource{
Public: net.Public,
CIDR: net.CIDR,
})
}
// Add the reserved resources of the node // Add the reserved resources of the node
if node.Reserved != nil { if node.Reserved != nil {
if err := used.Add(node.Reserved); err != nil { if err := used.Add(node.Reserved); err != nil {
return false, nil, err return false, "", nil, err
} }
} }
// For each alloc, add the resources // For each alloc, add the resources
for _, alloc := range allocs { for _, alloc := range allocs {
if err := used.Add(alloc.Resources); err != nil { if err := used.Add(alloc.Resources); err != nil {
return false, nil, err return false, "", nil, err
} }
} }
// Check that the node resources are a super set of those // Check that the node resources are a super set of those
// that are being allocated // that are being allocated
if !node.Resources.Superset(used) { if superset, dimension := node.Resources.Superset(used); !superset {
return false, used, nil return false, dimension, used, nil
} }
// Ensure ports are not over commited // Create the network index if missing
if PortsOvercommited(used) { if netIdx == nil {
return false, used, nil netIdx = NewNetworkIndex()
if netIdx.SetNode(node) || netIdx.AddAllocs(allocs) {
return false, "reserved port collision", used, nil
}
}
// Check if the network is overcommitted
if netIdx.Overcommitted() {
return false, "bandwidth exceeded", used, nil
} }
// Allocations fit! // Allocations fit!
return true, used, nil return true, "", used, nil
} }
// ScoreFit is used to score the fit based on the Google work published here: // ScoreFit is used to score the fit based on the Google work published here:
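A minimal sketch of calling the reworked AllocsFit, assuming the nomad/structs package is imported as structs; the helper name is illustrative. Passing nil for the index makes AllocsFit build its own NetworkIndex and check reserved-port collisions and bandwidth overcommit as well as CPU, memory, disk, and IOPS:

func nodeCanHold(node *structs.Node, proposed []*structs.Allocation) (bool, string, error) {
	fit, dimension, _, err := structs.AllocsFit(node, proposed, nil)
	if err != nil {
		return false, "", err
	}
	// dimension names what ran out, e.g. "memory exhausted",
	// "reserved port collision", or "bandwidth exceeded".
	return fit, dimension, nil
}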


@ -39,25 +39,48 @@ func TestFilterTerminalALlocs(t *testing.T) {
} }
} }
func TestPortsOvercommitted(t *testing.T) { func TestAllocsFit_PortsOvercommitted(t *testing.T) {
r := &Resources{ n := &Node{
Resources: &Resources{
Networks: []*NetworkResource{ Networks: []*NetworkResource{
&NetworkResource{ &NetworkResource{
ReservedPorts: []int{22, 80}, CIDR: "10.0.0.0/8",
}, MBits: 100,
&NetworkResource{ },
ReservedPorts: []int{22, 80},
}, },
}, },
}
if PortsOvercommited(r) {
t.Fatalf("bad")
} }
// Overcommit 22 a1 := &Allocation{
r.Networks[1].ReservedPorts[1] = 22 TaskResources: map[string]*Resources{
if !PortsOvercommited(r) { "web": &Resources{
t.Fatalf("bad") Networks: []*NetworkResource{
&NetworkResource{
IP: "10.0.0.1",
MBits: 50,
ReservedPorts: []int{8000},
},
},
},
},
}
// Should fit one allocation
fit, _, _, err := AllocsFit(n, []*Allocation{a1}, nil)
if err != nil {
t.Fatalf("err: %v", err)
}
if !fit {
t.Fatalf("Bad")
}
// Should not fit second allocation
fit, _, _, err = AllocsFit(n, []*Allocation{a1, a1}, nil)
if err != nil {
t.Fatalf("err: %v", err)
}
if fit {
t.Fatalf("Bad")
} }
} }
@ -82,7 +105,7 @@ func TestAllocsFit(t *testing.T) {
IOPS: 50, IOPS: 50,
Networks: []*NetworkResource{ Networks: []*NetworkResource{
&NetworkResource{ &NetworkResource{
CIDR: "10.0.0.0/8", IP: "10.0.0.1",
MBits: 50, MBits: 50,
ReservedPorts: []int{80}, ReservedPorts: []int{80},
}, },
@ -98,7 +121,7 @@ func TestAllocsFit(t *testing.T) {
IOPS: 50, IOPS: 50,
Networks: []*NetworkResource{ Networks: []*NetworkResource{
&NetworkResource{ &NetworkResource{
CIDR: "10.0.0.0/8", IP: "10.0.0.1",
MBits: 50, MBits: 50,
ReservedPorts: []int{8000}, ReservedPorts: []int{8000},
}, },
@ -107,7 +130,7 @@ func TestAllocsFit(t *testing.T) {
} }
// Should fit one allocation // Should fit one allocation
fit, used, err := AllocsFit(n, []*Allocation{a1}) fit, _, used, err := AllocsFit(n, []*Allocation{a1}, nil)
if err != nil { if err != nil {
t.Fatalf("err: %v", err) t.Fatalf("err: %v", err)
} }
@ -124,7 +147,7 @@ func TestAllocsFit(t *testing.T) {
} }
// Should not fit second allocation // Should not fit second allocation
fit, used, err = AllocsFit(n, []*Allocation{a1, a1}) fit, _, used, err = AllocsFit(n, []*Allocation{a1, a1}, nil)
if err != nil { if err != nil {
t.Fatalf("err: %v", err) t.Fatalf("err: %v", err)
} }

nomad/structs/network.go (new file, 202 lines)

@ -0,0 +1,202 @@
package structs
import (
"fmt"
"math/rand"
"net"
)
const (
// MinDynamicPort is the smallest dynamic port generated
MinDynamicPort = 20000
// MaxDynamicPort is the largest dynamic port generated
MaxDynamicPort = 60000
// maxRandPortAttempts is the maximum number of attempt
// to assign a random port
maxRandPortAttempts = 20
)
// NetworkIndex is used to index the available network resources
// and the used network resources on a machine given allocations
type NetworkIndex struct {
AvailNetworks []*NetworkResource // List of available networks
AvailBandwidth map[string]int // Bandwidth by device
UsedPorts map[string]map[int]struct{} // Ports by IP
UsedBandwidth map[string]int // Bandwidth by device
}
// NewNetworkIndex is used to construct a new network index
func NewNetworkIndex() *NetworkIndex {
return &NetworkIndex{
AvailBandwidth: make(map[string]int),
UsedPorts: make(map[string]map[int]struct{}),
UsedBandwidth: make(map[string]int),
}
}
// Overcommitted checks if the network is overcommitted
func (idx *NetworkIndex) Overcommitted() bool {
for device, used := range idx.UsedBandwidth {
avail := idx.AvailBandwidth[device]
if used > avail {
return true
}
}
return false
}
// SetNode is used to setup the available network resources. Returns
// true if there is a collision
func (idx *NetworkIndex) SetNode(node *Node) (collide bool) {
// Add the available CIDR blocks
for _, n := range node.Resources.Networks {
if n.Device != "" {
idx.AvailNetworks = append(idx.AvailNetworks, n)
idx.AvailBandwidth[n.Device] = n.MBits
}
}
// Add the reserved resources
if r := node.Reserved; r != nil {
for _, n := range r.Networks {
if idx.AddReserved(n) {
collide = true
}
}
}
return
}
// AddAllocs is used to add the used network resources. Returns
// true if there is a collision
func (idx *NetworkIndex) AddAllocs(allocs []*Allocation) (collide bool) {
for _, alloc := range allocs {
for _, task := range alloc.TaskResources {
if len(task.Networks) == 0 {
continue
}
n := task.Networks[0]
if idx.AddReserved(n) {
collide = true
}
}
}
return
}
// AddReserved is used to add a reserved network usage, returns true
// if there is a port collision
func (idx *NetworkIndex) AddReserved(n *NetworkResource) (collide bool) {
// Add the port usage
used := idx.UsedPorts[n.IP]
if used == nil {
used = make(map[int]struct{})
idx.UsedPorts[n.IP] = used
}
for _, port := range n.ReservedPorts {
if _, ok := used[port]; ok {
collide = true
} else {
used[port] = struct{}{}
}
}
// Add the bandwidth
idx.UsedBandwidth[n.Device] += n.MBits
return
}
// yieldIP is used to iteratively invoke the callback with
// an available IP
func (idx *NetworkIndex) yieldIP(cb func(net *NetworkResource, ip net.IP) bool) {
inc := func(ip net.IP) {
for j := len(ip) - 1; j >= 0; j-- {
ip[j]++
if ip[j] > 0 {
break
}
}
}
for _, n := range idx.AvailNetworks {
ip, ipnet, err := net.ParseCIDR(n.CIDR)
if err != nil {
continue
}
for ip := ip.Mask(ipnet.Mask); ipnet.Contains(ip); inc(ip) {
if cb(n, ip) {
return
}
}
}
}
// AssignNetwork is used to assign network resources given an ask.
// If the ask cannot be satisfied, returns nil
func (idx *NetworkIndex) AssignNetwork(ask *NetworkResource) (out *NetworkResource, err error) {
idx.yieldIP(func(n *NetworkResource, ip net.IP) (stop bool) {
// Convert the IP to a string
ipStr := ip.String()
// Check if we would exceed the bandwidth cap
availBandwidth := idx.AvailBandwidth[n.Device]
usedBandwidth := idx.UsedBandwidth[n.Device]
if usedBandwidth+ask.MBits > availBandwidth {
err = fmt.Errorf("bandwidth exceeded")
return
}
// Check if any of the reserved ports are in use
for _, port := range ask.ReservedPorts {
if _, ok := idx.UsedPorts[ipStr][port]; ok {
err = fmt.Errorf("reserved port collision")
return
}
}
// Create the offer
offer := &NetworkResource{
Device: n.Device,
IP: ipStr,
ReservedPorts: ask.ReservedPorts,
}
// Check if we need to generate any ports
for i := 0; i < ask.DynamicPorts; i++ {
attempts := 0
PICK:
attempts++
if attempts > maxRandPortAttempts {
err = fmt.Errorf("dynamic port selection failed")
return
}
randPort := MinDynamicPort + rand.Intn(MaxDynamicPort-MinDynamicPort)
if _, ok := idx.UsedPorts[ipStr][randPort]; ok {
goto PICK
}
if IntContains(offer.ReservedPorts, randPort) {
goto PICK
}
offer.ReservedPorts = append(offer.ReservedPorts, randPort)
}
// Stop, we have an offer!
out = offer
err = nil
return true
})
return
}
// IntContains scans an integer slice for a value
func IntContains(haystack []int, needle int) bool {
for _, item := range haystack {
if item == needle {
return true
}
}
return false
}
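A minimal usage sketch for the NetworkIndex above, assuming it is called from within the nomad/structs package itself (so no import qualifier is needed); the function name and the ask values are illustrative. The flow mirrors what the scheduler does: index the node, index existing allocations, request an offer, then reserve the offer so later asks see it:

func offerForTask(node *Node, existing []*Allocation) (*NetworkResource, error) {
	idx := NewNetworkIndex()
	if idx.SetNode(node) || idx.AddAllocs(existing) {
		return nil, fmt.Errorf("reserved port collision on node %s", node.ID)
	}
	ask := &NetworkResource{MBits: 50, ReservedPorts: []int{8080}, DynamicPorts: 2}
	offer, err := idx.AssignNetwork(ask)
	if offer == nil {
		// err explains the failure, e.g. "bandwidth exceeded"
		return nil, err
	}
	// Record the offer so a second ask against this index cannot collide.
	idx.AddReserved(offer)
	return offer, nil
}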


@ -0,0 +1,349 @@
package structs
import (
"net"
"reflect"
"testing"
)
func TestNetworkIndex_Overcommitted(t *testing.T) {
idx := NewNetworkIndex()
// Consume some network
reserved := &NetworkResource{
Device: "eth0",
IP: "192.168.0.100",
MBits: 505,
ReservedPorts: []int{8000, 9000},
}
collide := idx.AddReserved(reserved)
if collide {
t.Fatalf("bad")
}
if !idx.Overcommitted() {
t.Fatalf("have no resources")
}
// Add resources
n := &Node{
Resources: &Resources{
Networks: []*NetworkResource{
&NetworkResource{
Device: "eth0",
CIDR: "192.168.0.100/32",
MBits: 1000,
},
},
},
}
idx.SetNode(n)
if idx.Overcommitted() {
t.Fatalf("have resources")
}
// Double up our usage
idx.AddReserved(reserved)
if !idx.Overcommitted() {
t.Fatalf("should be overcommitted")
}
}
func TestNetworkIndex_SetNode(t *testing.T) {
idx := NewNetworkIndex()
n := &Node{
Resources: &Resources{
Networks: []*NetworkResource{
&NetworkResource{
Device: "eth0",
CIDR: "192.168.0.100/32",
MBits: 1000,
},
},
},
Reserved: &Resources{
Networks: []*NetworkResource{
&NetworkResource{
Device: "eth0",
IP: "192.168.0.100",
ReservedPorts: []int{22},
MBits: 1,
},
},
},
}
collide := idx.SetNode(n)
if collide {
t.Fatalf("bad")
}
if len(idx.AvailNetworks) != 1 {
t.Fatalf("Bad")
}
if idx.AvailBandwidth["eth0"] != 1000 {
t.Fatalf("Bad")
}
if idx.UsedBandwidth["eth0"] != 1 {
t.Fatalf("Bad")
}
if _, ok := idx.UsedPorts["192.168.0.100"][22]; !ok {
t.Fatalf("Bad")
}
}
func TestNetworkIndex_AddAllocs(t *testing.T) {
idx := NewNetworkIndex()
allocs := []*Allocation{
&Allocation{
TaskResources: map[string]*Resources{
"web": &Resources{
Networks: []*NetworkResource{
&NetworkResource{
Device: "eth0",
IP: "192.168.0.100",
MBits: 20,
ReservedPorts: []int{8000, 9000},
},
},
},
},
},
&Allocation{
TaskResources: map[string]*Resources{
"api": &Resources{
Networks: []*NetworkResource{
&NetworkResource{
Device: "eth0",
IP: "192.168.0.100",
MBits: 50,
ReservedPorts: []int{10000},
},
},
},
},
},
}
collide := idx.AddAllocs(allocs)
if collide {
t.Fatalf("bad")
}
if idx.UsedBandwidth["eth0"] != 70 {
t.Fatalf("Bad")
}
if _, ok := idx.UsedPorts["192.168.0.100"][8000]; !ok {
t.Fatalf("Bad")
}
if _, ok := idx.UsedPorts["192.168.0.100"][9000]; !ok {
t.Fatalf("Bad")
}
if _, ok := idx.UsedPorts["192.168.0.100"][10000]; !ok {
t.Fatalf("Bad")
}
}
func TestNetworkIndex_AddReserved(t *testing.T) {
idx := NewNetworkIndex()
reserved := &NetworkResource{
Device: "eth0",
IP: "192.168.0.100",
MBits: 20,
ReservedPorts: []int{8000, 9000},
}
collide := idx.AddReserved(reserved)
if collide {
t.Fatalf("bad")
}
if idx.UsedBandwidth["eth0"] != 20 {
t.Fatalf("Bad")
}
if _, ok := idx.UsedPorts["192.168.0.100"][8000]; !ok {
t.Fatalf("Bad")
}
if _, ok := idx.UsedPorts["192.168.0.100"][9000]; !ok {
t.Fatalf("Bad")
}
// Try to reserve the same network
collide = idx.AddReserved(reserved)
if !collide {
t.Fatalf("bad")
}
}
func TestNetworkIndex_yieldIP(t *testing.T) {
idx := NewNetworkIndex()
n := &Node{
Resources: &Resources{
Networks: []*NetworkResource{
&NetworkResource{
Device: "eth0",
CIDR: "192.168.0.100/30",
MBits: 1000,
},
},
},
Reserved: &Resources{
Networks: []*NetworkResource{
&NetworkResource{
Device: "eth0",
IP: "192.168.0.100",
ReservedPorts: []int{22},
MBits: 1,
},
},
},
}
idx.SetNode(n)
var out []string
idx.yieldIP(func(n *NetworkResource, ip net.IP) (stop bool) {
out = append(out, ip.String())
return
})
expect := []string{"192.168.0.100", "192.168.0.101",
"192.168.0.102", "192.168.0.103"}
if !reflect.DeepEqual(out, expect) {
t.Fatalf("bad: %v", out)
}
}
func TestNetworkIndex_AssignNetwork(t *testing.T) {
idx := NewNetworkIndex()
n := &Node{
Resources: &Resources{
Networks: []*NetworkResource{
&NetworkResource{
Device: "eth0",
CIDR: "192.168.0.100/30",
MBits: 1000,
},
},
},
Reserved: &Resources{
Networks: []*NetworkResource{
&NetworkResource{
Device: "eth0",
IP: "192.168.0.100",
ReservedPorts: []int{22},
MBits: 1,
},
},
},
}
idx.SetNode(n)
allocs := []*Allocation{
&Allocation{
TaskResources: map[string]*Resources{
"web": &Resources{
Networks: []*NetworkResource{
&NetworkResource{
Device: "eth0",
IP: "192.168.0.100",
MBits: 20,
ReservedPorts: []int{8000, 9000},
},
},
},
},
},
&Allocation{
TaskResources: map[string]*Resources{
"api": &Resources{
Networks: []*NetworkResource{
&NetworkResource{
Device: "eth0",
IP: "192.168.0.100",
MBits: 50,
ReservedPorts: []int{10000},
},
},
},
},
},
}
idx.AddAllocs(allocs)
// Ask for a reserved port
ask := &NetworkResource{
ReservedPorts: []int{8000},
}
offer, err := idx.AssignNetwork(ask)
if err != nil {
t.Fatalf("err: %v", err)
}
if offer == nil {
t.Fatalf("bad")
}
if offer.IP != "192.168.0.101" {
t.Fatalf("bad: %#v", offer)
}
if len(offer.ReservedPorts) != 1 || offer.ReservedPorts[0] != 8000 {
t.Fatalf("bad: %#v", offer)
}
// Ask for dynamic ports
ask = &NetworkResource{
DynamicPorts: 3,
}
offer, err = idx.AssignNetwork(ask)
if err != nil {
t.Fatalf("err: %v", err)
}
if offer == nil {
t.Fatalf("bad")
}
if offer.IP != "192.168.0.100" {
t.Fatalf("bad: %#v", offer)
}
if len(offer.ReservedPorts) != 3 {
t.Fatalf("bad: %#v", offer)
}
// Ask for reserved + dynamic ports
ask = &NetworkResource{
ReservedPorts: []int{12345},
DynamicPorts: 3,
}
offer, err = idx.AssignNetwork(ask)
if err != nil {
t.Fatalf("err: %v", err)
}
if offer == nil {
t.Fatalf("bad")
}
if offer.IP != "192.168.0.100" {
t.Fatalf("bad: %#v", offer)
}
if len(offer.ReservedPorts) != 4 || offer.ReservedPorts[0] != 12345 {
t.Fatalf("bad: %#v", offer)
}
// Ask for too much bandwidth
ask = &NetworkResource{
MBits: 1000,
}
offer, err = idx.AssignNetwork(ask)
if err.Error() != "bandwidth exceeded" {
t.Fatalf("err: %v", err)
}
if offer != nil {
t.Fatalf("bad")
}
}
func TestIntContains(t *testing.T) {
l := []int{1, 2, 10, 20}
if IntContains(l, 50) {
t.Fatalf("bad")
}
if !IntContains(l, 20) {
t.Fatalf("bad")
}
if !IntContains(l, 1) {
t.Fatalf("bad")
}
}


@ -538,12 +538,22 @@ type Resources struct {
Networks []*NetworkResource Networks []*NetworkResource
} }
// NetIndexByCIDR scans the list of networks for a matching // Copy returns a deep copy of the resources
// CIDR, returning the index. This currently ONLY handles func (r *Resources) Copy() *Resources {
// an exact match and not a subset CIDR. newR := new(Resources)
func (r *Resources) NetIndexByCIDR(cidr string) int { *newR = *r
n := len(r.Networks)
newR.Networks = make([]*NetworkResource, n)
for i := 0; i < n; i++ {
newR.Networks[i] = r.Networks[i].Copy()
}
return newR
}
// NetIndex finds the matching net index using device name
func (r *Resources) NetIndex(n *NetworkResource) int {
for idx, net := range r.Networks { for idx, net := range r.Networks {
if net.CIDR == cidr { if net.Device == n.Device {
return idx return idx
} }
} }
@ -551,36 +561,22 @@ func (r *Resources) NetIndexByCIDR(cidr string) int {
} }
// Superset checks if one set of resources is a superset // Superset checks if one set of resources is a superset
// of another. // of another. This ignores network resources, and the NetworkIndex
func (r *Resources) Superset(other *Resources) bool { // should be used for that.
func (r *Resources) Superset(other *Resources) (bool, string) {
if r.CPU < other.CPU { if r.CPU < other.CPU {
return false return false, "cpu exhausted"
} }
if r.MemoryMB < other.MemoryMB { if r.MemoryMB < other.MemoryMB {
return false return false, "memory exhausted"
} }
if r.DiskMB < other.DiskMB { if r.DiskMB < other.DiskMB {
return false return false, "disk exhausted"
} }
if r.IOPS < other.IOPS { if r.IOPS < other.IOPS {
return false return false, "iops exhausted"
} }
for _, net := range r.Networks { return true, ""
idx := other.NetIndexByCIDR(net.CIDR)
if idx >= 0 {
if net.MBits < other.Networks[idx].MBits {
return false
}
}
}
// Check that other does not have a network we are missing
for _, net := range other.Networks {
idx := r.NetIndexByCIDR(net.CIDR)
if idx == -1 {
return false
}
}
return true
} }
// Add adds the resources of the delta to this, potentially // Add adds the resources of the delta to this, potentially
@ -594,12 +590,14 @@ func (r *Resources) Add(delta *Resources) error {
r.DiskMB += delta.DiskMB r.DiskMB += delta.DiskMB
r.IOPS += delta.IOPS r.IOPS += delta.IOPS
for _, net := range delta.Networks { for _, n := range delta.Networks {
idx := r.NetIndexByCIDR(net.CIDR) // Find the matching interface by IP or CIDR
idx := r.NetIndex(n)
if idx == -1 { if idx == -1 {
return fmt.Errorf("missing network for CIDR %s", net.CIDR) r.Networks = append(r.Networks, n.Copy())
} else {
r.Networks[idx].Add(n)
} }
r.Networks[idx].Add(net)
} }
return nil return nil
} }
@ -607,10 +605,23 @@ func (r *Resources) Add(delta *Resources) error {
// NetworkResource is used to represent available network // NetworkResource is used to represent available network
// resources // resources
type NetworkResource struct { type NetworkResource struct {
Public bool // Is this a public address? Device string // Name of the device
CIDR string // CIDR block of addresses CIDR string // CIDR block of addresses
IP string // IP address
ReservedPorts []int // Reserved ports ReservedPorts []int // Reserved ports
MBits int // Throughput MBits int // Throughput
DynamicPorts int // Dynamically assigned ports
}
// Copy returns a deep copy of the network resource
func (n *NetworkResource) Copy() *NetworkResource {
newR := new(NetworkResource)
*newR = *n
if n.ReservedPorts != nil {
newR.ReservedPorts = make([]int, len(n.ReservedPorts))
copy(newR.ReservedPorts, n.ReservedPorts)
}
return newR
} }
// Add adds the resources of the delta to this, potentially // Add adds the resources of the delta to this, potentially
@ -620,13 +631,13 @@ func (n *NetworkResource) Add(delta *NetworkResource) {
n.ReservedPorts = append(n.ReservedPorts, delta.ReservedPorts...) n.ReservedPorts = append(n.ReservedPorts, delta.ReservedPorts...)
} }
n.MBits += delta.MBits n.MBits += delta.MBits
n.DynamicPorts += delta.DynamicPorts
} }
const ( const (
// JobTypeNomad is reserved for internal system tasks and is // JobTypeNomad is reserved for internal system tasks and is
// always handled by the CoreScheduler. // always handled by the CoreScheduler.
JobTypeCore = "_core" JobTypeCore = "_core"
JobTypeSystem = "system"
JobTypeService = "service" JobTypeService = "service"
JobTypeBatch = "batch" JobTypeBatch = "batch"
) )
@ -871,10 +882,14 @@ type Allocation struct {
// TaskGroup is the name of the task group that should be run // TaskGroup is the name of the task group that should be run
TaskGroup string TaskGroup string
// Resources is the set of resources allocated as part // Resources is the total set of resources allocated as part
// of this allocation of the task group. // of this allocation of the task group.
Resources *Resources Resources *Resources
// TaskResources is the set of resources allocated to each
// task. These should sum to the total Resources.
TaskResources map[string]*Resources
// Metrics associated with this allocation // Metrics associated with this allocation
Metrics *AllocMetric Metrics *AllocMetric
@ -964,6 +979,9 @@ type AllocMetric struct {
// ClassExhausted is the number of nodes exhausted by class // ClassExhausted is the number of nodes exhausted by class
ClassExhausted map[string]int ClassExhausted map[string]int
// DimensionExhaused provides the count by dimension or reason
DimensionExhaused map[string]int
// Scores is the scores of the final few nodes remaining // Scores is the scores of the final few nodes remaining
// for placement. The top score is typically selected. // for placement. The top score is typically selected.
Scores map[string]float64 Scores map[string]float64
@ -999,7 +1017,7 @@ func (a *AllocMetric) FilterNode(node *Node, constraint string) {
} }
} }
func (a *AllocMetric) ExhaustedNode(node *Node) { func (a *AllocMetric) ExhaustedNode(node *Node, dimension string) {
a.NodesExhausted += 1 a.NodesExhausted += 1
if node != nil && node.NodeClass != "" { if node != nil && node.NodeClass != "" {
if a.ClassExhausted == nil { if a.ClassExhausted == nil {
@ -1007,6 +1025,12 @@ func (a *AllocMetric) ExhaustedNode(node *Node) {
} }
a.ClassExhausted[node.NodeClass] += 1 a.ClassExhausted[node.NodeClass] += 1
} }
if dimension != "" {
if a.DimensionExhaused == nil {
a.DimensionExhaused = make(map[string]int)
}
a.DimensionExhaused[dimension] += 1
}
} }
func (a *AllocMetric) ScoreNode(node *Node, name string, score float64) { func (a *AllocMetric) ScoreNode(node *Node, name string, score float64) {
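A small sketch of the revised Resources helpers, assuming it lives alongside the structs package code; names are illustrative. Superset now reports the exhausted dimension and deliberately ignores networks (the NetworkIndex owns that check), while Copy plus Add merge network asks by device:

func checkAndMerge(node, ask, extra *Resources) (*Resources, string, error) {
	// Superset ignores networks; the NetworkIndex owns that check.
	if ok, dim := node.Superset(ask); !ok {
		return nil, dim, nil // dim is e.g. "cpu exhausted"
	}
	merged := ask.Copy()
	if err := merged.Add(extra); err != nil {
		return nil, "", err
	}
	// Networks on the same device are combined: MBits and DynamicPorts add up.
	return merged, "", nil
}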


@ -5,20 +5,21 @@ import (
"testing" "testing"
) )
func TestResource_NetIndexByCIDR(t *testing.T) { func TestResource_NetIndex(t *testing.T) {
r := &Resources{ r := &Resources{
Networks: []*NetworkResource{ Networks: []*NetworkResource{
&NetworkResource{CIDR: "10.0.0.0/8"}, &NetworkResource{Device: "eth0"},
&NetworkResource{CIDR: "127.0.0.0/24"}, &NetworkResource{Device: "lo0"},
&NetworkResource{Device: ""},
}, },
} }
if idx := r.NetIndexByCIDR("10.0.0.0/8"); idx != 0 { if idx := r.NetIndex(&NetworkResource{Device: "eth0"}); idx != 0 {
t.Fatalf("Bad: %d", idx) t.Fatalf("Bad: %d", idx)
} }
if idx := r.NetIndexByCIDR("127.0.0.0/24"); idx != 1 { if idx := r.NetIndex(&NetworkResource{Device: "lo0"}); idx != 1 {
t.Fatalf("Bad: %d", idx) t.Fatalf("Bad: %d", idx)
} }
if idx := r.NetIndexByCIDR("10.0.0.0/16"); idx != -1 { if idx := r.NetIndex(&NetworkResource{Device: "eth1"}); idx != -1 {
t.Fatalf("Bad: %d", idx) t.Fatalf("Bad: %d", idx)
} }
} }
@ -29,36 +30,24 @@ func TestResource_Superset(t *testing.T) {
MemoryMB: 2048, MemoryMB: 2048,
DiskMB: 10000, DiskMB: 10000,
IOPS: 100, IOPS: 100,
Networks: []*NetworkResource{
&NetworkResource{
CIDR: "10.0.0.0/8",
MBits: 100,
},
},
} }
r2 := &Resources{ r2 := &Resources{
CPU: 1.0, CPU: 1.0,
MemoryMB: 1024, MemoryMB: 1024,
DiskMB: 5000, DiskMB: 5000,
IOPS: 50, IOPS: 50,
Networks: []*NetworkResource{
&NetworkResource{
CIDR: "10.0.0.0/8",
MBits: 50,
},
},
} }
if !r1.Superset(r1) { if s, _ := r1.Superset(r1); !s {
t.Fatalf("bad") t.Fatalf("bad")
} }
if !r1.Superset(r2) { if s, _ := r1.Superset(r2); !s {
t.Fatalf("bad") t.Fatalf("bad")
} }
if r2.Superset(r1) { if s, _ := r2.Superset(r1); s {
t.Fatalf("bad") t.Fatalf("bad")
} }
if !r2.Superset(r2) { if s, _ := r2.Superset(r2); !s {
t.Fatalf("bad") t.Fatalf("bad")
} }
} }
@ -84,7 +73,7 @@ func TestResource_Add(t *testing.T) {
IOPS: 50, IOPS: 50,
Networks: []*NetworkResource{ Networks: []*NetworkResource{
&NetworkResource{ &NetworkResource{
CIDR: "10.0.0.0/8", IP: "10.0.0.1",
MBits: 50, MBits: 50,
ReservedPorts: []int{80}, ReservedPorts: []int{80},
}, },
@ -115,6 +104,48 @@ func TestResource_Add(t *testing.T) {
} }
} }
func TestResource_Add_Network(t *testing.T) {
r1 := &Resources{}
r2 := &Resources{
Networks: []*NetworkResource{
&NetworkResource{
MBits: 50,
DynamicPorts: 2,
},
},
}
r3 := &Resources{
Networks: []*NetworkResource{
&NetworkResource{
MBits: 25,
DynamicPorts: 1,
},
},
}
err := r1.Add(r2)
if err != nil {
t.Fatalf("Err: %v", err)
}
err = r1.Add(r3)
if err != nil {
t.Fatalf("Err: %v", err)
}
expect := &Resources{
Networks: []*NetworkResource{
&NetworkResource{
MBits: 75,
DynamicPorts: 3,
},
},
}
if !reflect.DeepEqual(expect.Networks, r1.Networks) {
t.Fatalf("bad: %#v %#v", expect.Networks[0], r1.Networks[0])
}
}
func TestEncodeDecode(t *testing.T) { func TestEncodeDecode(t *testing.T) {
type FooRequest struct { type FooRequest struct {
Foo string Foo string


@ -311,6 +311,15 @@ func (s *GenericScheduler) inplaceUpdate(updates []allocTuple) []allocTuple {
continue continue
} }
// Restore the network offers from the existing allocation.
// We do not allow network resources (reserved/dynamic ports)
// to be updated. This is guarded in taskUpdated, so we can
// safely restore those here.
for task, resources := range option.TaskResources {
existing := update.Alloc.TaskResources[task]
resources.Networks = existing.Networks
}
// Create a shallow copy // Create a shallow copy
newAlloc := new(structs.Allocation) newAlloc := new(structs.Allocation)
*newAlloc = *update.Alloc *newAlloc = *update.Alloc
@ -319,6 +328,7 @@ func (s *GenericScheduler) inplaceUpdate(updates []allocTuple) []allocTuple {
newAlloc.EvalID = s.eval.ID newAlloc.EvalID = s.eval.ID
newAlloc.Job = s.job newAlloc.Job = s.job
newAlloc.Resources = size newAlloc.Resources = size
newAlloc.TaskResources = option.TaskResources
newAlloc.Metrics = s.ctx.Metrics() newAlloc.Metrics = s.ctx.Metrics()
newAlloc.DesiredStatus = structs.AllocDesiredStatusRun newAlloc.DesiredStatus = structs.AllocDesiredStatusRun
newAlloc.ClientStatus = structs.AllocClientStatusPending newAlloc.ClientStatus = structs.AllocClientStatusPending
@ -361,36 +371,29 @@ func (s *GenericScheduler) computePlacements(place []allocTuple) error {
// Attempt to match the task group // Attempt to match the task group
option, size := s.stack.Select(missing.TaskGroup) option, size := s.stack.Select(missing.TaskGroup)
// Handle a placement failure
var nodeID, status, desc, clientStatus string
if option == nil {
status = structs.AllocDesiredStatusFailed
desc = "failed to find a node for placement"
clientStatus = structs.AllocClientStatusFailed
} else {
nodeID = option.Node.ID
status = structs.AllocDesiredStatusRun
clientStatus = structs.AllocClientStatusPending
}
// Create an allocation for this // Create an allocation for this
alloc := &structs.Allocation{ alloc := &structs.Allocation{
ID: structs.GenerateUUID(), ID: structs.GenerateUUID(),
EvalID: s.eval.ID, EvalID: s.eval.ID,
Name: missing.Name, Name: missing.Name,
NodeID: nodeID,
JobID: s.job.ID, JobID: s.job.ID,
Job: s.job, Job: s.job,
TaskGroup: missing.TaskGroup.Name, TaskGroup: missing.TaskGroup.Name,
Resources: size, Resources: size,
Metrics: s.ctx.Metrics(), Metrics: s.ctx.Metrics(),
DesiredStatus: status,
DesiredDescription: desc,
ClientStatus: clientStatus,
} }
if nodeID != "" {
// Set fields based on if we found an allocation option
if option != nil {
alloc.NodeID = option.Node.ID
alloc.TaskResources = option.TaskResources
alloc.DesiredStatus = structs.AllocDesiredStatusRun
alloc.ClientStatus = structs.AllocClientStatusPending
s.plan.AppendAlloc(alloc) s.plan.AppendAlloc(alloc)
} else { } else {
alloc.DesiredStatus = structs.AllocDesiredStatusFailed
alloc.DesiredDescription = "failed to find a node for placement"
alloc.ClientStatus = structs.AllocClientStatusFailed
s.plan.AppendFailed(alloc) s.plan.AppendFailed(alloc)
failedTG[missing.TaskGroup] = alloc failedTG[missing.TaskGroup] = alloc
} }


@ -382,6 +382,15 @@ func TestServiceSched_JobModify_InPlace(t *testing.T) {
t.Fatalf("bad: %#v", out) t.Fatalf("bad: %#v", out)
} }
h.AssertEvalStatus(t, structs.EvalStatusComplete) h.AssertEvalStatus(t, structs.EvalStatusComplete)
// Verify the network did not change
for _, alloc := range out {
for _, resources := range alloc.TaskResources {
if resources.Networks[0].ReservedPorts[0] != 5000 {
t.Fatalf("bad: %#v", alloc)
}
}
}
} }
func TestServiceSched_JobDeregister(t *testing.T) { func TestServiceSched_JobDeregister(t *testing.T) {


@ -12,6 +12,7 @@ import (
type RankedNode struct { type RankedNode struct {
Node *structs.Node Node *structs.Node
Score float64 Score float64
TaskResources map[string]*structs.Resources
// Allocs is used to cache the proposed allocations on the // Allocs is used to cache the proposed allocations on the
// node. This can be shared between iterators that require it. // node. This can be shared between iterators that require it.
@ -22,6 +23,27 @@ func (r *RankedNode) GoString() string {
return fmt.Sprintf("<Node: %s Score: %0.3f>", r.Node.ID, r.Score) return fmt.Sprintf("<Node: %s Score: %0.3f>", r.Node.ID, r.Score)
} }
func (r *RankedNode) ProposedAllocs(ctx Context) ([]*structs.Allocation, error) {
if r.Proposed != nil {
return r.Proposed, nil
}
p, err := ctx.ProposedAllocs(r.Node.ID)
if err != nil {
return nil, err
}
r.Proposed = p
return p, nil
}
func (r *RankedNode) SetTaskResources(task *structs.Task,
resource *structs.Resources) {
if r.TaskResources == nil {
r.TaskResources = make(map[string]*structs.Resources)
}
r.TaskResources[task.Name] = resource
}
// RankFeasibleIterator is used to iteratively yield nodes along // RankFeasibleIterator is used to iteratively yield nodes along
// with ranking metadata. The iterators may manage some state for // with ranking metadata. The iterators may manage some state for
// performance optimizations. // performance optimizations.
@ -111,63 +133,90 @@ func (iter *StaticRankIterator) Reset() {
type BinPackIterator struct { type BinPackIterator struct {
ctx Context ctx Context
source RankIterator source RankIterator
resources *structs.Resources
evict bool evict bool
priority int priority int
tasks []*structs.Task
} }
// NewBinPackIterator returns a BinPackIterator which tries to fit the given // NewBinPackIterator returns a BinPackIterator which tries to fit tasks
// resources, potentially evicting other tasks based on a given priority. // potentially evicting other tasks based on a given priority.
func NewBinPackIterator(ctx Context, source RankIterator, resources *structs.Resources, evict bool, priority int) *BinPackIterator { func NewBinPackIterator(ctx Context, source RankIterator, evict bool, priority int) *BinPackIterator {
iter := &BinPackIterator{ iter := &BinPackIterator{
ctx: ctx, ctx: ctx,
source: source, source: source,
resources: resources,
evict: evict, evict: evict,
priority: priority, priority: priority,
} }
return iter return iter
} }
func (iter *BinPackIterator) SetResources(r *structs.Resources) {
iter.resources = r
}
func (iter *BinPackIterator) SetPriority(p int) { func (iter *BinPackIterator) SetPriority(p int) {
iter.priority = p iter.priority = p
} }
func (iter *BinPackIterator) SetTasks(tasks []*structs.Task) {
iter.tasks = tasks
}
func (iter *BinPackIterator) Next() *RankedNode { func (iter *BinPackIterator) Next() *RankedNode {
OUTER:
for { for {
// Get the next potential option // Get the next potential option
option := iter.source.Next() option := iter.source.Next()
if option == nil { if option == nil {
return nil return nil
} }
nodeID := option.Node.ID
// Get the proposed allocations // Get the proposed allocations
var proposed []*structs.Allocation proposed, err := option.ProposedAllocs(iter.ctx)
if option.Proposed != nil {
proposed = option.Proposed
} else {
p, err := iter.ctx.ProposedAllocs(nodeID)
if err != nil { if err != nil {
iter.ctx.Logger().Printf("[ERR] sched.binpack: failed to get proposed allocations for '%s': %v", iter.ctx.Logger().Printf(
nodeID, err) "[ERR] sched.binpack: failed to get proposed allocations: %v",
err)
continue continue
} }
proposed = p
option.Proposed = p // Index the existing network usage
netIdx := structs.NewNetworkIndex()
netIdx.SetNode(option.Node)
netIdx.AddAllocs(proposed)
// Assign the resources for each task
total := new(structs.Resources)
for _, task := range iter.tasks {
taskResources := task.Resources.Copy()
// Check if we need a network resource
if len(taskResources.Networks) > 0 {
ask := taskResources.Networks[0]
offer, err := netIdx.AssignNetwork(ask)
if offer == nil {
iter.ctx.Metrics().ExhaustedNode(option.Node,
fmt.Sprintf("network: %s", err))
continue OUTER
}
// Reserve this to prevent another task from colliding
netIdx.AddReserved(offer)
// Update the network ask to the offer
taskResources.Networks = []*structs.NetworkResource{offer}
}
// Store the task resource
option.SetTaskResources(task, taskResources)
// Accumulate the total resource requirement
total.Add(taskResources)
} }
// Add the resources we are trying to fit // Add the resources we are trying to fit
proposed = append(proposed, &structs.Allocation{Resources: iter.resources}) proposed = append(proposed, &structs.Allocation{Resources: total})
// Check if these allocations fit, if they do not, simply skip this node // Check if these allocations fit, if they do not, simply skip this node
fit, util, _ := structs.AllocsFit(option.Node, proposed) fit, dim, util, _ := structs.AllocsFit(option.Node, proposed, netIdx)
if !fit { if !fit {
iter.ctx.Metrics().ExhaustedNode(option.Node) iter.ctx.Metrics().ExhaustedNode(option.Node, dim)
continue continue
} }
@ -220,22 +269,15 @@ func (iter *JobAntiAffinityIterator) Next() *RankedNode {
if option == nil { if option == nil {
return nil return nil
} }
nodeID := option.Node.ID
// Get the proposed allocations // Get the proposed allocations
var proposed []*structs.Allocation proposed, err := option.ProposedAllocs(iter.ctx)
if option.Proposed != nil {
proposed = option.Proposed
} else {
p, err := iter.ctx.ProposedAllocs(nodeID)
if err != nil { if err != nil {
iter.ctx.Logger().Printf("[ERR] sched.job-anti-affinity: failed to get proposed allocations for '%s': %v", iter.ctx.Logger().Printf(
nodeID, err) "[ERR] sched.job-anti-aff: failed to get proposed allocations: %v",
err)
continue continue
} }
proposed = p
option.Proposed = p
}
// Determine the number of collisions // Determine the number of collisions
collisions := 0 collisions := 0
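A sketch of wiring the reworked BinPackIterator, assuming it sits in the scheduler package next to the code above; the helper name is illustrative. Resources are no longer passed at construction: the task list is set per task group, and the iterator computes the per-task network offers itself, exposing them on each RankedNode:

func rankTaskGroup(ctx Context, source RankIterator, tg *structs.TaskGroup) []*RankedNode {
	binp := NewBinPackIterator(ctx, source, false, 0)
	binp.SetTasks(tg.Tasks)

	var out []*RankedNode
	for option := binp.Next(); option != nil; option = binp.Next() {
		// option.TaskResources now holds the network offer per task.
		out = append(out, option)
	}
	return out
}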


@ -68,11 +68,16 @@ func TestBinPackIterator_NoExistingAlloc(t *testing.T) {
 	}
 	static := NewStaticRankIterator(ctx, nodes)

-	resources := &structs.Resources{
+	task := &structs.Task{
+		Name: "web",
+		Resources: &structs.Resources{
 			CPU:      1024,
 			MemoryMB: 1024,
+		},
 	}
-	binp := NewBinPackIterator(ctx, static, resources, false, 0)
+	binp := NewBinPackIterator(ctx, static, false, 0)
+	binp.SetTasks([]*structs.Task{task})

 	out := collectRanked(binp)
 	if len(out) != 2 {
@ -137,11 +142,16 @@ func TestBinPackIterator_PlannedAlloc(t *testing.T) {
 		},
 	}

-	resources := &structs.Resources{
+	task := &structs.Task{
+		Name: "web",
+		Resources: &structs.Resources{
 			CPU:      1024,
 			MemoryMB: 1024,
+		},
 	}
-	binp := NewBinPackIterator(ctx, static, resources, false, 0)
+	binp := NewBinPackIterator(ctx, static, false, 0)
+	binp.SetTasks([]*structs.Task{task})

 	out := collectRanked(binp)
 	if len(out) != 1 {
@ -207,11 +217,16 @@ func TestBinPackIterator_ExistingAlloc(t *testing.T) {
 	}
 	noErr(t, state.UpsertAllocs(1000, []*structs.Allocation{alloc1, alloc2}))

-	resources := &structs.Resources{
+	task := &structs.Task{
+		Name: "web",
+		Resources: &structs.Resources{
 			CPU:      1024,
 			MemoryMB: 1024,
+		},
 	}
-	binp := NewBinPackIterator(ctx, static, resources, false, 0)
+	binp := NewBinPackIterator(ctx, static, false, 0)
+	binp.SetTasks([]*structs.Task{task})

 	out := collectRanked(binp)
 	if len(out) != 1 {
@ -280,11 +295,16 @@ func TestBinPackIterator_ExistingAlloc_PlannedEvict(t *testing.T) {
 	plan := ctx.Plan()
 	plan.NodeUpdate[nodes[0].Node.ID] = []*structs.Allocation{alloc1}

-	resources := &structs.Resources{
+	task := &structs.Task{
+		Name: "web",
+		Resources: &structs.Resources{
 			CPU:      1024,
 			MemoryMB: 1024,
+		},
 	}
-	binp := NewBinPackIterator(ctx, static, resources, false, 0)
+	binp := NewBinPackIterator(ctx, static, false, 0)
+	binp.SetTasks([]*structs.Task{task})

 	out := collectRanked(binp)
 	if len(out) != 2 {

View file

@ -76,7 +76,7 @@ func NewGenericStack(batch bool, ctx Context, baseNodes []*structs.Node) *Generi
 	// by a particular task group. Only enable eviction for the service
 	// scheduler as that logic is expensive.
 	evict := !batch
-	s.binPack = NewBinPackIterator(ctx, rankSource, nil, evict, 0)
+	s.binPack = NewBinPackIterator(ctx, rankSource, evict, 0)

 	// Apply the job anti-affinity iterator. This is to avoid placing
 	// multiple allocations on the same node for this job. The penalty
@ -149,11 +149,18 @@ func (s *GenericStack) Select(tg *structs.TaskGroup) (*RankedNode, *structs.Reso
 	// Update the parameters of iterators
 	s.taskGroupDrivers.SetDrivers(drivers)
 	s.taskGroupConstraint.SetConstraints(constr)
-	s.binPack.SetResources(size)
+	s.binPack.SetTasks(tg.Tasks)

 	// Find the node with the max score
 	option := s.maxScore.Next()

+	// Ensure that the task resources were specified
+	if option != nil && len(option.TaskResources) != len(tg.Tasks) {
+		for _, task := range tg.Tasks {
+			option.SetTaskResources(task, task.Resources)
+		}
+	}
+
 	// Store the compute time
 	s.ctx.Metrics().AllocationTime = time.Since(start)
 	return option, size
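The fallback added at the end of `Select` means callers always see one resource entry per task, even when bin-packing recorded no per-task override. A small sketch of that fallback using simplified, hypothetical stand-in types:

```
package main

import "fmt"

// Simplified stand-ins: if the ranked option does not carry a per-task
// resource override for every task, fall back to each task's declared
// resources, mirroring the guard added in Select above.
type resources struct{ CPU, MemoryMB int }

type task struct {
	Name      string
	Resources resources
}

type rankedNode struct {
	TaskResources map[string]resources
}

func (r *rankedNode) setTaskResources(t task, res resources) {
	if r.TaskResources == nil {
		r.TaskResources = map[string]resources{}
	}
	r.TaskResources[t.Name] = res
}

func main() {
	tasks := []task{{Name: "web", Resources: resources{CPU: 1024, MemoryMB: 1024}}}
	option := &rankedNode{} // imagine Select returned this without overrides

	if option != nil && len(option.TaskResources) != len(tasks) {
		for _, t := range tasks {
			option.setTaskResources(t, t.Resources)
		}
	}
	fmt.Printf("%+v\n", option.TaskResources)
}
```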

View file

@ -227,6 +227,11 @@ func tasksUpdated(a, b *structs.TaskGroup) bool {
 		if !reflect.DeepEqual(at.Config, bt.Config) {
 			return true
 		}
+
+		// Inspect the network to see if the resource ask is different
+		if !reflect.DeepEqual(at.Resources.Networks, bt.Resources.Networks) {
+			return true
+		}
 	}
 	return false
 }
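The added comparison means any change to a task's network ask, such as a different dynamic port count, forces a new placement instead of an in-place update. A tiny sketch of the check with invented field names:

```
package main

import (
	"fmt"
	"reflect"
)

// networkAsk is a simplified stand-in for a task's network resource request.
type networkAsk struct {
	MBits        int
	DynamicPorts int
}

// tasksUpdated reports whether the network ask changed between two versions
// of a task, which would rule out an in-place update.
func tasksUpdated(oldNets, newNets []networkAsk) bool {
	return !reflect.DeepEqual(oldNets, newNets)
}

func main() {
	oldAsk := []networkAsk{{MBits: 100, DynamicPorts: 2}}
	newAsk := []networkAsk{{MBits: 100, DynamicPorts: 3}}
	fmt.Println(tasksUpdated(oldAsk, newAsk)) // true: needs a new placement
}
```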

View file

@ -259,4 +259,10 @@ func TestTasksUpdated(t *testing.T) {
 	if !tasksUpdated(j1.TaskGroups[0], j5.TaskGroups[0]) {
 		t.Fatalf("bad")
 	}
+
+	j6 := mock.Job()
+	j6.TaskGroups[0].Tasks[0].Resources.Networks[0].DynamicPorts = 3
+	if !tasksUpdated(j1.TaskGroups[0], j6.TaskGroups[0]) {
+		t.Fatalf("bad")
+	}
 }

View file

@ -29,15 +29,36 @@ var offset uint64

 // TestServerConfig is the main server configuration struct.
 type TestServerConfig struct {
-	HTTPAddr          string    `json:"http_addr,omitempty"`
 	Bootstrap         bool      `json:"bootstrap,omitempty"`
 	DataDir           string    `json:"data_dir,omitempty"`
 	Region            string    `json:"region,omitempty"`
 	DisableCheckpoint bool      `json:"disable_update_check"`
 	LogLevel          string    `json:"log_level,omitempty"`
+	Ports             *PortsConfig  `json:"ports,omitempty"`
+	Server            *ServerConfig `json:"server,omitempty"`
+	Client            *ClientConfig `json:"client,omitempty"`
+	DevMode           bool          `json:"-"`
 	Stdout, Stderr    io.Writer `json:"-"`
 }

+// Ports is used to configure the network ports we use.
+type PortsConfig struct {
+	HTTP int `json:"http,omitempty"`
+	RPC  int `json:"rpc,omitempty"`
+	Serf int `json:"serf,omitempty"`
+}
+
+// ServerConfig is used to configure the nomad server.
+type ServerConfig struct {
+	Enabled   bool `json:"enabled"`
+	Bootstrap bool `json:"bootstrap"`
+}
+
+// ClientConfig is used to configure the client
+type ClientConfig struct {
+	Enabled bool `json:"enabled"`
+}
+
 // ServerConfigCallback is a function interface which can be
 // passed to NewTestServerConfig to modify the server config.
 type ServerConfigCallback func(c *TestServerConfig)
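For reference, these structs are what the harness serializes and hands to `nomad agent -config`. Below is a self-contained sketch, using simplified stand-in structs with the same JSON tags, of roughly what that file looks like; the port values are illustrative only.

```
package main

import (
	"encoding/json"
	"fmt"
)

// Simplified stand-ins for the testutil config structs above, just to show
// the shape of the JSON the test harness writes for the agent.
type portsConfig struct {
	HTTP int `json:"http,omitempty"`
	RPC  int `json:"rpc,omitempty"`
	Serf int `json:"serf,omitempty"`
}

type serverConfig struct {
	Enabled   bool `json:"enabled"`
	Bootstrap bool `json:"bootstrap"`
}

type clientConfig struct {
	Enabled bool `json:"enabled"`
}

type testServerConfig struct {
	Bootstrap bool          `json:"bootstrap,omitempty"`
	LogLevel  string        `json:"log_level,omitempty"`
	Ports     *portsConfig  `json:"ports,omitempty"`
	Server    *serverConfig `json:"server,omitempty"`
	Client    *clientConfig `json:"client,omitempty"`
}

func main() {
	cfg := testServerConfig{
		Bootstrap: true,
		LogLevel:  "DEBUG",
		Ports:     &portsConfig{HTTP: 20000, RPC: 21000, Serf: 22000},
		Server:    &serverConfig{Enabled: true, Bootstrap: true},
		Client:    &clientConfig{Enabled: false},
	}
	out, _ := json.MarshalIndent(cfg, "", "  ") // error ignored in this sketch
	fmt.Println(string(out))
}
```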
@ -51,7 +72,18 @@ func defaultServerConfig() *TestServerConfig {
 		DisableCheckpoint: true,
 		Bootstrap:         true,
 		LogLevel:          "DEBUG",
-		HTTPAddr:          fmt.Sprintf("127.0.0.1:%d", 20000+idx),
+		Ports: &PortsConfig{
+			HTTP: 20000 + idx,
+			RPC:  21000 + idx,
+			Serf: 22000 + idx,
+		},
+		Server: &ServerConfig{
+			Enabled:   true,
+			Bootstrap: true,
+		},
+		Client: &ClientConfig{
+			Enabled: false,
+		},
 	}
 }
@ -62,6 +94,7 @@ type TestServer struct {
 	t *testing.T

 	HTTPAddr   string
+	SerfAddr   string
 	HttpClient *http.Client
 }
@ -110,8 +143,13 @@ func NewTestServer(t *testing.T, cb ServerConfigCallback) *TestServer {
 		stderr = nomadConfig.Stderr
 	}

+	args := []string{"agent", "-config", configFile.Name()}
+	if nomadConfig.DevMode {
+		args = append(args, "-dev")
+	}
+
 	// Start the server
-	cmd := exec.Command("nomad", "agent", "-dev", "-config", configFile.Name())
+	cmd := exec.Command("nomad", args...)
 	cmd.Stdout = stdout
 	cmd.Stderr = stderr
 	if err := cmd.Start(); err != nil {
@ -126,7 +164,8 @@ func NewTestServer(t *testing.T, cb ServerConfigCallback) *TestServer {
 		PID:        cmd.Process.Pid,
 		t:          t,
-		HTTPAddr:   nomadConfig.HTTPAddr,
+		HTTPAddr:   fmt.Sprintf("127.0.0.1:%d", nomadConfig.Ports.HTTP),
+		SerfAddr:   fmt.Sprintf("127.0.0.1:%d", nomadConfig.Ports.Serf),
 		HttpClient: client,
 	}
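A plausible usage sketch of the updated harness, assuming the `github.com/hashicorp/nomad/testutil` import path, a `nomad` binary on the PATH, and a `Stop` cleanup method as in similar HashiCorp test helpers; the callback flips the new `DevMode` and `Client` fields before the config file is written:

```
package testutil_test

import (
	"testing"

	"github.com/hashicorp/nomad/testutil"
)

func TestAgentComesUp(t *testing.T) {
	// Start a throwaway agent in dev mode with the client enabled; the
	// callback mutates the default config before it is written to disk.
	srv := testutil.NewTestServer(t, func(c *testutil.TestServerConfig) {
		c.DevMode = true
		c.Client.Enabled = true
	})
	defer srv.Stop() // Stop is assumed here for cleanup

	t.Logf("HTTP API at %s, Serf at %s", srv.HTTPAddr, srv.SerfAddr)
}
```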

2
website/.buildpacks Normal file
View file

@ -0,0 +1,2 @@
https://github.com/heroku/heroku-buildpack-ruby.git
https://github.com/hashicorp/heroku-buildpack-middleman.git

3
website/Gemfile Normal file
View file

@ -0,0 +1,3 @@
source "https://rubygems.org"
gem "middleman-hashicorp", github: "hashicorp/middleman-hashicorp"

182
website/Gemfile.lock Normal file
View file

@ -0,0 +1,182 @@
GIT
remote: git://github.com/hashicorp/middleman-hashicorp.git
revision: 76f0f284ad44cea0457484ea83467192f02daf87
specs:
middleman-hashicorp (0.1.0)
bootstrap-sass (~> 3.3)
builder (~> 3.2)
less (~> 2.6)
middleman (~> 3.3)
middleman-livereload (~> 3.4)
middleman-minify-html (~> 3.4)
middleman-syntax (~> 2.0)
rack-contrib (~> 1.2)
rack-protection (~> 1.5)
rack-rewrite (~> 1.5)
rack-ssl-enforcer (~> 0.2)
redcarpet (~> 3.2)
therubyracer (~> 0.12)
thin (~> 1.6)
GEM
remote: https://rubygems.org/
specs:
activesupport (4.1.12)
i18n (~> 0.6, >= 0.6.9)
json (~> 1.7, >= 1.7.7)
minitest (~> 5.1)
thread_safe (~> 0.1)
tzinfo (~> 1.1)
autoprefixer-rails (5.2.1)
execjs
json
bootstrap-sass (3.3.5.1)
autoprefixer-rails (>= 5.0.0.1)
sass (>= 3.3.0)
builder (3.2.2)
celluloid (0.16.0)
timers (~> 4.0.0)
chunky_png (1.3.4)
coffee-script (2.4.1)
coffee-script-source
execjs
coffee-script-source (1.9.1.1)
commonjs (0.2.7)
compass (1.0.3)
chunky_png (~> 1.2)
compass-core (~> 1.0.2)
compass-import-once (~> 1.0.5)
rb-fsevent (>= 0.9.3)
rb-inotify (>= 0.9)
sass (>= 3.3.13, < 3.5)
compass-core (1.0.3)
multi_json (~> 1.0)
sass (>= 3.3.0, < 3.5)
compass-import-once (1.0.5)
sass (>= 3.2, < 3.5)
daemons (1.2.3)
em-websocket (0.5.1)
eventmachine (>= 0.12.9)
http_parser.rb (~> 0.6.0)
erubis (2.7.0)
eventmachine (1.0.7)
execjs (2.5.2)
ffi (1.9.10)
git-version-bump (0.15.1)
haml (4.0.6)
tilt
hike (1.2.3)
hitimes (1.2.2)
hooks (0.4.0)
uber (~> 0.0.4)
htmlcompressor (0.2.0)
http_parser.rb (0.6.0)
i18n (0.7.0)
json (1.8.3)
kramdown (1.8.0)
less (2.6.0)
commonjs (~> 0.2.7)
libv8 (3.16.14.11)
listen (2.10.1)
celluloid (~> 0.16.0)
rb-fsevent (>= 0.9.3)
rb-inotify (>= 0.9)
middleman (3.3.12)
coffee-script (~> 2.2)
compass (>= 1.0.0, < 2.0.0)
compass-import-once (= 1.0.5)
execjs (~> 2.0)
haml (>= 4.0.5)
kramdown (~> 1.2)
middleman-core (= 3.3.12)
middleman-sprockets (>= 3.1.2)
sass (>= 3.4.0, < 4.0)
uglifier (~> 2.5)
middleman-core (3.3.12)
activesupport (~> 4.1.0)
bundler (~> 1.1)
erubis
hooks (~> 0.3)
i18n (~> 0.7.0)
listen (>= 2.7.9, < 3.0)
padrino-helpers (~> 0.12.3)
rack (>= 1.4.5, < 2.0)
rack-test (~> 0.6.2)
thor (>= 0.15.2, < 2.0)
tilt (~> 1.4.1, < 2.0)
middleman-livereload (3.4.2)
em-websocket (~> 0.5.1)
middleman-core (>= 3.3)
rack-livereload (~> 0.3.15)
middleman-minify-html (3.4.1)
htmlcompressor (~> 0.2.0)
middleman-core (>= 3.2)
middleman-sprockets (3.4.2)
middleman-core (>= 3.3)
sprockets (~> 2.12.1)
sprockets-helpers (~> 1.1.0)
sprockets-sass (~> 1.3.0)
middleman-syntax (2.0.0)
middleman-core (~> 3.2)
rouge (~> 1.0)
minitest (5.7.0)
multi_json (1.11.2)
padrino-helpers (0.12.5)
i18n (~> 0.6, >= 0.6.7)
padrino-support (= 0.12.5)
tilt (~> 1.4.1)
padrino-support (0.12.5)
activesupport (>= 3.1)
rack (1.6.4)
rack-contrib (1.3.0)
git-version-bump (~> 0.15)
rack (~> 1.4)
rack-livereload (0.3.16)
rack
rack-protection (1.5.3)
rack
rack-rewrite (1.5.1)
rack-ssl-enforcer (0.2.8)
rack-test (0.6.3)
rack (>= 1.0)
rb-fsevent (0.9.5)
rb-inotify (0.9.5)
ffi (>= 0.5.0)
redcarpet (3.3.2)
ref (2.0.0)
rouge (1.9.1)
sass (3.4.16)
sprockets (2.12.4)
hike (~> 1.2)
multi_json (~> 1.0)
rack (~> 1.0)
tilt (~> 1.1, != 1.3.0)
sprockets-helpers (1.1.0)
sprockets (~> 2.0)
sprockets-sass (1.3.1)
sprockets (~> 2.0)
tilt (~> 1.1)
therubyracer (0.12.2)
libv8 (~> 3.16.14.0)
ref
thin (1.6.3)
daemons (~> 1.0, >= 1.0.9)
eventmachine (~> 1.0)
rack (~> 1.0)
thor (0.19.1)
thread_safe (0.3.5)
tilt (1.4.1)
timers (4.0.1)
hitimes
tzinfo (1.2.2)
thread_safe (~> 0.1)
uber (0.0.13)
uglifier (2.7.1)
execjs (>= 0.3.0)
json (>= 1.8.0)
PLATFORMS
ruby
DEPENDENCIES
middleman-hashicorp!

10
website/LICENSE.md Normal file
View file

@ -0,0 +1,10 @@
# Proprietary License
This license is temporary while a more official one is drafted. However,
this should make it clear:
* The text contents of this website are MPL 2.0 licensed.
* The design contents of this website are proprietary and may not be reproduced
or reused in any way other than to run the Vault website locally. The license
for the design is owned solely by HashiCorp, Inc.

1
website/Procfile Normal file
View file

@ -0,0 +1 @@
web: bundle exec thin start -p $PORT

24
website/README.md Normal file
View file

@ -0,0 +1,24 @@
# Nomad Website
This subdirectory contains the entire source for the [Nomad Website](https://nomadproject.io/).
This is a [Middleman](http://middlemanapp.com) project, which builds a static
site from these source files.
## Contributions Welcome!
If you find a typo or you feel like you can improve the HTML, CSS, or
JavaScript, we welcome contributions. Feel free to open issues or pull
requests like any normal GitHub project, and we'll merge it in.
## Running the Site Locally
Running the site locally is simple. Clone this repo and run the following
commands:
```
$ bundle
$ bundle exec middleman server
```
Then open up `http://localhost:4567`. Note that for some URLs (in the navigation) you may need to append ".html" to make them work.

38
website/Vagrantfile vendored Normal file
View file

@ -0,0 +1,38 @@
# -*- mode: ruby -*-
# vi: set ft=ruby :
# Vagrantfile API/syntax version. Don't touch unless you know what you're doing!
VAGRANTFILE_API_VERSION = "2"
$script = <<SCRIPT
sudo apt-get -y update
# RVM/Ruby
sudo apt-get -y install curl
curl -sSL https://get.rvm.io | bash -s stable
. ~/.bashrc
. ~/.bash_profile
rvm install 2.0.0
rvm --default use 2.0.0
# Middleman deps
cd /vagrant
bundle
# JS stuff
sudo apt-get install -y python-software-properties
sudo add-apt-repository -y ppa:chris-lea/node.js
sudo apt-get update -y
sudo apt-get install -y nodejs
# Get JS deps
cd /vagrant/source
npm install
SCRIPT
Vagrant.configure(VAGRANTFILE_API_VERSION) do |config|
config.vm.box = "chef/ubuntu-12.04"
config.vm.network "private_network", ip: "33.33.30.10"
config.vm.provision "shell", inline: $script, privileged: false
config.vm.synced_folder ".", "/vagrant", type: "rsync"
end

15
website/config.rb Normal file
View file

@ -0,0 +1,15 @@
#-------------------------------------------------------------------------
# Configure Middleman
#-------------------------------------------------------------------------
set :base_url, "https://www.vaultproject.io/"
activate :hashicorp do |h|
h.version = ENV["VAULT_VERSION"]
h.bintray_enabled = ENV["BINTRAY_ENABLED"]
h.bintray_repo = "mitchellh/vault"
h.bintray_user = "mitchellh"
h.bintray_key = ENV["BINTRAY_API_KEY"]
h.minify_javascript = false
end

38
website/config.ru Normal file
View file

@ -0,0 +1,38 @@
require "rack"
require "rack/contrib/not_found"
require "rack/contrib/response_headers"
require "rack/contrib/static_cache"
require "rack/contrib/try_static"
require "rack/protection"
# Protect against various bad things
use Rack::Protection::JsonCsrf
use Rack::Protection::RemoteReferrer
use Rack::Protection::HttpOrigin
use Rack::Protection::EscapedParams
use Rack::Protection::XSSHeader
use Rack::Protection::FrameOptions
use Rack::Protection::PathTraversal
use Rack::Protection::IPSpoofing
# Properly compress the output if the client can handle it.
use Rack::Deflater
# Set the "forever expire" cache headers for these static assets. Since
# we hash the contents of the assets to determine filenames, this is safe
# to do.
use Rack::StaticCache,
:root => "build",
:urls => ["/images", "/javascripts", "/stylesheets", "/webfonts"],
:duration => 2,
:versioning => false
# Try to find a static file that matches our request, since Middleman
# statically generates everything.
use Rack::TryStatic,
:root => "build",
:urls => ["/"],
:try => [".html", "index.html", "/index.html"]
# 404 if we reached this point. Sad times.
run Rack::NotFound.new(File.expand_path("../build/404.html", __FILE__))

View file

@ -0,0 +1,12 @@
module SidebarHelpers
# This helps by setting the "active" class for sidebar nav elements
# if the YAML frontmatter matches the expected value.
def sidebar_current(expected)
current = current_page.data.sidebar_current || ""
if current.start_with?(expected)
return " class=\"active\""
else
return ""
end
end
end

2
website/source/.gitignore vendored Normal file
View file

@ -0,0 +1,2 @@
# Source folder
node_modules/

View file

@ -0,0 +1,5 @@
---
noindex: true
---
<h2>Page Not Found</h2>

BIN
website/source/assets/images/caret-green.png (Stored with Git LFS) Normal file

Binary file not shown.

BIN
website/source/assets/images/caret-green@2x.png (Stored with Git LFS) Normal file

Binary file not shown.

BIN
website/source/assets/images/caret-light.png (Stored with Git LFS) Normal file

Binary file not shown.

BIN
website/source/assets/images/caret-light@2x.png (Stored with Git LFS) Normal file

Binary file not shown.

BIN
website/source/assets/images/caret-white.png (Stored with Git LFS) Normal file

Binary file not shown.

BIN
website/source/assets/images/caret-white@2x.png (Stored with Git LFS) Normal file

Binary file not shown.

BIN
website/source/assets/images/favicon.png (Stored with Git LFS) Normal file

Binary file not shown.

BIN
website/source/assets/images/favicon@2x.png (Stored with Git LFS) Normal file

Binary file not shown.

BIN
website/source/assets/images/feature-density.png (Stored with Git LFS) Normal file

Binary file not shown.

BIN
website/source/assets/images/feature-density@2x.png (Stored with Git LFS) Normal file

Binary file not shown.

BIN
website/source/assets/images/feature-deploy.png (Stored with Git LFS) Normal file

Binary file not shown.

BIN
website/source/assets/images/feature-deploy@2x.png (Stored with Git LFS) Normal file

Binary file not shown.

BIN
website/source/assets/images/feature-healing.png (Stored with Git LFS) Normal file

Binary file not shown.

BIN
website/source/assets/images/feature-healing@2x.png (Stored with Git LFS) Normal file

Binary file not shown.

BIN
website/source/assets/images/feature-manage.png (Stored with Git LFS) Normal file

Binary file not shown.

BIN
website/source/assets/images/feature-manage@2x.png (Stored with Git LFS) Normal file

Binary file not shown.

BIN
website/source/assets/images/hashi-logo.png (Stored with Git LFS) Normal file

Binary file not shown.

BIN
website/source/assets/images/hashi-logo@2x.png (Stored with Git LFS) Normal file

Binary file not shown.

BIN
website/source/assets/images/hero-image.png (Stored with Git LFS) Normal file

Binary file not shown.

BIN
website/source/assets/images/hero-image@2x.png (Stored with Git LFS) Normal file

Binary file not shown.

BIN
website/source/assets/images/hero-logotype.png (Stored with Git LFS) Normal file

Binary file not shown.

BIN
website/source/assets/images/hero-logotype@2x.png (Stored with Git LFS) Normal file

Binary file not shown.

BIN
website/source/assets/images/icon-docker-outline.png (Stored with Git LFS) Normal file

Binary file not shown.

BIN
website/source/assets/images/icon-docker-outline@2x.png (Stored with Git LFS) Normal file

Binary file not shown.

BIN
website/source/assets/images/icon-download.png (Stored with Git LFS) Normal file

Binary file not shown.

BIN
website/source/assets/images/icon-download@2x.png (Stored with Git LFS) Normal file

Binary file not shown.

BIN
website/source/assets/images/icon-github.png (Stored with Git LFS) Normal file

Binary file not shown.

BIN
website/source/assets/images/icon-github@2x.png (Stored with Git LFS) Normal file

Binary file not shown.

BIN
website/source/assets/images/logo-header.png (Stored with Git LFS) Normal file

Binary file not shown.

BIN
website/source/assets/images/logo-header@2x.png (Stored with Git LFS) Normal file

Binary file not shown.

BIN
website/source/assets/images/partner-amazon.png (Stored with Git LFS) Normal file

Binary file not shown.

BIN
website/source/assets/images/partner-amazon@2x.png (Stored with Git LFS) Normal file

Binary file not shown.

BIN
website/source/assets/images/partner-docker.png (Stored with Git LFS) Normal file

Binary file not shown.

BIN
website/source/assets/images/partner-docker@2x.png (Stored with Git LFS) Normal file

Binary file not shown.

BIN
website/source/assets/images/partner-engineyard.png (Stored with Git LFS) Normal file

Binary file not shown.

BIN
website/source/assets/images/partner-engineyard@2x.png (Stored with Git LFS) Normal file

Binary file not shown.

BIN
website/source/assets/images/partner-google.png (Stored with Git LFS) Normal file

Binary file not shown.

BIN
website/source/assets/images/partner-google@2x.png (Stored with Git LFS) Normal file

Binary file not shown.

Some files were not shown because too many files have changed in this diff.