---
layout: "intro"
page_title: "Clustering"
sidebar_current: "getting-started-cluster"
description: |-
  Join another Nomad client to create your first cluster.
---

# Clustering

We have started our first agent and run a job against it in development mode.
This demonstrates the ease of use and the workflow of Nomad, but it does not show how
this could be extended to a scalable, production-grade configuration. In this step,
we will create our first real cluster with multiple nodes.

## Starting the Server

The first step is to create the config file for the server. Either download
the file from the [repository here](https://github.com/hashicorp/nomad/tree/master/demo/vagrant),
or paste this into a file called `server.hcl`:

```hcl
# Increase log verbosity
log_level = "DEBUG"

# Setup data dir
data_dir = "/tmp/server1"

# Enable the server
server {
    enabled = true

    # Self-elect, should be 3 or 5 for production
    bootstrap_expect = 1
}
```

This is a fairly minimal server configuration file, but it
is enough to start an agent in server-only mode and have it
elected as a leader. The major change that should be made for
production is to run more than one server and to change the
corresponding `bootstrap_expect` value.
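
A minimal sketch of what one production server's config might look like is
below. Note the data directory, the `advertise` address, and its IP are
illustrative assumptions, not values from this guide's demo files:

```hcl
# Production-style sketch: run one of these on each of three machines
log_level = "INFO"

data_dir = "/opt/nomad/data"

# Advertise a routable address instead of loopback (hypothetical IP)
advertise {
    rpc = "10.0.0.10:4647"
}

server {
    enabled = true

    # Wait for three servers to join before electing a leader
    bootstrap_expect = 3
}
```
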
Once the file is created, start the agent in a new tab:

```
$ sudo nomad agent -config server.hcl
==> WARNING: Bootstrap mode enabled! Potentially unsafe operation.
==> Starting Nomad agent...
==> Nomad agent configuration:

                Client: false
             Log Level: DEBUG
                Region: global (DC: dc1)
                Server: true
               Version: 0.6.0

==> Nomad agent started! Log data will stream in below:

    [INFO] serf: EventMemberJoin: nomad.global 127.0.0.1
    [INFO] nomad: starting 4 scheduling worker(s) for [service batch _core]
    [INFO] raft: Node at 127.0.0.1:4647 [Follower] entering Follower state
    [INFO] nomad: adding server nomad.global (Addr: 127.0.0.1:4647) (DC: dc1)
    [WARN] raft: Heartbeat timeout reached, starting election
    [INFO] raft: Node at 127.0.0.1:4647 [Candidate] entering Candidate state
    [DEBUG] raft: Votes needed: 1
    [DEBUG] raft: Vote granted. Tally: 1
    [INFO] raft: Election won. Tally: 1
    [INFO] raft: Node at 127.0.0.1:4647 [Leader] entering Leader state
    [INFO] nomad: cluster leadership acquired
    [INFO] raft: Disabling EnableSingleNode (bootstrap)
    [DEBUG] raft: Node 127.0.0.1:4647 updated peer set (2): [127.0.0.1:4647]
```

We can see above that client mode is disabled and that we are
running only as a server. This means that this server will manage
state and make scheduling decisions but will not run any tasks.
Now we need some agents to run tasks!
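
From a separate terminal, we can also verify the server is healthy with the
`server-members` command, which lists the servers the agent knows about. The
output below is a plausible sketch for this setup rather than a verbatim
capture:

```
$ nomad server-members
Name          Address    Port  Status  Leader  Protocol  Build  Datacenter  Region
nomad.global  127.0.0.1  4648  alive   true    2         0.6.0  dc1         global
```
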
## Starting the Clients

Similar to the server, we must first configure the clients. Either download
the configuration for client1 and client2 from the
[repository here](https://github.com/hashicorp/nomad/tree/master/demo/vagrant), or
paste the following into `client1.hcl`:

```hcl
# Increase log verbosity
log_level = "DEBUG"

# Setup data dir
data_dir = "/tmp/client1"

# Enable the client
client {
    enabled = true

    # For demo assume we are talking to server1. For production,
    # this should be like "nomad.service.consul:4647" and a system
    # like Consul used for service discovery.
    servers = ["127.0.0.1:4647"]
}

# Modify our port to avoid a collision with server1
ports {
    http = 5656
}
```

Copy that file to `client2.hcl` and change the `data_dir` to
"/tmp/client2" and the `http` port to 5657.
Once you've created both `client1.hcl` and `client2.hcl`, open a tab for each and
start the agents:

```
$ sudo nomad agent -config client1.hcl
==> Starting Nomad agent...
==> Nomad agent configuration:

                Client: true
             Log Level: DEBUG
                Region: global (DC: dc1)
                Server: false
               Version: 0.6.0

==> Nomad agent started! Log data will stream in below:

    [DEBUG] client: applied fingerprints [host memory storage arch cpu]
    [DEBUG] client: available drivers [docker exec]
    [DEBUG] client: node registration complete
    ...
```

In the output we can see the agent is running in client mode only.
This agent will be available to run tasks but will not participate
in managing the cluster or making scheduling decisions.

Using the [`node-status` command](/docs/commands/node-status.html)
we should see both nodes in the `ready` state:

```
$ nomad node-status
ID        Datacenter  Name   Class   Drain  Status
fca62612  dc1         nomad  <none>  false  ready
c887deef  dc1         nomad  <none>  false  ready
```

We now have a simple three-node cluster running. The only difference
between a demo and a full production cluster is that we are running a
single server instead of three or five.
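
Every Nomad agent serves the HTTP API, so the CLI can also talk to one of the
clients directly. As a quick sanity check, we can point the standard
`NOMAD_ADDR` environment variable at client1's custom `http` port; the request
is forwarded to the server, so the output should match the listing above:

```
$ NOMAD_ADDR=http://127.0.0.1:5656 nomad node-status
ID        Datacenter  Name   Class   Drain  Status
fca62612  dc1         nomad  <none>  false  ready
c887deef  dc1         nomad  <none>  false  ready
```
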
## Submit a Job

Now that we have a simple cluster, we can use it to schedule a job.
We should still have the `example.nomad` job file from before, but
verify that the `count` is still set to 3.
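
As a reminder, the relevant stanza in `example.nomad` looks roughly like this
(abbreviated to the fields that matter here):

```hcl
job "example" {
    ...
    group "cache" {
        # Run three instances so work lands on both clients
        count = 3
        ...
    }
}
```
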
Then, use the [`run` command](/docs/commands/run.html) to submit the job:

```
$ nomad run example.nomad
==> Monitoring evaluation "8e0a7cf9"
    Evaluation triggered by job "example"
    Evaluation within deployment: "0917b771"
    Allocation "501154ac" created: node "c887deef", group "cache"
    Allocation "7e2b3900" created: node "fca62612", group "cache"
    Allocation "9c66fcaf" created: node "c887deef", group "cache"
    Evaluation status changed: "pending" -> "complete"
==> Evaluation "8e0a7cf9" finished with status "complete"
```

We can see in the output that the scheduler assigned two of the
tasks to one of the client nodes and the remaining task to the
second client.
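
To inspect a single placement, the `alloc-status` command takes one of the
allocation IDs printed above. This is an abbreviated sketch of the kind of
detail it returns, not a verbatim capture:

```
$ nomad alloc-status 501154ac
ID            = 501154ac
Eval ID       = 8e0a7cf9
Name          = example.cache[0]
Node ID       = c887deef
Job ID        = example
Client Status = running
...
```
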
We can again use the [`status` command](/docs/commands/status.html) to verify:

```
$ nomad status example
ID            = example
Name          = example
Submit Date   = 07/26/17 16:34:58 UTC
Type          = service
Priority      = 50
Datacenters   = dc1
Status        = running
Periodic      = false
Parameterized = false

Summary
Task Group  Queued  Starting  Running  Failed  Complete  Lost
cache       0       0         3        0       0         0

Latest Deployment
ID          = fc49bd6c
Status      = running
Description = Deployment is running

Deployed
Task Group  Desired  Placed  Healthy  Unhealthy
cache       3        3       0        0

Allocations
ID        Eval ID   Node ID   Task Group  Desired  Status   Created At
501154ac  8e0a7cf9  c887deef  cache       run      running  08/08/16 21:03:19 CDT
7e2b3900  8e0a7cf9  fca62612  cache       run      running  08/08/16 21:03:19 CDT
9c66fcaf  8e0a7cf9  c887deef  cache       run      running  08/08/16 21:03:19 CDT
```

We can see that all our tasks have been allocated and are running.
Once we are satisfied that our job is happily running, we can tear
it down with `nomad stop`.
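
Stopping it is a single command; the evaluation ID below is illustrative:

```
$ nomad stop example
==> Monitoring evaluation "f6d91e2c"
    Evaluation triggered by job "example"
    Evaluation status changed: "pending" -> "complete"
==> Evaluation "f6d91e2c" finished with status "complete"
```
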
## Next Steps

Nomad is now up and running. The cluster can be managed entirely from the command line,
but Nomad also comes with a web interface that is hosted alongside the HTTP API.
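
As a taste of that API, the nodes endpoint on the server's default HTTP port
(4646) returns JSON describing our two clients; the response below is
abbreviated:

```
$ curl http://127.0.0.1:4646/v1/nodes
[{"ID":"fca62612...","Datacenter":"dc1","Name":"nomad","Status":"ready",...},
 {"ID":"c887deef...","Datacenter":"dc1","Name":"nomad","Status":"ready",...}]
```
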
Next, we'll [visit the UI in the browser](ui.html).