remove CT example

Alex Dadgar 2016-06-30 11:51:57 -07:00
parent 0125c7e7e8
commit 93a3d3e7e5
1 changed file with 2 additions and 217 deletions


@@ -9,9 +9,8 @@ description: |-
# Task Configurations
Most tasks need to be parameterized in some way. The simplest is via
-command-line arguments, but often tasks consume complex or even dynamic
-configurations in which the task should immediately restart and apply the new
-configurations. Here we explore how to configure Nomad jobs to support many
+command-line arguments, but often tasks consume complex configurations via
+config files. Here we explore how to configure Nomad jobs to support many
common configuration use cases.
## Command-line Arguments
@@ -102,217 +101,3 @@ task "example" {
Here we can see a basic example of downloading static configuration files. By
default, an `artifact` is downloaded to the task's `local/` directory, but the
destination is [configurable](/docs/jobspec/index.html#artifact_doc).
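
For reference, a minimal sketch of such a task (the URL, command, and flag
here are placeholders, not part of the original example):

```
task "example" {
  driver = "exec"

  # Download a static config file; by default it lands in the task's
  # local/ directory
  artifact {
    source = "https://example.com/app/config.yml"
  }

  config {
    command = "my-app"
    # Hypothetical flag pointing the application at the downloaded file
    args = ["-config", "local/config.yml"]
  }
}
```
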
## Dynamic Config Files
Other applications, such as load balancers, need to have their configuration
dynamically updated as external state changes. To support these use cases, we
can leverage [consul-template](http://github.com/hashicorp/consul-template).
To run consul-template inside a Nomad job, we download both the
consul-template binary and our template as artifacts. The below example shows
how to use consul-template to update HAProxy as more web servers come up.

First we create a template file for HAProxy (refer to the consul-template
documentation for the template syntax):
```
global
    maxconn 1000

defaults
    mode http
    timeout connect 5000
    timeout client 10000
    timeout server 10000

listen http-in
    bind {{env "NOMAD_ADDR_inbound"}} {{range service "my-web"}}
    server {{.Node}} {{.Address}}:{{.Port}}{{end}}
```
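
Before wiring the template into a job, it can be sanity-checked locally. A
quick sketch, assuming a Consul agent is reachable at its default address,
using consul-template's `-dry` and `-once` flags to render once to stdout:

```
$ consul-template -template "haproxy.ctmpl:haproxy.conf" -dry -once
```
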
The above template will be updated to include the address of each instance of
the "my-web" service registered in Consul. As we scale the "my-web" task group
in the below job, the template is re-rendered and HAProxy load balances across
all instances.
```
job "web" {
  datacenters = ["dc1"]

  # Restrict our job to linux nodes, as those are the binaries we are
  # downloading
  constraint {
    attribute = "${attr.kernel.name}"
    value     = "linux"
  }

  group "web" {
    # Start with count 1 and scale up
    count = 1

    # Create the web server
    task "web" {
      driver = "exec"

      # Put our allocation ID in an index file and start a web server to
      # serve it. This way we know which allocation served each response.
      config {
        command = "/bin/bash"
        args = [
          "-c",
          "echo $NOMAD_ALLOC_ID > index.html; python -m SimpleHTTPServer $NOMAD_PORT_web"
        ]
      }

      resources {
        cpu    = 50
        memory = 20

        network {
          mbits = 10

          port "web" {
          }
        }
      }

      # Add the service block to register our web server in Consul
      service {
        name = "my-web"
        port = "web"

        check {
          name     = "alive"
          type     = "http"
          path     = "/"
          interval = "10s"
          timeout  = "2s"
        }
      }
    }
  }

  # Create the loadbalancer group, which contains two tasks. The first is
  # consul-template, which generates an HAProxy config, and the other is
  # HAProxy itself.
  group "loadbalancer" {
    # Start with count 1 and scale up
    count = 1

    # Run consul-template, which takes the downloaded template and stores
    # the rendered result in the shared alloc dir so that HAProxy can use it
    task "consul-template" {
      driver = "exec"

      config {
        command = "consul-template"
        args    = ["-template", "local/haproxy.ctmpl:alloc/haproxy.conf"]
      }

      resources {
        cpu    = 500
        memory = 100

        network {
          mbits = 10

          port "inbound" {
          }
        }
      }

      # Download consul-template
      artifact {
        source = "https://releases.hashicorp.com/consul-template/0.15.0/consul-template_0.15.0_linux_amd64.zip"
      }

      # Download the template to render. While testing, you can run
      # python -m SimpleHTTPServer to host this file locally.
      artifact {
        source = "http://127.0.0.1:8000/haproxy.ctmpl"
      }
    }

    # Start HAProxy and use the config generated by consul-template
    task "loadbalancer" {
      driver = "docker"

      config {
        # This image uses inotify to detect changes to the config and
        # restarts HAProxy when it does
        image        = "million12/haproxy"
        network_mode = "host"
      }

      resources {
        cpu    = 500
        memory = 100
      }

      env {
        # Store the path to the config
        HAPROXY_CONFIG = "alloc/haproxy.conf"
      }
    }
  }
}
```
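
Assuming the job file above is saved as `web.nomad` (the file name is our
choice, not prescribed), we can submit it with:

```
$ nomad run web.nomad
```
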
Once the job is running, curling the address HAProxy is listening on returns
the same allocation ID in every response, and the HAProxy configuration
includes only one server:
```
$ curl http://127.0.0.1:27044
da68aa6f-29db-b3d5-d8c5-fd6a2338bb13
$ curl http://127.0.0.1:27044
da68aa6f-29db-b3d5-d8c5-fd6a2338bb13
$ nomad fs 63 alloc/haproxy.conf
global
    maxconn 1000

defaults
    mode http
    timeout connect 5000
    timeout client 10000
    timeout server 10000

listen http-in
    bind 127.0.0.1:27044
    server nomad-server01 127.0.0.1:28843
```
However, once we scale the "my-web" task group from `count = 1` to
`count = 3`, the template is re-rendered and HAProxy load balances across all
three instances. To apply the change, we update the count in the job file and
resubmit, as shown below:
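
```
# Change count = 1 to count = 3 in the "web" group, then resubmit
# (again assuming the job file is saved as web.nomad)
$ nomad run web.nomad
```

Once the template re-renders, the configuration lists all three servers and
requests are spread across the allocations:
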
```
$ nomad fs 63 alloc/haproxy.conf
global
    maxconn 1000

defaults
    mode http
    timeout connect 5000
    timeout client 10000
    timeout server 10000

listen http-in
    bind 127.0.0.1:27044
    server nomad-server01 127.0.0.1:28843
    server nomad-server01 127.0.0.1:58402
    server nomad-server01 127.0.0.1:36143
$ curl http://127.0.0.1:27044
da68aa6f-29db-b3d5-d8c5-fd6a2338bb13
$ curl http://127.0.0.1:27044
0e83bec8-d5f6-8ae4-a2cb-99b3f0468204
$ curl http://127.0.0.1:27044
4c8a3d17-dbc8-d03a-5f77-a541eb63859d
```
While this example uses a Docker container that detects configuration changes
for simplicity, the same can be accomplished by using a PID file and having
consul-template execute a script that restarts HAProxy using the PID file.
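
A minimal sketch of that approach (the script name and PID file path are
illustrative, not from the original example): consul-template accepts an
optional command after the destination in its `-template` flag, for example
`-template "local/haproxy.ctmpl:alloc/haproxy.conf:local/reload.sh"`, and runs
it each time the rendered file changes. The script could then reload HAProxy
gracefully:

```
#!/bin/bash
# reload.sh (hypothetical): gracefully reload HAProxy after consul-template
# re-renders the config. -p writes the new process's PID file and -sf tells
# the new process to let the old PIDs finish their connections and exit.
haproxy -f alloc/haproxy.conf -p /var/run/haproxy.pid -sf $(cat /var/run/haproxy.pid)
```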