Merge pull request #6768 from hashicorp/docs/cv/lb_to_learn
Migrating the Load-balancing guides to Learn
commit 1f26f27fd9
@@ -50,7 +50,7 @@ Documentation & Guides
* [Installing Nomad for Production](https://www.nomadproject.io/guides/operations/deployment-guide.html)
* [Advanced Job Scheduling on Nomad with Affinities](https://www.nomadproject.io/guides/operating-a-job/advanced-scheduling/affinity.html)
* [Increasing Nomad Fault Tolerance with Spread](https://www.nomadproject.io/guides/operating-a-job/advanced-scheduling/spread.html)
* [Load Balancing on Nomad with Fabio & Consul](https://www.nomadproject.io/guides/load-balancing/fabio.html)
* [Load Balancing on Nomad with Fabio & Consul](https://learn.hashicorp.com/guides/load-balancing/fabio)
* [Deploying Stateful Workloads via Portworx](https://www.nomadproject.io/guides/stateful-workloads/portworx.html)
* [Running Apache Spark on Nomad](https://www.nomadproject.io/guides/spark/spark.html)
* [Integrating Vault with Nomad for Secrets Management](https://www.nomadproject.io/guides/operations/vault-integration/index.html)
@@ -665,8 +665,7 @@ below </h2>
[creation-statements]: https://www.vaultproject.io/api/secret/databases/index.html#creation_statements
[destination]: /docs/job-specification/template.html#destination
[fabio]: https://github.com/fabiolb/fabio
[fabio-job]: /guides/load-balancing/fabio.html#step-1-create-a-job-for-fabio
[fabio-lb]: /guides/load-balancing/fabio.html
[fabio-lb]: https://learn.hashicorp.com/guides/load-balancing/fabio
[inline]: /docs/job-specification/template.html#inline-template
[login]: https://www.vaultproject.io/docs/commands/login.html
[nomad-alloc-fs]: /docs/commands/alloc/fs.html
@@ -1,233 +0,0 @@
---
layout: "guides"
page_title: "Load Balancing with Nomad"
sidebar_current: "guides-load-balancing-fabio"
description: |-
  There are multiple approaches to load balancing within a Nomad cluster.
  One approach involves using [fabio][fabio]. Fabio integrates natively
  with Consul and provides rich features with an optional Web UI.
---

# Load Balancing with Fabio

[Fabio][fabio] integrates natively with Consul and provides an optional Web UI
to visualize routing.

The main use case for fabio is to distribute incoming HTTP(S) and TCP requests
from the internet to frontend services that can handle these requests. This
guide will show you one such example using the [Apache][apache] web server.

## Reference Material

- [Fabio](https://github.com/fabiolb/fabio) on GitHub
- [Load Balancing Strategies for Consul](https://www.hashicorp.com/blog/load-balancing-strategies-for-consul)
- [Elastic Load Balancing][elb]

## Estimated Time to Complete

20 minutes

## Challenge

Think of a scenario where a Nomad operator needs to configure an environment to
make the Apache web server highly available behind an endpoint and distribute
incoming traffic evenly.

## Solution

Deploy fabio as a [system][system] job so that it can route incoming traffic
evenly to the Apache web server group regardless of which client nodes Apache
is running on. Place all client nodes behind an [AWS load balancer][elb] to
provide the end user with a single endpoint for access.

## Prerequisites

To perform the tasks described in this guide, you need to have a Nomad
environment with Consul installed. You can use this
[repo](https://github.com/hashicorp/nomad/tree/master/terraform#provision-a-nomad-cluster-in-the-cloud)
to easily provision a sandbox environment. This guide will assume a cluster with
one server node and three client nodes.

-> **Please Note:** This guide is for demo purposes and uses only a single
server node. In a production cluster, 3 or 5 server nodes are recommended.

## Steps

### Step 1: Create a Job for Fabio

Create a job for Fabio and name it `fabio.nomad`:

```hcl
job "fabio" {
  datacenters = ["dc1"]
  type = "system"

  group "fabio" {
    task "fabio" {
      driver = "docker"
      config {
        image = "fabiolb/fabio"
        network_mode = "host"
      }

      resources {
        cpu    = 200
        memory = 128
        network {
          mbits = 20
          port "lb" {
            static = 9999
          }
          port "ui" {
            static = 9998
          }
        }
      }
    }
  }
}
```

Setting `type` to [system][system] ensures that fabio runs on all clients.
Please note that the `network_mode` option is set to `host` so that fabio can
communicate with Consul, which is also running on the client nodes.
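
If you would like to confirm that fabio has registered itself with Consul once
the job is running, a quick check is to query Consul's catalog API. This is a
minimal sketch and assumes the Consul HTTP API is reachable on a client node at
the default port `8500`:

```shell
# List the fabio service registration in the Consul catalog
$ curl http://127.0.0.1:8500/v1/catalog/service/fabio
```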

### Step 2: Run the Fabio Job

We can now register our fabio job:

```shell
$ nomad job run fabio.nomad
==> Monitoring evaluation "fba4f04a"
    Evaluation triggered by job "fabio"
    Allocation "6e6367d4" created: node "f3739267", group "fabio"
    Allocation "d17573b4" created: node "28d7f859", group "fabio"
    Allocation "f3ad9b16" created: node "510898b6", group "fabio"
    Evaluation status changed: "pending" -> "complete"
==> Evaluation "fba4f04a" finished with status "complete"
```

At this point, you should be able to visit any one of your client nodes at port
`9998` and see the web interface for fabio. The routing table will be empty
since we have not yet deployed anything that fabio can route to.
Accordingly, if you visit any of the client nodes at port `9999` at this
point, you will get a `404` HTTP response. That will change soon.
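
You can check this from the command line as well. The following is only an
illustration; substitute the address of one of your client nodes:

```shell
# fabio's UI answers on 9998; the proxy on 9999 returns 404 until a route exists
$ curl -I http://<client-node-ip>:9999/
HTTP/1.1 404 Not Found
```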

### Step 3: Create a Job for Apache Web Server

Create a job for Apache and name it `webserver.nomad`:

```hcl
job "webserver" {
  datacenters = ["dc1"]
  type = "service"

  group "webserver" {
    count = 3
    restart {
      attempts = 2
      interval = "30m"
      delay = "15s"
      mode = "fail"
    }
    ephemeral_disk {
      size = 300
    }

    task "apache" {
      driver = "docker"
      config {
        image = "httpd:latest"
        port_map {
          http = 80
        }
      }

      resources {
        network {
          mbits = 10
          port "http" {}
        }
      }

      service {
        name = "apache-webserver"
        tags = ["urlprefix-/"]
        port = "http"
        check {
          name     = "alive"
          type     = "http"
          path     = "/"
          interval = "10s"
          timeout  = "2s"
        }
      }
    }
  }
}
```

Notice the tag in the service stanza begins with `urlprefix-`. This is how a
path is registered with fabio. In this case, we are registering the path `/`
with fabio (which will route us to the default page for the Apache web server).
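
If you wanted fabio to serve this application under a specific path instead of
the root, you could register a different prefix. The snippet below is a
hypothetical variation of the service stanza above (`/myapp` is only an example
path); with it, requests to port `9999` under `/myapp` would be routed to the
Apache group:

```hcl
service {
  name = "apache-webserver"
  # Register the service under /myapp instead of the root path
  tags = ["urlprefix-/myapp"]
  port = "http"
}
```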

### Step 4: Run the Job for Apache Web Server

We can now register our job for Apache:

```shell
$ nomad job run webserver.nomad
==> Monitoring evaluation "c7bcaf40"
    Evaluation triggered by job "webserver"
    Evaluation within deployment: "e3603b50"
    Allocation "20951ad4" created: node "510898b6", group "webserver"
    Allocation "43807686" created: node "28d7f859", group "webserver"
    Allocation "7b60eb24" created: node "f3739267", group "webserver"
    Evaluation status changed: "pending" -> "complete"
==> Evaluation "c7bcaf40" finished with status "complete"
```

You have now deployed and registered your web servers with fabio! At this point,
you should be able to visit any of the Nomad clients at port `9999` and
see the default web page for the Apache web server. If you visit fabio's web
interface by going to any of the client nodes at port `9998`, you will see that
the routing table has been populated as shown below (**Note:** your destination IP
addresses will be different).

[![Routing Table][routing-table]][routing-table]

Feel free to reduce the `count` in `webserver.nomad` for testing purposes. You
will see that you still get routed to the Apache home page by accessing
any client node on port `9999`. Accordingly, the routing table
in the web interface on port `9998` will reflect the changes.

### Step 5: Place Nomad Client Nodes Behind AWS Load Balancer

At this point, you are ready to place your Nomad client nodes behind an AWS load
balancer. Your Nomad client nodes may change over time, and it is important
to provide your end users with a single endpoint to access your services. This
guide will use the [Classic Load Balancer][classic-lb].

The AWS [documentation][classic-lb-doc] provides instructions on how to create a
load balancer. The basic steps involve creating a load balancer, registering
instances behind the load balancer (in our case these will be the Nomad client
nodes), creating listeners, and configuring health checks.
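
If you prefer the AWS CLI to the console, the outline below sketches those
steps. It is only an illustration: the load balancer name, subnet, security
group, and instance IDs are placeholders you would replace with values from
your environment.

```shell
# Create a Classic Load Balancer that forwards port 80 to fabio's proxy port 9999
$ aws elb create-load-balancer \
    --load-balancer-name nomad-fabio-lb \
    --listeners "Protocol=HTTP,LoadBalancerPort=80,InstanceProtocol=HTTP,InstancePort=9999" \
    --subnets subnet-0123456789abcdef0 \
    --security-groups sg-0123456789abcdef0

# Register the Nomad client instances behind the load balancer
$ aws elb register-instances-with-load-balancer \
    --load-balancer-name nomad-fabio-lb \
    --instances i-0aaaaaaaaaaaaaaaa i-0bbbbbbbbbbbbbbbb i-0cccccccccccccccc

# Health check the clients on fabio's proxy port
$ aws elb configure-health-check \
    --load-balancer-name nomad-fabio-lb \
    --health-check "Target=TCP:9999,HealthyThreshold=2,UnhealthyThreshold=2,Timeout=3,Interval=10"
```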

Once you are done with this, you should be able to hit the DNS name of your load
balancer at port 80 (or whichever port you configured in your listener) and see
the home page of the Apache web server. If you configured your listener to also
forward traffic to the web interface at port `9998`, you should be able to
access that as well.

[![Home Page][lb-homepage]][lb-homepage]

[![Routing Table][lb-routing-table]][lb-routing-table]

[apache]: https://httpd.apache.org/
[classic-lb]: https://docs.aws.amazon.com/elasticloadbalancing/latest/classic/introduction.html
[classic-lb-doc]: https://docs.aws.amazon.com/elasticloadbalancing/latest/classic/elb-getting-started.html
[elb]: https://aws.amazon.com/elasticloadbalancing/
[fabio]: https://fabiolb.net/
[lb-homepage]: /assets/images/lb-homepage.png
[lb-routing-table]: /assets/images/lb-routing-table.png
[routing-table]: /assets/images/routing-table.png
[system]: /docs/schedulers.html#system
@@ -1,279 +0,0 @@
---
layout: "guides"
page_title: "Load Balancing with HAProxy"
sidebar_current: "guides-load-balancing-haproxy"
description: |-
  There are multiple approaches to load balancing within a Nomad cluster.
  One approach involves using [HAProxy][haproxy], which natively integrates with
  service discovery data from Consul.
---

# Load Balancing with HAProxy

The main use case for HAProxy in this scenario is to distribute incoming HTTP(S)
and TCP requests from the internet to frontend services that can handle these
requests. This guide will show you one such example using a demo web
application.

HAProxy version 1.8+ (LTS) includes the [server-template] directive, which lets
users specify placeholder backend servers to populate HAProxy's load balancing
pools. These placeholders can be filled in from Consul by requesting SRV records
from Consul DNS.

## Reference Material

- [HAProxy][haproxy]
- [Load Balancing Strategies for Consul][lb-strategies]

## Estimated Time to Complete

20 minutes

## Prerequisites

To perform the tasks described in this guide, you need to have a Nomad
environment with Consul installed. You can use this [repo][terraform-repo] to
easily provision a sandbox environment. This guide will assume a cluster with
one server node and three client nodes.

-> **Note:** This guide is for demo purposes and only assumes a single server
node. Please consult the [reference architecture][reference-arch] for production
configuration.

## Steps

### Step 1: Create a Job for Demo Web App

Create a job for a demo web application and name the file `webapp.nomad`:

```hcl
job "demo-webapp" {
  datacenters = ["dc1"]

  group "demo" {
    count = 3

    task "server" {
      env {
        PORT    = "${NOMAD_PORT_http}"
        NODE_IP = "${NOMAD_IP_http}"
      }

      driver = "docker"

      config {
        image = "hashicorp/demo-webapp-lb-guide"
      }

      resources {
        network {
          mbits = 10
          port "http" {}
        }
      }

      service {
        name = "demo-webapp"
        port = "http"

        check {
          type     = "http"
          path     = "/"
          interval = "2s"
          timeout  = "2s"
        }
      }
    }
  }
}
```

Note that this job deploys 3 instances of our demo web application which we will
load balance with HAProxy in the next few steps.

### Step 2: Deploy the Demo Web App

We can now deploy our demo web application:

```shell
$ nomad run webapp.nomad
==> Monitoring evaluation "8f3af425"
    Evaluation triggered by job "demo-webapp"
    Evaluation within deployment: "dc4c1925"
    Allocation "bf9f850f" created: node "d16a11fb", group "demo"
    Allocation "25e0496a" created: node "b78e27be", group "demo"
    Allocation "a97e7d39" created: node "01d3eb32", group "demo"
    Evaluation status changed: "pending" -> "complete"
==> Evaluation "8f3af425" finished with status "complete"
```

### Step 3: Create a Job for HAProxy

Create a job for HAProxy and name it `haproxy.nomad`. This will be our load
balancer that will balance requests to the deployed instances of our web
application.

```hcl
job "haproxy" {
  region      = "global"
  datacenters = ["dc1"]
  type        = "service"

  group "haproxy" {
    count = 1

    task "haproxy" {
      driver = "docker"

      config {
        image        = "haproxy:2.0"
        network_mode = "host"

        volumes = [
          "local/haproxy.cfg:/usr/local/etc/haproxy/haproxy.cfg",
        ]
      }

      template {
        data = <<EOF
defaults
   mode http

frontend stats
   bind *:1936
   stats uri /
   stats show-legends
   no log

frontend http_front
   bind *:8080
   default_backend http_back

backend http_back
   balance roundrobin
   server-template mywebapp 10 _demo-webapp._tcp.service.consul resolvers consul resolve-opts allow-dup-ip resolve-prefer ipv4 check

resolvers consul
   nameserver consul 127.0.0.1:53
   accepted_payload_size 8192
   hold valid 5s
EOF

        destination = "local/haproxy.cfg"
      }

      service {
        name = "haproxy"
        check {
          name     = "alive"
          type     = "tcp"
          port     = "http"
          interval = "10s"
          timeout  = "2s"
        }
      }

      resources {
        cpu    = 200
        memory = 128

        network {
          mbits = 10

          port "http" {
            static = 8080
          }

          port "haproxy_ui" {
            static = 1936
          }
        }
      }
    }
  }
}
```

Take note of the following key points from the HAProxy configuration we have
defined:

- The `balance` setting under the `backend http_back` stanza in the HAProxy
  config is `roundrobin`, which load balances across the available service
  instances in order.
- The `server-template` option allows Consul service registrations to configure
  HAProxy's backend server pool. Because of this, you do not need to explicitly
  add your backend servers' IP addresses. We have specified a server-template
  named `mywebapp`. The template name is not tied to the service name that is
  registered in Consul.
- `_demo-webapp._tcp.service.consul` allows HAProxy to use the DNS SRV record for
  the backend service `demo-webapp.service.consul` to discover the available
  instances of the service (you can inspect these records yourself; see the
  example after this list).
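
If you want to see the records HAProxy consumes, you can query Consul DNS
directly from a client node. A quick check, assuming Consul's DNS interface is
listening on its default port `8600` (the `nameserver` entry above points at
port `53`, which assumes DNS forwarding has been set up in your environment):

```shell
# Ask Consul DNS for the SRV records behind the demo-webapp service
$ dig @127.0.0.1 -p 8600 _demo-webapp._tcp.service.consul SRV
```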

Additionally, keep in mind the following points from the Nomad job spec:

- We have statically set the port of our load balancer to `8080`. This will
  allow us to query `haproxy.service.consul:8080` from anywhere inside our
  cluster so we can reach our web application.
- Please note that although we have defined the template [inline][inline], we
  could alternatively use the template stanza [in conjunction with the artifact
  stanza][remote-template] to download an input template from a remote source
  such as an S3 bucket.

### Step 4: Run the HAProxy Job

We can now register our HAProxy job:

```shell
$ nomad run haproxy.nomad
==> Monitoring evaluation "937b1a2d"
    Evaluation triggered by job "haproxy"
    Evaluation within deployment: "e8214434"
    Allocation "53145b8b" created: node "d16a11fb", group "haproxy"
    Evaluation status changed: "pending" -> "complete"
==> Evaluation "937b1a2d" finished with status "complete"
```

### Step 5: Check the HAProxy Statistics Page

You can visit the statistics and monitoring page for HAProxy at
`http://<Your-HAProxy-IP-address>:1936`. You can use this page to verify your
settings and for basic monitoring.

[![Home Page][haproxy_ui]][haproxy_ui]

Notice there are 10 pre-provisioned load balancer backend slots for your
service, but only three of them are in use, corresponding to the three
allocations in the current job.
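
The number of pre-provisioned slots comes from the `10` in the `server-template`
line. If you expect to scale the web app beyond ten instances, you could raise
that number. A hypothetical variation of the line from the configuration above:

```
   server-template mywebapp 20 _demo-webapp._tcp.service.consul resolvers consul resolve-opts allow-dup-ip resolve-prefer ipv4 check
```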

### Step 6: Make a Request to the Load Balancer

If you query the HAProxy load balancer, you should be able to see a response
similar to the one shown below (this command should be run from a
node inside your cluster):

```shell
$ curl haproxy.service.consul:8080
Welcome! You are on node 172.31.54.242:20124
```

Note that your request has been forwarded to one of the several deployed
instances of the demo web application (which is spread across 3 Nomad clients).
The output shows the IP address of the host it is deployed on. If you repeat
your requests, you will see that the IP address changes.

* Note: if you would like to access HAProxy from outside your cluster, you
  can set up a load balancer in your environment that maps to an active port
  `8080` on your clients (or whichever port you have configured for HAProxy to
  listen on). You can then send your requests directly to your external load
  balancer.

[consul-template]: https://github.com/hashicorp/consul-template#consul-template
[consul-temp-syntax]: https://github.com/hashicorp/consul-template#service
[haproxy]: http://www.haproxy.org/
[haproxy_ui]: /assets/images/haproxy_ui.png
[inline]: /docs/job-specification/template.html#inline-template
[lb-strategies]: https://www.hashicorp.com/blog/configuring-third-party-loadbalancers-with-consul-nginx-haproxy-f5/
[reference-arch]: /guides/install/production/reference-architecture.html#high-availability
[remote-template]: /docs/job-specification/template.html#remote-template
[server-template]: https://www.haproxy.com/blog/whats-new-haproxy-1-8/#server-template-configuration-directive
[template-stanza]: /docs/job-specification/template.html
[terraform-repo]: https://github.com/hashicorp/nomad/tree/master/terraform#provision-a-nomad-cluster-in-the-cloud
@@ -9,16 +9,13 @@ description: |-

# Load Balancing

There are multiple approaches to set up load balancing across a Nomad cluster.
These guides have been migrated to [HashiCorp's Learn website].

Most of these methods assume Consul is installed alongside Nomad (see [Load
Balancing Strategies for
Consul](https://www.hashicorp.com/blog/load-balancing-strategies-for-consul)).
You can follow these links to specific guides at Learn:

- [Fabio](/guides/load-balancing/fabio.html)
- [NGINX](/guides/load-balancing/nginx.html)
- [HAProxy](/guides/load-balancing/haproxy.html)
- [Traefik](/guides/load-balancing/traefik.html)
- [Fabio](https://learn.hashicorp.com/nomad/load-balancing/fabio)
- [NGINX](https://learn.hashicorp.com/nomad/load-balancing/nginx)
- [HAProxy](https://learn.hashicorp.com/nomad/load-balancing/haproxy)
- [Traefik](https://learn.hashicorp.com/nomad/load-balancing/traefik)

Please refer to the specific documentation above or in the sidebar for more
detailed information about each strategy.

[HashiCorp's Learn website]: https://learn.hashicorp.com/nomad?track=load-balancing#load-balancing
@@ -1,293 +0,0 @@
---
layout: "guides"
page_title: "Load Balancing with NGINX"
sidebar_current: "guides-load-balancing-nginx"
description: |-
  There are multiple approaches to load balancing within a Nomad cluster.
  One approach involves using [NGINX][nginx]. NGINX works well with Nomad's
  template stanza to allow for dynamic updates to its load balancing
  configuration.
---

# Load Balancing with NGINX

You can use Nomad's [template stanza][template-stanza] to configure
[NGINX][nginx] so that it can dynamically update its load balancer configuration
to scale along with your services.

The main use case for NGINX in this scenario is to distribute incoming HTTP(S)
and TCP requests from the internet to frontend services that can handle these
requests. This guide will show you one such example using a demo web
application.

## Reference Material

- [NGINX][nginx]
- [Load Balancing Strategies for Consul][lb-strategies]

## Estimated Time to Complete

20 minutes

## Prerequisites

To perform the tasks described in this guide, you need to have a Nomad
environment with Consul installed. You can use this [repo][terraform-repo] to
easily provision a sandbox environment. This guide will assume a cluster with
one server node and three client nodes.

-> **Note:** This guide is for demo purposes and only assumes a single server
node. Please consult the [reference architecture][reference-arch] for production
configuration.

## Steps

### Step 1: Create a Job for Demo Web App

Create a job for a demo web application and name the file `webapp.nomad`:

```hcl
job "demo-webapp" {
  datacenters = ["dc1"]

  group "demo" {
    count = 3

    task "server" {
      env {
        PORT    = "${NOMAD_PORT_http}"
        NODE_IP = "${NOMAD_IP_http}"
      }

      driver = "docker"

      config {
        image = "hashicorp/demo-webapp-lb-guide"
      }

      resources {
        network {
          mbits = 10
          port "http" {}
        }
      }

      service {
        name = "demo-webapp"
        port = "http"

        check {
          type     = "http"
          path     = "/"
          interval = "2s"
          timeout  = "2s"
        }
      }
    }
  }
}
```

Note that this job deploys 3 instances of our demo web application which we will
load balance with NGINX in the next few steps.

### Step 2: Deploy the Demo Web App

We can now deploy our demo web application:

```shell
$ nomad run webapp.nomad
==> Monitoring evaluation "ea1e8528"
    Evaluation triggered by job "demo-webapp"
    Allocation "9b4bac9f" created: node "e4637e03", group "demo"
    Allocation "c386de2d" created: node "983a64df", group "demo"
    Allocation "082653f0" created: node "f5fdf017", group "demo"
    Evaluation status changed: "pending" -> "complete"
==> Evaluation "ea1e8528" finished with status "complete"
```

### Step 3: Create a Job for NGINX

Create a job for NGINX and name it `nginx.nomad`. This will be our load balancer
that will balance requests to the deployed instances of our web application.

```hcl
job "nginx" {
  datacenters = ["dc1"]

  group "nginx" {
    count = 1

    task "nginx" {
      driver = "docker"

      config {
        image = "nginx"

        port_map {
          http = 80
        }

        volumes = [
          "local:/etc/nginx/conf.d",
        ]
      }

      template {
        data = <<EOF
upstream backend {
{{ range service "demo-webapp" }}
  server {{ .Address }}:{{ .Port }};
{{ else }}server 127.0.0.1:65535; # force a 502
{{ end }}
}

server {
   listen 80;

   location / {
      proxy_pass http://backend;
   }
}
EOF

        destination   = "local/load-balancer.conf"
        change_mode   = "signal"
        change_signal = "SIGHUP"
      }

      resources {
        network {
          mbits = 10

          port "http" {
            static = 8080
          }
        }
      }

      service {
        name = "nginx"
        port = "http"
      }
    }
  }
}
```

- We are using Nomad's [template][template-stanza] stanza to populate the load
  balancer configuration for NGINX. The underlying tool being used is [Consul
  Template][consul-template]. You can use Consul Template's documentation to
  learn more about the [syntax][consul-temp-syntax] needed to interact with
  Consul. In this case, we query the address and port of each instance of our
  demo service called `demo-webapp`.
- We have statically set the port of our load balancer to `8080`. This will
  allow us to query `nginx.service.consul:8080` from anywhere inside our cluster
  so we can reach our web application.
- Please note that although we have defined the template [inline][inline], we
  could alternatively use the template stanza [in conjunction with the artifact
  stanza][remote-template] to download an input template from a remote source
  such as an S3 bucket.
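
If you want to experiment with this template outside of Nomad, the same snippet
can be rendered with the standalone `consul-template` binary against a local
Consul agent. A minimal sketch, assuming you have saved the template body above
as `load-balancer.conf.tpl`:

```shell
# Render the template once and exit instead of running as a daemon
$ consul-template -once -template "load-balancer.conf.tpl:load-balancer.conf"
```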

### Step 4: Run the NGINX Job

We can now register our NGINX job:

```shell
$ nomad run nginx.nomad
==> Monitoring evaluation "45da5a89"
    Evaluation triggered by job "nginx"
    Allocation "c7f8af51" created: node "983a64df", group "nginx"
    Evaluation status changed: "pending" -> "complete"
==> Evaluation "45da5a89" finished with status "complete"
```

### Step 5: Verify Load Balancer Configuration

Consul Template supports [blocking queries][ct-blocking-queries]. This means
your NGINX deployment (which is using the [template][template-stanza] stanza)
will be notified immediately when a change in the health of one of the service
endpoints occurs and will re-render a new load balancer configuration file that
only includes healthy service instances.

You can use the [alloc fs][alloc-fs] command on your NGINX allocation to read
the rendered load balancer configuration file.

First, obtain the allocation ID of your NGINX deployment (output below is
abbreviated):

```shell
$ nomad status nginx
ID            = nginx
Name          = nginx
...
Summary
Task Group  Queued  Starting  Running  Failed  Complete  Lost
nginx       0       0         1        0       0         0

Allocations
ID        Node ID   Task Group  Version  Desired  Status   Created     Modified
76692834  f5fdf017  nginx       0        run      running  17m40s ago  17m25s ago
```

* Keep in mind your allocation ID will be different.

Next, use the `alloc fs` command to read the load balancer config:

```shell
$ nomad alloc fs 766 nginx/local/load-balancer.conf
upstream backend {

  server 172.31.48.118:21354;

  server 172.31.52.52:25958;

  server 172.31.52.7:29728;

}

server {
   listen 80;

   location / {
      proxy_pass http://backend;
   }
}
```

At this point, you can change the count of your `demo-webapp` job and repeat the
previous command to verify the load balancer config is dynamically changing.
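
For example, after editing `count` in `webapp.nomad`, resubmit the job and read
the file again (a sketch; your allocation ID will differ):

```shell
# Resubmit the job with the new count, then re-read the rendered config
$ nomad run webapp.nomad
$ nomad alloc fs 766 nginx/local/load-balancer.conf
```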

### Step 6: Make a Request to the Load Balancer

If you query the NGINX load balancer, you should be able to see a response
similar to the one shown below (this command should be run from a node inside
your cluster):

```shell
$ curl nginx.service.consul:8080
Welcome! You are on node 172.31.48.118:21354
```

Note that your request has been forwarded to one of the several deployed
instances of the demo web application (which is spread across 3 Nomad clients).
The output shows the IP address of the host it is deployed on. If you repeat
your requests, you will see that the IP address changes.

* Note: if you would like to access NGINX from outside your cluster, you can set
  up a load balancer in your environment that maps to an active port `8080` on
  your clients (or whichever port you have configured for NGINX to listen on).
  You can then send your requests directly to your external load balancer.

[alloc-fs]: /docs/commands/alloc/fs.html
[consul-template]: https://github.com/hashicorp/consul-template#consul-template
[consul-temp-syntax]: https://github.com/hashicorp/consul-template#service
[ct-blocking-queries]: https://github.com/hashicorp/consul-template#key
[inline]: /docs/job-specification/template.html#inline-template
[lb-strategies]: https://www.hashicorp.com/blog/configuring-third-party-loadbalancers-with-consul-nginx-haproxy-f5/
[nginx]: https://www.nginx.com/
[reference-arch]: /guides/install/production/reference-architecture.html#high-availability
[remote-template]: /docs/job-specification/template.html#remote-template
[template-stanza]: /docs/job-specification/template.html
[terraform-repo]: https://github.com/hashicorp/nomad/tree/master/terraform#provision-a-nomad-cluster-in-the-cloud
@@ -1,265 +0,0 @@
---
layout: "guides"
page_title: "Load Balancing with Traefik"
sidebar_current: "guides-load-balancing-traefik"
description: |-
  There are multiple approaches to load balancing within a Nomad cluster.
  One approach involves using [Traefik][traefik] which natively integrates
  with service discovery data from Consul.
---

# Load Balancing with Traefik

The main use case for Traefik in this scenario is to distribute incoming HTTP(S)
and TCP requests from the internet to frontend services that can handle these
requests. This guide will show you one such example using a demo web
application.

Traefik can natively integrate with Consul using the [Consul Catalog
Provider][traefik-consul-provider] and can use [tags][traefik-tags] to route
traffic.

## Reference Material

- [Traefik][traefik]
- [Traefik Consul Catalog Provider Documentation][traefik-consul-provider]

## Estimated Time to Complete

20 minutes

## Prerequisites

To perform the tasks described in this guide, you need to have a Nomad
environment with Consul installed. You can use this [repo][terraform-repo] to
easily provision a sandbox environment. This guide will assume a cluster with
one server node and three client nodes.

-> **Note:** This guide is for demo purposes and only assumes a single server
node. Please consult the [reference architecture][reference-arch] for production
configuration.

## Steps

### Step 1: Create a Job for Demo Web App

Create a job for a demo web application and name the file `webapp.nomad`:

```hcl
job "demo-webapp" {
  datacenters = ["dc1"]

  group "demo" {
    count = 3

    task "server" {
      env {
        PORT    = "${NOMAD_PORT_http}"
        NODE_IP = "${NOMAD_IP_http}"
      }

      driver = "docker"

      config {
        image = "hashicorp/demo-webapp-lb-guide"
      }

      resources {
        network {
          mbits = 10
          port "http" {}
        }
      }

      service {
        name = "demo-webapp"
        port = "http"
        tags = [
          "traefik.tags=service",
          "traefik.frontend.rule=PathPrefixStrip:/myapp",
        ]

        check {
          type     = "http"
          path     = "/"
          interval = "2s"
          timeout  = "2s"
        }
      }
    }
  }
}
```

- Note that this job deploys 3 instances of our demo web application which we
  will load balance with Traefik in the next few steps.
- We are using tags to configure routing to our web app. Even though our
  application listens on `/`, it is possible to define `/myapp` as the route
  because of the [`PathPrefixStrip`][matchers] option (see the variation after
  this list).
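
As an example of the difference between matchers, a hypothetical variation of
the tags above uses [`PathPrefix`][matchers] instead, which keeps the `/myapp`
prefix on the request that reaches the backend (useful when the application
itself expects to be served under that path):

```hcl
tags = [
  "traefik.tags=service",
  "traefik.frontend.rule=PathPrefix:/myapp",
]
```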

### Step 2: Deploy the Demo Web App

We can now deploy our demo web application:

```shell
$ nomad run webapp.nomad
==> Monitoring evaluation "a2061ab7"
    Evaluation triggered by job "demo-webapp"
    Evaluation within deployment: "8ca6d358"
    Allocation "1d14babe" created: node "2d6eea6e", group "demo"
    Allocation "3abb950d" created: node "a62fa99d", group "demo"
    Allocation "c65e14bf" created: node "a209a662", group "demo"
    Evaluation status changed: "pending" -> "complete"
==> Evaluation "a2061ab7" finished with status "complete"
```

### Step 3: Create a Job for Traefik

Create a job for Traefik and name it `traefik.nomad`. This will be our load
balancer that will balance requests to the deployed instances of our web
application.

```hcl
job "traefik" {
  region      = "global"
  datacenters = ["dc1"]
  type        = "service"

  group "traefik" {
    count = 1

    task "traefik" {
      driver = "docker"

      config {
        image        = "traefik:1.7"
        network_mode = "host"

        volumes = [
          "local/traefik.toml:/etc/traefik/traefik.toml",
        ]
      }

      template {
        data = <<EOF
[entryPoints]
    [entryPoints.http]
    address = ":8080"
    [entryPoints.traefik]
    address = ":8081"

[api]

    dashboard = true

# Enable Consul Catalog configuration backend.
[consulCatalog]

endpoint = "127.0.0.1:8500"

domain = "consul.localhost"

prefix = "traefik"

constraints = ["tag==service"]
EOF

        destination = "local/traefik.toml"
      }

      resources {
        cpu    = 100
        memory = 128

        network {
          mbits = 10

          port "http" {
            static = 8080
          }

          port "api" {
            static = 8081
          }
        }
      }

      service {
        name = "traefik"
        check {
          name     = "alive"
          type     = "tcp"
          port     = "http"
          interval = "10s"
          timeout  = "2s"
        }
      }
    }
  }
}
```

- We have statically set the port of our load balancer to `8080`. This will
  allow us to query `traefik.service.consul:8080` at the appropriate paths (as
  configured in the tags section of `webapp.nomad`) from anywhere inside our
  cluster so we can reach our web application.
- The Traefik dashboard is configured at port `8081`.
- Please note that although we have defined the template [inline][inline], we
  could alternatively use the template stanza [in conjunction with the artifact
  stanza][remote-template] to download an input template from a remote source
  such as an S3 bucket.
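
Once Traefik is running (Step 4 below), you can also hit the API entrypoint from
the command line instead of the browser. A quick check, assuming you are on a
node inside the cluster:

```shell
# Traefik 1.x exposes a JSON health summary on the API/dashboard port
$ curl http://traefik.service.consul:8081/health
```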

### Step 4: Run the Traefik Job

We can now register our Traefik job:

```shell
$ nomad run traefik.nomad
==> Monitoring evaluation "e22ce276"
    Evaluation triggered by job "traefik"
    Evaluation within deployment: "c6466497"
    Allocation "695c5632" created: node "a62fa99d", group "traefik"
    Evaluation status changed: "pending" -> "complete"
==> Evaluation "e22ce276" finished with status "complete"
```

### Step 5: Check the Traefik Dashboard

You can visit the dashboard for Traefik at
`http://<Your-Traefik-IP-address>:8081`. You can use this page to verify your
settings and for basic monitoring.

[![Home Page][traefik_ui]][traefik_ui]

### Step 6: Make a Request to the Load Balancer

If you query the Traefik load balancer, you should be able to see a response
similar to the one shown below (this command should be run from a
node inside your cluster):

```shell
$ curl http://traefik.service.consul:8080/myapp
Welcome! You are on node 172.31.28.103:28893
```

Note that your request has been forwarded to one of the several deployed
instances of the demo web application (which is spread across 3 Nomad clients).
The output shows the IP address of the host it is deployed on. If you repeat
your requests, you will see that the IP address changes.

* Note: if you would like to access Traefik from outside your cluster, you
  can set up a load balancer in your environment that maps to an active port
  `8080` on your clients (or whichever port you have configured for Traefik to
  listen on). You can then send your requests directly to your external load
  balancer.

[inline]: /docs/job-specification/template.html#inline-template
[matchers]: https://docs.traefik.io/v1.4/basics/#matchers
[reference-arch]: /guides/install/production/reference-architecture.html#high-availability
[remote-template]: /docs/job-specification/template.html#remote-template
[template-stanza]: /docs/job-specification/template.html
[terraform-repo]: https://github.com/hashicorp/nomad/tree/master/terraform#provision-a-nomad-cluster-in-the-cloud
[traefik]: https://traefik.io/
[traefik_ui]: /assets/images/traefik_ui.png
[traefik-consul-provider]: https://docs.traefik.io/v1.7/configuration/backends/consulcatalog/
[traefik-tags]: https://docs.traefik.io/v1.5/configuration/backends/consulcatalog/#tags
@@ -554,7 +554,7 @@ to send out notifications to a [receiver][receivers] of your choice.
[consul_sd_config]: https://prometheus.io/docs/prometheus/latest/configuration/configuration/#%3Cconsul_sd_config%3E
[env]: /docs/runtime/environment.html
[fabio]: https://fabiolb.net/
[fabio-lb]: /guides/load-balancing/fabio.html
[fabio-lb]: https://learn.hashicorp.com/guides/load-balancing/fabio
[new-targets]: /assets/images/new-targets.png
[prometheus-targets]: /assets/images/prometheus-targets.png
[running-jobs]: /assets/images/running-jobs.png
@@ -258,26 +258,6 @@

<li<%= sidebar_current("guides-load-balancing") %>>
  <a href="/guides/load-balancing/load-balancing.html">Load Balancing</a>
  <ul class="nav">
    <li<%= sidebar_current("guides-load-balancing-fabio") %>>
      <a href="/guides/load-balancing/fabio.html">Fabio</a>
    </li>
  </ul>
  <ul class="nav">
    <li<%= sidebar_current("guides-load-balancing-nginx") %>>
      <a href="/guides/load-balancing/nginx.html">NGINX</a>
    </li>
  </ul>
  <ul class="nav">
    <li<%= sidebar_current("guides-load-balancing-haproxy") %>>
      <a href="/guides/load-balancing/haproxy.html">HAProxy</a>
    </li>
  </ul>
  <ul class="nav">
    <li<%= sidebar_current("guides-load-balancing-traefik") %>>
      <a href="/guides/load-balancing/traefik.html">Traefik</a>
    </li>
  </ul>
</li>

<li<%= sidebar_current("guides-governance-and-policy") %>>
@@ -44,6 +44,11 @@
/intro/getting-started/ui.html https://learn.hashicorp.com/nomad/getting-started/ui
/intro/getting-started/next-steps.html https://learn.hashicorp.com/nomad/getting-started/next-steps

/guides/load-balancing/fabio.html https://learn.hashicorp.com/nomad/load-balancing/fabio
/guides/load-balancing/nginx.html https://learn.hashicorp.com/nomad/load-balancing/nginx
/guides/load-balancing/haproxy.html https://learn.hashicorp.com/nomad/load-balancing/haproxy
/guides/load-balancing/traefik.html https://learn.hashicorp.com/nomad/load-balancing/traefik

# Website
/community.html /resources.html