docs: Add Persistent Workload using Host Volumes guide (#6263)

* Added Persistent Workload guide using Host Volumes

* Update website/source/guides/stateful-workloads/stateful-workloads.html.md

Co-Authored-By: Danielle <dani@hashicorp.com>

* fix client config and job spec formatting

* fix typo in description

* fix navigation for both stateful workloads guides

* show output from nomad node status to verify host volumes

* Add value prop info; info about HA

From feedback, added more information about the value proposition for
host volumes (h/t @rkettelerij), and corrected an orphaned bit from
the original guide this one was created from.
Charlie Voiselle 2019-09-05 12:33:10 -04:00 committed by Omar Khawaja
parent 43363f9337
commit 967c80005d
4 changed files with 393 additions and 7 deletions


@@ -0,0 +1,378 @@
---
layout: "guides"
page_title: "Stateful Workloads with Nomad Host Volumes"
sidebar_current: "guides-stateful-workloads-host-volumes"
description: |-
There are multiple approaches to deploying stateful applications in Nomad.
This guide uses Nomad Host Volumes to deploy a MySQL database.
---
# Stateful Workloads with Nomad Host Volumes
Nomad Host Volumes can manage storage for stateful workloads running inside your
Nomad cluster. This guide walks you through deploying a MySQL workload to a node
containing supporting storage.
Nomad host volumes provide a more workload-agnostic way to specify storage
resources, and they are available to Nomad task drivers like `exec`, `java`,
and `docker`. See the [`host_volume` specification][host_volume spec] for more
information about supported drivers.
Nomad is also aware of host volumes during the scheduling process, enabling it
to make scheduling decisions based on the availability of host volumes on a
specific client.
This contrasts with Nomad's support for Docker volumes. Because Docker volumes
are managed outside of Nomad and the Nomad scheduler is not aware of them, they
must either be deployed to all clients, or operators must use an additional,
manually maintained constraint to inform the scheduler where they are present.
## Reference Material
- [Nomad `host_volume` specification][host_volume spec]
- [Nomad `volume` specification][volume spec]
- [Nomad `volume_mount` specification][volume_mount spec]
## Estimated Time to Complete
20 minutes
## Challenge
Deploy a MySQL database that needs to be able to persist data without using
operator-configured Docker volumes.
## Solution
Configure Nomad Host Volumes on a Nomad client node in order to persist data
in the event that the container is restarted.
## Prerequisites
To perform the tasks described in this guide, you need to have a Nomad
environment with Consul installed. You can use this [project][repo] to easily
provision a sandbox environment. This guide will assume a cluster with one
server node and three client nodes.
~> **Please Note:** This guide is for demo purposes and is only using a single
server node. In a production cluster, 3 or 5 server nodes are recommended.
### Prerequisite 1: Install the MySQL client
We will use the MySQL client to connect to our MySQL database and verify our data.
Ensure it is installed on a node with access to port 3306 on your Nomad clients:
Ubuntu:
```bash
$ sudo apt install mysql-client
```
CentOS:
```bash
$ sudo yum install mysql
```
macOS via Homebrew:
```bash
$ brew install mysql-client
```
### Step 1: Create a Directory to Use as a Mount Target
On a Nomad client node in your cluster, create a directory that will be used for
persisting the MySQL data. For this example, let's create the directory
`/opt/mysql/data`.
```bash
sudo mkdir -p /opt/mysql/data
```
You might need to change the owner on this folder if the Nomad client does not
run as the `root` user.
```bash
sudo chown «Nomad user» /opt/mysql/data
```
### Step 2: Configure the `mysql` Host Volume on the Client
Edit the Nomad configuration on this client to define the host volume.
Add the following to the `client` stanza of your Nomad configuration:
```hcl
host_volume "mysql" {
  path      = "/opt/mysql/data"
  read_only = false
}
```
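For context, a minimal sketch of a full `client` stanza containing this host volume might look like the following (other client settings are omitted and will vary by environment):

```hcl
client {
  enabled = true

  # Expose the directory created in Step 1 as the "mysql" host volume.
  host_volume "mysql" {
    path      = "/opt/mysql/data"
    read_only = false
  }
}
```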
Save this change, and then restart the Nomad service on this client to make the
Host Volume active. While still on the client, you can easily verify that the
host volume is configured by using the `nomad node status` command as shown
below:
```shell
$ nomad node status -short -self
ID = 12937fa7
Name = ip-172-31-15-65
Class = <none>
DC = dc1
Drain = false
Eligibility = eligible
Status = ready
Host Volumes = mysql
Drivers = docker,exec,java,mock_driver,raw_exec,rkt
...
```
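For additional node detail, you can add the `-verbose` flag to the same command; in recent Nomad versions this should also list each host volume's path and read-only setting (output not shown here):

```shell
$ nomad node status -verbose -self
```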
### Step 3: Create the `mysql.nomad` Job File
We are now ready to deploy a MySQL database that can use Nomad Host Volumes for
storage. Create a file called `mysql.nomad` with the following contents:
```hcl
job "mysql-server" {
datacenters = ["dc1"]
type = "service"
group "mysql-server" {
count = 1
volume "mysql" {
type = "host"
config {
source = "mysql"
}
}
restart {
attempts = 10
interval = "5m"
delay = "25s"
mode = "delay"
}
task "mysql-server" {
driver = "docker"
volume_mount {
volume = "mysql"
destination = "/var/lib/mysql"
}
env = {
"MYSQL_ROOT_PASSWORD" = "password"
}
config {
image = "hashicorp/mysql-portworx-demo:latest"
port_map {
db = 3306
}
}
resources {
cpu = 500
memory = 1024
network {
port "db" {
static = 3306
}
}
}
service {
name = "mysql-server"
port = "db"
check {
type = "tcp"
interval = "10s"
timeout = "2s"
}
}
}
}
}
```
* The service name is `mysql-server`, which we will use later to connect to the
database.
### Step 4: Deploy the MySQL Database
Register the job file you created in the previous step with the following
command:
```
$ nomad run mysql.nomad
==> Monitoring evaluation "aa478d82"
Evaluation triggered by job "mysql-server"
Allocation "6c3b3703" created: node "be8aad4e", group "mysql-server"
Evaluation status changed: "pending" -> "complete"
==> Evaluation "aa478d82" finished with status "complete"
```
Check the status of the allocation and ensure the task is running:
```
$ nomad status mysql-server
ID = mysql-server
...
Summary
Task Group Queued Starting Running Failed Complete Lost
mysql-server 0 0 1 0 0 0
```
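Optionally, you can inspect the allocation itself, using the allocation ID from the `nomad run` output above, to see per-task details for the running database (the ID below is the one shown earlier; yours will differ):

```shell
$ nomad alloc status 6c3b3703
```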
### Step 5: Connect to MySQL
Using the mysql client (installed in [Prerequisite 1]), connect to the database
and access the information:
```
mysql -h mysql-server.service.consul -u web -p -D itemcollection
```
The password for this demo database is `password`.
~> **Please Note:** This guide is for demo purposes and does not follow best
practices for securing database passwords. See [Keeping Passwords
Secure][password-security] for more information.
Consul is installed alongside Nomad in this cluster, so we were able to
connect using the `mysql-server` service name we registered with our task in
our job file.
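If you want to confirm the service registration independently, and assuming your environment forwards DNS queries for the `.consul` domain to Consul and has `dig` installed, you can resolve the service name directly:

```shell
$ dig +short mysql-server.service.consul
```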
### Step 6: Add Data to MySQL
Once you are connected to the database, verify the table `items` exists:
```
mysql> show tables;
+--------------------------+
| Tables_in_itemcollection |
+--------------------------+
| items |
+--------------------------+
1 row in set (0.00 sec)
```
Display the contents of this table with the following command:
```
mysql> select * from items;
+----+----------+
| id | name |
+----+----------+
| 1 | bike |
| 2 | baseball |
| 3 | chair |
+----+----------+
3 rows in set (0.00 sec)
```
Now add some data to this table (after we terminate our database in Nomad and
bring it back up, this data should still be intact):
```
mysql> INSERT INTO items (name) VALUES ('glove');
```
Run the `INSERT INTO` command as many times as you like with different values.
```
mysql> INSERT INTO items (name) VALUES ('hat');
mysql> INSERT INTO items (name) VALUES ('keyboard');
```
Once you are done, type `exit` and return to the Nomad client command
line:
```
mysql> exit
Bye
```
### Step 7: Stop and Purge the Database Job
Run the following command to stop and purge the MySQL job from the cluster:
```
$ nomad stop -purge mysql-server
==> Monitoring evaluation "6b784149"
Evaluation triggered by job "mysql-server"
Evaluation status changed: "pending" -> "complete"
==> Evaluation "6b784149" finished with status "complete"
```
Verify no jobs are running in the cluster:
```
$ nomad status
No running jobs
```
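Even though the job has been purged, the data written by MySQL remains in the host volume's directory. If you would like to confirm this at the filesystem level, list the directory on the client you configured in Step 2 (exact file names will vary):

```shell
$ ls /opt/mysql/data
```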
In more advanced cases, the directory backing the host volume could be a mounted
network filesystem like NFS, or a cluster-aware filesystem like GlusterFS. This
can enable more complex, automatic failure-recovery scenarios in the event of a
node failure.
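For example, here is a minimal sketch of backing the host volume with an NFS export before starting Nomad; the server name and export path are hypothetical, and the NFS client tooling must already be installed on the node:

```shell
# Illustrative only: mount a (hypothetical) NFS export at the host volume path.
$ sudo mount -t nfs nfs.example.com:/exports/mysql /opt/mysql/data
```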
### Step 8: Re-deploy the Database
Using the `mysql.nomad` job file from [Step
3](#step-3-create-the-mysql-nomad-job-file), re-deploy the database to the Nomad
cluster.
```
$ nomad run mysql.nomad
==> Monitoring evaluation "61b4f648"
Evaluation triggered by job "mysql-server"
Allocation "8e1324d2" created: node "be8aad4e", group "mysql-server"
Evaluation status changed: "pending" -> "complete"
==> Evaluation "61b4f648" finished with status "complete"
```
### Step 9: Verify Data
Once you re-connect to MySQL, you should be able to see that the information you
added prior to destroying the database is still present:
```
mysql> select * from items;
+----+----------+
| id | name |
+----+----------+
| 1 | bike |
| 2 | baseball |
| 3 | chair |
| 4 | glove |
| 5 | hat |
| 6 | keyboard |
+----+----------+
6 rows in set (0.00 sec)
```
### Step 10: Tidying Up
Once you have completed this guide, you should perform the following cleanup
steps (a command sketch follows the list):
* Stop and purge the `mysql-server` job.
* Remove the `host_volume "mysql"` stanza from your Nomad client configuration
  and restart the Nomad service on that client.
* Remove the `/opt/mysql/data` directory and any parent directories that you
  no longer require.
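A minimal sketch of these cleanup commands, run on the relevant nodes (adjust to your environment; the `systemctl` command assumes Nomad runs under systemd):

```shell
$ nomad stop -purge mysql-server
$ sudo systemctl restart nomad   # after removing the host_volume stanza
$ sudo rm -rf /opt/mysql/data
```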
[Prerequisite 1]: #prerequisite-1-install-the-mysql-client
[host_volume spec]: /docs/configuration/client.html#host_volume-stanza
[volume spec]: /docs/job-specification/volume.html
[volume_mount spec]: /docs/job-specification/volume_mount.html
[password-security]: https://dev.mysql.com/doc/refman/8.0/en/password-security.html
[repo]: https://github.com/hashicorp/nomad/tree/master/terraform#provision-a-nomad-cluster-in-the-cloud


@@ -1,7 +1,7 @@
---
layout: "guides"
page_title: "Stateful Workloads with Portworx"
sidebar_current: "guides-stateful-workloads"
sidebar_current: "guides-stateful-workloads-portworx"
description: |-
There are multiple approaches to deploying stateful applications in Nomad.
This guide uses Portworx to deploy a MySQL database.


@@ -9,6 +9,14 @@ description: |-
# Stateful Workloads
Nomad allows a user to mount persistent data from local or remote storage volumes
into task environments in a couple of ways — host volume mounts or Docker Volume
drivers.
Nomad host volumes allow you to mount any directory on the Nomad client into an
allocation. These mounts can then be connected to individual tasks within a task
group.
The Docker task driver's support for [volumes][docker-volumes] enables Nomad to
integrate with software-defined storage (SDS) solutions like
[Portworx][portworx] to support stateful workloads. Please keep in mind that
@@ -17,14 +25,11 @@ delegated to the SDS providers. Please assess all factors and risks when
utilizing such providers to run stateful workloads (such as your production
database).
Nomad will be adding first class features in the near future that will allow a
user to mount local or remote storage volumes into task environments in a
consistent way across all task drivers and storage providers.
Please refer to the specific documentation links below or in the sidebar for
more detailed information about using specific storage integrations.
- [Host Volumes](/guides/stateful-workloads/host-volumes.html)
- [Portworx](/guides/stateful-workloads/portworx.html)
[docker-volumes]: /docs/drivers/docker.html#volumes
[portworx]: https://docs.portworx.com/install-with-other/nomad


@@ -207,7 +207,10 @@
<li<%= sidebar_current("guides-stateful-workloads") %>>
<a href="/guides/stateful-workloads/stateful-workloads.html">Stateful Workloads</a>
<ul class="nav">
<li<%= sidebar_current("guides-portworx") %>>
<li<%= sidebar_current("guides-stateful-workloads-host-volumes") %>>
<a href="/guides/stateful-workloads/host-volumes.html">Host Volumes</a>
</li>
<li<%= sidebar_current("guides-stateful-workloads-portworx") %>>
<a href="/guides/stateful-workloads/portworx.html">Portworx</a>
</li>
</ul>