Stateful workloads with Nomad host volumes
Nomad host volumes can manage storage for stateful workloads running inside your Nomad cluster. This tutorial walks you through deploying a MySQL workload using a host volume for persistent storage.
Nomad host volumes provide a workload-agnostic way to specify resources, available for Nomad drivers like exec, java, and docker. See the host_volume specification for more information about supported drivers. Nomad is also aware of host volumes during the scheduling process, enabling it to make scheduling decisions based on the availability of host volumes on a specific client.
This can be contrasted with Nomad support for Docker volumes. Because Docker volumes are managed outside of Nomad and the Nomad scheduler is not aware of them, Docker volumes have to either be deployed to all clients or operators have to use an additional, manually-maintained constraint to inform the scheduler where they are present.
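For comparison, with unmanaged Docker volumes an operator would typically pin the job to the nodes that have the volume using a manually maintained constraint. The sketch below is illustrative only; meta.mysql_volume is a hypothetical client metadata key that you would have to define and keep up to date on each node yourself:

constraint {
  attribute = "${meta.mysql_volume}"
  value     = "present"
}

Host volumes remove the need for this bookkeeping because the scheduler already knows which clients expose the volume.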
Prerequisites
To perform the tasks described in this guide, you need to have a Nomad environment (v0.12.0 or greater) with Consul installed. You can use this Terraform environment to provision a sandbox environment. This tutorial will assume a cluster with one server node and three client nodes.
Note
This tutorial is for demo purposes and only assumes a single server node. Please consult the reference architecture for production configuration.
Install the MySQL client
You will use the MySQL client to connect to the MySQL database and verify the data. Ensure it is installed on a node with access to port 3306 on your Nomad clients:
Ubuntu:
$ sudo apt install mysql-client
CentOS:
$ sudo yum install mysql
macOS via Homebrew:
$ brew install mysql-client
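Whichever package you install, you can confirm that the client is available on your PATH before continuing:

$ mysql --version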
Build the host volume
Create a target directory
On a Nomad client node in your cluster, create a directory that will be used for persisting the MySQL data. For this example, let's create the directory /opt/mysql/data.
$ sudo mkdir -p /opt/mysql/data
You might need to change the owner on this folder if the Nomad client does not run as the root user.
$ sudo chown «Nomad user» /opt/mysql/data
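If you changed the owner, you can confirm that the directory is now owned by the account the Nomad client runs as (the owner shown will depend on your installation):

$ ls -ld /opt/mysql/data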
Configure the client
Edit the Nomad configuration on this Nomad client to create the host volume. Add a host_volume block to the client block of your Nomad configuration:
host_volume "mysql" {
path = "/opt/mysql/data"
read_only = false
}
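In context, the block sits inside the client block of the client's configuration file. The sketch below assumes the configuration lives at /etc/nomad.d/nomad.hcl, which is a common default; your path and other client settings may differ:

# /etc/nomad.d/nomad.hcl (path is an assumption for this example)
client {
  enabled = true

  host_volume "mysql" {
    path      = "/opt/mysql/data"
    read_only = false
  }
}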
Save this change, and then restart the Nomad service on this client to make the host volume active. While still on the client, you can verify that the host volume is configured by using the nomad node status command as shown below:
$ nomad node status -short -self
ID = 12937fa7
Name = ip-172-31-15-65
Class = <none>
DC = dc1
Drain = false
Eligibility = eligible
Status = ready
Host Volumes = mysql
Drivers = docker,exec,java,mock_driver,raw_exec,rkt
...
Deploy MySQL
Create the job file
You are now ready to deploy a MySQL database that can use Nomad host volumes for storage. Create a file called mysql.nomad.hcl and provide it the following contents:
job "mysql-server" {
datacenters = ["dc1"]
type = "service"
group "mysql-server" {
count = 1
volume "mysql" {
type = "host"
read_only = false
source = "mysql"
}
restart {
attempts = 10
interval = "5m"
delay = "25s"
mode = "delay"
}
task "mysql-server" {
driver = "docker"
volume_mount {
volume = "mysql"
destination = "/var/lib/mysql"
read_only = false
}
env = {
"MYSQL_ROOT_PASSWORD" = "password"
}
config {
image = "hashicorp/mysql-portworx-demo:latest"
ports = ["db"]
}
resources {
cpu = 500
memory = 1024
}
service {
name = "mysql-server"
port = "db"
check {
type = "tcp"
interval = "10s"
timeout = "2s"
}
}
}
network {
port "db" {
static = 3306
}
}
}
}
Notes about the above job specification
The service name is mysql-server, which you will use later to connect to the database.

The read_only argument is supplied on all of the volume-related stanzas to help highlight all of the places you would need to change to make a read-only volume mount. See the host_volume, volume, and volume_mount specifications for more details.

For lower-memory instances, you might need to reduce the requested memory in the resources stanza to harmonize with available resources in your cluster.
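For example, a read-only mount of the same volume would flip each of those arguments. This is a minimal sketch of the two job-level stanzas; the client's host_volume block would be changed the same way:

    volume "mysql" {
      type      = "host"
      read_only = true
      source    = "mysql"
    }

      volume_mount {
        volume      = "mysql"
        destination = "/var/lib/mysql"
        read_only   = true
      }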
Run the job
Register the job file you created in the previous step with the following command:
$ nomad run mysql.nomad.hcl
==> Monitoring evaluation "aa478d82"
Evaluation triggered by job "mysql-server"
Allocation "6c3b3703" created: node "be8aad4e", group "mysql-server"
Evaluation status changed: "pending" -> "complete"
==> Evaluation "aa478d82" finished with status "complete"
Check the status of the allocation and ensure the task is running:
$ nomad status mysql-server
ID = mysql-server
...
Summary
Task Group Queued Starting Running Failed Complete Lost
mysql-server 0 0 1 0 0 0
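For more detail on the running task, you can also inspect the allocation directly. The allocation ID below comes from the earlier nomad run output and will differ in your cluster:

$ nomad alloc status 6c3b3703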
Write data to MySQL
Connect to MySQL
Using the mysql client (installed earlier), connect to the database and access the information:
$ mysql -h mysql-server.service.consul -u web -p -D itemcollection
The password for this demo database is password.
Note
This tutorial is for demo purposes and does not follow best practices for securing database passwords. See Keeping Passwords Secure for more information.
Consul is installed alongside Nomad in this cluster, so you are able to connect using the mysql-server service name you registered with the task in the job file.
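If the hostname does not resolve, you can check the service registration through Consul's DNS interface directly. The command below assumes dig is available and that the local Consul agent is answering DNS on its default port 8600:

$ dig @127.0.0.1 -p 8600 mysql-server.service.consul SRV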
Add test data
Once you are connected to the database, verify that the items table exists:
mysql> show tables;
+--------------------------+
| Tables_in_itemcollection |
+--------------------------+
| items |
+--------------------------+
1 row in set (0.00 sec)
Display the contents of this table with the following command:
mysql> select * from items;
+----+----------+
| id | name |
+----+----------+
| 1 | bike |
| 2 | baseball |
| 3 | chair |
+----+----------+
3 rows in set (0.00 sec)
Now add some data to this table (after you terminate the database in Nomad and bring it back up, this data should still be intact):
mysql> INSERT INTO items (name) VALUES ('glove');
Run the INSERT INTO command as many times as you like with different values.
mysql> INSERT INTO items (name) VALUES ('hat');
mysql> INSERT INTO items (name) VALUES ('keyboard');
Once you are done, type exit to return to the Nomad client command line:
mysql> exit
Bye
Destroy the database job
Run the following command to stop and purge the MySQL job from the cluster:
$ nomad stop -purge mysql-server
==> Monitoring evaluation "6b784149"
Evaluation triggered by job "mysql-server"
Evaluation status changed: "pending" -> "complete"
==> Evaluation "6b784149" finished with status "complete"
Verify no jobs are running in the cluster:
$ nomad status
No running jobs
In more advanced cases, the directory backing the host volume could be a mounted network filesystem, like NFS, or a cluster-aware filesystem, like GlusterFS. This can enable more complex, automatic failure-recovery scenarios in the event of a node failure.
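As an illustration only, the same host volume could be backed by an NFS mount instead of a local directory. The export below (nfs.example.com:/exports/mysql) is a hypothetical placeholder, and the client would need NFS utilities installed; the host_volume block itself would not change:

$ sudo mount -t nfs nfs.example.com:/exports/mysql /opt/mysql/data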
Re-deploy and verify
Using the mysql.nomad.hcl job file from earlier, re-deploy the database to the Nomad cluster.
$ nomad run mysql.nomad.hcl
==> Monitoring evaluation "61b4f648"
Evaluation triggered by job "mysql-server"
Allocation "8e1324d2" created: node "be8aad4e", group "mysql-server"
Evaluation status changed: "pending" -> "complete"
==> Evaluation "61b4f648" finished with status "complete"
Once you re-connect to MySQL, you should be able to see that the information you added prior to destroying the database is still present:
mysql> select * from items;
+----+----------+
| id | name |
+----+----------+
| 1 | bike |
| 2 | baseball |
| 3 | chair |
| 4 | glove |
| 5 | hat |
| 6 | keyboard |
+----+----------+
6 rows in set (0.00 sec)
Clean up
Once you have completed this guide, you should perform the following cleanup steps:
Stop and purge the mysql-server job.

Remove the host_volume "mysql" stanza from your Nomad client configuration and restart the Nomad service on that client.

Remove the /opt/mysql/data directory and any parent directories you no longer require.
Summary
In this guide, you configured a host volume on a Nomad client using a client-local directory. You created a job that mounted this volume to a Docker MySQL container and wrote data that persisted beyond the job's lifecycle.