Vault
Vault integration and retrieving dynamic secrets
Challenge
Nomad provides a flexible workload orchestrator to deploy and manage different types of workloads. These workloads will likely need to authenticate with other services, such as an application API or database management system. Providing workloads with secure access to credentials for these services is critical to ensure secure operations.
Solution
Nomad can deploy these workloads while quickly and safely retrieving dynamic credentials by integrating with Vault. This integration allows your applications to retrieve dynamic credentials from Vault for various tasks and invalidate the credentials when finished.
In this tutorial, you deploy a web application that needs to authenticate against a PostgreSQL database to display data from a table to the user.
Prerequisites
To perform the tasks described in this guide, you must have the following installed:

- Nomad
- Vault
- Docker
- jq
Lab setup
Warning
Do not run services in development mode in production. Development mode starts a limited configuration and is used only for testing.
Deploy Nomad
Open a new terminal and start the Nomad development agent.

$ sudo nomad agent -dev \
    -bind 0.0.0.0 \
    -network-interface='{{ GetDefaultInterfaces | attr "name" }}'
If prompted, enter the password for your operating system.
Open a new terminal and export an environment variable for the Nomad server address.
$ export NOMAD_ADDR=http://localhost:4646
Verify connectivity to the Nomad cluster.
$ nomad node status
ID        DC   Name             Class   Drain  Eligibility  Status
13416cb7  dc1  user-C05G17CLKD  <none>  false  eligible     ready
The Nomad server is ready.
Deploy Vault
Open a terminal and start a Vault development server with root as the root token.

$ vault server -dev -dev-root-token-id root
The Vault development server defaults to running at 127.0.0.1:8200. The server is now initialized and unsealed.

Return to the terminal where you set the NOMAD_ADDR environment variable and export an environment variable for the Vault server address.

$ export VAULT_ADDR=http://127.0.0.1:8200
Export an environment variable for the Vault token.
$ export VAULT_TOKEN=root
Verify connectivity to the Vault cluster.
$ vault status
Key                      Value
---                      -----
Recovery Seal Type       shamir
Initialized              true
Sealed                   false
Total Recovery Shares    1
Threshold                1
Version                  1.14.3
Storage Type             raft
...snipped...
Note

For these tasks, you can use Vault's root token. However, we recommend that you use root tokens only for the initial setup or in emergencies.
The Vault server is ready.
You should now have three terminals open, one each for:

- Nomad running in dev mode
- Vault running in dev mode
- A working terminal, with the VAULT_ADDR and NOMAD_ADDR environment variables set, to configure the environment
Configure Vault
Write a policy for Nomad server tokens
To use the Vault integration, you must provide your Nomad servers with a Vault token that carries an appropriate policy for the Nomad servers.
Create a policy for the Nomad server in a file named nomad-server-policy.hcl.

$ tee nomad-server-policy.hcl <<EOF
# Allow creating tokens under "nomad-cluster" token role. The token role name
# should be updated if "nomad-cluster" is not used.
path "auth/token/create/nomad-cluster" {
  capabilities = ["update"]
}

# Allow looking up "nomad-cluster" token role. The token role name should be
# updated if "nomad-cluster" is not used.
path "auth/token/roles/nomad-cluster" {
  capabilities = ["read"]
}

# Allow looking up the token passed to Nomad to validate the token has the
# proper capabilities. This is provided by the "default" policy.
path "auth/token/lookup-self" {
  capabilities = ["read"]
}

# Allow looking up incoming tokens to validate they have permissions to access
# the tokens they are requesting. This is only required if
# `allow_unauthenticated` is set to false.
path "auth/token/lookup" {
  capabilities = ["update"]
}

# Allow revoking tokens that should no longer exist. This allows revoking
# tokens for dead tasks.
path "auth/token/revoke-accessor" {
  capabilities = ["update"]
}

# Allow checking the capabilities of our own token. This is used to validate the
# token upon startup.
path "sys/capabilities-self" {
  capabilities = ["update"]
}

# Allow our own token to be renewed.
path "auth/token/renew-self" {
  capabilities = ["update"]
}
EOF
Write a policy called nomad-server using the nomad-server-policy.hcl file.

$ vault policy write nomad-server nomad-server-policy.hcl
Success! Uploaded policy: nomad-server
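If you want to verify the upload, you can read the policy back; this prints the policy document you just wrote.

$ vault policy read nomad-server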
Create a role
You will now create a role in Vault for Nomad. The role allows you to manage access to Vault by attaching the policy that defines what actions are permitted.
Create a role definition for the Nomad server in a file named nomad-cluster-role.json.

$ tee nomad-cluster-role.json <<EOF
{
  "allowed_policies": "access-tables",
  "token_explicit_max_ttl": 0,
  "name": "nomad-cluster",
  "orphan": true,
  "token_period": 259200,
  "renewable": true
}
EOF
The access-tables policy is listed in the allowed_policies key. A job running in this Nomad cluster will only be allowed to use the access-tables policy.

Note

If you would like to allow all policies to be used by any job in the Nomad cluster except for the ones you specifically prohibit, use the disallowed_policies key instead and list only the policies that should not be granted. If you take this approach, be sure to include nomad-server in the disallowed policies group. An example of this is shown below:

{
  "disallowed_policies": "nomad-server",
  "token_explicit_max_ttl": 0,
  "name": "nomad-cluster",
  "orphan": true,
  "token_period": 259200,
  "renewable": true
}
Create the role named nomad-cluster.

$ vault write /auth/token/roles/nomad-cluster @nomad-cluster-role.json
Success! Data written to: auth/token/roles/nomad-cluster
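If you want to confirm the role settings, such as the period and allowed policies, you can read the role back from Vault.

$ vault read auth/token/roles/nomad-cluster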
Generate the token for the Nomad server.
$ vault token create -policy nomad-server -period 72h -orphan
Key                  Value
---                  -----
token                hvs.CAESIAGcVO4w2AZPlXU2_sUlA2kl0U01kfg_r3Kj4WosFMyxGh4KHGh2cy5Hazg1MDYxUlpqM2VkSEIwdm5BV1M0OVU
token_accessor       XGqfsrIHEuztv7L73hedz4mu
token_duration       72h
token_renewable      true
token_policies       ["default" "nomad-server"]
identity_policies    []
policies             ["default" "nomad-server"]
The -orphan flag is included when generating the Nomad server token above to prevent revocation of the token when its parent expires. Vault typically creates tokens with a parent-child relationship. When an ancestor token is revoked, all of its descendant tokens and their associated leases are revoked as well.

The nomad-server policy permits Nomad to renew the token before it expires.
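If you want to inspect the token's period, orphan status, and attached policies before giving it to Nomad, you can look it up, substituting the token value from your own output.

$ vault token lookup <nomad-server-token>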
Configure Nomad
With Vault configured, you will now configure Nomad to integrate with it.
Return to the terminal where you started Nomad in dev mode and press ctrl-c to stop it.

Export an environment variable with the token generated in the previous step.

$ export VAULT_TOKEN=<actual-vault-token>
Export an environment variable with the Vault server address.
$ export VAULT_ADDR=http://127.0.0.1:8200
Create a Nomad configuration file that includes the Vault stanza to enable the integration.
$ tee nomad.hcl <<EOF
vault {
  enabled          = true
  address          = "$VAULT_ADDR"
  task_token_ttl   = "1h"
  create_from_role = "nomad-cluster"
  token            = "$VAULT_TOKEN"
}
EOF
Restart Nomad in dev mode and specify the config file.
$ sudo nomad agent -dev \
    -config=nomad.hcl \
    -bind 0.0.0.0 \
    -network-interface='{{ GetDefaultInterfaces | attr "name" }}'
Return to the terminal where you set the NOMAD_ADDR environment variable and confirm connectivity to the Nomad cluster.

$ nomad node status
ID        DC   Name             Class   Drain  Eligibility  Status
13416cb7  dc1  user-C05G17CLKD  <none>  false  eligible     ready
For production environments, Vault integration needs to be enabled on the client nodes as well. Configure the vault stanza in your Nomad clients' configuration file (located at /etc/nomad.d/nomad.hcl). The Nomad clients do not need to be provided with a Vault token.

vault {
  enabled = true
  address = "http://vault.address:8200"
}
Deploy PostgreSQL
You will now configure a connection between Vault and a database server.
Create a Nomad job file named db.nomad.hcl.

$ tee db.nomad.hcl <<EOF
job "postgres-nomad-demo" {
  datacenters = ["dc1"]

  group "db" {
    network {
      port "db" {
        static = 5432
      }
    }

    task "server" {
      driver = "docker"

      config {
        image = "hashicorp/postgres-nomad-demo:latest"
        ports = ["db"]
      }

      service {
        name     = "database"
        port     = "db"
        provider = "nomad"

        check {
          type     = "tcp"
          interval = "2s"
          timeout  = "2s"
        }
      }
    }
  }
}
EOF
Run the Nomad job to start the PostgreSQL database server.
$ nomad run db.nomad.hcl
==> 2023-09-25T12:44:24-04:00: Monitoring evaluation "8e587b48"
    2023-09-25T12:44:24-04:00: Evaluation triggered by job "postgres-nomad-demo"
    2023-09-25T12:44:24-04:00: Evaluation within deployment: "1df7a26c"
    2023-09-25T12:44:24-04:00: Evaluation status changed: "pending" -> "complete"
==> 2023-09-25T12:44:24-04:00: Evaluation "8e587b48" finished with status "complete" but failed to place all allocations:
    2023-09-25T12:44:24-04:00: Evaluation "86dce5d2" waiting for additional capacity to place remainder
==> 2023-09-25T12:44:24-04:00: Monitoring deployment "1df7a26c"
  ✓ Deployment "1df7a26c" successful

    2023-09-25T12:44:44-04:00
    ID          = 1df7a26c
    Job ID      = postgres-nomad-demo
    Job Version = 0
    Status      = successful
    Description = Deployment completed successfully

    Deployed
    Task Group  Desired  Placed  Healthy  Unhealthy  Progress Deadline
    db          1        1       1        0          2023-09-25T12:54:42-04:00
Verify the job is running.
$ nomad status postgres-nomad-demo
ID            = postgres-nomad-demo
Name          = postgres-nomad-demo
Submit Date   = 2023-09-25T11:58:15-04:00
Type          = service
Priority      = 50
Datacenters   = dc1
Namespace     = default
Node Pool     = default
Status        = running
Periodic      = false
Parameterized = false

Summary
Task Group  Queued  Starting  Running  Failed  Complete  Lost  Unknown
db          0       0         1        0       0         0     0

Latest Deployment
ID          = 1df7a26c
Status      = successful
Description = Deployment completed successfully

Deployed
Task Group  Desired  Placed  Healthy  Unhealthy  Progress Deadline
db          1        1       1        0          2023-09-25T12:54:42-04:00

Allocations
ID        Node ID   Task Group  Version  Desired  Status   Created     Modified
c41dfa49  6abaac8b  db          0        run      running  18m12s ago  17m59s ago
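If you want more detail, such as task events and the allocated port, you can check the allocation directly, substituting the allocation ID from your own output.

$ nomad alloc status c41dfa49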
Configure Vault database secret engine
The Vault database secrets engine provides users and services the ability to generate dynamic, on-demand credentials instead of creating static, long-lived credentials.
Enable the database secrets engine.

$ vault secrets enable database
Success! Enabled the database secrets engine at: database/
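You can optionally confirm the new mount; the database/ path appears alongside the default secrets engines.

$ vault secrets list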
Retrieve the IP address of the database service and store it in a variable named POSTGRES_IP.

$ POSTGRES_IP=$(nomad service info -json database | jq -r '.[0] | .Address')
Create a file named connection.json with connection information for the PostgreSQL database.

$ tee connection.json <<EOF
{
  "plugin_name": "postgresql-database-plugin",
  "allowed_roles": "accessdb",
  "connection_url": "postgresql://{{username}}:{{password}}@$POSTGRES_IP:5432/postgres?sslmode=disable",
  "username": "postgres",
  "password": "postgres123"
}
EOF
The connection information allows Vault to connect to the database and create users with specific privileges. In a production setting, give Vault its own database credentials with just enough privileges to generate database credentials dynamically and manage their lifecycle.
Create the connection between Vault and PostgreSQL.
$ vault write database/config/postgresql @connection.json
Success! Data written to: database/config/postgresql
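Optionally, read the configuration back to confirm that Vault stored the connection details.

$ vault read database/config/postgresql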
Create a file named accessdb.sql with a creation statement for PostgreSQL.

$ tee accessdb.sql <<EOF
CREATE USER "{{name}}" WITH ENCRYPTED PASSWORD '{{password}}' VALID UNTIL '{{expiration}}';
GRANT USAGE ON ALL SEQUENCES IN SCHEMA public TO "{{name}}";
GRANT ALL PRIVILEGES ON ALL TABLES IN SCHEMA public TO "{{name}}";
GRANT ALL ON SCHEMA public TO "{{name}}";
EOF
The preceding SQL is used in the creation_statements parameter of the Vault accessdb role to specify the privileges that the dynamically generated credentials will have. In this case, the dynamic database user will have broad privileges, including the ability to read from the tables that the application needs to access.

Create a Vault role to manage database privileges.

$ vault write database/roles/accessdb db_name=postgresql \
    creation_statements=@accessdb.sql default_ttl=1h max_ttl=24h
Example output:
Success! Data written to: database/roles/accessdb
Test generating dynamic credentials with Vault.
$ vault read database/creds/accessdb
Key                Value
---                -----
lease_id           database/creds/accessdb/PxCYrDs5YVv2KIoTw2F9CGUF
lease_duration     1h
lease_renewable    true
password           z4e-GWefKjtXz85u-EQ4
username           v-token-accessdb-NPgciBnpW0vx3AWEqbdr-1695670459
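Each set of credentials is tied to a Vault lease. When the lease expires or is revoked, Vault deletes the database user it created. To invalidate a credential immediately, revoke its lease, substituting the lease_id from your own output.

$ vault lease revoke database/creds/accessdb/PxCYrDs5YVv2KIoTw2F9CGUF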
Recall from the previous section that you specified a policy named access-tables in the allowed_policies section of the Vault role. You will create this policy now and give it the capability to read from the database/creds/accessdb endpoint, the same endpoint you read from in the previous step to generate credentials for the database. You will then specify this policy in the Nomad job, which allows the job to retrieve credentials for itself to access the database.

Create a file named access-tables-policy.hcl.

$ tee access-tables-policy.hcl <<EOF
path "database/creds/accessdb" {
  capabilities = ["read"]
}
EOF
Create the access-tables policy in Vault.

$ vault policy write access-tables access-tables-policy.hcl
Success! Uploaded policy: access-tables
Deploy a Nomad job
You are ready to deploy the web application and give it the necessary policy and configuration to communicate with the PostgreSQL database.
Create a file called web-app.nomad.hcl. The template block's inner heredoc uses the EOH delimiter so that it does not terminate the outer EOF heredoc passed to tee.

$ tee web-app.nomad.hcl <<EOF
job "nomad-vault-demo" {
  datacenters = ["dc1"]

  group "demo" {
    network {
      port "http" {
        to = 8080
      }

      ## You might need to point the container's DNS to a
      ## resolver that can answer Consul queries at port 53.
      # dns {
      #   servers = ["x.x.x.x"]
      # }
    }

    task "server" {
      vault {
        policies = ["access-tables"]
      }

      driver = "docker"

      config {
        image = "hashicorp/nomad-vault-demo:latest"
        ports = ["http"]

        volumes = [
          "secrets/config.json:/etc/demo/config.json"
        ]
      }

      template {
        data = <<EOH
{{ with secret "database/creds/accessdb" }}
{
  "host": "$POSTGRES_IP",
  "port": 5432,
  "username": "{{ .Data.username }}",
  "password": {{ .Data.password | toJSON }},
  "db": "postgres"
}
{{ end }}
EOH

        destination = "secrets/config.json"
      }

      service {
        name     = "nomad-vault-demo"
        port     = "http"
        provider = "nomad"

        tags = [
          "urlprefix-/",
        ]

        check {
          type     = "tcp"
          interval = "2s"
          timeout  = "2s"
        }
      }
    }
  }
}
EOF
There are a few key points to note here:
- The job specifies the access-tables policy in its vault stanza. The Nomad client receives a token with this policy attached. Recall from the previous step that this policy allows the application to read from the database/creds/accessdb endpoint in Vault and retrieve credentials.
- The job uses the template stanza's Vault integration to populate the JSON configuration file that the application needs. Although the job defines the template inline, you can use the template stanza in conjunction with the artifact stanza to download an input template from a remote source such as an S3 bucket.
- The template uses the toJSON function to ensure the password is encoded as a JSON string. Any templated value that may contain special characters (like quotes or newlines) should be passed through the toJSON function.
- The destination of the template is the secrets/ task directory. This ensures the data is not accessible with a command like nomad alloc fs or filesystem APIs.
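For reference, once the allocation is placed, Nomad renders this template to secrets/config.json inside the task's secrets directory. The rendered file looks roughly like the following; the host, username, and password values shown here are only illustrative.

{
  "host": "192.168.1.10",
  "port": 5432,
  "username": "v-token-accessdb-example",
  "password": "example-password",
  "db": "postgres"
}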
Run the Nomad job.
$ nomad run web-app.nomad.hcl
==> 2023-09-26T10:37:03-04:00: Monitoring evaluation "cfa9a2d4"
    2023-09-26T10:37:03-04:00: Evaluation triggered by job "nomad-vault-demo"
    2023-09-26T10:37:03-04:00: Allocation "3ef0cf37" created: node "49822052", group "demo"
    2023-09-26T10:37:04-04:00: Evaluation within deployment: "183bdaf6"
    2023-09-26T10:37:04-04:00: Evaluation status changed: "pending" -> "complete"
==> 2023-09-26T10:37:04-04:00: Evaluation "cfa9a2d4" finished with status "complete"
==> 2023-09-26T10:37:04-04:00: Monitoring deployment "183bdaf6"
  ✓ Deployment "183bdaf6" successful

    2023-09-26T10:37:17-04:00
    ID          = 183bdaf6
    Job ID      = nomad-vault-demo
    Job Version = 0
    Status      = successful
    Description = Deployment completed successfully

    Deployed
    Task Group  Desired  Placed  Healthy  Unhealthy  Progress Deadline
    demo        1        1       1        0          2023-09-26T10:47:16-04:00
Retrieve information about the nomad-vault-demo app.

$ WEBAPP_INFO=$(nomad service info -json nomad-vault-demo | jq -r)
Verify the app is making a connection to the PostgreSQL database.
$ curl $(echo $WEBAPP_INFO | jq -r '.[0] | .Address'):$(echo $WEBAPP_INFO | jq -r '.[0] | .Port')/names
Example output:
<!DOCTYPE html>
<html>
<body>

<h1>Welcome!</h1>
<h2>
  If everything worked correctly, you should be able to see a list of names below
</h2>
<hr />
<h4>John Doe</h4>
<h4>Peter Parker</h4>
<h4>Clifford Roosevelt</h4>
<h4>Bruce Wayne</h4>
<h4>Steven Clark</h4>
<h4>Mary Jane</h4>

</body>
</html>
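As a final check on the Vault side, you can list the active leases for the accessdb role. At least one lease should appear, which is the credential the web application's task retrieved; lease IDs will differ in your environment.

$ vault list sys/leases/lookup/database/creds/accessdb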
Next steps
In this tutorial, you deployed a PostgreSQL database as a Nomad job. You then secured its login credentials with dynamic secrets generated by Vault.