# group Block

Placement: job -> group

The `group` block defines a series of tasks that should be co-located on the
same Nomad client. Any task within a group will be placed on the same client.
```hcl
job "docs" {
  group "example" {
    # ...
  }
}
```
## group Parameters
- `constraint` `(Constraint: nil)` - This can be provided multiple times to define additional constraints.

- `affinity` `(Affinity: nil)` - This can be provided multiple times to define preferred placement criteria.

- `spread` `(Spread: nil)` - This can be provided multiple times to define criteria for spreading allocations across a node attribute or metadata. See the Nomad spread reference for more details.

- `count` `(int)` - Specifies the number of instances that should be running under this group. This value must be non-negative. This defaults to the `min` value specified in the `scaling` block, if present; otherwise, this defaults to `1`.

- `consul` `(Consul: nil)` - Specifies Consul configuration options specific to the group.

- `ephemeral_disk` `(EphemeralDisk: nil)` - Specifies the ephemeral disk requirements of the group. Ephemeral disks can be marked as sticky and support live data migrations.

- `meta` `(Meta: nil)` - Specifies a key-value map that annotates the group with user-defined metadata.

- `migrate` `(Migrate: nil)` - Specifies the group strategy for migrating off of draining nodes. Only service jobs with a count greater than 1 support migrate blocks.

- `network` `(Network: <optional>)` - Specifies the network requirements and configuration, including static and dynamic port allocations, for the group.

- `reschedule` `(Reschedule: nil)` - Specifies a rescheduling strategy. Nomad will attempt to schedule the task on another node if any of the group allocation statuses become "failed".

- `restart` `(Restart: nil)` - Specifies the restart policy for all tasks in this group. If omitted, a default policy exists for each job type, which can be found in the restart block documentation.

- `service` `(Service: nil)` - Specifies integrations with Nomad or Consul for service discovery. Nomad automatically registers each service when an allocation is started and de-registers them when the allocation is destroyed.

- `shutdown_delay` `(string: "0s")` - Specifies the duration to wait when stopping a group's tasks. The delay occurs between Consul or Nomad service deregistration and sending each task a shutdown signal. Ideally, services would fail health checks once they receive a shutdown signal. Alternatively, `shutdown_delay` may be set to give in-flight requests time to complete before shutting down. A group-level `shutdown_delay` will run regardless of whether there are any defined group services, and only applies to those services. In addition, tasks may have their own `shutdown_delay`, which waits between de-registering task services and stopping the task.

- `stop_after_client_disconnect` `(string: "")` - Specifies a duration after which a Nomad client will stop allocations, if it cannot communicate with the servers. By default, a client will not stop an allocation until explicitly told to by a server. A client that fails to heartbeat to a server within the `heartbeat_grace` window will be marked "lost", along with any allocations running on it, and Nomad will schedule replacement allocations. The replaced allocations will normally continue to run on the non-responsive client. But you may want them to stop instead, for example when allocations require exclusive access to an external resource. When specified, the Nomad client will stop them after this duration. The Nomad client process must be running for this to occur. This setting cannot be used with `max_client_disconnect`.

- `max_client_disconnect` `(string: "")` - Specifies a duration during which a Nomad client will attempt to reconnect allocations after it fails to heartbeat in the `heartbeat_grace` window. See the example code below for more details. This setting cannot be used with `stop_after_client_disconnect`.

- `task` `(Task: <required>)` - Specifies one or more tasks to run within this group. This can be specified multiple times, to add a task as part of the group.

- `update` `(Update: nil)` - Specifies the task's update strategy. When omitted, a default update strategy is applied.

- `vault` `(Vault: nil)` - Specifies the set of Vault policies required by all tasks in this group. Overrides a `vault` block set at the `job` level.

- `volume` `(Volume: nil)` - Specifies the volumes that are required by tasks within the group.
## consul Parameters
- `namespace` `(string: "")` Enterprise - The Consul namespace in which group and task-level services within the group will be registered. Use of `template` to access Consul KV will read from the specified Consul namespace. Specifying `namespace` takes precedence over the `-consul-namespace` command line argument in `job run`.
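As a sketch of how this might look, a group-level `consul` block could pin all of the group's service registrations and Consul KV reads to one namespace. The namespace name here is illustrative, and this assumes a Nomad Enterprise cluster with a Consul namespace named `prod` already created:

```hcl
group "example" {
  # Register group and task services in the "prod" Consul namespace
  # (Nomad Enterprise only; "prod" is a hypothetical namespace name).
  consul {
    namespace = "prod"
  }

  task "api" {
    # ...
  }
}
```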
## group Examples
The following examples only show the `group` blocks. Remember that the `group` block is only valid in the placements listed above.
### Specifying Count
This example specifies that 5 instances of the tasks within this group should be running:
```hcl
group "example" {
  count = 5
}
```
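As noted in the parameter list above, when a `scaling` block is present and `count` is omitted, the group's count defaults to the scaling block's `min` value. A minimal sketch (the `min` and `max` values are illustrative):

```hcl
group "example" {
  # No count here: the group starts at the scaling block's
  # "min" value (3), and autoscaling may adjust it up to "max".
  scaling {
    min = 3
    max = 10
  }
}
```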
### Tasks with Constraint
This example shows two abbreviated tasks with a constraint on the group. This restricts the tasks to nodes with a 64-bit (amd64) architecture.
```hcl
group "example" {
  constraint {
    attribute = "${attr.cpu.arch}"
    value     = "amd64"
  }

  task "cache" {
    # ...
  }

  task "server" {
    # ...
  }
}
```
### Metadata

This example shows arbitrary user-defined metadata on the group:
```hcl
group "example" {
  meta {
    my-key = "my-value"
  }
}
```
### Network

This example shows network constraints as specified in the network block. The group uses the `bridge` networking mode, dynamically allocates two ports, and statically allocates one port:
```hcl
group "example" {
  network {
    mode = "bridge"

    port "http" {}
    port "https" {}

    port "lb" {
      static = "8889"
    }
  }
}
```
### Service Discovery

This example creates a service in Consul. To read more about service discovery in Nomad, please see the Nomad service discovery documentation.
```hcl
group "example" {
  network {
    port "api" {}
  }

  service {
    name = "example"
    port = "api"
    tags = ["default"]

    check {
      type     = "tcp"
      interval = "10s"
      timeout  = "2s"
    }
  }

  task "api" { ... }
}
```
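A group-level `shutdown_delay` can be paired with a group service, so that in-flight requests have time to drain between service deregistration and the tasks receiving a shutdown signal. A minimal sketch, with an illustrative 10-second delay:

```hcl
group "example" {
  # Wait 10s between deregistering the group's services
  # and sending each task a shutdown signal.
  shutdown_delay = "10s"

  network {
    port "api" {}
  }

  service {
    name = "example"
    port = "api"
  }

  task "api" {
    # ...
  }
}
```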
### Stop After Client Disconnect

This example shows how `stop_after_client_disconnect` interacts with other blocks. For the first group, after the default 10 second `heartbeat_grace` window expires and 90 more seconds pass, the server will reschedule the allocation. The client will wait 90 seconds before sending a stop signal (`SIGTERM`) to the `first-task` task. After 15 more seconds, because of the task's `kill_timeout`, the client will send `SIGKILL`. The second group does not have `stop_after_client_disconnect`, so the server will reschedule the allocation after the 10 second `heartbeat_grace` expires. It will not be stopped on the client, regardless of how long the client is out of touch.
Note that if the servers' clocks are not closely synchronized with each other, a server may reschedule the group before the client has stopped the allocation. Operators should ensure that clock drift between servers is as small as possible.

Note also that a group using this feature will be stopped on the client if the Nomad server cluster fails, since the client will be unable to contact any server in that case. Groups opting in to this feature are therefore exposed to an additional runtime dependency and potential point of failure.
```hcl
group "first" {
  stop_after_client_disconnect = "90s"

  task "first-task" {
    kill_timeout = "15s"
  }
}

group "second" {
  task "second-task" {
    kill_timeout = "5s"
  }
}
```
### Max Client Disconnect

`max_client_disconnect` specifies a duration during which a Nomad client will attempt to reconnect allocations after it fails to heartbeat in the `heartbeat_grace` window.
By default, allocations running on a client that fails to heartbeat will be marked "lost". When a client reconnects, its allocations, which may still be healthy, will restart because they have been marked "lost". This can cause issues with stateful tasks or tasks with long restart times.
Instead, an operator may desire that these allocations reconnect without a restart. When `max_client_disconnect` is specified, the Nomad server will mark clients that fail to heartbeat as "disconnected" rather than "down", and will mark allocations on a disconnected client as "unknown" rather than "lost". These allocations may continue to run on the disconnected client. Replacement allocations will be scheduled according to the allocations' reschedule policy until the disconnected client reconnects. Once a disconnected client reconnects, Nomad will compare the "unknown" allocations with their replacements and keep the one with the best node score. If the `max_client_disconnect` duration expires before the client reconnects, the allocations will be marked "lost". Clients that contain "unknown" allocations will transition to "disconnected" rather than "down" until the last `max_client_disconnect` duration has expired.
In the example code below, if both of these task groups were placed on the same client and that client experienced a network outage, both groups' allocations would be marked as "disconnected" at two minutes because of the client's `heartbeat_grace` value of "2m". If the network outage continued for eight hours, and the client continued to fail to heartbeat, the client would remain in a "disconnected" state, as the first group's `max_client_disconnect` is twelve hours. Once all groups' `max_client_disconnect` durations are exceeded, in this case in twelve hours, the client node will be marked as "down" and the allocations will be marked as "lost". If the client had reconnected before twelve hours had passed, the allocations would gracefully reconnect without a restart.

Max Client Disconnect is useful for edge deployments, or scenarios when operators want zero on-client downtime due to node connectivity issues. This setting cannot be used with `stop_after_client_disconnect`.
```hcl
# server_config.hcl

server {
  enabled         = true
  heartbeat_grace = "2m"
}
```

```hcl
# jobspec.nomad

group "first" {
  max_client_disconnect = "12h"

  task "first-task" {
    ...
  }
}

group "second" {
  max_client_disconnect = "6h"

  task "second-task" {
    ...
  }
}
```
**Note:** The `max_client_disconnect` feature is only supported on Nomad version 1.3.0 and above. If you run a job with `max_client_disconnect` on a cluster where some servers are not upgraded to 1.3.0, the `max_client_disconnect` flag will be ignored. Deploying a job with `max_client_disconnect` to a datacenter of Nomad clients where not all clients are 1.3.0 or above is unsupported.