# `scaling` Block

Placement: `job -> group -> scaling`, `job -> group -> task -> scaling`
The `scaling` block allows configuring scaling options for a task or a group, for the purpose of supporting external autoscalers like the Nomad Autoscaler and scaling via the Nomad UI. This block is not supported within jobs of type `system`.

When placed at the group level, the scaling policy will be of type horizontal application scaling, controlling the value of `count` for the group.
job "example" {
datacenters = ["dc1"]
group "cache" {
count = 1
scaling {
enabled = true
min = 0
max = 10
policy {
# ...
}
}
# ...
}
}
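Nomad itself honors the `min` and `max` bounds during scaling operations. As a minimal illustration (assuming the example job above has been run), the group could also be scaled manually with the `nomad job scale` command, where the trailing argument is the desired `count`:

```shell-session
$ nomad job scale example cache 5
```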
When placed at the task level, the scaling policy will be of type Dynamic Application Sizing, controlling the `resources` values of the task. In this scenario, the `scaling` block must have a label indicating which resource will be controlled. Valid names are `cpu` and `mem`.
job "example" {
datacenters = ["dc1"]
group "cache" {
task "redis" {
driver = "docker"
config {
image = "redis:7"
}
resources {
cpu = 100
memory = 256
}
scaling "cpu" {
enabled = true
min = 100
max = 500
policy {
# ...
}
}
scaling "mem" {
enabled = true
min = 64
max = 512
policy {
# ...
}
}
}
}
}
## `scaling` Parameters
- `min` `(int: nil)` - The minimum acceptable count for the task group. This should be honored by the external autoscaler. It will also be honored by Nomad during job updates and scaling operations. Defaults to the specified task group `count`.

- `max` `(int: <required>)` - The maximum acceptable count for the task group. This should be honored by the external autoscaler. It will also be honored by Nomad during job updates and scaling operations.

- `enabled` `(bool: false)` - Whether the scaling policy is enabled. This is intended to allow temporarily disabling an autoscaling policy, and should be honored by the external autoscaler.

- `policy` `(map<string|...>: nil)` - The autoscaling policy. This is opaque to Nomad, consumed and parsed only by the external autoscaler. Therefore, its contents are specific to the autoscaler; consult the Nomad Autoscaler documentation for more details. A sketch of what a policy might contain is shown after this list.
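As a rough sketch of what the `policy` map might contain when the external autoscaler is the Nomad Autoscaler: the `cooldown`, `evaluation_interval`, `check`, and `strategy` contents below follow that project's conventions (here a `target-value` strategy fed by the built-in `nomad-apm` plugin) and are assumptions about the autoscaler's configuration, not something Nomad itself interprets.

```hcl
group "cache" {
  count = 3

  scaling {
    enabled = true
    min     = 1
    max     = 20

    # The policy contents are passed through to the external autoscaler
    # unmodified; the fields below follow Nomad Autoscaler conventions.
    policy {
      cooldown            = "1m"
      evaluation_interval = "30s"

      check "avg_cpu" {
        source = "nomad-apm"
        query  = "avg_cpu"

        # Scale the group so average CPU usage trends toward 70%.
        strategy "target-value" {
          target = 70
        }
      }
    }
  }
}
```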