containerd Task Driver
Name: containerd-driver
Homepage: https://github.com/Roblox/nomad-driver-containerd
containerd (containerd.io) is a lightweight container daemon for running and managing the container lifecycle. The Docker daemon also uses containerd.
dockerd (docker daemon) --> containerd --> containerd-shim --> runc
nomad-driver-containerd enables Nomad clients to launch containers directly using containerd, without Docker. The Docker daemon is therefore not required on the host system.
See the project's homepage for more details.
Client Requirements
The containerd task driver is not built into Nomad. It must be downloaded onto the client host in the configured plugin directory.
- Linux (Ubuntu >=16.04) with containerd (>=1.3) installed.
- containerd-driver binary in Nomad's plugin_dir.
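For reference, the plugin directory itself is set in the Nomad client agent configuration; a minimal sketch, with an illustrative path:
# Nomad client agent configuration (path is a placeholder)
plugin_dir = "/opt/nomad/plugins"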
Capabilities
The containerd-driver implements the following capabilities.
Feature | Implementation |
---|---|
send signals | true |
exec | true |
filesystem isolation | image |
network isolation | host, group, task, none |
volume mounting | true |
For sending signals, one can use the nomad alloc signal command.
For exec'ing into the container, one can use the nomad alloc exec command.
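For illustration, assuming the example redis-task shown under Task Configuration, the commands might look like this; the allocation ID is a placeholder:
$ nomad alloc signal -s SIGHUP <alloc_id> redis-task
$ nomad alloc exec -task redis-task <alloc_id> /bin/sh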
Task Configuration
Since Docker also relies on containerd for managing the container lifecycle, the example job created by nomad init -short can easily be adapted to use containerd-driver instead:
job "redis" {
datacenters = ["dc1"]
group "redis-group" {
task "redis-task" {
driver = "containerd-driver"
config {
image = "docker.io/library/redis:alpine"
}
resources {
cpu = 500
memory = 256
}
}
}
}
The containerd task driver supports the following parameters:
image
- (Required) OCI image (Docker is also OCI compatible) for your container.
config {
image = "docker.io/library/redis:alpine"
}
image_pull_timeout
- (Optional) A time duration that controls how long containerd-driver will wait before cancelling an in-progress pull of the OCI image as specified in image. Defaults to "5m".
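For example, to allow a slower image pull (the duration is only illustrative):
config {
  image_pull_timeout = "15m"
}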
command
- (Optional) Command to override the command defined in the image.
config {
command = "some-command"
}
args
- (Optional) Arguments to the command.
config {
  args = [
    "arg1",
    "arg2",
  ]
}
auth
- (Optional) Provide authentication for a private registry (see Authentication below).
entrypoint
- (Optional) A string list overriding the image's entrypoint.
cwd
- (Optional) Specify the current working directory (cwd) for your container process. If the directory does not exist, one will be created for you.
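For illustration, entrypoint and cwd might be set together like this; the script path and directory are placeholders, not values from any particular image:
config {
  entrypoint = ["/usr/local/bin/my-entrypoint.sh"]
  cwd        = "/home/app"
}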
privileged
- (Optional) true or false (default). Run container in privileged mode. Your container will have all Linux capabilities when running in privileged mode.
config {
  privileged = true
}
pids_limit
- (Optional) An integer value that specifies the pid limit for the container. Defaults to unlimited.
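For example, to cap the container at 256 processes (the limit is only illustrative):
config {
  pids_limit = 256
}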
pid_mode
- (Optional) host or not set (default). Set to host to share the PID namespace with the host.
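For example, to share the host PID namespace:
config {
  pid_mode = "host"
}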
host_dns
- (Optional) true (default) or false. By default, a container launched using containerd-driver will use the host's /etc/resolv.conf. This is similar to Docker's behavior. However, if you don't want to use host DNS, you can turn off this flag by setting host_dns = false.
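For example, to stop the container from using the host's /etc/resolv.conf:
config {
  host_dns = false
}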
seccomp
- (Optional) Enable the default seccomp profile (a list of allowed syscalls).
seccomp_profile
- (Optional) Path to a custom seccomp profile. seccomp must be set to true in order to use seccomp_profile. The default docker seccomp profile found in the Moby repository can be downloaded and modified (by removing/adding syscalls) to create a custom seccomp profile. The custom seccomp profile can then be saved under /opt/seccomp/seccomp.json on the Nomad client nodes.
config {
  seccomp         = true
  seccomp_profile = "/opt/seccomp/seccomp.json"
}
shm_size
- (Optional) Size of /dev/shm, e.g. 128M if you want 128 MB of /dev/shm.
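For example:
config {
  shm_size = "128M"
}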
sysctl
- (Optional) A key-value map of sysctl configurations to set for the containers on start.
config {
  sysctl = {
    "net.core.somaxconn" = "16384"
    "net.ipv4.ip_forward" = "1"
  }
}
readonly_rootfs
- (Optional) true or false (default). Container root filesystem will be read-only.
config {
  readonly_rootfs = true
}
host_network
- (Optional) true or false (default). Enable host network. This is equivalent to --net=host in Docker.
config {
  host_network = true
}
extra_hosts
- (Optional) A list of hosts, given as host:IP, to be added to /etc/hosts.
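For example (the host names and addresses are placeholders):
config {
  extra_hosts = ["postgres:127.0.1.1", "cache:10.0.0.25"]
}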
hostname
- (Optional) The hostname to assign to the container. When launching more than one of a task (using count) with this option set, every container the task starts will have the same hostname.
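For example (the name is a placeholder):
config {
  hostname = "redis-cache"
}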
cap_add
- (Optional) Add individual capabilities.
config {
  cap_add = [
    "CAP_SYS_ADMIN",
    "CAP_CHOWN",
    "CAP_SYS_CHROOT"
  ]
}
cap_drop
- (Optional) Drop individual capabilities.
config {
  cap_drop = [
    "CAP_SYS_ADMIN",
    "CAP_CHOWN",
    "CAP_SYS_CHROOT"
  ]
}
devices
- (Optional) A list of devices to be exposed to the container.
config {
  devices = [
    "/dev/loop0",
    "/dev/loop1"
  ]
}
mounts
- (Optional) A list of mounts to be mounted in the container. Volume, bind and tmpfs type mounts are supported. fstab style mount options are supported.
type
- (Optional) Supported values are volume, bind or tmpfs. Default: volume.
target
- (Required) Target path in the container.
source
- (Optional) Source path on the host.
options
- (Optional) fstab style mount options. NOTE: For bind mounts, at least rbind and ro are required.
config {
  mounts = [
    {
      type    = "bind"
      target  = "/tmp/t1"
      source  = "/tmp/s1"
      options = ["rbind", "ro"]
    }
  ]
}
Networking
nomad-driver-containerd supports host and bridge networks.
NOTE: host and bridge are mutually exclusive options, and only one of them should be used at a time.
- Host network can be enabled by setting host_network to true in the task config of the job spec (see host_network under Task Configuration).
- Bridge network can be enabled by setting the network block in the task group section of the job spec.
network {
mode = "bridge"
}
You need to install CNI plugins on Nomad client nodes under /opt/cni/bin before you can use bridge networks.
Instructions for installing CNI plugins:
$ curl -L -o cni-plugins.tgz "https://github.com/containernetworking/plugins/releases/download/v1.0.0/cni-plugins-linux-$( [ $(uname -m) = aarch64 ] && echo arm64 || echo amd64)"-v1.0.0.tgz
$ sudo mkdir -p /opt/cni/bin
$ sudo tar -C /opt/cni/bin -xzf cni-plugins.tgz
Also, ensure your Linux operating system distribution has been configured to allow container traffic through the bridge network to be routed via iptables. These tunables can be set as follows:
$ echo 1 > /proc/sys/net/bridge/bridge-nf-call-arptables
$ echo 1 > /proc/sys/net/bridge/bridge-nf-call-ip6tables
$ echo 1 > /proc/sys/net/bridge/bridge-nf-call-iptables
To preserve these settings on startup of a Nomad client node, add a file including the following to /etc/sysctl.d/ or remove the file your Linux distribution puts in that directory.
net.bridge.bridge-nf-call-arptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
Port Forwarding
Nomad supports both static and dynamic port mapping.
- Static ports
Static port mapping can be added in the network block.
network {
mode = "bridge"
port "lb" {
static = 8889
to = 8889
}
}
Here, host port 8889 is mapped to container port 8889.
NOTE: static ports are usually not recommended, except for system or specialized jobs like load balancers.
- Dynamic ports
Dynamic port mapping is also enabled in the network block.
network {
mode = "bridge"
port "http" {
to = 8080
}
}
Here, Nomad will allocate a dynamic port on the host, and that port will be mapped to 8080 in the container.
You can read more about configuring networking under the network block documentation.
Service discovery
Nomad schedules workloads of various types across a cluster of generic hosts. Because of this, placement is not known in advance and you will need to use service discovery to connect tasks to other services deployed across your cluster. Nomad integrates with Consul to provide service discovery and monitoring.
A service block can be added to your job spec to enable service discovery. The service block instructs Nomad to register a service with Consul.
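A minimal sketch of such a service block, assuming a port labeled "http" is defined in the group's network block; the service name and check settings are illustrative:
service {
  name = "redis-service"
  port = "http"

  check {
    type     = "tcp"
    interval = "10s"
    timeout  = "2s"
  }
}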
Authentication
The auth block allows you to set credentials for your private registry, e.g. if you want to pull an image from a private repository in Docker Hub.
The auth block can be set either in Driver Config or Task Config or both. If set in both places, Task Config auth will take precedence over Driver Config auth.
NOTE: In the example below, user and pass are just placeholder values which need to be replaced by the actual username and password when specifying the credentials. The auth block below can be used for both Driver Config and Task Config.
auth {
username = "user"
password = "pass"
}
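For reference, a sketch of the same auth block nested in a task's config block; the image and credentials are placeholders:
config {
  image = "docker.io/myorg/private-image:latest"

  auth {
    username = "user"
    password = "pass"
  }
}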
Plugin Options
enabled
- (Optional) The containerd driver may be disabled on hosts by setting this option to false (defaults to true).
containerd_runtime
- (Required) Runtime for containerd, e.g. io.containerd.runc.v1 or io.containerd.runc.v2.
stats_interval
- (Optional) This value defines how frequently you want to send TaskStats to the Nomad client (defaults to 1 second).
allow_privileged
- (Optional) If set to false, the driver will deny running privileged jobs (defaults to true).
An example of using these plugin options with the new plugin syntax is shown below:
plugin "containerd-driver" {
  config {
    enabled            = true
    containerd_runtime = "io.containerd.runc.v2"
    stats_interval     = "5s"
  }
}
Please note the plugin name should match whatever name you have specified for the external driver in the plugin_dir directory.