Consul with containers
In this tutorial, you will learn how to deploy two joined Consul agents, each running in a separate Docker container. You will also register a service and perform basic maintenance operations. The two Consul agents will form a small datacenter.
By following this tutorial you will learn how to:
- Get the Docker image for Consul
- Configure and run a Consul server
- Configure and run a Consul client
- Interact with the Consul agents
- Perform maintenance operations (backup your Consul data, stop a Consul agent, etc.)
The tutorial is Docker-focused, but the principles you will learn apply to other container runtimes as well.
Security Warning
This tutorial is not for production use. Refer to the [Consul Reference Architecture](/consul/tutorials/production-deploy/reference-architecture) for Consul best practices and the [Docker Documentation](https://docs.docker.com/) for Docker best practices. To extend the concepts you will learn here, check out the deploy a secure local Consul datacenter using Docker Compose tutorial.
Prerequisites
Docker
You will need a local install of Docker running on your machine for this tutorial. You can find the instructions for installing Docker on your specific operating system here.
Consul (Optional)
If you would like to interact with your containerized Consul agents using a local install of Consul, follow the instructions here and install the binary somewhere on your PATH.
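If you install the binary, you can confirm that it is available on your PATH by checking its version.
$ consul version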
Get the Docker image
First, pull the latest image. You will use Consul's official Docker image in this tutorial.
$ docker pull consul
Check the image was downloaded by listing Docker images that match consul.
$ docker images -f 'reference=consul'
REPOSITORY TAG IMAGE ID CREATED SIZE
consul latest 8b6c5f52aa82 18 hours ago 149MB
Configure and run a Consul server
Next, you will use Docker command-line flags to start the agent as a server, configure networking, and bootstrap the datacenter when one server is up.
$ docker run \
-d \
-p 8500:8500 \
-p 8600:8600/udp \
--name=badger \
consul agent -server -ui -node=server-1 -bootstrap-expect=1 -client=0.0.0.0
Since you started the container in detached mode, -d, the process will run in the background. You also set port mapping to your local machine as well as binding the client interface of your agent to 0.0.0.0. This allows you to work directly with the Consul datacenter from your local machine and to access Consul's UI and DNS over localhost. Finally, you are using Docker's default bridge network.
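As a quick check that the server is up and reachable on the mapped port, you can query the status endpoint of Consul's HTTP API from your local machine. This is a sketch; the address it returns is the server's internal RPC address, which depends on your Docker network.
$ curl http://localhost:8500/v1/status/leader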
Note, the Consul Docker image sets up the Consul configuration directory at /consul/config by default. The agent will load any configuration files placed in that directory. The configuration directory is not exposed as a volume and will not persist data. Consul uses it only during startup and does not store any state there. To avoid mounting volumes or copying files to the container, you can also save configuration JSON to that directory via the environment variable CONSUL_LOCAL_CONFIG, which will be covered later in the tutorial.
Discover the server IP address
You can find the IP address of the Consul server by executing the consul members command inside of the badger container.
$ docker exec badger consul members
Node Address Status Type Build Protocol DC Partition Segment
server-1 172.17.0.2:8301 alive server 1.14.3 2 dc1 default <all>
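Alternatively, you can read the same address directly from Docker. This sketch assumes the container is attached to Docker's default bridge network.
$ docker inspect -f '{{ .NetworkSettings.IPAddress }}' badger
172.17.0.2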
Configure and run a Consul client
Next, deploy a containerized Consul client and instruct it to join the server by giving it the server's IP address. Do not use detached mode, so you can reference the client logs during later steps.
$ docker run \
--name=fox \
consul agent -node=client-1 -retry-join=172.17.0.2
==> Starting Consul agent...
Version: '1.14.3'
Build Date: '2022-12-13 17:13:55 +0000 UTC'
Node ID: '2edc1554-13de-1476-fc20-1212aa29126d'
Node name: 'client-1'
Datacenter: 'dc1' (Segment: '')
Server: false (Bootstrap: false)
Client Addr: [127.0.0.1] (HTTP: 8500, HTTPS: -1, gRPC: -1, gRPC-TLS: -1, DNS: 8600)
Cluster Addr: 172.17.0.3 (LAN: 8301, WAN: 8302)
Gossip Encryption: false
Auto-Encrypt-TLS: false
HTTPS TLS: Verify Incoming: false, Verify Outgoing: false, Min Version: TLSv1_2
gRPC TLS: Verify Incoming: false, Min Version: TLSv1_2
Internal RPC TLS: Verify Incoming: false, Verify Outgoing: false (Verify Hostname: false), Min Version: TLSv1_2
==> Log data will now stream in as it occurs:
2022-12-15T18:59:45.065Z [INFO] agent.client.serf.lan: serf: EventMemberJoin: client-1 172.17.0.3
2022-12-15T18:59:45.065Z [INFO] agent.router: Initializing LAN area manager
2022-12-15T18:59:45.065Z [INFO] agent: Started DNS server: address=127.0.0.1:8600 network=tcp
2022-12-15T18:59:45.066Z [INFO] agent: Started DNS server: address=127.0.0.1:8600 network=udp
2022-12-15T18:59:45.066Z [INFO] agent: Starting server: address=127.0.0.1:8500 network=tcp protocol=http
2022-12-15T18:59:45.066Z [INFO] agent: started state syncer
2022-12-15T18:59:45.066Z [INFO] agent: Consul agent running!
2022-12-15T18:59:45.066Z [INFO] agent: Retry join is supported for the following discovery methods: cluster=LAN discovery_methods="aliyun aws azure digitalocean gce hcp k8s linode mdns os packet scaleway softlayer tencentcloud triton vsphere"
2022-12-15T18:59:45.066Z [INFO] agent: Joining cluster...: cluster=LAN
2022-12-15T18:59:45.066Z [INFO] agent: (LAN) joining: lan_addresses=["172.17.0.2"]
2022-12-15T18:59:45.066Z [WARN] agent.router.manager: No servers available
2022-12-15T18:59:45.067Z [ERROR] agent.anti_entropy: failed to sync remote state: error="No known Consul servers"
2022-12-15T18:59:45.070Z [INFO] agent.client.serf.lan: serf: EventMemberJoin: server-1 172.17.0.2
2022-12-15T18:59:45.071Z [INFO] agent.client: adding server: server="server-1 (Addr: tcp/172.17.0.2:8300) (DC: dc1)"
2022-12-15T18:59:45.071Z [INFO] agent: (LAN) joined: number_of_nodes=1
2022-12-15T18:59:45.071Z [INFO] agent: Join cluster completed. Synced with initial agents: cluster=LAN num_agents=1
2022-12-15T18:59:46.454Z [INFO] agent: Synced node info
In a new terminal, check that the client has joined by executing the consul members command again in the Consul server container.
$ docker exec badger consul members
Node Address Status Type Build Protocol DC Partition Segment
server-1 172.17.0.2:8301 alive server 1.14.3 2 dc1 default <all>
client-1 172.17.0.3:8301 alive client 1.14.3 2 dc1 default <default>
Now that you have a small datacenter, you can register a service and perform maintenance operations.
Register a service
Start a service in a third container and register it with the Consul client. The basic service increments a number every time it is accessed and returns that number.
Pull the container.
$ docker pull hashicorp/counting-service:0.0.2
Run the container with port forwarding so that you can access it from your web browser by visiting http://localhost:9001.
$ docker run \
-p 9001:9001 \
-d \
--name=weasel \
hashicorp/counting-service:0.0.2
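Optionally, you can confirm the counting service responds on the forwarded port before registering it. Per the description above, each request returns an incremented count; the exact response format is not shown here.
$ curl localhost:9001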
Next, you will register the counting service with the Consul client by adding a service definition file called counting.json in the directory /consul/config.
$ docker exec fox /bin/sh -c "echo '{\"service\": {\"name\": \"counting\", \"tags\": [\"go\"], \"port\": 9001}}' >> /consul/config/counting.json"
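For readability, the service definition that this command writes to /consul/config/counting.json is equivalent to the following JSON.
{
  "service": {
    "name": "counting",
    "tags": ["go"],
    "port": 9001
  }
}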
Since the Consul client does not automatically detect changes in the configuration directory, you will need to issue a reload command for the same container.
$ docker exec fox consul reload
Configuration reload triggered
If you go back to the terminal window where you started the client, there should be log entries confirming that the Consul client reloaded its configuration and synced the counting service.
2022-12-15T19:02:24.945Z [INFO] agent: Synced service: service=counting
Use Consul DNS to discover the counting service
Now you can query Consul for the location of your service using the following dig command against Consul's DNS.
$ dig @127.0.0.1 -p 8600 counting.service.consul
; <<>> DiG 9.10.6 <<>> @127.0.0.1 -p 8600 counting.service.consul
; (1 server found)
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 61865
;; flags: qr aa rd; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 1
;; WARNING: recursion requested but not available
;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 4096
;; QUESTION SECTION:
;counting.service.consul. IN A
;; ANSWER SECTION:
counting.service.consul. 0 IN A 172.17.0.3
;; Query time: 4 msec
;; SERVER: 127.0.0.1#8600(127.0.0.1)
;; WHEN: Thu Dec 15 13:03:30 CST 2022
;; MSG SIZE rcvd: 68
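The A record only returns the service's IP address. To also discover the registered port, query the SRV record for the same name; the answer will include port 9001 from the service definition.
$ dig @127.0.0.1 -p 8600 counting.service.consul SRV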
You can also access your newly registered service from Consul's UI, http://localhost:8500.
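You can also retrieve the registration details through Consul's HTTP API on the mapped port. The catalog endpoint returns the node, address, and service port for each instance of the counting service.
$ curl http://localhost:8500/v1/catalog/service/counting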
Consul container maintenance operations
Access containers
You can access a containerized Consul datacenter in several different ways.
Docker exec
You can execute Consul commands directly inside of your Consul containers using docker exec.
$ docker exec <container_id> consul members
Node Address Status Type Build Protocol DC Partition Segment
server-1 172.17.0.2:8301 alive server 1.14.3 2 dc1 default <all>
client-1 172.17.0.3:8301 alive client 1.14.3 2 dc1 default <default>
Docker exec attach
You can also issue commands inside of your container by opening an interactive shell and using the Consul binary included in the container.
$ docker exec -it <container_id> /bin/sh
/ # consul members
Node Address Status Type Build Protocol DC Partition Segment
server-1 172.17.0.2:8301 alive server 1.14.3 2 dc1 default <all>
client-1 172.17.0.3:8301 alive client 1.14.3 2 dc1 default <default>
Local Consul binary
If you have a local Consul binary in your PATH, you can also export the CONSUL_HTTP_ADDR environment variable to point to the HTTP address of a remote Consul server.
$ export CONSUL_HTTP_ADDR=<consul_server_ip>:8500
This will allow you to bypass docker exec <container_id> consul <command> and use consul <command> directly.
$ consul members
Node Address Status Type Build Protocol DC Partition Segment
server-1 172.17.0.2:8301 alive server 1.14.3 2 dc1 default <all>
client-1 172.17.0.3:8301 alive client 1.14.3 2 dc1 default <default>
In this tutorial, you are binding your containerized Consul server's client address to 0.0.0.0, which allows you to communicate with your Consul datacenter using a local Consul installation.
$ which consul
/usr/local/bin/consul
By default, the client address is bound to localhost.
$ consul members
Node Address Status Type Build Protocol DC Partition Segment
server-1 172.17.0.2:8301 alive server 1.14.3 2 dc1 default <all>
client-1 172.17.0.3:8301 alive client 1.14.3 2 dc1 default <default>
Stop, start, and restart containers
The official Consul container supports stopping, starting, and restarting. To stop a container, run docker stop.
$ docker stop <container_id>
To start a container, run docker start.
$ docker start <container_id>
To do an in-memory reload, send a SIGHUP to the container.
$ docker kill --signal=HUP <container_id>
Remove servers from the datacenter
As long as there are enough servers in the datacenter to maintain quorum, Consul's autopilot feature will handle removing servers whose containers were stopped. Autopilot's default settings are already configured correctly. If you override them, make sure that the following settings are appropriate.
- cleanup_dead_servers must be set to true to make sure that a stopped container is removed from the datacenter.
- last_contact_threshold should be reasonably small, so that dead servers are removed quickly.
- server_stabilization_time should be sufficiently large (on the order of several seconds) so that unstable servers are not added to the datacenter until they stabilize.
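As a sketch, these settings live under the autopilot stanza of a server's configuration file; the values below are illustrative defaults, not tuned recommendations.
{
  "autopilot": {
    "cleanup_dead_servers": true,
    "last_contact_threshold": "200ms",
    "server_stabilization_time": "10s"
  }
}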
If the container running the currently-elected Consul server leader is stopped, a leader election will be triggered.
When a previously stopped server container is restarted using docker start <container_id>, and it is configured to obtain a new IP, autopilot will add it back to the set of Raft peers with the same node-id and the new IP address, after which it can participate as a server again.
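You can verify the current set of Raft peers at any point by running the consul operator command inside the server container.
$ docker exec badger consul operator raft list-peers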
Backing-up data
You can back up your Consul datacenter using the consul snapshot command.
$ docker exec <container_id> consul snapshot save backup.snap
This will leave the backup.snap snapshot file inside of your container. If you are not saving your snapshot to a persistent volume, then you will need to use docker cp to move your snapshot to a location outside of your container.
$ docker cp <container_id>:backup.snap ./
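To restore from that snapshot later, copy the file back into a container and run consul snapshot restore. A minimal sketch:
$ docker cp ./backup.snap <container_id>:backup.snap
$ docker exec <container_id> consul snapshot restore backup.snap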
Users running the Consul Enterprise Docker containers can run the consul snapshot agent to save backups automatically. Consul Enterprise's snapshot agent also allows you to save snapshots to Amazon S3 and Azure Blob Storage.
Environment variables
The Consul Docker image supports configuration via environment variables passed in from the Docker command line.
Consul agent
CONSUL_LOCAL_CONFIG, CONSUL_CLIENT_INTERFACE, and CONSUL_BIND_INTERFACE are passed to the container with the -e flag.
CONSUL_LOCAL_CONFIG supports passing a JSON string of keys and values. To set the datacenter, run the agent in server mode, and turn on debugging with enable_debug, use the following snippet when launching the Consul agent.
$ docker run \
-d \
-e CONSUL_LOCAL_CONFIG='{
"datacenter":"us_west",
"server":true,
"enable_debug":true
}' \
consul agent -server -bootstrap-expect=3
CONSUL_CLIENT_INTERFACE is a string value representing the name of the interface on which Consul exposes DNS, gRPC, and HTTP APIs.
CONSUL_BIND_INTERFACE is a string value representing the interface Consul uses for internal Consul cluster communication.
At runtime, these environment variables are passed as values for the -bind and -client arguments for the consul binary.
A common implementation pattern includes using the same interface for the client and bind arguments. This isn't required; you have the option of configuring different interfaces for each value. An example is shown below.
$ docker run \
-d \
-e CONSUL_CLIENT_INTERFACE=en0 \
-e CONSUL_BIND_INTERFACE=en1 \
consul agent -server -bootstrap-expect=3
Setting the CONSUL_ALLOW_PRIVILEGED_PORTS environment variable to true runs setcap on the Consul binary, allowing it to bind to privileged ports. In this example, consul agent runs a DNS server on port 53, a privileged port, and sets the upstream DNS server to 8.8.8.8 via the -recursor argument.
$ docker run -d --net=host -e 'CONSUL_ALLOW_PRIVILEGED_PORTS=true' consul agent -dns-port=53 -recursor=8.8.8.8
Not all Docker storage backends support this feature (notably AUFS). Refer to the AUFS issue in the docker-vault repository on GitHub for more information.
Next steps
In this tutorial, you learned to deploy a containerized Consul datacenter. You also learned how to deploy a containerized service and how to configure your Consul client to register that service with your Consul datacenter.
You can continue learning how to deploy a secure Consul datacenter by completing the deploy a secure local Consul datacenter using Docker Compose tutorial.
For additional reference documentation on the official Docker image for Consul, refer to the following websites: