Consul
Securely connect your services with Consul service mesh
In the previous tutorial, you deployed Consul client agents and registered services to your Consul catalog.
In this tutorial, you will introduce zero trust security in your network by implementing Consul service mesh. This will enable secure service-to-service communication and allow you to leverage Consul's full suite of features.
To do this, you will edit the service definitions on your Consul clients, launch Envoy sidecar proxies, and create service intentions to allow traffic between the services in your network.
In this tutorial, you will:
- Review and create intentions to manage traffic permissions
- Modify Consul services' configuration for Consul service mesh
- Start an Envoy sidecar proxy for each service in the mesh
- Restart the services to listen on the localhost interface
Note
This tutorial is part of the Get Started collection. For this reason, all the steps used to configure Consul agents and services are shown and must be executed manually. If you are setting up a production environment, you should codify and automate the installation and deployment process. Refer to the VM production patterns tutorial collection for Consul production deployment best practices.
Tutorial scenario
This tutorial uses HashiCups, a demo coffee shop application made up of several microservices running on VMs.
At the beginning of the tutorial, you have a Consul datacenter with one server and four clients running on VMs. The services connect directly to each other using the VMs' IP addresses and can access every service in the network.
By the end of this tutorial, you will have a fully deployed Consul service mesh with Envoy sidecar proxies running alongside each service. The services will be configured so that they are unreachable unless explicitly allowed through Consul service intentions.
Prerequisites
If you completed the previous tutorial, the infrastructure and all needed prerequisites are already in place.
Log in to the bastion host VM
The Terraform output provides useful information, including the bastion host's IP address.
Log in to the bastion host using SSH.
$ ssh -i certs/id_rsa.pem admin@`terraform output -raw ip_bastion`
Verify Envoy binary
Verify that Envoy is installed on each of the client nodes:
- NGINX: hashicups-nginx
- Frontend: hashicups-frontend
- API: hashicups-api
- Database: hashicups-db
For example, to check the Envoy installation on the Database VM:
Log in to the Database VM from the bastion host.
$ ssh -i certs/id_rsa hashicups-db
Verify that the Envoy binary is installed.
$ envoy --version
envoy version: cfa32deca25ac57c2bbecdad72807a9b13493fc1/1.26.4/Clean/RELEASE/BoringSSL
Check that the Envoy version is compatible with the running Consul version using the related compatibility matrix.
Return to the bastion host by exiting the SSH session.
$ exit
Repeat the steps for all VMs you want to add to the Consul service mesh.
Configure environment
This tutorial and interactive lab environment uses scripts in the tutorial's GitHub repository to generate the Consul configuration files for your client agents.
Define scenario environment variables again.
$ export DATACENTER="dc1"; \
export DOMAIN="consul"; \
export OUTPUT_FOLDER="./assets/scenario/conf/"; \
export CONSUL_CONFIG_DIR="/etc/consul.d/"
Configure the Consul CLI to interact with the Consul server.
$ export CONSUL_HTTP_ADDR="https://consul-server-0:8443"; \
export CONSUL_HTTP_SSL=true; \
export CONSUL_CACERT="${OUTPUT_FOLDER}secrets/consul-agent-ca.pem"; \
export CONSUL_TLS_SERVER_NAME="server.${DATACENTER}.${DOMAIN}"
To interact with Consul, you need to set CONSUL_HTTP_TOKEN to a valid Consul token. For this tutorial, you will use the token you created during the ACL bootstrap.
If you completed the previous tutorial, the bootstrap token is located in the home directory in a file named acl-token-bootstrap.json.
$ export CONSUL_HTTP_TOKEN=`cat ./acl-token-bootstrap.json | jq -r ".SecretID"`
Review and create intentions
The initial Consul configuration denies all service connections by default. We recommend this setting in production environments to follow the "least-privilege" principle, by restricting all network access unless explicitly defined.
Intentions let you allow and restrict access between services. Intentions are destination-oriented: you create the intentions for the destination, then define which services can access it.
The following intentions are required for HashiCups:
Tip
Notice that these descriptions are phrased from the destination's point of view: each lists the destination service first, followed by the source services allowed to reach it.
- The db service needs to be reached by the api service.
- The api service needs to be reached by the nginx service.
- The frontend service needs to be reached by the nginx service.
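Although the tutorial's script generates these files for you, it helps to see the shape of a Consul service-intentions config entry. A minimal sketch of what the intention for the db service might look like (the generated intention-db.hcl may differ in detail):

```hcl
# Sketch of a service-intentions config entry; the file generated by the
# tutorial's script may differ in detail.
Kind = "service-intentions"
Name = "hashicups-db"
Sources = [
  {
    # Allow the api service to connect to hashicups-db.
    Name   = "hashicups-api"
    Action = "allow"
  }
]
```

Any service not listed in Sources falls back to the default policy, which in this scenario denies traffic.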
Use the provided script to generate service intentions.
$ bash ops/scenarios/99_supporting_scripts/generate_consul_service_intentions.sh
Parameter Check
Create global proxy configuration
Create intention configuration files
Check the files generated by the script.
$ tree ${OUTPUT_FOLDER}global
./assets/scenario/conf/global
├── config-global-proxy-default.hcl
├── config-global-proxy-default.json
├── intention-allow-all.hcl
├── intention-allow-all.json
├── intention-api.hcl
├── intention-api.json
├── intention-db.hcl
├── intention-db.json
├── intention-frontend.hcl
└── intention-frontend.json
0 directories, 10 files
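The config-global-proxy-default files hold a proxy-defaults config entry that applies to every sidecar proxy in the mesh. As a sketch of the standard format (the generated file may differ):

```hcl
# Sketch of a mesh-wide proxy-defaults config entry;
# the generated file may differ.
Kind = "proxy-defaults"
Name = "global"
Config {
  # Configure sidecars to treat upstream traffic as HTTP.
  protocol = "http"
}
```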
Finally, apply the intentions to your Consul datacenter. Use consul config write to create the following intentions.
Create the intentions for the db service.
$ consul config write ${OUTPUT_FOLDER}global/intention-db.hcl
Config entry written: service-intentions/hashicups-db
Create the intentions for the api service.
$ consul config write ${OUTPUT_FOLDER}global/intention-api.hcl
Config entry written: service-intentions/hashicups-api
Create the intentions for the frontend service.
$ consul config write ${OUTPUT_FOLDER}global/intention-frontend.hcl
Config entry written: service-intentions/hashicups-frontend
Apply new Consul client configuration
Before you can apply the new service configuration, you must copy the definition files to each Consul client VM.
Tip
In the interactive lab environment, the HashiCups application nodes have a running SSH server. As a result, you can use the ssh and scp commands to perform the following operations. If the nodes in your personal environment do not have an SSH server, you may need to use a different approach to create the configuration directories and copy the files.
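For reference, enrolling a service in the mesh means its definition gains a connect block that registers a sidecar proxy. A minimal sketch of what one of the copied service_mesh definitions likely resembles (the service name and port are illustrative; the generated files may differ):

```hcl
# Sketch of a mesh-enabled service definition; the name and port are
# illustrative and the generated files may differ.
service {
  name = "hashicups-db"
  port = 5432

  connect {
    # Registers a sidecar proxy for this service in the Consul catalog.
    sidecar_service {}
  }
}
```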
First, define the remote Consul configuration directory.
$ export CONSUL_REMOTE_CONFIG_DIR=/etc/consul.d/
Then, copy the configuration into the client VMs.
$ scp -i certs/id_rsa ${OUTPUT_FOLDER}hashicups-db/svc/service_mesh/* hashicups-db:${CONSUL_REMOTE_CONFIG_DIR}; \
scp -i certs/id_rsa ${OUTPUT_FOLDER}hashicups-api/svc/service_mesh/* hashicups-api:${CONSUL_REMOTE_CONFIG_DIR}; \
scp -i certs/id_rsa ${OUTPUT_FOLDER}hashicups-frontend/svc/service_mesh/* hashicups-frontend:${CONSUL_REMOTE_CONFIG_DIR}; \
scp -i certs/id_rsa ${OUTPUT_FOLDER}hashicups-nginx/svc/service_mesh/* hashicups-nginx:${CONSUL_REMOTE_CONFIG_DIR}
Start sidecar proxies on client VMs
Once you have copied the configuration files to the VMs, log in to each Consul client VM, reload the Consul agent, and start the Envoy sidecar proxy.
Start sidecar proxy for Database
Log in to the Database VM from the bastion host.
$ ssh -i certs/id_rsa hashicups-db
Define the Consul configuration directory.
$ export CONSUL_CONFIG_DIR=/etc/consul.d/
Set up a valid token to interact with the Consul agent.
$ export CONSUL_HTTP_TOKEN=`cat ${CONSUL_CONFIG_DIR}/agent-acl-tokens.hcl | grep agent | awk '{print $3}'| sed 's/"//g'`
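The pipeline above extracts the quoted agent token from the token file. To illustrate how it works against a placeholder file of the same shape (the token value here is made up):

```shell
# Create a placeholder resembling agent-acl-tokens.hcl (made-up token value).
printf 'acl {\n  tokens {\n    agent = "11111111-2222-3333-4444-555555555555"\n  }\n}\n' > /tmp/demo-tokens.hcl
# Same extraction as above: match the agent line, take the third
# whitespace-separated field, then strip the surrounding quotes.
cat /tmp/demo-tokens.hcl | grep agent | awk '{print $3}' | sed 's/"//g'
```

This prints the bare token value, ready to assign to CONSUL_HTTP_TOKEN.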
Reload the Consul configuration.
$ consul reload
Configuration reload triggered
Finally, start the Envoy sidecar proxy for the service.
$ /usr/bin/consul connect envoy \
-token=${CONSUL_HTTP_TOKEN} \
-envoy-binary /usr/bin/envoy \
-sidecar-for hashicups-db-1 > /tmp/sidecar-proxy.log 2>&1 &
The command starts the Envoy sidecar proxy in the background so that it does not lock the terminal. You can access the Envoy logs in the /tmp/sidecar-proxy.log file.
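The shell pattern used here (redirect both output streams to a log file, then background the job) can be sketched with a stand-in command, since Envoy itself requires a running Consul agent:

```shell
# Stand-in for a long-running process: send stdout and stderr to a log
# file and run it in the background so the terminal stays usable.
echo "sidecar started" > /tmp/demo-sidecar.log 2>&1 &
# $! holds the PID of the most recent background job; wait for it to
# finish before inspecting the log.
wait $!
cat /tmp/demo-sidecar.log
```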
Once the Envoy sidecar has started, exit the SSH session to return to the bastion host.
$ exit
Start sidecar proxy for API
Log in to the API VM from the bastion host.
$ ssh -i certs/id_rsa hashicups-api
Define the Consul configuration directory and set up a valid token to interact with the Consul agent.
$ export CONSUL_CONFIG_DIR=/etc/consul.d/ && \
export CONSUL_HTTP_TOKEN=`cat ${CONSUL_CONFIG_DIR}/agent-acl-tokens.hcl | grep agent | awk '{print $3}'| sed 's/"//g'`
Reload the Consul configuration.
$ consul reload
Configuration reload triggered
Finally, start the Envoy sidecar proxy for the service.
$ /usr/bin/consul connect envoy \
-token=${CONSUL_HTTP_TOKEN} \
-envoy-binary /usr/bin/envoy \
-sidecar-for hashicups-api-1 > /tmp/sidecar-proxy.log 2>&1 &
The command starts the Envoy sidecar proxy in the background so that it does not lock the terminal. You can access the Envoy logs in the /tmp/sidecar-proxy.log file.
Once the Envoy sidecar has started, exit the SSH session to return to the bastion host.
$ exit
Start sidecar proxy for Frontend
Log in to the Frontend VM from the bastion host.
$ ssh -i certs/id_rsa hashicups-frontend
Define the Consul configuration directory and set up a valid token to interact with the Consul agent.
$ export CONSUL_CONFIG_DIR=/etc/consul.d/ && \
export CONSUL_HTTP_TOKEN=`cat ${CONSUL_CONFIG_DIR}/agent-acl-tokens.hcl | grep agent | awk '{print $3}'| sed 's/"//g'`
Reload the Consul configuration.
$ consul reload
Configuration reload triggered
Finally, start the Envoy sidecar proxy for the service.
$ /usr/bin/consul connect envoy \
-token=${CONSUL_HTTP_TOKEN} \
-envoy-binary /usr/bin/envoy \
-sidecar-for hashicups-frontend-1 > /tmp/sidecar-proxy.log 2>&1 &
The command starts the Envoy sidecar proxy in the background so that it does not lock the terminal. You can access the Envoy logs in the /tmp/sidecar-proxy.log file.
Once the Envoy sidecar has started, exit the SSH session to return to the bastion host.
$ exit
Start sidecar proxy for NGINX
Log in to the NGINX VM from the bastion host.
$ ssh -i certs/id_rsa hashicups-nginx
Define the Consul configuration directory and set up a valid token to interact with the Consul agent.
$ export CONSUL_CONFIG_DIR=/etc/consul.d/ && \
export CONSUL_HTTP_TOKEN=`cat ${CONSUL_CONFIG_DIR}/agent-acl-tokens.hcl | grep agent | awk '{print $3}'| sed 's/"//g'`
Reload the Consul configuration.
$ consul reload
Configuration reload triggered
Finally, start the Envoy sidecar proxy for the service.
$ /usr/bin/consul connect envoy \
-token=${CONSUL_HTTP_TOKEN} \
-envoy-binary /usr/bin/envoy \
-sidecar-for hashicups-nginx-1 > /tmp/sidecar-proxy.log 2>&1 &
The command starts the Envoy sidecar proxy in the background so that it does not lock the terminal. You can access the Envoy logs in the /tmp/sidecar-proxy.log file.
Once the Envoy sidecar has started, exit the SSH session to return to the bastion host.
$ exit
Restart services to listen on localhost
Now that the service configuration and intentions are applied and an Envoy sidecar is running for each service, all the components of the Consul service mesh are in place. The Consul sidecar proxies will route the services' traffic to the target destination.
Since traffic now flows through the sidecar proxies, you no longer need to expose your services externally. As a result, reconfigure them to listen only on the loopback interface to improve overall security.
Restart the services so they listen on the localhost interface.
$ ssh -i certs/id_rsa hashicups-db "bash -c 'bash ./start_service.sh local'"; \
ssh -i certs/id_rsa hashicups-api "bash -c 'bash ./start_service.sh local'"; \
ssh -i certs/id_rsa hashicups-frontend "bash -c 'bash ./start_service.sh local'"; \
ssh -i certs/id_rsa hashicups-nginx "bash -c 'bash ./start_service.sh mesh'"
This tutorial still configures the NGINX service to listen on the VM's IP address so you can access it remotely. For production, we recommend using an ingress gateway to manage access to the service.
Retrieve the HashiCups UI address from Terraform.
$ terraform output -raw ui_hashicups
Open the address in a browser.
Confirm that HashiCups still works despite its services being configured to communicate on localhost. The Envoy sidecar proxies route each service's local traffic to the relevant upstream.
Next steps
In this tutorial, you learned how to migrate your Consul services from service discovery to Consul service mesh by updating each service's definitions, starting Envoy sidecar proxies for each service, and updating the services' dependencies to bind to localhost.
In the process, you integrated zero trust security in your network and learned how to explicitly define service-to-service permissions using intentions.
At this point, the NGINX service that exposes the application externally is still accessible over an insecure connection. While it is possible to configure it for secure traffic using TLS, Consul offers an integrated solution for this.
If you want to stop at this tutorial, you can destroy the infrastructure now.
From the ./self-managed/infrastruture/aws folder of the repository, use terraform to destroy the infrastructure.
$ terraform destroy --auto-approve
In the next tutorial, you will learn how to add a Consul API Gateway to your service mesh and secure external network access to applications and services running in your Consul service mesh.
For more information about the topics covered in this tutorial, refer to the following resources: