Consul
Deploy seamless canary deployments with service splitters
Canary deployments let you release new software gradually and identify and reduce the potential blast radius of a failed software release. This allows you to release new software with near-zero downtime. With canary deployments, you first route a small fraction of traffic to the new version. When you confirm there are no errors, you slowly increase traffic to the new version until you fully promote the new environment.
In this tutorial, you will upgrade a demo application's service to a new version using the canary deployment strategy. In the process, you will learn how to use service resolvers and service splitters.
Scenario overview
HashiCups is a coffee shop demo application. It has a microservices architecture and uses Consul service mesh to securely connect the services. At the beginning of this tutorial, you will deploy the HashiCups microservices (nginx, frontend, public-api, product-api, product-db, and payments) to Consul service mesh. Unlike a typical HashiCups deployment, you will also deploy a second version of the frontend service named frontend-v2.
Next, you will use service resolvers to create service subsets, one for each version (v1 and v2). Then, you will use service splitters to progressively route more traffic to the new version of the frontend service.
Prerequisites
The tutorial assumes that you are familiar with Consul and its core functionality. If you are new to Consul, refer to the Consul Getting Started tutorials collection.
For this tutorial, you will need:
- An AWS account configured for use with Terraform
- kubectl >= 1.27
- aws-cli >= 2.8
- terraform >= 1.4
- consul-k8s v1.2.0
- helm >= 3.0
Clone GitHub repository
Clone the GitHub repository containing the configuration files and resources.
$ git clone https://github.com/hashicorp-education/learn-consul-canary-deployments.git
Change into the directory with the newly cloned repository.
$ cd learn-consul-canary-deployments
This repository contains Terraform configuration to spin up the initial infrastructure and all files to deploy Consul, the sample application, and the resources required for a canary deployment.
Here, you will find the following Terraform configuration:
- eks-cluster.tf defines Amazon EKS cluster deployment resources
- outputs.tf defines outputs you will use to authenticate and connect to your Kubernetes cluster
- providers.tf defines AWS and HCP provider definitions for Terraform
- variables.tf defines variables you can use to customize the tutorial
- vpc.tf defines the AWS VPC resources
Additionally, you will find the following directories:
- api-gw contains the Kubernetes custom resource definitions (CRDs) required to deploy and configure the API gateway resources
- consul contains the Helm chart that configures your Consul instance
- hashicups contains the Kubernetes definitions that deploy HashiCups, the sample application
- k8s-yamls contains the service resolver and service splitter Kubernetes CRDs that you will use for a canary deployment
Deploy infrastructure, Consul, and sample applications
Initialize your Terraform configuration to download the necessary providers and modules.
$ terraform init
Initializing the backend...
Initializing provider plugins...
## ...
Terraform has been successfully initialized!
## ...
Then, create the infrastructure. Confirm the run by entering yes. The infrastructure will take about 15 minutes to deploy. Feel free to explore the next sections of this tutorial while you wait for the resources to deploy.
$ terraform apply
## ...
Do you want to perform these actions?
Terraform will perform the actions described above.
Only 'yes' will be accepted to approve.
Enter a value: yes
## ...
Apply complete! Resources: 52 added, 0 changed, 0 destroyed.
Configure your terminal to communicate with EKS
Now that you have deployed the Kubernetes cluster, configure kubectl to interact with it.
$ aws eks --region $(terraform output -raw region) update-kubeconfig --name $(terraform output -raw cluster_id)
Install Consul
You will now deploy Consul on your Kubernetes cluster with consul-k8s. By default, Consul deploys into its own dedicated namespace (consul). The Consul installation will use the Consul Helm chart file in the consul directory.
Deploy Consul and confirm the installation with a y.
$ consul-k8s install -config-file=consul/values.yaml
==> Checking if Consul can be installed
✓ No existing Consul installations found.
✓ No existing Consul persistent volume claims found
✓ No existing Consul secrets found.
==> Consul Installation Summary
Name: consul
Namespace: consul
##...
✓ Consul installed in namespace "consul".
Verify that you have installed Consul by inspecting the Kubernetes pods in the consul namespace.
$ kubectl --namespace=consul get pods
NAME READY STATUS RESTARTS AGE
consul-connect-injector-69dd594bd4-bqxzz 1/1 Running 0 88s
consul-server-0 1/1 Running 0 88s
consul-server-1 1/1 Running 0 88s
consul-server-2 1/1 Running 0 88s
consul-webhook-cert-manager-5d67468847-6sg7p 1/1 Running 0 88s
Deploy sample application
Now that your Consul service mesh is operational in your cluster, deploy HashiCups, the sample application you will use to explore canary deployments with Consul.
Deploy HashiCups.
$ kubectl apply --filename hashicups/
Check the pods to make sure they are all up and running. Notice there are two frontend pods, one for each version.
$ kubectl get pods
NAME READY STATUS RESTARTS AGE
frontend-84999dc698-28nqg 2/2 Running 0 29s
frontend-v2-6d896675bf-9pbwc 2/2 Running 0 29s
nginx-84985894bd-qg7jp 2/2 Running 0 27s
payments-b4f5c6c58-svfvp 2/2 Running 0 27s
product-api-74c5f98f64-r4pvb 2/2 Running 0 25s
product-api-db-6c49b5dcb4-5h24n 2/2 Running 0 26s
public-api-5dc47dd74-2sfgw 3/3 Running 0 24s
Deploy API Gateway
In addition, deploy an API gateway. This lets you access HashiCups directly through a dedicated external address without any additional commands or configuration.
$ kubectl apply --filename api-gw/consul-api-gateway.yaml && \
kubectl wait --for=condition=accepted gateway/api-gateway --namespace=consul --timeout=90s && \
kubectl apply --filename api-gw/referencegrant.yaml && \
kubectl apply --filename api-gw/rbac.yaml && \
kubectl apply --filename api-gw/ingress.yaml
The following is the expected output:
gateway.gateway.networking.k8s.io/api-gateway created
gateway.gateway.networking.k8s.io/api-gateway condition met
referencegrant.gateway.networking.k8s.io/consul-reference-grant created
clusterrolebinding.rbac.authorization.k8s.io/consul-api-gateway-tokenreview-binding created
clusterrole.rbac.authorization.k8s.io/consul-api-gateway-auth created
clusterrolebinding.rbac.authorization.k8s.io/consul-api-gateway-auth-binding created
clusterrolebinding.rbac.authorization.k8s.io/consul-auth-binding created
serviceintentions.consul.hashicorp.com/api-gateway-hashicups created
httproute.gateway.networking.k8s.io/route-root created
Verify that you have deployed the API gateway. You should find an output similar to the following.
$ kubectl get services --namespace=consul api-gateway
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
api-gateway LoadBalancer 172.20.226.56 a3f9c1d1439e64829a6c819f6981f4e8-1515381885.us-east-2.elb.amazonaws.com 80:32751/TCP 20s
Export the API gateway's external address. You will reference this URL in the next sections to confirm you have configured the service splitters for canary deployments.
$ export APIGW_URL=$(kubectl get services --namespace=consul api-gateway -o jsonpath='{.status.loadBalancer.ingress[0].hostname}') && echo $APIGW_URL
a8b474e3f14df443aba5c4de8fdbcd44-1864726477.us-east-2.elb.amazonaws.com
Configure your CLI to interact with the Consul datacenter
In this section, you will set environment variables in your terminal so you can interact with your Consul datacenter.
Retrieve the ACL bootstrap token from the respective Kubernetes secret and set it as an environment variable.
$ export CONSUL_HTTP_TOKEN=$(kubectl get --namespace consul secrets/bootstrap-token --template={{.data.token}} | base64 -d)
Set the Consul destination address. By default, Consul runs on port 8500 for http and 8501 for https.
$ export CONSUL_HTTP_ADDR=https://$(kubectl get services/consul-ui --namespace consul -o jsonpath='{.status.loadBalancer.ingress[0].hostname}')
Review frontend services
The HashiCups deployment includes two versions of the frontend service, one for v1 and the other for v2.
Open hashicups/frontend-v1.yaml. Notice that this deployment definition specifies a v1 tag and sets NEXT_PUBLIC_FOOTER_FLAG to HashiCups-v1. When you open the HashiCups UI, the footer will show this value. This will help you differentiate between the two frontend versions.
hashicups/frontend-v1.yaml
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontend
spec:
  replicas: 1
  selector:
    matchLabels:
      service: frontend
      app: frontend
  template:
    metadata:
      labels:
        service: frontend
        app: frontend
      annotations:
        prometheus.io/scrape: "true"
        prometheus.io/port: "9102"
        consul.hashicorp.com/connect-inject: "true"
        consul.hashicorp.com/service-meta-version: "1"
        consul.hashicorp.com/service-tags: "v1"
    spec:
      serviceAccountName: frontend
      containers:
        - name: frontend
          image: hashicorpdemoapp/frontend-nginx:v1.0.9
          imagePullPolicy: Always
          ports:
            - containerPort: 3000
          env:
            - name: NEXT_PUBLIC_PUBLIC_API_URL
              value: "/"
            - name: NEXT_PUBLIC_FOOTER_FLAG
              value: "HashiCups-v1"
## ...
Open hashicups/frontend-v2.yaml. Notice how all the values are the same except for the service tags (v2) and the NEXT_PUBLIC_FOOTER_FLAG value.
hashicups/frontend-v2.yaml
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontend-v2
spec:
  replicas: 1
  selector:
    matchLabels:
      service: frontend
      app: frontend
  template:
    metadata:
      labels:
        service: frontend
        app: frontend
      annotations:
        ## ...
        consul.hashicorp.com/service-meta-version: "2"
        consul.hashicorp.com/service-tags: "v2"
    spec:
      serviceAccountName: frontend
      containers:
        - name: frontend
          image: hashicorpdemoapp/frontend-nginx:v1.0.9
          imagePullPolicy: Always
          ports:
            - containerPort: 3000
          env:
            ## ...
            - name: NEXT_PUBLIC_FOOTER_FLAG
              value: "HashiCups-v2"
## ...
Confirm that there is only one Kubernetes frontend service.
$ kubectl get services
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
frontend ClusterIP 172.20.203.253 <none> 3000/TCP 20s
kubernetes ClusterIP 172.20.0.1 <none> 443/TCP 120s
nginx ClusterIP 172.20.24.75 <none> 80/TCP 20s
payments ClusterIP 172.20.146.114 <none> 8080/TCP 20s
product-api ClusterIP 172.20.67.243 <none> 9090/TCP 20s
product-api-db ClusterIP 172.20.104.163 <none> 5432/TCP 20s
public-api ClusterIP 172.20.243.184 <none> 8080/TCP 20s
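A single frontend Kubernetes service can cover both versions because the frontend and frontend-v2 deployments share the same app and service labels. The exact manifest ships in the hashicups directory, but a minimal sketch of such a service, with the selector and port assumed from the deployment definitions above, looks roughly like this:
apiVersion: v1
kind: Service
metadata:
  name: frontend
spec:
  selector:
    app: frontend        # matches pods from both the frontend and frontend-v2 deployments
  ports:
    - port: 3000
      targetPort: 3000
Kubernetes load balances across all matching pods, while Consul keeps the two versions distinct through the service-meta-version and service-tags annotations you reviewed earlier.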
Retrieve the frontend service instances' Consul metadata. Each instance has a different version tag (v1 and v2).
Run the following command to confirm that Consul registered two versions of the frontend service.
$ curl --insecure \
--header "X-Consul-Token: $CONSUL_HTTP_TOKEN" \
$CONSUL_HTTP_ADDR/v1/catalog/service/frontend?pretty
The response should be similar to the following:
[
{
## ...
"Node": "ip-10-0-1-245.us-east-2.compute.internal-virtual",
"Address": "10.0.1.245",
"Datacenter": "dc1",
"ServiceID": "frontend-84999dc698-gxv7r-frontend",
"ServiceName": "frontend",
"ServiceTags": [
"v1"
],
## ...
},
{
## ...
"Node": "ip-10-0-3-176.us-east-2.compute.internal-virtual",
"Address": "10.0.3.176",
"Datacenter": "dc1",
"ServiceID": "frontend-v2-6d896675bf-r7kmg-frontend",
"ServiceName": "frontend",
"ServiceTags": [
"v2"
],
"ServiceAddress": "10.0.3.140",
## ...
}
]
Create service subsets with service resolvers
Service resolvers define how Consul can satisfy a discovery request for a given service name. For example, with service resolvers, you can:
- configure service subsets based on the instance's metadata
- redirect traffic to another service (redirect)
- control where to send traffic if the service is unhealthy (failover; see the example below)
- route traffic to the same instance based on a header (consistent load balancing)
Refer to the service resolver documentation for a full list of use cases and configuration parameters.
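For example, although failover is not part of this tutorial, a hypothetical resolver that reroutes requests to a backup service whenever the frontend instances are unhealthy might look like the following sketch (frontend-backup is an assumed service name):
apiVersion: consul.hashicorp.com/v1alpha1
kind: ServiceResolver
metadata:
  name: frontend
spec:
  failover:
    '*':                          # applies to every subset of the frontend service
      service: frontend-backup    # hypothetical fallback service, not deployed in this tutorial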
For canary deployments, you need to define a service resolver that creates service subsets based on the instance's version.
Inspect k8s-yamls/frontend-resolver.yaml. This defines a service resolver named frontend with two subsets (v1 and v2) based on the instance's metadata (Service.Meta.version). In addition, it specifies v1 as the default subset, so Consul will send all traffic to the v1 frontend service.
k8s-yamls/frontend-resolver.yaml
apiVersion: consul.hashicorp.com/v1alpha1
kind: ServiceResolver
metadata:
  name: frontend
spec:
  defaultSubset: v1
  subsets:
    v1:
      filter: 'Service.Meta.version == 1'
    v2:
      filter: 'Service.Meta.version == 2'
Create the service resolver.
$ kubectl apply -f k8s-yamls/frontend-resolver.yaml
Confirm traffic goes to v1 only.
$ for i in `seq 1 10`; do echo -n "$i. " && curl -s $APIGW_URL | sed -n 's/.*\(HashiCups-v1\).*/\1/p;s/.*\(HashiCups-v2\).*/\1/p' && echo ""; done
1. HashiCups-v1
2. HashiCups-v1
3. HashiCups-v1
4. HashiCups-v1
5. HashiCups-v1
6. HashiCups-v1
7. HashiCups-v1
8. HashiCups-v1
9. HashiCups-v1
10. HashiCups-v1
Open the frontend service's Routing page in the Consul dashboard. Notice the UI shows that the service resolver routes incoming traffic to the v1 frontend service subset.
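If you prefer the API over the UI, you can also read the configuration entry back from Consul with the same curl pattern you used for the catalog query, this time against the /v1/config endpoint:
$ curl --insecure \
    --header "X-Consul-Token: $CONSUL_HTTP_TOKEN" \
    $CONSUL_HTTP_ADDR/v1/config/service-resolver/frontend?pretty
The response echoes the resolver's default subset and filters, which is a quick way to confirm Consul accepted the custom resource.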
Send partial traffic to new service
Service splitters enable you to easily implement canary tests by splitting traffic across services and subsets. Refer to the service splitter documentation for a full list of use cases and configuration parameters.
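A split target does not have to be a subset of the same service. This tutorial only splits between the v1 and v2 subsets, but a splitter can also shift a share of traffic to a different service entirely, as in the following hypothetical sketch (frontend-beta is an assumed service name, not part of this tutorial):
apiVersion: consul.hashicorp.com/v1alpha1
kind: ServiceSplitter
metadata:
  name: frontend
spec:
  splits:
    - weight: 90
      serviceSubset: v1        # 90% of requests stay on the current subset
    - weight: 10
      service: frontend-beta   # 10% goes to a separate, hypothetical service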
Inspect k8s-yamls/frontend-splitter-50-50.yaml. This defines a service splitter for the frontend service, routing 50% of the traffic to the v1 subset and the rest to v2.
k8s-yamls/frontend-splitter-50-50.yaml
apiVersion: consul.hashicorp.com/v1alpha1
kind: ServiceSplitter
metadata:
  name: frontend
spec:
  splits:
    - weight: 50
      serviceSubset: v1
    - weight: 50
      serviceSubset: v2
Create the service splitter.
$ kubectl apply -f k8s-yamls/frontend-splitter-50-50.yaml
Confirm traffic goes to both v1 and v2. There might not be an exact 50/50 split since you are only sending 10 requests.
$ for i in `seq 1 10`; do echo -n "$i. " && curl -s $APIGW_URL | sed -n 's/.*\(HashiCups-v1\).*/\1/p;s/.*\(HashiCups-v2\).*/\1/p' && echo ""; done
1. HashiCups-v2
2. HashiCups-v1
3. HashiCups-v2
4. HashiCups-v2
5. HashiCups-v1
6. HashiCups-v2
7. HashiCups-v1
8. HashiCups-v1
9. HashiCups-v2
10. HashiCups-v1
Alternatively, retrieve the HashiCups URL and open it in your browser to confirm Consul splits traffic between the two versions.
$ echo $APIGW_URL
Find the version in the footer. Refresh the page several times. The footer will show HashiCups-v2 roughly 50% of the time.
Open the frontend service's Routing page in the Consul dashboard. Notice the UI shows that the service splitter will route incoming traffic to both the v1 and v2 frontend service subsets.
Send all traffic to new service
Now that the canary deployment was successful, you will route 100% of the traffic to v2 to fully promote the frontend service.
Inspect k8s-yamls/frontend-splitter-v2-only.yaml. This defines a service splitter for the frontend service, routing all traffic to the v2 subset.
k8s-yamls/frontend-splitter-v2-only.yaml
apiVersion: consul.hashicorp.com/v1alpha1
kind: ServiceSplitter
metadata:
  name: frontend
spec:
  splits:
    - weight: 0
      serviceSubset: v1
    - weight: 100
      serviceSubset: v2
Create the service splitter.
$ kubectl apply -f k8s-yamls/frontend-splitter-v2-only.yaml
Confirm traffic goes to v2 only.
$ for i in `seq 1 10`; do echo -n "$i. " && curl -s $APIGW_URL | sed -n 's/.*\(HashiCups-v1\).*/\1/p;s/.*\(HashiCups-v2\).*/\1/p' && echo ""; done
1. HashiCups-v2
2. HashiCups-v2
3. HashiCups-v2
4. HashiCups-v2
5. HashiCups-v2
6. HashiCups-v2
7. HashiCups-v2
8. HashiCups-v2
9. HashiCups-v2
10. HashiCups-v2
Configure future canary deployments
For future canary deployments, you will need to modify the service resolver to define the new service subset. If no services reference the old subset, we recommend you remove it from the service resolver to keep your environment clean. If you do prune old service subsets, remember to update the defaultSubset to ensure your service points to the right version.
The following is an example of an updated service resolver, where X represents the current version and Y represents the next version.
k8s-yamls/frontend-resolver.yaml
apiVersion: consul.hashicorp.com/v1alpha1
kind: ServiceResolver
metadata:
  name: frontend
spec:
  defaultSubset: vX
  subsets:
    vX:
      filter: 'Service.Meta.version == X'
    vY:
      filter: 'Service.Meta.version == Y'
Once you do this, you need to update your service splitter to reference the new service subsets.
k8s-yamls/frontend-splitter-50-50.yaml
apiVersion: consul.hashicorp.com/v1alpha1
kind: ServiceSplitter
metadata:
  name: frontend
spec:
  splits:
    - weight: 50
      serviceSubset: vX
    - weight: 50
      serviceSubset: vY
Clean up environment
Destroy the Terraform resources to clean up your environment. Enter yes to confirm the destroy operation.
$ terraform destroy
Due to race conditions with the various cloud resources created in this tutorial, you may need to run the destroy operation twice to ensure all resources have been properly removed.
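If you prefer not to watch for a failure and confirm the prompt a second time, you can chain a retry that only runs when the first attempt fails. This is a convenience sketch; the -auto-approve flag skips the interactive confirmation, so double-check that you are in the correct directory before using it.
$ terraform destroy -auto-approve || terraform destroy -auto-approve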
Next steps
In this tutorial, you upgraded a service using canary deployments before fully promoting the new service. This lets you release new versions gradually and identify and reduce the potential blast radius of a failed software release. In the process, you learned how to use Consul's service splitters and service resolvers.
To learn more about service resolvers, refer to the service resolver documentation for a full list of use cases and configuration parameters.
To learn more about service splitters, refer to the service splitters documentation for a full list of use cases and configuration parameters.