Consul
Upgrade services with canary deployments
Canary deployments let you release new software gradually, and identify and mitigate the potential blast radius of a failed software release. This allows you to release new software with near-zero downtime. With canary deployments, you initially route a small fraction of traffic to the new version. When you confirm there are no errors, you incrementally increase traffic to the new version until you fully promote the new environment.
In this tutorial, you will upgrade the HashiCups frontend service to a new version using the canary deployment strategy. In the process, you will learn how to use service resolvers and service splitters.
Prerequisites
This tutorial assumes you have completed the previous tutorial.
In addition, you must have the following installed locally.
Configure Consul CLI to HCP Consul Dedicated cluster
In your terminal, navigate to the directory that contains the Terraform configuration you used to deploy the end-to-end HCP Consul Dedicated deployment.
$ cd ~/learn-hcp-consul-end-to-end-deployment
Now, configure the Consul CLI to connect to your HCP Consul Dedicated cluster by retrieving the HCP Consul Dedicated address and root token.
First, retrieve the HCP Consul Dedicated cluster URL and export it as an environment variable named CONSUL_HTTP_ADDR.
$ export CONSUL_HTTP_ADDR=$(terraform output -raw consul_url)
Then, retrieve the root token and export it as an environment variable named CONSUL_HTTP_TOKEN.
$ export CONSUL_HTTP_TOKEN=$(terraform output -raw consul_root_token)
Confirm your Consul CLI can connect to your HCP Consul Dedicated cluster by retrieving its members.
$ consul members
Node Address Status Type Build Protocol DC Segment
ip-172-25-33-42 172.25.33.42:8301 alive server 1.11.8+ent 2 consul-quickstart-1663917827001 <all>
ip-10-0-4-201.us-west-2.compute.internal 10.0.4.72:8301 alive client 1.11.8+ent 2 consul-quickstart-1663917827001 <default>
ip-10-0-5-235.us-west-2.compute.internal 10.0.5.247:8301 alive client 1.11.8+ent 2 consul-quickstart-1663917827001 <default>
ip-10-0-6-135.us-west-2.compute.internal 10.0.6.184:8301 alive client 1.11.8+ent 2 consul-quickstart-1663917827001 <default>
Review HashiCups services
The end-to-end deployment deploys two versions of the frontend service, one for v1 and the other for v2.
Retrieve the frontend service instances' Consul metadata. Each instance will have a different version (v1 or v2).
Tip
Your output may be different depending on your end-to-end deployment.
Run the following command to confirm two versions of the frontend service.
$ curl --silent --header "Authorization: Bearer $CONSUL_HTTP_TOKEN" $CONSUL_HTTP_ADDR/v1/catalog/service/frontend | jq .[].ServiceMeta
{
"external-source": "nomad",
"version": "v1"
}
{
"external-source": "nomad",
"version": "v2"
}
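If you only want the version labels, you can narrow the jq filter to the version field. The sketch below runs against a saved copy of the catalog response so it is self-contained; against the live cluster, pipe the curl command from above into the same jq filter.

```shell
# Sample catalog response (trimmed to the fields we care about), saved locally
# for illustration. Against a live cluster, pipe the curl command above into
# the jq filter instead of reading this file.
cat > /tmp/frontend-catalog.json <<'EOF'
[
  {"ServiceMeta": {"external-source": "nomad", "version": "v1"}},
  {"ServiceMeta": {"external-source": "nomad", "version": "v2"}}
]
EOF

# Extract just the version of each frontend instance.
jq -r '.[].ServiceMeta.version' /tmp/frontend-catalog.json
```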
Create service resolvers
Service resolvers define how Consul can satisfy a discovery request for a given service name. For example, with service resolvers, you can:
- configure service subsets based on the instance's metadata
- redirect traffic to another service (redirect)
- control where to send traffic if frontend is unhealthy (failover)
- route traffic to the same instance based on a header (consistent load balancing)
Refer to the service resolver documentation for a full list of use cases and configuration parameters.
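As an illustration of the failover use case, a resolver like the following sketch would send frontend traffic to another datacenter when no healthy local instances remain. This is not part of this deployment; the dc2 datacenter name is a placeholder for illustration only.

```hcl
Kind = "service-resolver"
Name = "frontend"

# Placeholder failover policy: "*" applies to every subset, and dc2 is a
# hypothetical secondary datacenter, not part of this tutorial's deployment.
Failover = {
  "*" = {
    Datacenters = ["dc2"]
  }
}
```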
For canary deployments, you will define a service resolver that creates service subsets based on the instance's version.
Create a file named frontend-resolver.hcl. This defines a service resolver named frontend with two subsets (v1 and v2) based on the instance's metadata (Service.Meta.version).
frontend-resolver.hcl
Kind = "service-resolver"
Name = "frontend"
DefaultSubset = "v1"
Subsets = {
v1 = {
Filter = "Service.Meta.version == v1"
}
v2 = {
Filter = "Service.Meta.version == v2"
}
}
$ consul config write frontend-resolver.hcl
Config entry written: service-resolver/frontend
Upgrade via canary deployment
Service splitters enable you to easily implement canary tests by splitting traffic across services and subsets. Refer to the service splitter documentation for a full list of use cases and configuration parameters.
Create a file named frontend-splitter.hcl. This defines a service splitter for the frontend service, routing 70% of the traffic to v1 and the rest to v2.
frontend-splitter.hcl
Kind = "service-splitter"
Name = "frontend"
Splits = [
{
Weight = 70
ServiceSubset = "v1"
},
{
Weight = 30
ServiceSubset = "v2"
},
]
$ consul config write frontend-splitter.hcl
Config entry written: service-splitter/frontend
Confirm canary deployment
Open the frontend service's Routing page in the HCP Consul Dedicated dashboard. Notice that the UI shows the service splitter routing incoming traffic to the different frontend service subsets (v1 and v2), as defined by the service resolver.
Retrieve the HashiCups URL and open it in your browser to confirm it works. Your URL may be different depending on the end-to-end deployment you selected.
$ terraform output hashicups_url
"http://a596dcb5be98f445b84c3ab864dec385-440577046.us-west-2.elb.amazonaws.com"
Find the version in the footer. Refresh the page several times; the footer will show Hashicups - v2 roughly 30% of the time.
Promote frontend service
Now that the canary deployment was successful, you will route 100% of the
traffic to v2
to fully promote the frontend
service.
Update frontend-splitter.hcl to route all traffic to v2.
frontend-splitter.hcl
Kind = "service-splitter"
Name = "frontend"
Splits = [
{
Weight = 0
ServiceSubset = "v1"
},
{
Weight = 100
ServiceSubset = "v2"
},
]
Run the following command to configure the service splitter to route all traffic to v2.
$ consul config write frontend-splitter.hcl
Config entry written: service-splitter/frontend
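Because split weights must add up to 100, you can equivalently drop the zero-weight v1 entry and list a single split once v1 is retired. A minimal sketch:

```hcl
Kind = "service-splitter"
Name = "frontend"

# Equivalent to the 0/100 split above: all frontend traffic goes to v2.
Splits = [
  {
    Weight        = 100
    ServiceSubset = "v2"
  },
]
```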
Open HashiCups and confirm that "Hashicups - v2" appears consistently in the footer.
Clean up resources
Now that you have completed the tutorial, destroy the resources you created with Terraform. Enter yes to confirm the destruction process.
$ terraform destroy
## ...
Destroy complete! Resources: xx destroyed.
Next steps
In this tutorial, you upgraded a service using a canary deployment before fully promoting the new version. This lets you release new versions gradually and limit the potential blast radius of a failed software release. In the process, you learned how to use Consul's service splitters and service resolvers.
Learn advanced traffic splitting using service routers, service splitters, and service resolvers by completing the Deploy seamless canary deployments with service splitters tutorial.
To learn more about service resolvers, refer to the service resolver documentation for a full list of use cases and configuration parameters.
To learn more about service splitters, refer to the service splitter documentation for a full list of use cases and configuration parameters.