Consul
Connect services between Consul datacenters with cluster peering
Service meshes provide secure communication across your services within and across your infrastructure, including on-premises and cloud environments. As your organization scales, it may need to deploy services in multiple cloud providers in different regions. Cluster peering enables you to connect multiple Consul clusters, letting services in one cluster securely communicate with services in the other.
Cluster peering removes some of the administrative burdens associated with WAN federation. Because there is no primary cluster, administrative boundaries are clearly separated per cluster, and changes in one Consul cluster do not affect peered clusters. For more information on the differences between WAN federation and cluster peering, refer to the cluster peering documentation.
In this tutorial you will:
- Deploy two managed Kubernetes environments with Terraform
- Deploy Consul in each Kubernetes cluster
- Deploy the microservices from HashiCups, a demo application, in both Kubernetes clusters
- Peer the two Consul clusters
- Connect the services across the peered service mesh
Scenario overview
HashiCups is a coffee-shop demo application. It has a microservices architecture and uses Consul service mesh to securely connect the services. In this tutorial, you will deploy HashiCups services on Kubernetes clusters in two different regions. By peering the Consul clusters, the frontend services in one region will be able to communicate with the API services in the other.
HashiCups uses the following microservices:
- The nginx service is an NGINX instance that routes requests to the frontend microservice and serves as a reverse proxy to the public-api service.
- The frontend service provides a React-based UI.
- The public-api service is a GraphQL public API that communicates with the products-api and the payments services.
- The products-api service stores the core HashiCups application logic, including authentication, coffee (product) information, and orders.
- The postgres service is a Postgres database instance that stores user, product, and order information.
- The payments service is a gRPC-based Java application that handles customer payments.
Prerequisites
If you are not familiar with Consul's core functionality, refer to the Consul Getting Started tutorials collection first.
For this tutorial, you will need:
- A Google Cloud account configured for use with Terraform
- The gcloud CLI with the gke-gcloud-auth-plugin plugin installed
This tutorial uses Terraform automation to deploy the demo environment. You do not need to know Terraform to successfully complete this tutorial.
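Before you continue, you can optionally confirm that the gcloud CLI and the GKE auth plugin are installed. These are standard checks, not specific to this tutorial:
$ gcloud --version
$ gke-gcloud-auth-plugin --version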
Clone example repository
Clone the GitHub repository containing the configuration files and resources.
$ git clone https://github.com/hashicorp-education/learn-consul-cluster-peering.git
Change into the directory with the newly cloned repository.
$ cd learn-consul-cluster-peering
This repository contains the following:
- The google-cloud/dc1 directory contains Terraform configuration to deploy a GKE cluster in us-central1-a.
- The google-cloud/dc2 directory contains Terraform configuration to deploy a GKE cluster in us-west1-a.
- The k8s-yamls directory contains YAML configuration files that support this tutorial.
- The hashicups-v1.0.2 directory contains YAML configuration files for deploying HashiCups.
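Optionally, list the repository contents to confirm this layout before you continue:
$ ls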
Deploy Kubernetes clusters and Consul
In this section, you will create a Kubernetes cluster in each datacenter and install Consul to provide service mesh functionality.
Initialize the Terraform configuration for dc1 to download the necessary providers and modules.
$ terraform -chdir=google-cloud/dc1 init
Initializing the backend...
## ...
Initializing provider plugins...
## ...
Terraform has been successfully initialized!
## ...
Use the terraform.tfvars.example template file to create a terraform.tfvars file in the dc1 folder.
$ cp google-cloud/dc1/terraform.tfvars.example google-cloud/dc1/terraform.tfvars
Edit this file to specify your project ID and zone. By default, dc1 deploys to us-central1-a.
google-cloud/dc1/terraform.tfvars
project = "xx-000000000000000000000000000"
zone = "us-central1-a"
Open a new terminal window and initialize the Terraform configuration for dc2.
$ terraform -chdir=google-cloud/dc2 init
Initializing the backend...
## ...
Initializing provider plugins...
## ...
Terraform has been successfully initialized!
## ...
Next, use the terraform.tfvars.example template file to create a terraform.tfvars file in the dc2 folder.
$ cp google-cloud/dc2/terraform.tfvars.example google-cloud/dc2/terraform.tfvars
Edit this file to specify your project ID and zone. By default, dc2 deploys to us-west1-a.
google-cloud/dc2/terraform.tfvars
project = "xx-000000000000000000000000000"
zone = "us-west1-a"
Then, deploy the resources for dc1. Confirm the run by entering yes. This will take about 15 minutes to deploy your infrastructure.
$ terraform -chdir=google-cloud/dc1 apply
## ...
Plan: 33 to add, 0 to change, 0 to destroy.
## ...
Do you want to perform these actions?
Terraform will perform the actions described above.
Only 'yes' will be accepted to approve.
Enter a value: yes
## ...
Apply complete! Resources: 33 added, 0 changed, 0 destroyed.
Outputs:
get-credentials_command = "gcloud container clusters get-credentials --zone us-central1-a dc1"
rename-context_command = "kubectl config rename-context gke_xx-000000000000000000000000000_us-central1-a_dc1 dc1"
Deploy the resources for dc2. Confirm the run by entering yes.
$ terraform -chdir=google-cloud/dc2 apply
## ...
Plan: 15 to add, 0 to change, 0 to destroy.
## ...
Apply complete! Resources: 15 added, 0 changed, 0 destroyed.
Review Consul deployment
Terraform automatically deploys Consul into each datacenter using the Consul Helm chart with the values files consul-helm-dc1.yaml and consul-helm-dc2.yaml located in the ./k8s-yamls/ folder. The following configuration shows the mandatory features for cluster peering enabled in dc1. The features enabled in dc2 are equivalent.
k8s-yamls/consul-helm-dc1.yaml
global:
  datacenter: dc1
  ##...
  peering:
    enabled: true
  tls:
    enabled: true
##...
meshGateway:
  enabled: true
  replicas: 1
##...
Configure kubectl
Now that you have deployed the two datacenters, configure the kubectl tool to interact with the Kubernetes cluster in the first datacenter. Notice that these commands store the cluster connection information for dc1.
Set the current Google Cloud project, and then fetch the related credentials.
$ eval $(terraform -chdir=google-cloud/dc1 output -raw set-project_command)
Updated property [core/project].
$ eval $(terraform -chdir=google-cloud/dc1 output -raw get-credentials_command)
Fetching cluster endpoint and auth data.
kubeconfig entry generated for dc1.
Then, rename the first cluster context to dc1. This lets you target this specific Kubernetes cluster in later commands.
$ eval $(terraform -chdir=google-cloud/dc1 output -raw rename-context_command)
Context "gke_hc-f7aeccc6321b46ccb29e97f1481_us-central1-a_dc1" renamed to "dc1".
Configure the kubectl tool to interact with the Kubernetes cluster in the second datacenter. Notice that this command stores the cluster connection information for dc2.
$ eval $(terraform -chdir=google-cloud/dc2 output -raw get-credentials_command)
Fetching cluster endpoint and auth data.
kubeconfig entry generated for dc2.
Then, rename the second cluster context to dc2.
$ eval $(terraform -chdir=google-cloud/dc2 output -raw rename-context_command)
Context "gke_hc-f7aeccc6321b46ccb29e97f1481_us-west1-a_dc2" renamed to "dc2".
Verify that Consul was successfully deployed by Terraform in dc1 by inspecting the Kubernetes pods in the consul namespace.
$ kubectl --context=dc1 --namespace=consul get pods
NAME READY STATUS RESTARTS AGE
api-gateway-5bd4dc47cf-mgjsl 1/1 Running 0 110s
consul-connect-injector-7d68465cf9-fmkxc 1/1 Running 0 107s
consul-mesh-gateway-5554894784-w7pm4 1/1 Running 0 107s
consul-server-0 1/1 Running 0 106s
consul-server-1 1/1 Running 0 106s
consul-server-2 1/1 Running 0 106s
consul-webhook-cert-manager-f59d67cb9-xjv7h 1/1 Running 0 107s
prometheus-server-8455cbf87d-5bn4x 2/2 Running 0 107s
Then, verify that Consul was successfully deployed by Terraform in dc2 by inspecting the Kubernetes pods in the consul namespace.
$ kubectl --context=dc2 --namespace=consul get pods
NAME READY STATUS RESTARTS AGE
consul-connect-injector-5f68c84545-ndc45 1/1 Running 0 110s
consul-mesh-gateway-675696cd94-hwvzz 1/1 Running 0 110s
consul-server-0 1/1 Running 0 110s
consul-server-1 1/1 Running 0 110s
consul-server-2 1/1 Running 0 110s
consul-webhook-cert-manager-f59d67cb9-nkc6t 1/1 Running 0 110s
prometheus-server-8455cbf87d-wscdm 2/2 Running 0 110s
Verify HashiCups deployment
The Terraform deployments for both dc1 and dc2 include a subset of the HashiCups microservices. The dc1 Kubernetes cluster hosts the frontend services, while the dc2 Kubernetes cluster hosts the API and database services. Later in this tutorial, you will connect the Consul datacenters to form the complete HashiCups deployment. The following diagram illustrates how HashiCups is deployed across the two clusters.
Verify HashiCups on first cluster
In dc1, you can view the HashiCups frontend, but the demo application will not display any products because products-api is not deployed.
Terraform deploys the Kubernetes YAML files in hashicups-v1.0.2/dc1 to dc1. Verify that Terraform has successfully deployed these services by listing the Kubernetes services.
$ kubectl --context=dc1 get service
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
frontend ClusterIP 172.20.9.134 <none> 3000/TCP 22s
kubernetes ClusterIP 172.20.0.1 <none> 443/TCP 26m
nginx ClusterIP 172.20.9.95 <none> 80/TCP 19s
payments ClusterIP 172.20.22.222 <none> 1800/TCP 13s
public-api ClusterIP 172.20.153.101 <none> 8080/TCP 16s
List the services registered with Consul in dc1. This command runs consul catalog services on one of the Consul server agents.
$ kubectl exec --namespace=consul -it --context=dc1 consul-server-0 -- consul catalog services
api-gateway
consul
frontend
frontend-sidecar-proxy
mesh-gateway
nginx
nginx-sidecar-proxy
payments
payments-sidecar-proxy
public-api
public-api-sidecar-proxy
Verify HashiCups on second cluster
Terraform deploys the remaining two HashiCups services in dc2 from the Kubernetes YAML files in hashicups-v1.0.2/dc2. Verify that Terraform has successfully deployed products-api and postgres by listing the Kubernetes services in dc2.
$ kubectl --context=dc2 get service
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 172.20.0.1 <none> 443/TCP 15m
postgres ClusterIP 172.20.165.165 <none> 5432/TCP 28s
products-api ClusterIP 172.20.222.90 <none> 9090/TCP 33s
List the services registered with Consul in dc2.
$ kubectl exec --namespace=consul -it --context=dc2 consul-server-0 -- consul catalog services
consul
mesh-gateway
postgres
postgres-sidecar-proxy
products-api
products-api-sidecar-proxy
Explore the Consul UI (optional)
Retrieve the Consul UI address for dc1 and open it in your browser.
$ echo "https://$(kubectl --context=dc1 --namespace=consul get services consul-ui -o jsonpath='{.status.loadBalancer.ingress[0].ip}')"
https://35.238.125.40
Retrieve the Consul UI address for dc2 and open it in your browser.
$ echo "https://$(kubectl --context=dc2 --namespace=consul get services consul-ui -o jsonpath='{.status.loadBalancer.ingress[0].ip}')"
https://34.83.93.166
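If the Consul UI prompts you for a token, you can retrieve the ACL bootstrap token that the Helm chart generated. This assumes ACLs are enabled, which the API calls later in this tutorial also rely on. Repeat the command with --context=dc2 for the second datacenter:
$ kubectl --context=dc1 --namespace=consul get secrets consul-bootstrap-acl-token -o go-template='{{.data.token|base64decode}}'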
Explore HashiCups in browser
Retrieve the API gateway address and open the HashiCups application in your browser. Notice that it does not display any products.
$ export APIGW_URL=http://$(kubectl --context=dc1 --namespace=consul get services api-gateway -o jsonpath='{.status.loadBalancer.ingress[0].ip}') && echo $APIGW_URL
http://1.2.3.4
At this point, the public-api service in dc1 is configured to connect to its local instance of the products-api service; however, there is currently no instance of products-api in dc1. To use the instance of products-api hosted in the other datacenter, you will first configure Consul cluster peering, and then point the upstream to the products-api service in dc2.
Configure Consul cluster peering
Tip
Consul cluster peering works on both the Enterprise and OSS versions of Consul. On Consul OSS, you can only peer clusters between the default partitions. On Consul Enterprise, you can peer clusters between any partitions.
You will now peer the two datacenters to enable services in dc1 to communicate with products-api in dc2.
Consul cluster peering works by defining two cluster roles:
- A peering acceptor is the cluster that generates a peering token and accepts an incoming peering connection.
- A peering dialer is the cluster that uses a peering token to make an outbound peering connection with the cluster that generated the token.
Configure cluster peering traffic routing
You can peer Consul clusters by either directly connecting Consul server nodes or connecting the Consul mesh gateways.
Most Kubernetes deployments will not let services connect outside the cluster. This prevents the Consul server pods from communicating with other Kubernetes clusters. Therefore, we recommend configuring the clusters to use mesh gateways for peering. The following file configures the Consul clusters to use mesh gateways for cluster peering:
./k8s-yamls/peer-through-meshgateways.yaml
apiVersion: consul.hashicorp.com/v1alpha1
kind: Mesh
metadata:
  name: mesh
spec:
  peering:
    peerThroughMeshGateways: true
Configure cluster peering traffic for both dc1 and dc2 to be routed via the mesh gateways.
$ for dc in {dc1,dc2}; do kubectl --context=$dc apply -f k8s-yamls/peer-through-meshgateways.yaml; done
mesh.consul.hashicorp.com/mesh created
mesh.consul.hashicorp.com/mesh created
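Optionally, confirm that the Mesh custom resource synced to Consul in both datacenters. Like the other Consul custom resources in this tutorial, the SYNCED column should report True:
$ for dc in {dc1,dc2}; do kubectl --context=$dc get mesh mesh; done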
There are two modes for routing traffic from local services to remote services when cluster peering connections are routed through mesh gateways. In remote mode, your local services contact the remote mesh gateway in order to reach remote services. In local mode, your local services contact their local mesh gateway in order to reach remote services. Refer to the modes documentation as well as the Mesh architecture diagram for more information.
We recommend you use local mode because most Kubernetes deployments do not allow local services to connect outside the cluster. The following configuration specifies local mode for traffic routed over the mesh gateways:
./k8s-yamls/originate-via-meshgateways.yaml
apiVersion: consul.hashicorp.com/v1alpha1
kind: ProxyDefaults
metadata:
  name: global
spec:
  meshGateway:
    mode: local
Configure local mode for traffic routed over the mesh gateways for both dc1 and dc2.
$ for dc in {dc1,dc2}; do kubectl --context=$dc apply -f k8s-yamls/originate-via-meshgateways.yaml; done
proxydefaults.consul.hashicorp.com/global created
proxydefaults.consul.hashicorp.com/global created
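As with the mesh configuration, you can check that the ProxyDefaults resource synced in both datacenters:
$ for dc in {dc1,dc2}; do kubectl --context=$dc get proxydefaults global; done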
Create a peering token
Configuring the peering acceptor role for a cluster generates a peering token and waits to accept an incoming peering connection. The following configuration sets dc1 as the peering acceptor:
./k8s-yamls/acceptor-on-dc1-for-dc2.yaml
apiVersion: consul.hashicorp.com/v1alpha1
kind: PeeringAcceptor
metadata:
  name: dc2
spec:
  peer:
    secret:
      name: "peering-token-dc2"
      key: "data"
      backend: "kubernetes"
Configure a PeeringAcceptor role for dc1.
$ kubectl --context=dc1 apply -f k8s-yamls/acceptor-on-dc1-for-dc2.yaml
peeringacceptor.consul.hashicorp.com/dc2 created
Confirm you successfully created the peering acceptor custom resource definition (CRD).
$ kubectl --context=dc1 get peeringacceptors
NAME SYNCED LAST SYNCED AGE
dc2 True 5s 5s
Confirm that the PeeringAcceptor CRD generated a peering token secret.
$ kubectl --context=dc1 get secrets peering-token-dc2
NAME TYPE DATA AGE
peering-token-dc2 Opaque 1 97s
Import the peering token generated in dc1 into dc2.
$ kubectl --context=dc1 get secret peering-token-dc2 -o yaml | kubectl --context=dc2 apply -f -
secret/peering-token-dc2 created
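Optionally, confirm that the token is now present in dc2 before you configure the peering dialer:
$ kubectl --context=dc2 get secrets peering-token-dc2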
Establish a connection between clusters
Configuring a peering dialer role for a cluster makes an outbound peering connection towards a peering acceptor cluster using the specified peering token. The following configuration sets dc2 as the peering dialer and peering-token-dc2 as its token.
./k8s-yamls/dialer-dc2.yaml
apiVersion: consul.hashicorp.com/v1alpha1
kind: PeeringDialer
metadata:
  name: dc1
spec:
  peer:
    secret:
      name: "peering-token-dc2"
      key: "data"
      backend: "kubernetes"
Configure a PeeringDialer role for dc2. This will create a peering connection from the second datacenter towards the first one.
$ kubectl --context=dc2 apply -f k8s-yamls/dialer-dc2.yaml
peeringdialer.consul.hashicorp.com/dc1 created
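Confirm you successfully created the peering dialer custom resource. As with the acceptor, the SYNCED column should report True:
$ kubectl --context=dc2 get peeringdialers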
Verify that the two Consul clusters are peered. This command queries the peering API endpoint on the Consul server agent in dc1.
$ kubectl exec --namespace=consul -it --context=dc1 consul-server-0 \
-- curl --cacert /consul/tls/ca/tls.crt --header "X-Consul-Token: $(kubectl --context=dc1 --namespace=consul get secrets consul-bootstrap-acl-token -o go-template='{{.data.token|base64decode}}')" "https://127.0.0.1:8501/v1/peering/dc2" \
| jq
Notice that the state is ACTIVE, which means that the two clusters are peered successfully.
{
"ID": "1aa44921-8081-a16d-4290-0210b205bcc9",
"Name": "dc2",
"State": "ACTIVE",
"PeerCAPems": [
"-----BEGIN CERTIFICATE-----\nMIICDTCCAbOgAwIBAgIBCjAKBggqhkjOPQQDAjAwMS4wLAYDVQQDEyVwcmktMTNn\nZGx5bC5jb25zdWwuY2EuNjk5MmE2YjYuY29uc3VsMB4XDTIyMTEyMzE1MzYzN1oX\nDTMyMTEyMDE1MzYzN1owMDEuMCwGA1UEAxMlcHJpLTEzZ2RseWwuY29uc3VsLmNh\nLjY5OTJhNmI2LmNvbnN1bDBZMBMGByqGSM49AgEGCCqGSM49AwEHA0IABIaEuBVb\nGFJlqqXCCwhyYEwyKvhQV19IfCS+K6Uc+W5VI6+t7zsQHVFl2qy+i2Z0Rj8QnEYD\nI0YelQuVrRFV7x6jgb0wgbowDgYDVR0PAQH/BAQDAgGGMA8GA1UdEwEB/wQFMAMB\nAf8wKQYDVR0OBCIEIAoE4KUzkS2rUvB2ra1EC1BuubYh8fJmPp4dwXIf+mTMMCsG\nA1UdIwQkMCKAIAoE4KUzkS2rUvB2ra1EC1BuubYh8fJmPp4dwXIf+mTMMD8GA1Ud\nEQQ4MDaGNHNwaWZmZTovLzY5OTJhNmI2LWVlMGUtNDUzMC1iMGRkLWM2ZTA3Y2Iy\nOGU3My5jb25zdWwwCgYIKoZIzj0EAwIDSAAwRQIhAPRoqiwv4o5urXQnrP3cxU4y\n6dSffViR1ZBbFUdPvYTFAiAS8jGNn3Me0NyhRTCgK+bEJfw8wVLJK4wWZRCr42/e\nxw==\n-----END CERTIFICATE-----\n"
],
"StreamStatus": {
"ImportedServices": null,
"ExportedServices": null,
"LastHeartbeat": "2022-11-23T18:21:08.069062738Z",
"LastReceive": "2022-11-23T18:21:08.069062738Z",
"LastSend": "2022-11-23T18:20:53.071747214Z"
},
"CreateIndex": 695,
"ModifyIndex": 701,
"Remote": {
"Partition": "",
"Datacenter": "dc2"
}
}
Export the products-api service
After you peer the Consul clusters, you need to create a configuration entry that defines the services you want to export to other clusters. Consul uses this configuration entry to advertise those services' information and connect those services across Consul clusters. The following configuration exports the products-api service to the dc1 peer.
k8s-yamls/exportedsvc-products-api.yaml
apiVersion: consul.hashicorp.com/v1alpha1
kind: ExportedServices
metadata:
  name: default ## The name of the partition containing the service
spec:
  services:
    - name: products-api ## The name of the service you want to export
      consumers:
        - peer: dc1 ## The name of the peering connection that receives the service
In dc2, apply the ExportedServices custom resource file that exports the products-api service to dc1.
$ kubectl --context=dc2 apply -f k8s-yamls/exportedsvc-products-api.yaml
exportedservices.consul.hashicorp.com/default created
Confirm that the Consul cluster in dc1 can access the products-api service in dc2. This command queries the health API endpoint on the Consul server agent in dc1 for the products-api service imported from the dc2 peer, and filters the response for the sidecar service ID and the related peer name.
$ kubectl \
--context=dc1 --namespace=consul exec -it consul-server-0 \
-- curl --cacert /consul/tls/ca/tls.crt \
--header "X-Consul-Token: $(kubectl --context=dc1 --namespace=consul get secrets consul-bootstrap-acl-token -o go-template='{{.data.token|base64decode}}')" "https://127.0.0.1:8501/v1/health/connect/products-api?peer=dc2" \
| jq '.[].Service.ID,.[].Service.PeerName'
Notice that the output contains the products-api sidecar service ID and the name of the related cluster peering.
"products-api-sidecar-proxy-instance-0"
"dc2"
Create a cross-cluster service intention
In order for the public-api service in dc1 to reach the products-api service in dc2, you must define a ServiceIntentions custom resource that allows communication from public-api in dc1 to products-api in dc2.
k8s-yamls/intention-dc1-public-api-to-dc2-products-api.yaml
apiVersion: consul.hashicorp.com/v1alpha1
kind: ServiceIntentions
metadata:
  name: dc1-public-api-to-dc2-products-api
spec:
  destination:
    name: products-api
  sources:
    - name: public-api
      action: allow
      peer: dc1
Create a service intention in dc2 that allows communication from the public-api service in dc1 to the products-api service in dc2.
$ kubectl --context=dc2 apply -f k8s-yamls/intention-dc1-public-api-to-dc2-products-api.yaml
serviceintentions.consul.hashicorp.com/dc1-public-api-to-dc2-products-api created
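Optionally, confirm that the service intention synced to Consul:
$ kubectl --context=dc2 get serviceintentions dc1-public-api-to-dc2-products-api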
Modify upstream for the public-api service
Next, modify the public-api service definition in dc1 so that its upstream points to the products-api service in dc2. In this scenario, the correct destination is products-api.virtual.dc2.consul.
Consul uses the DNS syntax for service virtual IP lookups to contact services across peered datacenters. The upstream address is formatted as <service>.virtual[.<namespace>].<peer>.<domain>. The namespace segment is only available in Consul Enterprise.
k8s-yamls/public-api-peer.yaml
##...
apiVersion: apps/v1
kind: Deployment
metadata:
  name: public-api
spec:
  ##...
  template:
    ##...
    spec:
      serviceAccountName: public-api
      containers:
        - name: public-api
          image: hashicorpdemoapp/public-api:v0.0.6
          imagePullPolicy: Always
          ports:
            - containerPort: 8080
          env:
            - name: BIND_ADDRESS
              value: ":8080"
            - name: PRODUCT_API_URI
              value: "http://products-api.virtual.dc2.consul"
            - name: PAYMENT_API_URI
              value: "http://payments:1800"
##...
Apply the updated public-api service definition in dc1 with the upstream targeted at dc2.
$ kubectl --context=dc1 apply -f k8s-yamls/public-api-peer.yaml
service/public-api unchanged
serviceaccount/public-api unchanged
servicedefaults.consul.hashicorp.com/public-api unchanged
deployment.apps/public-api configured
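Wait for the updated public-api deployment to finish rolling out before you test the application. This is a standard Kubernetes check, not specific to Consul:
$ kubectl --context=dc1 rollout status deployment/public-api
deployment "public-api" successfully rolled out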
Verify peered Consul services
Verify that the cluster peering is operational by retrieving the HashiCups URL and opening the application in your browser. Notice that it now displays a curated selection of coffee drinks.
$ export APIGW_URL=http://$(kubectl --context=dc1 --namespace=consul get services api-gateway -o jsonpath='{.status.loadBalancer.ingress[0].ip}') && echo $APIGW_URL
http://1.2.3.4/
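If you prefer a command-line check, request the page through the API gateway and confirm that it returns an HTTP 200 response:
$ curl -s -o /dev/null -w '%{http_code}\n' "$APIGW_URL"
200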
Destroy environment
Now that you have peered the two Consul clusters, remove the exported service and the cluster peering before destroying the environment.
Remove exported service (optional)
Stop the products-api service from being exported.
$ kubectl --context=dc2 delete -f k8s-yamls/exportedsvc-products-api.yaml
exportedservices.consul.hashicorp.com "default" deleted
Remove cluster peering (optional)
To remove a peering connection, delete both the PeeringAcceptor and PeeringDialer resources.
First, delete the PeeringDialer from dc2.
$ kubectl --context=dc2 delete -f k8s-yamls/dialer-dc2.yaml
peeringdialer.consul.hashicorp.com "dc1" deleted
Then, delete the PeeringAcceptor from dc1.
$ kubectl --context=dc1 delete -f k8s-yamls/acceptor-on-dc1-for-dc2.yaml
peeringacceptor.consul.hashicorp.com "dc2" deleted
Verify that the two clusters are no longer peered by querying the health API endpoint in dc1 for the previously imported products-api service. The query returns an empty result because the service is no longer available through the peering.
$ kubectl exec --namespace=consul -it --context=dc1 consul-server-0 -- curl --insecure 'https://127.0.0.1:8501/v1/health/connect/products-api?peer=dc2'
[]
Delete supporting infrastructure
Destroy the supporting infrastructure in your first datacenter.
$ terraform -chdir=google-cloud/dc1 destroy
##...
Plan: 0 to add, 0 to change, 33 to destroy.
##...
Do you really want to destroy all resources?
Terraform will destroy all your managed infrastructure, as shown above.
There is no undo. Only 'yes' will be accepted to confirm.
Enter a value: yes
##...
Destroy complete! Resources: 33 destroyed.
Next, destroy the supporting infrastructure in your second datacenter.
$ terraform -chdir=google-cloud/dc2 destroy
##...
Plan: 0 to add, 0 to change, 15 to destroy.
##...
Do you really want to destroy all resources?
Terraform will destroy all your managed infrastructure, as shown above.
There is no undo. Only 'yes' will be accepted to confirm.
Enter a value: yes
##...
Destroy complete! Resources: 15 destroyed.
Next steps
In this tutorial, you used Consul cluster peering to route traffic across service meshes in two Consul clusters. In the process, you learned the benefits of using cluster peering for cluster interconnection with minimal shared administrative overhead.
For a much easier experience in configuring cluster peering, consider using Consul on the HashiCorp Cloud Platform (HCP). In HCP, you can easily launch and operate a fully-managed Consul platform in the cloud.
Feel free to explore these tutorials and collections to learn more about Consul service mesh, microservices, and Kubernetes security.