Multi-cluster applications with Consul Enterprise admin partitions
Consul admin partitions give organizations the option to define and administer boundaries for services using Consul. This can help organizations that manage services across teams and business units. Teams can benefit from managing and customizing their own Consul environment without impacting other teams or other Consul environments.
Some organizations want to allow organizational or business units to deploy their own installations of Consul Enterprise on their own Kubernetes clusters. Centrally managing multiple Consul installations can be an operational challenge for organizations. Instead of giving teams their own clusters, the organization can consolidate these installations onto a shared multi-tenant server cluster. This cluster serves as the control plane for Consul clients in the tenant clusters, and ensures separation between tenants of the system. This deployment model provides teams the autonomy to configure Consul and application networking as they require. This increases the team's flexibility to manage application deployments, and eliminates the operational overhead associated with managing individual server clusters.
In this tutorial, you will install and configure Consul Enterprise on two Kubernetes clusters with Helm. You will configure admin partitions and deploy a micro-services application called HashiCups. The services will be distributed across these two Kubernetes clusters using Consul admin partitions. A diagram showing this architecture is displayed below.
Prerequisites
- An AWS account that is capable of deploying Amazon Elastic Kubernetes Service resources.
- A Consul Enterprise license. Request a trial license on the Consul Enterprise trial registration page.
Warning
The sample repository is a Terraform project that deploys billable resources to your Amazon Web Services account. At minimum, it deploys two Amazon EKS clusters and their supporting infrastructure, including at least one additional Classic Load Balancer. A cleanup script is provided at the end of this tutorial, but you are ultimately responsible for the resources deployed to your account.
Setting up the infrastructure
Clone GitHub repository
If using the sample repository, begin by cloning it locally, and change into the directory where the sample code is stored.
$ git clone https://github.com/hashicorp/learn-consul-kubernetes.git
Navigate into the repository folder.
$ cd learn-consul-kubernetes
Check out the git-tag associated with this tutorial.
$ git checkout v0.0.19
Navigate into the project folder for this tutorial.
$ cd consul-enterprise-admin-partitions-eks
Uploading the Consul Enterprise license
Start the tutorial by placing your Consul Enterprise license file in the consul_enterprise/ directory before deploying the infrastructure. Ensure the file is named consul.hclic. Terraform uploads the license on your behalf.
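If you are not using the provided Terraform project, you could create an equivalent secret yourself once the clusters and the consul namespace exist. The following is a minimal sketch, assuming kubeconfig contexts named primary and secondary and a license file named consul.hclic; the secret name and key match the values files used later in this tutorial.
$ kubectl create secret generic consul-ent-license --namespace consul --context primary --from-file=key=consul.hclic
$ kubectl create secret generic consul-ent-license --namespace consul --context secondary --from-file=key=consul.hclic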
Deploy the Infrastructure
Start by initializing the Terraform project.
$ terraform init
Next, review the terraform plan.
$ terraform plan
You are now ready to deploy the infrastructure to AWS.
$ terraform apply -auto-approve
The infrastructure deployment takes approximately 30-40 minutes. You can use this time to review this tutorial, or watch the video below, demonstrating the usage and benefits of Consul admin partitions.
Deploying Kubernetes
Throughout the tutorial, you will complete tasks in the context of both Kubernetes and Consul Enterprise. To keep track, make note of the labels for each Consul and Kubernetes identifier:
- Kubernetes cluster primary manages the Consul Enterprise server cluster.
- Kubernetes cluster secondary manages the Consul Enterprise client cluster.
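Before continuing, you can optionally confirm that both contexts exist in your local kubeconfig. This assumes the Terraform project configured kubeconfig contexts named primary and secondary, as used throughout this tutorial.
$ kubectl config get-contexts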
License upload verification
Verify the Consul Enterprise license secret named consul-ent-license exists on both clusters with the two following commands.
$ kubectl get secrets --context primary --namespace consul
NAME TYPE DATA AGE
consul-ent-license Opaque 1 70m
default-token-m2lbv kubernetes.io/service-account-token 3 77m
$ kubectl get secrets --context secondary --namespace consul
NAME TYPE DATA AGE
consul-ent-license Opaque 1 69m
default-token-m2lbv kubernetes.io/service-account-token 3 76m
Installing Consul Enterprise with Helm
Install Consul Enterprise with Helm by creating two values files for your Consul Enterprise installations. Navigate into the consul_enterprise folder.
$ cd consul_enterprise
For the Consul server cluster, create a values file named consul-values-server.yaml. For the Consul client cluster, create a values file named consul-values-client.yaml.
$ touch consul-values-server.yaml consul-values-client.yaml
Add the HashiCorp Helm chart repository to your Helm installation.
$ helm repo add hashicorp https://helm.releases.hashicorp.com && helm repo update
The configurations for the server and client clusters are shown below. Copy and paste each tab's content into the values files you previously created for the server and the client.
These files are configured for this tutorial. In non-tutorial deployments, make note of the highlighted lines; your organization may have specific guidance regarding those values.
Consul Enterprise values files
consul-values-server.yaml
global:
  # If you are using the sample repository, do not change the global.name values in the consul-values files.
  name: server
  image: 'hashicorp/consul-enterprise:1.11.4-ent'
  enableConsulNamespaces: true
  # The name of the datacenter. Admin partitions are only supported in the same datacenter today.
  datacenter: galaxy
  enterpriseLicense:
    # The Kubernetes secret key/value pair for your Consul Enterprise license. If you are using the Terraform project provided with this tutorial,
    # the license is automatically uploaded for you as a secret when you place it inside the `consul_enterprise` folder before deploying.
    secretName: 'consul-ent-license'
    secretKey: 'key'
  # The stanza that sets up admin partitions and the reason you are reading this tutorial :)
  adminPartitions:
    enabled: true
    service:
      annotations: |
        "service.beta.kubernetes.io/aws-load-balancer-scheme": "internal"
  acls:
    manageSystemACLs: true
  logLevel: 'debug'
  gossipEncryption:
    autoGenerate: true
  tls:
    enableAutoEncrypt: true
    enabled: true
    verify: false
    serverAdditionalDNSSANs:
      # Subject Alternative Names that the TLS certificate will validate. In this case,
      # these are wildcard values for the Amazon EKS and Elastic Load Balancer services supporting this tutorial.
      # Note that wildcard values are not recommended for use in production environments.
      - '*.us-east-1.elb.amazonaws.com'
      - '*.gr7.us-east-1.eks.amazonaws.com'
server:
  exposeGossipAndRpcPorts: true
  replicas: 1
ui:
  enabled: true
connectInject:
  aclBindingRuleSelector: ''
  consulNamespaces:
    mirroringK8S: true
  transparentProxy:
    defaultEnabled: false
  # Inject an Envoy sidecar into every new pod, except for those with annotations that prevent injection.
  enabled: true
  # These settings enable L7 metrics collection and are new in 1.5.
  centralConfig:
    enabled: true
    # proxyDefaults is a raw JSON string that will be applied to all Connect
    # proxy sidecar pods and can include any valid configuration for the
    # configured proxy.
    # proxyDefaults: |
    #   {
    #     "envoy_prometheus_bind_addr": "0.0.0.0:9102"
    #   }
controller:
  enabled: true
When admin partitions are enabled, the Consul server cluster generates a partition for itself, named default. For Consul client clusters that register themselves with the Consul server cluster, the partition name is configurable in the global.adminPartitions stanza of the cluster's values file; for this tutorial, refer to that stanza in the consul-values-client.yaml values file.
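If you want to confirm the partition name your client installation will use, you can optionally inspect the values file. This sketch assumes the client values file sets the partition name under the global.adminPartitions stanza, as the sample repository's file does.
$ grep -A 2 'adminPartitions:' consul-values-client.yaml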
To learn more about Consul on Kubernetes, visit the Consul on Kubernetes repository.
Consul Enterprise Server
For the Consul server cluster, install Consul Enterprise on the primary Kubernetes cluster.
$ kubectl config use-context primary && helm install --wait hashicorp-server hashicorp/consul --namespace consul --version "0.43.0" --values consul-values-server.yaml
Switched to context "primary".
NAME: hashicorp-server
LAST DEPLOYED: Mon Nov 29 14:35:20 2021
NAMESPACE: consul
STATUS: deployed
REVISION: 1
NOTES:
Thank you for installing HashiCorp Consul!
Now that you have deployed Consul, you should look over the docs on using
Consul with Kubernetes available here:
https://www.consul.io/docs/platform/k8s/index.html
Your release is named hashicorp-server.
To learn more about the release, run:
$ helm status hashicorp-server
$ helm get all hashicorp-server
Confirm the Consul primary cluster has admin partitions enabled and includes a member named server-server-0. Use the consul members command inside the server-server-0 pod.
$ kubectl exec -it server-server-0 --namespace consul consul members
Node Address Status Type Build Protocol DC Partition Segment
server-server-0 10.100.2.57:8301 alive server 1.11.4+ent 2 galaxy default <all>
ip-10-100-1-218.ec2.internal 10.100.1.238:8301 alive client 1.11.4+ent 2 galaxy default <default>
ip-10-100-1-43.ec2.internal 10.100.1.125:8301 alive client 1.11.4+ent 2 galaxy default <default>
ip-10-100-2-148.ec2.internal 10.100.2.93:8301 alive client 1.11.4+ent 2 galaxy default <default>
ip-10-100-2-44.ec2.internal 10.100.2.135:8301 alive client 1.11.4+ent 2 galaxy default <default>
Observe the default partition name in the Partition column. The partition for the Consul server cluster is always named default.
Installing the Consul Enterprise client cluster
Before installing Consul Enterprise on the Kubernetes secondary cluster, configure the Consul client installation to communicate with the two following services:
- The Kubernetes primary cluster's admin partition service, created by Consul.
- The Kubernetes control plane of the secondary cluster itself.
Confirm the Kubernetes primary cluster has an active service named consul-partition-service with a value in the EXTERNAL-IP column. This service is created by Consul during the Helm installation, and the value is a hostname or IP address. A Consul client uses this hostname to access the Consul Enterprise server cluster and register its admin partition. Copy this hostname. A shortcut to obtain the EXTERNAL-IP is provided below.
Note
When admin partitions are enabled, Consul creates a LoadBalancer service for the partition service. In AWS, this results in a new Classic Load Balancer, a billable resource.
$ kubectl get services --context primary --selector="app=consul,component=server" --namespace consul --output jsonpath="{range .items[*]}{@.status.loadBalancer.ingress[*].hostname}{end}"
a781098c658b44fa79d145d7a4eb192d-1813301156.us-east-1.elb.amazonaws.com
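Optionally, store this hostname in a shell variable so you can refer to it while editing the values file. This convenience sketch reuses the jsonpath query above; the variable name is arbitrary.
$ PARTITION_HOST=$(kubectl get services --context primary --selector="app=consul,component=server" --namespace consul --output jsonpath="{range .items[*]}{@.status.loadBalancer.ingress[*].hostname}{end}") && echo "${PARTITION_HOST}"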
Edit the file consul-values-client.yaml, adding the hostname as the value for the client.join and externalServers.hosts keys.
Next, copy the control-plane hostname for the secondary Kubernetes cluster. A shortcut is provided below.
$ TERM=dumb kubectl --context secondary cluster-info | awk '/Kubernetes control plane/ {print $NF}'
The TERM=dumb prefix to kubectl removes the text stylization that cluster-info applies when returning the hostname. This stylization can cause issues when copying and pasting the text.
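As with the admin partition hostname, you can optionally capture this value in a shell variable for later reference; the variable name is arbitrary.
$ K8S_AUTH_HOST=$(TERM=dumb kubectl --context secondary cluster-info | awk '/Kubernetes control plane/ {print $NF}') && echo "${K8S_AUTH_HOST}"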
Update the externalServers.k8sAuthMethodHost key with the control-plane hostname of the secondary Kubernetes cluster.
The excerpt below shows the keys to update in the consul-values-client.yaml file.
consul-values-client.yaml
externalServers:
  enabled: true
  hosts:
    # The hostname of the admin-partition `LoadBalancer` service running on the server cluster
    - YourValueGoesHere
  useSystemRoots: false
  # The client cluster's hostname for the Kubernetes control plane. Obtained by running `kubectl cluster-info` for that cluster.
  k8sAuthMethodHost: YourValueGoesHere
client:
  enabled: true
  join:
    # The hostname of the admin-partition `LoadBalancer` service running on the server cluster
    - YourValueGoesHere
Secrets Management for the Consul client cluster
For the Consul client installation, configure the Consul Enterprise clusters to use HTTPS. The Consul server cluster is already configured to use HTTPS, but the Consul client cluster requires a few steps before HTTPS can be enabled.
Copy the HTTPS CA certificate and HTTPS CA key from the primary Kubernetes cluster to the secondary Kubernetes cluster.
$ kubectl get --namespace consul secret server-ca-cert --context primary -o yaml | kubectl apply --context secondary --namespace consul --filename - && kubectl get --namespace consul secret server-ca-key --context primary -o yaml | kubectl apply --namespace consul --context secondary --filename -
secret/server-ca-cert created
secret/server-ca-key created
The Consul Enterprise client cluster needs additional secrets generated by the Consul Enterprise server installation. These secrets let the client authenticate and communicate with the Consul server cluster. You will need access control list (ACL) tokens for the admin partition, for client services, and for the initial bootstrap of the Consul client cluster. You will also need the gossip encryption key required to use gossip encryption on the Consul client cluster.
Use the commands below to copy the secrets from the primary Kubernetes cluster to the secondary Kubernetes cluster.
Partition Token
$ kubectl get --namespace consul secret server-partitions-acl-token --context primary -o yaml | kubectl apply --namespace consul --context secondary --filename -
secret/server-partitions-acl-token created
Client ACL token
$ kubectl get --namespace consul secret server-client-acl-token --context primary -o yaml | kubectl apply --namespace consul --context secondary --filename -
secret/server-client-acl-token created
Bootstrap ACL Token
$ kubectl get --namespace consul secret server-bootstrap-acl-token --context primary -o yaml | kubectl apply --namespace consul --context secondary --filename -
secret/server-bootstrap-acl-token created
Gossip Encryption Key
$ kubectl get --namespace consul secret server-gossip-encryption-key --context primary -o yaml | kubectl apply --namespace consul --context secondary --filename -
secret/server-gossip-encryption-key created
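If you prefer to copy all four secrets with a single command, a loop like the following is equivalent to the individual commands above.
$ for secret in server-partitions-acl-token server-client-acl-token server-bootstrap-acl-token server-gossip-encryption-key; do kubectl get --namespace consul secret "${secret}" --context primary -o yaml | kubectl apply --namespace consul --context secondary --filename -; done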
Confirm the secrets copied from the primary Kubernetes cluster are present on the secondary Kubernetes cluster.
$ kubectl get secrets --namespace consul --context secondary
NAME TYPE DATA AGE
consul-ent-license Opaque 1 69m
default-token-brwk8 kubernetes.io/service-account-token 3 77m
server-bootstrap-acl-token Opaque 1 60s
server-ca-cert Opaque 1 100s
server-ca-key Opaque 1 98s
server-client-acl-token Opaque 1 68s
server-gossip-encryption-key Opaque 1 52s
server-partitions-acl-token Opaque 1 79s
The secondary Kubernetes cluster is now prepared for the Consul client cluster installation. Install the chart with Helm, using the consul-values-client.yaml values file.
$ kubectl config use-context secondary && helm install --wait hashicorp-client hashicorp/consul --namespace consul --version "0.43.0" --values consul-values-client.yaml
Switched to context "secondary".
NAME: hashicorp-client
LAST DEPLOYED: Sat Jan 15 17:48:02 2022
NAMESPACE: consul
STATUS: deployed
REVISION: 1
NOTES:
Thank you for installing HashiCorp Consul!
Now that you have deployed Consul, you should look over the docs on using
Consul with Kubernetes available here:
https://www.consul.io/docs/platform/k8s/index.html
Your release is named hashicorp-client.
To learn more about the release, run:
$ helm status hashicorp-client
$ helm get all hashicorp-client
Confirm the Consul Enterprise services and pods on the secondary Kubernetes cluster have been deployed successfully. Issue the commands below in your environment for verification.
$ kubectl get svc --namespace consul --context secondary
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
client-connect-injector-svc ClusterIP 10.100.63.19 <none> 443/TCP 4m49s
client-controller-webhook ClusterIP 10.100.106.50 <none> 443/TCP 4m49s
Note the difference between this output and the output of the same command on the Consul primary cluster. There is no server-server-0 pod running on this Kubernetes cluster, nor a consul-partition-service service.
Verify the pods for Consul Enterprise are present and active on the Kubernetes secondary cluster.
$ kubectl get pods --namespace consul --context secondary
NAME READY STATUS RESTARTS AGE
client-client-4tmnx 1/1 Running 0 4m6s
client-client-5gf6q 1/1 Running 0 4m6s
client-client-bvllz 1/1 Running 0 4m6s
client-client-vgght 1/1 Running 0 4m6s
client-connect-injector-74f86c67c7-54cdq 1/1 Running 0 4m6s
client-connect-injector-74f86c67c7-zlxng 1/1 Running 0 4m6s
client-controller-59457d56f4-rdqkh 1/1 Running 0 4m6s
client-webhook-cert-manager-648668664d-5lq7s 1/1 Running 0 4m6s
Confirming the client cluster's admin partition
When you deployed the consul-values-client.yaml file, a partition named tereknor was created for the Consul client cluster. Confirm this admin partition is present on the Consul client cluster and is registered with the Consul server cluster.
From the output of the kubectl get pods --context secondary command above, use one of the client pods suffixed with random characters, such as client-client-4tmnx, to verify the members of the Consul client cluster.
$ kubectl exec --namespace consul -it --context secondary -c consul client-client-4tmnx consul members
Node Address Status Type Build Protocol DC Partition Segment
server-server-0 10.100.2.57:8301 alive server 1.11.4+ent 2 galaxy default <all>
ip-172-30-1-118.ec2.internal 172.30.1.31:8301 alive client 1.11.4+ent 2 galaxy tereknor
ip-172-30-1-241.ec2.internal 172.30.1.30:8301 alive client 1.11.4+ent 2 galaxy tereknor
ip-172-30-2-241.ec2.internal 172.30.2.108:8301 alive client 1.11.4+ent 2 galaxy tereknor
ip-172-30-2-33.ec2.internal 172.30.2.121:8301 alive client 1.11.4+ent 2 galaxy tereknor
Notice that the server-server-0 node from the Consul Enterprise server cluster is listed as the server for the client cluster in the Type column. This confirms that your Consul client cluster is registered with the Consul server cluster.
For additional verification, check the logs on the same pod with kubectl.
$ kubectl logs --namespace consul --context secondary client-client-4tmnx -c consul
2022-01-15T23:48:33.297Z [INFO] agent.client.serf.lan: serf: EventMemberJoin: server-server-0 10.100.2.57
To further verify that the partition is registered with the Consul server cluster, use consul partition list. This command returns all partitions the Consul server cluster is aware of, along with their descriptions. This is an authenticated command: you will use the bootstrap ACL token from the primary cluster to authenticate, passing its value as an argument to the -token flag of consul partition list.
Begin by setting the token in a shell variable. The decoded value in your shell output will differ from this tutorial's output below.
$ ACL_TOKEN=$(kubectl get secret server-bootstrap-acl-token --namespace consul --context primary --template '{{ .data.token | base64decode }}') && echo "${ACL_TOKEN}"
2a5f0240-8be6-98c3-5c37-9db4723f67ad
Use the variable $ACL_TOKEN as the argument for the -token flag.
$ kubectl exec -it --namespace consul --context primary server-server-0 -- consul partition list -token "${ACL_TOKEN}"
default:
Description:
Builtin Default Partition
tereknor:
Description:
Created by Helm installation
To observe the members of a specific partition, use consul catalog nodes with the --partition flag, passing $ACL_TOKEN to authenticate the command.
$ kubectl exec -it --namespace consul --context primary server-server-0 -- consul catalog nodes --partition tereknor -token "${ACL_TOKEN}"
Node ID Address Partition DC
ip-172-30-1-8.ec2.internal 8c54f491 172.30.1.8 tereknor galaxy
ip-172-30-1-84.ec2.internal aded415b 172.30.1.84 tereknor galaxy
ip-172-30-2-45.ec2.internal 3a9077c0 172.30.2.45 tereknor galaxy
ip-172-30-2-91.ec2.internal efef2194 172.30.2.91 tereknor galaxy
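For contrast, you can run the same command against the default partition, which should return the nodes belonging to the primary Kubernetes cluster rather than the tereknor nodes shown above.
$ kubectl exec -it --namespace consul --context primary server-server-0 -- consul catalog nodes --partition default -token "${ACL_TOKEN}"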
Deploy HashiCups as a distributed application
To deploy an application to your cluster, you will use HashiCups, a micro-services application that serves as a shopping cart for HashiCorp-themed beverages. It has specific permissions for service communication: the database communicates with the products API service, the frontend communicates with the public API service, and the public API brokers communication between the products API and the payments service.
In this tutorial, the postgres database resides on the primary Kubernetes cluster, which belongs to the Consul server cluster. This is a common database that other products can connect to, not just HashiCups. The remaining services are deployed to the secondary Kubernetes cluster, which belongs to the Consul Enterprise client cluster.
On both Kubernetes clusters, every deployed pod is injected with a Consul Connect sidecar proxy. When resources are deployed, they are assigned to the partition of that cluster: resources on the Consul server cluster belong to the default partition, and resources on the Consul client cluster belong to the tereknor partition.
Exporting services with Custom Resource Definitions
In your installation, Consul Enterprise is configured with access control lists (ACLs) enabled. With ACLs enabled, you will export the services that communicate across partitions, namely the database service and the products API. You will then use Consul intentions to grant services access to these cross-cluster services.
Note
Learn about Access Control Lists and Consul Intentions by visiting the Access Control Lists Learn Tutorial and the Application Aware Intentions with Consul Service Mesh Learn Tutorial.
Primary Cluster
$ kubectl apply --filename hashicups/crds/crd-exported-service-primary.yaml --filename hashicups/crds/crd-intentions.yaml --context primary
exportedservices.consul.hashicorp.com/default created
serviceintentions.consul.hashicorp.com/public-api created
serviceintentions.consul.hashicorp.com/products-api created
serviceintentions.consul.hashicorp.com/payments created
serviceintentions.consul.hashicorp.com/postgres created
Secondary Cluster
$ kubectl apply --filename hashicups/crds/crd-exported-service-secondary.yaml --filename hashicups/crds/crd-intentions.yaml --context secondary
exportedservices.consul.hashicorp.com/tereknor created
serviceintentions.consul.hashicorp.com/public-api created
serviceintentions.consul.hashicorp.com/products-api created
serviceintentions.consul.hashicorp.com/payments created
serviceintentions.consul.hashicorp.com/postgres created
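Optionally, confirm the custom resources were created by listing them on each cluster. The resource kinds below are the consul-k8s CRDs referenced in the output above, and the commands assume the resources were applied to your current default namespace, as in the apply commands above.
$ kubectl get exportedservices,serviceintentions --context primary
$ kubectl get exportedservices,serviceintentions --context secondary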
Deploying HashiCups across clusters
The following code block groups create a tab for each resource, grouped by its kind value in the Kubernetes plan file. To deploy the services, each code block group also contains a final Complete tab with all of the resources in one continuous block. The deployment instructions assume you deploy the contents of this tab as a single file.
Primary Cluster
The database will be deployed inside the Consul server cluster's default partition, on the primary Kubernetes cluster. Consul assigns the services to their partition automatically.
Database
Create a file called postgres.yaml and paste the contents of the "Complete" tab inside.
$ touch postgres.yaml
Service: Database
postgres-service.yaml
---
apiVersion: v1
kind: Service
metadata:
  name: postgres
  labels:
    app: postgres
spec:
  type: ClusterIP
  ports:
    - port: 5432
      targetPort: 5432
  selector:
    app: postgres
Deploy the database service:
$ kubectl apply --context primary --filename postgres.yaml
service/postgres created
servicedefaults.consul.hashicorp.com/postgres created
serviceaccount/postgres created
deployment.apps/postgres created
Secondary Cluster
You will deploy the remaining HashiCups micro-services to the Consul client cluster's tereknor partition, on the secondary Kubernetes cluster. Consul assigns the services to their partition automatically.
Products API
With the Products API deployment, note the db_connection string in the data.config stanza of the ConfigMap plan file. The database is deployed to the primary Kubernetes cluster in a separate network, but Consul Enterprise makes the service available to the secondary Kubernetes cluster through the Envoy proxy on the loopback address 127.0.0.1. Learn about the Envoy proxy in the Load Balancing Services in Consul Service Mesh with Envoy Learn Tutorial.
Create a file called products-api.yaml and paste the contents of the "Complete" tab inside.
$ touch products-api.yaml
Service: Products API
products-api-service.yaml
---
apiVersion: v1
kind: Service
metadata:
  name: products-api
  labels:
    app: products-api
spec:
  selector:
    app: products-api
  ports:
    - name: http
      protocol: TCP
      port: 9090
      targetPort: 9090
Deploy the Products API service:
$ kubectl apply --context secondary --filename products-api.yaml
service/products-api created
serviceaccount/products-api created
servicedefaults.consul.hashicorp.com/products-api created
configmap/db-configmap created
deployment.apps/products-api created
Public API
Create a file called public-api.yaml and paste the contents of the "Complete" tab inside.
$ touch public-api.yaml
Service: Public API
public-api-service.yaml
---
apiVersion: v1
kind: Service
metadata:
  name: public-api
  labels:
    app: public-api
spec:
  type: ClusterIP
  ports:
    - port: 8080
      targetPort: 8080
  selector:
    app: public-api
Deploy the Public API service:
$ kubectl apply --context secondary --filename public-api.yaml
service/public-api created
serviceaccount/public-api created
deployment.apps/public-api created
Payments
Create a file called payments.yaml and paste the contents of the "Complete" tab inside.
$ touch payments.yaml
Service: Payments API
payments-service.yaml
---
apiVersion: v1
kind: Service
metadata:
  name: payments
  labels:
    app: payments
spec:
  selector:
    app: payments
  ports:
    - name: http
      protocol: TCP
      port: 1800
      targetPort: 8080
Deploy the Payments service:
$ kubectl apply --context secondary --filename payments.yaml
service/payments created
serviceaccount/payments created
servicedefaults.consul.hashicorp.com/payments created
deployment.apps/payments created
Frontend
Create a file called frontend.yaml and paste the contents of the "Complete" tab inside.
$ touch frontend.yaml
Service: Frontend
frontend-service.yaml
apiVersion: v1
kind: Service
metadata:
  name: frontend
  labels:
    app: frontend
spec:
  type: LoadBalancer
  ports:
    - port: 80
      targetPort: 80
  selector:
    app: frontend
Deploy the Frontend service:
$ kubectl apply --context secondary --filename frontend.yaml
service/frontend created
serviceaccount/frontend created
servicedefaults.consul.hashicorp.com/frontend created
configmap/nginx-configmap created
deployment.apps/frontend created
Validation steps
Next, verify the HashiCups deployment is active.
HashiCups services are deployed
Observe that the pods for each Kubernetes plan file you deployed are in a ready state, according to the READY column. A pod is ready when all of its defined containers are available. For example, the public-api pod is defined with three containers; in the output below, 3 of 3 containers are ready.
$ kubectl get pods --namespace consul --context secondary
NAME READY STATUS RESTARTS AGE
client-client-4tmnx 1/1 Running 0 34m
client-client-5gf6q 1/1 Running 0 34m
client-client-bvllz 1/1 Running 0 34m
client-client-vgght 1/1 Running 0 34m
client-connect-injector-74f86c67c7-54cdq 1/1 Running 0 34m
client-connect-injector-74f86c67c7-zlxng 1/1 Running 0 34m
client-controller-59457d56f4-rdqkh 1/1 Running 0 34m
client-webhook-cert-manager-648668664d-5lq7s 1/1 Running 0 34m
$ kubectl get pods --context secondary
NAME READY STATUS RESTARTS AGE
frontend-86bff5b5b6-jn6sm 2/2 Running 0 40s
payments-7564d7bfb5-26qdq 2/2 Running 0 3m27s
products-api-7bbd9458d7-tfvhf 2/2 Running 5 8m48s
public-api-77b4665585-9lvfv 3/3 Running 0 6m8s
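The database pod runs on the primary Kubernetes cluster rather than the secondary cluster, so it does not appear in the output above. You can check it separately; this sketch assumes the postgres deployment uses the app: postgres label shown in its service manifest.
$ kubectl get pods --context primary --selector app=postgres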
HashiCups frontend is reachable
Verify the frontend service has a value in the EXTERNAL-IP column. Copy the EXTERNAL-IP value for the frontend service, and paste it into your web browser.
DNS propagation for the HashiCups application can take a few minutes on AWS, so you may initially observe NXDOMAIN errors from DNS until the record for the hostname propagates and cached results expire according to the DNS Time To Live (TTL).
$ kubectl get services --context secondary
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
frontend LoadBalancer 10.100.17.104 a14e155508a234ced82ba80c50baa6de-2091774586.us-east-1.elb.amazonaws.com 80:30975/TCP 2m9s
kubernetes ClusterIP 10.100.0.1 <none> 443/TCP 4h1m
products-api ClusterIP 10.100.25.42 <none> 9090/TCP 73s
public-api ClusterIP 10.100.147.110 <none> 8080/TCP 108s
When DNS successfully resolves the hostname for the frontend service, the HashiCups front page loads, containing a graphical carousel of different types of HashiCups beverages.
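If you prefer to check reachability from the command line while DNS propagates, you can poll the frontend hostname with curl. This is an optional sketch; the variable name is arbitrary, and a 200 response code indicates the front page is being served.
$ FRONTEND_HOST=$(kubectl get service frontend --context secondary --output jsonpath='{.status.loadBalancer.ingress[0].hostname}') && curl --silent --output /dev/null --write-out "%{http_code}\n" "http://${FRONTEND_HOST}"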
Database can be written to by the frontend
In the portal, select a beverage and click the Buy button. Enter fake credit card information and press Submit Payment.
When you receive a status of "Payment processed successfully, card details returned for demo purposes, not for production", the database in the primary cluster is successfully communicating with services across the Consul and Kubernetes clusters.
Cleanup
Use Terraform to remove the infrastructure from your account now that this tutorial is complete. With AWS, this takes approximately 20 minutes.
$ cd .. && terraform destroy -auto-approve
Every effort has been made to make the cleanup of infrastructure resources in Terraform seamless. However, there are known issues with Terraform and the AWS provider in which EKS creates resources for itself that are not managed by Terraform, which can prevent all resources from being removed. A shim script in this repository mitigates this issue during terraform destroy. Due to the timing of the Amazon API when removing resources, you may still encounter cases where the infrastructure does not completely finish its removal. Running terraform destroy again should resolve the issue; if not, you will need to remove the remaining resources manually from your AWS account.
Next steps
In this tutorial, you deployed a micro-services application, HashiCups, across multiple Kubernetes and Consul Enterprise clusters. You deployed a frontend service connected to a database deployed to another Kubernetes cluster, inside another network. You confirmed that the client partition was registered with the server Consul Enterprise cluster.
To reiterate, Consul admin partitions give organizations the option to define and administer boundaries for services using Consul. This can assist organizations managing services across teams and business units. Teams can now benefit from managing and customizing their own Consul environment, without any impact to other Consul environments.
To learn more about admin partitions: