Deploy HCP Consul Dedicated
HashiCorp Cloud Platform (HCP) Consul lets you start using Consul for service discovery and service mesh with less setup time. It does this by providing fully managed Consul servers. Consul offers service discovery, service mesh, traffic management, and automated updates to network infrastructure devices.
In this tutorial, you will deploy an HCP Consul Dedicated server cluster, your choice of Kubernetes or virtual machine Consul clients, and a demo application. Then, you will explore how the demo application leverages Consul service mesh and interact with Consul using the CLI and UI.
In the following tutorials, you will interact with intentions to learn how to control service access within the service mesh, and route traffic using service resolvers and service splitters.
Prerequisites
The tutorial assumes that you are familiar with the standard Consul workflow. If you're new to Consul itself, refer first to the Getting Started tutorials for Kubernetes or virtual machines (VMs).
While you can deploy an HCP Consul Dedicated server and connect the Consul clients in your cloud environments manually, this tutorial uses a Terraform quickstart configuration to significantly reduce deployment time.
You do not need to be an expert with Terraform to complete this tutorial or use this quickstart template.
For this tutorial, you will need:
- Terraform v1.0.0+ installed
- Git installed
- An HCP account configured for use with Terraform
- An AWS account with AWS Credentials configured for use with Terraform
- The awscli v2.7.31+ configured
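Both the HCP and AWS providers can read credentials from environment variables. The sketch below assumes you authenticate with an HCP service principal and static AWS access keys; the placeholder strings are not real credentials and must be replaced with your own.

```shell
# Placeholder credentials -- substitute the values from your HCP
# service principal and your AWS IAM user. These strings are not real.
export HCP_CLIENT_ID="your-hcp-client-id"
export HCP_CLIENT_SECRET="your-hcp-client-secret"
export AWS_ACCESS_KEY_ID="your-aws-access-key-id"
export AWS_SECRET_ACCESS_KEY="your-aws-secret-access-key"
```

Terraform picks these up automatically, so no credentials need to appear in the configuration files themselves.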
Tip
This Get Started with HCP collection currently only supports HCP Consul Dedicated on AWS. Visit the Deploy HCP Consul Dedicated with VM using Terraform and Deploy HCP Consul Dedicated with AKS using Terraform tutorials to learn how to deploy HCP Consul Dedicated on Azure.
Retrieve end-to-end Terraform configuration
The HCP Portal has a quickstart template that deploys an end-to-end development environment so you can quickly observe HCP Consul Dedicated in action. This Terraform configuration:
- Creates a new HashiCorp virtual network (HVN) and single-node Consul development server
- [Optional] Connects the HVN with your AWS VPC. This is only required if you select the EC2 option. For EKS, Consul Dataplane connects to the HCP Consul Dedicated cluster on the public IP address so you do not need to peer the networks.
- Provisions an AWS EKS cluster or virtual machine (EC2) instance and installs a Consul client
- Deploys HashiCups, a demo application that uses Consul service mesh
Note
These modules are intended for demonstration purposes only. While the Consul clients are deployed secure-by-default, the module presents a limited set of configuration options to help you get started quickly.
This architectural diagram shows a standard HCP Consul Dedicated cluster peered to a virtual network (for example, an AWS VPC or an Azure VNet). The virtual network has Consul clients and services on the service mesh.
To retrieve the end-to-end Terraform configuration, visit the HCP Portal, select Consul, then click Create Cluster.
Select AWS, then select the Terraform Automation creation method.
Select your runtime and scroll to the bottom to find the generated Terraform code.
Click on Copy code to copy it to your clipboard and save it in a file named main.tf.
Note
Content should resemble the example below. This example is not guaranteed to be up to date. Always refer to the Terraform configuration presented in the HCP Portal.
main.tf
terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 3.43"
    }
    hcp = {
      source  = "hashicorp/hcp"
      version = ">= 0.18.0"
    }
    kubernetes = {
      source  = "hashicorp/kubernetes"
      version = ">= 2.4.1"
    }
    helm = {
      source  = "hashicorp/helm"
      version = ">= 2.3.0"
    }
    kubectl = {
      source  = "gavinbunney/kubectl"
      version = ">= 1.11.3"
    }
  }
}

provider "aws" {
  region = local.vpc_region
}

provider "helm" {
  kubernetes {
    host                   = local.install_eks_cluster ? data.aws_eks_cluster.cluster[0].endpoint : ""
    cluster_ca_certificate = local.install_eks_cluster ? base64decode(data.aws_eks_cluster.cluster[0].certificate_authority.0.data) : ""
    token                  = local.install_eks_cluster ? data.aws_eks_cluster_auth.cluster[0].token : ""
  }
}

provider "kubernetes" {
  host                   = local.install_eks_cluster ? data.aws_eks_cluster.cluster[0].endpoint : ""
  cluster_ca_certificate = local.install_eks_cluster ? base64decode(data.aws_eks_cluster.cluster[0].certificate_authority.0.data) : ""
  token                  = local.install_eks_cluster ? data.aws_eks_cluster_auth.cluster[0].token : ""
}

provider "kubectl" {
  host                   = local.install_eks_cluster ? data.aws_eks_cluster.cluster[0].endpoint : ""
  cluster_ca_certificate = local.install_eks_cluster ? base64decode(data.aws_eks_cluster.cluster[0].certificate_authority.0.data) : ""
  token                  = local.install_eks_cluster ? data.aws_eks_cluster_auth.cluster[0].token : ""
  load_config_file       = false
}

data "aws_availability_zones" "available" {
  filter {
    name   = "zone-type"
    values = ["availability-zone"]
  }
}

module "vpc" {
  source  = "terraform-aws-modules/vpc/aws"
  version = "2.78.0"

  name                 = "${local.cluster_id}-vpc"
  cidr                 = "10.0.0.0/16"
  azs                  = data.aws_availability_zones.available.names
  public_subnets       = ["10.0.1.0/24", "10.0.2.0/24", "10.0.3.0/24"]
  private_subnets      = ["10.0.4.0/24", "10.0.5.0/24", "10.0.6.0/24"]
  enable_nat_gateway   = true
  single_nat_gateway   = true
  enable_dns_hostnames = true
}

data "aws_eks_cluster" "cluster" {
  count = local.install_eks_cluster ? 1 : 0
  name  = module.eks[0].cluster_id
}

data "aws_eks_cluster_auth" "cluster" {
  count = local.install_eks_cluster ? 1 : 0
  name  = module.eks[0].cluster_id
}

module "eks" {
  count   = local.install_eks_cluster ? 1 : 0
  source  = "terraform-aws-modules/eks/aws"
  version = "17.24.0"

  kubeconfig_api_version = "client.authentication.k8s.io/v1beta1"
  cluster_name           = "${local.cluster_id}-eks"
  cluster_version        = "1.21"
  subnets                = module.vpc.private_subnets
  vpc_id                 = module.vpc.vpc_id
  manage_aws_auth        = false

  node_groups = {
    application = {
      name_prefix      = "hashicups"
      instance_types   = ["t3a.medium"]
      desired_capacity = 3
      max_capacity     = 3
      min_capacity     = 3
    }
  }
}

# The HVN created in HCP
resource "hcp_hvn" "main" {
  hvn_id         = local.hvn_id
  cloud_provider = "aws"
  region         = local.hvn_region
  cidr_block     = "172.25.32.0/20"
}

module "aws_hcp_consul" {
  source  = "hashicorp/hcp-consul/aws"
  version = "~> 0.8.9"

  hvn                = hcp_hvn.main
  vpc_id             = module.vpc.vpc_id
  subnet_ids         = module.vpc.private_subnets
  route_table_ids    = module.vpc.private_route_table_ids
  security_group_ids = local.install_eks_cluster ? [module.eks[0].cluster_primary_security_group_id] : [""]
}

resource "hcp_consul_cluster" "main" {
  cluster_id      = local.cluster_id
  hvn_id          = hcp_hvn.main.hvn_id
  public_endpoint = true
  tier            = "development"
}

resource "hcp_consul_cluster_root_token" "token" {
  cluster_id = hcp_consul_cluster.main.id
}

module "eks_consul_client" {
  source  = "hashicorp/hcp-consul/aws//modules/hcp-eks-client"
  version = "~> 0.8.9"

  boostrap_acl_token    = hcp_consul_cluster_root_token.token.secret_id
  cluster_id            = hcp_consul_cluster.main.cluster_id
  consul_ca_file        = base64decode(hcp_consul_cluster.main.consul_ca_file)
  consul_hosts          = jsondecode(base64decode(hcp_consul_cluster.main.consul_config_file))["retry_join"]
  consul_version        = hcp_consul_cluster.main.consul_version
  datacenter            = hcp_consul_cluster.main.datacenter
  gossip_encryption_key = jsondecode(base64decode(hcp_consul_cluster.main.consul_config_file))["encrypt"]
  k8s_api_endpoint      = local.install_eks_cluster ? module.eks[0].cluster_endpoint : ""

  # The EKS node group will fail to create if the clients are
  # created at the same time. This forces the client to wait until
  # the node group is successfully created.
  depends_on = [module.eks]
}

module "demo_app" {
  count   = local.install_demo_app ? 1 : 0
  source  = "hashicorp/hcp-consul/aws//modules/k8s-demo-app"
  version = "~> 0.8.9"

  depends_on = [module.eks_consul_client]
}

output "consul_root_token" {
  value     = hcp_consul_cluster_root_token.token.secret_id
  sensitive = true
}

output "consul_url" {
  value = hcp_consul_cluster.main.public_endpoint ? (
    hcp_consul_cluster.main.consul_public_endpoint_url
    ) : (
    hcp_consul_cluster.main.consul_private_endpoint_url
  )
}

output "kubeconfig_filename" {
  value = abspath(one(module.eks[*].kubeconfig_filename))
}

output "helm_values_filename" {
  value = abspath(module.eks_consul_client.helm_values_file)
}

output "hashicups_url" {
  value = one(module.demo_app[*].hashicups_url)
}

output "next_steps" {
  value = "HashiCups Application will be ready in ~2 minutes. Use 'terraform output consul_root_token' to retrieve the root token."
}

output "howto_connect" {
  value = <<EOF
${local.install_demo_app ? "The demo app, HashiCups, Has been installed for you and its components registered in Consul." : ""}
${local.install_demo_app ? "To access HashiCups navigate to: ${module.demo_app[0].hashicups_url}" : ""}
To access Consul from your local client run:
export CONSUL_HTTP_ADDR="${hcp_consul_cluster.main.consul_public_endpoint_url}"
export CONSUL_HTTP_TOKEN=$(terraform output consul_root_token)
${local.install_eks_cluster ? "You can access your provisioned eks cluster by first running following command" : ""}
${local.install_eks_cluster ? "export KUBECONFIG=$(terraform output -raw kubeconfig_filename)" : ""}
Consul has been installed in the default namespace. To explore what has been installed run:
kubectl get pods
EOF
}
Locals
The HCP Consul Dedicated UI guides you in selecting the correct values for the local variables. You can edit cluster_id and hvn_id, but make sure they do not conflict with other deployments in your organization.
- vpc_region - The region where you deployed your VPC.
- hvn_region - The HashiCorp Virtual Network (HVN) region.
- cluster_id - The HCP Consul Dedicated cluster ID. Use a unique name to identify your HCP Consul Dedicated cluster. HCP pre-populates this variable with a name that follows the naming pattern consul-quickstart-<unique-ID>.
- hvn_id - The HCP HVN ID. Use a unique name to identify your HVN. HCP pre-populates this variable with a name that follows the naming pattern consul-quickstart-<unique-ID>-hvn.
In addition, based on the runtime you selected, you will have the following additional local variables.
- install_demo_app - Deploys HashiCups, a demo application that lets you quickly explore how services interact with Consul service mesh.
- install_eks_cluster - Deploys an EKS cluster and configures it to connect to your HCP Consul Dedicated cluster.
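For reference, the generated configuration begins with a locals block similar to the hypothetical sketch below. The values shown are placeholders; the HCP Portal pre-populates the real ones based on your selections.

```hcl
locals {
  # Placeholder values -- the HCP Portal generates these for you.
  vpc_region          = "us-west-2"
  hvn_region          = "us-west-2"
  cluster_id          = "consul-quickstart-example"
  hvn_id              = "consul-quickstart-example-hvn"
  install_demo_app    = true
  install_eks_cluster = true
}
```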
Deploy end-to-end development environment
Now that you have the Terraform configuration, you are ready to deploy your infrastructure. Before you continue, verify that you have populated your AWS and HCP credentials as mentioned in the prerequisites.
Initialize the configuration to install the necessary providers and modules.
$ terraform init
Initializing the backend...
Initializing provider plugins...
## ...
Terraform has been successfully initialized!
## ...
Next, deploy the end-to-end development environment. Confirm the apply with a yes.
$ terraform apply
## ...
Do you want to perform these actions?
Terraform will perform the actions described above.
Only 'yes' will be accepted to approve.
Enter a value: yes
## ...
Apply complete! Resources: 91 added, 0 changed, 0 destroyed.
Outputs:
consul_root_token = <sensitive>
consul_url = "https://consul-quickstart-1663917827001.consul.98a0dcc3-5473-4e4d-a28e-6c343c498530.aws.hashicorp.cloud"
hashicups_url = "http://a997491ad692947029c3bf826f2fbe72-1316116595.us-west-2.elb.amazonaws.com"
helm_values_filename = "/Users/dos/Desktop/gs-hcp-consul/eks/helm_values_consul-quickstart-1663917827001"
howto_connect = <<EOT
The demo app, HashiCups, Has been installed for you and its components registered in Consul.
To access HashiCups navigate to: http://a997491ad692947029c3bf826f2fbe72-1316116595.us-west-2.elb.amazonaws.com
To access Consul from your local client run:
export CONSUL_HTTP_ADDR="https://consul-quickstart-1663917827001.consul.98a0dcc3-5473-4e4d-a28e-6c343c498530.aws.hashicorp.cloud"
export CONSUL_HTTP_TOKEN=$(terraform output consul_root_token)
You can access your provisioned eks cluster by first running following command
export KUBECONFIG=$(terraform output -raw kubeconfig_filename)
Consul has been installed in the default namespace. To explore what has been installed run:
kubectl get pods
EOT
kubeconfig_filename = "/Users/dos/Desktop/gs-hcp-consul/eks/kubeconfig_consul-quickstart-1663917827001-eks"
next_steps = "HashiCups Application will be ready in ~2 minutes. Use 'terraform output consul_root_token' to retrieve the root token."
Once you confirm, it will take a few minutes for Terraform to set up your end-to-end development environment.
Verify created resources
Once Terraform completes, you can verify the resources using the Consul UI or CLI.
Verify with Consul UI
Retrieve your HCP Consul Dedicated dashboard URL and open it in your browser.
$ terraform output consul_url
"https://consul-quickstart-1663917827002.consul.98a0dcc3-5473-4e4d-a28e-6c343c498530.aws.hashicorp.cloud"
Next, retrieve your Consul root token. You will use this token to authenticate your Consul dashboard.
$ terraform output consul_root_token
"00000000-0000-0000-0000-000000000000"
In your HCP Consul Dedicated dashboard, sign in with the root token you just retrieved.
You should find a list of services that includes consul, ingress-gateway, and your HashiCups services.
Verify with Consul CLI
In order to use the CLI, you must set environment variables that store your ACL token and HCP Consul Dedicated cluster address.
First, set your CONSUL_HTTP_ADDR environment variable.
$ export CONSUL_HTTP_ADDR=$(terraform output -raw consul_url)
Then, set your CONSUL_HTTP_TOKEN environment variable.
$ export CONSUL_HTTP_TOKEN=$(terraform output -raw consul_root_token)
Retrieve a list of members in your datacenter to verify your Consul CLI is set up properly.
$ consul members
Node Address Status Type Build Protocol DC Segment
ip-172-25-33-42 172.25.33.42:8301 alive server 1.11.8+ent 2 consul-quickstart-1663917827001 <all>
ip-10-0-4-201.us-west-2.compute.internal 10.0.4.72:8301 alive client 1.11.8+ent 2 consul-quickstart-1663917827001 <default>
ip-10-0-5-235.us-west-2.compute.internal 10.0.5.247:8301 alive client 1.11.8+ent 2 consul-quickstart-1663917827001 <default>
ip-10-0-6-135.us-west-2.compute.internal 10.0.6.184:8301 alive client 1.11.8+ent 2 consul-quickstart-1663917827001 <default>
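The same environment variables also work with the Consul HTTP API, which the CLI uses under the hood. As a hypothetical spot check against your live cluster, you can list the registered services with curl; the X-Consul-Token header carries the ACL token.

```shell
# Query the catalog for registered services on the live cluster.
# CONSUL_HTTP_ADDR already includes the https:// scheme.
curl --silent \
  --header "X-Consul-Token: $CONSUL_HTTP_TOKEN" \
  "$CONSUL_HTTP_ADDR/v1/catalog/services"
```

The JSON response should list the same services you saw in the UI, including consul, ingress-gateway, and the HashiCups services.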
Verify demo application
The end-to-end development environment deploys HashiCups. Visit the hashicups URL to verify that Terraform deployed HashiCups successfully, and its services can communicate with each other.
Tip
View the Kubernetes manifest files that define the HashiCups application to learn more about how the HashiCups services interact with each other.
Retrieve your HashiCups URL and open it in your browser.
$ terraform output hashicups_url
"http://a997491ad692947029c3bf826f2fbe72-1316116595.us-west-2.elb.amazonaws.com"
Next steps
In this tutorial, you deployed an HCP Consul Dedicated server cluster, your choice of Kubernetes or virtual machine Consul clients, and a demo application. Then, you explored how the demo application leverages Consul service mesh and interacted with Consul using the CLI and UI.
In the next tutorial, you will interact with intentions to learn how to control service access within the service mesh.
To learn more about HCP Consul Dedicated, visit the HCP Consul Dedicated documentation. For additional runtimes and cloud providers, visit the following tutorials: