Consul
Deploy HCP Consul Dedicated with VM using Terraform
HashiCorp Cloud Platform (HCP) Consul is a fully managed Service Mesh as a Service (SMaaS) version of Consul. The HCP Portal has a quickstart template that deploys an end-to-end development environment so you can see HCP Consul Dedicated in action. This Terraform configuration:
- Creates a new HashiCorp Virtual Network (HVN) and a single-node Consul development server
- Connects the HVN with your Azure virtual network (VNet)
- Provisions an Azure virtual machine (VM) instance and installs a Consul client
- Deploys HashiCups, a demo application that uses Consul service mesh
In this tutorial, you will use the HCP Consul Dedicated Terraform automation workflow to deploy an end-to-end development environment. In the process, you will review the Terraform configuration to better understand how the various components of the development environment interact with each other. This will equip you with the skills to deploy and adopt HCP Consul Dedicated for your own workloads.
Prerequisites
To complete this tutorial, you will need the following:
- Terraform v1.0.0+ CLI installed
- An HCP account configured for use with Terraform
- An Azure account
- The Azure CLI
For Terraform to run operations on your behalf, log in to Azure.
$ az login
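If you work with more than one Azure subscription, you can confirm which subscription is active and switch if needed. The subscription ID below is a placeholder for your own value.
$ az account show --query "{name:name, id:id}" --output table
$ az account set --subscription "YOUR_SUBSCRIPTION_ID"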
Generate Terraform configuration
You can generate a Terraform configuration for the end-to-end deployment directly from the Overview page in your HCP organization.
Click on the tabs below to go through each step of selecting the Terraform automation deployment method.
Once you have selected the Terraform automation workflow, the HCP Portal presents two options:
- Use an existing virtual network (VNet)
- Create a new virtual network (VNet)
Select the tab for your preferred deployment method.
Fill in all the fields. The HCP region must be the same as your VNet region to reduce latency between the HCP Consul Dedicated server cluster and the Consul client running on the virtual machine.
The wizard will use this information to customize your Terraform configuration so it can deploy an HVN and peer it to your existing VNet.
Tip
Click on the Where can I find this? links to get help in locating the right values for each field.
Once you have filled in all the fields, scroll down to the Terraform Configuration section to find the generated Terraform configuration. Click on Copy code to copy it to your clipboard and save it in a file named main.tf.
Click on the accordion to find an example Terraform configuration. This example is not guaranteed to be up-to-date. Always refer to and use the configuration provided by the HCP UI.
main.tf
locals {
  hvn_region      = "westus2"
  hvn_id          = "consul-quickstart-1658469789875-hvn"
  cluster_id      = "consul-quickstart-1658469789875"
  subscription_id = "{{ .SubscriptionID }}"
  vnet_rg_name    = "{{ .VnetRgName }}"
  vnet_id         = "/subscriptions/{{ .SubscriptionID }}/resourceGroups/{{ .VnetRgName }}/providers/Microsoft.Network/virtualNetworks/{{ .VnetName }}"
  subnet1_id      = "/subscriptions/{{ .SubscriptionID }}/resourceGroups/{{ .VnetRgName }}/providers/Microsoft.Network/virtualNetworks/{{ .VnetName }}/subnets/{{ .SubnetName }}"
}

terraform {
  required_providers {
    azurerm = {
      source  = "hashicorp/azurerm"
      version = "~> 2.65"
    }
    azuread = {
      source  = "hashicorp/azuread"
      version = "~> 2.14"
    }
    hcp = {
      source  = "hashicorp/hcp"
      version = ">= 0.23.1"
    }
    random = {
      source  = "hashicorp/random"
      version = "3.2.0"
    }
  }
  required_version = ">= 1.0.11"
}

provider "azurerm" {
  subscription_id = local.subscription_id
  features {}
}

provider "azuread" {}

provider "hcp" {}

provider "random" {}

provider "consul" {
  address    = hcp_consul_cluster.main.consul_public_endpoint_url
  datacenter = hcp_consul_cluster.main.datacenter
  token      = hcp_consul_cluster_root_token.token.secret_id
}

data "azurerm_subscription" "current" {}

resource "random_string" "vm_admin_password" {
  length = 16
}

data "azurerm_resource_group" "rg" {
  name = local.vnet_rg_name
}

resource "azurerm_network_security_group" "nsg" {
  name                = "${local.cluster_id}-nsg"
  resource_group_name = data.azurerm_resource_group.rg.name
  location            = data.azurerm_resource_group.rg.location
}

resource "hcp_hvn" "hvn" {
  hvn_id         = local.hvn_id
  cloud_provider = "azure"
  region         = local.hvn_region
  cidr_block     = "172.25.32.0/20"
}

module "hcp_peering" {
  source  = "hashicorp/hcp-consul/azurerm"
  version = "~> 0.2.5"

  # Required
  tenant_id       = data.azurerm_subscription.current.tenant_id
  subscription_id = data.azurerm_subscription.current.subscription_id
  hvn             = hcp_hvn.hvn
  vnet_rg         = data.azurerm_resource_group.rg.name
  vnet_id         = local.vnet_id
  subnet_ids      = [local.subnet1_id]

  # Optional
  security_group_names = [azurerm_network_security_group.nsg.name]
  prefix               = local.cluster_id
}

resource "hcp_consul_cluster" "main" {
  cluster_id      = local.cluster_id
  hvn_id          = hcp_hvn.hvn.hvn_id
  public_endpoint = true
  tier            = "development"
}

resource "hcp_consul_cluster_root_token" "token" {
  cluster_id = hcp_consul_cluster.main.id
}

module "vm_client" {
  source  = "hashicorp/hcp-consul/azurerm//modules/hcp-vm-client"
  version = "~> 0.2.5"

  resource_group           = data.azurerm_resource_group.rg.name
  location                 = data.azurerm_resource_group.rg.location
  nsg_name                 = azurerm_network_security_group.nsg.name
  allowed_ssh_cidr_blocks  = ["0.0.0.0/0"]
  allowed_http_cidr_blocks = ["0.0.0.0/0"]
  subnet_id                = local.subnet1_id
  vm_admin_password        = random_string.vm_admin_password.result

  client_config_file = hcp_consul_cluster.main.consul_config_file
  client_ca_file     = hcp_consul_cluster.main.consul_ca_file
  root_token         = hcp_consul_cluster_root_token.token.secret_id
  consul_version     = hcp_consul_cluster.main.consul_version
}

output "consul_root_token" {
  value     = hcp_consul_cluster_root_token.token.secret_id
  sensitive = true
}

output "vm_admin_password" {
  value     = random_string.vm_admin_password.result
  sensitive = true
}

output "consul_url" {
  value = hcp_consul_cluster.main.consul_public_endpoint_url
}

output "nomad_url" {
  value = "http://${module.vm_client.public_ip}:8081"
}

output "hashicups_url" {
  value = "http://${module.vm_client.public_ip}"
}

output "vm_client_public_ip" {
  value = module.vm_client.public_ip
}

output "next_steps" {
  value = <<EOT
Hashicups Application will be ready in ~5 minutes.
Use 'terraform output consul_root_token' to retrieve the root token.
EOT
}
The locals block reflects the values of your existing VNet and resource group, in addition to pre-populated fields with reasonable defaults.
- The hvn_region defines the HashiCorp Virtual Network (HVN) region.
- The hvn_id defines your HVN ID. HCP will pre-populate this with a unique name that uses this pattern: consul-quickstart-UNIQUE_ID-hvn.
- The cluster_id defines your HCP Consul Dedicated cluster ID. HCP will pre-populate this with a unique name that uses this pattern: consul-quickstart-UNIQUE_ID.
- The subscription_id defines your Azure subscription ID.
- The vnet_rg_name defines the resource group your VNet is in.
- The vnet_id defines your VNet ID. Terraform will use this to set up a peering connection between the HVN and your VNet.
- The subnet1_id defines your subnet ID. Terraform will use this to set up a peering connection between the HVN and your subnet. In addition, it will deploy the Azure VM into this subnet. If you need to look up the vnet_id and subnet1_id values with the Azure CLI, see the example after this list.
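If you are not sure which values to use for vnet_id and subnet1_id, the Azure CLI can return the full resource IDs. The resource group, VNet, and subnet names below (hashicups-rg, hashicups-vnet, hashicups-subnet) are placeholders for your own names.
$ az network vnet show --resource-group hashicups-rg --name hashicups-vnet --query id --output tsv
$ az network vnet subnet show --resource-group hashicups-rg --vnet-name hashicups-vnet --name hashicups-subnet --query id --output tsv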
Tip
The hvn_id and cluster_id must be unique within your HCP organization.
Deploy resources
Now that you have the Terraform configuration saved in a main.tf file, you are ready to deploy the HVN, HCP Consul Dedicated cluster, and end-to-end development environment.
Verify that you have completed all the steps listed in the Prerequisites.
Note
If you are deploying into an existing VNet, ensure the subnet has internet connectivity.
Initialize your Terraform configuration to download the necessary Terraform providers and modules.
$ terraform init
Deploy the resources. Enter yes when prompted to accept your changes.
$ terraform apply
## ...
Do you want to perform these actions?
Terraform will perform the actions described above.
Only 'yes' will be accepted to approve.
Enter a value: yes
### ...
Apply complete! Resources: 28 added, 0 changed, 0 destroyed.
Outputs:
consul_root_token = <sensitive>
consul_url = "https://servers-public-consul-87b79a95.b7fd1247.z1.hashicorp.cloud"
hashicups_url = "http://20.29.240.125"
next_steps = <<EOT
Hashicups Application will be ready in ~5 minutes.
Use 'terraform output consul_root_token' to retrieve the root token.
To SSH into your VM:
pem=~/.ssh/hashicups.pem
tf output -raw private_key_openssh > $pem
chmod 400 $pem
ssh -i $pem adminuser@$(tf output -raw vm_client_public_ip)
EOT
nomad_url = "http://20.29.240.125:8081"
private_key_openssh = <sensitive>
vm_client_public_ip = "20.29.240.125"
Once you confirm, it will take a few minutes for Terraform to set up your end-to-end development environment. While you wait for Terraform to complete, proceed to the next section to review the Terraform configuration in more detail and better understand how to set up HCP Consul Dedicated for your workloads.
Review Terraform configuration
The Terraform configuration deploys an end-to-end development environment by:
- Creating a new HashiCorp Virtual Network (HVN) and a single-node Consul development server
- Connecting the HVN with your Azure virtual network (VNet)
- Provisioning an Azure VM instance and installing a Consul client
- Deploying HashiCups, a demo application that uses Consul service mesh
Before starting these steps, Terraform retrieves information about your Azure environment.
Terraform uses data sources to retrieve information about your current Azure subscription and your existing resource group.
main.tf
data "azurerm_subscription" "current" {}
data "azurerm_resource_group" "rg" {
name = local.vnet_rg_name
}
The Terraform configuration also defines an Azure network security group. When Terraform configures a peering connection, it will add Consul-specific rules to this network security group.
main.tf
resource "azurerm_network_security_group" "nsg" {
name = "${local.cluster_id}-nsg"
resource_group_name = azurerm_resource_group.rg.name
location = azurerm_resource_group.rg.location
}
Create HVN and HCP Consul Dedicated
This Terraform configuration defines hcp_hvn and hcp_consul_cluster resources to deploy your HVN and HCP Consul Dedicated cluster.
The HVN resource references the hvn_id and hvn_region local values. The resource also uses 172.25.32.0/20 as a default for its CIDR block. Your HVN's CIDR block should not conflict with your VNet CIDR block.
main.tf
resource "hcp_hvn" "hvn" { hvn_id = local.hvn_id cloud_provider = "azure" region = local.hvn_region cidr_block = "172.25.32.0/20" }
The HCP Consul Dedicated resource references the HVN's ID. This is because HashiCorp will deploy the HCP Consul Dedicated cluster into the HVN. The HCP Consul Dedicated cluster has a public endpoint and is in the development cluster tier. Development tier HCP Consul Dedicated clusters only have one server agent. For production workloads, we do not recommend public endpoints for HCP Consul Dedicated.
Note
HCP Consul Dedicated on Azure only supports the development cluster tier during public beta.
main.tf
resource "hcp_consul_cluster" "main" { cluster_id = local.cluster_id hvn_id = hcp_hvn.hvn.hvn_id public_endpoint = true tier = "development" }
Connect HVN with VNet configuration
This Terraform configuration uses the hashicorp/hcp-consul/azurerm Terraform module to connect the HVN with your VNet. This module:
- creates and accepts a peering connection between the HVN and VNet
- creates HVN routes that direct HCP traffic to your subnet's CIDR ranges
- creates the necessary Azure ingress rules for HCP Consul Dedicated to communicate with the Consul clients
Notice that the module references the HVN and network security group, in addition to your existing resource group, VNet, and subnet.
main.tf
module "hcp_peering" {
source = "hashicorp/hcp-consul/azurerm"
version = "~> 0.2.5"
# Required
tenant_id = data.azurerm_subscription.current.tenant_id
subscription_id = data.azurerm_subscription.current.subscription_id
hvn = hcp_hvn.hvn
vnet_rg = data.azurerm_resource_group.rg.name
vnet_id = local.vnet_id
subnet_ids = [local.subnet1_id]
# Optional
security_group_names = [azurerm_network_security_group.nsg.name]
prefix = local.cluster_id
}
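After Terraform finishes, you can confirm from the Azure side that the module created and accepted the peering connection. The resource group and VNet names below are placeholders for your own values.
$ az network vnet peering list --resource-group hashicups-rg --vnet-name hashicups-vnet --output table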
Provision Azure VM and install Consul client configuration
This Terraform configuration uses the hashicorp/hcp-consul/azurerm//modules/hcp-vm-client Terraform module to deploy the Azure VM, set up SSH, install the Consul client, and deploy the HashiCups demo application.
This section will only cover the resources required to deploy the Azure VM and set up the Consul client to connect to HCP Consul Dedicated.
In this tutorial, you will apply HCP Consul Dedicated's secure-by-default design with
Terraform by configuring your Azure VM instances with the gossip encryption key,
the Consul CA cert, and a permissive ACL token. As a result, the hcp-vm-client
module requires the HCP Consul Dedicated cluster token (root ACL token) and HCP Consul Dedicated
client configuration (CA certificate and gossip encryption key).
The HCP Consul Dedicated cluster token bootstraps the cluster's ACL system. The configuration uses hcp_consul_cluster_root_token to generate a cluster token.
Note
The resource will generate a cluster token, which is a sensitive value. For production workloads, refer to the list of recommendations for storing sensitive information in Terraform.
main.tf
resource "hcp_consul_cluster_root_token" "token" { cluster_id = hcp_consul_cluster.main.id }
The hcp_consul_cluster resource has attributes that store the cluster's CA certificate, gossip encryption key, private CA file, private HCP Consul Dedicated URL, and more.
main.tf
module "vm_client" { source = "hashicorp/hcp-consul/azurerm//modules/hcp-vm-client" version = "~> 0.2.5" ## ... client_config_file = hcp_consul_cluster.main.consul_config_file client_ca_file = hcp_consul_cluster.main.consul_ca_file root_token = hcp_consul_cluster_root_token.token.secret_id ## ... }
The hcp-vm-client module source contains all the files the module uses. Refer to the module's main.tf for a complete list of resources deployed by the module. In addition:
- The templates/user_data.sh file serves as an entrypoint. It loads, configures, and runs setup.sh.
- The templates/service file is a template for a systemd service. This lets the Consul client run as a daemon (background) service on the VM instance. It will also automatically restart the Consul client if it fails.
- The templates/setup.sh file contains the core logic to configure the Consul client. First, the script sets up container networking (setup_networking), then downloads the Consul binary and Docker (setup_deps).
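Once the VM is running, you can optionally confirm that these scripts left the agents running. The SSH details come from the next_steps output shown earlier; the unit names consul and nomad are assumptions and may differ between module versions.
$ ssh -i ~/.ssh/hashicups.pem adminuser@$(terraform output -raw vm_client_public_ip)
$ systemctl status consul nomad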
Deploy HashiCups configuration
In addition to creating the Azure VM and setting up the Consul client, the hashicorp/hcp-consul/azurerm//modules/hcp-vm-client Terraform module also sets up Nomad and deploys the HashiCups demo application. While you can leverage HCP Consul Dedicated with any application, running HashiCups on Nomad highlights the service mesh capabilities on an application with a microservices architecture.
The templates/setup.sh file contains the automation to install and configure Nomad on the VM. It also starts the HashiCups application.
The module's main.tf file defines the Nomad service start command. Notice that the service starts Nomad with the -dev-connect flag, which enables Nomad's integration with Consul service mesh.
hcp-vm-client/main.tf
resource "azurerm_linux_virtual_machine" "vm" {
user_data = base64encode(templatefile("${path.module}/templates/user_data.sh", {
setup = base64gzip(templatefile("${path.module}/templates/setup.sh", {
## …
nomad_service = base64encode(templatefile("${path.module}/templates/service", {
service_name = "nomad",
service_cmd = "/usr/bin/nomad agent -dev-connect -consul-token=${var.root_token}",
})),
## …
}
}
}
The templates/hashicups.nomad file contains the Nomad job for HashiCups.
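To give a sense of how a service in that job joins the mesh, the following is a minimal, illustrative Nomad job with a Connect-enabled service stanza. It is a sketch with placeholder names and a generic container image, not an excerpt from hashicups.nomad.
job "example" {
  datacenters = ["dc1"]

  group "api" {
    network {
      mode = "bridge"
    }

    service {
      name = "example-api"
      port = "5678"

      connect {
        # An empty sidecar_service block injects an Envoy sidecar proxy,
        # registering this service in the Consul service mesh.
        sidecar_service {}
      }
    }

    task "api" {
      driver = "docker"

      config {
        image = "hashicorp/http-echo" # generic placeholder image
        args  = ["-listen", ":5678", "-text", "hello"]
      }
    }
  }
}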
Since HCP Consul Dedicated on Azure is secure by default, the datacenter is created with a "default deny" intention in place. This means that, by default, no services can interact with each other until an operator explicitly allows them to do so by creating intentions for each inter-service operation they wish to allow. The intentions.tf file defines service intentions between the HashiCups services, enabling them to communicate with each other.
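For reference, a single intention that allows one service to call another can be expressed with the Consul Terraform provider as shown below. The service names are illustrative placeholders; the actual HashiCups intentions are defined in the module's intentions.tf.
resource "consul_intention" "frontend_to_api" {
  source_name      = "frontend" # placeholder service names
  destination_name = "public-api"
  action           = "allow"
}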
Verify created resources
Once Terraform completes, you can verify the resources using the HCP Consul Dedicated UI or through the Consul CLI.
Consul UI
Retrieve your HCP Consul Dedicated dashboard URL and open it in your browser.
$ terraform output -raw consul_url
https://servers-public-consul-87b79a95.b7fd1247.z1.hashicorp.cloud
Next, retrieve your Consul root token. You will use this token to authenticate your Consul dashboard.
$ terraform output -raw consul_root_token
00000000-0000-0000-0000-000000000000
In your HCP Consul Dedicated dashboard, sign in with the root token you just retrieved.
You should find a list of services that includes consul, nomad-client, and your HashiCups services.
Consul CLI configuration
To use the Consul CLI, you must set environment variables that store your ACL token and HCP Consul Dedicated cluster address.
First, set your CONSUL_HTTP_ADDR environment variable.
$ export CONSUL_HTTP_ADDR=$(terraform output -raw consul_url)
Then, set your CONSUL_HTTP_TOKEN environment variable.
$ export CONSUL_HTTP_TOKEN=$(terraform output -raw consul_root_token)
Retrieve a list of members in your datacenter to verify your Consul CLI is set up properly.
$ consul members
Node Address Status Type Build Protocol DC Segment
fb3bd391-77f9-5b9a-9bb6-70284563d584 172.25.32.4:8301 alive server 1.11.6+ent 2 consul-quickstart-1654229446149 <all>
vmclient-client-vm 10.0.1.4:8301 alive client 1.11.6+ent 2 consul-quickstart-1654229446149 <default>
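You can also list the services registered in the Consul catalog. The output should match what you saw in the UI, including consul, nomad-client, and the HashiCups services.
$ consul catalog services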
HashiCups application
The end-to-end development environment deploys HashiCups. Visit the hashicups_url to verify that Terraform deployed HashiCups successfully and that its services can communicate with each other.
Tip
View the Nomad job that defines the HashiCups application to learn more about how the HashiCups services interact with each other.
Retrieve your HashiCups URL and open it in your browser.
$ terraform output -raw hashicups_url
http://20.29.240.125
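If you prefer a terminal check, you can confirm that the frontend responds over HTTP. Expect a 200 status code once the application finishes starting, which the next_steps output notes can take around five minutes.
$ curl -s -o /dev/null -w "%{http_code}\n" $(terraform output -raw hashicups_url)
200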
Clean up resources
Now that you have completed the tutorial, destroy the resources you created with Terraform. Enter yes when prompted to confirm the destruction process.
$ terraform destroy
## ...
Destroy complete! Resources: 28 destroyed.
Next steps
In this tutorial, you deployed an end-to-end development environment and reviewed the Terraform configuration that defines it.
If you encounter any issues, please contact the HCP team at support.hashicorp.com.