# Terraform resources for HCP Consul Dedicated automation
In this tutorial collection, you learned how to use the Terraform templates available from the HCP UI to deploy a demo application on a new HCP Consul Dedicated cluster.

This tutorial provides more detail on the Terraform resources you used to manage your HCP Consul Dedicated cluster and deploy your application. It covers:
- Terraform `hcp` provider - contains the resources and the data sources to interact with the HCP platform.
- Terraform `hcp-aws-consul` module - a demo module to deploy an example application on either EC2 or EKS.
## Terraform `hcp` provider
This is the main provider for deploying and managing HCP resources. It contains the resources and data sources needed to interact with the HCP platform.
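The provider is declared like any other Terraform provider. A minimal sketch (the version constraint is illustrative, and authentication details depend on your setup):

```hcl
terraform {
  required_providers {
    hcp = {
      source  = "hashicorp/hcp"
      version = "~> 0.23"
    }
  }
}

# Credentials are typically supplied via the HCP_CLIENT_ID and
# HCP_CLIENT_SECRET environment variables (an HCP service principal).
provider "hcp" {}
```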
### HashiCorp Virtual Network (HVN)
Everything in HashiCorp Cloud Platform (HCP) starts with the HashiCorp Virtual Network (HVN).
HVNs enable you to deploy HashiCorp Cloud products without having to manage the networking details. They give you a simple setup for creating a network on AWS, in the region of your choice, and with the option to specify a CIDR range.
The `hcp` provider lets you create a new HVN using a resource, or retrieve information about an existing HVN using a data source. The data source is useful when you want to deploy new resources into an existing HVN.
resource `hcp_hvn` - The HVN resource allows you to create a HashiCorp Virtual Network in HCP.

```hcl
resource "hcp_hvn" "example" {
  hvn_id         = "main-hvn"
  cloud_provider = "aws"
  region         = "us-west-2"
  cidr_block     = "172.25.16.0/20"
}
```
data source `hcp_hvn` - The HVN data source provides information about an existing HashiCorp Virtual Network.

```hcl
data "hcp_hvn" "example" {
  hvn_id = var.hvn_id
}
```
### HCP Consul Dedicated cluster
Once you have an HVN created and configured, you can deploy an HCP Consul Dedicated cluster. HCP Consul Dedicated enables you to quickly deploy Consul clusters in AWS across various environments while offloading the administrative burden to the SRE experts at HashiCorp.
The `hcp` provider lets you create a new HCP Consul Dedicated cluster using a resource, or retrieve information about an existing cluster using a data source. The data source is useful when you want to connect clients to an existing HCP Consul Dedicated deployment.
resource `hcp_consul_cluster` - The Consul cluster resource allows you to manage an HCP Consul Dedicated cluster.

```hcl
resource "hcp_consul_cluster" "example" {
  cluster_id = "consul-cluster"
  hvn_id     = hcp_hvn.example.hvn_id
  tier       = "development"
}
```
data source `hcp_consul_cluster` - The cluster data source provides information about an existing HCP Consul Dedicated cluster.

```hcl
data "hcp_consul_cluster" "example" {
  cluster_id = var.cluster_id
}
```
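Client deployments also need an ACL bootstrap token for the cluster. The `hcp` provider exposes an `hcp_consul_cluster_root_token` resource for this purpose; a minimal sketch (the resource name `token` is illustrative):

```hcl
# Generates a new ACL root token for the cluster. Note that creating
# this resource invalidates any previously generated root token.
resource "hcp_consul_cluster_root_token" "token" {
  cluster_id = hcp_consul_cluster.example.cluster_id
}
```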
### Network peering with your VPC
Creating a network peering from your HVN allows resources in your AWS account to connect to your HCP resources. Peer your Amazon VPC with your HVN to enable resource access. After creating the peering, you need to accept the peering request and configure your VPC's security groups and route tables in your AWS account.
The Amazon VPC can be managed with the AWS provider, while the peering can be managed using resources available in the `hcp` provider.
resource `hcp_aws_network_peering` - The AWS network peering resource allows you to manage a network peering between an HVN and a peer AWS VPC.

```hcl
resource "hcp_aws_network_peering" "dev" {
  hvn_id          = hcp_hvn.main.hvn_id
  peering_id      = "dev"
  peer_vpc_id     = aws_vpc.peer.id
  peer_account_id = aws_vpc.peer.owner_id
  peer_vpc_region = data.aws_arn.peer.region
}
```
resource `hcp_hvn_route` - The HVN route resource allows you to manage an HVN route.

```hcl
resource "hcp_hvn_route" "example-peering-route" {
  hvn_link         = hcp_hvn.main.self_link
  hvn_route_id     = "peering-route"
  destination_cidr = aws_vpc.peer.cidr_block
  target_link      = hcp_aws_network_peering.dev.self_link
}
```
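On the AWS side, the peering request still has to be accepted and a route added toward the HVN. A minimal sketch using the AWS provider (resource names and the route table reference are illustrative):

```hcl
# Accept the peering request that HCP created in the AWS account.
resource "aws_vpc_peering_connection_accepter" "peer" {
  vpc_peering_connection_id = hcp_aws_network_peering.dev.provider_peering_id
  auto_accept               = true
}

# Route traffic destined for the HVN CIDR block through the peering.
resource "aws_route" "to_hvn" {
  route_table_id            = aws_vpc.peer.main_route_table_id
  destination_cidr_block    = hcp_hvn.main.cidr_block
  vpc_peering_connection_id = hcp_aws_network_peering.dev.provider_peering_id
}
```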
## Terraform `hcp-aws-consul` module
Once you have fully deployed HCP Consul Dedicated, you need to deploy Consul clients inside the peered VPC to get full access to Consul's features.
To help you deploy Consul clients, we have created a Terraform module, `hcp-aws-consul`, that provides some test scenarios you can use as a blueprint for your deployments.
Also called the root module, it takes as input an HVN, an AWS VPC, a list of AWS route tables that need to route to the HVN CIDR block, and a list of AWS security groups that should allow inbound Consul traffic.
```hcl
module "hcp-aws" {
  source  = "hashicorp/hcp-consul/aws"
  version = "0.0.1"

  hvn_id             = var.hvn_id
  vpc_id             = module.vpc.vpc_id
  route_table_ids    = module.vpc.public_route_table_ids
  security_group_ids = [module.vpc.default_security_group_id]
}
```
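The root module example assumes a `module.vpc` that provides the VPC ID, public route tables, and default security group. A minimal sketch using the community `terraform-aws-modules/vpc/aws` module (names, CIDR ranges, and availability zones are illustrative):

```hcl
module "vpc" {
  source  = "terraform-aws-modules/vpc/aws"
  version = "~> 5.0"

  name           = "consul-client-vpc"
  cidr           = "10.0.0.0/16"
  azs            = ["us-west-2a", "us-west-2b"]
  public_subnets = ["10.0.1.0/24", "10.0.2.0/24"]
}
```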
Alongside the `hcp-aws-consul` module, some examples and submodules represent specific example scenarios that you can try directly, without any customization, and use as a starting point for customizing the deployment for your specific needs.
**Note:** These modules are intended for demonstration purposes only. While the Consul clients are deployed secure-by-default, the module intentionally presents a limited set of configuration options to help you get started quickly.
### Resources for EC2
submodule `hcp-ec2-client` - Install Consul and Nomad on an EC2 instance, connect to the HCP Consul Dedicated cluster, and use Nomad to deploy a demo application.

example `hcp-ec2-demo` - Use the `hcp-ec2-client` submodule to create EC2 virtual machines and run Consul clients.

```hcl
module "aws_ec2_consul_client" {
  source = "hashicorp/hcp-consul/aws//modules/hcp-ec2-client"

  subnet_id                = module.vpc.public_subnets[0]
  security_group_id        = module.aws_hcp_consul.security_group_id
  allowed_ssh_cidr_blocks  = ["0.0.0.0/0"]
  allowed_http_cidr_blocks = ["0.0.0.0/0"]
  client_config_file       = hcp_consul_cluster.main.consul_config_file
  client_ca_file           = hcp_consul_cluster.main.consul_ca_file
  root_token               = hcp_consul_cluster_root_token.token.secret_id

  depends_on = [module.aws_hcp_consul]
}
```
### Resources for EKS
submodule `hcp-eks-client` - Install the Consul Helm chart on a provided Kubernetes cluster and deploy a demo application on the cluster.

example `hcp-eks-demo` - Use the `hcp-eks-client` submodule to deploy an EKS cluster, run Consul clients, and deploy a demo application on the cluster.

```hcl
module "eks_consul_client" {
  source = "hashicorp/hcp-consul/aws//modules/hcp-eks-client"

  cluster_id            = hcp_consul_cluster.main.cluster_id
  consul_hosts          = jsondecode(base64decode(hcp_consul_cluster.main.consul_config_file))["retry_join"]
  k8s_api_endpoint      = module.eks.cluster_endpoint
  boostrap_acl_token    = hcp_consul_cluster_root_token.token.secret_id
  consul_ca_file        = base64decode(hcp_consul_cluster.main.consul_ca_file)
  datacenter            = hcp_consul_cluster.main.datacenter
  gossip_encryption_key = jsondecode(base64decode(hcp_consul_cluster.main.consul_config_file))["encrypt"]

  # The EKS node group will fail to create if the clients are
  # created at the same time. This forces the client to wait until
  # the node group is successfully created.
  depends_on = [module.eks]
}
```
## Next steps
In this tutorial you got an overview of the available Terraform resources used to manage your HCP Consul Dedicated cluster and workload.
If you encounter any issues, please contact the HCP team at support.hashicorp.com.