Terraform
Connect workspaces with run triggers
HCP Terraform's run triggers allow you to link workspaces so that a successful apply in a source workspace queues a run in any workspace linked to it with a run trigger. For example, adding new subnets to your network configuration could trigger an update to your application configuration to rebalance servers across the new subnets.
When managing complex infrastructure with HCP Terraform, organizing your configuration into different workspaces helps you to better manage and design your infrastructure. Configuring run triggers between workspaces allows you to set up infrastructure pipelines as part of your overall deployment strategy.
In this tutorial, you will set up one workspace to manage your network and a second workspace to manage your application. You will configure a run trigger so that any changes to your network workspace will queue an apply step on your application workspace.
Prerequisites
This tutorial assumes that you are familiar with the Terraform and HCP Terraform workflows. If you are new to Terraform, complete the Get Started collection first. If you are new to HCP Terraform, complete the HCP Terraform Get Started tutorials first.
In order to complete this tutorial, you will need the following:
- Terraform v1.2+ installed locally.
- An AWS account.
- An HCP Terraform account with HCP Terraform locally authenticated.
- An HCP Terraform variable set configured with your AWS credentials.
Warning
There may be some charges from AWS associated with running this configuration. Please refer to the AWS pricing guide for more details. Instructions to remove the infrastructure you create can be found at the end of this tutorial.
Fork GitHub repositories
This tutorial uses two GitHub repositories, one for each workspace, which you will need to fork to use with your HCP Terraform account.
Network Repository
Navigate to the network repository. Use the Fork button in the upper right corner of that page to fork that repository into your account.
Inside this repository, you will find the Terraform configuration for your network infrastructure.
- main.tf configures the aws provider and defines resources for your VPC, load balancer, and related networking infrastructure.
- terraform.tf configures the provider and Terraform versions used by this configuration.
- variables.tf declares variables, including the region and the number of public and private subnets.
- outputs.tf declares the outputs for this module. The application workspace will use these outputs in its configuration (see the sketch below).
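For orientation, the two outputs the application workspace reads later in this tutorial are the private subnet IDs and the application security group IDs. A minimal sketch of how outputs.tf might declare them follows; the value expressions and resource names are assumptions for illustration, not copied from the repository.

output "private_subnet_ids" {
  description = "IDs of the private subnets for application instances"
  value       = aws_subnet.private[*].id # illustrative resource name
}

output "app_instance_security_group_ids" {
  description = "Security group IDs to attach to application instances"
  value       = [aws_security_group.app_instance.id] # illustrative resource name
}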
Application Repository
Next, navigate to the application repository. Again, use the Fork button to fork this repository into your GitHub account.
The application repository is organized like the network repository, but with one important difference: this module uses a terraform_remote_state data block at the top of main.tf to retrieve the outputs from the network workspace. Remote state data blocks allow you to share data between workspaces in HCP Terraform. When used with the run trigger you will configure later in this tutorial, this data block will allow the application workspace to respond to changes to the network workspace.
data "terraform_remote_state" "network" {
backend = "remote"
config = {
organization = var.tfc_org_name
workspaces = {
name = var.tfc_network_workspace_name
}
}
}
This data source connects to HCP Terraform to retrieve output values from the indicated workspace, including the subnet and load balancer configuration. Later in main.tf, the "aws_instance" "app" resource uses this data to configure the correct subnet and security groups for each EC2 instance.
subnet_id = data.terraform_remote_state.network.outputs.private_subnet_ids[count.index % length(data.terraform_remote_state.network.outputs.private_subnet_ids)]
vpc_security_group_ids = data.terraform_remote_state.network.outputs.app_instance_security_group_ids
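In context, that resource looks roughly like the following. This is a simplified sketch: the count expression, AMI data source, and instance type are illustrative assumptions rather than the repository's exact configuration.

resource "aws_instance" "app" {
  count = length(data.terraform_remote_state.network.outputs.private_subnet_ids) # illustrative: one instance per subnet

  ami           = data.aws_ami.amazon_linux.id # assumes an aws_ami data source defined elsewhere
  instance_type = "t2.micro"

  # Spread instances across the private subnets shared by the network workspace.
  subnet_id              = data.terraform_remote_state.network.outputs.private_subnet_ids[count.index % length(data.terraform_remote_state.network.outputs.private_subnet_ids)]
  vpc_security_group_ids = data.terraform_remote_state.network.outputs.app_instance_security_group_ids
}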
Tip
We recommend using provider-specific data sources when convenient. terraform_remote_state is more flexible, but requires access to the whole Terraform state.
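For example, rather than reading the entire network state, the application configuration could look up the private subnets directly with an AWS provider data source, filtering on tags. The VPC variable and tag values below are assumptions; the network configuration would need to apply matching tags.

data "aws_subnets" "private" {
  filter {
    name   = "vpc-id"
    values = [var.vpc_id] # assumed variable identifying the shared VPC
  }

  tags = {
    Tier = "private" # assumes the network configuration tags its private subnets this way
  }
}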
In the next section, you will create and configure workspaces for both of these repositories. Then, since the application infrastructure depends on the network infrastructure, you will set up a run trigger to connect them so that a change to your network infrastructure will reconfigure your application infrastructure as needed.
Configure workspaces
Log in to the HCP Terraform web UI. You may want to create an organization specifically for this example to separate it from any production infrastructure you are managing with HCP Terraform. You will need this organization name when configuring the application workspace.
Network workspace
Create the network workspace by following these steps:
- Navigate to Workspaces from the main menu, and click the + New Workspace button.
- Choose the Version control workflow.
- Connect the workspace to your GitHub account. If you have not connected your GitHub account to HCP Terraform, follow the prompts to do so.
- Connect to the learn-terraform-run-triggers-network GitHub repository you forked in the last step.
- The workspace name will be learn-terraform-run-triggers-network. Leave the description blank.
- Click the Create Workspace button.
Note
If this is the first time you have connected Terraform to GitHub, you will need to authenticate with GitHub first. Follow the prompts in HCP Terraform or refer to the Use VCS-Driven Workflow tutorial for instructions.
It will take a few moments for HCP Terraform to connect to GitHub and populate the workspace. While this process completes, click on Go to workspace overview, then choose Variables from the left nav.
Terraform will authenticate with AWS using environment variables with your access key ID (AWS_ACCESS_KEY_ID) and secret access key (AWS_SECRET_ACCESS_KEY). Click the Add variable button to add these two variables. Select the Environment variable option for each and mark them as sensitive.
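If you prefer to manage these workspace variables as code rather than through the UI, the hashicorp/tfe provider can create them. This is an optional alternative to the steps above, not part of the tutorial repositories; the credential variables referenced below are assumptions.

data "tfe_workspace" "network" {
  name         = "learn-terraform-run-triggers-network"
  organization = var.tfc_org_name # assumed variable holding your organization name
}

resource "tfe_variable" "aws_access_key_id" {
  key          = "AWS_ACCESS_KEY_ID"
  value        = var.aws_access_key_id # assumed variable; never hard-code credentials
  category     = "env"
  sensitive    = true
  workspace_id = data.tfe_workspace.network.id
}

resource "tfe_variable" "aws_secret_access_key" {
  key          = "AWS_SECRET_ACCESS_KEY"
  value        = var.aws_secret_access_key # assumed variable
  category     = "env"
  sensitive    = true
  workspace_id = data.tfe_workspace.network.id
}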
Application workspace
Now that you have configured the network workspace, create the application workspace by following a similar set of steps.
- Navigate to Workspaces from the main menu, and click the + New Workspace button in the upper right corner.
- Choose the Version control workflow.
- Connect the workspace to your GitHub account.
- Connect to the learn-terraform-run-triggers-application GitHub repository you forked in the last step.
- The workspace name will be learn-terraform-run-triggers-application. Leave the description blank.
- Click the Create Workspace button.
Once you have created the application workspace, click on Go to workspace overview, then click Variables.
The remote state data block in the application configuration requires both the organization name and workspace name. It is set up to use the workspace name learn-terraform-run-triggers-network by default, but your organization name will be unique.
Click on the + Add variable button and create a new Terraform variable with the key tfc_org_name, and set the value to the name of your HCP Terraform or Terraform Enterprise organization. For example, the organization name might be "hashicorp-learn"; be sure to use your own organization's name.
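This variable, along with the workspace name, corresponds to declarations in the application configuration's variables.tf. A sketch of how they are likely declared follows; the descriptions are illustrative, though the default workspace name matches the one this tutorial uses.

variable "tfc_org_name" {
  description = "Name of the HCP Terraform or Terraform Enterprise organization that owns the network workspace"
  type        = string
}

variable "tfc_network_workspace_name" {
  description = "Name of the workspace that manages the network infrastructure"
  type        = string
  default     = "learn-terraform-run-triggers-network"
}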
Now add variables for your AWS access key ID (AWS_ACCESS_KEY_ID) and secret access key (AWS_SECRET_ACCESS_KEY), just as you did for the network workspace.
Now you have two workspaces, one for your network and another for your application environment.
Next, you will configure a run trigger for the application workspace. Once the run trigger is configured, whenever the network workspace completes a successful apply step, a plan will automatically be queued in the application workspace.
Allow remote state access
HCP Terraform protects your state file by encrypting it at rest and automatically restricting access to it from other workspaces. Allow the application workspace to access the network workspace's state.
- Navigate to the learn-terraform-run-triggers-network workspace.
- Select Settings from the main menu.
- On the General page, scroll down to the Remote state sharing section.
- Under Select workspaces to share with, select learn-terraform-run-triggers-application from the drop-down menu.
- Click the Save settings button at the bottom of the page.
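If you were managing these workspaces with the hashicorp/tfe provider instead of the UI, the same setting corresponds to the remote state consumer configuration on the source workspace. A sketch under that assumption, with VCS settings omitted and the organization variable assumed:

resource "tfe_workspace" "application" {
  name         = "learn-terraform-run-triggers-application"
  organization = var.tfc_org_name # assumed variable
}

resource "tfe_workspace" "network" {
  name                      = "learn-terraform-run-triggers-network"
  organization              = var.tfc_org_name
  global_remote_state       = false
  remote_state_consumer_ids = [tfe_workspace.application.id] # only the application workspace may read this state
}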
Configure a run trigger
Now configure a run trigger for the application workspace.
- Navigate to the learn-terraform-run-triggers-application workspace.
- Select Settings from the main menu.
- Select Run triggers from the settings menu.
- Under Source Workspaces, select learn-terraform-run-triggers-network from the drop-down menu.
- Click the Add workspace button.
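Run triggers can also be managed as code with the hashicorp/tfe provider, or through the HCP Terraform API. A minimal sketch that looks up both workspaces by name and connects them, assuming a tfc_org_name variable:

data "tfe_workspace" "network" {
  name         = "learn-terraform-run-triggers-network"
  organization = var.tfc_org_name
}

data "tfe_workspace" "application" {
  name         = "learn-terraform-run-triggers-application"
  organization = var.tfc_org_name
}

# Queue a run in the application workspace after each successful apply in the network workspace.
resource "tfe_run_trigger" "network_to_application" {
  workspace_id  = data.tfe_workspace.application.id
  sourceable_id = data.tfe_workspace.network.id
}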
Now queue a plan for the network workspace.
- Navigate to the learn-terraform-run-triggers-network workspace.
- Select Actions > Start a new run.
- Choose the Plan and apply run type.
- Click the Start run button.
When the plan step is finished, there will be a message telling you that a successful apply step for this workspace will trigger a run for the learn-terraform-run-triggers-application workspace.
Click Confirm & Apply to apply the plan.
It will take a few minutes for the apply step to complete and the network resources to be provisioned.
Once the apply step has completed, return to the application workspace. Notice that the run trigger you configured earlier has queued a new plan for this workspace. You must manually confirm and apply plans queued by a run trigger.
Once the plan step is finished, click the See details button, then Confirm & Apply and Confirm Plan to apply the run. After the apply step is complete, expand the Outputs section to see the output values for this run. Copy the value shown for public_dns_name without the quotation marks and paste it in your web browser's address bar to see the "Hello, world!" message from the application.
Now that you have set up a run trigger between your two workspaces, a successful apply on the network workspace will queue a plan on the application workspace. You can use run triggers to coordinate between workspaces as part of your infrastructure pipelines with other automation tools.
Clean up
Destroy the infrastructure provisioned in these example workspaces to avoid unexpected charges from AWS. You must destroy this infrastructure in the correct order, because the VPC and associated infrastructure provisioned by the network workspace cannot be destroyed while there are EC2 instances provisioned by the application workspace which depend on it.
First, visit the application workspace. From the Settings menu, choose Destruction and Deletion. Ensure that Allow destroy plans is enabled. Next, click the Queue destroy plan button, and follow the steps to queue and confirm a destroy plan.
Once the destroy plan is complete, click Confirm & Apply followed by Confirm Plan to destroy your application resources.
Once the infrastructure has been successfully destroyed, return to the Settings > Destruction and Deletion page to delete the application workspace. Click the Delete from HCP Terraform button, and follow the prompt to delete your workspace from HCP Terraform.
Next, queue and apply a destroy plan for the network workspace by following the same steps. Once the infrastructure has been destroyed, delete the network workspace as well.
Next Steps
Infrastructure and application developers have common goals including automating integration and application delivery pipelines. Run triggers are one of the ways HCP Terraform supports infrastructure pipelines to satisfy the unique needs of infrastructure teams.
Further Reading
- Read more about run triggers and future plans for infrastructure pipelines in this blog post.
- Learn more in the documentation for run triggers and Remote State data sources.
- See how to automate run triggers using the HCP Terraform API.
- Now that you are comfortable using run triggers, try a more in-depth tutorial: Deploy Consul and Vault on a Kubernetes Cluster using Run Triggers.