HCP Terraform's Run Environment
HCP Terraform is designed as an execution platform for Terraform, and most of its features are based around its ability to perform Terraform runs in a fleet of disposable worker VMs. This page describes some features of the run environment for Terraform runs managed by HCP Terraform.
The Terraform Worker VMs
HCP Terraform performs Terraform runs in single-use Linux virtual machines, running on an x86_64 architecture.
The operating system and other software installed on the worker VMs is an internal implementation detail of HCP Terraform. It is not part of a stable public interface, and is subject to change at any time.
Before Terraform is executed, the worker VM's shell environment is populated with environment variables from the workspace, the selected version of Terraform is installed, and the run's Terraform configuration version is made available.
Changes made to the worker during a run are not persisted to subsequent runs, since the VM is destroyed after the run is completed. Notably, this requires extra care when installing additional software with a local-exec provisioner; see Installing Additional Tools for more details.
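Because nothing installed on the worker persists between runs, a configuration might re-download a helper binary on every run with a null_resource and a local-exec provisioner. The following is only a rough sketch; the tool name and download URL are placeholders:

```hcl
resource "null_resource" "install_tool" {
  # Worker VMs are single-use, so re-run the download on every run.
  triggers = {
    always_run = timestamp()
  }

  provisioner "local-exec" {
    # Placeholder URL; downloads the tool into the working directory.
    command = "curl -sSLo example-tool https://releases.example.com/example-tool && chmod +x example-tool"
  }
}
```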
Hands-on: Try the Upgrade Terraform Version in HCP Terraform tutorial.
Network Access to VCS and Infrastructure Providers
To perform Terraform runs, HCP Terraform needs network access to all of the resources that Terraform manages.
If you are using the SaaS version of HCP Terraform, this means your VCS provider and any private infrastructure providers you manage with Terraform (including VMware vSphere, OpenStack, other private clouds, and more) must be internet accessible.
Terraform Enterprise instances must have network connectivity to any connected VCS providers or managed infrastructure providers.
Concurrency and Run Queuing
HCP Terraform uses multiple concurrent worker VMs, which take jobs from a global queue of runs that are ready for processing. (This includes confirmed applies, and plans that have just become the current run on their workspace.)
If the global queue has more runs than the workers can handle at once, some of them must wait until a worker becomes available. When the queue is backed up, HCP Terraform gives different priorities to different kinds of runs:
- Applies that will make changes to infrastructure have the highest priority.
- Normal plans have the next highest priority.
- Speculative plans have the lowest priority.
HCP Terraform can also delay some runs in order to make performance more consistent across organizations. If an organization requests a large number of runs at once, HCP Terraform queues some of them immediately, and delays the rest until some of the initial batch have finished; this allows every organization to continue performing runs even during periods of especially heavy load.
State Access and Authentication
HCP Terraform stores state for its workspaces.
When you trigger runs via the CLI workflow, Terraform reads from and writes to HCP Terraform's stored state. HCP Terraform uses the cloud block for runs, overriding any existing backend in the configuration.
Note: The cloud block is available in Terraform v1.1 and later. Previous versions can use the remote backend to configure the CLI workflow and migrate state.
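For reference, a minimal cloud block looks like the following (the organization and workspace names are placeholders):

```hcl
terraform {
  cloud {
    organization = "acme-corp"

    workspaces {
      name = "prod-load-balancers"
    }
  }
}
```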
Autogenerated API Token
Instead of using existing user credentials, HCP Terraform generates a unique per-run API token and provides it to the Terraform worker in the CLI config file. When you run Terraform on the command line against a workspace configured for remote operations, you must have the cloud block in your configuration and have a user or team API token with the appropriate permissions specified in your CLI config file. However, the run itself occurs within one of HCP Terraform's worker VMs and uses the per-run token for state access.
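A user or team token is typically specified in the CLI config file with a credentials block; a minimal sketch, with a placeholder token value:

```hcl
# CLI config file, e.g. ~/.terraformrc
credentials "app.terraform.io" {
  token = "REPLACE-WITH-A-USER-OR-TEAM-API-TOKEN"
}
```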
The per-run token can read and write state data for the workspace associated with the run, can download modules from the private registry, and may be granted access to read state from other workspaces in the organization. (Refer to cross-workspace state access for more details.) Per-run tokens cannot make any other calls to the HCP Terraform API and are not considered to be user, team, or organization tokens. They become invalid after the run is completed.
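For example, if a run has been granted read access to another workspace's state, the configuration might consume that state with the terraform_remote_state data source; a sketch with placeholder organization and workspace names:

```hcl
data "terraform_remote_state" "network" {
  backend = "remote"

  config = {
    organization = "acme-corp"
    workspaces = {
      name = "shared-network"
    }
  }
}

# Outputs from the other workspace are then available as
# data.terraform_remote_state.network.outputs.<output_name>
```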
User Token
HCP Terraform uses the user token to access a workspace's state when you:
- Run Terraform on the command line against a workspace that is not configured for remote operations. The user must have permission to read and write state versions for the workspace.
- Run Terraform's state manipulation commands against an HCP Terraform workspace. The user must have permission to read and write state versions for the workspace.
Refer to Permissions for more details about workspace permissions.
Provider Authentication
Runs in HCP Terraform typically require credentials to authenticate with infrastructure providers. Credentials can be provided statically through environment or Terraform variables, or generated on a per-run basis through dynamic credentials for supported providers. The pros and cons of each approach are listed below.
Static Credentials
Pros
- Simple to set up
- Broad support across providers
Cons
- Requires regular manual rotation to maintain a strong security posture
- Large blast radius if a credential is exposed and needs to be revoked
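As an illustrative sketch of the static approach, credentials might be declared as sensitive Terraform variables and passed to the provider configuration; the AWS provider here is only an example:

```hcl
variable "aws_access_key_id" {
  type      = string
  sensitive = true
}

variable "aws_secret_access_key" {
  type      = string
  sensitive = true
}

provider "aws" {
  region     = "us-east-1"
  access_key = var.aws_access_key_id
  secret_key = var.aws_secret_access_key
}
```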
Dynamic Credentials
Pros
- Eliminates the need for manual rotation of credentials on HCP Terraform
- HCP Terraform metadata (the run's project, workspace, and run phase) is encoded into every token, which allows granular permission scoping on the target cloud platform
- Credentials are short-lived, which reduces blast radius of potential credential exposure
Cons
- More complicated initial setup compared to using static credentials
- Not supported for all providers
The full list of supported providers and setup instructions can be found in the dynamic credentials documentation.
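As a sketch of what enabling dynamic credentials can look like, the required workspace environment variables could be managed with the hashicorp/tfe provider. This assumes the AWS variable names described in the dynamic credentials documentation; the organization, workspace, and role ARN are placeholders:

```hcl
data "tfe_workspace" "example" {
  name         = "prod-load-balancers"
  organization = "acme-corp"
}

resource "tfe_variable" "enable_aws_dynamic_creds" {
  workspace_id = data.tfe_workspace.example.id
  key          = "TFC_AWS_PROVIDER_AUTH"
  value        = "true"
  category     = "env"
}

resource "tfe_variable" "aws_run_role_arn" {
  workspace_id = data.tfe_workspace.example.id
  key          = "TFC_AWS_RUN_ROLE_ARN"
  value        = "arn:aws:iam::123456789012:role/hcp-terraform-run-role" # placeholder ARN
  category     = "env"
}
```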
Environment Variables
HCP Terraform automatically injects the following environment variables for each run:
| Variable Name | Description | Example |
|---|---|---|
| TFC_RUN_ID | A unique identifier for this run. | run-CKuwsxMGgMd4W7Ui |
| TFC_WORKSPACE_NAME | The name of the workspace used in this run. | prod-load-balancers |
| TFC_WORKSPACE_SLUG | The full slug of the configuration used in this run. This consists of the organization name and workspace name, joined with a slash. | acme-corp/prod-load-balancers |
| TFC_CONFIGURATION_VERSION_GIT_BRANCH | The name of the branch that the associated Terraform configuration version was ingressed from. | main |
| TFC_CONFIGURATION_VERSION_GIT_COMMIT_SHA | The full commit hash of the commit that the associated Terraform configuration version was ingressed from. | abcd1234... |
| TFC_CONFIGURATION_VERSION_GIT_TAG | The name of the tag that the associated Terraform configuration version was ingressed from. | v0.1.0 |
| TFC_PROJECT_NAME | The name of the project used in this run. | proj-name |
They are also available as Terraform input variables by defining a variable with the same name. For example:
```hcl
variable "TFC_RUN_ID" {}
```
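A short sketch of how the value might then be referenced elsewhere in the configuration, for example exposed as an output; the default keeps the configuration usable outside HCP Terraform runs:

```hcl
variable "TFC_RUN_ID" {
  type    = string
  default = "" # empty when not running in HCP Terraform
}

output "last_applied_by_run" {
  value = var.TFC_RUN_ID
}
```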