Vault
Manage codified Vault on HCP Vault Dedicated with Terraform
Challenge
In the first part of this lab, you learned how to migrate a codified configuration for Vault Community Edition to HCP Vault Dedicated.
Migrating to the cloud, however, is only part of the challenge for organizations. Once you have migrated from self-hosted Vault to Vault Dedicated, there are still operational tasks that need to be managed.
For example, once you have migrated your self-hosted instance of Vault to HCP, you will eventually need to scale Vault Dedicated or set up replication to meet increased demand in other regions.
Before Vault Dedicated, tasks like these might have required weeks, if not months, of planning to ensure that applications relying on Vault retained access to the necessary Vault resources throughout the operation.
Solution
By following the principles covered in Deploy HCP Vault Dedicated with Terraform, you can use Terraform to manage the lifecycle of your Vault Dedicated cluster - from initial creation, through scaling up to a larger tier and size, adding performance replication, and scaling the cluster back down to a smaller tier.
One of the many benefits of Vault Dedicated is you can accomplish this with no downtime by making just a few changes to your existing Terraform configuration.
Scenario introduction
After successfully migrating your self-hosted cluster to HCP, your organization wants to increase its utilization of Vault. Based on your current application load, you need to scale your Vault Dedicated cluster to a multi-region configuration during peak business hours. After peak business hours, you want to scale the cluster down to manage cost.
In this lab you will:
Use your existing Terraform configuration to scale your cluster from the dev tier to the plus tier to support increased Vault Dedicated utilization.
After scaling your Vault Dedicated cluster, you will add performance replication in a new HCP region to support workloads in multiple locations.
Once performance replication has been set up, you will delete the replica and scale the cluster back down to a lower tier.
Prerequisites
A macOS or Linux development host from which you perform most of the tasks (this lab was last tested on macOS 11.6.5)
An existing Vault Dedicated cluster based on this sample Terraform configuration.
Note
To successfully follow along with this lab, you must enable the public interface on your HCP Vault Dedicated cluster.
Personas
This scenario involves one persona:
- admin persona runs Vault, applies configuration with Terraform, edits configuration and environment, and applies configuration to Vault Dedicated.
HCP requirements
An HCP account where you have deployed a Vault Dedicated development tier cluster. You should use the sample Terraform configuration to deploy the Vault cluster to follow this lab.
A service principal account assigned the admin role in the HCP Portal.
Prepare scenario environment
You can find the Terraform configuration that you will use for this scenario in the learn-manage-codified-hcp-vault-terraform GitHub repository.
Clone the repository.
$ git clone https://github.com/hashicorp/learn-manage-codified-hcp-vault-terraform.git
Change into the repository directory and examine the contents.
$ cd learn-manage-codified-hcp-vault-terraform/
Deploy HCP Vault Dedicated cluster
To follow this lab, you must have a Vault Dedicated dev tier cluster available, deployed with the provided sample Terraform configuration.
Expand and complete each section below if you do not already have a Vault Dedicated cluster available.
A service principal account allows Terraform to authenticate with HCP using its associated client and secret key.
Launch the HCP Portal and login.
Click Access control (IAM) in the left navigation menu, then click Service principals.
Click Create service principal.
Enter sp-terraform in the Name field, set the role to Admin, and click Save. Refer to the HCP Vault Dedicated Permissions documentation for more information.
From the sp-terraform page, click Create service principal key.
Copy the Client ID then, in a terminal, set the HCP_CLIENT_ID environment variable to the copied value.

$ export HCP_CLIENT_ID=<COPIED_CLIENT_ID>
Switch back to the HCP Portal and copy the Client Secret then, in a terminal, set the HCP_CLIENT_SECRET environment variable to the copied value.

$ export HCP_CLIENT_SECRET=<COPIED_CLIENT_SECRET>
Terraform is now able to authenticate with HCP.
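The sample configuration's provider "hcp" block is empty, so the provider reads these environment variables automatically. If you prefer to keep credentials out of your shell, the provider also accepts them as arguments. A minimal, hypothetical sketch you could adapt in main.tf (the hcp_client_id and hcp_client_secret variables are illustrative and not part of the sample repository):

variable "hcp_client_id" {
  description = "Service principal client ID (illustrative variable, not in the sample repo)."
  type        = string
  sensitive   = true
}

variable "hcp_client_secret" {
  description = "Service principal client secret (illustrative variable, not in the sample repo)."
  type        = string
  sensitive   = true
}

provider "hcp" {
  # The HCP provider supports client_id/client_secret arguments as an
  # alternative to the HCP_CLIENT_ID/HCP_CLIENT_SECRET environment variables.
  client_id     = var.hcp_client_id
  client_secret = var.hcp_client_secret
}

This lab assumes the environment variable approach, so the remaining steps work unchanged.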
Using the Terraform configuration you previously cloned, create a new Vault Dedicated cluster.
From the learn-manage-codified-hcp-vault-terraform directory, initialize Terraform.

$ terraform init
Apply the Terraform configuration to deploy the Vault Dedicated cluster.
$ terraform apply -auto-approve
Example output:
Terraform used the selected providers to generate the following execution plan. Resource actions are
indicated with the following symbols:
  + create

Terraform will perform the following actions:

  # hcp_hvn.primary_cluster_hvn will be created
  + resource "hcp_hvn" "primary_cluster_hvn" {

...snip...

hcp_vault_cluster.primary_cluster: Still creating... [8m0s elapsed]
hcp_vault_cluster.primary_cluster: Creation complete after 8m2s [id=/project/a2c4e6g-8920-4714-ba99-0242ac11000e/hashicorp.vault.cluster/vault-cluster-primary]
time_sleep.wait_30_primary: Creating...
time_sleep.wait_30_primary: Still creating... [10s elapsed]
time_sleep.wait_30_primary: Still creating... [20s elapsed]
time_sleep.wait_30_primary: Still creating... [30s elapsed]
time_sleep.wait_30_primary: Creation complete after 30s [id=2022-06-07T17:11:47Z]
hcp_vault_cluster_admin_token.primary_cluster_token: Creating...
hcp_vault_cluster_admin_token.primary_cluster_token: Creation complete after 2s [id=/project/a2c4e6g-8920-4714-ba99-0242ac11000e/hashicorp.vault.cluster/vault-cluster-primary/token]

Apply complete! Resources: 4 added, 0 changed, 0 destroyed.

Outputs:

namespace = "admin"
primary_token = <sensitive>
primary_vault_public_endpoint_url = "https://vault-cluster-primary-public-vault-flmnop.123459a2.z1.hashicorp.cloud:8200"
Add HCP Vault Dedicated environment variables
Set up your local environment to authenticate with Vault Dedicated. Because you used the example Terraform configuration, you can get the required values from the Terraform output.
Export the VAULT_ADDR environment variable for the Vault Dedicated public URL.

$ export VAULT_ADDR=$(terraform output -json | jq -r '.primary_vault_public_endpoint_url.value')
Export the VAULT_TOKEN environment variable to contain the initial admin token.

$ export VAULT_TOKEN=$(terraform output -json | jq -r '.primary_token.value')
Export the VAULT_NAMESPACE environment variable to contain the top-level Vault Dedicated namespace.

$ export VAULT_NAMESPACE=$(terraform output -json | jq -r '.namespace.value')
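If your Terraform CLI is version 0.14 or newer, terraform output -raw returns a single output value directly, so the jq pipeline above is optional. An equivalent sketch using the same output names:

$ export VAULT_ADDR=$(terraform output -raw primary_vault_public_endpoint_url)
$ export VAULT_TOKEN=$(terraform output -raw primary_token)
$ export VAULT_NAMESPACE=$(terraform output -raw namespace)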
Check the Vault server status.
$ vault status
Example output:
Key                      Value
---                      -----
Recovery Seal Type       shamir
Initialized              true
Sealed                   false
Total Recovery Shares    1
Threshold                1
Version                  1.10.3+ent
Storage Type             raft
Cluster Name             vault-cluster-2c82cb91
Cluster ID               16b68eeb-4a25-3523-ae0a-4d9fc6eb448c
HA Enabled               true
HA Cluster               https://172.25.17.200:8201
HA Mode                  active
Active Since             2022-06-06T14:51:28.184879441Z
Raft Committed Index     5381
Raft Applied Index       5381
Last WAL                 1046
Validate that your admin token is working to provide correct access to Vault.
$ vault token lookup | grep policies

policies            [default hcp-root]
You are ready to proceed with managing your Vault Dedicated cluster.
If you deployed a new Vault Dedicated cluster and did not follow the migration lab, apply a sample Vault configuration.
When the cluster creation completes, apply the sample configuration from the migration portion of the lab.
$ cd part1-config/config
Initialize Terraform.
$ terraform init
Apply the sample Terraform configuration.
$ terraform apply -auto-approve
Example output:
Terraform used the selected providers to generate the following execution plan. Resource actions are
indicated with the following symbols:
  + create

Terraform will perform the following actions:

  # vault_auth_backend.userpass will be created
  + resource "vault_auth_backend" "userpass" {
      + accessor = (known after apply)
      + id       = (known after apply)
      + path     = (known after apply)
      + tune     = (known after apply)
      + type     = "userpass"
    }

...snip...

vault_generic_secret.student_api_key: Creation complete after 0s [id=api-credentials/student/api-key]
vault_generic_secret.golden: Creation complete after 0s [id=api-credentials/student/golden]
vault_generic_secret.api-wizard-service: Creation complete after 0s [id=api-credentials/admin/api-wizard]

Apply complete! Resources: 11 added, 0 changed, 0 destroyed.
Switch back to the learn-manage-codified-hcp-vault-terraform directory.

$ cd ../..
You are now ready to proceed with the lab.
Review Terraform configuration
Examine the current Terraform configuration.
$ tree
.
├── README.md
├── hcpvault.tf
├── main.tf
├── outputs.tf
├── part1-config
│   ├── README.md
│   └── config
│       ├── acl-policies.tf
│       ├── auth-methods.tf
│       ├── main.tf
│       ├── policies
│       │   ├── admin-policy.hcl
│       │   └── student-secrets.hcl
│       ├── secrets-engines.tf
│       ├── static-secrets.tf
│       ├── terraform.tfstate
│       └── variables.tf
├── terraform.tfstate
├── variables.tf

3 directories, 16 files
This is a sample Terraform configuration that can be used to deploy a Vault Dedicated cluster.
Note
The configuration included in the part1-config directory was supplied for those who did not attend the Migrating codified self-hosted Vault to HCP Vault Dedicated portion of this lab and want to be able to follow this lab step-by-step.
Examine the main Terraform configuration.
$ cat main.tf
This file defines the HCP provider and the version of the provider that should be used.
main.tf
terraform {
  required_providers {
    hcp = {
      source  = "hashicorp/hcp"
      version = ">=0.30.0"
    }
  }
}

provider "hcp" {
  # Configuration options
}

provider "time" {
  # Configuration options
}
Examine the variables used to define your Vault Dedicated cluster.
$ cat variables.tf
The variables.tf file gathers the required inputs that other parts of the Terraform configuration reference for different resources. This is useful because you can reference these values where needed, without having to repeat them in multiple resources or configuration files.

For example, you can define the tier for your Vault Dedicated cluster and, when you want to scale the cluster to another size, make the change in one place.
variables.tf
variable "cloud_provider" { description = "The cloud provider of the HCP HVN and Vault cluster." type = string default = "aws" } variable "tier" { description = "Tier of the Vault Dedicated cluster." type = string default = "dev" } variable "primary_region" { description = "The region of the primary cluster HCP HVN and Vault cluster." type = string default = "us-east-1" } variable "primary_cluster_hvn" { description = "The ID of the HCP HVN." type = string default = "hvn-aws-us-east-1" } variable "primary_cluster_hvn_cidr" { description = "The CIDR range of the HCP HVN." type = string default = "172.25.16.0/20" } variable "primary_cluster_id" { description = "The ID of the Vault Dedicated cluster." type = string default = "vault-cluster-primary" }
Examine the Vault Dedicated configuration.
$ cat hcpvault.tf
There are two resources required to create a Vault Dedicated cluster: a HashiCorp Virtual Network (HVN), represented by the hcp_hvn resource, and the Vault cluster, represented by the hcp_vault_cluster resource.

The region you create your HVN in determines where your Vault cluster is created. Refer to the variables.tf file to review the values that will be used.

The time_sleep and hcp_vault_cluster_admin_token resources are not required, but are provided here as an example of how you might retrieve the initial admin token after provisioning a new Vault Dedicated cluster.

hcpvault.tf
resource "hcp_hvn" "primary_cluster_hvn" { hvn_id = var.primary_cluster_hvn cloud_provider = var.cloud_provider region = var.primary_region cidr_block = var.primary_cluster_hvn_cidr } resource "hcp_vault_cluster" "primary_cluster" { hvn_id = hcp_hvn.primary_cluster_hvn.hvn_id cluster_id = var.primary_cluster_id tier = var.tier public_endpoint = true } resource "time_sleep" "wait_30_primary" { depends_on = [hcp_vault_cluster.primary_cluster] create_duration = "30s" } resource "hcp_vault_cluster_admin_token" "primary_cluster_token" { cluster_id = var.primary_cluster_id depends_on = [time_sleep.wait_30_primary] }
Examine the Terraform outputs.
$ cat outputs.tf
Outputs are a useful way to collect information from one Terraform configuration so that you can use it in another configuration, or simply keep it as a reference.
In this example, you may need to know the public URL of your Vault cluster to manually access it or to have another Terraform configuration applied after it is available.
outputs.tf
output "primary_vault_public_endpoint_url" { value = hcp_vault_cluster.primary_cluster.vault_public_endpoint_url description = "The public IP address of the cluster." } output "namespace" { value = hcp_vault_cluster.primary_cluster.namespace description = "The default namespace of the cluster." } output "primary_token" { value = hcp_vault_cluster_admin_token.primary_cluster_token.token description = "Token" sensitive = true }
Review HCP Vault Dedicated deployment
Because you deployed your Vault Dedicated cluster using Terraform, you can easily review details about the deployment with the show command.
Run terraform show.

$ terraform show
Example output:
# hcp_hvn.primary_cluster_hvn:
resource "hcp_hvn" "primary_cluster_hvn" {
    cidr_block          = "172.25.16.0/20"
    cloud_provider      = "aws"
    created_at          = "2022-06-03T17:10:25.000Z"
    hvn_id              = "hvn-aws-us-east-1"
    id                  = "/project/a2c4e6g-8920-4714-ba99-0242ac11000e/hashicorp.network.hvn/hvn-aws-us-east-1"
    organization_id     = "a2c4e6g-88d2-69fb-8cc1-0242ac110014"
    project_id          = "a2c4e6g-8920-4714-ba99-0242ac11000e"
    provider_account_id = "091862471264"
    region              = "us-east-1"
    self_link           = "/project/a2c4e6g-8920-4714-ba99-0242ac11000e/hashicorp.network.hvn/hvn-aws-us-east-1"
}

# hcp_vault_cluster.primary_cluster:
resource "hcp_vault_cluster" "primary_cluster" {
    cloud_provider             = "aws"
    cluster_id                 = "vault-cluster-primary"
    created_at                 = "2022-06-03T17:11:06.097Z"
    hvn_id                     = "hvn-aws-us-east-1"
    id                         = "/project/a2c4e6g-8920-4714-ba99-0242ac11000e/hashicorp.vault.cluster/vault-cluster-primary"
    namespace                  = "admin"
    organization_id            = "a2c4e6g-88d2-69fb-8cc1-0242ac110014"
    project_id                 = "a2c4e6g-8920-4714-ba99-0242ac11000e"
    public_endpoint            = true
    region                     = "us-east-1"
    self_link                  = "/project/a2c4e6g-8920-4714-ba99-0242ac11000e/hashicorp.vault.cluster/vault-cluster-primary"
    tier                       = "DEV"
    vault_private_endpoint_url = "https://vault-cluster-primary-private-vault-ec951ba3.98ebdb39.z1.hashicorp.cloud:8200"
    vault_public_endpoint_url  = "https://vault-cluster-primary-public-vault-ec951ba3.98ebdb39.z1.hashicorp.cloud:8200"
    vault_version              = "v1.10.3"
}

# hcp_vault_cluster_admin_token.primary_cluster_token:
resource "hcp_vault_cluster_admin_token" "primary_cluster_token" {
    cluster_id = "vault-cluster-primary"
    created_at = "2022-06-03T13:17:54-04:00"
    id         = "/project/a2c4e6g-8920-4714-ba99-0242ac11000e/hashicorp.vault.cluster/vault-cluster-primary/token"
    token      = (sensitive value)
}

# time_sleep.wait_30_primary:
resource "time_sleep" "wait_30_primary" {
    create_duration = "30s"
    id              = "2022-06-03T17:17:52Z"
}

Outputs:

namespace = "admin"
primary_token = (sensitive value)
primary_vault_public_endpoint_url = "https://vault-cluster-primary-public-vault-ec951ba3.98ebdb39.z1.hashicorp.cloud:8200"
Observe that the cluster tier is currently DEV.
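If you only want to confirm the tier, you can inspect that one resource instead of reading the full state listing. A quick sketch:

$ terraform state show hcp_vault_cluster.primary_cluster | grep tier

This should return a single line similar to tier = "DEV".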
Scale up the HCP Vault Dedicated cluster
After migrating your self-hosted Vault cluster to HCP, you need to increase the capacity to support additional applications accessing Vault.
If you needed to scale up a self-hosted cluster on your own, you might spend weeks or even months planning. For example, you might need to validate that your on-premises environment has enough compute resources to support larger Vault instances, or verify that the CPU and memory configuration is supported by the underlying infrastructure.
With Vault Dedicated, you can scale your Vault cluster by changing your variables.tf file and re-applying the configuration. The underlying infrastructure, and the associated management of that infrastructure, is handled by HashiCorp.
Change the tier from dev to plus_small in the variables.tf file.

$ sed -ibak "s/dev/plus_small/g" variables.tf
Note
Once a cluster is scaled above the dev tier, it cannot be scaled back down to the dev tier.

Verify the file has been updated.
$ cat variables.tf | grep plus_small

  default     = "plus_small"
Re-apply the configuration to change the tier.
$ terraform apply
Example output:
Terraform used the selected providers to generate the following execution plan. Resource actions are
indicated with the following symbols:
  ~ update in-place

Terraform will perform the following actions:

  # hcp_vault_cluster.primary_cluster will be updated in-place
  ~ resource "hcp_vault_cluster" "primary_cluster" {
        id   = "/project/a2c4e6g-8920-4714-ba99-0242ac11000e/hashicorp.vault.cluster/vault-cluster-primary"
      ~ tier = "DEV" -> "plus_small"
        # (14 unchanged attributes hidden)
    }

Plan: 0 to add, 1 to change, 0 to destroy.

Do you want to perform these actions?
  Terraform will perform the actions described above.
  Only 'yes' will be accepted to approve.
The example output shows that the cluster will be updated in-place. The cluster does not need to be replaced or stopped to change the tier or size of the cluster.
Enter yes at the prompt to proceed with scaling the cluster.
Access the HCP Vault Dedicated cluster
One of the many benefits of running your Vault cluster on HCP is that most of the infrastructure-related maintenance is managed by HashiCorp. While your cluster is scaling, applications that rely on Vault can still access Vault resources.
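If you want to observe this yourself, you can poll the cluster from a second terminal while Terraform applies the change. A minimal sketch, assuming VAULT_ADDR, VAULT_TOKEN, and VAULT_NAMESPACE are still exported from earlier (press Ctrl-C to stop):

$ while true; do
    # Print the seal status and version on each pass; the command should
    # keep succeeding while the tier change is in progress.
    vault status -format=json | jq '{sealed: .sealed, version: .version}'
    sleep 10
  done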
Log into the HCP Portal and navigate to the Vault clusters list while Terraform is upgrading the cluster.
Observe that your cluster is in an Updating state.

Click vault-cluster-primary to access details about the cluster. Information about the cluster is still available while the upgrade is in process.
Click the Public link under Cluster URLs to copy the URL.
Open a new browser tab and navigate to the copied URL. Vault is still accessible even during the upgrade.
Click the Method pull down menu and select Username.
Log in as the admin user that was migrated from the self-hosted Vault instance to Vault Dedicated, with the password superS3cret!.

Navigate to Secrets and click api-credentials.
Even though the cluster is in the process of being upgraded, Vault is still available and accessible for incoming requests.
Add performance replication
As the Vault admin, you have successfully scaled your cluster up from the dev tier to the plus tier to support increased Vault utilization. To meet the demands of your applications, you now need to enable performance replication in a different region.
Note
Performance replication is available on all Plus tier HCP Vault Dedicated clusters.
Adding performance replication to an existing cluster involves just a few steps and can be performed on-demand with no downtime. Similar to creating a new Vault Dedicated cluster using Terraform, you will need to define the parameters for your new cluster and associate the new cluster with your primary cluster.
Add the required values for the new replica cluster to the variables.tf file.

$ cat >> variables.tf <<EOF
variable "secondary_region" {
  description = "The region of the secondary cluster HCP HVN and Vault cluster."
  type        = string
  default     = "us-west-2"
}

variable "secondary_cluster_hvn" {
  description = "The ID of the HCP HVN."
  type        = string
  default     = "hvn-aws-us-west-2"
}

variable "secondary_cluster_hvn_cidr" {
  description = "The CIDR range of the HCP HVN."
  type        = string
  default     = "172.24.16.0/20"
}

variable "secondary_cluster_id" {
  description = "The ID of the Vault Dedicated cluster."
  type        = string
  default     = "vault-cluster-secondary"
}
EOF
Review the variables.tf file.

$ cat variables.tf
The variables.tf file now contains the necessary information for the new cluster.

variables.tf

variable "cloud_provider" {
  description = "The cloud provider of the HCP HVN and Vault cluster."
  type        = string
  default     = "aws"
}

variable "tier" {
  description = "Tier of the Vault Dedicated cluster."
  type        = string
  default     = "dev"
}

variable "primary_region" {
  description = "The region of the primary cluster HCP HVN and Vault cluster."
  type        = string
  default     = "us-east-1"
}

variable "primary_cluster_hvn" {
  description = "The ID of the HCP HVN."
  type        = string
  default     = "hvn-aws-us-east-1"
}

variable "primary_cluster_hvn_cidr" {
  description = "The CIDR range of the HCP HVN."
  type        = string
  default     = "172.25.16.0/20"
}

variable "primary_cluster_id" {
  description = "The ID of the Vault Dedicated cluster."
  type        = string
  default     = "vault-cluster-primary"
}

variable "secondary_region" {
  description = "The region of the secondary cluster HCP HVN and Vault cluster."
  type        = string
  default     = "us-west-2"
}

variable "secondary_cluster_hvn" {
  description = "The ID of the HCP HVN."
  type        = string
  default     = "hvn-aws-us-west-2"
}

variable "secondary_cluster_hvn_cidr" {
  description = "The CIDR range of the HCP HVN."
  type        = string
  default     = "172.24.16.0/20"
}

variable "secondary_cluster_id" {
  description = "The ID of the Vault Dedicated cluster."
  type        = string
  default     = "vault-cluster-secondary"
}
Update hcpvault.tf to add the required resource blocks for the new cluster.

$ cat >> hcpvault.tf <<EOF
resource "time_sleep" "wait_30_secondary" {
  depends_on      = [hcp_vault_cluster.secondary_cluster]
  create_duration = "30s"
}

resource "hcp_hvn" "secondary_cluster_hvn" {
  hvn_id         = var.secondary_cluster_hvn
  cloud_provider = var.cloud_provider
  region         = var.secondary_region
  cidr_block     = var.secondary_cluster_hvn_cidr
}

resource "hcp_vault_cluster" "secondary_cluster" {
  hvn_id          = hcp_hvn.secondary_cluster_hvn.hvn_id
  cluster_id      = var.secondary_cluster_id
  tier            = var.tier
  primary_link    = hcp_vault_cluster.primary_cluster.self_link
  public_endpoint = true
}

resource "hcp_vault_cluster_admin_token" "secondary_cluster_token" {
  cluster_id = var.secondary_cluster_id
  depends_on = [time_sleep.wait_30_secondary]
}
EOF
Review the hcpvault.tf file.

$ cat hcpvault.tf
The Terraform configuration for Vault Dedicated now includes the resources to create the new cluster. Performance replication is enabled by using the primary_link parameter to associate the new cluster with the primary cluster.

hcpvault.tf

resource "hcp_hvn" "primary_cluster_hvn" {
  hvn_id         = var.primary_cluster_hvn
  cloud_provider = var.cloud_provider
  region         = var.primary_region
  cidr_block     = var.primary_cluster_hvn_cidr
}

resource "hcp_vault_cluster" "primary_cluster" {
  hvn_id          = hcp_hvn.primary_cluster_hvn.hvn_id
  cluster_id      = var.primary_cluster_id
  tier            = var.tier
  public_endpoint = true
}

resource "time_sleep" "wait_30_primary" {
  depends_on      = [hcp_vault_cluster.primary_cluster]
  create_duration = "30s"
}

resource "hcp_vault_cluster_admin_token" "primary_cluster_token" {
  cluster_id = var.primary_cluster_id
  depends_on = [time_sleep.wait_30_primary]
}

resource "time_sleep" "wait_30_secondary" {
  depends_on      = [hcp_vault_cluster.secondary_cluster]
  create_duration = "30s"
}

resource "hcp_hvn" "secondary_cluster_hvn" {
  hvn_id         = var.secondary_cluster_hvn
  cloud_provider = var.cloud_provider
  region         = var.secondary_region
  cidr_block     = var.secondary_cluster_hvn_cidr
}

resource "hcp_vault_cluster" "secondary_cluster" {
  hvn_id          = hcp_hvn.secondary_cluster_hvn.hvn_id
  cluster_id      = var.secondary_cluster_id
  tier            = var.tier
  primary_link    = hcp_vault_cluster.primary_cluster.self_link
  public_endpoint = true
}

resource "hcp_vault_cluster_admin_token" "secondary_cluster_token" {
  cluster_id = var.secondary_cluster_id
  depends_on = [time_sleep.wait_30_secondary]
}
Add an output for the new cluster's URL.
$ cat >> outputs.tf <<EOF
output "secondary_vault_public_endpoint_url" {
  value       = hcp_vault_cluster.secondary_cluster.vault_public_endpoint_url
  description = "The public IP address of the secondary cluster."
}
EOF
Create the new cluster by re-running terraform apply.

$ terraform apply -auto-approve
Example output:
hcp_hvn.primary_cluster_hvn: Refreshing state... [id=/project/11eb3a47-8920-4714-ba99-0242ac11000e/hashicorp.network.hvn/hvn-aws-us-east-1]
hcp_vault_cluster.primary_cluster: Refreshing state... [id=/project/11eb3a47-8920-4714-ba99-0242ac11000e/hashicorp.vault.cluster/vault-cluster-primary]
time_sleep.wait_30_primary: Refreshing state... [id=2022-06-07T17:11:47Z]
hcp_vault_cluster_admin_token.primary_cluster_token: Refreshing state... [id=/project/11eb3a47-8920-4714-ba99-0242ac11000e/hashicorp.vault.cluster/vault-cluster-primary/token]

Terraform used the selected providers to generate the following execution plan. Resource actions are
indicated with the following symbols:
  + create

Terraform will perform the following actions:

  # hcp_hvn.secondary_cluster_hvn will be created
  + resource "hcp_hvn" "secondary_cluster_hvn" {

...snip...

hcp_vault_cluster.secondary_cluster: Still creating... [17m1s elapsed]
hcp_vault_cluster.secondary_cluster: Creation complete after 17m8s [id=/project/11eb3a47-8920-4714-ba99-0242ac11000e/hashicorp.vault.cluster/vault-cluster-secondary]
time_sleep.wait_30_secondary: Creating...
time_sleep.wait_30_secondary: Still creating... [10s elapsed]
time_sleep.wait_30_secondary: Still creating... [20s elapsed]
time_sleep.wait_30_secondary: Creation complete after 30s [id=2022-06-07T21:37:16Z]
hcp_vault_cluster_admin_token.secondary_cluster_token: Creating...
hcp_vault_cluster_admin_token.secondary_cluster_token: Creation complete after 3s [id=/project/11eb3a47-8920-4714-ba99-0242ac11000e/hashicorp.vault.cluster/vault-cluster-secondary/token]

Apply complete! Resources: 4 added, 0 changed, 0 destroyed.

Outputs:

namespace = "admin"
primary_token = <sensitive>
primary_vault_public_endpoint_url = "https://vault-cluster-primary-public-vault-f0694e72.5f8f89a2.z1.hashicorp.cloud:8200"
secondary_vault_public_endpoint_url = "https://vault-cluster-secondary-public-vault-d9915240.da3e83c2.z1.hashicorp.cloud:8200"
Log into the HCP Portal and navigate to the Vault clusters list. Both the primary and secondary clusters are available.
Unset the VAULT_TOKEN environment variable used to authenticate with the primary cluster.

$ unset VAULT_TOKEN
Log into the secondary cluster using the Vault CLI.
$ vault login \
    -address=$(terraform output -json | jq -r '.secondary_vault_public_endpoint_url.value') \
    -method=userpass \
    username=admin \
    password=superS3cret!
Verify the status of the secondary Vault cluster.
$ vault status \
    -address=$(terraform output -json | jq -r '.secondary_vault_public_endpoint_url.value')
Example output:
Key                      Value
---                      -----
Recovery Seal Type       shamir
Initialized              true
Sealed                   false
Total Recovery Shares    1
Threshold                1
Version                  1.10.3+ent
Storage Type             raft
Cluster Name             vault-cluster-e0356e94
Cluster ID               901c4f34-df29-ea41-1c70-ffe86d492a77
HA Enabled               true
HA Cluster               https://172.24.25.116:8201
HA Mode                  active
Active Since             2022-06-07T21:28:44.120674828Z
Raft Committed Index     11356
Raft Applied Index       11356
Last WAL                 3531
You can verify this is the secondary cluster because the HA Cluster IP address is part of the address space defined for the secondary cluster HVN (172.24.16.0/20).
View available secrets engines.
$ vault secrets list \
    -address=$(terraform output -json | jq -r '.secondary_vault_public_endpoint_url.value')
Example output:
Path               Type           Accessor                Description
----               ----           --------                -----------
api-credentials/   kv             kv_e80f580b             n/a
cubbyhole/         ns_cubbyhole   ns_cubbyhole_ebc1e938   per-token private secret storage
identity/          ns_identity    ns_identity_647c3433    identity store
sys/               ns_system      ns_system_b24fbdca      system endpoints used for control, policy and debugging
transit/           transit        transit_d83dca00        n/a
All secrets engines were replicated from the primary cluster.
Add a path filter
Now that you have deployed performance replication, you have been informed that the transit secrets engine should not have been replicated to the secondary cluster.
You can solve this by adding a path filter that denies a specific path from being replicated to the secondary cluster.
Add the paths_filter parameter to the secondary cluster's resource block in hcpvault.tf.

$ sed -ibak '/primary_link/a\
  paths_filter    = ["transit"]
' hcpvault.tf
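After the sed edit, the secondary cluster's resource block in hcpvault.tf should look like the following sketch (paths_filter is a supported hcp_vault_cluster argument; the exact indentation depends on how sed inserted the line):

resource "hcp_vault_cluster" "secondary_cluster" {
  hvn_id          = hcp_hvn.secondary_cluster_hvn.hvn_id
  cluster_id      = var.secondary_cluster_id
  tier            = var.tier
  primary_link    = hcp_vault_cluster.primary_cluster.self_link
  # Deny the transit path from replicating to this secondary cluster.
  paths_filter    = ["transit"]
  public_endpoint = true
}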
Re-apply the Terraform configuration to prevent the transit secrets engine from being replicated to the secondary cluster.

$ terraform apply -auto-approve
Verify that the transit secrets engine is no longer available on the secondary cluster.

$ vault secrets list \
    -address=$(terraform output -json | jq -r '.secondary_vault_public_endpoint_url.value')
Example output:
Path               Type           Accessor                Description
----               ----           --------                -----------
api-credentials/   kv             kv_e80f580b             n/a
cubbyhole/         ns_cubbyhole   ns_cubbyhole_ebc1e938   per-token private secret storage
identity/          ns_identity    ns_identity_647c3433    identity store
sys/               ns_system      ns_system_b24fbdca      system endpoints used for control, policy and debugging
The transit secrets engine is no longer listed.
Disable performance replication and scale down the cluster
You have successfully scaled your Vault Dedicated cluster from the dev tier to the plus tier and enabled performance replication. During peak business hours your applications have the necessary access to Vault Dedicated.
To help manage cost, you have been asked to scale down your cluster and remove the secondary cluster during non-peak hours.
Removing performance replication requires removing the resource blocks for the secondary cluster. Similarly, scaling the cluster down to a smaller tier requires changing the tier value in variables.tf.
Remove the secondary cluster resources from hcpvault.tf.

$ sed -ibak '24,$ d' hcpvault.tf
Remove the secondary cluster URL from outputs.tf.

$ sed -ibak '16,$ d' outputs.tf
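Because these sed commands delete by line number, it is worth confirming the result before applying. A quick sketch to check that only the four primary cluster resources remain:

$ grep '^resource' hcpvault.tf

resource "hcp_hvn" "primary_cluster_hvn" {
resource "hcp_vault_cluster" "primary_cluster" {
resource "time_sleep" "wait_30_primary" {
resource "hcp_vault_cluster_admin_token" "primary_cluster_token" {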
Change the tier from plus_small to starter_small.

$ sed -ibak "s/plus_small/starter_small/g" variables.tf
Re-apply the Terraform configuration.
$ terraform apply -auto-approve
Example output:
time_sleep.wait_30_secondary: Refreshing state... [id=2022-06-08T18:23:13Z]
hcp_vault_cluster_admin_token.secondary_cluster_token: Refreshing state... [id=/project/11eb3a47-8920-4714-ba99-0242ac11000e/hashicorp.vault.cluster/vault-cluster-secondary/token]
hcp_hvn.secondary_cluster_hvn: Refreshing state... [id=/project/11eb3a47-8920-4714-ba99-0242ac11000e/hashicorp.network.hvn/hvn-aws-us-west-2]
hcp_hvn.primary_cluster_hvn: Refreshing state... [id=/project/11eb3a47-8920-4714-ba99-0242ac11000e/hashicorp.network.hvn/hvn-aws-us-east-1]
hcp_vault_cluster.secondary_cluster: Refreshing state... [id=/project/11eb3a47-8920-4714-ba99-0242ac11000e/hashicorp.vault.cluster/vault-cluster-secondary]
hcp_vault_cluster.primary_cluster: Refreshing state... [id=/project/11eb3a47-8920-4714-ba99-0242ac11000e/hashicorp.vault.cluster/vault-cluster-primary]
time_sleep.wait_30_primary: Refreshing state... [id=2022-06-08T13:54:09Z]
hcp_vault_cluster_admin_token.primary_cluster_token: Refreshing state... [id=/project/11eb3a47-8920-4714-ba99-0242ac11000e/hashicorp.vault.cluster/vault-cluster-primary/token]

Terraform used the selected providers to generate the following execution plan. Resource actions are
indicated with the following symbols:
  ~ update in-place
  - destroy

Terraform will perform the following actions:

  # hcp_hvn.secondary_cluster_hvn will be destroyed
  # (because hcp_hvn.secondary_cluster_hvn is not in configuration)

...snip...

hcp_vault_cluster.primary_cluster: Still modifying... [id=/project/11eb3a47-8920-4714-ba99-0242ac...rp.vault.cluster/vault-cluster-primary, 15m21s elapsed]
hcp_vault_cluster.primary_cluster: Modifications complete after 15m27s [id=/project/11eb3a47-8920-4714-ba99-0242ac11000e/hashicorp.vault.cluster/vault-cluster-primary]

Apply complete! Resources: 0 added, 1 changed, 4 destroyed.

Outputs:

namespace = "admin"
primary_token = <sensitive>
primary_vault_public_endpoint_url = "https://vault-cluster-primary-public-vault-d7a8ff49.edd3051f.z1.hashicorp.cloud:8200"
Log into the HCP Portal and navigate to the Vault clusters list. The secondary cluster has been removed and the cluster scaled down from plus_small to starter_small.
One more thing...
Before we wrap up - did anyone notice an operational and security benefit when enabling performance replication and path filters for Vault Dedicated?
Typically, when configuring replication or path filters for Vault Enterprise, some person or team would need access to Vault.
With Vault Dedicated, you can limit who has access to Vault because you can enable replication and path filters from the HCP Portal.
This means you can offload operational work, such as adding path filters, to any number of teams within your organization who may not otherwise require access to Vault. For example, your development team needs access to Vault for testing application integration, but you do not want to add the operational overhead of managing Vault replication to that team. With Vault Dedicated, you could grant contributor access to the HCP Portal and allow a helpdesk or operations team to help the development team manage replication path filters for their Vault Dedicated cluster, or grant admin access to allow certain teams to manage scaling clusters up or down.
Summary
By using simple, modular, and composable infrastructure as code with Terraform, you can easily manage the lifecycle of your Vault Dedicated clusters.
Clean up
Complete the following steps to clean up the scenario content from your local environment.
Delete the Vault Dedicated cluster instance and HVN.
$ terraform destroy -auto-approve
Unset the environment variables.
$ unset VAULT_ADDR VAULT_TOKEN LEARN_VAULT_PID VAULT_NAMESPACE