Consul
Automate your network configuration with Consul-Terraform-Sync
Consul-Terraform-Sync (CTS) allows you to build integrations that automatically apply network and security infrastructure changes in reaction to changes in the Consul service catalog. CTS monitors changes to the services in the Consul catalog and triggers Terraform runs to automate your network infrastructure.
You can configure CTS to execute one or more automation tasks. Each task consists of a runbook automation written as a compatible Terraform module using resources and data sources for the underlying network infrastructure.
In this tutorial, you will learn how to configure CTS to connect to your Consul cluster and monitor the Consul catalog for changes. Once CTS detects a change in the catalog, it will trigger Terraform to update the security group rules for a jumphost instance. These rules allow the instance to communicate with the related services from the Consul catalog.
Prerequisites
The tutorial assumes that you are familiar with Consul and its core functionality. If you are new to Consul, refer to the Consul Getting Started tutorials collection.
For this tutorial, you will need:
- An HCP account configured for use with Terraform
- An AWS account configured for use with Terraform
- git >= 2.0
- aws-cli >= 2.0
- terraform >= 1.4
- jq >= 1.6
Clone GitHub repository
Clone the GitHub repository containing the configuration files and resources.
$ git clone https://github.com/hashicorp-education/learn-consul-cts-intro.git
Change into the directory that contains the complete configuration files for this tutorial.
$ cd learn-consul-cts-intro/hcp
This repository contains the Terraform configuration to spin up the initial infrastructure and all the files to deploy Consul, CTS, and the sample application.
Here, you will find the following Terraform configuration:
- `instance-scripts/` directory contains the provisioning scripts and configuration files used to bootstrap the EC2 instances and CTS
- `provisioning/` directory contains the CTS Terraform module, as well as configuration file templates
- `application-instance.tf` defines the EC2 application instances provisioned with Nginx
- `cts-instance.tf` defines the EC2 CTS instance
- `hcp.tf` defines the HashiCorp Virtual Network (HVN) and HCP cluster resources
- `outputs.tf` defines outputs you will use to authenticate and connect to your EC2 instances
- `providers.tf` defines AWS and HCP provider definitions for Terraform
- `variables.tf` defines variables you can use to customize the tutorial
- `terraform.tfvars` defines the actual values of the variables
- `vpc.tf` defines the AWS VPC resources
Using this GitHub repository, you will provision the following resources:
- An HCP HashiCorp Virtual Network (HVN)
- An HCP Consul Dedicated server cluster
- An AWS VPC
- An AWS key pair
- An AWS EC2 instance running a Consul agent and CTS
- An AWS EC2 instance with an nginx application deployment
Deploy your infrastructure
With these Terraform configuration files, you are ready to deploy your infrastructure.
Initialize your Terraform configuration to download the necessary providers and modules.
$ terraform init
Initializing the backend...
Initializing provider plugins...
##...
Terraform has been successfully initialized!
##...
Then, create the infrastructure. Confirm the run by entering yes.
Note
The default target AWS region to deploy is `us-east-2`. If you wish to deploy to another region, modify the `terraform.tfvars` file accordingly.
$ terraform apply
##...
Do you want to perform these actions?
Terraform will perform the actions described above.
Only 'yes' will be accepted to approve.
Enter a value: yes
It will take a few minutes to deploy your infrastructure. Once the deployment completes, Terraform returns a list of outputs you will use to complete the tutorial.
Apply complete! Resources: 34 added, 0 changed, 0 destroyed.
Outputs:
##...
app_instance_ips = [
"10.0.4.62",
]
aws_region = "us-east-2"
consul_token = <sensitive>
cts_instance_ip = "18.119.130.97"
jumphost_instance_ip = "18.189.185.197"
next_steps = [
"You can now add the TLS certificate for accessing your EC2 instances by running:",
"ssh-add ./tls-key.pem",
]
Terraform deployed the infrastructure for this tutorial, which includes your AWS VPC, HCP HVN, and HCP Consul Dedicated cluster.
To log on to the instances, configure your SSH agent to use the correct SSH key identity file.
$ ssh-add tls-key.pem
Identity added: tls-key.pem (tls-key.pem)
Export the CTS instance IP into a variable for further use.
$ export CTSINSTANCE_IP=$(terraform output -raw cts_instance_ip)
Configure the Consul ACL system for CTS
In this section, you will review the ACL policies required for CTS, define a policy with sufficient privileges, and create a token with the same privileges for CTS to use when communicating with the Consul cluster.
In production environments, we recommend enabling access control lists (ACLs) to secure your Consul deployment. When ACLs are enabled, you need to pass a token to CTS so that it can access information from Consul.
Review cts-policy.hcl for the ACL policies required for CTS to interact with Consul. Notice that CTS requires permissions to register itself as a service, write Terraform state in the Consul KV, and observe updates to Consul services and nodes.
$ ssh ubuntu@$CTSINSTANCE_IP "cat /opt/consul-nia/cts-policy.hcl"
## Consul CTS service privileges
# Permission for CTS to register itself as a service in the Consul catalog
service "Consul-Terraform-Sync" {
policy = "write"
}
## Consul KV privileges
# Permission for CTS to write the Terraform state in the default Consul KV path
key_prefix "consul-terraform-sync/" {
policy = "write"
}
# Permission for CTS to lock a semaphore session for all possible node names.
# For a restrictive environment, this should be a Node name (session) or a Node name prefix (session_prefix)
session_prefix "" {
policy = "write"
}
## Consul catalog privileges
# Permission for CTS to observe the nginx service for catalog changes
service_prefix "nginx" {
policy = "read"
}
# Permission for CTS to observe services on any node
node_prefix "" {
policy = "read"
}
Consul CTS service privileges
CTS automatically registers itself as a service with Consul, which requires write permissions for the CTS service. The name of the CTS service defaults to Consul-Terraform-Sync. Refer to the consul.service_registration configuration for options to change this name or to disable this feature.
The policy in cts-policy.hcl grants write permissions for the CTS service using the default service name.
cts-policy.hcl
# Permission for CTS to register itself as a service in the Consul catalog
service "Consul-Terraform-Sync" {
policy = "write"
}
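If you rename the CTS service through the consul.service_registration configuration, the service rule must match the new name. The following is a minimal sketch, assuming the service_registration block available in recent CTS releases and a hypothetical service name:
consul {
  service_registration {
    # Hypothetical custom name for the CTS service in the Consul catalog
    service_name = "cts-network-automation"
  }
}
# Matching ACL rule for the custom service name
service "cts-network-automation" {
  policy = "write"
}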
Consul KV privileges
By default, CTS uses Consul as a backend for Terraform. As a result, the token needs permissions to store Terraform state inside Consul KV and to use sessions to ensure locking during the Terraform state changes. However, you can configure the path in the driver.terraform.backend section of the configuration.
cts-policy.hcl
# Permission for CTS to write the Terraform state in the default Consul KV path
key_prefix "consul-terraform-sync/" {
policy = "write"
}
# Permission for CTS to lock a semaphore session for all possible node names.
# For a restrictive environment, this should be a Node name (session) or a Node name prefix (session_prefix)
session_prefix "" {
policy = "write"
}
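For example, if you store the Terraform state under a custom path in the driver.terraform.backend section, the key_prefix rule must cover that path. The following is a minimal sketch with a hypothetical path:
driver "terraform" {
  backend "consul" {
    # Hypothetical custom Consul KV path for the Terraform state
    path = "infra-automation/terraform"
    gzip = true
  }
}
# Matching ACL rule for the custom state path
key_prefix "infra-automation/" {
  policy = "write"
}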
Note
Consul is the default backend for Consul-Terraform-Sync, but it does not support encryption. In a production environment, we recommend using a Terraform backend that supports encryption.
Consul catalog privileges
CTS needs access to the Consul catalog to retrieve information about services registered with Consul. You need a token that can read the services you want to monitor for CTS. If you want CTS to only have access to a limited set of services, define them specifically in the policy rules.
For Consul Enterprise, the token also needs access to the service's namespaces.
The policy in cts-policy.hcl only grants access to services that start with nginx. In addition, the policy grants read permissions over all nodes. However, if there is a common prefix for all the nodes that host your services, we recommend you restrict the rules to the node prefix.
cts-policy.hcl
# Permission for CTS to observe the nginx service for catalog changes
service_prefix "nginx" {
policy = "read"
}
# Permission for CTS to observe services on any node
node_prefix "" {
policy = "read"
}
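For example, if CTS should monitor an additional service and your service-hosting nodes share a common name prefix, a more restrictive policy could use rules like the following. The web prefix is hypothetical; the node prefix matches this tutorial's nginx-0 and nginx-1 nodes:
# Hypothetical additional service to monitor
service_prefix "web" {
  policy = "read"
}
# Restrict node reads to nodes whose names start with "nginx"
node_prefix "nginx" {
  policy = "read"
}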
Generate a Consul token for CTS
Now, create the policy for CTS with the minimum required permissions.
$ ssh ubuntu@$CTSINSTANCE_IP "consul acl policy create -name cts-policy -rules @/opt/consul-nia/cts-policy.hcl"
ID: f54a91f8-4d6c-5603-5843-dd30177310c6
Name: cts-policy
Description:
Datacenters:
Rules:
# ...
After you create the policy, generate a token associated with this policy. Assign the token's SecretID to an environment variable to use later.
$ export CTS_TOKEN=$(ssh ubuntu@$CTSINSTANCE_IP "consul acl token create -description 'consul-terraform-sync policy' -policy-name cts-policy -format json | jq -r '.SecretID'") && echo $CTS_TOKEN
"xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx"
Modify the CTS configuration file to use the token you have just created.
$ ssh ubuntu@$CTSINSTANCE_IP "sed -i 's/ token = .*/ token = \"$CTS_TOKEN\"/g' /etc/consul-nia.d/cts-config.hcl"
Review CTS configuration file
You can configure CTS with HCL or JSON configuration files. In a CTS configuration file, you will find the following components:
- Consul configuration to authenticate and interact with your Consul cluster
- General configuration specific to the CTS daemon, for example the logging level or port selection
- Terraform driver section to relay provider discovery and installation information to Terraform
- Task definition to configure which data the CTS daemon should monitor for changes, and which actions to perform once a change is detected
For the full list of available options, refer to the CTS documentation.
Inspect the CTS configuration file on the CTS instance.
cts-config.hcl
$ ssh ubuntu@$CTSINSTANCE_IP 'cat /etc/consul-nia.d/cts-config.hcl'
consul {
address = "localhost:8500"
token = "xxxx-xxxx-xxxx-xxxx-xxxx"
}
log_level = "INFO"
working_dir = "/opt/consul-nia/sync-tasks"
port = 8558
id = "cts-0"
syslog {}
driver "terraform" {
log = false
persist_log = true
path = "/opt/consul-nia/"
backend "consul" {
gzip = true
}
required_providers {
aws = {
source = "hashicorp/aws"
version = "~> 3.43"
}
}
}
terraform_provider "aws" {
}
task {
name = "jumphost-ssh"
description = "execute every minute using service information from nginx"
module = "/opt/consul-nia/cts-jumphost-module"
providers = ["aws"]
variable_files = ["/opt/consul-nia/cts-jumphost-module.tfvars"]
condition "schedule" {
cron = "* * * * *" # every minute
}
module_input "services" {
names = "nginx"
}
}
Consul block
The consul block configures CTS so it can query the Consul catalog when it executes a task. This tutorial pre-populates the configuration with the Consul management token; earlier, you updated this value with the ACL token you generated and scoped for CTS.
cts-config.hcl
consul {
address = "localhost:8500"
token = "xxxx-xxxx-xxxx-xxxx-xxxx"
}
In a fully secured mTLS environment, we recommend including the certificates required to communicate with Consul. In this tutorial, since CTS and Consul are hosted on the same virtual machine, traffic stays local and unencrypted.
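The following is a sketch of what a TLS-enabled consul block could look like, assuming the tls sub-block supported by CTS and hypothetical certificate paths:
consul {
  address = "consul.example.internal:8501"   # hypothetical HTTPS address of the Consul agent
  token   = "<CTS ACL token>"
  tls {
    enabled = true
    ca_cert = "/etc/consul-nia.d/tls/consul-agent-ca.pem"   # hypothetical certificate paths
    cert    = "/etc/consul-nia.d/tls/cts.pem"
    key     = "/etc/consul-nia.d/tls/cts-key.pem"
  }
}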
We recommend hosting CTS on a dedicated node with a Consul agent. This ensures dedicated resources for network automation and enables you to fine-tune security and privilege separation between the network administrators and the other Consul agents.
Global configs
You can configure the CTS daemon using top-level options. For example, you can configure the following:
- The `log_level` parameter specifies how detailed you want CTS logs to be.
- The `port` parameter specifies the port on which CTS exposes its API interface.
- The `syslog` parameter specifies the syslog server for logging. This section can be useful when you configure CTS as a daemon, for example on Linux using systemd.
- The `id` parameter specifies the name under which CTS registers itself as a service in the Consul catalog.
cts-config.hcl
log_level = "INFO"
working_dir = "/opt/consul-nia/sync-tasks"
port = 8558
id = "cts-0"
syslog {}
Driver "terraform" block
The driver block configures the subprocess that CTS uses to propagate infrastructure changes. The Terraform driver is required; it defines the version of Terraform and the providers that CTS uses.
cts-config.hcl
driver "terraform" {
log = false
persist_log = true
path = "/opt/consul-nia/"
backend "consul" {
gzip = true
}
required_providers {
aws = {
source = "hashicorp/aws"
version = "~> 3.43"
}
}
}
By default, CTS uses Consul to store Terraform state files and uses the connection information defined in the consul block. If you want to use a different Terraform backend, or to specify a different Consul datacenter as the backend, use the backend section of the configuration. CTS supports all standard Terraform backends. Refer to the Terraform backend documentation to learn about available options.
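For example, to store state in an encrypted remote backend instead of the Consul KV, as recommended earlier for production, the backend block could point at an S3 bucket. The following is a sketch with a hypothetical bucket name:
driver "terraform" {
  path = "/opt/consul-nia/"
  backend "s3" {
    bucket  = "my-cts-terraform-state"   # hypothetical bucket
    key     = "cts/terraform.tfstate"
    region  = "us-east-2"
    encrypt = true   # server-side encryption for the state file
  }
}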
Task block
A task block configures the task to run as automation for the defined services. You can explicitly define services in the task's condition block. You can specify multiple task blocks if you need to configure multiple tasks.
In the following example, CTS will run the cts-jumphost-module every minute and collect data from the Consul catalog about the nginx service and whether there are any changes to it. The variable_files parameter passes more information to the module, like the AWS region and the ID of the security group that Terraform will sync rules to.
The cts-jumphost-module module deploys a ruleset for a security group attached to a jumphost EC2 instance that allows only outbound SSH communication to the services defined in CTS. In this case, the defined service is nginx.
cts-config.hcl
task {
name = "jumphost-ssh"
description = "execute every minute using service information from nginx"
module = "/opt/consul-nia/cts-jumphost-module"
providers = ["aws"]
variable_files = ["/opt/consul-nia/cts-jumphost-module.tfvars"]
condition "schedule" {
cron = "* * * * *" # every minute
}
module_input "services" {
names = "nginx"
}
}
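If you prefer an event-driven task over a scheduled one, you could replace the schedule condition with a services condition so that CTS triggers the task only when the monitored service changes. The following is a minimal sketch of that alternative:
task {
  name           = "jumphost-ssh"
  description    = "execute on changes to the nginx service"
  module         = "/opt/consul-nia/cts-jumphost-module"
  providers      = ["aws"]
  variable_files = ["/opt/consul-nia/cts-jumphost-module.tfvars"]
  condition "services" {
    names = ["nginx"]
  }
}
With a services condition, the condition itself supplies the service information to the module, so a separate module_input "services" block is typically not needed.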
Refer to the Task Execution documentation for a full list of values that can trigger a task to run.
When CTS starts, it attempts to execute each task once to synchronize infrastructure with the current state of Consul. CTS will stop and exit if any error occurs while it prepares the automation environment or executes a task for the first time.
Start CTS
CTS provides different running modes, including some that are useful for safely testing your configuration and the changes that will be applied.
The default mode is daemon mode. In daemon mode, CTS passes through a once-mode phase, in which it tries to run all the tasks once before turning into a long-running process. During the once-mode phase, the daemon exits with a non-zero status if it encounters an error. After CTS completes the once-mode phase, it logs any errors it encounters but does not exit.
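To validate a configuration before running it as a daemon, you can use inspect mode for a plan-only dry run, or once mode to run every task a single time and then exit. The following is a sketch, assuming the -inspect and -once flags of the start command:
$ consul-terraform-sync start -config-dir=/etc/consul-nia.d/ -inspect   # dry run, no changes applied
$ consul-terraform-sync start -config-dir=/etc/consul-nia.d/ -once      # run each task once, then exit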
You will run CTS in daemon mode as a systemd service on the CTS instance. Review the systemd unit file for CTS.
$ ssh ubuntu@$CTSINSTANCE_IP 'cat /etc/systemd/system/cts.service'
[Unit]
Description="HashiCorp Consul-Terraform-Sync - A Network Infrastructure Automation solution"
Documentation=https://www.consul.io/docs/nia
Requires=network-online.target
After=network-online.target
ConditionFileNotEmpty=/etc/consul-nia.d/cts-config.hcl
[Service]
EnvironmentFile=/etc/consul-nia.d/consul-nia.env
User=consul-nia
Group=consul-nia
ExecStart=/usr/bin/consul-terraform-sync start -config-dir=/etc/consul-nia.d/
ExecReload=/bin/kill --signal HUP $MAINPID
KillMode=process
KillSignal=SIGTERM
Restart=on-failure
LimitNOFILE=65536
[Install]
WantedBy=multi-user.target
Then, start CTS.
$ ssh ubuntu@$CTSINSTANCE_IP 'sudo systemctl start cts'
Inspect the status of the CTS process. After a successful startup, the state will be active (running).
$ ssh ubuntu@$CTSINSTANCE_IP 'systemctl status cts'
● cts.service - "HashiCorp Consul-Terraform-Sync - A Network Infrastructure Automation solution"
Loaded: loaded (/etc/systemd/system/cts.service; enabled; vendor preset: enabled)
Active: active (running) since Wed 2023-10-18 10:12:34 UTC; 1min 51s ago
Docs: https://www.consul.io/docs/nia
Main PID: 8275 (consul-terrafor)
Tasks: 8 (limit: 1104)
CGroup: /system.slice/cts.service
└─8275 /usr/bin/consul-terraform-sync start -config-dir=/etc/consul-nia.d/
2023-10-18T10:12:35.314Z [INFO] ctrl: running task once: task_name=jumphost-ssh
2023-10-18T10:12:35.314Z [INFO] registration: registering Consul-Terraform-Sync as a service with Consul: id=cts-0 service_name=consul-terraform-sync
2023-10-18T10:12:35.317Z [INFO] client.terraformcli: Terraform output is muted
2023-10-18T10:12:35.317Z [INFO] client.terraformcli: persisting Terraform logs on disk: logPath=/opt/consul-nia/sync-tasks/jumphost-ssh/terraform.log
2023-10-18T10:12:35.317Z [INFO] driver.terraform: retrieved 0 Terraform handlers for task: task_name=jumphost-ssh
2023-10-18T10:12:35.459Z [INFO] registration: Consul-Terraform-Sync registered as a service with Consul: id=cts-0 service_name=consul-terraform-sync
2023-10-18T10:12:35.459Z [INFO] registration: to view registered services, navigate to the Services section in the Consul UI: id=cts-0 service_name=c
Leave CTS running for a minute so that it completes the jumphost-ssh task. Follow the CTS logs in real time and look for the following entries, which indicate that the task has completed.
$ ssh ubuntu@$CTSINSTANCE_IP 'journalctl --follow --lines 4 --unit cts'
-- Logs begin at Wed 2023-10-18 12:25:14 UTC. --
2023-10-18T12:43:51.740Z [INFO] ctrl: task completed: task_name=jumphost-ssh
2023-10-18T12:43:51.741Z [INFO] ctrl: all tasks completed once
2023-10-18T12:43:51.741Z [INFO] ctrl: start task monitoring
2023-10-18T12:43:52.003Z [INFO] tasksmanager: scheduled task next run time: task_name=jumphost-ssh wait_time=58.729278385s next_runtime="2023-10-18 12:44:52 +0000 UTC"
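You can also check task status through the CTS API exposed on port 8558, as configured earlier in the global options. The following is a quick check, assuming the v1 task status endpoint:
$ ssh ubuntu@$CTSINSTANCE_IP 'curl -s localhost:8558/v1/status/tasks' | jq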
Review automation results
Since there is currently only one nginx instance, the security group ruleset applied to the jumphost instance will contain only one rule. Inspect the contents of the security group attached to the jumphost instance. Notice there is only one item in IpRanges.
$ aws --region $(terraform output -raw aws_region) ec2 describe-security-groups --filters "Name=tag-key,Values=CtsJumphostModule" | jq -r '.SecurityGroups[].IpPermissionsEgress'
[
{
"FromPort": 22,
"IpProtocol": "tcp",
"IpRanges": [
{
"CidrIp": "<<APPLICATION INSTANCE IP>>"
}
],
"Ipv6Ranges": [],
"PrefixListIds": [],
"ToPort": 22,
"UserIdGroupPairs": []
}
]
Save the address of the jumphost into an environment variable.
$ export JUMPHOST_IP=$(terraform output -raw jumphost_instance_ip)
Save the address of the nginx application instance into an environment variable.
$ export APPINSTANCE_0_IP=$(terraform output -json app_instance_ips | jq -r '.[0]')
Log on to the nginx application instance via the jumphost instance and execute a sample command to test the jumphost function.
$ ssh -J ubuntu@$JUMPHOST_IP ubuntu@$APPINSTANCE_0_IP 'uname -a'
##...
Linux nginx-0 5.4.0-1103-aws #111~18.04.1-Ubuntu SMP Tue May 23 20:04:10 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux
Review files created by Consul-Terraform-Sync daemon
When CTS starts, it will run Terraform inside the working_dir defined in the CTS configuration. Inside that directory, Terraform will create a workspace for each task defined in the configuration.
$ ssh ubuntu@$CTSINSTANCE_IP 'tree /opt/consul-nia/sync-tasks/'
/opt/consul-nia/sync-tasks/
└── jumphost-ssh
├── main.tf
├── providers.auto.tfvars
├── terraform.log
├── terraform.tfvars
├── terraform.tfvars.tmpl
├── variables.auto.tfvars
├── variables.module.tf
└── variables.tf
1 directory, 8 files
You will find the following files in this directory:
- The `main.tf` file contains the Terraform block, provider blocks, and a module block calling the module configured for the task.
- The `providers.auto.tfvars` file contains the required Terraform providers you defined in the driver block of the CTS configuration.
- The `terraform.tfvars` file contains the services input variables from the Consul catalog. CTS periodically updates this file to reflect the current state of the configured set of services for the task.
- The `terraform.tfvars.tmpl` file serves as a template for rendering the information retrieved from the Consul catalog into the `terraform.tfvars` file.
- The `variables.tf` file contains the definition of the services input variables from the Consul catalog, as well as the intermediate variables used to dynamically configure providers.
- The `variables.module.tf` file contains the manually added variable definitions for extra information for the module.
- The `variables.auto.tfvars` file contains the actual values of the manually added variables for extra information for the module.
The terraform.tfvars file created by CTS will be similar to the following:
terraform.tfvars
# This file is generated by Consul-Terraform-Sync.
#
# The HCL blocks, arguments, variables, and values are derived from the
# operator configuration for Consul-Terraform-Sync. Any manual changes to
# this file may not be preserved and could be overwritten by a subsequent
# update.
#
# Task: jumphost-ssh
# Description:
services = {
"nginx.nginx-0.learn-cts-koy970" = {
id = "nginx"
name = "nginx"
kind = ""
address = "10.0.1.4"
port = 80
meta = {}
tags = []
namespace = ""
status = "passing"
node = "nginx-0"
node_id = "86294666-befb-41ce-5277-8bc6986ca2a8"
node_address = "10.0.1.4"
node_datacenter = "learn-cts-koy970"
node_tagged_addresses = {
lan = "10.0.1.4"
lan_ipv4 = "10.0.1.4"
wan = "10.0.1.4"
wan_ipv4 = "10.0.1.4"
}
node_meta = {
consul-network-segment = ""
consul-version = "1.16.2"
}
cts_user_defined_meta = {}
},
}
CTS auto-generates the Terraform configuration files. Any manual changes to these files may not be preserved and could be overwritten by a subsequent update.
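To see how a task module consumes this data, the following is a minimal sketch of how a module like cts-jumphost-module might turn var.services into egress rules. The variable type is abbreviated, and the resource arguments and the jumphost_security_group_id input are illustrative, not the actual module contents:
variable "services" {
  description = "Consul services monitored by CTS (type abbreviated for illustration)"
  type        = map(object({ name = string, node_address = string }))
}
variable "jumphost_security_group_id" {
  description = "Security group attached to the jumphost (hypothetical extra input)"
  type        = string
}
# One SSH egress rule per service instance reported by CTS
resource "aws_security_group_rule" "ssh_egress" {
  for_each          = var.services
  type              = "egress"
  from_port         = 22
  to_port           = 22
  protocol          = "tcp"
  cidr_blocks       = ["${each.value.node_address}/32"]
  security_group_id = var.jumphost_security_group_id
}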
Next, you will scale up the application deployment and review how CTS reacts to the change. Leave your session to the CTS instance open so you can inspect the CTS logs later, and open a new terminal session on your local machine.
Scale up the application deployment
Scale up the deployment of the nginx service to observe how CTS triggers updates to the security group.
Update the application_instances_amount variable in terraform.tfvars to 2.
terraform.tfvars
application_instances_amount = 2
Next, apply your changes to deploy one more instance of nginx. Confirm the run by entering yes.
$ terraform apply
## …
Plan: 1 to add, 0 to change, 0 to destroy.
Do you want to perform these actions?
Terraform will perform the actions described above.
Only 'yes' will be accepted to approve.
Enter a value: yes
## ...
aws_instance.application[1]: Creation complete after 15s [id=i-0ccb3e18790036752]
Wait a couple of minutes for the new nginx instance to deploy, boot up, and join the Consul cluster as a client node. When the new instance is ready and operational, observe the output in the log tracking session on the running CTS instance:
$ ssh ubuntu@$CTSINSTANCE_IP 'journalctl --follow --lines 5 --unit cts'
## ...
[INFO] tasksmanager: scheduled task next run time: task_name=jumphost-ssh wait_time=59.999408751s next_runtime="2023-10-25 15:43:00 +0000 UTC"
[INFO] tasksmanager: scheduled task triggered but had no changes: task_name=jumphost-ssh
[INFO] tasksmanager: scheduled task next run time: task_name=jumphost-ssh wait_time=59.999408751s next_runtime="2023-10-25 15:44:00 +0000 UTC"
[INFO] tasksmanager: executing task: task_name=jumphost-ssh
[INFO] tasksmanager: task completed: task_name=jumphost-ssh
Verify the current ruleset consists of two rules for the currently deployed nginx instances:
$ aws --region $(terraform output -raw aws_region) ec2 describe-security-groups --filters "Name=tag-key,Values=CtsJumphostModule" | jq -r '.SecurityGroups[].IpPermissionsEgress'
[
{
"FromPort": 22,
"IpProtocol": "tcp",
"IpRanges": [
{
"CidrIp": "<<APPLICATION INSTANCE 0 IP>>"
},
{
"CidrIp": "<<APPLICATION INSTANCE 1 IP>>"
}
],
"Ipv6Ranges": [],
"PrefixListIds": [],
"ToPort": 22,
"UserIdGroupPairs": []
}
]
Save the address of the second application instance into an environment variable.
$ export APPINSTANCE_1_IP=$(terraform output -json app_instance_ips | jq -r '.[1]')
Log on to the second application instance through the jumphost instance.
$ ssh -J ubuntu@$JUMPHOST_IP ubuntu@$APPINSTANCE_1_IP 'uname -a'
##...
Linux nginx-1 5.4.0-1103-aws #111~18.04.1-Ubuntu SMP Tue May 23 20:04:10 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux
After you scaled up the nginx deployment, CTS reacted to the change and refreshed the contents of the security group ruleset.
Clean up resources
Destroy the infrastructure via Terraform. Confirm the run by entering yes.
$ terraform destroy
##...
Do you really want to destroy all resources?
Terraform will destroy all your managed infrastructure, as shown above.
There is no undo. Only 'yes' will be accepted to confirm.
Enter a value: yes
It may take several minutes for Terraform to successfully delete your infrastructure. Once Terraform completes, you should get the following output.
Destroy complete! Resources: 34 destroyed.
Next steps
In this tutorial, you learned how to use CTS to automate your network infrastructure and build an integration that automatically applies security configuration in reaction to changes in the Consul service catalog.
CTS executed a task that continuously polls for changes to the services in the Consul catalog and triggers Terraform runs. These runs deploy security group configuration entries that allow communication with the preconfigured list of application instances.
Refer to the Network Infrastructure Automation documentation to learn more about configuration of CTS.