Configure workers
Before you configure workers, you should have completed the following steps:
- Installed Boundary on at least three controller nodes.
- Prepared three network boundaries, or identified three existing ones:
  - Public/DMZ network
  - Intermediary network
  - Private network
- Prepared three virtual machines for Boundary workers, one in each network boundary, with the Boundary binary installed on each.
The following configuration files share common configuration components, as well as some unique components that depend on the role the Boundary worker performs. There are three files, one for each worker in a unique network boundary. Additionally, Boundary Enterprise supports a multi-hop configuration in which a Boundary worker can serve one of three purposes: an ingress worker, an intermediary worker, or an egress worker.
Prepare the environment files
HashiCorp recommends using either the env:// or file:// notation within the configuration files to securely provide secret configuration components to the Boundary worker binaries. The following configuration example uses env:// to secure the AWS KMS configuration items.
When you install the Boundary binary using a package manager, it includes a unit file that configures an environment file at /etc/boundary.d/boundary.env. You can use this file to set the sensitive values that are used to configure the Boundary workers.
The following file is an example of how this environment file could be configured:
/etc/boundary.d/boundary.env
AWS_ACCESS_KEY_ID=AKIAIOSFODNN7EXAMPLE
AWS_SECRET_ACCESS_KEY=wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY
Note
In the example above, the proper IAM roles and permissions for the given AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY must be in place so that Boundary can use them to access the different KMS keys.
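If you prefer not to rely on ambient environment variables alone, the same values can be referenced from the kms stanza with the env:// notation. The following is only an illustrative sketch, assuming the awskms stanza's optional access_key and secret_key parameters; the key ID is a placeholder:
# Hypothetical alternative: reference the credentials from the environment
# directly in the kms stanza instead of relying on the process environment alone.
kms "awskms" {
  purpose    = "worker-auth-storage"
  region     = "us-east-1"
  access_key = "env://AWS_ACCESS_KEY_ID"
  secret_key = "env://AWS_SECRET_ACCESS_KEY"
  kms_key_id = "<worker_auth_storage_kms_key_id>"
}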
Prepare the worker KMS keys
The worker-auth storage KMS key is used by a worker for the encrypted storage of authentication keys. It is recommended for workers that use the controller-led or worker-led authorization methods. If it is not specified, the authentication keys are not encrypted on disk. Optionally, if you deploy KMS authentication-driven Boundary workers, you must generate an additional KMS key to authenticate the Boundary worker with the controller.
HashiCorp strongly recommends using the Key Management System (KMS) of the cloud provider where you deploy your Boundary workers. Keep in mind that Boundary workers must have the correct level of permissions to interact with the cloud provider's KMS. Refer to your cloud provider's documentation for more information.
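If you choose KMS authentication-driven workers instead of the worker-led flow used below, each worker (and the controllers) would also need a kms stanza with the worker-auth purpose that points at that additional key. A minimal sketch, with a placeholder key ID:
# Hypothetical stanza for KMS-led worker authentication. The same key must
# also be configured on the controllers with the "worker-auth" purpose.
kms "awskms" {
  purpose    = "worker-auth"
  region     = "us-east-1"
  kms_key_id = "<worker_auth_kms_key_id>"
}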
Create the worker configurations
After you create the requisite key or keys in the cloud provider of your choice, you can begin configuring the workers.
The following configuration examples all employ the worker-led authorization flow. For more information on configuring KMS authentication for Boundary workers, refer to the KMS authentication configuration documentation.
If you use Boundary Enterprise, you can configure multiple workers to act in three different roles: ingress, intermediary, and egress. For Community Edition, workers only serve one role, acting as both the point of ingress and egress. Select your Boundary edition, and complete the following steps to configure workers.
For Boundary Enterprise, you can configure ingress, intermediary, and egress workers to take advantage of multi-hop worker capabilities.
Note that "ingress," "intermediary," and "egress" are general ways to describe how the respective worker interacts with resources. A worker can serve more than one of those roles at a time. Refer to Multi-hop sessions for more information.
Complete the steps below to configure workers for Boundary Enterprise.
Ingress worker configuration
Create the ingress-worker.hcl file with the relevant configuration information:
/etc/boundary.d/ingress-worker.hcl
# disable memory from being swapped to disk
disable_mlock = true

# listener denoting this is a worker proxy
listener "tcp" {
  address = "0.0.0.0:9202"
  purpose = "proxy"
}

# worker block for configuring the specifics of the
# worker service
worker {
  public_addr = "<worker_public_addr>"
  initial_upstreams = ["<controller_lb_address>:9201"]
  auth_storage_path = "/var/lib/boundary"
  tags {
    type = ["worker1", "upstream"]
  }
}

# Events (logging) configuration. This
# configures logging for ALL events to both
# stderr and a file at /var/log/boundary/<boundary_use>.log
events {
  audit_enabled = true
  sysevents_enabled = true
  observations_enabled = true

  sink "stderr" {
    name = "all-events"
    description = "All events sent to stderr"
    event_types = ["*"]
    format = "cloudevents-json"
  }

  sink {
    name = "file-sink"
    description = "All events sent to a file"
    event_types = ["*"]
    format = "cloudevents-json"
    file {
      path = "/var/log/boundary"
      file_name = "ingress-worker.log"
    }
    audit_config {
      audit_filter_overrides {
        sensitive = "redact"
        secret = "redact"
      }
    }
  }
}

# kms block for encrypting the authentication PKI material
kms "awskms" {
  purpose = "worker-auth-storage"
  region = "us-east-1"
  kms_key_id = "19ec80b0-dfdd-4d97-8164-c6examplekey3"
  endpoint = "https://vpce-0e1bb1852241f8cc6-pzi0do8n.kms.us-east-1.vpce.amazonaws.com"
}
Intermediate worker configuration
Create the intermediate-worker.hcl file with the relevant configuration information:
/etc/boundary.d/intermediate-worker.hcl
# disable memory from being swapped to disk
disable_mlock = true

# listener denoting this is a worker proxy
listener "tcp" {
  address = "0.0.0.0:9202"
  purpose = "proxy"
}

# worker block for configuring the specifics of the
# worker service
worker {
  public_addr = "<worker_public_addr>"
  initial_upstreams = ["<ingress_worker_address>:9202"]
  auth_storage_path = "/var/lib/boundary"
  tags {
    type = ["worker2", "intermediate"]
  }
}

# Events (logging) configuration. This
# configures logging for ALL events to both
# stderr and a file at /var/log/boundary/<boundary_use>.log
events {
  audit_enabled = true
  sysevents_enabled = true
  observations_enabled = true

  sink "stderr" {
    name = "all-events"
    description = "All events sent to stderr"
    event_types = ["*"]
    format = "cloudevents-json"
  }

  sink {
    name = "file-sink"
    description = "All events sent to a file"
    event_types = ["*"]
    format = "cloudevents-json"
    file {
      path = "/var/log/boundary"
      file_name = "intermediate-worker.log"
    }
    audit_config {
      audit_filter_overrides {
        sensitive = "redact"
        secret = "redact"
      }
    }
  }
}

# kms block for encrypting the authentication PKI material
kms "awskms" {
  purpose = "worker-auth-storage"
  region = "us-east-1"
  kms_key_id = "19ec80b0-dfdd-4d97-8164-c6examplekey4"
  endpoint = "https://vpce-0e1bb1852241f8cc6-pzi0do8n.kms.us-east-1.vpce.amazonaws.com"
}
Egress worker configuration
Create the egress-worker.hcl file with the relevant configuration information:
/etc/boundary.d/egress-worker.hcl
# disable memory from being swapped to disk
disable_mlock = true

# listener denoting this is a worker proxy
listener "tcp" {
  address = "0.0.0.0:9202"
  purpose = "proxy"
}

# worker block for configuring the specifics of the
# worker service
worker {
  public_addr = "<worker_public_addr>"
  initial_upstreams = ["<intermediate_worker_address>:9202"]
  auth_storage_path = "/var/lib/boundary"
  tags {
    type = ["worker3", "egress"]
  }
}

# Events (logging) configuration. This
# configures logging for ALL events to both
# stderr and a file at /var/log/boundary/<boundary_use>.log
events {
  audit_enabled = true
  sysevents_enabled = true
  observations_enabled = true

  sink "stderr" {
    name = "all-events"
    description = "All events sent to stderr"
    event_types = ["*"]
    format = "cloudevents-json"
  }

  sink {
    name = "file-sink"
    description = "All events sent to a file"
    event_types = ["*"]
    format = "cloudevents-json"
    file {
      path = "/var/log/boundary"
      file_name = "egress-worker.log"
    }
    audit_config {
      audit_filter_overrides {
        sensitive = "redact"
        secret = "redact"
      }
    }
  }
}

# kms block for encrypting the authentication PKI material
kms "awskms" {
  purpose = "worker-auth-storage"
  region = "us-east-1"
  kms_key_id = "19ec80b0-dfdd-4d97-8164-c6examplekey5"
  endpoint = "https://vpce-0e1bb1852241f8cc6-pzi0do8n.kms.us-east-1.vpce.amazonaws.com"
}
Refer to the list below for explanations of the parameters used in the example above:
- disable_mlock (bool: false) - Disables the server from executing the mlock syscall, which prevents memory from being swapped to disk. This is fine for local development and testing. However, it is not recommended for production unless the systems running Boundary use only encrypted swap or do not use swap at all. Boundary only supports memory locking on UNIX-like systems that support the mlock() syscall, such as Linux and FreeBSD.
  On Linux, to give the Boundary executable the ability to use the mlock syscall without running the process as root, run the following command:
  sudo setcap cap_ipc_lock=+ep $(readlink -f $(which boundary))
  If you use a Linux distribution with a modern version of systemd, you can instead add the following directive to the "[Service]" configuration section (see the drop-in override sketch after this list):
  LimitMEMLOCK=infinity
- listener - Configures the listeners on which Boundary serves traffic (API, cluster, and proxy).
- worker - Configures the worker. If present, boundary server starts a worker subprocess.
- events - Configures event-specific parameters. The example events configuration above is exhaustive and writes all events to both stderr and a file. This configuration may or may not work for your organization's logging solution.
- kms - Configures KMS blocks for various purposes. Refer to your cloud provider's KMS documentation for configuration information for the different cloud KMS blocks.
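As referenced in the disable_mlock entry above, the following is a sketch of one way to apply the LimitMEMLOCK directive without editing the packaged unit file, assuming the service unit is named boundary.service (a drop-in override created with systemctl edit):
# Open (or create) a drop-in override for the boundary unit
sudo systemctl edit boundary
# In the editor that opens, add:
#   [Service]
#   LimitMEMLOCK=infinity
# Then reload systemd so the override takes effect
sudo systemctl daemon-reload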
Refer to the documentation for additional top-level configuration options and additional worker-specific options.
Start the Boundary service
When the configuration files are in place on each Boundary worker, you can enable and start the binary on each of the Boundary worker nodes using systemd.
Run the following commands to start the service:
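A minimal sketch of those commands, assuming the package manager installed a systemd unit named boundary.service:
# Enable the service so it starts on boot, then start it now
sudo systemctl enable boundary
sudo systemctl start boundary

# Confirm the worker started and inspect recent log output
sudo systemctl status boundary
With the worker-led flow used above, the worker prints a Worker Auth Registration Request token in its startup output; you use that token in the adoption steps below.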
Adopt the Workers (optional)
If you configured the workers using the worker-led authorization flow as outlined above, you must adopt the Boundary workers.
Complete the following steps to adopt a worker using the UI:
Log in to Boundary as the admin user.
Select Workers in the navigation pane.
Click New.
(Optional) You can use the Workers page to construct the contents of the worker.hcl file, if you did not create the configuration file as part of the installation process above. Provide the following details, and Boundary constructs the worker configuration file for you:
- Boundary Cluster ID
- Worker Public Address
- Config file path
- Worker Tags
Scroll to the bottom of the New Worker page, and paste the Worker Auth Registration Request key. Boundary provides you with the Worker Auth Registration Request key in the CLI output when you start the worker. You can also locate this value in the auth_request_token file.
Click Register Worker.
Click Done.
The new worker appears on the Workers page.
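If you prefer the CLI to the UI, the worker-led registration command offers a sketch of the same adoption step; the token value below is a placeholder, and you must already be authenticated to the cluster as an administrator:
# Hypothetical example: adopt a worker by passing the Worker Auth
# Registration Request token printed when the worker started (also
# available in the auth_request_token file).
boundary workers create worker-led \
  -worker-generated-auth-token="<worker_auth_registration_request_token>"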