Forward Terraform Enterprise logs to Datadog
Terraform Enterprise (TFE) offers enterprises a self-hosted, private instance of the HCP Terraform application with additional enterprise-grade architectural features like audit logging, SAML single sign-on, and unlimited resource management. Terraform Enterprise produces a high volume of logs that you, as an administrator, can use to audit and manage your Terraform Enterprise application.
With log forwarding, you can ingest and deliver logs to your preferred log destinations. This lets you use your existing log aggregation tools and workflows. In previous versions of Terraform Enterprise, you could only use logspout, a log router for Docker containers, to collect and forward the logs over syslog.
In this tutorial, you will ingest logs from your Terraform Enterprise application and forward them to Datadog. To do this, you will enable and configure log forwarding. Then, you will restart the Terraform Enterprise application to apply your log forwarding configuration. Finally, you will view your Terraform Enterprise logs and filter them in Datadog.
Prerequisites
For this tutorial, you will need:
- a Datadog account. If you do not have one, Datadog offers a 14-day free trial.
- a Terraform Enterprise application, version v202109-1 or later (Standalone or Active/Active deployment).
Connect to your TFE application
To configure your Terraform Enterprise application, you must open a shell session on the server it runs on. Use SSH or your cloud provider's session manager to connect to the Terraform Enterprise application's instances.
Note
Run the following steps on every instance of your Terraform Enterprise application.
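For example, depending on where your Terraform Enterprise instances run, you might connect with SSH or with AWS Systems Manager Session Manager. The username, key path, hostname, and instance ID below are placeholders for your own values.

$ ssh -i /path/to/key.pem ubuntu@tfe.example.com

Or, for an AWS EC2 instance with Session Manager enabled:

$ aws ssm start-session --target i-0123456789abcdef0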
Once you have connected to an instance of your Terraform Enterprise application, switch to the root user. Several commands in this tutorial are only available to the root user.
$ sudo su -
Log forwarding requires a Docker version that supports the journald logging driver. Ensure that your Terraform Enterprise application's instance has the journald plugin installed.
$ docker info --format '{{.Plugins.Log}}'
[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]
In addition, log forwarding requires Replicated version 2.52.0 or later. Verify your replicatedctl version.
$ replicatedctl version
Replicated version 2.53.0 (git="31e1290", date="2021-08-07 01:44:26 +0000 UTC")
If either of these requirements is not satisfied, upgrade to the appropriate versions before proceeding.
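If you want to script these checks, a minimal sketch using only the commands above is shown below. It assumes the Replicated version is the third field of the replicatedctl version output, as in the example.

$ docker info --format '{{.Plugins.Log}}' | grep -q journald || echo "journald logging driver is not available"
$ replicatedctl version | awk '{print $3}'
2.53.0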
Create a Datadog API key
In your Datadog dashboard, visit the API Keys page to view your API keys. Create a new API key named "Terraform Enterprise".
Hover over the purple block to view your API key, then store it in a safe place. You will use this API key to forward your Terraform Enterprise logs to Datadog.
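Optionally, you can confirm the key works before configuring Terraform Enterprise by calling Datadog's API key validation endpoint. The sketch below assumes your key is stored in the DD_API_KEY environment variable and that your organization uses the datadoghq.com site; adjust the domain for other Datadog sites.

$ export DD_API_KEY=00000000000000000000000000000000
$ curl -s -H "DD-API-KEY: ${DD_API_KEY}" https://api.datadoghq.com/api/v1/validate
{"valid":true}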
Enable and configure log forwarding
Terraform Enterprise's log forwarding feature uses Fluent Bit, an open source log processor and forwarder, to ingest and deliver logs from Terraform Enterprise to your desired log destination.
Note
For Active/Active deployments, you must enable and configure log forwarding, and restart all instances of Terraform Enterprise. Repeat the steps below for each host.
Update Terraform Enterprise configuration
Terraform Enterprise disables log forwarding by default. To enable and configure log forwarding, you must set two Terraform Enterprise application settings.
On your Terraform Enterprise instance, run the following commands.
- Set log_forwarding_enabled to 1. This enables log forwarding.

  $ replicatedctl app-config set log_forwarding_enabled --value 1
  Config item set successfully

- Set log_forwarding_config to a Fluent Bit configuration specifying the external destination(s) to forward logs.

  First, create a file named fluent-bit.conf with the following contents. Update the value of apikey with the Datadog API key you created in the previous section.

  fluent-bit.conf

  [OUTPUT]
      Name datadog
      Match *
      Host http-intake.logs.datadoghq.com
      TLS on
      compress gzip
      apikey 00000000000000000000000000000000
      dd_service terraform_enterprise
      dd_source docker
      dd_tags environment:development,owner:engineering

  Refer to the Fluent Bit Datadog documentation for the full list of configuration parameters.

  Then, set log_forwarding_config to the contents of your fluent-bit.conf file.

  $ replicatedctl app-config set log_forwarding_config --value "$(cat fluent-bit.conf)"
  Config item set successfully
Verify Terraform Enterprise configuration
Review your Terraform Enterprise configuration to verify that you have successfully enabled and configured log forwarding.
$ replicatedctl app-config export
## ...
"log_forwarding_config": {
"value": "[OUTPUT]\n Name datadog\n Match *\n Host http-intake.logs.datadoghq.com\n TLS on\n compress gzip\n apikey cdf236e8bef680077e2ae6581dd95ca7\n dd_service terraform_enterprise\n dd_source docker\n dd_tags environment:development,owner:engineering"
},
"log_forwarding_enabled": {
"value": "1"
},
## …
Review the output of this command and ensure that log_forwarding_enabled is set to 1 and log_forwarding_config is set to the contents of fluent-bit.conf.
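Depending on your Replicated version, you may also be able to render individual values with a Go template instead of scanning the full export. The flag and template below are an assumption based on Replicated's replicatedctl and may differ in your environment.

$ replicatedctl app-config export --template '{{ .log_forwarding_enabled.Value }}'
1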
Restart Terraform Enterprise instance
To apply your log forwarding configuration, you must restart your Terraform Enterprise application instance.
First, stop your Terraform Enterprise application instance.
$ replicatedctl app stop
App is stopping
Tip
Terraform Enterprise may take a few minutes to restart. Please ensure your deployment is configured to allow for this downtime.
Verify your Terraform Enterprise instance stopped. The State will display stopped.
$ replicatedctl app status
[
{
"AppID": "00000000000000000000000000000000",
"Sequence": 000,
"PatchSequence": 0,
"State": "stopped",
"DesiredState": "stopped",
"Error": "",
"IsCancellable": true,
"IsTransitioning": true,
"LastModifiedAt": "2021-09-14T00:00:00.000000000Z"
}
]
Then, start your Terraform Enterprise application instance.
$ replicatedctl app start
App is starting
Verify your Terraform Enterprise instance started. The State will display started.
$ replicatedctl app status
[
{
"AppID": "00000000000000000000000000000000",
"Sequence": 000,
"PatchSequence": 0,
"State": "started",
"DesiredState": "started",
"Error": "",
"IsCancellable": false,
"IsTransitioning": false,
"LastModifiedAt": "2021-09-14T00:00:00.000000000Z"
}
]
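Because the restart can take a few minutes, you may prefer to poll rather than re-run the status command by hand. A minimal sketch using only the replicatedctl command shown above; adjust the sleep interval as needed.

$ until replicatedctl app status | grep -q '"State": "started"'; do sleep 5; done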
Verify log forwarding
Review the Fluent Bit container logs to verify you configured Fluent Bit correctly.
$ docker logs -f tfe-fluent-bit
Fluent Bit v1.8.4
* Copyright (C) 2019-2021 The Fluent Bit Authors
* Copyright (C) 2015-2018 Treasure Data
* Fluent Bit is a CNCF sub-project under the umbrella of Fluentd
* https://fluentbit.io
[2021/09/14 00:00:00] [ info] [engine] started (pid=1)
[2021/09/14 00:00:00] [ info] [storage] version=1.1.1, initializing...
[2021/09/14 00:00:00] [ info] [storage] in-memory
[2021/09/14 00:00:00] [ info] [storage] normal synchronization mode, checksum disabled, max_chunks_up=128
[2021/09/14 00:00:00] [ info] [cmetrics] version=0.2.1
[2021/09/14 00:00:00] [ info] [input:systemd:systemd.0] seek_cursor=s=bed061ae941c4982a30df01d394e7448;i=148... OK
[2021/09/14 00:00:00] [ info] [http_server] listen iface=0.0.0.0 tcp_port=2020
[2021/09/14 00:00:00] [ info] [sp] stream processor started
[2021/09/14 00:00:00] [ info] [output:datadog:datadog.0] https://http-intake.logs.datadoghq.com, port=443, HTTP status=200 payload={}
[2021/09/14 00:00:00] [ info] [output:datadog:datadog.0] https://http-intake.logs.datadoghq.com, port=443, HTTP status=200 payload={}
[2021/09/14 00:00:00] [ info] [output:datadog:datadog.0] https://http-intake.logs.datadoghq.com, port=443, HTTP status=200 payload={}
If the output does not include stream processor started, verify that your fluent-bit.conf file uses the API key you created in the Create a Datadog API key section.
Even if log forwarding is misconfigured, the Terraform Enterprise application will continue to run. Use the tfe-fluent-bit container logs to debug and resolve any errors.
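For example, you can surface only warnings and errors from the container output with a generic docker logs and grep filter; the bracket pattern below assumes Fluent Bit's default log format shown above.

$ docker logs tfe-fluent-bit 2>&1 | grep -iE '\[ ?(warn|error)\]'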
View Terraform Enterprise logs in Datadog
Navigate to the Datadog Log page.
In the search bar, enter service:terraform_enterprise and click the search icon. Datadog now returns your Terraform Enterprise logs.
Terraform Enterprise delivers audit logs and application logs together. Log forwarding cannot split these logs into separate streams.
To display only your Terraform Enterprise audit logs, use the following Datadog search query.
service:terraform_enterprise \[ Audit Log \]
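Because the example Fluent Bit configuration sets dd_tags environment:development,owner:engineering, you may also be able to narrow results by tag, depending on how your Datadog facets are configured. For example:

service:terraform_enterprise environment:development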
Next steps
Congratulations! Over the course of this tutorial, you enabled and configured Terraform Enterprise log forwarding to Datadog. You also verified log forwarding and filtered the logs by function in Datadog.
By using log forwarding, you can use your existing logging tools and workflows to manage and audit your Terraform Enterprise application. For more information on topics covered in this tutorial, review the documentation below:
Visit the Terraform Enterprise Log Forwarding documentation to learn more about log forwarding and to find Fluent Bit configurations for other supported log destinations, including Amazon CloudWatch, Azure Log Analytics, Google Cloud Platform Cloud Logging, and Splunk.
Visit the Terraform Enterprise Logs documentation to find more information about interacting with Terraform Enterprise logs.