Consul
Connect services on Windows workloads to Consul service mesh
A significant number of critical enterprise applications and services operate on Windows servers. As network topologies grow more complicated and security requirements evolve, it is increasingly important to integrate Windows workloads with a service mesh.
Consul is the only service mesh to support both Windows and Linux, enabling you to effectively manage your network from a single control plane. Consul service mesh provides your Windows workloads capabilities such as traffic management, zero trust security, resiliency, policy, enhanced observability, and secure communication across any platform or runtime.
Note
This capability is available in beta for Consul 1.15.0 preview with Windows Server 2019 VMs and Envoy versions 1.23.0 and 1.22.2. The beta version supports all Consul open source features for a single Consul datacenter.
In this tutorial, you will deploy a Consul server cluster and two Windows VMs pre-configured with Consul, Envoy, and an example workload. In the process, you will learn how to configure your existing Windows workloads with Consul and Envoy to leverage Consul service mesh.
Prerequisites
The tutorial assumes that you are familiar with Consul and its core functionality. If you're new to Consul, refer to the Consul Getting Started tutorials collection.
For this tutorial, you will need:
- An HCP account configured for use with Terraform
- An AWS account configured for use with Terraform
- consul 1.15.0-preview
- terraform >= 1.2
- git >= 2.0
Confirm that you have configured your HCP and AWS credentials correctly for Terraform to use.
$ env | grep -e 'HCP\|AWS'
HCP_CLIENT_ID=********************************
HCP_CLIENT_SECRET=****************************************************************
AWS_ACCESS_KEY_ID=********************
AWS_SECRET_ACCESS_KEY=****************************************
Clone example repository
Clone the GitHub repository containing the configuration files and resources.
$ git clone https://github.com/hashicorp-education/learn-consul-windows-vm.git
Navigate to the cloned repository.
$ cd learn-consul-windows-vm
Deploy infrastructure
This tutorial deploys an HCP Consul Dedicated cluster, supporting AWS networking infrastructure (VPC, subnets, etc.), and two Windows servers. The Windows servers are pre-configured with Consul, Envoy, and fake-service. In the next section, you will go through the provisioning steps required to enable Consul client agents on your Windows servers.
Initialize the Terraform project to download the necessary providers and modules.
$ terraform init
## ...
Initializing provider plugins...
## ...
Terraform has been successfully initialized!
## ...
Then, deploy the resources. Confirm the run by entering yes.
$ terraform apply
## ...
Do you want to perform these actions?
Terraform will perform the actions described above.
Only 'yes' will be accepted to approve.
Enter a value: yes
## ...
Apply complete! Resources: 30 added, 0 changed, 0 destroyed.
This step takes about 15 minutes. While waiting, read the next section to learn how to configure Consul, Envoy, and service definitions on a Windows virtual machine (VM).
Review Consul client configuration
The repository contains three main components that enable Consul on Windows workloads:
- The windows.tf file contains the configuration for the Windows virtual machines.
- The templates/consul-client-agent.tftpl file is a PowerShell script that Terraform uses to install the Consul and Envoy binaries, configure the client agents, start Consul and Envoy, and register the services (the fake-service frontend or backend) with Consul.
- The services directory contains the Consul service definitions for fake-service and an intentions file that enables service-to-service communication between the fake-service backend and frontend.
Open windows.tf and find the aws_instance.fakeservice resource.
windows.tf
resource "aws_instance" "fakeservice" {
for_each = toset([
for s in fileset(path.module, "services/*.json"):
trimsuffix(replace(s, "services/", ""), ".json")
])
ami = nonsensitive(data.aws_ssm_parameter.agent_windows_ami.value)
instance_type = "t2.micro"
subnet_id = module.vpc.public_subnets[0]
vpc_security_group_ids = [module.aws_hcp_consul.security_group_id, aws_security_group.hcp_consul_ec2.id]
key_name = aws_key_pair.instance_key_pair.key_name
get_password_data = true
user_data = templatefile("${path.module}/templates/consul-client-agent.tftpl",
merge(var.consul_base_folders, {
envoy_folder = "envoy"
hashicups_folder = "hashicups"
consul_download_url = var.consul_url
node_name = each.key
service_definition = file("${path.module}/services/${each.key}.json")
fakeservice_url = var.fakeservice_url
consul_token = hcp_consul_cluster_root_token.token.secret_id
consul_ca = base64decode(hcp_consul_cluster.main.consul_ca_file)
config_file = base64decode(hcp_consul_cluster.main.consul_config_file)
envoy_url = var.envoy_url
})
)
tags = {
Name = "fakeservice-${each.key}"
}
}
This resource defines a Windows Server 2019 VM with the following parameters:
- The for_each parameter creates a set of service names from the service definitions in the services directory. Terraform uses this set to deploy a Windows VM with each service definition pre-configured. This tutorial's Terraform configuration assumes a dedicated VM for each service.
- The user_data parameter specifies the script the instance runs automatically when it first launches. Notice that it uses the templates/consul-client-agent.tftpl file as a base and populates it with the values needed to configure Consul and Envoy. You can inspect the rendered result from inside a running VM, as shown below.
These parameters configure the Consul client agent to connect to and interact with the HCP Consul Dedicated cluster.
consul_token = hcp_consul_cluster_root_token.token.secret_id
consul_ca = base64decode(hcp_consul_cluster.main.consul_ca_file)
config_file = base64decode(hcp_consul_cluster.main.consul_config_file)
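If you want to see exactly what Terraform rendered for a given VM, you can inspect the user data from inside that VM after it boots. The following is a sketch that assumes the instance still answers IMDSv1 metadata requests and that EC2Launch wrote its log to the usual Windows Server 2019 location; adjust the paths if your environment differs.
# Print the rendered user_data script that this instance booted with (IMDSv1 request).
Invoke-RestMethod -Uri http://169.254.169.254/latest/user-data
# Review the user data execution log that EC2Launch typically writes on Windows Server 2019.
Get-Content "C:\ProgramData\Amazon\EC2-Windows\Launch\Log\UserdataExecution.log" -Tail 50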
Open templates/consul-client-agent.tftpl and review the comments in the file for an explanation of the steps required to configure Consul on Windows workloads. Terraform replaces the ${} placeholders with the values passed in through the EC2 instance's user_data parameter.
templates/consul-client-agent.tftpl
# Disable the Windows Firewall public profile
netsh advfirewall set publicprofile state off
# Enable TLS 1.2 for outbound web requests
[System.Net.ServicePointManager]::SecurityProtocol = 3072
$NODE_NAME="${node_name}"
$CONSUL_PATH="C:\${consul_folder}"
$CONSUL_CONFIG_PATH="C:\${consul_folder}\${consul_config_folder}"
$CONSUL_DATA_PATH="C:\${consul_folder}\data"
$CONSUL_LOG_PATH="C:\${consul_folder}\consul.log"
$CONSUL_CERTS_PATH="C:\${consul_folder}\${consul_certs_folder}"
$ENVOY_PATH="C:\${envoy_folder}"
$HASHICUPS_PATH="C:\${hashicups_folder}"
$FAKESERVICE_PATH="C:\Fake"
New-Item -type directory $CONSUL_PATH
New-Item -type directory $CONSUL_CONFIG_PATH
New-Item -type directory $CONSUL_CERTS_PATH
New-Item -type directory $ENVOY_PATH
New-Item -type directory $HASHICUPS_PATH
New-Item -type directory $CONSUL_DATA_PATH
New-Item -type directory $FAKESERVICE_PATH
# Download Consul
cd $CONSUL_PATH
Invoke-WebRequest -Uri ${consul_download_url} -OutFile consul.zip
Expand-Archive consul.zip -DestinationPath .
# Download Envoy
cd $ENVOY_PATH
Invoke-WebRequest -Uri ${envoy_url} -OutFile envoy.exe
# Add Consul and Envoy to path
$env:path = $env:path + ";" + $CONSUL_PATH + ";" + $ENVOY_PATH
[System.Environment]::SetEnvironmentVariable('Path', $env:path,[System.EnvironmentVariableTarget]::User)
[System.Environment]::SetEnvironmentVariable('Path', $env:path,[System.EnvironmentVariableTarget]::Machine)
# Change to the Consul configuration directory
cd $CONSUL_CONFIG_PATH
# Create Consul client configuration file
@"
${config_file}
"@ | Set-Content consul.json -Force
# Create Consul service definitions
@"
${service_definition}
"@ | Set-Content service.json -Force
# Copy certificate authority files
@"
${consul_ca}
"@ | Out-File ca.pem -NoNewline -Encoding utf8
# Add the ACL token and gRPC port to the client configuration and point ca_file at the local ca.pem
$consulJsonConfig = Get-Content .\consul.json -Raw | ConvertFrom-Json
$tokenConfig = @{"agent"="${consul_token}"}
$grpcConfig = @{"grpc"= 8502}
$consulJsonConfig | Add-Member -Type NoteProperty -Name 'ports' -value $grpcConfig
$consulJsonConfig.Acl | Add-Member -Type NoteProperty -Name 'tokens' -value $tokenConfig
$consulJsonConfig.ca_file = $CONSUL_CONFIG_PATH + "\ca.pem"
$consulJsonConfig | ConvertTo-Json -Depth 6 | Set-Content consul.json
$serviceJson = Get-Content .\service.json -Raw | ConvertFrom-Json
$serviceJson.service | Add-Member -Type NoteProperty -Name 'token' -value "${consul_token}"
$serviceJson | ConvertTo-Json -Depth 6 | Set-Content service.json
# Start Consul
$consulservice = @{
Name = "consul"
BinaryPathName = "c:\consul\consul.exe agent -node " + $NODE_NAME + " -config-dir=$CONSUL_CONFIG_PATH -log-file=$CONSUL_LOG_PATH -data-dir=$CONSUL_DATA_PATH"
DisplayName = "Consul"
StartupType = "Automatic"
Description = "The consul service"
}
New-Service @consulservice
Start-Service "consul"
## Set up fakeservice
# Download fakeservice
cd $FAKESERVICE_PATH
Invoke-WebRequest -Uri ${fakeservice_url} -OutFile fakeservice.zip
Expand-Archive fakeservice.zip -DestinationPath .
# Add fakeservice to path
$env:path = $env:path + ";" + $FAKESERVICE_PATH
[System.Environment]::SetEnvironmentVariable('Path', $env:path,[System.EnvironmentVariableTarget]::User)
[System.Environment]::SetEnvironmentVariable('Path', $env:path,[System.EnvironmentVariableTarget]::Machine)
# Set up fakeservice env variables
$env:LISTEN_ADDR="0.0.0.0:9090"
if ($NODE_NAME -eq "fakeservice-frontend") {
$env:UPSTREAM_URIS="http://localhost:8080"
}
$env:NAME=$NODE_NAME
# Start fakeservice and envoy
Start-Process fake-service -RedirectStandardOutput .\console.out -RedirectStandardError .\console.err
consul.exe connect envoy -sidecar-for=${node_name} -token=${consul_token} -admin-access-log-path="C:\/envoy\/back.log" -bootstrap | Set-Content c:\envoy\envoy.json -Force
Start-Process envoy.exe -ArgumentList '-c','c:\envoy\envoy.json' -RedirectStandardOutput c:\envoy\console.out -RedirectStandardError c:\envoy\console.err
Unlike on Linux and macOS, the consul connect envoy command cannot launch Envoy for you on Windows. Instead, you must explicitly use the -bootstrap option to generate the Envoy configuration, specify your Consul ACL token and a valid admin access log path, and then start Envoy yourself. The consul-client-agent.tftpl script runs the following command to generate the Envoy configuration file.
templates/consul-client-agent.tftpl
consul.exe connect envoy `
  -sidecar-for=${node_name} `
  -token=${consul_token} `
  -admin-access-log-path="C:\/envoy\/back.log" `
  -bootstrap | Set-Content c:\envoy\envoy.json -Force
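If you RDP into one of the Windows VMs, you can check that the script started everything. The following is a minimal sanity-check sketch; it assumes the generated bootstrap kept Envoy's default admin address of localhost:19000 and that the process names match the binaries started above.
# Confirm the Consul Windows service and the Envoy and fake-service processes are running.
Get-Service consul
Get-Process envoy, fake-service
# Query Envoy's admin interface to confirm the proxy is ready.
Invoke-WebRequest -UseBasicParsing http://localhost:19000/ready
# Review the redirected Envoy output if anything looks wrong.
Get-Content C:\envoy\console.err -Tail 20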
Open services/fakeservice-backend.json. This file defines a Consul service and a health check for the fake-service backend. The tutorial's Terraform configuration deploys each service definition in the services directory (each must be a JSON file) to its own Windows VM.
services/fakeservice-backend.json
{
"service": {
"name": "fakeservice-backend",
"id": "fakeservice-backend",
"port": 9090,
"connect": {
"sidecar_service": {}
},
"check": {
"id": "fakeservice-backend",
"name": "fakeservice-backend",
"service_id": "fakeservice-backend",
"tcp": "localhost:9090",
"interval": "1s",
"timeout": "3s"
}
}
}
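Because the health check is a plain TCP probe against localhost:9090, you can reproduce it by hand from inside the backend VM. A minimal sketch, assuming fake-service is already running:
# Confirm something is listening on the port that the Consul health check probes.
Test-NetConnection -ComputerName localhost -Port 9090
# Call fake-service directly; it returns a JSON payload on the same port.
Invoke-RestMethod -Uri http://localhost:9090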
Configure Consul CLI
Once Terraform completes, it returns output similar to the following:
Outputs:
consul_root_token = <sensitive>
consul_url = "https://learn-consul-windows-T2G8.consul.98a0dcc3-5473-4e4d-a28e-6c343c498530.aws.hashicorp.cloud"
fakeservice_addresses = {
"fakeservice-backend" = "http://18.236.173.214:9090"
"fakeservice-frontend" = "http://34.217.41.199:9090"
}
password_data = <sensitive>
The password_data output contains the passwords for remote access to each Windows VM. If you are interested in learning how to connect to a Windows VM using RDP, refer to the AWS documentation.
$ terraform output password_data
{
"fakeservice-backend" = "REDACTED"
"fakeservice-frontend" = "REDACTED"
}
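If your local terminal happens to be PowerShell, the following sketch shows one way to pull a single VM's password out of that map; the key names come from the Terraform output above.
# Decode the Terraform output into a PowerShell object and select one VM's password.
$passwords = terraform output -json password_data | ConvertFrom-Json
$passwords."fakeservice-backend"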
You will now configure the Consul CLI to connect to your HCP Consul Dedicated cluster by retrieving the cluster address and root token.
First, retrieve the HCP Consul Dedicated cluster URL and export it as an environment variable named CONSUL_HTTP_ADDR.
$ export CONSUL_HTTP_ADDR=$(terraform output -raw consul_url)
Then, retrieve the root token and export it as an environment variable named CONSUL_HTTP_TOKEN.
$ export CONSUL_HTTP_TOKEN=$(terraform output -raw consul_root_token)
Confirm your Consul CLI connects to your HCP Consul Dedicated cluster by retrieving the cluster's members. You should observe one Consul server and two Consul clients.
$ consul members
Node Address Status Type Build Protocol DC Segment
ip-172-25-41-43 172.25.41.43:8301 alive server 1.13.2+ent 2 learn-consul-windows-t2g8 <all>
fakeservice-backend 10.0.1.59:8301 alive client 1.12.0 2 learn-consul-windows-t2g8 <default>
fakeservice-frontend 10.0.1.101:8301 alive client 1.12.0 2 learn-consul-windows-t2g8 <default>
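Because each Windows VM runs its own Consul client agent, you can run the same check from inside a VM as well. A sketch, assuming you RDP into a VM and substitute your own root token for the placeholder (ACLs are enabled, so the CLI needs a token):
# Query the local client agent; <your-root-token> is a placeholder for the HCP root token.
C:\consul\consul.exe members -token="<your-root-token>"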
Verify services
List the services registered in Consul. Notice this includes the fake-service backend and frontend, in addition to their respective Envoy sidecar proxies.
$ consul catalog services
consul
fakeservice-backend
fakeservice-backend-sidecar-proxy
fakeservice-frontend
fakeservice-frontend-sidecar-proxy
List the fake-service frontend service's address. Open the address in your browser.
$ terraform output fakeservice_addresses
{
"fakeservice-backend" = "http://18.236.173.214:9090"
"fakeservice-frontend" = "http://34.217.41.199:9090"
}
Your browser should display something similar to the following. Notice that the service is up, but it is unable to connect to its backend. This is expected behavior since you have not yet defined any service intentions.
$ curl http://34.217.41.199:9090 # invoke fakeservice-frontend external address
{
"name": "frontend",
"uri": "/",
"type": "HTTP",
"ip_addresses": [
"10.0.1.11"
],
"start_time": "2022-09-30T07:34:40.196869",
"end_time": "2022-09-30T07:34:41.545555",
"duration": "1.3477248s",
"body": "Hello World",
"upstream_calls": {
"http://localhost:8080": {
"uri": "http://localhost:8080",
"code": -1,
"error": "Error communicating with upstream service: Get \"http://localhost:8080/\": dial tcp [::1]:8080: connectex: No connection could be made because the target machine actively refused it."
}
},
"code": 500
}
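You can also confirm that Consul is the one denying the connection. The consul intention check subcommand reports whether traffic between two services would be allowed. A sketch, run from inside one of the Windows VMs with your root token substituted for the placeholder:
# With no intention defined and ACLs in default-deny, this should report "Denied".
C:\consul\consul.exe intention check -token="<your-root-token>" fakeservice-frontend fakeservice-backend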
Create Consul service intentions
In this section, you will create a Consul intention to enable the fake-service frontend and backend services to interact with each other.
Open services/service-intentions.hcl. This file defines the service intention that allows traffic from fakeservice-frontend to fakeservice-backend.
services/service-intentions.hcl
Kind = "service-intentions"
Name = "fakeservice-backend"
Sources = [
{
Name = "fakeservice-frontend"
Action = "allow"
}
]
Create the service intention to allow the services to interact with each other.
$ consul config write services/service-intentions.hcl
Config entry written: service-intentions/fakeservice-backend
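To confirm the intention from the command line, you can read the config entry back or re-run the intention check from one of the Windows VMs. A sketch, substituting your root token for the placeholder:
# Read back the config entry you just wrote.
C:\consul\consul.exe config read -kind service-intentions -name fakeservice-backend -token="<your-root-token>"
# The intention check should now report "Allowed".
C:\consul\consul.exe intention check -token="<your-root-token>" fakeservice-frontend fakeservice-backend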
Verify connected services
Refresh the frontend service in your browser. You should see something similar to the following. The frontend service can now interact with the backend service.
$ curl http://34.217.41.199:9090 # invoke fakeservice-frontend external address
{
"name": "frontend",
"uri": "/",
"type": "HTTP",
"ip_addresses": [
"10.0.1.11"
],
"start_time": "2022-09-30T07:37:23.196869",
"end_time": "2022-09-30T07:37:24.545555",
"duration": "1.3477248s",
"body": "Hello World",
"upstream_calls": {
"http://localhost:8080": {
"name": "fakeservice-backend",
"uri": "http://localhost:8080",
"type": "HTTP",
"ip_addresses": [
"10.0.1.127"
],
"start_time": "2022-09-30T10:43:45.685990",
"end_time": "2022-09-30T10:43:45.687329",
"duration": "1.3395ms",
"headers": {
"Content-Length": "266",
"Content-Type": "text/plain; charset=utf-8",
"Date": "Fri, 30 Sep 2022 10:43:45 GMT"
},
"body": "Hello World",
"code": 200
}
},
"code": 200
}
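If you want to confirm the request traveled through the service mesh rather than directly between the VMs, you can inspect Envoy's statistics on the frontend VM. A sketch that assumes the default admin port of 19000:
# List Envoy's upstream clusters and filter for the backend service; the connection and
# request counters indicate the frontend sidecar is sending traffic to the backend's sidecar.
(Invoke-WebRequest -UseBasicParsing http://localhost:19000/clusters).Content -split "`n" |
    Select-String "fakeservice-backend"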
You have successfully deployed two services on Windows workloads and connected them to Consul service mesh.
Clean up your infrastructure
Before moving on, destroy the infrastructure you created in this tutorial. Confirm the destroy with a yes.
$ terraform destroy
Next steps
In this tutorial, you deployed a Consul server cluster and two Windows VMs pre-configured with Consul, Envoy, and fake-service. In the process, you learned how to configure your existing Windows workloads with Consul and Envoy to leverage Consul service mesh.
For more information on topics covered in this tutorial, check out the following resources.
- Complete the Upgrade Services with Canary Deployments tutorial to learn how to use Consul's service splitting capabilities to manage service traffic
- Read the service definition documentation to learn more about how to register services to Consul
- Read the service intentions documentation to learn more about how to enable service-to-service communication