WAN Federation Between VMs and Kubernetes Through Mesh Gateways
1.8.0+: This feature is available in Consul versions 1.8.0 and higher
This topic requires familiarity with Mesh Gateways and WAN Federation Via Mesh Gateways.
This page describes how to federate Consul clusters separately deployed in VM and Kubernetes runtimes. Refer to Multi-Cluster Overview for more information, including Kubernetes networking requirements.
Consul datacenters running on non-Kubernetes platforms, such as VMs or bare metal, can be federated with Kubernetes datacenters.
Kubernetes as the Primary
One Consul datacenter must be the primary. If your primary datacenter is running on Kubernetes, use the Helm config from the Primary Datacenter section to install Consul.
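For reference, a minimal sketch of what that primary-datacenter Helm config might contain is shown below (the values here are assumptions; treat the Primary Datacenter section as authoritative):

global:
  name: consul
  datacenter: dc1
  tls:
    enabled: true
  federation:
    enabled: true
  acls:
    manageSystemACLs: true
    createReplicationToken: true
connectInject:
  enabled: true
meshGateway:
  enabled: true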
Once Consul is installed on Kubernetes and the ProxyDefaults resource has been created, you'll need to export the following information from the primary Kubernetes cluster:
- Certificate authority cert and key (in order to create SSL certs for VMs)
- External addresses of Kubernetes mesh gateways
- Replication ACL token
- Gossip encryption key
The following sections detail how to export this data.
Certificates
Retrieve the certificate authority cert:
kubectl get secrets/consul-ca-cert --namespace consul --template='{{index .data "tls.crt" | base64decode }}' > consul-agent-ca.pem
And the certificate authority signing key:
kubectl get secrets/consul-ca-key --namespace consul --template='{{index .data "tls.key" | base64decode }}' > consul-agent-ca-key.pem
With the consul-agent-ca.pem and consul-agent-ca-key.pem files, you can create certificates for your servers and clients running on VMs that share the same certificate authority as your Kubernetes servers. You can use the consul tls commands to generate those certificates:

# NOTE: consul-agent-ca.pem and consul-agent-ca-key.pem must be in the current
# directory.
$ consul tls cert create -server -dc=vm-dc -node <node_name>
==> WARNING: Server Certificates grants authority to become a server and access all state in the cluster including root keys and all ACL tokens. Do not distribute them to production hosts that are not server nodes. Store them as securely as CA keys.
==> Using consul-agent-ca.pem and consul-agent-ca-key.pem
==> Saved vm-dc-server-consul-0.pem
==> Saved vm-dc-server-consul-0-key.pem
Note the -node option in the above command. This should be the same as the node name of the Consul agent; this is a requirement for Consul federation to work. Alternatively, if you plan to use the same certificate and key pair on all your Consul server nodes, or you don't know the node name in advance, use -node "*" instead. Not satisfying this requirement will result in the following error in the Consul server logs:

[ERROR] agent.server.rpc: TLS handshake failed: conn=from= error="remote error: tls: bad certificate"
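For example, to generate a single certificate that is valid for any server node name in the vm-dc datacenter:

$ consul tls cert create -server -dc=vm-dc -node "*"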
See the output of consul tls cert create -h for more options for generating server certificates. These certificates can be used in your server config file:

server.hcl

tls {
  defaults {
    cert_file = "vm-dc-server-consul-0.pem"
    key_file = "vm-dc-server-consul-0-key.pem"
    ca_file = "consul-agent-ca.pem"
  }
}
For clients, you can generate TLS certs with:
$ consul tls cert create -client
==> Using consul-agent-ca.pem and consul-agent-ca-key.pem
==> Saved dc1-client-consul-0.pem
==> Saved dc1-client-consul-0-key.pem
Or use the auto_encrypt feature.
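As a sketch, a VM client agent using auto_encrypt could carry TLS settings along these lines (file paths are assumptions):

# client.hcl
tls {
  defaults {
    ca_file         = "consul-agent-ca.pem"
    verify_outgoing = true
  }
}
auto_encrypt {
  tls = true
}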
Mesh Gateway Addresses
Retrieve the WAN addresses of the mesh gateways:
$ kubectl exec statefulset/consul-server --namespace consul -- sh -c \
'curl --silent --insecure https://localhost:8501/v1/catalog/service/mesh-gateway | jq ".[].ServiceTaggedAddresses.wan"'
{
"Address": "1.2.3.4",
"Port": 443
}
{
"Address": "1.2.3.4",
"Port": 443
}
In this example, the addresses are the same because both mesh gateway pods are fronted by the same Kubernetes load balancer.
These addresses will be used in the server config for the primary_gateways setting:
primary_gateways = ["1.2.3.4:443"]
Replication ACL Token
If ACLs are enabled, you'll also need the replication ACL token:
$ kubectl get secrets/consul-acl-replication-acl-token --namespace consul --template='{{.data.token | base64decode}}'
e7924dd1-dc3f-f644-da54-81a73ba0a178
This token will be used in the server config as the replication token:
acls {
tokens {
replication = "e7924dd1-dc3f-f644-da54-81a73ba0a178"
}
}
NOTE: You'll also need to set up additional ACL tokens as needed by the ACL system. See tutorial Secure Consul with Access Control Lists (ACLs) for more information.
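For instance, one way to create an agent token for a VM server is with a node identity (the node name vm-server-1 below is a placeholder):

consul acl token create -description "vm-dc server agent token" -node-identity "vm-server-1:vm-dc"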
Gossip Encryption Key
If gossip encryption is enabled, you'll need the key as well. The command to retrieve the key will depend on which Kubernetes secret you've stored it in.
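For example, if the key is stored in a secret named consul-gossip-encryption-key under the key key (both names are assumptions here), you could retrieve it with:

kubectl get secrets/consul-gossip-encryption-key --namespace consul --template='{{.data.key | base64decode}}'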
This key will be used in server and client configs for the encrypt setting:
encrypt = "uF+GsbI66cuWU21kiXLze5JLEX5j4iDFlDTb0ZWNpDI="
Final Configuration
A final example server config file might look like:
# From above
tls {
defaults {
cert_file = "vm-dc-server-consul-0.pem"
key_file = "vm-dc-server-consul-0-key.pem"
ca_file = "consul-agent-ca.pem"
}
internal_rpc {
verify_incoming = true
verify_outgoing = true
verify_server_hostname = true
}
}
primary_gateways = ["1.2.3.4:443"]
acl {
enabled = true
default_policy = "deny"
down_policy = "extend-cache"
tokens {
agent = "e7924dd1-dc3f-f644-da54-81a73ba0a178"
replication = "e7924dd1-dc3f-f644-da54-81a73ba0a178"
}
}
encrypt = "uF+GsbI66cuWU21kiXLze5JLEX5j4iDFlDTb0ZWNpDI="
# Other server settings
server = true
datacenter = "vm-dc"
data_dir = "/opt/consul"
enable_central_service_config = true
primary_datacenter = "dc1"
connect {
enabled = true
enable_mesh_gateway_wan_federation = true
}
ports {
https = 8501
http = -1
grpc = 8502
}
Kubernetes as the Secondary
If you're running your primary datacenter on VMs then you'll need to manually construct the Federation Secret in order to federate Kubernetes clusters as secondaries. In addition, primary clusters must be able to make requests to the Kubernetes API URLs of secondary clusters when ACLs are enabled.
Your VM cluster must be running mesh gateways, and have mesh gateway WAN federation enabled. See WAN Federation via Mesh Gateways.
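For reference, a mesh gateway on a VM is typically run through the Envoy integration; a sketch, with addresses and ports as placeholders:

$ consul connect envoy -gateway=mesh -register \
    -service "mesh-gateway" \
    -address "<vm private ip>:8443" \
    -wan-address "<vm public ip>:8443"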
You'll need:
- The root certificate authority cert placed in consul-agent-ca.pem.
- The root certificate authority key placed in consul-agent-ca-key.pem.
- The IP addresses of the mesh gateways running in your VM datacenter. These must be routable from the Kubernetes cluster.
If ACLs are enabled you must create an ACL replication token with the following rules:
acl = "write" operator = "write" agent_prefix "" { policy = "read" } node_prefix "" { policy = "write" } service_prefix "" { policy = "read" intentions = "read" }
This token is used for ACL replication and for automatic ACL management in Kubernetes.
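One way to create this token with the CLI, assuming the rules above are saved as replication-policy.hcl (policy and token names here are illustrative):

consul acl policy create -name acl-replication -rules @replication-policy.hcl
consul acl token create -description "ACL replication token" -policy-name acl-replication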
If you're running Consul Enterprise you'll need the rules:
operator = "write" agent_prefix "" { policy = "read" } node_prefix "" { policy = "write" } namespace_prefix "" { acl = "write" service_prefix "" { policy = "read" intentions = "read" } }
If ACLs are enabled you must also modify the anonymous token policy to have the following permissions:
node_prefix "" { policy = "read" } service_prefix "" { policy = "read" }
With Consul Enterprise, use:
partition_prefix "" { namespace_prefix "" { node_prefix "" { policy = "read" } service_prefix "" { policy = "read" } } }
These permissions are needed to allow cross-datacenter requests. To make a cross-datacenter request, the sidecar proxy in the originating DC needs to know about the services running in the remote DC, so it needs an ACL token that allows it to look up those services. Because of the way tokens are created in Kubernetes, the sidecar proxies have local ACL tokens, i.e. tokens that are only valid in the local DC. When a request goes from one DC to another, if the request has a local token, the token is stripped from the request because the remote DC won't be able to validate it. When the request lands in the other DC, it has no ACL token and so is subject to the anonymous token policy. This is why the anonymous token policy must be configured to allow read access to all services. When the Kubernetes DC is the primary, this is handled automatically, but when the primary DC is on VMs, this must be configured manually.
To configure the anonymous token policy, first create a policy with the above rules, then attach it to the anonymous token. For example, using the CLI:

echo 'node_prefix "" { policy = "read" } service_prefix "" { policy = "read" }' | consul acl policy create -name anonymous -rules -
consul acl token update -id 00000000-0000-0000-0000-000000000002 -policy-name anonymous
If gossip encryption is enabled, you'll need the key.
With that data ready, you can create the Kubernetes federation secret:
kubectl create secret generic consul-federation \
  --from-literal=caCert="$(cat consul-agent-ca.pem)" \
  --from-literal=caKey="$(cat consul-agent-ca-key.pem)"
  # If ACLs are enabled, uncomment (and add a trailing backslash to the caKey line above).
  # --from-literal=replicationToken="<your acl replication token>" \
  # If using gossip encryption, uncomment.
  # --from-literal=gossipEncryptionKey="<your gossip encryption key>"
If ACLs are enabled, you must next determine the Kubernetes API URL for the secondary cluster. The API URL of the secondary cluster must be specified in the config files for all secondary clusters because secondary clusters need to create global Consul ACL tokens (tokens that are valid in all datacenters) and these tokens can only be created by the primary datacenter. By setting the API URL, the secondary cluster will configure a Consul auth method in the primary cluster so that components in the secondary cluster can use their Kubernetes ServiceAccount tokens to retrieve global Consul ACL tokens from the primary.
To determine the Kubernetes API URL, first get the cluster name in your kubeconfig:
$ export CLUSTER=$(kubectl config view -o jsonpath="{.contexts[?(@.name == \"$(kubectl config current-context)\")].context.cluster}")
Then get the API URL:
$ kubectl config view -o jsonpath="{.clusters[?(@.name == \"$CLUSTER\")].cluster.server}"
https://<some-url>
You'll use this URL when setting global.federation.k8sAuthMethodHost.
Then use the following Helm config file:
global:
name: consul
datacenter: dc2
tls:
enabled: true
caCert:
secretName: consul-federation
secretKey: caCert
caKey:
secretName: consul-federation
secretKey: caKey
# Delete this acls section if ACLs are disabled.
acls:
manageSystemACLs: true
replicationToken:
secretName: consul-federation
secretKey: replicationToken
federation:
enabled: true
k8sAuthMethodHost: <kubernetes-api-url>
primaryDatacenter: dc1
# Delete this gossipEncryption section if gossip encryption is disabled.
gossipEncryption:
secretName: consul-federation
secretKey: gossipEncryptionKey
connectInject:
enabled: true
meshGateway:
enabled: true
server:
extraConfig: |
{
"primary_gateways": ["<ip of your VM mesh gateway>", "<other ip>", ...]
}
Notes:
- You must fill out the server.extraConfig section with the IPs of your mesh gateways running on VMs.
- Set global.federation.k8sAuthMethodHost to the Kubernetes API URL of this cluster (including https://).
- global.federation.primaryDatacenter should be set to the name of your primary datacenter.
With your config file ready to go, follow our Installation Guide to install Consul on your secondary cluster(s).
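For example, assuming the config above is saved as config.yaml, installation typically looks like:

$ helm repo add hashicorp https://helm.releases.hashicorp.com
$ helm install consul hashicorp/consul --namespace consul --create-namespace --values config.yaml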
After installation, if you're using consul-helm 0.30.0+, create the ProxyDefaults resource to allow traffic between datacenters.
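A typical ProxyDefaults resource for this looks roughly as follows (the local mesh gateway mode is an assumption; choose the mode that fits your topology):

apiVersion: consul.hashicorp.com/v1alpha1
kind: ProxyDefaults
metadata:
  name: global
spec:
  meshGateway:
    mode: local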
Next Steps
In both cases (Kubernetes as primary or secondary), after installation, follow the Verifying Federation section to verify that federation is working as expected.
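As a quick check, the WAN member list should include servers from both datacenters, for example:

$ kubectl exec statefulset/consul-server --namespace consul -- consul members -wan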