Google Cloud Platform
The googlecompute Packer plugin lets you create custom images for use within Google Compute Engine (GCE).
Installation
To install this plugin, copy and paste this code into your Packer configuration, then run packer init.
packer {
  required_plugins {
    googlecompute = {
      source  = "github.com/hashicorp/googlecompute"
      version = "~> 1"
    }
  }
}
Alternatively, you can use packer plugins install to manage installation of this plugin.
$ packer plugins install github.com/hashicorp/googlecompute
Components
Builders
- googlecompute - The googlecompute builder creates images from existing ones, by launching an instance, provisioning it, then exporting it as a reusable image.
Post-Processors
- googlecompute-import - The googlecompute-import post-processor takes an existing raw disk image and imports it as a GCE image that can be used for launching instances.
- googlecompute-export - The googlecompute-export post-processor exports the image built by the googlecompute builder as a .tar.gz archive into Google Cloud Storage (GCS).
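Post-processors attach to a build block. As a minimal sketch, assuming a GCS bucket named my-bucket and the basic-example source defined later on this page, an export step might look like:

build {
  sources = ["sources.googlecompute.basic-example"]

  # Hypothetical destination path; replace with a bucket you own.
  post-processor "googlecompute-export" {
    paths = ["gs://my-bucket/my-image.tar.gz"]
  }
}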
Authentication
Authenticating with Google Cloud services requires either User Application Default Credentials, a JSON Service Account Key, or an Access Token. These are not required if you are running the googlecompute Packer builder on Google Cloud with a properly-configured Google Service Account.
The following options are available for the googlecompute builder and the googlecompute-export and googlecompute-import post-processors:
- access_token (string) - A temporary OAuth 2.0 access token obtained from the Google Authorization server, i.e. the Authorization: Bearer token used to authenticate HTTP requests to GCP APIs. This is an alternative to account_file, and ignores the scopes field. If both are specified, access_token will be used over the account_file field. These access tokens cannot be renewed by Packer and thus will only work until they expire. If you anticipate Packer needing access for longer than a token's lifetime (default 1 hour), please use a service account key with account_file instead.
- account_file (string) - The JSON file containing your account credentials. Not required if you run Packer on a GCE instance with a service account. Instructions for creating the file or using service accounts are above.
- credentials_file (string) - The JSON file containing your account credentials. The file's contents may be anything supported by the Google Go client, i.e.:
  - Service account JSON
  - OIDC-provided token for federation
  - Gcloud user credentials file (refresh-token JSON)
  - A Google Developers Console client_credentials.json
- credentials_json (string) - The raw JSON payload for credentials. The accepted data formats are the same as those described under credentials_file.
- impersonate_service_account (string) - This allows service account impersonation as per the docs.
- vault_gcp_oauth_engine (string) - Can be set instead of account_file. If set, this builder will use HashiCorp Vault to generate an OAuth token for authenticating against Google Cloud. The value should be the path of the token generator within Vault. For information on how to configure your Vault + GCP engine to produce OAuth tokens, see https://www.vaultproject.io/docs/auth/gcp. You must have the environment variables VAULT_ADDR and VAULT_TOKEN set, along with any other relevant variables for accessing your Vault instance. For more information, see the Vault docs: https://www.vaultproject.io/docs/commands/#environment-variables. Example: "vault_gcp_oauth_engine": "gcp/token/my-project-editor",
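For a sense of how these options fit into a template, here is a minimal sketch that authenticates with a service account key via account_file (the key path and source label are assumptions for illustration):

source "googlecompute" "auth-example" {
  project_id   = "my project"
  source_image = "debian-9-stretch-v20200805"
  ssh_username = "packer"
  zone         = "us-central1-a"

  # Hypothetical key path; omit account_file entirely when running on a
  # GCE instance with a properly-configured service account.
  account_file = "/path/to/service-account-key.json"
}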
Running locally on your workstation
If you run the googlecompute Packer builder locally on your workstation, you will need to install the Google Cloud SDK and authenticate using User Application Default Credentials. You don't need to specify an account file if you are using this method. Your user must have at least the Compute Instance Admin (v1) & Service Account User roles to use Packer successfully.
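With the SDK installed, User Application Default Credentials are typically created by running:

$ gcloud auth application-default login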
Running on Google Cloud
If you run the googlecompute Packer builder on GCE or GKE, you can configure that instance or cluster to use a Google Service Account. This will allow Packer to authenticate to Google Cloud without having to bake in a separate credential/authentication file.
It is recommended that you create a custom service account for Packer and assign it the Compute Instance Admin (v1) & Service Account User roles.
For gcloud, you can run the following commands:
$ gcloud iam service-accounts create packer \
  --project YOUR_GCP_PROJECT \
  --description="Packer Service Account" \
  --display-name="Packer Service Account"
$ gcloud projects add-iam-policy-binding YOUR_GCP_PROJECT \
  --member=serviceAccount:packer@YOUR_GCP_PROJECT.iam.gserviceaccount.com \
  --role=roles/compute.instanceAdmin.v1
$ gcloud projects add-iam-policy-binding YOUR_GCP_PROJECT \
  --member=serviceAccount:packer@YOUR_GCP_PROJECT.iam.gserviceaccount.com \
  --role=roles/iam.serviceAccountUser
$ gcloud projects add-iam-policy-binding YOUR_GCP_PROJECT \
  --member=serviceAccount:packer@YOUR_GCP_PROJECT.iam.gserviceaccount.com \
  --role=roles/iap.tunnelResourceAccessor
$ gcloud compute instances create INSTANCE-NAME \
  --project YOUR_GCP_PROJECT \
  --image-family ubuntu-2004-lts \
  --image-project ubuntu-os-cloud \
  --network YOUR_GCP_NETWORK \
  --zone YOUR_GCP_ZONE \
  --service-account=packer@YOUR_GCP_PROJECT.iam.gserviceaccount.com \
  --scopes="https://www.googleapis.com/auth/cloud-platform"
The service account will be used automatically by Packer as long as there is no account file specified in the Packer configuration file.
Running outside of Google Cloud
The Google Cloud Console allows you to create and download a credential file that will let you use the googlecompute Packer builder anywhere. To make the process more straightforward, it is documented here:
1. Log into the Google Cloud Console and select a project.
2. Click Select a project, choose your project, and click Open.
3. Click Create Service Account.
4. Enter a service account name (friendly display name), an optional description, select the Compute Engine Instance Admin (v1) and Service Account User roles, and then click Save.
5. Generate a JSON Key and save it in a secure location.
6. Set the environment variable GOOGLE_APPLICATION_CREDENTIALS to point to the path of the service account key.
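For example, on Linux or macOS (the key path is an assumption for illustration):

$ export GOOGLE_APPLICATION_CREDENTIALS=/path/to/service-account-key.json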
Precedence of Authentication Methods
Packer looks for credentials in the following places, preferring the first location found:
1. An access_token option in your packer file.
2. An account_file option in your packer file.
3. A JSON file (Service Account) whose path is specified by the GOOGLE_APPLICATION_CREDENTIALS environment variable.
4. A JSON file in a location known to the gcloud command-line tool. (gcloud auth application-default login creates it.)
   - On Windows, this is: %APPDATA%/gcloud/application_default_credentials.json
   - On other systems: $HOME/.config/gcloud/application_default_credentials.json
5. On Google Compute Engine and Google App Engine Managed VMs, it fetches credentials from the metadata server. (Needs a correct VM authentication scope configuration, see above.)
Examples
Basic Example
Below is a fully functioning example. It doesn't do anything useful since no provisioners or startup-script metadata are defined, but it will effectively repackage an existing GCE image.
JSON
{
  "builders": [
    {
      "type": "googlecompute",
      "project_id": "my project",
      "source_image": "debian-9-stretch-v20200805",
      "ssh_username": "packer",
      "zone": "us-central1-a"
    }
  ]
}
HCL2
source "googlecompute" "basic-example" {
project_id = "my project"
source_image = "debian-9-stretch-v20200805"
ssh_username = "packer"
zone = "us-central1-a"
}
build {
sources = ["sources.googlecompute.basic-example"]
}
Windows Example
Before you can provision using the winrm communicator, you need to allow traffic through Google's firewall on the WinRM port (tcp:5986). You can do so using the gcloud command.
gcloud compute firewall-rules create allow-winrm --allow tcp:5986
Alternatively, you can create the rule by navigating to https://console.cloud.google.com/networking/firewalls/list.
Once this is set up, the following is a complete working Packer config after setting a valid project_id:
JSON
{
  "builders": [
    {
      "type": "googlecompute",
      "project_id": "my project",
      "source_image": "windows-server-2019-dc-v20200813",
      "disk_size": "50",
      "machine_type": "n1-standard-2",
      "communicator": "winrm",
      "winrm_username": "packer_user",
      "winrm_insecure": true,
      "winrm_use_ssl": true,
      "metadata": {
        "sysprep-specialize-script-cmd": "winrm quickconfig -quiet & net user /add packer_user & net localgroup administrators packer_user /add & winrm set winrm/config/service/auth @{Basic=\"true\"}"
      },
      "zone": "us-central1-a"
    }
  ]
}
HCL2
source "googlecompute" "windows-example" {
project_id = "MY_PROJECT"
source_image = "windows-server-2019-dc-v20200813"
zone = "us-central1-a"
disk_size = 50
machine_type = "n1-standard-2"
communicator = "winrm"
winrm_username = "packer_user"
winrm_insecure = true
winrm_use_ssl = true
metadata = {
sysprep-specialize-script-cmd = "winrm quickconfig -quiet & net user /add packer_user & net localgroup administrators packer_user /add & winrm set winrm/config/service/auth @{Basic=\"true\"}"
}
}
build {
sources = ["sources.googlecompute.windows-example"]
}
Warning: Please note that if you're setting up WinRM for provisioning, you'll probably want to turn it off or restrict its permissions as part of a shutdown script at the end of Packer's provisioning process. For more details on the why/how, check out this useful blog post and the associated code: https://missionimpossiblecode.io/post/winrm-for-provisioning-close-the-door-on-the-way-out-eh/
This build can take up to 15 minutes.
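As a sketch of the cleanup suggested in the warning above, a final provisioner can turn Basic authentication back off once provisioning is done; the exact commands here are an assumption, so see the linked post for a fuller treatment:

build {
  sources = ["sources.googlecompute.windows-example"]

  provisioner "windows-shell" {
    # Assumed cleanup step: disable the Basic auth that the
    # sysprep-specialize-script-cmd above enabled.
    inline = ["winrm set winrm/config/service/auth @{Basic=\"false\"}"]
  }
}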
Windows over WinSSH Example
The following example uses Windows SSH as the backend communicator: https://docs.microsoft.com/en-us/windows-server/administration/openssh/openssh_install_firstuse
source "googlecompute" "windows-ssh-example" {
project_id = "MY_PROJECT"
source_image = "windows-server-2019-dc-v20200813"
zone = "us-east4-a"
disk_size = 50
machine_type = "n1-standard-2"
communicator = "ssh"
ssh_username = var.packer_username
ssh_password = var.packer_user_password
ssh_timeout = "1h"
metadata = {
sysprep-specialize-script-cmd = "net user ${var.packer_username} \"${var.packer_user_password}\" /add /y & wmic UserAccount where Name=\"${var.packer_username}\" set PasswordExpires=False & net localgroup administrators ${var.packer_username} /add & powershell Add-WindowsCapability -Online -Name OpenSSH.Server~~~~0.0.1.0 & powershell Start-Service sshd & powershell Set-Service -Name sshd -StartupType 'Automatic' & powershell New-NetFirewallRule -Name 'OpenSSH-Server-In-TCP' -DisplayName 'OpenSSH Server (sshd)' -Enabled True -Direction Inbound -Protocol TCP -Action Allow -LocalPort 22 & powershell.exe -NoProfile -ExecutionPolicy Bypass -Command \"Set-ExecutionPolicy -ExecutionPolicy bypass -Force\""
}
}
build {
  sources = ["sources.googlecompute.windows-ssh-example"]

  provisioner "powershell" {
    script            = "../scripts/install-features.ps1"
    elevated_user     = var.packer_username
    elevated_password = var.packer_user_password
  }
}
Windows over WinSSH - Ansible Provisioner
The following example uses Windows SSH as the backend communicator (https://docs.microsoft.com/en-us/windows-server/administration/openssh/openssh_install_firstuse) with a private key.
- The sysprep-specialize-script-cmd creates the packer_user, adds it to the local administrators group, and configures the SSH key, firewall rule, and required permissions.
source "googlecompute" "windows-ssh-ansible" {
project_id = var.project_id
source_image = "windows-server-2019-dc-v20200813"
zone = "us-east4-a"
disk_size = 50
machine_type = "n1-standard-8"
communicator = "ssh"
ssh_username = var.packer_username
ssh_private_key_file = var.ssh_key_file_path
ssh_timeout = "1h"
metadata = {
sysprep-specialize-script-cmd = "net user ${var.packer_username} \"${var.packer_user_password}\" /add /y & wmic UserAccount where Name=\"${var.packer_username}\" set PasswordExpires=False & net localgroup administrators ${var.packer_username} /add & powershell Add-WindowsCapability -Online -Name OpenSSH.Server~~~~0.0.1.0 & echo ${var.ssh_pub_key} > C:\\ProgramData\\ssh\\administrators_authorized_keys & icacls.exe \"C:\\ProgramData\\ssh\\administrators_authorized_keys\" /inheritance:r /grant \"Administrators:F\" /grant \"SYSTEM:F\" & powershell New-ItemProperty -Path \"HKLM:\\SOFTWARE\\OpenSSH\" -Name DefaultShell -Value \"C:\\Windows\\System32\\WindowsPowerShell\\v1.0\\powershell.exe\" -PropertyType String -Force & powershell Start-Service sshd & powershell Set-Service -Name sshd -StartupType 'Automatic' & powershell New-NetFirewallRule -Name 'OpenSSH-Server-In-TCP' -DisplayName 'OpenSSH Server (sshd)' -Enabled True -Direction Inbound -Protocol TCP -Action Allow -LocalPort 22 & powershell.exe -NoProfile -ExecutionPolicy Bypass -Command \"Set-ExecutionPolicy -ExecutionPolicy bypass -Force\""
}
account_file = var.account_file_path
}
build {
  sources = ["sources.googlecompute.windows-ssh-ansible"]

  provisioner "ansible" {
    playbook_file           = "./playbooks/playbook.yml"
    use_proxy               = false
    ansible_ssh_extra_args  = ["-o StrictHostKeyChecking=no -o IdentitiesOnly=yes"]
    ssh_authorized_key_file = var.public_key_path
    extra_arguments = [
      "-e", "win_packages=${var.win_packages}",
      "-e", "ansible_shell_type=powershell",
      "-e", "ansible_shell_executable=None",
    ]
    user = var.packer_username
  }
}
Nested Hypervisor Example
This is an example of using the image_licenses configuration option to create a GCE image that has nested virtualization enabled. See Enabling Nested Virtualization for VM Instances for details.
JSON
{
  "builders": [
    {
      "type": "googlecompute",
      "project_id": "my project",
      "source_image_family": "centos-stream-9",
      "ssh_username": "packer",
      "zone": "us-central1-a",
      "image_licenses": ["projects/vm-options/global/licenses/enable-vmx"]
    }
  ]
}
HCL2
source "googlecompute" "basic-example" {
project_id = "my project"
source_image_family = "centos-stream-9"
ssh_username = "packer"
zone = "us-central1-a"
image_licenses = ["projects/vm-options/global/licenses/enable-vmx"]
}
build {
sources = ["sources.googlecompute.basic-example"]
}
Shared VPC Example
This is an example of using the network_project_id configuration option to create a GCE instance in a Shared VPC Network. See Creating a GCE Instance using Shared VPC for details. The user/service account running Packer must have the Compute Network User role on the Shared VPC Host Project to create the instance, in addition to the other roles mentioned in the Running on Google Cloud section.
JSON
{
  "builders": [
    {
      "type": "googlecompute",
      "project_id": "my project",
      "subnetwork": "default",
      "source_image_family": "centos-stream-9",
      "network_project_id": "SHARED_VPC_PROJECT",
      "ssh_username": "packer",
      "zone": "us-central1-a",
      "image_licenses": ["projects/vm-options/global/licenses/enable-vmx"]
    }
  ]
}
HCL2
source "googlecompute" "sharedvpc-example" {
project_id = "my project"
source_image_family = "centos-stream-9"
subnetwork = "default"
network_project_id = "SHARED_VPC_PROJECT"
ssh_username = "packer"
zone = "us-central1-a"
image_licenses = ["projects/vm-options/global/licenses/enable-vmx"]
}
build {
sources = ["sources.googlecompute.sharedvpc-example"]
}
Separate Image Project Example
This is an example of using the image_project_id configuration option to create the generated image in a different GCP project than the one used to create the virtual machine. Make sure that Packer has permission in the target project to manage images; the Compute Storage Admin role will grant the desired permissions.
JSON
{
  "builders": [
    {
      "type": "googlecompute",
      "project_id": "my project",
      "image_project_id": "my image target project",
      "source_image": "debian-9-stretch-v20200805",
      "ssh_username": "packer",
      "zone": "us-central1-a"
    }
  ]
}
HCL2
source "googlecompute" "basic-example" {
project_id = "my project"
image_project_id = "my image target project"
source_image = "debian-9-stretch-v20200805"
ssh_username = "packer"
zone = "us-central1-a"
}
build {
sources = ["sources.googlecompute.basic-example"]
}