Add Private Service Connect endpoint support to GCS backend #31967

Merged
3 commits merged into main on Oct 11, 2022

Conversation

@SarahFrench (Member) commented Oct 7, 2022

Google offers the ability to call its APIs via Private Service Connect endpoints, which allow traffic to remain inside VPCs and avoid passing over the public internet. The Google provider already supports this for all supported APIs (e.g. this field is equivalent to the one added to the backend in this PR), but the gcs backend doesn't support the same feature.

This PR adds the `storage_custom_endpoint` field to the gcs backend so that communication with the backend can travel via a Private Service Connect endpoint. Documentation is also updated (see deployment). This feature will benefit any users who need to comply with policies that restrict traffic from leaving given networks.

Here are the docs for the WithEndpoint function option used in this PR to override the default URL for the Storage API.
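For context, here is a minimal sketch (not the PR's actual code) of how a custom endpoint can be passed to the Go Storage client via option.WithEndpoint; the variable name and endpoint URL are illustrative only, mirroring the example config further down.

package main

import (
    "context"
    "log"

    "cloud.google.com/go/storage"
    "google.golang.org/api/option"
)

func main() {
    ctx := context.Background()

    // Would come from the backend's new storage_custom_endpoint argument;
    // this URL is just the example value used in the test config below.
    storageCustomEndpoint := "https://storage-myfwdrule.p.googleapis.com/storage/v1/b"

    opts := []option.ClientOption{}
    if storageCustomEndpoint != "" {
        // WithEndpoint overrides the client's default Storage API URL, so requests
        // go to the Private Service Connect endpoint instead of storage.googleapis.com.
        opts = append(opts, option.WithEndpoint(storageCustomEndpoint))
    }

    client, err := storage.NewClient(ctx, opts...)
    if err != nil {
        log.Fatal(err)
    }
    defer client.Close()
}

When the field is unset, no override is added and the client keeps using the default public endpoint.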

Closes #28856

Target Release

1.4.0

Draft CHANGELOG entry

ENHANCEMENTS

- backend/gcs: Add `storage_custom_endpoint` argument to allow communication with the backend via a Private Service Connect endpoint. [GH-28856]

@SarahFrench (Member, Author) commented:

Testing

I wasn't sure how to set up an automated test for this feature, as it depends on Terraform (or the test suite) being run on a VM in a specific networking scenario.
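One rough idea for an automated check (purely a sketch, not part of this PR; the package and test names are hypothetical) would be a unit-style test that points the client at a local httptest server in place of a real Private Service Connect endpoint, just to confirm that the endpoint override is honoured:

package gcsbackend_test

import (
    "context"
    "net/http"
    "net/http/httptest"
    "sync/atomic"
    "testing"

    "cloud.google.com/go/storage"
    "google.golang.org/api/option"
)

func TestStorageCustomEndpointIsUsed(t *testing.T) {
    var hits int64
    srv := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
        atomic.AddInt64(&hits, 1)
        w.Write([]byte(`{}`)) // minimal JSON body so the client can decode a response
    }))
    defer srv.Close()

    ctx := context.Background()
    client, err := storage.NewClient(ctx,
        option.WithEndpoint(srv.URL+"/storage/v1/"),
        option.WithoutAuthentication(),
    )
    if err != nil {
        t.Fatal(err)
    }
    defer client.Close()

    // Any metadata call should now be routed to the test server rather than storage.googleapis.com.
    _, _ = client.Bucket("any-bucket").Attrs(ctx)

    if atomic.LoadInt64(&hits) == 0 {
        t.Fatal("request did not reach the custom endpoint")
    }
}

That wouldn't cover the real PSC networking path, though, which is why the manual testing below was done instead.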

So far I have tested it manually by:

  1. Using Terraform to provision a VM in GCP
  2. Uploading a build of Terraform that includes this PR's code
  3. Running `terraform init` from the VM with a config that uses the new field in the gcs backend's configuration
    • Using a manually created GCS bucket as the backend in this config
Config used to set up the testing environment (step 1):
provider "google" {
  project = var.gcp_project_id
  region  = var.default_region
  zone    = var.default_zone
}

variable "gcp_project_id" {
  type = string
}
variable "default_region" {
  type = string
}
variable "default_zone" {
  type = string
}

locals {
  label = "tf-test-private-connect"
  fwd_rule_name = "myfwdrule"
}

resource "google_compute_network" "network" {
  project                 = var.gcp_project_id
  name                    = "${local.label}-network"
  auto_create_subnetworks = false
}

resource "google_compute_subnetwork" "subnet" {
  project                  = google_compute_network.network.project
  name                     = "${local.label}-subnet"
  ip_cidr_range            = "10.2.0.0/16"
  region                   = var.default_region
  network                  = google_compute_network.network.id
  private_ip_google_access = true
}

resource "google_compute_firewall" "allow_ssh_any_ip" {
  name    = "${local.label}-ssh-rule"
  network = google_compute_network.network.name

  allow {
    protocol = "tcp"
    ports    = ["22"]
  }

  source_ranges = ["0.0.0.0/0"]
}

resource "google_compute_global_address" "default" {
  project      = google_compute_network.network.project
  name         = "${local.label}-backend-global-psconnect-ip"
  address_type = "INTERNAL"
  purpose      = "PRIVATE_SERVICE_CONNECT"
  network      = google_compute_network.network.id
  address      = "10.3.0.5"
}

resource "google_compute_global_forwarding_rule" "default" {
  project               = google_compute_network.network.project
  name                  = local.fwd_rule_name
  target                = "all-apis"
  network               = google_compute_network.network.id
  ip_address            = google_compute_global_address.default.id
  load_balancing_scheme = ""
}

resource "google_compute_instance" "vm_instance" {
  name         = "${local.label}-instance"
  machine_type = "f1-micro"

  labels = {
    fizz = "buzz"
  }

  boot_disk {
    initialize_params {
      image = "debian-cloud/debian-11"
      labels = {
        foo = "bar"
      }
    }
  }

  network_interface {
    network = google_compute_network.network.name
    subnetwork = google_compute_subnetwork.subnet.name

    access_config {
      // Ephemeral public IP
    }
  }

  service_account {
    # Google recommends custom service accounts with the cloud-platform scope and permissions granted via IAM roles.
    # This service account is created with no roles; the roles it needs are granted below via google_project_iam_member.
    email  = google_service_account.default.email
    scopes = ["cloud-platform"]
  }
}

resource "google_service_account" "default" {
  account_id   = "${local.label}-sa"
  display_name = "Service account used with the VM Terraform is run inside of"
}

// Permissions needed for Terraform to function when using application default credentials in the VM

// Permission allows interaction with the GCS bucket backend
resource "google_project_iam_member" "storage" {
  project  = var.gcp_project_id
  role     = "roles/storage.admin"
  member   = "serviceAccount:${google_service_account.default.email}"
}

// Permission to manage service accounts (something to provision from inside the VM)
resource "google_project_iam_member" "service_account" {
  project  = var.gcp_project_id
  role     = "roles/iam.serviceAccountAdmin"
  member   = "serviceAccount:${google_service_account.default.email}"
}
Config used inside the VM (step 3):
provider "google" {
  project = var.gcp_project_id
  region  = var.default_region
  zone    = var.default_zone
}

variable "gcp_project_id" {
  type = string
}
variable "default_region" {
  type = string
}
variable "default_zone" {
  type = string
}

terraform {
  backend "gcs" {
    bucket = "my-testing-bucket" #change
    # Below uses the value of local.fwd_rule_name from the config used to create the testing environment
    storage_custom_endpoint = "https://storage-myfwdrule.p.googleapis.com/storage/v1/b"
  }
}

# A resource to test Terraform with after a successful init step

# resource "google_service_account" "service_account" {
#   project = var.gcp_project_id
#   account_id   = "my-test-sa"
#   display_name = "Service account made as a test when running TF with a Private Connect endpoint"
# }

@SarahFrench marked this pull request as ready for review on October 10, 2022, 13:17
@SarahFrench requested a review from a team as a code owner on October 10, 2022, 13:17
@megan07 (Contributor) left a comment

Great job here! Thanks!

@SarahFrench merged commit 89ef27d into main on Oct 11, 2022
@github-actions (bot)

Reminder for the merging maintainer: if this is a user-visible change, please update the changelog on the appropriate release branch.

@github-actions (bot)

I'm going to lock this pull request because it has been closed for 30 days ⏳. This helps our maintainers find and focus on the active contributions.
If you have found a problem that seems related to this change, please open a new issue and complete the issue template so we can capture all the details necessary to investigate further.

@github-actions (bot) locked as resolved and limited conversation to collaborators on Nov 11, 2022
Successfully merging this pull request may close these issues:
Add capability to GCS backend to allow use of a Private Service Connect endpoint for Google Storage API (#28856)