Allow passing node service_account when autopilot enabled #13024

Merged
3 changes: 3 additions & 0 deletions .changelog/6733.txt
@@ -0,0 +1,3 @@
```release-note:bug
container: fixed a bug where `cluster_autoscaling.auto_provisioning_defaults.service_account` could not be set when `enable_autopilot = true` for `google_container_cluster`
```
69 changes: 43 additions & 26 deletions google/resource_container_cluster.go
@@ -67,6 +67,13 @@ var (
forceNewClusterNodeConfigFields = []string{
"workload_metadata_config",
}

suppressDiffForAutopilot = schema.SchemaDiffSuppressFunc(func(k, oldValue, newValue string, d *schema.ResourceData) bool {
if v, _ := d.Get("enable_autopilot").(bool); v {
return true
}
return false
})
)

// This uses the node pool nodeConfig schema but sets
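The new `suppressDiffForAutopilot` helper above is just a predicate over a single flag. As a minimal stand-alone sketch of the same idea, the `resourceData` struct below is a hypothetical stand-in for the SDK's `*schema.ResourceData`, not the provider's real type:

```go
package main

import "fmt"

// resourceData is a hypothetical stand-in for the SDK's *schema.ResourceData,
// carrying only the flag the suppress function consults.
type resourceData struct {
	enableAutopilot bool
}

// suppressForAutopilot mirrors the PR's suppressDiffForAutopilot: on an
// Autopilot cluster GKE manages these node defaults itself, so any diff
// between state and config is ignored at plan time.
func suppressForAutopilot(oldValue, newValue string, d *resourceData) bool {
	return d.enableAutopilot
}

func main() {
	// Standard cluster: the diff surfaces as usual.
	fmt.Println(suppressForAutopilot("100", "200", &resourceData{enableAutopilot: false}))
	// Autopilot cluster: the diff is suppressed regardless of values.
	fmt.Println(suppressForAutopilot("100", "200", &resourceData{enableAutopilot: true}))
}
```

Attaching this one shared func to several fields (`resource_limits`, `disk_size`, `disk_type`, `image_type`) keeps the Autopilot special-casing in a single place.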
@@ -326,21 +333,24 @@ func resourceContainerCluster() *schema.Resource {
MaxItems: 1,
// This field is Optional + Computed because we automatically set the
// enabled value to false if the block is not returned in API responses.
Optional: true,
Computed: true,
Description: `Per-cluster configuration of Node Auto-Provisioning with Cluster Autoscaler to automatically adjust the size of the cluster and create/delete node pools based on the current needs of the cluster's workload. See the guide to using Node Auto-Provisioning for more details.`,
ConflictsWith: []string{"enable_autopilot"},
Optional: true,
Computed: true,
Description: `Per-cluster configuration of Node Auto-Provisioning with Cluster Autoscaler to automatically adjust the size of the cluster and create/delete node pools based on the current needs of the cluster's workload. See the guide to using Node Auto-Provisioning for more details.`,
Elem: &schema.Resource{
Schema: map[string]*schema.Schema{
"enabled": {
Type: schema.TypeBool,
Required: true,
Description: `Whether node auto-provisioning is enabled. Resource limits for cpu and memory must be defined to enable node auto-provisioning.`,
Type: schema.TypeBool,
Optional: true,
Computed: true,
ConflictsWith: []string{"enable_autopilot"},
Description: `Whether node auto-provisioning is enabled. Resource limits for cpu and memory must be defined to enable node auto-provisioning.`,
},
"resource_limits": {
Type: schema.TypeList,
Optional: true,
Description: `Global constraints for machine resources in the cluster. Configuring the cpu and memory types is required if node auto-provisioning is enabled. These limits will apply to node pool autoscaling in addition to node auto-provisioning.`,
Type: schema.TypeList,
Optional: true,
ConflictsWith: []string{"enable_autopilot"},
DiffSuppressFunc: suppressDiffForAutopilot,
Description: `Global constraints for machine resources in the cluster. Configuring the cpu and memory types is required if node auto-provisioning is enabled. These limits will apply to node pool autoscaling in addition to node auto-provisioning.`,
Elem: &schema.Resource{
Schema: map[string]*schema.Schema{
"resource_type": {
@@ -384,25 +394,28 @@ func resourceContainerCluster() *schema.Resource {
Description: `The Google Cloud Platform Service Account to be used by the node VMs.`,
},
"disk_size": {
Type: schema.TypeInt,
Optional: true,
Default: 100,
Description: `Size of the disk attached to each node, specified in GB. The smallest allowed disk size is 10GB.`,
ValidateFunc: validation.IntAtLeast(10),
Type: schema.TypeInt,
Optional: true,
Default: 100,
Description: `Size of the disk attached to each node, specified in GB. The smallest allowed disk size is 10GB.`,
DiffSuppressFunc: suppressDiffForAutopilot,
ValidateFunc: validation.IntAtLeast(10),
},
"disk_type": {
Type: schema.TypeString,
Optional: true,
Default: "pd-standard",
Description: `Type of the disk attached to each node.`,
ValidateFunc: validation.StringInSlice([]string{"pd-standard", "pd-ssd", "pd-balanced"}, false),
Type: schema.TypeString,
Optional: true,
Default: "pd-standard",
Description: `Type of the disk attached to each node.`,
DiffSuppressFunc: suppressDiffForAutopilot,
ValidateFunc: validation.StringInSlice([]string{"pd-standard", "pd-ssd", "pd-balanced"}, false),
},
"image_type": {
Type: schema.TypeString,
Optional: true,
Default: "COS_CONTAINERD",
Description: `The default image type used by NAP once a new node pool is being created.`,
ValidateFunc: validation.StringInSlice([]string{"COS_CONTAINERD", "COS", "UBUNTU_CONTAINERD", "UBUNTU"}, false),
Type: schema.TypeString,
Optional: true,
Default: "COS_CONTAINERD",
Description: `The default image type used by NAP once a new node pool is being created.`,
DiffSuppressFunc: suppressDiffForAutopilot,
ValidateFunc: validation.StringInSlice([]string{"COS_CONTAINERD", "COS", "UBUNTU_CONTAINERD", "UBUNTU"}, false),
},
"boot_disk_kms_key": {
Type: schema.TypeString,
@@ -3257,8 +3270,12 @@ func expandMaintenancePolicy(d *schema.ResourceData, meta interface{}) *containe

func expandClusterAutoscaling(configured interface{}, d *schema.ResourceData) *container.ClusterAutoscaling {
l, ok := configured.([]interface{})
enableAutopilot := false
if v, ok := d.GetOk("enable_autopilot"); ok && v == true {
enableAutopilot = true
}
if !ok || l == nil || len(l) == 0 || l[0] == nil {
if v, ok := d.GetOk("enable_autopilot"); ok && v == true {
if enableAutopilot {
return nil
}
return &container.ClusterAutoscaling{
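The guard added to `expandClusterAutoscaling` can be sketched in isolation: with Autopilot on and no explicit `cluster_autoscaling` block, the provider now sends nothing and lets the API own autoscaling, while a Standard cluster still gets an explicit disabled config. The types below are simplified, hypothetical stand-ins for the SDK and `container` API types, not the provider's real signatures:

```go
package main

import "fmt"

// clusterAutoscaling is a trimmed, hypothetical stand-in for the API's
// container.ClusterAutoscaling type.
type clusterAutoscaling struct {
	EnableNodeAutoprovisioning bool
}

// expandClusterAutoscaling sketches the control flow after this PR: when the
// cluster_autoscaling block is absent, Autopilot clusters return nil (the API
// manages autoscaling), Standard clusters return an explicit disabled config.
func expandClusterAutoscaling(block []interface{}, enableAutopilot bool) *clusterAutoscaling {
	if len(block) == 0 || block[0] == nil {
		if enableAutopilot {
			return nil // don't fight the API's Autopilot-managed settings
		}
		return &clusterAutoscaling{EnableNodeAutoprovisioning: false}
	}
	// A configured block is expanded as before.
	return &clusterAutoscaling{EnableNodeAutoprovisioning: true}
}

func main() {
	fmt.Println(expandClusterAutoscaling(nil, true) == nil)
	fmt.Println(expandClusterAutoscaling(nil, false).EnableNodeAutoprovisioning)
}
```

Hoisting `enableAutopilot` out of the empty-block branch also lets later parts of the real function reuse the flag when a block *is* supplied.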
80 changes: 73 additions & 7 deletions google/resource_container_cluster_test.go
@@ -1895,6 +1895,7 @@ func TestAccContainerCluster_withShieldedNodes(t *testing.T) {
func TestAccContainerCluster_withAutopilot(t *testing.T) {
t.Parallel()

pid := getTestProjectFromEnv()
containerNetName := fmt.Sprintf("tf-test-container-net-%s", randString(t, 10))
clusterName := fmt.Sprintf("tf-test-cluster-%s", randString(t, 10))

@@ -1904,7 +1905,42 @@ func TestAccContainerCluster_withAutopilot(t *testing.T) {
CheckDestroy: testAccCheckContainerClusterDestroyProducer(t),
Steps: []resource.TestStep{
{
Config: testAccContainerCluster_withAutopilot(containerNetName, clusterName, "us-central1", true, false),
Config: testAccContainerCluster_withAutopilot(pid, containerNetName, clusterName, "us-central1", true, false, ""),
},
{
ResourceName: "google_container_cluster.with_autopilot",
ImportState: true,
ImportStateVerify: true,
ImportStateVerifyIgnore: []string{"min_master_version"},
},
},
})
}

func TestAccContainerClusterCustomServiceAccount_withAutopilot(t *testing.T) {
t.Parallel()

pid := getTestProjectFromEnv()
containerNetName := fmt.Sprintf("tf-test-container-net-%s", randString(t, 10))
clusterName := fmt.Sprintf("tf-test-cluster-%s", randString(t, 10))
serviceAccountName := fmt.Sprintf("tf-test-sa-%s", randString(t, 10))

vcrTest(t, resource.TestCase{
PreCheck: func() { testAccPreCheck(t) },
Providers: testAccProviders,
CheckDestroy: testAccCheckContainerClusterDestroyProducer(t),
Steps: []resource.TestStep{
{
Config: testAccContainerCluster_withAutopilot(pid, containerNetName, clusterName, "us-central1", true, false, serviceAccountName),
Check: resource.ComposeTestCheckFunc(
resource.TestCheckResourceAttr("google_container_cluster.with_autopilot",
"cluster_autoscaling.0.enabled", "true"),
resource.TestCheckResourceAttr("google_container_cluster.with_autopilot",
"cluster_autoscaling.0.auto_provisioning_defaults.0.service_account",
fmt.Sprintf("%s@%s.iam.gserviceaccount.com", serviceAccountName, pid)),
resource.TestCheckResourceAttr("google_container_cluster.with_autopilot",
"cluster_autoscaling.0.auto_provisioning_defaults.0.oauth_scopes.0", "https://www.googleapis.com/auth/cloud-platform"),
),
},
{
ResourceName: "google_container_cluster.with_autopilot",
@@ -1919,6 +1955,7 @@ func TestAccContainerCluster_withAutopilot(t *testing.T) {
func TestAccContainerCluster_errorAutopilotLocation(t *testing.T) {
t.Parallel()

pid := getTestProjectFromEnv()
containerNetName := fmt.Sprintf("tf-test-container-net-%s", randString(t, 10))
clusterName := fmt.Sprintf("tf-test-cluster-%s", randString(t, 10))

@@ -1928,7 +1965,7 @@ func TestAccContainerCluster_errorAutopilotLocation(t *testing.T) {
CheckDestroy: testAccCheckContainerClusterDestroyProducer(t),
Steps: []resource.TestStep{
{
Config: testAccContainerCluster_withAutopilot(containerNetName, clusterName, "us-central1-a", true, false),
Config: testAccContainerCluster_withAutopilot(pid, containerNetName, clusterName, "us-central1-a", true, false, ""),
ExpectError: regexp.MustCompile(`Autopilot clusters must be regional clusters.`),
},
},
@@ -2657,8 +2694,8 @@ func testAccCheckContainerClusterDestroyProducer(t *testing.T) func(s *terraform
}

attributes := rs.Primary.Attributes
_, err := config.NewContainerClient(config.userAgent).Projects.Zones.Clusters.Get(
config.Project, attributes["location"], attributes["name"]).Do()
_, err := config.NewContainerClient(config.userAgent).Projects.Locations.Clusters.Get(
fmt.Sprintf("projects/%s/locations/%s/clusters/%s", config.Project, attributes["location"], attributes["name"])).Do()
if err == nil {
return fmt.Errorf("Cluster still exists")
}
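The destroy check above also moves from the legacy zonal `Projects.Zones.Clusters.Get` to the location-based `Projects.Locations.Clusters.Get`, which takes a single fully qualified resource name instead of three positional arguments. A small sketch of that name construction (the project, location, and cluster values are made up):

```go
package main

import "fmt"

// clusterSelfLink builds the fully qualified name expected by the
// location-based Projects.Locations.Clusters.Get call, replacing the
// legacy endpoint's separate project/zone/name arguments.
func clusterSelfLink(project, location, name string) string {
	return fmt.Sprintf("projects/%s/locations/%s/clusters/%s", project, location, name)
}

func main() {
	fmt.Println(clusterSelfLink("my-proj", "us-central1", "tf-test-cluster"))
	// → projects/my-proj/locations/us-central1/clusters/tf-test-cluster
}
```

Because `location` may be either a region or a zone, the location-based endpoint works for both regional (Autopilot) and zonal clusters, which is why the test helper switches to it.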
@@ -5223,8 +5260,36 @@ resource "google_container_cluster" "primary" {
`, name)
}

func testAccContainerCluster_withAutopilot(containerNetName string, clusterName string, location string, enabled bool, withNetworkTag bool) string {
config := fmt.Sprintf(`
func testAccContainerCluster_withAutopilot(projectID string, containerNetName string, clusterName string, location string, enabled bool, withNetworkTag bool, serviceAccount string) string {
config := ""
clusterAutoscaling := ""
if serviceAccount != "" {
config += fmt.Sprintf(`
resource "google_service_account" "service_account" {
account_id = "%[1]s"
project = "%[2]s"
display_name = "Service Account"
}

resource "google_project_iam_binding" "project" {
project = "%[2]s"
role = "roles/container.nodeServiceAccount"
members = [
"serviceAccount:%[1]s@%[2]s.iam.gserviceaccount.com",
]
}`, serviceAccount, projectID)

clusterAutoscaling = fmt.Sprintf(`
cluster_autoscaling {
auto_provisioning_defaults {
service_account = "%s@%s.iam.gserviceaccount.com"
oauth_scopes = ["https://www.googleapis.com/auth/cloud-platform"]
}
}`, serviceAccount, projectID)
}

config += fmt.Sprintf(`

resource "google_compute_network" "container_network" {
name = "%s"
auto_create_subnetworks = false
@@ -5271,9 +5336,10 @@ resource "google_container_cluster" "with_autopilot" {
disabled = false
}
}
%s
vertical_pod_autoscaling {
enabled = true
}`, containerNetName, clusterName, location, enabled)
}`, containerNetName, clusterName, location, enabled, clusterAutoscaling)
if withNetworkTag {
config += `
node_pool_auto_config {
11 changes: 6 additions & 5 deletions website/docs/r/container_cluster.html.markdown
@@ -470,15 +470,16 @@ addons_config {

<a name="nested_cluster_autoscaling"></a>The `cluster_autoscaling` block supports:

* `enabled` - (Required) Whether node auto-provisioning is enabled. Resource
limits for `cpu` and `memory` must be defined to enable node auto-provisioning.
* `enabled` - (Optional) Whether node auto-provisioning is enabled. Must be supplied for GKE Standard clusters; `true` is implied
for Autopilot clusters. Resource limits for `cpu` and `memory` must be defined to enable node auto-provisioning for GKE Standard.

* `resource_limits` - (Optional) Global constraints for machine resources in the
cluster. Configuring the `cpu` and `memory` types is required if node
auto-provisioning is enabled. These limits will apply to node pool autoscaling
in addition to node auto-provisioning. Structure is [documented below](#nested_resource_limits).

* `auto_provisioning_defaults` - (Optional) Contains defaults for a node pool created by NAP.
* `auto_provisioning_defaults` - (Optional) Contains defaults for a node pool created by NAP. A subset of fields also applies to
GKE Autopilot clusters.
Structure is [documented below](#nested_auto_provisioning_defaults).

* `autoscaling_profile` - (Optional, [Beta](https://terraform.io/docs/providers/google/provider_versions.html)) Configuration
@@ -503,11 +504,11 @@ Minimum CPU platform to be used for NAP created node pools. The instance may be
specified or newer CPU platform. Applicable values are the friendly names of CPU platforms, such
as "Intel Haswell" or "Intel Sandy Bridge".

* `oauth_scopes` - (Optional) Scopes that are used by NAP when creating node pools. Use the "https://www.googleapis.com/auth/cloud-platform" scope to grant access to all APIs. It is recommended that you set `service_account` to a non-default service account and grant IAM roles to that service account for only the resources that it needs.
* `oauth_scopes` - (Optional) Scopes that are used by NAP and GKE Autopilot when creating node pools. Use the "https://www.googleapis.com/auth/cloud-platform" scope to grant access to all APIs. It is recommended that you set `service_account` to a non-default service account and grant IAM roles to that service account for only the resources that it needs.

-> `monitoring.write` is always enabled regardless of user input. `monitoring` and `logging.write` may also be enabled depending on the values for `monitoring_service` and `logging_service`.

* `service_account` - (Optional) The Google Cloud Platform Service Account to be used by the node VMs.
* `service_account` - (Optional) The Google Cloud Platform Service Account to be used by the node VMs created by GKE Autopilot or NAP.

* `boot_disk_kms_key` - (Optional) The Customer Managed Encryption Key used to encrypt the boot disk attached to each node in the node pool. This should be of the form projects/[KEY_PROJECT_ID]/locations/[LOCATION]/keyRings/[RING_NAME]/cryptoKeys/[KEY_NAME]. For more information about protecting resources with Cloud KMS Keys please see: https://cloud.google.com/compute/docs/disks/customer-managed-encryption

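With this change merged, a configuration along the following lines should plan and apply cleanly. This is an illustrative sketch, not taken from the PR: the resource names and account id are hypothetical, and as in the acceptance test, the service account should additionally be granted `roles/container.nodeServiceAccount` on the project.

```hcl
resource "google_service_account" "nap" {
  account_id   = "autopilot-nodes"
  display_name = "Autopilot node service account"
}

resource "google_container_cluster" "autopilot" {
  name             = "example-autopilot"
  location         = "us-central1" # Autopilot clusters must be regional
  enable_autopilot = true

  cluster_autoscaling {
    # `enabled` and `resource_limits` are omitted: `true` is implied and
    # resource limits are managed by Autopilot.
    auto_provisioning_defaults {
      service_account = google_service_account.nap.email
      oauth_scopes    = ["https://www.googleapis.com/auth/cloud-platform"]
    }
  }
}
```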