
terraform destroy Unsupported attribute issue #1221

Closed

armujahid opened this issue Dec 1, 2022 · 5 comments

@armujahid
Contributor
Description

Deploy the karpenter example with the released version v4.17.0 (git checkout tags/v4.17.0).
Destroy the cluster in multiple steps.
The last step (terraform destroy -auto-approve) throws this error:

module.eks_blueprints_kubernetes_addons.module.aws_ebs_csi_driver[0].data.aws_eks_addon_version.this: Read complete after 1s [id=aws-ebs-csi-driver]
╷
│ Error: Unsupported attribute
│ 
│   on main.tf line 21, in provider "kubectl":
│   21:   host                   = module.eks_blueprints.eks_cluster_endpoint
│     ├────────────────
│     │ module.eks_blueprints is object with 12 attributes
│ 
│ This object does not have an attribute named "eks_cluster_endpoint".
╵
╷
│ Error: Unsupported attribute
│ 
│   on main.tf line 22, in provider "kubectl":
│   22:   cluster_ca_certificate = base64decode(module.eks_blueprints.eks_cluster_certificate_authority_data)
│     ├────────────────
│     │ module.eks_blueprints is object with 12 attributes
│ 
│ This object does not have an attribute named "eks_cluster_certificate_authority_data".
╵
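
The likely root cause is that the earlier targeted destroys removed module.eks_blueprints (and therefore its outputs) from state, so when the final destroy re-evaluates the provider blocks, the referenced output attributes no longer exist on the module object. A hypothetical defensive sketch (not from this report; the try() fallbacks are assumptions) would tolerate the missing outputs:

```hcl
# Hypothetical mitigation sketch: fall back to empty values when the module
# outputs have already been removed from state by a targeted destroy.
provider "kubectl" {
  host                   = try(module.eks_blueprints.eks_cluster_endpoint, "")
  cluster_ca_certificate = try(base64decode(module.eks_blueprints.eks_cluster_certificate_authority_data), "")
}
```

With empty fallbacks the provider cannot reach a cluster, but by that point no Kubernetes resources remain in state for it to manage, so the final destroy can evaluate the configuration without the "Unsupported attribute" error.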

Versions

  • Module version [Required]: v4.17.0

  • Terraform and provider versions:

Terraform v1.3.6
on linux_amd64
+ provider registry.terraform.io/gavinbunney/kubectl v1.14.0
+ provider registry.terraform.io/hashicorp/aws v4.44.0
+ provider registry.terraform.io/hashicorp/cloudinit v2.2.0
+ provider registry.terraform.io/hashicorp/helm v2.7.1
+ provider registry.terraform.io/hashicorp/kubernetes v2.16.0
+ provider registry.terraform.io/hashicorp/local v2.2.3
+ provider registry.terraform.io/hashicorp/null v3.2.1
+ provider registry.terraform.io/hashicorp/random v3.4.3
+ provider registry.terraform.io/hashicorp/time v0.9.1
+ provider registry.terraform.io/hashicorp/tls v4.0.4
+ provider registry.terraform.io/terraform-aws-modules/http v2.4.1

Reproduction Code [Required]

Steps to reproduce the behavior:

  1. git checkout tags/v4.17.0
  2. cd examples/karpenter
  3. terraform init
  4. terraform apply (the 1st apply will fail with the error below, but the 2nd apply will work)
╷
│ Error: default failed to create kubernetes rest client for update of resource: Unauthorized
│ 
│   with kubectl_manifest.karpenter_provisioner["apiVersion: karpenter.sh/v1alpha5\nkind: Provisioner\nmetadata:\n  name: default\nspec:\n  requirements:\n    - key: \"topology.kubernetes.io/zone\"\n      operator: In\n      values: [us-west-2a,us-west-2b,us-west-2c]\n    - key: \"karpenter.sh/capacity-type\"\n      operator: In\n      values: [\"spot\", \"on-demand\"]\n  limits:\n    resources:\n      cpu: 1000\n  provider:\n    instanceProfile: karpenter-managed-ondemand\n    subnetSelector:\n      Name: \"karpenter-private*\"\n    securityGroupSelector:\n      karpenter.sh/discovery/karpenter: 'karpenter'\n  labels:\n    type: karpenter\n    provisioner: default\n  taints:\n    - key: default\n      value: 'true'\n      effect: NoSchedule\n  ttlSecondsAfterEmpty: 120"],
│   on main.tf line 203, in resource "kubectl_manifest" "karpenter_provisioner":
│  203: resource "kubectl_manifest" "karpenter_provisioner" {
│ 
╵
╷
│ Error: default-lt failed to create kubernetes rest client for update of resource: Unauthorized
│ 
│   with kubectl_manifest.karpenter_provisioner["apiVersion: karpenter.sh/v1alpha5\nkind: Provisioner\nmetadata:\n  name: default-lt\nspec:\n  requirements:\n    - key: \"topology.kubernetes.io/zone\"\n      operator: In\n      values: [us-west-2a,us-west-2b,us-west-2c]                               #Update the correct region and zones\n    - key: \"karpenter.sh/capacity-type\"\n      operator: In\n      values: [\"spot\", \"on-demand\"]\n    - key: \"node.kubernetes.io/instance-type\"              #If not included, all instance types are considered\n      operator: In\n      values: [\"m5.2xlarge\", \"m5.4xlarge\"]\n    - key: \"kubernetes.io/arch\"                            #If not included, all architectures are considered\n      operator: In\n      values: [\"arm64\", \"amd64\"]\n  limits:\n    resources:\n      cpu: 1000\n  provider:\n    launchTemplate: \"karpenter-karpenter\"     # Used by Karpenter Nodes\n    subnetSelector:\n      Name: \"karpenter-private*\"\n  labels:\n    type: karpenter\n    provisioner: default-lt\n  taints:\n    - key: default-lt\n      value: 'true'\n      effect: NoSchedule\n  ttlSecondsAfterEmpty: 120"],
│   on main.tf line 203, in resource "kubectl_manifest" "karpenter_provisioner":
│  203: resource "kubectl_manifest" "karpenter_provisioner" {
│ 
╵
╷
│ Error: Unauthorized
│ 
│   with kubernetes_secret_v1.datadog_api_key,
│   on main.tf line 214, in resource "kubernetes_secret_v1" "datadog_api_key":
│  214: resource "kubernetes_secret_v1" "datadog_api_key" {
│ 
╵
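
These first-apply Unauthorized errors are typically caused by the Kubernetes-facing providers caching an authentication token obtained before the cluster is fully ready. A commonly used mitigation (a sketch under that assumption, not the fix adopted in this repo) is exec-based authentication, which fetches a fresh token on every call; the gavinbunney/kubectl provider supports an exec block like the hashicorp/kubernetes provider:

```hcl
# Hypothetical sketch: exec-based auth requests a fresh EKS token per call,
# avoiding stale-token Unauthorized errors on the first apply.
provider "kubectl" {
  host                   = module.eks_blueprints.eks_cluster_endpoint
  cluster_ca_certificate = base64decode(module.eks_blueprints.eks_cluster_certificate_authority_data)
  exec {
    api_version = "client.authentication.k8s.io/v1beta1"
    command     = "aws"
    args        = ["eks", "get-token", "--cluster-name", module.eks_blueprints.eks_cluster_id]
  }
}
```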

  5. terraform apply (2nd time)

At this point the cluster will be available.

  6. Now destroy it in stages:
terraform destroy -target="module.eks_blueprints_kubernetes_addons" -auto-approve
terraform destroy -target="module.eks_blueprints" -auto-approve
terraform destroy -target="module.vpc" -auto-approve
  7. Run this final step to get the above-mentioned error:
terraform destroy -auto-approve

OR

terraform destroy -auto-approve -refresh

Expected behaviour

The cluster should be destroyed without any errors, and terraform.tfstate should not contain any residual resources.

Actual behaviour

The error mentioned above is thrown, and terraform.tfstate still contains residual resources, although the cluster has been removed along with the VPC.

Terminal Output Screenshot(s)

(terminal output screenshot attached)

Additional context

#1161

@github-actions
Contributor

github-actions bot commented Jan 1, 2023

This issue has been automatically marked as stale because it has been open for 30 days
with no activity. Remove the stale label or comment, or this issue will be closed in 10 days.

@github-actions github-actions bot added the stale label Jan 1, 2023
@armujahid
Contributor Author

armujahid commented Jan 1, 2023

Not stale. Also tested with v4.20.0.

@github-actions github-actions bot removed the stale label Jan 2, 2023
@james-arawhanui

james-arawhanui commented Jan 4, 2023

Confirmed: I had to manually delete the remaining data sources from tfstate.

(tfstate screenshot attached)

I followed the workshop in its entirety, using blueprints v4.20.0 and the latest Terraform providers and terraform-aws-modules; see versions below.

$ history

terraform destroy -target=module.kubernetes_addons -auto-approve
terraform destroy -target=module.eks_blueprints -auto-approve
terraform destroy -auto-approve # error
terraform destroy -target=module.vpc -auto-approve
terraform destroy -auto-approve  # error
terraform state rm data.aws_availability_zones.available
terraform state rm data.aws_caller_identity.current
terraform state rm data.aws_eks_cluster.cluster
terraform state rm data.aws_eks_cluster_auth.this
terraform state rm data.aws_region.current
terraform state rm data.kubectl_path_documents.karpenter_provisioners
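
Since the residual entries above are all data sources, they could also be removed in one pass by filtering terraform state list (a hypothetical shortcut, not from the original comment; it assumes the only leftover entries are data sources, as in the listing above):

```shell
# Hypothetical cleanup sketch: remove every residual data source from state.
# Assumes the leftover entries all start with "data.", as shown above.
terraform state list | grep '^data\.' | xargs -r -n1 terraform state rm
```

The -r flag (GNU xargs) skips invoking terraform state rm when nothing matches the filter.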

Versions

main.tf

module "eks_blueprints" {
  source = "github.com/aws-ia/terraform-aws-eks-blueprints?ref=v4.20.0"
# ...

module "vpc" {
  source  = "terraform-aws-modules/vpc/aws"
  version = "3.18.1"
# ...

module "kubernetes_addons" {
  source = "github.com/aws-ia/terraform-aws-eks-blueprints//modules/kubernetes-addons?ref=v4.20.0"
# ...

locals.tf

locals {
  name            = basename(path.cwd)
  region          = data.aws_region.current.name
  cluster_version = "1.24"
# ...

providers.tf

terraform {
  required_version = "~> 1.3.6"

  required_providers {
    aws = {
      source = "hashicorp/aws"
      version = "~> 4.48.0"
    }
    kubernetes = {
      source = "hashicorp/kubernetes"
      version = "~> 2.16.1"
    }
    helm = {
      source  = "hashicorp/helm"
      version = "~> 2.8.0"
    }
    kubectl = {
      source  = "gavinbunney/kubectl"
      version = ">= 1.14"
    }
  }
}

eks-blueprint.zip

@github-actions
Copy link
Contributor

github-actions bot commented Feb 4, 2023

This issue has been automatically marked as stale because it has been open for 30 days
with no activity. Remove the stale label or comment, or this issue will be closed in 10 days.

@github-actions github-actions bot added the stale label Feb 4, 2023
@github-actions
Contributor

Issue closed due to inactivity.

@github-actions github-actions bot closed this as not planned on Feb 14, 2023