
Can't import resource from module when non-relevant condition can't be determined #24690

Open
ophintor opened this issue Apr 16, 2020 · 3 comments
Labels: enhancement, import (Importing resources)

Comments

@ophintor

Terraform Version

Terraform v0.12.18
+ provider.aws v2.57.0

Terraform Configuration Files

Main file

provider "aws" {
  region = "eu-west-2"
}

variable "vpc_id" {
  default = "vpc-............."
}

resource "aws_security_group" "sg1" {
  name        = "sg1"
  description = "test sg"
  vpc_id      = var.vpc_id
}

module "test" {
  source = "./module"
  sg     = aws_security_group.sg1.id
  vpc_id = var.vpc_id
}

Module main file

variable "sg" {
  default = ""
}
variable "vpc_id" {}

locals {
  sg = var.sg == "" ? aws_security_group.sg2[0].id : var.sg
}

resource "aws_security_group" "sg2" {
  count       = var.sg == "" ? 1 : 0
  name        = "sg2"
  description = "test sg"
  vpc_id      = var.vpc_id
}

resource "aws_security_group_rule" "sgr1" {
  type              = "egress"
  from_port         = 0
  to_port           = 65535
  protocol          = "tcp"
  cidr_blocks       = ["0.0.0.0/0"]
  security_group_id = local.sg
}

resource "aws_security_group" "sg3" {
  name        = "sg3"
  description = "test sg"
  vpc_id      = var.vpc_id
}

Actual Behavior

This first step is probably expected behaviour. I will get to the actual issue further down, but it is still relevant here.

When I run apply I get:

Error: Invalid count argument

  on module/main.tf line 11, in resource "aws_security_group" "sg2":
  11:   count       = var.sg == "" ? 1 : 0

The "count" value depends on resource attributes that cannot be determined
until apply, so Terraform cannot predict how many instances will be created.
To work around this, use the -target argument to first apply only the
resources that the count depends on.

Which I suppose makes sense.
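For context, the count is only unpredictable here because var.sg is fed from an attribute of a resource that does not exist yet. As a minimal sketch (the variable name create_sg is made up, not part of the configuration above), the same count would be known at plan time if it depended on a plain boolean variable instead:

variable "create_sg" {
  type    = bool
  default = false
}

resource "aws_security_group" "sg2" {
  # The condition depends only on a value known at plan time, so Terraform can
  # determine how many instances to create without -target.
  count       = var.create_sg ? 1 : 0
  name        = "sg2"
  description = "test sg"
  vpc_id      = var.vpc_id
}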

So I do as suggested and run with target first:

$ terraform apply -target=aws_security_group.sg1


An execution plan has been generated and is shown below.
Resource actions are indicated with the following symbols:
  + create

Terraform will perform the following actions:

  # aws_security_group.sg1 will be created
  + resource "aws_security_group" "sg1" {
      + arn                    = (known after apply)
      + description            = "test sg"
      + egress                 = (known after apply)
      + id                     = (known after apply)
      + ingress                = (known after apply)
      + name                   = "sg1"
      + owner_id               = (known after apply)
      + revoke_rules_on_delete = false
      + vpc_id                 = "vpc-..."
    }

Plan: 1 to add, 0 to change, 0 to destroy.

So sg1 gets created.

Then I run terraform apply without a target. This time it works fine and creates the rest:

$ terraform apply
aws_security_group.sg1: Refreshing state... [id=sg-073e79413c0747899]

An execution plan has been generated and is shown below.
Resource actions are indicated with the following symbols:
  + create

Terraform will perform the following actions:

  # module.test.aws_security_group.sg3 will be created
  + resource "aws_security_group" "sg3" {
      + arn                    = (known after apply)
      + description            = "test sg"
      + egress                 = (known after apply)
      + id                     = (known after apply)
      + ingress                = (known after apply)
      + name                   = "sg3"
      + owner_id               = (known after apply)
      + revoke_rules_on_delete = false
      + vpc_id                 = "vpc-..."
    }

  # module.test.aws_security_group_rule.sgr1 will be created
  + resource "aws_security_group_rule" "sgr1" {
      + cidr_blocks              = [
          + "0.0.0.0/0",
        ]
      + from_port                = 0
      + id                       = (known after apply)
      + protocol                 = "tcp"
      + security_group_id        = "sg-073e79413c0747899"
      + self                     = false
      + source_security_group_id = (known after apply)
      + to_port                  = 65535
      + type                     = "egress"
    }

Plan: 2 to add, 0 to change, 0 to destroy.

So far so good. Note that sg2 does not get created due to the condition.

But then let's say I remove the sg3 security group from the state file:

$ terraform state rm module.test.aws_security_group.sg3
Removed module.test.aws_security_group.sg3
Successfully removed 1 resource instance(s).

And try to re-import it:

$ terraform import module.test.aws_security_group.sg3 sg-0e7e14a8376210c47
module.test.aws_security_group.sg3: Importing from ID "sg-0e7e14a8376210c47"...
module.test.aws_security_group.sg3: Import prepared!
  Prepared aws_security_group for import
module.test.aws_security_group.sg3: Refreshing state... [id=sg-0e7e14a8376210c47]

Error: Invalid index

  on module/main.tf line 7, in locals:
   7:   sg = var.sg == "" ? aws_security_group.sg2[0].id : var.sg
    |----------------
    | aws_security_group.sg2 is empty tuple

The given key does not identify an element in this collection value.

Why is this happening? Why does the import need to evaluate the locals at all, and can this be avoided?
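One possible workaround sketch (not necessarily the right fix, and assuming the join-over-splat pattern is acceptable here) is to avoid indexing the possibly-empty tuple at all:

locals {
  # join() over the splat returns "" instead of failing when sg2 is an empty
  # tuple, so the expression can be evaluated even when no sg2 instance exists.
  sg = var.sg == "" ? join("", aws_security_group.sg2[*].id) : var.sg
}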

Now, imagine that in my main file I comment out the sg line so that sg2 gets created:

module "test" {
  source = "./module"
  # sg     = aws_security_group.sg1.id
  vpc_id = var.vpc_id
}

I delete sg3 manually from AWS (since it's no longer in the state) and run apply:

$ terraform apply
aws_security_group.sg1: Refreshing state... [id=sg-073e79413c0747899]
module.test.aws_security_group_rule.sgr1: Refreshing state... [id=sgrule-2009651522]

An execution plan has been generated and is shown below.
Resource actions are indicated with the following symbols:
  + create
-/+ destroy and then create replacement

Terraform will perform the following actions:

  # module.test.aws_security_group.sg2[0] will be created
  + resource "aws_security_group" "sg2" {
      + arn                    = (known after apply)
      + description            = "test sg"
      + egress                 = (known after apply)
      + id                     = (known after apply)
      + ingress                = (known after apply)
      + name                   = "sg2"
      + owner_id               = (known after apply)
      + revoke_rules_on_delete = false
      + vpc_id                 = "vpc-..."
    }

  # module.test.aws_security_group.sg3 will be created
  + resource "aws_security_group" "sg3" {
      + arn                    = (known after apply)
      + description            = "test sg"
      + egress                 = (known after apply)
      + id                     = (known after apply)
      + ingress                = (known after apply)
      + name                   = "sg3"
      + owner_id               = (known after apply)
      + revoke_rules_on_delete = false
      + vpc_id                 = "vpc-..."
    }

  # module.test.aws_security_group_rule.sgr1 must be replaced
-/+ resource "aws_security_group_rule" "sgr1" {
        cidr_blocks              = [
            "0.0.0.0/0",
        ]
        from_port                = 0
      ~ id                       = "sgrule-2009651522" -> (known after apply)
      - ipv6_cidr_blocks         = [] -> null
      - prefix_list_ids          = [] -> null
        protocol                 = "tcp"
      ~ security_group_id        = "sg-073e79413c0747899" -> (known after apply) # forces replacement
        self                     = false
      + source_security_group_id = (known after apply)
        to_port                  = 65535
        type                     = "egress"
    }

Plan: 3 to add, 0 to change, 1 to destroy.

Removing sg3 from the state and re-importing it now works:

$ terraform state rm module.test.aws_security_group.sg3
Removed module.test.aws_security_group.sg3
Successfully removed 1 resource instance(s).

$ terraform import module.test.aws_security_group.sg3 sg-04a4578d42c5c42c8
module.test.aws_security_group.sg3: Importing from ID "sg-04a4578d42c5c42c8"...
module.test.aws_security_group.sg3: Import prepared!
  Prepared aws_security_group for import
module.test.aws_security_group.sg3: Refreshing state... [id=sg-04a4578d42c5c42c8]

Import successful!

The resources that were imported are shown above. These resources are now in
your Terraform state and will henceforth be managed by Terraform.

Why does this depend on whether sg2 exists or not? Shouldn't Terraform be able to import sg3 regardless?

Steps to Reproduce

As above.

Additional Context

This is a simple example that reflects an issue we've experienced in a much more complex environment. In our real setup, the value that gets passed to the module and that determines whether another resource is created is not declared at the root; it is retrieved, via a data resource, from a separate state file belonging to a different set of configuration files. I've simplified it this way here, but the outcome is the same.
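For illustration only (the backend, bucket, and output names below are made up), the real setup looks roughly like this, with the value that drives the condition coming from another configuration's state:

data "terraform_remote_state" "network" {
  backend = "s3"
  config = {
    bucket = "example-state-bucket"
    key    = "network/terraform.tfstate"
    region = "eu-west-2"
  }
}

module "test" {
  source = "./module"
  # The value that decides whether the module creates its own security group
  # comes from another configuration's state rather than from a resource
  # declared in this root module.
  sg     = data.terraform_remote_state.network.outputs.sg_id
  vpc_id = var.vpc_id
}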

@danieldreier added the enhancement and import (Importing resources) labels on Apr 16, 2020
@wolfmd

wolfmd commented Jun 2, 2020

I'm seeing this issue as well: the false branch of the conditional looks for a resource that I don't create when the variable is true. Thanks for opening this with such great detail!

@mhuxtable

mhuxtable commented Oct 30, 2020

I've just hit this problem too, while trying to import a resource into a part of the state hierarchy completely separate from the location containing an optional module (implemented with a count of 0 or 1 from a ternary on the value of some variable). The existence of the optional module could not be determined, which caused the import operation to fail.
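Roughly the kind of setup I mean (names are hypothetical; a count on the module block itself needs Terraform 0.13+, but the same shape applies with counts on the resources inside the module):

variable "enable_optional" {
  type    = bool
  default = false
}

module "optional" {
  source = "./modules/optional"
  # The ternary on a plain variable decides whether the optional module's
  # resources exist at all.
  count  = var.enable_optional ? 1 : 0
}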

What's more, the import operation at the CLI can look like it has succeeded: I get a line printed in green implying success, followed by a few deprecation warnings for code we haven't yet migrated, and the actual error is easy to miss at the end of some quite verbose Terraform output.

I don't really understand why the entire state tree needs to be evaluated to import a single resource into another location in the state, when there are no dependencies between the branch of the tree into which I am importing and the branch where the optional resource resides. The optional resource was even toggled on in the workspace I was using, so the state already knew about it!

In the end, I worked around it by manually patching the HCL locally to bind the relevant interpolations statically, running the import, and then reverting the change. However, this behaviour is non-obvious and not ergonomic, and it requires a moderate level of understanding of Terraform to debug.
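For reference, the kind of temporary patch I mean looks roughly like this (the local name and ID are placeholders, not real values):

locals {
  # Temporarily bind the value Terraform could not resolve to a literal for
  # the duration of the import; the original expression is restored afterwards.
  security_group_id = "sg-0123456789abcdef0"
}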

In some cases, we absolutely depend on import workflows. Here, we had a cloud-based singleton resource that had to be side-loaded by hand because of the side effects of creating it through Terraform, so side-loading and import was the only option. I'm trying to motivate other engineers to self-serve their own Terraform, but in busy environments, problems like this are going to prevent such ambitions from succeeding!

@tomas0620

This comment was marked as off-topic.
