
Error: failed to read schema... for deleted resource #32218

Closed
kroussou opened this issue Nov 15, 2022 · 7 comments

Labels
bug · waiting for reproduction (unable to reproduce issue without further information) · waiting-response (An issue/pull request is waiting for a response from the community)

Comments

@kroussou

Terraform Version

Terraform v1.3.2
on darwin_amd64

Terraform Configuration Files

provider "elasticsearch" {
  url               = "https://localhost:9999"
}
terraform {
  required_providers {
    elasticsearch = {
      source  = "phillbaker/elasticsearch"
      version = "2.0.6"
    }
  }
  required_version = "~> 1.3.2"
}
resource "elasticsearch_kibana_object" "kibana_obj36" {
  body = <<EOF
  ***stripped***
EOF
}

Debug Output

...
2022-11-10T20:43:51.086Z [TRACE] (graphTransformerMulti) Executing graph transform *terraform.MissingProviderTransformer
2022-11-10T20:43:51.086Z [DEBUG] adding implicit provider configuration provider["registry.terraform.io/hashicorp/elasticsearch"], implied first by elasticsearch_kibana_object.kibana_obj36
2022-11-10T20:43:51.087Z [TRACE] (graphTransformerMulti) Completed graph transform *terraform.MissingProviderTransformer with new graph:
...

Expected Behavior

No error returned

Actual Behavior

Plan fails with error:

│ Error: failed to read schema for elasticsearch_kibana_object.kibana_obj36 in registry.terraform.io/hashicorp/elasticsearch: failed to instantiate provider "registry.terraform.io/hashicorp/elasticsearch" to obtain schema: unavailable provider "registry.terraform.io/hashicorp/elasticsearch"

Steps to Reproduce

  1. terraform init && terraform apply
  2. Remove kibana_obj36 resource from config
  3. Remove this resource manually using Kibana/Opensearch UI
  4. terraform plan # fails with above error

Additional Context

Workaround:
After running terraform apply -refresh-only, the error is no longer reported.
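
Roughly, the workaround looks like this (a sketch, assuming the config above):

  terraform apply -refresh-only   # drops the already-deleted resource from state
  terraform plan                  # no longer reports the schema error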

References

No response

@kroussou kroussou added the bug and new (new issue not yet triaged) labels on Nov 15, 2022
@jbardin
Member

jbardin commented Nov 15, 2022

Hi @kroussou,

Thanks for filing the issue. I'm not able to replicate the given error with the configuration shown, and the debug output is not complete and doesn't offer any more clues. Can you show the exact steps used to reach the error?

Are you by chance removing the required_providers and provider blocks from the configuration as well? The error indicates that something in the configuration or state is assuming an elasticsearch provider within the registry.terraform.io/hashicorp namespace, which is usually caused by a missing required_providers entry.

Thanks!

@jbardin jbardin added the waiting-response (An issue/pull request is waiting for a response from the community) and waiting for reproduction (unable to reproduce issue without further information) labels and removed the new (new issue not yet triaged) label on Nov 15, 2022
@AubreySLavigne

Ran into a similar issue with the opsgenie provider. Plans for local development completed without issue, but our CI pipeline gave the following error:

Error: failed to read schema for opsgenie_schedule_rotation.rotations in registry.terraform.io/hashicorp/opsgenie: failed to instantiate provider "registry.terraform.io/hashicorp/opsgenie" to obtain schema: unavailable provider "registry.terraform.io/hashicorp/opsgenie"

It's a kludgy solution, but the issue was resolved by removing the "deleted" resources from the terraform.tfstate file.
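
For anyone hitting the same thing, terraform state rm should achieve the equivalent without hand-editing the state file; a sketch, using the address from the error above:

  terraform state rm opsgenie_schedule_rotation.rotations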

@kroussou
Author

kroussou commented Dec 3, 2022

Hi @jbardin,

Thanks for looking into it!

Unfortunately I've accidentally destroyed my local test environment, so at the moment I can't post a full debug log or recheck whether I can still reproduce it. I had created the local test environment by reducing a much more complex config from our shared test environment, and I reproduced this error several times in the process, so I'm fairly sure about the reproduction steps.

Please take a look at the debug log from our shared test env; maybe it will help find the cause. The problematic resource here is module.kibana.elasticsearch_kibana_object.index_patterns["Xcfi-sandbox-test2Xnginx-accessX"]. The config being applied has no module.kibana.elasticsearch_kibana_object.index_patterns resources, so terraform is expected to destroy this and a dozen similar instances.

As for the state, I see nothing special about the problematic resource. Running terraform apply -refresh-only deletes it from the state and the error stops appearing:

➜ head terraform\ \(1\).json             # BAD
{
  "version": 4,
  "terraform_version": "1.3.2",
  "serial": 129,
  "lineage": "a18c34fa-4d73-d0f2-1cfd-ea7496d96ea0",
  "outputs": {},
  "resources": [
    {
      "mode": "data",
      "type": "azurerm_key_vault",
➜ head terraform\ \(2\).json             # OK (after refresh)
{
  "version": 4,
  "terraform_version": "1.3.2",
  "serial": 130,
  "lineage": "a18c34fa-4d73-d0f2-1cfd-ea7496d96ea0",
  "outputs": {},
  "resources": [
    {
      "mode": "data",
      "type": "azurerm_key_vault",
➜ diff <(terraform state list -state='terraform (1).json') <(terraform state list -state='terraform (2).json')
35d34
< module.kibana.elasticsearch_kibana_object.index_patterns["Xcfi-sandbox-test2Xnginx-accessX"]

The snippet below is from terraform (1).json. I see no difference between the problematic resource and a normal one in the state file:

    {
      "module": "module.kibana",
      "mode": "managed",
      "type": "elasticsearch_kibana_object",
      "name": "index_patterns",
      "provider": "provider[\"registry.terraform.io/phillbaker/elasticsearch\"]",
      "instances": [
...
        {
          "index_key": "Xcfi-sandbox-test2XkubernetesX",
          "schema_version": 0,
          "attributes": {
            "body": "[{\"_id\":\"index-pattern:126ade29-4267-df69-7210-fe67ce75fe75\",\"_source\":{\"index-pattern\":{\"timeFieldName\":\"@timestamp\",\"title\":\"*cfi-sandbox-test2*kubernetes*\"},\"migrationVersion\":{\"index-pattern\":\"7.6.0\"},\"references\":[],\"type\":\"index-pattern\"}}]",
            "id": "index-pattern:126ade29-4267-df69-7210-fe67ce75fe75",
            "index": ".kibana"
          },
          "sensitive_attributes": [],
          "private": "bnVsbA==",
          "dependencies": [
            "module.kibana.random_uuid.kibana_object_ids"
          ]
        },
        {
          "index_key": "Xcfi-sandbox-test2Xnginx-accessX",
          "schema_version": 0,
          "attributes": {
            "body": "[{\"_id\":\"index-pattern:410c1426-6211-c4cb-5983-0ae8c4579e9e\",\"_source\":{\"index-pattern\":{\"timeFieldName\":\"@timestamp\",\"title\":\"*cfi-sandbox-test2*nginx-access*\"},\"migrationVersion\":{\"index-pattern\":\"7.6.0\"},\"references\":[],\"type\":\"index-pattern\"}}]",
            "id": "index-pattern:410c1426-6211-c4cb-5983-0ae8c4579e9e",
            "index": ".kibana"
          },
          "sensitive_attributes": [],
          "private": "bnVsbA==",
          "dependencies": [
            "module.kibana.random_uuid.kibana_object_ids"
          ]
        },
...

My guess is that the issue is somehow related to how terraform handles state refresh during plan:

  • terraform identifies the orphan resource
  • terraform calls the provider and finds that the resource is also deleted on the target
  • terraform removes the resource from state
  • terraform marks the resource node 'NoOp' -> nothing to do.

IMHO terraform apply -refresh-only doesn't go beyond this point, so it succeeds and clears the error condition.
Then the weird things start:

  • terraform tries to access the state for this resource again and gets nothing (why? it was only just deleted from the state!)
  • terraform guesses (incorrectly) the provider for the resource
  • terraform calls the guessed provider and fails

@jbardin
Member

jbardin commented Dec 5, 2022

Hi @kroussou,

Thanks for the added info. The given log doesn't seem to have anything related to the error, but the idea of this being tripped up by a deleted orphan is quite probable. There shouldn't have been a NoOp change at all for that case, and it was coincidentally removed by #32246 in v1.3.6, so you may find the latest release already fixes the issue.
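
For example, one way to switch versions is with tfenv (a sketch):

  tfenv install 1.3.6
  tfenv use 1.3.6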

@kroussou
Author

kroussou commented Dec 6, 2022

Hi @jbardin,

You are right. This is similar to #32235.

With the following config:

terraform {
  required_providers {
    local = {
      source = "hashicorp/local"
      version = "2.2.3"
    }
  }
}
resource "local_file" "foo" {
    content  = "foo!"

    filename = "${path.module}/foo.bar"
}

running plan after removing the resource from both the config and the system gave the following error:

local_file.foo: Refreshing state... [id=4bf3e335199107182c6f7638efaad377acc7f452]

No changes. Your infrastructure matches the configuration.

Terraform has compared your real infrastructure against your configuration and found no differences, so no changes are needed.
╷
│ Error: Resource node has no configuration attached
│
│ The graph node for local_file.foo has no configuration attached to it. This suggests a bug in Terraform's apply graph builder; please
│ report it!
╵
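
The steps to reach that point were, roughly (a sketch; the resource block is the local_file.foo shown above):

  terraform init && terraform apply
  # remove the local_file.foo block from the config
  rm foo.bar          # remove the created file from the system as well
  terraform plan      # fails as shown above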

However, when I did the same with a renamed local build of the "hashicorp/local" provider, I got the same error as with the elasticsearch provider:

➜  cat test1.tf
terraform {
  required_providers {
    local1 = {
      source = "somecompany/local1"
      version = "2.2.3"
    }
  }
}
#resource "local1_file" "foo" {
#    content  = "foo!"
#
#    filename = "${path.module}/foo.bar"
#}

➜  terraform apply
local1_file.foo: Refreshing state... [id=4bf3e335199107182c6f7638efaad377acc7f452]

No changes. Your infrastructure matches the configuration.

Terraform has compared your real infrastructure against your configuration and found no differences, so no changes are needed.
╷
│ Error: failed to read provider configuration schema for registry.terraform.io/hashicorp/local1: failed to instantiate provider "registry.terraform.io/hashicorp/local1" to obtain schema: unavailable provider "registry.terraform.io/hashicorp/local1"
│
│
╵

Upgrading terraform to 1.3.6 fixes the issue:

➜  tfenv use 1.3.6
Switching default version to v1.3.6
Default version (when not overridden by .terraform-version or TFENV_TERRAFORM_VERSION) is now: 1.3.6
➜  terraform apply
local1_file.foo: Refreshing state... [id=4bf3e335199107182c6f7638efaad377acc7f452]

No changes. Your infrastructure matches the configuration.

Terraform has compared your real infrastructure against your configuration and found no differences, so no changes are needed.

Apply complete! Resources: 0 added, 0 changed, 0 destroyed.

@jbardin
Member

jbardin commented Dec 6, 2022

Thanks for confirming!

@jbardin jbardin closed this as completed Dec 6, 2022
@github-actions

github-actions bot commented Jan 6, 2023

I'm going to lock this issue because it has been closed for 30 days ⏳. This helps our maintainers find and focus on the active issues.
If you have found a problem that seems similar to this, please open a new issue and complete the issue template so we can capture all the details necessary to investigate further.

@github-actions github-actions bot locked as resolved and limited conversation to collaborators Jan 6, 2023