
yaml change causes destroy -> create instead of in place. #275

Open
yctn opened this issue Sep 12, 2023 · 1 comment
Comments


yctn commented Sep 12, 2023

Every time I change something in the YAML, Terraform tries to destroy the resource and recreate it; it never does an in-place update. This sometimes forces operators to do a full restart of clusters.

I suspect this is because the full YAML is in the Terraform state.
Is this expected?
Or am I simply doing something wrong here?
My Terraform code is as follows.

`data "kubectl_path_documents" "kafka-yaml" {
pattern = "./manifest/kafka/*.yaml.tpl"
}

resource "kubectl_manifest" "kafka-GRA7" {
provider = kubectl.GRA7
for_each = toset(data.kubectl_path_documents.kafka-yaml.documents)
yaml_body = each.value
wait = false
wait_for_rollout = false
}`


alekc commented Sep 25, 2023

As discussed in alekc/terraform-provider-kubectl#50, the proper way to do this is to use the `manifests` attribute rather than `documents`, i.e.:

data "kubectl_path_documents" "manifests-directory-yaml" {
  pattern = "./manifests/*.yaml"
}
resource "kubectl_manifest" "directory-yaml" {
  for_each  = data.kubectl_path_documents.manifests-directory-yaml.manifests
  yaml_body = each.value
}
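The reason this matters: `for_each` identifies each resource instance by its key. With `toset(...documents)` the key is the full YAML body itself, so any edit produces a new key and Terraform sees a destroy of the old instance plus a create of a new one. `manifests` instead exposes a map whose keys are derived from each manifest's identity, which stays stable when the body changes. A sketch of the two shapes (the example key format is illustrative, not the provider's exact scheme):

```hcl
# documents: a list of YAML strings. toset() makes the body the key,
# so editing a manifest changes its key -> destroy + create.
for_each = toset(data.kubectl_path_documents.kafka-yaml.documents)

# manifests: a map keyed by manifest identity (e.g. something like
# "apps/v1/Deployment/kafka"; exact format is illustrative). Editing
# the body leaves the key unchanged -> in-place update.
for_each = data.kubectl_path_documents.kafka-yaml.manifests
```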

Documentation has been updated on the fork.
