Deployment tolerations using keys matching k8s node taints are being ignored for the tfstate #2376

Open
Restless-ET opened this issue Dec 27, 2023 · 2 comments · May be fixed by #2380

@Restless-ET

Overriding the default tolerationSeconds value for the pods of any given Deployment is a perfectly valid need
and should be a scenario supported by this provider, IMO.

Because this isn't properly supported, we end up in a "perpetual diff" situation for a resource/object configuration that is completely acceptable within Kubernetes.

Terraform Version, Provider Version and Kubernetes Version

Terraform version: 1.4.6
Kubernetes provider version: 2.24.0
Kubernetes version: 1.26.8

Affected Resource(s)

  • kubernetes_deployment
  • kubernetes_deployment_v1

Steps to Reproduce

  1. Add a toleration inside spec.template.spec of your kubernetes_deployment resource, using as its key one of the well-known "node.kubernetes.io/*" node taints (e.g. "node.kubernetes.io/unreachable"); see the sketch after this list.
  2. terraform apply
  3. terraform plan (after the successful apply)
  2. terraform apply
  3. terraform plan (after the successful apply)
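
For reference, a minimal configuration along these lines looks roughly like the following (the resource name, image and toleration values here are purely illustrative, not my actual setup):

  resource "kubernetes_deployment_v1" "example" {
    metadata {
      name      = "example"
      namespace = "default"
    }

    spec {
      replicas = 1

      selector {
        match_labels = {
          app = "example"
        }
      }

      template {
        metadata {
          labels = {
            app = "example"
          }
        }

        spec {
          container {
            name  = "app"
            image = "nginx:1.25"
          }

          # Toleration keyed on a well-known node taint with a custom
          # tolerationSeconds; this is the block that keeps reappearing
          # in the plan after every apply.
          toleration {
            effect             = "NoExecute"
            key                = "node.kubernetes.io/unreachable"
            operator           = "Exists"
            toleration_seconds = "90"
          }
        }
      }
    }
  }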

Expected Behavior

What should have happened?

The terraform plan should not report any changes to be made after a successful apply.

Actual Behavior

What actually happened?

The terraform plan keeps showing the tolerations as needing to be added to the deployment:

Terraform will perform the following actions:

  # module.envoy-global.kubernetes_deployment_v1.envoy will be updated in-place
  ~ resource "kubernetes_deployment_v1" "envoy" {
        id               = "egress/envoy"
        # (1 unchanged attribute hidden)

      ~ spec {
            # (5 unchanged attributes hidden)

          ~ template {
              ~ spec {
                    # (12 unchanged attributes hidden)

                  + toleration {
                      + effect             = "NoExecute"
                      + key                = "node.kubernetes.io/not-ready"
                      + operator           = "Exists"
                      + toleration_seconds = "150"
                    }
                  + toleration {
                      + effect             = "NoExecute"
                      + key                = "node.kubernetes.io/unreachable"
                      + operator           = "Exists"
                      + toleration_seconds = "90"
                    }

                    # (5 unchanged blocks hidden)
                }

                # (1 unchanged block hidden)
            }

            # (2 unchanged blocks hidden)
        }

        # (1 unchanged block hidden)
    }

Important Factoids

A similar issue was raised in the past (1) and got a fix (2), but that fix only addressed node.kubernetes.io-prefixed toleration keys that are not part of the well-known node taints.

I believe the provider should properly manage any toleration key on the Deployment resource (I understand the Pod resource may need to be left as is).

Actually, a fair point about this was already made here: #955 (comment).

Also, (3) seems to be a fair attempt at sorting this, but it appears to have gone stale and was eventually (auto-)closed. I think it is worth reviving.

References

Community Note

  • Please vote on this issue by adding a 👍 reaction to the original issue to help the community and maintainers prioritize this request
  • If you are interested in working on this issue or have submitted a pull request, please leave a comment
@arybolovlev
Contributor

Hi @Restless-ET,

Thank you for raising this issue. I think it makes sense to strip well-known tolerations only when we flatten a Pod object spec; in all other cases, when it is a template, we could keep them. This is the same approach we use for labels and annotations. You are right, it looks like #1012 introduced a similar fix, but for whatever reason it didn't get much attention. I will raise this proposal during our next triage session and let you know about the decision here.

Happy New Year! 🎉

@arybolovlev arybolovlev linked a pull request Jan 2, 2024 that will close this issue
@Restless-ET
Author

Hello @arybolovlev and HNY 😄

Thank you for the feedback.
I can see there's already an open PR for the fix, so thanks again for the quick action.
