Gracefully handle deletion of first-class provider cluster #416
Comments
We can't automatically infer that the cluster has been deleted (vs. just being transiently unavailable), which I think probably means we need a way of signaling metadata about the referenced resource through the stack reference boundary. We'd probably want to design this to comport with @metral's work. @lblackstone @metral @lukehoban do you think we should consider this problem (though perhaps not the solution/direction I propose) for M22?
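As a rough illustration of the shape of that idea from the program side, here is a minimal sketch assuming (hypothetically) that the cluster stack exports identifying metadata alongside the kubeconfig, so the stack reference boundary carries enough information for a consumer to notice deletion; the `clusterUrn` output and its use as a deletion signal are not an existing Pulumi feature:

```typescript
// Cluster stack: export the kubeconfig plus metadata identifying the resource
// it came from. (Hypothetical sketch: Pulumi does not propagate this signal
// across StackReferences today.)
import * as eks from "@pulumi/eks";

const cluster = new eks.Cluster("cluster");

export const kubeconfig = cluster.kubeconfig;
// The URN uniquely identifies the cluster resource; comparing it across
// updates would let a consumer notice deletion or replacement.
export const clusterUrn = cluster.urn;
```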
Signaling through the stack ref about resource updates sounds like the way to go. I'm leaning towards M22 if it lines up with the rest of the slated priorities.
@pgavlin IIUC, the resource import work you're focusing on could apply here as another means of accessing resources from other Pulumi programs, is that correct? If so, what does this mean for programs that use StackReferences, since the dependent stack would still lack update information for the referenced resources? My gut tells me we'd still need a "signaling" mechanism in that case.
I don't think the resource import work will affect this case. That work is focused on allowing externally created resources to be adopted by Pulumi.
After encountering this issue a few more times, I think it might be better to mark these resources as deleted on the next update. While it's possible that this could cause erroneous results if the cluster is only temporarily unavailable, this should be noticeable during preview. Thoughts @lukehoban @pgavlin? |
I agree. I have some preliminary changes to enable this behavior; I'll push them to a branch and send them out as a draft PR. |
This is addressed by #2489 |
If a user provisions a k8s cluster in one stack and then uses it as a provider in a separate stack, an implicit dependency is created between the stacks. If the user destroys the cluster stack first, the destroy operation fails on the dependent stack because the provider can no longer reach the cluster.
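For concreteness, a minimal sketch of the two-stack setup that triggers this (the stack, org, and resource names are illustrative):

```typescript
// Stack "cluster": provision the Kubernetes cluster and export its kubeconfig.
import * as eks from "@pulumi/eks";

const cluster = new eks.Cluster("cluster");
export const kubeconfig = cluster.kubeconfig;
```

```typescript
// Stack "app": consume the kubeconfig via a StackReference and use it for a
// first-class provider. Nothing records that this stack depends on the
// cluster stack, so destroying the cluster stack first strands these resources.
import * as pulumi from "@pulumi/pulumi";
import * as k8s from "@pulumi/kubernetes";

const clusterStack = new pulumi.StackReference("myorg/cluster/dev");
const provider = new k8s.Provider("k8s", {
    kubeconfig: clusterStack.getOutput("kubeconfig"),
});

const appNamespace = new k8s.core.v1.Namespace("app-ns", {}, { provider });
```

Running `pulumi destroy` on the "cluster" stack first leaves the "app" stack's resources pointing at a provider whose cluster no longer exists.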
To avoid this situation, we should either detect the cross-stack dependency or detect that the cluster was deleted.
The current workaround for cleaning up a stack in this state is the following:
```sh
pulumi stack export > stack
# edit the exported file, removing the stranded resources from .deployment.resources
pulumi stack import --file stack
```
Related: #881 #491