If you want to implement this feature, comment to let us know (we'll work with you on design, scheduling, etc.)
Issue details
Certain resources behave "catastrophically" when deleted, which can occur inadvertently during a replace operation. When a namespace or a CRD is deleted, all of the resources that depend on it are deleted. This cascading delete can result in downtime or outages and leaves the Pulumi stack's state inconsistent with the cluster.
For example, suppose I have a project with these resources deployed via Pulumi:
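The original snippet is not preserved here; the following is a minimal sketch of such a project, assuming the TypeScript SDK and `@pulumi/kubernetes`, using the `foo`, `bar`, `Quux`, and `Thwomp` names from the scenario described below:

```typescript
import * as k8s from "@pulumi/kubernetes";

// A namespace whose replacement would cascade to everything inside it.
const ns = new k8s.core.v1.Namespace("foo", {
    metadata: { name: "foo" },
});

// A namespaced resource (foo/bar) that depends on the namespace.
const bar = new k8s.core.v1.ConfigMap("bar", {
    metadata: { name: "bar", namespace: ns.metadata.name },
    data: { hello: "world" },
});

// A CRD whose replacement would cascade to all of its custom resources.
const quuxCrd = new k8s.apiextensions.v1.CustomResourceDefinition("quux", {
    metadata: { name: "quuxes.example.com" },
    spec: {
        group: "example.com",
        scope: "Namespaced",
        names: { plural: "quuxes", singular: "quux", kind: "Quux" },
        versions: [{
            name: "v1",
            served: true,
            storage: true,
            schema: { openAPIV3Schema: { type: "object" } },
        }],
    },
});

// A custom resource defined by the Quux CRD.
const thwomp = new k8s.apiextensions.CustomResource("thwomp", {
    apiVersion: "example.com/v1",
    kind: "Quux",
    metadata: { name: "thwomp", namespace: ns.metadata.name },
}, { dependsOn: quuxCrd });
```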
If the `foo` namespace is replaced, the `foo/bar` resource is destroyed. Likewise, if the `Quux` CRD is replaced, the `Thwomp` resource is destroyed. This won't be detected until a `pulumi refresh` occurs, and the deployment will likely be inconsistent, resulting in one of three outcomes:
- the Pulumi program does not modify any dependent resources, so they are deleted and never recreated
- the Pulumi engine attempts to update the resources after the Kubernetes API server has deleted them, resulting in an error
- the Pulumi engine races the API server, creating or updating dependent resources before the Kubernetes API server deletes them
In all of these situations, I would like to mark the respective resources with resource options like `protect` or `retainOnDelete` to ensure that a cascading deletion cannot occur out of band from the Pulumi engine.
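For an individual resource, these options can already be set today; a minimal sketch, assuming the TypeScript SDK:

```typescript
import * as k8s from "@pulumi/kubernetes";

// protect: the engine refuses to delete the resource (including
// delete-before-replace) until the option is removed.
// retainOnDelete: on delete, the resource is dropped from Pulumi state
// but left running in the cluster.
const ns = new k8s.core.v1.Namespace("foo", {
    metadata: { name: "foo" },
}, {
    protect: true,
    retainOnDelete: true,
});
```

The feature request is for component resources, which create their children internally, to support propagating these options in the same way.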
Affected area/feature
Component resources like:
- `helm/v2.Chart`
- `yaml.*`
- `kustomize.*`
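Until these components support the options directly, one possible stopgap (a sketch, assuming the TypeScript SDK; the chart name and repo are placeholders) is the `transformations` hook these components already expose, which can set resource options on every child they generate:

```typescript
import * as pulumi from "@pulumi/pulumi";
import * as k8s from "@pulumi/kubernetes";

// Hypothetical chart; the transformation marks any Namespace or CRD it
// renders so a cascading delete cannot happen out of band.
const chart = new k8s.helm.v2.Chart("my-chart", {
    chart: "nginx",
    fetchOpts: { repo: "https://charts.bitnami.com/bitnami" },
    transformations: [
        (obj: any, opts: pulumi.CustomResourceOptions) => {
            if (obj.kind === "Namespace" || obj.kind === "CustomResourceDefinition") {
                opts.protect = true;
                opts.retainOnDelete = true;
            }
        },
    ],
});
```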