pulumi refresh targeting a yaml.ConfigFile or yaml.ConfigGroup should refresh all generated resources #1642
Comments
It isn't just `refresh` that this should work for, IMO. I just ran into this while trying to update a Helm chart, which is just one part of a particular project, and unfortunately nothing changed because the …
Does |
I will have to try that out; I was not aware of that option. I think it may be better to enable it by default, or at least to prompt the user to confirm that's what they want, since any resource with children, like a Helm chart, will likely only want to be updated in this manner.
It does not look like …

Note that we are also exploring, in several other issues, giving components a more fully-defined meaning in terms of things like "depending on a resource always depends on its children". A corollary to formalizing that fully would be to say that …
This seems like a major issue and one that shouldn't be that difficult to resolve.
This fails with: …
How do I update the …
An update on the summarized issues:
Feature Request: pulumi/pulumi#16033
Fixed: pulumi/pulumi#11164
The Kubernetes provider now has an upsert mode and "patch" resources to cover these scenarios; see the blog post. @kjenney I would advise you to use the …

Since the Kubernetes specifics have been addressed, I am closing this issue.
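As a sketch of what the patch-resource approach mentioned above looks like (assuming `@pulumi/kubernetes` v3.x with Server-Side Apply enabled; the Deployment name and field values are illustrative, not from this thread):

```typescript
import * as k8s from "@pulumi/kubernetes";

// Manage only selected fields of a Deployment that Pulumi did not create,
// instead of trying to create or replace the whole object.
const scale = new k8s.apps.v1.DeploymentPatch("scale-existing", {
    metadata: {
        name: "my-app",       // an existing Deployment (illustrative)
        namespace: "default",
    },
    spec: {
        replicas: 3,          // the only field this patch manages
    },
});
```

Deleting the patch resource gives up management of those fields rather than deleting the underlying Deployment, which is what makes it suitable for clusters that are also touched by other tools.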
Hi, Pulumi team! First of all, thank you for building this awesome product!
I just wanted to talk a bit about what we're trying to achieve with Pulumi, and a few suggestions that could make life easier for us and, potentially, many other people.
We maintain a quite complex infrastructure that involves spinning up multiple Kubernetes clusters on all three major cloud platforms in all available regions, and then deploying and updating our own app on them. IaC sounds like an absolute necessity for us, and we had already been using Terraform for some time. With Terraform, however, we found the modularization approach quite inflexible, and we ran into HCL limitations all too often. So, we decided to give Pulumi a try! I've been playing with it extensively for almost a month now, and I was able to implement all the cloud parts of our infrastructure responsible for spinning up different kinds of k8s clusters. I have to say, I really liked the flexibility that TypeScript gave me compared to HCL. Now, the one part remaining is to figure out how to deploy our k8s stacks and maintain them long-term. The way it's done right now is as follows:
We `kubectl apply -f` that file via a `.tf` file, where we pass the entire content to this plugin: https://github.com/gavinbunney/terraform-provider-kubectl. We commit the `.tf` files to our infra repo, from which Terraform Cloud picks up and applies updates periodically.

Now, that plugin provides a simple resource called `kubectl_manifest`, which acts exactly like `kubectl apply -f` and additionally stores the manifest itself in the Terraform state, so it knows when the yaml has changed and needs to be re-applied. This may not be the cleanest way to move the configuration around, as it doesn't take care of cleaning things up, for example, but the key part is that it's quite forgiving of state drift. E.g., if some resource already exists, the apply won't choke on it, but will simply skip it and move forward with the rest of the diff. This is really important for us, as there are clusters that we have to touch manually quite a lot, and it's unrealistic to expect that everything will be under programmatic control.
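For comparison, the closest Pulumi analogue to feeding raw manifest content to the provider is `k8s.yaml.ConfigGroup` with inline `yaml`; a minimal sketch (the manifest and names are illustrative, and, unlike `kubectl_manifest`, this still fails if the object already exists, which is exactly the problem described below):

```typescript
import * as k8s from "@pulumi/kubernetes";

// Apply a raw manifest string; Pulumi stores the parsed objects in its
// state and re-applies when the yaml changes, much like kubectl_manifest.
const group = new k8s.yaml.ConfigGroup("app-manifests", {
    yaml: [
        `apiVersion: v1
kind: Namespace
metadata:
  name: demo
`,
    ],
});
```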
So, this is the behavior I was trying to replicate in Pulumi, or to find an alternative approach to, and I have failed so far. I was looking at `k8s.yaml.ConfigFile` and `k8s.yaml.ConfigGroup` as the closest alternatives to `kubectl_manifest`, but they behave quite differently. Here's where I struggled:

Right after the k8s cluster creation I pre-populate it with some `Secret`s which have to go into their own namespaces. The same namespaces are defined in that big yaml with the app objects (though the secrets aren't there). `k8s.yaml.ConfigFile` parses the namespace definition from there, but it already exists in the cluster, so: failure 🤷. I tried using the `transformations` option to add the `import` property to the parsed `Namespace` object and had limited success with it: the creation succeeds, but if I delete the `k8s.yaml.ConfigFile` resource, it'll also delete the Namespace, which will still exist in the state of the Namespace resource created with the cluster.

We have a few clusters where the yaml is already applied; we can't afford to recreate them, and we need to move them under Pulumi without destroying anything. I further experimented with the `import` option on a throwaway cluster but had no success at all. For example, I tried adding `import: <namespace>/<name>` to every parsed object and then running `pulumi up`. This led to some weird things: it said it had imported state for all the objects, but then went ahead and deleted them from the k8s cluster 🤷. I ended up with a completely broken Pulumi state and had to manually clean it up and start over.

Speaking of cleaning up: for some reason, `pulumi state delete <urn>` only allows me to delete an object if it has no children. In the case of a `k8s.yaml.ConfigFile` with ~100 parsed objects this causes an obvious inconvenience. I understand that deleting children recursively may be dangerous, but at least an option for it would be really nice to have, kind of like `rm -rf`. Terraform, for example, doesn't bother at all and deletes the entire tree.

Finally, I learned about `pulumi refresh --target <..>` and hoped that I could simply point it at my `k8s.yaml.ConfigFile` object and it would read the rest of the config from the actual Kubernetes state. Unfortunately, in the case of `ConfigFile` it just does nothing :/

I really love the code aspect of Pulumi and would love to make it our DevOps solution long-term. However, I'd need to figure out how to bring our k8s business under Pulumi control first. I came here in the hope that someone would point me in the direction of the best practice for my problem. I'll also leave a couple of feature suggestions in the section below. Thanks for any help!
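For concreteness, the `transformations`-based import workaround described above can be sketched as a plain callback. The function below is illustrative; it assumes the `(obj, opts)` shape that `ConfigFile` transformations receive, and that cluster-scoped objects like `Namespace` are imported by name alone:

```typescript
// Adopt pre-existing Namespaces instead of failing to create them:
// set the `import` resource option on every parsed Namespace object.
function importExistingNamespaces(obj: any, opts: any): void {
    if (obj && obj.kind === "Namespace" && obj.metadata) {
        opts.import = obj.metadata.name; // cluster-scoped: import by name
    }
}

// Usage inside a Pulumi program (illustrative):
// new k8s.yaml.ConfigFile("app", {
//     file: "app.yaml",
//     transformations: [importExistingNamespaces],
// });
```

As noted above, this only helps on creation; deleting the `ConfigFile` will still delete the adopted Namespace unless an option like `retainOnDelete` is also set on it in the same transformation.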
Affected feature
- `pulumi refresh --target <k8s.yaml.ConfigFile resource>` should refresh the state of all generated resources.
- `pulumi state delete <k8s.yaml.ConfigFile resource>` should delete the config with all of its children.
- Something like `importIfExists: true` could work for getting `k8s.yaml.ConfigFile` to behave exactly like `kubectl apply -f` does.

I know you have a resource for Kustomize folders, and I looked at it as one of the potential solutions, but I ruled it out as too cumbersome. Creating unnecessary folders and kustomize files when they're not needed isn't pretty...