
pulumi refresh targeting a yaml.ConfigFile or yaml.ConfigGroup should refresh all generated resources #1642

Closed
bsod90 opened this issue Jul 1, 2021 · 6 comments
Assignees
Labels
area/yaml kind/enhancement Improvements or new features mro1 Monica's list of 1st tier overlay related issues resolution/fixed This issue was fixed

Comments


bsod90 commented Jul 1, 2021

Hi, Pulumi team! First of all, thank you for building this awesome product!
I just wanted to talk a bit about what we're trying to achieve with Pulumi and share a few suggestions that could make life easier for us and, potentially, many other people.

We maintain a quite complex infrastructure that involves spinning up multiple Kubernetes clusters on all three major cloud platforms in all available regions, and then deploying and updating our own app on them. IaC sounds like an absolute necessity for us, and we had already been using Terraform for some time. With Terraform, however, we found the modularization approach quite inflexible, and we ran into HCL limitations far too often. So, we decided to give Pulumi a try! I've been playing with it extensively for almost a month now, and I was able to implement all the cloud parts of our infrastructure responsible for spinning up different kinds of k8s clusters. I have to say, I really liked the flexibility that TypeScript gave me compared to HCL. Now, the one part remaining for me is figuring out how to deploy our k8s stacks and maintain them long-term. The way it's done right now is the following:

  • We have a piece of code that, given the region name and a bunch of other parameters, spits out a giant yaml file with a bit under 100 objects
  • We then either manually kubectl apply -f that file
  • Or wrap it into a .tf file where we pass the entire content to this plugin: https://github.com/gavinbunney/terraform-provider-kubectl
  • We then commit these .tf files to our infra repo, from which Terraform Cloud picks up and applies updates periodically.
  • The repo approach also gives us visibility into all the changes that were applied to our k8s infra, and that facilitates debugging.

Now, that plugin I mentioned provides a simple resource called kubectl_manifest, which acts exactly like kubectl apply -f and also stores the manifest itself in the Terraform state, so it knows when the yaml has changed and needs to be re-applied.
This may not be the cleanest way to manage the configuration, as it doesn't take care of cleaning things up, for example, but the key part is that it's quite forgiving of state drift. E.g. if some resource already exists, the apply won't choke on it, but will simply skip it and move on with the rest of the diff. This is really important for us, as there are clusters that we have to touch manually quite a lot, and it's unrealistic for us to expect that everything will be under programmatic control.

So, this is the behavior I've been trying to replicate in Pulumi, or to find an alternative approach to, and I've failed so far. I was looking at k8s.yaml.ConfigFile and k8s.yaml.ConfigGroup as the closest alternatives to kubectl_manifest, but they behave quite differently. Here's where I struggled:

  • Right after the k8s cluster creation I pre-populate it with some Secrets which have to go into their own namespaces. The same namespaces are defined in that big yaml file with the app objects (though the secrets aren't there). k8s.yaml.ConfigFile parses the namespace definition from there, but the namespace already exists in the cluster, so: failure 🤷 . I tried using the transformations option to add the import property to the parsed Namespace object (see the sketch after this list) and had limited success with it: the creation succeeds, but if I delete the k8s.yaml.ConfigFile resource, it will also delete the Namespace, which will still exist in the state of the Namespace resource created with the cluster.

  • We have a few clusters where the yaml is already applied; we can't afford to recreate them, and we need to move them under Pulumi without destroying anything. I further experimented with the import option on a throwaway cluster but had no success at all. For example, I tried adding import: <namespace>/<name> to every parsed object and then running pulumi up. This led to some weird things: it said it had imported state for all the objects, but then went ahead and deleted them from the k8s cluster 🤷 . I ended up with a completely broken Pulumi state and had to manually clean it up and start over again.

  • Speaking of cleaning up: for some reason, pulumi state delete <urn> only allows me to delete an object if it has no children. This, in the case of a k8s.yaml.ConfigFile with ~100 parsed objects, is an obvious inconvenience. I understand that deleting children recursively may be dangerous, but at least an option for that would be really nice to have. Kind of like rm -rf. Terraform, for example, doesn't bother at all and deletes the entire tree.

  • Finally, I learned about pulumi refresh --target <..> and had hoped that I could simply point it at my k8s.yaml.ConfigFile object and it would read the rest of the config from the actual Kubernetes state. Unfortunately, in the case of ConfigFile it just does nothing :/
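
For concreteness, here's a minimal Python sketch of the transformations + import approach from the first bullet; the file name app.yaml and the namespace name my-namespace are hypothetical, and the import ID for a cluster-scoped Namespace is just its name:

    from pulumi_kubernetes.yaml import ConfigFile

    def adopt_existing_namespace(obj, opts):
        # Mark the pre-existing Namespace for import instead of creation;
        # for a cluster-scoped Namespace the import ID is just its name.
        if obj["kind"] == "Namespace" and obj["metadata"]["name"] == "my-namespace":
            opts.import_ = "my-namespace"

    app = ConfigFile(
        "app",
        file="app.yaml",
        transformations=[adopt_existing_namespace],
    )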

I really love the code aspect of Pulumi and would love to make it our DevOps solution long-term. However, I'd need to figure out how to bring our k8s business under Pulumi control first. I came here in the hope that someone would point me in the direction of the best practice for my problem. I'll also leave a couple of feature suggestions in the section below. Thanks for any help!

Affected feature

  • pulumi refresh --target <k8s.yaml.ConfigFile resource> should refresh state of all generated resources
  • pulumi state delete <k8s.yaml.ConfigFile resource> should delete the config with all of its children
  • There should be a way to tell Pulumi to not fail if the object already exists in the cluster. Something like importIfExists: true could work.
  • Ideally, there should be a way to tell k8s.yaml.ConfigFile to behave exactly like kubectl apply -f does. I know you have a resource for Kustomize folders, and I looked at it as one of the potential solutions, but I ruled it out as too cumbersome. Creating unnecessary folders and kustomization files when they're not needed isn't pretty... (see the ConfigGroup sketch below)
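
A quick Python sketch of the ConfigGroup path mentioned in the original post, which at least avoids the Kustomize folder by pointing directly at raw manifest files (the manifests/*.yaml glob is a hypothetical path); note that it still doesn't provide the forgiving kubectl apply -f semantics asked for here:

    from pulumi_kubernetes.yaml import ConfigGroup

    # Apply every manifest matched by the glob, without a kustomization.yaml.
    apps = ConfigGroup(
        "apps",
        files=["manifests/*.yaml"],
    )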
@bsod90 bsod90 added the kind/enhancement Improvements or new features label Jul 1, 2021
@lukehoban lukehoban added the needs-triage Needs attention from the triage team label Jul 1, 2021
@leezen leezen removed the needs-triage Needs attention from the triage team label Jul 2, 2021

nesl247 commented Aug 26, 2021

It isn't just refresh that this should work for, IMO. I just ran into this while trying to update a Helm chart, which is just one part of a particular project, and unfortunately nothing changed because the pulumi up --target $urn does not apply to the children, only the literal urn that is being targeted.

lukehoban (Member) commented

> nothing changed because the pulumi up --target $urn does not apply to the children, only the literal urn that is being targeted.

Does --target-dependents address this part of the issue for you?

https://www.pulumi.com/docs/reference/cli/pulumi_up/


nesl247 commented Aug 26, 2021

I will have to try that out; I was not aware of that option. I think it may be better to default that on, or maybe give the user a message asking if that's what they want, as any resource with children, like a Helm chart, will likely only want to be updated in this manner.

lukehoban (Member) commented

> Finally, I learned about pulumi refresh --target <..> and had hoped that I could simply point it at my k8s.yaml.ConfigFile object and it would read the rest of the config from the actual Kubernetes state. Unfortunately, in the case of ConfigFile it just does nothing :/

It does not look like pulumi refresh currently supports a --target-dependents flag. But it sounds like that is possibly part of this.

Note that we are also, in several other issues, exploring giving components a more fully-defined meaning in terms of things like "depending on a resource always depends on its children". A corollary to formalizing that fully would be to say that --target <component> implicitly targets children. This parent/child relationship is different from the dependency relationship, and is likely more clearly something that should be targeted by default when targeting a component.


kjenney commented Feb 27, 2022

This seems like a major issue and one that shouldn't be that difficult to resolve.
I'm using GitOps for all of the resources in my cluster. For example:
kubectl apply -f namespaces.yaml updates ALL of the existing namespaces in the cluster with proper labels and annotations. If I run

    from pulumi_kubernetes.yaml import ConfigFile

    namespaces_deploy = ConfigFile(
        "namespaces",
        file="namespaces.yaml",
    )

this fails with:

error: update failed

  kubernetes:core/v1:Namespace (default):
    error: resource default was not successfully created by the Kubernetes API server : namespaces "default" already exists

How do I update the default namespace, since it's there in EVERY Kubernetes cluster by default?

EronWright (Contributor) commented

An update on the summarized issues:

> pulumi refresh --target <k8s.yaml.ConfigFile resource> should refresh state of all generated resources

Feature Request: pulumi/pulumi#16033

> pulumi state delete <k8s.yaml.ConfigFile resource> should delete the config with all of its children

Fixed: pulumi/pulumi#11164

> There should be a way to tell Pulumi to not fail if the object already exists in the cluster. Something like importIfExists: true could work.

The Kubernetes provider now has an upsert mode and "patch" resources to cover these scenarios; see the blog post.

@kjenney I would advise you to use the NamespacePatch resource because you wouldn't want Pulumi to delete the default namespace, and patch resources don't assume they 'own' the object.
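
A minimal Python sketch of that NamespacePatch suggestion, assuming the provider's Server-Side Apply support referenced above is enabled; the label key and value are hypothetical, and, as noted, destroying the patch does not delete the default namespace itself:

    import pulumi_kubernetes as k8s

    # Patch only the fields we care about on the pre-existing "default" namespace.
    default_labels = k8s.core.v1.NamespacePatch(
        "default-namespace-labels",
        metadata={
            "name": "default",
            "labels": {"team": "platform"},  # hypothetical label
        },
    )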

Since the Kubernetes specifics have been addressed, I am closing this issue.
