
Add OpenStack provider #540

Merged
merged 21 commits into master from add-openstack on Dec 6, 2021

Conversation

@tfussell (Contributor) commented Nov 2, 2021

For https://github.com/giantswarm/giantswarm/issues/19400

Adds support for templating OpenStack clusters.

Adds the following OpenStack-specific flags:

  • --cloud
  • --cloud-config
  • --dns-nameservers
  • --external-network-id
  • --failure-domain
  • --image-name
  • --node-machine-flavor
  • --node-cidr
  • --root-volume-disk-size
  • --root-volume-source-type
  • --root-volume-source-uuid

Some of these flags will be dropped in favor of defaulting. Others will become flags common to all providers. For now we need the configurability for testing OpenStack alpha releases.

@tfussell tfussell self-assigned this Nov 2, 2021
Review threads (outdated, resolved):

  • cmd/get/capi/runner.go (×3)
  • cmd/template/cluster/provider/capo.go
@tfussell tfussell marked this pull request as ready for review November 23, 2021 21:05
@tfussell tfussell requested review from a team November 23, 2021 22:06
@ericgraf left a comment:
LGTM!

@marians (Member) commented Nov 24, 2021

kubectl gs template cluster --help is already pretty messy because of the large number of options, and I am worried that adding another 12 will render it useless. That's a big side effect for all users, while the flags to be added will only serve a few.

What could we do?

Can we make the OpenStack-related flags hidden, as they are meant to be used by a limited audience only (and may be removed in the future anyway)? This way we wouldn't have negative side effects for people using AWS or Azure only. But that would make the flags harder to use for those few who actually need them.

Can we structure the help output in a way that OpenStack-related flags form their own group? That would actually be a big improvement for every user.

@tfussell (Contributor, Author)

@marians I don't really like the idea of hiding them because I want OpenStack customers to be supported just like AWS and Azure customers.

One option would be to add a subcommand under opsctl template cluster for each provider. Then, opsctl template cluster openstack could only show the flags for that provider.

Another option would be to parse the --provider flag immediately and then initialize the other flags depending on its value. opsctl template cluster --provider openstack --help would show all flags, including the OpenStack-only ones, while opsctl template cluster --help would only show common flags. This isn't very intuitive, though.

We could also write our own help message instead of using the automatically generated message.

One last option is to keep it like this until cobra merges the PR adding flag grouping support. It sounds like it will be soon. I actually saw your comment there :)

What do you think @giantswarm/sig-ux?

@marians (Member) commented Nov 30, 2021

For reference, here is the upstream issue: spf13/cobra#1327

I can't find a PR to close that issue. The linked PR 1003 implements grouping of commands, not flags.

I would suggest overriding the flags help rendering then with our own "hard-coded" string.

@@ -27,7 +27,20 @@ type ClusterCRsConfig struct {
 	EKS                bool
 	ExternalSNAT       bool
 	ControlPlaneSubnet string
 	PodsCIDR           string
+
+	// OpenStack only.
Review comment (Contributor):

Are all of these options really unique per workload cluster?
Wouldn't it make way more sense to default a bunch of these (e.g. SSHKeyName, ImageName, ...) in the webhook as they are tied to the management cluster?

Additionally we know that customers want to use gitops to manage these clusters, supplying them with good overlays seems more sensible to me.

@tfussell (Contributor, Author) replied:

I dropped SSH key name because we don't want to allow it. Image name will be decided by us (we're managing images), but is dependent on Kubernetes version and may be useful to override. Source UUID is tricky because it depends on the OpenStack project the cluster is deployed into and that is therefore cluster-dependent.

For the alpha (or "pre-alpha") we just want the ability to specify all of the options. The flags won't show up in the automatic CLI help so I don't think it will be a problem in terms of backwards compatibility. Let me know if you disagree @MarcelMue.

@marians (Member) commented Nov 30, 2021

Regarding the general UX concerns, we just discussed that in SIG UX. That should not block this PR in any way. I created this issue to tackle the necessary improvements independently:

@tfussell (Contributor, Author) commented Dec 6, 2021

> I would suggest overriding the flags help rendering then with our own "hard-coded" string.

After a team discussion, we decided to simply hide all of the provider-specific flags until we have a better solution.

@MarcelMue (Contributor) commented:
^ sorry for the wrong commit - got mixed up with my CLI.

@tfussell tfussell merged commit cb70e66 into master Dec 6, 2021
@tfussell tfussell deleted the add-openstack branch December 6, 2021 16:46