Convert all component command-line flags to versioned configuration #12245

Open
bgrant0607 opened this issue Aug 5, 2015 · 29 comments
Labels
  • lifecycle/frozen: Indicates that an issue or PR should not be auto-closed due to staleness.
  • priority/important-longterm: Important over the long term, but may not be staffed and/or may need multiple releases to complete.
  • sig/architecture: Categorizes an issue or PR as relevant to SIG Architecture.

Comments

@bgrant0607 (Member)

Forked from #1627, #5472, #246, #12201, and other issues.

We currently configure Kubernetes components like the apiserver and controller-manager with command-line flags. Every time we change these flags, we break everyone's turn-up automation, at least for anything not in our repo: hosted products, other repos, blog posts, ...

On the other hand, the scheduler and kubectl read configuration files. We should treat all configuration similarly: as versioned APIs, with backward compatibility, deprecation policies, documentation, etc. How we distribute the configuration is a separate issue -- #1627.
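For illustration, here is a minimal sketch of what such a versioned configuration file looks like; the kubelet.config.k8s.io/v1beta1 group shown is the one the kubelet config eventually adopted (see the version list further down this thread), and the field values are hypothetical:

```yaml
# Hypothetical kubelet configuration file, versioned like any other API object.
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
address: 0.0.0.0
port: 10250
clusterDomain: cluster.local
clusterDNS:
  - 10.96.0.10
```

Because the file declares its own apiVersion and kind, it can be defaulted, validated, versioned, and deprecated like any other API object; the kubelet reads such a file via its --config flag.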

cc @davidopp @thockin @dchen1107 @lavalamp

@bgrant0607 added priority/backlog, area/install, and sig/cluster-lifecycle labels Aug 5, 2015
@bgrant0607 added priority/important-soon and removed priority/backlog labels Aug 18, 2015
@bgrant0607 added this to the v1.1 milestone Aug 18, 2015
@mikedanese self-assigned this Aug 18, 2015
@stefwalter (Contributor)

@mikedanese Yesterday I started converting KubeletServer (which is poorly named) to use a JSON settings file. I'd be happy to follow the lead of your work on the scheduler config, as it covers a couple of things I missed.

@mikedanese (Member)

I haven't done much with this yet. I've begun adding a component config API group in this commit:

a8b00ed

Is this similar to the direction you are taking? Let's discuss tomorrow how we want to redistribute the configuration work.
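For readers following along, here is a hedged sketch of what registering a component config API group typically involves with the apimachinery scheme builder; the group name, version, and KubeProxyConfiguration type below are illustrative placeholders, not the contents of the commit above:

```go
// Hypothetical sketch: register a versioned component config type in a scheme
// so it can be decoded by apiVersion/kind. Placeholder names throughout.
package componentconfig

import (
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/runtime"
	"k8s.io/apimachinery/pkg/runtime/schema"
)

// SchemeGroupVersion identifies the config API group and version.
var SchemeGroupVersion = schema.GroupVersion{Group: "componentconfig", Version: "v1alpha1"}

// KubeProxyConfiguration is a placeholder versioned config object.
type KubeProxyConfiguration struct {
	metav1.TypeMeta `json:",inline"`
	BindAddress     string `json:"bindAddress"`
}

// DeepCopyObject satisfies runtime.Object (normally generated by deepcopy-gen).
func (c *KubeProxyConfiguration) DeepCopyObject() runtime.Object {
	out := *c
	return &out
}

// AddToScheme registers the config types with a runtime.Scheme.
func AddToScheme(scheme *runtime.Scheme) error {
	scheme.AddKnownTypes(SchemeGroupVersion, &KubeProxyConfiguration{})
	metav1.AddToGroupVersion(scheme, SchemeGroupVersion)
	return nil
}
```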

@pmorie (Member) commented Aug 20, 2015

@mikedanese @stefwalter Can you confirm that you intend to combine this with the mechanism described in #6477?

@mikedanese (Member)

@pmorie Yes. It is related but separate, and the two can be done in parallel. The combination should close #1627.

@mikedanese (Member)

@stefwalter can you please point me to the branch where you are working on this?

@bgrant0607 (Member Author)

All new configuration formats we create should go into new API groups: #12951

@stefwalter (Contributor)

@mikedanese I stopped working on it once I saw your work. I did about half an hour ... before posting what I was doing ... https://github.com/stefwalter/kubernetes/tree/kubelet-settings

I'd be happy to help out with the tedious bits or wherever necessary.

@bgrant0607 (Member Author)

Some additional motivation:

The current way in which we configure components via command-line flags is unnecessarily convoluted and hard to standardize, since some deployments run the components using containers, some use init, some use systemd, some use supervisord, etc.

Life of configuring a flag on GCE (if any of this is wrong, that is just another indication of how convoluted it is; the process is different for every cloud platform):

  1. Set an environment variable (e.g., ADMISSION_CONTROL) in a config-default.sh script that is invoked by some script invoked indirectly by kube-up.sh: https://github.com/kubernetes/kubernetes/blob/master/cluster/gce/config-default.sh
  2. Which is semi-manually translated to a YAML file and stuffed into VM metadata: https://github.com/kubernetes/kubernetes/blob/master/cluster/gce/debian/helper.sh
  3. Which is used to generate a Salt pillar on the master node: https://github.com/kubernetes/kubernetes/blob/master/cluster/gce/configure-vm.sh
  4. Which is used to template-substitute the apiserver's pod config: https://github.com/kubernetes/kubernetes/blob/master/cluster/saltbase/salt/kube-apiserver/kube-apiserver.manifest

@davidopp (Member) commented Oct 6, 2015

Not for 1.1

@bgrant0607 added priority/important-longterm and removed priority/backlog labels Mar 2, 2017
@kfox1111

I'm not sure whether this is relevant to the conversation, but if DaemonSets are enhanced to support the linked feature, along with DaemonSet rolling-upgrade support, this may be useful:
#43112

@roberthbailey (Contributor)

@mikedanese is going to be driving the component configuration standardization for 1.7.

@fejta-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

Prevent issues from auto-closing with an /lifecycle frozen comment.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or @fejta.
/lifecycle stale

@k8s-ci-robot added the lifecycle/stale label Dec 23, 2017
@mikedanese added lifecycle/frozen and removed lifecycle/stale labels Jan 9, 2018
@bgrant0607 (Member Author)

/remove-lifecycle stale
/lifecycle frozen

@mikedanese removed their assignment Jan 25, 2018
@mtaufen (Contributor) commented Mar 28, 2018

Guide on how to implement componentconfig: https://docs.google.com/document/d/1FdaEJUEh091qf5B98HM6_8MS764iXrxxigNIdwHYW9c/
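For a rough idea of the pattern that guide describes, here is a minimal, hypothetical sketch of the load-decode-validate flow for a versioned config file; the ExampleConfiguration type and example.config.k8s.io group are made up, and real components use apimachinery codecs rather than plain YAML decoding:

```go
// Hypothetical sketch of loading a versioned component config file:
// read YAML, decode into a typed struct, and check the declared version.
package main

import (
	"fmt"
	"os"

	"sigs.k8s.io/yaml"
)

// ExampleConfiguration is a made-up versioned config type.
type ExampleConfiguration struct {
	APIVersion  string `json:"apiVersion"`
	Kind        string `json:"kind"`
	BindAddress string `json:"bindAddress"`
}

func loadConfig(path string) (*ExampleConfiguration, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return nil, err
	}
	cfg := &ExampleConfiguration{}
	if err := yaml.Unmarshal(data, cfg); err != nil {
		return nil, err
	}
	// Reject unknown versions instead of silently misreading an old file.
	if cfg.APIVersion != "example.config.k8s.io/v1alpha1" || cfg.Kind != "ExampleConfiguration" {
		return nil, fmt.Errorf("unsupported config %s/%s", cfg.APIVersion, cfg.Kind)
	}
	return cfg, nil
}

func main() {
	cfg, err := loadConfig(os.Args[1])
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Printf("bindAddress: %s\n", cfg.BindAddress)
}
```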

@kfox1111

doc is private

@mtaufen (Contributor) commented Mar 29, 2018 via email

@neolit123 (Member)

This topic has multiple stakeholding SIGs rather than cluster-lifecycle as the sole owner, so I'm tagging sig-arch as the umbrella.

/sig architecture
/remove-sig cluster-lifecycle

The design idea is already in place, and the component owners are graduating their configs to v1:

  • kube-scheduler - v1
  • kube-controller-manager - v1alpha1
  • kube-apiserver - none yet
  • kube-proxy - v1alpha1
  • kubeadm - v1beta2
  • kubelet - v1beta1

@k8s-ci-robot added sig/architecture and removed sig/cluster-lifecycle labels Sep 3, 2020
@zqzten (Member) commented Apr 12, 2023

Apologies for the bother; I'm wondering if there's any update on the progress of this issue? I recently found that controller-runtime is deprecating its ComponentConfig implementation, which led me to this issue in k/k, and I wanted to know the community's current opinion and roadmap on this.

@neolit123 (Member) commented Apr 12, 2023

> Apologies for the bother; I'm wondering if there's any update on the progress of this issue? I recently found that controller-runtime is deprecating its ComponentConfig implementation, which led me to this issue in k/k, and I wanted to know the community's current opinion and roadmap on this.

Similarly to controller-runtime, the k/k componentconfig APIs have made rather slow progress toward graduation to v1. Some API versions are stale and should be higher, i.e. maturity is not accurately represented.

There has not been a discussion about dropping the effort entirely. The APIs are widely used despite not being v1 (e.g. the kubelet config is still at v1beta1), and removing them would be very disruptive to users. Instead, we should push them toward graduation and maintain them.

@zqzten (Member) commented Apr 12, 2023

@neolit123 Thanks for the rapid reply! I would also be happy to see these configurations maintained. Is there any person or group currently responsible for this?

Also, I've found that not every component of k/k supports being configured by a versioned configuration file, and the kubelet implements the whole processing logic in its own code base, where it cannot easily be reused. So I'm wondering if there's any chance we could build a common mechanism (e.g. in component-base) that could be reused across all k/k components, and even by other subprojects (such as controller-runtime above). This way we could achieve unified behavior and reduce the overall maintenance cost.

@neolit123 (Member)

> @neolit123 Thanks for the rapid reply! I would also be happy to see these configurations maintained. Is there any person or group currently responsible for this?

Currently, different groups/SIGs own the components and their respective configs.

> Also, I've found that not every component of k/k supports being configured by a versioned configuration file, and the kubelet implements the whole processing logic in its own code base, where it cannot easily be reused. So I'm wondering if there's any chance we could build a common mechanism (e.g. in component-base) that could be reused across all k/k components, and even by other subprojects (such as controller-runtime above). This way we could achieve unified behavior and reduce the overall maintenance cost.

A working group called "component standard" would have been the appropriate forum for this discussion, but it disbanded. Currently, each component has the freedom to decide on its versioned config implementation details. If you want to open the discussion, the next suitable place would be the SIG Architecture regular Zoom call:

https://github.com/kubernetes/community/blob/master/sig-architecture/README.md#meetings

Please add the topic to the agenda doc there if you'd like.
