
update dependencies to latest version #258

Open
humblec wants to merge 8 commits into master from the trivy branch
Conversation

@humblec (Contributor) commented Jan 9, 2023

Fix #206

Signed-off-by: Humble Chirammal hchiramm@redhat.com

@k8s-ci-robot added the cncf-cla: yes label (Indicates the PR's author has signed the CNCF CLA.) on Jan 9, 2023
@k8s-ci-robot (Contributor)

[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: humblec

The full list of commands accepted by this bot can be found here.

The pull request process is described here

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@k8s-ci-robot added the approved (Indicates a PR has been approved by an approver from all required OWNERS files.) and size/XXL (Denotes a PR that changes 1000+ lines, ignoring generated files.) labels on Jan 9, 2023
@humblec (Contributor, Author) commented Jan 9, 2023

/assign @yonatankahana

@k8s-ci-robot added the needs-rebase label (Indicates a PR cannot be merged because it has merge conflicts with HEAD.) on Mar 4, 2023
@tkudupudi2

@humblec will this PR be merged anytime soon?

@k8s-ci-robot removed the needs-rebase label (Indicates a PR cannot be merged because it has merge conflicts with HEAD.) on Apr 6, 2023
@humblec force-pushed the trivy branch 3 times, most recently from b1c3fb1 to d7cff69 on April 6, 2023 08:48
humblec and others added 5 commits April 6, 2023 14:25
Signed-off-by: Humble Chirammal <humble.devassy@gmail.com>
Signed-off-by: Humble Chirammal <humble.devassy@gmail.com>
Signed-off-by: Humble Chirammal <humble.devassy@gmail.com>
Signed-off-by: Humble Chirammal <hchiramm@redhat.com>
Signed-off-by: Humble Chirammal <hchiramm@redhat.com>
@humblec (Contributor, Author) commented Apr 6, 2023

@yonatankahana can you please review this PR?

@jackielii (Contributor) left a comment


Mostly looks good to me except the image tag change. I'll approve once amended. Thanks.

go.mod review thread (resolved)
Signed-off-by: Humble Chirammal <humble.devassy@gmail.com>
@yonatankahana (Contributor) left a comment


@humblec
tested on Kubernetes 1.25.3 and it failed to start with the error:

F0411 00:34:32.671804       1 controller.go:878] Error creating lock: endpoints lock is removed, migrate to endpointsleases

It seems this is already fixed in sig-storage-lib-external-provisioner (https://github.com/kubernetes-sigs/sig-storage-lib-external-provisioner/blob/cabcbb0869a16b267bd3c260c70d976495606526/controller/controller.go#L868) and released in v9.0.0; maybe we should even use v9.0.2 already.

Please test your commits before pushing them...

n.b.: we claim to support Kubernetes >= 1.9; we must test whether that is still true after these changes before merging.
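
For context, this is roughly what lease-based leader election looks like with client-go; a minimal sketch rather than the provisioner's actual wiring (the lock name and namespace mirror the default/nfs-provisioner lease shown in the logs further down, and the timings are illustrative):

// Minimal sketch: leader election backed by a Lease object, the lock type
// sig-storage-lib-external-provisioner moved to after the Endpoints lock was
// removed ("endpoints lock is removed, migrate to endpointsleases").
package main

import (
	"context"
	"os"
	"time"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
	"k8s.io/client-go/tools/leaderelection"
	"k8s.io/client-go/tools/leaderelection/resourcelock"
	"k8s.io/klog/v2"
)

func main() {
	cfg, err := rest.InClusterConfig()
	if err != nil {
		klog.Fatalf("building in-cluster config: %v", err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	id, _ := os.Hostname()
	// Lease lock in the coordination.k8s.io group instead of the removed
	// Endpoints-based lock.
	lock, err := resourcelock.New(
		resourcelock.LeasesResourceLock,
		"default", "nfs-provisioner",
		client.CoreV1(), client.CoordinationV1(),
		resourcelock.ResourceLockConfig{Identity: id},
	)
	if err != nil {
		klog.Fatalf("creating lock: %v", err)
	}

	leaderelection.RunOrDie(context.Background(), leaderelection.LeaderElectionConfig{
		Lock:          lock,
		LeaseDuration: 15 * time.Second,
		RenewDeadline: 10 * time.Second,
		RetryPeriod:   2 * time.Second,
		Callbacks: leaderelection.LeaderCallbacks{
			OnStartedLeading: func(ctx context.Context) {
				klog.Info("became leader; the provisioning controller would start here")
				<-ctx.Done()
			},
			OnStoppedLeading: func() {
				klog.Info("lost leadership")
			},
		},
	})
}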

Signed-off-by: Humble Chirammal <humble.devassy@gmail.com>
@humblec (Contributor, Author) commented Apr 11, 2023

@yonatankahana I have updated to v9.0.2 and also tested on a 1.26.3 cluster; the controller runs.

I0411 09:49:12.421958   17960 leaderelection.go:248] attempting to acquire leader lease default/nfs-provisioner...
I0411 09:49:12.460820   17960 leaderelection.go:258] successfully acquired lease default/nfs-provisioner
I0411 09:49:12.460847   17960 event.go:285] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"default", Name:"nfs-provisioner", UID:"0b5fa37b-5f24-4f17-aca0-89951b7824aa", APIVersion:"v1", ResourceVersion:"1728", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' myhost_e7f8cb2c-02db-4993-97c3-a92dbfbd9ee3 became leader
I0411 09:49:12.460881   17960 event.go:285] Event(v1.ObjectReference{Kind:"Lease", Namespace:"default", Name:"nfs-provisioner", UID:"a32d59b0-6dff-4657-bdfa-ad7f60e0de8e", APIVersion:"coordination.k8s.io/v1", ResourceVersion:"1729", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' myhost_e7f8cb2c-02db-4993-97c3-a92dbfbd9ee3 became leader
I0411 09:49:12.460924   17960 controller.go:811] Starting provisioner controller nfs-provisioner_myhost_e7f8cb2c-02db-4993-97c3-a92dbfbd9ee3!
I0411 09:49:12.562042   17960 controller.go:860] Started provisioner controller nfs-provisioner_myhost_e7f8cb2c-02db-4993-97c3-a92dbfbd9ee3!

n.b.: we claim to support Kubernetes >= 1.9; we must test whether that is still true after these changes before merging.

I don't see any point in supporting really old versions! These Kubernetes versions (v1.9 to v1.23) are not even supported by the community; even if we think about some kind of extended support from a specific distro/vendor, maybe a couple of versions could be added beyond n-3, but v1.9 to 1.21 support looks like an unwanted claim!

@humblec (Contributor, Author) commented Apr 14, 2023

@yonatankahana @jackielii can we get this in?

@yonatankahana (Contributor) commented Apr 18, 2023

@humblec

I don't see any point in supporting really old versions! These Kubernetes versions (v1.9 to v1.23) are not even supported by the community; even if we think about some kind of extended support from a specific distro/vendor, maybe a couple of versions could be added beyond n-3, but v1.9 to 1.21 support looks like an unwanted claim!

I agree, but we must understand what those changes mean in terms of backward compatibility; those are probably breaking changes....
We also need to update the documentation and the Helm chart's kubeVersion constraint accordingly.

@bman46 commented May 17, 2023

Any updates on this?

Signed-off-by: Humble Chirammal <humble.devassy@gmail.com>
@humblec (Contributor, Author) commented Jun 7, 2023

@humblec

I don't see any point in supporting really old versions! These Kubernetes versions (v1.9 to v1.23) are not even supported by the community; even if we think about some kind of extended support from a specific distro/vendor, maybe a couple of versions could be added beyond n-3, but v1.9 to 1.21 support looks like an unwanted claim!

I agree, but we must understand what those changes mean in terms of backward compatibility; those are probably breaking changes.... We also need to update the documentation and the Helm chart's kubeVersion constraint accordingly.

@yonatankahana users are free to use the existing version of the provisioner if they want to stick with the old one, or if they face issues with the latest version of the provisioner on VERY OLD clusters. New clusters can definitely use the updated provisioner. Also, I don't think any users still exist on versions like Kube v1.9 or anything below v1.23 (i.e. 4 releases behind latest). However, I have set the supported version to 1.21 and above. With that, I don't see a point in blocking this PR from merging. I have updated the readme and chart definition to reflect the same; please take a look.

@@ -1,10 +1,10 @@
 apiVersion: v1
-appVersion: 4.0.2
+appVersion: 4.0.3
Contributor


Only version should be bumped; appVersion is the image tag (4.0.2).

Contributor Author


It was deliberate, so that the image (tag) remains consistent for older deployments; the upgrade can be a version and image update to really land on the new versions. @yonatankahana

@yonatankahana (Contributor) commented Jun 7, 2023

@humblec

I don't see any point in supporting really old versions! These Kubernetes versions (v1.9 to v1.23) are not even supported by the community; even if we think about some kind of extended support from a specific distro/vendor, maybe a couple of versions could be added beyond n-3, but v1.9 to 1.21 support looks like an unwanted claim!

I agree, but we must understand what those changes mean in terms of backward compatibility; those are probably breaking changes.... We also need to update the documentation and the Helm chart's kubeVersion constraint accordingly.

@yonatankahana users are free to use the existing version of the provisioner if they want to stick with the old one, or if they face issues with the latest version of the provisioner on VERY OLD clusters. New clusters can definitely use the updated provisioner. Also, I don't think any users still exist on versions like Kube v1.9 or anything below v1.23 (i.e. 4 releases behind latest). However, I have set the supported version to 1.21 and above. With that, I don't see a point in blocking this PR from merging. I have updated the readme and chart definition to reflect the same; please take a look.

@humblec imo if it's compatible there is no reason to restrict installation on old clusters. I myself am still maintaining more than 20 clusters with the nfs-subdir-external-provisioner installed on Kubernetes v1.17, and I will be very happy to get those Trivy CVEs resolved. I do agree that we don't need to actively support such old versions, but why change the version arbitrarily? Why not check it fully? Where are we rushing to?

Nothing is blocked; such big changes require a deep look, and I can't just merge 1.3k-line patches. Look how many changes have been made since the PR was created. It takes time - we all volunteer in our free time 😃

tbh, I would feel much safer merging #287 to solve the original issue, releasing a new version, and only then merging this PR.
We can, tomorrow, release a 100% compatible version with 2 years of bugs and CVEs resolved; after that we can break whatever we want.

 description: nfs-subdir-external-provisioner is an automatic provisioner that used your *already configured* NFS server, automatically creating Persistent Volumes.
 name: nfs-subdir-external-provisioner
 home: https://github.com/kubernetes-sigs/nfs-subdir-external-provisioner
-version: 4.0.18
 kubeVersion: ">=1.9.0-0"
+version: 4.0.20
Contributor


Changing the version will trigger a chart release; better to change the kubeVersion only when we actually release the chart with those restrictions. Currently it would be redundant, since the image and chart are the same.

Contributor Author


A new chart release was the plan, with a new image tag as mentioned above, so that the versions remain different and users can upgrade when they want.
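
For reference, this is how the Chart.yaml fields discussed in this thread relate to each other; the values are only the ones visible in the hunks above, not a proposed final state:

apiVersion: v1
name: nfs-subdir-external-provisioner
home: https://github.com/kubernetes-sigs/nfs-subdir-external-provisioner
appVersion: 4.0.2         # tag of the provisioner image the chart deploys
version: 4.0.18           # version of the chart packaging itself; bumping it triggers a chart release
kubeVersion: ">=1.9.0-0"  # range of Kubernetes versions the chart claims to support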

@humblec (Contributor, Author) commented Jun 19, 2023

tbh, I would feel much safer merging #287 to solve the original issue, releasing a new version, and only then merging this PR.
We can, tomorrow, release a 100% compatible version with 2 years of bugs and CVEs resolved; after that we can break whatever we want.

I am fine with merging #287 and releasing a new version. However, the justification for keeping this PR in the queue for so long does not look good to me :). Anyway, we will get this in on top of #287 and a new release. 👍

@k8s-ci-robot added the needs-rebase label (Indicates a PR cannot be merged because it has merge conflicts with HEAD.) on Jun 19, 2023
@k8s-ci-robot (Contributor)

PR needs rebase.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@mengjiao-liu

mengjiao-liu commented Aug 28, 2023

Please rebase to resolve the conflict.

@k8s-triage-robot

The Kubernetes project currently lacks enough contributors to adequately respond to all PRs.

This bot triages PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the PR is closed

You can:

  • Mark this PR as fresh with /remove-lifecycle stale
  • Close this PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

@k8s-ci-robot added the lifecycle/stale label (Denotes an issue or PR has remained open with no activity and has become stale.) on Jan 21, 2024
@cornfeedhobo

/remove-lifecycle stale

@k8s-ci-robot removed the lifecycle/stale label (Denotes an issue or PR has remained open with no activity and has become stale.) on Jan 21, 2024
@cornfeedhobo

This PR is a year old. Can someone open another PR that closes this one, so we can stop dragging this out?

@k8s-triage-robot

The Kubernetes project currently lacks enough contributors to adequately respond to all PRs.

This bot triages PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the PR is closed

You can:

  • Mark this PR as fresh with /remove-lifecycle stale
  • Close this PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

@k8s-ci-robot added the lifecycle/stale label (Denotes an issue or PR has remained open with no activity and has become stale.) on Apr 20, 2024
@k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all PRs.

This bot triages PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the PR is closed

You can:

  • Mark this PR as fresh with /remove-lifecycle rotten
  • Close this PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

@k8s-ci-robot added the lifecycle/rotten label (Denotes an issue or PR that has aged beyond stale and will be auto-closed.) and removed the lifecycle/stale label (Denotes an issue or PR has remained open with no activity and has become stale.) on May 20, 2024