
Unit test failure: k8s.io/kubernetes/pkg/registry/storage/csidriver/storage.TestUpdate #76236

Closed
mkumatag opened this issue Apr 7, 2019 · 9 comments · Fixed by #76237
Labels
kind/bug Categorizes issue or PR as related to a bug. sig/storage Categorizes an issue or PR as relevant to SIG Storage.

Comments

mkumatag (Member) commented Apr 7, 2019

What happened:

Fails with the following error:

# make check WHAT=./pkg/registry/storage/csidriver/storage GOFLAGS=-v KUBE_TEST_ARGS='-run ^TestUpdate$'
Running tests for APIVersion: v1,admissionregistration.k8s.io/v1beta1,admission.k8s.io/v1beta1,apps/v1,apps/v1beta1,apps/v1beta2,auditregistration.k8s.io/v1alpha1,authentication.k8s.io/v1,authentication.k8s.io/v1beta1,authorization.k8s.io/v1,authorization.k8s.io/v1beta1,autoscaling/v1,autoscaling/v2beta1,autoscaling/v2beta2,batch/v1,batch/v1beta1,batch/v2alpha1,certificates.k8s.io/v1beta1,coordination.k8s.io/v1beta1,coordination.k8s.io/v1,extensions/v1beta1,events.k8s.io/v1beta1,imagepolicy.k8s.io/v1alpha1,networking.k8s.io/v1,networking.k8s.io/v1beta1,node.k8s.io/v1alpha1,node.k8s.io/v1beta1,policy/v1beta1,rbac.authorization.k8s.io/v1,rbac.authorization.k8s.io/v1beta1,rbac.authorization.k8s.io/v1alpha1,scheduling.k8s.io/v1alpha1,scheduling.k8s.io/v1beta1,scheduling.k8s.io/v1,settings.k8s.io/v1alpha1,storage.k8s.io/v1beta1,storage.k8s.io/v1,storage.k8s.io/v1alpha1,
+++ [0407 04:07:20] Running tests without code coverage
=== RUN   TestUpdate
2019-04-07 04:07:25.108722 I | integration: launching 2870051308638608689 (unix://localhost:28700513086386086890)
2019-04-07 04:07:25.189277 I | etcdserver: name = 2870051308638608689
2019-04-07 04:07:25.189296 I | etcdserver: data dir = /tmp/etcd875312686
2019-04-07 04:07:25.189303 I | etcdserver: member dir = /tmp/etcd875312686/member
2019-04-07 04:07:25.189308 I | etcdserver: heartbeat = 10ms
2019-04-07 04:07:25.189313 I | etcdserver: election = 100ms
2019-04-07 04:07:25.189318 I | etcdserver: snapshot count = 0
2019-04-07 04:07:25.189328 I | etcdserver: advertise client URLs = unix://127.0.0.1:2100220112
2019-04-07 04:07:25.189336 I | etcdserver: initial advertise peer URLs = unix://127.0.0.1:2100120112
2019-04-07 04:07:25.189349 I | etcdserver: initial cluster = 2870051308638608689=unix://127.0.0.1:2100120112
2019-04-07 04:07:25.229592 I | etcdserver: starting member 6baecdf24c30c8c in cluster 8008c2e9cf0442b6
2019-04-07 04:07:25.229622 I | raft: 6baecdf24c30c8c became follower at term 0
2019-04-07 04:07:25.229632 I | raft: newRaft 6baecdf24c30c8c [peers: [], term: 0, commit: 0, applied: 0, lastindex: 0, lastterm: 0]
2019-04-07 04:07:25.229637 I | raft: 6baecdf24c30c8c became follower at term 1
2019-04-07 04:07:25.326112 W | auth: simple token is not cryptographically signed
2019-04-07 04:07:25.373277 I | etcdserver: set snapshot count to default 100000
2019-04-07 04:07:25.373296 I | etcdserver: starting server... [version: 3.3.10, cluster version: to_be_decided]
2019-04-07 04:07:25.373568 I | etcdserver: 6baecdf24c30c8c as single-node; fast-forwarding 9 ticks (election ticks 10)
2019-04-07 04:07:25.373811 I | etcdserver/membership: added member 6baecdf24c30c8c [unix://127.0.0.1:2100120112] to cluster 8008c2e9cf0442b6
2019-04-07 04:07:25.375734 I | integration: launched 2870051308638608689 (unix://localhost:28700513086386086890)
2019-04-07 04:07:25.419846 I | raft: 6baecdf24c30c8c is starting a new election at term 1
2019-04-07 04:07:25.419863 I | raft: 6baecdf24c30c8c became candidate at term 2
2019-04-07 04:07:25.419873 I | raft: 6baecdf24c30c8c received MsgVoteResp from 6baecdf24c30c8c at term 2
2019-04-07 04:07:25.419882 I | raft: 6baecdf24c30c8c became leader at term 2
2019-04-07 04:07:25.419889 I | raft: raft.node: 6baecdf24c30c8c elected leader 6baecdf24c30c8c at term 2
2019-04-07 04:07:25.420113 I | etcdserver: setting up the initial cluster version to 3.3
2019-04-07 04:07:25.420180 I | etcdserver: published {Name:2870051308638608689 ClientURLs:[unix://127.0.0.1:2100220112]} to cluster 8008c2e9cf0442b6
2019-04-07 04:07:25.441796 N | etcdserver/membership: set the initial cluster version to 3.3
2019-04-07 04:07:25.458756 I | etcdserver/api: enabled capabilities for version 3.3
2019-04-07 04:07:25.472533 I | integration: terminating 2870051308638608689 (unix://localhost:28700513086386086890)
2019-04-07 04:07:25.501902 I | integration: terminated 2870051308638608689 (unix://localhost:28700513086386086890)
--- FAIL: TestUpdate (0.39s)
    resttest.go:543: unexpected error: CSIDriver.storage.k8s.io "foo2" is invalid: spec: Invalid value: storage.CSIDriverSpec{AttachRequired:(*bool)(0xc000430356), PodInfoOnMount:(*bool)(0xc000430357)}: field is immutable
    resttest.go:117: object does not have ObjectMeta: object does not implement the Object interfaces
        <nil>
FAIL
FAIL	k8s.io/kubernetes/pkg/registry/storage/csidriver/storage	0.420s
Makefile:184: recipe for target 'check' failed
make: *** [check] Error 1

In the test case we are trying to update an immutable field, which is wrong. That is happening because of an invalid parameter passed to test.TestUpdate; it needs to be fixed by sending a valid update followed by an invalid update, as sketched below.
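For reference, the registrytest tester takes a valid object, a valid-update function, and then any number of invalid-update functions; the failing test was passing a spec mutation in the valid-update slot. A minimal sketch of the corrected call shape (not the exact test code; helpers like newStorage and validCSIDriver are assumed to exist in the test package):

```go
import (
	"testing"

	"k8s.io/apimachinery/pkg/runtime"
	storageapi "k8s.io/kubernetes/pkg/apis/storage"
	genericregistrytest "k8s.io/kubernetes/pkg/registry/registrytest"
)

func TestUpdate(t *testing.T) {
	storage, server := newStorage(t) // newStorage: the package's existing test constructor
	defer server.Terminate(t)

	test := genericregistrytest.New(t, storage.Store).ClusterScope()
	test.TestUpdate(
		// a valid object to create before updating
		validCSIDriver("foo"), // validCSIDriver: assumed helper returning a well-formed object
		// valid update: mutate a mutable field (labels) and leave spec alone
		func(obj runtime.Object) runtime.Object {
			object := obj.(*storageapi.CSIDriver)
			object.Labels = map[string]string{"a": "b"}
			return object
		},
		// invalid update: spec is immutable, so the tester expects this one to be rejected
		func(obj runtime.Object) runtime.Object {
			object := obj.(*storageapi.CSIDriver)
			attachRequired := true
			object.Spec.AttachRequired = &attachRequired
			return object
		},
	)
}
```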

What you expected to happen:
Test should pass
How to reproduce it (as minimally and precisely as possible):

Anything else we need to know?:

Environment:

  • Kubernetes version (use kubectl version):
  • Cloud provider or hardware configuration:
  • OS (e.g: cat /etc/os-release):
  • Kernel (e.g. uname -a):
  • Install tools:
  • Others:
@mkumatag mkumatag added the kind/bug Categorizes issue or PR as related to a bug. label Apr 7, 2019
@k8s-ci-robot k8s-ci-robot added the needs-sig Indicates an issue or PR lacks a `sig/foo` label and requires one. label Apr 7, 2019
mkumatag (Member Author) commented Apr 7, 2019

/sig storage

@k8s-ci-robot k8s-ci-robot added sig/storage Categorizes an issue or PR as relevant to SIG Storage. and removed needs-sig Indicates an issue or PR lacks a `sig/foo` label and requires one. labels Apr 7, 2019
liggitt (Member) commented Apr 7, 2019

How is CI passing on master if this is failing?

cc @kubernetes/sig-testing

mkumatag (Member Author) commented Apr 7, 2019

> How is CI passing on master if this is failing?
>
> cc @kubernetes/sig-testing

@liggitt this test has a skip at the beginning if the API version is not v1beta1; that version is set during execution via the KUBE_TEST_API env var. I really doubt that this env var is set while running the PR tests (bazel_test?)
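The skip in question looks roughly like this (reconstructed from the testapi-era pattern in the registry tests of that time; treat the exact identifiers as assumptions):

```go
import (
	"testing"

	storageapiv1beta1 "k8s.io/api/storage/v1beta1"
	"k8s.io/kubernetes/pkg/api/testapi"
)

func TestUpdate(t *testing.T) {
	// testapi.Storage.GroupVersion() is driven by the KUBE_TEST_API env var.
	// If the harness never sets it to storage.k8s.io/v1beta1, the body below
	// is silently skipped -- the test returns without even calling t.Skip,
	// so CI reports it as passing.
	if *testapi.Storage.GroupVersion() != storageapiv1beta1.SchemeGroupVersion {
		return
	}
	// ... actual test body runs only for v1beta1 ...
}
```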

liggitt (Member) commented Apr 7, 2019

> @liggitt this test has a skip at the beginning if the API version is not v1beta1; that version is set during execution via the KUBE_TEST_API env var. I really doubt that this env var is set while running the PR tests (bazel_test?)

Ah. We should not have unit tests like that. @kubernetes/sig-storage-test-failures can we remove the skips in tests like this and remove any dependency on the testapi package that forces this behavior? xref #69929 (comment)

cc @gnufied

mkumatag (Member Author) commented Apr 9, 2019

> > @liggitt this test has a skip at the beginning if the API version is not v1beta1; that version is set during execution via the KUBE_TEST_API env var. I really doubt that this env var is set while running the PR tests (bazel_test?)
>
> Ah. We should not have unit tests like that. @kubernetes/sig-storage-test-failures can we remove the skips in tests like this and remove any dependency on the testapi package that forces this behavior? xref #69929 (comment)
>
> cc @gnufied

@liggitt I see that CSIDriver* and CSINode* are still at the v1beta1 stage; the tests will not run if we just remove the skip! You can check the failures in https://github.com/kubernetes/kubernetes/pull/76237/files

https://github.com/kubernetes/kubernetes/blob/master/staging/src/k8s.io/api/storage/v1beta1/types.go#L304
https://github.com/kubernetes/kubernetes/blob/master/staging/src/k8s.io/api/storage/v1beta1/types.go#L225

I'm just wondering whether there is another way to run these test cases!

liggitt (Member) commented Apr 9, 2019

The best way would be to switch the test to use the same storage config as the kubeapiserver, like the last commit in https://github.com/liggitt/kubernetes/commits/fix_storage_test
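In other words, instead of branching on KUBE_TEST_API, the test derives its storage encoding from the same resource-encoding defaults the apiserver uses. A rough sketch of that idea (not the linked commit itself; the calls mirror k8s.io/apiserver/pkg/server/storage and should be treated as assumptions):

```go
import (
	"testing"

	"k8s.io/apimachinery/pkg/runtime/schema"
	serverstorage "k8s.io/apiserver/pkg/server/storage"
	"k8s.io/kubernetes/pkg/api/legacyscheme"
	storageapi "k8s.io/kubernetes/pkg/apis/storage"
)

// defaultStorageVersion asks the apiserver's default resource-encoding config
// which group/version csidrivers are persisted as, rather than trusting
// KUBE_TEST_API to line up with what the apiserver actually does.
func defaultStorageVersion(t *testing.T) schema.GroupVersion {
	cfg := serverstorage.NewDefaultResourceEncodingConfig(legacyscheme.Scheme)
	gv, err := cfg.StorageEncodingFor(storageapi.Resource("csidrivers"))
	if err != nil {
		t.Fatal(err)
	}
	return gv
}
```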

mkumatag (Member Author) commented Apr 9, 2019

> The best way would be to switch the test to use the same storage config as the kubeapiserver, like the last commit in https://github.com/liggitt/kubernetes/commits/fix_storage_test

That's great! Let me cherry-pick your commit :P

liggitt (Member) commented Apr 9, 2019

I added a second commit as well to fix up job and cronjob tests in the same way

mkumatag (Member Author) commented Apr 9, 2019

> I added a second commit as well to fix up job and cronjob tests in the same way

pushed that one too :)
