
Introduce new VolumeAttachment API Object #54463

Merged
merged 2 commits into kubernetes:master on Nov 15, 2017

Conversation

saad-ali
Member

What this PR does / why we need it:

Introduce a new VolumeAttachment API object. This object will be used by the CSI volume plugin to enable external attachers (see the design here). In the future, existing volume plugins can be refactored to use this object as well.

Which issue this PR fixes (optional, in fixes #<issue number>(, fixes #<issue_number>, ...) format, will close that issue when PR gets merged): Part of issue kubernetes/enhancements#178

Special notes for your reviewer:
None

Release note:

NONE

@k8s-ci-robot k8s-ci-robot added release-note-none Denotes a PR that doesn't merit a release note. size/XXL Denotes a PR that changes 1000+ lines, ignoring generated files. cncf-cla: yes Indicates the PR's author has signed the CNCF CLA. labels Oct 24, 2017
@k8s-github-robot k8s-github-robot added the kind/api-change Categorizes issue or PR as related to adding, removing, or otherwise changing an API label Oct 24, 2017
Member
@jsafrane jsafrane left a comment

The object registration in pkg/registry/storage/volumeattachment is missing. IMO it should be in this PR - the API object is unusable without it.

allErrs = append(allErrs, field.Invalid(fldPath, nodeName, msg))
}
return allErrs
}
Member

There should also be a ValidateVolumeAttachmentUpdate that makes sure the Spec can't be changed.
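
For illustration, a minimal sketch of what such an update validator could look like, following the usual pkg/apis validation pattern (the function and package names here are assumptions, not the final code in this PR; ValidateVolumeAttachment is assumed to exist alongside it):

```go
import (
	apiequality "k8s.io/apimachinery/pkg/api/equality"
	"k8s.io/apimachinery/pkg/util/validation/field"
	"k8s.io/kubernetes/pkg/apis/storage"
)

// ValidateVolumeAttachmentUpdate (sketch): validate the new object and
// additionally reject any change to Spec, making it immutable.
func ValidateVolumeAttachmentUpdate(new, old *storage.VolumeAttachment) field.ErrorList {
	allErrs := ValidateVolumeAttachment(new)
	if !apiequality.Semantic.DeepEqual(old.Spec, new.Spec) {
		allErrs = append(allErrs, field.Invalid(field.NewPath("spec"), new.Spec, "field is immutable"))
	}
	return allErrs
}
```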

Member

This is fixed now, a few lines below.

@jsafrane
Copy link
Member

@kubernetes/api-reviewers @kubernetes/sig-storage-api-reviews

@k8s-ci-robot k8s-ci-robot added the sig/storage Categorizes an issue or PR as relevant to SIG Storage. label Oct 24, 2017
// Indicates the volume is successfully attached.
// This field must only be set by the entity completing the attach
// operation, i.e. the external-attacher.
IsAttached bool
Member

a couple things to consider from the API conventions:

  • Think twice about bool fields. Many ideas start as a boolean but eventually trend toward a small set of mutually exclusive options. Plan for future expansion by describing the policy options explicitly as a string type alias (e.g. TerminationMessagePolicy).
  • The name of a field expressing a boolean property called 'fooable' should be Fooable, not IsFooable. (Both points are sketched below.)
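
As a hedged sketch of the convention being cited (TerminationMessagePolicy is the real precedent; the VolumeAttachment-specific names below are hypothetical, not what this PR ends up shipping):

```go
// Hypothetical illustration only. Instead of:
//     IsAttached bool
// the convention suggests a string type alias with named constants,
// leaving room for states beyond true/false.
type AttachmentState string

const (
	AttachmentStateAttached AttachmentState = "Attached"
	AttachmentStateDetached AttachmentState = "Detached"
	// New states can be added later without an API-breaking change.
)
```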

Member

Also, it seems like we'd need to be able to tell from status whether attach had begun.

Since attach is not instant, there could be a window of time during which:

  1. Kubernetes creates VolumeAttachment
  2. Attacher observes VolumeAttachment, starts attaching
  3. PV/VolumeAttachment objects are vulnerable to deletion/finalization, since status doesn't indicate the volume is attached or is attaching
  4. Attacher completes attaching, tries to update status to indicate attached state, object is gone

It seems like it should be possible to tell where we are in the detached -> attaching -> attached -> detaching -> detached lifecycle by looking at status (and making the attacher do a successful status update from detached -> attaching before starting the attach operation prevents races trying to attach a volume that is getting deleted)

Member
@jsafrane jsafrane Oct 25, 2017

The idea is that we do not need an "attaching" state; the simple presence of the object with (Is)Attached==false is enough. The same goes for "detaching": the presence of a deletionTimestamp is enough.

detached -> attaching

This is not possible. A VolumeAttachment with a deletionTimestamp must first be deleted from the API server, and Kubernetes must create a new instance. Or is it allowed to clear the deletionTimestamp? That looks odd.
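
To make the presence-based protocol concrete, a rough sketch of the reconcile logic an external attacher could follow under this design (the attach/detach helpers and the Attached field name are assumptions for illustration):

```go
import "k8s.io/kubernetes/pkg/apis/storage"

// reconcile (sketch): no explicit "attaching"/"detaching" phases are
// stored; both are inferred from the object's presence and metadata.
func reconcile(va *storage.VolumeAttachment) {
	if va.DeletionTimestamp != nil {
		// Deletion was requested: the volume is effectively "detaching".
		detach(va)
		return
	}
	if !va.Status.Attached {
		// Object exists but is not yet attached: an attach is desired.
		attach(va)
	}
}

// attach and detach are hypothetical stand-ins for the external
// attacher's CSI calls (ControllerPublishVolume / ControllerUnpublishVolume).
func attach(va *storage.VolumeAttachment) {}
func detach(va *storage.VolumeAttachment) {}
```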

Member

We did not want to introduce state machines with detached -> attaching -> attached -> detaching -> detached because it's not possible to add a new state there later.

Member

detached -> attaching

This is not possible

Maybe I used the wrong term. Is unattached -> attaching -> attached -> detaching -> detached more correct? Don't we need to represent the state where the attacher has observed the request for the attachment and is currently working to attach (so the VolumeAttachment object should not be removed), but it has not yet attached (so the volume is not ready to mount)?

Member

I don't think we need to capture the state that the attacher has started attaching. The A/D controller will time out eventually and send an event to the pod. There is a glitch, though: there is no way to tell whether the external attacher is not running or is just slow.

Attacher string

// The volume to attach.
Volume api.VolumeSource
Member

Is it intentional that VolumeAttachment is a cluster-scoped type that references api.VolumeSource instead of api.PersistentVolumeSource?

Member

Yes. It serves as a memento of the pod's VolumeSource in case the pod is deleted and we need to detach its volumes.

Member

there's no place to indicate the original namespace, pod name or pod uid... is none of that information needed to resolve the volume or referenced secrets?

Member

We moved to supporting PersistentVolumes only, and we use namespaced SecretReferences there.

@gnufied
Member

gnufied commented Oct 24, 2017

cc @kubernetes/sig-storage-api-reviews

// This field must only be set by the entity completing the attach
// operation, i.e. the external-attacher.
// +optional
AttachError VolumeError
Member

Should it be a pointer? kubectl get volumeattachment shows ugly output when I created an instance with no Status:

  apiVersion: storage.k8s.io/v1
  kind: VolumeAttachment
  metadata:
    creationTimestamp: 2017-10-24T16:26:47Z
    name: first
    namespace: ""
    resourceVersion: "341"
    selfLink: /apis/storage.k8s.io/v1/volumeattachments/first
    uid: 2192999e-b8d8-11e7-b26b-525400224e72
  spec:
    attacher: csi/example
    nodeName: localhost
    volume:
      hostPath:
        path: /tmp
        type: ""
  status:
    attachError:
      Message: ""
      time: null
    detachError:
      Message: ""
      time: null
    isAttached: false
kind: List
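
A hedged sketch of the pointer-based shape being suggested, which lets unset status fields serialize as absent instead of as zero values (the json tags are illustrative):

```go
import metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"

// With pointers plus omitempty, an unset error no longer renders as an
// empty Message and a null time in kubectl output.
type VolumeAttachmentStatus struct {
	Attached bool `json:"attached"`

	// Nil means no error has been recorded.
	AttachError *VolumeError `json:"attachError,omitempty"`
	DetachError *VolumeError `json:"detachError,omitempty"`
}

type VolumeError struct {
	Time    metav1.Time `json:"time,omitempty"`
	Message string      `json:"message,omitempty"`
}
```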


// This field must only be set by the entity completing the attach
// operation, i.e. the external-attacher.
// +optional
AttachError VolumeError
Contributor

Prefer a status slice similar to Pod Conditions. It helps track the evolution of the attachment status.

Member

We discussed that, and there are rumors that Conditions are a bad design and that we should not use them for new features. I am not sure who the source of that rumor is, @thockin?

Member Author

This was a result of discussions with @thockin and @erictune

// This field must only be set by the entity completing the detach
// operation, i.e. the external-attacher.
// +optional
DetachError VolumeError
Contributor

Is the VolumeAttachment object used more as a record/status-keeping object, or to convey a desired state?
If the latter, then I'm assuming the detach is triggered when this VolumeAttachment object is deleted. So if a failure happens during the detach, the external-attacher will no longer have this object on which to record the DetachError.

Member

  • VolumeAttachment is used for both: Spec for the desired state and Status for record keeping.
  • We use finalizers to keep "deleted" VolumeAttachment instances around until the associated volume is truly detached from the node; a rough sketch follows. See CSI Volume Plugins in Kubernetes Design Doc community#1258 for details.
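
As an illustration of that finalizer mechanism (the finalizer name and helper are hypothetical):

```go
// The attacher keeps a finalizer on the VolumeAttachment, so a delete
// only sets deletionTimestamp; the object disappears once the finalizer
// is removed after a successful detach.
const attacherFinalizer = "external-attacher/example-csi-driver" // hypothetical

func removeFinalizerAfterDetach(va *storage.VolumeAttachment) {
	kept := va.Finalizers[:0]
	for _, f := range va.Finalizers {
		if f != attacherFinalizer {
			kept = append(kept, f)
		}
	}
	va.Finalizers = kept
	// Persisting this update lets the API server finally delete the object.
}
```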

Member

are VolumeAttachment objects 1:1 with PV objects?

Member

No, one volume can be attached to multiple nodes and thus will have multiple VolumeAttachment instances.

Member

so they are unique by (pv,node)? is that enforced by convention or by pinning the name format to something like <pvName>.<nodeName>, etc?

Member

Yes, it's just a convention. IMO, nobody but the Kubernetes A/D controller should be allowed to create this object.
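
For illustration, the convention could be captured in a helper like the following (hypothetical; a real controller might hash the pair instead, to stay within object-name length limits):

```go
// attachmentName pins uniqueness to the (PV, node) pair via the name.
func attachmentName(pvName, nodeName string) string {
	return pvName + "." + nodeName
}
```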

Volume api.VolumeSource

// The node that the volume should be attached to.
NodeName string
Contributor

How is the VolumeAttachment object used in a multi-attachment scenario? For example, with RO volumes, will there be multiple VolumeAttachment objects for every RO "attachment" even if the volume source and node are the same, or will there be just one?

Contributor
@kokhang kokhang Oct 24, 2017

Should you also store information about pods and mountDir in the case where plugins are not using the attacher-detacher controller?

Member

For example, with RO volumes, will there be multiple VolumeAttachment objects for every RO "attachment" even if the volume source and node are the same, or will there be just one?

We do not plan to change the current behavior, i.e. a volume is attached to a node only once. So there will be one instance per node-volume pair. Kubelet then mounts the single device into individual pods (using bind-mounts).

Should you also store information about pods and mountDir in the case where plugins are not using the attacher-detacher controller?

I am not sure I understand the question. That's kubelet's job and is not related to volume attachment.

Contributor

I was alluding to the fact that for non-attachable plugins, kubelet's mount() does everything (attach, mountDevice, and mount of the pod dir). So my question was whether there is a need to store pod information to determine when it is safe to detach (i.e. delete the VolumeAttachment object).

Looks like kubelet can determine how many bind-mount refs exist during a detach and delete the attachment once no bind mounts are left.

Member

VolumeAttachment is used only for attachable plugins (CSI). See the proposal for the handling of CSI plugins that don't support the ControllerPublishVolume call.

And indeed, kubelet has its own means of tracking pods that use the volume, and it won't let the Attach/Detach controller detach a volume that's still mounted.

@jsafrane
Member

I updated the API to the current version in the proposal and added its registration and validation. kubectl get va and kubectl create -f volumeattachment.yaml now work.

@luxas luxas added this to the v1.9 milestone Oct 27, 2017
@saad-ali saad-ali force-pushed the volumeAttachmentAPI branch 5 times, most recently from c8a78a6 to df157c6 Compare November 9, 2017 00:33
@thockin
Member

thockin commented Nov 9, 2017

We should be judicious with shortnames like va - considering this is not something humans will use very often, I'd propose not adding a shortcut

Member
@thockin thockin left a comment

From other doc:

Did we rule out simply storing this as fields on PV?

// Right now only PersistentVolumes can be attached via external attacher,
// in future we may allow also inline volumes in pods.
// Exactly one member can be set.
type AttachedVolumeSource struct {
Member

In general, we've swung towards discriminated unions. Given that this has just one item, I suppose we can forego the discriminator for now, but as soon as we add a second option, we need it. We can add a type field that defaults to "PersistentVolume" when we need it.
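A hedged sketch of the discriminated-union shape being described, with the hypothetical type field added once a second member exists:

```go
// Hypothetical future shape: a discriminator plus one pointer per
// member, exactly one of which is set.
type AttachedVolumeSourceType string

const (
	// Would be the default, for backward compatibility.
	AttachedVolumeSourceTypePersistentVolume AttachedVolumeSourceType = "PersistentVolume"
)

type AttachedVolumeSource struct {
	// Type says which member below is populated.
	Type AttachedVolumeSourceType

	// Name of the PersistentVolume to attach.
	PersistentVolumeName *string
}
```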

Member Author

Ack.


// Spec is read-only
if !apiequality.Semantic.DeepEqual(old.Spec, new.Spec) {
allErrs = append(allErrs, field.Invalid(field.NewPath("spec"), new.Spec, "spec is immutable after creation"))
Member

should just say "field is immutable"

Member Author

Changing

@saad-ali
Member Author

saad-ali commented Nov 9, 2017

@caesarxuchao ./hack/test-update-storage-objects.sh is failing with panic: storage codec doesn't seem to match given type: Internal type not encodable: no kind "VolumeAttachment" is registered for version "storage.k8s.io/v1beta1". Do we have to add this new object to both storage.k8s.io/v1beta1 and storage.k8s.io/v1? Or can we get away with just adding it to storage.k8s.io/v1?

@saad-ali
Member Author

saad-ali commented Nov 9, 2017

We should be judicious with shortnames like va - considering this is not something humans will use very often, I'd propose not adding a shortcut

Ack. Removing.

From other doc: Did we rule out simply storing this as fields on PV?

Mostly because it makes detaching trickier.

@saad-ali saad-ali force-pushed the volumeAttachmentAPI branch 2 times, most recently from 2697fcb to 1744587 Compare November 9, 2017 16:36
@saad-ali saad-ali added priority/important-soon Must be staffed and worked on either currently, or very soon, ideally in time for the next release. and removed priority/critical-urgent Highest priority. Must be actively worked on as someone's top priority right now. labels Nov 14, 2017
@k8s-github-robot

[MILESTONENOTIFIER] Milestone Pull Request Needs Approval

@childsb @saad-ali @thockin @kubernetes/sig-storage-misc

Action required: This pull request must have the status/approved-for-milestone label applied by a SIG maintainer. If the label is not applied within 6 days, the pull request will be moved out of the v1.9 milestone.

Pull Request Labels
  • sig/storage: Pull Request will be escalated to these SIGs if needed.
  • priority/important-soon: Escalate to the pull request owners and SIG owner; move out of milestone after several unsuccessful escalation attempts.
  • kind/feature: New functionality.
Help

@thockin
Member

thockin commented Nov 14, 2017

This is a ludicrous amount of work to add a new API. Absolutely insane.
/lgtm
/approve

@k8s-ci-robot k8s-ci-robot added the lgtm "Looks good to me", indicates that a PR is ready to be merged. label Nov 14, 2017
@k8s-github-robot

[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: saad-ali, thockin

Associated issue: 178

The full list of commands accepted by this bot can be found here.

Needs approval from an approver in each of these OWNERS Files:

You can indicate your approval by writing /approve in a comment
You can cancel your approval by writing /approve cancel in a comment

@k8s-github-robot k8s-github-robot added the approved Indicates a PR has been approved by an approver from all required OWNERS files. label Nov 14, 2017
Member
@caesarxuchao caesarxuchao left a comment

A few comments and then the apimachinery related code lgtm.

}

func (volumeAttachmentStrategy) AllowUnconditionalUpdate() bool {
return true
Member

Do you really want this to be true?

It will defeat the optimistic concurrency built around resourceVersion. See

if doUnconditionalUpdate {
	// Update the object's resource version to match the latest
	// storage object's resource version.
	err = e.Storage.Versioner().UpdateObject(obj, res.ResourceVersion)
	if err != nil {
		return nil, nil, err
	}
}

Member Author

Nope. Changed it to false.
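
The resulting strategy method, sketched per the fix described:

```go
// Returning false preserves optimistic concurrency: an update must be
// based on the current resourceVersion or it is rejected.
func (volumeAttachmentStrategy) AllowUnconditionalUpdate() bool {
	return false
}
```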

}

func (volumeAttachmentStrategy) ValidateUpdate(ctx genericapirequest.Context, obj, old runtime.Object) field.ErrorList {
errorList := validation.ValidateVolumeAttachment(obj.(*storage.VolumeAttachment))
Member

nit: add a variable and assign obj.(*storage.VolumeAttachment) to it, to save another type assertion.

Member Author

Done.
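
The shape of the fix, sketched (assuming the same surrounding strategy code and an update validator as discussed earlier):

```go
func (volumeAttachmentStrategy) ValidateUpdate(ctx genericapirequest.Context, obj, old runtime.Object) field.ErrorList {
	// Assert once and reuse, per the review nit above.
	newVolumeAttachment := obj.(*storage.VolumeAttachment)
	oldVolumeAttachment := old.(*storage.VolumeAttachment)
	errorList := validation.ValidateVolumeAttachment(newVolumeAttachment)
	return append(errorList, validation.ValidateVolumeAttachmentUpdate(newVolumeAttachment, oldVolumeAttachment)...)
}
```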

// TODO: move SchemeBuilder with zz_generated.deepcopy.go to k8s.io/api.
// localSchemeBuilder and AddToScheme will stay in k8s.io/kubernetes.
SchemeBuilder = runtime.NewSchemeBuilder(addKnownTypes)
localSchemeBuilder = &SchemeBuilder
Member

I know you copied the pattern from some other files, but still, please remove the localSchemeBuilder; it's unnecessary. Also please remove the TODO at line 37.

Member Author

Will do.
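
A sketch of the simplified registration once localSchemeBuilder is dropped (this follows the standard scheme-registration pattern; SchemeGroupVersion is assumed to be defined in the same package):

```go
var (
	// SchemeBuilder collects the functions that register types.
	SchemeBuilder = runtime.NewSchemeBuilder(addKnownTypes)
	// AddToScheme applies them all to a given scheme.
	AddToScheme = SchemeBuilder.AddToScheme
)

// addKnownTypes registers VolumeAttachment with the scheme.
func addKnownTypes(scheme *runtime.Scheme) error {
	scheme.AddKnownTypes(SchemeGroupVersion,
		&VolumeAttachment{},
		&VolumeAttachmentList{},
	)
	return nil
}
```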

Introduce the v1alpha1 version to the Kubernetes storage API. And add a
new VolumeAttachment object to that version. This object will initially
be used only by the new CSI Volume Plugin. Eventually existing volume
plugins can be refactored to use it too.
@saad-ali saad-ali added the do-not-merge DEPRECATED. Indicates that a PR should not merge. Label can only be manually applied/removed. label Nov 15, 2017
@saad-ali
Member Author

@caesarxuchao feedback addressed, PTAL.

@k8s-github-robot k8s-github-robot removed the lgtm "Looks good to me", indicates that a PR is ready to be merged. label Nov 15, 2017
@saad-ali saad-ali removed the do-not-merge DEPRECATED. Indicates that a PR should not merge. Label can only be manually applied/removed. label Nov 15, 2017
@k8s-ci-robot
Contributor

k8s-ci-robot commented Nov 15, 2017

@saad-ali: The following test failed, say /retest to rerun them all:

Test name: pull-kubernetes-cross
Commit: 9f294c1 (link)
Rerun command: /test pull-kubernetes-cross

Full PR test history. Your PR dashboard. Please help us cut down on flakes by linking to an open issue when you hit one in your PR.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository. I understand the commands that are listed here.

@caesarxuchao
Member

apimachinery part lgtm.

@saad-ali saad-ali added the lgtm "Looks good to me", indicates that a PR is ready to be merged. label Nov 15, 2017
@k8s-github-robot

/test all [submit-queue is verifying that this PR is safe to merge]

@k8s-github-robot

Automatic merge from submit-queue. If you want to cherry-pick this change to another branch, please follow the instructions here.

@k8s-github-robot k8s-github-robot merged commit ebe8ea7 into kubernetes:master Nov 15, 2017
@jsafrane
Member

With hack/local-up-cluster.sh, I don't see storage.k8s.io/v1alpha1 API:

$ curl 127.0.0.1:8080/apis/storage.k8s.io/v1alpha1
{
  "kind": "Status",
  "apiVersion": "v1",
  "metadata": {},
  "status": "Failure",
  "message": "the server could not find the requested resource",
  "reason": "NotFound",
  "details": {},
  "code": 404
}

Do I need to enable it somewhere?

@liggitt
Member

liggitt commented Nov 15, 2017

alpha API groups are not enabled by default. You can enable it with --runtime-config=apis/storage.k8s.io/v1alpha1=true (the RUNTIME_CONFIG=apis/storage.k8s.io/v1alpha1=true env var in local-up-cluster, I think)

@CaoShuFeng
Contributor

Hi, I don't know why CI didn't catch this, but I get this error:

[fujitsu@localhost kubernetes]$ make test WHAT=k8s.io/kubernetes/pkg/registry/storage/storageclass/storage
Running tests for APIVersion: v1,admissionregistration.k8s.io/v1alpha1,admission.k8s.io/v1alpha1,apps/v1beta1,apps/v1beta2,apps/v1,authentication.k8s.io/v1,authentication.k8s.io/v1beta1,authorization.k8s.io/v1,authorization.k8s.io/v1beta1,autoscaling/v1,autoscaling/v2beta1,batch/v1,batch/v1beta1,batch/v2alpha1,certificates.k8s.io/v1beta1,extensions/v1beta1,imagepolicy.k8s.io/v1alpha1,networking.k8s.io/v1,policy/v1beta1,rbac.authorization.k8s.io/v1,rbac.authorization.k8s.io/v1beta1,rbac.authorization.k8s.io/v1alpha1,scheduling.k8s.io/v1alpha1,settings.k8s.io/v1alpha1,storage.k8s.io/v1alpha1,storage.k8s.io/v1beta1,storage.k8s.io/v1,
+++ [1120 14:38:40] Running tests without code coverage
2017-11-20 14:38:43.201383 I | integration: launching 7874548005808368453 (unix://localhost:78745480058083684530)
2017-11-20 14:38:43.201637 I | etcdserver: name = 7874548005808368453
2017-11-20 14:38:43.201653 I | etcdserver: data dir = /tmp/etcd404175323
2017-11-20 14:38:43.201657 I | etcdserver: member dir = /tmp/etcd404175323/member
2017-11-20 14:38:43.201660 I | etcdserver: heartbeat = 10ms
2017-11-20 14:38:43.201663 I | etcdserver: election = 100ms
2017-11-20 14:38:43.201665 I | etcdserver: snapshot count = 0
2017-11-20 14:38:43.201671 I | etcdserver: advertise client URLs = unix://127.0.0.1:2100224704
2017-11-20 14:38:43.201676 I | etcdserver: initial advertise peer URLs = unix://127.0.0.1:2100124704
2017-11-20 14:38:43.201692 I | etcdserver: initial cluster = 7874548005808368453=unix://127.0.0.1:2100124704
2017-11-20 14:38:43.208654 I | etcdserver: starting member 7f7f4f96ec3c19c6 in cluster 787606054cdf7e02
2017-11-20 14:38:43.208687 I | raft: 7f7f4f96ec3c19c6 became follower at term 0
2017-11-20 14:38:43.208708 I | raft: newRaft 7f7f4f96ec3c19c6 [peers: [], term: 0, commit: 0, applied: 0, lastindex: 0, lastterm: 0]
2017-11-20 14:38:43.208720 I | raft: 7f7f4f96ec3c19c6 became follower at term 1
2017-11-20 14:38:43.209874 I | etcdserver: set snapshot count to default 10000
2017-11-20 14:38:43.209918 I | etcdserver: starting server... [version: 3.1.10, cluster version: to_be_decided]
2017-11-20 14:38:43.210437 I | etcdserver/membership: added member 7f7f4f96ec3c19c6 [unix://127.0.0.1:2100124704] to cluster 787606054cdf7e02
2017-11-20 14:38:43.210826 I | integration: launched 7874548005808368453 (unix://localhost:78745480058083684530)
2017-11-20 14:38:43.238973 I | raft: 7f7f4f96ec3c19c6 is starting a new election at term 1
2017-11-20 14:38:43.238991 I | raft: 7f7f4f96ec3c19c6 became candidate at term 2
2017-11-20 14:38:43.239018 I | raft: 7f7f4f96ec3c19c6 received MsgVoteResp from 7f7f4f96ec3c19c6 at term 2
2017-11-20 14:38:43.239034 I | raft: 7f7f4f96ec3c19c6 became leader at term 2
2017-11-20 14:38:43.239041 I | raft: raft.node: 7f7f4f96ec3c19c6 elected leader 7f7f4f96ec3c19c6 at term 2
2017-11-20 14:38:43.239199 I | etcdserver: published {Name:7874548005808368453 ClientURLs:[unix://127.0.0.1:2100224704]} to cluster 787606054cdf7e02
2017-11-20 14:38:43.239262 I | etcdserver: setting up the initial cluster version to 3.1
2017-11-20 14:38:43.239352 N | etcdserver/membership: set the initial cluster version to 3.1
2017-11-20 14:38:43.239384 I | etcdserver/api: enabled capabilities for version 3.1
2017-11-20 14:38:43.243130 I | integration: terminating 7874548005808368453 (unix://localhost:78745480058083684530)
2017-11-20 14:38:43.255072 I | integration: terminated 7874548005808368453 (unix://localhost:78745480058083684530)
--- FAIL: TestCreate (0.05s)
	resttest.go:320: Unexpected error: no kind "StorageClass" is registered for version "storage.k8s.io/v1alpha1"
2017-11-20 14:38:43.255365 I | integration: launching 5776451847361156600 (unix://localhost:57764518473611566000)
2017-11-20 14:38:43.255607 I | etcdserver: name = 5776451847361156600
2017-11-20 14:38:43.255617 I | etcdserver: data dir = /tmp/etcd561142654
2017-11-20 14:38:43.255623 I | etcdserver: member dir = /tmp/etcd561142654/member
2017-11-20 14:38:43.255627 I | etcdserver: heartbeat = 10ms
2017-11-20 14:38:43.255631 I | etcdserver: election = 100ms
2017-11-20 14:38:43.255635 I | etcdserver: snapshot count = 0
2017-11-20 14:38:43.255641 I | etcdserver: advertise client URLs = unix://127.0.0.1:2100424704
2017-11-20 14:38:43.255646 I | etcdserver: initial advertise peer URLs = unix://127.0.0.1:2100324704
2017-11-20 14:38:43.255654 I | etcdserver: initial cluster = 5776451847361156600=unix://127.0.0.1:2100324704
2017-11-20 14:38:43.263060 I | etcdserver: starting member 53e079aca545846c in cluster d2a46b8e8af27f5e
2017-11-20 14:38:43.263082 I | raft: 53e079aca545846c became follower at term 0
2017-11-20 14:38:43.263089 I | raft: newRaft 53e079aca545846c [peers: [], term: 0, commit: 0, applied: 0, lastindex: 0, lastterm: 0]
2017-11-20 14:38:43.263093 I | raft: 53e079aca545846c became follower at term 1
2017-11-20 14:38:43.263868 I | etcdserver: set snapshot count to default 10000
2017-11-20 14:38:43.263890 I | etcdserver: starting server... [version: 3.1.10, cluster version: to_be_decided]
2017-11-20 14:38:43.264308 I | integration: launched 5776451847361156600 (unix://localhost:57764518473611566000)
2017-11-20 14:38:43.264413 I | etcdserver/membership: added member 53e079aca545846c [unix://127.0.0.1:2100324704] to cluster d2a46b8e8af27f5e
2017-11-20 14:38:43.353276 I | raft: 53e079aca545846c is starting a new election at term 1
2017-11-20 14:38:43.353322 I | raft: 53e079aca545846c became candidate at term 2
2017-11-20 14:38:43.353334 I | raft: 53e079aca545846c received MsgVoteResp from 53e079aca545846c at term 2
2017-11-20 14:38:43.353350 I | raft: 53e079aca545846c became leader at term 2
2017-11-20 14:38:43.353366 I | raft: raft.node: 53e079aca545846c elected leader 53e079aca545846c at term 2
2017-11-20 14:38:43.353547 I | etcdserver: published {Name:5776451847361156600 ClientURLs:[unix://127.0.0.1:2100424704]} to cluster d2a46b8e8af27f5e
2017-11-20 14:38:43.353623 I | etcdserver: setting up the initial cluster version to 3.1
2017-11-20 14:38:43.353852 N | etcdserver/membership: set the initial cluster version to 3.1
2017-11-20 14:38:43.360294 I | integration: terminating 5776451847361156600 (unix://localhost:57764518473611566000)
2017-11-20 14:38:43.379161 I | integration: terminated 5776451847361156600 (unix://localhost:57764518473611566000)
--- FAIL: TestUpdate (0.12s)
	resttest.go:458: unexpected error: no kind "StorageClass" is registered for version "storage.k8s.io/v1alpha1"
	resttest.go:463: unexpected error: storageclasses.storage.k8s.io "foo2" not found
panic: interface conversion: runtime.Object is nil, not *storage.StorageClass [recovered]
	panic: interface conversion: runtime.Object is nil, not *storage.StorageClass

goroutine 200 [running]:
testing.tRunner.func1(0xc42036e960)
	/usr/local/go/src/testing/testing.go:711 +0x2d2
panic(0x154b6e0, 0xc420324600)
	/usr/local/go/src/runtime/panic.go:491 +0x283
k8s.io/kubernetes/pkg/registry/storage/storageclass/storage.TestUpdate.func1(0x0, 0x0, 0x14, 0xc42116dda8)
	/home/fujitsu/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/pkg/registry/storage/storageclass/storage/storage_test.go:95 +0x147
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/registry/rest/resttest.(*Tester).testUpdateEquals(0xc4202ba570, 0x21637a0, 0xc4202726e0, 0xc420613ed8, 0xc4201d45a0, 0x17c2d40)
	/home/fujitsu/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/registry/rest/resttest/resttest.go:465 +0x2bd
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/registry/rest/resttest.(*Tester).TestUpdate(0xc4202ba570, 0x21637a0, 0xc420272580, 0xc42116ded8, 0xc4201d45a0, 0x17c2d40, 0xc420528140, 0x1, 0x1)
	/home/fujitsu/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/registry/rest/resttest/resttest.go:164 +0x61
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/registry/generic/testing.(*Tester).TestUpdate(0xc4201d4580, 0x21637a0, 0xc420272580, 0x17c2d40, 0xc420613f88, 0x1, 0x1)
	/home/fujitsu/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/registry/generic/testing/tester.go:92 +0x209
k8s.io/kubernetes/pkg/registry/storage/storageclass/storage.TestUpdate(0xc42036e960)
	/home/fujitsu/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/pkg/registry/storage/storageclass/storage/storage_test.go:90 +0x362
testing.tRunner(0xc42036e960, 0x17c2d50)
	/usr/local/go/src/testing/testing.go:746 +0xd0
created by testing.(*T).Run
	/usr/local/go/src/testing/testing.go:789 +0x2de
FAIL	k8s.io/kubernetes/pkg/registry/storage/storageclass/storage	0.236s
Makefile:182: recipe for target 'test' failed
make: *** [test] Error 1

@CaoShuFeng
Contributor

ref:
#56136

@lpabon lpabon mentioned this pull request Nov 28, 2017
k8s-github-robot pushed a commit that referenced this pull request Jan 13, 2018
Automatic merge from submit-queue (batch tested with PRs 57266, 58187, 58186, 46245, 56509). If you want to cherry-pick this change to another branch, please follow the instructions here: https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md

e2e: CSI Tests

**What this PR does / why we need it**:
This e2e test tests the CSI external attacher with a mock CSI plugin driver.

**Which issue(s) this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close the issue(s) when PR gets merged)*: kubernetes/enhancements#178

**Special notes for your reviewer**:
* Tests features in kubernetes/enhancements#178
* Tests implementation of kubernetes/community#1258
* Tests VolumeAttachment Object: #54463

**Release note**:
```release-note
NONE
```