
OpenStack provider reports "no such file or directory" when claiming a persistent volume #347

kkitai commented Aug 19, 2019

What happened:
I got the following error message from kube-controller-manager when I claimed a persistent volume. kubectl get pvc then shows the claim stuck in Pending.

$ kubectl get pvc
NAME   STATUS    VOLUME   CAPACITY   ACCESS MODES   STORAGECLASS   AGE
pv1    Pending                                      cinder         8m26s
# /var/vcap/sys/log/kube-controller-manager/kube-controller-manager.stderr.log
I0819 07:44:17.682405       1 event.go:209] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"pv1", UID:"22206f4b-c255-11e9-bda3-fa163e1bf5e1", APIVersion:"v1", ResourceVersion:"4537964", FieldPath:""}): type: 'Warning' reason: 'ProvisioningFailed' Failed to provision volume with StorageClass "cinder": OpenStack cloud provider was not initialized properly : stat /etc/kubernetes/cloud-config: no such file or directory
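
The error indicates that kube-controller-manager was started with --cloud-provider=openstack but could not find the matching --cloud-config file at /etc/kubernetes/cloud-config. For reference, a minimal sketch of such a file for the in-tree OpenStack provider (all values below are placeholders, not taken from this deployment):

# /etc/kubernetes/cloud-config
[Global]
auth-url=https://keystone.example.com:5000/v3
username=k8s-user
password=changeme
tenant-name=k8s-project
domain-name=Default
region=RegionOne

[BlockStorage]
bs-version=auto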

My manifest is:

# manifest.yml
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: go-sample
spec:
  replicas: 2
  selector:
    matchLabels:
      app: go-sample
  template:
    metadata:
      labels:
        app: go-sample
    spec:
      containers:
      - name: go-sample
        image: docker-registry/sample/go-sample:latest
        resources:
          requests:
            cpu: "100m"
            memory: "32Mi"
          limits:
            memory: "32Mi"
        ports:
        - containerPort: 80
        volumeMounts:
        - mountPath: "/usr/share/nginx/html"
          name: mydata
      volumes:
      - name: mydata
        persistentVolumeClaim:
          claimName: pv1
---
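
The Deployment above mounts a claim named pv1, but the PVC manifest itself is not shown. For completeness, it would look roughly like this (a sketch, assuming a 1Gi ReadWriteOnce claim against the cinder class):

# pvc.yml
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pv1
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: cinder
  resources:
    requests:
      storage: 1Gi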

And I configured the storage class:

---
# storage-class.yml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: cinder
provisioner: kubernetes.io/cinder
parameters:
  availability: az
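
Note that the failure above happens at provisioning time, not at class creation; the class itself should still list normally (illustrative output):

$ kubectl get storageclass cinder
NAME     PROVISIONER            AGE
cinder   kubernetes.io/cinder   8m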

What you expected to happen:
When I execute kubectl apply -f manifest.yml, OpenStack Cinder should provision the volume properly and attach it to the pod.
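
For illustration, a successful provision would leave the claim Bound rather than Pending, roughly like this (the volume name and capacity below are made up):

$ kubectl get pvc
NAME   STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
pv1    Bound    pvc-22206f4b-c255-11e9-bda3-fa163e1bf5e1   1Gi        RWO            cinder         1m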

How to reproduce it (as minimally and precisely as possible):

  1. Set up kubo with the OpenStack provider (using kubo-deployment).
  2. kubectl apply -f storage-class.yml
  3. kubectl apply -f manifest.yml
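
To confirm the missing file on a master VM, one can check what the controller manager was pointed at (a sketch, assuming the instance group is named master; the path is taken from the error above):

$ bosh -d <deployment> ssh master/0
$ sudo stat /etc/kubernetes/cloud-config
stat: cannot stat '/etc/kubernetes/cloud-config': No such file or directory
$ ps aux | grep [k]ube-controller-manager | grep -o 'cloud-config=[^ ]*'
cloud-config=/etc/kubernetes/cloud-config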

Anything else we need to know?:

Environment:

  • Deployment Info (bosh -d <deployment> deployment):
    kubo-deployment v0.34.0
  • Environment Info (bosh -e <environment> environment):
  • Kubernetes version (kubectl version):
$ kubectl version
Client Version: version.Info{Major:"1", Minor:"14", GitVersion:"v1.14.0", GitCommit:"641856db18352033a0d96dbc99153fa3b27298e5", GitTreeState:"clean", BuildDate:"2019-03-26T00:04:52Z", GoVersion:"go1.12.1", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"14", GitVersion:"v1.14.1", GitCommit:"b7394102d6ef778017f2ca4046abbaa23b88c290", GitTreeState:"clean", BuildDate:"2019-04-08T17:02:58Z", GoVersion:"go1.12.1", Compiler:"gc", Platform:"linux/amd64"}
  • Cloud provider (e.g. aws, gcp, vsphere):
    OpenStack (Mitaka)