
x509: certificate has expired or is not yet valid #2951

Open
lukaszgryglicki opened this issue Mar 4, 2020 · 11 comments
Labels: No ISSUE activity, Team discussing PLAN (Team is discussing & researching a plan to fix)


@lukaszgryglicki

Description

Trying to create the following PVC:

---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  namespace: default
  name: pvtest
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: openebs-hostpath
  resources:
    requests:
      storage: 1Gi

One cluster is OK:

  • k apply -f tpv.yaml --> persistentvolumeclaim/pvtest created
  • k delete -f tpv.yaml --> persistentvolumeclaim "pvtest" deleted

Another cluster:

  • k apply -f tpv.yaml -->
Error from server (InternalError): error when creating "tpv.yaml": Internal error occurred: failed calling webhook "admission-webhook.openebs.io": Post https://admission-server-svc.openebs.svc:443/validate?timeout=30s: x509: certificate has expired or is not yet valid

Both clusters were deployed in the same way, have the same configuration, and both worked just fine for about six months until today (I didn't change anything in their config).
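
A quick way to confirm that the webhook's serving certificate has really expired (rather than this being a port or proxy problem) is to read the certificate's validity dates directly from the endpoint named in the error. This is only a rough sketch; it has to run from a pod inside the cluster, because the Service address is cluster-local:

# Print the webhook certificate's notBefore/notAfter dates.
echo | openssl s_client -connect admission-server-svc.openebs.svc:443 2>/dev/null \
  | openssl x509 -noout -dates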

Expected Behavior

  • The second cluster should just create the PVC, then delete it.
  • Actually, I noticed this while trying to delete and recreate one namespace. The 1st cluster was OK; the 2nd has been stuck terminating that namespace forever. After some time I found that the root cause is the inability to delete that namespace's PVCs; they're now reported as "Lost".

Get namespaces:

sds                      Terminating   36d

PVCs in that namespace:

sds-0   Lost     pvc-07c870dd-41c9-11ea-a753-022eab783cb2   0                         openebs-hostpath   36d
sds-1   Lost     pvc-07c6a170-41c9-11ea-a753-022eab783cb2   0                         openebs-hostpath   36d
sds-2   Lost     pvc-07c949ec-41c9-11ea-a753-022eab783cb2   0                         openebs-hostpath   36d
sds-3   Lost     pvc-07c77cf6-41c9-11ea-a753-022eab783cb2   0                         openebs-hostpath   36d
  • `kubectl get nodes`:
ip-192-168-25-150.us-west-2.compute.internal   Ready    <none>   180d   v1.13.8-eks-cd3eb0
ip-192-168-49-247.us-west-2.compute.internal   Ready    <none>   180d   v1.13.8-eks-cd3eb0
ip-192-168-64-125.us-west-2.compute.internal   Ready    <none>   180d   v1.13.8-eks-cd3eb0
ip-192-168-83-36.us-west-2.compute.internal    Ready    <none>   180d   v1.13.8-eks-cd3eb0
  • `kubectl get pods --all-namespaces`:

NAMESPACE NAME READY STATUS RESTARTS AGE
default local-storage-nfs-nfs-server-provisioner-0 1/1 Running 1 36d
dev-analytics-api-prod dev-analytics-api-7c7f56b8fc-448mh 1/1 Running 0 7h11m
dev-analytics-api-prod dev-analytics-api-7c7f56b8fc-jsksj 1/1 Running 0 7h11m
dev-analytics-api-prod dev-analytics-api-migration-1567762361901059583 0/1 Completed 0 180d
dev-analytics-ui dev-analytics-ui-6c984d9958-8z26v 1/1 Running 0 11d
json2hat json2hat-1581814860-dckpn 0/1 Completed 0 17d
json2hat json2hat-1582419660-jj47j 0/1 Completed 0 10d
json2hat json2hat-1583024460-rb7t8 0/1 Completed 0 3d15h
kibana kibana-7567969554-mvvp6 0/1 Running 0 31m
kibana kibana-7567969554-vnfcq 0/1 Running 0 31m
kube-system aws-node-lrh7d 1/1 Running 0 180d
kube-system aws-node-qnnjk 1/1 Running 0 180d
kube-system aws-node-rkn9z 1/1 Running 0 180d
kube-system aws-node-xp6mf 1/1 Running 1 180d
kube-system coredns-79d667b89f-22p4h 1/1 Running 0 42d
kube-system coredns-79d667b89f-fb8x5 1/1 Running 0 42d
kube-system kube-proxy-dv7bn 1/1 Running 0 180d
kube-system kube-proxy-mfpwx 1/1 Running 1 180d
kube-system kube-proxy-rs7d7 1/1 Running 0 180d
kube-system kube-proxy-sp9jm 1/1 Running 0 180d
mariadb backups-page-dcc76bd4d-kbzh9 1/1 Running 0 36d
mariadb mariadb-backups-1583203500-p4zmb 0/1 Completed 0 37h
mariadb mariadb-backups-1583289900-cvpqr 0/1 Completed 0 13h
mariadb mariadb-master-0 1/1 Running 3 153d
mariadb mariadb-slave-0 1/1 Running 0 153d
openebs maya-apiserver-c54455947-44k2r 1/1 Running 0 42d
openebs openebs-admission-server-b945f5f94-bwlgk 1/1 Running 0 30m
openebs openebs-localpv-provisioner-6c778bbc9c-7ckjp 1/1 Running 0 27m
openebs openebs-ndm-259t6 1/1 Running 1 180d
openebs openebs-ndm-dxhml 1/1 Running 0 180d
openebs openebs-ndm-ht9dw 1/1 Running 0 180d
openebs openebs-ndm-m725n 1/1 Running 0 180d
openebs openebs-ndm-operator-796785b89f-dfgq7 1/1 Running 0 36d
openebs openebs-provisioner-7db48b57f4-wq5nd 1/1 Running 1 42d
openebs openebs-snapshot-operator-5664bf5777-cssqm 2/2 Running 7 180d
sortinghat-api sortinghat-api-5f9d67cd8d-hx5kq 1/1 Running 0 180d

  • `kubectl get services`:

NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.100.0.1 <none> 443/TCP 182d
local-storage-nfs-nfs-server-provisioner ClusterIP 10.100.255.228 <none> 2049/TCP,20048/TCP,51413/TCP,51413/UDP 36d

  • `kubectl get sc`:

NAME PROVISIONER AGE
gp2 (default) kubernetes.io/aws-ebs 182d
local-storage kubernetes.io/no-provisioner 182d
nfs-openebs-localstorage cluster.local/local-storage-nfs-nfs-server-provisioner 36d
openebs-device openebs.io/local 182d
openebs-hostpath openebs.io/local 182d
openebs-jiva-default openebs.io/provisioner-iscsi 182d
openebs-snapshot-promoter volumesnapshot.external-storage.k8s.io/snapshot-promoter 182d

  • `kubectl get pv`:

NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
pvc-050a7761-d075-11e9-95db-024cac4c3a40 4Gi RWO Delete Bound mariadb/data-mariadb-master-0 openebs-hostpath 180d
pvc-050e5e66-d075-11e9-95db-024cac4c3a40 4Gi RWO Delete Bound mariadb/data-mariadb-slave-0 openebs-hostpath 180d
pvc-57f3604e-41c8-11ea-a753-022eab783cb2 4Gi RWO Delete Bound default/data-local-storage-nfs-nfs-server-provisioner-0 openebs-hostpath 36d
pvc-63fe9708-41c8-11ea-a753-022eab783cb2 4Gi RWX Delete Bound mariadb/mariadb-backups nfs-openebs-localstorage 36d

  • `kubectl get pvc`:

NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
data-local-storage-nfs-nfs-server-provisioner-0 Bound pvc-57f3604e-41c8-11ea-a753-022eab783cb2 4Gi RWO openebs-hostpath 36d

  • OS (from `/etc/os-release`):

NAME="Ubuntu"
VERSION="18.04.2 LTS (Bionic Beaver)"
ID=ubuntu
ID_LIKE=debian
PRETTY_NAME="Ubuntu 18.04.2 LTS"
VERSION_ID="18.04"
HOME_URL="https://www.ubuntu.com/"
SUPPORT_URL="https://help.ubuntu.com/"
BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
VERSION_CODENAME=bionic
UBUNTU_CODENAME=bionic

  • Kernel (from `uname -a`):

Linux ip-172-31-34-53 4.15.0-1043-aws #45-Ubuntu SMP Mon Jun 24 14:07:03 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux


OpenEBS version: `1.1.0`
@lukaszgryglicki
Author

Found this here:

When a user creates or deletes a PVC, validation triggers and the request is intercepted by the admission webhook controller after authentication/authorization by kube-apiserver. By default the admission webhook service is configured on port 443, and the error above suggests that either port 443 is not allowed in the cluster or the admission webhook service has to be allowed in the k8s cluster's proxy settings.

But I don't know what I should actually do, or what I can do.
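
One way to see exactly which webhook is intercepting PVC requests, and which CA certificate the kube-apiserver uses to verify it, is to dump the ValidatingWebhookConfiguration. A sketch only: the configuration name `validation-webhook-cfg` matches the fix posted below, and the single-entry jsonpath is an assumption about how many webhooks it contains:

# List registered validating webhooks and inspect the OpenEBS one.
kubectl get validatingwebhookconfigurations
kubectl get validatingwebhookconfigurations validation-webhook-cfg -o yaml

# Decode the CA bundle embedded in the webhook config and print its validity dates.
kubectl get validatingwebhookconfigurations validation-webhook-cfg \
  -o jsonpath='{.webhooks[0].clientConfig.caBundle}' | base64 -d \
  | openssl x509 -noout -dates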

@lukaszgryglicki
Author

lukaszgryglicki commented Mar 5, 2020

Fixed via OpenEBS Slack online help:

The fix was:

kubectl delete validatingwebhookconfigurations validation-webhook-cfg
kubectl delete deploy openebs-admission-server -n openebs

Then `kubectl create -f example.yaml` and `kubectl delete -f example.yaml` started to work again with this example.yaml file:

---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  namespace: default
  name: pvtest
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: openebs-hostpath
  resources:
    requests:
      storage: 1Gi

Thanks again pandey14 (@prateekpandey14) from Slack.

I'll mention OpenEBS usage in CNCF here: #2719 (as requested on slack).

I will do it today, ASAP. I just need to fix other stuff that was broken due to this issue; some things need to be redeployed, restarted, etc. Once all is green, I'll add my use case there.

@github-actions

github-actions bot commented Jun 4, 2020

Issues go stale after 90d of inactivity.

@dpedu

dpedu commented Aug 26, 2021

I ran into this issue on my installation and wanted to add a few notes. In my case, I needed to delete the secret admission-server-secret, which contains the cert the admission-server uses in its webserver. I also had to delete the validatingwebhookconfigurations validation-webhook-cfg mentioned above because the webhook configuration also has information about the certificate authority the admission-server used to self-sign its cert, for obvious security reasons. After this, restarting (deleting) the openebs-admission-server pod regenerated the above hook and secret and everything was working correctly again.
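
Spelled out as commands, that sequence is roughly the following (a sketch; the assumption that the secret lives in the `openebs` namespace, and the grep-based pod restart, may need adjusting for your install):

# Delete the secret holding the expired serving certificate and the webhook
# configuration that embeds its (now stale) CA bundle.
kubectl -n openebs delete secret admission-server-secret
kubectl delete validatingwebhookconfigurations validation-webhook-cfg

# Restart the admission-server pod; on startup it recreates both the secret
# and the webhook configuration with a fresh certificate.
kubectl -n openebs get pods -o name | grep admission-server | xargs kubectl -n openebs delete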

I don't understand why the comments above suggest deleting the openebs-admission-server deploy entirely.

@flatboyseb

Thanks @dpedu!
I followed your instructions and deleted both validatingwebhookconfigurations (`openebs-validation-webhook-cfg` and `openebs-cstor-validation-webhook`), both secrets (`admission-server-secret` and `openebs-cstor-admission-secret`), and finally ran the command `kubectl -n openebs get pods -o name | grep admission-server | xargs kubectl -n openebs delete` from the OpenEBS documentation. Worked like a charm.
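
For reference, that sequence as commands (a sketch; the `-n openebs` namespace for the two secrets is an assumption, adjust if yours differ):

# Delete both webhook configurations and both admission secrets, then restart
# the admission-server pods so they regenerate fresh certificates.
kubectl delete validatingwebhookconfigurations openebs-validation-webhook-cfg openebs-cstor-validation-webhook
kubectl -n openebs delete secret admission-server-secret openebs-cstor-admission-secret
kubectl -n openebs get pods -o name | grep admission-server | xargs kubectl -n openebs delete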

@github-actions

github-actions bot commented May 9, 2022

Issues go stale after 90d of inactivity. Please comment or re-open the issue if you are still interested in getting this issue fixed.

@github-actions

github-actions bot commented Aug 8, 2022

Issues go stale after 90d of inactivity. Please comment or re-open the issue if you are still interested in getting this issue fixed.

@avishnu
Member

avishnu commented Jan 20, 2024

@niladrih please check if this is still an issue.

@niladrih
Member

This is still an issue, @avishnu, afaik; #2951 (comment) is the resolution at the moment.

@lwabish

lwabish commented Feb 27, 2024

Ran into this too with chart version cstor-3.4.0, app version 3.4.0.

@orville-wright
Contributor

This issue has had a lot of commentary over 4 years. Either it is resolved or we need to fix it; it can't just persist forever.
What's the status here? @avishnu @niladrih
