Unable to create erasure-coded cephfs, presumably due to ceph-bug #1785465 #7319
Comments
Think I found the culprit here; it might not be a bug after all. Working on confirming. :-)
So, this was almost all me.
Wrong fsName/pool in the cephfs storageclass. I did encounter #4006 as well, but that was easily fixed by creating the subvolumes (remember to replace cephfs with myfs-ec). That being said, would it make sense to have a separate storageclass-ec.yaml for cephfs? Since it's basically just this diff, I can create a PR for it if wanted. Consider the "bug" rejected; the only open question here is whether that PR is wanted.
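For reference, the #4006 workaround mentioned above amounts to creating the CSI subvolume group by hand from the toolbox. A sketch, not taken verbatim from this thread: `myfs-ec` is the filesystem name used in this issue, and `csi` is the subvolume group name the CephFS CSI driver provisions into by default.

```shell
# Run inside the rook-ceph-tools pod.
# "myfs-ec" = the CephFilesystem from filesystem-ec.yaml in this issue;
# "csi" = the default subvolume group expected by the CephFS CSI driver.
ceph fs subvolumegroup create myfs-ec csi
```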
@davralin Good to hear it's working now. Yes, sounds good if you want to open a PR for the cephfs storageclass-ec.yaml. We already have one for rbd, so an example would help for cephfs as well. Thanks!
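Such a cephfs storageclass-ec.yaml would essentially be the stock csi/cephfs/storageclass.yaml with fsName/pool pointed at the erasure-coded filesystem. A hedged sketch follows; the class name, the data-pool name `myfs-ec-data0`, and the `rook-ceph` namespace are assumptions to check against your cluster:

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: rook-cephfs-ec
# Provisioner is prefixed with the operator namespace (assumed rook-ceph here)
provisioner: rook-ceph.cephfs.csi.ceph.com
parameters:
  clusterID: rook-ceph           # namespace of the CephCluster
  fsName: myfs-ec                # must match the CephFilesystem name
  pool: myfs-ec-data0            # the erasure-coded data pool (assumed name)
  csi.storage.k8s.io/provisioner-secret-name: rook-csi-cephfs-provisioner
  csi.storage.k8s.io/provisioner-secret-namespace: rook-ceph
  csi.storage.k8s.io/node-stage-secret-name: rook-csi-cephfs-node
  csi.storage.k8s.io/node-stage-secret-namespace: rook-ceph
reclaimPolicy: Delete
```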
**Description of your changes:** Add dedicated documentation for cephfs-storageclass using erasure-coding.

**Which issue is resolved by this Pull Request:** Resolves rook#7319

**Checklist:**
- [x] **Commit Message Formatting**: Commit titles and messages follow guidelines in the [developer guide](https://rook.io/docs/rook/master/development-flow.html#commit-structure).
- [x] **Skip Tests for Docs**: Add the flag for skipping the build if this is only a documentation change. See [here](https://github.com/rook/rook/blob/master/INSTALL.md#skip-ci) for the flag.
- [ ] **Skip Unrelated Tests**: Add a flag to run tests for a specific storage provider. See [test options](https://github.com/rook/rook/blob/master/INSTALL.md#test-storage-provider).
- [x] Reviewed the developer guide on [Submitting a Pull Request](https://rook.io/docs/rook/master/development-flow.html#submitting-a-pull-request)
- [x] Documentation has been updated, if necessary.
- [ ] Unit tests have been added, if necessary.
- [ ] Integration tests have been added, if necessary.
- [ ] Pending release notes updated with breaking and/or notable changes, if necessary.
- [ ] Upgrade from previous release is tested and upgrade user guide is updated, if necessary.
- [ ] Code generation (`make codegen`) has been run to update object specifications, if necessary.

Signed-off-by: davralin <git@davralin.work>
I am using the master branch and I am running into the same issue: the PVC is always Pending. Here are some logs based on the common-issues document. All my OSDs are bluestore.
Here is how I deployed my cluster with cephfs-ec.
Created a PVC, but it is always Pending.
Logs based on common-issues document
Increase the log level in rook/cluster/examples/kubernetes/ceph/operator.yaml (lines 40 to 42 at commit 3e299fb).
Here are the logs for csi-cephfsplugin-provisioner with `CSI_LOG_LEVEL: "5"` set.
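For anyone following along, bumping the CSI log level is an operator setting. A sketch assuming a Rook v1.5-era operator.yaml, where `CSI_LOG_LEVEL` lives in the rook-ceph-operator-config ConfigMap (key placement can differ between Rook releases):

```yaml
# Excerpt of operator.yaml (a sketch; verify against your Rook version).
kind: ConfigMap
apiVersion: v1
metadata:
  name: rook-ceph-operator-config
  namespace: rook-ceph
data:
  CSI_LOG_LEVEL: "5"   # 0-5; 5 is the most verbose
```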
Is this a bug report or feature request? Bug report
Background:
I had my bulk-storage cephfs pool created with replication and decided to recreate it with erasure-coded pools instead, to increase storage capacity.
Deleted all PVCs from cephfs: `kubectl delete -f filesystem.yaml; kubectl delete -f csi/cephfs/storageclass.yaml`.
`kubectl apply -f csi/cephfs/storageclass.yaml; kubectl apply -f filesystem-ec.yaml` (with minor tweaks to deviceClass and chunks).
PVCs didn't work. Further investigation last night led me to this ceph-issue, so I decided to call it quits and return today to create an issue.
Unfortunately, due to a power issue, I have no idea where I got that log output from, and I can't seem to find any traces of it either. PVC creation still doesn't work, though.
Deleting and recreating everything with replicated pools makes PVC creation work again.
Deviation from expected behavior:
Unable to create an erasure-coded cephfs.
Perhaps the creation is missing a `--force`, as required per https://bugzilla.redhat.com/show_bug.cgi?id=1785465
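The linked Bugzilla concerns `ceph fs new` refusing an erasure-coded default data pool unless forced. A sketch of the manual equivalent from the toolbox (pool names and PG counts here are illustrative assumptions, not taken from this cluster):

```shell
# Metadata pools must be replicated; an EC pool can serve as a data pool,
# but "ceph fs new" requires --force when the default data pool is EC.
ceph osd pool create myfs-ec-metadata 16 replicated
ceph osd pool create myfs-ec-data0 32 erasure
ceph osd pool set myfs-ec-data0 allow_ec_overwrites true
ceph fs new myfs-ec myfs-ec-metadata myfs-ec-data0 --force
```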
Expected behavior:
`kubectl apply -f csi/cephfs/storageclass.yaml; kubectl apply -f filesystem-ec.yaml`
Then be able to provision PVCs.
How to reproduce it (minimal and precise):
`kubectl apply -f csi/cephfs/storageclass.yaml; kubectl apply -f filesystem-ec.yaml`
Then create a PVC; `kubectl get events` in that namespace reveals: `failed to provision volume with StorageClass "rook-cephfs": rpc error: code = InvalidArgument desc = volume not found`
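A minimal PVC that reproduces this against the `rook-cephfs` storage class might look like the following (a sketch; the claim name and requested size are arbitrary):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: cephfs-pvc
spec:
  accessModes:
    - ReadWriteMany          # CephFS supports shared access
  storageClassName: rook-cephfs
  resources:
    requests:
      storage: 1Gi
```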
File(s) to submit:
cluster.yaml, if necessary
Diff with upstream filesystem-ec.yaml:
To get logs, use `kubectl -n <namespace> logs <pod name>`.
When pasting logs, always surround them with backticks or use the insert code button from the GitHub UI. Read the GitHub documentation if you need help.
Environment:
- OS: PRETTY_NAME="Debian GNU/Linux 10 (buster)"
- Kernel (`uname -a`): Linux Node1 4.19.0-14-amd64 #1 SMP Debian 4.19.171-2 (2021-01-30) x86_64 GNU/Linux
- Cloud provider or hardware configuration: bunch of whitebox homelab on-prem
- Rook version (`rook version` inside a Rook Pod): rook: v1.5.7, go: go1.13.8
- Storage backend version (`ceph -v`): ceph version 15.2.8 (bdf3eebcd22d7d0b3dd4d5501bee5bac354d5b55) octopus (stable)
- Kubernetes version (`kubectl version`):
  - Client Version: version.Info{Major:"1", Minor:"20", GitVersion:"v1.20.4", GitCommit:"e87da0bd6e03ec3fea7933c4b5263d151aafd07c", GitTreeState:"clean", BuildDate:"2021-02-18T16:12:00Z", GoVersion:"go1.15.8", Compiler:"gc", Platform:"linux/amd64"}
  - Server Version: version.Info{Major:"1", Minor:"20", GitVersion:"v1.20.4", GitCommit:"e87da0bd6e03ec3fea7933c4b5263d151aafd07c", GitTreeState:"clean", BuildDate:"2021-02-18T16:03:00Z", GoVersion:"go1.15.8", Compiler:"gc", Platform:"linux/amd64"}
- Kubernetes cluster type: kubernetes
- Storage backend status (`ceph health` in the Rook Ceph toolbox):
  - pre-apply:
  - post-apply: