Panic while mirroring corrupted minio storage to openstack swift #2327
Comments
Can you provide the entire log that is visible on the console? |
ping @mishak87 we would need a lot more info about the steps and the whole log to really diagnose the issue further. |
@deekoder Sorry I am on vacation. I will be back last week of the year. |
Hi there, I'm having the same issue when mirroring two different storages, one based on Ceph and the other on S3. The full command issued is:
A little excerpt from the log file in which the issue occurs is:
|
https://medium.com/golangspec/comparison-operators-in-go-910d9d788ec0 I think the link above can help us better understand what's going on. |
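For illustration, here is a minimal, hypothetical Go sketch (not the actual mc code) of the class of panic being discussed: comparing two interface values whose dynamic type contains a map compiles fine but panics at runtime.

```go
package main

import (
	"fmt"
	"net/http"
)

// errorResponse is a stand-in for an error type that, like minio-go's
// ErrorResponse, carries an http.Header. Maps are not comparable in Go,
// so values of this struct type are not comparable either.
type errorResponse struct {
	Code    string
	Headers http.Header
}

func (e errorResponse) Error() string { return e.Code }

func main() {
	var a error = errorResponse{Code: "NoSuchKey"}
	var b error = errorResponse{Code: "NoSuchKey"}

	defer func() {
		// Prints: recovered: runtime error: comparing uncomparable type main.errorResponse
		if r := recover(); r != nil {
			fmt.Println("recovered:", r)
		}
	}()

	// This compiles because a and b are interfaces, but it panics at
	// runtime because the dynamic type contains a map field.
	fmt.Println(a == b)
}
```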
I will upgrade all minio servers in the next two days and report back if the issue still persists. Using the latest mc client. |
Client does not panic anymore 👍 Upgrade to |
I got this reported again; this most likely happens when mc receives some unexpected data. |
I'm seeing this as well...
minio version on source host (Ubuntu Xenial): ~$ uname -a
(I don't dare to upgrade this minio (distributed server setup) until I have managed to complete a full sync/mirror of all data)
mc version on target host (Ubuntu Bionic): ~# uname -a
Command:
Error output:
I have started it up again, this time with --debug to see if that will provide further info that might be useful. |
@fdaone this is probably a golang bug (golang/go#29768) but we can still add a simple workaround to avoid this situation. |
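A hedged sketch of the kind of workaround meant here (illustrative names, not the actual mc patch), reusing the errorResponse type from the sketch above: avoid `==` on error interfaces whose concrete type may contain a map, and compare only the comparable fields.

```go
// sameS3Error is a hypothetical helper: instead of comparing two error
// interface values directly (which panics if the dynamic type contains a
// map), type-assert to the concrete type and compare safe fields.
func sameS3Error(a, b error) bool {
	ea, okA := a.(errorResponse)
	eb, okB := b.(errorResponse)
	if !okA || !okB {
		return false
	}
	// Comparing individual string fields is always safe; comparing the
	// whole struct through an interface would panic because Headers is a map.
	return ea.Code == eb.Code
}
```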
Thanks for the quick response. Adding a workaround in the mc code works for me. :-) |
We are also dealing with the same issue. It happens several times a day, and we have implemented a service watcher to restart the mirror once it stops working. We're mirroring from one minio cluster to another. Any workaround to add stability would be great. |
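One stop-gap along those lines, as a hypothetical sketch (the aliases "source/bucket" and "target/bucket" and the 10s delay are placeholders for your own setup): a small supervisor that re-runs mc mirror whenever the process dies.

```go
package main

import (
	"log"
	"os/exec"
	"time"
)

// A minimal supervisor loop: restart `mc mirror` whenever it exits, so a
// panic in the client only interrupts the sync briefly instead of
// stopping it until someone notices.
func main() {
	for {
		cmd := exec.Command("mc", "mirror", "--watch", "--overwrite", "--remove",
			"source/bucket", "target/bucket")
		cmd.Stdout = log.Writer()
		cmd.Stderr = log.Writer()
		if err := cmd.Run(); err != nil {
			log.Printf("mc mirror exited: %v; restarting in 10s", err)
		}
		time.Sleep(10 * time.Second)
	}
}
```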
This time using --debug (in hopes that it reveals anything useful)
|
Also, I should probably add that I successfully mirrored everything a few days ago without stumbling on this error. At that time I wasn't mirroring from one minio setup to another, though. I was simply mirroring from a remote minio to a local filesystem (and I wasn't using --watch --overwrite --remove). |
Hi, I'm still seeing this with RELEASE.2019-01-24T01-40-23Z. Not as often as before, but it still makes the mc process die now and then, which is a big pain point for me, since it takes mc mirror DAYS to "catch up" in the sync process (around 20 million small files in total). Fingers crossed that minio/minio#6494 will eventually be merged, since having versioning in minio would make it somewhat redundant for me to mirror everything regularly to a secondary minio cluster which I then shut down and zfs snapshot for backup purposes.
|
Assigning this to @vadmeste due to the original fix here: minio/minio-go#1066 |
@fdaone I've just checked: version RELEASE.2019-01-24T01-40-23Z doesn't contain the fix (it doesn't include the updated minio-go library). The Headers field here is what causes the issue: https://github.com/minio/mc/blob/RELEASE.2019-01-24T01-40-23Z/vendor/github.com/minio/minio-go/api-error-response.go#L56 Please feel free to reopen this issue if you have further questions or are still seeing the error. |
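For context, the vendored struct behind that link looks roughly like this (an abridged sketch, not a verbatim copy); the Headers map is what makes ErrorResponse values uncomparable:

```go
// Abridged: the approximate shape of minio-go's ErrorResponse in that
// vendored release.
type ErrorResponse struct {
	Code       string
	Message    string
	BucketName string
	Key        string
	// Headers is an http.Header, i.e. a map[string][]string. Any code
	// path that compares ErrorResponse values through an error interface
	// can therefore panic with "comparing uncomparable type".
	Headers http.Header `xml:"-" json:"-"`
}
```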
This thread has been automatically locked since there has not been any recent activity after it was closed. Please open a new issue for related bugs. |
mc version:
Release-tag: RELEASE.2017-10-14T00-51-16Z
Commit-id: 785e14a

System information:
CentOS 7 Factory image on Openstack