First, thanks Xiubo for your feedback!

To follow up on the points raised by Sake:
- How does this happen? -> There were no warning signs before the incident

- Is this avoidable? -> Good question, I'd also like to know how!

- How to fix the issue? -> So far, no fix or workaround from what I have read.
I am very interested in finding a way to get the storage running again; for
now our cluster is out of order since it is no longer possible to write to it
(good to know that the data is still readable, by the way!). I'm not a Ceph
guru, so I don't want to play with settings or parameters on my own as the
result could be even worse, but help getting the system available again would
be greatly appreciated! I can share the output of the read-only status
commands listed after these points if that helps.

- Should you use CephFS with Reef? -> Well, from my experience, not for
production
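
For anyone willing to dig into this, here is a minimal sketch of the status
commands I can run and share output from (assuming the standard ceph CLI on
the admin node; these are read-only queries and change nothing on the
cluster):

    # Read-only queries, safe to run even on a degraded cluster
    ceph -s
    ceph health detail
    ceph fs status
    ceph mds stat
    ceph versions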

Thanks to everyone who helped me or will help me find a solution!
Nicolas