Hello,
we recently upgraded two clusters to Ceph Luminous with BlueStore, and since
then we see many more pgs in state active+clean+inconsistent
("Possible data damage, xx pgs inconsistent" in the health output).

This is probably because BlueStore's checksums detect corruption that
previously went unnoticed.
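
If it matters, we can reproduce the errors on demand: a deep scrub re-reads
the objects and re-checks the checksums, and the pg is flagged again (2.1f
here is just an example pg id, not one from our cluster):

    ceph pg deep-scrub 2.1f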

We have some pools with replica 2 and some with replica 3.

I have read past forum threads and have seen that Ceph does not repair
inconsistent pgs automatically.

Even a manual repair sometimes fails.
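
For reference, this is roughly what I have been doing so far (again, 2.1f is
just an example pg id taken from "ceph health detail"):

    # see which pgs are inconsistent
    ceph health detail
    # inspect which objects the scrub flagged, and on which OSDs
    rados list-inconsistent-obj 2.1f --format=json-pretty
    # ask the primary OSD to repair the pg
    ceph pg repair 2.1f

This is the manual repair I mean when I say it sometimes fails.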

I would like to understand whether I am losing data:

- with replica 2, I hope that Ceph chooses the right replica by looking at
the checksums
- with replica 3, I hope there are no problems at all, since two good copies
should still remain

How can I tell Ceph to simply re-create the second replica somewhere else?

I ask because I suppose that with replica 2 and an inconsistent pg I
effectively have only one good copy of the data.
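
My naive idea was something like the following, but I am not sure it is safe
or even correct (osd.7 stands for whichever OSD holds the bad copy):

    # mark the OSD with the bad copy out, so Ceph backfills a fresh
    # replica from the surviving copy onto another OSD
    ceph osd out 7
    # once backfill has finished, put the OSD back in
    ceph osd in 7

Is this reasonable, or is there a proper way to do it?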

Thank you in advance for any help.

Mario
