On Wed, Jan 7, 2015 at 10:55 PM, Christian Balzer <ch...@gol.com> wrote:
> Which of course begs the question of why not having min_size at 1
> permanently, so that in the (hopefully rare) case of losing 2 OSDs at the
> same time your cluster still keeps working (as it should with a size of 3).

The idea is that a write must be committed to disk on at least min_size
OSDs before it is acknowledged back to the client, in case something
happens to a disk before the data can be replicated. Running with
min_size at 1 permanently also goes against Ceph's strongly consistent
model.
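To make the rule concrete, here is a toy model (my own sketch, not Ceph code) of when a write may be acknowledged under size/min_size:

```python
# Sketch (not Ceph code): a toy model of the min_size acknowledgement rule.
# A write is acknowledged to the client only once at least `min_size`
# replicas have committed it to disk; `size` is the target replica count.

def ack_write(committed_replicas: int, size: int, min_size: int) -> bool:
    """Return True if the write may be acknowledged to the client."""
    assert 1 <= min_size <= size
    return committed_replicas >= min_size

# With size=3, min_size=2: a write committed on only one OSD is not acked,
# so losing that single OSD cannot lose an acknowledged write.
print(ack_write(1, size=3, min_size=2))  # False
print(ack_write(2, size=3, min_size=2))  # True

# With min_size=1, a write acked after a single commit is vulnerable
# if that one OSD dies before replication completes.
print(ack_write(1, size=3, min_size=1))  # True
```

This is why min_size=1 trades durability for availability: an acknowledged write may exist on only one disk.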

I believe there is work underway to resolve the case where the number of
replicas drops below min_size: Ceph should automatically start
backfilling to get back to at least min_size so that I/O can continue. I
believe this work is also tied to backfill prioritization, so that PGs
below min_size are backfilled first, and only then are PGs backfilled
from min_size back up to size.
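If that prioritization lands, the ordering might look roughly like this sketch (the function and priority values are hypothetical, not Ceph internals):

```python
# Sketch (hypothetical, not Ceph internals): order placement groups for
# backfill so that PGs below min_size -- where client I/O is blocked --
# come before PGs that are degraded but still at or above min_size.

def backfill_priority(active_replicas: int, size: int, min_size: int) -> int:
    """Lower number = backfill sooner."""
    if active_replicas < min_size:
        return 0  # I/O blocked: restore to min_size first
    if active_replicas < size:
        return 1  # degraded but serving I/O: restore to full size next
    return 2      # fully replicated: nothing to do

pgs = [("pg_a", 3), ("pg_b", 1), ("pg_c", 2)]  # (name, active replicas)
ordered = sorted(pgs, key=lambda pg: backfill_priority(pg[1], size=3, min_size=2))
print([name for name, _ in ordered])  # ['pg_b', 'pg_c', 'pg_a']
```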

I am interested in a less strict, eventual-consistency option in Ceph,
so that under normal circumstances a write would only need to complete
on [min_size] OSDs rather than [size], with the primary OSD then
ensuring that the laggy OSD(s) eventually get the write committed.
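A toy model of that idea (hypothetical behavior, not an existing Ceph feature): the primary acks once min_size replicas have committed and queues the stragglers for asynchronous catch-up.

```python
# Sketch (hypothetical, not an existing Ceph feature): a primary that
# acknowledges a write once `min_size` replicas have committed it, and
# records the remaining (laggy) replicas for asynchronous catch-up.

def submit_write(replica_acks, min_size):
    """replica_acks: list of (osd, committed) pairs for one write.
    Returns (acked, laggy_osds): whether the client is acked now, and
    which OSDs the primary must replay the write to later."""
    committed = [osd for osd, ok in replica_acks if ok]
    laggy = [osd for osd, ok in replica_acks if not ok]
    acked = len(committed) >= min_size
    # If min_size isn't met, the write stays pending rather than acking.
    return acked, laggy if acked else []

acked, laggy = submit_write(
    [("osd.0", True), ("osd.1", True), ("osd.2", False)], min_size=2)
print(acked, laggy)  # True ['osd.2']
```

The trade-off is the usual one: the client sees lower tail latency, but the primary must track per-OSD replay state, and a read served by a laggy replica could return stale data unless reads are restricted to caught-up OSDs.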
_______________________________________________
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com