On 19/10/17 11:00, Dennis Benndorf wrote:
> Hello @all,
> 
> given the following config:
> 
>       * ceph.conf:
> 
>         ...
>         mon osd down out subtree limit = host
>         osd_pool_default_size = 3
>         osd_pool_default_min_size = 2
>         ...
> 
>       * each OSD has its journal on a 30GB partition on a PCIe-Flash-Card
>       * 3 hosts
> 
> What would happen if one host goes down? I mean, is there a limit on how long 
> this host/its OSDs can stay down? How does Ceph detect the differences between 
> OSDs within a placement group? Is there a binary log (which could run out of 
> space) in the journal/monitor, or will it just copy all objects within the PGs 
> which had unavailable OSDs?
> 
> Thanks in advance,
> Dennis

When the OSDs that were offline come back up, the PGs on those OSDs will 
resynchronise with the replicas that stayed active. Each PG keeps a short, 
bounded log of recent writes (capped by osd_min_pg_log_entries / 
osd_max_pg_log_entries), so it cannot run out of space. If the outage was 
short enough that the log still covers it, only the objects that were created 
or modified in the meantime are copied from the surviving OSDs; if the OSDs 
were down longer than the log covers, the whole PG is backfilled, i.e. the 
objects are compared and copied as needed. There is no binary logging 
replication mechanism of the kind you might be used to from mysql or similar.

Also note that with "mon osd down out subtree limit = host", the OSDs of a 
failed host will not be marked out automatically, so the cluster will not 
start re-replicating their data elsewhere, and with size=3/min_size=2 the 
pools keep serving I/O from the two remaining replicas in the meantime.
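To make the recovery decision concrete, here is a minimal sketch (not Ceph 
source code; the names pg_log, last_complete and MAX_LOG_ENTRIES are 
hypothetical) of how a bounded per-PG log lets a returning replica choose 
between log-based recovery (copy only the changed objects) and a full 
backfill:

```python
from collections import deque

# Real clusters keep a few thousand entries per PG; 5 keeps the demo small.
MAX_LOG_ENTRIES = 5

def plan_recovery(pg_log, last_complete):
    """pg_log: bounded deque of (version, object_name) writes, oldest first.
    last_complete: the last version the returning OSD had applied."""
    if not pg_log or last_complete < pg_log[0][0] - 1:
        # The OSD missed more history than the log retains: full backfill.
        return ("backfill", None)
    # Otherwise copy only the objects written after last_complete.
    missing = sorted({name for ver, name in pg_log if ver > last_complete})
    return ("log-based", missing)

# Seven writes, but the bounded log only retains the newest five (versions 3-7).
log = deque(maxlen=MAX_LOG_ENTRIES)
for ver, obj in enumerate(["a", "b", "a", "c", "d", "e", "b"], start=1):
    log.append((ver, obj))

print(plan_recovery(log, last_complete=4))  # short outage: copy changed objects
print(plan_recovery(log, last_complete=1))  # long outage: full backfill
```

Because the log is bounded, disk usage stays fixed; the trade-off is that a 
long outage forgets the fine-grained history and falls back to backfill.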

Rich

_______________________________________________
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com