Hi Cephers,

We have two Octopus 15.2.17 clusters in a multisite configuration. Every once in a while we have to perform a bucket reshard (most recently to 613 shards), and this practically kills our replication for a few days.
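For reference, this is roughly the sort of thing involved; the bucket name is just a placeholder and I'm leaving the multisite-specific reshard steps aside:

  # Manual reshard (bucket name is a placeholder; 613 is the count mentioned above).
  radosgw-admin bucket reshard --bucket=<bucket> --num-shards=613

  # Checking how far replication is behind while it recovers: the data sync
  # shards stay "behind" until the peer zone catches up.
  radosgw-admin sync status
  radosgw-admin bucket sync status --bucket=<bucket>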
Does anyone know of any priority mechanism within sync to give certain buckets higher priority and/or lower the priority of others?

Are there any improvements in this area in higher versions of Ceph that we could take advantage of if we upgrade the cluster? (I haven't found any.)
How can we safely increase rgw_data_log_num_shards? The documentation only says: "The values of rgw_data_log_num_shards and rgw_md_log_max_shards should not be changed after sync has started." Does this mean that I should block access to the cluster, wait until sync has caught up with the source/master, change the value, restart the RGWs and then unblock access?
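Concretely, I imagine something like the sequence below; the config mechanism, unit name and example shard count are only placeholders, and I don't know whether this is actually safe, hence the question:

  # 1. Block client access (e.g. stop the load balancer / firewall the RGW ports).

  # 2. Wait until both zones report no backlog.
  radosgw-admin sync status

  # 3. Change the value identically on both zones (example value only; set it in
  #    ceph.conf or via "ceph config set client.rgw ...", whichever is in use).
  ceph config set client.rgw rgw_data_log_num_shards 256

  # 4. Restart every RGW daemon in both zones (unit name depends on the deployment).
  systemctl restart ceph-radosgw.target

  # 5. Verify sync is still healthy, then unblock client access.
  radosgw-admin sync status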
Kind Regards,
Tom
