Hello everybody,

since automatic resharding does not work on replicated clusters (we only sync
the user accounts and metadata, not the actual data), I would like to handle
resharding myself.

But when I reshard a bucket from 53 to 101 shards (yep, we have two buckets
with around 8 million objects in them), it takes a long time. So my question
is: does resharding affect customer workload in any way, or does it put their
data at risk if they keep uploading objects while the reshard is running?
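For reference, this is roughly what I run (the bucket name is just a
placeholder, 101 is the target shard count mentioned above):

  # manual reshard to 101 shards
  radosgw-admin bucket reshard --bucket=<bucket-name> --num-shards=101

  # afterwards, check the state of the reshard
  radosgw-admin reshard status --bucket=<bucket-name>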

And how do you approach this in general? Do you set a very high default shard
count for all buckets, or do you just ignore the "large omap objects" warning?
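Just so we are talking about the same knobs, these are the settings I have in
mind (the values are only examples, not what we actually run):

  # ceph.conf / "ceph config set" -- example values only
  rgw_override_bucket_index_max_shards = 101     # default shard count for newly created buckets
  rgw_max_objs_per_shard = 100000                # objects per shard before resharding would be suggested
  osd_deep_scrub_large_omap_object_key_threshold = 200000   # key count that triggers the large omap warning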

Cheers
 Boris