Hello Cephers,
We have been running a Ceph cluster with our data pool set to 3 replicas. Given
the disk size and object size, we estimated the number of files per PG at around
8K, and we disabled folder splitting, which means all files stay in the root PG
folder. Our testing showed good performance with this setup.

Right now we are evaluating erasure coding, which splits each object into a
number of chunks and increases the number of files several times. Although XFS
claims good support for large directories [1], some testing has also shown that
we may see performance degradation with large directories.
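To put the chunking effect into numbers, here is a rough sketch with an assumed
8+3 erasure coding profile and the same made-up cluster numbers as above; the
exact multiplier depends on the profile, but the per-directory file count grows
roughly k times:

    # Illustrative comparison; values and the 8+3 EC profile are assumptions.
    osd_disk_bytes   = 4 * 1024**4   # e.g. 4 TB usable per OSD
    avg_object_bytes = 4 * 1024**2   # e.g. 4 MB average RADOS object
    pgs_per_osd      = 128           # e.g. number of PG shards hosted per OSD
    k, m             = 8, 3          # assumed EC profile: k data + m coding chunks

    chunk_bytes      = avg_object_bytes // k             # each file is now a chunk
    files_per_pg_rep = osd_disk_bytes // avg_object_bytes // pgs_per_osd  # ~8192
    files_per_pg_ec  = osd_disk_bytes // chunk_bytes // pgs_per_osd       # ~65536

    print(files_per_pg_ec // files_per_pg_rep)  # 8, i.e. about k times more files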

I would like to hear about your experience with this on your Ceph clusters if
you are using XFS. Thanks.

[1] http://www.scs.stanford.edu/nyu/02fa/sched/xfs.pdf

Thanks,
Guang