Hi, experts,

We are using CephFS (15.2.*) with the kernel client mount in our production 
environment. Recently, whenever we do massive reads from the cluster (many 
processes in parallel), ceph health keeps reporting slow ops on some OSDs 
(built on 8 TB HDDs, each with an SSD as the DB device).

Our cluster serves far more read requests than write requests.

The health log looks like this:
100 slow ops, oldest one blocked for 114 sec, [osd.* ...] has slow ops (SLOW_OPS)
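
In case more detail helps, we can also pull per-OSD information with 
something like the commands below (illustrative only; osd.12 is just an 
example id, and the daemon commands have to be run on the host where that 
OSD lives):

  ceph health detail
  ceph daemon osd.12 dump_ops_in_flight
  ceph daemon osd.12 dump_historic_slow_ops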

 
My question is: are there any best practices for handling hundreds of 
millions of small files (100 KB-300 KB each, with 10,000+ files per 
directory and more than 5,000 directories)?
 
Is there any config we can tune, or any patch we can apply, to try to speed 
up reads (more important to us than writes)? Or is there another file system 
we should try? (We are also not sure CephFS is the best choice for storing 
such a huge number of small files.)
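
For illustration, the kind of knobs we have seen mentioned but have not 
validated ourselves are client readahead and the MDS/OSD cache sizes, e.g. 
(monitor address, paths and sizes below are placeholders, not our real 
values):

  # kernel mount with a larger readahead window (rasize is in bytes)
  mount -t ceph <mon-addr>:/ /mnt/cephfs \
    -o name=admin,secretfile=/etc/ceph/admin.secret,rasize=16777216

  # cache-related options we are wondering about
  ceph config set mds mds_cache_memory_limit 17179869184   # 16 GiB
  ceph config set osd osd_memory_target 8589934592          # 8 GiB

We do not know whether these are the right things to touch for this 
workload, so corrections are very welcome.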

Please shed some light here, experts! We really need your help!

Any suggestions are welcome! Thanks in advance!

Thanks,
zx

_______________________________________________
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io
