I had a similar issue with ls. Increasing the MDS cache limit helped
on our test cluster:

mds_cache_memory_limit = 8000000000
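
For reference, a minimal sketch of how we apply it on the MDS host
(assuming the setting goes under [mds] in ceph.conf; the injectargs
form applies it at runtime without a restart):

[mds]
    mds_cache_memory_limit = 8000000000   # ~8 GB; the Luminous default is 1 GB

ceph tell mds.* injectargs '--mds_cache_memory_limit=8000000000'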





-----Original Message-----
From: Surya Bala [mailto:sooriya.ba...@gmail.com] 
Sent: Tuesday, 17 July 2018 11:39
To: Anton Aleksandrov
Cc: ceph-users@lists.ceph.com
Subject: Re: [ceph-users] ls operation is too slow in cephfs

Thanks for the reply, Anton. 


CPU core count - 40
RAM - 250GB 

We have a single active MDS, Ceph version Luminous 12.2.4. The default PG
number is 64 and we do not change the PG count when creating pools. We
have 8 servers in total, each with 60 OSDs of 6TB size.
The 8 servers are split into 2 per region. The CRUSH map is designed to use 2
servers for each pool.
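
These figures can be confirmed with something like the following
(a sketch; output details vary by release):

ceph fs status                # active MDS count and client sessions
ceph osd pool ls detail       # pg_num per pool
ceph osd tree                 # 8 hosts x 60 OSDs layout
ceph osd crush rule dump      # the per-region / 2-server placement rules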

Regards
Surya Balan


On Tue, Jul 17, 2018 at 1:48 PM, Anton Aleksandrov 
<an...@aleksandrov.eu> wrote:


        You need to give us more details about your OSD setup and the hardware 
specification of your nodes (CPU core count, RAM amount).
        


        On 2018.07.17. 10:25, Surya Bala wrote:
        

                Hi folks, 

                We have a production cluster with 8 nodes, and each node has 60 
disks of 6TB each. We are using CephFS and the FUSE client with a global 
mount point. We are doing rsync from our old server to this cluster, and 
rsync is slow compared to the normal server. 

                When we do 'ls' inside a folder that has a large number of 
files, like 100,000 or 200,000 (1-2 lakh), the response is too slow. 

                Any suggestions please

                Regards
                Surya

                 
                






_______________________________________________
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
