Re: [ceph-users] MDS performance issue

2019-01-27 Thread Yan, Zheng
On Mon, Jan 28, 2019 at 10:34 AM Albert Yue wrote:
>
> Hi Yan Zheng,
>
> Our clients are also complaining about operations like 'du' or 'ncdu' being
> very slow. Is there any alternative tool for such kind of operation on
> CephFS? Thanks!
>

'du' traverses the whole directory tree to calculate
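
The traversal can be avoided on CephFS, which maintains recursive statistics for every directory and exposes them as virtual extended attributes. A minimal sketch, assuming a kernel or ceph-fuse client that exposes the ceph.dir.* xattrs and a mount at /mnt/cephfs (the path is a placeholder):

    # Recursive byte count of the whole subtree, answered by the MDS
    # instead of computed by walking every file:
    getfattr -n ceph.dir.rbytes /mnt/cephfs/some/dir

    # Recursive file and subdirectory counts:
    getfattr -n ceph.dir.rfiles /mnt/cephfs/some/dir
    getfattr -n ceph.dir.rsubdirs /mnt/cephfs/some/dir

Note that these rstats propagate upward lazily, so the values can briefly lag behind recent writes.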

Re: [ceph-users] MDS performance issue

2019-01-27 Thread Albert Yue
Hi Yan Zheng,

Our clients are also complaining about operations like 'du' or 'ncdu' being very slow. Is there any alternative tool for such kind of operation on CephFS? Thanks!

Best regards,
Albert

On Wed, Jan 23, 2019 at 11:04 AM Yan, Zheng wrote:
> On Wed, Jan 23, 2019 at 10:02 AM Albert

Re: [ceph-users] MDS performance issue

2019-01-22 Thread Yan, Zheng
On Wed, Jan 23, 2019 at 10:02 AM Albert Yue wrote:
>
> But with enough memory on MDS, I can just cache all metadata into memory.
> Right now there are around 500GB of metadata on the ssd. So this is not enough?
>

The mds needs to track lots of extra information for each object. For 500G of metadata,

Re: [ceph-users] MDS performance issue

2019-01-22 Thread Albert Yue
But with enough memory on MDS, I can just cache all metadata into memory. Right now there are around 500GB of metadata on the ssd. So this is not enough?

On Tue, Jan 22, 2019 at 5:48 PM Yan, Zheng wrote:
> On Tue, Jan 22, 2019 at 10:49 AM Albert Yue wrote:
> >
> > Hi Yan Zheng,
> >
> > In your

Re: [ceph-users] MDS performance issue

2019-01-22 Thread Yan, Zheng
On Tue, Jan 22, 2019 at 10:49 AM Albert Yue wrote:
>
> Hi Yan Zheng,
>
> In your opinion, can we resolve this issue by moving MDS to a 512GB or 1TB
> memory machine?
>

The problem is on the client side, especially clients with large memory. I don't think enlarging the mds cache size is a good idea. You can

Re: [ceph-users] MDS performance issue

2019-01-21 Thread Albert Yue
Hi Yan Zheng,

In your opinion, can we resolve this issue by moving MDS to a 512GB or 1TB memory machine?

On Mon, Jan 21, 2019 at 10:49 PM Yan, Zheng wrote:
> On Mon, Jan 21, 2019 at 11:16 AM Albert Yue wrote:
> >
> > Dear Ceph Users,
> >
> > We have set up a cephFS cluster with 6 osd

Re: [ceph-users] MDS performance issue

2019-01-21 Thread Gregory Farnum
On Mon, Jan 21, 2019 at 12:52 AM Yan, Zheng wrote:
> On Mon, Jan 21, 2019 at 12:12 PM Albert Yue wrote:
> >
> > Hi Yan Zheng,
> >
> > 1. mds cache limit is set to 64GB
> > 2. we get the size of meta data pool by running `ceph df` and saw meta
> > data pool just used 200MB space.
> >
> That's

Re: [ceph-users] MDS performance issue

2019-01-21 Thread Alexandre DERUMIER
s/vm/drop_caches
fi

----- Original Message -----
From: "Marc Roos"
To: "transuranium.yue", "Zheng Yan"
Cc: "ceph-users"
Sent: Monday, 21 January 2019 15:53:17
Subject: Re: [ceph-users] MDS performance issue

How can you see that the cache is filling up and you need
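
A cron script of that shape might look like the following hypothetical sketch; the 16 GiB threshold and the SReclaimable check are illustrative assumptions, not taken from the original script:

    #!/bin/sh
    # Drop dentries and inodes only when reclaimable slab has grown large.
    slab_kb=$(awk '/^SReclaimable:/ {print $2}' /proc/meminfo)
    if [ "$slab_kb" -gt $((16 * 1024 * 1024)) ]; then
        sync
        echo 2 > /proc/sys/vm/drop_caches
    fi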

Re: [ceph-users] MDS performance issue

2019-01-21 Thread Marc Roos
How can you see that the cache is filling up and you need to execute
"echo 2 > /proc/sys/vm/drop_caches"?

-----Original Message-----
From: Yan, Zheng [mailto:uker...@gmail.com]
Sent: 21 January 2019 15:50
To: Albert Yue
Cc: ceph-users
Subject: Re: [ceph-users] MDS performance
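
Since the dentry and inode caches live in the kernel slab, their growth can be watched through standard /proc interfaces. A sketch using generic Linux tooling, not anything specific to this thread (ceph_inode_info is the slab the kernel CephFS client typically registers):

    # Total vs. unused dentries (first two fields):
    cat /proc/sys/fs/dentry-state

    # Per-cache breakdown; watch the 'dentry' and 'ceph_inode_info' slabs:
    sudo slabtop -o | head -20

    # Aggregate reclaimable slab memory:
    grep SReclaimable /proc/meminfo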

Re: [ceph-users] MDS performance issue

2019-01-21 Thread Yan, Zheng
On Mon, Jan 21, 2019 at 11:16 AM Albert Yue wrote:
>
> Dear Ceph Users,
>
> We have set up a cephFS cluster with 6 osd machines, each with 16 8TB
> harddisks. Ceph version is luminous 12.2.5. We created one data pool with
> these hard disks and created another meta data pool with 3 ssd. We

Re: [ceph-users] MDS performance issue

2019-01-21 Thread Yan, Zheng
On Mon, Jan 21, 2019 at 12:12 PM Albert Yue wrote:
>
> Hi Yan Zheng,
>
> 1. mds cache limit is set to 64GB
> 2. we get the size of meta data pool by running `ceph df` and saw meta data
> pool just used 200MB space.
>

That's very strange. One file uses about 1k metadata storage. 560M files
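
Spelling out the arithmetic behind that reaction: at roughly 1 KB of on-disk metadata per file, the pool usage should be in the hundreds of gigabytes, as the figures below show.

    560,000,000 files x ~1 KB/file ≈ 560 GB expected in the meta data pool,
    yet `ceph df` reported only 200 MB used.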

Re: [ceph-users] MDS performance issue

2019-01-20 Thread Albert Yue
Hi Yan Zheng,

1. mds cache limit is set to 64GB
2. we get the size of meta data pool by running `ceph df` and saw meta data pool just used 200MB space.

Thanks,

On Mon, Jan 21, 2019 at 11:35 AM Yan, Zheng wrote:
> On Mon, Jan 21, 2019 at 11:16 AM Albert Yue wrote:
> >
> > Dear Ceph Users,

Re: [ceph-users] MDS performance issue

2019-01-20 Thread Yan, Zheng
On Mon, Jan 21, 2019 at 11:16 AM Albert Yue wrote:
>
> Dear Ceph Users,
>
> We have set up a cephFS cluster with 6 osd machines, each with 16 8TB
> harddisks. Ceph version is luminous 12.2.5. We created one data pool with
> these hard disks and created another meta data pool with 3 ssd. We

[ceph-users] MDS performance issue

2019-01-20 Thread Albert Yue
Dear Ceph Users,

We have set up a cephFS cluster with 6 osd machines, each with 16 8TB harddisks. Ceph version is luminous 12.2.5. We created one data pool with these hard disks and created another meta data pool with 3 ssd. We created an MDS with a 65GB cache size. But our users keep
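
For reference, a cache limit like the 65GB above is set on Luminous through mds_cache_memory_limit, which replaced the older inode-count-based mds_cache_size. A sketch of both ways of applying it; the byte value is simply 65 GiB written out, an example rather than a recommendation:

    # ceph.conf on the MDS host
    [mds]
        mds_cache_memory_limit = 69793218560   # 65 GiB, example value

    # or injected at runtime without restarting the daemon:
    ceph tell mds.* injectargs '--mds_cache_memory_limit=69793218560'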