On Mon, Jan 28, 2019 at 10:34 AM Albert Yue wrote:
>
> Hi Yan Zheng,
>
> Our clients are also complaining about operations like 'du' or 'ncdu' being
> very slow. Is there any alternative tool for this kind of operation on
> CephFS? Thanks!
>
'du' traverses the whole directory tree to calculate sizes ...
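CephFS does maintain recursive statistics per directory as virtual xattrs, so a
du-style number can be read in one call instead of walking the tree. A sketch
(the mount point and path are placeholders; the values are updated lazily, so
they can lag a little behind recent writes):

    getfattr -n ceph.dir.rbytes   /mnt/cephfs/path/to/dir    # recursive size in bytes
    getfattr -n ceph.dir.rfiles   /mnt/cephfs/path/to/dir    # recursive file count
    getfattr -n ceph.dir.rentries /mnt/cephfs/path/to/dir    # recursive files + subdirectories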
On Wed, Jan 23, 2019 at 10:02 AM Albert Yue wrote:
>
> But with enough memory on the MDS, I can just cache all of the metadata in memory.
> Right now there is around 500GB of metadata on the SSDs. So this is not enough?
>
The MDS needs to track a lot of extra information for each object. For
500G of metadata, ...
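One way to gauge that in-memory overhead on a running system is the MDS admin
socket; a sketch, with 'a' as a placeholder for the MDS name:

    ceph daemon mds.a cache status    # items and bytes currently held in the MDS cache
    # bytes / items gives a rough per-object memory cost, which is larger than
    # the on-disk footprint of the same object in the metadata pool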
On Tue, Jan 22, 2019 at 10:49 AM Albert Yue wrote:
>
> Hi Yan Zheng,
>
> In your opinion, can we resolve this issue by moving the MDS to a 512GB or 1TB
> memory machine?
>
The problem is on the client side, especially clients with a lot of memory.
I don't think enlarging the mds cache size is a good idea. You can ...
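One client-side measure along these lines, and the one the drop_caches
discussion further down refers to, is to periodically trim the clients'
dentry/inode caches so that unused caps are handed back to the MDS. A rough
sketch as a root cron entry on each client (heavy-handed, since it drops all
VFS caches, not just CephFS ones; the file name and hourly schedule are made
up):

    # /etc/cron.d/cephfs-cap-trim
    0 * * * *  root  sync; echo 2 > /proc/sys/vm/drop_caches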
----- Original Mail -----
From: "Marc Roos"
To: "transuranium.yue", "Zheng Yan"
Cc: "ceph-users"
Sent: Monday, 21 January 2019 15:53:17
Subject: Re: [ceph-users] MDS performance issue
How can you see that the cache is filling up and you need to execute
"echo 2 > /proc/sys/vm/drop_caches"?
-Original Message-
From: Yan, Zheng [mailto:uker...@gmail.com]
Sent: 21 January 2019 15:50
To: Albert Yue
Cc: ceph-users
Subject: Re: [ceph-users] MDS performance
On Mon, Jan 21, 2019 at 12:12 PM Albert Yue wrote:
>
> Hi Yan Zheng,
>
> 1. mds cache limit is set to 64GB
> 2. we get the size of the metadata pool by running `ceph df` and saw that the
> pool has used just 200MB of space.
>
That's very strange. One file uses about 1k of metadata storage. 560M
files ...
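Just to put those two numbers side by side (back-of-the-envelope only, using
Yan's ~1k-per-file figure):

    echo $(( 560000000 * 1024 / 1024 / 1024 / 1024 ))    # => 534, i.e. roughly 534 GiB expected

so a metadata pool showing only 200MB for 560M files is off by more than three
orders of magnitude, hence "very strange".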
Dear Ceph Users,
We have set up a CephFS cluster with 6 OSD machines, each with 16 x 8TB
hard disks. The Ceph version is luminous 12.2.5. We created one data pool on
these hard disks and another metadata pool on 3 SSDs. We created an MDS with a
65GB cache size.
But our users keep ...
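For anyone reproducing a layout like this, its shape with luminous device
classes would be roughly as follows (a sketch only: pool names, PG counts and
the exact cache figure are placeholders, not taken from this cluster):

    ceph osd crush rule create-replicated cephfs-data-hdd default host hdd
    ceph osd crush rule create-replicated cephfs-meta-ssd default host ssd
    ceph osd pool create cephfs_data     2048 2048 replicated cephfs-data-hdd
    ceph osd pool create cephfs_metadata 128  128  replicated cephfs-meta-ssd
    ceph fs new cephfs cephfs_metadata cephfs_data
    # ~65GB of MDS cache, expressed in bytes (ceph.conf, [mds] section):
    #   mds_cache_memory_limit = 69793218560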