On Mon, Jan 21, 2019 at 12:52 AM Yan, Zheng <uker...@gmail.com> wrote:

> On Mon, Jan 21, 2019 at 12:12 PM Albert Yue <transuranium....@gmail.com>
> wrote:
> >
> > Hi Yan Zheng,
> >
> > 1. mds cache limit is set to 64GB
> > 2. we got the size of the metadata pool by running `ceph df` and saw the
> metadata pool used just 200MB of space.
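
A minimal sketch of how those two values can be checked on a Luminous cluster, assuming admin-socket access on the MDS host; `<id>` is a placeholder for the MDS name, and the memory-based limit (mds_cache_memory_limit) is assumed rather than the older inode-count limit (mds_cache_size):

    # Cache size limit the MDS is actually running with (bytes)
    ceph daemon mds.<id> config get mds_cache_memory_limit

    # Per-pool usage as the cluster reports it; this is where the
    # 200MB figure quoted above would come from
    ceph df detail
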
> >
>
> That's very strange. One file uses about 1k of metadata storage. 560M
> files should use hundreds of gigabytes.
>
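
A quick back-of-the-envelope check of that estimate (the ~1 KB-per-file figure is Yan's; the rest is just arithmetic):

    560,000,000 files x ~1 KB/file ≈ 560 GB of metadata

so usage in the hundreds of gigabytes, not 200MB, is what one would expect the cluster to report.
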

That's presumably because OSDs still don't report LevelDB/RocksDB usage up
in that view, and all the MDS metadata is stored there?
-Greg
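
If that is the cause, a rough sketch of how the RocksDB/omap consumption could be inspected directly on the BlueStore OSDs backing the metadata pool, assuming admin-socket access on the OSD hosts (the counter names come from the BlueFS perf counters and may vary by release):

    # Dump the OSD's perf counters and look at the "bluefs" section;
    # db_used_bytes there is the space RocksDB (and hence the omap
    # metadata) currently occupies on this OSD
    ceph daemon osd.<id> perf dump

    # Summing db_used_bytes across the SSD OSDs behind the metadata
    # pool should give a truer number than the 200MB shown by `ceph df`
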


>
> > Thanks,
> >
> >
> > On Mon, Jan 21, 2019 at 11:35 AM Yan, Zheng <uker...@gmail.com> wrote:
> >>
> >> On Mon, Jan 21, 2019 at 11:16 AM Albert Yue <transuranium....@gmail.com>
> wrote:
> >> >
> >> > Dear Ceph Users,
> >> >
> >> > We have set up a CephFS cluster with 6 OSD machines, each with 16 x 8TB
> hard disks. The Ceph version is Luminous 12.2.5. We created one data pool on
> these hard disks and another metadata pool on 3 SSDs. We created an MDS with
> a 65GB cache size.
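
For context, a minimal sketch of how this kind of split (data on HDDs, metadata on SSDs) is typically expressed on Luminous with device classes; the rule and pool names below are hypothetical, not taken from the poster's cluster:

    # Replicated CRUSH rule restricted to SSD-class devices
    ceph osd crush rule create-replicated ssd-only default host ssd

    # Point the metadata pool (hypothetically named cephfs_metadata)
    # at that rule
    ceph osd pool set cephfs_metadata crush_rule ssd-only
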
> >> >
> >> > But our users keep complaining that CephFS is too slow. What we
> observed is that CephFS is fast when we switch to a new MDS instance, but
> once the cache fills up (which happens very fast), clients become very slow
> when performing basic filesystem operations such as `ls`.
> >> >
> >>
> >> What's your mds cache config?
> >>
> >> > What we know is that our users are putting lots of small files into
> CephFS; there are now around 560 million files. We didn't see high CPU wait
> on the MDS instance, and the metadata pool used just around 200MB of space.
> >>
> >> It's unlikely. In the output of 'ceph osd df', you should take both
> >> DATA and OMAP into account.
> >>
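
A sketch of the check Yan describes; note the DATA/OMAP breakdown only appears on releases whose `ceph osd df` output includes those columns (an assumption here, not verified against 12.2.5):

    # Per-OSD utilisation; for the SSDs behind the metadata pool,
    # DATA plus OMAP together reflect the real metadata footprint
    ceph osd df
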
> >> >
> >> > My question is, what is the relationship between the metadata pool
> and the MDS? Is this performance issue caused by the hardware behind the
> metadata pool? Why does the metadata pool use only 200MB of space? And
> given that we see 3k IOPS on each of these three SSDs, why can't the MDS
> cache all of that 200MB in memory?
> >> >
> >> > Thanks very much!
> >> >
> >> >
> >> > Best Regards,
> >> >
> >> > Albert
> >> >
>
_______________________________________________
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
