I tried "echo 3 > /proc/sys/vm/drop_caches" and dentry_pinned_count dropped.

Thanks for your help.

On Thu, Apr 30, 2015 at 11:34 PM Yan, Zheng <uker...@gmail.com> wrote:

> On Thu, Apr 30, 2015 at 4:37 PM, Dexter Xiong <dxtxi...@gmail.com> wrote:
> > Hi,
> >     I got these message when I remount:
> > 2015-04-30 15:47:58.199837 7f9ad30a27c0 -1 asok(0x3c83480) AdminSocketConfigObs::init: failed: AdminSocket::bind_and_listen: failed to bind the UNIX domain socket to '/var/run/ceph/ceph-client.admin.asok': (17) File exists
> > fuse: bad mount point `ceph-fuse': No such file or directory
> > ceph-fuse[2576]: fuse failed to initialize
> > 2015-04-30 15:47:58.199980 7f9ad30a27c0 -1 init, newargv = 0x3ca9b00 newargc=14
> > 2015-04-30 15:47:58.200020 7f9ad30a27c0 -1 fuse_parse_cmdline failed.
> > ceph-fuse[2574]: mount failed: (22) Invalid argument.
> >
> > It seems that FUSE doesn't support remount? This link is a Google search
> > result.
> >
>
> Please try "echo 3 > /proc/sys/vm/drop_caches", then check whether the
> pinned dentry count drops after running the command.
>
> Regards
> Yan, Zheng
>
> >     I am using ceph-dokan too, and I hit a similar memory problem. I
> > don't know if it is the same issue. For now I have switched to the kernel
> > module plus Samba as a temporary replacement for the previous setup.
> >     I'm trying to read and trace the ceph & ceph-dokan source code to
> > find more useful information.
> >
> >
> >     I don't know if my previous email reached the list (maybe the
> > attachment was too large). Here is its content:
> >
> > I wrote a test case with Python:
> > '''
> > import os
> >
> > # Create 200 directories under the ceph-fuse mount, each containing
> > # 3 small files (assumes /srv/ceph_fs/test already exists).
> > for i in range(200):
> >     dir_name = '/srv/ceph_fs/test/d%s' % i
> >     os.mkdir(dir_name)
> >     for j in range(3):
> >         with open('%s/%s' % (dir_name, j), 'w') as f:
> >             f.write('0')
> > '''
> >
> > The output of the status command after running the test on a fresh mount:
> > {
> >     "metadata": {
> >         "ceph_sha1": "e4bfad3a3c51054df7e537a724c8d0bf9be972ff",
> >         "ceph_version": "ceph version 0.94.1 (e4bfad3a3c51054df7e537a724c8d0bf9be972ff)",
> >         "entity_id": "admin",
> >         "hostname": "local-share-server",
> >         "mount_point": "\/srv\/ceph_fs"
> >     },
> >     "dentry_count": 204,
> >     "dentry_pinned_count": 201,
> >     "inode_count": 802,
> >     "mds_epoch": 25,
> >     "osd_epoch": 177,
> >     "osd_epoch_barrier": 176
> > }
> > From the dump cache command output, it seems that all of the pinned
> > dentries are directories (201 pinned, versus the 200 directories the test
> > creates).
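> > (For completeness, a note on how such output can be gathered: the status
> > and cache dump come from the ceph-fuse admin socket, e.g. "ceph
> > --admin-daemon /var/run/ceph/ceph-client.admin.asok status"; the socket
> > path and the exact name of the dump cache command may differ per setup.)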
> >
> > The attachment is a package containing the debug log and the dump cache
> > output.
> >
> >
> > On Thu, Apr 30, 2015 at 2:55 PM Yan, Zheng <uker...@gmail.com> wrote:
> >>
> >> On Wed, Apr 29, 2015 at 4:33 PM, Dexter Xiong <dxtxi...@gmail.com> wrote:
> >> > The output of the status command of the fuse daemon:
> >> > "dentry_count": 128966,
> >> > "dentry_pinned_count": 128965,
> >> > "inode_count": 409696,
> >> > I noticed that the pinned dentry count is nearly the same as the total
> >> > dentry count. So I enabled the debug log (debug client = 20/20) and
> >> > read through the Client.cc source code. I found that a dentry will not
> >> > be trimmed if it is pinned. But how can I unpin dentries?
> >>
> >> Maybe these dentries are pinned by the fuse kernel module (ceph-fuse
> >> does not try to trim the kernel cache when its cache size >
> >> client_cache_size). Could you please run "mount -o remount <mount
> >> point>", then run the status command again and check whether the number
> >> of pinned dentries drops?
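> >> (For example, with the mount point used elsewhere in this thread:
> >> "mount -o remount /srv/ceph_fs".)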
> >>
> >> Regards
> >> Yan, Zheng
> >>
> >>
> >> >
> >> > On Wed, Apr 29, 2015 at 12:19 PM Dexter Xiong <dxtxi...@gmail.com>
> >> > wrote:
> >> >>
> >> >> I tried setting client cache size = 100, but it didn't solve the
> >> >> problem. I tested ceph-fuse with kernel versions 3.13.0-24, 3.13.0-49
> >> >> and 3.16.0-34.
> >> >>
> >> >>
> >> >>
> >> >> On Tue, Apr 28, 2015 at 7:39 PM John Spray <john.sp...@redhat.com>
> >> >> wrote:
> >> >>>
> >> >>>
> >> >>>
> >> >>> On 28/04/2015 06:55, Dexter Xiong wrote:
> >> >>> > Hi,
> >> >>> >     I've deployed a small hammer cluster (0.94.1), and I mount it
> >> >>> > via ceph-fuse on Ubuntu 14.04. After several hours I found that the
> >> >>> > ceph-fuse process had crashed. The crash log at the end is from
> >> >>> > /var/log/ceph/ceph-client.admin.log. The memory cost of the
> >> >>> > ceph-fuse process was huge (more than 4GB) when it crashed.
> >> >>> >     Then I did some tests and found that these actions increase the
> >> >>> > memory cost of ceph-fuse rapidly, and the memory cost never seems
> >> >>> > to decrease:
> >> >>> >
> >> >>> >   * an rsync command syncing small files (rsync -a /mnt/some_small
> >> >>> > /srv/ceph)
> >> >>> >   * a chown or chmod command (chmod 775 /srv/ceph -R)
> >> >>> >
> >> >>> > But running chown/chmod on already-accessed files does not increase
> >> >>> > the memory cost.
> >> >>> > It seems that ceph-fuse caches the file nodes but never releases
> >> >>> > them. I don't know if there is an option to control the cache size.
> >> >>> > I set mds cache size = 2147483647 to improve the performance of the
> >> >>> > MDS, and I tried to set mds cache size = 1000 on the client side,
> >> >>> > but it doesn't affect the result.
> >> >>>
> >> >>> The setting for the client-side cache limit is "client cache size";
> >> >>> the default is 16384.
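> >> >>> For example, something like this in ceph.conf on the machine running
> >> >>> ceph-fuse (assuming the usual [client] section):
> >> >>>
> >> >>>     [client]
> >> >>>         client cache size = 16384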
> >> >>>
> >> >>> What kernel version are you using on the client? There have been
> >> >>> some issues with cache trimming vs. fuse in recent kernels, but we
> >> >>> thought we had workarounds in place...
> >> >>>
> >> >>> Cheers,
> >> >>> John
> >> >>>
> >> >
> >> >
>
_______________________________________________
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
