On Fri, May 8, 2015 at 1:34 AM, Yan, Zheng wrote:
> On Fri, May 8, 2015 at 11:15 AM, Dexter Xiong wrote:
>> I tried "echo 3 > /proc/sys/vm/drop_caches" and dentry_pinned_count dropped.
>>
>> Thanks for your help.
>>
>
Could you please try the attached patch?
Hi
I tested the patch. It seems that everything is OK now. Thanks.
On Fri, May 8, 2015 at 11:15 AM, Dexter Xiong wrote:
I tried "echo 3 > /proc/sys/vm/drop_caches" and dentry_pinned_count dropped.
Thanks for your help.
On Thu, Apr 30, 2015 at 4:37 PM, Dexter Xiong wrote:
Hi,
I got these messages when I remount:
2015-04-30 15:47:58.199837 7f9ad30a27c0 -1 asok(0x3c83480)
AdminSocketConfigObs::init: failed: AdminSocket::bind_and_listen: failed to
bind the UNIX domain socket to '/var/run/ceph/ceph-client.admin.asok': (17)
File exists
fuse: bad mount point `ceph-fuse
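The "(17) File exists" failure means the admin socket path is already taken, typically by a stale .asok file left behind by the earlier ceph-fuse crash, or by another client running under the same name. One possible workaround (a sketch, assuming the standard Ceph config metavariables rather than anything from this thread) is to make the socket path unique per process in ceph.conf, so remounts never collide:

```
[client]
    admin socket = /var/run/ceph/$cluster-$name.$pid.asok
```

Alternatively, simply removing the stale /var/run/ceph/ceph-client.admin.asok file before remounting should clear the error.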
On Wed, Apr 29, 2015 at 4:33 PM, Dexter Xiong wrote:
The output of the status command of the fuse daemon:
"dentry_count": 128966,
"dentry_pinned_count": 128965,
"inode_count": 409696,
I saw that the pinned dentry count is nearly the same as the dentry count.
So I enabled the debug log (debug client = 20/20) and read the Client.cc
source code in general. I found that an entry will not b
I tried setting client cache size = 100, but it didn't solve the problem.
I tested ceph-fuse with kernel versions 3.13.0-24, 3.13.0-49 and 3.16.0-34.
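The counters above show why "client cache size" trimming had no effect: a pinned dentry cannot be evicted, and here essentially the whole cache is pinned. A small sketch (not from the thread; the counter values are copied from the status output above) of reading those numbers:

```python
import json

# Counters as reported by the ceph-fuse daemon's status command above.
status = json.loads("""{
    "dentry_count": 128966,
    "dentry_pinned_count": 128965,
    "inode_count": 409696
}""")

# Fraction of dentries the client cannot trim; a healthy client should
# keep this well below 1.0.
pinned_ratio = status["dentry_pinned_count"] / status["dentry_count"]
print(f"{pinned_ratio:.6f} of dentries are pinned")
```

With all but one dentry pinned, lowering the cache size limit only tells the client to trim, while the pin counts prevent it from actually doing so.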
On Tue, Apr 28, 2015 at 7:39 PM, John Spray wrote:

On 28/04/2015 06:55, Dexter Xiong wrote:
Hi,
I've deployed a small hammer cluster (0.94.1), and I mount it via
ceph-fuse on Ubuntu 14.04. After several hours I found that the ceph-fuse
process had crashed. At the end is the crash log from
/var/log/ceph/ceph-client.admin.log. The memory cost of the ceph-fuse
process was huge (more than 4 GB) when it