Re: [Gluster-devel] How to enable FUSE kernel cache about dentry and inode?

2016-09-06 Thread Ravishankar N

On 09/06/2016 12:27 PM, Keiviw wrote:
Could you please tell me your glusterfs version and the mount command 
that you used? My GlusterFS version is 3.3.0; different versions 
may yield different results.


I tried it on the master branch, on Fedora 22 virtual machines (kernel 
version: 4.1.6-200.fc22.x86_64). By the way, 3.3 is a rather old 
version; you might want to use the latest 3.8.x release.








At 2016-09-06 12:35:19, "Ravishankar N"  wrote:

That is strange. I tried the experiment on a volume with a million
files. The client node's memory usage did grow, as I observed from
the output of free(1) http://paste.fedoraproject.org/422551/ when
I did an `ls`.
-Ravi

On 09/02/2016 07:31 AM, Keiviw wrote:

Exactly. I mounted the volume on a non-brick node (nodeB), and
nodeA was the server. I have set different timeouts, but when I
execute "ls /mnt/glusterfs" (about 3 million small files, in other
words, about 3 million dentries), the result is the same:
memory usage on nodeB didn't change at all, while nodeA's memory
usage grew by about 4GB!

Sent from NetEase Mail Master
On 09/02/2016 09:45, Ravishankar N wrote:

On 09/02/2016 05:42 AM, Keiviw wrote:

Even if I set the attribute-timeout and entry-timeout to
3600s (1h), nodeB didn't cache any metadata, because its
memory usage didn't change. So I was confused about why
the client did not cache dentries and inodes.


If you only want to test fuse's caching, I would try mounting
the volume on a separate machine (not on the brick node
itself), disable all gluster performance xlators, do a
`find . | xargs stat` on the mount two times in succession, and see
what free(1) reports the 1st and 2nd time. You could do this
experiment with various attr/entry timeout values. Make sure
your volume has a lot of small files.
-Ravi



On 2016-09-01 16:37:00, "Ravishankar N" wrote:

On 09/01/2016 01:04 PM, Keiviw wrote:

Hi,
I have found that the GlusterFS client (mounted via FUSE)
didn't cache metadata like dentries and inodes. I installed
GlusterFS 3.6.0 on nodeA and nodeB; brick1 and brick2 were on
nodeA, and on nodeB I mounted the volume to /mnt/glusterfs via
FUSE. In my test, I executed 'ls /mnt/glusterfs' on nodeB and
found that memory usage didn't change at all. Here are my questions:
1. In the fuse kernel, the author set some attributes to control
the timeout for dentries and inodes; in other words, the fuse
kernel supports a metadata cache, but in my test dentries and
inodes were not cached. Why?
2. Are there any options when mounting GlusterFS locally
to enable the metadata cache in the fuse kernel?



You can tweak the attribute-timeout and entry-timeout
seconds while mounting the volume. Default is 1 second
for both.  `man mount.glusterfs` lists various mount
options.
-Ravi




___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel

Re: [Gluster-devel] How to enable FUSE kernel cache about dentry and inode?

2016-09-05 Thread Keiviw
Could you please tell me your glusterfs version and the mount command that you 
used? My GlusterFS version is 3.3.0; different versions may yield 
different results.
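
(For reference, something like the following usually shows both pieces of information on the client; a generic sketch, assuming a FUSE mount at /mnt/glusterfs:)

    glusterfs --version     # client version
    mount | grep glusterfs  # the mount point and the options it was mounted with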






At 2016-09-06 12:35:19, "Ravishankar N"  wrote:

That is strange. I tried the experiment on a volume with a million files. The 
client node's memory usage did grow, as I observed from the output of free(1)  
http://paste.fedoraproject.org/422551/ when I did an `ls`.
-Ravi
 
On 09/02/2016 07:31 AM, Keiviw wrote:

Exactly. I mounted the volume on a non-brick node (nodeB), and nodeA was the 
server. I have set different timeouts, but when I execute "ls 
/mnt/glusterfs" (about 3 million small files, in other words, about 3 million 
dentries), the result is the same: memory usage on nodeB didn't change at all, 
while nodeA's memory usage grew by about 4GB!


Sent from NetEase Mail Master
On 09/02/2016 09:45, Ravishankar N wrote:
On 09/02/2016 05:42 AM, Keiviw wrote:

Even if I set the attribute-timeout and entry-timeout to 3600s (1h), 
nodeB didn't cache any metadata, because its memory usage didn't change. So 
I was confused about why the client did not cache dentries and inodes.


If you only want to test fuse's caching, I would try mounting the volume on a 
separate machine (not on the brick node itself), disable all gluster 
performance xlators, do a `find . | xargs stat` on the mount two times in succession, 
and see what free(1) reports the 1st and 2nd time. You could do this experiment 
with various attr/entry timeout values. Make sure your volume has a lot of 
small files.
-Ravi



On 2016-09-01 16:37:00, "Ravishankar N" wrote:

On 09/01/2016 01:04 PM, Keiviw wrote:

Hi,
I have found that the GlusterFS client (mounted via FUSE) didn't cache metadata 
like dentries and inodes. I installed GlusterFS 3.6.0 on nodeA and nodeB; 
brick1 and brick2 were on nodeA, and on nodeB I mounted the volume to 
/mnt/glusterfs via FUSE. In my test, I executed 'ls /mnt/glusterfs' on nodeB 
and found that memory usage didn't change at all. Here are my questions:
1. In the fuse kernel, the author set some attributes to control the timeout 
for dentries and inodes; in other words, the fuse kernel supports a metadata 
cache, but in my test dentries and inodes were not cached. Why?
2. Are there any options when mounting GlusterFS locally to enable the 
metadata cache in the fuse kernel?


You can tweak the attribute-timeout and entry-timeout seconds while mounting 
the volume. Default is 1 second for both.  `man mount.glusterfs` lists various 
mount options.
-Ravi


 




___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel

Re: [Gluster-devel] How to enable FUSE kernel cache about dentry and inode?

2016-09-05 Thread Ravishankar N
That is strange. I tried the experiment on a volume with a million 
files. The client node's memory usage did grow, as I observed from the 
output of free(1) http://paste.fedoraproject.org/422551/ when I did an `ls`.
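
A more direct check than free(1), if you have root on the client, is to watch the dentry and fuse inode slab counts grow across the `ls`; a rough sketch:

    grep -E '^dentry|^fuse_inode' /proc/slabinfo   # object counts before
    ls /mnt/glusterfs > /dev/null
    grep -E '^dentry|^fuse_inode' /proc/slabinfo   # counts should rise if the kernel is caching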

-Ravi

On 09/02/2016 07:31 AM, Keiviw wrote:
Exactly. I mounted the volume on a non-brick node (nodeB), and nodeA was 
the server. I have set different timeouts, but when I execute "ls 
/mnt/glusterfs" (about 3 million small files, in other words, about 3 
million dentries), the result is the same: memory usage on nodeB 
didn't change at all, while nodeA's memory usage grew by about 4GB!


Sent from NetEase Mail Master
On 09/02/2016 09:45, Ravishankar N  wrote:

On 09/02/2016 05:42 AM, Keiviw wrote:

Even if I set the attribute-timeout and entry-timeout to
3600s (1h), nodeB didn't cache any metadata, because its
memory usage didn't change. So I was confused about why the
client did not cache dentries and inodes.


If you only want to test fuse's caching, I would try mounting the
volume on a separate machine (not on the brick node itself),
disable all gluster performance xlators, do a `find . | xargs stat` on
the mount two times in succession, and see what free(1) reports the
1st and 2nd time. You could do this experiment with various
attr/entry timeout values. Make sure your volume has a lot of
small files.
-Ravi



On 2016-09-01 16:37:00, "Ravishankar N" wrote:

On 09/01/2016 01:04 PM, Keiviw wrote:

Hi,
I have found that the GlusterFS client (mounted via FUSE)
didn't cache metadata like dentries and inodes. I installed
GlusterFS 3.6.0 on nodeA and nodeB; brick1 and brick2 were on
nodeA, and on nodeB I mounted the volume to /mnt/glusterfs via
FUSE. In my test, I executed 'ls /mnt/glusterfs' on nodeB and
found that memory usage didn't change at all. Here are my questions:
1. In the fuse kernel, the author set some attributes to
control the timeout for dentries and inodes; in other words,
the fuse kernel supports a metadata cache, but in my test
dentries and inodes were not cached. Why?
2. Are there any options when mounting GlusterFS locally
to enable the metadata cache in the fuse kernel?



You can tweak the attribute-timeout and entry-timeout seconds
while mounting the volume. Default is 1 second for both. 
`man mount.glusterfs` lists various mount options.

-Ravi




___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel

Re: [Gluster-devel] How to enable FUSE kernel cache about dentry and inode?

2016-09-01 Thread Ravishankar N

On 09/02/2016 05:42 AM, Keiviw wrote:
Even if I set the attribute-timeout and entry-timeout to 3600s (1h), 
nodeB didn't cache any metadata, because its memory usage 
didn't change. So I was confused about why the client did not cache 
dentries and inodes.


If you only want to test fuse's caching, I would try mounting the volume 
on a separate machine (not on the brick node itself), disable all 
gluster performance xlators, do a `find . | xargs stat` on the mount two times 
in succession, and see what free(1) reports the 1st and 2nd time. You 
could do this experiment with various attr/entry timeout values. Make 
sure your volume has a lot of small files.
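
Something along these lines, as a rough sketch (the volume name "testvol" and the mount point are placeholders; the options listed are the usual client-side performance xlators):

    # on a server node: disable the client-side performance translators
    gluster volume set testvol performance.quick-read off
    gluster volume set testvol performance.io-cache off
    gluster volume set testvol performance.stat-prefetch off
    gluster volume set testvol performance.read-ahead off

    # on the client: crawl the mount twice and compare what free(1) reports
    free -m
    find /mnt/glusterfs | xargs stat > /dev/null
    free -m
    find /mnt/glusterfs | xargs stat > /dev/null
    free -m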

-Ravi



On 2016-09-01 16:37:00, "Ravishankar N" wrote:

On 09/01/2016 01:04 PM, Keiviw wrote:

Hi,
I have found that the GlusterFS client (mounted via FUSE) didn't
cache metadata like dentries and inodes. I installed
GlusterFS 3.6.0 on nodeA and nodeB; brick1 and brick2 were
on nodeA, and on nodeB I mounted the volume to /mnt/glusterfs
via FUSE. In my test, I executed 'ls /mnt/glusterfs' on nodeB
and found that memory usage didn't change at all. Here are my questions:
1. In the fuse kernel, the author set some attributes to control
the timeout for dentries and inodes; in other words, the fuse
kernel supports a metadata cache, but in my test dentries and
inodes were not cached. Why?
2. Are there any options when mounting GlusterFS locally to
enable the metadata cache in the fuse kernel?



You can tweak the attribute-timeout and entry-timeout seconds
while mounting the volume. Default is 1 second for both.  `man
mount.glusterfs` lists various mount options.
-Ravi




___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel

Re: [Gluster-devel] How to enable FUSE kernel cache about dentry and inode?

2016-09-01 Thread Keiviw
Even if I set the attribute-timeout and entry-timeout to 3600s (1h), 
nodeB didn't cache any metadata, because its memory usage didn't change. So 
I was confused about why the client did not cache dentries and inodes.





On 2016-09-01 16:37:00, "Ravishankar N" wrote:

On 09/01/2016 01:04 PM, Keiviw wrote:

Hi,
I have found that the GlusterFS client (mounted via FUSE) didn't cache metadata 
like dentries and inodes. I installed GlusterFS 3.6.0 on nodeA and nodeB; 
brick1 and brick2 were on nodeA, and on nodeB I mounted the volume to 
/mnt/glusterfs via FUSE. In my test, I executed 'ls /mnt/glusterfs' on nodeB 
and found that memory usage didn't change at all. Here are my questions:
1. In the fuse kernel, the author set some attributes to control the timeout 
for dentries and inodes; in other words, the fuse kernel supports a metadata 
cache, but in my test dentries and inodes were not cached. Why?
2. Are there any options when mounting GlusterFS locally to enable the 
metadata cache in the fuse kernel?


You can tweak the attribute-timeout and entry-timeout seconds while mounting 
the volume. Default is 1 second for both.  `man mount.glusterfs` lists various 
mount options.
-Ravi


 






___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel

Re: [Gluster-devel] How to enable FUSE kernel cache about dentry and inode?

2016-09-01 Thread Ravishankar N

On 09/01/2016 01:04 PM, Keiviw wrote:

Hi,
I have found that the GlusterFS client (mounted via FUSE) didn't cache 
metadata like dentries and inodes. I installed GlusterFS 3.6.0 on 
nodeA and nodeB; brick1 and brick2 were on nodeA, and on 
nodeB I mounted the volume to /mnt/glusterfs via FUSE. In my test, I 
executed 'ls /mnt/glusterfs' on nodeB and found that memory usage didn't 
change at all. Here are my questions:
1. In the fuse kernel, the author set some attributes to control the 
timeout for dentries and inodes; in other words, the fuse kernel 
supports a metadata cache, but in my test dentries and inodes were not 
cached. Why?
2. Are there any options when mounting GlusterFS locally to enable 
the metadata cache in the fuse kernel?



You can tweak the attribute-timeout and entry-timeout seconds while 
mounting the volume. Default is 1 second for both.  `man 
mount.glusterfs` lists various mount options.
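
For example, a mount along these lines (nodeA and the volume name "testvol" are placeholders) sets both timeouts to an hour:

    mount -t glusterfs -o attribute-timeout=3600,entry-timeout=3600 nodeA:/testvol /mnt/glusterfs

The same values can also be passed directly to the client binary with --attribute-timeout=3600 --entry-timeout=3600.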

-Ravi




___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel

[Gluster-devel] How to enable FUSE kernel cache about dentry and inode?

2016-09-01 Thread Keiviw
Hi,
I have found that the GlusterFS client (mounted via FUSE) didn't cache metadata 
like dentries and inodes. I installed GlusterFS 3.6.0 on nodeA and nodeB; 
brick1 and brick2 were on nodeA, and on nodeB I mounted the volume to 
/mnt/glusterfs via FUSE. In my test, I executed 'ls /mnt/glusterfs' on nodeB 
and found that memory usage didn't change at all. Here are my questions:
1. In the fuse kernel, the author set some attributes to control the timeout 
for dentries and inodes; in other words, the fuse kernel supports a metadata 
cache, but in my test dentries and inodes were not cached. Why?
2. Are there any options when mounting GlusterFS locally to enable the 
metadata cache in the fuse kernel?
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel