Re: [Gluster-users] I/O error repaired only by owner or root access

2013-02-22 Thread Rajesh Amaravathi
Hi Dan,
Could you please provide the following info:
(1) the exact permissions of the file you are accessing and
its parent directory,
(2) the user as whom 'ls -l' is issued, and
(3) the owner of the file and of the parent directory.

You could open a bug for it if it is seen several times.
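
For example, output of the following, run on the client as the affected user
(with /path/to/dir standing in for the real directory, and the file's path on
a brick for the xattr check Dan asks about), would capture all of that:

# id
# ls -ld /path/to/dir /path/to/dir/file
# getfattr -d -m . -e hex /brick/path/to/file

(ls -ld gives the permissions and owners; id shows the requesting user;
getfattr dumps the extended attributes on the brick.)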

Regards, 
Rajesh Amaravathi, 
Software Engineer, GlusterFS 
RedHat Inc. 

- Original Message -
 From: Dan Bretherton d.a.brether...@reading.ac.uk
 To: gluster-users gluster-users@gluster.org
 Sent: Thursday, February 21, 2013 8:16:40 PM
 Subject: [Gluster-users] I/O error repaired only by owner or root access
 
 Dear All-
 Several users are having a lot of trouble reading files belonging to
 other users.  Here is an example.
 
 [sms05dab@jupiter ~]$ ls -l /users/gcs/WORK/ORCA1/ORCA1-R07-MEAN/Ctl
 ls: /users/gcs/WORK/ORCA1/ORCA1-R07-MEAN/Ctl: Input/output error
 
 The corresponding nfs.log messages are shown below.
 
 [2013-02-21 12:11:39.204659] W [nfs3.c:727:nfs3svc_getattr_stat_cbk] 0-nfs: fe2ba5b8: /gorgon/users/gcs/WORK/ORCA1/ORCA1-R07-MEAN => -1 (Invalid argument)
 [2013-02-21 12:11:39.204778] W [nfs3-helpers.c:3389:nfs3_log_common_res] 0-nfs-nfsv3: XID: fe2ba5b8, GETATTR: NFS: 22(Invalid argument for operation), POSIX: 22(Invalid argument)
 [2013-02-21 12:11:39.215345] I [dht-common.c:954:dht_lookup_everywhere_cbk] 0-nemo2-dht: deleting stale linkfile /gorgon/users/gcs/WORK/ORCA1/ORCA1-R07-MEAN on nemo2-replicate-0
 [2013-02-21 12:11:39.225674] W [client3_1-fops.c:592:client3_1_unlink_cbk] 0-nemo2-client-1: remote operation failed: Permission denied
 [2013-02-21 12:11:39.225786] W [client3_1-fops.c:592:client3_1_unlink_cbk] 0-nemo2-client-0: remote operation failed: Permission denied
 [2013-02-21 12:11:39.681029] W [client3_1-fops.c:258:client3_1_mknod_cbk] 0-nemo2-client-18: remote operation failed: Permission denied. Path: /gorgon/users/gcs/WORK/ORCA1/ORCA1-R07-MEAN (1662aa0a-d43b-4c2e-9be9-407eb7a89e85)
 [2013-02-21 12:11:39.681400] W [client3_1-fops.c:258:client3_1_mknod_cbk] 0-nemo2-client-19: remote operation failed: Permission denied. Path: /gorgon/users/gcs/WORK/ORCA1/ORCA1-R07-MEAN (1662aa0a-d43b-4c2e-9be9-407eb7a89e85)
 [2013-02-21 12:11:39.682268] W [nfs3.c:1627:nfs3svc_readlink_cbk] 0-nfs: 2ca5b8: /gorgon/users/gcs/WORK/ORCA1/ORCA1-R07-MEAN => -1 (Invalid argument)
 [2013-02-21 12:11:39.682338] W [nfs3-helpers.c:3403:nfs3_log_readlink_res] 0-nfs-nfsv3: XID: 2ca5b8, READLINK: NFS: 22(Invalid argument for operation), POSIX: 22(Invalid argument), target: (null)
 
 I managed to access the same directory as the owner (or any user with
 write access, including root) without any trouble, and after that access
 from my normal user account was fine as well.  The permissions on the
 directory allowed read access by everyone, but the "Permission denied"
 messages in nfs.log indicate that some sort of operation is not being
 allowed when the directory is accessed by other users.  I have seen
 this happen with files and directories, and with the GlusterFS native
 client and NFS.

 I presume this is a bug; I would be grateful if someone could confirm
 this.  I would file a bug report, but the trouble is that I don't know
 how to reproduce the problem that causes the I/O error in the first
 place.  It only happens with some files and directories, not all.
 Would a bug report without any way to reproduce the error be any use,
 and can anyone suggest a way to dig deeper (e.g. looking at xattrs) next
 time I come across an example?
 
 -Dan.
 
 --
 Dan Bretherton
 ESSC Computer System Manager
 Department of Meteorology
 Harry Pitt Building, 3 Earley Gate
 University of Reading
 Reading, RG6 7BE (or RG6 6AL for postal service deliveries)
 UK
 Tel. +44 118 378 5205, Fax: +44 118 378 6413
 --
 ## Please sponsor me to run in VSO's 30km Race to the Eye ##
 ##http://www.justgiving.com/DanBretherton ##
 
___
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] NFS subdirectory Solaris client

2012-07-19 Thread Rajesh Amaravathi
Hi Anthony,
   Subdirectory mount is not possible with Gluster NFS in the 3.2.x versions.
   You can get the 3.3 gNFS to work for subdir mounts on Solaris, though it
   requires some oblique steps to get it working. The steps are provided in
   the documentation (Admin Guide, I think) for 3.3.
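
   From memory, the 3.3 steps look roughly like the following; please verify
   the option name against the 3.3 Admin Guide before relying on it:

   server# gluster volume set test nfs.mount-udp on
   solaris# mount -o proto=tcp,vers=3 nfs://yval1010:/test/test2 /users/glusterfs_mnt

   (nfs.mount-udp enables the MOUNT protocol over UDP, which the Solaris
   mount path needs; the volume, host and paths above are the ones from
   Anthony's mail below.)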

Regards, 
Rajesh Amaravathi, 
Software Engineer, GlusterFS 
RedHat Inc. 

- Original Message -
From: anthony garnier sokar6...@hotmail.com
To: sgo...@redhat.com
Cc: gluster-users@gluster.org
Sent: Thursday, July 19, 2012 12:50:41 PM
Subject: Re: [Gluster-users] NFS subdirectory Solaris client



Hi Shishir, 

I have reconfigured the port to 2049, so there is no need to specify it.
Moreover, the mount of the volume works fine; it's only the mount of the
subdirectory that doesn't work.

Thx and Regards, 

Anthony 




 Date: Wed, 18 Jul 2012 22:54:28 -0400 
 From: sgo...@redhat.com 
 To: sokar6...@hotmail.com 
 CC: gluster-users@gluster.org 
 Subject: Re: [Gluster-users] NFS subdirectory Solaris client 
 
 Hi Anthony, 
 
 Please also specify this option port=38467, and try mounting it. 
 
 With regards, 
 Shishir 
 
 - Original Message - 
 From: anthony garnier sokar6...@hotmail.com 
 To: gluster-users@gluster.org 
 Sent: Wednesday, July 18, 2012 3:21:36 PM 
 Subject: [Gluster-users] NFS subdirectory Solaris client 
 
 
 
 Hi everyone, 
 
 I still have problem to mount subdirectory in NFS on Solaris client : 
 
 # mount -o proto=tcp,vers=3 nfs://yval1010:/test/test2 /users/glusterfs_mnt 
 nfs mount: yval1010: : RPC: Program not registered 
 nfs mount: retrying: /users/glusterfs_mnt 
 
 
 [2012-07-18 11:43:43.484994] E [nfs3.c:305:__nfs3_get_volume_id] (-->/usr/local/lib//glusterfs/3.3.0/xlator/nfs/server.so(nfs3_lookup+0x125) [0x7f5418ea4e15] (-->/usr/local/lib//glusterfs/3.3.0/xlator/nfs/server.so(nfs3_lookup_reply+0x48) [0x7f5418e9b908] (-->/usr/local/lib//glusterfs/3.3.0/xlator/nfs/server.so(nfs3_request_xlator_deviceid+0x4c) [0x7f5418e9a54c]))) 0-nfs-nfsv3: invalid argument: xl
 [2012-07-18 11:43:43.491088] E [nfs3.c:305:__nfs3_get_volume_id] (-->/usr/local/lib//glusterfs/3.3.0/xlator/nfs/server.so(nfs3_lookup+0x125) [0x7f5418ea4e15] (-->/usr/local/lib//glusterfs/3.3.0/xlator/nfs/server.so(nfs3_lookup_reply+0x48) [0x7f5418e9b908] (-->/usr/local/lib//glusterfs/3.3.0/xlator/nfs/server.so(nfs3_request_xlator_deviceid+0x4c) [0x7f5418e9a54c]))) 0-nfs-nfsv3: invalid argument: xl
 [2012-07-18 11:43:48.494268] E [nfs3.c:305:__nfs3_get_volume_id] (-->/usr/local/lib//glusterfs/3.3.0/xlator/nfs/server.so(nfs3_lookup+0x125) [0x7f5418ea4e15] (-->/usr/local/lib//glusterfs/3.3.0/xlator/nfs/server.so(nfs3_lookup_reply+0x48) [0x7f5418e9b908] (-->/usr/local/lib//glusterfs/3.3.0/xlator/nfs/server.so(nfs3_request_xlator_deviceid+0x4c) [0x7f5418e9a54c]))) 0-nfs-nfsv3: invalid argument: xl
 [2012-07-18 11:43:48.992370] W [socket.c:195:__socket_rwv] 0-socket.nfs-server: readv failed (Connection reset by peer)
 [2012-07-18 11:43:57.422070] W [socket.c:195:__socket_rwv] 0-socket.nfs-server: readv failed (Connection reset by peer)
 [2012-07-18 11:43:58.498666] E [nfs3.c:305:__nfs3_get_volume_id] (-->/usr/local/lib//glusterfs/3.3.0/xlator/nfs/server.so(nfs3_lookup+0x125) [0x7f5418ea4e15] (-->/usr/local/lib//glusterfs/3.3.0/xlator/nfs/server.so(nfs3_lookup_reply+0x48) [0x7f5418e9b908] (-->/usr/local/lib//glusterfs/3.3.0/xlator/nfs/server.so(nfs3_request_xlator_deviceid+0x4c) [0x7f5418e9a54c]))) 0-nfs-nfsv3: invalid argument: xl
 
 
 Can someone confirm that ? 
 
 Regards, 
 
 Anthony 
 
 ___ 
 Gluster-users mailing list 
 Gluster-users@gluster.org 
 http://gluster.org/cgi-bin/mailman/listinfo/gluster-users 

___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


Re: [Gluster-users] NFS mounts with glusterd on localhost - reliable or not?

2012-07-17 Thread Rajesh Amaravathi
It should be possible to mount another kernel export with the -o nolock option and
compile a kernel on it. I'm just guessing that when we mount with the nolock option,
we are mounting mostly for read purposes and not for critical writes.

Regards, 
Rajesh Amaravathi, 
Software Engineer, GlusterFS 
RedHat Inc. 

- Original Message -
From: Whit Blauvelt whit.glus...@transpect.com
To: Rajesh Amaravathi raj...@redhat.com
Cc: David Coulson da...@davidcoulson.net, Gluster General Discussion List 
gluster-users@gluster.org
Sent: Monday, July 16, 2012 9:56:28 PM
Subject: Re: [Gluster-users] NFS mounts with glusterd on localhost - reliable 
or not?

Say, is it possible to compile a kernel without whatever part of its NFS
support competes with Gluster's locking? 

Whit

On Fri, Jul 13, 2012 at 08:14:27AM -0400, Rajesh Amaravathi wrote:
 I hope you do realize that two NLM implementations of the same version
 cannot operate simultaneously on the same machine. I really look forward
 to a solution that makes this work; that'd be something.
 
 Regards, 
 Rajesh Amaravathi, 
 Software Engineer, GlusterFS 
 RedHat Inc. 
 
 - Original Message -
 From: David Coulson da...@davidcoulson.net
 To: Rajesh Amaravathi raj...@redhat.com
 Cc: Tomasz Chmielewski man...@wpkg.org, Gluster General Discussion List 
 gluster-users@gluster.org
 Sent: Friday, July 13, 2012 5:28:04 PM
 Subject: Re: [Gluster-users] NFS mounts with glusterd on localhost - reliable 
 or not?
 
 Was that introduced by the same person who thought that binding to 
 sequential ports down from 1024 was a good idea?
 
 Considering how hard RedHat was pushing Gluster at the Summit a week or 
 two ago, it seems like they're making it hard for people to really 
 implement it in any capacity other than their Storage Appliance product.
 
 Luckily I don't need locking yet, but I suppose RedHat will be happy 
 when I do since I'll need to buy more GFS2 Add-Ons for my environment :-)
 
 David
 
 On 7/13/12 7:49 AM, Rajesh Amaravathi wrote:
  Actually, if you want to mount *any* NFS volumes (of Gluster) or
  exports (of kernel-nfs-server), you cannot do it with locking on
  a system where a glusterfs (NFS) process is running (since 3.3.0).
  However, if it's ok to mount without locking, then you should be
  able to do it on localhost.
 
  Regards,
  Rajesh Amaravathi,
  Software Engineer, GlusterFS
  RedHat Inc.
 
  - Original Message -
  From: David Coulson da...@davidcoulson.net
  To: Tomasz Chmielewski man...@wpkg.org
  Cc: Rajesh Amaravathi raj...@redhat.com, Gluster General Discussion 
  List gluster-users@gluster.org
  Sent: Friday, July 13, 2012 3:16:38 PM
  Subject: Re: [Gluster-users] NFS mounts with glusterd on localhost - 
  reliable or not?
 
 
  On 7/13/12 5:29 AM, Tomasz Chmielewski wrote:
  Killing the option to use NFS mounts on localhost is certainly quite
  the opposite to my performance needs!
 
  He was saying you can't run kernel NFS server and gluster NFS server at
  the same time, on the same host. There is nothing stopping you from
  mounting localhost:/volume on all your boxes. That is exactly how our
  3.2.5 and 3.3.0 environments access volumes for the performance reasons
  you identified.
 
 
___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


Re: [Gluster-users] NFS mounts with glusterd on localhost - reliable or not?

2012-07-17 Thread Rajesh Amaravathi
Let me elucidate with an example:

host1: Gluster NFS server, serving a volume, say "vol"

host2: kernel NFS server, with an export, say "export"

Assuming host1 and host2 are not peers, i.e., host2 does NOT have any Gluster
NFS servers running, let's assume that for some reason "export" needs to be
mounted on host1.

This is not possible *with locking*, since Gluster NFS already holds the portmap
registration. Even if it is mounted, the kernel NFS client starts the kernel
NLM v4, which overrides the portmap registration of NLM v4 on host1, so
Gluster NFS's NLM implementation wouldn't work.

However, you can mount "export" on host1 with the -o nolock option, since the
kernel NFS client then does not attempt to spawn its own NLM.

Also, the only way host1 can mount "vol" via NFS is by specifying the nolock
option, else the same conflict arises.
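
A minimal sketch of those two nolock mounts (the mount points are placeholders):

host1# mount -t nfs -o vers=3,nolock host2:/export /mnt/export
host1# mount -t nfs -o vers=3,nolock host1:/vol /mnt/vol

With nolock, the kernel NFS client skips NLM registration entirely, so neither
mount disturbs Gluster NFS's lock manager.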

As to whether we can disable parts of kernel NFS (e.g. NLM), I think
it's not really necessary, since we can mount other exports with the nolock option.
If we take out NLM or disable NLM at the kernel level, then every time we need
NLM from the kernel we have to recompile the kernel (or keep a secondary kernel
with NLM) and reboot, which is much more tedious than simply killing the
Gluster/fuse NFS process and restarting it after the kernel NLM's
work is done. My $0.02 :)
 

Regards, 
Rajesh Amaravathi, 
Software Engineer, GlusterFS 
RedHat Inc. 

- Original Message -
From: Whit Blauvelt whit.glus...@transpect.com
To: Rajesh Amaravathi raj...@redhat.com
Cc: Gluster General Discussion List gluster-users@gluster.org
Sent: Tuesday, July 17, 2012 5:38:48 PM
Subject: Re: [Gluster-users] NFS mounts with glusterd on localhost - reliable 
or not?

Sorry, my question was too vague. What I meant to ask is whether,
since there is a conflict between the locking requests from the kernel's NFS
and from Gluster/fuse's NFS, the kernel might be compiled with some
or all of its NFS support disabled, so that Gluster/fuse NFS locking
would then work.

Perhaps longer term there needs to be a way to have the kernel shut its NFS
locking attempts off when there is a userland NFS such as Gluster's
running. Meanwhile, can enough of NFS be taken out of a custom kernel to
allow Gluster to lock?

Thanks,
Whit

On Tue, Jul 17, 2012 at 08:03:03AM -0400, Rajesh Amaravathi wrote:
 It should be possible to mount another kernel export with the -o nolock option and
 compile a kernel on it. I'm just guessing that when we mount with the nolock option,
 we are mounting mostly for read purposes and not for critical writes.
 
 Regards, 
 Rajesh Amaravathi, 
 Software Engineer, GlusterFS 
 RedHat Inc. 
 
 - Original Message -
 From: Whit Blauvelt whit.glus...@transpect.com
 To: Rajesh Amaravathi raj...@redhat.com
 Cc: David Coulson da...@davidcoulson.net, Gluster General Discussion 
 List gluster-users@gluster.org
 Sent: Monday, July 16, 2012 9:56:28 PM
 Subject: Re: [Gluster-users] NFS mounts with glusterd on localhost - reliable 
 or not?
 
 Say, is it possible to compile a kernel without whatever part of its NFS
 support competes with Gluster's locking? 
 
 Whit
 
 On Fri, Jul 13, 2012 at 08:14:27AM -0400, Rajesh Amaravathi wrote:
  I hope you do realize that two NLM implementations of the same version
  cannot operate simultaneously on the same machine. I really look forward
  to a solution that makes this work; that'd be something.
  
  Regards, 
  Rajesh Amaravathi, 
  Software Engineer, GlusterFS 
  RedHat Inc. 
  
  - Original Message -
  From: David Coulson da...@davidcoulson.net
  To: Rajesh Amaravathi raj...@redhat.com
  Cc: Tomasz Chmielewski man...@wpkg.org, Gluster General Discussion 
  List gluster-users@gluster.org
  Sent: Friday, July 13, 2012 5:28:04 PM
  Subject: Re: [Gluster-users] NFS mounts with glusterd on localhost - 
  reliable or not?
  
  Was that introduced by the same person who thought that binding to 
  sequential ports down from 1024 was a good idea?
  
  Considering how hard RedHat was pushing Gluster at the Summit a week or 
  two ago, it seems like they're making it hard for people to really 
  implement it in any capacity other than their Storage Appliance product.
  
  Luckily I don't need locking yet, but I suppose RedHat will be happy 
  when I do since I'll need to buy more GFS2 Add-Ons for my environment :-)
  
  David
  
  On 7/13/12 7:49 AM, Rajesh Amaravathi wrote:
   Actually, if you want to mount *any* NFS volumes (of Gluster) or
   exports (of kernel-nfs-server), you cannot do it with locking on
   a system where a glusterfs (NFS) process is running (since 3.3.0).
   However, if it's ok to mount without locking, then you should be
   able to do it on localhost.
  
   Regards,
   Rajesh Amaravathi,
   Software Engineer, GlusterFS
   RedHat Inc.
  
   - Original Message -
   From: David Coulson da...@davidcoulson.net
   To: Tomasz Chmielewski man...@wpkg.org
   Cc: Rajesh Amaravathi raj...@redhat.com, Gluster General Discussion 
   List gluster-users@gluster.org
   Sent: Friday, July 13, 2012 3:16:38 PM

Re: [Gluster-users] NFS mounts with glusterd on localhost - reliable or not?

2012-07-13 Thread Rajesh Amaravathi
- Original Message -
From: Tomasz Chmielewski man...@wpkg.org
To: James Kahn jk...@idea11.com.au
Cc: Gluster General Discussion List gluster-users@gluster.org
Sent: Friday, July 13, 2012 1:51:15 PM
Subject: Re: [Gluster-users] NFS mounts with glusterd on localhost - reliable 
or not?

On 07/13/2012 02:59 PM, James Kahn wrote:
 Try 3.3.0 - 3.2.6 has issues with NFS in general (memory leaks, etc).

Upgrading to 3.3.0 would be quite a big adventure to me (production 
site, lots of traffic etc.). But I guess it would be justified, if it 
really fixes this bug.

The issue was reported earlier, but I don't see any references it was 
fixed in 3.3.0:


Deadlock happens when writing a file big enough to fill the
filesystem cache and kernel is trying to flush it to free some
memory for glusterfsd which needs memory to commit some
filesystem blocks to free some memory for glusterfsd...


http://gluster.org/pipermail/gluster-users/2011-January/006477.html
https://bugzilla.redhat.com/show_bug.cgi?id=GLUSTER-2320


This is a problem generic to FUSE/userspace filesystems.
Also, in 3.3, since we have NLM implemented to provide locking for NFS,
it's not possible to mount from a system which has glusterd (more precisely,
a Gluster NFS process) running, since both kernel NFS and gNFS will try to
register for NLM v4 with the portmapper.
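
One quick way to see which lock manager currently holds that registration
("nlockmgr" is the portmapper name for NLM; the output will vary by system):

# rpcinfo -p | grep nlockmgr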


-- 
Tomasz Chmielewski
http://www.ptraveler.com


 -Original Message-
 From: Tomasz Chmielewski man...@wpkg.org
 Date: Thursday, 12 July 2012 5:56 PM
 To: Gluster General Discussion List gluster-users@gluster.org
 Subject: [Gluster-users] NFS mounts with glusterd on localhost - reliable
 ornot?

 Hi,

 are NFS mounts made on a single server (i.e. where glusterd is running)
 supposed to be stable (with gluster 3.2.6)?


 I'm using the following line in /etc/fstab:


 localhost:/sites /var/ftp/sites nfs _netdev,mountproto=tcp,nfsvers=3,bg 0
 0


 The problem is, after some time (~1-6 hours), I'm no longer able to
 access this mount.

 dmesg says:

 [49609.832274] nfs: server localhost not responding, still trying
 [49910.639351] nfs: server localhost not responding, still trying
 [50211.446433] nfs: server localhost not responding, still trying


 What's worse, whenever this happens, *all* other servers in the cluster
 (it's a 10-server distributed volume) will destabilise - their load
 average will grow, and eventually their gluster mount becomes
 unresponsive, too (other servers use normal gluster mounts).

 At this point, I have to kill all gluster processes, start glusterd
 again, mount (on servers using gluster mount).


 Is it expected behaviour with gluster and NFS mounts on localhost? Can
 it be caused by some kind of deadlock? Any workarounds?



 --
 Tomasz Chmielewski
 http://www.ptraveler.com
 ___
 Gluster-users mailing list
 Gluster-users@gluster.org
 http://gluster.org/cgi-bin/mailman/listinfo/gluster-users





___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


Re: [Gluster-users] NFS mounts with glusterd on localhost - reliable or not?

2012-07-13 Thread Rajesh Amaravathi
Actually, if you want to mount *any* NFS volumes (of Gluster) or
exports (of kernel-nfs-server), you cannot do it with locking on
a system where a glusterfs (NFS) process is running (since 3.3.0).
However, if it's ok to mount without locking, then you should be
able to do it on localhost.

Regards, 
Rajesh Amaravathi, 
Software Engineer, GlusterFS 
RedHat Inc. 

- Original Message -
From: David Coulson da...@davidcoulson.net
To: Tomasz Chmielewski man...@wpkg.org
Cc: Rajesh Amaravathi raj...@redhat.com, Gluster General Discussion List 
gluster-users@gluster.org
Sent: Friday, July 13, 2012 3:16:38 PM
Subject: Re: [Gluster-users] NFS mounts with glusterd on localhost - reliable 
or not?


On 7/13/12 5:29 AM, Tomasz Chmielewski wrote:

 Killing the option to use NFS mounts on localhost is certainly quite 
 the opposite to my performance needs!


He was saying you can't run kernel NFS server and gluster NFS server at 
the same time, on the same host. There is nothing stopping you from 
mounting localhost:/volume on all your boxes. That is exactly how our 
3.2.5 and 3.3.0 environments access volumes for the performance reasons 
you identified.
___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


Re: [Gluster-users] NFS mounts with glusterd on localhost - reliable or not?

2012-07-13 Thread Rajesh Amaravathi
I hope you do realize that two NLM implementations of the same version
cannot operate simultaneously on the same machine. I really look forward
to a solution that makes this work; that'd be something.

Regards, 
Rajesh Amaravathi, 
Software Engineer, GlusterFS 
RedHat Inc. 

- Original Message -
From: David Coulson da...@davidcoulson.net
To: Rajesh Amaravathi raj...@redhat.com
Cc: Tomasz Chmielewski man...@wpkg.org, Gluster General Discussion List 
gluster-users@gluster.org
Sent: Friday, July 13, 2012 5:28:04 PM
Subject: Re: [Gluster-users] NFS mounts with glusterd on localhost - reliable 
or not?

Was that introduced by the same person who thought that binding to 
sequential ports down from 1024 was a good idea?

Considering how hard RedHat was pushing Gluster at the Summit a week or 
two ago, it seems like they're making it hard for people to really 
implement it in any capacity other than their Storage Appliance product.

Luckily I don't need locking yet, but I suppose RedHat will be happy 
when I do since I'll need to buy more GFS2 Add-Ons for my environment :-)

David

On 7/13/12 7:49 AM, Rajesh Amaravathi wrote:
 Actually, if you want to mount *any* NFS volumes (of Gluster) or
 exports (of kernel-nfs-server), you cannot do it with locking on
 a system where a glusterfs (NFS) process is running (since 3.3.0).
 However, if it's ok to mount without locking, then you should be
 able to do it on localhost.

 Regards,
 Rajesh Amaravathi,
 Software Engineer, GlusterFS
 RedHat Inc.

 - Original Message -
 From: David Coulson da...@davidcoulson.net
 To: Tomasz Chmielewski man...@wpkg.org
 Cc: Rajesh Amaravathi raj...@redhat.com, Gluster General Discussion 
 List gluster-users@gluster.org
 Sent: Friday, July 13, 2012 3:16:38 PM
 Subject: Re: [Gluster-users] NFS mounts with glusterd on localhost - reliable 
 or not?


 On 7/13/12 5:29 AM, Tomasz Chmielewski wrote:
 Killing the option to use NFS mounts on localhost is certainly quite
 the opposite to my performance needs!

 He was saying you can't run kernel NFS server and gluster NFS server at
 the same time, on the same host. There is nothing stopping you from
 mounting localhost:/volume on all your boxes. That is exactly how our
 3.2.5 and 3.3.0 environments access volumes for the performance reasons
 you identified.


___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


Re: [Gluster-users] Memory leak with glusterfs NFS on 3.2.6

2012-06-26 Thread Rajesh Amaravathi
I found some memory leaks in the 3.2 release which, over time, add up to a lot of
leakage, but they are fixed in 3.3. We will fix them in 3.2 too, but the best
option, IMO, would be to upgrade to 3.3. Please let us know if you find any in
3.3 too.
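
For anyone wanting to capture more statedumps to compare, a rough sketch,
assuming a default build (the dump should land in /tmp/glusterdump.<pid>;
the pgrep pattern is only a guess at matching the NFS server process):

# kill -USR1 $(pgrep -f 'glusterfs.*nfs')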


Regards,
Rajesh Amaravathi,
Software Engineer, GlusterFS
RedHat Inc.
- Original Message -

From: Philip Poten philip.po...@gmail.com
To: Rajesh Amaravathi raj...@redhat.com
Cc: gluster-users@gluster.org
Sent: Thursday, June 21, 2012 1:03:53 PM
Subject: Re: [Gluster-users] Memory leak with glusterfs NFS on 3.2.6

Hi Rajesh,

We are handling only small files up to 10MB, mainly in the 5-250kB range; in
short, images in a flat structure of directories. Since there is a varnish
setup facing the internet, my guess would be that reads and writes are somewhat
balanced, i.e. not in excessive relation to each other. But still way more
reads than writes.

Files are almost never truncated, altered or deleted. I'm not sure if the 
backend writes resized images by creating and renaming them on gluster or by 
moving them onto gluster.

The munin graph looks as if the memory consumption grows faster during heavy 
usage.

"gluster volume top" operations return only the usage help, so I can't help
you with that.


Options Reconfigured:

performance.quick-read: off
performance.cache-size: 64MB
performance.io-thread-count: 64
performance.io-cache: on
performance.stat-prefetch: on


I would gladly deploy a patched 3.2.6 deb package for better debugging or help 
you with any other measure that doesn't require us to take it offline for more 
than a minute.


thanks for looking into that!


kind regards,
Philip


2012/6/21 Rajesh Amaravathi  raj...@redhat.com 

 Hi all,
 I am looking into this issue, but could not make much from the statedumps.
 I will try to reproduce this issue. If I know what kind of operations (reads,
 writes, metadata r/ws, etc.) are being done,
 and whether there are any other configuration changes w.r.t. GlusterFS, it'll be of
 great help.

 Regards,
 Rajesh Amaravathi,
 Software Engineer, GlusterFS
 RedHat Inc.
 
 From: Xavier Normand  xavier.norm...@gmail.com 
 To: Philip Poten  philip.po...@gmail.com 
 Cc: gluster-users@gluster.org
 Sent: Tuesday, June 12, 2012 6:32:41 PM
 Subject: Re: [Gluster-users] Memory leak with glusterfs NFS on 3.2.6


 Hi Philip,

 I do have about the same problem that you describe. There is my setup:

 Gluster: Two bricks running gluster 3.2.6

 Clients:
 4 clients running native gluster fuse client.
 2 clients running nfs client

 My nfs client are not doing that much traffic but i was able to view after a 
 couple days that the brick used to mount the nfs is having memory issue.

 i can provide more info as needed to help correct the problem.

 Thank's

 Xavier



 Le 2012-06-12 à 08:18, Philip Poten a écrit :

 2012/6/12 Dan Bretherton  d.a.brether...@reading.ac.uk :

 I wonder if this memory leak is the cause of the NFS performance degradation

 I reported in April.


 That's probable, since the performance does go down for us too when
 the glusterfs process reaches a large percentage of RAM. My initial
 guess was that it's the file system cache that's being eradicated,
 thus iowait increases. But a closer look at our munin graphs implies,
 that it's also the user space that eats more and more CPU
 proportionally with RAM:

 http://imgur.com/a/8YfhQ

 There are two restarts of the whole gluster process family visible on
 those graphs: one a week ago at the very beginning (white in the
 memory graph, as munin couldn't fork all it needed), and one
 yesterday. The drop between 8 and 9 was due to a problem unrelated to
 gluster.

 Pranith: I just made one dump, tomorrow I'll make one more and mail
 them both to you so that you can compare them. While I just restarted
 yesterday, the leak should be visible, as the process grows a few
 hundred MB every day.

 thanks for the fast reply,
 Philip
 ___
 Gluster-users mailing list
 Gluster-users@gluster.org
 http://gluster.org/cgi-bin/mailman/listinfo/gluster-users





___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


Re: [Gluster-users] Switching clients to NFS

2012-06-22 Thread Rajesh Amaravathi
If you are using version 3.3, then you will need a separate client: you cannot
mount any other NFS exports/volumes on GlusterFS servers.
If you are using 3.2.x, then you can mount it on the same servers.
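
One way to test before switching, assuming 3.2.x (server, volume name and
mount points below are placeholders, not your actual config):

# mount -t glusterfs server:/vol /mnt/gluster-test
# mount -t nfs -o vers=3,mountproto=tcp server:/vol /mnt/nfs-test

On 3.2.x both mounts should be able to coexist, so you can compare behaviour
on the NFS mount before moving /etc/fstab over.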

Regards, 
Rajesh Amaravathi, 
Software Engineer, GlusterFS 
RedHat Inc. 

- Original Message -
From: Marcus Bointon mar...@synchromedia.co.uk
To: gluster-users Discussion List gluster-users@gluster.org
Sent: Friday, June 22, 2012 3:23:43 PM
Subject: [Gluster-users] Switching clients to NFS

I'm looking at switching some clients from native gluster to NFS. Any advice on 
how to do this as transparently as possible? Can both mounts be used at the 
same time (so I can test NFS before switching)? I'm on a vanilla 2-way AFR 
config where both clients are also servers.

Marcus
-- 
Marcus Bointon
Synchromedia Limited: Creators of http://www.smartmessages.net/
UK info@hand CRM solutions
mar...@synchromedia.co.uk | http://www.synchromedia.co.uk/



___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


Re: [Gluster-users] GlusterFS permission problem

2012-06-12 Thread Rajesh Amaravathi
I see that for EASEGFS you have used netopstorage<n> hostnames, 0 < n < 5.
Are these hostnames resolvable under /etc/hosts on all hosts, or via local DNS?
If yes, have you tried setting nfs.trusted-write off?


What exactly is the problem when you say it doesn't work?
What are the symptoms? Please elucidate.
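
For reference, a sketch of toggling that option (volume name taken from your
'gluster volume info' output below):

# gluster volume set EASEGFS nfs.trusted-write off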


Regards, 
Rajesh Amaravathi, 
Software Engineer, GlusterFS 
RedHat Inc. 
- Original Message -

From: lihang lih...@netopcomputing.com 
To: Rajesh Amaravathi raj...@redhat.com 
Cc: gluster-users gluster-users@gluster.org 
Sent: Tuesday, June 12, 2012 9:43:55 AM 
Subject: Re: Re: [Gluster-users] GlusterFS permission problem 


Hi,
Thank you for your help. The permission problem has been solved by changing
the version to 3.3.
But there is a new problem. I created two volumes. The volume TEST works
fine, but when I mount the volume EASEGFS through NFS, I cannot see
anything.
Both work fine under the GlusterFS client. What can I do?


gluster volume info 

Volume Name: TEST 
Type: Stripe 
Volume ID: 5505d928-9a60-4c80-aa22-61c2933868cd 
Status: Started 
Number of Bricks: 1 x 2 = 2 
Transport-type: tcp 
Bricks: 
Brick1: 10.194.60.216:/test1 
Brick2: 10.194.60.216:/test2 
Options Reconfigured: 
nfs.addr-namelookup: off 
network.ping-timeout: 5 
nfs.trusted-write: on 

Volume Name: EASEGFS 
Type: Stripe 
Volume ID: b2f94b9f-3254-40ba-8ac2-5f6de8d83958 
Status: Started 
Number of Bricks: 1 x 4 = 4 
Transport-type: tcp 
Bricks: 
Brick1: netopstorage1:/ease 
Brick2: netopstorage2:/ease 
Brick3: netopstorage3:/ease 
Brick4: netopstorage4:/ease 
Options Reconfigured: 
nfs.trusted-write: on 
network.ping-timeout: 5 
nfs.addr-namelookup: off 


lihang 



From: Rajesh Amaravathi 
Date: 2012-05-31 15:07 
To: lihang 
CC: gluster-users 
Subject: Re: [Gluster-users] GlusterFS permission problem 

In 3.2.5, NLM is not implemented for our NFS server. You can try mounting with
the nolock option; it should work.
If you need locking, we have an NLM implementation in the latest release, 3.3.

Regards, 
Rajesh Amaravathi, 
Software Engineer, GlusterFS 
RedHat Inc. 

- Original Message - 
From: lih...@netopcomputing.com 
To: Rajesh Amaravathi raj...@redhat.com 
Cc: gluster-users@gluster.org 
Sent: Thursday, 31 May, 2012 11:45:01 AM 
Subject: Re: [Gluster-users] GlusterFS permission problem 

HI 
The version I used is 3.2.5. Thanks.


lihang 

On Thu, 31 May 2012 02:09:09 -0400 (EDT), Rajesh Amaravathi wrote: 
 which version of glusterfs are you using? 
 
 Regards, 
 Rajesh Amaravathi, 
 Software Engineer, GlusterFS 
 RedHat Inc. 
 
 - Original Message - 
 From: lih...@netopcomputing.com 
 To: gluster-users@gluster.org 
 Sent: Thursday, 31 May, 2012 9:43:26 AM 
 Subject: [Gluster-users] GlusterFS permission problem 
 
 Hi, all,
 I found a strange problem.
 I set up a GlusterFS share between Linux and Windows via NFS and LDAP.
 The Windows client has mounted the volume successfully, but there are
 some problems with permissions.
 I can create a file and edit it successfully on the volume from
 Windows, but when I create a file in an application via the "save as"
 button, it reports an error saying I don't have permission to edit it.
 
 My volume info: 
 Volume Name: share 
 Type: Distribute 
 Status: Started 
 Number of Bricks: 4 
 Transport-type: tcp 
 Bricks: 
 Brick1: 10.194.60.211:/data 
 Brick2: 10.194.60.212:/data 
 Brick3: 10.194.60.213:/data 
 Brick4: 10.194.60.214:/data 
 Options Reconfigured: 
 nfs.addr-namelookup: off 
 nfs.trusted-write: on 
 nfs.trusted-sync: on 
 features.quota: on 
 network.ping-timeout: 5 
 
 Windows mount command:
 mount 10.194.60.211:/share x:
 
 
 
  
 
 lihang 

___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


Re: [Gluster-users] Issue recreating volumes

2012-06-07 Thread Rajesh Amaravathi
One can use the clear_xattrs.sh script with the bricks as arguments to remove
all the xattrs set on bricks. It recursively deletes all
xattrs from the bricks' files. After running this script on the bricks, we can
re-use them.
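
A rough sketch of an invocation, assuming the script is the one shipped in the
extras/ directory of the glusterfs source tree, run against the brick paths
from Brian's mail:

# ./extras/clear_xattrs.sh /disk/storage1/fast /disk/storage2/fast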

Regards, 
Rajesh Amaravathi, 
Software Engineer, GlusterFS 
RedHat Inc. 

- Original Message -
From: Amar Tumballi ama...@redhat.com
To: Brian Candler b.cand...@pobox.com
Cc: gluster-users@gluster.org
Sent: Friday, June 8, 2012 10:34:08 AM
Subject: Re: [Gluster-users] Issue recreating volumes

Hi Brian,

Answers inline.

 Here are a couple of wrinkles I have come across while trying gluster 3.3.0
 under ubuntu-12.04.

 (1) At one point I decided to delete some volumes and recreate them. But
 it would not let me recreate them:

  root@dev-storage2:~# gluster volume create fast 
 dev-storage1:/disk/storage1/fast dev-storage2:/disk/storage2/fast
  /disk/storage2/fast or a prefix of it is already part of a volume

 This is even though gluster volume info showed no volumes.

 Restarting glusterd didn't help either. Nor indeed did a complete reinstall
 of glusterfs, even with apt-get remove --purge and rm -rf'ing the state
 directories.

 Digging around, I found some hidden state files:

  # ls -l /disk/storage1/*/.glusterfs/00/00
  /disk/storage1/fast/.glusterfs/00/00:
  total 0
  lrwxrwxrwx 1 root root 8 Jun  7 14:23 
 ----0001 -  ../../..

  /disk/storage1/safe/.glusterfs/00/00:
  total 0
  lrwxrwxrwx 1 root root 8 Jun  7 14:21 
 ----0001 -  ../../..

 I deleted them on both machines:

  rm -rf /disk/*/.glusterfs

 Problem solved? No, not even with glusterd restart :-(

  root@dev-storage2:~# gluster volume create safe replica 2 
 dev-storage1:/disk/storage1/safe dev-storage2:/disk/storage2/safe
  /disk/storage2/safe or a prefix of it is already part of a volume

 In the end, what I needed was to delete the actual data bricks themselves:

  rm -rf /disk/*/fast
  rm -rf /disk/*/safe

 That allowed me to recreate the volumes.

 This is probably an understanding/documentation issue. I'm sure there's a
 lot of magic going on in the gluster 3.3 internals (is that long ID some
 sort of replica update sequence number?) which if it were fully documented
 would make it easier to recover from these situations.


Preventing 'recreating' of a volume (actually, internally it just
prevents you from 're-using' the bricks; you can create the same volume name
with different bricks) is very much intentional, to prevent disasters
(like data loss) from happening.

We treat data as separate from the volume's config information. Hence, when a
volume is 'delete'd, only the configuration details of the volume are
lost; the data belonging to the volume is present on its bricks as-is. It
is the admin's discretion to handle the data later.

Considering the above point: if we allowed 're-using' of the same brick
which was part of some volume earlier, it could lead to issues of data
being placed in the wrong brick, internal inode number clashes, etc., which
could trigger a 'heal' of the data from the client's perspective, leading to
the deletion of some files which would be important.

If the admin is aware of the case, and knows that there is no data inside
the brick, then the easier option is to delete the export dir; it gets
re-created by 'gluster volume create'. If you want to fix it without
deleting the export directory, that is also possible, by deleting the
extended attributes on the brick like below.

bash# setfattr -x trusted.glusterfs.volume-id $brickdir
bash# setfattr -x trusted.gfid $brickdir


And now, creating the brick should succeed.
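
(A read-only check before removing anything; getfattr is from the attr
package, and this just lists what is currently set on the brick root:)

bash# getfattr -d -m . -e hex $brickdir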


 (2) Minor point: the FUSE client no longer seems to understand or need the
 _netdev option, however it still invokes it if you use defaults in
 /etc/fstab, and so you get a warning about an unknown option:

  root@dev-storage1:~# grep gluster /etc/fstab
  storage1:/safe /gluster/safe glusterfs defaults,nobootwait 0 0
  storage1:/fast /gluster/fast glusterfs defaults,nobootwait 0 0

  root@dev-storage1:~# mount /gluster/safe
  unknown option _netdev (ignored)


Will look into this.

Regards,
Amar
___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


Re: [Gluster-users] GlusterFS permission problem

2012-06-01 Thread Rajesh Amaravathi
Please make sure the netopstorage* hostnames are resolvable (in /etc/hosts or
local DNS). Also, the NFS client host
should not be peered with the cluster, i.e., the client should not be a
GlusterFS peer.
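
A quick way to check the peering part (run on any server in the cluster; the
NFS client's hostname/IP should not appear in the output):

# gluster peer status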

Regards, 
Rajesh Amaravathi, 
Software Engineer, GlusterFS 
RedHat Inc. 

- Original Message -
From: lih...@netopcomputing.com
To: Rajesh Amaravathi raj...@redhat.com
Cc: gluster-users@gluster.org
Sent: Friday, June 1, 2012 10:12:36 AM
Subject: Re: [Gluster-users] GlusterFS permission problem

Hi,
Thank you for your help. The permission problem has been solved by
changing the version to 3.3.
But there is a new problem. I created two volumes. The volume TEST
works fine, but when I mount the volume EASEGFS through NFS, I
cannot see anything.
Both work fine under the GlusterFS client. What can I do?

gluster volume info

Volume Name: TEST
Type: Stripe
Volume ID: 5505d928-9a60-4c80-aa22-61c2933868cd
Status: Started
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1: 10.194.60.216:/test1
Brick2: 10.194.60.216:/test2
Options Reconfigured:
nfs.addr-namelookup: off
network.ping-timeout: 5
nfs.trusted-write: on

Volume Name: EASEGFS
Type: Stripe
Volume ID: b2f94b9f-3254-40ba-8ac2-5f6de8d83958
Status: Started
Number of Bricks: 1 x 4 = 4
Transport-type: tcp
Bricks:
Brick1: netopstorage1:/ease
Brick2: netopstorage2:/ease
Brick3: netopstorage3:/ease
Brick4: netopstorage4:/ease
Options Reconfigured:
nfs.trusted-write: on
network.ping-timeout: 5
nfs.addr-namelookup: off




lihang


On Thu, 31 May 2012 03:07:10 -0400 (EDT), Rajesh Amaravathi wrote:
 In 3.2.5, NLM is not implemented for our NFS server. You can try
 mounting with the nolock option; it should work.
 If you need locking, we have an NLM implementation in the latest release,
 3.3.

 Regards,
 Rajesh Amaravathi,
 Software Engineer, GlusterFS
 RedHat Inc.

 - Original Message -
 From: lih...@netopcomputing.com
 To: Rajesh Amaravathi raj...@redhat.com
 Cc: gluster-users@gluster.org
 Sent: Thursday, 31 May, 2012 11:45:01 AM
 Subject: Re: [Gluster-users] GlusterFS permission problem

 HI
The version I used is 3.2.5. Thanks.


 lihang

 On Thu, 31 May 2012 02:09:09 -0400 (EDT), Rajesh Amaravathi wrote:
 which version of glusterfs are you using?

 Regards,
 Rajesh Amaravathi,
 Software Engineer, GlusterFS
 RedHat Inc.

 - Original Message -
 From: lih...@netopcomputing.com
 To: gluster-users@gluster.org
 Sent: Thursday, 31 May, 2012 9:43:26 AM
 Subject: [Gluster-users] GlusterFS permission problem

 Hi, all,
 I found a strange problem.
 I set up a GlusterFS share between Linux and Windows via NFS and LDAP.
 The Windows client has mounted the volume successfully, but there are
 some problems with permissions.
 I can create a file and edit it successfully on the volume from
 Windows, but when I create a file in an application via the "save as"
 button, it reports an error saying I don't have permission to edit it.

 My volume info:
 Volume Name: share
 Type: Distribute
 Status: Started
 Number of Bricks: 4
 Transport-type: tcp
 Bricks:
 Brick1: 10.194.60.211:/data
 Brick2: 10.194.60.212:/data
 Brick3: 10.194.60.213:/data
 Brick4: 10.194.60.214:/data
 Options Reconfigured:
 nfs.addr-namelookup: off
 nfs.trusted-write: on
 nfs.trusted-sync: on
 features.quota: on
 network.ping-timeout: 5

 Windows mount command:
 mount 10.194.60.211:/share x:


 
 

 lihang
___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users


Re: [Gluster-users] GlusterFS permission problem

2012-05-31 Thread Rajesh Amaravathi
In 3.2.5, NLM is not implemented for our NFS server. You can try mounting with
the nolock option; it should work.
If you need locking, we have an NLM implementation in the latest release, 3.3.

Regards, 
Rajesh Amaravathi, 
Software Engineer, GlusterFS 
RedHat Inc. 

- Original Message -
From: lih...@netopcomputing.com
To: Rajesh Amaravathi raj...@redhat.com
Cc: gluster-users@gluster.org
Sent: Thursday, 31 May, 2012 11:45:01 AM
Subject: Re: [Gluster-users] GlusterFS permission problem

HI
   The version I used is 3.2.5. Thanks.


lihang

On Thu, 31 May 2012 02:09:09 -0400 (EDT), Rajesh Amaravathi wrote:
 which version of glusterfs are you using?

 Regards,
 Rajesh Amaravathi,
 Software Engineer, GlusterFS
 RedHat Inc.

 - Original Message -
 From: lih...@netopcomputing.com
 To: gluster-users@gluster.org
 Sent: Thursday, 31 May, 2012 9:43:26 AM
 Subject: [Gluster-users] GlusterFS permission problem

 Hi, all,
 I found a strange problem.
 I set up a GlusterFS share between Linux and Windows via NFS and LDAP.
 The Windows client has mounted the volume successfully, but there are
 some problems with permissions.
 I can create a file and edit it successfully on the volume from
 Windows, but when I create a file in an application via the "save as"
 button, it reports an error saying I don't have permission to edit it.

 My volume info:
 Volume Name: share
 Type: Distribute
 Status: Started
 Number of Bricks: 4
 Transport-type: tcp
 Bricks:
 Brick1: 10.194.60.211:/data
 Brick2: 10.194.60.212:/data
 Brick3: 10.194.60.213:/data
 Brick4: 10.194.60.214:/data
 Options Reconfigured:
 nfs.addr-namelookup: off
 nfs.trusted-write: on
 nfs.trusted-sync: on
 features.quota: on
 network.ping-timeout: 5

 Windows mount command:
 mount 10.194.60.211:/share x:

 
 

 lihang

___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users