[Gluster-users] Sharing Sub-Volumes

2011-08-30 Thread Georg Höllrigl
Hello, is it possible and safe to use some kind of sub-volumes via glusterfs? For example, I have a share A with lots of data in it, but on some hosts I'll only need subfolder B. I can solve this by using the NFS client. But what will happen if I share the path A/B in addition to A?
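For reference, mounting a subdirectory of a volume over Gluster's built-in NFS server is the workaround the poster alludes to. A minimal sketch, assuming a volume named A with a subdirectory B on a host called server1 (all names illustrative, not from the post):

    # FUSE mount of the whole volume, on hosts that need everything
    mount -t glusterfs server1:/A /mnt/A

    # NFSv3 mount of just the subdirectory, on hosts that only need B
    mount -t nfs -o vers=3,nolock server1:/A/B /mnt/B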

[Gluster-users] GlusterFS disconnection

2011-08-30 Thread crl india
Hi, we have a GlusterFS setup with version 3.1.4 installed and configured in replica mode. The setup was configured in the following manner; the gluster volume info looks like:

    # gluster volume info
    Volume Name: gluster-fs1
    Type: Replicate
    Status: Started
    Number of Bricks: 2
    Transport-type:
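For context, a two-brick replicated volume like the one above is typically created along these lines; the host names and brick paths are illustrative, not taken from the post:

    # from either server, after peering the two nodes
    gluster peer probe server2
    gluster volume create gluster-fs1 replica 2 \
        server1:/export/brick1 server2:/export/brick1
    gluster volume start gluster-fs1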

[Gluster-users] setfacl: operation not supported using glusterfs 3.2.2

2011-08-30 Thread Neetu Ojha
Dear gluster team, I have installed glusterfs on my servers for the storage. Machine: x86_64-redhat-linux. I have created volumes with the rdma protocol for infiniband. When I run setfacl on the glusterfs mount point it works fine, but when I do it on the nfs mount it reports 'operation not supported'. The mount is done with the acl option.
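The comparison being described is roughly the following; the hostname, volume name, paths, and ACL entry are illustrative placeholders:

    # FUSE client mounted with ACL support -- setfacl works here
    mount -t glusterfs -o acl server1:/vol1 /mnt/fuse
    setfacl -m u:alice:rwx /mnt/fuse/dir    # succeeds

    # NFS mount of the same volume
    mount -t nfs -o vers=3 server1:/vol1 /mnt/nfs
    setfacl -m u:alice:rwx /mnt/nfs/dir     # fails: Operation not supported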

[Gluster-users] setfacl dir: operation not supported

2011-08-30 Thread Neetu Ojha
Dear gluster team, I have installed glusterfs on my servers for the storage. Machine: x86_64-redhat-linux. I have created volumes with the rdma protocol for infiniband. I have mounted with the acl option on server and client. When I run setfacl on the glusterfs mount point it works fine but when I do it for

[Gluster-users] setfacl dir: operation not supported, using glusterfs 3.2.2

2011-08-30 Thread Neetu Ojha
Dear gluster team, I have installed glusterfs on my servers for the storage. Machine: x86_64-redhat-linux. I have created volumes with the rdma protocol for infiniband. I have mounted with the acl option on server and client. When I run setfacl on the glusterfs mount point it works fine but when I do it for

[Gluster-users] Fwd: setfacl dir: operation not supported, using glusterfs 3.2.2

2011-08-30 Thread Neetu Ojha
-- Forwarded message -- From: Neetu Ojha neetuojha.c...@gmail.com Date: Tue, Aug 30, 2011 at 6:27 PM Subject: setfacl dir: operation not supported, using glusterfs 3.2.2 To: gluster-users@gluster.org Dear gluster team, I have installed glusterfs 3.2.2 on my servers for the

[Gluster-users] setfacl dir: operation not supported (installed glusterfs 3.2.2)

2011-08-30 Thread Neetu Ojha
Dear gluster team, I have installed glusterfs 3.2.2 on my servers for the storage. Machine: x86_64-redhat-linux. I have created volumes with the rdma protocol for infiniband. I have mounted with the acl option on server and client. When I run setfacl on the glusterfs mount point it works fine but when I do

Re: [Gluster-users] write-behind / write-back caching for replicated storage

2011-08-30 Thread root
On Fri, Aug 26, 2011 at 11:40:41AM +0200, Christian wrote: "The behavior I am looking for is to store files locally first and then sync the content to the second node in the background. Is there a way to do this?" You should look at geo-replication in gluster, which is asynchronous. Whit
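As a pointer, geo-replication in the 3.2 series is driven from the gluster CLI roughly as follows; the master volume name, slave host, and target directory are illustrative placeholders:

    # start asynchronous replication of MASTERVOL to a remote directory
    gluster volume geo-replication MASTERVOL slavehost:/data/remote_dir start

    # check the session
    gluster volume geo-replication MASTERVOL slavehost:/data/remote_dir status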

Re: [Gluster-users] setfacl dir: operation not supported

2011-08-30 Thread Amar Tumballi
Kindly help me how to resolve this issue. I am really getting confused about what the actual reason can be. That is because NFSv3 doesn't support extended attributes, and ACL support needs extended attributes. The behavior you are seeing is normal. The log message you are seeing is a
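The dependency described here can be checked directly: POSIX ACLs are stored in the system.posix_acl_access extended attribute, which the NFSv3 protocol has no way to carry. An illustrative check (paths hypothetical):

    # on the FUSE mount, the ACL xattr is visible after setfacl
    setfacl -m u:alice:r-- /mnt/fuse/file
    getfattr -n system.posix_acl_access -e hex /mnt/fuse/file   # attribute present

    # the same query on an NFSv3 mount of the volume is refused
    getfattr -n system.posix_acl_access /mnt/nfs/file           # Operation not supported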

[Gluster-users] Tony Bussieres is out of the office.

2011-08-30 Thread tony . bussieres
I will be out of the office from 30/08/2011, returning on 31/08/2011. I will answer your message as soon as I am back. In case of emergency, please contact Chantal Bacon at 514-281-2244 x 2252.

Re: [Gluster-users] setfacl dir: operation not supported

2011-08-30 Thread Amar Tumballi
Thanks a lot for the prompt reply. But I would like you to shed some more light on this: if volumes are created for rdma, should nfs mounts be avoided? An NFS export happens on the volume by default. To turn this off, use the command below: bash# gluster volume set VOLNAME nfs.disable yes
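A small follow-up sketch showing that command in context (VOLNAME is a placeholder for the actual volume name):

    # turn off the automatic NFS export for one volume
    gluster volume set VOLNAME nfs.disable yes

    # verify: the option is listed under 'Options Reconfigured'
    gluster volume info VOLNAME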

[Gluster-users] Gluster 3.2.1: Mounted volumes vanish on client side

2011-08-30 Thread gluster1206
Hi! I am using Gluster 3.2.1 on a two/three-server OpenSUSE 11.3/11.4 cluster, where the Gluster nodes are both server and client. While migrating the cluster to servers with higher performance, I tried the Gluster 3.3 beta. Both versions show the same problem: a single volume (holding the mail base,

[Gluster-users] autofs using glusterfs not NFS

2011-08-30 Thread Mike Hanby
Gluster clients and servers are CentOS 5.6 x86_64 using the 3.2.3 Gluster-provided -core and -fuse RPMs. I'm trying to use /etc/auto.home to automount my users' home directories with the glusterfs FUSE client, and I keep getting 'transport endpoint is not connected'. The summary: can successfully
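For context, the wildcard map being attempted presumably looks something like the sketch below (server address and volume name are illustrative); the follow-ups in this thread report that exactly this '*' form fails with glusterfs:

    # /etc/auto.master
    /home  /etc/auto.home  --timeout=1200

    # /etc/auto.home -- wildcard glusterfs map (reported not to work)
    *  -fstype=glusterfs  192.168.1.11:/homes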

Re: [Gluster-users] autofs using glusterfs not NFS

2011-08-30 Thread Luis Cerezo
The splat (the '*' wildcard) won't work; we have had the same issue. We moved homes to a direct mount. On Aug 30, 2011, at 4:23 PM, Mike Hanby wrote: Gluster clients and servers are CentOS 5.6 x86_64 using the 3.2.3 Gluster provided -core and -fuse RPMs. I'm trying to use /etc/auto.home to automount my

Re: [Gluster-users] autofs using glusterfs not NFS

2011-08-30 Thread Luis Cerezo
How does the mount output look? I didn't think about the double mount. On Aug 30, 2011, at 4:41 PM, Luis Cerezo wrote: the splat won't work. we have had the same issue. we moved homes to a direct mount. On Aug 30, 2011, at 4:23 PM, Mike Hanby wrote: Gluster clients and servers are CentOS

Re: [Gluster-users] autofs using glusterfs not NFS

2011-08-30 Thread Mike Hanby
Here is the working configuration (the double mount: a FUSE mount of the volume, plus an autofs NFS mount of the users subdirectory from localhost):

    # /etc/fstab
    192.168.1.11:/res  /res  glusterfs  defaults,_netdev  0 0

    # /etc/auto.master
    /home  /etc/auto.home  --timeout=1200

    # /etc/auto.home
    *  localhost:/res/users/

That's it, works like a champ. Otherwise I have to do a lot of /etc/passwd hackery, which isn't something I'd like to do :-)

Re: [Gluster-users] Gluster 3.2.1: Mounted volumes vanish on client side

2011-08-30 Thread Pranith Kumar K
Hi, this can happen if there is a split-brain on that directory. Could you post the output of 'getfattr -d -m . /data/vmail/var/vmail' on all the bricks, so that we can confirm whether that is the case? Pranith. On 08/31/2011 01:59 AM, gluster1...@akxnet.de wrote: Hi! I am using Gluster 3.2.1
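For readers following along: run on each brick, the command dumps the AFR changelog attributes; non-zero, mutually accusing trusted.afr.* values on the two bricks are the split-brain signature. The output below is purely illustrative (attribute names depend on the volume name; -e hex added for readable values):

    # run on every brick that backs the directory
    getfattr -d -m . -e hex /data/vmail/var/vmail

    # illustrative output on one brick of a volume named VOLNAME:
    # trusted.afr.VOLNAME-client-0=0x000000000000000000000000
    # trusted.afr.VOLNAME-client-1=0x000000010000000200000000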