Hello,
Is it possible, and safe, to use some kind of sub-volumes via glusterfs?
For example, I have a share A with lots of data in it, but on some hosts I'll
only need the subfolder B.
I can solve this by using the NFS client. But what will happen if I export
the path A/B in addition to A?
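For reference, the NFS approach I have in mind looks roughly like this
(volume, host, and path names are just placeholders):
# allow subdirectory exports on Gluster's built-in NFS server
bash# gluster volume set volA nfs.export-dirs on
bash# gluster volume set volA nfs.export-dir /B
# on the hosts that only need B, mount just the subfolder over NFSv3
bash# mount -t nfs -o vers=3 server1:/volA/B /mnt/B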
Hi,
We have a GlusterFS setup, version 3.1.4, installed and configured in
replica mode.
The setup was configured in the following manner; the gluster volume info
looks like:
# gluster volume info
Volume Name: gluster-fs1
Type: Replicate
Status: Started
Number of Bricks: 2
Transport-type:
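A two-brick replica volume like this is typically created with commands along
these lines (the hostnames and brick paths below are placeholders, not the
actual setup):
bash# gluster volume create gluster-fs1 replica 2 server1:/export/brick1 server2:/export/brick1
bash# gluster volume start gluster-fs1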
Dear gluster team,
I have installed glusterfs on my servers for storage.
Machine: x86_64-redhat-linux
I have created volumes with the rdma protocol for InfiniBand. When I run
setfacl on the glusterfs mount point it works fine, but when I do it on the
NFS mount it says 'operation not supported'. The mount is done with the acl
option.
Dear gluster team,
I have installed glusterfs on my servers for storage.
Machine: x86_64-redhat-linux
I have created volumes with the rdma protocol for InfiniBand. I have mounted
with the acl option on server and client. When I run setfacl on the glusterfs
mount point it works fine, but when I do it for
-- Forwarded message --
From: Neetu Ojha neetuojha.c...@gmail.com
Date: Tue, Aug 30, 2011 at 6:27 PM
Subject: setfacl dir: operation not supported, using glusterfs 3.2.2
To: gluster-users@gluster.org
Dear gluster team,
I have installed glusterfs 3.2.2 on my servers for the
On Fri, Aug 26, 2011 at 11:40:41AM +0200, Christian wrote:
The behavior I am looking for is to store files locally first and
then sync the content to the second node in the background.
Is there a way for this?
You should look at geo-replication in gluster, which is asynchronous.
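Something like the following should get you started (the volume name and
slave path here are placeholders):
bash# gluster volume geo-replication VOLNAME slave-host:/path/to/backup start
bash# gluster volume geo-replication VOLNAME slave-host:/path/to/backup status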
Whit
Kindly help me resolve this issue. I am really confused about what the
actual reason could be.
That is because NFSv3 doesn't support extended attributes, and ACL
support needs extended attributes. The behavior you are seeing is normal.
The log message you are seeing is a
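In practice the difference looks like this (host, volume, and user names
below are placeholders):
# over the native client, mounted with the acl option, this works:
bash# mount -t glusterfs -o acl server1:/VOLNAME /mnt/fuse
bash# setfacl -m u:testuser:rwx /mnt/fuse/dir
# over the built-in NFS server (NFSv3), the same call fails:
bash# mount -t nfs -o vers=3 server1:/VOLNAME /mnt/nfs
bash# setfacl -m u:testuser:rwx /mnt/nfs/dir
setfacl: /mnt/nfs/dir: Operation not supported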
I will be out of the office from 30/08/2011, returning on 31/08/2011.
I will answer your message as soon as I return.
In case of emergency, please contact Chantal Bacon at 514-281-2244 x 2252.
Thanks a lot for the prompt reply, but I would like you to throw some more
light on this: if volumes are created for rdma, should the NFS mount be
avoided?
The volume is exported over NFS by default. To turn this off, use the
command below:
bash# gluster volume set VOLNAME nfs.disable yes
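You can then confirm the change took effect, and mount natively instead (the
server name below is a placeholder; an rdma volume's transport is picked up
from its volfile):
bash# gluster volume info VOLNAME | grep nfs.disable
nfs.disable: on
bash# mount -t glusterfs -o acl server1:/VOLNAME /mnt/gluster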
Hi!
I am using Gluster 3.2.1 on a two/three-node openSUSE 11.3/11.4 server
cluster, where the Gluster nodes act as both server and client.
While migrating the cluster to servers with higher performance, I tried
the Gluster 3.3 beta.
Both versions show the same problem:
A single volume (holding the mail base,
Gluster clients and servers are CentOS 5.6 x86_64 using the Gluster-provided
3.2.3 -core and -fuse RPMs.
I'm trying to use /etc/auto.home to automount my users' home directories
using the glusterfs fuse client, and I keep getting 'transport endpoint is
not connected'. The summary: I can successfully
The splat (the autofs wildcard) won't work. We have had the same issue; we
moved homes to a direct mount.
On Aug 30, 2011, at 4:23 PM, Mike Hanby wrote:
Gluster clients and servers are CentOS 5.6 x86_64 using the Gluster-provided
3.2.3 -core and -fuse RPMs.
I'm trying to use /etc/auto.home to automount my
How does the mount output look? I didn't think about the double mount...
On Aug 30, 2011, at 4:41 PM, Luis Cerezo wrote:
The splat won't work. We have had the same issue; we moved homes to a direct
mount.
On Aug 30, 2011, at 4:23 PM, Mike Hanby wrote:
Gluster clients and servers are CentOS
/etc/fstab
192.168.1.11:/res /res glusterfs defaults,_netdev 0 0
/etc/auto.master
/home /etc/auto.home --timeout=1200
/etc/auto.home
* localhost:/res/users/
That's it, works like a champ. Otherwise I have to do a lot of /etc/passwd
hackery, which isn't something I'd like to do :-)
Hi,
This can happen if there is a split-brain on that directory. Could
you post the output of 'getfattr -d -m . /data/vmail/var/vmail' on all
the bricks, so that we can confirm whether that is the case?
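Roughly, on each brick server (the exact attribute names depend on the
volume name):
bash# getfattr -d -m . /data/vmail/var/vmail
# look for the trusted.afr.<volume>-client-* entries: non-zero pending
# counters on both bricks, each blaming the other, indicate a split-brain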
Pranith.
On 08/31/2011 01:59 AM, gluster1...@akxnet.de wrote:
Hi!
I am using Gluster 3.2.1