Hi all,
Running glusterfs-3.4.2-1.el6.x86_64 on CentOS 6.5.
Some smart people screwed up the network connection on my nodes for I don't
know how long, and I found that my GlusterFS volume is in split-brain. I
googled and found different ways to clean this up, but I need some extra help.
Hi all,
really need some guidance here. Is this a good direction?
http://www.gluster.org/2012/07/fixing-split-brain-with-glusterfs-3-3/
On Thursday, February 20, 2014 5:43 PM, William Kwan pota...@yahoo.com wrote:
Hi all,
Running glusterfs-3.4.2-1.el6.x86_64 on CentOS 6.5.
Due to some
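The approach in that post (written for 3.3, but it applies to 3.4 as well) is to pick the brick holding the copy you want to discard, delete the file and its gfid hard link there, and let self-heal copy the good replica back. A minimal sketch, assuming a hypothetical volume name `gvol1` and brick path `/export/brick1` (the file path and gfid are placeholders, not from the original mail):

```shell
# List files currently in split-brain (volume name is an example)
gluster volume heal gvol1 info split-brain

# On the node holding the BAD copy, find the file's gfid on the brick
getfattr -n trusted.gfid -e hex /export/brick1/path/to/file

# Remove the bad copy and its gfid hard link under .glusterfs
# (.glusterfs/<aa>/<bb>/<gfid>, where aa and bb are the gfid's
# first two byte pairs, as printed by getfattr above)
rm /export/brick1/path/to/file
rm /export/brick1/.glusterfs/aa/bb/<gfid>

# Trigger self-heal so the surviving copy is replicated back
gluster volume heal gvol1
```

Nothing is deleted through the mount point; both removals happen directly on the brick of the node whose copy you are discarding.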
Hi,
Gluster 3.4.2 on CentOS 6.5
I have a replica 2 volume across two systems. The filesystem where a brick
was created on system 2 was corrupted. The hardware issue has been resolved,
and the filesystem has been recreated and mounted at the same mount point, but
I can't get the volume to replicate. I'm
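When a brick's filesystem is recreated from scratch, the new empty directory lacks the `trusted.glusterfs.volume-id` extended attribute that glusterd stamped on the original brick, so the brick process typically refuses to start. A hedged sketch of the usual recovery on 3.4, assuming hypothetical volume name `gvol1` and brick path `/data/brick` (read the id from the surviving good brick, stamp the rebuilt one, then heal):

```shell
# On the healthy node: read the volume-id xattr from the good brick
getfattr -n trusted.glusterfs.volume-id -e hex /data/brick

# On the rebuilt node: stamp the recreated directory with the same id
# (the value below is a placeholder for the hex string printed above)
setfattr -n trusted.glusterfs.volume-id -v 0x<volume-id-hex> /data/brick

# Restart glusterd so it picks the brick up, then run a full heal
service glusterd restart
gluster volume heal gvol1 full
```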
Hi
I have a replica 2 volume. I took one brick off and the volume became one with
only one brick. Is there any way I can add a brick back in and turn it back
into a replicated volume?
W
___
Gluster-users mailing list
Gluster-users@gluster.org
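Since 3.3, `add-brick` can change the replica count of an existing volume, which is how a stripped-down single-brick volume is turned back into a replica 2 one. A sketch with hypothetical volume name and brick (not from the original mail):

```shell
# Grow the single-brick volume back to replica 2 by adding one brick
gluster volume add-brick gvol1 replica 2 onode2:/data/brick

# Then run a full heal so existing data is copied to the new brick
gluster volume heal gvol1 full
```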
:/data2/newbrick
volume add-brick: failed:
If I create a brick directory under /data2/newbrick and run the command, it
works fine.
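That a subdirectory works while the top-level path fails is consistent with two common `add-brick` failures in this era: using the mount point itself as the brick, or pointing at a directory that still carries volume metadata from a previous life as a brick. A hedged sketch of working around both (path from the mail; the rest is assumption):

```shell
# Using a subdirectory of the mount rather than the mount point itself
mkdir /data2/newbrick/brick

# If the directory was ever used as a brick, clear its stale metadata
setfattr -x trusted.glusterfs.volume-id /data2/newbrick/brick 2>/dev/null
setfattr -x trusted.gfid /data2/newbrick/brick 2>/dev/null
rm -rf /data2/newbrick/brick/.glusterfs
```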
On Monday, January 13, 2014 1:24 PM, Vijay Bellur vbel...@redhat.com wrote:
On 01/13/2014 09:31 PM, William Kwan wrote:
Hi
I have a replica 2 volume. I took
of a volume
Thanks
W
On Thursday, December 26, 2013 5:24 PM, William Kwan pota...@yahoo.com wrote:
Hi all,
Running 3.4.1 on CentOS 6.
I have this issue: I created two XFS filesystems on each of two hosts
onode1:/data/glusterfs/kvm1/brick1
onode2:/data/glusterfs/kvm1/brick2
onode1:/data
peered with same address/name?)
hth
Bernhard
On 27.12.2013, at 15:27, William Kwan pota...@yahoo.com wrote:
Maybe it is confusing, or I'm using the wrong naming conventions. What I'm
actually testing is this: each node has one brick for each of the glusterfs
volumes kvm1 and kvm2. So glusterfs kvm1 will have
Hi all,
Running 3.4.1 on CentOS 6.
I have this issue: I created two XFS filesystems on each of two hosts
onode1:/data/glusterfs/kvm1/brick1
onode2:/data/glusterfs/kvm1/brick2
onode1:/data/glusterfs/kvm2/brick1
onode1:/data/glusterfs/kvm2/brick2
The first volume was created successfully
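With that layout, the two volumes would be created roughly as below. Note that the kvm2 brick list as written names onode1 twice, which is presumably what Bernhard's "peered with same address/name?" question is pointing at; the sketch assumes the second kvm2 brick was meant to live on onode2 so each replica pair spans both nodes:

```shell
# One replica-2 volume per image store; paths from the original mail,
# with the second kvm2 brick moved to onode2 (assumption, see above)
gluster volume create kvm1 replica 2 \
    onode1:/data/glusterfs/kvm1/brick1 onode2:/data/glusterfs/kvm1/brick2
gluster volume create kvm2 replica 2 \
    onode1:/data/glusterfs/kvm2/brick1 onode2:/data/glusterfs/kvm2/brick2
gluster volume start kvm1
gluster volume start kvm2
```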
Hi all,
Env: CentOS 6.5 with glusterfs 3.4.1
I just started working on Gluster. I have two test hosts, both with XFS on
top of LVM. I searched, but there are lots of results like this. I'm not sure
if this is a bug in my version or if I missed something. Any suggestion is
Hi all,
Env: CentOS 6.5 with glusterfs 3.4.1
I just started working on Gluster. I have two test hosts, both with XFS on
top of LVM. I searched, but there are lots of results like this. I'm not sure
if this is a bug in my version.
# gluster volume create gvol1 replica 2 transport
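The command above is cut off in the preview; its full form on 3.4 typically looks like the sketch below (the brick hosts and paths are hypothetical, not from the original mail):

```shell
# Full form of the truncated create command; transport tcp is the default
gluster volume create gvol1 replica 2 transport tcp \
    host1:/data/brick1/gvol1 host2:/data/brick2/gvol1
gluster volume start gvol1
```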