Hello List,
I am running oVirt (CentOS) on top of GlusterFS with a three-node
replica. Versions are listed below.
It looks like I cannot get my node1 (v3.7.8) to work together with the other
two (v3.7.0). The error I get when I try
"mount -t glusterfs 10.10.3.7:/RaidVolC /mnt/":
[2016-02-20 10:27:30.890701]
=0x0001
trusted.glusterfs.volume-id=0x8786357b9d114c01a34baee949c116e9
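A first step with mixed peer versions like 3.7.8 vs. 3.7.0 is to confirm what each peer actually runs and what op-version the cluster is operating at, since peers coordinate feature compatibility through it. A hedged sketch (hostnames are assumptions, and this needs a live cluster):

```shell
# Hedged sketch: compare GlusterFS versions across peers (hostnames assumed).
for h in ovirt-node01 ovirt-node02 ovirt-node03; do
  printf '%s: ' "$h"
  ssh "$h" glusterd --version | head -n1
done
# The operating op-version on this peer (must be supported by ALL peers):
grep operating-version /var/lib/glusterd/glusterd.info
```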
On Mon, Mar 30, 2015 at 12:38 PM, Pranith Kumar Karampuri
pkara...@redhat.com wrote:
On 03/30/2015 03:59 PM, Ml Ml wrote:
Anyone?
Is this a dumb question or just a hard one?
I already tried:
http
at 10:31 PM, Ml Ml mliebher...@googlemail.com wrote:
Hello List,
I have a 3-peer replica Gluster setup. On one of my peers the hard drive of
a brick failed.
I replaced it and formatted the brick device with ext4.
How do I get it back into my Gluster setup? Is there an official way to
re-integrate it?
Thanks,
Mario
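Not an authoritative answer, but on 3.x releases the usual route back was "gluster volume replace-brick ... commit force" onto a fresh, empty directory, followed by a full self-heal. A hedged sketch; the volume name, hostname, and brick paths below are assumptions taken from later messages in the thread:

```shell
# Hedged sketch: bring a rebuilt disk back into a replica volume.
# Mount the freshly formatted drive and create an empty brick directory:
mount /dev/sdb1 /raidvol/volb          # device path is an assumption
mkdir -p /raidvol/volb/brick_new
# Swap the dead brick for the new empty one (same host, new path):
gluster volume replace-brick RaidVolB \
  ovirt-node03.example.local:/raidvol/volb/brick \
  ovirt-node03.example.local:/raidvol/volb/brick_new \
  commit force
# Let self-heal copy the data back from the surviving replicas:
gluster volume heal RaidVolB full
gluster volume heal RaidVolB info    # repeat until "Number of entries: 0"
```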
Will I then get a loop? Do I need SPF or something alike?
Sorry, I meant STP (Spanning Tree Protocol).
Cheers,
Mario
___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users
Hello List,
I would like to build a 3-node cluster. I was thinking of a setup like this:
http://oi58.tinypic.com/2rfgghi.jpg
My question:
Can I set up my network in such a way that it will still work if one node fails?
Thinking about it, I want bonding, and then I might even need a bridge
to bring
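On the host side, a loop only appears when both NICs forward at the same time; Linux bonding in active-backup mode keeps one link passive, so the hosts themselves don't need STP. A minimal sketch in CentOS ifcfg style (interface names and the oVirt bridge name are assumptions):

```shell
# Hedged sketch (CentOS ifcfg files); device names are assumptions.
# mode=active-backup: one NIC active, the other standby -> no switching loop.
cat > /etc/sysconfig/network-scripts/ifcfg-bond0 <<'EOF'
DEVICE=bond0
TYPE=Bond
BONDING_OPTS="mode=active-backup miimon=100"
BRIDGE=ovirtmgmt
ONBOOT=yes
EOF
for nic in eth0 eth1; do
cat > /etc/sysconfig/network-scripts/ifcfg-$nic <<EOF
DEVICE=$nic
MASTER=bond0
SLAVE=yes
ONBOOT=yes
EOF
done
```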
:58 PM, Ml Ml wrote:
/1701d5ae-6a44-4374-8b29-61c699da870b/dom_md/ids is a binary file.
Here is the output of gluster volume info:
--
[root@ovirt-node03 ~]# gluster volume info
Volume Name: RaidVolB
Type
Can anyone help me here please?
On Tue, Jan 27, 2015 at 7:09 PM, Ml Ml mliebher...@googlemail.com wrote:
Hello List,
I was able to produce a split-brain:
[root@ovirt-node03 splitmount]# gluster volume heal RaidVolB info
Brick ovirt-node03.example.local:/raidvol/volb/brick/
gfid:1c15d0cb
, Ravishankar N ravishan...@redhat.com wrote:
On 01/28/2015 08:34 PM, Ml Ml wrote:
Hello Ravi,
Thanks a lot for your reply.
The data on ovirt-node03 is the copy I want to keep.
Here is the info collected by following the howto:
https://github.com/GlusterFS/glusterfs/blob/master/doc/debugging
in my case, right?
What are my next setfattr commands now in my case if I want to keep the
data from node03?
Thanks a lot!
Mario
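For reference, the manual fix described in that howto amounts to zeroing the AFR changelog xattrs on the copy you want to discard (node04 here), so that self-heal treats node03 as the source. A hedged sketch only; the exact trusted.afr.* key name and file path must be taken from your own getfattr output:

```shell
# Hedged sketch: keep node03's copy of a split-brain file.
# Run on ovirt-node04, against the brick copy that should be DISCARDED.
F=/raidvol/volb/brick/path/to/file       # path is an assumption
# Inspect the changelog xattrs first:
getfattr -d -m . -e hex "$F"
# Zero the pending changelog blaming node03's brick (the key name is an
# assumption; use the trusted.afr.* key getfattr actually printed):
setfattr -n trusted.afr.RaidVolB-client-0 \
         -v 0x000000000000000000000000 "$F"
# Trigger self-heal so node03's data is copied back over node04's copy:
gluster volume heal RaidVolB
```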
On Wed, Jan 28, 2015 at 9:44 AM, Ravishankar N ravishan...@redhat.com wrote:
On 01/28/2015 02:02 PM, Ml Ml wrote:
I want to either take the file from node03
Hello List,
I was able to produce a split-brain:
[root@ovirt-node03 splitmount]# gluster volume heal RaidVolB info
Brick ovirt-node03.example.local:/raidvol/volb/brick/
gfid:1c15d0cb-1cca-4627-841c-395f7b712f73
Number of entries: 1
Brick ovirt-node04.example.local:/raidvol/volb/brick/