From engine.log, the gluster volumes are queried on host "node1",
which returns no volumes.
1. Which nodes are added to your cluster "r710cluster1" - node1
alone, or node0 and node2 as well?
2. Was the attached supervdsm.log from node1?
3. Which node was the below "gluster volume info" output from? What is
the output of "gluster peer status" and "gluster volume info" on node1?
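For reference, a quick way to gather that information on node1 might be the following (a sketch, assuming root shell access on node1 and that glusterd is running there):

```shell
# Run on node1. These are standard gluster CLI commands; the
# glusterd.info path is the default state directory location.

# Peers as seen from node1 - each entry should show
# "Peer in Cluster (Connected)"
gluster peer status

# All volumes node1 knows about - g2sata and g4sata should appear here
gluster volume info

# The server UUID that the engine compares against its database
cat /var/lib/glusterd/glusterd.info
```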
On 08/17/2015 12:49 PM, Demeter Tibor wrote:
Dear Sahina,
Thank you for your reply.
Volume Name: g2sata
Type: Replicate
Volume ID: 49d76fc8-853e-4c7d-82a5-b12ec98dadd8
Status: Started
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1: 172.16.0.10:/data/sata/brick2
Brick2: 172.16.0.12:/data/sata/brick2
Options Reconfigured:
nfs.disable: on
user.cifs: disable
auth.allow: 172.16.*
storage.owner-uid: 36
storage.owner-gid: 36
Volume Name: g4sata
Type: Replicate
Volume ID: f26ed231-c951-431f-8a2f-e8818b58cfb4
Status: Started
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1: 172.16.0.10:/data/sata/iso
Brick2: 172.16.0.12:/data/sata/iso
Options Reconfigured:
nfs.disable: off
user.cifs: disable
auth.allow: 172.16.0.*
storage.owner-uid: 36
storage.owner-gid: 36
Also, I have attached the logs.
Thanks in advance,
Tibor
----- On 17 Aug 2015, at 8:40, Sahina Bose sab...@redhat.com wrote:
Please provide output of "gluster volume info" command, vdsm.log &
engine.log
There could be a mismatch between the node information in the engine
database and gluster - one possible reason is that the gluster server
UUID changed on the node, and we will need to see why.
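One way to check for such a UUID mismatch is to compare the UUID each node's glusterd announces with what its peers have recorded (a sketch, assuming shell access to the nodes; the paths are the default glusterd state directory):

```shell
# On each node: the local server UUID glusterd announces
cat /var/lib/glusterd/glusterd.info

# On each node: the UUIDs this node has recorded for its peers
# (each file name under peers/ is a peer UUID)
ls /var/lib/glusterd/peers/
```

If a node's UUID changed (for example after its /var/lib/glusterd state was rebuilt), the old UUID can linger in the peer files and in the engine database, producing "peer rejected" and missing-volume symptoms.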
On 08/17/2015 12:35 AM, Demeter Tibor wrote:
Hi All,
I have to upgrade ovirt 3.5.0 to 3.5.3. We have a 3-node system with a
gluster replica between 2 of these 3 servers.
I had a gluster volume between node0 and node2,
but I wanted to create a new volume between node1 and node2.
It didn't work; it completely killed my node1, because glusterd did not start. I
kept getting the error: gluster peer rejected (related to node1).
I followed this article
http://www.gluster.org/community/documentation/index.php/Resolving_Peer_Rejected
and it helped - my gluster service worked again - but ovirt logged these errors:
Detected deletion of volume g2sata on cluster r710cluster1, and deleted it from
engine DB.
Detected deletion of volume g4sata on cluster r710cluster1, and deleted it from
engine DB.
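For reference, the recovery steps in that Peer Rejected article are roughly the following (a sketch only - run on the rejected node, and back up /var/lib/glusterd first, since this removes local gluster state):

```shell
# On the rejected node (node1 here). DESTRUCTIVE: clears local gluster
# state except the node's own UUID file - take a backup first.
systemctl stop glusterd
cd /var/lib/glusterd
# Keep glusterd.info (the node's UUID); remove everything else
find . -mindepth 1 ! -name glusterd.info -delete
systemctl start glusterd

# Re-probe a known-good peer (e.g. node2) from the rejected node,
# then restart glusterd and verify the peer state:
gluster peer probe node2
systemctl restart glusterd
gluster peer status
```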
And ovirt does not see my gluster volumes anymore.
I've checked with "gluster volume status" and "gluster volume heal g2sata info";
the volumes seem to be working and my VMs are OK.
How can I reimport my lost volumes to ovirt?
Thanks in advance,
Tibor
_______________________________________________
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users