engine.log and vdsm.log?
This can mostly happen for one of the following reasons:
- "gluster volume status vm-store" is not consistently returning the right output
- ovirt-engine is not able to identify the bricks properly
Anyway, engine.log will give better clarity.
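One way to rule out the first cause is to run the status command several times in a row and confirm the output does not change between runs. A minimal sketch of such a check is below; the `check_consistent` helper is hypothetical, not part of gluster, and on a real node you would pass it "gluster volume status vm-store".

```shell
#!/bin/sh
# Hypothetical helper: run a command several times and flag any run
# whose output differs from the first. Useful for spotting a status
# command that does not consistently return the same result.
check_consistent() {
  cmd="$1"            # command to run, e.g. "gluster volume status vm-store"
  runs="${2:-5}"      # number of repetitions (default 5)
  first="$($cmd)"     # baseline output from the first run
  i=2
  while [ "$i" -le "$runs" ]; do
    if [ "$($cmd)" != "$first" ]; then
      echo "INCONSISTENT"
      return 1
    fi
    i=$((i + 1))
  done
  echo "CONSISTENT"
}
```

Usage on a gluster node would look like `check_consistent "gluster volume status vm-store" 5`; if it prints INCONSISTENT, the flapping brick state in the console is likely coming from glusterd itself rather than from ovirt-engine.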
On 05/22/2014 02:24 AM, Alastair Neil wrote:
I just did a rolling upgrade of my gluster storage cluster to the
latest 3.5 bits. This all seems to have gone smoothly and all the
volumes are online. All volumes are replicated 1x2.
The oVirt console now insists that two of my volumes, including the
vm-store volume with my VMs happily running, have no bricks up.
It reports "Up but all bricks are down".
This would seem to be impossible. Gluster on the nodes themselves
reports no issues:
[root@gluster1 ~]# gluster volume status vm-store
Status of volume: vm-store
Gluster process                                 Port    Online  Pid
------------------------------------------------------------------------------
Brick gluster0:/export/brick0/vm-store          49158   Y       2675
Brick gluster1:/export/brick4/vm-store          49158   Y       2309
NFS Server on localhost                         2049    Y       27012
Self-heal Daemon on localhost                   N/A     Y       27019
NFS Server on gluster0                          2049    Y       12875
Self-heal Daemon on gluster0                    N/A     Y       12882
Task Status of Volume vm-store
------------------------------------------------------------------------------
There are no active volume tasks
As I mentioned, the VMs are running happily.
Initially the ISOs volume had the same issue. I did a volume start
and stop on the volume, as it was not being actively used, and that
cleared up the issue in the console. However, as I have VMs running,
I can't do this for the vm-store volume.
Any suggestions,
Alastair
_______________________________________________
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users