Hello,

Gluster worked fine for me last year when I tried it.
Check out this blog post; there may be some options you need to set on
the gluster volume for it to work:
http://blog.gluster.org/2014/02/setting-up-a-test-environment-for-apache-cloudstack-and-gluster/
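
From memory, the key bit is that qemu connects over libgfapi from an
unprivileged port, so gluster has to be told to accept "insecure"
connections. Assuming your volume is gv0 as in your logs, something
like this (untested, adjust for your setup):

  # on one of the gluster servers:
  gluster volume set gv0 server.allow-insecure on

  # on every gluster server, add to /etc/glusterfs/glusterd.vol:
  #   option rpc-auth-allow-insecure on
  # then restart glusterd (CentOS 6):
  service glusterd restart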

Also check the #gluster channel on Freenode.
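
You can also sanity-check the libgfapi path from the hypervisor itself,
outside of CloudStack. If your qemu-img was built with gluster support,
this should print the image info before a VM will boot from it:

  qemu-img info gluster://192.168.100.110:24007/gv0/32fa4aab-f7a2-416a-ba35-952b1c2ee1af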

HTH

--
Sent from the Delta quadrant using Borg technology!

Nux!
www.nux.ro

----- Original Message -----
> From: "Syahrul Sazli Shaharir" <[email protected]>
> To: [email protected]
> Sent: Monday, 6 July, 2015 07:52:05
> Subject: Cloudstack + GlusterFS

> Hi,
> 
> I'm running CloudStack on a CentOS 6.x environment and have verified
> it to run fine with normal NFS primary storage. Now I wish to switch
> to GlusterFS as primary storage. The gluster volume stats are OK, and
> the volume works fine when mounted manually by the hosts, but when
> launching a VM instance, qemu on the host logs the following error on
> startup (this particular log is from the console proxy VM):
> 
> [2015-07-06 06:26:09.053694] E [rpc-clnt.c:362:saved_frames_unwind]
> (--> /usr/lib64/libglusterfs.so.0(_gf_log_callingfn+0x1e0)[0x7f944ac3cfb0]
> (--> /usr/lib64/libgfrpc.so.0(saved_frames_unwind+0x1e7)[0x7f944d270a87]
> (--> /usr/lib64/libgfrpc.so.0(saved_frames_destroy+0xe)[0x7f944d270b9e]
> (--> /usr/lib64/libgfrpc.so.0(rpc_clnt_connection_cleanup+0xab)[0x7f944d270c6b]
> (--> /usr/lib64/libgfrpc.so.0(rpc_clnt_notify+0x15f)[0x7f944d27122f]
> ))))) 0-gv0-client-0: forced unwinding frame type(GF-DUMP) op(DUMP(1))
> called at 2015-07-06 06:26:09.053030 (xid=0x3)
> [2015-07-06 06:26:09.055464] E [rpc-clnt.c:362:saved_frames_unwind]
> (--> /usr/lib64/libglusterfs.so.0(_gf_log_callingfn+0x1e0)[0x7f944ac3cfb0]
> (--> /usr/lib64/libgfrpc.so.0(saved_frames_unwind+0x1e7)[0x7f944d270a87]
> (--> /usr/lib64/libgfrpc.so.0(saved_frames_destroy+0xe)[0x7f944d270b9e]
> (--> /usr/lib64/libgfrpc.so.0(rpc_clnt_connection_cleanup+0xab)[0x7f944d270c6b]
> (--> /usr/lib64/libgfrpc.so.0(rpc_clnt_notify+0x15f)[0x7f944d27122f]
> ))))) 0-gv0-client-1: forced unwinding frame type(GF-DUMP) op(DUMP(1))
> called at 2015-07-06 06:26:09.055134 (xid=0x3)
> [2015-07-06 06:26:09.055488] E [MSGID: 108006]
> [afr-common.c:3919:afr_notify] 0-gv0-replicate-0: All subvolumes are
> down. Going offline until atleast one of them comes back up.  <----- I
> believe this is the root cause
> 2015-07-06T06:26:09.075618Z qemu-kvm: -drive
> file=gluster+tcp://192.168.100.110:24007/gv0/32fa4aab-f7a2-416a-ba35-952b1c2ee1af,if=none,id=drive-virtio-disk0,format=qcow2,cache=none:
> Gluster connection failed for server=192.168.100.110 port=24007
> volume=gv0 image=32fa4aab-f7a2-416a-ba35-952b1c2ee1af transport=tcp
> 2015-07-06T06:26:10.022847Z qemu-kvm: -drive
> file=gluster+tcp://192.168.100.110:24007/gv0/32fa4aab-f7a2-416a-ba35-952b1c2ee1af,if=none,id=drive-virtio-disk0,format=qcow2,cache=none:
> could not open disk image
> gluster+tcp://192.168.100.110:24007/gv0/32fa4aab-f7a2-416a-ba35-952b1c2ee1af:
> Input/output error
> 2015-07-06 06:26:10.132+0000: shutting down
> 
> I checked that 192.168.100.110:24007/gv0/32fa4aab-f7a2-416a-ba35-952b1c2ee1af
> exists and is readable by the host mounting it.
> 
> I would appreciate any pointers. Thanks.
> 
> --
> --sazli
