Hi,
I am facing an issue while adding Ceph RBD storage to CloudStack. It is failing
with a "Failed to add datasource" error.
I have followed all the available instructions on CloudStack, KVM and Ceph
storage integration.
CentOS 6.5 is used as the KVM node here. I have read in a blog that libvirt
needs to be compiled from source on CentOS KVM nodes to make Ceph storage work
with CloudStack.
Hence I cloned the libvirt source with git and upgraded the libvirt and QEMU
versions.
(Commands used --> git clone #####, ./autogen.sh, make, make install).
It seems libvirt on CentOS 6.5 KVM nodes needs to be built with RBD (driver)
support, which has to be specified as a parameter while compiling libvirt.
Can anyone give me some pointers on how to rectify this problem?
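From what I understand, the RBD storage pool backend is not built by default and has to be enabled at configure time. This is roughly what I am planning to retry on the KVM node (the --with-storage-rbd option is from the libvirt configure help; the exact -devel package names are my assumption for the CentOS 6.5 Ceph repos):

  yum install -y librbd1-devel librados2-devel
  cd libvirt
  ./autogen.sh --with-storage-rbd --prefix=/usr --localstatedir=/var --sysconfdir=/etc
  make
  make install
  service libvirtd restart

The prefix/localstatedir/sysconfdir options are only there so the compiled build lands in the same paths as the packaged one instead of /usr/local; please correct me if that is the wrong approach.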
Management Server Exception:
2014-06-20 09:58:03,757 DEBUG [agent.transport.Request] (catalina-exec-6:null)
Seq 1-1602164611: Received: { Ans: , MgmtId: 52234925782, via: 1, Ver: v1,
Flags: 10, { Answer } }
2014-06-20 09:58:03,757 DEBUG [agent.manager.AgentManagerImpl]
(catalina-exec-6:null) Details from executing class
com.cloud.agent.api.ModifyStoragePoolCommand: java.lang.NullPointerException
    at com.cloud.hypervisor.kvm.storage.LibvirtStorageAdaptor.createStoragePool(LibvirtStorageAdaptor.java:531)
    at com.cloud.hypervisor.kvm.storage.KVMStoragePoolManager.createStoragePool(KVMStoragePoolManager.java:185)
    at com.cloud.hypervisor.kvm.storage.KVMStoragePoolManager.createStoragePool(KVMStoragePoolManager.java:177)
I even tried defining a pool manually with virsh, but that also fails with
"internal error missing backend for pool type 8".
This indicates that my libvirt build does not have RBD support.
Virsh exception on manual pool definition
<pool type='rbd'>
  <name>c574980a-19fc-37e9-b6e3-788a7439575d</name>
  <uuid>c574980a-19fc-37e9-b6e3-788a7439575d</uuid>
  <source>
    <host name='192.168.153.25' port='6789'/>
    <name>cloudstack</name>
    <auth username='cloudstack' type='ceph'>
      <secret uuid='c574980a-19fc-37e9-b6e3-788a7439575d'/>
    </auth>
  </source>
</pool>
[root@kvm-ovs-002 agent]# virsh pool-define /tmp/rbd.xml
error: Failed to define pool from /tmp/rbd.xml
error: internal error missing backend for pool type 8
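As far as I can tell, pool type 8 is the rbd pool type in libvirt, so this error should mean the RBD backend is simply not compiled into the libvirtd I am running. This is how I am trying to verify that (assuming a non-modular build, where the storage backends are linked into the libvirtd binary itself):

  which libvirtd
  ps -ef | grep libvirtd        # is the running daemon the newly compiled binary or the old /usr/sbin/libvirtd from the RPM?
  libvirtd --version
  ldd $(which libvirtd) | grep -E 'librbd|librados'   # should list the Ceph libraries if RBD support was built in

If make install put the new libvirtd under /usr/local/sbin while the init script still starts the old RPM binary, that might explain the mismatch, but I am not sure.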
The Ceph storage itself is working fine, as confirmed by the output below.
Ceph output
[root@kvm-ovs-002 ~]# ceph auth list
installed auth entries:
osd.0
        key: AQCwTKFTSOudGhAAsWAMRFuCqHjvTQKEV0zjvw==
        caps: [mon] allow profile osd
        caps: [osd] allow *
client.admin
        key: AQBRQqFTWOjBKhAA2s7KnL1z3h7PuKeqXMd7SA==
        caps: [mds] allow
        caps: [mon] allow *
        caps: [osd] allow *
client.bootstrap-mds
        key: AQBSQqFTYKm6CRAAjjZotpN68yJaOjS2QTKzKg==
        caps: [mon] allow profile bootstrap-mds
client.bootstrap-osd
        key: AQBRQqFT6GzXNxAA4ZTmVX6LIu0k4Sk7bh2Ifg==
        caps: [mon] allow profile bootstrap-osd
client.cloudstack
        key: AQBNTaFTeCuwFRAA0NE7CCm9rwuq3ngLcGEysQ==
        caps: [mon] allow r
        caps: [osd] allow rwx pool=cloudstack
[root@ceph ~]# ceph status
    cluster 9c1be0b6-f600-45d7-ae0f-df7bcd3a82cd
     health HEALTH_WARN 292 pgs degraded; 292 pgs stale; 292 pgs stuck stale; 292 pgs stuck unclean; 1/1 in osds are down; clock skew detected on mon.kvm-ovs-002
     monmap e1: 2 mons at {ceph=192.168.153.25:6789/0,kvm-ovs-002=192.168.160.3:6789/0}, election epoch 10, quorum 0,1 ceph,kvm-ovs-002
     osdmap e8: 1 osds: 0 up, 1 in
      pgmap v577: 292 pgs, 4 pools, 0 bytes data, 0 objects
            26036 MB used, 824 GB / 895 GB avail
                 292 stale+active+degraded
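One more check I can do from the KVM host, independent of libvirt, is to talk to the pool directly with the cloudstack user (monitor address, pool name and key are the ones from my setup above; "rbdtest" is just a throwaway image name):

  rbd -m 192.168.153.25 --id cloudstack --key AQBNTaFTeCuwFRAA0NE7CCm9rwuq3ngLcGEysQ== -p cloudstack ls
  rbd -m 192.168.153.25 --id cloudstack --key AQBNTaFTeCuwFRAA0NE7CCm9rwuq3ngLcGEysQ== -p cloudstack create rbdtest --size 128

If those commands work, I assume the problem is purely on the libvirt side.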
[root@kvm-ovs-002 agent]# cat /etc/redhat-release
CentOS release 6.5 (Final)
The compiled libvirt shows the upgraded version in virsh, but the old RPM
packages are still installed on the KVM host.
Can anyone give me a hint on whether I should clean up these old RPMs?
Virsh version
[root@kvm-ovs-002 agent]# virsh version
Compiled against library: libvirt 1.2.6
Using library: libvirt 1.2.6
Using API: QEMU 1.2.6
Running hypervisor: QEMU 0.12.1
[root@kvm-ovs-002 agent]# rpm -qa | grep qemu
qemu-kvm-tools-0.12.1.2-2.415.el6_5.10.x86_64
qemu-kvm-0.12.1.2-2.415.el6.3ceph.x86_64
gpxe-roms-qemu-0.9.7-6.10.el6.noarch
qemu-kvm-0.12.1.2-2.415.el6_5.10.x86_64
qemu-img-0.12.1.2-2.415.el6.3ceph.x86_64
qemu-img-0.12.1.2-2.415.el6_5.10.x86_64
qemu-kvm-tools-0.12.1.2-2.415.el6.3ceph.x86_64
qemu-guest-agent-0.12.1.2-2.415.el6_5.10.x86_64
[root@kvm-ovs-002 agent]# rpm -qa | grep libvirt
libvirt-python-0.10.2-29.el6_5.9.x86_64
libvirt-java-0.4.9-1.el6.noarch
libvirt-cim-0.6.1-9.el6_5.1.x86_64
libvirt-client-0.10.2-29.el6_5.9.x86_64
libvirt-devel-0.10.2-29.el6_5.9.x86_64
fence-virtd-libvirt-0.2.3-15.el6.x86_64
libvirt-0.10.2-29.el6_5.9.x86_64
libvirt-snmp-0.0.2-4.el6.x86_64
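My own guess, and please correct me, is that the old RPMs themselves are harmless as long as the compiled libvirtd and the Ceph-enabled QEMU binaries are the ones actually in use, so before removing anything I am checking what is being picked up (the /usr/libexec/qemu-kvm path is the standard CentOS 6 location):

  which qemu-img virsh
  /usr/libexec/qemu-kvm --version
  qemu-img --help 2>&1 | grep -i rbd   # the *.el6.3ceph qemu-img should list rbd among the supported formats

Would it be safe to rpm -e --nodeps the stock libvirt-* packages afterwards, or does the CloudStack agent depend on them?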
* Attached all the log files from the management and KVM servers.
Thanks,
Praveen Kumar Buravilli