Hi, 

I have gotten so close to having Ceph working in my cloud, but I have reached a 
roadblock. Any help would be greatly appreciated. I know this is not strictly a 
CloudStack issue at this point, but I also know several ACS users have tried 
Ceph. 

I receive the following error when trying to get KVM to run a VM with an RBD 
volume: 

Libvirtd.log: 

2013-10-16 22:05:15.516+0000: 9814: error : qemuProcessReadLogOutput:1477 : internal error Process exited while reading console log output: 
char device redirected to /dev/pts/3 
kvm: -drive file=rbd:libvirt-pool/new-libvirt-image:id=libvirt:key=+F5ScBQlLhAAYCH8qhGEh/gjKW+NpziAlA==:auth_supported=cephx\;none:mon_host=10.0.1.83\:6789,if=none,id=drive-ide0-0-1: error connecting 
kvm: -drive file=rbd:libvirt-pool/new-libvirt-image:id=libvirt:key=+F5ScBQlLhAAYCH8qhGEh/gjKW+NpziAlA==:auth_supported=cephx\;none:mon_host=10.0.1.83\:6789,if=none,id=drive-ide0-0-1: could not open disk image rbd:libvirt-pool/new-libvirt-image:id=libvirt:key=+F5ScBQlLhAAYCH8qhGEh/gjKW+NpziAlA==:auth_supported=cephx\;none:mon_host=10.0.1.83\:6789: Invalid argument 
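
For reference, my understanding is that a -drive line like the one above is 
generated from disk XML along these lines (just a sketch with my pool, image, 
and monitor filled in; the secret UUID is a placeholder): 

<disk type='network' device='disk'>
  <driver name='qemu' type='raw'/>
  <!-- pool/image and monitor from my setup above -->
  <source protocol='rbd' name='libvirt-pool/new-libvirt-image'>
    <host name='10.0.1.83' port='6789'/>
  </source>
  <!-- cephx user "libvirt"; the uuid must match a virsh-defined secret (placeholder here) -->
  <auth username='libvirt'>
    <secret type='ceph' uuid='00000000-0000-0000-0000-000000000000'/>
  </auth>
  <!-- drive-ide0-0-1 in the log suggests the disk sits on the IDE bus -->
  <target dev='hdb' bus='ide'/>
</disk>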

Ceph Pool showing test volume exists: 

root@ubuntu-test-KVM-RBD:/opt# rbd -p libvirt-pool ls 
new-libvirt-image 
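
That listing was run as the admin user. To confirm the client.libvirt caps are 
actually enough, I believe the same check can be run as that user, roughly like 
this (the keyring path is a guess on my part): 

rbd --id libvirt --keyring /etc/ceph/ceph.client.libvirt.keyring -p libvirt-pool ls 
rbd --id libvirt --keyring /etc/ceph/ceph.client.libvirt.keyring info libvirt-pool/new-libvirt-image 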

Ceph Auth: 

client.libvirt 
key: AQBx+F5ScBQlLhAAYCH8qhGEh/gjKW+NpziAlA== 
caps: [mon] allow r 
caps: [osd] allow class-read object_prefix rbd_children, allow rwx pool=libvirt-pool 
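
As far as I understand from the Ceph docs, that key is handed to libvirt through 
a secret, roughly like this (secret.xml is just an illustrative file name, and 
the UUID comes from the secret-define output): 

cat > secret.xml <<EOF
<secret ephemeral='no' private='no'>
  <usage type='ceph'>
    <name>client.libvirt secret</name>
  </usage>
</secret>
EOF

virsh secret-define secret.xml          # prints the new secret's UUID
virsh secret-set-value --secret <uuid-printed-above> \
  --base64 AQBx+F5ScBQlLhAAYCH8qhGEh/gjKW+NpziAlA==   # key from ceph auth above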

KVM Drive Support: 

root@ubuntu-test-KVM-RBD:/opt# kvm --drive format=? 
Supported formats: vvfat vpc vmdk vdi sheepdog rbd raw host_cdrom host_floppy host_device file qed qcow2 qcow parallels nbd dmg tftp ftps ftp https http cow cloop bochs blkverify blkdebug 
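
So rbd support is compiled in. For what it's worth, I believe the same rbd URI 
can be exercised outside of libvirt with qemu-img, something like this (key 
copied from the ceph auth output above, quoted so the shell leaves the 
backslashes alone): 

qemu-img info 'rbd:libvirt-pool/new-libvirt-image:id=libvirt:key=AQBx+F5ScBQlLhAAYCH8qhGEh/gjKW+NpziAlA==:auth_supported=cephx\;none:mon_host=10.0.1.83\:6789'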

Thank you if anyone can help :) 


Kelcey Damage | Infrastructure Systems Architect 
Strategy | Automation | Cloud Computing | Technology Development 

Backbone Technology, Inc 
604-331-1152 ext. 114 


