Although the attach/detach problem has been solved, I found that the driver name issue still causes problems when restoring a saved VM with a disk attached.
For example, once saved, the checkpoint file contains:

<disk type='file' device='disk'>
  <driver name='file' type='qcow2'/>
  <source file='/vrstorm/cloud/PrivateAccounts/74d37709b07be4bbae228dfc273e4339/volumes/vol-126'/>
  <target dev='vdf' bus='virtio'/>
  <alias name='virtio-disk5'/>
  <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
</disk>

and then the restore fails:

[cloudadmin@test2 images]$ virsh restore checkpoint
error: Failed to restore domain from checkpoint
error: internal error unsupported driver name 'file' for disk '/vrstorm/cloud/PrivateAccounts/74d37709b07be4bbae228dfc273e4339/volumes/vol-126'

If I simply edit the checkpoint file to change 'file' to 'qemu':

<driver name='qemu' type='qcow2'/>

then it works:

[cloudadmin@test2 images]$ virsh restore checkpoint
Domain restored from checkpoint

So there is indeed a problem with the driver name. Maybe I should start a new thread on it?

Shi

On Fri, Mar 4, 2011 at 11:44 AM, Shi Jin <jinzish...@gmail.com> wrote:
> Finally, everything works if I simply change my template to use
> /usr/libexec/qemu-kvm instead of /usr/bin/kvm, which is a symbolic link
> to it.
> Not sure why it matters, but that does solve all the mysteries.
>
> Shi
>
> On Fri, Mar 4, 2011 at 9:56 AM, Shi Jin <jinzish...@gmail.com> wrote:
>
>>> It seems that your qemu does not support monitor command 'drive_add'.
>>> What's the version of your qemu?
>>
>> Thank you. Here is my version:
>>
>> [cloudadmin@test2 ~]$ /usr/libexec/qemu-kvm -version
>> QEMU PC emulator version 0.12.1 (qemu-kvm-0.12.1.2), Copyright (c)
>> 2003-2008 Fabrice Bellard
>>
>> I finally think it is a permission issue. If I run commands as root so
>> that kvm runs as the default qemu user, both attach-disk and
>> detach-disk work, as in your case.
>>
>> However, if I set it to run as another user, such as cloudadmin,
>> attach-disk works but detach-disk does not.
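As a side note, the manual edit of the save image can be scripted. Since 'file' and 'qemu' are the same length, the substitution in the embedded XML header is byte-for-byte and leaves any data after the header untouched. A minimal sketch, using a stand-in file in place of the real save image (always keep a backup, since the real image also carries the guest's memory state):

```shell
# Stand-in for the real save image; substitute the actual path on
# the host, and back it up before touching it.
printf "<driver name='file' type='qcow2'/>\n" > checkpoint
cp checkpoint checkpoint.bak

# 'file' -> 'qemu' is a same-length substitution, so the offsets of
# anything following the XML header are preserved.
sed -i "s/driver name='file'/driver name='qemu'/" checkpoint

grep "driver name" checkpoint
# On the real image, follow up with: virsh restore checkpoint
```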
>> My libvirtd.conf has:
>>
>> unix_sock_group = "cloudadmin"
>> unix_sock_rw_perms = "0770"
>> auth_unix_ro = "none"
>> auth_unix_rw = "none"
>> log_level = 3
>> log_outputs="3:file:/var/log/libvirt/libvirtd.log"
>>
>> and qemu.conf has:
>>
>> user = "cloudadmin"
>> group = "cloudadmin"
>> dynamic_ownership = 0
>>
>> All images are owned by cloudadmin:cloudadmin.
>>
>> Is there a problem with this setup?
>>
>> Thank you very much.
>> Shi
>>
>>> > 15:31:52.902: debug : virEventRunOnce:595 : Poll got 1 event
>>> > 15:31:52.902: debug : virEventDispatchTimeouts:405 : Dispatch 3
>>> > 15:31:52.902: debug : qemuMonitorAddDevice:1878 : mon=0x7f3628341370
>>> > device=virtio-blk-pci,bus=pci.0,addr=0x6,drive=drive-virtio-disk1,id=virtio-disk1
>>>
>>> When 'drive_add' fails, we do not call qemuMonitorAddDevice(). But it
>>> is called.
>>>
>>> > 15:31:52.902: debug : virEventDispatchHandles:450 : Dispatch 8
>>> > 15:31:52.902: debug : qemuMonitorCommandWithHandler:230 : Send command
>>> > 'device_add
>>> > virtio-blk-pci,bus=pci.0,addr=0x6,drive=drive-virtio-disk1,id=virtio-disk1'
>>> > for write with FD -1
>>>
>>> What is the reply to the 'device_add' command? Can you provide it? I
>>> think it should fail.
>>>
>>> Thanks.
>>> Wen Congyang
>>>
>>> > Thank you very much.
>>> >
>>> > Shi
>>> >
>>> > On Thu, Mar 3, 2011 at 10:12 PM, Wen Congyang <we...@cn.fujitsu.com> wrote:
>>> >
>>> >> At 03/04/2011 01:00 PM, Shi Jin wrote:
>>> >>>>
>>> >>>> <disk type='file' device='disk'>
>>> >>>>   <driver name='qemu' type='qcow2' cache='none'/>
>>> >>>>   <source file='/var/lib/libvirt/images/rhel6rc_64.img'/>
>>> >>>>   <target dev='hda' bus='ide'/>
>>> >>>>   <alias name='ide0-0-0'/>
>>> >>>>   <address type='drive' controller='0' bus='0' unit='0'/>
>>> >>>> </disk>
>>> >>>> <disk type='file' device='cdrom'>
>>> >>>>   <driver name='qemu' type='raw'/>
>>> >>>>   <source file='/var/lib/libvirt/images/test.iso'/>
>>> >>>>   <target dev='hdc' bus='ide'/>
>>> >>>>   <readonly/>
>>> >>>>   <alias name='ide0-1-0'/>
>>> >>>>   <address type='drive' controller='0' bus='1' unit='0'/>
>>> >>>> </disk>
>>> >>>> <disk type='file' device='disk'>
>>> >>>>   <driver name='file' type='qcow2'/>
>>> >>>>   <source file='/var/lib/libvirt/images/test3.img'/>
>>> >>>>   <target dev='vdb' bus='virtio'/>
>>> >>>>   <alias name='virtio-disk1'/>
>>> >>>>   <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
>>> >>>> </disk>
>>> >>>> <controller type='ide' index='0'>
>>> >>>>   <alias name='ide0'/>
>>> >>>>   <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x1'/>
>>> >>>> </controller>
>>> >>>>
>>> >>> Thank you very much. It is exactly the same output as mine,
>>> >>> particularly
>>> >>> <driver name='file' type='qcow2'/>
>>> >>> I thought it had to be name='qemu' to detach properly, but since
>>> >>> you didn't have a problem, I am very lost as to why mine didn't
>>> >>> work.
>>> >>>
>>> >>> My libvirtd.log shows (with debugging turned on):
>>> >>>
>>> >>> 14:43:18.965: debug : qemuMonitorCommandWithHandler:235 : Receive command
>>> >>> reply ret=0 errno=0 33 bytes 'Device 'virtio-disk1' not found^M
>>> >>> '
>>> >>> 14:43:18.965: debug : virEventDispatchTimeouts:405 : Dispatch 3
>>> >>> 14:43:18.965: debug : virEventDispatchHandles:450 : Dispatch 8
>>> >>> 14:43:18.965: error : qemuMonitorTextDelDevice:2314 : operation failed:
>>> >>> detaching virtio-disk1 device failed: Device 'virtio-disk1' not found^M
>>> >>>
>>> >>> Do you know what the "device not found" error means?
>>> >>
>>> >> It seems attaching virtio-disk1 failed.
>>> >> Can you provide the log from when you attached virtio-disk1?
>>> >>
>>> >> Thanks.
>>> >> Wen Congyang
>>> >>
>>> >>> Thanks.
>>> >>>
>>> >>> Shi
>>>
>> --
>> Shi Jin, Ph.D.
>
> --
> Shi Jin, Ph.D.

--
Shi Jin, Ph.D.
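[The exchange above hinges on whether 'drive_add' actually succeeded before 'device_add' ran, since a 'Device ... not found' reply to device_add usually means the preceding drive_add failed. A quick way to correlate the two halves of the hotplug is to grep the debug log for both monitor commands near the failing detach; a sketch, assuming the log path configured in libvirtd.conf:]

```shell
# Log path as set by log_outputs in libvirtd.conf; override via LOG=...
LOG=${LOG:-/var/log/libvirt/libvirtd.log}

# drive_add must succeed before device_add can find the drive, so a
# failed drive_add should appear just before the device_add error.
grep -nE "drive_add|device_add|virtio-disk1" "$LOG" 2>/dev/null \
  | tail -n 20 || echo "no matching entries in $LOG"
```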
--
libvir-list mailing list
libvir-list@redhat.com
https://www.redhat.com/mailman/listinfo/libvir-list