Please help. I'm hitting recurring errors after a VM is deployed (PROLOG) and then powered on (BOOT). I suspect something is going wrong at the BOOT phase, because as soon as I *reboot* the VM I get "Boot failed: could not read the boot disk".
Upon further inspection, the libvirt qemu log files for the affected VM(s) show "block I/O error in device" messages, as seen below:

[root@shelf3-node2 ~]# tail -f /var/log/libvirt/qemu/one-41.log
2014-06-18 15:47:22.708+0000: starting up
LC_ALL=C PATH=/sbin:/usr/sbin:/bin:/usr/bin HOME=/root USER=root LOGNAME=root \
QEMU_AUDIO_DRV=none /usr/libexec/qemu-kvm -name one-41 -S -M rhel6.5.0 \
  -enable-kvm -m 512 -realtime mlock=off -smp 1,sockets=1,cores=1,threads=1 \
  -uuid 0c282fe0-a689-c92d-1be7-198bddb4e0c0 -nodefconfig -nodefaults \
  -chardev socket,id=charmonitor,path=/var/lib/libvirt/qemu/one-41.monitor,server,nowait \
  -mon chardev=charmonitor,id=monitor,mode=control -rtc base=utc -no-shutdown \
  -device piix3-usb-uhci,id=usb,bus=pci.0,addr=0x1.0x2 \
  -drive file=/var/lib/one//datastores/0/41/disk.0,if=none,id=drive-virtio-disk0,format=qcow2,cache=none,aio=native \
  -device virtio-blk-pci,scsi=off,bus=pci.0,addr=0x4,drive=drive-virtio-disk0,id=virtio-disk0,bootindex=1 \
  -drive file=/var/lib/one//datastores/0/41/disk.1,if=none,media=cdrom,id=drive-ide0-0-0,readonly=on,format=raw \
  -device ide-drive,bus=ide.0,unit=0,drive=drive-ide0-0-0,id=ide0-0-0 \
  -netdev tap,fd=29,id=hostnet0 \
  -device rtl8139,netdev=hostnet0,id=net0,mac=02:00:80:7b:00:35,bus=pci.0,addr=0x3 \
  -vnc 0.0.0.0:41 -vga cirrus \
  -device virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x5
block I/O error in device 'drive-virtio-disk0': Invalid argument (22)
block I/O error in device 'drive-virtio-disk0': Invalid argument (22)
block I/O error in device 'drive-virtio-disk0': Invalid argument (22)
^C

[root@shelf3-node2 ~]# tail -f /var/log/libvirt/qemu/one-40.log
block I/O error in device 'drive-ide0-0-0': Invalid argument (22)
block I/O error in device 'drive-ide0-0-0': Invalid argument (22)
block I/O error in device 'drive-ide0-0-0': Invalid argument (22)
[... the same line repeats many more times ...]
^C

Any idea why my configuration deploys the VM(s) to the specified hosts in my cluster just fine, but then a reboot produces these block I/O errors?
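One detail that might matter: both affected drives are opened with cache=none (and the virtio disk with aio=native), and as far as I understand cache=none makes qemu open the image with O_DIRECT, which FUSE-based mounts don't always accept; a rejected O_DIRECT open/write surfaces as EINVAL, i.e. exactly "Invalid argument (22)". A quick way to check whether the filesystem under a datastore accepts O_DIRECT (the datastore path below is just my guess at the relevant mount point, and DS defaults to /tmp so the snippet runs anywhere):

```shell
#!/bin/sh
# Probe a directory for O_DIRECT support by writing one aligned 4 KiB
# block with dd's oflag=direct. On a filesystem without O_DIRECT support
# the write fails with EINVAL, matching the qemu log symptom.
DS="${DS:-/tmp}"   # on the real host: e.g. /var/lib/one/datastores/0
if dd if=/dev/zero of="$DS/odirect-test" bs=4096 count=1 oflag=direct 2>/dev/null; then
    echo "O_DIRECT supported on $DS"
else
    echo "O_DIRECT NOT supported on $DS (would explain the EINVAL errors)"
fi
rm -f "$DS/odirect-test"
```

If this fails on the gluster-backed datastore but succeeds on local disk, that would point at the mount rather than at the images themselves.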
I don't suspect my gluster config:

Volume Name: GLUSTERFS-ONE
Type: Distribute
Volume ID: d40e6dcb-435d-47d5-b575-edfe1eec5b62
Status: Started
Number of Bricks: 2
Transport-type: tcp
Bricks:
Brick1: storage1:/BRICK0/ONE
Brick2: storage3:/BRICK0/ONE
Options Reconfigured:
storage.owner-gid: 9869
storage.owner-uid: 9869
server.allow-insecure: on

or the datastore config:

[oneadmin@shelf3-node1 ~]$ onedatastore list
  ID NAME           SIZE   AVAIL CLUSTER     IMAGES TYPE DS  TM
   0 system         2T     92%   TESTCLUSTER      0 sys  -   shared
   1 default        131T   73%   TESTCLUSTER      6 img  fs  shared
   2 files          59.1G  94%   TESTCLUSTER      0 fil  fs  ssh
 103 GlusterFS KVM  131T   73%   TESTCLUSTER      3 img  fs  shared

Any ideas? This has stumped me for a couple of days now; everything else seems to function as expected.

-- W
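One thing I'm considering trying: OpenNebula lets you override the qemu cache and aio settings per disk from the VM template (the CACHE and IO attributes of the DISK section map to qemu's cache= and aio= drive options). A sketch of what I mean, with a placeholder image name:

```
# Hypothetical DISK section for the VM template.
# IMAGE is an example name; CACHE/IO are the documented OpenNebula
# DISK attributes that end up as cache=/aio= on the qemu command line.
DISK = [
  IMAGE = "my-boot-image",      # placeholder; not a real image in my setup
  CACHE = "writethrough",       # avoids cache=none, i.e. avoids O_DIRECT
  IO    = "threads"             # avoids aio=native, which expects O_DIRECT
]
```

If the FUSE mount turns out to be the problem, switching away from cache=none/aio=native this way might at least confirm the diagnosis, even if it isn't the long-term fix.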
_______________________________________________
Users mailing list
Users@lists.opennebula.org
http://lists.opennebula.org/listinfo.cgi/users-opennebula.org