On 01/06/2014 06:34 PM, William Kwan wrote:

1. What console type are you using? Spice or VNC?
I actually tried all of them: Spice and VNC on Windows, and over HTML5 as well.

2. What console option are you using? (e.g. Spice HTML5, Native Client,
noVNC)?
Spice, USB disabled, 1 monitor with Single PCI checked

3. You mentioned that you disabled iptables. I'm not as familiar with
CentOS as I am with Fedora, but did CentOS move to firewalld? If so, you may
need to stop firewalld instead for testing.
Still have iptables, and I turned it off (a quick way to double-check is below).
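A minimal check on each virtualization host looks like this (a sketch; on CentOS 6.5 only iptables applies, there is no firewalld to stop on EL6):

    # on onode1 / onode2
    service iptables status      # should report the firewall is not running
    iptables -L -n               # chains should be empty with policy ACCEPT
    chkconfig --list iptables    # make sure it stays off across reboots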

I can't tell what the VM is actually doing, even though the CPU gauge
shows it is using a large percentage of the allocated CPU power.


Could it be a VM stuck at the boot screen ("DOS mode"), taking 100% CPU, and simply showing a black screen?
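One way to tell the two cases apart is to ask libvirt on the host what the guest is actually displaying (a sketch; on a vdsm-managed host `virsh` may prompt for the SASL credentials vdsm configured, while the read-only `-r` queries work without them):

    virsh -r list                                  # confirm testvm04 is running
    virsh -r dominfo testvm04                      # state and accumulated CPU time
    virsh screenshot testvm04 /tmp/testvm04.ppm    # dump whatever the guest is showing

If the screenshot shows a boot menu or an installer waiting for input, the console path is fine and the problem is inside the guest.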



On Saturday, January 4, 2014 9:33 AM, Nicholas Kesick
<cybertimber2...@hotmail.com> wrote:
________________________________
 > From: pota...@yahoo.com
 > Date: Fri, 3 Jan 2014 16:45:38 -0500
 > To: users@ovirt.org
 > Subject: Re: [Users] Almost there... I can't bring up a VM
 >
 >
 > Hi. I need some help troubleshooting this. I can't find a log that
 > gives me any hint on how to move forward with it.
 >
 > Any help is appreciated.
 >
 > Will
 >
Will,
1. What console type are you using? Spice or VNC?
2. What console option are you using? (e.g. Spice HTML5, Native Client,
noVNC)?
3. You mentioned that you disabled iptables. I'm not as familiar with
CentOS as I am with Fedora, but did CentOS move to firewalld? If so, you may
need to stop firewalld instead for testing.

I would recommend trying the native client, Virt Viewer, instead of
Spice HTML5/noVNC to verify that things are working.
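For reference, the native-client route looks roughly like this (a sketch; the console.vv filename is simply whatever your browser saves when you click Console in the portal):

    # on the machine you connect from
    yum install virt-viewer      # Fedora/RHEL; Windows has a virt-viewer installer
    remote-viewer console.vv     # the .vv file carries host, port, TLS and ticket details

That takes the HTML5/websocket proxy pieces out of the equation entirely.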

- Nick

 >
 > Hi,
 >
 > Running oVirt 3.3.2-1.el6 on CentOS 6.5
 >
 > I have oVirt installed on a test host (ovirt1) and two test
 > virtualization hosts (onode1, onode2) using GlusterFS. GlusterFS is
 > available on both onode1 and onode2. The storage domain and networks are
 > defined, and the ISO is uploaded to the ISO domain. I have been trying to
 > create a VM, but all I get is a black Spice console, and I can't tell
 > what's going on. I also turned off all iptables.
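A couple of host-side checks that may help narrow this down (a sketch; the volume name kvm1 is taken from the gluster mount path in the qemu log below, and 5900/5901 are the SPICE ports it shows):

    ss -tlnp | grep 590             # qemu should be listening on the SPICE plain/TLS ports
    gluster volume status kvm1      # all bricks on onode1/onode2 should show Online
    gluster volume heal kvm1 info   # any files pending heal?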
 >
 > I looked at /var/log/libvirt/qemu/testvm04.log. I got only the
 > startup line and a few lines for Spice. I also searched
 > engine.log for "ERROR".
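The other logs worth a look are on the host and the engine (a sketch of where they normally live on oVirt 3.3):

    # on the host running the VM
    grep -i error /var/log/vdsm/vdsm.log | tail -20
    tail -50 /var/log/libvirt/qemu/testvm04.log
    # on the engine machine
    grep -iE 'error|warn' /var/log/ovirt-engine/engine.log | tail -40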
 >
 > Can anyone shed some light on this? I feel like I'm almost ready to
 > use oVirt instead of VMware for the next project.
 >
 > ERROR lines in engine.log:
 > --------------------------------------
 > 2014-01-02 12:31:40,893 ERROR [org.ovirt.engine.core.vdsbroker.gluster.GlusterTasksListVDSCommand] (DefaultQuartzScheduler_Worker-76) Command GlusterTasksListVDS execution failed. Exception: VDSNetworkException: org.apache.xmlrpc.XmlRpcException: <type 'exceptions.Exception'>:method "glusterTasksList" is not supported
 > [org.ovirt.engine.core.vdsbroker.gluster.GlusterTasksListVDSCommand] (DefaultQuartzScheduler_Worker-76) FINISH, GlusterTasksListVDSCommand, log id: 3fa83c4f
 > 2014-01-02 12:31:40,894 ERROR [org.ovirt.engine.core.utils.timer.SchedulerUtilQuartzImpl] (DefaultQuartzScheduler_Worker-76) Failed to invoke scheduled method gluster_async_task_poll_event:
 > java.lang.reflect.InvocationTargetException
 >     at sun.reflect.GeneratedMethodAccessor80.invoke(Unknown Source) [:1.7.0_45]
 >     at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) [rt.jar:1.7.0_45]
 >     at java.lang.reflect.Method.invoke(Method.java:606) [rt.jar:1.7.0_45]
 >     at org.ovirt.engine.core.utils.timer.JobWrapper.execute(JobWrapper.java:60) [scheduler.jar:]
 >     at org.quartz.core.JobRunShell.run(JobRunShell.java:213) [quartz.jar:]
 >     at org.quartz.simpl.SimpleThreadPool$WorkerThread.run(SimpleThreadPool.java:557) [quartz.jar:]
 > Caused by: org.ovirt.engine.core.common.errors.VdcBLLException: VdcBLLException: org.ovirt.engine.core.vdsbroker.vdsbroker.VDSNetworkException: org.apache.xmlrpc.XmlRpcException: <type 'exceptions.Exception'>:method "glusterTasksList" is not supported (Failed with error VDS_NETWORK_ERROR and code 5022)
 >     at org.ovirt.engine.core.bll.VdsHandler.handleVdsResult(VdsHandler.java:122) [bll.jar:]
 >     at org.ovirt.engine.core.bll.VDSBrokerFrontendImpl.RunVdsCommand(VDSBrokerFrontendImpl.java:33) [bll.jar:]
 >     at org.ovirt.engine.core.bll.gluster.tasks.GlusterTasksService.runVdsCommand(GlusterTasksService.java:60) [bll.jar:]
 >     at org.ovirt.engine.core.bll.gluster.tasks.GlusterTasksService.getTaskListForCluster(GlusterTasksService.java:28) [bll.jar:]
 >     at org.ovirt.engine.core.bll.gluster.GlusterTasksSyncJob.updateTasksInCluster(GlusterTasksSyncJob.java:67) [bll.jar:]
 >     at org.ovirt.engine.core.bll.gluster.GlusterTasksSyncJob.updateGlusterAsyncTasks(GlusterTasksSyncJob.java:56) [bll.jar:]
 >     ... 6 more
 >
 > [org.ovirt.engine.core.bll.SetVmTicketCommand]
 > (ajp--127.0.0.1-8702-8) [1d0c71de] Running command: SetVmTicketCommand
 > internal: false. Entities affected : ID:
 > b4867838-561e-491e-bb00-5c2ce3828b83 Type: VM
 > [org.ovirt.engine.core.vdsbroker.vdsbroker.SetVmTicketVDSCommand]
 > (ajp--127.0.0.1-8702-8) [1d0c71de] START,
 > SetVmTicketVDSCommand(HostName = onode1., HostId =
 > 5145362f-0f97-4edf-a23f-592ff2c74d3f,
 > vmId=b4867838-561e-491e-bb00-5c2ce3828b83, ticket=IpVcENDWEgVN,
 > validTime=120, userName=admin@internal,
 > userId=fdfc627c-d875-11e0-90f0-83df133b58cc), log id: 36bcf5a3
 > [org.ovirt.engine.core.vdsbroker.vdsbroker.SetVmTicketVDSCommand]
 > (ajp--127.0.0.1-8702-8) [1d0c71de] FINISH, SetVmTicketVDSCommand, log
 > id: 36bcf5a3
 >
 > [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
 > (ajp--127.0.0.1-8702-8) [1d0c71de] Correlation ID: 1d0c71de, Call
 > Stack: null, Custom Event ID: -1, Message: user admin@internal
 > initiated console session for VM testvm04
 >
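Side note on the glusterTasksList error above: that exception comes from the engine polling a gluster verb that vdsm on the host does not expose, which usually points at a missing or older vdsm-gluster on the nodes rather than at the console problem itself. That is an assumption, but it is easy to rule out with something like:

    # on onode1 / onode2
    rpm -q vdsm vdsm-gluster
    vdsClient -s 0 getVdsCaps | grep -i gluster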
 > qemu log:
 > ------------
 > LC_ALL=C PATH=/usr/local/sbin:/usr/local/bin:/usr/bin:/usr/sbin:/sbin:/bin QEMU_AUDIO_DRV=spice
 > /usr/libexec/qemu-kvm -name testvm04 -S -M rhel6.4.0 -cpu Westmere -enable-kvm -m 4096 -realtime mlock=off
 > -smp 2,sockets=2,cores=1,threads=1 -uuid b4867838-561e-491e-bb00-5c2ce3828b83
 > -smbios type=1,manufacturer=oVirt,product=oVirt Node,version=6-5.el6.centos.11.2,serial=42294BD4-962B-0F5E-9F28-0A96F491CB56,uuid=b4867838-561e-491e-bb00-5c2ce3828b83
 > -nodefconfig -nodefaults
 > -chardev socket,id=charmonitor,path=/var/lib/libvirt/qemu/testvm04.monitor,server,nowait
 > -mon chardev=charmonitor,id=monitor,mode=control
 > -rtc base=2014-01-02T17:11:07,driftfix=slew -no-shutdown
 > -device piix3-usb-uhci,id=usb,bus=pci.0,addr=0x1.0x2
 > -device virtio-scsi-pci,id=scsi0,bus=pci.0,addr=0x4
 > -device virtio-serial-pci,id=virtio-serial0,bus=pci.0,addr=0x5
 > -drive file=/rhev/data-center/mnt/ovirt1:_data_iso/347d8614-f509-4887-b4ae-4dc4bbc30604/images/11111111-1111-1111-1111-111111111111/CentOS-6.5-x86_64-bin-DVD1.iso,if=none,media=cdrom,id=drive-ide0-1-0,readonly=on,format=raw,serial=
 > -device ide-drive,bus=ide.1,unit=0,drive=drive-ide0-1-0,id=ide0-1-0,bootindex=2
 > -drive file=/rhev/data-center/mnt/glusterSD/onode1:kvm1/83e1f2f9-40ab-4a01-bba5-87ac52ef791e/images/d652b29d-4030-4284-a7c9-2ec9442dda5c/84122d99-fe4e-45a3-94fa-28cdb23a4e0d,if=none,id=drive-virtio-disk0,format=raw,serial=d652b29d-4030-4284-a7c9-2ec9442dda5c,cache=none,werror=stop,rerror=stop,aio=threads
 > -device virtio-blk-pci,scsi=off,bus=pci.0,addr=0x6,drive=drive-virtio-disk0,id=virtio-disk0
 > -netdev tap,fd=26,id=hostnet0,vhost=on,vhostfd=27
 > -device virtio-net-pci,netdev=hostnet0,id=net0,mac=00:1a:4a:43:a1:4d,bus=pci.0,addr=0x3,bootindex=1
 > -chardev socket,id=charchannel0,path=/var/lib/libvirt/qemu/channels/b4867838-561e-491e-bb00-5c2ce3828b83.com.redhat.rhevm.vdsm,server,nowait
 > -device virtserialport,bus=virtio-serial0.0,nr=1,chardev=charchannel0,id=channel0,name=com.redhat.rhevm.vdsm
 > -chardev socket,id=charchannel1,path=/var/lib/libvirt/qemu/channels/b4867838-561e-491e-bb00-5c2ce3828b83.org.qemu.guest_agent.0,server,nowait
 > -device virtserialport,bus=virtio-serial0.0,nr=2,chardev=charchannel1,id=channel1,name=org.qemu.guest_agent.0
 > -chardev spicevmc,id=charchannel2,name=vdagent
 > -device virtserialport,bus=virtio-serial0.0,nr=3,chardev=charchannel2,id=channel2,name=com.redhat.spice.0
 > -chardev pty,id=charconsole0
 > -device virtconsole,chardev=charconsole0,id=console0
 > -spice port=5900,tls-port=5901,addr=0,x509-dir=/etc/pki/vdsm/libvirt-spice,tls-channel=main,tls-channel=display,tls-channel=inputs,tls-channel=cursor,tls-channel=playback,tls-channel=record,tls-channel=smartcard,tls-channel=usbredir,seamless-migration=on
 > -k en-us -vga qxl -global qxl-vga.ram_size=67108864 -global qxl-vga.vram_size=33554432
 > -device virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x7
 > char device redirected to /dev/pts/2
 >
 > ((null):20150): Spice-Warning **: reds.c:2718:reds_handle_read_link_done: spice channels 1 should be encrypted
 > main_channel_link: add main channel client
 > main_channel_handle_parsed: net test: latency 2.683000 ms, bitrate 31799267 bps (30.326144 Mbps)
 > ((null):20150): Spice-Warning **: reds.c:2718:reds_handle_read_link_done: spice channels 2 should be encrypted
 > ((null):20150): Spice-Warning **: reds.c:2718:reds_handle_read_link_done: spice channels 3 should be encrypted
 > ((null):20150): Spice-Warning **: reds.c:2718:reds_handle_read_link_done: spice channels 4 should be encrypted
 > inputs_connect: inputs channel client create
 > red_dispatcher_set_cursor_peer:
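The repeated "spice channels N should be encrypted" warnings mean the client opened some channels in plain text while the server (per the -spice tls-channel=... options above) expects them over TLS, which could explain a blank display even though the main channel connects. Two things worth checking (a sketch; SpiceSecureChannels is the engine-config key that drives those tls-channel options):

    # on the engine machine
    engine-config -g SpiceSecureChannels
    # from the client machine, confirm the TLS port on the host is reachable
    telnet onode1 5901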

 >
 >
 > _______________________________________________ Users mailing list
 > Users@ovirt.org
 > http://lists.ovirt.org/mailman/listinfo/users




_______________________________________________
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users

