Re: [Users] failed nested vm only with spice and not vnc

2014-02-10 Thread Gianluca Cecchi
On Sun, Feb 9, 2014 at 11:13 PM, Itamar Heim wrote:


> is there a bug for tracking this?

Not yet. What should the bug be filed against? spice, spice-protocol, or qemu-kvm itself?

Gianluca


Re: [Users] failed nested vm only with spice and not vnc

2014-02-09 Thread Itamar Heim

On 02/05/2014 08:37 PM, Gianluca Cecchi wrote:

I replicated the problem in the same environment but attached to an iSCSI
storage domain, so the Gluster part is not involved.
As soon as I run a VM on the host, the VM goes into a paused state (see
the virsh check sketched after the logs below), and in the host's
messages log:


Feb  5 19:22:45 localhost kernel: [16851.192234] cgroup: libvirtd
(1460) created nested cgroup for controller memory which has
incomplete hierarchy support. Nested cgroups may change behavior in
the future.
Feb  5 19:22:45 localhost kernel: [16851.192240] cgroup: memory
requires setting use_hierarchy to 1 on the root.
Feb  5 19:22:46 localhost kernel: [16851.228204] device vnet0 entered
promiscuous mode
Feb  5 19:22:46 localhost kernel: [16851.236198] ovirtmgmt: port
2(vnet0) entered forwarding state
Feb  5 19:22:46 localhost kernel: [16851.236208] ovirtmgmt: port
2(vnet0) entered forwarding state
Feb  5 19:22:46 localhost kernel: [16851.591058] qemu-system-x86:
sending ioctl 5326 to a partition!
Feb  5 19:22:46 localhost kernel: [16851.591074] qemu-system-x86:
sending ioctl 80200204 to a partition!
Feb  5 19:22:46 localhost vdsm vm.Vm WARNING
vmId=`7094da5f-6c08-4b0c-ae98-8bfb6de1b9c0`::_readPauseCode
unsupported by libvirt vm
Feb  5 19:22:47 localhost avahi-daemon[449]: Registering new address
record for fe80


And in qemu.log for the VM:

2014-02-05 18:22:46.280+0000: starting up
LC_ALL=C PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin
QEMU_AUDIO_DRV=spice /usr/bin/qemu-kvm -name c6i -S -machine
pc-1.0,accel=kvm,usb=off -cpu Nehalem -m 1024 -smp
1,sockets=1,cores=1,threads=1 -uuid
7094da5f-6c08-4b0c-ae98-8bfb6de1b9c0 -smbios
type=1,manufacturer=oVirt,product=oVirt
Node,version=19-6,serial=421F4B4A-9A4C-A7C4-54E5-847BF1ADE1A5,uuid=7094da5f-6c08-4b0c-ae98-8bfb6de1b9c0
-no-user-config -nodefaults -chardev
socket,id=charmonitor,path=/var/lib/libvirt/qemu/c6i.monitor,server,nowait
-mon chardev=charmonitor,id=monitor,mode=control -rtc
base=2014-02-05T18:22:45,driftfix=slew -no-shutdown -device
piix3-usb-uhci,id=usb,bus=pci.0,addr=0x1.0x2 -device
virtio-scsi-pci,id=scsi0,bus=pci.0,addr=0x4 -device
virtio-serial-pci,id=virtio-serial0,bus=pci.0,addr=0x5 -drive
file=/rhev/data-center/mnt/ovirt.localdomain.local:_var_lib_exports_iso/6e80607d-5437-4fc5-b73c-66794f6381e0/images/----/CentOS-6.4-x86_64-bin-DVD1.iso,if=none,id=drive-ide0-1-0,readonly=on,format=raw,serial=
-device ide-cd,bus=ide.1,unit=0,drive=drive-ide0-1-0,id=ide0-1-0,bootindex=1
-drive 
file=/rhev/data-center/mnt/blockSD/f741671e-6480-4d7b-b357-8cf6e8d2c0f1/images/0912658d-1541-4d56-945b-112b0b074d29/67aaf7db-4d1c-42bd-a1b0-988d95c5d5d2,if=none,id=drive-virtio-disk0,format=qcow2,serial=0912658d-1541-4d56-945b-112b0b074d29,cache=none,werror=stop,rerror=stop,aio=native
-device 
virtio-blk-pci,scsi=off,bus=pci.0,addr=0x6,drive=drive-virtio-disk0,id=virtio-disk0,bootindex=2
-netdev tap,fd=30,id=hostnet0,vhost=on,vhostfd=31 -device
virtio-net-pci,netdev=hostnet0,id=net0,mac=00:1a:4a:bb:9f:17,bus=pci.0,addr=0x3,bootindex=3
-chardev 
socket,id=charchannel0,path=/var/lib/libvirt/qemu/channels/7094da5f-6c08-4b0c-ae98-8bfb6de1b9c0.com.redhat.rhevm.vdsm,server,nowait
-device 
virtserialport,bus=virtio-serial0.0,nr=1,chardev=charchannel0,id=channel0,name=com.redhat.rhevm.vdsm
-chardev 
socket,id=charchannel1,path=/var/lib/libvirt/qemu/channels/7094da5f-6c08-4b0c-ae98-8bfb6de1b9c0.org.qemu.guest_agent.0,server,nowait
-device 
virtserialport,bus=virtio-serial0.0,nr=2,chardev=charchannel1,id=channel1,name=org.qemu.guest_agent.0
-chardev spicevmc,id=charchannel2,name=vdagent -device
virtserialport,bus=virtio-serial0.0,nr=3,chardev=charchannel2,id=channel2,name=com.redhat.spice.0
-spice 
tls-port=5900,addr=0,x509-dir=/etc/pki/vdsm/libvirt-spice,tls-channel=main,tls-channel=display,tls-channel=inputs,tls-channel=cursor,tls-channel=playback,tls-channel=record,tls-channel=smartcard,tls-channel=usbredir,seamless-migration=on
-k en-us -vga qxl -global qxl-vga.ram_size=67108864 -global
qxl-vga.vram_size=33554432 -device
virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x7
KVM: unknown exit, hardware reason 3
EAX=0094 EBX=6e44 ECX=000e EDX=0511
ESI=0002 EDI=6df8 EBP=6e08 ESP=6dd4
EIP=3ffe1464 EFL=0017 [APC] CPL=0 II=0 A20=1 SMM=0 HLT=0
ES =0010   00c09300 DPL=0 DS   [-WA]
CS =0008   00c09b00 DPL=0 CS32 [-RA]
SS =0010   00c09300 DPL=0 DS   [-WA]
DS =0010   00c09300 DPL=0 DS   [-WA]
FS =0010   00c09300 DPL=0 DS   [-WA]
GS =0010   00c09300 DPL=0 DS   [-WA]
LDT=   8200 DPL=0 LDT
TR =   8b00 DPL=0 TSS32-busy
GDT= 000fd3a8 0037
IDT= 000fd3e6 
CR0=0011 CR2= CR3= CR4=
DR0= DR1= DR2=
DR3=
DR6=0ff0 DR7=0400
EFER=
Code=eb be 83 c4 08 5b 5e 5f 5d c3 89 c1 ba 11 
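
A note on the "_readPauseCode unsupported by libvirt vm" warning above: the
pause reason can also be read straight from libvirt on the host. A minimal
check, assuming the domain is named c6i as in the qemu.log:

[root@ovnode01 ~]# virsh -r list --all              # -r opens a read-only connection, no SASL login needed
[root@ovnode01 ~]# virsh -r domstate c6i --reason   # prints the paused state plus libvirt's pause reason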

Re: [Users] failed nested vm only with spice and not vnc

2014-02-05 Thread Gianluca Cecchi
Hello,
I am reviving this thread with a subject that is more in line with the
real problem. The previous thread's subject was

unable to start vm in 3.3 and f19 with gluster


and it began here on the oVirt users mailing list:
http://lists.ovirt.org/pipermail/users/2013-September/016628.html

I have now updated everything to the final 3.3.3 release, and the problem is still there.

So I now have updated Fedora 19 hosts that are themselves VMs (virtual
hardware version 9) inside a vSphere 5.1 infrastructure.

The CPU of the ESX host is an E7-4870, and the cluster in oVirt is defined
as Intel Nehalem Family.
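
To spell out what "only with spice and not vnc" means: the guest crashes
only when started with the SPICE/qxl display; started with VNC instead, it
runs fine. At the qemu level the difference is roughly the display part of
the command line shown earlier in this thread (a sketch, not the exact
vdsm-generated options): the VNC run replaces the -spice / -vga qxl /
-global qxl-vga.* options with something like

-vnc 192.168.33.41:0,password -k en-us -vga cirrus

while everything else stays the same.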


On the oVirt host VM:
[root@ovnode01 qemu]# rpm -q libvirt qemu-kvm
libvirt-1.0.5.9-1.fc19.x86_64
qemu-kvm-1.4.2-15.fc19.x86_64

[root@ovnode01 qemu]# uname -r
3.12.9-201.fc19.x86_64

flags from /proc/cpuinfo:
flags   : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge
mca cmov pat pse36 clflush dts mmx fxsr sse sse2 ss syscall nx rdtscp
lm constant_tsc arch_perfmon pebs bts nopl xtopology tsc_reliable
nonstop_tsc aperfmperf pni monitor vmx ssse3 cx16 sse4_1 sse4_2 x2apic
popcnt lahf_lm ida arat epb dtherm tpr_shadow vnmi ept vpid
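
Since the oVirt node is itself a vSphere guest, the vmx flag above is the
key one: it shows ESXi is exposing VT-x to this VM. A quick sanity check
that KVM picked it up (commands added here for completeness; they are not
in the original report):

[root@ovnode01 ~]# lsmod | grep kvm        # both kvm and kvm_intel should be loaded
[root@ovnode01 ~]# ls -l /dev/kvm          # the kvm device node should exist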

[root@ovnode01 ~]# vdsClient -s localhost getVdsCapabilities
HBAInventory = {'FC': [], 'iSCSI': [{'InitiatorName':
'iqn.1994-05.com.redhat:6344c23973df'}]}
ISCSIInitiatorName = 'iqn.1994-05.com.redhat:6344c23973df'
bondings = {'bond0': {'addr': '',
  'cfg': {},
  'hwaddr': '32:5c:6a:20:cd:21',
  'ipv6addrs': [],
  'mtu': '1500',
  'netmask': '',
  'slaves': []}}
bridges = {'ovirtmgmt': {'addr': '192.168.33.41',
 'cfg': {'BOOTPROTO': 'none',
 'DEFROUTE': 'yes',
 'DELAY': '0',
 'DEVICE': 'ovirtmgmt',
 'GATEWAY': '192.168.33.15',
 'IPADDR': '192.168.33.41',
 'NETMASK': '255.255.255.0',
 'NM_CONTROLLED': 'no',
 'ONBOOT': 'yes',
 'STP': 'no',
 'TYPE': 'Bridge'},
 'gateway': '192.168.33.15',
 'ipv6addrs': ['fe80::250:56ff:fe9f:686b/64'],
 'ipv6gateway': '::',
 'mtu': '1500',
 'netmask': '255.255.255.0',
 'ports': ['eth0', 'vnet1'],
 'stp': 'off'},
   'vlan172': {'addr': '',
   'cfg': {'DEFROUTE': 'no',
   'DELAY': '0',
   'DEVICE': 'vlan172',
   'NM_CONTROLLED': 'no',
   'ONBOOT': 'yes',
   'STP': 'no',
   'TYPE': 'Bridge'},
   'gateway': '0.0.0.0',
   'ipv6addrs': ['fe80::250:56ff:fe9f:3b86/64'],
   'ipv6gateway': '::',
   'mtu': '1500',
   'netmask': '',
   'ports': ['ens256.172', 'vnet0'],
   'stp': 'off'}}
clusterLevels = ['3.0', '3.1', '3.2', '3.3']
cpuCores = '4'
cpuFlags =
'fpu,vme,de,pse,tsc,msr,pae,mce,cx8,apic,sep,mtrr,pge,mca,cmov,pat,pse36,clflush,dts,mmx,fxsr,sse,sse2,ss,syscall,nx,rdtscp,lm,constant_tsc,arch_perfmon,pebs,bts,nopl,xtopology,tsc_reliable,nonstop_tsc,aperfmperf,pni,monitor,vmx,ssse3,cx16,sse4_1,sse4_2,x2apic,popcnt,lahf_lm,ida,arat,epb,dtherm,tpr_shadow,vnmi,ept,vpid,model_Nehalem,model_Conroe,model_coreduo,model_core2duo,model_Penryn,model_n270'
cpuModel = 'Intel(R) Xeon(R) CPU E7- 4870  @ 2.40GHz'
cpuSockets = '4'
cpuSpeed = '2394.000'
cpuThreads = '4'
emulatedMachines = ['pc',
'q35',
'isapc',
'pc-0.10',
'pc-0.11',
'pc-0.12',
'pc-0.13',
'pc-0.14',
'pc-0.15',
'pc-1.0',
'pc-1.1',
'pc-1.2',
'pc-1.3',
'none']
guestOverhead = '65'
hooks = {}
kvmEnabled = 'true'
lastClient = '127.0.0.1'
lastClientIface = 'lo'
management_ip = '0.0.0.0'
memSize = '16050'