qemu 2.4 machine + kernel 4.2 + kvm : freeze & cpu 100% at start on some hardware (maybe KVM_CAP_X86_SMM related)

2015-09-29 Thread Alexandre DERUMIER
Hi,

I'm currently implementing qemu 2.4 for proxmox hypervisors,
and a lot of users have reported qemu freezing with the CPU at 100% when starting a VM.
Connecting with the VNC display shows: "qemu guest has not initialized the display yet"

Similar bug report here : 
https://lacyc3.eu/qemu-guest-has-not-initialized-the-display


This does not occur on all hardware:
for example, it freezes on a Dell PowerEdge R710 (Xeon E5540), but not on a Dell
R630 (Xeon E5-2687W v3 @ 3.10GHz)
or on a very old Dell PowerEdge 2950 (Xeon 5110 @ 1.60GHz).

This only happens with qemu 2.4 + kernel 4.2 + kvm (kernel 4.1 works fine).




non-working command line
------------------------

/usr/bin/kvm -chardev socket,id=qmp,path=/var/run/qemu-server/100.qmp,server,nowait -mon chardev=qmp,mode=control -vnc unix:/var/run/qemu-server/100.vnc,x509,password -pidfile /var/run/qemu-server/100.pid -name test -cpu kvm64 -m 4096 -machine pc-i440fx-2.4


working command line
--------------------

qemu 2.4 + kvm + compat 2.3 profile:

/usr/bin/kvm -chardev socket,id=qmp,path=/var/run/qemu-server/100.qmp,server,nowait -mon chardev=qmp,mode=control -vnc unix:/var/run/qemu-server/100.vnc,x509,password -pidfile /var/run/qemu-server/100.pid -name test -cpu kvm64 -m 4096 -machine pc-i440fx-2.3

qemu 2.4 without kvm:

/usr/bin/kvm -chardev socket,id=qmp,path=/var/run/qemu-server/100.qmp,server,nowait -mon chardev=qmp,mode=control -vnc unix:/var/run/qemu-server/100.vnc,x509,password -pidfile /var/run/qemu-server/100.pid -name test -cpu kvm64 -m 4096 -machine accel=tcg,type=pc-i440fx-2.4



So it's working with qemu 2.4 + the machine 2.3 compat profile.



Looking at the code:

static void pc_compat_2_3(MachineState *machine)
{
    PCMachineState *pcms = PC_MACHINE(machine);
    savevm_skip_section_footers();
    if (kvm_enabled()) {
        pcms->smm = ON_OFF_AUTO_OFF;
    }
    global_state_set_optional();
    savevm_skip_configuration();
}


If I comment out that line:
// pcms->smm = ON_OFF_AUTO_OFF;

I get the same freeze with the 2.3 compat profile too.
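
Another test, less intrusive than switching machine types, would be to keep the 2.4 machine but turn SMM off explicitly (assuming the smm machine property is exposed on pc-i440fx-2.4, which I believe it is):

-machine pc-i440fx-2.4,smm=off

If the guest then starts normally, that pretty much pins the freeze on the new SMM support rather than on some other 2.4 machine change.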



So it seems to come from somewhere in:


bool pc_machine_is_smm_enabled(PCMachineState *pcms)
{
    bool smm_available = false;

    if (pcms->smm == ON_OFF_AUTO_OFF) {
        return false;
    }

    if (tcg_enabled() || qtest_enabled()) {
        smm_available = true;
    } else if (kvm_enabled()) {
        smm_available = kvm_has_smm();    /* <-- maybe here? */
    }

    if (smm_available) {
        return true;
    }

    if (pcms->smm == ON_OFF_AUTO_ON) {
        error_report("System Management Mode not supported by this hypervisor.");
        exit(1);
    }
    return false;
}


bool kvm_has_smm(void)
{
    return kvm_check_extension(kvm_state, KVM_CAP_X86_SMM);
}



I'm not sure whether it's a qemu bug or a kernel/kvm bug.

Help is welcome.


Regards,

Alexandre Derumier



Re: [BUG] Balloon malfunctions with memory hotplug

2015-02-28 Thread Alexandre DERUMIER
Hi, 

I think this was already reported some months ago,

and a patch was submitted to the mailing list (but it was held back until memory unplug
was merged before being applied):

http://lists.gnu.org/archive/html/qemu-devel/2014-11/msg02362.html




- Original Message -
From: Luiz Capitulino lcapitul...@redhat.com
To: qemu-devel qemu-de...@nongnu.org
Cc: kvm kvm@vger.kernel.org, Igor Mammedov imamm...@redhat.com, zhang 
zhanghailiang zhang.zhanghaili...@huawei.com, pkre...@redhat.com, Eric 
Blake ebl...@redhat.com, Michael S. Tsirkin m...@redhat.com, amit shah 
amit.s...@redhat.com
Sent: Thursday, 26 February 2015 20:26:29
Subject: [BUG] Balloon malfunctions with memory hotplug

Hello, 

Reproducer: 

1. Start QEMU with balloon and memory hotplug support: 

# qemu [...] -m 1G,slots=2,maxmem=2G -balloon virtio 

2. Check balloon size: 

(qemu) info balloon 
balloon: actual=1024 
(qemu) 

3. Hotplug some memory: 

(qemu) object_add memory-backend-ram,id=mem1,size=1G 
(qemu) device_add pc-dimm,id=dimm1,memdev=mem1 

4. This step is _not_ needed to reproduce the problem, 
but you may need to online memory manually on Linux so 
that it becomes available in the guest (see the sysfs example right after step 5) 

5. Check balloon size again: 

(qemu) info balloon 
balloon: actual=1024 
(qemu) 
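
Referring back to step 4, onlining hotplugged memory from inside a Linux guest is typically done through sysfs (the memory block number below is just an example): 

# list memory blocks that are still offline 
grep -l offline /sys/devices/system/memory/memory*/state 
# online one of them 
echo online > /sys/devices/system/memory/memory32/state 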

BUG: The guest now has 2GB of memory, but the balloon thinks 
the guest has 1GB 

One may think that the problem is that the balloon driver is 
ignoring hotplugged memory. This is not what's happening. If 
you do balloon your guest, there's nothing stopping the 
balloon driver in the guest from ballooning hotplugged memory. 

The problem is that the balloon device in QEMU needs to know 
the current amount of memory available to the guest. 

Before memory hotplug this information was easy to obtain: the 
current amount of memory available to the guest is the memory the 
guest was booted with. This value is stored in the ram_size global 
variable in QEMU and this is what the balloon device emulation 
code uses today. However, when memory is hotplugged ram_size is 
_not_ updated and the balloon device breaks. 

I see two possible solutions for this problem: 

1. In addition to reading ram_size, the balloon device in QEMU 
could scan pc-dimm devices to account for hotplugged memory. 

This solution was already implemented by zhanghailiang: 

http://lists.gnu.org/archive/html/qemu-devel/2014-11/msg02362.html 

It works, except that on Linux memory hotplug is a two-step 
procedure: first memory is inserted then it has to be onlined 
from user-space. So, if memory is inserted but not onlined 
this solution gives the opposite problem: the balloon device 
will report a larger memory amount than the guest actually has. 
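
To make that trade-off concrete, here is a tiny standalone toy model of solution 1 (plain C, not QEMU code; sizes are just an example): the device would report boot RAM plus every plugged DIMM, including DIMMs the guest has not onlined yet. 

#include <stdint.h> 
#include <stdio.h> 

struct dimm { uint64_t size; int onlined; }; 

int main(void) 
{ 
    uint64_t boot_ram = 1ULL << 30;               /* -m 1G */ 
    struct dimm dimms[] = { { 1ULL << 30, 0 } };  /* 1G plugged, not yet onlined */ 
    uint64_t reported = boot_ram, usable = boot_ram; 
    unsigned i; 

    for (i = 0; i < sizeof(dimms) / sizeof(dimms[0]); i++) { 
        reported += dimms[i].size;                /* what solution 1 would report */ 
        if (dimms[i].onlined) { 
            usable += dimms[i].size;              /* what the guest can actually use */ 
        } 
    } 
    printf("reported: %llu MiB, usable in guest: %llu MiB\n", 
           (unsigned long long)(reported >> 20), 
           (unsigned long long)(usable >> 20)); 
    return 0; 
} 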

Can we live with that? I guess not, but I'm open for discussion. 

If QEMU could be notified when Linux makes memory online, then 
the problem would be gone. But I guess this can't be done. 

2. Modify the balloon driver in the guest to inform the balloon 
device on the host about the current memory available to the 
guest. This way, whenever the balloon device in QEMU needs 
to know the current amount of memory in the guest, it asks 
the guest. This drops any usage of ram_size in the balloon 
device. 

I'm not completely sure this is feasible though. For example, 
what happens if the guest reports a memory amount to QEMU and 
right after this more memory is plugged? 

Besides, this solution is more complex than solution 1 and 
won't address older guests. 

Another important detail is that, I *suspect* that a very similar 
bug already exists with 32-bit guests even without memory 
hotplug: what happens if you assign 6GB to a 32-bit guest without PAE 
support? I think the same problem we're seeing with memory 
hotplug will happen and solution 1 won't fix this, although 
no one seems to care about 32-bit guests... 


Re: cache write back barriers

2013-06-13 Thread Alexandre DERUMIER
I'm wondering: does this also make kvm to ignore write barriers invoked 
by the virtual machine? 

No, cache=writeback is OK: write barriers from the guest are honored correctly.

Only with cache=unsafe are write flushes ignored.
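
In -drive terms (the image name is just an example):

-drive file=vm.raw,if=virtio,cache=writeback    (guest flushes/barriers are honored)
-drive file=vm.raw,if=virtio,cache=unsafe       (flushes are ignored; only for disposable data)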


- Original Message - 

From: folkert folk...@vanheusden.com 
To: kvm@vger.kernel.org 
Sent: Wednesday, 12 June 2013 10:03:10 
Subject: cache write back barriers 

Hi, 

In virt-manager I saw that there's the option for cache writeback for 
storage devices. 
I'm wondering: does this also make kvm to ignore write barriers invoked 
by the virtual machine? 


regards, 

Folkert van Heusden 

-- 
Always wondered what the latency of your webserver is? Or how much more 
latency you get when you go through a proxy server/tor? The numbers 
tell the tale and with HTTPing you know them! 
http://www.vanheusden.com/httping/ 
--- 
Phone: +31-6-41278122, PGP-key: 1F28D8AE, www.vanheusden.com 


Re: Unable to boot from SCSI disk

2013-05-30 Thread Alexandre DERUMIER
Hello,
I can boot from an LSI SCSI disk with qemu 1.4 using

-device scsi-hd -drive file=/dev/

but not with

-device scsi-block -drive file=/dev/
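
For reference, a full invocation along these lines would look roughly like the following (controller id, volume path and bootindex are only illustrative):

-device lsi,id=scsi0 -drive file=/dev/vg0/vm-101-disk-1,if=none,id=drive-scsi0 -device scsi-hd,bus=scsi0.0,drive=drive-scsi0,bootindex=1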



- Original Message - 

From: Daniel Guillermo Bareiro daniel-lis...@gmx.net 
To: kvm@vger.kernel.org 
Sent: Wednesday, 29 May 2013 13:11:42 
Subject: Re: Unable to boot from SCSI disk 

On Tuesday, 28 May 2013 11:07:25 +0400, 
Michael Tokarev wrote: 

 A small but maybe important followup. In unstable (and testing) 
 version of debian, there is currently a more recent version of seabios 
 (based on 1.7.2.x), which is able to boot from an scsi device just 
 fine, and is compatible with qemu[-kvm] 1.1. You may try installing 
 that one (just the seabios, nothing more, it does not have any 
 dependencies whatsoever) and things should work in regular way. I'll 
 prepare backports of all stuff in the very near future. 

Thank you very much for your effort. 

  BTW, why are you installing stuff on scsi? Is there some particular 
  reason for that? 

 This question is interesting still. 

It was not really a necessity. I was just doing my first tests with 
libvirt and I came across this problem. Then I started working my way down the 
stack to identify where the problem was: virt-manager, libvirt or qemu-[kvm]. And 
it seemed like a good idea to comment on it on the list. 


Thanks for your reply. 


Regards, 
Daniel 


Re: expanding virtual disk based on lvm

2012-09-05 Thread Alexandre DERUMIER

Certainly restart (shutting down qemu and restarting it, not a reset) 
works, I thought you wanted online resize. 

You can resize an lvm volume online without restarting the guest.


Just use lvextend to extend your LVM volume,

then use the QMP block_resize command with the same (new) size, so the guest will
see the new size.
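
For example (volume name, drive id and sizes are only illustrative), first grow the logical volume on the host:

lvextend -L 30G /dev/vg0/vm-100-disk-1

then resize the block device from the monitor with the same size:

(qemu) block_resize drive-virtio0 30G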



I have implemented this in the proxmox kvm distribution, and it's working fine.

(tested with virtio-blk and virtio-scsi)



- Original Message - 

From: Avi Kivity a...@redhat.com 
To: Ross Boylan r...@biostat.ucsf.edu 
Cc: kvm@vger.kernel.org 
Sent: Wednesday, 5 September 2012 09:25:26 
Subject: Re: expanding virtual disk based on lvm 

On 09/04/2012 09:58 PM, Ross Boylan wrote: 
 On Tue, 2012-09-04 at 15:53 +0300, Avi Kivity wrote: 
 On 08/28/2012 11:26 PM, Ross Boylan wrote: 
  My vm launches with -hda /dev/turtle/VD0 -hdb /dev/turtle/VD1, where VD0 
  and VD1 are lvm logical volumes. I used lvextend to expand them, but 
  the VM, started after the expansion, does not seem to see the extra 
  space. 
  
  What do I need to so that the space will be recognized? 
 
 IDE (-hda) does not support rechecking the size. Try booting with 
 virtio-blk. Additionally, you may need to request the guest to rescan 
 the drive (no idea how to do that). Nor am I sure whether qemu will 
 emulate the request correctly. 
 
 Thank you for the suggestion. 
 
 I think the physical recognition of the new virtual disk size was 
 accomplished when I restarted the VM, without any other steps. I've had 
 plenty of other problems, but I think at the VM level things are good. 

Certainly restart (shutting down qemu and restarting it, not a reset) 
works, I thought you wanted online resize. 



-- 
error compiling committee.c: too many arguments to function 


block device type supporting trim or scsi unmap ?

2012-05-10 Thread Alexandre DERUMIER
Hi,
I'm looking to implement a SAN storage with SSD drives.

Which block device types support TRIM or SCSI UNMAP?

I think IDE supports it (but performance...)

scsi ?
virtio ?
virtio-scsi ?


Regards,

Alexandre Derumier



Re: block device type supporting trim or scsi unmap ?

2012-05-10 Thread Alexandre DERUMIER
iscsi supports it too but it requires that your iscsi target supports 
these opcodes, and that the filesystem/storage behind it supports it 
too. 
TGTD with EXT4 and a suitable storage device should do the trick. 

I'm using an iSCSI Solaris target with SCSI UNMAP support. 
The drives also support SCSI UNMAP in RAID (OCZ Talos HDDs).
I'm using direct LUN access from host to guest.


But my question was: which block devices inside the guest implement TRIM or SCSI
UNMAP?



- Original Message - 

From: ronnie sahlberg ronniesahlb...@gmail.com 
To: Alexandre DERUMIER aderum...@odiso.com 
Cc: kvm@vger.kernel.org 
Sent: Thursday, 10 May 2012 12:27:05 
Subject: Re: block device type supporting trim or scsi unmap ? 

On Thu, May 10, 2012 at 8:19 PM, Alexandre DERUMIER aderum...@odiso.com 
wrote: 
 Hi, 
 I'm looking to implement a san storage with ssd drive. 
 
 which block device type support trim or scsi unmap ? 
 
 I think ide support it (but performance...) 
 
 scsi ? 
 virtio ? 
 virtio-scsi ? 

iscsi supports it too but it requires that your iscsi target supports 
these opcodes, and that the filesystem/storage behind it supports it 
too. 
TGTD with EXT4 and a suitable storage device should do the trick. 


 
 
 Regards, 
 
 Alexandre Derumier 
 



-- 
Alexandre Derumier 
Systems Engineer 
Phone: 03 20 68 88 90 
Fax: 03 20 68 90 81 
45 Bvd du Général Leclerc 59100 Roubaix - France 
12 rue Marivaux 75002 Paris - France 



Re: block device type supporting trim or scsi unmap ?

2012-05-10 Thread Alexandre DERUMIER
Guess it depends on how recent kernel your guest runs. 
If you present it as a SCSI disk to the guest, then I have 
successfully had Linux Mint 12 guests do UNMAP when accessing /dev/sd* 
from within the guest. 

OK, thanks for the info, I'll try it!
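
For reference, a quick way to verify it from inside the guest once the disk is presented over SCSI (device name and mount point are just examples, and fstrim needs a reasonably recent util-linux):

cat /sys/block/sda/queue/discard_granularity    (non-zero means the disk advertises UNMAP/TRIM)
fstrim -v /mnt/data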


- Original Message - 

From: ronnie sahlberg ronniesahlb...@gmail.com 
To: Alexandre DERUMIER aderum...@odiso.com 
Cc: kvm@vger.kernel.org 
Sent: Thursday, 10 May 2012 14:36:38 
Subject: Re: block device type supporting trim or scsi unmap ? 

On Thu, May 10, 2012 at 10:28 PM, Alexandre DERUMIER 
aderum...@odiso.com wrote: 
iscsi supports it too but it requires that your iscsi target supports 
these opcodes, and that the filesystem/storage behind it supports it 
too. 
TGTD with EXT4 and a suitable storage device should do the trick. 
 
 I'm using a iscsi solaris target, with scsi UNMAP support. 
 Drives also support scsi unmap in raid (ocz talos hdd). 
 I'm using direct lun access from host to guest. 
 
 
 But my question was, which block device inside guests have trim or scsi unmap 
 implemention ? 
 
 

Guess it depends on how recent kernel your guest runs. 
If you present it as a SCSI disk to the guest, then I have 
successfully had Linux Mint 12 guests do UNMAP when accessing /dev/sd* 
from within the guest. 


If you present the device as a SCSI disk to the guest, you use a 
recent 3.x linux kernel for the guest and EXT4 as the filesystem 
I guess it should work. 


Don't know about the status of IDE emulation. 

regards 
ronnie sahlberg 



-- 
Alexandre Derumier 
Systems Engineer 
Phone: 03 20 68 88 90 
Fax: 03 20 68 90 81 
45 Bvd du Général Leclerc 59100 Roubaix - France 
12 rue Marivaux 75002 Paris - France 



Re: Virtio network performance on Debian

2012-04-16 Thread Alexandre DERUMIER
Note: the proxmox2 kernel is based on the 2.6.32-220.7.1.el6 RHEL 6.2 kernel,
plus qemu-kvm git.


- Original Message - 

From: Stefan Pietsch stefan.piet...@lsexperts.de 
To: Hans-Kristian Bakke hkba...@gmail.com 
Cc: kvm@vger.kernel.org 
Sent: Monday, 16 April 2012 11:01:16 
Subject: Re: Virtio network performance on Debian 

On 12.04.2012 09:42, Hans-Kristian Bakke wrote: 
 Hi 
 
 For some reason I am not able to get good network performance using 
 virtio/vhost-net on Debian KVM host (perhaps also valid for Ubuntu 
 hosts then). 
 Disc IO is very good and the guests feels snappy so it doesn't seem 
 like there is something really wrong, just something suboptimal with 
 the networking. 

[..] 

 I have tried: 
  
 - Replacing Debian Wheezy with Debian Squeeze (stable, kernel 
 2.6.32-xx) - even worse results 
 - Replacing kernel 3.2.0-2-amd64 with vanilla kernel 3.4-rc2 and 
 config based on Debians included config - no apparent change 
 - Extracted the kernel-config file from Fedora 17 alpha's kernel and 
 used this to compile a new kernel based on Debian Wheezy's kernel 
 source - slightly worse results 
 - ...in addition to exchanging Debian with Fedora 17 alpha, Proxmox 
 1.9 and 2.0 and ESXi 5 which all have expected network performance 
 using virtio. 
 
 
 So, I am at a loss here. It does not seem to be kernel config related 
 (as using Fedora's config on the Debian kernel source didn't do anything 
 good), so I think it must be either a kernel patch that Red Hat kernel 
 based distros use to make virtio/vhost much more efficient, or perhaps 
 something with Debian's qemu version, bridging or something. 


I have made some tests with a Debian Squeeze KVM host running with the 
Linux Kernel 2.6.39 from backports and the Kernel version 2.6.32-11-pve 
from Proxmox. 

(http://download.proxmox.com/debian/dists/squeeze/pve/binary-amd64/pve-kernel-2.6.32-11-pve_2.6.32-66_amd64.deb)
 

Network performance between two virtual machines on the same host is 
significantly slower with the Debian kernel: 

2.6.39-bpo.2-amd64 : 1.31 Gbits/sec 
2.6.32-11-pve : 2.20 Gbits/sec 

iperf tests between a virtual machine and the KVM host connected to the 
same local bridge interface showed similar results. 
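
(For reference, such a test can be run with plain iperf, along these lines; the peer address is a placeholder:) 

iperf -s                        # on the receiving end (VM or host) 
iperf -c <peer address> -t 30   # on the sending end 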

Are there other people who can confirm this? 


Regards, 
Stefan 



-- 
Alexandre Derumier 
Systems Engineer 
Phone: 03 20 68 88 90 
Fax: 03 20 68 90 81 
45 Bvd du Général Leclerc 59100 Roubaix - France 
12 rue Marivaux 75002 Paris - France 



Guest hang with 5 virtio disk with recents kernel and qemu-kvm git

2012-04-11 Thread Alexandre DERUMIER
Hi,
I'm a contributor to the proxmox2 distribution.

We use the latest qemu-kvm git version, and users report guest hangs at udev 
start, during virtio device initialization.

http://forum.proxmox.com/threads/9057-virtio-net-crashing-after-upgrade-to-proxmox-2-0
(screenshots are available in the forum thread)


It happens if we have:
 - 5 or more virtio disks
 - 4 virtio disks and 1 or more virtio NICs.

working guests
--------------
- guests with a 2.6.32 kernel, like Debian Squeeze, boot fine
- Debian Wheezy with the Squeeze 2.6.32 kernel boots fine.

non-working guests
------------------
Gentoo with
- 3.0.17
- 3.1.6
- 3.2.1
- 3.2.12
kernels are hanging at udev start.

- CentOS 6.2 with a 2.6.32 kernel + backported patches is also hanging.
- Debian Wheezy with a 3.2 kernel. 



The same guests/kernels boot fine with qemu-kvm 0.15.

So I can't tell whether it's a kernel problem or a qemu-kvm problem.


command line sample:

/usr/bin/kvm -id 100 -chardev socket,id=monitor,path=/var/run/qemu-server/100.mon,server,nowait -mon chardev=monitor,mode=readline -vnc unix:/var/run/qemu-server/100.vnc,x509,password -pidfile /var/run/qemu-server/100.pid -daemonize -usbdevice tablet -name centos-6.2 -smp sockets=1,cores=4 -nodefaults -boot menu=on -vga cirrus -localtime -k en-us -drive file=/dev/disk5/vm-100-disk-1,if=none,id=drive-virtio3,aio=native,cache=none -device virtio-blk-pci,drive=drive-virtio3,id=virtio3,bus=pci.0,addr=0xd -drive file=/dev/disk3/vm-100-disk-1,if=none,id=drive-virtio1,aio=native,cache=none -device virtio-blk-pci,drive=drive-virtio1,id=virtio1,bus=pci.0,addr=0xb -drive if=none,id=drive-ide2,media=cdrom,aio=native -device ide-cd,bus=ide.1,unit=0,drive=drive-ide2,id=ide2,bootindex=200 -drive file=/dev/disk2/vm-100-disk-1,if=none,id=drive-virtio0,aio=native,cache=none -device virtio-blk-pci,drive=drive-virtio0,id=virtio0,bus=pci.0,addr=0xa,bootindex=102 -drive file=/dev/disk4/vm-100-disk-1,if=none,id=drive-virtio2,aio=native,cache=none -device virtio-blk-pci,drive=drive-virtio2,id=virtio2,bus=pci.0,addr=0xc -m 8192 -netdev type=tap,id=net0,ifname=tap100i0,script=/var/lib/qemu-server/pve-bridge,vhost=on -device virtio-net-pci,mac=6A:A3:E9:EA:51:17,netdev=net0,bus=pci.0,addr=0x12,id=net0,bootindex=300


I have tried with/without vhost and with different PCI addresses, with the same results.


tests made
----------

- 3 virtio disks + 1 virtio-net = OK
- 3 virtio disks + 2 virtio-net = OK
- 3 virtio disks + 1 scsi (lsi) disk + 1 virtio-net = OK
- 3 virtio disks + 1 scsi (lsi) disk + 2 virtio-net = OK
- 4 virtio disks + 1 virtio-net = NOK (hang at net init on the virtio-net)
- 4 virtio disks + 1 e1000 = OK
- 4 virtio disks + 1 e1000 + 1 virtio-net = NOK (hang at net init on the 
virtio-net)
- 5 virtio disks + 1 e1000 = NOK (udevadm settle timeout on disk N°5, which 
becomes unusable)
- 5 virtio disks + 2 virtio-net = NOK (udevadm settle timeout on disk N°5 + 
hang on the virtio-net)
- 5 virtio disks + 3 virtio-net = NOK (udev settle timeout on disk N°5 + hang 
on the first virtio-net)


Can someone reproduce the problem ?



Best Regards,
Alexandre Derumier


building with today qemu-kvm git fail (vcard_emul_nss.c)

2012-03-08 Thread Alexandre DERUMIER
Hi,

I'm trying to build the latest qemu-kvm git:

./configure --prefix=/usr --datadir=/usr/share/kvm --docdir=/usr/share/doc/pve-qemu-kvm --sysconfdir=/etc --disable-xen --enable-vnc-tls --enable-sdl --enable-uuid --enable-linux-aio

and the build fails on vcard_emul_nss.c:

cc1: warnings being treated as errors
vcard_emul_nss.c:528: error: left-hand operand of comma expression has no effect
vcard_emul_nss.c:528: error: left-hand operand of comma expression has no effect
vcard_emul_nss.c:528: error: left-hand operand of comma expression has no effect
vcard_emul_nss.c:528: error: left-hand operand of comma expression has no effect
vcard_emul_nss.c:528: error: left-hand operand of comma expression has no effect
vcard_emul_nss.c:528: error: left-hand operand of comma expression has no effect
vcard_emul_nss.c:528: error: left-hand operand of comma expression has no effect
vcard_emul_nss.c:528: error: left-hand operand of comma expression has no effect
vcard_emul_nss.c:528: error: left-hand operand of comma expression has no effect
vcard_emul_nss.c:528: error: initializer element is not constant
vcard_emul_nss.c:528: error: (near initialization for 'nss_atr[0]')
make[3]: *** [vcard_emul_nss.o] Error 1
make[3]: Leaving directory `/root/proxmox2/pve-qemu-kvm/qemu-kvm/libcacard'
make[2]: *** [subdir-libcacard] Error 2
make[2]: Leaving directory `/root/proxmox2/pve-qemu-kvm/qemu-kvm'
make[1]: *** [build-stamp] Error 2


It was working fine yesterday.
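
Until the warning itself is fixed, a possible workaround (assuming these configure switches exist in this qemu-kvm tree) is to rerun configure with the smartcard/NSS code, or -Werror, disabled:

./configure ... --disable-smartcard-nss
# or, more bluntly:
./configure ... --disable-werror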

Best Regards,

-Alexandre