Bug#665046: qemu-kvm: bad network performance using vhost and macvtap

2012-04-16 Thread Michael Tokarev
On 22.03.2012 19:05, Michael Tokarev wrote:
 On 22.03.2012 15:06, Anthony Bourguignon wrote:
 Package: qemu-kvm
 Version: 1.0+dfsg-9
 Severity: important

 Hello,

 I encountered an issue with version 1.0+dfsg-9 of qemu-kvm (it worked fine 
 with 1.0+dfsg-8). Network performance is really bad when I'm using 
 vhost and macvtap (bridge mode). Here is the command line launched by libvirt:
[]

 There were no network-related changes between -8 and -9.
 Macvtap performance has always been awful from the ground up,
 but this is due to the kernel, not qemu.  Reportedly it has
 become a bit better with recent kernels but I haven't checked.
 At any rate, using macvtap isn't a good idea at this stage.

Another user encountered a similar issue, but this time,
his qemu-kvm is about 10 times slower in bridged mode,
and again, 1.0+dfsg-8 works okay for him.  I wonder if
this is the same issue...  It shows differently, but the
effect is very similar, and the same version is affected.

Here's the other bug report: http://bugs.debian.org/668594

I'm trying to investigate further...

Thank you for your patience,

/mjt
 /mjt




-- 
To UNSUBSCRIBE, email to debian-bugs-dist-requ...@lists.debian.org
with a subject of unsubscribe. Trouble? Contact listmas...@lists.debian.org



Bug#665046: qemu-kvm: bad network performance using vhost and macvtap

2012-03-22 Thread Anthony Bourguignon
Package: qemu-kvm
Version: 1.0+dfsg-9
Severity: important

Hello,

I encountered an issue with version 1.0+dfsg-9 of qemu-kvm (it worked fine with 
1.0+dfsg-8). Network performance is really bad when I'm using vhost and 
macvtap (bridge mode). Here is the command line launched by libvirt:

/usr/bin/kvm -S -M pc-1.0 -enable-kvm -m 2048 -mem-prealloc -mem-path 
/mnt/hugepages/libvirt/qemu -smp 2,sockets=2,cores=1,threads=1 -name test -uuid 
77f086e4-fec5-dc75-792b-4cfea46642ff -nodefconfig -nodefaults -chardev 
socket,id=charmonitor,path=/var/lib/libvirt/qemu/test.monitor,server,nowait 
-mon chardev=charmonitor,id=monitor,mode=control -rtc base=utc -no-shutdown 
-drive 
file=/dev/vps/test,if=none,id=drive-virtio-disk0,format=raw,cache=none,aio=native
 -device 
virtio-blk-pci,bus=pci.0,addr=0x4,drive=drive-virtio-disk0,id=virtio-disk0,bootindex=1
 -netdev tap,fd=21,id=hostnet0,vhost=on,vhostfd=22 -device 
virtio-net-pci,netdev=hostnet0,id=net0,mac=52:54:00:9a:c9:16,bus=pci.0,addr=0x3,bootindex=2
 -chardev pty,id=charserial0 -device isa-serial,chardev=charserial0,id=serial0 
-usb -vnc 0.0.0.0:0 -k fr -vga std -device 
virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x5

The vhost_net module is loaded on the host. On the guest side, downloads 
stall at only 15 kb/s. It works well if I use a bridge interface instead of 
macvtap, so it's not a problem with the host. If I revert the package to 
version 1.0+dfsg-8, the guest interface works at normal speed.
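
For reference, the two setups being compared correspond to different libvirt interface definitions. The following is an illustrative sketch only (the `eth0`/`br0` device names are hypothetical, not taken from this report):

```xml
<!-- macvtap in bridge mode (the slow case reported here): -->
<interface type='direct'>
  <source dev='eth0' mode='bridge'/>
  <model type='virtio'/>
</interface>

<!-- regular tap attached to a host bridge (the configuration that works): -->
<interface type='bridge'>
  <source bridge='br0'/>
  <model type='virtio'/>
</interface>
```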

Tell me if you need further investigation.

Thanks

-- Package-specific info:


/proc/cpuinfo:

processor   : 0
vendor_id   : GenuineIntel
cpu family  : 6
model   : 44
model name  : Intel(R) Xeon(R) CPU   E5620  @ 2.40GHz
stepping: 2
microcode   : 0x15
cpu MHz : 2394.206
cache size  : 12288 KB
physical id : 1
siblings: 8
core id : 0
cpu cores   : 4
apicid  : 32
initial apicid  : 32
fpu : yes
fpu_exception   : yes
cpuid level : 11
wp  : yes
flags   : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov 
pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb 
rdtscp lm constant_tsc arch_perfmon pebs bts rep_good nopl xtopology 
nonstop_tsc aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 
ssse3 cx16 xtpr pdcm pcid dca sse4_1 sse4_2 popcnt aes lahf_lm ida arat dts 
tpr_shadow vnmi flexpriority ept vpid
bogomips: 4788.41
clflush size: 64
cache_alignment : 64
address sizes   : 40 bits physical, 48 bits virtual
power management:

processor   : 1
vendor_id   : GenuineIntel
cpu family  : 6
model   : 44
model name  : Intel(R) Xeon(R) CPU   E5620  @ 2.40GHz
stepping: 2
microcode   : 0x15
cpu MHz : 2394.206
cache size  : 12288 KB
physical id : 0
siblings: 8
core id : 0
cpu cores   : 4
apicid  : 0
initial apicid  : 0
fpu : yes
fpu_exception   : yes
cpuid level : 11
wp  : yes
flags   : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov 
pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb 
rdtscp lm constant_tsc arch_perfmon pebs bts rep_good nopl xtopology 
nonstop_tsc aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 
ssse3 cx16 xtpr pdcm pcid dca sse4_1 sse4_2 popcnt aes lahf_lm ida arat dts 
tpr_shadow vnmi flexpriority ept vpid
bogomips: 4787.98
clflush size: 64
cache_alignment : 64
address sizes   : 40 bits physical, 48 bits virtual
power management:

processor   : 2
vendor_id   : GenuineIntel
cpu family  : 6
model   : 44
model name  : Intel(R) Xeon(R) CPU   E5620  @ 2.40GHz
stepping: 2
microcode   : 0x15
cpu MHz : 2394.206
cache size  : 12288 KB
physical id : 1
siblings: 8
core id : 1
cpu cores   : 4
apicid  : 34
initial apicid  : 34
fpu : yes
fpu_exception   : yes
cpuid level : 11
wp  : yes
flags   : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov 
pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb 
rdtscp lm constant_tsc arch_perfmon pebs bts rep_good nopl xtopology 
nonstop_tsc aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 
ssse3 cx16 xtpr pdcm pcid dca sse4_1 sse4_2 popcnt aes lahf_lm ida arat dts 
tpr_shadow vnmi flexpriority ept vpid
bogomips: 4787.99
clflush size: 64
cache_alignment : 64
address sizes   : 40 bits physical, 48 bits virtual
power management:

processor   : 3
vendor_id   : GenuineIntel
cpu family  : 6
model   : 44
model name  : Intel(R) Xeon(R) CPU   E5620  @ 2.40GHz
stepping: 2
microcode   : 0x15
cpu MHz : 2394.206
cache size  : 12288 KB
physical id : 0
siblings: 8
core id : 1
cpu cores   : 4
apicid  

Bug#665046: qemu-kvm: bad network performance using vhost and macvtap

2012-03-22 Thread Michael Tokarev
On 22.03.2012 15:06, Anthony Bourguignon wrote:
 Package: qemu-kvm
 Version: 1.0+dfsg-9
 Severity: important
 
 Hello,
 
 I encountered an issue with version 1.0+dfsg-9 of qemu-kvm (it worked fine 
 with 1.0+dfsg-8). Network performance is really bad when I'm using 
 vhost and macvtap (bridge mode). Here is the command line launched by libvirt:
 
 /usr/bin/kvm -S -M pc-1.0 -enable-kvm -m 2048 -mem-prealloc -mem-path 
 /mnt/hugepages/libvirt/qemu -smp 2,sockets=2,cores=1,threads=1 -name test 
 -uuid 77f086e4-fec5-dc75-792b-4cfea46642ff -nodefconfig -nodefaults -chardev 
 socket,id=charmonitor,path=/var/lib/libvirt/qemu/test.monitor,server,nowait 
 -mon chardev=charmonitor,id=monitor,mode=control -rtc base=utc -no-shutdown 
 -drive 
 file=/dev/vps/test,if=none,id=drive-virtio-disk0,format=raw,cache=none,aio=native
  -device 
 virtio-blk-pci,bus=pci.0,addr=0x4,drive=drive-virtio-disk0,id=virtio-disk0,bootindex=1
  -netdev tap,fd=21,id=hostnet0,vhost=on,vhostfd=22 -device 
 virtio-net-pci,netdev=hostnet0,id=net0,mac=52:54:00:9a:c9:16,bus=pci.0,addr=0x3,bootindex=2
  -chardev pty,id=charserial0 -device 
 isa-serial,chardev=charserial0,id=serial0 -usb -vnc 0.0.0.0:0 -k fr -vga std 
 -device virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x5
 
 The vhost_net module is loaded on the host. On the guest side, downloads 
 stall at only 15 kb/s. It works well if I use a bridge interface instead 
 of macvtap, so it's not a problem with the host. If I revert the package to 
 version 1.0+dfsg-8, the guest interface works at normal speed.
 
 Tell me if you need further investigation.

There were no network-related changes between -8 and -9.
Macvtap performance has always been awful from the ground up,
but this is due to the kernel, not qemu.  Reportedly it has
become a bit better with recent kernels but I haven't checked.
At any rate, using macvtap isn't a good idea at this stage.

/mjt






Bug#665046: qemu-kvm: bad network performance using vhost and macvtap

2012-03-22 Thread Michael Tokarev
severity 665046 wishlist
tags 665046 + unreproducible
thanks

On 22.03.2012 19:05, Michael Tokarev wrote:
[]
 There were no network-related changes between -8 and -9.
 Macvtap performance has always been awful from the ground up,
 but this is due to the kernel, not qemu.  Reportedly it has
 become a bit better with recent kernels but I haven't checked.
 At any rate, using macvtap isn't a good idea at this stage.

And of course I completely disagree with the severity of this
bug report.  The recommended tun device works just fine, so I'm
reducing the severity to wishlist.

There was one networking change between 1.0 and 1.0.1 (which
went into -9): a bounds check in the e1000 device, the fix for
CVE-2012-0029.  Still, it is not performance-critical.

I tested the macvtap device on -8 and -9 briefly (I had to tweak
my regular bridge setup quite a bit for that to work), and
I see exactly the same (bad) performance with both versions.

Note also that qemu uses _exactly_ the same interface for tun
and macvtap devices; it is the same code.  The macvtap kernel
module exports the same ABI as regular tun.  So it can't be
qemu-related because, as you confirm, tun works fine.

/mjt






Bug#665046: qemu-kvm: bad network performance using vhost and macvtap

2012-03-22 Thread Michael Tokarev
On 22.03.2012 19:58, Anthony Bourguignon wrote:
[]
 I tested the macvtap device on -8 and -9 briefly (I had to tweak
 my regular bridge setup quite a bit for that to work), and
 I see exactly the same (bad) performance with both versions.

 I've just tested again. I ran a virtual machine with -8 and fetched the file
 http://cdimage.debian.org/cdimage/weekly-builds/amd64/iso-dvd/debian-testing-amd64-DVD-9.iso
 with wget. It downloaded at about 49M/s, which is anything but bad 
 performance.
 Then I stopped the VM, upgraded qemu-kvm to -9, and ran the same VM
 again. The download now runs at 13K/s. I've checked: the file is
 downloaded from the same IP.
 
 I stand by my position: there is something wrong with this update.

Since I don't see this here, I can't test it.  Can you please try
regular qemu-kvm 1.0 and 1.0.1 and see if you can reproduce
the issue?  (Note: Debian -8 is based on 1.0, while -9 is based on 1.0.1.)

Again, I don't see any change that could lead to this behaviour.

/mjt






Bug#665046: qemu-kvm: bad network performance using vhost and macvtap

2012-03-22 Thread Anthony Bourguignon
On Thursday, 22 March 2012 at 19:12 +0400, Michael Tokarev wrote:
 severity 665046 wishlist
 tags 665046 + unreproducible
 thanks
 
 On 22.03.2012 19:05, Michael Tokarev wrote:
 []
  There were no network-related changes between -8 and -9.
  Macvtap performance has always been awful from the ground up,
  but this is due to the kernel, not qemu.  Reportedly it has
  become a bit better with recent kernels but I haven't checked.
  At any rate, using macvtap isn't a good idea at this stage.
 
 And of course I completely disagree with the severity of this
 bug report.  The recommended tun device works just fine, so I'm
 reducing the severity to wishlist.
 
 There was one networking change between 1.0 and 1.0.1 (which
 went into -9): a bounds check in the e1000 device, the fix for
 CVE-2012-0029.  Still, it is not performance-critical.
 
 I tested the macvtap device on -8 and -9 briefly (I had to tweak
 my regular bridge setup quite a bit for that to work), and
 I see exactly the same (bad) performance with both versions.
 
 Note also that qemu uses _exactly_ the same interface for tun
 and macvtap devices; it is the same code.  The macvtap kernel
 module exports the same ABI as regular tun.  So it can't be
 qemu-related because, as you confirm, tun works fine.

I've just tested again. I ran a virtual machine with -8 and fetched the file
http://cdimage.debian.org/cdimage/weekly-builds/amd64/iso-dvd/debian-testing-amd64-DVD-9.iso
with wget. It downloaded at about 49M/s, which is anything but bad 
performance.
Then I stopped the VM, upgraded qemu-kvm to -9, and ran the same VM
again. The download now runs at 13K/s. I've checked: the file is
downloaded from the same IP.

I stand by my position: there is something wrong with this update.

Here is the dumpxml of my VM:
<domain type='kvm'>
  <name>test</name>
  <uuid>3b02011f-085c-abb4-a9b0-91c80d1846af</uuid>
  <memory>2097152</memory>
  <currentMemory>2097152</currentMemory>
  <memoryBacking>
    <hugepages/>
  </memoryBacking>
  <vcpu>2</vcpu>
  <os>
    <type arch='x86_64' machine='pc-1.0'>hvm</type>
    <boot dev='hd'/>
    <boot dev='network'/>
  </os>
  <features>
    <acpi/>
    <apic/>
  </features>
  <clock offset='utc'/>
  <on_poweroff>destroy</on_poweroff>
  <on_reboot>restart</on_reboot>
  <on_crash>destroy</on_crash>
  <devices>
    <emulator>/usr/bin/kvm</emulator>
    <disk type='block' device='disk'>
      <driver name='qemu' type='raw' cache='none' io='native'/>
      <source dev='/dev/vps/test'/>
      <target dev='vda' bus='virtio'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
    </disk>
    <interface type='direct'>
      <mac address='52:54:00:3e:11:28'/>
      <source dev='vlan222' mode='bridge'/>
      <model type='virtio'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
    </interface>
    <serial type='pty'>
      <target port='0'/>
    </serial>
    <console type='pty'>
      <target type='serial' port='0'/>
    </console>
    <input type='mouse' bus='ps2'/>
    <graphics type='vnc' port='-1' autoport='yes' listen='0.0.0.0' keymap='fr'>
      <listen type='address' address='0.0.0.0'/>
    </graphics>
    <video>
      <model type='vga' vram='9216' heads='1'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
    </video>
    <memballoon model='virtio'>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
    </memballoon>
  </devices>
</domain>



