Re: [libvirt] Qemu/KVM is 3x slower under libvirt

2011-09-29 Thread Reeted

On 09/29/11 02:39, Chris Wright wrote:

Can you help narrow down what is happening during the additional 12
seconds in the guest?  For example, does a quick simple boot to single
user mode happen at the same boot speed w/ and w/out vhost_net?


Not tried (it would probably be too short to measure effectively), but I'd 
guess it would be the same as for multi-user; see also the FC6 sub-thread.



I'm guessing (hoping) that it's the network bring-up that is slow.
Are you using dhcp to get an IP address?  Does static IP have the same
slowdown?


It's all static IP.

And please see my previous post, 1 hour before yours, regarding Fedora 
Core 6: the bring-up of eth0 in Fedora Core 6 is not particularly faster 
or slower than the rest. This is an overall system slowdown (I'd say 
either CPU or disk I/O) not related to the network (apart from being 
triggered by vhost_net).



--
libvir-list mailing list
libvir-list@redhat.com
https://www.redhat.com/mailman/listinfo/libvir-list


Re: [libvirt] Qemu/KVM is 3x slower under libvirt

2011-09-29 Thread Chris Wright
* Reeted (ree...@shiftmail.org) wrote:
 On 09/28/11 11:28, Daniel P. Berrange wrote:
 On Wed, Sep 28, 2011 at 11:19:43AM +0200, Reeted wrote:
 On 09/28/11 09:51, Daniel P. Berrange wrote:
 You could have equivalently used
 
   -netdev tap,ifname=tap0,script=no,downscript=no,id=hostnet0,vhost=on
   -device 
  virtio-net-pci,netdev=hostnet0,id=net0,mac=52:54:00:05:36:60,bus=pci.0,addr=0x3
 It's this! It's this!! (thanks for the line)
 
 It raises boot time by 10-13 seconds
 Ok, that is truly bizarre and I don't really have any explanation
 for why that is. I guess you could try 'vhost=off' too and see if that
 makes the difference.
 
 YES!
 It's the vhost. With vhost=on it takes about 12 seconds more time to boot.

Can you help narrow down what is happening during the additional 12
seconds in the guest?  For example, does a quick simple boot to single
user mode happen at the same boot speed w/ and w/out vhost_net?

I'm guessing (hoping) that it's the network bring-up that is slow.
Are you using dhcp to get an IP address?  Does static IP have the same
slowdown?

If it's just dhcp, can you recompile qemu with this patch and see if it
causes the same slowdown you saw w/ vhost?

diff --git a/hw/virtio-net.c b/hw/virtio-net.c
index 0b03b57..0c864f7 100644
--- a/hw/virtio-net.c
+++ b/hw/virtio-net.c
@@ -496,7 +496,7 @@ static int receive_header(VirtIONet *n, struct iovec *iov, int iovcnt,
     if (n->has_vnet_hdr) {
         memcpy(hdr, buf, sizeof(*hdr));
         offset = sizeof(*hdr);
-        work_around_broken_dhclient(hdr, buf + offset, size - offset);
+        //work_around_broken_dhclient(hdr, buf + offset, size - offset);
     }
 
     /* We only ever receive a struct virtio_net_hdr from the tapfd,

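For reference, a minimal sketch of rebuilding with the hunk above applied (the patch filename and configure flags are assumptions, modeled on the /opt/qemu-kvm-0.14.1 build mentioned elsewhere in this thread):

  cd qemu-kvm-0.14.1
  patch -p1 < disable-dhclient-workaround.diff   # hypothetical file holding the hunk above
  ./configure --prefix=/opt/qemu-kvm-0.14.1 --target-list=x86_64-softmmu
  make && make install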
--
libvir-list mailing list
libvir-list@redhat.com
https://www.redhat.com/mailman/listinfo/libvir-list


Re: [libvirt] Qemu/KVM is 3x slower under libvirt

2011-09-29 Thread Chris Wright
* Reeted (ree...@shiftmail.org) wrote:
 On 09/29/11 02:39, Chris Wright wrote:
 Can you help narrow down what is happening during the additional 12
 seconds in the guest?  For example, does a quick simple boot to single
 user mode happen at the same boot speed w/ and w/out vhost_net?
 
 Not tried (it would probably be too short to measure effectively), but
 I'd guess it would be the same as for multi-user; see also the FC6
 sub-thread.
 
 I'm guessing (hoping) that it's the network bring-up that is slow.
 Are you using dhcp to get an IP address?  Does static IP have the same
 slowdown?
 
 It's all static IP.
 
 And please see my previous post, 1 hour before yours, regarding
 Fedora Core 6: the bring-up of eth0 in Fedora Core 6 is not
 particularly faster or slower than the rest. This is an overall
 system slowdown (I'd say either CPU or disk I/O) not related to the
 network (apart from being triggered by vhost_net).

OK, I re-read it (pretty sure FC6 had the old dhclient, which is why
I wondered).  That is odd.  No ideas are springing to mind.

thanks,
-chris

--
libvir-list mailing list
libvir-list@redhat.com
https://www.redhat.com/mailman/listinfo/libvir-list


Re: [libvirt] Qemu/KVM is 3x slower under libvirt

2011-09-28 Thread Daniel P. Berrange
On Tue, Sep 27, 2011 at 08:10:21PM +0200, Reeted wrote:
 I repost this, this time by also including the libvirt mailing list.
 
 Info on my libvirt: it's the version in Ubuntu 11.04 Natty, which is
 0.8.8-1ubuntu6.5. I didn't recompile this one, while the kernel and
 qemu-kvm are vanilla and compiled by hand as described below.
 
 My original message follows:
 
 This is really strange.
 
 I just installed a new host with kernel 3.0.3 and Qemu-KVM 0.14.1
 compiled by me.
 
 I have created the first VM.
 It is on LVM, virtio etc... If I run it directly from the bash
 console, it boots in 8 seconds (it's a bare Ubuntu with no
 graphics), while if I boot it under virsh (libvirt) it boots in
 20-22 seconds. This is the time from after Grub to the login prompt,
 or from after Grub until the ssh server is up.

 I was almost able to replicate the whole libvirt command line on the
 bash console, and it still goes almost 3x faster when launched from
 bash than with 'virsh start vmname'. The part I wasn't able to
 replicate is the -netdev part, because I still haven't understood its
 semantics.

-netdev is just an alternative way of setting up networking that
avoids QEMU's nasty VLAN concept. Using -netdev allows QEMU to
use more efficient codepaths for networking, which should improve
the network performance.
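
For reference, here are the two forms as they appear elsewhere in this thread; -netdev pairs an explicit backend (id=hostnet0) with a -device frontend:

  # legacy 'vlan' style: NIC and tap backend joined implicitly
  -net nic,model=virtio -net tap,ifname=tap0,script=no,downscript=no

  # -netdev style: backend wired explicitly to a virtio-net-pci device
  -netdev tap,ifname=tap0,script=no,downscript=no,id=hostnet0
  -device virtio-net-pci,netdev=hostnet0,id=net0,mac=52:54:00:05:36:60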

 This is my bash commandline:
 
 /opt/qemu-kvm-0.14.1/bin/qemu-system-x86_64 -M pc-0.14 -enable-kvm
 -m 2002 -smp 2,sockets=2,cores=1,threads=1 -name vmname1-1 -uuid
 ee75e28a-3bf3-78d9-3cba-65aa63973380 -nodefconfig -nodefaults
 -chardev 
 socket,id=charmonitor,path=/var/lib/libvirt/qemu/vmname1-1.monitor,server,nowait
 -mon chardev=charmonitor,id=monitor,mode=readline -rtc base=utc
 -boot order=dc,menu=on -drive 
 file=/dev/mapper/vgPtpVM-lvVM_Vmname1_d1,if=none,id=drive-virtio-disk0,boot=on,format=raw,cache=none,aio=native
 -device 
 virtio-blk-pci,bus=pci.0,addr=0x4,drive=drive-virtio-disk0,id=virtio-disk0
 -drive 
 if=none,media=cdrom,id=drive-ide0-1-0,readonly=on,format=raw,cache=none,aio=native
 -device ide-drive,bus=ide.1,unit=0,drive=drive-ide0-1-0,id=ide0-1-0
 -net nic,model=virtio -net tap,ifname=tap0,script=no,downscript=no
 -usb -vnc 127.0.0.1:0 -vga cirrus -device
 virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x5


This shows KVM is being requested, but we should validate that KVM is
definitely being activated when under libvirt. You can test this by
doing:

virsh qemu-monitor-command vmname1 'info kvm'

 Which was taken from libvirt's command line. The only modifications
 I did to the original libvirt commandline (seen with ps aux) were:
 
 - Removed -S

Fine, has no effect on performance.

 - Network was: -netdev tap,fd=17,id=hostnet0,vhost=on,vhostfd=18
 -device 
 virtio-net-pci,netdev=hostnet0,id=net0,mac=52:54:00:05:36:60,bus=pci.0,addr=0x3
 Has been simplified to: -net nic,model=virtio -net
 tap,ifname=tap0,script=no,downscript=no
 and manual bridging of the tap0 interface.

You could have equivalently used

 -netdev tap,ifname=tap0,script=no,downscript=no,id=hostnet0,vhost=on
 -device 
virtio-net-pci,netdev=hostnet0,id=net0,mac=52:54:00:05:36:60,bus=pci.0,addr=0x3

That said, I don't expect this has anything to do with the performance,
since booting a guest rarely involves much network I/O unless you're
doing something odd like NFS-root / iSCSI-root.

 Firstly I had thought that this could be the fault of VNC: I have
 compiled qemu-kvm with no separate vnc thread. I thought that
 libvirt might have connected to the vnc server at all times, and this
 could have slowed down the whole VM.
 But then I also tried connecting with vncviewer to the KVM machine
 launched directly from bash, and its speed didn't change. So
 no, it doesn't seem to be that.

Yeah, I have never seen VNC be responsible for the kind of slowdown
you describe.

 BTW: is the slowdown of a VM with no separate vnc thread only in
 effect when somebody is actually connected to VNC, or always?

Probably, but again I don't think it is likely to be relevant here.

 Also, note that the time difference is not visible in dmesg once the
 machine has booted, so it's not a slowdown in detecting devices.
 Devices are always detected within the first 3 seconds according to
 dmesg; at 3.6 seconds the first ext4 mount begins. It seems to be
 really the OS boot that is slow... it seems like a hard disk
 performance problem.


There are a couple of things that would be different between running the
VM directly vs. via libvirt:

 - Security drivers - SELinux/AppArmor
 - CGroups

If it was AppArmor causing this slowdown I don't think you would have
been the first person to complain, so let's ignore that. Which leaves
cgroups as a likely culprit. Do a

  grep cgroup /proc/mounts

if any of them are mounted, then for each cgroup mount in turn (a sketch
follows below):

 - Umount the cgroup
 - Restart libvirtd
 - Test your guest boot performance
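
As a sketch (the mount point is an example; on Ubuntu 11.04 the libvirt
service is typically called libvirt-bin):

  grep cgroup /proc/mounts       # list mounted cgroup controllers
  umount /sys/fs/cgroup/cpu      # example mount point taken from the grep above
  service libvirt-bin restart    # restart libvirtd
  # then time the guest boot (grub to login prompt) again, as before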


Regards,
Daniel
-- 
|: http://berrange.com  -o-    http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org              -o-             http://virt-manager.org :|
|: http://autobuild.org       -o-         http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org       -o-       http://live.gnome.org/gtk-vnc :|

Re: [libvirt] Qemu/KVM is 3x slower under libvirt

2011-09-28 Thread Reeted

On 09/28/11 09:51, Daniel P. Berrange wrote:

On Tue, Sep 27, 2011 at 08:10:21PM +0200, Reeted wrote:

I repost this, this time by also including the libvirt mailing list.

Info on my libvirt: it's the version in Ubuntu 11.04 Natty, which is
0.8.8-1ubuntu6.5. I didn't recompile this one, while the kernel and
qemu-kvm are vanilla and compiled by hand as described below.

My original message follows:

This is really strange.

I just installed a new host with kernel 3.0.3 and Qemu-KVM 0.14.1
compiled by me.

I have created the first VM.
It is on LVM, virtio etc... If I run it directly from the bash
console, it boots in 8 seconds (it's a bare Ubuntu with no
graphics), while if I boot it under virsh (libvirt) it boots in
20-22 seconds. This is the time from after Grub to the login prompt,
or from after Grub until the ssh server is up.

I was almost able to replicate the whole libvirt command line on the
bash console, and it still goes almost 3x faster when launched from
bash than with 'virsh start vmname'. The part I wasn't able to
replicate is the -netdev part, because I still haven't understood its
semantics.

-netdev is just an alternative way of setting up networking that
avoids QEMU's nasty VLAN concept. Using -netdev allows QEMU to
use more efficient codepaths for networking, which should improve
the network performance.


This is my bash commandline:

/opt/qemu-kvm-0.14.1/bin/qemu-system-x86_64 -M pc-0.14 -enable-kvm
-m 2002 -smp 2,sockets=2,cores=1,threads=1 -name vmname1-1 -uuid
ee75e28a-3bf3-78d9-3cba-65aa63973380 -nodefconfig -nodefaults
-chardev 
socket,id=charmonitor,path=/var/lib/libvirt/qemu/vmname1-1.monitor,server,nowait
-mon chardev=charmonitor,id=monitor,mode=readline -rtc base=utc
-boot order=dc,menu=on -drive 
file=/dev/mapper/vgPtpVM-lvVM_Vmname1_d1,if=none,id=drive-virtio-disk0,boot=on,format=raw,cache=none,aio=native
-device 
virtio-blk-pci,bus=pci.0,addr=0x4,drive=drive-virtio-disk0,id=virtio-disk0
-drive 
if=none,media=cdrom,id=drive-ide0-1-0,readonly=on,format=raw,cache=none,aio=native
-device ide-drive,bus=ide.1,unit=0,drive=drive-ide0-1-0,id=ide0-1-0
-net nic,model=virtio -net tap,ifname=tap0,script=no,downscript=no
-usb -vnc 127.0.0.1:0 -vga cirrus -device
virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x5


This shows KVM is being requested, but we should validate that KVM is
definitely being activated when under libvirt. You can test this by
doing:

 virsh qemu-monitor-command vmname1 'info kvm'


kvm support: enabled

I think I would see a higher impact if KVM were not enabled.


Which was taken from libvirt's command line. The only modifications
I did to the original libvirt commandline (seen with ps aux) were:

- Removed -S

Fine, has no effect on performance.


- Network was: -netdev tap,fd=17,id=hostnet0,vhost=on,vhostfd=18
-device 
virtio-net-pci,netdev=hostnet0,id=net0,mac=52:54:00:05:36:60,bus=pci.0,addr=0x3
Has been simplified to: -net nic,model=virtio -net
tap,ifname=tap0,script=no,downscript=no
and manual bridging of the tap0 interface.

You could have equivalently used

  -netdev tap,ifname=tap0,script=no,downscript=no,id=hostnet0,vhost=on
  -device 
virtio-net-pci,netdev=hostnet0,id=net0,mac=52:54:00:05:36:60,bus=pci.0,addr=0x3


It's this! It's this!! (thanks for the line)

It raises boot time by 10-13 seconds

But now I don't know where to look. During boot there is a pause, 
usually between /scripts/init-bottom (Ubuntu 11.04 guest) and the 
appearance of the login prompt; however, that is not really meaningful 
because there is probably much background activity going on there, with 
init etc., which doesn't display messages.



init-bottom does just this

-
#!/bin/sh -e
# initramfs init-bottom script for udev

PREREQ=""

# Output pre-requisites
prereqs()
{
echo "$PREREQ"
}

case "$1" in
prereqs)
prereqs
exit 0
;;
esac


# Stop udevd, we'll miss a few events while we run init, but we catch up
pkill udevd

# Move /dev to the real filesystem
mount -n -o move /dev ${rootmnt}/dev
-

It doesn't look like it should take time to execute.
So there is probably some other background activity going on... and that 
is slower, but I don't know what that is.



Another thing that can be noticed is that the dmesg message:

[   13.290173] eth0: no IPv6 routers present

(which is also the last message)

happens on average 1 (one) second earlier in the fast case (-net) than 
in the slow case (-netdev)
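
For comparison purposes, that timestamp can be read straight from the guest's kernel log after each boot, e.g.:

  # inside the guest; the bracketed number is seconds since kernel start
  dmesg | grep 'no IPv6 routers present'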




That said, I don't expect this has anything to do with the performance,
since booting a guest rarely involves much network I/O unless you're
doing something odd like NFS-root / iSCSI-root.


No, there is nothing like that. No network disks or NFS.

I had ntpdate, but I removed that and it changed nothing.



 Firstly I had thought that this could be the fault of VNC: I have
 compiled qemu-kvm with no separate vnc thread. I thought that
 libvirt might have connected to the vnc server at all times, and this
 could have slowed down the whole VM.

Re: [libvirt] Qemu/KVM is 3x slower under libvirt

2011-09-28 Thread Daniel P. Berrange
On Wed, Sep 28, 2011 at 11:19:43AM +0200, Reeted wrote:
 On 09/28/11 09:51, Daniel P. Berrange wrote:
 This is my bash commandline:
 
 /opt/qemu-kvm-0.14.1/bin/qemu-system-x86_64 -M pc-0.14 -enable-kvm
 -m 2002 -smp 2,sockets=2,cores=1,threads=1 -name vmname1-1 -uuid
 ee75e28a-3bf3-78d9-3cba-65aa63973380 -nodefconfig -nodefaults
 -chardev 
 socket,id=charmonitor,path=/var/lib/libvirt/qemu/vmname1-1.monitor,server,nowait
 -mon chardev=charmonitor,id=monitor,mode=readline -rtc base=utc
 -boot order=dc,menu=on -drive 
 file=/dev/mapper/vgPtpVM-lvVM_Vmname1_d1,if=none,id=drive-virtio-disk0,boot=on,format=raw,cache=none,aio=native
 -device 
 virtio-blk-pci,bus=pci.0,addr=0x4,drive=drive-virtio-disk0,id=virtio-disk0
 -drive 
 if=none,media=cdrom,id=drive-ide0-1-0,readonly=on,format=raw,cache=none,aio=native
 -device ide-drive,bus=ide.1,unit=0,drive=drive-ide0-1-0,id=ide0-1-0
 -net nic,model=virtio -net tap,ifname=tap0,script=no,downscript=no
 -usb -vnc 127.0.0.1:0 -vga cirrus -device
 virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x5
 
 This shows KVM is being requested, but we should validate that KVM is
 definitely being activated when under libvirt. You can test this by
 doing:
 
  virsh qemu-monitor-command vmname1 'info kvm'
 
 kvm support: enabled
 
 I think I would see a higher impact if KVM were not enabled.
 
 Which was taken from libvirt's command line. The only modifications
 I did to the original libvirt commandline (seen with ps aux) were:


 - Network was: -netdev tap,fd=17,id=hostnet0,vhost=on,vhostfd=18
 -device 
 virtio-net-pci,netdev=hostnet0,id=net0,mac=52:54:00:05:36:60,bus=pci.0,addr=0x3
 Has been simplified to: -net nic,model=virtio -net
 tap,ifname=tap0,script=no,downscript=no
 and manual bridging of the tap0 interface.
 You could have equivalently used
 
   -netdev tap,ifname=tap0,script=no,downscript=no,id=hostnet0,vhost=on
   -device 
  virtio-net-pci,netdev=hostnet0,id=net0,mac=52:54:00:05:36:60,bus=pci.0,addr=0x3
 
 It's this! It's this!! (thanks for the line)
 
 It raises boot time by 10-13 seconds

Ok, that is truly bizarre and I don't really have any explanation
for why that is. I guess you could try 'vhost=off' too and see if that
makes the difference.

 
 But now I don't know where to look. During boot there is a pause,
 usually between /scripts/init-bottom (Ubuntu 11.04 guest) and the
 appearance of the login prompt; however, that is not really meaningful
 because there is probably much background activity going on there,
 with init etc., which doesn't display messages.
 
 
 init-bottom does just this
 
 -
 #!/bin/sh -e
 # initramfs init-bottom script for udev
 
 PREREQ=""
 
 # Output pre-requisites
 prereqs()
 {
 echo "$PREREQ"
 }
 
 case "$1" in
 prereqs)
 prereqs
 exit 0
 ;;
 esac
 
 
 # Stop udevd, we'll miss a few events while we run init, but we catch up
 pkill udevd
 
 # Move /dev to the real filesystem
 mount -n -o move /dev ${rootmnt}/dev
 -
 
 It doesn't look like it should take time to execute.
 So there is probably some other background activity going on... and
 that is slower, but I don't know what that is.
 
 
 Another thing that can be noticed is that the dmesg message:
 
 [   13.290173] eth0: no IPv6 routers present
 
 (which is also the last message)
 
 happens on average 1 (one) second earlier in the fast case (-net)
 than in the slow case (-netdev)

Hmm, none of that looks particularly suspect. So I don't really have
much idea what else to try apart from the 'vhost=off' possibility.


Daniel
-- 
|: http://berrange.com  -o-    http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|

--
libvir-list mailing list
libvir-list@redhat.com
https://www.redhat.com/mailman/listinfo/libvir-list


Re: [libvirt] Qemu/KVM is 3x slower under libvirt

2011-09-28 Thread Reeted

On 09/28/11 11:28, Daniel P. Berrange wrote:

On Wed, Sep 28, 2011 at 11:19:43AM +0200, Reeted wrote:

On 09/28/11 09:51, Daniel P. Berrange wrote:

This is my bash commandline:

/opt/qemu-kvm-0.14.1/bin/qemu-system-x86_64 -M pc-0.14 -enable-kvm
-m 2002 -smp 2,sockets=2,cores=1,threads=1 -name vmname1-1 -uuid
ee75e28a-3bf3-78d9-3cba-65aa63973380 -nodefconfig -nodefaults
-chardev 
socket,id=charmonitor,path=/var/lib/libvirt/qemu/vmname1-1.monitor,server,nowait
-mon chardev=charmonitor,id=monitor,mode=readline -rtc base=utc
-boot order=dc,menu=on -drive 
file=/dev/mapper/vgPtpVM-lvVM_Vmname1_d1,if=none,id=drive-virtio-disk0,boot=on,format=raw,cache=none,aio=native
-device 
virtio-blk-pci,bus=pci.0,addr=0x4,drive=drive-virtio-disk0,id=virtio-disk0
-drive 
if=none,media=cdrom,id=drive-ide0-1-0,readonly=on,format=raw,cache=none,aio=native
-device ide-drive,bus=ide.1,unit=0,drive=drive-ide0-1-0,id=ide0-1-0
-net nic,model=virtio -net tap,ifname=tap0,script=no,downscript=no
-usb -vnc 127.0.0.1:0 -vga cirrus -device
virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x5

This shows KVM is being requested, but we should validate that KVM is
definitely being activated when under libvirt. You can test this by
doing:

 virsh qemu-monitor-command vmname1 'info kvm'

kvm support: enabled

I think I would see a higher impact if KVM were not enabled.


Which was taken from libvirt's command line. The only modifications
I did to the original libvirt commandline (seen with ps aux) were:



- Network was: -netdev tap,fd=17,id=hostnet0,vhost=on,vhostfd=18
-device 
virtio-net-pci,netdev=hostnet0,id=net0,mac=52:54:00:05:36:60,bus=pci.0,addr=0x3
Has been simplified to: -net nic,model=virtio -net
tap,ifname=tap0,script=no,downscript=no
and manual bridging of the tap0 interface.

You could have equivalently used

  -netdev tap,ifname=tap0,script=no,downscript=no,id=hostnet0,vhost=on
  -device 
virtio-net-pci,netdev=hostnet0,id=net0,mac=52:54:00:05:36:60,bus=pci.0,addr=0x3

It's this! It's this!! (thanks for the line)

It raises boot time by 10-13 seconds

Ok, that is truly bizarre and I don't really have any explanation
for why that is. I guess you could try 'vhost=off' too and see if that
makes the difference.


YES!
It's the vhost. With vhost=on it takes about 12 seconds more time to boot.

...meaning? :-)

--
libvir-list mailing list
libvir-list@redhat.com
https://www.redhat.com/mailman/listinfo/libvir-list


Re: [libvirt] Qemu/KVM is 3x slower under libvirt (due to vhost=on)

2011-09-28 Thread Daniel P. Berrange
On Wed, Sep 28, 2011 at 11:49:01AM +0200, Reeted wrote:
 On 09/28/11 11:28, Daniel P. Berrange wrote:
 On Wed, Sep 28, 2011 at 11:19:43AM +0200, Reeted wrote:
 On 09/28/11 09:51, Daniel P. Berrange wrote:
 This is my bash commandline:
 
 /opt/qemu-kvm-0.14.1/bin/qemu-system-x86_64 -M pc-0.14 -enable-kvm
 -m 2002 -smp 2,sockets=2,cores=1,threads=1 -name vmname1-1 -uuid
 ee75e28a-3bf3-78d9-3cba-65aa63973380 -nodefconfig -nodefaults
 -chardev 
 socket,id=charmonitor,path=/var/lib/libvirt/qemu/vmname1-1.monitor,server,nowait
 -mon chardev=charmonitor,id=monitor,mode=readline -rtc base=utc
 -boot order=dc,menu=on -drive 
 file=/dev/mapper/vgPtpVM-lvVM_Vmname1_d1,if=none,id=drive-virtio-disk0,boot=on,format=raw,cache=none,aio=native
 -device 
 virtio-blk-pci,bus=pci.0,addr=0x4,drive=drive-virtio-disk0,id=virtio-disk0
 -drive 
 if=none,media=cdrom,id=drive-ide0-1-0,readonly=on,format=raw,cache=none,aio=native
 -device ide-drive,bus=ide.1,unit=0,drive=drive-ide0-1-0,id=ide0-1-0
 -net nic,model=virtio -net tap,ifname=tap0,script=no,downscript=no
 -usb -vnc 127.0.0.1:0 -vga cirrus -device
 virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x5
 This shows KVM is being requested, but we should validate that KVM is
 definitely being activated when under libvirt. You can test this by
 doing:
 
  virsh qemu-monitor-command vmname1 'info kvm'
 kvm support: enabled
 
 I think I would see a higher impact if KVM were not enabled.
 
 Which was taken from libvirt's command line. The only modifications
 I did to the original libvirt commandline (seen with ps aux) were:
 
 - Network was: -netdev tap,fd=17,id=hostnet0,vhost=on,vhostfd=18
 -device 
 virtio-net-pci,netdev=hostnet0,id=net0,mac=52:54:00:05:36:60,bus=pci.0,addr=0x3
 Has been simplified to: -net nic,model=virtio -net
 tap,ifname=tap0,script=no,downscript=no
 and manual bridging of the tap0 interface.
 You could have equivalently used
 
   -netdev tap,ifname=tap0,script=no,downscript=no,id=hostnet0,vhost=on
   -device 
  virtio-net-pci,netdev=hostnet0,id=net0,mac=52:54:00:05:36:60,bus=pci.0,addr=0x3
 It's this! It's this!! (thanks for the line)
 
 It raises boot time by 10-13 seconds
 Ok, that is truly bizarre and I don't really have any explanation
 for why that is. I guess you could try 'vhost=off' too and see if that
 makes the difference.
 
 YES!
 It's the vhost. With vhost=on it takes about 12 seconds more time to boot.
 
 ...meaning? :-)

I've no idea. I was always under the impression that 'vhost=on' was
the 'make it go much faster' switch. So something is going wrong
here that I can't explain.

Perhaps one of the network people on this list can explain...


To turn vhost off in the libvirt XML, you should be able to use
<driver name='qemu'/> for the interface in question, eg


<interface type='user'>
  <mac address='52:54:00:e5:48:58'/>
  <model type='virtio'/>
  <driver name='qemu'/>
</interface>
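
After restarting the guest, one quick sanity check is that vhost no longer appears on the QEMU command line (a sketch; the brackets just keep grep from matching its own process):

  ps aux | grep '[v]host=on'   # no output means vhost is off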

Regards,
Daniel
-- 
|: http://berrange.com  -o-    http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|

--
libvir-list mailing list
libvir-list@redhat.com
https://www.redhat.com/mailman/listinfo/libvir-list


Re: [libvirt] Qemu/KVM is 3x slower under libvirt (due to vhost=on)

2011-09-28 Thread Reeted

On 09/28/11 11:53, Daniel P. Berrange wrote:

On Wed, Sep 28, 2011 at 11:49:01AM +0200, Reeted wrote:

YES!
It's the vhost. With vhost=on it takes about 12 seconds more time to boot.

...meaning? :-)

I've no idea. I was always under the impression that 'vhost=on' was
the 'make it go much faster' switch. So something is going wrong
here that I can't explain.

Perhaps one of the network people on this list can explain...


To turn vhost off in the libvirt XML, you should be able to use
<driver name='qemu'/>  for the interface in question, eg


 <interface type='user'>
   <mac address='52:54:00:e5:48:58'/>
   <model type='virtio'/>
   <driver name='qemu'/>
 </interface>



Ok, that seems to work: it removes the vhost part from the virsh launch, 
hence cutting 12 seconds off the boot time.


If nobody comes up with an explanation why, I will open another 
thread on the kvm list for this. I would probably need to test disk 
performance with vhost=on to see whether it degrades, or whether the 
boot time increase has another cause.


Thanks so much for your help Daniel,
Reeted

--
libvir-list mailing list
libvir-list@redhat.com
https://www.redhat.com/mailman/listinfo/libvir-list


Re: [libvirt] Qemu/KVM is 3x slower under libvirt (due to vhost=on)

2011-09-28 Thread Daniel P. Berrange
On Wed, Sep 28, 2011 at 12:19:09PM +0200, Reeted wrote:
 On 09/28/11 11:53, Daniel P. Berrange wrote:
 On Wed, Sep 28, 2011 at 11:49:01AM +0200, Reeted wrote:
 YES!
 It's the vhost. With vhost=on it takes about 12 seconds more time to boot.
 
 ...meaning? :-)
 I've no idea. I was always under the impression that 'vhost=on' was
 the 'make it go much faster' switch. So something is going wrong
 here that I can't explain.
 
 Perhaps one of the network people on this list can explain...
 
 
 To turn vhost off in the libvirt XML, you should be able to use
 <driver name='qemu'/>  for the interface in question, eg
 
 
  <interface type='user'>
    <mac address='52:54:00:e5:48:58'/>
    <model type='virtio'/>
    <driver name='qemu'/>
  </interface>
 
 
 Ok, that seems to work: it removes the vhost part from the virsh launch,
 hence cutting 12 seconds off the boot time.
 
 If nobody comes up with an explanation why, I will open another
 thread on the kvm list for this. I would probably need to test disk
 performance with vhost=on to see whether it degrades, or whether the
 boot time increase has another cause.

Be sure to CC the qemu-devel mailing list too next time, since that has
a wider audience who might be able to help.


Daniel
-- 
|: http://berrange.com  -o-    http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|

--
libvir-list mailing list
libvir-list@redhat.com
https://www.redhat.com/mailman/listinfo/libvir-list


Re: [libvirt] Qemu/KVM is 3x slower under libvirt (due to vhost=on)

2011-09-28 Thread Richard W.M. Jones
On Wed, Sep 28, 2011 at 12:19:09PM +0200, Reeted wrote:
 On 09/28/11 11:53, Daniel P. Berrange wrote:
 On Wed, Sep 28, 2011 at 11:49:01AM +0200, Reeted wrote:
 YES!
 It's the vhost. With vhost=on it takes about 12 seconds more time to boot.
 
 ...meaning? :-)
 I've no idea. I was always under the impression that 'vhost=on' was
 the 'make it go much faster' switch. So something is going wrong
 here that I can't explain.
 
 Perhaps one of the network people on this list can explain...
 
 
 To turn vhost off in the libvirt XML, you should be able to use
 <driver name='qemu'/>  for the interface in question, eg
 
 
  <interface type='user'>
    <mac address='52:54:00:e5:48:58'/>
    <model type='virtio'/>
    <driver name='qemu'/>
  </interface>
 
 
 Ok, that seems to work: it removes the vhost part from the virsh launch,
 hence cutting 12 seconds off the boot time.
 
 If nobody comes up with an explanation why, I will open another
 thread on the kvm list for this. I would probably need to test disk
 performance with vhost=on to see whether it degrades, or whether the
 boot time increase has another cause.

Is it using CPU during this time, or is the qemu-kvm process idle?

It wouldn't be the first time that a network option ROM sat around
waiting for an imaginary console user to press a key.
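
A quick way to check, as a sketch (this assumes a single qemu-kvm guest running on the host):

  top -H -p "$(pidof qemu-kvm)"   # -H lists per-thread CPU usage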

Rich.

-- 
Richard Jones, Virtualization Group, Red Hat http://people.redhat.com/~rjones
libguestfs lets you edit virtual machines.  Supports shell scripting,
bindings from many languages.  http://libguestfs.org

--
libvir-list mailing list
libvir-list@redhat.com
https://www.redhat.com/mailman/listinfo/libvir-list


Re: [libvirt] Qemu/KVM is 3x slower under libvirt (due to vhost=on)

2011-09-28 Thread Reeted

On 09/28/11 14:56, Richard W.M. Jones wrote:

On Wed, Sep 28, 2011 at 12:19:09PM +0200, Reeted wrote:

Ok, that seems to work: it removes the vhost part from the virsh launch,
hence cutting 12 seconds off the boot time.

If nobody comes up with an explanation why, I will open another
thread on the kvm list for this. I would probably need to test disk
performance with vhost=on to see whether it degrades, or whether the
boot time increase has another cause.

Is it using CPU during this time, or is the qemu-kvm process idle?

It wouldn't be the first time that a network option ROM sat around
waiting for an imaginary console user to press a key.

Rich.


Of the two qemu-kvm processes (threads?) which I see consuming CPU for 
that VM, one is at about 20%, the other at about 10%. I think it's doing 
something, but maybe not much, or maybe it's really I/O bound and the I/O 
is slow (as I originally thought). I will perform some disk benchmarks 
and follow up, but I can't do that right now...

Thank you

--
libvir-list mailing list
libvir-list@redhat.com
https://www.redhat.com/mailman/listinfo/libvir-list


Re: [libvirt] Qemu/KVM is 3x slower under libvirt (due to vhost=on)

2011-09-28 Thread Reeted

On 09/28/11 16:51, Reeted wrote:

On 09/28/11 14:56, Richard W.M. Jones wrote:

On Wed, Sep 28, 2011 at 12:19:09PM +0200, Reeted wrote:

Ok, that seems to work: it removes the vhost part from the virsh launch,
hence cutting 12 seconds off the boot time.

If nobody comes up with an explanation why, I will open another
thread on the kvm list for this. I would probably need to test disk
performance with vhost=on to see whether it degrades, or whether the
boot time increase has another cause.

Is it using CPU during this time, or is the qemu-kvm process idle?

It wouldn't be the first time that a network option ROM sat around
waiting for an imaginary console user to press a key.

Rich.


Of the two qemu-kvm processes (threads?) which I see consuming CPU for 
that VM, one is at about 20%, the other at about 10%. I think it's 
doing something, but maybe not much, or maybe it's really I/O bound and 
the I/O is slow (as I originally thought). I will perform some disk 
benchmarks and follow up, but I can't do that right now...

Thank you


Ok, I still haven't done the benchmarks, but I am now fairly convinced 
that it's either a disk performance problem or a CPU problem with 
vhost_net on, not a network performance problem or an idle wait.


This is because I have now installed another virtual machine, a Fedora 
Core 6 (old!), but with an Ubuntu Natty kernel vmlinuz + initrd so that 
it supports virtio devices. The initrd part from Ubuntu is extremely 
short so it finishes immediately, but then the Fedora Core 6 boot is 
much longer than with my previous Ubuntu barebone virtual machine, and 
with more messages; I can see the various daemons being brought up one 
by one, and I can tell you such a boot (and also the teardown of 
services during shutdown) is very much faster with vhost_net disabled.


With vhost_net disabled it takes 30 seconds to come up (from after grub), 
and 28 seconds to shut down.
With vhost_net enabled it takes 1m19s to come up (from after grub), 
and 1m04s to shut down.



I have some ideas for disk benchmarking: that would be fio or simple 
dd. What could I use for CPU benchmarking? Would openssl speed be too 
simple?
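
For what it's worth, minimal versions of those tests might look like the following (a sketch; /dev/vda is an assumption for the guest's virtio disk, and the dd is read-only):

  # crude sequential-read test inside the guest, bypassing the page cache
  dd if=/dev/vda of=/dev/null bs=1M count=1024 iflag=direct

  # crude CPU benchmark
  openssl speed aes-128-cbc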


Thank you

--
libvir-list mailing list
libvir-list@redhat.com
https://www.redhat.com/mailman/listinfo/libvir-list