[libvirt] Group for accessing one/all VM graphics and not virsh

2011-12-05 Thread Reeted

Hello libvirt people,

is there a (preferably simple) way in Linux to allow a certain set of 
users to be able to do:


virt-viewer --connect qemu+ssh://username@virthost/system vmname

so that they can connect with virt-viewer, BUT without letting them do all 
the other things that can be done with virsh?


I know that if I add them to the libvirtd and kvm groups, they will be 
able to connect with virt-viewer to any virtual machine AND ALSO run any 
virsh command on any virtual machine. That is too much permission.
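
For context, that full access presumably comes from the libvirt unix socket 
being made group read/writable, configured roughly like this in 
/etc/libvirt/libvirtd.conf (a sketch; the group name and permissions vary 
by distro):

unix_sock_group = "libvirtd"
unix_sock_rw_perms = "0770"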


I can accept the first part (a way to allow a group of users to connect 
with virt-viewer to all the virtual machines of the host), since further 
restrictions can be enforced with VNC passwords... But if they are also 
able to do anything in virsh, that's too much.
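
For reference, a per-VM VNC password can be set in the domain graphics 
element roughly as follows (a sketch; the listen address and password 
value are placeholders):

<graphics type='vnc' port='-1' autoport='yes' listen='0.0.0.0' passwd='examplepassword'/>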


I am using only qemu and kvm in libvirt.

Thank you
R.



[libvirt] Virtual serial logging server?

2011-11-06 Thread Reeted

Dear all,
please excuse the almost-OT question,

I see various possibilities in qemu-kvm and libvirt for sending virtual 
serial port data to files, sockets, pipes, etc. on the host.

In particular, the TCP socket seems interesting.
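
The TCP variant I am referring to is the serial device configured in the 
domain XML roughly like this (a sketch; host and port are placeholders, 
and the source mode can be 'bind' or 'connect'):

<serial type='tcp'>
  <source mode='connect' host='192.168.0.10' service='4555'/>
  <protocol type='raw'/>
  <target port='0'/>
</serial>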

Can you suggest a server application to receive all such TCP connections 
and log serial data for many virtual machines at once?


In particular I would be interested in something with quotas, i.e. 
something that deletes old lines from the logs of a given VM when the 
filesystem space occupied by that VM's serial logs exceeds a certain 
amount, so that the log space for the other VMs is not starved in case 
one of them loops.
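
If no ready-made server exists, perhaps the size cap could be bolted on 
with logrotate over whatever per-VM files the receiving side writes, 
roughly like this (a sketch; the path and limits are hypothetical):

/var/log/vmserial/vmname1.log {
    size 50M
    rotate 4
    compress
    missingok
    copytruncate
}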


Thank you
R.



Re: [libvirt] Qemu/KVM guest boots 2x slower with vhost_net

2011-10-09 Thread Reeted

On 10/05/11 01:12, Reeted wrote:

I found that virtual machines in my host booted 2x slower ... to the 
vhost_net presence

...


Just a small update,

Firstly: I cannot reproduce any slowness after boot by doing:

# time /etc/init.d/chrony restart
Restarting time daemon: Starting /usr/sbin/chronyd...
chronyd is running and online.
real    0m3.022s
user    0m0.000s
sys     0m0.000s

Since this is a network service I expected it to show the problem, but it 
doesn't: it takes exactly the same time with and without vhost_net.



Secondly, vhost_net appears to work correctly, because I have performed an 
NPtcp performance test between two guests on the same host, and these are 
the results:


vhost_net deactivated for both guests:

NPtcp -h 192.168.7.81
Send and receive buffers are 16384 and 87380 bytes
(A bug in Linux doubles the requested buffer sizes)
Now starting the main loop
  0:       1 bytes    917 times -->      0.08 Mbps in      92.07 usec
  1:       2 bytes   1086 times -->      0.18 Mbps in      86.04 usec
  2:       3 bytes   1162 times -->      0.27 Mbps in      85.08 usec
  3:       4 bytes    783 times -->      0.36 Mbps in      85.34 usec
  4:       6 bytes    878 times -->      0.54 Mbps in      85.42 usec
  5:       8 bytes    585 times -->      0.72 Mbps in      85.31 usec
  6:      12 bytes    732 times -->      1.07 Mbps in      85.52 usec
  7:      13 bytes    487 times -->      1.16 Mbps in      85.52 usec
  8:      16 bytes    539 times -->      1.43 Mbps in      85.26 usec
  9:      19 bytes    659 times -->      1.70 Mbps in      85.43 usec
 10:      21 bytes    739 times -->      1.77 Mbps in      90.71 usec
 11:      24 bytes    734 times -->      2.13 Mbps in      86.13 usec
 12:      27 bytes    822 times -->      2.22 Mbps in      92.80 usec
 13:      29 bytes    478 times -->      2.35 Mbps in      94.02 usec
 14:      32 bytes    513 times -->      2.60 Mbps in      93.75 usec
 15:      35 bytes    566 times -->      3.15 Mbps in      84.77 usec
 16:      45 bytes    674 times -->      4.01 Mbps in      85.56 usec
 17:      48 bytes    779 times -->      4.32 Mbps in      84.70 usec
 18:      51 bytes    811 times -->      4.61 Mbps in      84.32 usec
 19:      61 bytes    465 times -->      5.08 Mbps in      91.57 usec
 20:      64 bytes    537 times -->      5.22 Mbps in      93.46 usec
 21:      67 bytes    551 times -->      5.73 Mbps in      89.20 usec
 22:      93 bytes    602 times -->      8.28 Mbps in      85.73 usec
 23:      96 bytes    777 times -->      8.45 Mbps in      86.70 usec
 24:      99 bytes    780 times -->      8.71 Mbps in      86.72 usec
 25:     125 bytes    419 times -->     11.06 Mbps in      86.25 usec
 26:     128 bytes    575 times -->     11.38 Mbps in      85.80 usec
 27:     131 bytes    591 times -->     11.60 Mbps in      86.17 usec
 28:     189 bytes    602 times -->     16.55 Mbps in      87.14 usec
 29:     192 bytes    765 times -->     16.80 Mbps in      87.19 usec
 30:     195 bytes    770 times -->     17.11 Mbps in      86.94 usec
 31:     253 bytes    401 times -->     22.04 Mbps in      87.59 usec
 32:     256 bytes    568 times -->     22.64 Mbps in      86.25 usec
 33:     259 bytes    584 times -->     22.68 Mbps in      87.12 usec
 34:     381 bytes    585 times -->     33.19 Mbps in      87.58 usec
 35:     384 bytes    761 times -->     33.54 Mbps in      87.36 usec
 36:     387 bytes    766 times -->     33.91 Mbps in      87.08 usec
 37:     509 bytes    391 times -->     44.23 Mbps in      87.80 usec
 38:     512 bytes    568 times -->     44.70 Mbps in      87.39 usec
 39:     515 bytes    574 times -->     45.21 Mbps in      86.90 usec
 40:     765 bytes    580 times -->     66.05 Mbps in      88.36 usec
 41:     768 bytes    754 times -->     66.73 Mbps in      87.81 usec
 42:     771 bytes    760 times -->     67.02 Mbps in      87.77 usec
 43:    1021 bytes    384 times -->     88.04 Mbps in      88.48 usec
 44:    1024 bytes    564 times -->     88.30 Mbps in      88.48 usec
 45:    1027 bytes    566 times -->     88.63 Mbps in      88.40 usec
 46:    1533 bytes    568 times -->     71.75 Mbps in     163.00 usec
 47:    1536 bytes    408 times -->     72.11 Mbps in     162.51 usec
 48:    1539 bytes    410 times -->     71.71 Mbps in     163.75 usec
 49:    2045 bytes    204 times -->     95.40 Mbps in     163.55 usec
 50:    2048 bytes    305 times -->     95.26 Mbps in     164.02 usec
 51:    2051 bytes    305 times -->     95.33 Mbps in     164.14 usec
 52:    3069 bytes    305 times -->    141.16 Mbps in     165.87 usec
 53:    3072 bytes    401 times -->    142.19 Mbps in     164.83 usec
 54:    3075 bytes    404 times -->    150.68 Mbps in     155.70 usec
 55:    4093 bytes    214 times -->    192.36 Mbps in     162.33 usec
 56:    4096 bytes    307 times -->    193.21 Mbps in     161.74 usec

[libvirt] Qemu/KVM guest boots 2x slower with vhost_net

2011-10-04 Thread Reeted

Hello all,
for people on the qemu-devel list: you might want to have a look at the 
previous thread about this topic, at

http://www.spinics.net/lists/kvm/msg61537.html
but I will try to recap here.

I found that virtual machines on my host booted 2x slower (on average it's 
2x slower, but some parts are probably at least 3x slower) under libvirt 
compared to a manual qemu-kvm launch. With the help of Daniel I narrowed 
it down to the presence of vhost_net (active by default when launched by 
libvirt), i.e. with vhost_net the boot process is *UNIFORMLY* 2x slower.


The problem is still reproducible on my systems, but these are going into 
production soon and I am quite busy; I might not have many more days left 
for testing. It might be just next Saturday and Sunday, so if you can 
write your suggestions here by Saturday that would be most appreciated.



I have performed some benchmarks now, which I hadn't performed in the 
old thread:


openssl speed -multi 2 rsa (CPU benchmark): shows no performance 
difference with or without vhost_net.

Disk benchmarks: show no performance difference with or without vhost_net.
The disk benchmarks were (both with cache=none and cache=writeback):

- dd streaming read
- dd streaming write
- fio 4k random read, in all of: cache=none; cache=writeback with the host 
cache dropped before the test; cache=writeback with all fio data in the 
host cache (measures context switching)
- fio 4k random write

(sketches of these commands are below)
So I couldn't reproduce the problem with any benchmark that came to mind.

But during the boot process it is very visible.
I'll continue the description below; before that, here are the system 
specifications:
---
The host runs kernel 3.0.3 and Qemu-KVM 0.14.1, both vanilla and compiled 
by me.
Libvirt is the version in Ubuntu 11.04 Natty, which is 0.8.8-1ubuntu6.5; I 
didn't recompile this one.


VM disks are LVM LVs on an MD RAID array.
The problem shows identically with both cache=none and cache=writeback. 
aio=native.


Physical CPUs: dual Westmere 6-core (12 cores total, plus hyperthreading).
2 vCPUs per VM.
All VMs are idle or off except the VM being tested.

Guests are:
- multiple Ubuntu 11.04 Natty 64bit with their 2.6.38-8-virtual kernel: 
very minimal Ubuntu installs made with debootstrap (not from the Ubuntu 
installer)
- one Fedora Core 6 32bit with a 32bit 2.6.38-8-virtual kernel + initrd, 
both taken from Ubuntu Natty 32bit (so I could have virtio). Standard 
install (except for the kernel, replaced afterwards).

Static IP addresses in all guests, always.
---

All types of guests show this problem, but it is more visible in the FC6 
guest because the boot process is MUCH longer than in the 
debootstrap-installed Ubuntus.


Please note that most of the boot process, at least from a certain point 
onwards, appears to the eye uniformly 2x or 3x slower under vhost_net. By 
boot process I mean, roughly (copied by hand from some screenshots):



Loading default keymap
Setting hostname
Setting up LVM - no volume groups found
checking filesystems... clean ...
remounting root filesystem in read-write mode
mounting local filesystems
enabling local filesystems quotas
enabling /etc/fstab swaps
INIT entering runlevel 3
entering non-interactive startup
Starting sysstat: calling the system activity data collector (sadc)
Starting background readahead

** starting from here, everything, or almost everything, is 
much slower


Checking for hardware changes
Bringing up loopback interface
Bringing up interface eth0
starting system logger
starting kernel logger
starting irqbalance
starting portmap
starting nfs statd
starting rpc idmapd
starting system message bus
mounting other filesystems
starting PC/SC smart card daemon (pcscd)
starting hidd ... can't open HIDP control socket: address family not 
supported by protocol (this is an error due to backporting a new Ubuntu 
kernel to FC6)

starting autofs: loading autofs4
starting automount
starting acpi daemon
starting hpiod
starting hpssd
starting cups
starting sshd
starting ntpd
starting sendmail
starting sm-client
starting console mouse services
starting crond
starting xfs
starting anacron
starting atd
starting yum-updatesd
starting Avahi daemon
starting HAL daemon


From the point I marked onwards, these are mostly services, i.e. daemons 
listening on sockets, so I thought that maybe binding to a socket could be 
slower under vhost_net, but putting nc in listening mode with 'nc -l 15000' 
is instantaneous, so I am not sure.


The shutdown of FC6, which tears down basically the same services as 
above, is *also* much slower with vhost_net.


Thanks for any suggestions
R.



Re: [libvirt] Qemu/KVM is 3x slower under libvirt

2011-09-29 Thread Reeted

On 09/29/11 02:39, Chris Wright wrote:

Can you help narrow down what is happening during the additional 12
seconds in the guest?  For example, does a quick simple boot to single
user mode happen at the same boot speed w/ and w/out vhost_net?


Not tried (it would probably be too short to measure effectively), but I'd 
guess it would be the same as for multiuser; see also the FC6 sub-thread.



I'm guessing (hoping) that it's the network bring-up that is slow.
Are you using dhcp to get an IP address?  Does static IP have the same
slow down?


It's all static IP.

And please see my previous post, 1 hour before yours, regarding Fedora 
Core 6: the bring-up of eth0 in Fedora Core 6 is not particularly faster 
or slower than the rest. This is an overall system slowdown (I'd say 
either CPU or disk I/O) not related to the network (apart from being 
triggered by vhost_net).





Re: [libvirt] Qemu/KVM is 3x slower under libvirt

2011-09-28 Thread Reeted

On 09/28/11 09:51, Daniel P. Berrange wrote:

On Tue, Sep 27, 2011 at 08:10:21PM +0200, Reeted wrote:

I repost this, this time by also including the libvirt mailing list.

Info on my libvirt: it's the version in Ubuntu 11.04 Natty which is
0.8.8-1ubuntu6.5 . I didn't recompile this one, while Kernel and
qemu-kvm are vanilla and compiled by hand as described below.

My original message follows:

This is really strange.

I just installed a new host with kernel 3.0.3 and Qemu-KVM 0.14.1
compiled by me.

I have created the first VM.
This is on LVM, virtio etc... if I run it directly from bash
console, it boots in 8 seconds (it's a bare ubuntu with no
graphics), while if I boot it under virsh (libvirt) it boots in
20-22 seconds. This is the time from after Grub to the login prompt,
or from after Grub to the ssh-server up.

I was almost able to replicate the whole libvirt command line on the
bash console, and it still goes almost 3x faster when launched from
bash than with virsh start vmname. The part I wasn't able to
replicate is the -netdev part because I still haven't understood the
semantics of it.

-netdev is just an alternative way of setting up networking that
avoids QEMU's nasty VLAN concept. Using -netdev allows QEMU to
use more efficient codepaths for networking, which should improve
the network performance.


This is my bash commandline:

/opt/qemu-kvm-0.14.1/bin/qemu-system-x86_64 -M pc-0.14 -enable-kvm
-m 2002 -smp 2,sockets=2,cores=1,threads=1 -name vmname1-1 -uuid
ee75e28a-3bf3-78d9-3cba-65aa63973380 -nodefconfig -nodefaults
-chardev 
socket,id=charmonitor,path=/var/lib/libvirt/qemu/vmname1-1.monitor,server,nowait
-mon chardev=charmonitor,id=monitor,mode=readline -rtc base=utc
-boot order=dc,menu=on -drive 
file=/dev/mapper/vgPtpVM-lvVM_Vmname1_d1,if=none,id=drive-virtio-disk0,boot=on,format=raw,cache=none,aio=native
-device 
virtio-blk-pci,bus=pci.0,addr=0x4,drive=drive-virtio-disk0,id=virtio-disk0
-drive 
if=none,media=cdrom,id=drive-ide0-1-0,readonly=on,format=raw,cache=none,aio=native
-device ide-drive,bus=ide.1,unit=0,drive=drive-ide0-1-0,id=ide0-1-0
-net nic,model=virtio -net tap,ifname=tap0,script=no,downscript=no
-usb -vnc 127.0.0.1:0 -vga cirrus -device
virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x5


This shows KVM is being requested, but we should validate that KVM is
definitely being activated when under libvirt. You can test this by
doing:

 virsh qemu-monitor-command vmname1 'info kvm'


kvm support: enabled

I think I would see a higher impact if KVM was not enabled.


Which was taken from libvirt's command line. The only modifications
I did to the original libvirt commandline (seen with ps aux) were:

- Removed -S

Fine, has no effect on performance.


- Network was: -netdev tap,fd=17,id=hostnet0,vhost=on,vhostfd=18
-device 
virtio-net-pci,netdev=hostnet0,id=net0,mac=52:54:00:05:36:60,bus=pci.0,addr=0x3
Has been simplified to: -net nic,model=virtio -net
tap,ifname=tap0,script=no,downscript=no
and manual bridging of the tap0 interface.

You could have equivalently used

  -netdev tap,ifname=tap0,script=no,downscript=no,id=hostnet0,vhost=on
  -device 
virtio-net-pci,netdev=hostnet0,id=net0,mac=52:54:00:05:36:60,bus=pci.0,addr=0x3


It's this! It's this!! (thanks for the line)

It raises boot time by 10-13 seconds

But now I don't know where to look... During boot there is a pause, 
usually between /scripts/init-bottom (Ubuntu 11.04 guest) and the 
appearance of the login prompt; however, that is not really meaningful, 
because there is probably a lot of background activity going on there, 
with init etc., which doesn't display messages.



init-bottom does just this

-
#!/bin/sh -e
# initramfs init-bottom script for udev

PREREQ=

# Output pre-requisites
prereqs()
{
echo $PREREQ
}

case $1 in
prereqs)
prereqs
exit 0
;;
esac


# Stop udevd, we'll miss a few events while we run init, but we catch up
pkill udevd

# Move /dev to the real filesystem
mount -n -o move /dev ${rootmnt}/dev
-

It doesn't look like it should take time to execute.
So there is probably some other background activity going on... and that 
is slower, but I don't know what that is.



Another thing that can be noticed is that the dmesg message:

[   13.290173] eth0: no IPv6 routers present

(which is also the last message)

happens on average 1 (one) second earlier in the fast case (-net) than 
in the slow case (-netdev)




That said, I don't expect this has anything to do with the performance,
since booting a guest rarely involves much network I/O unless you're
doing something odd like NFS-root / iSCSI-root.


No, there is nothing like that: no network disks or NFS.

I had ntpdate, but I removed it and that changed nothing.



Firstly I had thought that this could be fault of the VNC: I have
compiled qemu-kvm with no separate vnc thread. I thought that
libvirt might have connected to the vnc server at all times

Re: [libvirt] Qemu/KVM is 3x slower under libvirt

2011-09-28 Thread Reeted

On 09/28/11 11:28, Daniel P. Berrange wrote:

On Wed, Sep 28, 2011 at 11:19:43AM +0200, Reeted wrote:

On 09/28/11 09:51, Daniel P. Berrange wrote:

This is my bash commandline:

/opt/qemu-kvm-0.14.1/bin/qemu-system-x86_64 -M pc-0.14 -enable-kvm
-m 2002 -smp 2,sockets=2,cores=1,threads=1 -name vmname1-1 -uuid
ee75e28a-3bf3-78d9-3cba-65aa63973380 -nodefconfig -nodefaults
-chardev 
socket,id=charmonitor,path=/var/lib/libvirt/qemu/vmname1-1.monitor,server,nowait
-mon chardev=charmonitor,id=monitor,mode=readline -rtc base=utc
-boot order=dc,menu=on -drive 
file=/dev/mapper/vgPtpVM-lvVM_Vmname1_d1,if=none,id=drive-virtio-disk0,boot=on,format=raw,cache=none,aio=native
-device 
virtio-blk-pci,bus=pci.0,addr=0x4,drive=drive-virtio-disk0,id=virtio-disk0
-drive 
if=none,media=cdrom,id=drive-ide0-1-0,readonly=on,format=raw,cache=none,aio=native
-device ide-drive,bus=ide.1,unit=0,drive=drive-ide0-1-0,id=ide0-1-0
-net nic,model=virtio -net tap,ifname=tap0,script=no,downscript=no
-usb -vnc 127.0.0.1:0 -vga cirrus -device
virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x5

This shows KVM is being requested, but we should validate that KVM is
definitely being activated when under libvirt. You can test this by
doing:

 virsh qemu-monitor-command vmname1 'info kvm'

kvm support: enabled

I think I would see a higher impact if it was KVM not enabled.


Which was taken from libvirt's command line. The only modifications
I did to the original libvirt commandline (seen with ps aux) were:



- Network was: -netdev tap,fd=17,id=hostnet0,vhost=on,vhostfd=18
-device 
virtio-net-pci,netdev=hostnet0,id=net0,mac=52:54:00:05:36:60,bus=pci.0,addr=0x3
Has been simplified to: -net nic,model=virtio -net
tap,ifname=tap0,script=no,downscript=no
and manual bridging of the tap0 interface.

You could have equivalently used

  -netdev tap,ifname=tap0,script=no,downscript=no,id=hostnet0,vhost=on
  -device 
virtio-net-pci,netdev=hostnet0,id=net0,mac=52:54:00:05:36:60,bus=pci.0,addr=0x3

It's this! It's this!! (thanks for the line)

It raises boot time by 10-13 seconds

Ok, that is truly bizarre and I don't really have any explanation
for why that is. I guess you could try 'vhost=off' too and see if that
makes the difference.


YES!
It's the vhost. With vhost=on it takes about 12 seconds more time to boot.

...meaning? :-)
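
(For reference: on the qemu command line, vhost=off would presumably mean 
the -netdev line quoted earlier with the flag flipped, i.e. something like:

-netdev tap,ifname=tap0,script=no,downscript=no,id=hostnet0,vhost=off
-device virtio-net-pci,netdev=hostnet0,id=net0,mac=52:54:00:05:36:60,bus=pci.0,addr=0x3 )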



Re: [libvirt] Qemu/KVM is 3x slower under libvirt (due to vhost=on)

2011-09-28 Thread Reeted

On 09/28/11 11:53, Daniel P. Berrange wrote:

On Wed, Sep 28, 2011 at 11:49:01AM +0200, Reeted wrote:

YES!
It's the vhost. With vhost=on it takes about 12 seconds more time to boot.

...meaning? :-)

I've no idea. I was always under the impression that 'vhost=on' was
the 'make it go much faster' switch. So something is going wrong
here that I can't explain.

Perhaps one of the network people on this list can explain...


To turn vhost off in the libvirt XML, you should be able to use
<driver name='qemu'/> for the interface in question, e.g.


  <interface type='user'>
    <mac address='52:54:00:e5:48:58'/>
    <model type='virtio'/>
    <driver name='qemu'/>
  </interface>



Ok, that seems to work: it removes the vhost part from the virsh launch, 
hence cutting 12 seconds off the boot time.


If nobody comes up with an explanation of why, I will open another 
thread on the kvm list for this. I would probably need to test disk 
performance with vhost=on to see whether it degrades, or whether boot 
time is increased for some other reason.


Thanks so much for your help Daniel,
Reeted



Re: [libvirt] Qemu/KVM is 3x slower under libvirt (due to vhost=on)

2011-09-28 Thread Reeted

On 09/28/11 14:56, Richard W.M. Jones wrote:

On Wed, Sep 28, 2011 at 12:19:09PM +0200, Reeted wrote:

Ok that seems to work: it removes the vhost part in the virsh launch
hence cutting down 12secs of boot time.

If nobody comes out with an explanation of why, I will open another
thread on the kvm list for this. I would probably need to test disk
performance on vhost=on to see if it degrades or it's for another
reason that boot time is increased.

Is it using CPU during this time, or is the qemu-kvm process idle?

It wouldn't be the first time that a network option ROM sat around
waiting for an imaginary console user to press a key.

Rich.


Of the two qemu-kvm processes (threads?) which I see consuming CPU for 
that VM, one is at about 20%, the other at about 10%. I think it's doing 
something, but maybe not much, or maybe it's really I/O bound and the I/O 
is slow (as I originally thought). I will perform some disk benchmarks 
and follow up, but I can't do that right now...

Thank you



Re: [libvirt] Qemu/KVM is 3x slower under libvirt (due to vhost=on)

2011-09-28 Thread Reeted

On 09/28/11 16:51, Reeted wrote:

On 09/28/11 14:56, Richard W.M. Jones wrote:

On Wed, Sep 28, 2011 at 12:19:09PM +0200, Reeted wrote:

Ok that seems to work: it removes the vhost part in the virsh launch
hence cutting down 12secs of boot time.

If nobody comes out with an explanation of why, I will open another
thread on the kvm list for this. I would probably need to test disk
performance on vhost=on to see if it degrades or it's for another
reason that boot time is increased.

Is it using CPU during this time, or is the qemu-kvm process idle?

It wouldn't be the first time that a network option ROM sat around
waiting for an imaginary console user to press a key.

Rich.


Of the two qemu-kvm processes (threads?) which I see consuming CPU for 
that VM, one is at about 20%, the other at about 10%. I think it's 
doing something but maybe not much, or maybe it's really I/O bound and 
the I/O is slow (as I originally thought). I will perform some disk 
benchmarks and follow up, but I can't do that right now...

Thank you


Ok, I still haven't done the benchmarks, but I am now fairly convinced 
that it's either a disk performance problem or a CPU problem with 
vhost_net on, not a network performance problem or an idle wait.


This is because I have now installed another virtual machine, a Fedora 
Core 6 (old!), but with an Ubuntu Natty kernel vmlinuz + initrd so that 
it supports virtio devices. The initrd part from Ubuntu is extremely 
short and finishes immediately, but then the Fedora Core 6 boot is much 
longer than with my previous barebone Ubuntu virtual machine, and with 
more messages; I can see the various daemons being brought up one by 
one, and I can tell you that such a boot (and also the teardown of 
services during shutdown) is very much faster with vhost_net disabled.


With vhost_net disabled it takes 30 seconds to come up (from after grub), 
and 28 seconds to shut down.
With vhost_net enabled it takes 1m19s to come up (from after grub), 
and 1m04s to shut down.



I have some ideas for disk benchmarking: that would be fio or simple 
dd. What could I use for CPU benchmarking? Would openssl speed be too 
simple?
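
(For what it's worth, the CPU test reported later in this thread was 
simply:

openssl speed -multi 2 rsa

which showed no difference with or without vhost_net.)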


Thank you



[libvirt] Qemu/KVM is 3x slower under libvirt

2011-09-27 Thread Reeted

I repost this, this time by also including the libvirt mailing list.

Info on my libvirt: it's the version in Ubuntu 11.04 Natty, which is 
0.8.8-1ubuntu6.5; I didn't recompile this one, while the kernel and 
qemu-kvm are vanilla and compiled by hand as described below.


My original message follows:

This is really strange.

I just installed a new host with kernel 3.0.3 and Qemu-KVM 0.14.1 
compiled by me.


I have created the first VM.
This is on LVM, virtio etc... if I run it directly from the bash console, 
it boots in 8 seconds (it's a bare Ubuntu with no graphics), while if I 
boot it under virsh (libvirt) it boots in 20-22 seconds. This is the 
time from after Grub to the login prompt, or from after Grub until the 
SSH server is up.


I was almost able to replicate the whole libvirt command line on the 
bash console, and it still goes almost 3x faster when launched from bash 
than with virsh start vmname. The part I wasn't able to replicate is the 
-netdev part because I still haven't understood the semantics of it.


This is my bash commandline:

/opt/qemu-kvm-0.14.1/bin/qemu-system-x86_64 -M pc-0.14 -enable-kvm -m 
2002 -smp 2,sockets=2,cores=1,threads=1 -name vmname1-1 -uuid 
ee75e28a-3bf3-78d9-3cba-65aa63973380 -nodefconfig -nodefaults -chardev 
socket,id=charmonitor,path=/var/lib/libvirt/qemu/vmname1-1.monitor,server,nowait 
-mon chardev=charmonitor,id=monitor,mode=readline -rtc base=utc -boot 
order=dc,menu=on -drive 
file=/dev/mapper/vgPtpVM-lvVM_Vmname1_d1,if=none,id=drive-virtio-disk0,boot=on,format=raw,cache=none,aio=native 
-device 
virtio-blk-pci,bus=pci.0,addr=0x4,drive=drive-virtio-disk0,id=virtio-disk0 
-drive 
if=none,media=cdrom,id=drive-ide0-1-0,readonly=on,format=raw,cache=none,aio=native 
-device ide-drive,bus=ide.1,unit=0,drive=drive-ide0-1-0,id=ide0-1-0 -net 
nic,model=virtio -net tap,ifname=tap0,script=no,downscript=no  -usb -vnc 
127.0.0.1:0 -vga cirrus -device 
virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x5



Which was taken from libvirt's command line. The only modifications I 
did to the original libvirt commandline (seen with ps aux) were:


- Removed -S

- Network was: -netdev tap,fd=17,id=hostnet0,vhost=on,vhostfd=18 -device 
virtio-net-pci,netdev=hostnet0,id=net0,mac=52:54:00:05:36:60,bus=pci.0,addr=0x3
Has been simplified to: -net nic,model=virtio -net 
tap,ifname=tap0,script=no,downscript=no

and manual bridging of the tap0 interface.
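
The manual bridging amounted to roughly the following (a sketch: the 
bridge name br0 is a placeholder, and depending on the setup tap0 may 
need to be pre-created as a persistent tap device):

ip tuntap add dev tap0 mode tap   # only if tap0 must exist before qemu starts
brctl addif br0 tap0
ip link set tap0 up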


At first I had thought that this could be the fault of VNC: I have 
compiled qemu-kvm without the separate VNC thread. I thought that libvirt 
might be connected to the VNC server at all times and this could have 
slowed down the whole VM.
But then I also tried connecting with vncviewer to the KVM machine 
launched directly from bash, and its speed didn't change. So no, 
it doesn't seem to be that.


BTW: is the VM slowdown with no separate VNC thread only in 
effect when somebody is actually connected to VNC, or always?


Also, note that the time difference is not visible in dmesg once the 
machine has booted, so it's not a slowdown in detecting devices. Devices 
are always detected within the first 3 seconds according to dmesg, and at 
3.6 seconds the first ext4 mount begins. It really seems to be the OS 
boot that is slow... it looks like a hard disk performance problem.


Thank you
R.
