Re: [Qemu-devel] [question] virtio-blk performance degradation happened with virtio-serial

2014-09-19 Thread Paolo Bonzini
On 19/09/2014 07:53, Fam Zheng wrote:
 Any ideas?

The obvious, but hardish one is to switch to epoll (one epoll fd per
AioContext, plus one for iohandler.c).

This would require converting iohandler.c to a GSource.
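
For illustration only - this is not QEMU code, and the AioCtx type and the
helper names below are invented - a minimal sketch of the one-epoll-fd-per-
context idea looks like this.  Fds are registered once with epoll_ctl(), and
epoll_wait() returns only the ready ones, so the per-iteration cost no longer
grows with the total number of registered fds the way rebuilding a flat
ppoll() array does:

#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <sys/epoll.h>

typedef struct AioCtx {
    int epollfd;                        /* one epoll instance per context */
} AioCtx;

static void aio_ctx_init(AioCtx *ctx)
{
    ctx->epollfd = epoll_create1(0);
    if (ctx->epollfd < 0) {
        perror("epoll_create1");
        exit(1);
    }
}

/* Register an fd once; the kernel keeps the interest list. */
static void aio_set_fd(AioCtx *ctx, int fd)
{
    struct epoll_event ev = { .events = EPOLLIN, .data.fd = fd };
    if (epoll_ctl(ctx->epollfd, EPOLL_CTL_ADD, fd, &ev) < 0) {
        perror("epoll_ctl");
    }
}

/* Wait; only fds that are actually ready come back. */
static int aio_wait(AioCtx *ctx, int timeout_ms)
{
    struct epoll_event events[64];
    int i, n = epoll_wait(ctx->epollfd, events, 64, timeout_ms);

    for (i = 0; i < n; i++) {
        printf("fd %d ready\n", events[i].data.fd);
        /* dispatch the handler registered for events[i].data.fd here */
    }
    return n;
}

int main(void)
{
    AioCtx ctx;
    int p[2];

    aio_ctx_init(&ctx);
    if (pipe(p) < 0) {
        perror("pipe");
        return 1;
    }
    aio_set_fd(&ctx, p[0]);
    (void)write(p[1], "x", 1);          /* make the read end ready */
    aio_wait(&ctx, 100);
    return 0;
}

The real conversion would of course also have to keep the existing glib main
loop / GSource integration working, which is what makes it hard.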

Paolo


Re: [Qemu-devel] [question] virtio-blk performance degradation happened with virtio-serial

2014-09-18 Thread Fam Zheng
On Tue, 09/02 12:06, Amit Shah wrote:
 On (Mon) 01 Sep 2014 [20:52:46], Zhang Haoyu wrote:
   Hi, all
   
   I start a VM with virtio-serial (default number of ports: 31), and found 
   that virtio-blk performance degradation happened, about 25%; this 
   problem can be reproduced 100%.
   without virtio-serial:
   4k-read-random 1186 IOPS
   with virtio-serial:
   4k-read-random 871 IOPS
   
   but if I use the max_ports=2 option to limit the max number of virtio-serial 
   ports, then the IO performance degradation is not so serious, about 5%.
   
   And, IDE performance degradation does not happen with virtio-serial.
  
  Pretty sure it's related to the MSI vectors in use.  It's possible that
  the virtio-serial device takes up all the available vectors in the guest,
  leaving old-style IRQs for the virtio-blk device.
  
  I don't think so.
  I use iometer to test the 64k-read (or write) sequential case; if I disable 
  virtio-serial dynamically via device manager -> virtio-serial = disable,
  then the performance improves by about 25% immediately, and when I re-enable 
  virtio-serial via device manager -> virtio-serial = enable,
  the performance drops back again, very obviously.
  To add:
  although virtio-serial is enabled, I don't use it at all, and the 
  degradation still happens.
 
 Using the vectors= option as mentioned below, you can restrict the
 number of MSI vectors the virtio-serial device gets.  You can then
 confirm whether it's MSI that's related to these issues.

Amit,

It's related to the large number of ioeventfds used in virtio-serial-pci. With
virtio-serial-pci's ioeventfd=off, the performance is not affected no matter
whether the guest initializes it or not.
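
(For anyone trying to reproduce this, that property can be set on the command
line, e.g.

  -device virtio-serial-pci,ioeventfd=off

which stops QEMU from registering one ioeventfd per virtqueue for this device,
at the cost of handling the queue notifications synchronously in the vcpu
thread.)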

In my test, there are 12 fds to poll in qemu_poll_ns before loading the guest's
virtio_console.ko, versus 76 once virtio_console is modprobed.

It looks like ppoll takes more time when it has more fds to poll.

Some trace data with systemtap:

12 fds:

time    rel_time    symbol
15      (+1)        qemu_poll_ns  [enter]
18      (+3)        qemu_poll_ns  [return]

76 fds:

12      (+2)        qemu_poll_ns  [enter]
18      (+6)        qemu_poll_ns  [return]

I haven't looked at the virtio-serial code, so I'm not sure whether we should
reduce the number of ioeventfds in virtio-serial-pci or focus on lower-level
efficiency.

I haven't compared with g_poll, but I think the underlying syscall should be the
same.
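
As a userspace sanity check (a toy sketch, unrelated to the systemtap numbers
above; absolute values will differ), one can time ppoll() over sets of idle
eventfds of different sizes:

#define _GNU_SOURCE
#include <poll.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <sys/eventfd.h>
#include <time.h>

/* Average cost of one ppoll() call over nfds idle eventfds. */
static double avg_ppoll_us(int nfds, int iters)
{
    struct pollfd *fds = calloc(nfds, sizeof(*fds));
    struct timespec start, end, timeout = { 0, 0 };
    int i;

    for (i = 0; i < nfds; i++) {
        fds[i].fd = eventfd(0, 0);          /* never becomes readable */
        fds[i].events = POLLIN;
    }

    clock_gettime(CLOCK_MONOTONIC, &start);
    for (i = 0; i < iters; i++) {
        ppoll(fds, nfds, &timeout, NULL);   /* zero timeout: scan and return */
    }
    clock_gettime(CLOCK_MONOTONIC, &end);

    for (i = 0; i < nfds; i++) {
        close(fds[i].fd);
    }
    free(fds);

    return ((end.tv_sec - start.tv_sec) * 1e9 +
            (end.tv_nsec - start.tv_nsec)) / 1e3 / iters;
}

int main(void)
{
    printf("12 fds: %.3f us per ppoll\n", avg_ppoll_us(12, 100000));
    printf("76 fds: %.3f us per ppoll\n", avg_ppoll_us(76, 100000));
    return 0;
}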

Any ideas?

Fam


 
  So, I think it has nothing to do with legacy interrupt mode, right?
  
  I am going to compare the perf top data on QEMU and the perf kvm 
  stat data with virtio-serial disabled/enabled in the guest,
  and the perf top data in the guest with virtio-serial 
  disabled/enabled.
  Any ideas?
  
  Thanks,
  Zhang Haoyu
  If you restrict the number of vectors the virtio-serial device gets
  (using the -device virtio-serial-pci,vectors= param), does that make
  things better for you?
 
 
 
   Amit


Re: [Qemu-devel] [question] virtio-blk performance degradation happened with virtio-serial

2014-09-02 Thread Amit Shah
On (Mon) 01 Sep 2014 [20:52:46], Zhang Haoyu wrote:
  Hi, all
  
  I start a VM with virtio-serial (default number of ports: 31), and found 
  that virtio-blk performance degradation happened, about 25%; this problem 
  can be reproduced 100%.
  without virtio-serial:
  4k-read-random 1186 IOPS
  with virtio-serial:
  4k-read-random 871 IOPS
  
  but if I use the max_ports=2 option to limit the max number of virtio-serial 
  ports, then the IO performance degradation is not so serious, about 5%.
  
  And, IDE performance degradation does not happen with virtio-serial.
 
 Pretty sure it's related to the MSI vectors in use.  It's possible that
 the virtio-serial device takes up all the available vectors in the guest,
 leaving old-style IRQs for the virtio-blk device.
 
 I don't think so.
 I use iometer to test the 64k-read (or write) sequential case; if I disable 
 virtio-serial dynamically via device manager -> virtio-serial = disable,
 then the performance improves by about 25% immediately, and when I re-enable 
 virtio-serial via device manager -> virtio-serial = enable,
 the performance drops back again, very obviously.
 To add:
 although virtio-serial is enabled, I don't use it at all, and the degradation 
 still happens.

Using the vectors= option as mentioned below, you can restrict the
number of MSI vectors the virtio-serial device gets.  You can then
confirm whether it's MSI that's related to these issues.
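
(For example - the vector count here is only an illustration - something like

  qemu-system-x86_64 ... -device virtio-serial-pci,vectors=4 ...

caps the number of MSI-X vectors the device exposes, so you can check whether
the virtio-blk numbers change once virtio-serial holds fewer vectors.)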

 So, I think it has nothing to do with legacy interrupt mode, right?
 
 I am going to compare the perf top data on QEMU and the perf kvm 
 stat data with virtio-serial disabled/enabled in the guest,
 and the perf top data in the guest with virtio-serial 
 disabled/enabled.
 Any ideas?
 
 Thanks,
 Zhang Haoyu
 If you restrict the number of vectors the virtio-serial device gets
 (using the -device virtio-serial-pci,vectors= param), does that make
 things better for you?



Amit


Re: [Qemu-devel] [question] virtio-blk performance degradation happened with virtio-serial

2014-09-02 Thread Andrey Korolyov
On Tue, Sep 2, 2014 at 10:36 AM, Amit Shah amit.s...@redhat.com wrote:
 On (Mon) 01 Sep 2014 [20:52:46], Zhang Haoyu wrote:
  Hi, all
 
  I start a VM with virtio-serial (default number of ports: 31), and found 
  that virtio-blk performance degradation happened, about 25%; this 
  problem can be reproduced 100%.
  without virtio-serial:
  4k-read-random 1186 IOPS
  with virtio-serial:
  4k-read-random 871 IOPS
 
  but if I use the max_ports=2 option to limit the max number of virtio-serial 
  ports, then the IO performance degradation is not so serious, about 5%.
 
  And, IDE performance degradation does not happen with virtio-serial.
 
 Pretty sure it's related to the MSI vectors in use.  It's possible that
 the virtio-serial device takes up all the available vectors in the guest,
 leaving old-style IRQs for the virtio-blk device.
 
 I don't think so.
 I use iometer to test the 64k-read (or write) sequential case; if I disable 
 virtio-serial dynamically via device manager -> virtio-serial = disable,
 then the performance improves by about 25% immediately, and when I re-enable 
 virtio-serial via device manager -> virtio-serial = enable,
 the performance drops back again, very obviously.
 To add:
 although virtio-serial is enabled, I don't use it at all, and the 
 degradation still happens.

 Using the vectors= option as mentioned below, you can restrict the
 number of MSI vectors the virtio-serial device gets.  You can then
 confirm whether it's MSI that's related to these issues.

 So, I think it has nothing to do with legacy interrupt mode, right?
 
 I am going to compare the perf top data on QEMU and the perf kvm 
 stat data with virtio-serial disabled/enabled in the guest,
 and the perf top data in the guest with virtio-serial 
 disabled/enabled.
 Any ideas?
 
 Thanks,
 Zhang Haoyu
 If you restrict the number of vectors the virtio-serial device gets
 (using the -device virtio-serial-pci,vectors= param), does that make
 things better for you?



 Amit


I can confirm serious degradation compared to 1.1 with regular
serial output - I am able to hang the VM forever after some tens of
seconds of continuously printing dmesg to ttyS0. The VM just ate
all of its available CPU quota during the test and hung after some tens of
seconds, not even responding to regular pings and progressively
raising its CPU consumption up to the limit.


Re: [Qemu-devel] [question] virtio-blk performance degradation happened with virtio-serial

2014-09-02 Thread Amit Shah
On (Tue) 02 Sep 2014 [22:05:45], Andrey Korolyov wrote:

 I can confirm serious degradation compared to 1.1 with regular
 serial output - I am able to hang the VM forever after some tens of
 seconds of continuously printing dmesg to ttyS0. The VM just ate
 all of its available CPU quota during the test and hung after some tens of
 seconds, not even responding to regular pings and progressively
 raising its CPU consumption up to the limit.

Entirely different from what's being discussed here.  You're observing
slowdown with ttyS0 in the guest -- the isa-serial device.  This
thread is discussing virtio-blk and virtio-serial.

Amit


Re: [Qemu-devel] [question] virtio-blk performance degradation happened with virtio-serial

2014-09-02 Thread Andrey Korolyov
On Tue, Sep 2, 2014 at 10:11 PM, Amit Shah amit.s...@redhat.com wrote:
 On (Tue) 02 Sep 2014 [22:05:45], Andrey Korolyov wrote:

 I can confirm serious degradation compared to 1.1 with regular
 serial output - I am able to hang the VM forever after some tens of
 seconds of continuously printing dmesg to ttyS0. The VM just ate
 all of its available CPU quota during the test and hung after some tens of
 seconds, not even responding to regular pings and progressively
 raising its CPU consumption up to the limit.

 Entirely different from what's being discussed here.  You're observing
 slowdown with ttyS0 in the guest -- the isa-serial device.  This
 thread is discussing virtio-blk and virtio-serial.

 Amit

Sorry for the thread hijacking; the problem is definitely not related to the
interrupt rework. I will start a new thread.


Re: [Qemu-devel] [question] virtio-blk performance degradation happened with virtio-serial

2014-09-01 Thread Zhang Haoyu
 Hi, all
 
 I start a VM with virtio-serial (default number of ports: 31), and found that 
 virtio-blk performance degradation happened, about 25%; this problem can be 
 reproduced 100%.
 without virtio-serial:
 4k-read-random 1186 IOPS
 with virtio-serial:
 4k-read-random 871 IOPS
 
 but if I use the max_ports=2 option to limit the max number of virtio-serial ports, 
 then the IO performance degradation is not so serious, about 5%.
 
 And, IDE performance degradation does not happen with virtio-serial.

Pretty sure it's related to the MSI vectors in use.  It's possible that
the virtio-serial device takes up all the available vectors in the guest,
leaving old-style IRQs for the virtio-blk device.

I don't think so.
I use iometer to test the 64k-read (or write) sequential case; if I disable 
virtio-serial dynamically via device manager -> virtio-serial = disable,
then the performance improves by about 25% immediately, and when I re-enable 
virtio-serial via device manager -> virtio-serial = enable,
the performance drops back again, very obviously.
So, I think it has nothing to do with legacy interrupt mode, right?

I am going to compare the perf top data on QEMU and the perf kvm stat 
data with virtio-serial disabled/enabled in the guest,
and the perf top data in the guest with virtio-serial 
disabled/enabled.
Any ideas?

Thanks,
Zhang Haoyu
If you restrict the number of vectors the virtio-serial device gets
(using the -device virtio-serial-pci,vectors= param), does that make
things better for you?


   Amit



Re: [Qemu-devel] [question] virtio-blk performance degradation happened with virtio-serial

2014-09-01 Thread Amit Shah
On (Mon) 01 Sep 2014 [20:38:20], Zhang Haoyu wrote:
  Hi, all
  
  I start a VM with virtio-serial (default number of ports: 31), and found that 
  virtio-blk performance degradation happened, about 25%; this problem can 
  be reproduced 100%.
  without virtio-serial:
  4k-read-random 1186 IOPS
  with virtio-serial:
  4k-read-random 871 IOPS
  
  but if I use the max_ports=2 option to limit the max number of virtio-serial 
  ports, then the IO performance degradation is not so serious, about 5%.
  
  And, IDE performance degradation does not happen with virtio-serial.
 
 Pretty sure it's related to the MSI vectors in use.  It's possible that
 the virtio-serial device takes up all the available vectors in the guest,
 leaving old-style IRQs for the virtio-blk device.
 
 I don't think so.
 I use iometer to test the 64k-read (or write) sequential case; if I disable 
 virtio-serial dynamically via device manager -> virtio-serial = disable,
 then the performance improves by about 25% immediately, and when I re-enable 
 virtio-serial via device manager -> virtio-serial = enable,
 the performance drops back again, very obviously.
 So, I think it has nothing to do with legacy interrupt mode, right?
 
 I am going to compare the perf top data on QEMU and the perf kvm 
 stat data with virtio-serial disabled/enabled in the guest,
 and the perf top data in the guest with virtio-serial 
 disabled/enabled.
 Any ideas?

So it's a Windows guest; it could be something Windows-driver
specific, then?  Do you see the same on Linux guests too?

Amit


Re: [Qemu-devel] [question] virtio-blk performance degradation happened with virtio-serial

2014-09-01 Thread Zhang Haoyu
 Hi, all
 
 I start a VM with virtio-serial (default number of ports: 31), and found that 
 virtio-blk performance degradation happened, about 25%; this problem can be 
 reproduced 100%.
 without virtio-serial:
 4k-read-random 1186 IOPS
 with virtio-serial:
 4k-read-random 871 IOPS
 
 but if I use the max_ports=2 option to limit the max number of virtio-serial 
 ports, then the IO performance degradation is not so serious, about 5%.
 
 And, IDE performance degradation does not happen with virtio-serial.

Pretty sure it's related to the MSI vectors in use.  It's possible that
the virtio-serial device takes up all the available vectors in the guest,
leaving old-style IRQs for the virtio-blk device.

I don't think so.
I use iometer to test the 64k-read (or write) sequential case; if I disable 
virtio-serial dynamically via device manager -> virtio-serial = disable,
then the performance improves by about 25% immediately, and when I re-enable 
virtio-serial via device manager -> virtio-serial = enable,
the performance drops back again, very obviously.
To add:
although virtio-serial is enabled, I don't use it at all, and the degradation 
still happens.

So, I think it has nothing to do with legacy interrupt mode, right?

I am going to compare the perf top data on QEMU and the perf kvm 
stat data with virtio-serial disabled/enabled in the guest,
and the perf top data in the guest with virtio-serial 
disabled/enabled.
Any ideas?

Thanks,
Zhang Haoyu
If you restrict the number of vectors the virtio-serial device gets
(using the -device virtio-serial-pci,vectors= param), does that make
things better for you?


  Amit



Re: [Qemu-devel] [question] virtio-blk performance degradation happened with virtio-serial

2014-09-01 Thread Christian Borntraeger
On 01/09/14 14:52, Zhang Haoyu wrote:
 Hi, all

 I start a VM with virtio-serial (default number of ports: 31), and found that 
 virtio-blk performance degradation happened, about 25%; this problem can 
 be reproduced 100%.
 without virtio-serial:
 4k-read-random 1186 IOPS
 with virtio-serial:
 4k-read-random 871 IOPS

 but if I use the max_ports=2 option to limit the max number of virtio-serial 
 ports, then the IO performance degradation is not so serious, about 5%.
 
 And, IDE performance degradation does not happen with virtio-serial.

 Pretty sure it's related to the MSI vectors in use.  It's possible that
 the virtio-serial device takes up all the available vectors in the guest,
 leaving old-style IRQs for the virtio-blk device.

 I don't think so.
 I use iometer to test the 64k-read (or write) sequential case; if I disable 
 virtio-serial dynamically via device manager -> virtio-serial = disable,
 then the performance improves by about 25% immediately, and when I re-enable 
 virtio-serial via device manager -> virtio-serial = enable,
 the performance drops back again, very obviously.
 To add:
 although virtio-serial is enabled, I don't use it at all, and the degradation 
 still happens.

This is just wild guessing:
If virtio-blk and virtio-serial share an IRQ, the guest operating system has to 
check each virtqueue for activity. Maybe there is some inefficiency doing that.
AFAIK virtio-serial registers 64 virtqueues (on 31 ports + console) even if 
everything is unused.
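
(As rough arithmetic, and assuming the count above is right: each port has an
rx and a tx virtqueue, so the console plus 31 ports alone account for
2 x 32 = 64 data queues, before the control queues used in multiport mode are
counted.)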

Christian




Re: [Qemu-devel] [question] virtio-blk performance degradation happened with virtio-serial

2014-09-01 Thread Paolo Bonzini
On 01/09/2014 15:09, Christian Borntraeger wrote:
 This is just wild guessing:
 If virtio-blk and virtio-serial share an IRQ, the guest operating system has 
 to check each virtqueue for activity. Maybe there is some inefficiency doing 
 that.
 AFAIK virtio-serial registers 64 virtqueues (on 31 ports + console) even if 
 everything is unused.

That could be the case if MSI is disabled.

Paolo


Re: [Qemu-devel] [question] virtio-blk performance degradation happened with virtio-serial

2014-09-01 Thread Christian Borntraeger
On 01/09/14 15:12, Paolo Bonzini wrote:
 On 01/09/2014 15:09, Christian Borntraeger wrote:
 This is just wild guessing:
 If virtio-blk and virtio-serial share an IRQ, the guest operating system has 
 to check each virtqueue for activity. Maybe there is some inefficiency doing 
 that.
 AFAIK virtio-serial registers 64 virtqueues (on 31 ports + console) even if 
 everything is unused.
 
 That could be the case if MSI is disabled.
 
 Paolo
 

Do the Windows virtio drivers enable MSIs in their INF file?

Christian



Re: [Qemu-devel] [question] virtio-blk performance degradation happened with virtio-serial

2014-09-01 Thread Paolo Bonzini
On 01/09/2014 15:22, Christian Borntraeger wrote:
   If virtio-blk and virtio-serial share an IRQ, the guest operating system 
   has to check each virtqueue for activity. Maybe there is some 
   inefficiency doing that.
   AFAIK virtio-serial registers 64 virtqueues (on 31 ports + console) even 
   if everything is unused.
  
  That could be the case if MSI is disabled.
 
 Do the Windows virtio drivers enable MSIs in their INF file?

It depends on the version of the drivers, but it is a reasonable guess
at what differs between Linux and Windows.  Haoyu, can you give us the
output of lspci from a Linux guest?
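
(Something along the lines of

  lspci -vvv

and looking for the MSI-X: Enable+ / Enable- line in the capabilities of the
virtio-serial and virtio-blk functions would show whether the guest actually
enabled MSI-X for them.)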

Paolo


Re: [Qemu-devel] [question] virtio-blk performance degradation happened with virtio-serial

2014-09-01 Thread Christian Borntraeger
On 01/09/14 15:29, Paolo Bonzini wrote:
 On 01/09/2014 15:22, Christian Borntraeger wrote:
 If virtio-blk and virtio-serial share an IRQ, the guest operating system 
 has to check each virtqueue for activity. Maybe there is some inefficiency 
 doing that.
 AFAIK virtio-serial registers 64 virtqueues (on 31 ports + console) even 
 if everything is unused.

 That could be the case if MSI is disabled.

 Do the Windows virtio drivers enable MSIs in their INF file?
 
 It depends on the version of the drivers, but it is a reasonable guess
 at what differs between Linux and Windows.  Haoyu, can you give us the
 output of lspci from a Linux guest?
 
 Paolo

Zhang Haoyu, which virtio drivers did you use?

I just checked the Fedora virtio driver. The INF file does not contain the MSI 
enablement as described in
http://msdn.microsoft.com/en-us/library/windows/hardware/ff544246%28v=vs.85%29.aspx
That would explain the performance issues - assuming the information at that 
link is still accurate.
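
For reference, the enablement that MSDN page describes is a set of AddReg lines 
in the driver's INF; the section names below are illustrative, only the 
registry values matter:

[VirtioSerial_Install.NT.HW]
AddReg = VirtioSerial_MSI_AddReg

[VirtioSerial_MSI_AddReg]
HKR, "Interrupt Management",, 0x00000010
HKR, "Interrupt Management\MessageSignaledInterruptProperties",, 0x00000010
HKR, "Interrupt Management\MessageSignaledInterruptProperties", MSISupported, 0x00010001, 1
HKR, "Interrupt Management\MessageSignaledInterruptProperties", MessageNumberLimit, 0x00010001, 2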



Christian







Re: [Qemu-devel] [question] virtio-blk performance degradation happened with virtio-serial

2014-09-01 Thread Christian Borntraeger
On 01/09/14 16:03, Christian Borntraeger wrote:
 On 01/09/14 15:29, Paolo Bonzini wrote:
 On 01/09/2014 15:22, Christian Borntraeger wrote:
 If virtio-blk and virtio-serial share an IRQ, the guest operating system 
 has to check each virtqueue for activity. Maybe there is some 
 inefficiency doing that.
 AFAIK virtio-serial registers 64 virtqueues (on 31 ports + console) even 
 if everything is unused.

 That could be the case if MSI is disabled.

 Do the Windows virtio drivers enable MSIs in their INF file?

 It depends on the version of the drivers, but it is a reasonable guess
 at what differs between Linux and Windows.  Haoyu, can you give us the
 output of lspci from a Linux guest?

 Paolo
 
 Zhang Haoyu, which virtio drivers did you use?
 
 I just checked the Fedora virtio driver. The INF file does not contain the 
 MSI enablement as described in
 http://msdn.microsoft.com/en-us/library/windows/hardware/ff544246%28v=vs.85%29.aspx
 That would explain the performance issues - assuming the information at that 
 link is still accurate.

Sorry, I looked at the wrong INF file. The Fedora driver does use MSI for serial 
and block.
