At 12/20/2016 06:01 PM, Stefan Hajnoczi wrote:
On Tue, Dec 20, 2016 at 9:54 AM, Dou Liyang <[email protected]> wrote:
At 12/20/2016 05:39 PM, Stefan Hajnoczi wrote:

On Tue, Dec 20, 2016 at 12:32:40AM +0800, Fam Zheng wrote:

On Mon, 12/19 15:02, Stefan Hajnoczi wrote:

On Mon, Dec 19, 2016 at 04:51:22PM +0800, Dou Liyang wrote:

These patches aim to refactor qmp_query_blockstats() and improve its
performance by reducing its running time.

qmp_query_blockstats() is used to monitor block statistics; it queries
all the graph_bdrv_states or monitor_block_backends.
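
For illustration, the loop has roughly this shape (a simplified sketch,
not the actual QEMU code; the first_*/next_*/collect_*() helpers are
placeholders for the real iterators and stat collectors):

    BlockStatsList *query_blockstats_sketch(bool query_nodes)
    {
        BlockStatsList *head = NULL, **tail = &head;

        if (query_nodes) {
            /* Walk graph_bdrv_states: every named node in the graph. */
            BlockDriverState *bs;
            for (bs = first_named_node(); bs; bs = next_named_node(bs)) {
                BlockStatsList *entry = g_new0(BlockStatsList, 1);
                entry->value = collect_bds_stats(bs);     /* placeholder */
                *tail = entry;
                tail = &entry->next;
            }
        } else {
            /* Walk monitor_block_backends: every monitor-owned backend. */
            BlockBackend *blk;
            for (blk = first_monitor_backend(); blk;
                 blk = next_monitor_backend(blk)) {
                BlockStatsList *entry = g_new0(BlockStatsList, 1);
                entry->value = collect_blk_stats(blk);    /* placeholder */
                *tail = entry;
                tail = &entry->next;
            }
        }
        return head;
    }

Either way the whole list is walked in the main loop, so the cost of the
command grows with the number of disks.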

There are two jobs:

1 For the performance:

1.1 The time (ns) each call takes:
the number of disks  | 10    | 500
-------------------------------------
before these patches | 19429 | 667722
after these patches  | 17516 | 557044

1.2 The I/O performance degradation (%) while the monitor command runs:

the number of disks  | 10    | 500
-------------------------------------
before these patches | 1.3   | 14.2
after these patches  | 0.8   | 9.1


Do you know what is consuming the remaining 9.1%?

I'm surprised to see such a high performance impact caused by a QMP
command.


If it's "performance is 9.1% worse only during the 557044 ns when the QMP
command is being processed", it's probably becaues the main loop is
stalled a
bit, and it's not a big problem. I'd be very surprised if the degradation
is
more longer than that.


It would be interesting to compare against virtio-blk dataplane.  That
way the QMP command can execute without interfering with disk I/O
activity.
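
For reference, dataplane just means binding the virtio-blk device to its
own IOThread, e.g. with something like the following (the drive and
iothread ids here are only examples):

    -object iothread,id=iothread0 \
    -drive file=disk.img,if=none,format=raw,id=drive0 \
    -device virtio-blk-pci,drive=drive0,iothread=iothread0

With that, request handling runs in the IOThread instead of the main
loop, so a slow QMP command should not stall the guest's I/O in the same
way.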


Yes, I will try that.

I have compared against an IDE/SATA disk. I think the I/O requests are
handled by the vcpu threads, so, like virtio-blk dataplane, it shouldn't
be disturbed, but its performance also degraded.

IDE/SATA does I/O request *submission* in the vcpu thread.  Completion
is processed by the main loop.  Therefore it is also affected by slow
monitor commands.
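
The pattern looks roughly like this (illustrative sketch only, not the
actual IDE emulation code; blk_aio_preadv() is the real submission
helper, but the DiskRequest structure and complete_guest_request() are
made up for the example):

    /* Illustrative request state only, not a real QEMU structure. */
    typedef struct DiskRequest {
        BlockBackend *blk;
        int64_t offset;
        QEMUIOVector qiov;
    } DiskRequest;

    /* Runs later in the main loop's AioContext when the backend finishes
     * the I/O.  If the main loop is busy in a monitor command such as
     * query-blockstats, this callback (and the guest's completion
     * interrupt) is delayed. */
    static void request_done(void *opaque, int ret)
    {
        DiskRequest *req = opaque;
        complete_guest_request(req, ret);   /* illustrative helper */
    }

    /* Called from the vcpu thread when the guest kicks the device:
     * submission happens here, but completion does not. */
    static void submit_request(DiskRequest *req)
    {
        blk_aio_preadv(req->blk, req->offset, &req->qiov, 0,
                       request_done, req);
    }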


Can you point me to one of the completion methods in QEMU? I want to
learn more.

Thanks,

  Liyang.

Stefan
