Re: [Qemu-devel] [5323] Implement an fd pool to get real AIO with posix-aio

2008-09-26 Thread Ryan Harper
* Anthony Liguori <[EMAIL PROTECTED]> [2008-09-26 13:37]:
> Ryan Harper wrote:
> >* Anthony Liguori <[EMAIL PROTECTED]> [2008-09-26 11:03]:

> >kvm: cache=on  posix-aio w/o patch |127.0 |  115.78  |   9.19
> >kvm: cache=on  posix-aio w/ patch  |126.0 |   67.35  |   9.30
> >  
> 
> It looks like 127 MB/s is pretty close to the optimal cached write time.  
> When using caching, writes can complete almost immediately so it's not 
> surprising that submission latency is so low (even though it's blocking 
> during submission).
> 
> I am surprised that w/patch has a latency that's so high.  I think that 
> suggests that requests are queuing up.  I bet increasing the aio_num 
> field would reduce this number.

Yeah, there is plenty of room to twiddle with the threads and the number of
outstanding IOs, but that'll take quite a bit of time to generate the data
and compare.
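
For reference, in glibc's POSIX AIO implementation those knobs are the
aio_threads and aio_num fields handed to aio_init() before the first request
is submitted.  A minimal sketch of that kind of tuning, with illustrative
values rather than whatever the patch actually uses:

/* Hypothetical tuning sketch: aio_init() (a GNU extension) sizes the
 * posix-aio worker-thread pool and the number of simultaneous requests
 * it should expect.  Values here are illustrative only. */
#define _GNU_SOURCE
#include <aio.h>
#include <string.h>

static void tune_posix_aio(void)
{
    struct aioinit ai;

    memset(&ai, 0, sizeof(ai));
    ai.aio_threads = 64;   /* worker threads, e.g. one per pooled fd */
    ai.aio_num = 64;       /* expected number of in-flight requests */

    /* Must be called before the first aio_read()/aio_write(). */
    aio_init(&ai);
}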

> > new results ----------------------+------+------------------+------------------
> >kvm:cache=off posix-aio fd_pool[16]| 33.5 |   14.28  |  49.19
> >kvm:cache=off posix-aio fd_pool[64]| 51.1 |   14.86  |  23.66
> >  
> 
> I assume you tried to bump from 64 to something higher and couldn't make 
> up the lost bandwidth?

Very slightly, switching to 128 threads/fds gave another 1MB/s. 

> >16k write 1 thread, 74 iodepth | MB/s | avg sub lat (us) | avg comp lat (ms)
> >-----------------------------------+------+------------------+------------------
> >baremetal (O_DIRECT, aka cache=off)|128.1 |   10.90  |   9.45
> >kvm: cache=off posix-aio w/o patch |  5.1 | 3152.00  | 231.06 
> >kvm: cache=off linux-aio   |130.0 |   83.83  |   8.99
> >kvm: cache=on  posix-aio w/o patch |184.0 |   80.46  |   6.35
> >kvm: cache=on  posix-aio w/ patch  |165.0 |   70.90  |   7.09
> > new results ----------------------+------+------------------+------------------
> >kvm:cache=off posix-aio fd_pool[16]| 78.2 |   58.24  |  15.43
> >kvm:cache=off posix-aio fd_pool[64]|129.0 |   71.62  |   9.11
> >  
> 
> That's a nice result.  We could probably improve the latency by tweaking 
> the queue sizes.

Yeah, I was quite pleased to see a simpler solution perform so well.
> 
> Very nice work!  Thanks for doing the thorough analysis.

Thanks, very happy to see a significant improvement in IO here.

-- 
Ryan Harper
Software Engineer; Linux Technology Center
IBM Corp., Austin, Tx
(512) 838-9253   T/L: 678-9253
[EMAIL PROTECTED]


Re: [Qemu-devel] [5323] Implement an fd pool to get real AIO with posix-aio

2008-09-26 Thread Anthony Liguori

Ryan Harper wrote:

> * Anthony Liguori <[EMAIL PROTECTED]> [2008-09-26 11:03]:
> > Revision: 5323
> >   http://svn.sv.gnu.org/viewvc/?view=rev&root=qemu&revision=5323
> > Author:   aliguori
> > Date: 2008-09-26 15:59:29 +0000 (Fri, 26 Sep 2008)
> > 
> > Log Message:
> > ---
> > Implement an fd pool to get real AIO with posix-aio
> > 
> > This patch implements a simple fd pool to allow many AIO requests with
> > posix-aio.  The result is significantly improved performance (identical to that
> > reported for linux-aio) for both cache=on and cache=off.
> > 
> > The fundamental problem with posix-aio is that it limits itself to one thread
> > per-file descriptor.  I don't know why this is, but this patch provides a simple
> > mechanism to work around this (duplicating the file descriptor).
> > 
> > This isn't a great solution, but it seems like a reasonable intermediate step
> > between posix-aio and a custom thread-pool to replace it.
> > 
> > Ryan Harper will be posting some performance analysis he did comparing posix-aio
> > with fd pooling against linux-aio.  The size of the posix-aio thread pool and
> > the fd pool were largely determined by him based on this analysis.
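
The mechanism the log message describes is small enough to sketch: open the
image once, dup() the descriptor a fixed number of times, and spread AIO
submissions across the duplicates so glibc's posix-aio services the same file
with more than one worker thread.  The following is only an illustration of
the idea, with made-up names and pool size; it is not the actual QEMU code:

/* Sketch of the fd-pool idea: duplicate one descriptor N times and
 * round-robin submissions across the copies, so posix-aio's
 * one-thread-per-fd limit becomes one thread per pool entry.
 * Error handling trimmed; names and sizes are illustrative. */
#include <aio.h>
#include <string.h>
#include <unistd.h>

#define FD_POOL_SIZE 16

struct fd_pool {
    int fds[FD_POOL_SIZE];
    int next;                    /* round-robin cursor */
};

static int fd_pool_init(struct fd_pool *pool, int fd)
{
    pool->fds[0] = fd;
    pool->next = 0;
    for (int i = 1; i < FD_POOL_SIZE; i++) {
        pool->fds[i] = dup(fd);  /* same open file, distinct descriptor */
        if (pool->fds[i] < 0)
            return -1;
    }
    return 0;
}

/* Hand out a different duplicate for each request so requests land on
 * different descriptors and therefore on different worker threads. */
static int fd_pool_get(struct fd_pool *pool)
{
    int fd = pool->fds[pool->next];
    pool->next = (pool->next + 1) % FD_POOL_SIZE;
    return fd;
}

static int submit_pwrite(struct fd_pool *pool, struct aiocb *cb,
                         void *buf, size_t len, off_t offset)
{
    memset(cb, 0, sizeof(*cb));
    cb->aio_fildes = fd_pool_get(pool);
    cb->aio_buf = buf;
    cb->aio_nbytes = len;
    cb->aio_offset = offset;
    return aio_write(cb);
}

Whether the descriptors are handed out round-robin or tied to request slots
is a detail; the point is simply that glibc sees many descriptors instead of
one.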



> I'll have some more data to post in a bit, but for now, bumping the fd
> pool up to 64 and ensuring we init aio to support a thread per fd, we
> mostly match linux-aio performance with a simpler implementation.  For
> random writes, fd_pool lags a bit, but I've got other data that shows in
> most scenarios, fd_pool matches linux-aio performance and does so with
> less CPU consumption.
> 
> Results:
> 
> 16k randwrite 1 thread, 74 iodepth | MB/s | avg sub lat (us) | avg comp lat (ms)
> -----------------------------------+------+------------------+------------------
> baremetal (O_DIRECT, aka cache=off)| 61.2 |   13.07  |  19.59
> kvm: cache=off posix-aio w/o patch |  4.7 | 3467.44  | 254.08


So with posix-aio, once we have many requests, each request is going to
block until the request completes.  I don't fully understand why the
average completion latency is so high because in theory, there should be
no delay between completion and submission.  Maybe it has to do with the
fact that we spend so much time blocking during submission that the
io-thread doesn't get a chance to run.  I bet if we dropped the
qemu_mutex during submission, the completion latency would drop to a
very small number.  Not worth actually testing.

> kvm: cache=off linux-aio   | 61.1 |   75.35  |  19.57


The fact that the submission latency is so high confirms what I've been
saying about linux-aio submissions being very suboptimal.  That is really
quite high.

> kvm: cache=on  posix-aio w/o patch |127.0 |  115.78  |   9.19
> kvm: cache=on  posix-aio w/ patch  |126.0 |   67.35  |   9.30


It looks like 127 MB/s is pretty close to the optimal cached write time.
When using caching, writes can complete almost immediately so it's not
surprising that submission latency is so low (even though it's blocking
during submission).

I am surprised that w/patch has a latency that's so high.  I think that
suggests that requests are queuing up.  I bet increasing the aio_num
field would reduce this number.

>  new results ----------------------+------+------------------+------------------
> kvm:cache=off posix-aio fd_pool[16]| 33.5 |   14.28  |  49.19
> kvm:cache=off posix-aio fd_pool[64]| 51.1 |   14.86  |  23.66


I assume you tried to bump from 64 to something higher and couldn't make
up the lost bandwidth?

> 16k write 1 thread, 74 iodepth | MB/s | avg sub lat (us) | avg comp lat (ms)
> -----------------------------------+------+------------------+------------------
> baremetal (O_DIRECT, aka cache=off)|128.1 |   10.90  |   9.45
> kvm: cache=off posix-aio w/o patch |  5.1 | 3152.00  | 231.06
> kvm: cache=off linux-aio   |130.0 |   83.83  |   8.99
> kvm: cache=on  posix-aio w/o patch |184.0 |   80.46  |   6.35
> kvm: cache=on  posix-aio w/ patch  |165.0 |   70.90  |   7.09
>  new results ----------------------+------+------------------+------------------
> kvm:cache=off posix-aio fd_pool[16]| 78.2 |   58.24  |  15.43
> kvm:cache=off posix-aio fd_pool[64]|129.0 |   71.62  |   9.11


That's a nice result.  We could probably improve the latency by tweaking
the queue sizes.

Very nice work!  Thanks for doing the thorough analysis.

Regards,

Anthony Liguori


Re: [Qemu-devel] [5323] Implement an fd pool to get real AIO with posix-aio

2008-09-26 Thread Ryan Harper
* Anthony Liguori <[EMAIL PROTECTED]> [2008-09-26 11:03]:
> Revision: 5323
>   http://svn.sv.gnu.org/viewvc/?view=rev&root=qemu&revision=5323
> Author:   aliguori
> Date: 2008-09-26 15:59:29 +0000 (Fri, 26 Sep 2008)
> 
> Log Message:
> ---
> Implement an fd pool to get real AIO with posix-aio
> 
> This patch implements a simple fd pool to allow many AIO requests with
> posix-aio.  The result is significantly improved performance (identical to that
> reported for linux-aio) for both cache=on and cache=off.
> 
> The fundamental problem with posix-aio is that it limits itself to one thread
> per-file descriptor.  I don't know why this is, but this patch provides a simple
> mechanism to work around this (duplicating the file descriptor).
> 
> This isn't a great solution, but it seems like a reasonable intermediate step
> between posix-aio and a custom thread-pool to replace it.
> 
> Ryan Harper will be posting some performance analysis he did comparing posix-aio
> with fd pooling against linux-aio.  The size of the posix-aio thread pool and
> the fd pool were largely determined by him based on this analysis.

I'll have some more data to post in a bit, but for now, bumping the fd
pool up to 64 and ensuring we init aio to support a thread per fd, we
mostly match linux-aio performance with a simpler implementation.  For
random writes, fd_pool lags a bit, but I've got other data that shows in
most scenarios, fd_pool matches linux-aio performance and does so with
less CPU consumption.

Results:

16k randwrite 1 thread, 74 iodepth | MB/s | avg sub lat (us) | avg comp lat (ms)
-----------------------------------+------+------------------+------------------
baremetal (O_DIRECT, aka cache=off)| 61.2 |   13.07  |  19.59
kvm: cache=off posix-aio w/o patch |  4.7 | 3467.44  | 254.08
kvm: cache=off linux-aio   | 61.1 |   75.35  |  19.57
kvm: cache=on  posix-aio w/o patch |127.0 |  115.78  |   9.19
kvm: cache=on  posix-aio w/ patch  |126.0 |   67.35  |   9.30
 new results ----------------------+------+------------------+------------------
kvm:cache=off posix-aio fd_pool[16]| 33.5 |   14.28  |  49.19
kvm:cache=off posix-aio fd_pool[64]| 51.1 |   14.86  |  23.66


16k write 1 thread, 74 iodepth | MB/s | avg sub lat (us) | avg comp lat (ms)
-----------------------------------+------+------------------+------------------
baremetal (O_DIRECT, aka cache=off)|128.1 |   10.90  |   9.45
kvm: cache=off posix-aio w/o patch |  5.1 | 3152.00  | 231.06 
kvm: cache=off linux-aio   |130.0 |   83.83  |   8.99
kvm: cache=on  posix-aio w/o patch |184.0 |   80.46  |   6.35
kvm: cache=on  posix-aio w/ patch  |165.0 |   70.90  |   7.09
 new results ----------------------+------+------------------+------------------
kvm:cache=off posix-aio fd_pool[16]| 78.2 |   58.24  |  15.43
kvm:cache=off posix-aio fd_pool[64]|129.0 |   71.62  |   9.11
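
The thread does not say which tool produced these numbers, but the workload
shape in the table headers is simple: a single submitting thread keeping
roughly 74 16k writes in flight against an O_DIRECT file.  Purely as an
illustration of that pattern (not the benchmark used above), a self-contained
POSIX AIO version might look like this:

/* Illustration of a "16k write, 1 thread, 74 iodepth" pattern with POSIX
 * AIO and O_DIRECT.  Sequential offsets; a randwrite run would randomize
 * them.  Error handling and timing are omitted; this is not the tool that
 * generated the tables above. */
#define _GNU_SOURCE
#include <aio.h>
#include <errno.h>
#include <fcntl.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>

#define QUEUE_DEPTH 74
#define BLOCK_SIZE  (16 * 1024)
#define TOTAL_BYTES ((off_t)256 * 1024 * 1024)   /* illustrative run length */

int main(int argc, char **argv)
{
    int fd = open(argv[1], O_WRONLY | O_DIRECT);   /* target file as argv[1] */
    struct aiocb cbs[QUEUE_DEPTH];
    const struct aiocb *wait_list[QUEUE_DEPTH];
    off_t offset = 0;

    (void)argc;
    if (fd < 0)
        return 1;

    /* Prime the queue to its full depth of 74 requests. */
    for (int i = 0; i < QUEUE_DEPTH; i++) {
        void *buf;
        posix_memalign(&buf, 4096, BLOCK_SIZE);    /* O_DIRECT wants aligned buffers */
        memset(buf, 0xab, BLOCK_SIZE);
        memset(&cbs[i], 0, sizeof(cbs[i]));
        cbs[i].aio_fildes = fd;
        cbs[i].aio_buf = buf;
        cbs[i].aio_nbytes = BLOCK_SIZE;
        cbs[i].aio_offset = offset;
        offset += BLOCK_SIZE;
        aio_write(&cbs[i]);
        wait_list[i] = &cbs[i];
    }

    /* As requests complete, immediately resubmit them at the next offset so
     * the queue depth stays at 74 until TOTAL_BYTES have been issued.
     * (A real tool would also drain the last 74 requests and time the run.) */
    while (offset < TOTAL_BYTES) {
        aio_suspend(wait_list, QUEUE_DEPTH, NULL);
        for (int i = 0; i < QUEUE_DEPTH && offset < TOTAL_BYTES; i++) {
            if (aio_error(&cbs[i]) == EINPROGRESS)
                continue;
            aio_return(&cbs[i]);
            cbs[i].aio_offset = offset;
            offset += BLOCK_SIZE;
            aio_write(&cbs[i]);
        }
    }
    return 0;
}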


-- 
Ryan Harper
Software Engineer; Linux Technology Center
IBM Corp., Austin, Tx
(512) 838-9253   T/L: 678-9253
[EMAIL PROTECTED]