On Aug 31, 2011, at 1:53 PM, Johannes Pfau wrote:

> Andrei Alexandrescu wrote:
>> 
>> This will happen all the more if you have multiple threads.
>> 
>> You clearly have a good expertise on OS, I/O, and related matters, and
>> I am honoured by your presence and by the value you're adding to this 
>> group. However, in this particular argument you're only firing blanks. 
>> Are you sure you have a case?
> 
> So why don't we benchmark this?
> Here's a first, naive attempt:
> https://gist.github.com/1184671
> 
> There's not much to do for the sync implementation, but the async one
> should maybe keep a pool of buffers and reuse those. However, the async
> implementation was already ~10% faster in simple tests (copying a 700 MB
> Ubuntu .iso; source and target on the same hard disk). I wouldn't have
> expected this, but it seems the async copy is actually faster.

Interesting.  I ran similar tests a while back using socket I/O, where one side 
of the operation was on a machine with ipfw limiting bandwidth to 1.5 Mbps 
(there's a sketch of the ipfw setup at the end of this message), and couldn't 
produce a meaningful difference between the synchronous and asynchronous copy 
schemes.  I've included a snippet of my results below.  copy_std is the 
synchronous copy and copy_msg is the asynchronous one using message passing.  
I believe the code is modeled on samples from TDPL.
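
Roughly, the two schemes look like this (a minimal sketch of the pattern from 
memory rather than the actual test code; the chunk size, names, and 
done-sentinel protocol are all illustrative):

    import std.concurrency;
    import std.stdio;

    enum chunkSize = 64 * 1024; // illustrative buffer size

    // copy_std style: a single thread alternates reads and writes.
    void copySync(string src, string dst)
    {
        auto fin  = File(src, "rb");
        auto fout = File(dst, "wb");
        foreach (chunk; fin.byChunk(chunkSize))
            fout.rawWrite(chunk);
    }

    // copy_msg style: the reader sends each chunk to a writer thread,
    // so the next read can overlap the previous write.
    void copyAsync(string src, string dst)
    {
        auto fin    = File(src, "rb");
        auto writer = spawn(&writerLoop, dst);
        foreach (chunk; fin.byChunk(chunkSize))
            writer.send(chunk.idup); // idup: an immutable copy may cross threads
        writer.send(true);           // done sentinel
    }

    void writerLoop(string dst)
    {
        auto fout = File(dst, "wb");
        for (bool done; !done; )
            receive((immutable(ubyte)[] chunk) { fout.rawWrite(chunk); },
                    (bool _) { done = true; });
    }

    void main(string[] args)
    {
        // args are illustrative: source, sync target, async target
        copySync(args[1], args[2]);
        copyAsync(args[1], args[3]);
    }

The spawned thread is non-daemon, so the runtime waits for the writer to drain 
its mailbox before the program exits.  Note the idup per chunk: that per-chunk 
allocation is exactly what a reused buffer pool, as Johannes suggests, would 
avoid.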


Tests were run using a 1.6 MB file, which should take roughly 8.5 seconds to 
transfer over the wire at 1.5 Mbps (1.6 MB is about 12.8 Mbit, and 12.8 / 1.5 
is about 8.5).  I ran the tests using "time copy_xxx", and the times below are 
the "real" time reported.  Numbers are ballpark averages from 3 runs.

              local/local   remote/local   local/remote
copy_std      0.018s        8.677s         0.085s
copy_msg      0.022s        8.710s         0.088s

The results weren't what I expected, so I tried again with a 4.8 MB file for 
comparison:

              local/local   remote/local   local/remote
copy_std      0.035s        26.522s        0.210s
copy_msg      0.040s        26.382s        0.232s
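
For reference, an ipfw setup along these lines produces the 1.5 Mbps cap 
(from memory; the pipe number and port are illustrative, and this assumes a 
dummynet-enabled kernel):

    # create a dummynet pipe limited to 1.5 Mbps
    ipfw pipe 1 config bw 1500Kbit/s
    # route TCP traffic to the test port through the pipe
    ipfw add pipe 1 tcp from any to any dst-port 4444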
