On 12/19/18 12:54 PM, H. S. Teoh wrote:
> On Wed, Dec 19, 2018 at 11:56:44AM -0500, Steven Schveighoffer via
> Digitalmars-d-announce wrote:
>> I had expected *some* improvement; I even wrote a "grep-like" example
>> that tries to keep a lot of data in the buffer such that moving the
>> data will be an expensive copy. I got no measurable difference.
>>
>> I would suspect due to that experience that any gains made in not
>> copying would be dwarfed by the performance of network i/o vs. disk
>> i/o.
> [...]
>
> Ahh, that makes sense.  Did you test async I/O?  Not that I expect any
> difference there either if you're I/O-bound; but reducing CPU load in
> that case frees it up for other tasks.  I don't know how easy it would
> be to test this, but I'm curious about what results you might get if you
> had a compute-intensive background task that runs while waiting for
> async I/O, and then measured how much of the computation went through
> while running the grep-like part of the code with either the circular
> buffer or the moving buffer when each async request comes back.
>
> Though that seems like a rather contrived example, since normally you'd
> just spawn a different thread and let the OS handle the async for you.

The expectation in iopipe is that async i/o will be done à la vibe.d-style fiber-based i/o.
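
For illustration, here's a minimal sketch of that fiber model using core.thread.Fiber directly rather than vibe.d's actual API -- the dataReady flag and the driver loop are just stand-ins for the event loop, and the point is that the buffer-handling code itself is untouched:

import core.thread : Fiber;
import std.stdio : writeln;

void main()
{
    bool dataReady = false;

    // Reader fiber: in a real vibe.d program the yield happens inside the
    // read call itself when it would block; the parsing/buffer code stays
    // the same as in the synchronous case.
    auto reader = new Fiber({
        while (!dataReady)
            Fiber.yield();          // hand the CPU to other fibers/tasks
        writeln("data arrived, process it exactly as in the sync case");
    });

    // Stand-in for the event loop: resume the fiber until it finishes.
    while (reader.state != Fiber.State.TERM)
    {
        reader.call();
        dataReady = true;           // pretend the fd just became readable
    }
}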

But even then, the cost of copying doesn't go up -- if it's negligible in synchronous i/o, it's going to be negligible in async i/o. If anything, it's going to be even less noticeable. It was quite a disappointment to me, actually.
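
To make the comparison concrete, the test is morally equivalent to the following hand-rolled sketch (not the actual iopipe code -- the buffer size, the grepLike name, and the line handling are just illustrative):

import std.algorithm : canFind;
import std.stdio : File, stdin, writeln;
import std.string : indexOf;

// Hand-rolled "moving buffer" grep: complete lines are consumed from the
// front, and the leftover partial line is slid back to offset 0 before the
// next read. A circular buffer exists to avoid exactly that memmove.
void grepLike(File input, string needle)
{
    auto buf = new char[](64 * 1024);   // assumes no line outgrows the buffer
    size_t have = 0;                    // valid bytes currently in buf

    for (;;)
    {
        auto got = input.rawRead(buf[have .. $]);
        have += got.length;

        // consume all complete lines currently in the buffer
        size_t start = 0;
        ptrdiff_t nl;
        while ((nl = buf[start .. have].indexOf('\n')) >= 0)
        {
            auto line = buf[start .. start + nl];
            if (line.canFind(needle))
                writeln(line);
            start += nl + 1;
        }

        if (got.length == 0)
        {
            // EOF: whatever is left is a final line with no newline
            if (start < have && buf[start .. have].canFind(needle))
                writeln(buf[start .. have]);
            break;
        }

        // the copy under discussion: slide the unconsumed tail to the front
        import core.stdc.string : memmove;
        memmove(buf.ptr, buf.ptr + start, have - start);
        have -= start;
    }
}

void main(string[] args)
{
    grepLike(stdin, args.length > 1 ? args[1] : "needle");
}

Even when the unconsumed tail is deliberately kept large, that memmove never shows up next to the read itself.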

-Steve
