On Wed, Nov 13, 2013 at 3:08 PM, Ben Noordhuis <[email protected]> wrote:
> On Wed, Nov 13, 2013 at 11:45 AM, Daniel Micay <[email protected]> wrote:
>> Rust's requirements for asynchronous I/O would be filled well by direct usage
>> of IOCP on Windows. However, Linux only has solid support for non-blocking
>> sockets because file operations usually just retrieve a result from cache and
>> do not truly have to block. This results in libuv being significantly slower
>> than blocking I/O for most common cases for the sake of scalable socket
>> servers.
>
> Libuv maintainer here.  Libuv's current approach to dealing with
> blocking I/O is fairly crude: it offloads everything to a rather
> unsophisticated thread pool.  There is plenty of room for improvement.
>
> So far relatively little effort has gone into optimizing file I/O
> because it's not very critical for node.js.  I've been looking for an
> excuse to spend more time on it so please file issues or post
> suggestions.  If you have test cases or benchmarks where libuv is
> significantly lagging, please point me to them and I'll see what I can
> do.

I expect that regardless of how much effort is put into libuv, it
won't be *as fast* as blocking I/O for the common small-scale cases
most people care about. Rust wants to appeal to C++ programmers, and
it isn't going to do that if there's anything more than a 10-15%
performance hit (for CPU-bound or I/O-bound work).
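As a rough illustration of where the overhead comes from, here's the
kind of comparison I have in mind, as a small Rust sketch. Spawning a
thread per read overstates what a pooled implementation like libuv's
would cost, so treat it as an upper bound rather than a benchmark; the
file path and iteration count are arbitrary.

    // Rough sketch: time N small reads done directly vs. offloaded to a
    // worker thread (a crude stand-in for a libuv-style thread pool).
    use std::fs::File;
    use std::io::Read;
    use std::sync::mpsc;
    use std::thread;
    use std::time::Instant;

    fn read_small(path: &str) -> std::io::Result<Vec<u8>> {
        let mut buf = Vec::new();
        File::open(path)?.read_to_end(&mut buf)?;
        Ok(buf)
    }

    fn main() -> std::io::Result<()> {
        let path = "/etc/hostname"; // small file, almost certainly in the page cache
        let n = 10_000;

        // Blocking path: usually just a copy out of the page cache.
        let start = Instant::now();
        for _ in 0..n {
            read_small(path)?;
        }
        println!("blocking:  {:?}", start.elapsed());

        // Offloaded path: the same read, dispatched to another thread with
        // the result sent back over a channel.
        let start = Instant::now();
        for _ in 0..n {
            let (tx, rx) = mpsc::channel();
            let p = path.to_string();
            thread::spawn(move || {
                let _ = tx.send(read_small(&p));
            });
            rx.recv().unwrap()?;
        }
        println!("offloaded: {:?}", start.elapsed());
        Ok(())
    }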

In a 64-bit world, even traditional blocking I/O looks pretty bad when
you can simply memory-map everything: the address space is large
enough to map the contents of many disks many times over.
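Something along these lines, as a sketch; it assumes the `libc`
crate's mmap/munmap bindings, is Unix-only, and keeps error handling
to a minimum.

    // Sketch: map a file instead of read()-ing it. With a 64-bit address
    // space you can keep a great many files mapped at once and let page
    // faults pull the data in. Assumes the `libc` crate; Unix-only.
    use std::fs::File;
    use std::os::unix::io::AsRawFd;

    fn main() -> std::io::Result<()> {
        let file = File::open("/etc/hostname")?;
        let len = file.metadata()?.len() as usize;

        let ptr = unsafe {
            libc::mmap(
                std::ptr::null_mut(),
                len,
                libc::PROT_READ,
                libc::MAP_PRIVATE,
                file.as_raw_fd(),
                0,
            )
        };
        assert!(ptr != libc::MAP_FAILED, "mmap failed");

        // No explicit read calls: touching the slice faults pages in on demand.
        let data = unsafe { std::slice::from_raw_parts(ptr as *const u8, len) };
        println!("mapped {} bytes, first byte: {:?}", data.len(), data.first());

        unsafe { libc::munmap(ptr, len) };
        Ok(())
    }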

> Apropos IOCP, it's no panacea.  It has the same issue that native AIO
> on Linux has: it can silently turn asynchronous operations into
> synchronous ones.  It's sometimes useful but it's not a general
> solution for all things AIO.
>
>> On modern systems with flash memory, including mobile, there is a 
>> *consistent*
>> and relatively small worst-case latency for accessing data on the disk so
>> blocking is essentially a non-issue.
>
> I'm not sure if I can agree with that.  One of the issues with
> blocking I/O is that the calling thread gets rescheduled when the
> operation cannot be completed immediately.  Depending on workload and
> system scheduler, it may get penalized when blocking often.

Assuming Google lands their user-mode scheduling work, we would have
control over the scheduling of tasks with I/O-bound workloads. They
could be queued in a round-robin style, or weighted by whatever we
consider important (arrival time of a request, etc.).
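A minimal sketch of what I mean by a round-robin run queue; the Task
type and its resume() method are placeholders, not anything from the
actual runtime.

    // Minimal sketch of a round-robin run queue for I/O-bound tasks.
    use std::collections::VecDeque;

    enum State {
        Yielded,  // blocked on I/O or preempted; put it back in the queue
        Finished, // done; drop it
    }

    struct Task {
        id: u64,
        steps_left: u32, // stand-in for "how much work remains"
    }

    impl Task {
        fn resume(&mut self) -> State {
            // A real scheduler would switch to the task's stack and run it
            // until it blocks; here we just count down.
            self.steps_left -= 1;
            if self.steps_left == 0 { State::Finished } else { State::Yielded }
        }
    }

    struct Scheduler {
        run_queue: VecDeque<Task>,
    }

    impl Scheduler {
        fn run(&mut self) {
            // Plain round-robin: pop, run until it yields, push to the back.
            // A weighted variant would pick by arrival time, priority, etc.
            while let Some(mut task) = self.run_queue.pop_front() {
                match task.resume() {
                    State::Yielded => self.run_queue.push_back(task),
                    State::Finished => println!("task {} finished", task.id),
                }
            }
        }
    }

    fn main() {
        let mut sched = Scheduler {
            run_queue: (0u64..4).map(|id| Task { id, steps_left: 3 }).collect(),
        };
        sched.run();
    }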

This is already available on the 64-bit versions of Windows
7/8/Server, and I don't think we need to worry about XP/Vista: even
security updates will be cut off soon, and on 32-bit, tasks won't
scale any better than threads anyway.

It would make sense to keep using libuv for the cases where it's
faster than blocking calls, but we wouldn't *have* to use it, because
control would be returned to the scheduler on a blocking call or page
fault.
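Roughly this shape, with the libuv-backed path as just one
interchangeable backend behind a common interface; the trait and both
backends here are made-up names, and the offloaded one is stubbed out
rather than actually bound to libuv.

    // Sketch of the hybrid idea: one interface, two interchangeable backends.
    use std::io;

    trait FileRead {
        fn read_file(&self, path: &str) -> io::Result<Vec<u8>>;
    }

    // Plain blocking read: fine once a user-mode scheduler can reclaim the
    // thread whenever the call actually blocks or page-faults.
    struct Blocking;

    impl FileRead for Blocking {
        fn read_file(&self, path: &str) -> io::Result<Vec<u8>> {
            std::fs::read(path)
        }
    }

    // libuv-backed path, kept only for the cases where it measures faster.
    // Stubbed with a direct read so the sketch stays self-contained.
    struct Offloaded;

    impl FileRead for Offloaded {
        fn read_file(&self, path: &str) -> io::Result<Vec<u8>> {
            std::fs::read(path)
        }
    }

    fn main() -> io::Result<()> {
        // Pick the backend per workload (or per call) instead of mandating one.
        let use_pool = false;
        let backend: &dyn FileRead = if use_pool { &Offloaded } else { &Blocking };
        let data = backend.read_file("/etc/hostname")?;
        println!("read {} bytes", data.len());
        Ok(())
    }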