On Wed, Nov 12, 2008 at 1:50 PM, erik quanstrom <[EMAIL PROTECTED]> wrote:
>> protocol itself. The problem is it forces the server and client to
>> synchronise on every read/write syscall, which results in terrible
>> bandwidth utilisation. Unless we see some remarkable improvements in
> [...]
>>
>>  I'm sure I've missed something, but readahead is safe for all these
>> constructs except event files, where if you readahead (and get data)
>> but the client closes the file before actually read()-ing the buffered
>
> unfortunately, client caching is problematic.
>
> the open research question is, how to make latency the
> domain of the file server(s) and not of the application.

 I don't think that is possible without changing the current set of
syscalls/algorithms... Currently we have this (where time is measured
in one-way trips across the network):

0: the client calls pread()
0: devmnt sends a Tread to the server
1: server receives Tread
1: server sends Rread
2: client receives Rread
2: pread() returns

 You can't do any better than this if the server is involved, so what
saving is there to be made aside from:

0: the client calls pread()
0: devmnt notices it has the requested data cached
0: pread() returns
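
 As a rough user-space illustration of those two paths (devmnt does
this sort of thing in the kernel; cachedpread() and Cache are names
I've made up for the sketch, and the caller initialises c->off to -1):

#include <u.h>
#include <libc.h>

enum { Blksz = 8192 };

typedef struct Cache Cache;
struct Cache {
	vlong	off;	/* offset of cached block, -1 if empty */
	long	n;	/* valid bytes in buf */
	uchar	buf[Blksz];
};

long
cachedpread(int fd, Cache *c, void *a, long n, vlong off)
{
	long m;

	/* hit: return cached data without touching the network */
	if(c->off != -1 && off >= c->off && off < c->off+c->n){
		m = c->off + c->n - off;
		if(m > n)
			m = n;
		memmove(a, c->buf + (off - c->off), m);
		return m;
	}

	/* miss: pay the full Tread/Rread round trip and refill the cache */
	m = pread(fd, c->buf, Blksz, off);
	if(m <= 0)
		return m;
	c->off = off;
	c->n = m;
	if(m > n)
		m = n;
	memmove(a, c->buf, m);
	return m;
}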

 devmnt doesn't have to be the one to choose what to cache, I suppose.
You could have the file server read ahead safely, at the cost of having
to teach devmnt (and clients in general) to deal with unsolicited
messages from file servers, and be careful about rate limiting...
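
 To sketch what that might look like on the server side (purely
hypothetical; 9P has no unsolicited messages today, so pushrread()
below is an invented stand-in for "send the client an Rread it never
asked for", and Maxpushed is the crude rate limit):

#include <u.h>
#include <libc.h>

enum { Maxpushed = 4, Blksz = 8192 };

void pushrread(int fd, uchar *buf, long n, vlong off);	/* invented */

/* after answering a Tread ending at off, speculatively push a few more blocks */
void
readahead(int fd, vlong off)
{
	uchar buf[Blksz];
	long n;
	int i;

	for(i = 0; i < Maxpushed; i++){
		n = pread(fd, buf, Blksz, off);
		if(n <= 0)
			break;
		pushrread(fd, buf, n, off);
		off += n;
	}
}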

 The only other way around the problem I can see is an alternate set
of syscalls (or library calls, but either way you'd need to rewrite
anything that uses read()).
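
 For illustration, one shape such an interface could take: split
pread() into a call that fires off the Tread and one that waits for
the Rread, so a client can keep several reads in flight.
preadstart()/preadwait() are invented names, nothing like them exists
in the current syscall set:

#include <u.h>
#include <libc.h>

typedef struct Pread Pread;

Pread*	preadstart(int fd, void *buf, long n, vlong off);	/* sends Tread, returns at once */
long	preadwait(Pread*);	/* blocks until the Rread arrives */

/* pipeline a sequential copy instead of stalling a round trip per block */
void
copyloop(int in, int out)
{
	enum { Nahead = 4, Blksz = 8192 };
	uchar buf[Nahead][Blksz];
	Pread *r[Nahead];
	vlong off;
	long n;
	int i;

	off = 0;
	for(i = 0; i < Nahead; i++){
		r[i] = preadstart(in, buf[i], Blksz, off);
		off += Blksz;
	}
	for(i = 0; ; i = (i+1)%Nahead){
		n = preadwait(r[i]);
		if(n <= 0)
			break;	/* reads still in flight would need a Tflush equivalent */
		write(out, buf[i], n);
		r[i] = preadstart(in, buf[i], Blksz, off);
		off += Blksz;	/* assumes a plain file, where a short read means EOF */
	}
}
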
-sqweek
