Benjamin Franksen wrote:
> I'd be careful. Introducing a network connection into the equation makes the
> object (its methods) susceptible to a whole new bunch of failure modes;
> think indefinite delays, connection loss, network buffer overflow, etc etc.
> It may be a mistake to abstract all that away; in fact I am convinced that
> the old Unix habit of sweeping all these failure modes and potentially long
> delays under a big carpet named 'file abstraction' was a bad idea to begin
> with. The age-old and still unsolved problem of web browsers hanging
> indefinitely (without allowing any GUI interaction) while name resolution
> waits for completion is only the most prominent example.

IMO that's just a terribly stupid bug, even in the best web browsers. Maybe multithreading that is inefficient, poorly used, or not used at all?

The "file abstraction" has its points. We just need a clear, easy-to-program-with (type-level?) distinction between operations that may block indefinitely and operations that come with particular bounds on their difficulty. Admittedly, modern OSes try to balance too many things and don't usually make any such hard real-time guarantees, preferring that everything turn out more-or-less correct eventually. Back to the "file abstraction": consider the benefits of mounting remote systems as a filesystem. The hierarchical abstraction stays the same, but the performance characteristics don't... and all kinds of problems can result when the connection breaks down!

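To illustrate what I mean by a type-level distinction, here is a minimal sketch - everything below is made up for the example, not taken from any existing library. Operations carry a phantom tag recording whether they may block indefinitely, and code that must stay responsive (a GUI event handler, say) can insist on the bounded kind:

{-# LANGUAGE DataKinds, KindSignatures #-}

-- Made-up sketch: tag operations with a phantom 'Latency' so that
-- "may block indefinitely" shows up in the type.
module LatencyTag where

data Latency = Bounded | Unbounded

newtype Op (l :: Latency) a = Op { runOp :: IO a }

-- Reading an already-open local file we (optimistically) call bounded.
readLocal :: FilePath -> Op 'Bounded String
readLocal = Op . readFile

-- Anything that goes over the network may block indefinitely.
fetchRemote :: String -> Op 'Unbounded String
fetchRemote host = Op (pure ("placeholder contents from " ++ host))

-- A GUI event handler, say, only accepts operations with bounded latency.
inEventHandler :: Op 'Bounded a -> IO a
inEventHandler = runOp

A real design would need combinators for composing these, and an escape hatch for running unbounded work on another thread; the point is only that the distinction can be made visible to the type checker.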
How do you program with all those error conditions explicitly? It is difficult. You need libraries to do it well - and I'm not at all sure such libraries exist yet! I mean, programs are much too complicated already without infesting them with a lot of special cases.
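To make that concrete, here is the sort of helper such a library might offer - the names and the timeout policy are my own assumptions, not an existing API. All the failure modes become ordinary values that the caller has to pattern match on:

import Control.Exception (IOException, try)
import System.Timeout (timeout)

-- Every failure mode as an ordinary value; these names are mine.
data NetResult a
  = Success a
  | TimedOut
  | Failed IOException
  deriving Show

-- Run an action with a deadline (in microseconds) and with IO exceptions
-- caught, so the caller is forced to consider all three outcomes.
withExplicitFailures :: Int -> IO a -> IO (NetResult a)
withExplicitFailures micros act = do
  r <- timeout micros (try act)
  pure $ case r of
    Nothing        -> TimedOut
    Just (Left e)  -> Failed e
    Just (Right x) -> Success x

Whether scattering such case analyses all over a program is bearable is exactly the "lot of special cases" worry above.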

> indefinite delays
Those I can create with `someCommand | haskellProgram` too.
> connection loss
Is there a correct way to detect this? I find it rather odd when I lose my IRC connection for a moment and then it comes back a moment later (Wesnoth games are worse, apparently, as they don't reconnect automatically). I often prefer to treat it as just another indefinite delay - see the sketch after these points.
> network buffer overflow
That is: too much input, not processed fast enough? (Or something similar.) Memory size limitations go largely unhandled in programs of all sorts, not just networked ones, though networked programs may suffer the most. We wish we had true unlimited-memory Turing machines :) ...This is possibly the most difficult issue to deal with formally; it probably requires limiting input data rates artificially.
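For the first two points, here is a minimal sketch of the "treat it as an indefinite delay" policy, using recv from the network package's Network.Socket.ByteString; recvWithDeadline is my own name, not a library function. Bounding every read turns a stalled or silently dead connection into an explicit result, and the fixed chunk size is at least a crude cap on the input rate:

import qualified Data.ByteString as B
import Network.Socket (Socket)
import Network.Socket.ByteString (recv)
import System.Timeout (timeout)

-- 'recvWithDeadline' is my own name, not a library API.
recvWithDeadline :: Int -> Socket -> IO (Maybe B.ByteString)
recvWithDeadline micros sock = timeout micros (recv sock 4096)

-- Nothing    -> no data within the deadline (delay, or the peer is gone)
-- Just bs
--   | B.null bs -> the peer closed the connection cleanly
--   | otherwise -> an ordinary chunk of at most 4096 bytes

The caller then decides per call whether to retry, give up, or report the stall to the user, instead of hanging inside a read that pretends to be a file operation.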


Isaac
_______________________________________________
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe
