I agree that message-passing in general can be made as incomprehensible and non-deterministic as threads, so one should choose one's message-passing abstraction carefully. But Unix pipes, for example, guarantee that messages between two processes will be received in the same order they are sent, which is already a much stronger guarantee than what you get with threads and their do-it-yourself approach to synchronization.
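To make the pipe ordering guarantee concrete, here is a minimal Python sketch (Unix-only, since it uses fork): a child process writes a sequence of messages into a pipe, and the parent reads them back in exactly the order they were written, with no locks or other synchronization needed.

```python
import os

# Demonstrate the FIFO guarantee of a Unix pipe: bytes written by one
# process are received by the other in exactly the order they were sent.
r, w = os.pipe()
pid = os.fork()
if pid == 0:                      # child: the sender
    os.close(r)
    for i in range(5):
        os.write(w, b"msg-%d\n" % i)
    os.close(w)
    os._exit(0)
else:                             # parent: the receiver
    os.close(w)
    data = b""
    while True:
        chunk = os.read(r, 4096)
        if not chunk:             # writer closed its end: EOF
            break
        data += chunk
    os.close(r)
    os.waitpid(pid, 0)
    print(data.decode(), end="")  # msg-0 through msg-4, in send order
```

The ordering here is enforced by the kernel, not by the programmer; that is precisely the complexity the abstraction hides.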
So even an abstraction as simple as pipes already hides a lot of complexity that shared-memory designs have to deal with explicitly. If you look at Erlang, a language designed to be concurrent at its core, its principal concurrency abstraction is message-passing, not shared memory. Not to mention that most research efforts into concurrent languages ended up devising process calculi with much, much stronger determinism than shared-memory concurrency.

The moral of the story is that systems whose behavior is governed by an explicit, unchanging set of states and transitions are much easier to reason about than shared-memory systems whose behavior may be determined by instruction timing, something the programmer has no control over and cannot observe without very sophisticated debugging tools. Message passing can provide this explicit determinism, while shared-memory threads might achieve it only at the expense of very complicated (and slow) locking semantics.

-Ivan

Aleksej Saushev <[EMAIL PROTECTED]> writes:

> If you have two threads, that communicate passing messages, you
> have the same case, only without unnecessary context switches
> and other overhead. Some debugging problems are solved either.
>
> Not all of them, of course, since the complexity just moves to
> message passing protocol instead of resource usage policy.

_______________________________________________
Chicken-users mailing list
Chicken-users@nongnu.org
http://lists.nongnu.org/mailman/listinfo/chicken-users