Sean Kelly wrote:
Walter Bright wrote:

Russel Winder wrote:
At the heart of all this is that programmers are taught that an algorithm
is a sequence of actions to achieve a goal. Programmers are trained to
think sequentially, and this affects their coding. This means that
parallelism has to be expressed at a sufficiently high level that
programmers can still reason about algorithms as sequential things.
I think it's more than being trained to think sequentially. I think sequential thinking is inherent in how our minds work.

Distributed programming is essentially a bunch of little sequential programs
that interact, which is basically how people cooperate in the real world. I
think that is by far the most intuitive of any concurrent programming model,
though it's still a significant conceptual shift from the traditional
monolithic imperative program.
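
A minimal sketch of that model, purely as illustration (in Go here, though nothing in the argument hinges on the language): each worker is an ordinary sequential program, and the only interaction between the parts is messages over channels.

    package main

    import "fmt"

    // Each worker is a plain sequential program: it reads a request,
    // does its work, and sends back a reply. The only interaction
    // between the parts is message passing over channels.
    func worker(id int, requests <-chan int, replies chan<- string) {
        for n := range requests {
            replies <- fmt.Sprintf("worker %d: %d squared is %d", id, n, n*n)
        }
    }

    func main() {
        requests := make(chan int)
        replies := make(chan string)

        // A small "team" of sequential processes cooperating by messages.
        for id := 1; id <= 3; id++ {
            go worker(id, requests, replies)
        }

        // Hand out five jobs, then tell the team there is no more work.
        go func() {
            for n := 1; n <= 5; n++ {
                requests <- n
            }
            close(requests)
        }()

        // Collect one reply per job.
        for i := 0; i < 5; i++ {
            fmt.Println(<-replies)
        }
    }

Each piece stays easy to reason about on its own; the concurrency lives entirely in the message traffic between them.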

The Erlang people seem to say that a lot. The thing they omit to say, though, is that it is very, very difficult in the real world! Consider managing a team of ten people. Getting them to be ten times as productive as a single person is extremely difficult -- virtually impossible, in fact.

I agree with Walter -- I don't think it's got much to do with programmer training. It's a problem that hasn't been solved in the real world in the general case.

The analogy with the real world suggests to me that there are three cases that work well:
* massively parallel;
* _completely_ independent tasks (see the sketch below); and
* very small teams.

Large teams are a management nightmare, and I see no reason to believe that wouldn't hold true for a large number of cores as well.
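
For what it's worth, the "completely independent tasks" case from the list above is the one that scales almost for free, precisely because nobody has to manage anybody. A hedged sketch, again in Go: each task owns its own output, and the only coordination is waiting for everyone to finish.

    package main

    import (
        "fmt"
        "sync"
    )

    func main() {
        inputs := []int{2, 3, 5, 7, 11}
        results := make([]int, len(inputs))

        var wg sync.WaitGroup
        // Each task writes only to its own slot, so there is no shared
        // mutable state to manage and no communication until the final join.
        for i, n := range inputs {
            wg.Add(1)
            go func(i, n int) {
                defer wg.Done()
                results[i] = n * n // entirely independent work
            }(i, n)
        }
        wg.Wait()

        fmt.Println(results) // [4 9 25 49 121]
    }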
