On Sat, Jul 24, 2010 at 12:46 PM, Marcus Daniels <mar...@snoutfarm.com> wrote:

> Roger Critchlow wrote:
>
>> I found it even more apparent on this pass through that the language is
>> very well built for the kind of parallel programming that I've become
>> comfortable with in erlang.  That is, go makes it very easy to spin off a
>> new thread/process/goroutine and establish communications using channels.
>>  This is a matter of being able to easily instantiate the appropriate graph
>> of communicating sequential processes to a computational task, receive the
>> result of the computation when it finishes or fails, and know that all the
>> cruft got cleaned up.  So if your computation can be pipelined or fanned out
>> onto multiple cores,
>>
> I can see that goroutines and channels are appealing programming
> abstractions, but have a hard time believing they could scale.  Seems like
> the more goroutines you have the more CPU cycles that will be absorbed in
> switching amongst them.    I could see how distributed Erlang would scale
> with lots of high latency _network_ messages in flight -- the amount of time
> for switching would be small compared to the latency of the message.   That
> wouldn't seem to be the case with Google Go, which would all be in core.
>
Right, but is that a Google Go problem or is it our failure to build useful
multi-core processors?

All my Erlang programs are running on one machine, but that doesn't make the
factoring into communicating processes any less pleasing to my sense of
algorithmic correctness.  If I can comfortably and correctly express the
parallel granularity of a computation, then a compiler can transform it into
any equivalent sequential form, up to simply simulating the parallelism I
wrote on a single core.  But if I can't express the parallel granularity,
then who will ever know what I was trying to do?

Erlang can scale with distribution, but it can also discover that processes
which cooperated when locally hosted fail when distributed, or vice versa.
Every receive in an Erlang program can carry a timeout, which typically
reports what failed to happen in the expected time and then dies.  That is
why Erlang comes bundled with the uselessly misnamed OTP (Open Telecom
Platform) libraries: they let you monitor process deaths, specify how much
of the system needs to be torn down and restarted when part of it chokes,
give up when it chokes repeatedly, and write logs of stultifying detail
about what happened.  At which point you open up the logs, see who
repeatedly timed out, and tweak the timeouts until things get happy again.
You can, in general, tune things to work at different scales, but not all
things and not at all scales.

Locally hosted Erlang programs can scale linearly in performance with the
number of cores, but they will probably run into the same problem that you
anticipate for Google Go at some point.

-- rec --
============================================================
FRIAM Applied Complexity Group listserv
Meets Fridays 9a-11:30 at cafe at St. John's College
lectures, archives, unsubscribe, maps at http://www.friam.org