I think David Ungar hand-waves past way too many existing solutions here in
pursuit of his conclusion that we must abandon determinism.

Kahn Process Networks? No mention.
Time warp protocols (or the modern, lightweight variations)? No mention.
Synchronous reactive programming? No mention.
Functional reactive programming? No mention.
Dedalus, Bloom, and the CALM principles? No mention.
Temporal databases? No mention.
Multi-dimensional dataflow through cellular automata? Well, sorta, with
that MOLAP cube (*).
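
To make the first of those concrete: in a Kahn Process Network, nodes
communicate only through blocking FIFO channels, which makes the output
independent of how the nodes are scheduled. A minimal sketch in Python
(names are mine, not from any particular library):

```python
# Kahn-style process network: each process reads inputs only via blocking
# channel reads, so the result is deterministic under any thread schedule.
import queue
import threading

def producer(out_ch, items):
    for x in items:
        out_ch.put(x)
    out_ch.put(None)  # end-of-stream marker

def doubler(in_ch, out_ch):
    while True:
        x = in_ch.get()  # blocking read: the only way to observe input
        if x is None:
            out_ch.put(None)
            break
        out_ch.put(2 * x)

def run_network(items):
    a, b = queue.Queue(), queue.Queue()
    threads = [
        threading.Thread(target=producer, args=(a, items)),
        threading.Thread(target=doubler, args=(a, b)),
    ]
    for t in threads:
        t.start()
    result = []
    while True:
        x = b.get()
        if x is None:
            break
        result.append(x)
    for t in threads:
        t.join()
    return result

print(run_network([1, 2, 3]))  # always [2, 4, 6], on any schedule
```

Parallel execution, deterministic output, and no shared mutable state in
sight.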

There's a huge community of work that is being ignored here. David Ungar is
arguing from a position that assumes an imperative, shared-memory model and
implicit time.

Amdahl's law holds well enough, of course. But so does Parkinson's law.
It's easy to find more useful work for those processor cycles.
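
For the record, Amdahl's arithmetic: with parallel fraction p on n cores,
speedup is 1 / ((1 - p) + p/n). A quick sketch of why the serial fraction
dominates at 1,000 cores:

```python
# Amdahl's law: speedup = 1 / ((1 - p) + p / n), where p is the parallel
# fraction of the program and n is the core count.
def speedup(p, n):
    return 1.0 / ((1.0 - p) + p / n)

# Even a 1% serial fraction caps a 1,000-core machine at roughly 91x.
for p in (0.90, 0.99, 0.999):
    print(f"p={p}: {speedup(p, 1000):.0f}x on 1000 cores")
```

Parkinson's corollary is that we'll happily find work to fill whatever
fraction those cores can absorb.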

If we model computations as smart networks with dumb nodes - e.g. a
reactive dataflow network - we immediately have more well-partitioned work
to do (hence better scalability) than if we model computations as smart
nodes with a dumb network (like actors). This is because the number of
relationships (edges) in a graph can grow much more rapidly than the number
of components (vertices).
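
The arithmetic behind that claim: in a dense graph the edge count grows
quadratically with the vertex count, so per-edge work partitions much more
finely than per-node work. A trivial sketch:

```python
# Edges in a complete undirected graph: n * (n - 1) / 2, i.e. O(n^2),
# versus only n vertices. Partitioning work along edges therefore exposes
# quadratically more independent pieces than partitioning along nodes.
def edges_in_complete_graph(n):
    return n * (n - 1) // 2

for n in (10, 100, 1000):
    print(f"{n} vertices, up to {edges_in_complete_graph(n)} edges")
```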

Another observation is that programmers get to choose their problem. We
aren't sitting ducks for Amdahl's law. Our goal is to meet a set of
requirements, and we have much freedom to make tradeoffs in architecture
and design to better choose which obstacles we'll encounter. We can often
have scalability with determinism if we choose our problems carefully.

Determinism is too valuable to abandon easily. Where we lose determinism,
loss of productivity usually follows. Nondeterministic programs are
difficult to debug, maintain, verify, and regression test. Black-box
testing is hindered because the number of failure modes is potentially
larger than the number of inputs. Coarser granularity for nondeterminism
does help, but ultimately means the problems happen at larger scales.
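
A toy illustration of the testing problem (Python, illustrative only): the
same input yields a different answer on different runs, so a passing test
tells you very little.

```python
import threading

counter = 0

def bump(times):
    # Unsynchronized read-modify-write: another thread can run between
    # the read and the write, silently losing increments.
    global counter
    for _ in range(times):
        tmp = counter
        tmp += 1
        counter = tmp

threads = [threading.Thread(target=bump, args=(100_000,)) for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()

# Deterministically this would print 200000; with racing threads it can
# print almost anything up to that, and the value varies run to run.
print(counter)
```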

Where we do give up determinism, it should be explicit and carefully
considered, and we should have a lot of control over exactly where it leaks
into our programs.
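
A sketch of the kind of discipline I mean (hypothetical names): quarantine
the nondeterministic step behind one clearly labeled boundary, and
canonicalize before the deterministic rest of the program sees the data.

```python
import random

def nondeterministic_gather(results):
    # The one explicit nondeterministic point: results "arrive" in an
    # arbitrary order (simulated here with a shuffle).
    shuffled = list(results)
    random.shuffle(shuffled)
    return shuffled

def deterministic_pipeline(inputs):
    gathered = nondeterministic_gather(inputs)
    # Restore a canonical order so everything downstream is deterministic.
    return sorted(gathered)

print(deterministic_pipeline([3, 1, 2]))  # always [1, 2, 3]
```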

Regards,

Dave

On Tue, Mar 27, 2012 at 8:15 AM, Eugen Leitl <eu...@leitl.org> wrote:

>
> http://splashcon.org/2011/program/dls/245-invited-talk-2
>
> Mon 2:00-3:00 pm - Pavilion East
>
> Everything You Know (about Parallel Programming) Is Wrong!: A Wild Screed
> about the Future
>
> Invited speaker: David Ungar, IBM Research, USA
>
> In the 1970’s, researchers at Xerox PARC gave themselves a glimpse of the
> future by building computers that, although wildly impractical at the time,
> let them experience plentiful fast cycles and big memories. PARC researchers
> invented Smalltalk, and the freedom afforded by such a dynamic, yet safe,
> language led them to create a new experience of computing, which has become
> quite mainstream today.
>
> In the end of the first decade of the new century, chips such as Tilera’s
> can give us a glimpse of a future in which manycore microprocessors will
> become commonplace: every (non-hand-held) computer’s CPU chip will contain
> 1,000 fairly homogeneous cores. Such a system will not be programmed like
> the cloud, or even a cluster, because communication will be much faster
> relative to computation. Nor will it be programmed like today’s multicore
> processors, because the illusion of instant memory coherency will have been
> dispelled by both the physical limitations imposed by the 1,000-way fan-in
> to the memory system and the comparatively long physical lengths of the
> inter- vs. intra-core connections. In the 1980’s we changed our model of
> computation from static to dynamic, and when this future arrives we will
> have to change our model of computation yet again.
>
> If we cannot skirt Amdahl’s Law, the last 900 cores will do us no good
> whatsoever. What does this mean? We cannot afford even tiny amounts of
> serialization. Locks?! Even lock-free algorithms will not be parallel
> enough. They rely on instructions that require communication and
> synchronization between cores’ caches. Just as we learned to embrace
> languages without static type checking, and with the ability to shoot
> ourselves in the foot, we will need to embrace a style of programming
> without any synchronization whatsoever.
>
> In our Renaissance project at IBM, Brussels, and Portland State
> (http://soft.vub.ac.be/~smarr/renaissance/), we are investigating what we
> call “anti-lock,” “race-and-repair,” or “end-to-end nondeterministic”
> computing. As part of this effort, we have built a Smalltalk system that
> runs on the 64-core Tilera chip, and have experimented with dynamic
> languages atop this system. When we give up synchronization, we of
> necessity give up determinism. There seems to be a fundamental tradeoff
> between determinism and performance, just as there once seemed to be a
> tradeoff between static checking and performance.
>
> The obstacle we shall have to overcome, if we are to successfully program
> manycore systems, is our cherished assumption that we write programs that
> always get the exactly right answers. This assumption is deeply embedded in
> how we think about programming. The folks who build web search engines
> already understand, but for the rest of us, to quote Firesign Theatre:
> Everything You Know Is Wrong!
>
> David Ungar is an out-of-the-box thinker who enjoys the challenge of
> building computer software systems that work like magic and fit a user's
> mind like a glove. He received the 2009 Dahl-Nygaard award for outstanding
> career contributions in the field of object-orientation, and was honored
> as an ACM Fellow in 2010. Three of his papers have been honored by the
> Association for Computing Machinery for lasting impact over ten to
> twenty-four years: for the design of the prototype-based Self language,
> dynamic optimization techniques, and the application of cartoon animation
> ideas to user interfaces. He enjoys a position at IBM Research, where he
> is taking on a new challenge: investigating how application programmers
> can exploit manycore systems, and testing those ideas to see if they can
> help scale up analytics.
>
> [NOTE] this session is organized as a joint event with the AGERE! workshop
>
> _______________________________________________
> fonc mailing list
> fonc@vpri.org
> http://vpri.org/mailman/listinfo/fonc
>



-- 
bringing s-words to a pen fight