On Tue, Apr 3, 2012 at 8:02 AM, Miles Fidelman
<mfidel...@meetinghouse.net> wrote:

> On Tue, Apr 3, 2012 at 7:23 AM, Tom Novelli <tnove...@gmail.com> wrote:
>
>> Even if there does turn out to be a simple and general way to do
>> parallel programming, there'll always be tradeoffs weighing against
>> it - energy usage and design complexity, to name two obvious ones.
>
> To design complexity: you have to be kidding.  For huge classes of
> problems - anything that's remotely transactional or event driven;
> simulation and gaming come to mind immediately - it's far easier to
> conceptualize as spawning a process than trying to serialize things.
> The stumbling block has always been context-switching overhead.  That
> problem goes away as your hardware becomes massively parallel.

Important complexity issues of traditional approaches to parallel code
include verification, maintenance, regression testing, deadlock,
starvation, and priority inversion. A good complexity metric is the
number of observably distinct failure modes.
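
To make one of those failure modes concrete, here's a minimal sketch
of the classic lock-ordering deadlock - in Go, which is purely my
choice of notation for this list, not anything from a particular
system:

    package main

    import (
        "fmt"
        "sync"
    )

    func main() {
        var a, b sync.Mutex
        var wg sync.WaitGroup
        wg.Add(2)

        go func() { // task 1: locks a, then b
            defer wg.Done()
            a.Lock()
            defer a.Unlock()
            b.Lock() // blocks forever if task 2 already holds b
            defer b.Unlock()
        }()

        go func() { // task 2: locks b, then a - opposite order
            defer wg.Done()
            b.Lock()
            defer b.Unlock()
            a.Lock() // blocks forever if task 1 already holds a
            defer a.Unlock()
        }()

        wg.Wait() // with unlucky scheduling, never returns
        fmt.Println("done")
    }

Whether it hangs depends entirely on scheduling, which is exactly why
this failure mode is so painful for regression testing: the test suite
can pass a thousand times and the deadlock is still there.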

Context-switching overhead is a moderate performance concern, not a
complexity concern.

Conceptualizing task parallelism isn't enough. You need a way to
effectively control the resulting program.
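
By "control" I mean things as mundane as bounding resource use. A
sketch of one such mechanism, again in Go and again just my choice of
illustration: a counting semaphore (here a buffered channel) that
keeps "spawn a task per event" from degenerating into unbounded
process creation:

    package main

    import (
        "fmt"
        "sync"
    )

    func main() {
        sem := make(chan struct{}, 4) // at most 4 tasks in flight
        var wg sync.WaitGroup

        for i := 0; i < 100; i++ {
            wg.Add(1)
            sem <- struct{}{} // acquire a slot; blocks when 4 run
            go func(id int) {
                defer wg.Done()
                defer func() { <-sem }() // release the slot
                fmt.Println("handling event", id)
            }(i)
        }
        wg.Wait()
    }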

That said, I also disagree with Tom there: design complexity doesn't
need to increase with parallelism. The tradeoff between complexity and
parallelism is more an artifact of sticking with imperative
programming. Dataflows and pipelines can be parallelized without issue
and remain deterministic. If we push more of the parallelism into the
code, the hardware can also be less complex - i.e. less machinery
dedicated to hiding memory-access latency.
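
A small sketch of the dataflow point, once more in Go (the stage names
are mine, not from any existing system): the stages run concurrently,
yet the output is fully determined by the wiring, not by the
scheduler:

    package main

    import "fmt"

    // gen emits the inputs; double is an independent stage running
    // in parallel with it, connected by a channel.
    func gen(n int) <-chan int {
        out := make(chan int)
        go func() {
            defer close(out)
            for i := 1; i <= n; i++ {
                out <- i
            }
        }()
        return out
    }

    func double(in <-chan int) <-chan int {
        out := make(chan int)
        go func() {
            defer close(out)
            for v := range in {
                out <- 2 * v
            }
        }()
        return out
    }

    func main() {
        // Prints 2, 4, 6, 8, 10 on every run: order is fixed by
        // the dataflow graph, not by thread scheduling.
        for v := range double(gen(5)) {
            fmt.Println(v)
        }
    }

The same structure scales to more stages without introducing new
failure modes - there are no locks, hence no lock ordering to get
wrong.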

Regards,

Dave

-- 
bringing s-words to a pen fight
_______________________________________________
fonc mailing list
fonc@vpri.org
http://vpri.org/mailman/listinfo/fonc
