"Donn Cave" <[EMAIL PROTECTED]> wrote in message news:[EMAIL PROTECTED]
Quoth Skip Montanaro <[EMAIL PROTECTED]>:
|
| Jp> How often do you run 4 processes that are all bottlenecked on CPU?
|
| In scientific computing I suspect this happens rather frequently.


I think he was trying to say more or less the same thing - responding
to "(IBM mainframes) ... All those systems ran multiple programs ...
My current system has 42 processes running ...", his point was that
however many processes are running on your desktop, on the rare
occasion that your CPU is pegged, it will be one process doing the
pegging.  The process structure of
a system workload doesn't make it naturally take advantage of SMP.
So "there will still need to be language innovations" etc. -- to
accommodate scientific computing or whatever.  Your 4 processes are
most likely not a natural architecture for the task at hand, but
rather a complication introduced specifically to exploit SMP.

Exactly. I wasn't addressing the known areas where one can take advantage of multiple processors, or where threading on a single processor helps avoid delays.

At this point, though, I see multithreading for compute-intensive
tasks as an intermediate step. The final step is to restructure the
code so it can take advantage of cluster architectures. Then you can
simply ignore all of the complexity of threads.
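
(A minimal sketch of what I mean, in Python. The multiprocessing
module here is just a stand-in for a real cluster scheduler, and
crunch() and the chunking are made-up names for illustration: each
worker is an independent process with its own memory, so there is no
shared state and no thread complexity to worry about.)

from multiprocessing import Pool

def crunch(chunk):
    # CPU-bound work on one independent slice of the data;
    # nothing is shared between workers.
    return sum(x * x for x in chunk)

if __name__ == "__main__":
    # split the problem into independent slices, one per worker
    chunks = [range(i * 100000, (i + 1) * 100000) for i in range(4)]
    with Pool(processes=4) as pool:
        partials = pool.map(crunch, chunks)   # one slice per process
    print(sum(partials))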

That still leaves putting long-running tasks (such as printing)
into the background so the UI stays responsive.
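
(Again only a sketch, not any particular GUI toolkit's API:
print_job() is a made-up stand-in for the slow task, and the polling
loop plays the part of the event loop that stays responsive.)

import threading
import time

def print_job():
    time.sleep(5)                      # stand-in for a slow print/export job
    print("print job finished")

worker = threading.Thread(target=print_job, daemon=True)
worker.start()

while worker.is_alive():              # the "UI" keeps handling events meanwhile
    print("still responsive...")
    time.sleep(1)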


Personally I wouldn't care to predict anything here. For all I know, someday we may decide that we need cooler and more efficient computers more than we need faster ones.

Chuckle. I basically think of shared-memory multiprocessing as perverse: the bottleneck is memory, not compute speed, so adding more processors accessing the same memory doesn't strike me as exactly sane. Nor does pushing compute speed up and up when it just stresses the memory bottleneck further.

Donn Cave, [EMAIL PROTECTED]

