On Nov 10, 2007 12:48 AM, Rhamphoryncus <[EMAIL PROTECTED]> wrote:
> On Nov 9, 1:45 pm, "Terry Reedy" <[EMAIL PROTECTED]> wrote:
> > 2. If micro-locked Python ran, say, half as fast, then you can have a lot
> > of IPC (interprocess communication) overhead and still be faster with
> > multiple processes rather than multiple threads.
>
> Of course you'd be faster still if you rewrote key portions in C.
> That's usually not necessary though, so long as Python gives a roughly
> constant overhead compared to C, which in this case would be true so
> long as Python scaled up near 100% with the number of cores/threads.
>
> The bigger question is one of usability.  We could make a usability/
> performance tradeoff if we had more options, and there's a lot that
> can give good performance, but at this point they all offer poor to
> moderate usability, none having good usability.  The crux of the
> "multicore crisis" is that lack of good usability.

Certainly. I guess it would be possible to implement GIL-less
threading in Python quite easily if we required the programmer to
synchronize all data access (like the synchronized keyword in Java,
for example), but that makes it harder to use. Am I right that this is
the problem?
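
To make the idea concrete, here is a minimal sketch of what
"programmer synchronizes all data access" might look like in today's
Python, using the standard threading module. The Counter class and
worker function are just illustrative; the point is that every touch
of shared state has to go through an explicit lock, which is the
usability cost being discussed:

    # Explicit, programmer-managed synchronization around shared state,
    # roughly what Java's "synchronized" keyword gives you per object.
    import threading

    class Counter:
        def __init__(self):
            self._value = 0
            self._lock = threading.Lock()   # every access must take this lock

        def increment(self):
            with self._lock:                # critical section is the programmer's job
                self._value += 1

        def value(self):
            with self._lock:
                return self._value

    def worker(counter, n):
        for _ in range(n):
            counter.increment()

    counter = Counter()
    threads = [threading.Thread(target=worker, args=(counter, 100000))
               for _ in range(4)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    print(counter.value())  # 400000 -- correct only because every access is locked

Forget the lock in one place and the count silently comes out wrong,
which is exactly the kind of burden that makes this approach hard to
use.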

Actually, I would prefer to do parallel programming at a higher
level. If Python can't do efficient threading at a low level (such as in
Java or C), then so be it. Perhaps multiple processes with message
passing is the way to go. It's just that it seems so... primitive.
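
For what it's worth, process-based message passing need not look that
primitive from the program's point of view. A rough sketch, using the
multiprocessing module (added in Python 2.6; the equivalent third-party
package around the time of this thread was called "processing") -- the
worker and queue names here are purely illustrative:

    # Processes communicate only through queues; no shared state, no locks
    # in user code, and each process has its own interpreter (and GIL).
    from multiprocessing import Process, Queue

    def worker(tasks, results):
        # Pull numbers off the task queue and send back their squares.
        for n in iter(tasks.get, None):     # None acts as the shutdown sentinel
            results.put(n * n)

    if __name__ == '__main__':
        tasks, results = Queue(), Queue()
        procs = [Process(target=worker, args=(tasks, results)) for _ in range(4)]
        for p in procs:
            p.start()
        for n in range(20):
            tasks.put(n)
        for _ in procs:
            tasks.put(None)                 # one sentinel per worker
        squares = sorted(results.get() for _ in range(20))
        for p in procs:
            p.join()
        print(squares)

The IPC overhead Terry mentions is still there under the hood, but the
code itself reads much like a threaded producer/consumer.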

-- 
[EMAIL PROTECTED]
http://www.librador.com