Christian Heimes wrote:
> [EMAIL PROTECTED] wrote:
>> Multi-threaded control flow is a worthwhile priority.
>
> It is? That's totally new to me. Given the fact that threads don't scale
> I highly doubt your claim, too.
There's plenty that can be done to automatically extract parallelism from programs, but given the architecture of CPython, with the "global interpreter lock" and the inability to detect dynamism at compile time, it's probably not going to be feasible with that implementation.

I recently attended a talk at Stanford (in EE380) where someone described an optimizing compiler for Matlab intended to generate production DSP code. I've heard two other talks on automatically parallelizing loops for execution on a GPU. Extreme optimizations like that are quite possible. But those are things you do after solving the problem of being 10x-30x slower than C.

The real optimization trick for Python is figuring out at compile time what might change at run time and what won't. Then everything that can't change can be hard-bound during compilation. Shed Skin does some of that. (A small illustration of what hard-binding buys follows below.)

				John Nagle
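To make that last point concrete, here is a rough sketch of the kind of thing hard-binding refers to. This is my own illustration, not anything Shed Skin actually emits, and the function names (dynamic, hard_bound) are made up for the example. CPython has to resolve the global name math and the attribute sqrt on every pass through the loop, because either could be rebound while the loop is running; binding the function to a local ahead of time simulates what a static optimizer could do automatically once it proves the name never changes.

import math
import timeit

def dynamic(n):
    # CPython looks up the global "math" and the attribute "sqrt" on
    # every iteration, since either name could change at run time.
    total = 0.0
    for i in range(n):
        total += math.sqrt(i)
    return total

def hard_bound(n, sqrt=math.sqrt):
    # The lookup happens once, when the function is defined; inside the
    # loop "sqrt" is a plain local.  A compiler could arrange this
    # automatically if it could prove the name never changes.
    total = 0.0
    for i in range(n):
        total += sqrt(i)
    return total

if __name__ == "__main__":
    print("dynamic:   ", timeit.timeit(lambda: dynamic(100000), number=10))
    print("hard_bound:", timeit.timeit(lambda: hard_bound(100000), number=10))

The gain from this one trick is modest in CPython, but it shows the class of run-time rebinding a compiler has to rule out before it can bind anything hard.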