Jesse Noller wrote:
> Even luminaries such as Brian Goetz and many, many others have pointed
> out that threading, as it exists today is fundamentally difficult to
> get right. Ergo the "renaissance" (read: echo chamber) towards
> Erlang-style concurrency.
I think this is slightly missing what Andy is saying. Andy is trying
something that would look much more like Erlang-style concurrency than
classic threads: "green processes", to use someone else's term.

AFAIK, Erlang "processes" aren't really processes at the OS level.
Instead, they are called processes because they only communicate through
message passing. When multiple "processes" are running in the same
OS-level multi-threaded interpreter, the interpreter cheats to make the
message passing fast.

I think Andy is thinking along the same lines. With a Python
subinterpreter per thread, he is suggesting intra-process message
passing as a way to get concurrency. It's actually not too far from what
he is doing already, but he is fighting OS-level shared library
semantics to do it.

Instead, if Python supported a per-subinterpreter GIL and
per-subinterpreter state, then you could theoretically get to a good
place:

- You only initialize subinterpreters if you need them, so
  single-process Python doesn't pay a large (any?) penalty.
- Intra-process message passing can be fast, but still has the
  no-shared-state benefits of the Erlang concurrency model.
- There are fewer changes to the Python core, because the GIL doesn't
  go away.

No, this isn't whole-hog free threading (or safe threading); there are
restrictions that go along with this model, but there would be benefits.

--
http://mail.python.org/mailman/listinfo/python-list
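For what it's worth, the model being described can be sketched today with
ordinary threads and queues standing in for subinterpreters. This is only
an illustration of the no-shared-state discipline, not the proposed
per-subinterpreter GIL itself; the `GreenProcess` name and its mailbox are
made up for the example, not any real API:

```python
import queue
import threading

class GreenProcess:
    """Illustrative 'green process': owns its own state and is reachable
    only through its mailbox, never through shared variables."""

    def __init__(self, handler):
        self.mailbox = queue.Queue()
        self._thread = threading.Thread(target=self._run, args=(handler,))
        self._thread.start()

    def send(self, msg):
        # Within a single OS process, "message passing" is just an
        # enqueue -- this is the cheat that makes it fast.
        self.mailbox.put(msg)

    def _run(self, handler):
        while True:
            msg = self.mailbox.get()
            if msg is None:  # sentinel: shut down
                break
            handler(msg)

    def join(self):
        self.send(None)
        self._thread.join()

# The "process" squares each number it receives and reports the result
# back through another queue (again by message, not shared state).
results = queue.Queue()
worker = GreenProcess(lambda n: results.put(n * n))
for n in (1, 2, 3):
    worker.send(n)
worker.join()
print(sorted(results.queue))  # -> [1, 4, 9]
```

The restriction the post mentions shows up here too: it only stays safe as
long as nothing but the mailboxes is shared, which is exactly what
per-subinterpreter state would enforce rather than merely encourage.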