>
> The Global Interpreter Lock is fundamentally designed to make the
> interpreter easier to maintain and safer: developers do not need to
> worry about other code stepping on their namespace. This makes things
> thread-safe, inasmuch as having multiple PThreads within the same
> interpreter space modifying global state and variables at once is,
> well, bad. A C-level module, on the other hand, can sidestep/release
> the GIL at will, and go on its merry way and process away.

...Unless part of the C module's execution involves the need to do
CPU-bound work on another thread through a different Python
interpreter, right? (Even if the interpreter is 100% independent,
yikes.)  For example, take a Python C module designed to
programmatically generate images (and video frames) in RAM for
immediate and subsequent use in animation.  Meanwhile, we'd like to
have a pthread with its own interpreter holding an instance of this
module and have it dequeue jobs as they come in (in fact, there'd be
one of these threads for each excess core present on the machine).  As
far as I can tell, CPython in its current state can't do CPU-bound
parallelization in the same address space (basically, it seems we're
talking about the "embarrassingly parallel" scenario raised in that
paper).  Why does it have to be in the same address space?
Convenience and simplicity--the same reasons that most APIs let you
hang yourself if the app does dumb things with threads.  Also, when
the data sets you need to send to and from each process are large,
using the same address space makes more and more sense.


> So, just to clarify - Andy, do you want one interpreter, $N threads
> (e.g. PThreads) or the ability to fork multiple "heavyweight"
> processes?

Sorry if I haven't been clear, but we're talking about the app
starting a pthread, making a fresh/clean/independent interpreter, and
then being responsible for its safety at the highest level (with the
payoff of each of these threads executing without hindrance).  No
different than most APIs out there, where step 1 is always to make and
init a context object and the final step is always to destroy/take-down
that context object.
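In other words, the lifecycle I'm picturing per worker thread is just
the usual make-context / use-context / destroy-context dance.  A toy
sketch (InterpreterContext here is hypothetical--it stands in for the
fresh, independent interpreter each pthread would own):

```python
# Sketch of the per-thread lifecycle: each worker makes its own
# independent context, drains jobs with it, then tears it down.
# InterpreterContext is a made-up stand-in, not a real CPython API.
import queue
import threading


class InterpreterContext:
    def __init__(self):
        self.results = []              # state private to this context

    def run_job(self, job):
        self.results.append(job * 2)   # stand-in for real work

    def destroy(self):
        self.results.clear()           # release the context's state


def worker(jobs, out):
    ctx = InterpreterContext()         # step 1: make/init the context
    while True:
        job = jobs.get()
        if job is None:                # sentinel: no more jobs
            break
        ctx.run_job(job)
    out.extend(ctx.results)
    ctx.destroy()                      # final step: tear the context down


jobs = queue.Queue()
for j in range(5):
    jobs.put(j)
jobs.put(None)

out = []
t = threading.Thread(target=worker, args=(jobs, out))
t.start()
t.join()
print(out)  # -> [0, 2, 4, 6, 8]
```

The whole point is that everything the worker touches hangs off its
own context object, so the app--not the runtime--is the one on the
hook for keeping the threads out of each other's way.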

I'm a lousy writer sometimes, but I feel bad if you took the time to
describe threads vs processes.  The only reason I raised IPC with my
"messaging isn't very attractive" comment was to respond to Glenn
Linderman's points regarding the tradeoffs of shared memory versus
none.


Andy



--
http://mail.python.org/mailman/listinfo/python-list
