>> A c-level module, on the other hand, can sidestep/release
>> the GIL at will, and go on its merry way and process away.
> 
> ...Unless part of the C module execution involves the need to do CPU-
> bound work on another thread through a different python interpreter,
> right? 

Wrong.

> (even if the interpreter is 100% independent, yikes).

Again, wrong.

> For
> example, have a python C module designed to programmatically generate
> images (and video frames) in RAM for immediate and subsequent use in
> animation.  Meanwhile, we'd like to have a pthread with its own
> interpreter with an instance of this module and have it dequeue jobs
> as they come in (in fact, there'd be one of these threads for each
> excess core present on the machine).

I don't understand how this example involves multiple threads. You
mention a single thread (running the module), and you mention designing
a module. Where is the second thread?

Let's assume there is another thread producing jobs, and then
a thread that generates the images. The structure would be this:

  while 1:
    job = queue.get()
    processing_module.process(job)

and in process:

  char *job_data, *buf;
  PyObject *result;

  if (!PyArg_ParseTuple(args, "s", &job_data))
    return NULL;
  result = PyString_FromStringAndSize(NULL, bufsize);
  buf = PyString_AsString(result);
  Py_BEGIN_ALLOW_THREADS
    compute_frame(job_data, buf);  /* GIL released; other threads run */
  Py_END_ALLOW_THREADS
  return result;

All these compute_frames could happily run in parallel.
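
To make that concrete, a driver feeding several such workers might look
roughly like this. It is only a sketch: it assumes Python 2
(Queue/threading), reuses the processing_module.process call from above,
and the names jobs and worker and the count of 4 are made up for
illustration.

  import threading, Queue
  import processing_module

  jobs = Queue.Queue()

  def worker():
    while 1:
      job = jobs.get()
      # process() releases the GIL around compute_frame, so several
      # of these calls can run on different cores at the same time
      processing_module.process(job)

  for i in range(4):   # e.g. one thread per spare core
    t = threading.Thread(target=worker)
    t.setDaemon(True)
    t.start()

  # the main thread then feeds work with jobs.put(...) as it arrives

While one worker sits inside compute_frame the GIL is free, so the
others can pick up their own jobs and enter compute_frame as well - no
separate interpreters needed.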

> As far as I can tell, it seems
> CPython's current state can't CPU bound parallelization in the same
> address space.

That's not true.

Regards,
Martin
--
http://mail.python.org/mailman/listinfo/python-list
