On Tue, Dec 25, 2012 at 5:52 AM, John Zwinck <jzwi...@gmail.com> wrote:

>
> Yes, the main point is to provide a reasonably safe, RAII-style mechanism
> for releasing the GIL around long-running C++ functions.  But in addition
> to a class for releasing (and later reacquiring, hence RAII), I made a
> class for acquiring the GIL (if it is not held) and releasing it at the
> end.  This allows nesting: complex C++ computations can be done with the
> GIL released, yet some bits of C++ can call back into Python by using the
> second class to acquire the GIL when needed.
>
> You needn't have sway with Boost to help here--simply reviewing the code
> and letting us know if it would help you would be great.
>
>
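For anyone following along, here is roughly how I picture the two classes
described above, written against the plain CPython C API.  The names and
details are my own guess, not the actual code under review:

    #include <Python.h>

    // Releases the GIL for the lifetime of the object (RAII) so that
    // long-running C++ work can proceed while other Python threads run.
    class scoped_gil_release {
    public:
        scoped_gil_release() : state_(PyEval_SaveThread()) {}
        ~scoped_gil_release() { PyEval_RestoreThread(state_); }
    private:
        scoped_gil_release(const scoped_gil_release&);            // noncopyable
        scoped_gil_release& operator=(const scoped_gil_release&);
        PyThreadState* state_;
    };

    // Acquires the GIL if this thread does not already hold it and restores
    // the previous state at the end of the scope; safe to nest inside a
    // scoped_gil_release, which is what lets C++ code running with the GIL
    // released call back into Python.
    class scoped_gil_acquire {
    public:
        scoped_gil_acquire() : state_(PyGILState_Ensure()) {}
        ~scoped_gil_acquire() { PyGILState_Release(state_); }
    private:
        scoped_gil_acquire(const scoped_gil_acquire&);            // noncopyable
        scoped_gil_acquire& operator=(const scoped_gil_acquire&);
        PyGILState_STATE state_;
    };

A long-running C++ function would create a scoped_gil_release at the top;
any nested bit that needs to call back into Python would wrap that call in
a scoped_gil_acquire.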
That might be a solution to a problem I now bring up whenever I try to get
Python to interoperate with another language.  The situation goes something
like this:
1. Define an interface in the other language.
2. Create some background thread agent in the other language that takes
instances of that interface and calls some prescribed method on them.
3. Throw a Python implementation of that interface into an instance of that
background agent object (a rough sketch of steps 1-3 follows this list).
4. Start the background agent and step back.  There may be blood.
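To make steps 1-3 concrete, here is roughly what they look like with
Boost.Python.  All the names (listener, on_event, agent_ext) are made up
for illustration, and error handling is omitted:

    #include <boost/noncopyable.hpp>
    #include <boost/python.hpp>

    // Step 1: the interface defined in the other language (C++ here).
    struct listener {
        virtual ~listener() {}
        virtual void on_event(int value) = 0;
    };

    // Step 2's agent holds listener pointers and calls on_event() from its
    // own thread.  This wrapper forwards that virtual call to the Python
    // override, taking the GIL first because the call arrives on a C++
    // thread that does not hold it.
    struct listener_wrap : listener, boost::python::wrapper<listener> {
        void on_event(int value) {
            PyGILState_STATE s = PyGILState_Ensure();
            this->get_override("on_event")(value);
            PyGILState_Release(s);
        }
    };

    BOOST_PYTHON_MODULE(agent_ext) {
        using namespace boost::python;
        class_<listener_wrap, boost::noncopyable>("Listener")
            .def("on_event", pure_virtual(&listener::on_event));
    }

Step 3 then happens on the Python side: subclass Listener, override
on_event, and hand the instance to the background agent before starting it.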

Assuming that survives:
5. Create some other class in the other language that also calls back to a
Python agent.  It doesn't need its own thread.
6. Chain some more Python code up as a callback to it.
7. Set up the prior test so that the other/Python interaction now looks like:
other->Python callback->other->Python callback->other . . . (see the sketch
below)
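Step 7 is where the nesting mentioned in the quoted mail really matters:
C++ entered from Python may itself need to call back into Python.  A rough
sketch of that shape, again with made-up names and no error handling:

    #include <boost/python.hpp>

    // A C++ entry point exposed to Python.  It does its heavy work with the
    // GIL released and, part way through, calls back into a Python callable.
    void do_work(boost::python::object callback, int depth) {
        PyThreadState* saved = PyEval_SaveThread();   // release the GIL
        // ... long-running C++ work here ...
        PyGILState_STATE s = PyGILState_Ensure();     // reacquire for Python
        callback(depth);        // the callback is free to call do_work again
        PyGILState_Release(s);
        // ... more C++ work here ...
        PyEval_RestoreThread(saved);                  // take the GIL back
    }

    BOOST_PYTHON_MODULE(chain_ext) {
        boost::python::def("do_work", &do_work);
    }

If the callback turns around and calls do_work() again on the same thread,
the PyGILState_Ensure/PyGILState_Release pair nests correctly, which is the
property a naive lock on the GIL does not give you.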

Step 7 is where I found out how naively I had walked into the whole thing.  I
was using the raw GIL manipulation calls from Python's C headers, and I would
double-lock and die.  Since I was dealing with multiple threads, I was really
asking for the worst kind of asynchronous, non-deterministic bugs.  Eventually
I moved to Stackless Python and took explicit control over when the
interpreter ran at all.  There was a queue on the side for calls from C++
threads that needed to muck with objects created on the Python side, and each
call got its turn.  At that point I didn't need any GIL management at all,
since all the relevant code executed on the thread that started the Python
interpreter.  The GIL stopped acting up, but I don't know how inefficient the
whole thing was.  With many other/Python interactions going on over and over,
it meant constantly putting something on a queue somewhere.  Having some kind
of recursive lock on the GIL would probably have been simpler; I don't know
whether it would be any more or less efficient.
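Roughly, the queue arrangement looked like this (a from-memory sketch with
made-up names, not the actual code):

    #include <functional>
    #include <mutex>
    #include <queue>

    // C++ threads post work here instead of touching Python directly; the
    // thread that started the interpreter drains the queue, so Python
    // objects are only ever touched on that one thread.
    class call_queue {
    public:
        // Called from arbitrary C++ threads.  No GIL needed: nothing Python
        // is touched yet, we only store a callable for later.
        void post(std::function<void()> fn) {
            std::lock_guard<std::mutex> lock(mutex_);
            pending_.push(fn);
        }

        // Called periodically from the thread that owns the interpreter
        // (between Stackless scheduler runs, in my case); each queued call
        // gets its turn.
        void drain() {
            for (;;) {
                std::function<void()> fn;
                {
                    std::lock_guard<std::mutex> lock(mutex_);
                    if (pending_.empty())
                        return;
                    fn = pending_.front();
                    pending_.pop();
                }
                fn();   // safe: we are on the interpreter's own thread
            }
        }

    private:
        std::mutex mutex_;
        std::queue<std::function<void()> > pending_;
    };

Every cross-language call pays for a queue round trip, which is the overhead
I was never sure about.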

Also, if you're curious, I tried the above experiment with .NET code using
IronPython and Python.NET.  IronPython took it in stride; Python.NET died at
step 3, because it couldn't create a Python implementation of a .NET
interface.  It would choke somewhere in the module the moment I finished
defining the Python implementation, I think while filling in the __new__
operator for the just-defined class.

So my own problems have been somewhat different--I haven't even been
thinking about releasing the lock around longer operations.  Still, having
some smarts around the GIL would be nice.  In my head I pictured the
interpreter state exposing a special mutex, on which one could use all the
Boost lock types as if it were any other mutex.  I never thought about
what that would be like to implement.
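That "special mutex" idea might look something like the following: an adapter
with lock()/unlock() so the usual Boost scoped-lock templates apply to the
GIL.  This is just the shape of it (made-up names, one adapter object per
scope, not shared between threads):

    #include <Python.h>
    #include <boost/thread/locks.hpp>

    // Makes the GIL look like a BasicLockable so generic lock types such as
    // boost::lock_guard and boost::unique_lock can manage it.
    class gil_mutex {
    public:
        void lock()   { state_ = PyGILState_Ensure(); }
        void unlock() { PyGILState_Release(state_); }
    private:
        PyGILState_STATE state_;
    };

    void touch_python_objects() {
        gil_mutex gil;
        boost::lock_guard<gil_mutex> hold(gil);  // grab the GIL like any mutex
        // ... safe to call the Python C API here ...
    }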