On Jun 4, 10:11 pm, Josiah Carlson <[EMAIL PROTECTED]>
wrote:

> However, locking isn't just for refcounts, it's to make sure that thread
> A isn't mangling your object while thread B is traversing it.


> With
> object locking (coarse via the GIL, or fine via object-specific locks),
> you get the same guarantees, with the problem being that fine-grained
> locking is about a bazillion times more difficult to verify the lack of
> deadlocks than a GIL-based approach.

I think this is just as much a question of what the runtime should
guarantee. One doesn't need a guarantee that two threads never mangle
the same object simultaneously. Instead, the runtime could leave it to
the programmer to provide that guarantee himself, using explicit locks
on the object or synchronized blocks.
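As a sketch of what that programmer-managed synchronization looks like in
today's Python (the names `shared`, `worker` and the thread count are just
illustrative):

```python
import threading

shared = []                  # object shared between threads
lock = threading.Lock()      # explicit lock, managed by the programmer

def worker(n):
    for i in range(n):
        with lock:           # "synchronized block": one thread mutates at a time
            shared.append(i)

threads = [threading.Thread(target=worker, args=(1000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

# All 4 * 1000 appends are accounted for, because every mutation
# happened inside the lock.
print(len(shared))
```

The runtime then only has to keep individual bytecode operations safe; the
higher-level invariant ("this list is consistent") is the programmer's job.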


> It was done a while ago.  The results?  On a single-processor machine,
> Python code ran like 1/4-1/3 the speed of the original runtime.  When
> using 4+ processors, there were some gains in threaded code, but not
> substantial at that point.

I am not surprised. Contrary to common belief, reference counting is
quite efficient. Its real problem is cyclic references involving
objects that define a __del__ method: such objects are not eligible
for cyclic garbage collection, which can produce resource leaks.
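A minimal demonstration of why the cyclic collector is needed at all (note
that on the interpreters current at the time of this thread, the presence of
__del__ kept such cycles uncollectable and parked them in gc.garbage; since
Python 3.4 / PEP 442 the collector can finalize them, which is what the
sketch below shows):

```python
import gc

class Node:
    """Illustrative class: a finalizer plus a cycle is the problem case."""
    def __init__(self):
        self.ref = None
    def __del__(self):
        pass  # the mere presence of a finalizer is what matters here

# Build a two-node reference cycle, then drop all external references.
a, b = Node(), Node()
a.ref, b.ref = b, a
del a, b

# Reference counting alone cannot reclaim the cycle (each node still
# holds a reference to the other); only a cyclic GC pass finds them.
unreachable = gc.collect()
print(unreachable)
```

Without that collector pass, the two nodes (and whatever resources their
finalizers guard) would simply leak, which is the failure mode described
above.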


> My current favorite is the processing package (available from the Python
>   cheeseshop).

Thanks. I'll take a look at that.




-- 
http://mail.python.org/mailman/listinfo/python-list