On 09. 03. 22 4:58, Eric Snow wrote:
On Mon, Feb 28, 2022 at 6:01 PM Eric Snow <ericsnowcurren...@gmail.com> wrote:
The updated PEP text is included below.  The largest changes involve
either the focus of the PEP (internal mechanism to mark objects
immortal) or the possible ways that things can break on older 32-bit
stable ABI extensions.  All other changes are smaller.

In particular, I'm hoping to get your thoughts on the "Accidental
De-Immortalizing" section.  While I'm confident we will find a good
solution, I'm not yet confident about the specific solution.  So
feedback would be appreciated.  Thanks!

Hi,
I like the newest version, except this one section is concerning.


"periodically reset the refcount for immortal objects (only enable this if a stable ABI extension is imported?)" -- that sounds quite expensive, both at runtime and maintenance-wise.

"provide a runtime flag for disabling immortality" also doesn't sound workable to me. We'd essentially need to run all tests twice every time to make sure it stays working.


"Special-casing immortal objects in tp_dealloc() for the relevant types (but not int, due to frequency?)" sounds promising.

The "relevant types" are those for which we skip calling incref/decref entirely, like in Py_RETURN_NONE. This skipping is one of the optional optimizations, so we're entirely in control of if/when to apply it. How much would it slow things back down if it wasn't done for ints at all?



Some more reasoning for not worrying about de-immortalizing in types without this optimization: these objects will be de-immortalized with a refcount around 2^29, and then incref/decref go back to being paired properly. If 2^29 is much higher than the true reference count at de-immortalization, this will just cause a memory leak at shutdown.

And it's probably OK to assume that the true reference count of an object can't be anywhere near 2^29: most of the time, to hold a reference you also need to have a pointer to the referenced object, and there ain't enough memory for that many pointers.

This isn't a formally sound assumption, of course -- you can incref a million times with a single pointer if you pair the decrefs correctly. But it might be why we've had no issues with the "int won't overflow" assumption, which would already fail at numbers just 4× higher.

Of course, this argument would apply to immortalization and 64-bit builds as well. I wonder if there are holes in it :)

Oh, and if the "Special-casing immortal objects in tp_dealloc()" way is valid, refcount values 1 and 0 can no longer be treated specially. That's probably not a practical issue for the relevant types, but it's one more thing to think about when applying the optimization.


There's also the other direction to consider: if an old stable-ABI extension does unpaired *increfs* on an immortal object, it'll eventually overflow the refcount. When the refcount is negative, decref will currently crash if built with Py_DEBUG, and I think we want to keep that check/crash. (Note that either Python itself or any extension could be built with Py_DEBUG.) Hopefully we can live with that, and hope anyone who hits it with Py_DEBUG will send a useful bug report.
Or is there another bit before the sign this'll mess up?
_______________________________________________
Python-Dev mailing list -- python-dev@python.org
To unsubscribe send an email to python-dev-le...@python.org
https://mail.python.org/mailman3/lists/python-dev.python.org/
Message archived at 
https://mail.python.org/archives/list/python-dev@python.org/message/7ZSLUMOIOV676UH42LIWGQASFMXBWSBN/
Code of Conduct: http://python.org/psf/codeofconduct/
