On 12. 03. 22 2:45, Eric Snow wrote:
responses inline

I'll snip some discussion for a reason I'll get to later, and get right to the third alternative:


[...]
"Special-casing immortal objects in tp_dealloc() for the relevant types
(but not int, due to frequency?)" sounds promising.

The "relevant types" are those for which we skip calling incref/decref
entirely, like in Py_RETURN_NONE. This skipping is one of the optional
optimizations, so we're entirely in control of if/when to apply it.
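
For concreteness, the shape of that optimization (simplified, not the
PEP's literal text):

    /* today (simplified): the caller receives a new reference */
    Py_INCREF(Py_None);
    return Py_None;

    /* with the optimization: None is immortal, so the incref is
       skipped entirely */
    return Py_None;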

We would definitely do it for those types.  NoneType and bool already
have a tp_dealloc that calls Py_FatalError() if triggered.  The
tp_dealloc implementations for str & tuple special-case some
singletons in the same way.  In PyType_Type.tp_dealloc we have a
similar assert for static types.  In each case we would instead reset the refcount to
the initial immortal value.  Regardless, in practice we may only need
to worry (as noted above) about the problem for the most commonly used
global objects, so perhaps we could stop there.
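
As a sketch, the NoneType case might become something like this
(Py_FatalError() replaced by a refcount reset; _Py_IMMORTAL_REFCNT
stands in for whatever the PEP's initial immortal value ends up
being called):

    static void
    none_dealloc(PyObject *op)
    {
        /* An old stable-ABI extension may have decref'd the immortal
           singleton all the way to zero.  Rather than freeing a
           statically allocated object, restore the immortal refcount
           and carry on. */
        Py_SET_REFCNT(op, _Py_IMMORTAL_REFCNT);
    }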

However, it depends on whether the level of risk is high enough to
warrant incurring the additional potential performance/maintenance costs.
What is the likelihood of actual crashes due to pathological
de-immortalization in older stable ABI extensions?  I don't have a
clear answer to offer on that but I'd only expect it to be a problem
if such extensions are used heavily in (very) long-running processes.

How much would it slow things back down if it wasn't done for ints at all?

I'll look into that.  We're talking about the ~260 small ints, so it
depends on how much they are used relative to all the other int
objects that are used in a program.

Not only that -- as far as I understand, it only applies where we know at compile time that a small int is being returned. AFAIK, that would be the fast branches of arithmetic code, but not much else.
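
Something like this is what I mean (a sketch with made-up names;
small_ints stands in for the cached array of ints from -5 to 256):

    /* sketch: a helper whose fast branch statically yields a
       cached small int */
    static PyObject *
    get_flag(int set)
    {
        if (set) {
            /* statically known cached small int: with immortality,
               no Py_INCREF needed */
            return (PyObject *)&small_ints[1 + 5];   /* the cached 1 */
        }
        return (PyObject *)&small_ints[0 + 5];       /* the cached 0 */
    }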

If not optimizing small ints is OK performance-wise, then everything looks good: we say that the “skip incref/decref” optimization can only be done for types whose instances are *all* immortal, leave it to future discussions to relax the requirement, and PEP 683 is good to go!

With that in mind I snipped your discussion of the previous alternative. Going with this one wouldn't prevent us from doing something more clever in the future.


Some more reasoning for not worrying about de-immortalizing in types
without this optimization:
These objects will be de-immortalized with refcount around 2^29, and
then incref/decref go back to being paired properly. If 2^29 is much
higher than the true reference count at de-immortalization, this'll just
cause a memory leak at shutdown.
And it's probably OK to assume that the true reference count of an
object can't be anywhere near 2^29: most of the time, to hold a
reference you also need to have a pointer to the referenced object, and
there ain't enough memory for that many pointers. This isn't a formally
sound assumption, of course -- you can incref a million times with a
single pointer if you pair the decrefs correctly. But it might be why we
had no issues with "int won't overflow", an assumption which would fail
with just 4× higher numbers.
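
To put rough numbers on it: on a 32-bit build, holding 2^29
references through distinct pointers would take

    2^29 pointers × 4 bytes/pointer = 2 GiB

of pointer storage alone, half the 4 GiB address space; meanwhile a
signed 32-bit ob_refcnt only overflows at 2^31, which is the "4×
higher" margin above.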

Yeah, if we're dealing with properly paired incref/decref then the
worry about crashing after de-immortalization is mostly gone.  The
problem is where in the runtime we would simply not call Py_INCREF()
on certain objects because we know they are immortal.  For instance,
Py_RETURN_NONE (outside the older stable ABI) would no longer incref,
while the problematic stable ABI extension would keep actually
decref'ing until we crash.
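
Spelled out, the mismatch looks like this (a sketch, not literal
code from either side):

    /* new runtime, immortal-aware: handing out None no longer
       bumps the refcount */
    return Py_None;

    /* old stable-ABI extension: releasing that result still
       performs a real decrement */
    Py_DECREF(result);

Each such round trip loses one count, so starting from roughly 2^29
it would take about 2^29 net unpaired decrefs before tp_dealloc
could fire.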

Again, I'm not sure what the likelihood of this case is.  It seems
very unlikely to me.

Of course, this argument would apply to immortalization and 64-bit
builds as well. I wonder if there are holes in it :)

With the massive numbers involved on 64-bit builds the problem is
super unlikely, so we don't need to worry.
Or perhaps I misunderstood your point?

That's true. However, as we're adjusting incref/decref documentation for this PEP anyway, it looks like we could add “you should keep a pointer around for each reference you hold”, and go from “super unlikely” to “impossible in well-behaved code” :)

Oh, and if the "Special-casing immortal objects in tp_dealloc()" way is
valid, refcount values 1 and 0 can no longer be treated specially.
That's probably not a practical issue for the relevant types, but it's
one more thing to think about when applying the optimization.

Given the low chance of the pathological case, the nature of the
conditions where it might happen, and the specificity of 0 and 1
amongst all the possible values, I wouldn't consider this a problem.

+1. But it's worth mentioning (in the PEP) that it's not a problem.
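
For reference, the sort of special-casing in question is the "sole
owner" pattern (a pattern sketch, not any specific CPython site):

    if (Py_REFCNT(op) == 1) {
        /* assume we're the only owner and mutate op in place;
           an immortal object resurrected via tp_dealloc could
           transiently be observed at refcount 0 or 1, which
           would break this assumption */
    }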