On 2020-06-23, Thomas Wouters wrote:
> I think the ability for per-type allocation/deallocation routines isn't
> really about efficiency, but more about giving more control to embedding
> systems (or libraries wrapped by extension modules) about how *their*
> objects are allocated. It doesn't make much sense, however, because Python
> wouldn't allocate their objects anyway, just the Python objects wrapping
> theirs. Allocating CPython objects should be CPython's job.

My thinking is that, eventually, we would like to allow CPython to
use something other than reference counting for internal PyObject
memory management.  In other systems with garbage collection, the
memory allocator is typically tightly integrated with the garbage
collector.  For good efficiency, the two need to cooperate: e.g.
newly allocated objects are placed in nursery memory arenas that the
collector owns and can scan or evacuate cheaply.

The current API doesn't allow that because you can allocate memory
via some custom allocator and then hand that memory to the runtime
(e.g. via PyObject_Init()) to be initialized and treated as a
PyObject from then on.  That design is one of the things locking us
into reference counting.

This relates to the sub-interpreter discussion.  I think the
sub-interpreter cleanup work is worth doing, if only because it will
make embedding CPython cleaner.  I have some doubts that
sub-interpreters will help much in terms of multi-core utilization.
Efficiently sharing data between interpreters seems like a huge
challenge.  I think we should also pursue Java-style multi-threading
and complete the "gilectomy".  To me, that means killing reference
counting for internal PyObject management.
_______________________________________________
Python-Dev mailing list -- python-dev@python.org
To unsubscribe send an email to python-dev-le...@python.org
https://mail.python.org/mailman3/lists/python-dev.python.org/
Message archived at 
https://mail.python.org/archives/list/python-dev@python.org/message/FQS5TB6G77EE35QW3JHRU7ISZ4ASDCTQ/
Code of Conduct: http://python.org/psf/codeofconduct/
