On Mon, 21 Sep 2020 20:35:33 +0200 Victor Stinner <vstin...@python.org> wrote:
>
> When I proposed my PEP 620 "Hide implementation details from the C
> API", I was asked about a proof that the PEP unlocks real optimization
> possibilities. So I wrote an implementation of tagged pointers:
> https://github.com/vstinner/cpython/pull/6
>
> The main benefit is the memory usage. For example, list(range(200))
> uses 1656 bytes instead of 7262 (4x less memory).
Hmm, how come? Aren't those tiny integers singletons already? I suppose
you're thinking of something like `list(range(2000, 2200))`.

> Sadly, my current simple implementation is 1.1x slower than the
> reference. I suspect that adding a condition to Py_INCREF() and
> Py_DECREF() explains a large part of this overhead.

And adding a condition in every place an object is inspected. Even
something as simple as Py_TYPE() is not a mere lookup anymore.

> It would be nice to use tagged pointers for a wide range of integer
> numbers, but I wrote a simple implementation: _Py_TAGPTR_UNBOX() has
> to return a borrowed reference. This function should return a strong
> reference to support a larger range.

Hmm, it sounds a bit weird. The point of tagged pointers, normally, is
to avoid creating objects at all. If you create an object dynamically
each time a tagged pointer is "dereferenced", then I suspect you won't
gain anything.

Regards

Antoine.