On Wed, Dec 11, 2019, 6:40 AM Mark Shannon <m...@hotpy.org> wrote:

> P.S. On the subject of tradeoffs, here's a bonus question:
> What, in your opinion, increase in memory consumption is acceptable for a
> 1% improvement in speed, or vice versa?


I NEVER care about memory at all... except inasmuch as it affects speed.
This question is a rare opportunity to use the phrase "begs the question"
correctly.

I care about avoiding swap memory solely because swap is 100x slower than
RAM. I care about avoiding RAM because L2 cache is 25x faster than RAM. I
care about avoiding L2 because L1 is 4x faster than L2. Memory savings are
only ever about speed.
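
To make that concrete, here is a toy micro-benchmark (entirely my own sketch,
nothing from the PEP; the sizes and access patterns are arbitrary, and
CPython's interpreter overhead hides much of the effect). Walking the same
buffer in a scattered order usually comes out noticeably slower than walking
it in order, and the only difference is how often we fall out of cache and
into main memory:

    from array import array
    import random
    import time

    # Illustrative sketch only: the size is an arbitrary choice and the
    # measured ratio varies by machine and is muted by interpreter overhead.
    N = 5_000_000
    data = array('q', range(N))   # one contiguous ~40 MB buffer of machine ints

    seq = list(range(N))          # walk the buffer in order (cache-friendly)
    rnd = seq[:]
    random.shuffle(rnd)           # walk the same buffer in a scattered order

    def touch(indices):
        total = 0
        for i in indices:
            total += data[i]
        return total

    for name, indices in (("sequential", seq), ("shuffled", rnd)):
        start = time.perf_counter()
        touch(indices)
        print(name, "%.3f sec" % (time.perf_counter() - start))

Same data, same amount of work, different speed; that is all "caring about
memory" ever means to me.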

These limit proposals seem to be irrelevant to main memory. If I can no
longer have a million and one classes in memory, but only a million, the
savings of a few bits for a class counter are trivial... except that they are
POSSIBLY not trivial in L1, where the counter lives but the objects
themselves usually don't.

On the other hand, if we actually have a huge number of classes, or
coroutines, or bytecodes, then maybe shaving a couple of bits off the memory
representation of each one could matter. But that is 100% orthogonal to the
hard limit idea.