On Jun 4, 2010, at 12:06 PM, Bryan wrote:
Emin.shopper wrote:
dmtr wrote:
I'm still unconvinced that it is a memory fragmentation problem. It's very rare.
You could be right. I'm not an expert on Python memory management. But if it isn't memory fragmentation, then why is it that I can create lists which use up 600 more MB, but if I try to create a dict that uses a couple more MB it dies? My guess is that Python dicts want a contiguous chunk of memory for their hash table. Is there a reason that you think memory fragmentation isn't the problem?
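(A quick way to probe that guess from inside Python: sys.getsizeof reports only the container's own storage, i.e. the contiguous block(s) that have to be found in the address space, not the objects it refers to. The sketch below is not from the original post; the item count is arbitrary and the exact byte figures depend on the interpreter version.)

    import sys

    n = 1000000  # arbitrary item count, just for illustration

    # A list of n items is backed by one contiguous array of n pointers
    # (plus a modest over-allocation).
    lst = list(range(n))

    # A dict of n keys is backed by a contiguous hash table that is kept
    # under roughly 2/3 full, so it is sized well beyond n slots.
    dct = dict.fromkeys(range(n))

    # getsizeof counts only the container's own storage -- the chunk(s)
    # that must be found in one piece when memory is fragmented.
    print("list block: %d bytes" % sys.getsizeof(lst))
    print("dict block: %d bytes" % sys.getsizeof(dct))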
Your logic makes some sense. You wrote that you can create a dict with 1300 items, but not 1400 items. If my reading of the Python source is correct, the dict type decides it's overloaded when it is 2/3 full and enlarges by powers of two, so the 1366th item will trigger allocation of an array of 4096 PyDictEntry's.
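(As a hedged illustration of that resize behaviour, not something from Bryan's post: the exact thresholds and growth factors differ between CPython versions, but the jumps in sys.getsizeof show roughly where the table is reallocated, and during each resize the old and new tables briefly coexist, which is exactly when a fragmented address space hurts.)

    import sys

    d = {}
    last = sys.getsizeof(d)
    for i in range(2000):
        d[i] = None
        size = sys.getsizeof(d)
        if size != last:
            # The hash table was reallocated into a larger contiguous
            # block; the old table is only freed after its entries have
            # been copied across.
            print("resize at item %d: %d -> %d bytes" % (len(d), last, size))
            last = size

On a 2010-era 32-bit Python 2 the printed numbers will differ, but the pattern of ever-larger single allocations is the same.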
At PyCon 2010, Brandon Craig Rhodes gave a talk on how dictionaries work under the hood:
http://python.mirocommunity.org/video/1591/pycon-2010-the-mighty-dictiona
I found that very informative.
There are also some slides if you don't like the video; I haven't looked at them myself.
http://us.pycon.org/2010/conference/schedule/event/12/
Cheers
Philip