Steven D'Aprano wrote:
> But that is precisely the same for the other timeit tests too.
>
> for _i in _it:
>     x = range(100000)

Allocates the list object.
Allocates the ob_item array to hold pointers to 100000 objects.
Allocates 99900 integer objects (the small ones are already cached by CPython).
Sets up the list.
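The two variants being compared can be reproduced directly with the timeit module. A minimal sketch, assuming Python 3 (where `list(range(...))` stands in for Python 2's `range(...)`); the repeat count of 100 is arbitrary:

```python
import timeit

# Variant A: clear the list in place, then rebuild it.
t_clear = timeit.timeit(
    stmt="del x[:]; x = list(range(100000))",
    setup="x = list(range(100000))",
    number=100,
)

# Variant B: just rebind the name; the old list is freed only
# after the new one has been fully built.
t_rebind = timeit.timeit(
    stmt="x = list(range(100000))",
    setup="x = list(range(100000))",
    number=100,
)

print("clear then rebuild:", t_clear)
print("rebind only:       ", t_rebind)
```

The absolute numbers are machine-dependent; the point of the exercise is only the relative ordering of the two timings.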
> del x[:]

Calls list_clear, which:
  decrements the references to the 99900 integer objects, freeing them;
  frees the ob_item array.

... next time round ...

> x = range(100000)

Allocates a new list object.
Allocates an ob_item array -- probably picks up the same memory block as last time.
Allocates 99900 integer objects, probably reusing the same memory as last time.
Sets up the list.

> etc.
>
> The question remains -- why does it take longer to do X than it takes to
> do X and then Y?

>>> for _i in _it:
...     x = range(100000)

Allocates a list object.
Allocates the ob_item array to hold pointers to 100000 objects.
Allocates 99900 integer objects.
Sets up the list.

... next time round ...

Allocates another list object.
Allocates a second ob_item array.
Allocates another 99900 integer objects.
Sets up the new list.
Then deletes the original list: decrements and releases the original integers, and frees the original ob_item array.

So this version uses twice as much of everything except the actual list object. The actual work done is the same, but I guess there are likely to be more cache misses. There is also the question of whether the memory allocator manages to reuse the recently freed blocks. With one large block I expect it might well end up reusing it; with two large blocks being freed alternately it might not manage to reuse either (but that is just a guess, and maybe system dependent).

--
http://mail.python.org/mailman/listinfo/python-list