On Fri, 25 Mar 2016 02:03 am, Jon Ribbens wrote:

> On 2016-03-24, BartC <b...@freeuk.com> wrote:
>> On 24/03/2016 14:08, Jon Ribbens wrote:
>>> if you thought you needed to do that then most likely what you
>>> should actually be doing is re-writing your code so you no longer
>>> need to. However, you could do:
>>>
>>>    L[:] = [0] * len(L)
>>
>> OK, but that's just building a new list as I've already mentioned.
> 
> No it isn't, it's replacing the elements in-place, that's what the
> "L[:]" bit means. Replacing the list object itself would be:
> L = [0] * len(L)

Yes it is: a new list is built, copied, then discarded.

Python builds the [0]*len(L) list first, then copies the elements from it
into the existing L (which in CPython is essentially a fast memcpy of a
block of pointers, plus a resize of L if the lengths differ), then garbage
collects the temporary list.
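
If the distinction isn't obvious, here is a minimal sketch (the names are
just for illustration) showing that slice assignment keeps the same list
object, while plain assignment rebinds the name to a new one:

L = [1, 2, 3]
alias = L              # a second reference to the same list object
L[:] = [0] * len(L)    # slice assignment: replaces the contents in place
print(alias)           # [0, 0, 0] -- the alias sees the change
print(alias is L)      # True -- still the very same object

L = [0] * len(L)       # plain assignment: L now names a brand-new list
print(alias is L)      # False -- the alias still holds the old object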

A *sufficiently smart* Python interpreter might be able to optimize that
even further, but since it's just copying pointers from one array to
another, it's already pretty quick.

I imagine that, if you had enough memory, a *sufficiently enormous* list
would actually be faster to modify in place using the Pascal-ish idiom:

for i in range(len(L)):
    L[i] = 0  # overwrite each element in place; no temporary list


rather than building a temporary list and copying it. But I've never been
able to demonstrate that: no matter how big a list I created, it was always
faster to create a second one and then do slice assignment than to iterate
over it in a for loop changing each item in place.
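
For anyone who wants to try reproducing that, a minimal timing sketch
along these lines should do (the size 10**7 and the repeat count are
arbitrary choices; results will vary by machine and Python version):

from timeit import timeit

setup = "L = [1] * 10**7"

t_slice = timeit("L[:] = [0] * len(L)", setup=setup, number=10)
t_loop = timeit("for i in range(len(L)):\n    L[i] = 0",
                setup=setup, number=10)

print("slice assignment: ", t_slice)
print("in-place for loop:", t_loop)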

This demonstrates that what logically ought to be more efficient (changing
values in place) is not necessarily *actually* more efficient.



-- 
Steven
