On 2019-12-01 10:11 a.m., Kyle Stanley wrote:
>>> for item in data.items(): item[0], item[1]
874 µs ± 21.5 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each)
>>> for key, value in data.items(): key, value
524 µs ± 4.26 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each)
>>> for item in items_tuple(data): item.key, item.value
5.82 ms ± 117 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
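
The items_tuple() helper timed above isn't defined in the quoted excerpt; presumably it's a plain-Python generator roughly along these lines (a sketch, not the original code):

    from collections import namedtuple

    # Sketch of what the benchmarked items_tuple() presumably looks like;
    # the original definition isn't shown in the thread.
    DictItem = namedtuple("DictItem", ["key", "value"])

    def items_tuple(d):
        # One generator frame plus one namedtuple allocation per entry,
        # which is where the extra cost over plain items() comes from.
        for key, value in d.items():
            yield DictItem(key, value)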

Thanks for sharing the results. In particular, the gap between "for item in data.items(): item[0], item[1]" and "for key, value in data.items(): key, value" is a bit surprising to me; I'd have assumed they would be closer in performance. I expected the named tuple to be significantly slower than the other two, but not by quite that much. Good to know.

I'm -1 on the proposal overall. It's not a bad idea, but in practice it would likely hurt the performance and backwards compatibility of "dict.items()" too much. I wouldn't be opposed to considering a separate method, though, such as "dict.named_items()" or something similar that allowed "item.key" and "item.value".
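
For what it's worth, the backwards-compatibility question is mostly about code that expects the items to be plain tuples; a namedtuple keeps unpacking and indexing working but changes the exact type. A small illustration (the NamedItem class here is hypothetical, just to show the behaviour):

    from collections import namedtuple

    # Hypothetical item type, only to illustrate the compatibility trade-off.
    NamedItem = namedtuple("NamedItem", ["key", "value"])

    item = NamedItem("spam", 42)
    key, value = item                 # unpacking works exactly as before
    assert item[0] == "spam"          # indexing works as before
    assert item.key == "spam"         # new attribute access
    assert isinstance(item, tuple)    # still a tuple subclass...
    assert type(item) is not tuple    # ...but no longer exactly `tuple`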


I see no reason why named items couldn't be optimized on the C side, especially for the common case of destructuring. I'd like to see a run of "for key, value in items_tuple(data): key, value". I wonder how much of the cost comes from the generator, how much from the namedtuple creation itself, and how much from the attribute access.
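
Something along these lines should separate those three costs; the dict below is just an example payload and items_tuple() is a guess at the helper used above, so the absolute numbers won't match the earlier ones:

    from collections import namedtuple
    from timeit import timeit

    data = {i: str(i) for i in range(10_000)}   # example payload only
    DictItem = namedtuple("DictItem", ["key", "value"])

    def items_tuple(d):
        for key, value in d.items():
            yield DictItem(key, value)

    def plain_gen(d):
        # Same generator shape, but without the namedtuple allocation.
        for pair in d.items():
            yield pair

    # Generator overhead alone:
    print(timeit(lambda: [kv for kv in plain_gen(data)], number=100))
    # Generator + namedtuple creation:
    print(timeit(lambda: [kv for kv in items_tuple(data)], number=100))
    # Generator + namedtuple creation + unpacking (the run requested above):
    print(timeit(lambda: [(k, v) for k, v in items_tuple(data)], number=100))
    # Generator + namedtuple creation + attribute access:
    print(timeit(lambda: [(i.key, i.value) for i in items_tuple(data)], number=100))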