Daniel Fleischman <[email protected]> added the comment:
Thank you for your message!
I'm not particularly looking for an implementation; I was just surprised by
this behavior.
It can get even worse. Consider this example:
```
d = large_dict()
# remove all but one element of d, runs in quadratic time as explained above
while len(d) > 1:
del d[next(iter(d))]
# d now has only one item, but iterating over it still takes O(N),
# where N is the original size: the following prints one key in O(N) time
for key in d:
print(key)
```
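The underlying cause can be observed directly: CPython does not shrink a dict's internal entry table on deletion (resizes happen only on insertion), so the one-element dict above still carries slots for all N original entries. A minimal sketch demonstrating this via memory size (N is an illustrative choice):

```
import sys

N = 10_000
d = {i: i for i in range(N)}
size_full = sys.getsizeof(d)

# Delete all but one entry, as in the example above.
while len(d) > 1:
    del d[next(iter(d))]

# The entry table is not compacted on deletion, so the one-element
# dict still occupies the same memory as the full one, and iteration
# must skip over all the dummy slots.
print(sys.getsizeof(d), "vs fresh singleton:", sys.getsizeof({0: 0}))
```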
I think this example is better, since a person would expect iterating over
a singleton dictionary to be really fast, but it can be made as slow as
one wants. A "print_keys()" function would reasonably be expected to take
O(size of the dictionary), but it doesn't.
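To make the contrast concrete, here is a hedged benchmark sketch comparing a plain dict with collections.OrderedDict (whose doubly linked list makes iteration proportional to the current, not historical, size). The helper name, N, and repeat count are illustrative, not from the report:

```
import time
from collections import OrderedDict

def shrink_to_one(mapping):
    """Delete all but one entry, as in the example above."""
    while len(mapping) > 1:
        del mapping[next(iter(mapping))]
    return mapping

N = 100_000
d = shrink_to_one({i: i for i in range(N)})
od = shrink_to_one(OrderedDict((i, i) for i in range(N)))

def time_iteration(mapping, repeats=100):
    start = time.perf_counter()
    for _ in range(repeats):
        for key in mapping:
            pass
    return time.perf_counter() - start

# On CPython, iterating the plain dict scans its internal entry table
# (still N slots long), while the OrderedDict walks its linked list,
# visiting only the single remaining entry.
print(f"dict:        {time_iteration(d):.4f}s")
print(f"OrderedDict: {time_iteration(od):.4f}s")
```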
Am I right to think that this is a performance bug?
On Fri, Jul 2, 2021, 8:10 PM Dennis Sweeney <[email protected]> wrote:
>
> Dennis Sweeney <[email protected]> added the comment:
>
> For what it's worth, using collections.OrderedDict gives the constant-time
> behavior you're looking for.
>
> ----------
> nosy: +Dennis Sweeney
>
> _______________________________________
> Python tracker <[email protected]>
> <https://bugs.python.org/issue44555>
> _______________________________________
>