Tim Peters <[email protected]> added the comment:
Stefan, I have scant memory of ever caring, but, if I did, I got over it ;-)
>>> import math
>>> math.nan == math.nan
False
>>> {math.nan : 5}[math.nan]
5
That is, PyObject_RichCompareBool() takes object identity as overriding __eq__;
that's why the dict lookup works. But this one doesn't:
>>> {math.nan : 5}[float("nan")]
Traceback (most recent call last):
  ...
KeyError: nan
Although that may change too.
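A minimal sketch of that shortcut (not from the original message; the Loud class here is made up): the == operator consults __eq__, but the dict lookup's identity test answers before __eq__ is ever asked.
>>> class Loud:
...     def __eq__(self, other):
...         print("__eq__ called")
...         return False
...     def __hash__(self):
...         return 0
...
>>> x = Loud()
>>> x == x            # __eq__ runs and answers False, just like nan == nan
__eq__ called
False
>>> {x: 5}[x]         # yet the lookup succeeds: identity wins before __eq__ is asked
5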
I used to care a little, but not at all anymore. There's no sense trying to
_make_ sense of what sorting could possibly mean in the absence of a total
ordering.
> If you sort objects that always return True for both `<` and `==`,
A case of "garbage in, garbage out" to me.
> this would also have the opposite problem, considering tuple u smaller
> than v when it shouldn't.
What justifies "shouldn't"? If u[0] < v[0], then by the definition of
lexicographic ordering, u < v. But if u[0] == v[0], which apparently is _also_
the case, then the same definition says the ordering of u and v is inherited
from the ordering of u[1:] and v[1:]. There's no principled way of declaring
one of those possibly contradictory definitions "the right one".
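A small sketch to make the collision concrete (mine, not from the quoted report; Weird is a made-up type that answers True to both == and <):
>>> class Weird:
...     def __eq__(self, other): return True
...     def __lt__(self, other): return True
...
>>> u = (Weird(), 2)
>>> v = (Weird(), 1)
>>> u[0] < v[0]       # by the first reading, u should come out "less"
True
>>> u < v             # but tuple comparison takes the == branch first,
False
>>> u > v             # so 2 vs 1 decides and u comes out "greater"
True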
> That said, maybe the function could be optimized for
> known "well-behaving" types?
A type is well-behaving to me if and only if it implements a total ordering. If
a type doesn't, what you get is an implementation accident, and code relying on
any specific accident is inherently buggy.
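As a sketch of what such an accident looks like (not from the original message), sorting with nan, whose comparisons all answer False, returns whatever order happens to fall out of the current implementation; neither result below is a guarantee:
>>> nan = float("nan")
>>> sorted([2.0, nan, 1.0])     # what one CPython build happens to produce
[2.0, nan, 1.0]
>>> sorted([nan, 2.0, 1.0])     # a different input order, a different accident
[nan, 1.0, 2.0]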
----------
_______________________________________
Python tracker <[email protected]>
<https://bugs.python.org/issue45530>
_______________________________________