Mark Dickinson <dicki...@gmail.com> added the comment:

@cwg: Yep, we're aware of this. There are no good solutions here - only a mass 
of constraints, compromises and trade-offs. I think we're already somewhere on 
the Pareto boundary of the "best we can do" given the constraints. Moving to 
another point on the boundary doesn't seem worth the code churn.

What concrete action would you propose that the Python core devs take at this 
point?

> it was possible to convert a tuple of floats into a numpy array and back into 
> a tuple, and the hash values of both tuples would be equal.  This is no 
> longer the case.
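
For concreteness, here's a minimal reproduction of that report (this assumes 
CPython 3.10+ and any recent NumPy; on 3.9 and earlier the two hashes compare 
equal):

>>> import numpy as np
>>> t = (1.5, 2.3, float("nan"))
>>> hash(t) == hash(tuple(np.array(t)))  # equal on <= 3.9; unequal on 3.10+
False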

Sure, but the problem isn't really with hash; that's just a detail. It lies 
deeper than that - it's with containment itself:

>>> import numpy as np
>>> import math
>>> x = math.nan
>>> some_list = [1.5, 2.3, x]
>>> x in some_list  # True: containment checks identity before equality
True
>>> x in list(np.array(some_list))  # expect True, get False
False
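
What's happening under the hood: containment checks identity before falling 
back to ==, the NumPy round trip hands back a fresh object, and NaN compares 
unequal even to itself:

>>> roundtripped = list(np.array(some_list))
>>> x is roundtripped[2]  # the round trip produced a different object
False
>>> roundtripped[2] == roundtripped[2]  # NaN is never equal, even to itself
False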

The result of the change linked to this PR is that the hash now also reflects 
the fact that containment depends on object identity, not just object value. 
Reverting the change would fix the superficial hash discrepancy, but it 
wouldn't fix the underlying containment problem, and it would re-introduce the 
performance issue that was fixed here.
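
For the record, the performance issue: before the change every NaN hashed to 
0, so building a set (or dict) holding many distinct NaNs degenerated into one 
long collision chain. A rough sketch - the timings in the comments are what 
I'd expect on a typical machine, not measurements:

>>> import time
>>> nans = [float("nan") for _ in range(10_000)]  # 10_000 distinct NaN objects
>>> hash(nans[0]) == hash(nans[1])  # per-object hashes on 3.10+
False
>>> t0 = time.perf_counter(); s = set(nans); dt = time.perf_counter() - t0
>>> len(s)  # no two NaNs compare equal, so none collapse
10000
>>> # dt is on the order of seconds on 3.9 (every insert probes the whole
>>> # collision chain) and milliseconds on 3.10+ (hashes spread the entries)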

----------

_______________________________________
Python tracker <rep...@bugs.python.org>
<https://bugs.python.org/issue43475>
_______________________________________