Tim Peters <[email protected]> added the comment:
The lack of exactness (and the possibility of platform-dependent results,
including, e.g., when a single platform changes its math libraries) certainly
works against it.
But I think Raymond is more bothered that there's apparently no _compelling_
use case, in the sense of something frequent enough in real life to warrant
including it in the standard library.
For example, there's really no problem right now if you have a giant iterable
_and_ you know its length. I had a case where I had to sample a few hundred
doubles from giant binary files of floats. The "obvious" solution worked great:
    for o in sorted(random.sample(range(0, file_size, 8), 1000)):
        seek to offset o and read 8 bytes
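
Fleshed out, that's just a few lines (a sketch only; "doubles.bin" and the
sample size of 1000 are stand-ins, and sorting the offsets keeps the seeks
moving forward through the file):

    import random
    import struct

    # "doubles.bin" is a stand-in name; the file holds nothing but packed
    # little-endian 8-byte doubles.
    with open("doubles.bin", "rb") as f:
        file_size = f.seek(0, 2)        # seeking to the end reports the size
        picked = []
        for o in sorted(random.sample(range(0, file_size, 8), 1000)):
            f.seek(o)                   # jump straight to the chosen slot
            (x,) = struct.unpack("<d", f.read(8))   # read one double
            picked.append(x)
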
Now, random access to that kind of iterable is doable, but a similar approach
works fine too when all you have is sequential access to a one-shot iterator of
known length: pick the indices in advance, and skip over the iterator until
each index is hit in turn.
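
A sketch of that approach (sample_in_order is just an illustrative name, not
anything proposed here; n is the known length):

    import random

    def sample_in_order(it, n, k):
        # Pick the k target indices up front, then walk the iterator once,
        # keeping an item whenever its index is the next target.
        targets = iter(sorted(random.sample(range(n), k)))
        want = next(targets, None)
        out = []
        for i, item in enumerate(it):
            if i == want:
                out.append(item)
                want = next(targets, None)
                if want is None:    # got all k; stop consuming the iterator
                    break
        return out

Note the picks come back in stream order rather than in a shuffled order;
permute them afterward if that matters.
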
It doesn't score many points for being faster than materializing a set or dict
into a sequence first, since that's a micro-optimization justified only by
current CPython implementation accidents.
Where it's absolutely needed is when there's a possibly-giant iterable of
unknown length. Unlike Raymond, I think that's possibly worth addressing (it's
not hard to find people asking about it on the web). But it's not a problem
I've had in real life, so, ya, it's hard to act enthusiastic ;-)
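
For reference, the textbook answer there is a reservoir; here's a minimal
sketch of classic Algorithm R (whatever lands on this issue is presumably a
more efficient variant, but the core idea is the same):

    import random
    from itertools import islice

    def reservoir_sample(iterable, k):
        # Algorithm R: keep the first k items, then let the item at 0-based
        # index i displace a random reservoir slot with probability k/(i+1).
        it = iter(iterable)
        reservoir = list(islice(it, k))
        if len(reservoir) < k:
            raise ValueError("sample larger than population")
        for i, item in enumerate(it, k):
            j = random.randrange(i + 1)
            if j < k:
                reservoir[j] = item
        return reservoir
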
PS: you should also guard against W > 1.0. No high-quality math library will
create such a thing given these inputs, but testing for >= 1.0 is no more
expensive than testing only for exact equality to 1.0, so the latter is
needlessly optimistic.
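
Concretely, if W comes from an Algorithm-L-style exp(log(U)/k), the sturdier
guard costs nothing extra. Illustrative only (not quoting the patch):

    import math
    import random

    def initial_w(k):       # illustrative name only
        # 1.0 - random.random() is in (0.0, 1.0], so mathematically the result
        # is in (0.0, 1.0].  Rounding can land it on 1.0, and a sloppy libm
        # could in principle push it a hair above; hence ">=" below.
        w = math.exp(math.log(1.0 - random.random()) / k)
        while w >= 1.0:     # no costlier than "== 1.0", but catches overshoot too
            w = math.exp(math.log(1.0 - random.random()) / k)
        return w
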
----------
_______________________________________
Python tracker <[email protected]>
<https://bugs.python.org/issue41311>
_______________________________________