Tim Peters <t...@python.org> added the comment:

About test_frac.py, I changed the main loop like so:

            got = [float(expected)] # NEW
            for hypot in hypots:
                actual = hypot(*coords)
                got.append(float(actual)) # NEW
                err = (actual - expected) / expected
                bits = round(1 / err).bit_length()
                errs[hypot][bits] += 1
            if len(set(got)) > 1: # NEW
                print(got) # NEW

That is, the loop now displays every case where the four float results aren't identical.
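
For concreteness, here's a stripped-down sketch of the kind of harness that loop runs in. The variants, the coordinate generator, and the Fraction-based reference value below are illustrative stand-ins of mine, not the actual test_frac.py:

    import math
    import random
    from collections import Counter
    from fractions import Fraction

    def naive_hypot(*coords):
        # Illustrative baseline: plain sum of squares, no scaling or correction.
        return math.sqrt(sum(c * c for c in coords))

    def frac_sqrt(frac, bits=200):
        # Fraction approximation of sqrt(frac), good to roughly `bits` bits,
        # built on math.isqrt so no float rounding sneaks in.
        scaled = math.floor(frac * (1 << (2 * bits)))
        return Fraction(math.isqrt(scaled), 1 << bits)

    hypots = [math.hypot, naive_hypot]     # stand-ins for the variants under test
    errs = {h: Counter() for h in hypots}  # histogram of "good bits" per variant

    for _ in range(1000):
        coords = [random.uniform(1e-5, 1e5) for _ in range(10)]
        expected = frac_sqrt(sum(Fraction(c) ** 2 for c in coords))

        got = [float(expected)]            # NEW
        for hypot in hypots:
            actual = hypot(*coords)
            got.append(float(actual))      # NEW
            err = (Fraction(actual) - expected) / expected
            bits = round(1 / err).bit_length() if err else 0  # 0: exact match
            errs[hypot][bits] += 1
        if len(set(got)) > 1:              # NEW
            print(got)                     # NEW

Any case where the variants (or the reference) round to different doubles then shows up as a printed list.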

Result: nothing was displayed (it's still running the n=1000 chunk, but there's 
no sign that will change).  None of these variations made any difference to the 
results users actually get.

Even the "worst" of these reliably develops dozens of "good bits" beyond IEEE 
double precision, but invisibly (under the covers, with no visible effect on 
delivered results).
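
To put a number on "good bits" (same reciprocal-relative-error measure as in the loop above; frac_sqrt is the same illustrative helper, repeated so the snippet stands alone):

    import math
    from fractions import Fraction

    def frac_sqrt(frac, bits=200):
        # High-precision Fraction approximation of sqrt(frac) via math.isqrt.
        return Fraction(math.isqrt(math.floor(frac * (1 << (2 * bits)))), 1 << bits)

    expected = frac_sqrt(Fraction(2))              # sqrt(2) to ~200 bits
    actual = math.hypot(1.0, 1.0)                  # correctly rounded double
    err = (Fraction(actual) - expected) / expected
    print(round(1 / err).bit_length())             # 54 for a correctly rounded result

A returned double carries only 53 significand bits, so the delivered result typically can't exhibit more than ~54 good bits no matter how accurate the computation behind it was; everything beyond that stays under the covers.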

So if there's something else that speeds up the code, perhaps it's worth pursuing, 
but we're already long past the point of getting any payback from pursuing 
accuracy further.

----------

_______________________________________
Python tracker <rep...@bugs.python.org>
<https://bugs.python.org/issue41513>
_______________________________________