Hi all,

We are developing a code that relies heavily on NumPy. Some of our regression 
tests compare floating-point numbers, and we are a bit lost as to how to 
choose atol and rtol (we try to do all operations in double precision). We 
would like to set atol and rtol as low as possible while still having the 
tests pass when we run on different architectures or introduce ‘cosmetic’ 
changes such as swapping in an equivalent NumPy routine.
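
Concretely, our checks look roughly like this (the arrays and the tolerance 
values here are only placeholders for illustration):

import numpy as np

rng = np.random.default_rng(0)
reference = rng.standard_normal((4, 4))  # stored reference result
result = reference * (1 + 1e-13 * rng.standard_normal((4, 4)))  # perturbed copy

# The question is how small rtol/atol can be while remaining robust.
np.testing.assert_allclose(result, reference, rtol=1e-12, atol=1e-14)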

For example, suppose we need some powers of a matrix A and compute them as:

import numpy as np

A = np.array(some_array)  # some_array holds our input data
A2 = np.dot(A, A)         # A^2
A3 = np.dot(A2, A)        # A^3, multiplying left to right
A4 = np.dot(A3, A)        # A^4

If we instead compute A4 as:

A4 = np.linalg.matrix_power(A, 4)

we get slightly different values in our final outputs. That is expected: 
matrix_power computes positive powers by repeated squaring (per its 
documentation), so the two sequences of floating-point operations are only 
equivalent up to machine accuracy.
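
A small self-contained illustration of the size of the discrepancy (random 
data here just for the example; our real matrices come from the application):

import numpy as np

rng = np.random.default_rng(42)
A = rng.standard_normal((50, 50))

A4_dot = np.dot(np.dot(np.dot(A, A), A), A)  # left-to-right products
A4_pow = np.linalg.matrix_power(A, 4)        # repeated squaring

# Largest elementwise difference, scaled by the magnitude of the result
rel_err = np.max(np.abs(A4_dot - A4_pow)) / np.max(np.abs(A4_pow))
print(rel_err)  # a small multiple of machine epsilon, not zero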

Is there a reference someone could share with guidelines on how to choose 
reasonable values of atol and rtol in this kind of situation? For example, 
does NumPy itself use a fixed set of values in its own test suite, or just 
the defaults?
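
For reference, the defaults as we read them from the current NumPy 
documentation:

import numpy as np

# Default tolerances, per the NumPy docs:
#   np.isclose / np.allclose:    rtol=1e-05, atol=1e-08
#   np.testing.assert_allclose:  rtol=1e-07, atol=0
print(np.allclose(1.0, 1.0 + 1e-6))           # True: within the default rtol
np.testing.assert_allclose(1.0, 1.0 + 1e-8)   # passes with the default rtol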

Thanks in advance for any help,
Cheers,
Bernard.



