Dear Ralph, Kevin and Ilhan,

Many thanks for your inputs! As I understand it, there is no easy bulletproof way 
of specifying the tolerances, and it’s probably better to adjust them based on 
the actual tests performed.

Thanks again,
Bernard.

> On 25 Feb 2021, at 08:33, Ilhan Polat <ilhanpo...@gmail.com> wrote:
> 
> Matrix powers are annoyingly tricky to keep under control, due to the fact 
> that things tend to explode or implode rather quickly. In fact, there is the 
> famous quote from Moler and Van Loan: "Unfortunately, the roundoff errors in 
> the mth power of a matrix, say B^m, are usually small relative to ||B||^m 
> rather than ||B^m||." So thanks for nothing, smart people. 
> 
> Then there is the typical bound on the rounding errors of matrix 
> multiplication, which says that if C is the exact result and Ce the result of 
> our operation, then for some modest constant c the entrywise bound 
> |C - Ce| <= c * |A| * |B| holds. From that, hoping that the matrix power 
> won't produce an error too far from that of the manual multiplications, it is 
> a matter of selecting a sensible value for atol (with rtol=0). I would go 
> about this as 
> 
> (some arbitrary constant I am randomly throwing in) * (matrix size n) 
>     * np.finfo(dtype).eps * norm(A, 1)**k 
> 
> with k the power being taken.
> 
> As an example, take a matrix and artificially bloat the (0, 0) entry:
> 
> import numpy as np
> 
> n = 100
> A = np.random.rand(n, n)
> A += np.diag([10.] + [0.] * (n - 1))  # bloat the (0, 0) entry
> A4 = np.linalg.matrix_power(A, 4)
> AA = A @ A @ A @ A
> print('Max entry error', np.max(np.abs(AA - A4)))
> print('My atol value', 100 * n * np.finfo(A.dtype).eps * np.linalg.norm(A, 1) * 4)
> # (The bound above suggests the more conservative norm(A, 1)**4; the
> # linear-in-k version happens to be tight enough here.)
> 
> This accidentally makes it a tight bound, but depending on how wildly your A 
> varies, or on how the spectrum of A is structured, you might need to change 
> these constants. 
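> 
> For completeness, a sketch of how such an atol plugs into an actual check 
> (np.testing.assert_allclose with rtol=0, since the bound is an absolute, 
> entrywise one; if it trips on some platform, bump the arbitrary constant):
> 
> atol = 100 * n * np.finfo(A.dtype).eps * np.linalg.norm(A, 1) * 4
> np.testing.assert_allclose(A4, AA, rtol=0, atol=atol)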
> 
> On Wed, Feb 24, 2021 at 1:53 PM Kevin Sheppard <kevin.k.shepp...@gmail.com> wrote:
> In my experience it is most common to use reasonable but not exceedingly 
> tight bounds in complex applications where there isn’t a proof that the 
> maximum error must be smaller than some number. I would also caution against 
> using a single system to find the tightest tolerance at which a test passes. 
> For example, if you can pass with an rtol of 1e-13 on Linux/AMD64/GCC 9, then 
> you might want to set the tolerance around 1e-11 so that you don’t get caught 
> out on other platforms. Notoriously challenging platforms in my experience 
> (mostly from statsmodels) are 32-bit Windows, 32-bit Linux and OSX (and I 
> suspect OSX/ARM64 will be another difficult one).
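> 
> As a sketch of what that looks like in a test (the data, the 1e-13 
> observation and the 1e-11 choice are all illustrative, not from any real 
> suite):
> 
> import numpy as np
> 
> def test_matrix_power_close():
>     rng = np.random.default_rng(12345)
>     A = rng.random((100, 100))
>     expected = A @ A @ A @ A
>     result = np.linalg.matrix_power(A, 4)
>     # Suppose the tightest passing rtol on Linux/AMD64/GCC 9 is ~1e-13;
>     # use ~1e-11 to leave headroom for 32-bit and OSX/ARM64 platforms.
>     np.testing.assert_allclose(result, expected, rtol=1e-11)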
> 
> This advice is moot if you have a precise bound for the error.
> 
> Kevin
> 
> From: Ralf Gommers <ralf.gomm...@gmail.com>
> Sent: Wednesday, February 24, 2021 12:25 PM
> To: Discussion of Numerical Python <numpy-discussion@python.org>
> Subject: Re: [Numpy-discussion] Guidelines for floating point comparison
> 
> On Wed, Feb 24, 2021 at 11:29 AM Bernard Knaepen <bknae...@gmail.com> wrote:
> 
> Hi all,
> 
> We are developing a code that relies heavily on NumPy. Some of our regression 
> tests rely on floating point comparisons, and we are a bit lost about how to 
> choose atol and rtol (we try to do all operations in double precision). We 
> would like to set atol and rtol as low as possible while still having the 
> tests pass when we run on different architectures or introduce ‘cosmetic’ 
> changes such as switching between similar NumPy routines.
> 
> For example, let’s say we want some powers of the matrix A and compute them 
> as:
> 
> import numpy as np
> 
> A = np.array(some_array)  # some_array holds our input data
> A2 = np.dot(A, A)
> A3 = np.dot(A2, A)
> A4 = np.dot(A3, A)
> 
> If we alternatively compute A4 as 
> 
> A4 = np.linalg.matrix_power(A, 4), 
> 
> we get different values in our final outputs because, obviously, the two 
> operations are not equivalent up to machine accuracy.
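> 
> A quick sketch of how we observe the discrepancy (the variable names are just 
> for illustration):
> 
> A4_dot = np.dot(np.dot(np.dot(A, A), A), A)
> A4_pow = np.linalg.matrix_power(A, 4)
> print(np.max(np.abs(A4_dot - A4_pow)))  # small but nonzero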
> 
> Is there any reference one could share providing guidelines on how to choose 
> reasonable values for atol and rtol in this kind of situation? For example, 
> does the NumPy package use a fixed set of values for its own development? The 
> default ones?
> 
> I don't think there's a clear guide in the docs or in a blog post anywhere. 
> You can get a sense of what works by browsing the unit tests for numpy and 
> scipy; numpy.linalg, scipy.linalg and scipy.special are probably the most 
> relevant. As a rough rule of thumb: if you test on x86_64 and precision is on 
> the order of 1e-13 to 1e-16, then set a relative tolerance 10 to 100 times 
> higher to account for other hardware, BLAS implementations, etc.
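> 
> In code, that rule of thumb is just (numbers illustrative):
> 
> observed_rtol = 1e-13  # tightest rtol that passes on the x86_64 dev machine
> test_rtol = 100 * observed_rtol  # 1e-11, the value actually used in the tests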
> 
> Cheers,
> 
> Ralf
> 
> Thanks in advance for any help,
> Cheers,
> Bernard.

_______________________________________________
NumPy-Discussion mailing list
NumPy-Discussion@python.org
https://mail.python.org/mailman/listinfo/numpy-discussion
