In my experience it is most common to use reasonable but not exceedingly tight bounds in complex applications where there isn't a proof that the maximum error must be smaller than some number. I would also caution against using a single system to find the tightest tolerance a test passes at. For example, if a test passes at an rtol of 1e-13 on Linux/AMD64/GCC 9, then you might want to set the tolerance around 1e-11 so that you don't get caught out on other platforms (a sketch of this is below). Notoriously challenging platforms in my experience (mostly from statsmodels) are 32-bit Windows, 32-bit Linux and OSX (and I suspect OSX/ARM64 will be another difficult one). This advice is moot if you have a precise bound for the error.

Kevin
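As a minimal sketch of that headroom idea (the system being solved and the test name are made up for illustration):

    import numpy as np
    from numpy.testing import assert_allclose

    def test_solve():
        # A small, well-conditioned system standing in for the real computation.
        A = np.array([[3.0, 1.0], [1.0, 2.0]])
        b = np.array([9.0, 8.0])
        x = np.linalg.solve(A, b)
        # Tightest pass observed on one dev box was rtol=1e-13; check in with
        # two orders of magnitude of headroom for other platforms/BLAS builds.
        assert_allclose(A @ x, b, rtol=1e-11)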
From: Ralf Gommers

On Wed, Feb 24, 2021 at 11:29 AM Bernard Knaepen <bknae...@gmail.com> wrote:

I don't think there's a clear guide in the docs or a blog post anywhere. You can get a sense of what works by browsing the unit tests for numpy and scipy; numpy.linalg, scipy.linalg and scipy.special are probably the most relevant. As a rough rule of thumb: if you test on x86_64 and the precision is on the order of 1e-13 to 1e-16, then set a relative tolerance 10 to 100 times higher to account for other hardware, BLAS implementations, etc.

Cheers,
Ralf
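A minimal sketch of applying that rule of thumb (the random linear solve is just a stand-in for the computation under test):

    import numpy as np

    rng = np.random.default_rng(0)
    A = rng.standard_normal((50, 50))
    x_true = rng.standard_normal(50)
    b = A @ x_true

    x = np.linalg.solve(A, b)
    # Relative error achieved on this machine/BLAS combination; roughly
    # 1e-14 to 1e-13 here, depending on the conditioning of A.
    rel_err = np.linalg.norm(x - x_true) / np.linalg.norm(x_true)
    print(f"observed relative error: {rel_err:.1e}")
    # Pad by 10-100x before hard-coding the tolerance in the test suite.
    rtol = 100 * rel_err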