On Wed, 2021-12-29 at 13:35 -0800, Volker Braun wrote:
> There are doctests of the form 
> 
>     sage: x = random_value()
>     sage: abs(floating_point_computation(x) - exact_value(x)) < tolerance
>     True 
> 
> but every floating point computation has SOME values where it is 
> ill-conditioned. I'm finding a steady trickle of test failures due to the 
> (now) random seeds for each run. What's the plan for that?
> 
> 

I think we have to consider what the point of these tests is. If
someone picked a tolerance that turned out to be too tight, then either
the code or the test is wrong, and we should fix whichever one it is.

If the existing tolerance was just someone's pulled-it-out-of-my-butt
guess, then we can fix it by adding an order of magnitude to the
tolerance without much thought. But if someone carefully picked a
tolerance that is actually violated (as in your example), someone
should probably take a look at it and do the analysis to pick a better
value.
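
As a sketch of the first case (reusing the hypothetical names from
your example; the 1e-8 is just a placeholder), the difference can be
compared against 0.0 with the "abs tol" doctest directive, so the
bound stays visible in the test and is easy to loosen later:

    sage: x = random_value()
    sage: # error compared to zero, with an explicit absolute tolerance
    sage: abs(floating_point_computation(x) - exact_value(x))  # abs tol 1e-8
    0.0

A failure then shows the computed difference, which makes it easier to
judge whether the tolerance or the computation is at fault.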

