On Fri, 2 Jan 2015, Duncan Murdoch wrote:

On 01/01/2015 10:05 PM, Mike Miller wrote:

This is how big those errors are:

512*.Machine$double.eps
[1] 1.136868e-13

Under other conditions I was also seeing errors of twice that, or 1024*.Machine$double.eps. It might not be a coincidence that the largest number giving me an error was 1023.
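
Relatedly, 512*.Machine$double.eps is exactly one spacing step ("ulp") between adjacent doubles in [512, 1024), which fits with 1023 being the largest number affected. A quick probe (hypothetical values added here, not the original computation):

1023 + 512 * .Machine$double.eps > 1023
[1] TRUE
1023 + 128 * .Machine$double.eps > 1023
[1] FALSE

A full step above 1023 survives; a quarter of a step gets absorbed by rounding.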

2^-43
[1] 1.136868e-13

.Machine$double.eps
[1] 2.220446e-16

2^-52
[1] 2.220446e-16

I guess the 52 comes from the IEEE floating point spec...

http://en.wikipedia.org/wiki/Double-precision_floating-point_format#IEEE_754_double-precision_binary_floating-point_format:_binary64

...but why are we seeing errors so much bigger than the machine precision?
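
(Side note, not in the original message: one can confirm where the 52 comes from directly in R.)

.Machine$double.digits
[1] 53

The significand carries 53 bits counting the implicit leading 1, so eps = 2^-(53-1) = 2^-52.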

You are multiplying by 1000.  That magnifies the absolute error by a factor of about 1000.
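
For example (an illustration added here, not from the original thread):

x <- 3 * 0.1     # the nearest double lies slightly above 0.3
x - 0.3
[1] 5.551115e-17
x * 1000 - 300
[1] 5.684342e-14

The representation error near 0.3 is tiny in absolute terms; after multiplying by 1000 it is roughly a thousand times larger.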

Why does it change at 2?

Because (most) floating point numbers are stored as (1 + x) * 2^y, where x is a fraction between 0 and 1, and y is an integer between -1022 and 1023. The value of y steps up at 2, and that doubles the spacing between representable numbers, so errors in x become twice as big. (The exceptions are 0, Inf, NaN, etc., as well as "denormals", where the implicit leading 1 disappears and the value is stored as x * 2^-1022.)
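
One can see the spacing double at 2 directly (a small demonstration added here):

(1 + .Machine$double.eps) - 1
[1] 2.220446e-16
(2 + .Machine$double.eps) - 2
[1] 0
(2 + 2 * .Machine$double.eps) - 2
[1] 4.440892e-16

Just above 1 the step between representable numbers is eps; just above 2 it is 2*eps, so adding only eps to 2 rounds back to 2.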


That is a great explanation.  Thanks very much!

Mike

