On Mon, 11 Sep 2006, Duncan Murdoch wrote:

> On 9/11/2006 8:20 AM, Prof Brian Ripley wrote:
> > On Mon, 11 Sep 2006, Duncan Murdoch wrote:
> > > On 9/11/2006 6:15 AM, Robin Hankin wrote:
> > > > Hi Sundar
> > > >
> > > > thanks for this.  But I didn't make it clear that I'm interested in
> > > > extreme numbers such as 1e300 and 1e-300.
> >
> > That's not relevant, unless you are interested in extremely small
> > numbers.
> >
> > > Then
> > > > > f(1e300)
> > > > [1] 7.911257e+283
> >
> > (That is inaccurate.)
> >
> > > is different from
> > > > 1e300*.Machine$double.eps
> >
> > Yes, since 1e300 is not a power of two.  However, Sundar is right in
> > the sense that this is an upper bound for normalized numbers.
> >
> > f(1) is .Machine$double.neg.eps, but so it is for all 1 <= x < 2.
> > This gives you the answer: .Machine$double.neg.eps * 2^floor(log2(x))
>
> I'm not sure what is going wrong, but that is too small (on my machine,
> at least):
>
> > f1 <- function(x) .Machine$double.neg.eps * 2^floor(log2(x))
> > f1(1e300)
> [1] 7.435085e+283
> > 1e300 + f1(1e300) == 1e300
> [1] TRUE
>
> Notice the difference in the 3rd decimal place from the empirical answer
> from my bisection search below.
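[An aside for readers following the thread: the quantity Robin is after is the ULP (unit in the last place), which for a normal double is .Machine$double.eps (2^-52) rescaled to x's binade.  Duncan's f1 uses .Machine$double.neg.eps (2^-53), which is half a ULP; adding exactly half a ULP can round back to x under round-to-nearest, which is why 1e300 + f1(1e300) == 1e300 above.  A sketch in Python rather than R, since Python's math.ulp (3.9+) computes the gap directly and the doubles are the same IEEE 754 ones:]

```python
import math

def gap(x):
    """Spacing between successive representable doubles near x
    (normal range only; subnormals behave differently)."""
    # 2^-52 is the spacing in [1, 2); rescale to x's binade.
    return math.ldexp(2.0 ** -52, math.floor(math.log2(abs(x))))

# Matches the direct ULP computation for normal doubles:
print(gap(1e300) == math.ulp(1e300))   # True
print(gap(1.0) == math.ulp(1.0))       # True: 2^-52 in [1, 2)
```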
I wasn't going into that much detail: just what accuracy are we looking
for here?  .Machine$double.neg.eps isn't that accurate (as its help page
says).

> > Similarly for going below (but carefully, as you get an extra halving
> > on the powers of two).
> >
> > These results hold for all but denormalized numbers (those below
> > 1e-308).
> >
> > > > [I'm interested in the gap between successive different exactly
> > > > representable numbers right across the IEEE range]
> > >
> > > I'm not sure the result you're looking for is well defined, because
> > > on at least the Windows platform, R makes use of 80-bit temporaries
> > > as well as 64-bit double-precision reals.  I don't know any, but
> > > would guess there exist examples of apparently equivalent
> > > formulations of your question that give different answers because
> > > one uses the temporaries and the other doesn't.
> >
> > Not at R level.  For something to get stored in a real vector, it
> > will be a standard 64-bit double.
>
> I don't think that's a proof, since R-level code can call C functions,
> and there are an awful lot of callable functions in R, but I don't have
> a counter-example.

Oh, it is a proof: the storage is declared as a C double, and extended-precision
doubles exist only on the chip.  Since R has to look up 'x' whenever
it sees the symbol, it looks at the stored value, not at a register.
A compiler could do better, but the current code cannot.

--
Brian D. Ripley,                  [EMAIL PROTECTED]
Professor of Applied Statistics,  http://www.stats.ox.ac.uk/~ripley/
University of Oxford,             Tel:  +44 1865 272861 (self)
1 South Parks Road,                     +44 1865 272866 (PA)
Oxford OX1 3TG, UK                Fax:  +44 1865 272595

______________________________________________
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.
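[A footnote on the two quoted caveats: below a power of two the gap halves (the "extra halving" Brian mentions), and in the denormalized range below roughly 2.2e-308 the gap stops scaling with the binade altogether and is constant at 2^-1074.  Illustrated in Python (same IEEE 754 doubles as R):]

```python
import math

# The "extra halving" just below a power of two:
print(math.ulp(0.9) == math.ulp(1.0) / 2)   # True: 2^-53 vs 2^-52

# In the subnormal range the spacing is constant at 2^-1074:
tiny = math.ldexp(1.0, -1074)       # smallest positive subnormal, ~4.94e-324
print(math.ulp(1e-310) == tiny)     # True: 1e-310 is already subnormal
print(math.ulp(0.0) == tiny)        # True: first step up from zero
```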