I believe the IEEE extended format gives you 80 bits in total (the x87
implementation of it, at least), though only 64 of those are mantissa --
the rest are exponent and sign. The trouble is, you aren't guaranteed
that long double actually is the IEEE extended format.

I've spent a lot of time researching this, but either I didn't look at
the right websites or this stuff just isn't documented well.

At any rate, does autoconf allow you to check this? Unfortunately I
don't know about such things. I'm an admitted ignoramus when it comes
to anything to do with run-time/compile-time checking.
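That said, I imagine the check itself only needs a few lines of C: something
like autoconf's AC_RUN_IFELSE (if I have the macro name right) could compile
and run a tiny program along these lines and branch on its exit status. This
is only a sketch of the idea, not a tested configure test:

#include <float.h>

/* Exit 0 if long double carries at least a 64-bit mantissa, 1 otherwise.
   A configure script could use the result to decide whether to rely on
   long double or fall back to arbitrary precision. */
int main(void)
{
    return (LDBL_MANT_DIG >= 64) ? 0 : 1;
}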

Bill.

On 31 Jul, 21:33, Jonathan Bober <[EMAIL PROTECTED]> wrote:
> On Mon, 2007-07-30 at 17:24 -0700, Bill Hart wrote:
> > Wow!! Excellent work indeed.
>
> > In fact on 64 bit X86 systems you could actually use the 128 bit long
> > doubles to give you a little bit more precision (I believe it only
> > gives you 80 bits including exponent and sign, so probably 64 bit
> > mantissa).
>
> Actually, even on my 32 bit core duo, the long double type seems to give
> 64 bits of precision, so using it might help a little. Do you have any
> idea how to check at run/compile time what the precision of a double or
> a long double is?
>
> > It would be interesting to see the time for Mathematica on a 32 bit
> > X86 machine, since this would tell us if that is what they do.
>
> > Certainly I think you are right that any remaining optimization would
> > be in making sure it uses no unnecessary precision. Here is a page
> > giving information about the remainder of the series after N terms:
>
> > http://mathworld.wolfram.com/PartitionFunctionP.html  (see eqn 26).
>
> That's what I'm using, but I took it straight from Rademacher's paper. I
> think that what is needed is a careful analysis of how many terms (N)
> will need to be computed to get the remainder to be less than, say, r
> < .5. Then we know that the error in each term will have to be less than
> (.5 - r)/N to guarantee that when we round, we will get the right
> answer.
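(Just to spell out the bound I think you have in mind: if the truncated
tail satisfies |R_N| <= r < 1/2, and each of the N computed terms is off
by at most (1/2 - r)/N, then the computed partial sum differs from p(n)
by less than r + N*(1/2 - r)/N = 1/2, so rounding to the nearest integer
gives p(n) exactly. That's only the naive triangle-inequality bound -- it
assumes the per-term errors simply add and ignores the error of the final
summation itself, so in practice you'd want a little slack.)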
>
> > Also in Pari, I noted that the computation of the Psi function could
> > dramatically slow the whole computation. I was surprised to find it
> > figured in the runtime. It may be worthwhile checking if this is
> > slowing things down at all. It should be computed relatively quickly
> > if implemented correctly. The main issue was again using the minimum
> > possible precision.
>
> I don't think that this is slowing things down much, but I could be
> wrong. I think that most time is spent in the actual computation of
> A(n,k), (this is the notation I use for the sum of exponentials -- I
> think that is the notation that Mathworld uses also.) If I set s(h,k) to
> return 1 every time, without computing anything, I only save about 20
> seconds on a 2m 30s computation of p(10^9), I think.
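Just so we're talking about the same thing, here's a throwaway
double-precision sketch of that A(n,k) (MathWorld's A_k(n)), with a
completely naive O(k) Dedekind sum s(h,k). It's only meant to show the
structure -- the function names are mine, not yours, and real code would
use Dedekind reciprocity for s(h,k) and far more precision than double:

/* compile with: cc ank.c -lm */
#include <math.h>
#include <complex.h>

#define PI 3.14159265358979323846

/* ((x)) = x - floor(x) - 1/2 for non-integer x, and 0 for integer x */
static double sawtooth(double x)
{
    double f = x - floor(x);
    return (f == 0.0) ? 0.0 : f - 0.5;
}

/* naive O(k) Dedekind sum s(h,k); real code would use reciprocity */
static double dedekind_s(long h, long k)
{
    double s = 0.0;
    for (long i = 1; i < k; i++)
        s += sawtooth((double) i / k) * sawtooth((double) h * i / k);
    return s;
}

static long gcd(long a, long b)
{
    while (b) { long t = a % b; a = b; b = t; }
    return a;
}

/* A(n,k) = sum over 0 <= h < k with gcd(h,k) = 1 of
   exp(pi*i*s(h,k) - 2*pi*i*n*h/k); only n mod k matters in the second
   term, so reduce n first to keep the products small */
static double A(long n, long k)
{
    double complex sum = 0.0;
    long m = n % k;
    for (long h = 0; h < k; h++) {
        if (gcd(h, k) != 1)
            continue;
        double theta = PI * dedekind_s(h, k) - 2.0 * PI * m * h / (double) k;
        sum += cexp(I * theta);
    }
    return creal(sum);  /* the imaginary parts cancel in exact arithmetic */
}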

