On Apr 29, 8:00 am, "William Stein" <[EMAIL PROTECTED]> wrote:
> On Mon, Apr 28, 2008 at 10:47 PM, Jonathan Bober <[EMAIL PROTECTED]> wrote:
>
> >  I'm not sure exactly what the speed differences are, but I think that
> >  they are quite significant. When writing the partition counting code,
> >  which uses quaddouble, I recall that things ran much slower if "sloppy"
> >  multiplication and division were not enabled. (However, I have no hard
> >  benchmarks to back this up right now, so this statement shouldn't be
> >  taken too seriously, and I could be wrong -- it would be nice if I had
> >  some real data.)
>
> >  Anyway, I don't think this is necessarily an issue of correctness
> >  vs. incorrectness. It's an issue of precision -- in one
> >  implementation multiplication/division is correct to within X bits,
> >  and in the other it is correct to within (X + a few more) bits, but
> >  it takes [a lot?] longer. In at least some applications, it is very
> >  desirable to accept a small loss of precision in exchange for a
> >  large speedup.
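
To make that tradeoff concrete, here is a toy Python sketch of the
error-free "two-sum" building block that double-double/quad-double
libraries are built on. The "sloppy" variant saves operations but is
only exact when |a| >= |b|; violate that and the low-order bits are
silently dropped. This is an illustration in the spirit of quaddouble,
not its actual code:

```python
def two_sum(a, b):
    """Accurate variant: s + err == a + b exactly, for any doubles a, b."""
    s = a + b
    bb = s - a
    err = (a - (s - bb)) + (b - bb)
    return s, err

def quick_two_sum(a, b):
    """'Sloppy' variant: fewer operations, but only exact if |a| >= |b|."""
    s = a + b
    err = b - (s - a)
    return s, err

# With |a| < |b|, the sloppy version loses the low-order part:
print(two_sum(1e-30, 1.0))        # (1.0, 1e-30) -- error term recovered
print(quick_two_sum(1e-30, 1.0))  # (1.0, 0.0)   -- error term lost
```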

Yes, as long as people are aware of that fact. Your partition counting
code returns different results than expected on PPC and Sparc, so the
differences do have an impact. IIRC for PPC the solution to the above
problem was to switch to arbitrary precision below a certain bound, so
we can do the same on Sparc. But if we are only talking about a ten
percent performance difference, I would prefer to switch to the
correct mode, since sooner or later somebody will write code on x86-64
that magically breaks on Sparc, for example. If the performance
difference is a factor of two and we leave it as is, we *must*
document this behavior and make it very, very clear that the expected
precision is lower and that in reality these rounding issues really do
crop up -- they are not some "once in a blue moon, if you are standing
on your left leg while holding your breath" kind of freak occurrence.

<SNIP>
>
> >  quaddouble is certainly faster than mpfr, but it seems likely that
> >  in any application from within Sage the Python overhead will eat up
> >  most of the speed difference. (quaddouble is certainly a good thing
> >  to have "under the hood", but I just don't know that there is much
> >  use in having it externally visible.)
>
> I have also questioned whether quaddouble should be exposed in Sage
> via Python.  There might be some overhead because of C++ being
> involved in the wrapping.  Also, though arithmetic is the same speed,
> special functions (e.g., trig functions) are twice as fast with quad
> double as with mpfr.

Interesting. Maybe Paul can shed some light on this if it turns out
that even full precision special functions with quaddouble are still
significantly faster than mpfr with the same precision.
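
For what it's worth, getting the "real data" Jonathan mentioned could
start with a timeit micro-benchmark like the sketch below. It only
measures Python-side costs (one interpreted multiply vs. the same
multiply behind a function call), but it gives a feel for how much of
the raw arithmetic speed the interpreter swallows; absolute timings
are machine-dependent:

```python
import timeit

N = 10**6
t_mul = timeit.timeit("x * y", setup="x, y = 1.1, 2.2", number=N)
t_call = timeit.timeit("f(x, y)",
                       setup="f = lambda a, b: a * b\nx, y = 1.1, 2.2",
                       number=N)
print(f"bare multiply:       {t_mul:.4f} s")
print(f"multiply via a call: {t_call:.4f} s")
```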

> William

Cheers,

Michael
