On Apr 29, 2010, at 10:58 PM, rjf wrote:

> On Apr 29, 10:58 am, Robert Bradshaw <rober...@math.washington.edu>
> wrote:
>> On Apr 29, 2010, at 8:30 AM, rjf wrote:

>>> Again, I see no definition of what you mean by accuracy in the
>>> result of polynomial multiplication.
>>
>> The easiest position to take is that of MPFR: considering the
>> inputs as exact rationals, how far off is the output (say,
>> coefficient by coefficient) from the actual result? One could also
>> use other measures, such as an L^2 norm over some compact region
>> (like [-1,1]). What may make sense for many applications is some
>> kind of "sloped absolute" precision, as has been discussed with
>> p-adic polynomials. These would have good behavior on |x| < a or
>> |x| > a, depending on the slope.
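
As a minimal sketch of that coefficient-by-coefficient measure (plain
Python with made-up helper names, mul_poly and coeff_errors; not Sage
API): read the float coefficients as the exact rationals they already
are, redo the product exactly, and compare.

    from fractions import Fraction

    def mul_poly(f, g):
        # Plain convolution product of two coefficient lists
        # (lowest-degree coefficient first).
        out = [0] * (len(f) + len(g) - 1)
        for i, a in enumerate(f):
            for j, b in enumerate(g):
                out[i + j] += a * b
        return out

    def coeff_errors(f, g):
        # Per-coefficient distance between the float product and the
        # exact product of the same inputs read as exact rationals.
        # Every IEEE double *is* an exact rational, so the conversion
        # via Fraction is lossless.
        approx = mul_poly(f, g)
        exact = mul_poly([Fraction(a) for a in f],
                         [Fraction(b) for b in g])
        return [abs(Fraction(a) - e) for a, e in zip(approx, exact)]

    f = [1.0, 1e-8, 3.0]
    g = [2.0, -1e8, 0.5]
    print([float(e) for e in coeff_errors(f, g)])

The float convolution accumulates one rounding error per term of each
inner sum; that accumulated error is exactly what this measure
quantifies.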

> There is a large literature on the meaning of the "approximate GCD"
> of two polynomials with floating-point coefficients. The term
> "stability" also has a pretty good definition, and it does not seem
> to correspond to how you are using it here.

I don't need GCDs; I'm talking about what's been useful to me.

>> It all really depends on your application, and currently we don't
>> define, let alone make guarantees about, the precision of the
>> result (and neither does Maxima, as far as I understand).

> Maxima was designed around exact arithmetic, and generally offers to
> convert floats to their corresponding exact rationals before doing
> anything that requires arithmetic. It makes no claims about floats
> per se, partly because saying anything sensible is a challenge,
> compounded by particular issues that are not mathematical but
> practical: overflow, different machine floating-point formats, etc.
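
In Python terms, that float-to-rational step looks like this (a
sketch of the same idea; in Maxima, I believe, rationalize does it).
The conversion is lossless because every IEEE double is a dyadic
rational:

    from fractions import Fraction

    x = 0.1                    # stored as the nearest double, not 1/10
    print(Fraction(x))         # 3602879701896396877/36028797018963968
    print(Fraction(x).limit_denominator(1000))  # 1/10, the "intended" value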

For the most part, that's how it is with Sage as well, so I'm sure
you'll cut us the same slack :) We would like to do multi-precision
numerics better, but for machine-precision stuff we leverage
NumPy/SciPy (and many of the folks there are experts in the field,
for any reasonable definition of expert). Of course, things like
linear algebra and approximate solutions to differential equations
are better known than floating-point polynomial multiplication.

>> We do, however, try to avoid algorithms that are known to be
>> numerically unstable.
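
For a concrete instance of the kind of instability meant here (a
textbook sketch, not anything from Sage's code): the naive quadratic
formula loses the smaller root to catastrophic cancellation when
b^2 >> 4ac, while an algebraically equivalent rearrangement keeps
full precision.

    import math

    def roots_naive(a, b, c):
        # Textbook formula: (-b +/- sqrt(b^2 - 4ac)) / 2a.
        d = math.sqrt(b * b - 4 * a * c)
        return (-b + d) / (2 * a), (-b - d) / (2 * a)

    def roots_stable(a, b, c):
        # Compute the large-magnitude root first (no cancellation in
        # b + copysign(d, b)), then recover the other root from the
        # product of roots, c/a.
        d = math.sqrt(b * b - 4 * a * c)
        q = -(b + math.copysign(d, b)) / 2
        return q / a, c / q

    a, b, c = 1.0, 1e8, 1.0      # roots are roughly -1e8 and -1e-8
    print(roots_naive(a, b, c))  # the small root comes out badly wrong
    print(roots_stable(a, b, c)) # both roots accurate to full precision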

> The lack of serious understanding of numerical computation in Sage
> is unfortunate.

That would be unfortunate if it were true, but thankfully we're a pretty diverse crowd.

> It might be cured by someone taking an interest in the topic and
> taking a course or two.

>> Fortunately for us, many asymptotically fast algorithms do have
>> cutoffs well within the size ranges we regularly encounter, and low
>> enough that they make a significant difference in the runtime.
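
A toy sketch of such a cutoff (CUTOFF is a made-up threshold; real
libraries tune the crossover empirically): a hybrid Karatsuba
polynomial multiplication that falls back to the quadratic schoolbook
method below a fixed size, which is how crossover points are
exploited in practice.

    CUTOFF = 32   # hypothetical crossover; tuned empirically in practice

    def mul_school(f, g):
        # Quadratic-time convolution; fastest for short inputs.
        out = [0] * (len(f) + len(g) - 1)
        for i, a in enumerate(f):
            for j, b in enumerate(g):
                out[i + j] += a * b
        return out

    def add_poly(f, g):
        # Coefficient-wise sum of lists of possibly unequal length.
        if len(f) < len(g):
            f, g = g, f
        return [a + (g[i] if i < len(g) else 0) for i, a in enumerate(f)]

    def mul_karatsuba(f, g):
        # Below the cutoff the asymptotically slower method wins.
        if min(len(f), len(g)) <= CUTOFF:
            return mul_school(f, g)
        m = min(len(f), len(g)) // 2
        f0, f1 = f[:m], f[m:]
        g0, g1 = g[:m], g[m:]
        lo = mul_karatsuba(f0, g0)
        hi = mul_karatsuba(f1, g1)
        mid = mul_karatsuba(add_poly(f0, f1), add_poly(g0, g1))
        # f*g = lo + (mid - lo - hi)*x^m + hi*x^(2m)
        out = [0] * (len(f) + len(g) - 1)
        for i, c in enumerate(lo):
            out[i] += c
            out[i + m] -= c
        for i, c in enumerate(mid):
            out[i + m] += c
        for i, c in enumerate(hi):
            out[i + m] -= c
            out[i + 2 * m] += c
        return out

    # Quick sanity check against the schoolbook result.
    import random
    f = [random.random() for _ in range(200)]
    g = [random.random() for _ in range(200)]
    s, k = mul_school(f, g), mul_karatsuba(f, g)
    assert max(abs(a - b) for a, b in zip(s, k)) < 1e-9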

> There are a few, but not many compared to the huge number that you
> will encounter if you read certain theoretical journals.

> If you care about specifics of the failure of theoretical computer
> scientists to address computation, write to me privately.

No thanks, I run into enough of this on my own :)

- Robert
