On 04/26/2010 10:54 PM, Robert Bradshaw wrote:

I should comment on this, as I wrote the code and comments in question.
There actually is a fair amount of research out there on stable
multiplication of polynomials over the real numbers, but (if I remember
right, it was a while ago) there were some results to the effect that no
asymptotically fast algorithm has good stability properties over all
possible coefficients.


If you have a moment and if you remember, could you point out a reference or two? I searched for several hours before posting, and couldn't find anything that seemed to address the question of stability satisfactorily.



Of course, in practice one often runs into "well
behaved" polynomials in R[x]. (In particular, most of the interest is,
understandably, about truncated power series, though there are other
uses.) The obvious thing to do is a (non-discrete) FFT. What I was
thinking about implementing (though I never got back to it) is something
that works like this:

1) Choose some alpha and rescale f(x) -> f(alpha*x) so all coefficients have
roughly the same size.
2) Multiply the resulting polynomial over Z using the insanely fast code
in FLINT. The resulting answer will be the exact product of the truncated
input, and the precision loss (conversely, the precision required for no
loss) was fairly easy to work out, if I remember right.
3) Cast the result back into R[x].
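The three steps above can be sketched in plain Python (names and the
fixed-shift scaling strategy are my own illustration, not code from Sage
or FLINT; the quadratic convolution stands in for FLINT's fast integer
polynomial multiplication, and the sketch assumes step 1 has already made
the coefficients comparable in size):

```python
from math import frexp, ldexp

def mul_via_integers(f, g, prec=53):
    """Multiply two real polynomials (coefficient lists, lowest degree
    first) by scaling to integers, convolving exactly over Z, and
    scaling the product back into R[x]."""
    # Scale each polynomial by a power of two so its largest
    # coefficient uses about `prec` bits (a crude version of step 1).
    def to_ints(poly):
        top = max(abs(c) for c in poly)
        _, e = frexp(top)              # top = m * 2**e with 0.5 <= |m| < 1
        shift = prec - e
        return [int(round(ldexp(c, shift))) for c in poly], shift

    fi, sf = to_ints(f)
    gi, sg = to_ints(g)

    # Step 2: exact convolution over Z (FLINT would do this
    # asymptotically fast; here it is the schoolbook loop).
    prod = [0] * (len(fi) + len(gi) - 1)
    for i, a in enumerate(fi):
        for j, b in enumerate(gi):
            prod[i + j] += a * b

    # Step 3: undo the combined power-of-two scaling to cast back into R[x].
    return [ldexp(c, -(sf + sg)) for c in prod]
```

Because the integer product is exact and the scalings are powers of two,
the only rounding happens in the initial truncation of the coefficients,
which is what makes the precision loss easy to account for.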


Thanks!

Jason

--
To post to this group, send an email to sage-devel@googlegroups.com
To unsubscribe from this group, send an email to 
sage-devel+unsubscr...@googlegroups.com
For more options, visit this group at http://groups.google.com/group/sage-devel
URL: http://www.sagemath.org
