There is the following paper on numerically stable polynomial
multiplication by Joris van der Hoeven:

 http://www.texmacs.org/joris/stablemult/stablemult-abs.html

But pay careful attention to the note on that page about an error in
bound 3.

Joris writes:

"Basically, when doing
the multiplication (1 - z)^n (1 + z)^n, as suggested by one of the
referees,
there is a lot of cancellation going on, so you will need to work
at precision c n at least in order to guarantee the first digits of
the non zero coefficients. So basically, there is no good algorithm
at fixed precision in the Newton polygon sense.

On the other hand, I do believe that my algorithm somehow behaves best
from the complexity point of view in the case that nothing bad
of the above kind occurs.

Notice that I did implement the algorithm in Mathemagix.
I used it to design an asymptotically efficient complex root
finder based on repeated applications of Graeffe transformations.
However, this also required the introduction of larger exponents,
with a lot of additional overhead, making the whole method not
really competitive, except possibly at really huge degrees."
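
To make the cancellation concrete, here is a quick sketch in Python
(the choice n = 30 and all the names are mine, purely for
illustration). It multiplies (1 - z)^n by (1 + z)^n once in exact
integer arithmetic and once in double precision, using schoolbook
convolution:

    from math import comb

    def conv(a, b):
        """Schoolbook polynomial multiplication (coefficients ascending)."""
        c = [0] * (len(a) + len(b) - 1)
        for i, ai in enumerate(a):
            for j, bj in enumerate(b):
                c[i + j] += ai * bj
        return c

    n = 30  # big enough that doubles start losing the answer
    p = [(-1) ** k * comb(n, k) for k in range(n + 1)]  # (1 - z)^n
    q = [comb(n, k) for k in range(n + 1)]              # (1 + z)^n

    exact = conv(p, q)  # integer arithmetic: this is (1 - z^2)^n
    approx = conv([float(x) for x in p], [float(x) for x in q])

    # The odd coefficients of (1 - z^2)^n are exactly zero, but the
    # float convolution leaves rounding noise on the order of the
    # largest intermediate product times the machine epsilon.
    for k in (n - 1, n, n + 1):
        print(k, exact[k], approx[k])

The intermediate products grow like binomial(n, n/2)^2, i.e. roughly
4^n, so the working precision has to grow linearly in n before even
the first digits of the surviving coefficients are correct; that is
the precision c n Joris refers to.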

I came to know about this paper by a roundabout route. Somebody
mentioned it to somebody who mentioned it to somebody else who
mentioned it to me. So, given that I know Joris, I just wrote and
asked him about it.

Bill.


On Apr 30, 10:17 am, Bill Hart <goodwillh...@googlemail.com> wrote:
> Actually, I lie, slightly. I did find one instance of `numerical
> stability' used in reference to the FFT, and that is on Wikipedia (so
> now we all know it must be true). There I presume the context is
> signal processing rather than polynomial multiplication. I did track
> down one article which specifically states that the Rader-Brenner FFT
> is numerically unstable, and it gave a reason why (the size of the
> twiddle factors used), though I don't recall a definition of what they
> meant by numerical stability. Again, I assume this concept is
> well-understood by the signal processing people. How that concept
> actually applies to polynomial multiplication I don't know.
>
> What I imagine is the correct way of looking at the problem is via
> Pontryagin duality, where the group algebras of Z/nZ and its dual are
> isomorphic via the Fourier transform. I suppose that polynomials
> modulo x^n - 1 are complex-valued functions on Z/nZ and that the
> numerical stability of FFTs is examined in that context. Even as I
> say that though, I am uncertain how inexact representations of R are
> handled under this interpretation...
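
(To make the duality just described concrete, here is a minimal sketch
in plain Python; all the names are mine and the data is arbitrary. A
polynomial modulo x^n - 1 is a function on Z/nZ, and the DFT, written
naively in O(n^2) operations, turns cyclic convolution into pointwise
multiplication:

    import cmath

    def dft(a):
        """Evaluate a, a function on Z/nZ, at the n-th roots of unity."""
        n = len(a)
        return [sum(a[j] * cmath.exp(-2j * cmath.pi * j * k / n)
                    for j in range(n)) for k in range(n)]

    def idft(A):
        """Inverse transform: recover the function from its DFT."""
        n = len(A)
        return [sum(A[k] * cmath.exp(2j * cmath.pi * j * k / n)
                    for k in range(n)) / n for j in range(n)]

    def cyclic_mul(a, b):
        """Multiply two polynomials modulo x^n - 1 (convolve on Z/nZ)."""
        n = len(a)
        c = [0.0] * n
        for i in range(n):
            for j in range(n):
                c[(i + j) % n] += a[i] * b[j]
        return c

    n = 8
    a = [float(k) for k in range(n)]      # arbitrary test data
    b = [1.0, -1.0] + [0.0] * (n - 2)
    direct = cyclic_mul(a, b)
    via_dft = idft([x * y for x, y in zip(dft(a), dft(b))])
    print(max(abs(d - v) for d, v in zip(direct, via_dft)))

The number printed is pure rounding error, a small multiple of the
machine epsilon; how that error grows with n and with the size of the
coefficients is exactly where the stability question above comes in.)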
>
> I'm sure it is an interesting problem. Just not one I have much time
> to study theoretically at this time.
>
> Bill.
>
> On Apr 30, 9:57 am, Bill Hart <goodwillh...@googlemail.com> wrote:
>
>
>
>
>
> > On Apr 30, 6:58 am, rjf <fate...@gmail.com> wrote:
>
> > > On Apr 29, 10:58 am, Robert Bradshaw <rober...@math.washington.edu>
> > > wrote:
>
> > > > On Apr 29, 2010, at 8:30 AM, rjf wrote:
>
> > > > > (RJF) Again, I see no definition of what you mean by accuracy
> > > > > in the result of polynomial multiplication.
>
> > > > The easiest position to take is that of MPFR: considering the
> > > > inputs as exact rationals, how far off is the output (say,
> > > > coefficient by coefficient) from the actual result. One could
> > > > also use other measures, such as an L^2 norm over some compact
> > > > region (like [-1,1]). What may make sense for many applications
> > > > is some kind of a "sloped absolute" precision, as has been
> > > > discussed with p-adic polynomials. These would have good
> > > > behavior on |x| < a or |x| > a depending on the slope.
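
(The coefficient-by-coefficient and L^2 measures just mentioned are
easy to compute on a toy example. A minimal sketch in Python, with
exact rationals standing in for the "inputs as exact rationals"
convention; the data and all names are mine:

    from fractions import Fraction

    def conv(a, b):
        """Schoolbook polynomial multiplication (coefficients ascending)."""
        c = [0] * (len(a) + len(b) - 1)
        for i, ai in enumerate(a):
            for j, bj in enumerate(b):
                c[i + j] += ai * bj
        return c

    # Fraction(x) is the exact rational value of the double x.
    a = [0.1, 0.2, 0.3]
    b = [0.4, 0.5, 0.6]
    exact = conv([Fraction(x) for x in a], [Fraction(x) for x in b])
    approx = conv(a, b)  # ordinary double-precision product

    # Coefficient-by-coefficient error.
    err = [Fraction(x) - e for x, e in zip(approx, exact)]
    print("max coefficient error:", float(max(abs(x) for x in err)))

    # Squared L^2 norm of the error polynomial over [-1, 1]:
    # integrating (sum_k err_k x^k)^2 term by term gives
    # sum over even i + j of 2 * err[i] * err[j] / (i + j + 1).
    l2sq = sum(2 * err[i] * err[j] / (i + j + 1)
               for i in range(len(err)) for j in range(len(err))
               if (i + j) % 2 == 0)
    print("L^2 error on [-1, 1]:", float(l2sq) ** 0.5)

Either number could serve as "the" accuracy of the product; which one
is the right measure depends on the application, as noted above.)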
>
> > > There is a large literature on the meaning of the "approximate GCD"
> > > of two polynomials with floating-point coefficients. The term
> > > "stability" also has a pretty good definition, and it does not seem
> > > to correspond to how you are using it here.
>
> > Approximate GCD? That's a curious concept. What is it used for? I
> > can't imagine defining a GCD in this context as divisibility is an
> > exact phenomenon.
>
> > I hear the term numerical stability used quite a lot. The two
> > contexts in which I've encountered it are linear algebra and
> > differential equations. Actually, I might have heard it used in
> > polynomial evaluation too.
>
> > I happily admit that I've no idea of a definition in the case of
> > polynomial multiplication. However, regardless of what the correct
> > terminology is, the problem I am studying is precisely the one stated,
> > which I took to be the subject of the thread.
>
> > I'm not interested in numerical stability in any of the other senses
> > that I know of, at this point. I don't even claim that what I am
> > studying is a useful notion. But feel free to suggest some references
> > if there is something I ought to read. Oh wait, every single published
> > article out there is nonsense. Damn, I forgot. You don't recommend any
> > articles.
>
> > Bill.
>
> > > > It all really depends on your application, and currently we don't
> > > > define, let alone make guarantees on, the precision of the result
> > > > (and nor does Maxima, as far as I understand).
>
> > > Maxima was designed around exact arithmetic, and generally offers to
> > > convert floats to their corresponding exact rationals before doing
> > > anything that requires arithmetic. It makes no claims about floats
> > > per se, partly because saying anything sensible is a challenge,
> > > compounded by particular issues that are not mathematical but
> > > practical: overflow, different machine floating-point formats, etc.
>
> > > > We do, however, try to avoid  
> > > > algorithms that are known to be numerically unstable.
>
> > > The lack of serious understanding of numerical computation in Sage
> > > is unfortunate. It might be cured by someone taking an interest in
> > > the topic and taking a course or two.
>
> > > > Fortunately for us, many asymptotically fast algorithms do have  
> > > > cutoffs well within the size ranges we regularly encounter, and low  
> > > > enough that it makes a significant difference in the runtime.
>
> > > There are a few, but not many compared to the huge number that you
> > > will encounter if you read certain theoretical journals.
>
> > > If you care about specifics of the failure of theoretical computer
> > > scientists to address computation, write to me privately.
>
> > > RJF
>

-- 
To post to this group, send an email to sage-devel@googlegroups.com
To unsubscribe from this group, send an email to 
sage-devel+unsubscr...@googlegroups.com
For more options, visit this group at http://groups.google.com/group/sage-devel
URL: http://www.sagemath.org
