This isn't really to do with it being a power series. If you work with
random polynomials whose coefficient magnitudes span 3 or 4 times the
working precision (in bits), then you lose all meaningful data unless
you use classical multiplication, as the sketch below illustrates.
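
For example (a sketch reusing the rel_prec and fft_mul helpers from
Robert's message quoted below; the random coefficient model here is
just my own illustration):

sage: R.<x> = QQ[]
sage: # positive coefficients spanning ~220 bits, about 4x RR's 53-bit precision
sage: f = R([ZZ.random_element(1, 2^10) * 2^ZZ.random_element(-212, 1)
....:        for _ in range(100)])
sage: ff = f.change_ring(RR)
sage: min(rel_prec(ff * ff, f * f))                # classical: roughly 52, barely any loss
sage: min(rel_prec(ff._mul_karatsuba(ff), f * f))  # expect severe loss
sage: min(rel_prec(fft_mul(ff, ff), f * f))        # expect large and negative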

But Karatsuba and FFT are fine if you are prepared to work at some
appropriate multiple of the desired final precision, even for the
example you give. In fact, you'll lose no data at all.

Of course that comes at a cost. For length 2^k, the example you give
requires working at a bit over k*2^k bits of precision for the FFT, I
think, since the coefficients 1/n! of e^x span roughly n*log2(n) bits.
The bit complexity is then comparable to classical multiplication at
the original precision. But when the coefficients don't have this
exponential behaviour, you aren't penalised by the O(n^2) coefficient
operations of the classical algorithm. A quick check follows.
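
Here is that check in Sage (a sketch; 700 bits is just a comfortable
overestimate of what's needed at length 100, and since numpy's FFT is
stuck at doubles I use Karatsuba, which suffers the same kind of
cancellation; rel_prec is from Robert's message below):

sage: R.<x> = QQ[]
sage: f = exp(x + O(x^100)).polynomial()
sage: ff = f.change_ring(RealField(700))
sage: min(rel_prec(ff._mul_karatsuba(ff), f * f))  # expect well over 53

Rounding the coefficients back down to 53 bits then gives every
coefficient of the product to full precision.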

Bill.

On May 1, 4:53 am, Robert Bradshaw <rober...@math.washington.edu>
wrote:
> On Apr 30, 2010, at 8:00 PM, Bill Hart wrote:
>
> > Finally, the figures for Karatsuba with signed coefficients that vary
> > wildly: hardly distinguishable from the figures for classical
> > multiplication. With that I finish my little study. And in the
> > inimitable words of Adam and Jamie (and with about as much "science"
> > to back it up): MYTH BUSTED!!
>
> > So what have we learned:
>
> > * The Sage timings for multiplication in RR[x] are not great (please
> > note, however, that my original timings of the classical algorithm in
> > flint2 were incorrect due to a bug in my code; flint is actually only
> > 10-20% faster for this. The times for the Hartley transform were also
> > misreported and should have been 9ms and 125ms for degree 1000 and
> > 10000 multiplications respectively, still considerably faster than
> > Sage).
>
> > * Karatsuba does not display noticeably worse behaviour than classical
> > multiplication. Toom-Cook is also acceptable.
>
> > * There is no such thing as Fateman multiplication.
>
> > * The "Fateman multiplication" code is broken.
>
> > * The Hartley transform is much better than Rader-Brenner.
>
> > * For fixed point arithmetic the FFT techniques are fine, but for
> > floating point they will suck when the "magnitude" of the coefficients
> > varies greatly.
>
> > * To answer the question asked by the OP: yes, there is a better
> > algorithm; see the algorithm of Joris van der Hoeven.
>
> > * NO algorithm that we have looked at will deal with the kind of
> > cancellation talked about in the docstring which started this whole
> > thread off.
>
> May I add another data point: rather than looking at random
> polynomials, consider power series expansions. Here I compute the
> expansion of e^x to 100 terms via classical, Karatsuba, and FFT over
> RR (53 bits) vs. exactly over QQ:
>
> def rel_prec(approx, actual):
>     # bits of relative precision in each coefficient; oo if exact
>     return [oo if a == b else -log(abs(a - b)/abs(b), 2)
>             for (a, b) in zip(approx, actual)]
>
> from numpy.fft import rfft, irfft
> def fft_mul(f, g):
>     # multiply via a double-precision real FFT, zero-padding both
>     # inputs to (at least) the length of the product
>     n = f.degree() + g.degree() + 1
>     fft_f = rfft(list(f) + [0] * (n - f.degree()))
>     fft_g = rfft(list(g) + [0] * (n - g.degree()))
>     # pass the length to irfft explicitly so odd sizes round-trip
>     return f.parent()(irfft(fft_f * fft_g, n + 1))
>
> sage: R.<x> = QQ[]
> sage: f = exp(x+O(x^100)).polynomial()
> sage: ff = f.change_ring(RR)
>
> sage: sorted(rel_prec(ff*ff, f*f))
> [52, 52, 52, 52, 52, 52, 52, 52, 52, 52, 52, 52, 52, 52, 52, 52, 52,  
> 52, +Infinity, +Infinity, +Infinity, +Infinity, +Infinity, +Infinity,  
> +Infinity, +Infinity, +Infinity, +Infinity, +Infinity, +Infinity,  
> +Infinity, +Infinity, +Infinity, +Infinity, +Infinity, +Infinity,  
> +Infinity, +Infinity, +Infinity]
>
> sage: sorted(rel_prec(ff._mul_karatsuba(ff), f*f))
> [7, 9, 10, 11, 12, 12, 12, 14, 15, 16, 20, 21, 21, 25, 25, 26, 26, 28,  
> 30, 30, 33, 35, 39, 40, 41, 41, 43, 44, 45, 45, 46, 50, 50, 50,  
> +Infinity, +Infinity, +Infinity, +Infinity, +Infinity]
>
> sage: sorted(rel_prec(fft_mul(ff, ff), f*f))
> [-61, -54, -46, -44, -42, -35, -32, -24, -24, -18, -13, -12, -8, -3,  
> -1, 0, 4, 7, 10, 13, 17, 24, 27, 29, 31, 33, 35, 37, 40, 43, 45, 48,  
> 48, 50, 51, 52, 52, 52, +Infinity]
>
> Observations:
>
> - Classical -- Not bad, at most 1 bit lost.
> - Karatsuba -- Things get pretty bad (especially towards the middle of
> the product).
> - FFT -- Yes, that's a negative amount of precision for half the
> coefficients, i.e. those results are off by orders of magnitude. It
> just can't handle a range of coefficient magnitudes greater than the
> working precision.
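>
> For scale, a quick check of the coefficient spread (the smallest
> coefficient of f is 1/99!):
>
> sage: RR(f[0]/f[99]).log(2)   # roughly 518 bits, ~10x the working precision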
>
> I don't have time to do extensive experiments or draw a conclusion,  
> but the code above lends itself easily to more investigations in this  
> direction.
>
> - Robert
>

-- 
To post to this group, send an email to sage-devel@googlegroups.com
To unsubscribe from this group, send an email to 
sage-devel+unsubscr...@googlegroups.com
For more options, visit this group at http://groups.google.com/group/sage-devel
URL: http://www.sagemath.org
