I implemented the scaling trick. If the two polynomials want different
scalings then a single naive scaling won't work: you have to apply the
same scaling to both and then unscale the product by the common factor
at the end. But thanks to a trick in Joris' paper, you can handle the
case of two genuinely different scalings too.
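
For the curious, the basic version of the trick is only a few lines at
the Sage level (a minimal sketch, not the code I actually wrote;
scaled_mul is a made-up name, and choosing c well is the whole game):

def scaled_mul(f, g, c):
    # Evaluate both inputs at c*x, so that coefficient i picks up a
    # factor c^i and the magnitudes flatten out; multiply with any
    # asymptotically fast method; then substitute x/c to undo the
    # scaling on the product.
    x = f.parent().gen()
    h = f(c*x)._mul_karatsuba(g(c*x))
    return h(x/c)

When f and g prefer different scalings c1 and c2, the naive compromise
is a single common c (their geometric mean, say); the trick from
Joris' paper handles that case properly.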

So now any polynomial whose coefficients grow or decay regularly can
be multiplied asymptotically fast with essentially no precision loss.
That probably covers most of the interesting cases anyhow.

Bill.

On May 1, 6:33 pm, Bill Hart <goodwillh...@googlemail.com> wrote:
> I agree that we shouldn't just drop in something that could be worse.
> But offering asymptotically fast multiplication with a carefully
> written docstring detailing when it would be useful isn't a bad idea.
>
> Anyhow, the best algorithm to implement would be Joris' algorithm. The
> scaling idea is only part of that.
>
> Scaling is only useful, as Joris points out, when the roots of the
> polynomial all lie in a narrow annulus centred at the origin (scaling
> simply maps that annulus down to one of radius roughly 1).
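>
> For a toy example in Sage: the roots of (x - 8)*(x - 16) lie in an
> annulus of radius roughly 8 to 16, and substituting 8*x moves them to
> 1 and 2, at the cost of scaling coefficient i by 8^i:
>
> sage: R.<x> = RR[]
> sage: f = (x - 8)*(x - 16)
> sage: f(8*x)          # roots now at 1 and 2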
>
> For other polynomials, Joris suggests figuring out which terms of the
> cross product actually contribute substantially to the product. This
> set of terms can then be broken into segments which can be multiplied
> using fast multiplication with scaling. His algorithm is
> asymptotically fast, but should be about as accurate as classical
> multiplication. Basically you can't really beat that.
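>
> I won't swear this matches the paper in detail, but as I understand it
> the segment-finding step looks something like this (magnitude_hull is
> a made-up name and this is an untested sketch):
>
> import math
>
> def magnitude_hull(f):
>     # Upper convex hull of the points (i, log2 |f_i|), by a monotone
>     # chain. Along each hull edge the magnitudes grow at a roughly
>     # constant rate, so each edge suggests one segment with its own
>     # scaling c = 2^slope, to be multiplied fast with scaling.
>     pts = [(i, math.log(abs(a), 2)) for i, a in enumerate(f.list()) if a != 0]
>     hull = []
>     for p in pts:
>         while len(hull) >= 2:
>             (ax, ay), (bx, by) = hull[-2], hull[-1]
>             if (bx - ax)*(p[1] - ay) - (by - ay)*(p[0] - ax) >= 0:
>                 hull.pop()    # middle point lies on or below the chord
>             else:
>                 break
>         hull.append(p)
>     # (start, end, slope) for each hull edge
>     return [(x0, x1, (y1 - y0)/(x1 - x0))
>             for (x0, y0), (x1, y1) in zip(hull, hull[1:])]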
>
> It's also implemented in Mathemagix, I believe.
>
> Bill.
>
> On May 1, 6:07 pm, Robert Bradshaw <rober...@math.washington.edu>
> wrote:
>
> > On May 1, 2010, at 7:17 AM, Bill Hart wrote:
>
> > > This isn't really to do with it being a power series. If you simply
> > > work with random polynomials whose coefficient magnitudes span 3 or 4
> > > times the working precision (in bits), then you lose all meaningful
> > > data unless you use classical multiplication.
>
> > The only two cases in which I've ever run into polynomial arithmetic
> > over the reals are Chebyshev polynomials and power series expansions,
> > both of which in general have coefficients of hugely varying magnitude,
> > but in a regular manner, which is why I think they're important to
> > consider.
>
> > > But karatsuba and FFT are fine if you are prepared to work at some
> > > appropriate multiple of the desired final precision, even for the
> > > example you give. In fact you'll lose no data at all.
>
> > Of course. In fact, you could say the same about nearly any algorithm.  
> > I am not arguing that such algorithms not be used, but they shouldn't  
> > just be used naively because often they're worse, and here the naive  
> > classical algorithm does have an advantage.
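>
> > (For reference, "working at some appropriate multiple of the desired
> > final precision" is a one-liner in Sage; the 212 bits here is an
> > arbitrary amount of headroom, not a computed bound, and ff, gg are
> > the polynomials defined below:)
>
> > sage: R212 = RealField(212)
> > sage: hh = ff.change_ring(R212)._mul_karatsuba(gg.change_ring(R212))
> > sage: hh.change_ring(RR)    # round the product back to 53 bits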
>
> > On May 1, 2010, at 8:12 AM, Bill Hart wrote:
>
> > > Actually, in the situation where the magnitude of the terms varies
> > > exponentially like this in a regular way, you can just evaluate your
> > > power series at c*x for some constant c (e.g. 2 in the example you
> > > give), then do the multiplication asymptotically fast, then evaluate
> > > the result at x/c.
>
> > Which was *exactly* my original proposal when this thread first
> > opened. We should implement it.
>
> > > If the magnitude doesn't vary regularly, you are screwed no matter
> > > what algorithm you use.
>
> > Or you choose the best c you can, figure out what precision you need
> > (which could be a lot, but which I think is easy to compute in the FFT
> > case), and do it there.
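>
> > One cheap way to choose such a c (a heuristic sketch only, not from
> > Joris' paper; guess_scaling is a made-up name): fit a line to the
> > log-magnitudes of the coefficients and take c = 2^(-slope), which
> > flattens geometric growth exactly:
>
> > import math
>
> > def guess_scaling(f):
> >     # Least-squares slope of log2 |f_i| against i; if |f_i| ~ A*r^i
> >     # then slope = log2(r) and c = 1/r, making |f_i * c^i| ~ A flat.
> >     pts = [(i, math.log(abs(a), 2)) for i, a in enumerate(f.list()) if a != 0]
> >     mx = sum(i for i, y in pts) / len(pts)
> >     my = sum(y for i, y in pts) / len(pts)
> >     slope = (sum((i - mx)*(y - my) for i, y in pts)
> >              / sum((i - mx)**2 for i, y in pts))
> >     return 2.0 ** (-slope)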
>
> > > In fact, if you do exp(2*x) times exp(-x) you'll see the classical
> > > algorithm also sucks.
>
> > I'm not claiming it's always right, but it still does way better than
> > anything else (short of bumping the precision a lot, and/or rescaling
> > the coefficients, which is what we should do when we have time to
> > implement it).
>
> > sage: R.<x> = QQ[[]]
> > sage: g = (2*x).exp().polynomial()
> > sage: f = (-x).exp().polynomial()
> > sage: ff = f.change_ring(RR)
> > sage: gg = g.change_ring(RR)
>
> > sage: rel_prec(ff*gg, f*g)
> > [+Infinity, +Infinity, +Infinity, 51, 49, 49, 46, 45, 42, 41, 38, 39,  
> > 35, 35, 33, 31, 31, 27, 26, 29, 43, 51, 52, 50, 50, +Infinity, 50, 51,  
> > 52, 51, 52, 50, 52, +Infinity, 51, +Infinity, 52, 52, +Infinity]
>
> > sage: (rel_prec(ff._mul_karatsuba(gg), f*g))
> > [+Infinity, +Infinity, +Infinity, 48, 45, 42, 41, 39, 34, 31, 27, 27,  
> > 21, 19, 12, 8, 6, 2, -2, -8, 7, 12, 11, 17, 15, 17, 17, 22, 26, 24,  
> > 30, 32, 33, 36, 41, 45, 47, 48, +Infinity]
>
> > sage: rel_prec(fft_mul(ff, gg), f*g)
> > [50, +Infinity, 49, 50, 46, 46, 43, 41, 37, 36, 29, 27, 23, 26, 18,  
> > 14, 7, 6, -1, -3, 9, 12, 8, 8, 5, 4, 0, -1, -4, -4, -11, -12, -17,  
> > -20, -20, -30, -33, -39, -38]
>
> > Until someone comes up with something with better behavior, we  
> > shouldn't drop in something that degrades much worse.
>
> > - Robert
>
> > > On May 1, 4:53 am, Robert Bradshaw <rober...@math.washington.edu>
> > > wrote:
> > >> On Apr 30, 2010, at 8:00 PM, Bill Hart wrote:
>
> > >>> Finally, the figures for karatsuba with signed coefficients that
> > >>> vary wildly: hardly distinguishable from the figures for classical
> > >>> multiplication. With that I finish my little study. And in the
> > >>> inimitable words of Adam and Jamie (and with about as much "science"
> > >>> to back it up): MYTH BUSTED!!
>
> > >>> So what have we learned:
>
> > >>> * The Sage timings for multiplication in RR[x] are not great (please
> > >>> note, however, that my original timings of the classical algorithm in
> > >>> flint2 were incorrect due to a bug in my code; flint is actually only
> > >>> 10-20% faster for this. The times for the Hartley transform were also
> > >>> misreported and should have been 9ms and 125ms for degree 1000 and
> > >>> 10000 multiplications respectively, both still considerably faster
> > >>> than Sage).
>
> > >>> * Karatsuba does not display noticeably worse behaviour than  
> > >>> classical
> > >>> multiplication. Toom-Cook is also acceptable.
>
> > >>> * There is no such thing as Fateman multiplication.
>
> > >>> * The "Fateman multiplication" code is broken.
>
> > >>> * The Hartley transform is much better than Rader-Brenner.
>
> > >>> * For fixed point arithmetic the FFT techniques are fine, but for
> > >>> floating point they will suck when the "magnitude" of the coefficients
> > >>> varies greatly (the FFT's rounding error is absolute, roughly epsilon
> > >>> times the largest coefficient, so the small coefficients lose all
> > >>> their relative accuracy).
>
> > >>> * To answer the question asked by the OP, yes, there is a better
> > >>> algorithm; see the algorithm of Joris vdH.
>
> > >>> * NO algorithm that we have looked at will deal with the kind of
> > >>> cancellation talked about in the docstring which started this whole
> > >>> thread off.
>
> > >> May I add another datapoint: rather than looking at random
> > >> polynomials, consider power series expansions. Here I compute the
> > >> expansion of e^x to 100 terms via classical, karatsuba, and FFT over
> > >> RR (53 bits) vs. exactly over QQ:
>
> > >> def rel_prec(approx, actual):
> > >>      # Bits of relative precision in each coefficient of approx,
> > >>      # floored to whole bits (oo where the coefficient is exact).
> > >>      return [oo if a == b else floor(-log(abs(a-b)/abs(b), 2))
> > >>              for (a, b) in zip(approx, actual)]
>
> > >> from numpy.fft import rfft, irfft
> > >> def fft_mul(f, g):
> > >>      # n = number of coefficients in the product; pad both inputs
> > >>      # to length n, and pass n to irfft explicitly so that odd n
> > >>      # round-trips correctly (the default assumes an even length).
> > >>      n = f.degree() + g.degree() + 1
> > >>      fft_f = rfft(list(f) + [0] * (n - f.degree() - 1))
> > >>      fft_g = rfft(list(g) + [0] * (n - g.degree() - 1))
> > >>      return f.parent()(irfft(fft_f * fft_g, n).tolist())
>
> > >> sage: R.<x> = QQ[]
> > >> sage: f = exp(x+O(x^100)).polynomial()
> > >> sage: ff = f.change_ring(RR)
>
> > >> sage: sorted(rel_prec(ff*ff, f*f))
> > >> [52, 52, 52, 52, 52, 52, 52, 52, 52, 52, 52, 52, 52, 52, 52, 52, 52,
> > >> 52, +Infinity, +Infinity, +Infinity, +Infinity, +Infinity, +Infinity,
> > >> +Infinity, +Infinity, +Infinity, +Infinity, +Infinity, +Infinity,
> > >> +Infinity, +Infinity, +Infinity, +Infinity, +Infinity, +Infinity,
> > >> +Infinity, +Infinity, +Infinity]
>
> > >> sage: sorted(rel_prec(ff._mul_karatsuba(ff), f*f))
> > >> [7, 9, 10, 11, 12, 12, 12, 14, 15, 16, 20, 21, 21, 25, 25, 26, 26,  
> > >> 28,
> > >> 30, 30, 33, 35, 39, 40, 41, 41, 43, 44, 45, 45, 46, 50, 50, 50,
> > >> +Infinity, +Infinity, +Infinity, +Infinity, +Infinity]
>
> > >> sage: sorted(rel_prec(fft_mul(ff, ff), f*f))
> > >> [-61, -54, -46, -44, -42, -35, -32, -24, -24, -18, -13, -12, -8, -3,
> > >> -1, 0, 4, 7, 10, 13, 17, 24, 27, 29, 31, 33, 35, 37, 40, 43, 45, 48,
> > >> 48, 50, 51, 52, 52, 52, +Infinity]
>
> > >> Observations:
>
> > >> - Classical -- Not bad, 1 bit loss max.
> > >> - Karatsuba -- Things get pretty bad (especially towards the middle
> > >> of the product).
> > >> - FFT -- Yes, that's a negative amount of precision, i.e. off by
> > >> orders of magnitude, for half the coefficients. It just can't handle
> > >> a range of coefficient magnitudes greater than the working precision.
>
> > >> I don't have time to do extensive experiments or draw a conclusion,
> > >> but the code above lends itself easily to more investigations in this
> > >> direction.
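>
> > >> For instance, one more run worth trying (untested; c = 38 is
> > >> roughly (99!)^(1/99), which flattens the 1/n! coefficients about as
> > >> well as any single rescaling can):
>
> > >> sage: c = 38.0
> > >> sage: x = ff.parent().gen()
> > >> sage: sorted(rel_prec((ff(c*x)._mul_karatsuba(ff(c*x)))(x/c), f*f))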
>
> > >> - Robert
