Well, I coded up a mini mpfr_poly module and a Fast Hartley Transform
in flint2, using mpfr reals as coefficients.

It's hard to know whether it is doing the right thing, as there's no
test code yet. But it compiles and doesn't segfault. :-)
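In lieu of proper test code, the self-inverse property at least is
easy to sanity check numerically. A minimal sketch in Python, with a
naive O(N^2) DHT standing in for the actual FHT over MPFR
coefficients (the dht helper name is just illustrative):

    import math

    def dht(a):
        # Discrete Hartley transform, naive O(N^2) version:
        #   H[k] = sum_n a[n] * cas(2*pi*n*k/N),  cas(t) = cos(t) + sin(t)
        N = len(a)
        return [sum(a[n] * (math.cos(2 * math.pi * n * k / N)
                            + math.sin(2 * math.pi * n * k / N))
                    for n in range(N))
                for k in range(N)]

    # Applying the transform twice returns N times the input, i.e. the
    # DHT is its own inverse up to a factor of N.
    a = [1.0, 2.0, 3.0, 4.0]
    back = [x / len(a) for x in dht(dht(a))]
    assert all(abs(x - y) < 1e-9 for x, y in zip(a, back))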

A little more work is required to turn it into a convolution, but as
far as I can tell no separate inverse transform is needed, as the
Hartley transform is its own inverse (up to a scaling factor of 1/N).
Maybe by tomorrow I'll have a fast convolution and we'll be able to
answer some of these questions heuristically.
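The remaining work is essentially the pointwise combination step
between the two forward transforms; the Hartley analogue of the
pointwise product is slightly more involved than the Fourier one. A
sketch continuing from the dht helper above (plain doubles again,
function name illustrative):

    def cyclic_convolution(a, b):
        # DHT convolution theorem: with A = DHT(a), B = DHT(b),
        #   C[k] = (A[k]*(B[k] + B[N-k]) + A[N-k]*(B[k] - B[N-k])) / 2
        # (indices mod N), and then c = DHT(C)/N, using the self-inverse
        # property in place of a separate inverse transform.
        N = len(a)
        A, B = dht(a), dht(b)
        C = [(A[k] * (B[k] + B[-k % N])
              + A[-k % N] * (B[k] - B[-k % N])) / 2
             for k in range(N)]
        return [x / N for x in dht(C)]

    # Zero-padding to length >= deg(f) + deg(g) + 1 turns the cyclic
    # convolution into polynomial multiplication, e.g.
    # (1 + 2x + 3x^2)(4 + 5x + 6x^2) = 4 + 13x + 28x^2 + 27x^3 + 18x^4:
    print(cyclic_convolution([1, 2, 3, 0, 0, 0, 0, 0],
                             [4, 5, 6, 0, 0, 0, 0, 0]))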

It won't be super efficient: no truncation, no radix 4, no special
case for double precision coefficients, no cache-friendly matrix
Fourier stuff, etc. But I did reuse the twiddle factors as much as
possible, I think.
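One way to get that reuse, assuming a radix-2 decomposition, is to
share a single cas table across all recursion levels. A rough sketch
(cas_table is a made-up name, and the flint2 code would hold MPFR
values rather than doubles):

    def cas_table(N):
        # Precompute cas(2*pi*j/N) once at the top-level size; a
        # sub-transform of size N/2^l can then read the same table with
        # stride 2^l instead of recomputing sines and cosines.
        return [math.cos(2 * math.pi * j / N)
                + math.sin(2 * math.pi * j / N)
                for j in range(N)]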

Bill.

On Apr 27, 5:18 pm, Bill Hart <goodwillh...@googlemail.com> wrote:
> Also see pages 252 and following of Polynomial and Matrix
> Computations, Volume 1 by Dario Bini and Victor Pan, which seems to
> answer your question in detail.
>
> Bill.
>
> On Apr 27, 4:57 pm, Bill Hart <goodwillh...@googlemail.com> wrote:
>
> > Numerical stability is not something I have any experience with. I am
> > not sure whether it is equivalent, in some sense, to the loss of
> > precision which occurs when doing arithmetic via a floating point FFT.
>
> > The issue seems to be accumulation of "numerical noise". There are
> > proven bounds on how many bits are lost to this as the computation
> > proceeds, but the general consensus has been that for "random
> > coefficients" the numerical noise is much, much less than the proven
> > bounds would indicate.
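That consensus is easy to probe heuristically: convolve random integer
sequences with the floating point DHT sketches above and compare
against the exact integer result. The seed, length and coefficient
range below are arbitrary choices:

    import random

    def exact_cyclic(a, b):
        # Exact integer cyclic convolution, for reference.
        N = len(a)
        return [sum(a[i] * b[(k - i) % N] for i in range(N))
                for k in range(N)]

    random.seed(0)
    N = 64
    a = [random.randrange(-1000, 1000) for _ in range(N)]
    b = [random.randrange(-1000, 1000) for _ in range(N)]
    err = max(abs(x - y)
              for x, y in zip(cyclic_convolution(a, b), exact_cyclic(a, b)))
    print(err)  # typically a few ULPs of the output, far below worst case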
>
> > Zimmermann, Gaudry and Kruppa suggest that the following paper contains
> > proven bounds for the floating point FFT:
>
> > Percival, C. Rapid multiplication modulo the sum and difference of
> > highly composite numbers. Math. Comp. 72, 241 (2003), 387–395.
>
> > Paul Zimmermann would almost certainly know what the current state of
> > the art is.
>
> > Similar stability issues arise in matrix arithmetic with floating
> > point coefficients. It seems that one can always prove something when
> > one knows something about the successive minima, but I am not too sure
> > what the equivalent for an FFT would be. You might be in the realm of
> > open research rather than something that is known; i.e., it might just
> > be conventional wisdom that the FFT behaves well.
>
> > The people to ask about this are definitely Joris van der Hoeven and
> > Andreas Enge. If you find a very nice algorithm, let me know and we'll
> > implement it in flint2, which should have a type for polynomials with
> > floating point coefficients (using MPFR).
>
> > Bill.
>
> > On Apr 27, 2:45 pm, Jason Grout <jason-s...@creativetrax.com> wrote:
>
> > > On 04/26/2010 10:54 PM, Robert Bradshaw wrote:
>
> > > > I should comment on this, as I wrote the code and comments in question.
> > > > There actually is a fair amount of research out there on stable
> > > > multiplication of polynomials over the real numbers, but (if I remember
> > > > right, it was a while ago) there were some results to the effect that no
> > > > asymptotically fast algorithm has good stability properties over all
> > > > possible coefficients.
>
> > > If you have a moment and if you remember, could you point out a
> > > reference or two?  I searched for several hours before posting, and
> > > couldn't find anything that seemed to address the question of stability
> > > satisfactorily.
>
> > > > Of course, in practice one often runs into "well
> > > behaved" polynomials in R[x]. (In particular, most of the interest is,
> > > > understandably, about truncated power series, though there are other
> > > > uses.) The obvious thing to do is a (non-discrete) FFT. What I was
> > > > thinking about implementing (though I never got back to it) is something
> > > > that works like this:
>
> > > 1) Choose some alpha and rescale f(x) -> f(alpha*x) so all coefficients
> > > have roughly the same size.
> > > 2) Multiply the resulting polynomials over Z using the insanely fast code
> > > in FLINT. The resulting answer will be the exact product of the truncated
> > > input, and the precision loss (conversely, the required precision for no
> > > loss) was fairly easy to work out, if I remember right.
> > > 3) Cast the result back into R[x].
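A minimal sketch of that rescale-and-lift idea, with schoolbook
integer multiplication standing in for FLINT's fast Z[x] product,
alpha taken as given (choosing it well being the interesting part),
and a made-up helper name:

    def mul_via_Z(f, g, alpha=1.0, prec=53):
        # 1) substitute x -> alpha*x and scale each coefficient up by
        #    2^prec, rounding to an integer
        fs = [round(c * alpha**i * 2**prec) for i, c in enumerate(f)]
        gs = [round(c * alpha**i * 2**prec) for i, c in enumerate(g)]
        # 2) exact product over Z (schoolbook here, FLINT in practice)
        hs = [0] * (len(fs) + len(gs) - 1)
        for i, a in enumerate(fs):
            for j, b in enumerate(gs):
                hs[i + j] += a * b
        # 3) cast back into R[x]: undo the 2^prec scaling and, since
        #    (f*g)(alpha*x) picks up alpha^k in coefficient k, divide
        #    that back out as well
        return [h / 2**(2 * prec) / alpha**k for k, h in enumerate(hs)]

    # e.g. mul_via_Z([1.0, 0.5], [2.0, 4.0]) returns [2.0, 5.0, 2.0],
    # i.e. (1 + 0.5x)(2 + 4x) = 2 + 5x + 2x^2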
>
> > > Thanks!
>
> > > Jason

-- 
To post to this group, send an email to sage-devel@googlegroups.com
To unsubscribe from this group, send an email to 
sage-devel+unsubscr...@googlegroups.com
For more options, visit this group at http://groups.google.com/group/sage-devel
URL: http://www.sagemath.org
