Re: [sage-devel] Re: numerically stable fast univariate polynomial multiplication over RR[x]

2010-05-03 Thread Tim Daly
I have also found that it has the side-effect you mention. It makes debugging easier, if it is needed at all. Hopefully this will also be true of the person who ends up maintaining our code after we're gone. Thanks for the permission. Your quote appears on the documentation page of the axiom website ...

[sage-devel] Re: numerically stable fast univariate polynomial multiplication over RR[x]

2010-05-03 Thread rjf
Could I be agreeing with Tom? Well, sort of. If you are writing a program in the context of some on-going project, trying to improve the program that does (say) multiplication, then it is exactly relevant to compare your new program to the one you propose to replace. And if you have yet another idea ...

Re: [sage-devel] Re: numerically stable fast univariate polynomial multiplication over RR[x]

2010-05-03 Thread Ralf Hemmecke
On 05/04/2010 12:16 AM, Bill Hart wrote: [snip] > But I was surprised at how much difference it made to the debugging > time. I have had the same experience. Literate programming is not only beneficial for the people who read the literate program, but if the matter is complex enough, it's also good for ...

Re: [sage-devel] Re: numerically stable fast univariate polynomial multiplication over RR[x]

2010-05-03 Thread Tom Boothby
> I've always been confused about people who present benchmarks of their > own code against more of their own code, rather than to have the > courage to compare against other people's code. I think this can be useful in some contexts. It can "normalize away" the skill of the programmer and the amount ...

[sage-devel] Re: numerically stable fast univariate polynomial multiplication over RR[x]

2010-05-03 Thread Bill Hart
On May 3, 9:32 pm, rjf wrote: > Your comments are interesting: > 1. Theorists tend to reject all papers that merely demonstrate working > programs, > in favor of papers that have proofs.  That is what has made the ISSAC > conferences > so boring and low in attendance. Interesting observation. I

[sage-devel] Re: numerically stable fast univariate polynomial multiplication over RR[x]

2010-05-03 Thread Bill Hart
Ah you arrived right on cue. LOL! Ha ha, you can quote me if you want, but I have written a couple of literate programs in my life, so I'm hardly an expert. But I was surprised at how much difference it made to the debugging time. Bill. On May 3, 10:04 pm, Tim Daly wrote: > Bill Hart wrote: >

Re: [sage-devel] Re: numerically stable fast univariate polynomial multiplication over RR[x]

2010-05-03 Thread Tim Daly
Bill Hart wrote: That's actually a very interesting paper. I've recently been playing with Forth, which is a kind of "Lisp type language" (yeah I know you won't agree with that), based on a data stack. I also worked through a book on Lisp up to the point where macros were defined, as I wanted to ...

[sage-devel] Re: numerically stable fast univariate polynomial multiplication over RR[x]

2010-05-03 Thread rjf
Your comments are interesting: 1. Theorists tend to reject all papers that merely demonstrate working programs, in favor of papers that have proofs. That is what has made the ISSAC conferences so boring and low in attendance. 2. I don't have a separate implementation of the Schoenhage-Strassen FFT ...

[sage-devel] Re: numerically stable fast univariate polynomial multiplication over RR[x]

2010-05-03 Thread Bill Hart
I didn't see any mention in your paper of the Schoenhage-Strassen FFT for exact arithmetic. It would be faster than using CRT for a whole lot of multiplications modulo small primes. No number theorist would accept the results of "exact arithmetic" done using FFTW or other floating point FFTs unless ...
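
For readers who haven't seen it, the CRT approach Bill refers to multiplies the integer polynomials modulo several small primes and recombines the coefficients with the Chinese Remainder Theorem. A minimal Python sketch of that scheme (all names here are mine, for illustration; this is not FLINT code):

    def mul_mod(a, b, p):
        # classical multiplication of coefficient lists a, b modulo p
        res = [0] * (len(a) + len(b) - 1)
        for i, ai in enumerate(a):
            for j, bj in enumerate(b):
                res[i + j] = (res[i + j] + ai * bj) % p
        return res

    def crt_pair(r1, m1, r2, m2):
        # combine x = r1 mod m1 and x = r2 mod m2 into x mod m1*m2
        t = ((r2 - r1) * pow(m1, -1, m2)) % m2
        return r1 + m1 * t, m1 * m2

    def mul_via_crt(a, b, primes):
        # exact product of integer polynomials, assuming the product of
        # the primes exceeds twice the largest result coefficient
        images = [mul_mod(a, b, p) for p in primes]
        out = []
        for residues in zip(*images):
            r, m = residues[0], primes[0]
            for c, p in zip(residues[1:], primes[1:]):
                r, m = crt_pair(r, m, c, p)
            out.append(r - m if r > m // 2 else r)  # symmetric lift for signs
        return out

    print(mul_via_crt([1, 2, 3], [4, 5], [1000003, 1000033]))
    # -> [4, 13, 22, 15], i.e. (1 + 2x + 3x^2)(4 + 5x)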

[sage-devel] Re: numerically stable fast univariate polynomial multiplication over RR[x]

2010-05-03 Thread Bill Hart
That's actually a very interesting paper. I've recently been playing with Forth, which is a kind of "Lisp type language" (yeah I know you won't agree with that), based on a data stack. I also worked through a book on Lisp up to the point where macros were defined, as I wanted to understand how that

[sage-devel] Re: numerically stable fast univariate polynomial multiplication over RR[x]

2010-05-03 Thread Bill Hart
There is such a module in Sage. It's just not the one we are talking about here. It's actually not necessary to use Collins' arithmetic to get good speed. The algorithm I'm currently using can make use of a fast Kronecker Segmentation algorithm and will actually work *faster* than Collins' arithmetic ...

[sage-devel] Re: numerically stable fast univariate polynomial multiplication over RR[x]

2010-05-03 Thread Bill Hart
I finished rewriting and cleaning up my code. There is now a function mpfr_poly_mul(mpfr_poly_t res, mpfr_poly_t pol1, mpfr_poly_t pol2, ulong guard_bits). If pol1 and pol2 are computed to sufficient precision in the first place (i.e. a bit more than the precision you want in the end) and guard_bits ...
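
The guard_bits idea itself is generic: do the multiplication at somewhat higher precision than the target and round the coefficients back down at the end. A minimal Sage sketch of the same idea (my own function, not Bill's mpfr_poly code):

    def mul_with_guard(f, g, prec, guard_bits):
        # multiply at prec + guard_bits so the rounding noise stays
        # below the target precision, then round back down to prec
        Rbig = PolynomialRing(RealField(prec + guard_bits), 'x')
        Rsmall = PolynomialRing(RealField(prec), 'x')
        return Rsmall(Rbig(f) * Rbig(g))

    # e.g. with f, g over RealField(106): mul_with_guard(f, g, 106, 20)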

[sage-devel] Re: numerically stable fast univariate polynomial multiplication over RR[x]

2010-05-03 Thread rjf
If you are not doing floating point arithmetic with machine floats, but using MPFR, then you are sacrificing a huge amount of time. You might as well be using rational arithmetic, or the kind of arithmetic that Collins once proposed, where the denominator is a power of 2. Makes reducing to lowest terms ...

[sage-devel] Re: numerically stable fast univariate polynomial multiplication over RR[x]

2010-05-02 Thread Bill Hart
That should say arbitrary exponents, not arbitrary precision. On May 3, 2:36 am, Bill Hart wrote: > This thread is about multiple precision floating point arithmetic. > What have machine floats got to do with it? > > I'm using mpfr, which is what Sage uses. It has guaranteed rounding > for *arbitrary precision* ...

[sage-devel] Re: numerically stable fast univariate polynomial multiplication over RR[x]

2010-05-02 Thread Bill Hart
This thread is about multiple precision floating point arithmetic. What have machine floats got to do with it? I'm using mpfr, which is what Sage uses. It has guaranteed rounding for *arbitrary precision* floats with essentially arbitrary precision exponents (there is a limit of course). There's

[sage-devel] Re: numerically stable fast univariate polynomial multiplication over RR[x]

2010-05-02 Thread rjf
On May 2, 9:02 am, Bill Hart wrote: > On May 2, 4:14 pm, rjf wrote: > > > I repeat, > > > The interesting cases are obviously those which are not covered. > > Sorry, I don't know what you mean. Are you saying that by definition > they are interesting because they are not covered by Joris' algorithm ...

[sage-devel] Re: numerically stable fast univariate polynomial multiplication over RR[x]

2010-05-02 Thread Bill Hart
On May 2, 4:14 pm, rjf wrote: > I repeat, > > The interesting cases are obviously those which are not covered. Sorry, I don't know what you mean. Are you saying that by definition they are interesting because they are not covered by Joris' algorithm, whatever they may be? > > I don't know what your ...

[sage-devel] Re: numerically stable fast univariate polynomial multiplication over RR[x]

2010-05-02 Thread rjf
I repeat, The interesting cases are obviously those which are not covered. I don't know what your fix is, nor do I especially care, but I gather that, now, at least your "stable" word is meant to indicate something like a small bound on the maximum over all coefficients of the difference in relative ...

[sage-devel] Re: numerically stable fast univariate polynomial multiplication over RR[x]

2010-05-02 Thread Bill Hart
Yes, in fact, this method is definitely able to handle these cases. There is a subtlety in the implementation which I had not paid attention to. So yes, it precisely retrieves the answer to this problem. Next challenge? Bill. On May 2, 2:45 pm, Bill Hart wrote: > Hmm, very interestingly, the new method ...

[sage-devel] Re: numerically stable fast univariate polynomial multiplication over RR[x]

2010-05-02 Thread Bill Hart
Hmm, very interestingly, the new method *can* be made to output the correct answer for this problem. I'll have to think about whether doing this will screw anything else up. I think, in the class of problems it is designed to solve, the answer is no, it won't screw others up. Bill. On May 2, 2:1...

[sage-devel] Re: numerically stable fast univariate polynomial multiplication over RR[x]

2010-05-02 Thread Bill Hart
It doesn't quite handle this case. But as I'm sure you are aware, neither does the classical algorithm: sage: def rel_prec(approx, actual): return [oo if a==b else -log(abs(a-b)/abs(b), 2) for (a,b) in zip(approx, actual)] sage: R.<x> = QQ[] sage: f = R((x^100-10^100)/(x-10)) sage: g = x-10 sage: ...
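
The example is adversarial by construction: f*g = x^100 - 10^100 exactly, so all the middle coefficients of the product cancel to 0, and any rounding in those long sums survives as pure noise. A sketch of the comparison in Sage (the variable names are mine):

    R.<x> = QQ[]
    f = R((x^100 - 10^100)/(x - 10))   # coefficients 10^0, 10^1, ..., 10^99
    g = x - 10
    exact = f*g                         # equals x^100 - 10^100 exactly
    RF = RealField(106)
    S.<y> = RF[]
    approx = S([RF(c) for c in f.list()]) * S([RF(c) for c in g.list()])
    noise = approx - S([RF(c) for c in exact.list()])
    # the exact middle coefficients are all 0, so any nonzero entry
    # in `noise` is accumulated rounding error
    print(max(abs(c) for c in noise.list()))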

[sage-devel] Re: numerically stable fast univariate polynomial multiplication over RR[x]

2010-05-01 Thread rjf
On May 1, 5:42 pm, Bill Hart wrote: > ... > So now anything that grows regularly can be multiplied with basically > zero loss, asymptotically fast. That probably covers most of the > interesting cases anyhow. > > The interesting cases are obviously those which are not covered. They include the ...

[sage-devel] Re: numerically stable fast univariate polynomial multiplication over RR[x]

2010-05-01 Thread Bill Hart
I implemented the scaling thing. If both polynomials want different scalings then obviously just a simple scaling won't work, you have to use the same scaling for both and then unscale by the common factor at the end. But thanks to a trick in Joris' paper, you can deal with the case of two different ...

[sage-devel] Re: numerically stable fast univariate polynomial multiplication over RR[x]

2010-05-01 Thread Bill Hart
I agree that we shouldn't just drop in something that could be worse. But offering asymptotically fast multiplication with a carefully written docstring detailing when it would be useful isn't a bad idea. Anyhow, the best algorithm to implement would be Joris' algorithm. The scaling idea is only p...

Re: [sage-devel] Re: numerically stable fast univariate polynomial multiplication over RR[x]

2010-05-01 Thread Robert Bradshaw
On May 1, 2010, at 7:17 AM, Bill Hart wrote: This isn't really to do with it being a power series. If you simply work with random polynomials where the magnitude varies by 3 or 4 times the working precision, then you lose all meaningful data, unless you use classical multiplication. The two ca...

[sage-devel] Re: numerically stable fast univariate polynomial multiplication over RR[x]

2010-05-01 Thread Bill Hart
In fact, if you do exp(2*x) times exp(-x) you'll see the classical algorithm also sucks. Bill. On May 1, 4:12 pm, Bill Hart wrote: > Actually, in the situation where the magnitude of the terms varies > exponentially like this in a regular way, you can just evaluate your > power series at c*x for some constant c ...

[sage-devel] Re: numerically stable fast univariate polynomial multiplication over RR[x]

2010-05-01 Thread Bill Hart
Actually, in the situation where the magnitude of the terms varies exponentially like this in a regular way, you can just evaluate your power series at c*x for some constant c (e.g. 2 in the example you give), then do the multiplication asymptotically fast, then evaluate the result at x/c. If the ...
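
In code the trick is a one-liner either side of the fast multiplication. A Sage sketch (the function name and c are mine): since f(c*x)*g(c*x) = (f*g)(c*x), substituting first flattens the coefficients, and substituting back afterwards undoes the change exactly:

    def scaled_mul(f, g, c):
        x = f.parent().gen()
        h = f(c*x) * g(c*x)   # inputs now well scaled, so a fast
                              # algorithm loses little precision here
        return h(x/c)         # h(x/c) = (f*g)(x), undoing the scaling

The point is that the cancellation-prone intermediate sums are formed on well-scaled data; the wide dynamic range is reintroduced only coefficient by coefficient at the end.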

[sage-devel] Re: numerically stable fast univariate polynomial multiplication over RR[x]

2010-05-01 Thread Bill Hart
This isn't really to do with it being a power series. If you simply work with random polynomials where the magnitude varies by 3 or 4 times the working precision, then you lose all meaningful data, unless you use classical multiplication. But karatsuba and FFT are fine if you are prepared to work

Re: [sage-devel] Re: numerically stable fast univariate polynomial multiplication over RR[x]

2010-04-30 Thread Robert Bradshaw
On Apr 30, 2010, at 8:00 PM, Bill Hart wrote: Finally, the figures for karatsuba with signed coefficients that vary wildly. Hardly distinguishable from the figures for classical multiplication. With that I finish my little study. And in the inimitable words of Adam and Jamie (and with about as much ...

[sage-devel] Re: numerically stable fast univariate polynomial multiplication over RR[x]

2010-04-30 Thread Bill Hart
Finally, the figures for karatsuba with signed coefficients that vary wildly. Hardly distinguishable from the figures for classical multiplication. With that I finish my little study. And in the inimitable words of Adam and Jamie (and with about as much "science" to back it up): MYTH BUSTED!! So w...

[sage-devel] Re: numerically stable fast univariate polynomial multiplication over RR[x]

2010-04-30 Thread Bill Hart
Now karatsuba for signed coefficients: len = 1, min = 0, av = 0, max = 0, prec = 106 len = 2, min = 0, av = 1, max = 5, prec = 106 len = 3, min = 0, av = 1, max = 4, prec = 106 len = 4, min = 0, av = 0, max = 3, prec = 106 len = 6, min = 0, av = 0, max = 6, prec = 106 len = 8, min = 0, av = 1, max ...

[sage-devel] Re: numerically stable fast univariate polynomial multiplication over RR[x]

2010-04-30 Thread Bill Hart
And now for karatsuba. First for unsigned coefficients all about the same magnitude: len = 1, min = 0, av = 0, max = 0, prec = 106 len = 2, min = 0, av = 1, max = 2, prec = 106 len = 3, min = 0, av = 1, max = 3, prec = 106 len = 4, min = 0, av = 1, max = 3, prec = 106 len = 6, min = 0, av = 0, max ...

[sage-devel] Re: numerically stable fast univariate polynomial multiplication over RR[x]

2010-04-30 Thread Bill Hart
Now for toom cook with wildly varying signed coefficients. A little over twice the precision loss of the classical algorithm. Hardly what I'd call an unmitigated disaster. Certainly very usable. len = 1, min = 0, av = 18, max = 79, prec = 106 len = 2, min = 0, av = 27, max = 101, prec = 106 len = ...

[sage-devel] Re: numerically stable fast univariate polynomial multiplication over RR[x]

2010-04-30 Thread Bill Hart
Now toom cook with signed coefficients: len = 1, min = 0, av = 0, max = 0, prec = 106 len = 2, min = 0, av = 1, max = 5, prec = 106 len = 3, min = 0, av = 2, max = 7, prec = 106 len = 4, min = 0, av = 0, max = 3, prec = 106 len = 6, min = 0, av = 2, max = 6, prec = 106 len = 8, min = 0, av = 2, max ...

[sage-devel] Re: numerically stable fast univariate polynomial multiplication over RR[x]

2010-04-30 Thread Bill Hart
As promised, here with toom cook figures, first for unsigned coefficients all about the same magnitude: len = 1, min = 0, av = 0, max = 0, prec = 106 len = 2, min = 0, av = 1, max = 2, prec = 106 len = 3, min = 0, av = 2, max = 6, prec = 106 len = 4, min = 0, av = 1, max = 3, prec = 106 len = 6, min ...

[sage-devel] Re: numerically stable fast univariate polynomial multiplication over RR[x]

2010-04-30 Thread rjf
On Apr 30, 12:23 am, Robert Bradshaw wrote: > > > (RJF) Maxima was designed around exact arithmetic, and generally offers to > > convert floats to their corresponding exact rationals before doing > > anything > > that requires arithmetic. It makes no claims about floats per se, > > partly > > b...

[sage-devel] Re: numerically stable fast univariate polynomial multiplication over RR[x]

2010-04-30 Thread rjf
On Apr 30, 2:17 am, Bill Hart wrote: > Actually, I lie, slightly. I did find one instance of `numerical > stability' used in reference to the FFT, and that is on wikipedia (so > now we all know it must be true). Again, Accuracy and Stability of Numerical Algorithms by Nicholas J. Higham. I think ...

[sage-devel] Re: numerically stable fast univariate polynomial multiplication over RR[x]

2010-04-30 Thread rjf
On Apr 30, 1:57 am, Bill Hart wrote: > On Apr 30, 6:58 am, rjf wrote: > ... concept. What is it used for? I > can't imagine defining a GCD in this context as divisibility is an > exact phenomenon. Google for "approximate GCD". > > I hear the term numerical stability used quite a lot. The two contexts ...

[sage-devel] Re: numerically stable fast univariate polynomial multiplication over RR[x]

2010-04-30 Thread Bill Hart
And finally, the figures for Rader-Brenner for signed coefficients whose magnitudes also vary wildly: as with FHT, on average, no usable information results. No FFT algorithm will be of use in that situation. Joris vdH's algorithm is your only hope. After dinner, figures for Karatsuba and Toom Cook ...

[sage-devel] Re: numerically stable fast univariate polynomial multiplication over RR[x]

2010-04-30 Thread Bill Hart
Here are the figures for Rader-Brenner for signed coefficients. On average, the precision loss is radically worse in this case than for Fast Hartley, but still quite usable: len = 1, min = 0, av = 0, max = 1, prec = 106 len = 2, min = 0, av = 1, max = 5, prec = 106 len = 3, min = 0, av = 2, max = 12, ...

[sage-devel] Re: numerically stable fast univariate polynomial multiplication over RR[x]

2010-04-30 Thread Bill Hart
I installed Andreas Enge's mpfrcx package: http://www.multiprecision.org/index.php?prog=mpfrcx&page=html which just worked for me. I took a closer look and the Rader-Brenner FFT is a complex FFT taking mpcx polynomials as input. When multiplying polynomials over the reals, it simply converts to ...

[sage-devel] Re: numerically stable fast univariate polynomial multiplication over RR[x]

2010-04-30 Thread luisfe
> > Approximate GCD? That's a curious concept. What is it used for? I > can't imagine defining a GCD in this context as divisibility is an > exact phenomenon. For example, in an inverse parametrization problem. Suppose that you have a rational curve given by a parametrization with float coefficients ...

[sage-devel] Re: numerically stable fast univariate polynomial multiplication over RR[x]

2010-04-30 Thread Bill Hart
There is the following paper on numerically stable polynomial multiplication by Joris van der Hoeven: http://www.texmacs.org/joris/stablemult/stablemult-abs.html But please note carefully the note on that page about an error in bound 3. Joris writes: "Basically, when doing the multiplication (...

[sage-devel] Re: numerically stable fast univariate polynomial multiplication over RR[x]

2010-04-30 Thread Bill Hart
Actually, I lie, slightly. I did find one instance of `numerical stability' used in reference to the FFT, and that is on wikipedia (so now we all know it must be true). There I presume the context is signal processing rather than polynomial multiplication. I did track down one article which specifi...

[sage-devel] Re: numerically stable fast univariate polynomial multiplication over RR[x]

2010-04-30 Thread Bill Hart
On Apr 30, 6:58 am, rjf wrote: > On Apr 29, 10:58 am, Robert Bradshaw > wrote: > > > On Apr 29, 2010, at 8:30 AM, rjf wrote: > > > > (RJF) Again, I see no definition of what you mean by accuracy in the result > > > of polynomial multiplication. The easiest position to take is that of > > > MPFR--...

Re: [sage-devel] Re: numerically stable fast univariate polynomial multiplication over RR[x]

2010-04-30 Thread Robert Bradshaw
On Apr 29, 2010, at 10:58 PM, rjf wrote: On Apr 29, 10:58 am, Robert Bradshaw wrote: On Apr 29, 2010, at 8:30 AM, rjf wrote: (RJF) Again, I see no definition of what you mean by accuracy in the result of polynomial multiplication. The easiest position to take is that of MPFR-- considering ...

[sage-devel] Re: numerically stable fast univariate polynomial multiplication over RR[x]

2010-04-29 Thread rjf
On Apr 29, 10:58 am, Robert Bradshaw wrote: > On Apr 29, 2010, at 8:30 AM, rjf wrote: > > > (RJF) Again, I see no definition of what you mean by accuracy in the result > > of polynomial multiplication. The easiest position to take is that of MPFR-- > considering the inputs as exact rationals, how ...

[sage-devel] Re: numerically stable fast univariate polynomial multiplication over RR[x]

2010-04-29 Thread Bill Hart
Indeed, if one doubles the precision before calling the FHT multiplication routine (which will not affect the time terribly much) then one gets stability about the same as classical multiplication. Tomorrow I'll look at the multiplication algorithms in MPFRCX and see how their stability behaves.

[sage-devel] Re: numerically stable fast univariate polynomial multiplication over RR[x]

2010-04-29 Thread Bill Hart
If the coefficients are allowed to vary in sign and vary wildly in the number of bits, then nothing will save you. Here are the figures for classical multiplication: len = 1, min = 0, av = 18, max = 79, prec = 106 len = 2, min = 0, av = 23, max = 101, prec = 106 len = 3, min = 0, av = 19, max = 98

[sage-devel] Re: numerically stable fast univariate polynomial multiplication over RR[x]

2010-04-29 Thread Bill Hart
Here are the figures for signed coefficients. First for classical multiplication: len = 1, min = 0, av = 0, max = 0, prec = 106 len = 2, min = 0, av = 0, max = 3, prec = 106 len = 3, min = 0, av = 0, max = 2, prec = 106 len = 4, min = 0, av = 0, max = 1, prec = 106 len = 6, min = 0, av = 0, max = ...

[sage-devel] Re: numerically stable fast univariate polynomial multiplication over RR[x]

2010-04-29 Thread Bill Hart
I wrote a little program to compute the "exact" product of two polynomials with uniformly random unsigned coefficients < 2^prec for some precision prec. I then compute the number of bits that are correct in each coefficient and subtract from the precision. That gives a measure of how many bits are lost ...
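
A rough Python rendition of that measurement (my reconstruction of the idea; Bill's program is C on top of MPFR, and this version uses doubles, i.e. prec = 53, rather than 106-bit reals):

    import math, random

    def bits_lost(length, prec=53):
        # random polynomials with unsigned coefficients < 2^prec
        a = [random.getrandbits(prec) for _ in range(length)]
        b = [random.getrandbits(prec) for _ in range(length)]
        exact = [0] * (2*length - 1)
        approx = [0.0] * (2*length - 1)
        for i, ai in enumerate(a):
            for j, bj in enumerate(b):
                exact[i + j] += ai * bj                  # exact integer product
                approx[i + j] += float(ai) * float(bj)   # rounded float product
        worst = 0.0
        for e, ap in zip(exact, approx):
            if e == 0:
                continue
            relerr = abs(ap - e) / e
            correct = prec if relerr == 0 else min(prec, -math.log2(relerr))
            worst = max(worst, prec - correct)
        return worst   # max number of bits lost over all coefficients

    print(bits_lost(64))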

[sage-devel] Re: numerically stable fast univariate polynomial multiplication over RR[x]

2010-04-29 Thread Bill Hart
Should this happen? : sage: f._mul_fateman(f) ERROR: An unexpected error occurred while tokenizing input The following traceback may be corrupted or invalid The error message is: ('EOF in multi-line statement', (22841, 0)) ERROR: An unexpected error occurred while tokenizing input The following t...

[sage-devel] Re: numerically stable fast univariate polynomial multiplication over RR[x]

2010-04-29 Thread Bill Hart
Being a little more precise about it. Here are the Sage timings performed on the same machine as I did the FLINT timings: sage: f=(x+1)^1000 sage: timeit("f*f") 5 loops, best of 3: 87.5 ms per loop sage: timeit("f._mul_karatsuba(f)") 5 loops, best of 3: 370 ms per loop flint2 (with FHT): 5.5 ms

Re: [sage-devel] Re: numerically stable fast univariate polynomial multiplication over RR[x]

2010-04-29 Thread Robert Bradshaw
On Apr 29, 2010, at 8:30 AM, rjf wrote: Again, I see no definition of what you mean by accuracy in the result of polynomial multiplication. If you look at the actual algebraic formula for any given coefficient in the result, you see it is the sum of products of coefficients from the inputs, and

[sage-devel] Re: numerically stable fast univariate polynomial multiplication over RR[x]

2010-04-29 Thread Bill Hart
On Apr 29, 4:30 pm, rjf wrote: > Again, I see no definition of what you mean by accuracy in the result > of polynomial multiplication. I'm pretty naive and unknowledgeable about such things, but I don't see why you need a definition. My reading of the original context of the thread is that the

[sage-devel] Re: numerically stable fast univariate polynomial multiplication over RR[x]

2010-04-29 Thread Bill Hart
I made the speedup I mentioned to the Fast Hartley Transform code in flint2. Again I pick on the benchmarks Robert gave: > sage: f = (x+1)^1000 > sage: timeit("f*f") > 5 loops, best of 3: 142 ms per loop > sage: timeit("f._mul_karatsuba(f)") > 5 loops, best of 3: 655 ms per loop That time is down ...

[sage-devel] Re: numerically stable fast univariate polynomial multiplication over RR[x]

2010-04-29 Thread rjf
Again, I see no definition of what you mean by accuracy in the result of polynomial multiplication. If you look at the actual algebraic formula for any given coefficient in the result, you see it is the sum of products of coefficients from the inputs, and the possibility of cancellation depends on
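
A two-term instance of that cancellation (the numbers are mine, chosen to be adversarial): the degree-1 coefficient of (a0 + a1*x)*(b0 + b1*x) is a0*b1 + a1*b0, and a single rounded product can wipe out every correct bit of it. In Python doubles:

    # all four inputs are exact doubles; the only rounding is in the
    # product a0*b1 (9999999999999999 rounds to 1e16), yet the exact
    # coefficient a0*b1 + a1*b0 = -1 comes out as 0.0
    a0, a1 = 1e8 + 1, 1e8
    b0, b1 = -1e8, 1e8 - 1
    print(a0*b1 + a1*b0)   # prints 0.0; the exact value is -1.0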

[sage-devel] Re: numerically stable fast univariate polynomial multiplication over RR[x]

2010-04-28 Thread Bill Hart
I coded up a basic classical multiplication (no attempt whatsoever to prevent cancellation). Fortunately the result of the classical and Hartley multiplication routines agree (up to some precision loss). I picked on the timings Robert gave: sage: f = (x+1)^1000 sage: timeit("f*f") 5 loops, best of ...

[sage-devel] Re: numerically stable fast univariate polynomial multiplication over RR[x]

2010-04-28 Thread Bill Hart
Rader-Brenner is less stable because it uses purely imaginary twiddle factors whose absolute value is considerably bigger than 1. It has been largely discarded in favour of the split-radix method which uses fewer real multiplications and additions anyhow. It's very interesting to see an implementation ...

[sage-devel] Re: numerically stable fast univariate polynomial multiplication over RR[x]

2010-04-28 Thread Bill Hart
OK, the reason the Rader-Brenner FFT is unstable is that it uses purely imaginary twiddle factors with absolute value significantly greater than 1 when n is large. This causes instability. There are apparently other FFTs which use fewer real multiplications and are much better behaved. Bill. On Apr 2...

[sage-devel] Re: numerically stable fast univariate polynomial multiplication over RR[x]

2010-04-28 Thread Bill Hart
A very nice looking library, but it uses Karatsuba and Toom Cook, the very things we decided were unstable. I also read that the Rader-Brenner FFT is very unstable. Bill. On Apr 28, 2:14 pm, YannLC wrote: > This might be of interest: > > from the people of MPFR, > > "Mpfrcx is a library for the arithmetic ...

[sage-devel] Re: numerically stable fast univariate polynomial multiplication over RR[x]

2010-04-28 Thread YannLC
This might be of interest: from the people of MPFR, "Mpfrcx is a library for the arithmetic of univariate polynomials over arbitrary precision real (Mpfr) or complex (Mpc) numbers, without control on the rounding. For the time being, only the few functions needed to implement the floating point approach ...

[sage-devel] Re: numerically stable fast univariate polynomial multiplication over RR[x]

2010-04-27 Thread Bill Hart
You can access the code here: http://selmer.warwick.ac.uk/gitweb/flint2.git?a=shortlog;h=refs/heads/FHT You need the latest MPIR release candidate, the latest MPFR and either an x86 or x86_64 machine to run it. Set the lib and include paths in the top level makefile, set the LD_LIBRARY_PATH's in ...

[sage-devel] Re: numerically stable fast univariate polynomial multiplication over RR[x]

2010-04-27 Thread Bill Hart
Well I got the polynomial convolution working with the Fast Hartley Transform. It seems to pass my primitive test code. As an example of a first timing, 1000 iterations of multiplying polynomials of length 512 with 100 floating point bits precision per coefficient takes 17s. I think I can make it

[sage-devel] Re: numerically stable fast univariate polynomial multiplication over RR[x]

2010-04-27 Thread Bill Hart
Well, I coded up a mini mpfr_poly module and Fast Hartley Transform in flint2 using mpfr's as coefficients. It's hard to know if it is doing the right thing; there's no test code yet. But it compiles and doesn't segfault. :-) A little bit more work required to turn it into a convolution, but as far ...

[sage-devel] Re: numerically stable fast univariate polynomial multiplication over RR[x]

2010-04-27 Thread Bill Hart
Also see pages 252 and following of Polynomial and Matrix Computations, Volume 1 by Dario Bini and Victor Pan, which seems to answer your question in detail. Bill. On Apr 27, 4:57 pm, Bill Hart wrote: > Numerical stability is not something I have any experience with. I am > not sure if it is equ...

Re: [sage-devel] Re: numerically stable fast univariate polynomial multiplication over RR[x]

2010-04-27 Thread Robert Bradshaw
On Apr 27, 2010, at 6:45 AM, Jason Grout wrote: On 04/26/2010 10:54 PM, Robert Bradshaw wrote: I should comment on this, as I wrote the code and comments in question. There actually is a fair amount of research out there on stable multiplication of polynomials over the real numbers, but (if I remember right, ...

[sage-devel] Re: numerically stable fast univariate polynomial multiplication over RR[x]

2010-04-27 Thread Bill Hart
Numerical stability is not something I have any experience with. I am not sure if it is equivalent in some sense to the loss of precision which occurs when doing FFT arithmetic using a floating point FFT. The issue seems to be accumulation of "numerical noise". There are proven bounds on how many ...

[sage-devel] Re: numerically stable fast univariate polynomial multiplication over RR[x]

2010-04-27 Thread Jason Grout
On 04/26/2010 10:54 PM, Robert Bradshaw wrote: I should comment on this, as I wrote the code and comments in question. There actually is a fair amount of research out there on stable multiplication of polynomials over the real numbers, but (if I remember right, it was a while ago) there were some ...