[sage-devel] Re: Fwd: numerically stable fast univariate polynomial multiplication over RR[x]

2010-05-01 Thread Dima Pasechnik
On Apr 28, 4:44 am, Bill Hart  wrote:
> On Apr 27, 8:55 pm, rjf  wrote:
>
> > Oh, just another note.
>
> > There are people who have built whole careers on devising
> > asymptotically fast algorithms which have never been implemented,
> > or which (if they have been implemented) are not fast, because
> > their overly simplified analysis does not fit the efficiency
> > profile of currently available computers. Indeed, their analysis
> > may not fit any past profile of computers either; their algorithms
> > were never practical.
>
> Interesting observation. Fürer's algorithm is not practical, and
> neither are some matrix multiplication algorithms. Do you have any
> other examples in mind?
>

The fastest algorithms in semi-algebraic geometry (see, e.g., the book
by S. Basu, R. Pollack, and M.-F. Roy), such as quantifier elimination
over the reals, fall under this too. It seems that nobody has found a
way to implement the symbolic infinitesimal deformations needed there
efficiently.

Dima

>
>
> > Note, for example, that memory access is far more expensive than
> > arithmetic, and that
> > many computers now can multiply or add in comparable time; and they
> > have several multipliers. Etc.
>
> I would say that memory access is far more expensive than addition,
> but not arithmetic in general. FFT multiplication is n log n
> (sometimes with a log log n, depending on what you are multiplying)
> and I would say the memory access perhaps adds a hidden log n factor.
>
> Can you perhaps be more specific as to what you are hinting at?
>
>
>
> > RJF
>
>



[sage-devel] Re: Fwd: numerically stable fast univariate polynomial multiplication over RR[x]

2010-04-27 Thread Bill Hart
Certainly if *all* your memory accesses take time log n then you are
in trouble. But if your algorithm is cache friendly, it should take
time O(n log n) to access memory overall.

So I agree with what you say. Your implementation must be cache
friendly.

Bill.

On Apr 28, 12:11 am, Tim Daly  wrote:
> Bill Hart wrote:
> > On Apr 27, 8:55 pm, rjf  wrote:
>
> >> Oh, just another note.
>
> >> There are people who have built whole careers on devising
> >> asymptotically fast algorithms which have never been implemented,
> >> or which (if they have been implemented) are not fast, because
> >> their overly simplified analysis does not fit the efficiency
> >> profile of currently available computers. Indeed, their analysis
> >> may not fit any past profile of computers either; their algorithms
> >> were never practical.
>
> > Interesting observation. Fürer's algorithm is not practical, and
> > neither are some matrix multiplication algorithms. Do you have any
> > other examples in mind?
>
> >> Note, for example, that memory access is far more expensive than
> >> arithmetic, and that
> >> many computers now can multiply or add in comparable time; and they
> >> have several multipliers. Etc.
>
> > I would say that memory access is far more expensive than addition,
> > but not arithmetic in general. FFT multiplication is n log n
> > (sometimes with a log log n, depending on what you are multiplying)
> > and I would say the memory access perhaps adds a hidden log n factor.
>
> > Can you perhaps be more specific as to what you are hinting at?
>
> On current microprocessors a multiply can be done in a few clock ticks
> (probably one
> apparent clock tick if the in-processor pipelines are fully overlapped).
> An L1 cache access
> is about 3 clock ticks, an L2 can be dozens, and a memory access can be
> several hundred
> or more (depending on DDR/2/3/..., inter-processor MESI cache, prefetch
> instructions, etc.)
>
> By analogy, it is more expensive to peel an egg than to slice a carrot
> but the time is buried
> under the time it takes to get one out of the fridge (L1), the
> supermarket (L2), the farm
> (main memory) or grow a new one (disk).
>
> So if an FFT is optimized for, say, 128-byte cache lines with minimal
> cache collisions, then the multiply times will matter. But change the
> FFT to ignore cache line sizes, so that each fetch breaks a cache
> line, and the multiply times become completely irrelevant.
>
>
>
> >> RJF
>
>



Re: [sage-devel] Re: Fwd: numerically stable fast univariate polynomial multiplication over RR[x]

2010-04-27 Thread Tim Daly



Bill Hart wrote:

On Apr 27, 8:55 pm, rjf  wrote:
  

Oh, just another note.

There are people who have built whole careers on devising
asymptotically fast algorithms which have never been implemented, or
which (if they have been implemented) are not fast, because their
overly simplified analysis does not fit the efficiency profile of
currently available computers. Indeed, their analysis may not fit any
past profile of computers either; their algorithms were never
practical.



Interesting observation. Fürer's algorithm is not practical, and
neither are some matrix multiplication algorithms. Do you have any
other examples in mind?

  

Note, for example, that memory access is far more expensive than
arithmetic, and that
many computers now can multiply or add in comparable time; and they
have several multipliers. Etc.



I would say that memory access is far more expensive than addition,
but not arithmetic in general. FFT multiplication is n log n
(sometimes with a log log n, depending on what you are multiplying)
and I would say the memory access perhaps adds a hidden log n factor.

Can you perhaps be more specific as to what you are hinting at?
  
On current microprocessors a multiply can be done in a few clock ticks 
(probably one
apparent clock tick if the in-processor pipelines are fully overlapped). 
An L1 cache access
is about 3 clock ticks, an L2 can be dozens, and a memory access can be 
several hundred
or more (depending on DDR/2/3/..., inter-processor MESI cache, prefetch 
instructions, etc.)


By analogy, it is more expensive to peel an egg than to slice a carrot 
but the time is buried
under the time it takes to get one out of the fridge (L1), the 
supermarket (L2), the farm

(main memory) or grow a new one (disk).

So if an FFT is optimized for, say, 128-byte cache lines with minimal
cache collisions, then the multiply times will matter. But change the
FFT to ignore cache line sizes, so that each fetch breaks a cache
line, and the multiply times become completely irrelevant.
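
For a rough, machine-dependent illustration of that memory effect,
here is a sketch in Python with numpy (my example, not anything from
Sage; the absolute timings are an assumption and vary with the CPU and
cache sizes):

import numpy as np
import time

n = 1 << 24                    # ~16M doubles, much larger than L2/L3
a = np.ones(n)

t0 = time.perf_counter()
s_seq = a.sum()                # contiguous: streams whole cache lines
t1 = time.perf_counter()
s_str = a[::16].sum()          # stride of 16 doubles = 128 bytes:
                               # roughly one new cache line per element
t2 = time.perf_counter()

print("contiguous:", t1 - t0)
print("strided:   ", t2 - t1)
# The strided pass touches 1/16 of the elements, yet is usually nowhere
# near 16x faster -- the cost is dominated by cache-line fetches, not
# by the additions.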
  

RJF



[sage-devel] Re: Fwd: numerically stable fast univariate polynomial multiplication over RR[x]

2010-04-27 Thread Bill Hart


On Apr 27, 8:55 pm, rjf  wrote:
> Oh, just another note.
>
> There are people who have built whole careers on devising
> asymptotically fast algorithms which have never been implemented, or
> which (if they have been implemented) are not fast, because their
> overly simplified analysis does not fit the efficiency profile of
> currently available computers. Indeed, their analysis may not fit any
> past profile of computers either; their algorithms were never
> practical.

Interesting observation. Fürer's algorithm is not practical, and
neither are some matrix multiplication algorithms. Do you have any
other examples in mind?

>
> Note, for example, that memory access is far more expensive than
> arithmetic, and that
> many computers now can multiply or add in comparable time; and they
> have several multipliers. Etc.

I would say that memory access is far more expensive than addition,
but not arithmetic in general. FFT multiplication is n log n
(sometimes with a log log n, depending on what you are multiplying)
and I would say the memory access perhaps adds a hidden log n factor.

Can you perhaps be more specific as to what you are hinting at?

>
> RJF
>



[sage-devel] Re: Fwd: numerically stable fast univariate polynomial multiplication over RR[x]

2010-04-27 Thread rjf
Oh, just another note.

There are people who have built whole careers on devising
asymptotically fast algorithms which have never been implemented, or
which (if they have been implemented) are not fast, because their
overly simplified analysis does not fit the efficiency profile of
currently available computers. Indeed, their analysis may not fit any
past profile of computers either; their algorithms were never
practical.

Note, for example, that memory access is far more expensive than
arithmetic, and that
many computers now can multiply or add in comparable time; and they
have several multipliers. Etc.

RJF



[sage-devel] Re: Fwd: numerically stable fast univariate polynomial multiplication over RR[x]

2010-04-27 Thread rjf


On Apr 27, 8:43 am, Bill Hart  wrote:
> That's called Kronecker Substitution (or Segmentation), not Fateman
> multiplication.

.. so you can imagine MY confusion..

Since it is an algorithm for multiplying polynomials over ZZ, it
doesn't seem relevant.


It's probably not fair to blame Kronecker for this, but who cares.

And besides, the notion of accurate results has not been defined for
the purposes here. I gave an example of 10^200*x + Interval[-1,1]...
which is either accurate or not.

As for the stability of the FFT, you should be able to find literature
on this. It is probably better than the n^2 algorithm, but it
"depends".



[sage-devel] Re: Fwd: numerically stable fast univariate polynomial multiplication over RR[x]

2010-04-27 Thread Bill Hart
That's called Kronecker Substitution (or Segmentation), not Fateman
multiplication.

On Apr 27, 2:38 pm, Jason Grout  wrote:
> On 04/27/2010 12:38 AM, William Stein wrote:
>
> > RJF asks me to forward this, since it bounced for him, evidently.
>
> Thanks.
>
> > -- Forwarded message --
> > From: Richard Fateman
> > Date: Mon, Apr 26, 2010 at 10:15 PM
> > Subject: Re: [sage-devel] numerically stable fast univariate
> > polynomial multiplication over RR[x]
> > To: William Stein
> > Cc: sage-devel@googlegroups.com, rjf
>
> > William Stein wrote:
>
> >> On Mon, Apr 26, 2010 at 8:00 PM, Jason Grout
> >>   wrote:
>
> >>> When multiplying two univariate polynomials over RR[x], Sage uses
> >>> the naive O(n^2) algorithm. The docstring for _mul_ cites numerical
> >>> stability as the reason to not use asymptotically faster algorithms:
>
> >>> "Here we use the naive `O(n^2)` algorithm, as asymptotically faster
> >>> algorithms such as Karatsuba can have very inaccurate results due to
> >>> intermediate rounding errors. "
>
> > If you are doing arithmetic over the reals (RR?) then there is no
> > rounding error. But you apparently are doing
> > arithmetic in some approximation to the reals-- presumably doing n-bit
> > floating-point arithmetic for
> > some particular n.
>
> Yes, that's what I had in mind.  Specifically double-precision, but more
> generally for any specific fixed n.
>
>
>
> > If you want to get the right answer, then convert all the numbers
> > to exact rationals in QQ. Multiply and then convert back. Or
> > multiply all the coefficients by 2^k so they are all integers, and
> > do the arithmetic in ZZ.
>
> Thanks.
>
>
>
> > You can also multiply (approximately) by a fast Fourier transform
> > convolution, which is reputedly rather stable.
> > The stability may in fact directly address the notion of accuracy, at
> > least if you are talking about dense polynomials.
>
> Okay, that was one of the implicit subquestions---is FFT polynomial
> multiplication stable (in a fixed precision that does not change)?
> (Yes, I was thinking of dense polynomials.)
>
> > How are you judging whether one polynomial is "messed up" relative
> > to another? I am unable to find the example or the documentation by
> > searching from the Sage home page for f.mul_fateman or f.mul or
> > karatsuba, none of which come up with anything. So I don't know what
> > examples you are talking about.
>
> Sorry for not including these.  Here are the examples:
>
> sage: R.<x> = RR[]
> sage: x._mul_? # for the documentation I mentioned
> sage: a=x+R('1e30')
> sage: b=x+R('1e20')
> sage: a
> x + 1.00e30
> sage: b
> x + 1.00e20
> sage: a*b # naive multiplication
> x^2 + 1.01e30*x + 1.00e50
> sage: a._mul_karatsuba(b) # missing the x term
> x^2 + 1.00e50
> sage: a._mul_fateman(b) # congratulations!
> x^2 + 1.01e30*x + 1.00e50
>
> > I don't know what you are calling Fateman multiplication, so I can't
> > comment. Maybe you could email me whatever
> > documentation you are referring to.  I do NOT have Sage installed.
>
> The docs for _mul_fateman say:
>
> ALGORITHM: Based on a paper by R. Fateman
>
> http://www.cs.berkeley.edu/~fateman/papers/polysbyGMP.pdf
>
> The idea is to encode dense univariate polynomials as big integers,
> instead of as sequences of coefficients. The paper argues that because
> integer multiplication is so cheap, encoding 2 polynomials as big
> numbers and then decoding the result might be faster than popular
> multiplication algorithms. This seems true when the degree is larger
> than 200.
>
> > I hope this has given you something to think about.
>
> Yes; thanks!
>
> Jason
>
> --
> Jason Grout
>



[sage-devel] Re: Fwd: numerically stable fast univariate polynomial multiplication over RR[x]

2010-04-27 Thread Jason Grout

On 04/27/2010 12:38 AM, William Stein wrote:

RJF asks me to forward this, since it bounced for him, evidently.


Thanks.




-- Forwarded message --
From: Richard Fateman
Date: Mon, Apr 26, 2010 at 10:15 PM
Subject: Re: [sage-devel] numerically stable fast univariate
polynomial  multiplication over RR[x]
To: William Stein
Cc: sage-devel@googlegroups.com, rjf


William Stein wrote:


On Mon, Apr 26, 2010 at 8:00 PM, Jason Grout
  wrote:


When multiplying two univariate polynomials over RR[x], Sage uses the naive
O(n^2) algorithm.  The docstring for _mul_ cites numerical stability as the
reason to not use asymptotically faster algorithms:

"Here we use the naive `O(n^2)` algorithm, as asymptotically faster
algorithms such as Karatsuba can have very inaccurate results due to
intermediate rounding errors. "


If you are doing arithmetic over the reals (RR?) then there is no
rounding error. But you apparently are doing
arithmetic in some approximation to the reals-- presumably doing n-bit
floating-point arithmetic for
some particular n.



Yes, that's what I had in mind.  Specifically double-precision, but more 
generally for any specific fixed n.





If you want to get the right answer, then convert all the numbers to
exact rationals in QQ. Multiply and then convert back. Or multiply all
the coefficients by 2^k so they are all integers, and do the arithmetic
in ZZ.


Thanks.
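
For concreteness, here is a minimal sketch of that suggestion in plain
Python (my illustration, with Fraction standing in for QQ; this is not
Sage's code):

from fractions import Fraction

def mul_exact(a, b):
    # a, b: coefficient lists, constant term first, machine floats.
    # Fraction(float) is exact, so all the arithmetic below is exact;
    # the only rounding is the final conversion back to float.
    aq = [Fraction(c) for c in a]
    bq = [Fraction(c) for c in b]
    cq = [Fraction(0)] * (len(a) + len(b) - 1)
    for i, ai in enumerate(aq):
        for j, bj in enumerate(bq):
            cq[i + j] += ai * bj
    return [float(c) for c in cq]

# (x + 1e30)(x + 1e20): the middle coefficient survives,
# roughly [1e+50, 1.00000000001e+30, 1.0].
print(mul_exact([1e30, 1.0], [1e20, 1.0]))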




You can also multiply (approximately) by a fast Fourier transform
convolution, which is reputedly rather stable.
The stability may in fact directly address the notion of accuracy, at
least if you are talking about dense polynomials.



Okay, that was one of the implicit subquestions---is FFT polynomial 
multiplication stable (in a fixed precision that does not change)? 
(Yes, I was thinking of dense polynomials.)
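
For what it's worth, the convolution in question is easy to sketch
with numpy (my example, not Sage's implementation; production FFT
multiplies scale and split coefficients to control error, and the
error here is normwise -- coefficients much smaller than the largest
can still lose relative accuracy):

import numpy as np

def mul_fft(a, b):
    # Multiply two real coefficient vectors via zero-padded FFTs, a
    # pointwise product, and an inverse FFT.
    n = len(a) + len(b) - 1
    return np.fft.irfft(np.fft.rfft(a, n) * np.fft.rfft(b, n), n)

a = np.random.rand(1000)
b = np.random.rand(1000)
exact = np.convolve(a, b)              # O(n^2) reference
err = np.max(np.abs(mul_fft(a, b) - exact)) / np.max(np.abs(exact))
print(err)                             # typically around 1e-15 here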




How are you judging whether one polynomial is "messed up" relative to
another? I am unable to find the example or the documentation by
searching from the Sage home page for f.mul_fateman or f.mul or
karatsuba, none of which come up with anything. So I don't know what
examples you are talking about.


Sorry for not including these.  Here are the examples:

sage: R.<x> = RR[]
sage: x._mul_? # for the documentation I mentioned
sage: a=x+R('1e30')
sage: b=x+R('1e20')
sage: a
x + 1.00e30
sage: b
x + 1.00e20
sage: a*b # naive multiplication
x^2 + 1.01e30*x + 1.00e50
sage: a._mul_karatsuba(b) # missing the x term
x^2 + 1.00e50
sage: a._mul_fateman(b) # congratulations!
x^2 + 1.01e30*x + 1.00e50
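
The lost term is easy to reproduce with bare machine floats: the
degree-1 Karatsuba step gets the middle coefficient by subtracting two
numbers near 1e50, and the subtraction cancels the 1e30-sized part (my
sketch of the effect, not Sage's code):

a0, a1 = 1e30, 1.0    # x + 1e30, constant term first
b0, b1 = 1e20, 1.0    # x + 1e20

naive_c1 = a0*b1 + a1*b0                          # ~1.00000000001e30
kara_c1  = (a0 + a1)*(b0 + b1) - a0*b0 - a1*b1    # -1.0 in doubles
print(naive_c1, kara_c1)
# (a0 + a1) rounds to 1e30 and (b0 + b1) rounds to 1e20, so their
# product equals a0*b0 exactly and the true middle term vanishes in
# the subtraction.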



I don't know what you are calling Fateman multiplication, so I can't
comment. Maybe you could email me whatever
documentation you are referring to.  I do NOT have Sage installed.



The docs for _mul_fateman say:

ALGORITHM: Based on a paper by R. Fateman

http://www.cs.berkeley.edu/~fateman/papers/polysbyGMP.pdf

The idea is to encode dense univariate polynomials as big integers,
instead of as sequences of coefficients. The paper argues that because
integer multiplication is so cheap, encoding 2 polynomials as big
numbers and then decoding the result might be faster than popular
multiplication algorithms. This seems true when the degree is larger
than 200.




I hope this has given you something to think about.


Yes; thanks!

Jason

--
Jason Grout
