For bigger inputs, I think the current implementation is bad both ways:
a) For sparse cases, simply chaining the MxN terms together, then
sorting and deduping, is O(N^2*log(N)), better than the current O(N^3).
For sparse cases, another important factor is cell allocation:
M terms times N terms could result in MxN cells allocated.
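The chain-sort-dedupe approach from a) can be sketched as follows. This is a hypothetical model in Python, not the actual FriCAS code; polynomials are assumed to be lists of (exponent, coefficient) pairs:

```python
# Sparse polynomial: list of (exponent, coefficient) pairs, coefficient != 0.

def mul_chain_sort_dedupe(p, q):
    """Multiply two sparse polynomials by chaining all M*N partial
    products, sorting them by exponent, then combining equal exponents
    in one pass.  With K = M*N partial products this costs O(K*log(K)),
    instead of inserting each partial product into the growing result."""
    # Chain: generate all M*N partial products.
    terms = [(ep + eq, cp * cq) for ep, cp in p for eq, cq in q]
    # Sort by exponent.
    terms.sort(key=lambda t: t[0])
    # Dedupe: combine runs of equal exponents.
    result = []
    for e, c in terms:
        if result and result[-1][0] == e:
            result[-1][1] += c
        else:
            result.append([e, c])
    return [(e, c) for e, c in result if c != 0]
```

For example, (1 + x) * (1 - x) yields the four partial products (0,1), (1,1), (1,-1), (2,-1); after sorting and combining, the x terms cancel and only (0,1) and (2,-1) remain.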
I have an idea for distinguishing sparse from dense in the context of
multiplication:
For polynomial p and q, define:
sp := numberOfMonomials p -- number of cells of sparse rep
sq := numberOfMonomials q
dp := degree p - minimumDegree p + 1 -- length of dense rep
dq := degree q - minimumDegree q + 1
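One way these four quantities could drive the choice (my reading; the decision rule is not spelled out above): compare the sparse work sp*sq against the dense work dp*dq and pick the cheaper strategy. A hypothetical Python sketch:

```python
def choose_representation(sp, sq, dp, dq):
    """Pick a multiplication strategy from the size measures defined
    above: sp/sq = number of monomials (cells of the sparse rep),
    dp/dq = degree span (length of the dense rep).  The plain
    comparison is a guess; real code would tune the cutoff by
    benchmarking."""
    sparse_cost = sp * sq   # partial products to generate and merge
    dense_cost = dp * dq    # coefficient-array multiply-add work
    return "sparse" if sparse_cost < dense_cost else "dense"
```

For instance, squaring x^1000 + 1 gives sp = sq = 2 but dp = dq = 1001, so the sparse route wins by a wide margin; a fully dense polynomial of the same degree would go the other way.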
Correction for the Maxima case:
I used an expression instead of a polynomial in my previous version.
The following version is corrected. (Using "rat" is the key.)
The conclusion also changes: the Maxima implementation is O(n^2)
when the polys are dense and O(n^3) when the polys are sparse.
On 10/2/22 13:18, Qian Yun wrote:
> For bigger inputs, I think current implementation is bad both ways:
> a) For sparse cases, simply chain the MxN terms together, sort and
> dedupe is O(N^2*log(N)) better than current O(N^3).

This way the time complexity is O(N^2*log(N^2)) -> O(2*N^2*log(N)),
i.e. O(N^2*log(N)), since log(N^2) = 2*log(N).
Now some comparison with Maxima and Reduce:
IIUC, the Maxima implementation is "ptimes" at
https://github.com/calyau/maxima/blob/master/src/rat3a.lisp#L850
and the Reduce implementation is "poly!-multf" at
https://github.com/reduce-algebra/reduce-algebra/blob/master/packages/poly/polrep.red#L199
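To illustrate the claimed O(n^2)-dense versus O(n^3)-sparse behavior, here is a rough Python model of a ptimes-style strategy. This is my simplification of the idea, not Maxima's actual code: every partial product is merged into an exponent-ordered result list, so each insertion may scan the result built so far.

```python
def mul_merge_insert(p, q):
    """Multiply sparse polynomials ((exponent, coeff) pairs, kept in
    descending exponent order) by merging every partial product into
    an ordered result list.  For dense inputs the insertion point is
    found quickly, ~O(n^2) overall; for sparse inputs each of the
    ~n^2 insertions may scan the whole result, ~O(n^3) overall."""
    result = []  # sorted by descending exponent
    for ep, cp in p:
        for eq, cq in q:
            e, c = ep + eq, cp * cq
            # Scan for the slot of exponent e.
            i = 0
            while i < len(result) and result[i][0] > e:
                i += 1
            if i < len(result) and result[i][0] == e:
                result[i] = (e, result[i][1] + c)
            else:
                result.insert(i, (e, c))
    return [(e, c) for e, c in result if c != 0]
```

Multiplying (x + 1) by (x - 1) this way produces x^2 - 1, with the two x terms cancelling inside the merge.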
On 02.10.2022 12:37, Qian Yun wrote:
> On 10/2/22 13:18, Qian Yun wrote:
>> So first conclusion is to optimize for small inputs. There's not much
>> room for it, I think.
>>
>> For bigger inputs, I think current implementation is bad both ways:
>> a) For sparse cases, simply chain the MxN terms together, sort and
>> dedupe is O(N^2*log(N))