On 10/2/22 13:18, Qian Yun wrote:
So the first conclusion is to optimize for small inputs.  There is not
much room for that, I think.

For bigger inputs, I think the current implementation is bad both ways:

a) For sparse cases, simply chaining the M*N term products together,
then sorting and deduplicating, is O(N^2*log(N)) (taking M ~ N),
better than the current O(N^3).
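The sparse scheme in a) could be sketched as follows (a Python sketch, not FriCAS code; the representation as (exponent, coefficient) lists and the name `sparse_mul` are illustrative assumptions):

```python
# Sketch of sparse multiplication: form all M*N pairwise term
# products, sort them by exponent, then merge ("dedupe") terms with
# equal exponents in a single pass.
# A polynomial is a list of (exponent, coefficient) pairs.

def sparse_mul(p, q):
    # All M*N pairwise products: O(M*N) terms
    products = [(ep + eq, cp * cq) for ep, cp in p for eq, cq in q]
    # Sort by exponent, highest first: O(M*N * log(M*N))
    products.sort(key=lambda t: t[0], reverse=True)
    # One linear pass combines terms that share an exponent
    result = []
    for exp, coeff in products:
        if result and result[-1][0] == exp:
            result[-1] = (exp, result[-1][1] + coeff)
        else:
            result.append((exp, coeff))
    # Drop terms that cancelled to zero
    return [(e, c) for e, c in result if c != 0]
```

For example, (x^2 + 1) * (x + 1) comes out as x^3 + x^2 + x + 1, and cancellation such as (x + 1) * (x - 1) = x^2 - 1 is handled by the final filter.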

b) For dense cases, using an actual dense representation such as an
array should be a great improvement over the current implementation:
we only need an array of length slightly larger than M+N to store
the temporary result.

In fact, there is no need to create an array for the temporary result
at all: we only need to create arrays for the inputs (p, q), and each
coefficient of p * q can then be computed directly, for example:

https://github.com/sympy/sympy/blob/sympy-1.11.1/sympy/polys/densearith.py#L735
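A minimal sketch of that dense scheme, in the style of SymPy's `dup_mul` linked above (the name `dense_mul` is illustrative; coefficients are stored highest degree first, as in SymPy's "dup" layout):

```python
# Dense multiplication over plain coefficient lists: each result
# coefficient is a convolution sum over the input arrays, so only the
# (deg f + deg g + 1)-length output needs to be allocated.

def dense_mul(f, g):
    # f, g: coefficient lists, highest degree first
    df, dg = len(f) - 1, len(g) - 1
    h = []
    for i in range(df + dg + 1):
        coeff = 0
        # Sum f[j] * g[i - j] over the indices where both factors exist
        for j in range(max(0, i - dg), min(df, i) + 1):
            coeff += f[j] * g[i - j]
        h.append(coeff)
    return h
```

For example, (x + 1) * (x - 1) is `dense_mul([1, 1], [1, -1])`, giving `[1, 0, -1]`, i.e. x^2 - 1.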

I think this will be a valid optimization for small (but dense)
inputs as well.  We will need data to determine when to use this
method -- what qualifies a polynomial as "dense"?
For example, max degree - min degree < 2 * number of monomials?
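The proposed heuristic is a one-liner; as a hypothetical sketch (the predicate name and the (exponent, coefficient) term representation are assumptions, and the threshold of 2 is the guess from above, to be tuned against real data):

```python
# Hypothetical density test: call a polynomial "dense" when its
# exponent span is less than twice its number of monomials.

def is_dense(terms):
    # terms: non-empty list of (exponent, coefficient) pairs
    exps = [e for e, _ in terms]
    return max(exps) - min(exps) < 2 * len(terms)
```

Under this rule x^3 + x^2 + x + 1 (span 3, 4 terms) counts as dense, while x^100 + 1 (span 100, 2 terms) does not.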

- Qian
