On 02.10.2022 12:37, Qian Yun wrote:
On 10/2/22 13:18, Qian Yun wrote:
So the first conclusion is to optimize for small inputs. There's not much
room for that, I think.
For bigger inputs, I think the current implementation is bad both ways:
a) For the sparse case, simply chaining the MxN term products together,
then sorting and deduping, is O(N^2*log(N))
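For concreteness, here is a minimal sketch (in Python, not the actual implementation) of the chain-sort-dedupe approach being described: form all M*N term products, sort them by exponent, then merge adjacent equal exponents. The `(exponent, coefficient)` pair representation is an assumption for illustration. With M ~ N, the sort of N^2 products costs O(N^2*log(N^2)) = O(N^2*log(N)), matching the bound above.

```python
def mul_sparse(p, q):
    """Multiply sparse polynomials given as lists of (exponent, coefficient)
    pairs. Hypothetical representation chosen just for this sketch."""
    # Step 1: chain the M*N term products together.
    products = [(ep + eq, cp * cq) for ep, cp in p for eq, cq in q]
    # Step 2: sort by exponent -- the O(M*N*log(M*N)) step.
    products.sort(key=lambda term: term[0])
    # Step 3: dedupe -- merge adjacent terms with equal exponent.
    result = []
    for e, c in products:
        if result and result[-1][0] == e:
            result[-1] = (e, result[-1][1] + c)
        else:
            result.append((e, c))
    # Drop terms whose coefficients cancelled to zero.
    return [(e, c) for e, c in result if c != 0]

# (1 + x) * (1 - x) = 1 - x^2
print(mul_sparse([(0, 1), (1, 1)], [(0, 1), (1, -1)]))
# -> [(0, 1), (2, -1)]
```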