This topic is quite old, but let me offer my advice.

Turn the dense tensor into a sparse one using the indices of the sparse tensor, 
then do a sparse-vs-sparse dot product. This may speed up execution when the 
sparsity ratio is high.

In theory this has lower time complexity than the full dense dot product, 
though I'm not sure how well it would work in practice.
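As a rough illustration of the idea (not MXNet's sparse API): below is a minimal SciPy/NumPy sketch that keeps the input in CSR form, restricts the dense weight to the columns picked out by the sparse input's indices, converts that slice to a sparse matrix, and then does a sparse-vs-sparse dot. The shapes, sparsity level, and the column-slicing interpretation are assumptions for demonstration only.

```python
import numpy as np
import scipy.sparse as sp

# Hypothetical setup: batch of 4 inputs with 1000 features (~99% zeros)
# feeding a FullyConnected layer with 8 output units.
rng = np.random.default_rng(0)
x_dense = rng.random((4, 1000)) * (rng.random((4, 1000)) > 0.99)
W = rng.random((8, 1000))   # dense weight
b = rng.random(8)           # bias

# Baseline: plain dense FullyConnected, y = x W^T + b
y_dense = x_dense @ W.T + b

# Suggested approach: keep x sparse, and turn the dense weight into a
# sparse matrix restricted to the columns where x actually has values.
x_csr = sp.csr_matrix(x_dense)
nz_cols = np.unique(x_csr.indices)            # indices of the sparse tensor
W_sub = sp.csr_matrix(W[:, nz_cols])          # dense tensor made sparse
y_sparse = (x_csr[:, nz_cols] @ W_sub.T).toarray() + b

assert np.allclose(y_dense, y_sparse)
```

Whether this beats a well-optimized dense GEMM depends on how sparse the input really is and on the sparse kernels available; it is only meant to show the shape of the computation.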




