antinucleon commented on pull request #7149:
URL: https://github.com/apache/tvm/pull/7149#issuecomment-750438905


   > One thing I missed on previous PRs is that this is a new op. That means 
we should follow 
https://tvm.apache.org/docs/contribute/code_review.html#deliberate-on-api-and-data-structures.
 @codeislife99 could you do the following:
   > 
   > 1. Write up the differences between the TF, PyTorch, and MXNet versions of 
this operation.
   > 2. If this operation is TF-specific, can it be implemented just using 
existing relay operators?
   > 3. Answer these same questions for all the other sparse PRs you currently 
have open.
   > 
   > Given we are adding so many sparse operations, I think there are a couple 
of questions to answer:
   > 
   > 1. Should we create a new module for sparse operations in `tvm.relay`? 
i.e. `tvm.relay.sparse`
   > 2. Instead of hard-coding these specific one-off sparse operations, 
should we take a high-level approach like XLA?
   > 
   > @tqchen @jroesch @ANSHUMAN87 (please ping anyone else who has been working 
on sparse, not sure if I got everyone).
   > 
   > It may be worth discussing this on discuss.
   
   1. I am against over-documenting here. If the target is the TF frontend, why 
require a comparison with every framework? A link to the TF definition is fine.
   
   2. What do you mean by a high-level approach like XLA?
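
   For context on the quoted question 2 (whether a TF-specific sparse op can be 
expressed by composing existing operators): many TF sparse ops consume a COO 
triple of `(indices, values, dense_shape)`, and some reduce to a plain scatter 
over a dense buffer. A minimal NumPy sketch of that idea, with an illustrative 
function name that is not part of the TVM or relay API:

   ```python
   import numpy as np

   def sparse_to_dense(indices, values, dense_shape, default=0):
       """Materialize a COO (indices, values, dense_shape) triple as a dense
       array using a single scatter over existing dense primitives."""
       values = np.asarray(values)
       out = np.full(dense_shape, default, dtype=values.dtype)
       # Advanced indexing scatters each row of `indices` to its value.
       out[tuple(np.asarray(indices).T)] = values
       return out

   dense = sparse_to_dense([[0, 0], [1, 2]], [5, 7], (2, 3))
   # dense == [[5, 0, 0], [0, 0, 7]]
   ```

   Ops that only rearrange the COO triple (reshapes, fills) tend to be similarly 
composable; ops that need data-dependent output shapes are the harder case.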
   


----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org

