samskalicky commented on issue #15921: dynamic custom operator support
URL: https://github.com/apache/incubator-mxnet/pull/15921#issuecomment-539344942

> Hey, thank you guys for the nice work! BTW, would you mind if you guys clearly state why DLTensor is not adopted, which I believe would be useful for other community members for reference

@wkcn (who implemented DLTensor support in MXNet) and I had a long discussion about this. In fact, we did investigate supporting DLTensor: https://github.com/samskalicky/incubator-mxnet/blob/custom_op/example/custom_op/test.cc#L14

One takeaway was that it would not be easy or convenient to modify the DLPack structure. DLPack is used across multiple deep learning frameworks, so it has to stay consistent for all of them, and building MXNet custom operators directly on top of DLTensor would therefore limit the future extensibility of MXNet and its custom operator support. One example is adding a "layout" field to the tensor structure (e.g. NCHW). That is something I have heard requested, but it is [not currently something the DLPack community is willing to accept](https://github.com/dmlc/dlpack/pull/42).

The MXTensor structure in this work is compatible with DLPack/DLTensor, so any user who wants to convert from MXTensor to DLTensor can do so by simply setting the fields of a DLTensor, without copying data and with very small overhead (see the sketch at the end of this comment).

Just because we're not using DLTensor now does not mean that we cannot support it directly in a future PR. If enough users want this feature, the work in this PR can easily be extended to pass DLTensors from MXNet to the custom operators in the external library.
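To make the zero-copy conversion concrete, here is a minimal sketch. The `MXTensor` below is a simplified stand-in (the field names `data`, `shape`, and `dtype` are illustrative, not necessarily what this PR's `lib_api.h` defines), and the DLPack field names follow recent versions of `dlpack.h` (older headers call the device field `ctx` of type `DLContext`):

```c++
#include <cstdint>
#include <vector>
#include <dlpack/dlpack.h>

// Simplified stand-in for the MXTensor defined in this PR's lib_api.h.
struct MXTensor {
  void* data;                  // raw pointer into MXNet-owned memory
  std::vector<int64_t> shape;  // tensor dimensions
  DLDataType dtype;            // assume the dtype is already stored DLPack-style
};

// Build a DLTensor that aliases the MXTensor's memory (no data copy).
DLTensor toDLTensor(MXTensor& t) {
  DLTensor dl;
  dl.data = t.data;                           // share the buffer
  dl.device = DLDevice{kDLCPU, 0};            // assume CPU memory for this sketch
  dl.ndim = static_cast<int>(t.shape.size());
  dl.dtype = t.dtype;
  dl.shape = t.shape.data();                  // alias the shape array too
  dl.strides = nullptr;                       // nullptr means compact row-major layout
  dl.byte_offset = 0;
  return dl;
}

int main() {
  std::vector<float> buf(2 * 3, 1.0f);
  MXTensor mx{buf.data(), {2, 3}, DLDataType{kDLFloat, 32, 1}};
  DLTensor dl = toDLTensor(mx);
  // The DLTensor is a view: it points at the same memory as the MXTensor.
  return dl.data == mx.data ? 0 : 1;
}
```

The cost is just filling in a handful of struct fields per tensor, which is the "very small overhead" mentioned above.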