I think we should keep the ONNX APIs, since they can already export many basic models, even though the coverage is not perfect. Users will train their models in MXNet 2.0, export them to ONNX, and then use the ONNX models in their deployment frameworks (http://onnx.ai/supported-tools).
It is also useful for attracting new users.
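For reference, a minimal sketch of the export flow described above, using the `mxnet.contrib.onnx` module; the model file names and input shape here are illustrative placeholders:

```python
import numpy as np
import mxnet as mx
from mxnet.contrib import onnx as onnx_mxnet

# Assumes a trained symbol/params pair saved as 'model-symbol.json'
# and 'model-0000.params' (hypothetical file names).
sym = './model-symbol.json'
params = './model-0000.params'

# Export to ONNX; input_shape must match the model's data input.
onnx_file = onnx_mxnet.export_model(sym, params,
                                    input_shape=[(1, 3, 224, 224)],
                                    input_type=np.float32,
                                    onnx_file_path='model.onnx')
print('Saved', onnx_file)
```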
@larroy Users may need matrix operators and DNN ops (e.g. ReLU, Conv) when writing a custom op. Although they could implement these with third-party libraries, it is more convenient to reuse the built-in functions in MXNet, as in the sketch below.
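A minimal sketch of this pattern with the existing Python custom-op API, reusing a built-in operator (`mx.nd.relu`) inside the custom op's forward pass:

```python
import mxnet as mx

class MyRelu(mx.operator.CustomOp):
    """Custom op that delegates the math to a built-in MXNet operator."""
    def forward(self, is_train, req, in_data, out_data, aux):
        # Reuse the built-in ReLU instead of reimplementing it.
        y = mx.nd.relu(in_data[0])
        self.assign(out_data[0], req[0], y)

    def backward(self, req, out_grad, in_data, out_data, in_grad, aux):
        # Gradient of ReLU: pass the gradient through where input > 0.
        dx = out_grad[0] * (in_data[0] > 0)
        self.assign(in_grad[0], req[0], dx)

@mx.operator.register("my_relu")
class MyReluProp(mx.operator.CustomOpProp):
    def __init__(self):
        super(MyReluProp, self).__init__(need_top_grad=True)

    def list_arguments(self):
        return ['data']

    def list_outputs(self):
        return ['output']

    def infer_shape(self, in_shape):
        return in_shape, [in_shape[0]], []

    def create_operator(self, ctx, shapes, dtypes):
        return MyRelu()

# Usage:
x = mx.nd.array([-1.0, 2.0])
y = mx.nd.Custom(x, op_type='my_relu')
print(y)
```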
Hi @samskalicky, thank you for the contribution!
I have several suggestions:
- Custom GPU operators
  1. Provide the CUDA stream in `OpResource`.
  2. Share the same function between CPU and GPU. Users can discriminate the context via `MXTensor::dltensor::ctx`, as in the sketch after this list.
- Call framework-specific math helpers
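Since the C custom-op API was still under discussion here, the following is a Python analog of the proposed dispatch: one entry point shared by both devices, branching on the array's context the way a C operator would branch on `MXTensor::dltensor::ctx` (the branch bodies are placeholders):

```python
import mxnet as mx

def my_forward(data):
    """Single function shared by CPU and GPU; dispatch on context."""
    if data.context.device_type == 'gpu':
        # GPU path: in the C API this is where a kernel would be
        # launched on the CUDA stream exposed through OpResource.
        return mx.nd.relu(data)
    else:
        # CPU path.
        return mx.nd.relu(data)

# Usage:
x_cpu = mx.nd.array([-1.0, 2.0])                     # defaults to cpu(0)
print(my_forward(x_cpu))
# x_gpu = mx.nd.array([-1.0, 2.0], ctx=mx.gpu(0))    # requires a GPU build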
Reopened #14253.
Hi @reminisce, I tried to pass numpy-compatible arrays into a legacy operator, and it raises this error:
```python
>>> import mxnet.numpy as np
>>> import mxnet as mx
>>> a = np.array([1,2])
>>> b = np.array([3,4])
>>> mx.nd.broadcast_add(a,b)
Traceback (most recent call last):
  ...
```
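A possible workaround, assuming the conversion helpers `as_nd_ndarray()` and `as_np_ndarray()` are available in this build, is to convert explicitly before and after calling the legacy operator:

```python
import mxnet as mx
import mxnet.numpy as np

a = np.array([1, 2])
b = np.array([3, 4])

# Convert the numpy-compatible arrays to legacy NDArrays first,
# then convert the result back (helpers assumed available here).
out = mx.nd.broadcast_add(a.as_nd_ndarray(), b.as_nd_ndarray())
print(out.as_np_ndarray())
```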
Closed #14253.