[GitHub] [incubator-mxnet] wkcn edited a comment on issue #15921: [WIP] dynamic custom operator support
wkcn edited a comment on issue #15921: [WIP] dynamic custom operator support URL: https://github.com/apache/incubator-mxnet/pull/15921#issuecomment-526482892

We don't need to change the NNVM code. For multiple registration, we can use different operator names to register them in the back-end. Users only deal with the front-end, which uses `namespace` to distinguish different implementations of CustomOp. For example, a user who wants to register a ConvOp that overrides the original Convolution op writes:

```python
mx.library.load('MyConvOp.so', override=True)
# mx.nd.Convolution will be the custom operator
```

If users don't want to override the original ConvOp:

```python
mx.library.load('MyConvOp.so', namespace='myop')
# mx.nd.myop.Convolution is the custom operator, and mx.nd.Convolution is not overridden.
```

The flag `override` is `False` by default. If users don't set it to `True` and register an operator multiple times, an exception is raised: `The operator xxx has been registered. Please rename it`.

This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services
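The `override`/`namespace` loading semantics proposed above can be sketched with a toy registry. This is an illustrative sketch only; `OpRegistry`, `register`, and `get` are hypothetical names, not the actual MXNet API:

```python
class OpRegistry:
    """Toy registry illustrating the proposed override/namespace semantics."""
    def __init__(self):
        self._ops = {}

    def register(self, name, func, namespace=None, override=False):
        # A namespaced op is stored under a prefixed name, so it never
        # collides with the built-in operator of the same name.
        full_name = name if namespace is None else '%s.%s' % (namespace, name)
        if full_name in self._ops and not override:
            raise ValueError(
                'The operator %s has been registered. Please rename it' % full_name)
        self._ops[full_name] = func

    def get(self, name):
        return self._ops[name]

registry = OpRegistry()
registry.register('Convolution', lambda x: 'builtin')

# Re-registering without override=True raises, as described above:
try:
    registry.register('Convolution', lambda x: 'custom')
except ValueError as e:
    print(e)

# With a namespace, both implementations coexist:
registry.register('Convolution', lambda x: 'custom', namespace='myop')
print(registry.get('Convolution')(None))       # builtin
print(registry.get('myop.Convolution')(None))  # custom
```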
[GitHub] [incubator-mxnet] sxjscience commented on issue #16048: [Numpy] Automatic type inference mismatch with numpy
sxjscience commented on issue #16048: [Numpy] Automatic type inference mismatch with numpy URL: https://github.com/apache/incubator-mxnet/issues/16048#issuecomment-526482844

Also, it would be more consistent if we could print `float32`, `int64` instead of the raw class name.
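For reference, NumPy already exposes a canonical short name for each dtype class, which is one way to render `float32`/`int64` instead of the raw class name. A sketch using plain NumPy, not MXNet's actual implementation:

```python
import numpy as np

def dtype_name(dtype_cls):
    # np.dtype accepts a scalar type class and yields its canonical name string.
    return np.dtype(dtype_cls).name

print(dtype_name(np.float32))  # float32
print(dtype_name(np.int64))    # int64
```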
[GitHub] [incubator-mxnet] sxjscience edited a comment on issue #16048: [Numpy] Automatic type inference mismatch with numpy
sxjscience edited a comment on issue #16048: [Numpy] Automatic type inference mismatch with numpy URL: https://github.com/apache/incubator-mxnet/issues/16048#issuecomment-526481965 @haojin2 @reminisce
[GitHub] [incubator-mxnet] sxjscience opened a new issue #16048: [Numpy] Automatic type inference mismatch with numpy
sxjscience opened a new issue #16048: [Numpy] Automatic type inference mismatch with numpy URL: https://github.com/apache/incubator-mxnet/issues/16048

```python
import mxnet.numpy as np
a = 123123213123
b = np.array(a)
print(b.dtype)  #
```

```python
import numpy as np
a = 123123213123
b = np.array(a)
print(b.dtype)  # int64
```
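The reference behavior can be checked directly against NumPy; a conforming MXNet should infer the same dtype for integer scalars. A sketch using plain NumPy (dtype shown is for 64-bit platforms):

```python
import numpy as np

# NumPy infers an integer dtype from a Python int; on 64-bit platforms
# a value this large comes out as int64, and the value round-trips exactly.
value = 123123213123
arr = np.array(value)
print(arr.dtype)  # int64
assert arr.item() == value
```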
[GitHub] [incubator-mxnet] mxnet-label-bot commented on issue #16048: [Numpy] Automatic type inference mismatch with numpy
mxnet-label-bot commented on issue #16048: [Numpy] Automatic type inference mismatch with numpy URL: https://github.com/apache/incubator-mxnet/issues/16048#issuecomment-526481886 Hey, this is the MXNet Label Bot. Thank you for submitting the issue! I will try and suggest some labels so that the appropriate MXNet community members can help resolve it.
[GitHub] [incubator-mxnet] samskalicky edited a comment on issue #15921: [WIP] dynamic custom operator support
samskalicky edited a comment on issue #15921: [WIP] dynamic custom operator support URL: https://github.com/apache/incubator-mxnet/pull/15921#issuecomment-526477697

@wkcn should we consider removing the nnvm plevel restriction so that attrs can be set multiple times here: https://github.com/dmlc/nnvm/blob/dab5ce8ab6adbf4edd8bd2fa89f1a99f343b6e38/include/nnvm/op.h#L461-L464

Or add an argument to optionally allow override, but default to current behavior? Like this:

```c++
template<typename ValueType>
inline Op& set_attr(const std::string& attr_name,  // NOLINT(*)
                    const ValueType& value,
                    int plevel = 10,
                    bool override = false);

template<typename ValueType>
inline Op& Op::set_attr(  // NOLINT(*)
    const std::string& attr_name,
    const ValueType& value,
    int plevel,
    bool override) {
  CHECK_GT(plevel, 0) << "plevel in set_attr must be greater than 0";
  // update the attribute map of the key by creating new empty if needed.
  UpdateAttrMap(attr_name, [this, attr_name, value, plevel, override](any* pmap) {
      // the callback is in lockscope so is threadsafe.
      if (pmap->empty()) {
        OpMap<ValueType> pm;
        pm.attr_name_ = attr_name;
        *pmap = std::move(pm);
      }
      CHECK(pmap->type() == typeid(OpMap<ValueType>))
          << "Attribute " << attr_name << " of operator " << this->name
          << " is registered as inconsistent types"
          << " previously " << pmap->type().name()
          << " current " << typeid(OpMap<ValueType>).name();
      std::vector<std::pair<ValueType, int> >& vec =
          nnvm::get<OpMap<ValueType> >(*pmap).data_;
      // resize the value type.
      if (vec.size() <= index_) {
        vec.resize(index_ + 1, std::make_pair(ValueType(), 0));
      }
      std::pair<ValueType, int>& p = vec[index_];
      // allow re-registration at the same plevel only when override is set
      CHECK(p.second != plevel || override)
          << "Attribute " << attr_name << " of operator " << this->name
          << " is already registered with same plevel=" << plevel;
      if (p.second < plevel) {
        vec[index_] = std::make_pair(value, plevel);
      }
    });
  return *this;
}
```
[GitHub] [incubator-mxnet] zixuanweeei commented on issue #16037: LSTM with MKL-DNN produces wrong output after weights are changed
zixuanweeei commented on issue #16037: LSTM with MKL-DNN produces wrong output after weights are changed URL: https://github.com/apache/incubator-mxnet/issues/16037#issuecomment-526472185 @matteosal Thanks for reporting this issue. We are addressing the problem; a PR is on the way. Thanks.
[GitHub] [incubator-mxnet] wkcn edited a comment on issue #15921: [WIP] dynamic custom operator support
wkcn edited a comment on issue #15921: [WIP] dynamic custom operator support URL: https://github.com/apache/incubator-mxnet/pull/15921#issuecomment-526468908

Agree that users are allowed to define the namespace manually and override internal operators. In the case you gave,

```
Library1 has {opA, opC, opD}
Library2 has {opB, opC, opE}
```

Users may want opA, opB, opE, opD, and opC in library1. They may likewise choose opA, opB, opE, opD, and opC in library2. In addition, users may want opC in lib1 and lib2 simultaneously. I don't think plevel can address the problem.
[incubator-mxnet] branch master updated (65928b1 -> 9173dad)
This is an automated email from the ASF dual-hosted git repository. patriczhao pushed a change to branch master in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git.

from 65928b1 NumPy-compatible infrastructure on Gluon (#16024)
add 9173dad [MKLDNN] fix uint8 batch norm memory misuse (#16034)

No new revisions were added by this update. Summary of changes:

```
example/quantization/README.md                          |  2 +-
example/quantization/imagenet_gen_qsym_mkldnn.py        |  3 ++-
src/operator/nn/mkldnn/mkldnn_batch_norm-inl.h          | 13 ++---
.../quantization/mkldnn/mkldnn_quantized_batch_norm.cc  |  2 +-
4 files changed, 14 insertions(+), 6 deletions(-)
```
[GitHub] [incubator-mxnet] pengzhao-intel merged pull request #16034: [MKLDNN] fix uint8 batch norm memory misuse
pengzhao-intel merged pull request #16034: [MKLDNN] fix uint8 batch norm memory misuse URL: https://github.com/apache/incubator-mxnet/pull/16034
[GitHub] [incubator-mxnet] ElaineBao edited a comment on issue #16034: [MKLDNN] fix uint8 batch norm memory misuse
ElaineBao edited a comment on issue #16034: [MKLDNN] fix uint8 batch norm memory misuse URL: https://github.com/apache/incubator-mxnet/pull/16034#issuecomment-526465867

> Please add a uint8 test for this.

A uint8 bn test case has been added at: https://github.com/apache/incubator-mxnet/blob/master/tests/python/quantization/test_quantization.py#L675-L678
[GitHub] [incubator-mxnet] reminisce opened a new pull request #16047: Support range as advanced index for ndarrays
reminisce opened a new pull request #16047: Support range as advanced index for ndarrays URL: https://github.com/apache/incubator-mxnet/pull/16047

## Description ##
Support `range` as advanced indices for ndarrays. For example,

```python
x[range(5)]  # is equivalent to x[[0, 1, 2, 3, 4]]
```

## Checklist ##
### Essentials ###
Please feel free to remove inapplicable items for your PR.
- [ ] The PR title starts with [MXNET-$JIRA_ID], where $JIRA_ID refers to the relevant [JIRA issue](https://issues.apache.org/jira/projects/MXNET/issues) created (except PRs with tiny changes)
- [x] Changes are complete (i.e. I finished coding on this PR)
- [x] All changes have test coverage:
  - Unit tests are added for small changes to verify correctness (e.g. adding a new operator)
  - Nightly tests are added for complicated/long-running ones (e.g. changing distributed kvstore)
  - Build tests will be added for build configuration changes (e.g. adding a new build option with NCCL)
- [x] Code is well-documented:
  - For user-facing API changes, API doc string has been updated.
  - For new C++ functions in header files, their functionalities and arguments are documented.
  - For new examples, README.md is added to explain what the example does, the source of the dataset, expected performance on the test set, and a reference to the original paper if applicable
  - Check the API doc at http://mxnet-ci-doc.s3-accelerate.dualstack.amazonaws.com/PR-$PR_ID/$BUILD_ID/index.html
- [ ] To my best knowledge, examples are either not affected by this change, or have been fixed to be compatible with this change

### Changes ###
- [ ] Feature1, tests, (and when applicable, API doc)
- [ ] Feature2, tests, (and when applicable, API doc)

## Comments ##
- If this change is a backward incompatible change, why must this change be made.
- Interesting edge cases to note here
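NumPy already treats `range` as a fancy index, which is the behavior this PR brings to MXNet ndarrays. A quick check of the claimed equivalence using plain NumPy:

```python
import numpy as np

x = np.arange(20).reshape(10, 2)

# range(5) is treated like the integer index array [0, 1, 2, 3, 4]:
a = x[range(5)]
b = x[[0, 1, 2, 3, 4]]

assert (a == b).all()
print(a.shape)  # (5, 2)
```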
[GitHub] [incubator-mxnet] Jerryzcn edited a comment on issue #14883: [Discussion] Overhead in MXNet Execution
Jerryzcn edited a comment on issue #14883: [Discussion] Overhead in MXNet Execution URL: https://github.com/apache/incubator-mxnet/issues/14883#issuecomment-526460672 issue also reported here: https://github.com/apache/incubator-mxnet/issues/8112
[GitHub] [incubator-mxnet] Jerryzcn commented on issue #14883: [Discussion] Overhead in MXNet Execution
Jerryzcn commented on issue #14883: [Discussion] Overhead in MXNet Execution URL: https://github.com/apache/incubator-mxnet/issues/14883#issuecomment-526460672 related: https://github.com/apache/incubator-mxnet/issues/8112
[GitHub] [incubator-mxnet] Jerryzcn edited a comment on issue #14883: [Discussion] Overhead in MXNet Execution
Jerryzcn edited a comment on issue #14883: [Discussion] Overhead in MXNet Execution URL: https://github.com/apache/incubator-mxnet/issues/14883#issuecomment-526457745

MXNet function call overhead is quite high:

- NumPy call overhead: 0.7 microsecond
- TVM call overhead: 3.4 microsecond
- MXNet call overhead: 28.3 microsecond

Code attached for benchmarking the function call overhead (run under IPython; `%timeit` is an IPython magic):

```python
import numpy as np
from matplotlib import pyplot as plt
from IPython import display

def benchmark(func, n_start, n_end, n_stride=1):
    avg_times, sizes = [], (2**np.arange(n_start, n_end, n_stride))
    np.random.seed(0)
    for size in sizes:
        avg_times.append(func(size))
    return sizes, np.array(avg_times)

def np_copy(size):
    x = np.random.normal(size=size).astype('float32')
    y = np.empty_like(x)
    res = %timeit -o -q np.copyto(y, x)
    return res.average

_, times = benchmark(np_copy, 1, 8)
print('NumPy call overhead: %.1f microsecond' % (times.mean()*1e6,))

import tvm

def tvm_copy(size):
    x = np.random.normal(size=size).astype('float32')
    y = np.empty_like(x)
    x, y = tvm.nd.array(x), tvm.nd.array(y)
    res = %timeit -o -q x.copyto(y)
    return res.average

_, times = benchmark(tvm_copy, 1, 8)
print('TVM call overhead: %.1f microsecond' % (times.mean()*1e6,))

import mxnet as mx

def mx_copy(size):
    x = np.random.normal(size=size).astype('float32')
    y = np.empty_like(x)
    x, y = mx.nd.array(x), mx.nd.array(y)
    res = %timeit -o -q x.copyto(y)
    return res.average

_, times = benchmark(mx_copy, 1, 8)
print('MXNet call overhead: %.1f microsecond' % (times.mean()*1e6,))
```
[GitHub] [incubator-mxnet] Jerryzcn commented on issue #14883: [Discussion] Overhead in MXNet Execution
Jerryzcn commented on issue #14883: [Discussion] Overhead in MXNet Execution URL: https://github.com/apache/incubator-mxnet/issues/14883#issuecomment-526457745 MXNet function call overhead is quite high: http://tvm.d2l.ai.s3-website-us-west-2.amazonaws.com/chapter_cpu_schedule/call_overhead.html#overhead-of-numpy-tvm-and-mxnet
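The `%timeit` magics in the benchmark above only run under IPython. A portable sketch of the same per-call overhead measurement using the standard-library `timeit` module (NumPy only, since TVM/MXNet may not be installed; absolute numbers vary by machine):

```python
import timeit

import numpy as np

def call_overhead_us(stmt, env, number=10000):
    """Return the best average time per call of `stmt`, in microseconds."""
    timer = timeit.Timer(stmt, globals=env)
    # Take the minimum over several repeats to reduce scheduling noise.
    return min(timer.repeat(repeat=5, number=number)) / number * 1e6

# Tiny arrays, so the copy itself is negligible and the call overhead dominates.
x = np.random.normal(size=8).astype('float32')
y = np.empty_like(x)
overhead = call_overhead_us('np.copyto(y, x)', {'np': np, 'x': x, 'y': y})
print('NumPy call overhead: %.1f microsecond' % overhead)
```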
[GitHub] [incubator-mxnet] pilhoon commented on issue #8358: 'module' object has no attribute 'nd'
pilhoon commented on issue #8358: 'module' object has no attribute 'nd' URL: https://github.com/apache/incubator-mxnet/issues/8358#issuecomment-526455910 I just reinstalled mxnet, and this was fixed. Try `pip install -U mxnet-cu90` (or `pip install -U mxnet`). Don't miss the `-U` flag.
[GitHub] [incubator-mxnet] stu1130 opened a new pull request #16046: numpy compatible max min
stu1130 opened a new pull request #16046: numpy compatible max min URL: https://github.com/apache/incubator-mxnet/pull/16046

## Description ##
Numpy-compatible max and min.

## Checklist ##
### Essentials ###
Please feel free to remove inapplicable items for your PR.
- [x] The PR title starts with [MXNET-$JIRA_ID], where $JIRA_ID refers to the relevant [JIRA issue](https://issues.apache.org/jira/projects/MXNET/issues) created (except PRs with tiny changes)
- [x] Changes are complete (i.e. I finished coding on this PR)
- [x] All changes have test coverage:
  - Unit tests are added for small changes to verify correctness (e.g. adding a new operator)
  - Nightly tests are added for complicated/long-running ones (e.g. changing distributed kvstore)
  - Build tests will be added for build configuration changes (e.g. adding a new build option with NCCL)
- [x] Code is well-documented:
  - For user-facing API changes, API doc string has been updated.
  - For new C++ functions in header files, their functionalities and arguments are documented.
  - For new examples, README.md is added to explain what the example does, the source of the dataset, expected performance on the test set, and a reference to the original paper if applicable
  - Check the API doc at http://mxnet-ci-doc.s3-accelerate.dualstack.amazonaws.com/PR-$PR_ID/$BUILD_ID/index.html
- [x] To my best knowledge, examples are either not affected by this change, or have been fixed to be compatible with this change

### Changes ###

## Comments ##
[incubator-mxnet] branch master updated (36ac85a -> 65928b1)
This is an automated email from the ASF dual-hosted git repository. reminisce pushed a change to branch master in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git.

from 36ac85a Removes ccache setup from static build
add 65928b1 NumPy-compatible infrastructure on Gluon (#16024)

No new revisions were added by this update. Summary of changes:

```
python/mxnet/contrib/text/embedding.py        |  29 +--
python/mxnet/gluon/data/dataloader.py         |  34 ++--
python/mxnet/gluon/data/vision/datasets.py    |  12 ++-
python/mxnet/gluon/data/vision/transforms.py  |  25 ++
python/mxnet/gluon/loss.py                    |  78 ++
python/mxnet/gluon/model_zoo/vision/resnet.py |  19 +++--
python/mxnet/gluon/nn/activations.py          |   7 +-
python/mxnet/gluon/nn/basic_layers.py         |  27 +++--
python/mxnet/gluon/nn/conv_layers.py          |  54 +---
python/mxnet/gluon/rnn/rnn_layer.py           |  31 ---
python/mxnet/gluon/utils.py                   |  29 +--
python/mxnet/image/detection.py               |  17 +++-
python/mxnet/image/image.py                   |  42 +++---
python/mxnet/initializer.py                   |  15 +++-
python/mxnet/numpy/multiarray.py              |   4 +-
python/mxnet/symbol/numpy/_symbol.py          |   4 +-
tests/python/unittest/test_numpy_gluon.py     | 113 ++
tests/python/unittest/test_numpy_op.py        |   2 +-
18 files changed, 436 insertions(+), 106 deletions(-)
create mode 100644 tests/python/unittest/test_numpy_gluon.py
```
[GitHub] [incubator-mxnet] reminisce merged pull request #16024: NumPy-compatible infrastructure on Gluon
reminisce merged pull request #16024: NumPy-compatible infrastructure on Gluon URL: https://github.com/apache/incubator-mxnet/pull/16024
[GitHub] [incubator-mxnet] apeforest opened a new issue #16045: [Doc] operator source points to the incorrect file
apeforest opened a new issue #16045: [Doc] operator source points to the incorrect file URL: https://github.com/apache/incubator-mxnet/issues/16045 http://mxnet.incubator.apache.org/versions/master/api/python/ndarray/ndarray.html#mxnet.ndarray.mean This operator is defined in https://github.com/apache/incubator-mxnet/blob/master/src/operator/tensor/broadcast_reduce_sum_value.cc#L83 However, the auto-generated doc file points to https://github.com/apache/incubator-mxnet/blob/master/src/operator/tensor/broadcast_reduce_op.h#L83
[GitHub] [incubator-mxnet] mxnet-label-bot commented on issue #16045: [Doc] operator source points to the incorrect file
mxnet-label-bot commented on issue #16045: [Doc] operator source points to the incorrect file URL: https://github.com/apache/incubator-mxnet/issues/16045#issuecomment-526451609 Hey, this is the MXNet Label Bot. Thank you for submitting the issue! I will try and suggest some labels so that the appropriate MXNet community members can help resolve it. Here are my recommended label(s): Doc
[GitHub] [incubator-mxnet] KellenSunderland edited a comment on issue #15613: [Discussion] 1.5.1 Patch Release
KellenSunderland edited a comment on issue #15613: [Discussion] 1.5.1 Patch Release URL: https://github.com/apache/incubator-mxnet/issues/15613#issuecomment-526448615 Hey @TaoLv . Updated the last two: https://github.com/apache/incubator-mxnet/pull/16043 https://github.com/apache/incubator-mxnet/pull/16044
[GitHub] [incubator-mxnet] KellenSunderland opened a new pull request #16044: [v1.5.x] Update TRT tutorial with new APIs
KellenSunderland opened a new pull request #16044: [v1.5.x] Update TRT tutorial with new APIs URL: https://github.com/apache/incubator-mxnet/pull/16044

## Description ##
Update the tutorial with new targets for the 1.5 release, describe the new API, and remove the future-work sections.

## Checklist ##
### Essentials ###
Please feel free to remove inapplicable items for your PR.
- [x] Changes are complete (i.e. I finished coding on this PR)
- [x] All changes have test coverage:
  - Unit tests are added for small changes to verify correctness (e.g. adding a new operator)
  - Nightly tests are added for complicated/long-running ones (e.g. changing distributed kvstore)
  - Build tests will be added for build configuration changes (e.g. adding a new build option with NCCL)
- [x] Code is well-documented:
  - For user-facing API changes, the API doc string has been updated.
  - For new C++ functions in header files, their functionality and arguments are documented.
  - For new examples, a README.md is added to explain what the example does, the source of the dataset, expected performance on the test set, and a reference to the original paper if applicable
  - Check the API doc at http://mxnet-ci-doc.s3-accelerate.dualstack.amazonaws.com/PR-$PR_ID/$BUILD_ID/index.html
- [x] To my best knowledge, examples are either not affected by this change, or have been fixed to be compatible with this change

## Notes ##
This is a cherry-pick of https://github.com/apache/incubator-mxnet/pull/14860
[GitHub] [incubator-mxnet] KellenSunderland commented on a change in pull request #14860: Update TRT tutorial with new APIs
KellenSunderland commented on a change in pull request #14860: Update TRT tutorial with new APIs URL: https://github.com/apache/incubator-mxnet/pull/14860#discussion_r319351615

## File path: docs/tutorials/tensorrt/inference_with_trt.md ##

@@ -83,26 +76,23 @@
 end = time.time()
 print(time.process_time() - start)
 ```
-For this experiment we are strictly interested in inference performance, so to simplify the benchmark we'll pass a tensor filled with zeros as an input. We then bind a symbol as usual, returning a normal MXNet executor, and we run forward on this executor in a loop. To help improve the accuracy of our benchmarks we run a small number of predictions as a warmup before running our timed loop. This will ensure various lazy operations, which do not represent real-world usage, have completed before we measure relative performance improvement. On a modern PC with a Titan V GPU the time taken for our MXNet baseline is **33.73s**. Next we'll run the same model with TensorRT enabled, and see how the performance compares.
-
-While TensorRT integration remains experimental, we require users to set an environment variable to enable graph compilation. You can see that at the start of this test we explicitly disabled TensorRT graph compilation support. Next, we will run the same predictions using TensorRT. This will require us to explicitly enable the MXNET_USE_TENSORRT environment variable, and we'll also use a slightly different API to bind our symbol.
+We are interested in inference performance, so to simplify the benchmark we'll pass a tensor filled with zeros as an input. We bind a symbol as usual, returning an MXNet executor, and we run forward on this executor in a loop. To help improve the accuracy of our benchmarks we run a small number of predictions as a warmup before running our timed loop. On a modern PC with an RTX 2070 GPU the time taken for our MXNet baseline is **17.20s**.
Next we'll run the same model with TensorRT enabled, and see how the performance compares.

## MXNet with TensorRT Integration Performance

```python
# Execute with TensorRT
print('Building TensorRT engine')
-os.environ['MXNET_USE_TENSORRT'] = '1'
-arg_params.update(aux_params)
-all_params = dict([(k, v.as_in_context(mx.gpu(0))) for k, v in arg_params.items()])
-executor = mx.contrib.tensorrt.tensorrt_bind(sym, ctx=mx.gpu(0), all_params=all_params,
-                                             data=batch_shape, grad_req='null', force_rebind=True)
+trt_sym = sym.get_backend_symbol('TensorRT')
+mx.contrib.tensorrt.init_tensorrt_params(trt_sym, arg_params, aux_params)
```

Review comment: Updated.
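The warmup-then-timed-loop pattern the tutorial describes can be sketched without MXNet at all. This is a minimal illustration, not the tutorial's actual code: `benchmark` and the lambda predictor are hypothetical stand-ins for the bound executor's forward call.

```python
import time

def benchmark(predict, batch, n_warmup=5, n_timed=50):
    # Warmup: run a few untimed predictions so lazy, one-time
    # operations complete before we start measuring.
    for _ in range(n_warmup):
        predict(batch)
    # Timed loop: measure only steady-state inference.
    start = time.perf_counter()
    for _ in range(n_timed):
        predict(batch)
    return time.perf_counter() - start

# Stand-in predictor over a zero-filled input, mirroring the
# tutorial's zero-tensor benchmark input.
batch = [0.0] * 16
elapsed = benchmark(lambda b: [x + 1.0 for x in b], batch)
print(f"timed loop: {elapsed:.6f}s")
```

The key design point is that the warmup iterations are excluded from the measured interval, so caching and graph-compilation costs don't distort the comparison between backends.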
[GitHub] [incubator-mxnet] KellenSunderland commented on issue #14860: Update TRT tutorial with new APIs
KellenSunderland commented on issue #14860: Update TRT tutorial with new APIs URL: https://github.com/apache/incubator-mxnet/pull/14860#issuecomment-526449459 Should be ok to go now, would appreciate a review @aaronmarkham.
[GitHub] [incubator-mxnet] ZhennanQin commented on issue #16042: Error when calling get_backend_symbol
ZhennanQin commented on issue #16042: Error when calling get_backend_symbol URL: https://github.com/apache/incubator-mxnet/issues/16042#issuecomment-526449028 TensorRT integration isn't built by default. To enable this backend, you need to build from source with `USE_TENSORRT=1`.
[GitHub] [incubator-mxnet] KellenSunderland commented on issue #15613: [Discussion] 1.5.1 Patch Release
KellenSunderland commented on issue #15613: [Discussion] 1.5.1 Patch Release URL: https://github.com/apache/incubator-mxnet/issues/15613#issuecomment-526448615 Added: https://github.com/apache/incubator-mxnet/pull/16043
[GitHub] [incubator-mxnet] KellenSunderland opened a new pull request #16043: add deconv in TRT subgraph (#15666)
KellenSunderland opened a new pull request #16043: add deconv in TRT subgraph (#15666) URL: https://github.com/apache/incubator-mxnet/pull/16043

## Description ##
This PR adds the deconv layer to the TRT subgraph.

## Checklist ##
### Essentials ###
- [x] Changes are complete (i.e. I finished coding on this PR)
- [x] All changes have test coverage:
  - added a unit test in tests/python/tensorrt/test_tensorrt_deconv
- [x] Code is well-documented:

### Changes ###
- [x] added deconv in subgraph conversion

## Comments ##
- If this change is a backward incompatible change, why must this change be made.
- Interesting edge cases to note here
[GitHub] [incubator-mxnet] KellenSunderland commented on issue #15690: Deadlock when using trivial Gluon Dataset
KellenSunderland commented on issue #15690: Deadlock when using trivial Gluon Dataset URL: https://github.com/apache/incubator-mxnet/issues/15690#issuecomment-526446930 Also able to get this to work with threading. Preference is to use multi-processing.
[incubator-mxnet] branch master updated (36455b2 -> 36ac85a)
This is an automated email from the ASF dual-hosted git repository. zhasheng pushed a change to branch master in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git.

from 36455b2  Add RROIAlign (#16017)
add  35d943e  Updates git_init Jenkins utility function to support checking out a particular commit id
add  1597498  Adds artifact repository scripts
add  87207c5  Adds CD pipeline framework
add  b73b8d4  Adds static libmxnet release pipeline
add  0ed97f1  Updates CD pipeline
add  5fe1516  Adds documentation
add  1196c15  Updates kvstore functions to use pushd and popd
add  23a7a58  Throws exceptions instead of magic numbers
add  e539370  Updates artifact repository cli to use --libtype instead of --static or --dynamic
add  fff8c82  Clarifies ci_utils and cd_utils origin remark
add  0570892  Adds clarifying note on why ubuntu 14.04 is being used for compilation
add  2b12c59  Removes MXNET_SHA
add  a8c0fe8  Removes set_release_job_name
add  5cb26fd  Adds license headers
add  98cdf30  Updates artifact repository to expect licenses
add  3027296  Moves ci/cd to cd directory
add  f6d0fc2  Takes downstream job name from environment
add  8241c52  Updates order of parameters
add  749492f  Updates job type parameter to dropdown
add  759e76e  Adds libmxnet feature extraction code comments
add  36ac85a  Removes ccache setup from static build

No new revisions were added by this update.
Summary of changes:
 cd/Jenkinsfile_cd_pipeline                    |  62 +++
 cd/Jenkinsfile_release_job                    |  99 
 cd/Jenkinsfile_utils.groovy                   | 101 
 cd/README.md                                  | 181 ++
 cd/mxnet_lib/mxnet_lib_pipeline.groovy        | 168 ++
 cd/mxnet_lib/static/Jenkins_pipeline.groovy   |  59 ++
 cd/utils/artifact_repository.md               | 105 
 cd/utils/artifact_repository.py               | 619 +
 cd/utils/requirements.txt                     |   2 +
 cd/utils/test_artifact_repository.py          | 530 ++
 ci/Jenkinsfile_utils.groovy                   |   7 +-
 ci/docker/runtime_functions.sh                |  65 +++
 ci/jenkins/Jenkins_steps.groovy               |  11 +
 .../{Jenkinsfile_sanity => Jenkinsfile_tools} |   7 +-
 14 files changed, 2010 insertions(+), 6 deletions(-)
 create mode 100644 cd/Jenkinsfile_cd_pipeline
 create mode 100644 cd/Jenkinsfile_release_job
 create mode 100644 cd/Jenkinsfile_utils.groovy
 create mode 100644 cd/README.md
 create mode 100644 cd/mxnet_lib/mxnet_lib_pipeline.groovy
 create mode 100644 cd/mxnet_lib/static/Jenkins_pipeline.groovy
 create mode 100644 cd/utils/artifact_repository.md
 create mode 100755 cd/utils/artifact_repository.py
 create mode 100644 cd/utils/requirements.txt
 create mode 100644 cd/utils/test_artifact_repository.py
 copy ci/jenkins/{Jenkinsfile_sanity => Jenkinsfile_tools} (92%)
[GitHub] [incubator-mxnet] szha merged pull request #15051: CD Framework + static binary release
szha merged pull request #15051: CD Framework + static binary release URL: https://github.com/apache/incubator-mxnet/pull/15051
[GitHub] [incubator-mxnet] szha commented on issue #15051: CD Framework + static binary release
szha commented on issue #15051: CD Framework + static binary release URL: https://github.com/apache/incubator-mxnet/pull/15051#issuecomment-526435550 Approved for now. The concern around access control will be visited separately.
[GitHub] [incubator-mxnet] haojin2 commented on issue #16014: NumPy-compatible Mean upstream
haojin2 commented on issue #16014: NumPy-compatible Mean upstream URL: https://github.com/apache/incubator-mxnet/pull/16014#issuecomment-526435323 @marcoabreu gentle ping for feedback on disabling TEST_COVERAGE build on Clang 3.9 MKLDNN build.
[GitHub] [incubator-mxnet] zixuanweeei commented on issue #16037: LSTM with MKL-DNN produces wrong output after weights are changed
zixuanweeei commented on issue #16037: LSTM with MKL-DNN produces wrong output after weights are changed URL: https://github.com/apache/incubator-mxnet/issues/16037#issuecomment-526434063 @ZhennanQin Sure. Just as you said, it is caused by the stateful RNN op not checking the weights again after they have been initialized in the MKL-DNN memory format during inference.
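The failure mode described here — a weight cache that is filled once and never re-checked — can be illustrated with a small Python sketch. `CachedWeightOp` and its `recheck_weights` flag are invented for illustration only; this is not MXNet's RNN op or MKL-DNN's actual caching logic.

```python
class CachedWeightOp:
    # Toy op that converts its weights to an internal format once and
    # caches the result, loosely analogous to a stateful op caching the
    # MKL-DNN weight layout on first inference.
    def __init__(self, weight):
        self.weight = weight
        self._cached = None  # "converted" weights, filled on first forward

    def forward(self, x, recheck_weights=False):
        if self._cached is None or recheck_weights:
            self._cached = list(self.weight)  # fake layout conversion
        return sum(w * xi for w, xi in zip(self._cached, x))

op = CachedWeightOp([1.0, 2.0])
first = op.forward([1.0, 1.0])                        # caches weights
op.weight = [10.0, 20.0]                              # user updates weights
stale = op.forward([1.0, 1.0])                        # bug: still uses cache
fixed = op.forward([1.0, 1.0], recheck_weights=True)  # re-converts weights
```

Here `stale` equals `first` even though the weights changed — the same symptom as the reported issue — while forcing a re-check produces the correct result.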
[incubator-mxnet] branch v1.5.x updated: Fix _copy_to on MKLDNN backend (#15637) (#15803)
This is an automated email from the ASF dual-hosted git repository. patriczhao pushed a commit to branch v1.5.x in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git

The following commit(s) were added to refs/heads/v1.5.x by this push:
     new ac920d3  Fix _copy_to on MKLDNN backend (#15637) (#15803)

ac920d3 is described below

commit ac920d310de0798befbe1dbb9f986624f4ab5945
Author: Shufan <33112206+juliusshu...@users.noreply.github.com>
AuthorDate: Fri Aug 30 10:32:22 2019 +0800

    Fix _copy_to on MKLDNN backend (#15637) (#15803)

    * Fix _copy_to

    * Add comment
---
 src/imperative/imperative_utils.h | 34 +++---
 1 file changed, 31 insertions(+), 3 deletions(-)

diff --git a/src/imperative/imperative_utils.h b/src/imperative/imperative_utils.h
index 4e63e4d..067bb2e 100644
--- a/src/imperative/imperative_utils.h
+++ b/src/imperative/imperative_utils.h
@@ -419,7 +419,14 @@ inline void PushFCompute(const FCompute& fn,
   // mapping from index in input_blobs to index in pre_temp_dst
   std::unordered_map in_temp_idx_map;
 #if MXNET_USE_MKLDNN == 1
-  InvalidateOutputs(outputs, req);
+  if (exec_type != ExecType::kCrossDeviceCopy) {
+    // kCrossDeviceCopy is used for `_copy_to` operator, which doesn't compute immediately in
+    // its FCcomputeEx, but AsyncPush the copy operation to engine.
+    // So for the case that A is holding mkldnn memory, and then copy A to B, and then copy B
+    // back to A, we shouldn't invalidate outputs for copying B back to A, because at this time,
+    // copying A to B may not happen, and will corrupt A's memory.
+    InvalidateOutputs(outputs, req);
+  }
 #endif
   std::vector tmp_req = req;
   // setup blobs
@@ -461,7 +468,14 @@ inline void PushFComputeEx(const FComputeEx& fn,
   const auto& run = [=](RunContext rctx) {
     OpContext opctx{need_grad, is_train, rctx, engine::CallbackOnComplete(), requested};
 #if MXNET_USE_MKLDNN == 1
-    InvalidateOutputs(outputs, req);
+    if (exec_type != ExecType::kCrossDeviceCopy) {
+      // kCrossDeviceCopy is used for `_copy_to` operator, which doesn't compute immediately in
+      // its FCcomputeEx, but AsyncPush the copy operation to engine.
+      // So for the case that A is holding mkldnn memory, and then copy A to B, and then copy B
+      // back to A, we shouldn't invalidate outputs for copying B back to A, because at this time,
+      // copying A to B may not happen, and will corrupt A's memory.
+      InvalidateOutputs(outputs, req);
+    }
 #endif
     fn(attrs, opctx, inputs, req, outputs);
     if (ctx.dev_mask() == gpu::kDevMask && exec_type == ExecType::kSync) {
@@ -508,7 +522,14 @@ inline void PushOperator(const OpStatePtr& state,
                          engine::CallbackOnComplete on_complete) {
     OpContext opctx{need_grad, is_train, rctx, on_complete, requested};
 #if MXNET_USE_MKLDNN == 1
-    InvalidateOutputs(outputs, req);
+    if (exec_type != ExecType::kCrossDeviceCopy) {
+      // kCrossDeviceCopy is used for `_copy_to` operator, which doesn't compute immediately in
+      // its FCcomputeEx, but AsyncPush the copy operation to engine.
+      // So for the case that A is holding mkldnn memory, and then copy A to B, and then copy B
+      // back to A, we shouldn't invalidate outputs for copying B back to A, because at this time,
+      // copying A to B may not happen, and will corrupt A's memory.
+      InvalidateOutputs(outputs, req);
+    }
 #endif
     fcompute_ex(state, opctx, inputs, req, outputs);
     if (ctx.dev_mask() == gpu::kDevMask && exec_type == ExecType::kSync
@@ -547,7 +568,14 @@ inline void PushOperator(const OpStatePtr& state,
       // mapping from index in input_blobs to index in pre_temp_dst
       std::unordered_map in_temp_idx_map;
 #if MXNET_USE_MKLDNN == 1
+      if (exec_type != ExecType::kCrossDeviceCopy) {
+        // kCrossDeviceCopy is used for `_copy_to` operator, which doesn't compute immediately in
+        // its FCcomputeEx, but AsyncPush the copy operation to engine.
+        // So for the case that A is holding mkldnn memory, and then copy A to B, and then copy B
+        // back to A, we shouldn't invalidate outputs for copying B back to A, because at this time,
+        // copying A to B may not happen, and will corrupt A's memory.
       InvalidateOutputs(outputs, req);
+      }
 #endif
       std::vector tmp_req = req;
       // populate input blobs and output blobs
[GitHub] [incubator-mxnet] pengzhao-intel merged pull request #15803: [v1.5.x] Fix _copy_to on MKLDNN backend (#15637)
pengzhao-intel merged pull request #15803: [v1.5.x] Fix _copy_to on MKLDNN backend (#15637) URL: https://github.com/apache/incubator-mxnet/pull/15803
[GitHub] [incubator-mxnet] pengzhao-intel commented on issue #15803: [v1.5.x] Fix _copy_to on MKLDNN backend (#15637)
pengzhao-intel commented on issue #15803: [v1.5.x] Fix _copy_to on MKLDNN backend (#15637) URL: https://github.com/apache/incubator-mxnet/pull/15803#issuecomment-526431691 merging now.
[GitHub] [incubator-mxnet] Zheweiqiu opened a new issue #16042: Error when calling get_backend_symbol
Zheweiqiu opened a new issue #16042: Error when calling get_backend_symbol URL: https://github.com/apache/incubator-mxnet/issues/16042

I am using the following code to speed up inference with TensorRT but got some errors:

```
sym, arg_params, aux_params = mx.model.load_checkpoint(prefix, epoch)
trt_sym = sym.get_backend_symbol('TensorRT')
mx.contrib.tensorrt.init_tensorrt_params(trt_sym, arg_params, aux_params)
mx.contrib.tensorrt.set_use_fp16(False)
self.model = trt_sym.simple_bind(ctx=self.ctx, data=(1, 3, image_size[0], image_size[1]), grad_req='null', force_rebind=True)
self.model.copy_params_from(arg_params, aux_params)
```

File "/home/qiuzhewei/RetinaFace/retinaface.py", line 224, in __init__
  trt_sym = sym.get_backend_symbol('TensorRT')
File "/opt/conda/lib/python3.7/site-packages/mxnet/symbol/symbol.py", line 2564, in get_backend_symbol
  check_call(_LIB.MXGenBackendSubgraph(self.handle, c_str(backend), ctypes.byref(out)))
File "/opt/conda/lib/python3.7/site-packages/mxnet/base.py", line 253, in check_call
  raise MXNetError(py_str(_LIB.MXGetLastError()))
mxnet.base.MXNetError: [09:39:00] src/c_api/…/operator/subgraph/subgraph_property.h:367: Check failed: it != prop_ptr_map.end(): SubgraphProperty TensorRT is not found in SubgraphPropertyRegistry
Stack trace:
[bt] (0) /opt/conda/lib/python3.7/site-packages/mxnet/libmxnet.so(+0x4a357b) [0x7f683cbe657b]
[bt] (1) /opt/conda/lib/python3.7/site-packages/mxnet/libmxnet.so(MXGenBackendSubgraph+0x1ab) [0x7f683ed5a5db]
[bt] (2) /opt/conda/lib/python3.7/lib-dynload/…/…/libffi.so.6(ffi_call_unix64+0x4c) [0x7f6876a4aec0]
[bt] (3) /opt/conda/lib/python3.7/lib-dynload/…/…/libffi.so.6(ffi_call+0x22d) [0x7f6876a4a87d]
[bt] (4) /opt/conda/lib/python3.7/lib-dynload/_ctypes.cpython-37m-x86_64-linux-gnu.so(_ctypes_callproc+0x2ce) [0x7f6876ec9f7e]
[bt] (5) /opt/conda/lib/python3.7/lib-dynload/_ctypes.cpython-37m-x86_64-linux-gnu.so(+0x139b4) [0x7f6876eca9b4]
[bt] (6) python(_PyObject_FastCallKeywords+0x49b) [0x55b2e1d13d2b]
[bt] (7) python(_PyEval_EvalFrameDefault+0x537e) [0x55b2e1d6f7ae]
[bt] (8) python(_PyFunction_FastCallKeywords+0xfb) [0x55b2e1d1279b]

The code is basically from /tests/python/tensorrt/test_resnet18.py. I have no idea how to solve it, and I couldn't find a solution online. Any help will be appreciated!
[GitHub] [incubator-mxnet] juliusshufan commented on issue #15803: [v1.5.x] Fix _copy_to on MKLDNN backend (#15637)
juliusshufan commented on issue #15803: [v1.5.x] Fix _copy_to on MKLDNN backend (#15637) URL: https://github.com/apache/incubator-mxnet/pull/15803#issuecomment-526430715 @TaoLv CI passed. Thanks.
[GitHub] [incubator-mxnet] ZhennanQin commented on issue #16037: LSTM with MKL-DNN produces wrong output after weights are changed
ZhennanQin commented on issue #16037: LSTM with MKL-DNN produces wrong output after weights are changed URL: https://github.com/apache/incubator-mxnet/issues/16037#issuecomment-526427358 @zixuanweeei Would you please take a look at this?
[GitHub] [incubator-mxnet] mseth10 commented on a change in pull request #15886: Graph Partition API
mseth10 commented on a change in pull request #15886: Graph Partition API URL: https://github.com/apache/incubator-mxnet/pull/15886#discussion_r319333297

## File path: python/mxnet/symbol/symbol.py ##

@@ -1437,6 +1437,54 @@ def _gen_atomic_symbol(self):
         return Symbol(handle)
+    def optimize_for(self, backend, args=None, **kwargs):

Review comment: I agree with you on having support for passing shape/type/stype dicts, just like simple_bind. I will add that support in a later PR. Since we accept kwargs now, it would not be a breaking API change. I understand your concern about having a more explicit API, but I prefer keeping it this way for the following reason: users are expected to know what arguments are required by the backend. The backend errors out if the required arguments are not provided, whether or not we have an explicit flag. At this point, it is not the user's choice to skip inferring shape/type/stype when the backend requires it, or vice versa.
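The design point being defended here — the backend, not an explicit front-end flag, decides which kwargs are required — can be sketched in plain Python. `BACKEND_REQUIRED` and this standalone `optimize_for` are assumptions for illustration, not the PR's actual implementation:

```python
# Hypothetical table of per-backend required options (illustrative names).
BACKEND_REQUIRED = {"TensorRT": ("data",), "default": ()}

def optimize_for(backend, args=None, **kwargs):
    # The front-end simply forwards kwargs; the backend declares which
    # options it needs and errors out when one is missing.
    required = BACKEND_REQUIRED.get(backend)
    if required is None:
        raise ValueError(f"backend {backend} is not registered")
    missing = [name for name in required if name not in kwargs]
    if missing:
        raise ValueError(f"backend {backend} requires arguments: {missing}")
    return {"backend": backend, "args": args, "options": kwargs}

result = optimize_for("TensorRT", data=(1, 3, 224, 224))
```

Because options travel through `**kwargs`, adding shape/type/stype dicts later extends the accepted keys without breaking existing call sites — the non-breaking-change argument made in the comment.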
[GitHub] [incubator-mxnet] pengzhao-intel commented on issue #16012: [mkldnn-v1.0] Update MKL-DNN to v1.0.2
pengzhao-intel commented on issue #16012: [mkldnn-v1.0] Update MKL-DNN to v1.0.2 URL: https://github.com/apache/incubator-mxnet/pull/16012#issuecomment-526422311 @rongzha1 could you work together with Tao to merge more MKL-DNN 1.0 ops into this branch?
[GitHub] [incubator-mxnet] stu1130 commented on a change in pull request #15981: Disable test coverage of C++ codebase on CI
stu1130 commented on a change in pull request #15981: Disable test coverage of C++ codebase on CI URL: https://github.com/apache/incubator-mxnet/pull/15981#discussion_r319329299

## File path: include/mxnet/tuple.h ##

@@ -199,7 +199,11 @@ class Tuple {
   * \return the corresponding dimension size
   */
  inline ValueType& operator[](int i) {
-    CHECK(i >= 0 && i < ndim()) << "index = " << i << " must be in range [0, " << ndim() << ")";
+    // it fixes the false alarm of assuming signed overflow does not occur when assuming that (X - c) > X is always false [-Werror=strict-overflow]
+#pragma GCC diagnostic push
+#pragma GCC diagnostic ignored "-Wstrict-overflow"
+    CHECK(i >= 0 && i < ndim()) << "index = " << i << " must be in range [0, " << ndim() << ")";
+#pragma GCC diagnostic pop
     return begin()[i];
  }
  /*!

Review comment: done
[GitHub] [incubator-mxnet] samskalicky commented on issue #15921: [WIP] dynamic custom operator support
samskalicky commented on issue #15921: [WIP] dynamic custom operator support URL: https://github.com/apache/incubator-mxnet/pull/15921#issuecomment-526418617 I think that forcing namespacing hurts the ease of use. Plus, if we internally set lib1.opA then users won't know whether to call mx.nd.lib1.opA or mx.nd.lib1_opA (whatever the naming scheme is). If we give them a way to set the namespace name, it's still not as easy to use as mx.nd.opA. Plus we want users to be able to override internal MXNet ops. For example, mx.nd.Convolution does not support FP16 on CPU, so if a user had a need for that they could write their own. So we need ops to be overridable, and to live in the mx.nd namespace. I'm thinking the best solution might be to track the number of times an operator is registered and just increment the pvalue for now.
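The registration semantics under discussion (override flag, optional namespace, duplicate-registration error) can be sketched as a toy registry. `OpRegistry` is hypothetical and only mirrors the behavior proposed earlier in the thread, including its error message; it is not MXNet's registration code.

```python
class OpRegistry:
    # Toy registry illustrating the proposed override/namespace rules.
    def __init__(self):
        self._ops = {}

    def register(self, name, fn, override=False, namespace=None):
        # A namespace keeps a custom op from colliding with a built-in one.
        key = f"{namespace}.{name}" if namespace else name
        if key in self._ops and not override:
            raise ValueError(f"The operator {key} has been registered. Please rename it")
        self._ops[key] = fn
        return key

reg = OpRegistry()
reg.register("Convolution", lambda x: x)                        # built-in op
reg.register("Convolution", lambda x: 2 * x, override=True)     # custom override
reg.register("Convolution", lambda x: 3 * x, namespace="myop")  # myop.Convolution
```

With this shape, both positions in the thread coexist: an override replaces the op under its original name (so `mx.nd.Convolution` stays easy to call), while a namespace keeps the built-in intact.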
[GitHub] [incubator-mxnet] ZHAIXINGZHAIYUE commented on issue #16036: ndarray error
ZHAIXINGZHAIYUE commented on issue #16036: ndarray error URL: https://github.com/apache/incubator-mxnet/issues/16036#issuecomment-526415394 @apeforest thank you.
[GitHub] [incubator-mxnet] ZHAIXINGZHAIYUE closed issue #16036: ndarray error
ZHAIXINGZHAIYUE closed issue #16036: ndarray error URL: https://github.com/apache/incubator-mxnet/issues/16036
[GitHub] [incubator-mxnet] wkcn commented on issue #15678: [MXNET-1418]Add contrib op into cpp package
wkcn commented on issue #15678: [MXNET-1418]Add contrib op into cpp package URL: https://github.com/apache/incubator-mxnet/pull/15678#issuecomment-526415353 I have updated dmlc-core, but the CI failed in the MXNet Scala binding.
[GitHub] [incubator-mxnet] anirudh2290 commented on a change in pull request #15981: Disable test coverage of C++ codebase on CI
anirudh2290 commented on a change in pull request #15981: Disable test coverage of C++ codebase on CI URL: https://github.com/apache/incubator-mxnet/pull/15981#discussion_r319325786 ## File path: include/mxnet/tuple.h ## @@ -199,7 +199,11 @@ class Tuple { * \return the corresponding dimension size */ inline ValueType& operator[](int i) { -CHECK(i >= 0 && i < ndim()) << "index = " << i << " must be in range [0, " << ndim() << ")"; +// it fixes the false alarm of assuming signed overflow does not occur when assuming that (X - c) > X is always false [-Werror=strict-overflow] +#pragma GCC diagnostic push +#pragma GCC diagnostic ignored "-Wstrict-overflow" + CHECK(i >= 0 && i < ndim()) << "index = " << i << " must be in range [0, " << ndim() << ")"; +#pragma GCC diagnostic pop return begin()[i]; } /*! Review comment: we can add it for the function below too.
[GitHub] [incubator-mxnet] wkcn commented on issue #15921: [WIP] dynamic custom operator support
wkcn commented on issue #15921: [WIP] dynamic custom operator support URL: https://github.com/apache/incubator-mxnet/pull/15921#issuecomment-526413206 Hi @rondogency, do you have any idea?
[GitHub] [incubator-mxnet] pengzhao-intel merged pull request #16017: Add RROIAlign
pengzhao-intel merged pull request #16017: Add RROIAlign URL: https://github.com/apache/incubator-mxnet/pull/16017
[incubator-mxnet] branch master updated (5d0d335 -> 36455b2)
This is an automated email from the ASF dual-hosted git repository. patriczhao pushed a change to branch master in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git. from 5d0d335 Update README.md (#16035) add 36455b2 Add RROIAlign (#16017) No new revisions were added by this update. Summary of changes: .../contrib/{roi_align-inl.h => rroi_align-inl.h} | 53 ++-- src/operator/contrib/rroi_align.cc | 326 + tests/python/unittest/test_operator.py | 137 + 3 files changed, 488 insertions(+), 28 deletions(-) copy src/operator/contrib/{roi_align-inl.h => rroi_align-inl.h} (53%) create mode 100644 src/operator/contrib/rroi_align.cc
[GitHub] [incubator-mxnet] ZhennanQin commented on issue #16037: LSTM with MKL-DNN produces wrong output after weights are changed
ZhennanQin commented on issue #16037: LSTM with MKL-DNN produces wrong output after weights are changed URL: https://github.com/apache/incubator-mxnet/issues/16037#issuecomment-526412366 Probably it's because the stateful RNN op doesn't check whether the weights have changed. We will look into this. @pengzhao-intel
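The suspected failure mode can be illustrated with a minimal stand-in (this is not the actual MKL-DNN code, just a sketch of the caching pattern): a stateful op that caches a compiled primitive must invalidate that cache when the weights change, otherwise it keeps returning results computed from the old weights.

```python
class StatefulOp:
    """Minimal sketch of weight-change detection in a stateful op (illustration only)."""

    def __init__(self):
        self._weights_key = None
        self._primitive = None  # stand-in for a cached MKL-DNN primitive

    def forward(self, weights):
        key = tuple(weights)  # cheap change check; real code might compare a version counter
        if key != self._weights_key:
            # Weights changed: rebuild the cached primitive instead of reusing stale state.
            self._weights_key = key
            self._primitive = sum(weights)  # stand-in for primitive re-initialization
        return self._primitive

op = StatefulOp()
out1 = op.forward([1.0, 2.0])  # builds the primitive
out2 = op.forward([2.0, 3.0])  # must rebuild, not reuse the old result
```

Skipping the `key != self._weights_key` check is exactly the bug described in the issue: the second call would return the first call's output.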
[incubator-mxnet] 01/01: [DOC] Consistent capitalization: mxnet -> MXNet, scala -> Scala
This is an automated email from the ASF dual-hosted git repository. terrytangyuan pushed a commit to branch terrytangyuan-patch-1 in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git commit d24c7b7f0bb1bc52250b91274ba7b43fba03b562 Author: Yuan Tang AuthorDate: Thu Aug 29 20:46:40 2019 -0400 [DOC] Consistent capitalization: mxnet -> MXNet, scala -> Scala --- CONTRIBUTORS.md | 26 +- 1 file changed, 13 insertions(+), 13 deletions(-) diff --git a/CONTRIBUTORS.md b/CONTRIBUTORS.md index 86c20cb..8eb540d 100644 --- a/CONTRIBUTORS.md +++ b/CONTRIBUTORS.md @@ -26,37 +26,37 @@ Committers are people who have made substantial contribution to the project and The committers are the granted write access to the project. * [Bing Xu](https://github.com/antinucleon) - - Bing is the initiator and major contributor of operators and ndarray modules of mxnet. + - Bing is the initiator and major contributor of operators and ndarray modules of MXNet. * [Tianjun Xiao](https://github.com/sneakerkg) - Tianqjun is the master behind the fast data loading and preprocessing. * [Yutian Li](https://github.com/hotpxl) - - Yutian is the ninja behind the dependency and storage engine of mxnet. + - Yutian is the ninja behind the dependency and storage engine of MXNet. * [Mu Li](https://github.com/mli) - - Mu is the contributor of the distributed key-value store in mxnet. + - Mu is the contributor of the distributed key-value store in MXNet. * [Tianqi Chen](https://github.com/tqchen) - - Tianqi is one of the initiator of the mxnet project. + - Tianqi is one of the initiator of the MXNet project. * [Min Lin](https://github.com/mavenlin) - - Min is the guy behind the symbolic magics of mxnet. + - Min is the guy behind the symbolic magics of MXNet. * [Naiyan Wang](https://github.com/winstywang) - - Naiyan is the creator of static symbolic graph module of mxnet. + - Naiyan is the creator of static symbolic graph module of MXNet. 
* [Mingjie Wang](https://github.com/jermainewang) - Mingjie is the initiator, and contributes the design of the dependency engine. * [Chuntao Hong](https://github.com/hjk41) - Chuntao is the initiator and provides the initial design of engine. * [Chiyuan Zhang](https://github.com/pluskid) - - Chiyuan is the creator of MXNet Julia Package. + - Chiyuan is the creator of MXNet Julia package. * [Junyuan Xie](https://github.com/piiswrong) * [Haibin Lin](https://github.com/eric-haibin-lin) * [Qiang Kou](https://github.com/thirdwing) - KK is a R ninja, he makes mxnet available for R users. * [Tong He](https://github.com/hetong007) - - Tong is the major maintainer of MXNetR, he designs the mxnet interface and wrote many of the tutorials on R. + - Tong is the major maintainer of MXNet R package, he designs the MXNet interface and wrote many of the tutorials on R. * [Yizhi Liu](https://github.com/yzhliu) - Yizhi is the main creator on mxnet scala project to make deep learning available for JVM stacks. * [Zixuan Huang](https://github.com/yanqingmen) - - Zixuan is one of major maintainers of mxnet scala package. + - Zixuan is one of major maintainers of MXNet Scala package. * [Yuan Tang](https://github.com/terrytangyuan) - - Yuan is one of major maintainers of mxnet scala package. + - Yuan is one of major maintainers of MXNet Scala package. * [Chris Olivier](https://github.com/cjolivier01) * [Sergey Kolychev](https://github.com/sergeykolychev) - Sergey is original author and current maintainer of Perl5 interface. @@ -89,9 +89,9 @@ List of Contributors * [Full List of Contributors](https://github.com/apache/incubator-mxnet/graphs/contributors) - To contributors: please add your name to the list when you submit a patch to the project:) * [Feng Wang](https://github.com/happynear) - - Feng makes mxnet compatible with Windows Visual Studio. + - Feng makes MXNet compatible with Windows Visual Studio. 
* [Jack Deng](https://github.com/jdeng) - - Jack created the amalgamation script and Go bind for mxnet. + - Jack created the amalgamation script and Go bind for MXNet. * [Li Dong](https://github.com/donglixp) * [Piji Li](https://github.com/lipiji) * [Hu Shiwen](https://github.com/yajiedesign) @@ -103,7 +103,7 @@ List of Contributors * [Nan Xiao](https://github.com/road2stat) * [Wei Wu](https://github.com/tornadomeet) * [Michaël Benesty](https://github.com/pommedeterresautee) - -Michaël contributes the R visualization module of mxnet + -Michaël contributes the R visualization module of MXNet * [Kublai Jing](https://github.com/Kublai-Jing) * [chenjx1005](https://github.com/chenjx1005) * [ry](https://github.com/ry)
[incubator-mxnet] branch terrytangyuan-patch-1 created (now d24c7b7)
This is an automated email from the ASF dual-hosted git repository. terrytangyuan pushed a change to branch terrytangyuan-patch-1 in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git. at d24c7b7 [DOC] Consistent capitalization: mxnet -> MXNet, scala -> Scala This branch includes the following new commits: new d24c7b7 [DOC] Consistent capitalization: mxnet -> MXNet, scala -> Scala The 1 revisions listed above as "new" are entirely new to this repository and will be described in separate emails. The revisions listed as "add" were already present in the repository and have only been added to this reference.
[GitHub] [incubator-mxnet] terrytangyuan opened a new pull request #16041: [DOC] Consistent capitalization: mxnet -> MXNet, scala -> Scala
terrytangyuan opened a new pull request #16041: [DOC] Consistent capitalization: mxnet -> MXNet, scala -> Scala URL: https://github.com/apache/incubator-mxnet/pull/16041
[GitHub] [incubator-mxnet] wkcn edited a comment on issue #15921: [WIP] dynamic custom operator support
wkcn edited a comment on issue #15921: [WIP] dynamic custom operator support URL: https://github.com/apache/incubator-mxnet/pull/15921#issuecomment-526407709 @samskalicky I see. Could we add a prefix to avoid name conflicts? For example, Library1 has {opA, opC, opD} and Library2 has {opB, opC, opE}. First, we load lib1; its ops are registered as 1_opA, 1_opC, 1_opD. Then we load lib2, which registers 1_opB, *2_opC*, 1_opE. If we load lib2 again, its ops are named 2_opB, 3_opC, 2_opE. There is no name conflict in nnvm. In the front end, we can map them back to their original names in a specific namespace, e.g. lib1 = mx.library.load('lib1.so', namespace='lib1') op = mx.nd.lib1.opA(...) op2 = lib1.nd.opA(...) mx.library.load('lib1.so', namespace='lib1.nn') mx.nd.lib1.nn.opA(...) mx.library.load('lib1.so') # no namespace mx.nd.opA(...) mx.library.load('lib1.so') # raises an exception since mx.nd.opA has been registered. I prefer 'mx.nd.lib1.opA' because users may use the CustomOp in Gluon and call it via F.lib1.opA.
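The scheme above (count-prefixed backend names, namespaced front-end names) can be sketched in a few lines. All names here are hypothetical illustrations, not the actual loader API:

```python
class Loader:
    """Sketch of prefixed backend registration with front-end namespaces (illustration only)."""

    def __init__(self):
        self.backend = {}   # suffixed backend name -> implementation
        self.frontend = {}  # "namespace.op" -> backend name
        self._counts = {}   # op name -> number of registrations so far

    def load(self, lib_ops, namespace=None):
        for op, impl in lib_ops.items():
            n = self._counts.get(op, 0) + 1
            self._counts[op] = n
            backend_name = "%d_%s" % (n, op)  # e.g. 1_opA, then 2_opC on a clash
            self.backend[backend_name] = impl
            front = "%s.%s" % (namespace, op) if namespace else op
            if front in self.frontend:
                raise ValueError("The operator %s has been registered." % front)
            self.frontend[front] = backend_name

loader = Loader()
loader.load({"opA": None, "opC": None}, namespace="lib1")
loader.load({"opB": None, "opC": None}, namespace="lib2")
# Backend now holds 1_opA, 1_opC, 1_opB, 2_opC; the front end exposes
# lib1.opC and lib2.opC, which map to different backend implementations.
```

The key point of the design: name uniqueness is enforced once in the backend (via the count prefix) and once in the front end (via the namespace), so nnvm never sees a duplicate.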
[GitHub] [incubator-mxnet] anirudh2290 commented on issue #9686: [Discussion] MXNet 2.0 Roadmap (was: APIs that might be a good idea to break in 2.0)
anirudh2290 commented on issue #9686: [Discussion] MXNet 2.0 Roadmap (was: APIs that might be a good idea to break in 2.0) URL: https://github.com/apache/incubator-mxnet/issues/9686#issuecomment-526409252 The `module.bind` API has a param named data_shapes, which is misleading because the param is not limited to shapes: the values are data descriptors and accept DataDesc instances. I think this should be fixed in 2.0.
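A simplified stand-in shows why the name is misleading: the list passed as `data_shapes` carries descriptor objects (name, shape, dtype), not bare shape tuples. The real class is `mx.io.DataDesc`; this sketch mimics it with a plain namedtuple to stay self-contained.

```python
from collections import namedtuple

# Simplified stand-in for mx.io.DataDesc (illustration only).
DataDesc = namedtuple("DataDesc", ["name", "shape", "dtype"])

# What module.bind's data_shapes argument actually carries: descriptors, not shapes.
data_shapes = [DataDesc(name="data", shape=(32, 3, 224, 224), dtype="float32")]

names = [d.name for d in data_shapes]
```

A name like `data_descs` would match the contents; `data_shapes` suggests a list of tuples like `[(32, 3, 224, 224)]`.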
[incubator-mxnet] branch master updated (61f3dbc -> 5d0d335)
This is an automated email from the ASF dual-hosted git repository. wkcn pushed a change to branch master in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git. from 61f3dbc numpy-compatible cumsum upstream (#15924) add 5d0d335 Update README.md (#16035) No new revisions were added by this update. Summary of changes: README.md | 1 + 1 file changed, 1 insertion(+)
[GitHub] [incubator-mxnet] wkcn commented on issue #16035: Update README.md
wkcn commented on issue #16035: Update README.md URL: https://github.com/apache/incubator-mxnet/pull/16035#issuecomment-526408986 Thank you for the update!
[GitHub] [incubator-mxnet] wkcn merged pull request #16035: Update README.md
wkcn merged pull request #16035: Update README.md URL: https://github.com/apache/incubator-mxnet/pull/16035
[GitHub] [incubator-mxnet] anirudhacharya commented on a change in pull request #14942: ONNX export: Slice op - Handle None value for ends
anirudhacharya commented on a change in pull request #14942: ONNX export: Slice op - Handle None value for ends URL: https://github.com/apache/incubator-mxnet/pull/14942#discussion_r319309924 ## File path: python/mxnet/contrib/onnx/mx2onnx/_op_translations.py ## @@ -1499,17 +1500,19 @@ def convert_slice_axis(node, **kwargs): axes = int(attrs.get("axis")) starts = int(attrs.get("begin")) -ends = int(attrs.get("end", None)) -if not ends: -raise ValueError("Slice: ONNX doesnt't support 'None' in 'end' attribute") +ends = attrs.get("end", None) +if not ends or ends == 'None': +# ONNX doesn't support None for ends. Since ends=None depicts Review comment: why should None be mapped to INT_MAX?
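One way to answer the review question in code: ONNX's Slice clamps `ends` values that exceed the axis length, so mapping a `None` end to a very large integer such as INT_MAX yields "slice to the end of the axis" behavior. A hedged sketch of the attribute conversion follows; the function name is illustrative, not the exact exporter code.

```python
INT_MAX = 2**31 - 1  # large sentinel; ONNX Slice clamps out-of-range ends to the axis size

def convert_end_attr(end):
    """Map MXNet's end=None ('slice to the end of the axis') to an ONNX-friendly int."""
    if end is None or end == "None":
        return INT_MAX
    return int(end)

# Python slicing shows the clamping idea: an oversized end just means "to the end".
tail = [0, 1, 2][1:INT_MAX]
```

The same clamping semantics in ONNX make INT_MAX a safe stand-in for "no end bound" regardless of the actual axis length.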
[GitHub] [incubator-mxnet] roywei edited a comment on issue #15613: [Discussion] 1.5.1 Patch Release
roywei edited a comment on issue #15613: [Discussion] 1.5.1 Patch Release URL: https://github.com/apache/incubator-mxnet/issues/15613#issuecomment-516937546 Update: moved this to the 1.6.0 scope. The nightly test failures need to be fixed: https://github.com/apache/incubator-mxnet/issues/15374 http://jenkins.mxnet-ci.amazon-ml.com/blue/organizations/jenkins/NightlyTestsForBinaries/detail/master/395/pipeline/
[GitHub] [incubator-mxnet] roywei commented on issue #15589: [Discussion] 1.6.0 Roadmap
roywei commented on issue #15589: [Discussion] 1.6.0 Roadmap URL: https://github.com/apache/incubator-mxnet/issues/15589#issuecomment-526396099 Moving the nightly-failure fixes from the 1.5.1 scope to 1.6.0, as they are failing on the master branch, not the 1.5.x branch. https://github.com/apache/incubator-mxnet/issues/15613#issuecomment-516937546 > > nightly test failure need to be fixed: > #15374 > > http://jenkins.mxnet-ci.amazon-ml.com/blue/organizations/jenkins/NightlyTestsForBinaries/detail/master/395/pipeline/
[GitHub] [incubator-mxnet] anirudhacharya commented on issue #15994: ONNX import/export: Upsampling
anirudhacharya commented on issue #15994: ONNX import/export: Upsampling URL: https://github.com/apache/incubator-mxnet/pull/15994#issuecomment-526394665 > Depends on #15811 If it depends on the above PR, which is not merged, then how are the tests passing?
[GitHub] [incubator-mxnet] reminisce commented on issue #16014: NumPy-compatible Mean upstream
reminisce commented on issue #16014: NumPy-compatible Mean upstream URL: https://github.com/apache/incubator-mxnet/pull/16014#issuecomment-526394357 @marcoabreu I would like to disable the test coverage build that failed in this PR. Let me know if you are not okay with it. Thanks.
[GitHub] [incubator-mxnet] anirudhacharya commented on issue #15993: Add a contrib operator for Constant
anirudhacharya commented on issue #15993: Add a contrib operator for Constant URL: https://github.com/apache/incubator-mxnet/pull/15993#issuecomment-526394354 can you reference this issue in the PR description - https://github.com/apache/incubator-mxnet/issues/6087
[incubator-mxnet] branch revert-15762-getenv_fixes updated (8b174e2 -> 574a09f)
This is an automated email from the ASF dual-hosted git repository. wkcn pushed a change to branch revert-15762-getenv_fixes in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git. from 8b174e2 Revert "Refactor LibraryInitializer so it's thread safe. Fixes random sporadical concurrency crashes. (#15762)" add 574a09f Retrigger CI No new revisions were added by this update. Summary of changes:
[GitHub] [incubator-mxnet] larroy opened a new pull request #16040: Revert accidental change to CMakelists
larroy opened a new pull request #16040: Revert accidental change to CMakelists URL: https://github.com/apache/incubator-mxnet/pull/16040
## Description ##
Revert a change coming in from another PR: https://github.com/apache/incubator-mxnet/pull/15808 @apeforest
## Checklist ##
### Essentials ###
Please feel free to remove inapplicable items for your PR.
- [ ] The PR title starts with [MXNET-$JIRA_ID], where $JIRA_ID refers to the relevant [JIRA issue](https://issues.apache.org/jira/projects/MXNET/issues) created (except PRs with tiny changes)
- [ ] Changes are complete (i.e. I finished coding on this PR)
- [ ] All changes have test coverage:
  - Unit tests are added for small changes to verify correctness (e.g. adding a new operator)
  - Nightly tests are added for complicated/long-running ones (e.g. changing distributed kvstore)
  - Build tests will be added for build configuration changes (e.g. adding a new build option with NCCL)
- [ ] Code is well-documented:
  - For user-facing API changes, the API doc string has been updated.
  - For new C++ functions in header files, their functionality and arguments are documented.
  - For new examples, a README.md is added to explain what the example does, the source of the dataset, expected performance on the test set, and a reference to the original paper if applicable
  - Check the API doc at http://mxnet-ci-doc.s3-accelerate.dualstack.amazonaws.com/PR-$PR_ID/$BUILD_ID/index.html
- [ ] To my best knowledge, examples are either not affected by this change, or have been fixed to be compatible with this change
### Changes ###
- [ ] Feature1, tests, (and when applicable, API doc)
- [ ] Feature2, tests, (and when applicable, API doc)
## Comments ##
- If this change is a backward incompatible change, why must this change be made.
- Interesting edge cases to note here
[GitHub] [incubator-mxnet] samskalicky closed pull request #16027: [v1.5.x] FP16 Support for C Predict API (#15245)
samskalicky closed pull request #16027: [v1.5.x] FP16 Support for C Predict API (#15245) URL: https://github.com/apache/incubator-mxnet/pull/16027
[GitHub] [incubator-mxnet] samskalicky commented on issue #16027: [v1.5.x] FP16 Support for C Predict API (#15245)
samskalicky commented on issue #16027: [v1.5.x] FP16 Support for C Predict API (#15245) URL: https://github.com/apache/incubator-mxnet/pull/16027#issuecomment-526380968 This fix depends on #15118, which is a big feature, so I'm closing this PR for the 1.5.x branch. It will have to wait until 1.6.x.
[GitHub] [incubator-mxnet] ChaiBapchya commented on a change in pull request #15960: Added more tests for Large Indices
ChaiBapchya commented on a change in pull request #15960: Added more tests for Large Indices URL: https://github.com/apache/incubator-mxnet/pull/15960#discussion_r319289789 ## File path: tests/nightly/test_large_vector.py ## @@ -170,6 +170,225 @@ def test_topk(): assert val.sum() == (LARGE_X - 1) +def test_shape(): +b = create_vector(size=LARGE_X) +mx.nd.waitall() Review comment: #15941 review for wait all
[GitHub] [incubator-mxnet] ptrendx edited a comment on issue #15589: [Discussion] 1.6.0 Roadmap
ptrendx edited a comment on issue #15589: [Discussion] 1.6.0 Roadmap URL: https://github.com/apache/incubator-mxnet/issues/15589#issuecomment-526373840 We have multiple improvements to BERT inference and training speed that we would like to be part of the 1.6 release:
- [x] Softmax optimizations (#15545)
- [ ] Pointwise fusion for GPU (#15167)
- [ ] Eliminate common expressions (#15657)
- [ ] Bias speed improvements (#16039)
- [ ] Aggregated AdamW optimizer (not yet PR'ed)
- [ ] Aggregated zeroing of the gradients (not yet PR'ed)
- [ ] Aggregated sum of squares operator (also used in LARS, @Caenorst is working on a PR)
- [ ] Embedding gradient optimization (not yet PR'ed)
- [ ] Faster multihead attention operator (not yet PR'ed)
[GitHub] [incubator-mxnet] apeforest commented on a change in pull request #14535: [DOC] Updated install instructions for mac
apeforest commented on a change in pull request #14535: [DOC] Updated install instructions for mac URL: https://github.com/apache/incubator-mxnet/pull/14535#discussion_r319284515 ## File path: docs/install/osx_setup.md ## @@ -91,23 +95,29 @@ Install the dependencies, required for MXNet, with the following commands: # For visualization of network graphs pip install graphviz==0.8.4 Review comment: If we use pip3, we should change to pip3 all over the doc. Also, as I said, this will break for people who use virtualenv, as I always use pip in my virtualenv. It's up to @aaronmarkham's call.
[incubator-mxnet] branch master updated (b7cca01 -> 61f3dbc)
This is an automated email from the ASF dual-hosted git repository. reminisce pushed a change to branch master in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git. from b7cca01 [MXNET-895] ONNX import/export: TopK (#13627) add 61f3dbc numpy-compatible cumsum upstream (#15924) No new revisions were added by this update. Summary of changes: python/mxnet/_numpy_op_doc.py | 51 ++ src/operator/numpy/np_cumsum-inl.h | 188 + src/operator/numpy/np_cumsum.cc| 94 +++ .../{random/np_uniform_op.cu => np_cumsum.cu} | 14 +- tests/python/unittest/test_numpy_op.py | 43 - 5 files changed, 383 insertions(+), 7 deletions(-) create mode 100644 src/operator/numpy/np_cumsum-inl.h create mode 100644 src/operator/numpy/np_cumsum.cc copy src/operator/numpy/{random/np_uniform_op.cu => np_cumsum.cu} (74%)
[GitHub] [incubator-mxnet] access2rohit commented on a change in pull request #15960: Added more tests for Large Indices
access2rohit commented on a change in pull request #15960: Added more tests for Large Indices URL: https://github.com/apache/incubator-mxnet/pull/15960#discussion_r319283652 ## File path: src/ndarray/ndarray_function.cc ## @@ -38,7 +38,7 @@ void Copy(const TBlob &from, TBlob *to, RunContext ctx) { MSHADOW_TYPE_SWITCH(to->type_flag_, DType, { if (to->type_flag_ == from.type_flag_) { - const index_t size = from.Size(); + const index_t size = static_cast<index_t>(from.Size()); Review comment: Is automatic casting good practice? I am not sure. Even the Google style guide doesn't say anything about it: https://google.github.io/styleguide/cppguide.html#Casting In my opinion, explicit casting improves code readability and helps the person reading the code understand that `Size()` doesn't return `index_t`. @apeforest what do you think?
[GitHub] [incubator-mxnet] reminisce merged pull request #15924: Numpy-compatible cumsum upstream
reminisce merged pull request #15924: Numpy-compatible cumsum upstream URL: https://github.com/apache/incubator-mxnet/pull/15924
[GitHub] [incubator-mxnet] apeforest commented on a change in pull request #15960: Added more tests for Large Indices
apeforest commented on a change in pull request #15960: Added more tests for Large Indices URL: https://github.com/apache/incubator-mxnet/pull/15960#discussion_r319281539 ## File path: src/operator/tensor/indexing_op.h ## @@ -1208,9 +1208,9 @@ template struct one_hot { template MSHADOW_XINLINE static void Map(int i, DType* out, const IType* indices, Review comment: What about `i`?
[GitHub] [incubator-mxnet] apeforest commented on a change in pull request #15960: Added more tests for Large Indices
apeforest commented on a change in pull request #15960: Added more tests for Large Indices URL: https://github.com/apache/incubator-mxnet/pull/15960#discussion_r319281319 ## File path: src/ndarray/ndarray_function.cc ## @@ -38,7 +38,7 @@ void Copy(const TBlob &from, TBlob *to, RunContext ctx) { MSHADOW_TYPE_SWITCH(to->type_flag_, DType, { if (to->type_flag_ == from.type_flag_) { - const index_t size = from.Size(); + const index_t size = static_cast<index_t>(from.Size()); Review comment: Is this needed? Size() returns a size_t and should be automatically cast.
[GitHub] [incubator-mxnet] ptrendx opened a new pull request #16039: FullyConnected Bias performance improvement on GPU
ptrendx opened a new pull request #16039: FullyConnected Bias performance improvement on GPU URL: https://github.com/apache/incubator-mxnet/pull/16039 ## Description ## This PR improves the performance of the bias-addition kernel (both forward and backward) in the FullyConnected operator. @eric-haibin-lin FYI ## Checklist ## ### Essentials ### Please feel free to remove inapplicable items for your PR.
- [x] Changes are complete (i.e. I finished coding on this PR)
- [x] All changes have test coverage
- [x] To my best knowledge, examples are either not affected by this change, or have been fixed to be compatible with this change
[incubator-mxnet] branch master updated (196d1f4 -> b7cca01)
This is an automated email from the ASF dual-hosted git repository. roshrini pushed a change to branch master in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git.
from 196d1f4 [MXNET-1399] multiclass-mcc metric enhancements (#14874)
add b7cca01 [MXNET-895] ONNX import/export: TopK (#13627)
No new revisions were added by this update. Summary of changes:
.../mxnet/contrib/onnx/mx2onnx/_op_translations.py | 33 ++
.../mxnet/contrib/onnx/onnx2mx/_import_helper.py | 5 ++--
.../mxnet/contrib/onnx/onnx2mx/_op_translations.py | 8 ++
tests/python-pytest/onnx/test_cases.py | 3 +-
4 files changed, 46 insertions(+), 3 deletions(-)
[GitHub] [incubator-mxnet] Roshrini merged pull request #13627: [MXNET-895] ONNX import/export: TopK
Roshrini merged pull request #13627: [MXNET-895] ONNX import/export: TopK URL: https://github.com/apache/incubator-mxnet/pull/13627
[incubator-mxnet-site] branch asf-site updated: Bump the publish timestamp.
This is an automated email from the ASF dual-hosted git repository. marcoabreu pushed a commit to branch asf-site in repository https://gitbox.apache.org/repos/asf/incubator-mxnet-site.git The following commit(s) were added to refs/heads/asf-site by this push: new 4bd2f97 Bump the publish timestamp. 4bd2f97 is described below commit 4bd2f975dc92eed5d2781a9449b08835fc3088a8 Author: mxnet-ci AuthorDate: Thu Aug 29 21:01:51 2019 + Bump the publish timestamp. --- date.txt | 1 + 1 file changed, 1 insertion(+) diff --git a/date.txt b/date.txt new file mode 100644 index 000..bf18177 --- /dev/null +++ b/date.txt @@ -0,0 +1 @@ +Thu Aug 29 21:01:51 UTC 2019
[incubator-mxnet-site] branch asf-site updated: Bump the publish timestamp.
This is an automated email from the ASF dual-hosted git repository. marcoabreu pushed a commit to branch asf-site in repository https://gitbox.apache.org/repos/asf/incubator-mxnet-site.git The following commit(s) were added to refs/heads/asf-site by this push: new 76719d2 Bump the publish timestamp. 76719d2 is described below commit 76719d27a7bd9e017b457038c5e8d864b36f0432 Author: mxnet-ci AuthorDate: Thu Aug 29 19:38:39 2019 + Bump the publish timestamp. --- date.txt | 1 + 1 file changed, 1 insertion(+) diff --git a/date.txt b/date.txt new file mode 100644 index 000..31fa6f2 --- /dev/null +++ b/date.txt @@ -0,0 +1 @@ +Thu Aug 29 19:38:39 UTC 2019
[GitHub] [incubator-mxnet] kkurni commented on issue #15998: Build MXNET with GPU but fail to import mxnet on CPU machine
kkurni commented on issue #15998: Build MXNET with GPU but fail to import mxnet on CPU machine URL: https://github.com/apache/incubator-mxnet/issues/15998#issuecomment-526330847 I cannot find "libcuda.so.1" on my CPU machine in "/usr/lib/x86_64-linux-gnu" (I am using the nvidia/cuda:10.1-cudnn7-devel-ubuntu18.04 docker image). So my question is how to build MXNet so that it is compatible with both GPU and CPU machines. Is there any way to do this?